Introduction
The Computer Forensics Tool Testing (CFTT) program is a joint project of the
Department of Homeland Security (DHS) Science and Technology Directorate (S&T),
the National Institute of Justice, and the National Institute of Standards and Technology
(NIST) Special Programs Office and Information Technology Laboratory. CFTT is
supported by other organizations, including the Federal Bureau of Investigation, the U.S.
Department of Defense Cyber Crime Center, U.S. Internal Revenue Service Criminal
Investigation Division Electronic Crimes Program, and DHS’s Immigration and Customs
Enforcement, U.S. Customs and Border Protection, and U.S. Secret Service. The objective
of the CFTT program is to provide measurable assurance to practitioners, researchers,
and other applicable users that the tools used in computer forensics investigations provide
accurate results. Accomplishing this requires the development of specifications and test
methods for computer forensics tools and subsequent testing of specific tools against
those specifications.
Test results provide the information necessary for developers to improve tools, users to
make informed choices, and the legal community and others to understand the tools’
capabilities. The CFTT approach to testing computer forensics tools is based on well-
recognized methodologies for conformance and quality testing. Interested parties in the
computer forensics community can review and comment on the specifications and test
methods posted on the CFTT website (https://www.cftt.nist.gov/).
This document reports the results from testing NUIX Workstation v9.6.5.283 for SQLite
data recovery, including displaying recovered SQLite database information; identifying,
categorizing, and reporting Write-Ahead Log (WAL) and rollback journal data; and
sequencing WAL journal data.
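Identifying and categorizing these artifacts typically relies on file-format signatures defined in the public SQLite documentation: a main database file begins with the string "SQLite format 3\0", a WAL file with one of two 32-bit magic numbers, and a rollback journal with an 8-byte magic sequence. As background only, and not a description of how NUIX Workstation operates, a minimal sketch of signature-based classification (the function name `classify_sqlite_artifact` is illustrative) could look like:

```python
import struct

# Magic values taken from the public SQLite file-format documentation.
SQLITE_DB_MAGIC = b"SQLite format 3\x00"            # main database header string
WAL_MAGICS = (0x377F0682, 0x377F0683)               # WAL header magic (big-endian u32)
JOURNAL_MAGIC = bytes([0xD9, 0xD5, 0x05, 0xF9,      # rollback journal header magic
                       0x20, 0xA1, 0x63, 0xD7])

def classify_sqlite_artifact(header: bytes) -> str:
    """Classify a file by its leading bytes as a database, WAL, or rollback journal."""
    if header.startswith(SQLITE_DB_MAGIC):
        return "database"
    if len(header) >= 4 and struct.unpack(">I", header[:4])[0] in WAL_MAGICS:
        return "wal"
    if header.startswith(JOURNAL_MAGIC):
        return "rollback-journal"
    return "unknown"
```

A forensic tool must of course go well beyond header matching, for example by parsing WAL frames and their salt/checksum fields to sequence journal data, but the signatures above are the usual first step in distinguishing the three file types.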
Test results from other tools can be found on the S&T-sponsored digital forensics web
page: https://www.dhs.gov/science-and-technology/nist-cftt-reports.
How to Read This Report
This report is divided into four sections. Section 1 identifies and provides a summary of
any significant anomalies observed in the test runs. This section is sufficient for most
readers to assess the suitability of the tool for the intended use. Section 2 describes the
testing environment and the SQLite data objects used for testing. Section 3 provides an
overview of
the test case results reported by the tool.