Introduction
The Computer Forensics Tool Testing (CFTT) program is a joint project of the
Department of Homeland Security’s (DHS) Science and Technology Directorate (S&T),
the National Institute of Justice, and the National Institute of Standards and Technology’s
(NIST) Special Programs Office and Information Technology Laboratory. CFTT is
supported by other organizations, including the Federal Bureau of Investigation, the U.S.
Department of Defense’s Cyber Crime Center, the U.S. Internal Revenue Service’s Criminal
Investigation Division Electronic Crimes Program, the DHS Bureau of Immigration and
Customs Enforcement, U.S. Customs and Border Protection, and the U.S. Secret Service. The
objective of the CFTT program is to provide measurable assurance to practitioners,
researchers, and other applicable users that the tools used in computer forensics
investigations provide accurate results. Accomplishing this objective requires the
development of specifications and test methods for computer forensics tools and
subsequent testing of specific tools against those specifications.
Test results provide the information necessary for developers to improve tools, users to
make informed choices, and the legal community and others to understand the tools’
capabilities. The CFTT approach to testing computer forensics tools is based on well-
recognized methodologies for conformance and quality testing. Interested parties in the
computer forensics community can review and comment on the specifications and test
methods posted on the CFTT Web site (https://www.cftt.nist.gov/).
This document reports the results from testing NUIX Workstation v9.10.5.374 for SQLite
data recovery, including displaying recovered SQLite database information; identifying,
categorizing, and reporting Write-Ahead Log (WAL) and Rollback Journal data; and
sequencing WAL journal data.
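For context, the main database, WAL, and rollback journal files that a recovery tool must distinguish each carry a fixed header signature defined in the published SQLite file format. The following Python sketch is illustrative only; it is not part of the CFTT test methodology or of NUIX Workstation, and the helper name classify_sqlite_artifact is hypothetical. It shows how recovered files could, in principle, be categorized by those signatures.

    import struct
    from pathlib import Path

    # Header signatures from the published SQLite file-format documentation.
    SQLITE_DB_MAGIC = b"SQLite format 3\x00"       # main database file header
    WAL_MAGICS = (0x377F0682, 0x377F0683)          # WAL header magic (big-endian u32)
    JOURNAL_MAGIC = bytes([0xD9, 0xD5, 0x05, 0xF9,
                           0x20, 0xA1, 0x63, 0xD7])  # rollback journal header

    def classify_sqlite_artifact(path: str) -> str:
        """Categorize a recovered file by its SQLite header signature.

        Hypothetical helper for illustration; not any tested tool's method.
        """
        header = Path(path).read_bytes()[:16]
        if header.startswith(SQLITE_DB_MAGIC):
            return "main database"
        if len(header) >= 4 and struct.unpack(">I", header[:4])[0] in WAL_MAGICS:
            return "write-ahead log (WAL)"
        if header.startswith(JOURNAL_MAGIC):
            return "rollback journal"
        return "unrecognized"

Sequencing WAL data, by contrast, depends on the salt and checksum fields carried in each WAL frame header rather than on the file header alone.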
Test results from other tools can be found on the S&T-sponsored digital forensics web
page at http://www.dhs.gov/science-and-technology/nist-cftt-reports.
How to Read This Report
This report is divided into four sections. Section 1 summarizes any significant anomalies
observed in the test runs; this section is sufficient for most readers to assess the
suitability of the tool for the intended use. Section 2 identifies the
mobile devices used for testing. Section 3 lists the testing environment and the internal
memory data objects used to populate the mobile devices. Section 4 provides an overview
of the test case results reported by the tool.