Have you ever heard of scoped shadow copies? They have been around since the release of Windows 8, but not much information is available on this topic.
A shadow copy becomes scoped when data blocks not required by the system restore process are excluded from copy-on-write operations. When you create a restore point, a scoped shadow copy is created by default for a system volume (in Windows 8, 8.1 & 10).
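This behavior is reportedly controlled by the ScopeSnapshots registry value under the SystemRestore key (setting it to 0 is said to force full, non-scoped shadow copies). Here is a minimal C sketch that checks this value; the value name, its location, and its semantics are assumptions based on public write-ups, not official documentation:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD value = 0;
    DWORD size = sizeof(value);

    // Assumption: scoped snapshots are controlled by this value
    // (0 = disabled, i.e., full snapshots; absent or nonzero = scoped).
    LSTATUS status = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\SystemRestore",
        L"ScopeSnapshots",
        RRF_RT_REG_DWORD,
        NULL,
        &value,
        &size);

    if (status == ERROR_SUCCESS)
        printf("ScopeSnapshots = %lu\n", (unsigned long)value);
    else
        printf("ScopeSnapshots is not set (the default behavior applies)\n");

    return 0;
}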
Continue reading “Scoped shadow copies”
Many unexpected things happen under the hood when you do live forensics. Tools used to acquire data from running Windows systems often utilize direct access to logical drives to copy locked files and extract NTFS metadata. But did you know that NTFS metadata is updated when you read a logical drive directly?
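For illustration, here is a minimal C sketch of this direct access pattern (the volume letter is an assumption): it opens a logical drive the way many acquisition tools do and reads the first sectors.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Open the logical drive (volume) for raw, sector-level access.
    HANDLE hVolume = CreateFileW(
        L"\\\\.\\C:",
        GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        0,
        NULL);

    if (hVolume == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, "CreateFileW() failed: %lu\n", GetLastError());
        return 1;
    }

    // Volume reads must be sector-aligned; 4096 bytes covers the boot sector.
    BYTE buffer[4096];
    DWORD bytesRead = 0;

    if (ReadFile(hVolume, buffer, sizeof(buffer), &bytesRead, NULL))
        printf("Read %lu bytes from the volume\n", (unsigned long)bytesRead);

    CloseHandle(hVolume);
    return 0;
}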
Continue reading “You write to a logical drive when you read from it”
There are forensic tools capable of carving file records and index entries ($I30) from memory dumps, but there is much more NTFS-related metadata that isn't exposed by common memory forensics frameworks: for example, file control blocks.
Continue reading “Carving file control blocks from memory dumps”
Here is a list of open research topics in Windows forensics. All of these topics are relevant to my own work; feel free to pick one for your research. Originally, I wrote this list for myself, but it's better to make it public.
More topics and ideas (in other areas too) can be found here and here.
Continue reading “Windows forensics: open research topics”
1. Shadow copies can contain invalid data
During the development of a parser for shadow copies, I observed many systems containing invalid data in shadow copies. For unknown reasons, some allocated files may contain null blocks instead of valid data blocks, as well as blocks of data that should not be there at all.
Continue reading “Things you probably didn’t know about shadow copies”
A small addition to this post.
Starting from Windows 10 “Redstone 3” (Fall Creators Update), it’s possible to create an NTFS volume using one of the following cluster sizes: 128K, 256K, 512K, 1M, 2M. Previously, the largest supported cluster size was 64K.
Currently, I’m not aware of any third-party tools that support such large clusters: there is no support in the NTFS-3G driver, no support in the Linux kernel (#1, #2), no support in The Sleuth Kit, no support in RawCopy, no support in several proprietary forensic tools.
This update also changed the way the “sectors per cluster” field (located in an NTFS boot sector) is treated. Previously, this was an unsigned byte and its value was taken literally; this no longer works, because large clusters need more sectors than a byte can express (with 512-byte sectors, a 2M cluster spans 4,096 sectors). Now, this is a signed byte and its value is used as shown in the following pseudocode:
// - SectorsPerCluster: a signed byte (from the offset 13 in an NTFS boot sector).
// Return value:
// - A true number of sectors per cluster.
unsigned int TrueSectorsPerCluster(signed char SectorsPerCluster)
{
    if ((unsigned char)SectorsPerCluster > 0x80)
        return 1U << -SectorsPerCluster; // A negative value encodes a power of two.

    return (unsigned char)SectorsPerCluster; // Values up to and including 0x80 are literal.
}
This isn’t the same as the algorithm used for the “file record segment size” and “index record size” fields in an NTFS boot sector: note the edge case when the byte is equal to 0x80. This value corresponds to a negative signed number, but it’s still treated as unsigned for backward compatibility, because 0x80 is used for 64K clusters (128 sectors of 512 bytes).
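For comparison, here is a sketch of that record size algorithm as it’s commonly described (the function and parameter names are mine, for illustration only):

// - ClustersPerRecord: a signed byte (e.g., “clusters per file record segment”).
// - BytesPerCluster: the cluster size in bytes.
// Return value:
// - The record size in bytes.
unsigned int RecordSizeInBytes(signed char ClustersPerRecord, unsigned int BytesPerCluster)
{
    if (ClustersPerRecord < 0) // Unlike above, 0x80 would take this branch too.
        return 1U << -ClustersPerRecord; // A negative value encodes the size directly.

    return (unsigned int)ClustersPerRecord * BytesPerCluster; // A positive value counts clusters.
}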
A sample file system image can be found here.
Memory images, page files, hibernation files, and crash dumps are standard targets for memory forensics. But there are unusual ones too: for example, chunks of disclosed (leaked) uninitialized kernel memory found on a drive.
Continue reading “Forensic analysis of disclosed uninitialized kernel memory”
No operation on a file is allowed to bring unallocated (deleted) data into the user-readable area of that file. Otherwise, an unprivileged program could read data from a deleted file even when such access was forbidden while that file was still allocated.
But this is not an issue for files readable by privileged programs only (because such programs can read allocated and unallocated data from a file system directly anyway). However, allocated files containing pieces of unallocated data are very rare (unlike slack space, such data is part of the file’s data).
Continue reading “NTFS: unallocated data marked as allocated”
In the official NTFS implementation, all metadata changes to a file system are logged to ensure the consistent recovery of critical file system structures after a system crash. This is called write-ahead logging.
The logged metadata is stored in a file called “$LogFile”, which is found in the root directory of an NTFS file system.
Currently, not much documentation is available for this file. Most sources are either too high-level (describing the logging and recovery processes in general) or just contain the layout of key structures without further description.
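As a minimal starting point, a $LogFile dump can be recognized by its page signatures: restart pages begin with “RSTR” and log record pages begin with “RCRD”. Here is a C sketch that counts both; the input file name and the 4096-byte page size are assumptions (the latter holds for typical volumes):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096 // Assumption: the usual $LogFile page size.

int main(void)
{
    FILE *f = fopen("LogFile.bin", "rb"); // A dump of the $LogFile file.
    if (f == NULL)
        return 1;

    unsigned char page[PAGE_SIZE];
    unsigned long rstr = 0, rcrd = 0;

    while (fread(page, 1, PAGE_SIZE, f) == PAGE_SIZE)
    {
        if (memcmp(page, "RSTR", 4) == 0)
            rstr++; // A restart page.
        else if (memcmp(page, "RCRD", 4) == 0)
            rcrd++; // A log record page.
    }

    fclose(f);
    printf("Restart pages: %lu, log record pages: %lu\n", rstr, rcrd);
    return 0;
}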
Continue reading “How the $LogFile works?”
Things are changing, and file systems are no exception. Even when their version numbers stay the same.
This post outlines some interesting things found in the current NTFS implementation that are either poorly documented or not documented elsewhere.
Continue reading “NTFS today”