If you visit the “Configure Storage Sense or run it now” page in the “Settings” window of Windows 10 “19H2”, you may notice the “Delete files in my Downloads folder if they have been there for over” option. The same option in “20H1” reads: “Delete files in my Downloads folder if they haven’t been opened for more than”.
So, this old-but-new NTFS feature has something to do with Storage Sense, a component used to delete unneeded files “to keep your storage optimized”. And the “Last Access” updates are a good way to detect such unneeded files (the “StorageUsage.dll” library actually uses last access timestamps to find “cold” files).
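As a rough illustration only (not the actual “StorageUsage.dll” logic, which reads NTFS metadata directly), finding “cold” files boils down to comparing last access timestamps against an age threshold:

```python
import os
import time

def find_cold_files(directory, max_age_days):
    """Return files whose last access timestamp is older than the threshold.

    A sketch only: os.stat() reports the same 'Last Access' value that
    Storage Sense relies on, but the real component walks NTFS metadata.
    """
    threshold = time.time() - max_age_days * 86400
    cold = []
    for entry in os.scandir(directory):
        if entry.is_file(follow_symlinks=False):
            if entry.stat().st_atime < threshold:
                cold.append(entry.path)
    return cold
```

Of course, this only gives meaningful results when the volume actually records “Last Access” updates, which is exactly the problem discussed below.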
But there is something you might not notice. Look at the same settings page in Windows 10 “19H2” and read:
“Content will become online-only if not opened for more than”
Wait a minute! The “Last Access” updates are on for a relatively small subset of Windows 10 “19H2” installations only… Does this option really work for systems with large system volumes?
Are you aware of DLL hijacking? If so, let’s suppose there is a program that executes the following line of code:
Its executable has the following name: “i_use_riched32.exe” (just as an example).
Now, take a look at the following listings of the directory containing this executable. The screenshots were taken with three tools: Explorer, FTK Imager Lite, and The Sleuth Kit (each one shows the same directory).
Is the “riched32.dll” library hijacked for the “i_use_riched32.exe” executable? Let’s assume that no attempts to hijack the library have been made outside of the directory shown above.
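To recap why the answer isn’t obvious, here is a minimal sketch of the default DLL search order with SafeDllSearchMode enabled (a simplification: KnownDLLs, API sets, manifests, and LoadLibraryEx flags are ignored, and the directory arguments are hypothetical):

```python
import os

def resolve_dll(dll_name, app_dir, system_dirs, path_dirs):
    """Return the path the loader would pick for dll_name, or None.

    Simplified SafeDllSearchMode order: the application directory is
    searched first, then the system directories, then the PATH entries.
    """
    search_order = [app_dir] + system_dirs + path_dirs
    for directory in search_order:
        candidate = os.path.join(directory, dll_name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

The hijack succeeds when a file named “riched32.dll” in the application directory wins this race; whether the directory shown in the screenshots actually contains such a file is exactly what the three tools disagree about.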
Have you ever encountered on-disk artifacts originating from another system?
Typically, this is something you see when a custom operating system image has been deployed to multiple computers by IT staff (on-disk artifacts that appeared before the image was captured become part of that image).
But some minor artifacts exist even in installation images coming from Microsoft!
Each piece of evidence is stored as a registry value (REG_BINARY): its name is set to the executable’s path, and its data is set to a binary structure containing a FILETIME timestamp (this is believed to be the last execution timestamp).
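A FILETIME value is a 64-bit count of 100-nanosecond intervals since 1601-01-01 (UTC). A minimal sketch of decoding such a timestamp from eight bytes of the value’s data (the exact offset of the timestamp within the structure is not shown here):

```python
import struct
from datetime import datetime, timedelta, timezone

# FILETIME epoch: January 1, 1601 (UTC).
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(raw8):
    """Decode a little-endian FILETIME (8 bytes) into a datetime object."""
    (ticks,) = struct.unpack('<Q', raw8)  # 100 ns intervals since 1601
    return EPOCH_1601 + timedelta(microseconds=ticks // 10)
```

For example, the well-known constant 116444736000000000 decodes to the Unix epoch (1970-01-01 00:00:00 UTC).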
In the Linux world, a deleted file that is still open isn’t actually removed from the disk. Instead, it’s just unlinked from the directory structure. This is why the system call used to remove files is named “unlink”. As the unlink(2) manual page puts it:
unlink() deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse.
If the name was the last link to a file but any processes still have the file open, the file will remain in existence until the last file descriptor referring to it is closed.
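The behavior described above can be demonstrated with a few lines of Python on a Linux system (the file name and contents are arbitrary):

```python
import os
import tempfile

def demonstrate_unlink():
    """Show that an unlinked file stays readable through an open descriptor."""
    directory = tempfile.mkdtemp()
    path = os.path.join(directory, 'victim.txt')
    with open(path, 'w') as f:
        f.write('still here')
    fd = open(path, 'r')        # keep a file descriptor open
    os.unlink(path)             # the name is gone from the directory...
    assert not os.path.exists(path)
    data = fd.read()            # ...but the data is still readable
    fd.close()                  # the space is released only now
    return data
```

This is why live forensics on Linux can recover “deleted” files via /proc/&lt;pid&gt;/fd as long as some process keeps them open.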
Have you ever heard of scoped shadow copies? They have been around since the release of Windows 8, but not much information is available on this topic.
A shadow copy becomes scoped when data blocks not required by the system restore process are excluded from copy-on-write operations. When you create a restore point, a scoped shadow copy is created by default for a system volume (in Windows 8, 8.1 & 10).
Many unexpected things happen under the hood when you do live forensics. Tools used to acquire data from running Windows systems often utilize direct access to logical drives to copy locked files and extract NTFS metadata. But did you know that NTFS metadata is updated when you read a logical drive directly?
There are forensic tools capable of carving file records and index entries ($I30) from memory dumps, but there is much more NTFS-related metadata which isn’t exposed by usual memory forensics frameworks. For example, file control blocks.
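As a sketch of the carving side: NTFS file records begin with the “FILE” signature, so scanning a memory dump for aligned occurrences of it is the usual first step. This is a simplification; real carvers also validate the update sequence (fixup) values, the attribute layout, and the record size before trusting a hit:

```python
FILE_RECORD_MAGIC = b'FILE'

def carve_file_record_offsets(buffer, alignment=512):
    """Return offsets of candidate NTFS file records in a raw buffer.

    A sketch only: a hit on the 'FILE' signature is a candidate, not
    a validated file record.
    """
    offsets = []
    for offset in range(0, len(buffer) - 3, alignment):
        if buffer[offset:offset + 4] == FILE_RECORD_MAGIC:
            offsets.append(offset)
    return offsets
```

In-memory structures like file control blocks, by contrast, have no such stable on-disk signature, which is one reason they are rarely exposed by memory forensics frameworks.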
Here is a list of open research topics in Windows forensics. All topics in this list are relevant to my research; feel free to pick one for your own work. Originally, I wrote this list for myself, but it’s better to make it public.
More topics and ideas (in other areas too) can be found here and here.
While developing my parser for shadow copies, I observed many systems containing invalid data in shadow copies. For unknown reasons, some allocated files may contain null blocks instead of valid data blocks, as well as blocks of data that should not be there.
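When validating data extracted from a shadow copy, such null blocks are easy to flag. A minimal sketch (the 4096-byte block size is an assumption matching a common NTFS cluster size; a trailing partial block is ignored):

```python
def find_null_blocks(data, block_size=4096):
    """Return starting offsets of blocks consisting entirely of null bytes."""
    null_block = b'\x00' * block_size
    return [offset for offset in range(0, len(data), block_size)
            if data[offset:offset + block_size] == null_block]
```

A null block inside an allocated, non-sparse file extracted from a shadow copy is a strong hint that the copy-on-write data for that range was lost or never captured.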