OneDrive and NTFS last access timestamps

You might already know that the NTFS “Last Access” updates will be back by default in Windows 10 “20H1”. Previously, they were enabled by default only on installations with small system volumes. What is the reason behind this? Why do we need last access timestamps?
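
Whether these updates are currently enabled is controlled by the “NtfsDisableLastAccessUpdate” registry value (the same setting is exposed through “fsutil behavior query disablelastaccess”). Here is a minimal sketch that reads this value; keep in mind that its interpretation has changed between builds (recent builds of Windows 10 added “system managed” variants):

// A minimal sketch: read the "NtfsDisableLastAccessUpdate" registry value.
// Historically, 0 meant "last access updates are enabled" and 1 meant "disabled";
// recent builds of Windows 10 also use "system managed" variants of these values.
#include <windows.h>
#include <stdio.h>

int main(void)
{
	DWORD value = 0;
	DWORD size = sizeof(value);

	if (RegGetValueA(HKEY_LOCAL_MACHINE,
	    "SYSTEM\\CurrentControlSet\\Control\\FileSystem",
	    "NtfsDisableLastAccessUpdate",
	    RRF_RT_REG_DWORD, NULL, &value, &size) == ERROR_SUCCESS)
		printf("NtfsDisableLastAccessUpdate = 0x%08lx\n", value);
	else
		printf("The value is not set\n");

	return 0;
}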

If you visit the “Configure Storage Sense or run it now” page in the “Settings” window of Windows 10 “19H2”, you may notice the “Delete files in my Downloads folder if they have been there for over” option. The same option in “20H1” reads: “Delete files in my Downloads folder if they haven’t been opened for more than”.

So, this old-but-new NTFS feature has something to do with Storage Sense, a component used to delete unneeded files “to keep your storage optimized”. And the “Last Access” updates are a good way to detect such unneeded files (the “StorageUsage.dll” library actually uses last access timestamps to find “cold” files).
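
Just to illustrate the idea (this is a sketch of mine, not the actual Storage Sense logic; the 180-day threshold and the “IsFileCold” name are made up for the example), a file can be classified as “cold” using nothing but its last access timestamp:

// An illustrative sketch (not the actual Storage Sense code):
// a file is considered "cold" if it hasn't been accessed for 180 days or more.
#include <windows.h>
#include <stdio.h>

#define COLD_THRESHOLD_DAYS 180

int IsFileCold(const wchar_t *FilePath)
{
	WIN32_FILE_ATTRIBUTE_DATA attr;
	FILETIME now_ft;
	ULARGE_INTEGER last_access, now;

	if (!GetFileAttributesExW(FilePath, GetFileExInfoStandard, &attr))
		return 0;

	GetSystemTimeAsFileTime(&now_ft);

	last_access.LowPart = attr.ftLastAccessTime.dwLowDateTime;
	last_access.HighPart = attr.ftLastAccessTime.dwHighDateTime;
	now.LowPart = now_ft.dwLowDateTime;
	now.HighPart = now_ft.dwHighDateTime;

	// FILETIME values are counts of 100-nanosecond intervals.
	return (now.QuadPart - last_access.QuadPart) / 10000000ULL >=
	    (ULONGLONG)COLD_THRESHOLD_DAYS * 24 * 60 * 60;
}

int wmain(int argc, wchar_t *argv[])
{
	if (argc == 2)
		wprintf(L"%ls: %ls\n", argv[1], IsFileCold(argv[1]) ? L"cold" : L"not cold");

	return 0;
}

Of course, this only makes sense if last access updates are actually enabled on the volume in question.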

But there is something you might not have noticed. Take a look at the same settings page in Windows 10 “19H2”:

"OneDrive
Content will become online-only if not opened for more than"

Wait a minute! The “Last Access” updates are on for a relatively small subset of Windows 10 “19H2” installations only… Does this option really work for systems with large system volumes?

Continue reading “OneDrive and NTFS last access timestamps”

Deceptive NTFS short file names

Are you aware of DLL hijacking? If yes, let’s suppose there is a program that executes the following line of code:

LoadLibrary('riched32.dll');

Its executable has the following name: “i_use_riched32.exe” (just as an example).
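
To see which copy of the library is actually picked up, one can print the path of the loaded module. This is just a sketch (the library name and the hijacking scenario are taken from the example above):

// A sketch: load "riched32.dll" without specifying a full path
// and print where the loaded module actually came from.
#include <windows.h>
#include <stdio.h>

int main(void)
{
	char path[MAX_PATH];
	HMODULE module = LoadLibraryA("riched32.dll");

	if (module != NULL && GetModuleFileNameA(module, path, MAX_PATH) > 0)
		printf("Loaded from: %s\n", path);
	else
		printf("LoadLibrary() failed\n");

	return 0;
}

By default (and unless the library is on the KnownDLLs list), the directory containing the executable is searched before the Windows system directory, so a rogue “riched32.dll” placed next to “i_use_riched32.exe” wins.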

Now, take a look at the contents of a directory containing this executable. The screenshots below were taken using three tools, each pointing to the same directory: Explorer, FTK Imager Lite, and The Sleuth Kit.

[Screenshot: the directory as seen in Explorer]
[Screenshot: the directory as seen in FTK Imager Lite]
[Screenshot: the directory as seen in The Sleuth Kit]

Is the “riched32.dll” library hijacked for the “i_use_riched32.exe” executable? Let’s assume that no attempts to hijack the library have been made outside of the directory shown above.

Continue reading “Deceptive NTFS short file names”

The “\$Extend\$Deleted” directory

In the Linux world, a deleted file which is still open isn’t actually removed from the disk. Instead, it’s just unlinked from the directory structure. This is why the system call used to remove files is named “unlink”.

unlink() deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse.

If the name was the last link to a file but any processes still have the file open, the file will remain in existence until the last file descriptor referring to it is closed.

(Source.)

The same behavior can be observed in other Unix-like operating systems.
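
A minimal sketch demonstrating this behavior (the “/tmp/unlink_demo” path is arbitrary): the file is unlinked while its descriptor remains open, and the data is still readable through that descriptor.

// A sketch: a file remains accessible through an open file descriptor
// after it has been unlinked from the directory structure.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16] = { 0 };
	int fd = open("/tmp/unlink_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd == -1)
		return 1;

	write(fd, "still here", 10);
	unlink("/tmp/unlink_demo");	// The name is gone...

	lseek(fd, 0, SEEK_SET);
	read(fd, buf, 10);		// ...but the data is not.
	printf("%s\n", buf);

	close(fd);			// Now the space can be reused.
	return 0;
}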

But in Windows 10, similar behavior can be seen too!

Continue reading “The “\$Extend\$Deleted” directory”

You write to a logical drive when you read from it

Many unexpected things happen under the hood when you do live forensics. Tools used to acquire data from running Windows systems often utilize direct access to logical drives to copy locked files and extract NTFS metadata. But did you know that NTFS metadata is updated when you read a logical drive directly?
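
This is the kind of access I’m talking about, shown as a sketch (reading the boot sector of the “C:” volume, which requires administrative rights; 512-byte logical sectors are assumed):

// A sketch: open the "C:" volume directly and read its boot sector.
// This kind of "read-only" access is what many live forensic tools rely on.
#include <windows.h>
#include <stdio.h>

int main(void)
{
	BYTE boot_sector[512];
	DWORD bytes_read = 0;

	HANDLE volume = CreateFileA("\\\\.\\C:", GENERIC_READ,
	    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);

	if (volume == INVALID_HANDLE_VALUE)
		return 1;

	if (ReadFile(volume, boot_sector, sizeof(boot_sector), &bytes_read, NULL))
		printf("Read %lu bytes, OEM ID: %.8s\n", bytes_read, (char *)&boot_sector[3]);

	CloseHandle(volume);
	return 0;
}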

Continue reading “You write to a logical drive when you read from it”

Things you probably didn’t know about shadow copies

1. Shadow copies can contain invalid data

During the development of the parser for shadow copies, I observed many systems containing invalid data in their shadow copies. For unknown reasons, some allocated files may contain null blocks instead of valid data blocks, as well as blocks of data that should not be there at all.

Continue reading “Things you probably didn’t know about shadow copies”

NTFS: large clusters

A small addition to this post.

Starting from Windows 10 “Redstone 3” (Fall Creators Update), it’s possible to create an NTFS volume using one of the following cluster sizes: 128K, 256K, 512K, 1M, 2M. Previously, the largest supported cluster size was 64K.

[Screenshot: format.png]

Currently, I’m not aware of any third-party tools that support such large clusters: there is no support in the NTFS-3G driver, no support in the Linux kernel (#1, #2), no support in The Sleuth Kit, no support in RawCopy, and no support in several proprietary forensic tools.

This update also changed how the “sectors per cluster” field (located in an NTFS boot sector) is interpreted. Previously, this was an unsigned byte and its value was taken literally. Now, it’s a signed byte and its value is used as shown in the following pseudocode:

// Argument:
// - SectorsPerCluster: a signed byte (from the offset 13 in an NTFS boot sector).
// Return value:
// - The true number of sectors per cluster.
unsigned int NtfsGetTrueSectorsPerCluster(signed char SectorsPerCluster)
{
	if ((unsigned char)SectorsPerCluster > 0x80)
		return 1U << -SectorsPerCluster;
	else
		return (unsigned char)SectorsPerCluster;
}

This isn’t the same as the algorithm used for the “file record segment size” and “index record size” fields in an NTFS boot sector. Note the edge case when the byte is equal to 0x80: this corresponds to a negative value, but it’s still treated as unsigned for backward compatibility, because 0x80 has always meant 128 sectors per cluster (64K clusters with 512-byte sectors).
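
For comparison, here is the commonly used decoding of those two fields, written in the same style (a sketch based on the interpretation found in open-source NTFS implementations): negative values encode a size in bytes as a power of two, positive values are a number of clusters, and 0x80 gets no special treatment.

// Arguments:
// - SizeByte: a signed byte ("file record segment size" or "index record size").
// - BytesPerCluster: the cluster size, in bytes.
// Return value:
// - The record size, in bytes.
unsigned int NtfsGetRecordSizeInBytes(signed char SizeByte, unsigned int BytesPerCluster)
{
	if (SizeByte < 0)
		return 1U << -SizeByte;
	else
		return (unsigned int)SizeByte * BytesPerCluster;
}

For example, the typical value of 0xF6 (which is -10) gives 1024-byte file record segments.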


A sample file system image can be found here.