The “\$Extend\$Deleted” directory

In the Linux world, a deleted file that is still open isn’t actually removed from the disk. Instead, it’s just unlinked from the directory structure. This is why the system call used to remove files is named “unlink”.

unlink() deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse.

If the name was the last link to a file but any processes still have the file open, the file will remain in existence until the last file descriptor referring to it is closed.

(Source.)

The same behavior can be observed in other Unix-like operating systems.
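
This effect can be demonstrated with a minimal sketch (the file name “demo.txt” is just an example):

// A minimal demonstration of the unlink() behavior described above:
// the file remains readable through an open file descriptor after its
// last name has been removed from the directory structure.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16] = {0};
	int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0600);

	write(fd, "still here", 10);
	unlink("demo.txt"); // The last name is gone, the file is no longer listed...

	lseek(fd, 0, SEEK_SET);
	read(fd, buf, 10); // ...but its data is still accessible through the descriptor.
	printf("%s\n", buf); // Prints: still here

	close(fd); // Now the space is released for reuse.
	return 0;
}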

But similar behavior can be seen in Windows 10 too!


Scoped shadow copies

Have you ever heard of scoped shadow copies? They have been around since the release of Windows 8, but not much information is available on this topic.

A shadow copy becomes scoped when data blocks not required by the system restore process are excluded from copy-on-write operations. When you create a restore point, a scoped shadow copy is created by default for the system volume (in Windows 8, 8.1 & 10).
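
For example, a restore point (and thus, by default, a scoped shadow copy of the system volume) can be created programmatically using the documented SRSetRestorePointW() routine; this is a minimal sketch, the description string is just an example, and administrative privileges are required:

// Creating a restore point with SRSetRestorePointW() (srclient.dll).
// On Windows 8, 8.1 & 10, this produces a scoped shadow copy of the
// system volume by default.
#include <windows.h>
#include <srrestoreptapi.h>
#include <stdio.h>

#pragma comment(lib, "srclient.lib")

int main(void)
{
	RESTOREPOINTINFOW rpi = {0};
	STATEMGRSTATUS status = {0};

	rpi.dwEventType = BEGIN_SYSTEM_CHANGE;
	rpi.dwRestorePtType = APPLICATION_INSTALL;
	wcscpy_s(rpi.szDescription, ARRAYSIZE(rpi.szDescription), L"Demo restore point");

	if (SRSetRestorePointW(&rpi, &status))
		wprintf(L"Created restore point #%lld\n", status.llSequenceNumber);
	else
		wprintf(L"Failed, status: %lu\n", status.nStatus);

	return 0;
}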


You write to a logical drive when you read from it

Many unexpected things happen under the hood when you do live forensics. Tools used to acquire data from running Windows systems often utilize direct access to logical drives to copy locked files and extract NTFS metadata. But did you know that NTFS metadata is updated when you read a logical drive directly?
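
For example, here is how such tools open a logical drive and read from it directly (a minimal sketch; administrative privileges are required, and the drive letter is just an example):

// Direct logical drive access: opening \\.\C: and reading its first
// sector, which is the NTFS boot sector.
#include <windows.h>
#include <stdio.h>

int main(void)
{
	BYTE sector[512];
	DWORD bytesRead = 0;
	HANDLE hDrive = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
	                            FILE_SHARE_READ | FILE_SHARE_WRITE,
	                            NULL, OPEN_EXISTING, 0, NULL);

	if (hDrive == INVALID_HANDLE_VALUE)
	{
		wprintf(L"Open failed: %lu\n", GetLastError());
		return 1;
	}

	if (ReadFile(hDrive, sector, sizeof(sector), &bytesRead, NULL))
		wprintf(L"OEM ID: %.8hs\n", (const char *)&sector[3]); // Prints: NTFS

	CloseHandle(hDrive);
	return 0;
}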


NTFS: large clusters

A small addition to this post.

Starting from Windows 10 “Redstone 3” (Fall Creators Update), it’s possible to create an NTFS volume using one of the following cluster sizes: 128K, 256K, 512K, 1M, 2M. Previously, the largest supported cluster size was 64K.

[Image: format.png]
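
Such a volume can also be created from the command line; the /A switch sets the allocation unit size (the drive letter below is just an example):

format E: /FS:NTFS /A:2M /Q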

Currently, I’m not aware of any third-party tools that support such large clusters: there is no support in the NTFS-3G driver, no support in the Linux kernel (#1, #2), no support in The Sleuth Kit, no support in RawCopy, no support in several proprietary forensic tools.

This update also changed the way the “sectors per cluster” field (located in an NTFS boot sector) is treated. Previously, this was an unsigned byte and its value was treated literally. Now, this is a signed byte and its value is used as shown in the following pseudocode:

// Argument:
// - SectorsPerCluster: the signed byte at offset 13 in an NTFS boot sector.
// Return value:
// - The true number of sectors per cluster.
unsigned int NtfsGetTrueSectorsPerCluster(signed char SectorsPerCluster)
{
	if ((unsigned char)SectorsPerCluster > 0x80)
		// A negative value encodes a power-of-two shift: e.g., 0xF4 (-12)
		// means 1 << 12 = 4096 sectors per cluster (2M clusters, given
		// 512-byte sectors).
		return 1U << -SectorsPerCluster;
	else
		// Values up to and including 0x80 are treated as unsigned and
		// used literally.
		return (unsigned char)SectorsPerCluster;
}

This isn’t the same as the algorithm used when dealing with the “file record segment size” and “index record size” fields in an NTFS boot sector. Note the edge case when the byte is equal to 0x80: as a signed value, this is negative (-128), but it’s still used as unsigned for backward compatibility, because 0x80 is the value used for 64K clusters (128 sectors of 512 bytes).
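
For comparison, here is a sketch of how those two fields are commonly interpreted (the function name is mine); note that there is no special case for 0x80 here:

// The "file record segment size" and "index record size" fields: a
// negative byte directly encodes the size as a power of two, while a
// non-negative byte is a number of clusters.
unsigned int NtfsGetRecordSizeInBytes(signed char SizeField, unsigned int BytesPerCluster)
{
	if (SizeField < 0)
		// E.g., 0xF6 (-10) means 1 << 10 = 1024 bytes.
		return 1U << -SizeField;
	else
		return (unsigned int)SizeField * BytesPerCluster;
}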


A sample file system image can be found here.