Disk encryption: wide-block modes, authentication tags aren’t silver bullets

Recently, the IEEE released the P1619/D12 draft (October 2024), which changes the XTS mode of operation of the AES cipher. In particular, there is a new requirement:

The total number of 128-bit blocks in the key scope shall not exceed 2^44 (see D.6). For optimum security, the number of 128-bit blocks in the key scope should not exceed 2^36 (see D.4.3).

The current limit (IEEE Standard 1619-2018) is significantly higher:

The total number of 128-b blocks shall not exceed 2^64.

The proposed soft limit means that, “for optimum security”, you are advised not to encrypt more than 1 TiB of data without changing the keys. The proposed hard limit means that you are not allowed to encrypt more than 256 TiB of data without changing the keys (the current limit is 268,435,456 TiB).
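For reference, here is a quick back-of-the-envelope check of these figures in Python (16 bytes per 128-bit block):

    # Key-scope limits expressed in TiB (16 bytes per 128-bit block).
    BLOCK_BYTES = 16
    TIB = 2 ** 40

    soft_limit = 2 ** 36 * BLOCK_BYTES   # proposed "optimum security" limit
    hard_limit = 2 ** 44 * BLOCK_BYTES   # proposed hard limit
    old_limit  = 2 ** 64 * BLOCK_BYTES   # IEEE 1619-2018 limit

    print(soft_limit // TIB)   # 1
    print(hard_limit // TIB)   # 256
    print(old_limit  // TIB)   # 268435456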

This requirement makes existing full-disk encryption implementations non-compliant, as mentioned by Milan Brož and Vladimír Sedláček in the “XTS mode revisited: high hopes for key scopes?” paper. The authors state that the proposed standard lacks a clear threat model, as well as rules defining how keys should be generated.

When exploring possible alternatives, the authors of this paper suggest the following:

From a long-term perspective, it might be more beneficial to switch to a different (wide) encryption mode ([…]) if length-preserving ciphertext is required. Or if the storage device provides space for authentication tags, authenticated encryption would be a strong candidate as well.

So, let’s explore the XTS mode in practice, then take a look at its alternatives…


Frankly speaking, the XTS mode is controversial because it’s susceptible to:

  • Precise traffic analysis: an adversary capable of observing multiple versions of encrypted data can deduce what 16-byte blocks have changed between these versions.
  • Precise randomizing attacks: an adversary can turn a specific decrypted 16-byte block into random garbage by introducing a modification to the corresponding ciphertext.

These issues are well-known: e.g., see this Wikipedia article.
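Both weaknesses are easy to reproduce. Below is a minimal sketch using the Python cryptography package; it is purely illustrative (the 512-bit AES-XTS key and the sector-number tweak are arbitrary placeholders). Unchanged 16-byte blocks re-encrypt to identical ciphertext, and corrupting one ciphertext block garbles only that block after decryption:

    # Illustrative only: random key, tweak plays the role of sector number 42.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                  # AES-256-XTS: two 256-bit keys
    tweak = (42).to_bytes(16, "little")   # stand-in for the sector number

    def xts(data, op):
        c = Cipher(algorithms.AES(key), modes.XTS(tweak))
        ctx = c.encryptor() if op == "enc" else c.decryptor()
        return ctx.update(data) + ctx.finalize()

    sector_v1 = b"A" * 64                            # four 16-byte blocks
    sector_v2 = b"A" * 16 + b"B" * 16 + b"A" * 32    # only block 1 changed

    ct1, ct2 = xts(sector_v1, "enc"), xts(sector_v2, "enc")
    # Traffic analysis: the unchanged blocks produce identical ciphertext.
    print([ct1[i:i+16] == ct2[i:i+16] for i in range(0, 64, 16)])
    # -> [True, False, True, True]

    # Randomizing attack: corrupt one byte of ciphertext block 2.
    tampered = bytearray(ct1)
    tampered[32] ^= 0x01
    pt = xts(bytes(tampered), "dec")
    # Only block 2 decrypts to garbage; the other blocks are intact.
    print([pt[i:i+16] == sector_v1[i:i+16] for i in range(0, 64, 16)])
    # -> [True, True, False, True]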

In practice, both flaws matter for current full-disk encryption implementations (like BitLocker and LUKS). Here are example scenarios for the traffic analysis attacks:

  • In the TPM-only mode of operation: a physically-present attacker can boot a target computer “seamlessly” up to the operating system’s login screen — thus, enabling traffic analysis attacks (new versions of encrypted data are produced due to background write activities of the operating system).
  • In the TPM plus network key mode of operation (which is used to implement secure unattended boot for servers): the same traffic analysis attacks are possible too (although a target computer is required to reside in a corporate network during the attack, otherwise a password is needed to unlock the TPM-bound encryption key).
  • In the TPM plus password mode of operation: if the password (i.e., one of two “factors”) is compromised, the same traffic analysis attacks are possible (the password alone isn’t enough to decrypt the data).
  • Sometimes full-disk encryption is used to protect data against legitimate unprivileged users (e.g., on a corporate laptop, it’s used to “enforce” existing file system access controls). Such users have physical access to target computers, but no access to the corresponding encryption keys (even if an encryption password is set, it’s not enough to obtain the necessary key, because the key is bound to the TPM). However, they can boot the target computer and log in to its operating system, thus enabling similar traffic analysis attacks.
  • In the dual-boot scenarios, a compromised operating system installation can be used to attack another, encrypted installation. Over time, a compromised operating system is likely to observe multiple states of the encrypted volume (of another operating system).
  • Finally, some authors suggest full-disk encryption as a measure against backdoors in the HDD/SSD firmware (along with other protections, such as those against DMA and execution of untrusted code): obviously, traffic analysis attacks are possible to some extent (although there is not much space for such a backdoor to store old copies of encrypted data). A similar scenario is the “explicit” or “implicit” network boot (e.g., explicitly via the iSCSI protocol or implicitly via launching a virtual machine using a disk image stored on a network-based file system): an attacker can control the disk image storage (e.g., the iSCSI target), but not the virtualization host (which runs the virtual machine).

These scenarios are also relevant to randomizing attacks: attackers can turn some 16-byte blocks of their choice into random garbage and then observe the runtime effects. The first two examples demonstrate that even one-time access to a target computer (without any prior knowledge of a secret) may expose multiple versions of some ciphertext blocks to unauthenticated attackers, who can also force the operating system to decrypt modified ciphertext blocks.

This perfectly matches the following threat model from Microsoft:

The classic solution to this problem is to run a low-level disk encryption driver with the key provided by the user (passphrase), a token (smart card) or a combination of the two. The disadvantage of the classic solution is the additional user actions required each time the laptop is used. Most users are unwilling to go through these extra steps, and thus most laptops are unprotected.

BitLocker improves on the classic solution by allowing the user actions during boot or wake-up from hibernate to be eliminated. This is both a huge advantage and a limitation. Because of the ease of use, corporate IT administrators can enable BitLocker on the corporate laptops and deploy it without much user resistance. On the downside, this configuration of BitLocker can be defeated by hardware-based attacks.

[…]

In practice, we expect that many laptops will be used in the TPM-only mode and that scenario is the main driver for the disk cipher design.

[…]

In the BitLocker attack model we assume that the attacker has chosen some of the plaintext on the disk, and knows much of the rest of the plaintext. Furthermore, the attacker has access to all ciphertext, can modify the ciphertext, and can read some of the decrypted plaintext. (For example, the attacker can modify the ciphertext which stores the startup graphic, and read the corresponding plaintext off the screen during the boot process, though this would take a minute or so per attempt.) We also assume that the OS modifies some sectors in a predictable way during the boot sequence, and the attacker can observe the ciphertext changes.

However, the attacker cannot collect billions of plaintext/ciphertext pairs for a single sector. He cannot run chosen plaintext differences through the cipher. (He can choose many different plaintexts, but they are all for different sectors with different tweak values, so he cannot generate chosen plaintext differences on a single sector.) And finally, though this is not a cryptographic argument, to be useful the attack has to do more than just distinguish the cipher from a random permutation.

(Source.)

Continue reading “Disk encryption: wide-block modes, authentication tags aren’t silver bullets”

CVE-2025-21210 aka CrashXTS: a practical randomization attack against BitLocker

Please refer to this Wikipedia article if you need some theory… My paper is here.

Background: attacks on AES-CBC

Do you remember code execution attacks against full-disk encryption implementations using AES-CBC, like the one described in the “Code Execution In Spite Of BitLocker” article or the one detailed in the “Practical malleability attack against CBC-Encrypted LUKS partitions” post?

In these attacks, physically-present adversaries manipulate ciphertext blocks to flip specific bits in the decrypted form, thus constructing code execution payloads to be executed on the next boot. This is not a direct payload injection, since the attackers don’t actually write their “raw” code into the encrypted volumes, but rather flip the bits belonging to existing executables in order to transform their code.

Such an attack consists of two stages:

  • finding the exact position of bits to flip;
  • flipping these bits (in a way that turns original code into attacker-chosen code).

The nature of AES-CBC makes it easy to flip bits in the decrypted form: each plaintext block is XORed with the previous ciphertext block before being encrypted, which means that, during decryption, each decrypted block is XORed with the previous ciphertext block. Flipping a bit of ciphertext block N therefore flips the same bit of plaintext block N+1 (and garbles plaintext block N):

In other words, the attacker can flip arbitrary bits in one block at the cost of randomizing the previous block.

(Source.)

The hardest part here is finding the exact blocks of ciphertext to manipulate. In order to inject custom code, the attacker must know what code to modify (i.e., its plaintext bytes) and its precise location on the disk. Otherwise, flipped bits won’t produce meaningful plaintext changes: since Attacker-chosen code = Original code XOR Delta, the attacker must know the original code in order to compute the Delta that yields the chosen code.
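As a minimal sketch of this malleability (Python cryptography package; the key, IV, and 16-byte “code” blocks below are made up for illustration), XORing a chosen delta into ciphertext block N flips exactly those bits in plaintext block N+1 while randomizing plaintext block N:

    # Illustrative only: random key/IV, two 16-byte "code" blocks.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(32), os.urandom(16)

    def cbc(data, op):
        c = Cipher(algorithms.AES(key), modes.CBC(iv))
        ctx = c.encryptor() if op == "enc" else c.decryptor()
        return ctx.update(data) + ctx.finalize()

    original = b"\x90" * 16 + b"ORIGINAL CODE..."   # blocks 0 and 1 of the "code"
    wanted = b"ATTACKER CODE..."                    # what block 1 should decrypt to

    ct = bytearray(cbc(original, "enc"))

    # Delta = Original code XOR Attacker-chosen code; XOR it into ciphertext block 0.
    delta = bytes(a ^ b for a, b in zip(original[16:32], wanted))
    for i, d in enumerate(delta):
        ct[i] ^= d

    pt = cbc(bytes(ct), "dec")
    print(pt[16:32] == wanted)   # True: chosen bits flipped in block 1
    print(pt[0:16])              # random garbage: block 0 is sacrificed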

Currently, the demonstrated bit-flipping attacks rely on predictable locations of executables. The attacker installs the same operating system version on similar media to learn the disk offsets required (this is based on one assumption: two operating system installations on similar media write at least some executables to the same disk offsets). According to the “Code Execution In Spite Of BitLocker” article:

In our testing, two installations of Windows 8 onto the same format of machine put the system DLLs in identical locations. This behavior is far from guarenteed, but if we do know where a file is expected to be, perhaps through educated guesswork and installing the OS on the same physical hardware, then we will know the location, the ciphertext, and the plaintext.

Obviously, this won’t work with operating system installations that are not “fresh”… Months (or even years) of operating system updates, their installation order, and many housekeeping activities are likely to move executables to unpredictable offsets, undermining the idea of predictable locations.

Thus, the most important part of the attack – finding the exact location of bits to flip – is extremely hard to complete in reality.

Concerns: attacks on AES-XTS

Targeted bit flips through ciphertext manipulations are impossible with AES-XTS (and this is why this mode is preferred over AES-CBC when encrypting storage devices).

In this mode, any change to a ciphertext block will turn its decrypted form into garbage and won’t affect any preceding or subsequent blocks.

More than 15 years ago, Niels Ferguson and Vijay Bharadwaj argued that code execution payloads are still possible with AES-XTS:

The attacker now replaces the first few bytes of the ciphertext block and hopes that the randomized plaintext decrypts to a value such that the first instruction executed in the block is a jump instruction to the epilog code. On the x86 a relative jump instruction is 2 bytes long and can jump up to 128 bytes forward. The probability of getting the right plaintext is one in 2^16; high enough for a practical attack.

(Source.)

In other words, attackers can get specific byte values even through unpredictable (random) ciphertext manipulations.
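A rough Monte Carlo check of that one-in-2^16 estimate: a short jump on x86 is the opcode 0xEB followed by an 8-bit displacement, so two specific byte values have to appear at the start of the randomized block. The displacement 0x30 below is a made-up epilog offset, and the randomized plaintext is modeled as uniformly random bytes:

    # Count how often a "randomized" 16-byte block begins with one specific short jump.
    import os

    TARGET = bytes([0xEB, 0x30])   # jmp +0x30: hypothetical offset of the epilog
    trials, hits = 1_000_000, 0
    for _ in range(trials):
        block = os.urandom(16)     # stand-in for a corrupted block after decryption
        if block[:2] == TARGET:
            hits += 1

    print(hits / trials)           # about 1 / 65536, i.e. roughly 1.5e-5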

Practical attack: CVE-2025-21210 vs. BitLocker & AES-XTS

The idea is to force the Windows operating system to write some sensitive data to the underlying drive without encrypting it first (i.e., to leak data in plaintext form).

Continue reading “CVE-2025-21210 aka CrashXTS: a practical randomization attack against BitLocker”