This post is a written description of a presentation titled Phone unlocking tools and where to find them that we have delivered privately at different events and to different organizations, including Primavera Hacker 25, the Freedom of the Press Foundation, and the Public Interest Technology Group. It provides a technical overview of how commercial forensic tools compromise mobile devices, focusing on the specific attack surfaces exploited at each stage of the device lifecycle. Further posts will focus on sample analysis and defenses. See also the previous posts of this series:
Introduction #
Commercial phone unlocking tools are not magic. They exploit the same classes of vulnerabilities that security researchers publish about at conferences and in academic papers. What makes them distinctive is not the novelty of the techniques, but the systematic commercialization of exploitation: acquiring or developing exploits for each hardware and software combination, hardening them to a usable degree of stability, and packaging them into automated workflows that require minimal operator expertise.
This post aims to demystify the technical foundations of these tools. The attack vectors available to a forensic examiner with physical access to a device are constrained by well-understood boundaries: the boot chain, the trusted execution environment, the secure element (when present), and the kernel’s peripheral driver stack. This post mostly details Android internals; the equivalent iOS concepts are technically similar.
Much of the background research referenced here comes from Quarkslab, whose extensive publications on TEEs and Android’s data encryption architecture have been invaluable for understanding these tools. Their work, alongside reporting from Amnesty International’s Security Lab and other NGOs, and analysis by the GrapheneOS project, provides the empirical basis for the claims made in this post.
Hardware security architecture #
Before examining attack vectors, it is necessary to understand the hardware security architecture of modern Android devices. The following sections describe the components that forensic tools must ultimately compromise or bypass.
The System-on-Chip: Normal World and Secure World #
Modern ARM-based mobile processors implement TrustZone™, a hardware security extension that partitions the processor into two execution environments: the Normal World and the Secure World. The Normal World runs the Android operating system, user applications, and the Linux kernel. The Secure World runs a separate, minimal operating system, the Trusted OS, along with small, specialized applications known as Trusted Applications (TAs).
The two worlds share the same physical processor cores but are hardware-isolated: the Normal World cannot directly read or write Secure World memory. Communication between the two happens through a monitor, a piece of code running at the highest privilege level (EL3 on ARMv8), which handles context switches triggered by Secure Monitor Call (SMC) instructions. Each physical processor core effectively provides two virtual cores, with the Non-Secure bit in the Secure Configuration Register determining which world is active at any given time.
The Trusted OS hosts critical security services. Two are directly relevant to device unlocking:
- Gatekeeper: responsible for verifying user credentials (PIN, password, pattern). It implements rate-limiting to throttle brute-force attempts and, upon successful authentication, issues a signed authentication token.
- Keymaster: the key management service that stores and operates on cryptographic keys. Some keys are authentication-bound, meaning they can only be used after Gatekeeper has issued a valid token.
Different SoC vendors ship different TrustZone™ implementations. Google (and AOSP in general) uses TrustyOS; Qualcomm uses QSEE; Samsung uses TEEGRIS (on both Exynos and MediaTek SoCs in Samsung devices); Huawei uses TrustedCore; Trustonic’s Kinibi is also often found on MediaTek and Exynos SoCs. The security properties vary between implementations.
The Secure Element #
A Secure Element (SE) is a physically separate, tamper-resistant hardware chip that provides an additional layer of security beyond the TEE. Unlike the Secure World, which runs on the same SoC as the Normal World, a Secure Element is an independent processor with its own boot chain, its own firmware, and its own cryptographic key storage.
In the Android ecosystem, the Secure Element is exposed through the StrongBox API (introduced in Android 9). Google’s Pixel phones, starting with the Pixel 3, include the Titan M chip as their Secure Element, later upgraded to the Titan M2 from the Pixel 6 onward. On Apple devices, the equivalent is the Secure Enclave Processor (SEP).
The Secure Element’s role in device encryption is to hold key-encryption keys and enforce rate-limiting for credential verification in hardware that the main SoC cannot tamper with, even if fully compromised. This is an important distinction: on a device without a Secure Element, compromising the TEE is sufficient to extract or bypass credential-derived keys. On a device with a Secure Element, the attacker must additionally compromise a separate, hardened chip.
Unfortunately, only flagship devices typically include a Secure Element. Most mid-range and budget devices rely solely on the TEE for credential protection. This architectural gap is the single most important factor determining a device’s chances of resisting forensic unlocking when powered off.
The boot chain #
The boot chain establishes a chain of trust from immutable hardware to the running operating system. Each stage verifies the integrity of the next before executing it.
The chain proceeds as follows:
Boot ROM: the first code executed at power-on. It is burned into silicon at manufacture and is, in principle, immutable and unpatchable (with a few modern exceptions that allow deploying patches for vulnerabilities). The Boot ROM verifies and loads the next stage (the preloader or secondary bootloader). It forms the hardware root of trust.
Preloader/Secondary Bootloader (BL2): loaded and verified by the Boot ROM. On MediaTek SoCs, this is often called the preloader. It initializes hardware, loads the Trusted OS into the Secure World, and loads the main bootloader into the Normal World.
Trusted OS: loaded into the Secure World by the preloader. It hosts Gatekeeper, Keymaster, and other Trusted Applications. Its integrity is verified by the preloader as part of the secure boot chain.
Bootloader (BL3): the Normal World bootloader (e.g., Little Kernel on MediaTek, ABL on Qualcomm). It enforces Android Verified Boot (AVB), anti-rollback protections, and device-state checks before loading the Android kernel.
Android OS: the Linux kernel and userspace. Verified Boot (dm-verity) ensures the integrity of system partitions at runtime.
When a Secure Element is present, it runs its own independent boot chain in parallel: Boot ROM → Bootrom EXT → Bootloader → OS. The Secure Element’s boot chain is entirely separate from the SoC’s, providing the most isolation.
If any stage in the boot chain is compromised, all subsequent stages are untrustworthy. This is why Boot ROM vulnerabilities are so devastating: they are often unpatchable, they undermine everything built on top, and they affect every device using the vulnerable silicon for its entire hardware lifetime.
Android data encryption #
Understanding the attack vectors requires understanding what the attacker is ultimately trying to obtain: the decryption keys for the user’s data.
Full Disk Encryption vs. File-Based Encryption #
Android has used two encryption schemes over its history:
Full Disk Encryption (FDE) encrypts the entire data partition with a single key, derived from the user’s credentials. The key must be available before the OS can boot, so the user is prompted for their PIN or password at boot time. FDE was the default until Android 6 and has been deprecated since Android 13.
File-Based Encryption (FBE) encrypts individual files with different keys, allowing more granular access control. FBE has been the default since Android 7 and is required since Android 10. It divides storage into two classes:
- Device Encrypted (DE) storage: available immediately after boot, before the user authenticates. Used for system-critical functionality (alarms, phone dialer, Direct Boot). The DE key is derived without user credentials.
- Credential Encrypted (CE) storage: available only after the user authenticates for the first time after boot. This is where all user data resides—messages, photos, application data. The CE key is derived from the user’s credentials, stretched through scrypt, and protected by keys held in the TEE (and, when present, the Secure Element).
CE key derivation: The role of credentials #
Quarkslab’s Android Data Encryption in depth provides the most detailed public analysis of how the CE key is derived. The process, in simplified form, proceeds as follows:
1. The user’s credentials (PIN, password, or pattern) are stretched using scrypt, a memory-hard key derivation function, with parameters and salt stored in DE-protected files under `/data/system_de/<uid>/spblob`. The scrypt step is intentionally slow: it introduces a negligible delay for a single authentication attempt but makes brute-forcing computationally expensive.
2. The stretched credentials are combined with other material to form a password blob, which is sent to the Gatekeeper Trusted Application in the TEE. Gatekeeper verifies the blob against a stored password handle using an HMAC with an internal key. If verification succeeds, Gatekeeper issues a signed authentication token.
3. The authentication token is presented to Keymaster, which uses it to unlock an authentication-bound key. This key is used to perform the first decryption of the Synthetic Password, an intermediate secret stored in encrypted form on the filesystem.
4. The Synthetic Password is decrypted a second time using a key derived from the user’s credentials (the `applicationId`). This second decryption uses AES-GCM, which provides authenticated encryption: if the wrong credentials are supplied, the GCM tag will not match and decryption will fail. This is the cryptographic check that ultimately binds the CE key to the correct credentials.
5. From the Synthetic Password, the system derives the CE file encryption keys used by the Linux kernel’s `fscrypt` subsystem to decrypt individual files.
The critical observation is that user credentials are cryptographically embedded in the key derivation chain. There is no way to bypass them purely through software attacks: an attacker who compromises the TEE can bypass the authentication token check, but they still need to brute-force the credentials through scrypt to derive the correct key material.
Key derivation with a Secure Element (Weaver) #
On devices with a Secure Element, the key derivation process includes an additional step. Instead of relying solely on Gatekeeper in the TEE, the system uses Weaver, a service running on the Secure Element. Weaver stores a secret value that is released only upon correct credential verification and enforces its own independent rate-limiting.
This means that even if an attacker fully compromises the TEE or other Trusted Applications, as shown in Synacktiv’s Kinibi TEE blog post, they cannot extract the Weaver secret or bypass its throttling, provided the Secure Element is implemented properly. Brute-forcing must proceed online, at the rate the Secure Element permits, which, for a device like the Pixel with Titan M, means exponential backoff.
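The Weaver semantics described above can be sketched as a toy model: a slot stores a (key, value) pair, releases the value only on a matching key, and enforces its own backoff that the main SoC cannot skip. The class name, backoff schedule, and API below are illustrative inventions, not the actual Weaver HAL.

```python
import hmac
from typing import Optional

class WeaverSlot:
    """Toy model of one Weaver slot: a (key, value) pair whose value is
    released only on a matching key, with backoff after repeated failures.
    Names and schedule are illustrative, not the real Weaver HAL."""

    def __init__(self, key: bytes, value: bytes):
        self._key = key
        self._value = value
        self._failures = 0
        self._locked_until = 0.0

    def read(self, candidate_key: bytes, now: float) -> Optional[bytes]:
        if now < self._locked_until:
            # The main SoC cannot skip this wait: it is enforced off-SoC
            raise RuntimeError("throttled: retry later")
        if hmac.compare_digest(candidate_key, self._key):
            self._failures = 0
            return self._value          # secret released only on success
        self._failures += 1
        if self._failures >= 5:
            # Illustrative exponential backoff, capped at one day
            self._locked_until = now + min(2 ** (self._failures - 5), 86400)
        return None
```

The key property is that the backoff state lives inside the slot, on separate silicon: even a fully compromised Normal World and TEE can only call `read` and wait.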
Attack vectors #
With the architecture established, we can now map the specific attack surfaces that forensic tools exploit. These fall into two main categories, corresponding to two device states: powered off or powered on but never unlocked (BFU), and powered on and previously unlocked (AFU).
BFU attacks on devices without a Secure Element #
The most complete form of BFU attack targets the boot chain itself. If an attacker can compromise the Boot ROM or the preloader, they can break the entire chain of trust, including the TEE.
The attack proceeds as follows:
Boot ROM exploitation: the attacker connects to the device via USB and exploits a vulnerability in the Boot ROM (or uses a vendor-specific download/recovery mode, such as MediaTek’s Download Mode or Qualcomm’s EDL mode) to gain code execution before the bootloader runs.
Boot chain patching: with Boot ROM-level access, the attacker patches or replaces the preloader to disable secure boot verification of subsequent stages. This allows loading a modified Trusted OS and modified Trusted Applications.
TEE patching: the attacker replaces or patches Gatekeeper to accept any credentials and issue valid authentication tokens regardless of input. They also patch or extract keys from Keymaster. As Quarkslab demonstrated in their proof-of-concept on Samsung A22 devices (MT6769V and MT6833V SoCs), this involved patching Samsung’s TEEGRIS operating system: first disabling verification of the root filesystem, then disabling TA signature verification, and finally patching Gatekeeper’s credential comparison to always succeed.
Key material extraction: with a rooted Android system and a compromised TEE, the attacker extracts the encrypted Synthetic Password and the intermediate decryption result from Keymaster.
Offline brute force: the attacker now has all the material needed for offline brute-forcing. The brute-force loop is:
- Generate a candidate password
- Stretch it through scrypt with the stored parameters
- Derive the `applicationId`
- Attempt AES-GCM decryption of the Synthetic Password
- If the GCM tag matches, the correct credentials have been found
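The loop above can be sketched in Python. Everything here is a simplified stand-in: the scrypt parameters and salt are placeholders (the real ones come from the device’s spblob files), and the AES-GCM authenticated decryption of the Synthetic Password is modeled with a hash-based stream cipher plus an HMAC tag to stay within the standard library. The structure is what matters: one memory-hard stretch per guess, then an authenticated check that distinguishes right from wrong credentials.

```python
import hashlib
import hmac
import itertools

def stretch(pin: str, salt: bytes, n: int = 2**14) -> bytes:
    # The memory-hard step that makes every single guess expensive
    return hashlib.scrypt(pin.encode(), salt=salt, n=n, r=8, p=1, dklen=32)

def _keystream(key: bytes, length: int) -> bytes:
    return (hashlib.sha256(key).digest() * (length // 32 + 1))[:length]

def seal_sp(pin: str, salt: bytes, synthetic_password: bytes, n: int = 2**14):
    """Build a toy 'encrypted SP + auth tag' blob for the demo
    (stand-in for the real AES-GCM-protected Synthetic Password)."""
    key = stretch(pin, salt, n)
    ct = bytes(a ^ b for a, b in
               zip(synthetic_password, _keystream(key, len(synthetic_password))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct, tag

def try_pin(pin: str, salt: bytes, ct: bytes, tag: bytes, n: int = 2**14):
    key = stretch(pin, salt, n)
    if not hmac.compare_digest(hmac.new(key, ct, hashlib.sha256).digest(), tag):
        return None              # tag mismatch: wrong credentials
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

def brute_force(salt: bytes, ct: bytes, tag: bytes, length: int, n: int = 2**14):
    for candidate in itertools.product("0123456789", repeat=length):
        pin = "".join(candidate)
        sp = try_pin(pin, salt, ct, tag, n)
        if sp is not None:
            return pin, sp       # authentication tag matched
    return None, None
```

In the real attack the decrypted Synthetic Password then feeds the fscrypt key derivation; here it is simply returned.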
Because this brute-force runs entirely offline, on the attacker’s hardware, it is bounded only by computational resources and password complexity. A numeric PIN of six digits or fewer is trivially recoverable. Even longer numeric PINs fall quickly on dedicated hardware. Only a strong password mixing letters, digits, and symbols provides meaningful resistance.
MediaTek devices are particularly vulnerable to this attack. Multiple MediaTek SoCs contain Boot ROM vulnerabilities that are publicly known and exploited by open-source tools such as MTKClient. Because Boot ROM code is immutable, these vulnerabilities cannot be patched: every device using the affected silicon is permanently compromised, regardless of Android version or security patch level. Quarkslab’s proof-of-concept targeted exactly this class of device, demonstrating the full chain from Boot ROM exploit to CE key recovery.
BFU attacks on devices with a Secure Element #
When a Secure Element is present, the attack chain is fundamentally harder. Even if the attacker achieves everything described above, they still cannot extract the Weaver secret from the Secure Element under normal circumstances. The odds of a Secure Element exploit existing are low, and lower still of one being distributed as part of a widely available commercial forensics suite.
As anticipated, the Secure Element runs its own independent boot chain on physically separate silicon. Its keys cannot be extracted by the main SoC, and its rate-limiting logic is enforced in hardware that the attacker does not control.
The result is that brute-force can often only proceed online: each attempt must query the Secure Element, which enforces some form of backoff. This slows down the attack significantly. A ten-digit PIN that falls in seconds to offline brute-force can take years against a properly implemented Secure Element. An alphanumeric password becomes effectively unbreakable.
This is why, according to Cellebrite’s own February 2025 support matrix, brute-force capabilities are generally not available for Pixel devices with Titan M. The Secure Element is doing exactly what it was designed to do.
That said, the Secure Element is not immune to vulnerabilities. Quarkslab demonstrated code execution on the Titan M chip through CVE-2022-20233, a single-byte out-of-bounds write in the Keymaster task that could be exploited to hijack execution flow. This vulnerability has since been patched, but it demonstrates that Secure Elements, while vastly more resistant than TEE-only configurations, can still have issues. Google is offering up to $1,500,000 for a zero-click vulnerability in the Pixel’s Titan M2.
AFU attacks: The lock screen is just UI #
After the first unlock, the threat model changes completely. In AFU state, the CE decryption keys are loaded in memory and remain there. The lock screen is no longer a cryptographic barrier; it is just a user interface element, a screen overlay that prevents interaction but does not re-encrypt data.
This means that an AFU attacker does not need to brute-force credentials at all. They only need to achieve code execution in the kernel or a privileged process, at which point they have direct access to the decrypted filesystem. The lock screen is bypassed, and no encryption needs to be broken.
The primary attack surface in AFU state is the USB interface. When a device is locked but in AFU state, the USB controller remains active and exposes the device’s kernel to a substantial attack surface.
As the diagram illustrates, a peripheral emulator connected to a locked device’s USB port can interact with the kernel through approximately 200 reachable drivers, including MTP (Media Transfer Protocol), MSC (Mass Storage Class), HID (Human Interface Device), Audio, and CVC (Communication Voice Class) subsystems. Each of these drivers is a potential attack surface for memory corruption vulnerabilities.
The exploit used by Cellebrite against a Serbian activist’s device in December 2024, as documented by Amnesty International, attempted to use at least three Linux kernel USB driver vulnerabilities:
- CVE-2024-53104: a vulnerability in the USB Video Class (UVC) driver.
- CVE-2024-53197: an out-of-bounds write in the ALSA USB-audio driver, triggered when a malicious USB device provides a `bNumConfigurations` value exceeding allocated memory. This vulnerability affects legacy code present since Linux kernel 2.6.26 (2008) and was confirmed by CISA to be actively exploited in the wild.
- CVE-2024-50302: a memory leak in the HID subsystem that exposes uninitialized kernel memory, including encryption keys and authentication tokens, when processing crafted USB HID reports.
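To see why a field like `bNumConfigurations` matters, the sketch below parses the standard 18-byte USB device descriptor and then models a driver that trusts the advertised count: with a descriptor claiming 255 configurations against a single allocated entry, the loop walks out of bounds (an IndexError in Python, an out-of-bounds access in C). The field layout follows the USB 2.0 device descriptor; the "driver" is a deliberately naive illustration, not the actual ALSA code.

```python
import struct

# Standard 18-byte USB device descriptor layout (USB 2.0 spec, chapter 9)
DEVICE_DESC = struct.Struct("<BBHBBBBHHHBBBB")

def parse_device_descriptor(data: bytes) -> dict:
    """Unpack the fields a host kernel reads during USB enumeration."""
    (b_length, b_descriptor_type, bcd_usb, b_device_class, b_device_subclass,
     b_device_protocol, b_max_packet_size0, id_vendor, id_product, bcd_device,
     i_manufacturer, i_product, i_serial,
     b_num_configurations) = DEVICE_DESC.unpack(data[:18])
    return {
        "bLength": b_length,
        "bDeviceClass": b_device_class,
        "idVendor": id_vendor,
        "idProduct": id_product,
        "bNumConfigurations": b_num_configurations,
    }

def naive_driver_walk(descriptor: dict, allocated_configs: list) -> list:
    # A driver that trusts the advertised count and iterates past its own
    # allocation: an IndexError here, an out-of-bounds access in C
    return [allocated_configs[i]
            for i in range(descriptor["bNumConfigurations"])]
```

A peripheral emulator controls every byte of this descriptor, which is why the host kernel must validate all of it before use.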
The Cellebrite Turbo Link device, a specialized hardware adapter that connects between the forensic workstation and the target device, likely functions as a peripheral emulator capable of presenting itself as multiple USB device types in rapid succession, probing for each vulnerability to achieve kernel-level code execution.
Many of these USB drivers are loaded by default and remain reachable even when the device is locked. The kernel does not distinguish between a legitimate USB accessory and a malicious peripheral emulator; this indiscriminate trust is the main weakness that AFU exploitation relies upon.
AFU attacks on locked devices in practice #
Once the attacker achieves kernel code execution through USB driver exploitation, the lock screen is irrelevant. The attacker can:
- Directly read the decrypted filesystem, since CE keys are already in memory
- Perform a Full File System (FFS) extraction of all user data
- Access application data, messages, photos, credentials, and metadata
- Potentially install persistent backdoors
As we already knew, a device in AFU state is fundamentally less secure than a device in BFU state, regardless of the strength of the user’s password or the presence of a Secure Element. The Secure Element protects credential-derived keys at boot; it does not protect already-decrypted data in memory after the first unlock.
This is confirmed by Cellebrite’s February 2025 support matrix, which shows AFU extraction capabilities even for devices (including recent Pixels with stock Android) that resist BFU attacks. Notably, according to the same documentation, GrapheneOS on Pixel devices resists AFU extraction, probably thanks to its attack surface reduction and USB restrictions, which go beyond what stock Android implements.
A note on iOS: checkm8 and USB Restricted Mode #
Apple devices face analogous attack surfaces. The checkm8 exploit, publicly released in 2019, targets an unpatchable Boot ROM vulnerability present in all Apple devices from the iPhone 4S through the iPhone X (A5 through A11 SoCs). Like MediaTek Boot ROM vulnerabilities, checkm8 cannot be fixed through software updates: every affected device is permanently exploitable.
For newer Apple devices, AFU attacks rely on USB-based exploitation. Apple introduced USB Restricted Mode in iOS 11.4.1 to mitigate this: if the device has been locked for more than one hour, the Lightning or USB-C port is restricted to charging only, disabling data connections that forensic tools require.
However, Quarkslab’s analysis of CVE-2025-24200 revealed that USB Restricted Mode was bypassable. The vulnerability, reported by Citizen Lab and patched in iOS 18.3.1, allowed a physical attacker to re-enable USB data connections on a locked device by exploiting a flaw in the Accessibility framework. The profiled daemon, which handles device management settings, failed to check whether the device was locked before processing requests to disable USB Restricted Mode. In other words, the USB port was only soft-disabled: something as simple as a logic bug in the policy enforcement allowed it to be re-enabled.
Defensive implications #
The attack vectors described above lead to a clear set of defensive priorities.
BFU is your best defense #
The most impactful single action when facing potential device seizure is to power the device off. This returns it to BFU state, where:
- CE storage is encrypted and keys are not in memory
- USB driver exploitation cannot yield decrypted data
- The attacker must compromise the boot chain and brute-force credentials
- On devices with a Secure Element, brute-force is rate-limited by hardware
Auto-reboot: returning to BFU automatically #
If a device cannot be manually powered off, an automatic reboot mechanism provides the next best defense. After a configurable period without user authentication, the device reboots, returning to BFU state and purging decryption keys from memory.
GrapheneOS has implemented this feature for years. Apple introduced it in iOS 18. Stock Android on Pixel devices gained a limited version in Android 15, but its implementation remains less configurable and less aggressive than GrapheneOS’s. The feature is most useful with a short timeout (2-4 hours); it loses much of its value at the multiple-day timeout Google enforces, likely as a result of negotiation with law enforcement.
USB port restriction #
Disabling USB data transfer when the device is locked directly eliminates the AFU attack surface described above. Android 15 introduced USB restriction options; iOS has had USB Restricted Mode since version 11.4.1. GrapheneOS provides more trustworthy and configurable USB restrictions.
Password strength #
The brute-force analysis makes the case for strong passwords unambiguous:
- A 4-digit PIN: trivially crackable offline, and crackable even online within hours to days
- A 6-digit PIN: trivially crackable offline; crackable online within days to weeks depending on rate-limiting implementation
- A pattern: equivalent to or weaker than a short PIN in entropy
- An alphanumeric password of 10+ characters with mixed case and symbols: computationally infeasible to brute-force offline through scrypt; effectively impossible to brute-force online against a Secure Element
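These estimates come from straightforward arithmetic over the keyspace. The guess rates below are illustrative assumptions, not measured numbers:

```python
def crack_time_seconds(keyspace: float, guesses_per_second: float) -> float:
    """Average time to find the credential: half the keyspace at a given rate."""
    return keyspace / 2 / guesses_per_second

# Illustrative guess rates (assumptions, not measurements):
OFFLINE_SCRYPT = 10_000   # guesses/s against scrypt on dedicated hardware
ONLINE_SE = 1 / 60        # ~1 guess/min against hardware rate-limiting

PIN4 = 10**4              # 4-digit PIN
PIN6 = 10**6              # 6-digit PIN
ALNUM10 = 70**10          # 10 chars drawn from ~70 symbols

# 4-digit PIN offline: ~0.5 s on average; online against a Secure Element:
# a few days. A 10-character mixed password offline at the same rate:
# millions of years.
```

The exact rates vary by hardware and scrypt parameters, but the orders of magnitude are what drive the recommendations above.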
The password is only required at boot. Biometric authentication (fingerprint, face recognition) handles daily unlocking, unless the user is in a jurisdiction where physical coercion is likely, whether lawfully or not. The marginal inconvenience of entering a strong password once after each reboot is minimal compared to the protection it provides.
Device choice #
According to Cellebrite’s own February 2025 documentation:
- MediaTek-based devices: effectively no mitigation possible against BFU attacks due to unpatchable Boot ROM vulnerabilities.
- Most non-Pixel, non-Samsung Android devices: listed as unlockable, with few exceptions, due to a combination of delayed security updates, weak TEE implementations, and absent Secure Elements.
- Samsung devices (Exynos): partial protection, varying by model and patch level.
- Google Pixel with stock Android: resistant to BFU attacks on recent models, but AFU file system extraction is possible.
- Google Pixel with GrapheneOS: resistant to both BFU and AFU attacks on recent models (6a and newer). This is the strongest protection available on any Android device.
It is worth noting that alternative Android distributions such as LineageOS or CalyxOS, while valuable for other reasons, do not meaningfully change the forensic unlocking picture compared to the stock or vendor ROM.
Application-level encryption #
Applications that implement their own encryption layer with a separate password can provide defense-in-depth against full device compromise. Password managers’ master password vaults are a good example: even if the entire device filesystem is extracted, the vault remains encrypted under a key that the forensic tool has not obtained.
However, the effectiveness of application-level protection depends on several factors: whether the app actually implements its own encryption independently from the OS credential protection, whether decryption keys remain in memory after first unlock (making them vulnerable to AFU extraction), whether the app relies on the system biometric authentication (which, without a Secure Element, is broken along with the rest of the OS key hierarchy), and whether notifications or message previews are cached in plaintext outside the app’s encrypted storage.
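The key lifecycle concern above can be modeled with a toy vault: the key is derived from a separate password and held in memory only while the vault is unlocked, so an AFU-style memory capture finds nothing once the vault is locked. Class and method names are invented for illustration; a real implementation would use a vetted AEAD for the stored blob and careful memory wiping.

```python
import hashlib
import hmac
import os

class AppVault:
    """Toy model of app-level protection with its own password.
    Illustrative only; not a real vault implementation."""

    def __init__(self, password: str):
        self._salt = os.urandom(16)
        # Store only a hash of the derived key, never the key itself
        self._verifier = hashlib.sha256(self._derive(password)).digest()
        self._key = None

    def _derive(self, password: str) -> bytes:
        # Separate scrypt derivation, independent of the OS credential chain
        return hashlib.scrypt(password.encode(), salt=self._salt,
                              n=2**10, r=8, p=1, dklen=32)

    def unlock(self, password: str) -> bool:
        key = self._derive(password)
        if hmac.compare_digest(hashlib.sha256(key).digest(), self._verifier):
            self._key = key   # key now resident: exposed to AFU extraction
            return True
        return False

    def lock(self) -> None:
        self._key = None      # after lock, a memory capture finds no key

    @property
    def exposed_to_afu(self) -> bool:
        return self._key is not None
```

The design point is the `lock` path: an app that never drops its key after first unlock offers no AFU protection, regardless of how strong its password is.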
Conclusion #
The way these tools operate is neither magic nor secret. The more samples we collect and reverse engineer, and the more promotional and support material we read, the better we understand their capabilities and the defenses against them.
As we have said many times, these tools simply should not exist commercially. The companies that build them are active participants in the zero-day vulnerability trade, stockpiling security flaws that weaken every device, for every user, everywhere. The technical defenses described here are effective but place the burden on individuals and communities.
References #
- Quarkslab, “Android Data Encryption in depth” (2023)
- Quarkslab, “Introduction to Trusted Execution Environment: ARM’s TrustZone™” (2018)
- Quarkslab, “A Deep Dive into Samsung’s TrustZone™” (series)
- Quarkslab, “Attacking Titan M with Only One Byte” (2022)
- Quarkslab, “First analysis of Apple’s USB Restricted Mode bypass (CVE-2025-24200)” (2025)
- Quarkslab, “Reverse Engineering Samsung S6 SBoot” (series)
- Synacktiv, “Kinibi TEE: Trusted Application exploitation” (2018)
- Amnesty International Security Lab, “A Digital Prison: Surveillance and the suppression of civil society in Serbia” (2024)
- Amnesty International Security Lab, “Cellebrite zero-day exploit used to target phone of Serbian student activist” (2025)
- GrapheneOS, “Discussion on Cellebrite Premium July 2024 documentation” (2024)
- Samsung Knox, “Encryption systems description”
- Google, “Titan Hardware Chip documentation”
- Google, “Titan M makes Pixel 3 our most secure phone yet” (2018)
- Android Source, “File-Based Encryption”
- Android Source, “Full-Disk Encryption”
- Bjoern Kerler, “MTKClient” — open-source tool exploiting MediaTek Boot ROM vulnerabilities