Windows 11, the most-used consumer desktop operating system in the world, undoubtedly has its problems. Yet despite those problems, it’s the most refined version of the company’s operating system, even with the unsavory additions that have pushed many users to stick with Windows 10 for as long as possible or make the jump to Linux instead. There’s one specific aspect of Windows that has always bugged me, though, and it’s Microsoft’s long-standing policy of requiring drivers to be digitally signed before the operating system will load them.
In simple terms, a driver is low-level code (often running in the operating system’s kernel) that lets hardware or software interact with Windows. A signed driver includes a cryptographic signature from a trusted authority (such as Microsoft’s own certificate or, in the past, a Microsoft-authorized Certificate Authority), which Windows verifies for authenticity and integrity before allowing it to run. This driver signature enforcement has evolved over the decades, becoming a mandatory gatekeeper in the process, and it has a dual nature.
On one hand, it can’t be denied that it drastically improves security by blocking malware from running at such a deep level (without a proper certificate anyway, which can be stolen), but on the other hand, it limits user control and demands compliance with Microsoft’s rules. It’s hostile to user freedom, yet has clear benefits, too. It’s one of the best security features of Windows, yet its existence is inherently anti-consumer.
What are driver signatures?
They have a long history
Driver signing is part of Microsoft’s Code Integrity security feature, first introduced in the Windows Vista era and tightened with Windows 10, version 1607, which requires new drivers to be signed by Microsoft itself. The concept is straightforward: any code that runs in the Windows kernel (known as “Ring 0”) must carry a valid digital signature from a trusted authority. According to Microsoft’s official documentation, Code Integrity “improves the security of the operating system by validating the integrity of a driver or system file each time it’s loaded into memory,” and on 64-bit versions of Windows, “kernel-mode drivers must be digitally signed.” In practice, this means Windows will refuse to load any driver that isn’t signed by a recognized certificate.
Like on other operating systems, the kernel (ntoskrnl.exe, or the Windows NT kernel) is the core of the OS with the highest privileges, so blocking unauthorized code from running in this region is critical. Digital signatures ensure that a driver was published by an identified developer and hasn’t been tampered with since. Unsigned or maliciously modified drivers simply won’t install under the default policy, and from a security perspective, this is a good thing that protects consumers and companies alike.
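To put “code running in Ring 0” into perspective, here’s a minimal sketch of a kernel-mode driver in standard WDK/C style (an illustrative skeleton, not code from any driver mentioned in this article). Even something this trivial won’t load on 64-bit Windows unless the compiled .sys file carries a valid signature.

```c
// Minimal WDM-style kernel driver skeleton, shown only to illustrate what
// "code running in Ring 0" looks like. It does nothing useful, yet Windows
// still refuses to load the compiled .sys unless it's properly signed.
#include <ntddk.h>

DRIVER_UNLOAD DriverUnload;

VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("Example driver unloading\n");
}

// DriverEntry is the kernel-mode equivalent of main(); it runs with full
// kernel privileges the moment the driver is loaded.
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    DbgPrint("Example driver loaded\n");
    return STATUS_SUCCESS;
}
```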
In practice, this means that legitimate hardware vendors and developers go through a signing process for their driver, and in modern Windows, this often involves obtaining an Extended Validation (EV) certificate and submitting the driver to Microsoft for approval. If code tries to execute in the kernel without this approval, you’ll see an error along the lines of “Windows cannot verify the digital signature for the drivers required for this device.” This prevents a whole class of attacks where malware might install a rootkit or malicious driver to gain total control of the system. On modern 64-bit Windows, loading a device driver is essentially the only supported way to execute arbitrary code in the kernel, and that avenue is closed entirely to unsigned code.
What about an Administrator? Well, not even accounts with that level of privilege are exempt. No matter who you are, you can’t load an unsigned driver on 64-bit Windows. The only ways around it are the “Disable driver signature enforcement” boot option, which resets on your next boot, or using bcdedit to disable the checks entirely. It’s a safety net placed by Microsoft that not even the owner of the computer is meant to cross.
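To see where that wall actually sits, here’s a hedged sketch of how a kernel driver is normally registered and started from user mode through the Service Control Manager. The service name and .sys path are hypothetical placeholders; the point is that even from an elevated Administrator prompt, the start call fails for an unsigned binary, typically with error 577 (ERROR_INVALID_IMAGE_HASH).

```c
// Sketch: register and start a kernel driver via the Service Control Manager.
// "ExampleDriver" and the .sys path are hypothetical. Even when run elevated,
// starting an unsigned driver on 64-bit Windows fails; the usual result is
// GetLastError() == 577 (ERROR_INVALID_IMAGE_HASH), i.e. the signature check failed.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
    if (!scm) {
        printf("OpenSCManager failed: %lu\n", GetLastError());
        return 1;
    }

    SC_HANDLE svc = CreateServiceW(
        scm, L"ExampleDriver", L"Example Driver",   // hypothetical names
        SERVICE_ALL_ACCESS, SERVICE_KERNEL_DRIVER,
        SERVICE_DEMAND_START, SERVICE_ERROR_NORMAL,
        L"C:\\drivers\\example.sys",                // hypothetical path
        NULL, NULL, NULL, NULL, NULL);
    if (!svc) {
        printf("CreateService failed: %lu\n", GetLastError());
        CloseServiceHandle(scm);
        return 1;
    }

    // This is the step Code Integrity gates: if example.sys isn't signed,
    // the kernel rejects the image and StartServiceW returns FALSE.
    if (!StartServiceW(svc, 0, NULL)) {
        printf("StartService failed: %lu\n", GetLastError());
    }

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}
```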
Microsoft has tightened the requirements over the years
It all started with a simple driver verifier
Microsoft’s path toward mandatory driver signatures began in the mid-2000s amid growing concern over spyware, rootkits, and OS stability. Introduced with Windows 2000, Driver Verifier was a command-line program used to test drivers for illegal function calls and detect bugs, and it gained a GUI coinciding with the launch of Windows XP. Back then, driver signing was present but not strictly required, though a group policy option could be set to block unsigned driver installation entirely, warn the user but still allow installation, or just install silently.
This changed with the x64 editions of Windows. Starting with Windows Vista (and even Windows XP x64 Edition in a limited form, though you could self-sign a certificate), 64-bit Windows systems required kernel-mode drivers to be signed, as part of a broader security initiative that also included Kernel Patch Protection, informally referred to as PatchGuard. The introduction of mandatory signing in Vista x64 was controversial at the time, but Microsoft’s stated aim was to eliminate entire categories of malware and, according to some reports at the time, protect DRM.
It’s no secret that driver signature enforcement aligns with many industry interests, and it also meant that, at the time, Microsoft could essentially force companies to pay for signing certificates to distribute their drivers. Otherwise, those drivers simply wouldn’t install on most machines. Since then, the requirements have only become more stringent, and as already mentioned, Windows 10 version 1607 enforced the requirement that all drivers must be attestation-signed by Microsoft.
Windows 11, which requires UEFI Secure Boot and a TPM by default on new systems, doubles down on ensuring the boot process and drivers are trusted. In essence, modern Windows has a central authority (Microsoft) that says which low-level code is allowed to run. The result is a much harder target for attackers to penetrate, while conveniently placing Microsoft (and a handful of certificate vendors) as gatekeepers of the Windows platform.
Microsoft wants to protect the kernel at all costs
Even if it means regular developers can’t use it, either
To be clear, there’s a strong case to be made that driver signature enforcement has significantly improved security on Windows. By blocking unsigned drivers, it severely hampers all kinds of digital attacks, such as rootkits and kernel-level malware, that could otherwise hide from antivirus software. In the past, much of the most advanced malware would try to operate as a driver in order to access memory or alter the system at a deep level. Today, if a piece of malware doesn’t have a stolen or leaked digital certificate, it simply cannot load a driver on a fully-patched 64-bit Windows system, which is a pretty big barrier to cross when compared to the Windows XP days. If an unsigned driver is found at boot, it’s rejected before Windows has even finished starting.
Modern anti-cheat systems for online games have also become major beneficiaries of Windows’ driver signing requirements. In many competitive titles, the more advanced cheat developers will try to run their cheat programs in kernel mode to avoid detection by user-mode anti-cheat tools. This is why Easy Anti-Cheat, FACEIT, Riot Vanguard, and many more anti-cheat solutions all install their own kernel drivers as part of the anti-cheat suite. These anti-cheats operate with a level of privilege above even an Administrator user (remember how not even an Administrator can install an unsigned driver?) to monitor the system for any cheats, block memory access to the game, and ensure the game’s code isn’t being tampered with. Driver signing is a core part of this protective moat that developers build around their games. Because Windows will reject any driver that isn’t properly signed, cheat developers can’t simply build a custom kernel driver and load it on a whim to bypass anti-cheat, as the OS won’t even allow it.
In response, cheat providers and malware developers have looked for loopholes, and the lengths they have to go to are a testament to how effective driver enforcement is. One common technique is known as BYOVD, or Bring Your Own Vulnerable Driver, where attackers find an already-signed driver that has known security holes. The legitimate driver is loaded, accepted by Windows, and then its vulnerabilities are exploited in order to execute code in the kernel. One such example abuses the Lenovo Mapper driver, deploying an unsigned cheat driver and disabling the TPM check conducted by Riot’s Vanguard.
All of this relates back to Direct Memory Access (DMA) attacks and cheats, too. DMA allows hardware devices to access system memory directly, bypassing the CPU and potentially allowing a secondary computer to read from or write to the game’s memory. However, Windows has Kernel DMA Protection, which uses the IOMMU to block unauthorized PCIe devices from accessing memory. Only devices with DMA Remapping-compatible drivers are allowed through, and again, those drivers fall under Microsoft’s signature enforcement. Combine this with Secure Boot, which prevents boot-time malware or cheat loaders from inserting themselves before Windows starts, and TPM-based boot attestation, and what you’ve got is a considerably secure environment, considering it’s a user-controlled PC.
This technique isn’t unique to game cheats, either. There are many ransomware examples that have abused drivers to disable system security features and load malicious code into the kernel, essentially shifting the attack surface from the operating system to the low-level code that Microsoft has vetted and signed. Malware developers need to piggyback off of an existing driver already installed on the user’s computer, which means either finding a vulnerability in an extremely common driver or tricking the user into installing a piece of software with the vulnerability.
Driver signature enforcement is merely a cog in an overall security architecture, and alone, it’s not enough to stop everything. Yet, paired with these other technologies Microsoft also uses, there’s no denying that it raises the bar considerably. A stolen certificate is living on borrowed time before it’s discovered and revoked, and hardware workarounds are often fleeting, too.
Why is driver signature enforcement anti-consumer?
It comes down to what you can and can’t run on your own hardware
If driver signing is so good for security, what makes it anti-consumer? The criticism comes from the fact that this security mechanism inherently restricts user freedom and control over their own system. There’s an implicit trade-off between security and openness, and Microsoft leans heavily on the former rather than the latter. The company chose a model where the operating system only trusts low-level code vetted by Microsoft, essentially centralizing authority in a way that also aligns with industry interests and makes them money on certification, too.
Going back to disabling driver signature verification, developing your own custom driver for personal use, be it for hardware you’ve built yourself or for a device you own, is a nuisance. You’ll need to boot with the “Disable Driver Signature Enforcement” start-up option every time, disable integrity checks entirely via bcdedit, or enable Windows’ test-signing mode, none of which is particularly convenient. You don’t own your computer in the way “ownership” would typically imply; at the kernel level, Microsoft retains control.
On top of that, only big companies or well-resourced developers can easily meet the driver signing requirements. To get a driver properly signed for a modern Windows version, a developer must obtain an EV Code Signing Certificate, which requires rigorous identity verification and a hardware token, and costs several hundred dollars per year. Notepad++ is a famous example of this code-signing debacle, with a user-level program affected by the refusal to pay a yearly fee for certification. The same concept applies to drivers, too.
Beyond developers, the requirement can also prevent regular consumers from using old hardware that never received a signed driver update. Let’s say you have an old PC peripheral you want to plug into a Windows 11 computer; if its driver is from the Windows XP era and never had a digital signature, it’ll be blocked outright. You can disable signature enforcement (with all of the issues that brings) or abandon the hardware. While you could have modified the driver yourself in the past, DIY solutions are largely unheard of these days, given the cost and complexity involved.
In fact, even when community members take matters into their own hands, things can backfire quickly. There are two well-known helper drivers developers can use to control system fans from their own applications: InpOut32 and WinRing0. The former conflicts with Riot’s Vanguard, so many opted for the latter, and it became the backbone of tools like Fan Control. However, it was discovered in 2020 that WinRing0 had a massive vulnerability, which led to it being flagged and blocked by Windows Defender a few years later, leaving applications that relied on it dead in the water.
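To make that concrete, here’s a hedged sketch of the general pattern these utilities follow: a user-mode app opens a device exposed by a signed kernel helper driver and asks it to do privileged work (reading a CPU MSR, in this example) through DeviceIoControl. The device name and IOCTL code are hypothetical placeholders, not WinRing0’s actual interface, but the shape is the same, which is why losing the one shared, signed helper driver hurt so many apps at once.

```c
// Hedged sketch of how fan/RGB utilities typically talk to a kernel helper driver.
// The device name "\\.\ExampleHelper" and the IOCTL below are hypothetical
// placeholders, not the real WinRing0 interface.
#include <windows.h>
#include <stdio.h>

// A real helper driver defines its own control codes with CTL_CODE().
#define IOCTL_EXAMPLE_READ_MSR \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    // The helper driver has to be installed (and signed) already, or this open fails.
    HANDLE dev = CreateFileW(L"\\\\.\\ExampleHelper", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        printf("Could not open helper driver: %lu\n", GetLastError());
        return 1;
    }

    DWORD msrIndex = 0x19C;          // example: IA32_THERM_STATUS on Intel CPUs
    unsigned long long value = 0;
    DWORD bytesReturned = 0;

    // All of the privileged work (the actual MSR read) happens inside the driver.
    if (DeviceIoControl(dev, IOCTL_EXAMPLE_READ_MSR, &msrIndex, sizeof(msrIndex),
                        &value, sizeof(value), &bytesReturned, NULL)) {
        printf("MSR 0x%lX = 0x%llX\n", msrIndex, value);
    } else {
        printf("DeviceIoControl failed: %lu\n", GetLastError());
    }

    CloseHandle(dev);
    return 0;
}
```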
This is compounded by the cost of developing and maintaining a valid driver that Microsoft will accept. Here’s a passage from an article on The Verge which spells out the problem:
SignalRGB founder Timothy Sun says the security risk is more complicated, though. “Since WinRing0 installs system-wide, we realized we were dependent on whatever version was first installed on a user’s system. This made it extremely difficult to verify whether other applications had installed potentially vulnerable versions, effectively putting our users at risk despite our best efforts,” he says.
That’s why his company invested in its own RGB interface instead, eventually ditching WinRing0 in 2023 in favor of a proprietary SMBus driver. But the developers I spoke to, including Sun, agree that’s an expensive proposition.
“I won’t sugarcoat it — the development process was challenging and required significant engineering resources,” says Sun. “Small open source projects do not have the financial ability to go that route, nor dedicated Microsoft kernel development experience to do so,” says OpenRGB’s [Adam] Honse.
WinRing0’s developer, OpenLibSys, appears inactive these days, and it’s unlikely that the same driver, if updated, would be approved by Microsoft for signing under the company’s stricter guidelines. Microsoft also knew how many applications relied on it (Razer Synapse, SteelSeries Engine, and more), giving it a few more years of life before finally axing it in 2025.
What about Linux?
A very different ethos
Unlike Windows, Linux is an open system: there is no single authority that dictates what can run in the kernel. Linux distributions do have the ability to enforce module signing (particularly when Secure Boot is enabled, some distros require kernel modules to be signed by a trusted key), but ultimately, the user can recompile the kernel or disable those checks. This is one of the many reasons anti-cheat software can’t be deployed on Linux in the same way that it can be on Windows.
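For contrast, here’s the classic minimal kernel module sketch. On most distributions, anyone with root access and the kernel headers can compile something like this and load it with insmod, and if module signing is enforced, that same user can enroll their own key or switch the check off, which is precisely the freedom Windows denies.

```c
// Minimal Linux kernel module ("hello world" style). Unlike a Windows driver,
// nothing stops the machine's owner from building and loading this themselves:
// module signing, where enforced, uses keys the user can control.
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Trivial example module");

static int __init example_init(void)
{
    pr_info("example module loaded\n");
    return 0;
}

static void __exit example_exit(void)
{
    pr_info("example module unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
```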
On Linux, a cheater with root-level access is all-powerful. They could recompile the kernel to remove anti-cheat hooks or load their own kernel module with no central signing authority to stop them, and given that many cheaters have found simply running their cheats as root from the /root directory to be enough to avoid detection, you can see why game developers aren’t too keen on porting their anti-cheat software to Linux. Even if a game insisted on root access (which would arguably be worse than the anti-cheat equivalent on Windows), you could run it in a “fakeroot” environment so that the game thinks it has root access when it doesn’t.
All of this is to say that the open nature of Linux means any defensive measure can be counteracted by an equally privileged offense, and this reality is reflected in the current state of Linux gaming. While many popular titles are playable on Linux (oftentimes running even better than they would on Windows), many competitive games outright refuse to run on Linux as a result. These same concepts apply to malware, too, though the landscape on Linux when it comes to malicious software is very different.
Microsoft’s driver signature enforcement is attractive to companies. By locking down the kernel, Windows enables a level of security and control (for fighting cheats, malware, and more) that simply cannot be achieved on a more open system without those restrictions. For many gamers and companies, that trade-off is often worth it, even if it frustrates a segment of users. Linux users enjoy unparalleled control, but that very freedom means any client-side anti-cheat mechanism is usually futile. To even get the security benefits Windows has in this area on a Linux machine, you’d be recreating the same constraints that made you want to leave Windows in the first place.
As for the reason Linux users are still secure, despite not having a central certificate authority? It comes down to a combination of factors. Between Linux’s granular permissions system, an open-source community that rapidly patches vulnerabilities as they appear (even if major ones occasionally slip through, like xz-utils), a smaller desktop market share that makes it a less interesting target, and software packages largely being installed through vetted repositories, attacking a Linux user is nowhere near as attractive as targeting a Windows user instead.
Freedom is not always compatible with security
Linux and Windows differ greatly
Microsoft’s driver signature policy is undeniably effective for security: by requiring all kernel drivers to be signed and vetted, Microsoft has built one of the most robust consumer OS defenses against low-level malware and cheating tools. Windows, as a platform, is uniquely capable of maintaining a kernel that users and software can trust. That’s why it’s one of the “best” security features: it works remarkably well and has made a real difference in protecting systems.
Yet that security comes at a cost to consumers. It takes away control and hands it to a central authority, and not being able to fully control what your operating system does doesn’t sit right with many who value open computing. In a sense, Windows 11 treats the user a bit like an untrusted participant when it comes to kernel code, assuming anyone (including you) could do something harmful if not prevented.
From a security standpoint, this particular feature is a great example of risk reduction, significantly closing one of the most dangerous avenues of attack. From a consumer rights standpoint, though, it can feel like we’re simply renting the ability to use our own hardware, at the mercy of what Microsoft does and does not allow. What if this concept is expanded further to “protect” the rest of the operating system? What happens when debloating tools and scripts make modifications Microsoft isn’t happy about?
For the average user, enforcing driver signatures is a great move. Undeniably. Yet it doesn’t feel great that open-source developers are ousted from the platform by the costs associated with developing their own software and sharing it with others, nor does it sit well that I don’t truly own my hardware, so long as Windows is the primary way I interface with it.