Look to the east slightly ahead of dawn on Sept. 2 to catch a fleeting glimpse of fleet-footed Mercury alongside the bright star Regulus, before it becomes lost in the light of the rising sun.
Stargazers hoping to catch a brief glimpse of Mercury should find a viewing spot with a clear view of the eastern horizon and be in position an hour before sunrise in order to stand a chance of spotting the elusive planet as it rises slightly before the sun.
The rocky world will appear as a bright morning star hugging the horizon among the stars of the constellation Leo, with the bright star Regulus — also known as the “heart of the lion” — located less than 2 degrees to its lower right. Remember, the width of your little finger held at arm’s length is approximately 1 degree in the night sky.
Venus will be visible as a bright point of light to the upper right of Mercury on Sept. 2, with mighty Jupiter positioned beyond to form a diagonal line of solar system planets. A 6-inch telescope will allow you to resolve the cloud bands on Jupiter, along with the moon-like phases of Venus, though we would strongly advise against turning binoculars or a telescope on Mercury owing to its proximity to the rising sun.
Mercury appears close to Regulus predawn on Sept. 2. (Image credit: Starry Night)
As the closest planet to the sun, Mercury's tight orbit prevents it from ever rising particularly high in Earth's sky. On the morning of Sept. 2, Mercury will reach an altitude of just 6 degrees at sunrise from the perspective of viewers in New York, and will quickly be overwhelmed by the glow of the rising sun.
Mercury will be nearly impossible to spot for the remainder of the month as it draws closer to the sun in the sky ahead of its superior solar conjunction on Sept. 13, when it will pass on the opposite side of the sun relative to Earth. The months that follow will see Mercury appear as a bright evening star, which will be visible low on the western horizon in the hours after sunset.
Five years ago, investor Katelin Holloway made what she calls a “literal moon shot” investment. A founding partner of the generalist venture firm Seven Seven Six, Holloway admits she and her team had “no clue” what rocket company Stoke Space was talking about when it pitched the firm on its reusable launch technology. “We knew full well we were not the specialist,” she says.
Since then, Holloway has also invested in Interlune, a company planning to harvest helium-3 from the moon and sell it back to Earth for quantum computing and medical imaging applications.
Holloway is well aware of the skepticism these bets might attract. At the same time, her journey from space novice to investor reflects a broader change in venture capital, as VCs without aerospace engineering degrees increasingly back space startups. In fact, global venture investment in space technology reached $4.5 billion across 48 companies as of July, according to PitchBook; that’s more than four times the amount that space startups attracted in 2024.
What’s driving this trend? For starters, SpaceX and other companies have substantially reduced launch costs, making space accessible to founders with applications-focused business models. “We are literally as a species sitting on the precipice of space becoming part of our day-to-day lives,” Holloway told this editor in a recent episode of TC’s StrictlyVC Download podcast. “And I truly do not think the world understands that or is ready for it.”
That has allowed VCs to look past companies that build rockets to startups that use space-based data and infrastructure for new applications like climate monitoring, intelligence gathering, and communications. They’re also betting on orbital logistics, in-space manufacturing, satellite servicing, and lunar infrastructure development. Companies like Interlune represent this new category. For investors like Holloway, the appeal often lies at the “space tech meets climate tech” intersection, meaning startups that want to avoid repeating Earth’s environmental mistakes in space.
Geopolitical tensions are also making defense-related space startups attractive, as China’s rapidly advancing space capabilities drive increased U.S. investment. VCs can be a nervous lot, and defense spending gives them greater confidence in the commercial viability of space ventures, since the U.S. government provides a reliable customer base and validation for emerging technologies. At the Department of the Air Force Summit in March, Defense Secretary Pete Hegseth said, “I feel like there’s no way to ignore the fact that the next and most important domain of warfare will be the space domain.”
Numerous U.S. defense-focused space startups closed sizable rounds this year, including military-class orbital systems developer True Anomaly, which announced a $260 million Series C led by Accel in July; and satellite manufacturer K2 Space, which is currently working on its first government mission and closed a $110 million round in February co-led by Lightspeed Venture Partners and Altimeter Capital. The defense angle adds sheen to space investments that might otherwise seem too risky. Indeed, Holloway notes that helium-3, the gas that Interlune plans to harvest, has national security applications, too, including detecting nuclear weapon movements.
AI is creating even more momentum, including at the intersection of geospatial analytics and intelligence. In March, for example, the first satellite launched for FireSat, a partnership between Google, the nonprofit Earth Fire Alliance, and satellite builder Muon Space that is designed to detect wildfires from orbit. The collaboration, announced last year, plans to deploy more than 50 satellites specifically built for wildfire detection. Earth imaging operator Planet Labs has also teamed up with Anthropic to analyze Earth observation data.
Perhaps most remarkably, the timetable for returns on these investments has shortened to a surprising degree. Traditional space companies required decades to generate returns, but today’s VCs believe they can achieve liquidity within standard 10-year fund horizons. “Our fund model hasn’t changed, so we still have a 10-year horizon,” Holloway explains. “We would not have made this investment if we did not think we could create outsized returns within 10 years.”
That kind of schedule sounds ambitious, but the public markets certainly seem receptive to these new space companies. Space infrastructure company Voyager listed in New York in June with a $1.9 billion market cap and closed its first day up 82% from its IPO price. (Its shares have since fallen roughly 45%.) The 48-year-old space systems manufacturer Karman Space & Defense opened 30% above its listing price in February. (Its shares have risen nearly 60% more since then.)
For Interlune, Holloway envisions potential exits including strategic acquisitions by aerospace or defense giants, energy company purchases, or even a government buyout given the national security implications that she describes.
All these converging forces – cheaper launches, defense spending, AI applications, and compressed timelines for returns – are reshaping who can invest in space. Holloway’s background – from public school teacher to Pixar script supervisor to Reddit’s VP of People & Culture to venture capitalist – highlights the broader skill sets these companies actually need. While she’s self-effacing when it comes to helium-3 harvesting physics, she brings operational chops.
“At the end of the day, a company is a company,” she says. “If you’re bringing humans together to build something hard, you need someone with a background in building strong companies.”
Whether the approach will pay off remains to be seen. The space economy is still mostly untested at scale, and many of these ambitious ventures face technical and regulatory hurdles that more traditional software startups have never encountered. But as more generalist VCs like Holloway place their bets, space is beginning to look less like a specialized niche and more like another buzzy sector where, if you have the operational know-how, you don’t need an aerospace engineering degree.
Google is preparing a major Play Games update which will introduce Steam-like public “gamer profiles” that show stats, milestones, and other data about your time playing games across Android and Play Games on Windows.
Starting on September 23, Google says that Play Games profiles will be updated to include “gaming stats and milestones” from games you’ve installed, as well as offering “new social features.”
Google Play Games profiles already exist, but they’re not readily accessible, and certainly lack any kind of “social features.” As it stands today, they mainly just show achievements.
Details are a little unclear, but the company’s description sounds a lot like what Steam offers its players. Public profiles on Steam can show what games a user has played, what achievements they’ve earned, and more. But, like Steam, Google will let users change their profile settings to hide this data from public view, as the company explains on a support page.
Google’s email (which apparently started rolling out late last week) to Play Games users reads as follows:
We’ll soon update the way gamer profiles work on Google Play.
Starting on September 23rd, we’ll begin updating Play Games profiles, including yours. Your profile will include your gaming stats and milestones from games you’ve installed from Google Play and new social features. Your profile and related features will soon show up right on Google Play so it’s easier to access all of our gaming offerings.
To power features and services related to your gaming profile, Google will collect information about your game usage, such as which games you’ve played and when you’ve played them. We’ll also use this data to improve the Google Play gaming experience. Just like today, developers may receive information about your profile and your activity and purchases in their game to help them provide and improve the game, subject to their privacy policies. Developers may also send data to Google about your activity in their games, such as your achievements and game progress.
When we update your profile, we’ll use your existing profile visibility settings as the default settings for your updated profile. For example, if your current profile is set to “visible to everyone,” then information about your updated profile will also be visible to everyone. You can learn more about or update your current profile visibility settings here.
Your profile will be updated automatically, so you don’t need to take any action. Remember that you can delete your Play Games profile from your Google Account at any time.
You can also delete your Google Account entirely. Deleting your Google Account will delete all data and content in that account, like emails, files and photos.
We’re excited to show you your new integrated gaming experience and look forward to seeing you on Google Play.
On another support page relaying the same message, Google clarifies that these changes won’t take effect in the EU and UK until October 1.
Wi-Fi is no longer just for internet access. A new system named WhoFi uses the way the human body alters radio waves to identify a person with up to 95.5 percent rank-1 accuracy, according to a new study.
It runs on consumer-grade gear and still works in poor light. The approach relies on radio measurements, not cameras.
How the WhoFi system works
The idea is simple to state but technical under the hood. A WhoFi deployment captures Channel State Information (CSI), the fine-grained description of how a Wi-Fi signal changes as it moves through a room and around a person.
It then turns those changes into a compact biometric signature that is unique to that individual.
The work comes from Danilo Avola of Sapienza University of Rome, whose team built and tested the pipeline on public data.
The team reports that Wi-Fi is not just a stand-in for cameras, but offers different strengths that visual systems lack.
CSI is a matrix of amplitudes and phases across antennas and subcarriers. In plain terms, it captures tiny differences in how radio energy arrives at the receiver, which encode body shape and motion.
Those CSI sequences feed a deep network that learns a person-specific embedding. The best results came from a Transformer encoder that excels at long-range temporal patterns in the signal.
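To make that concrete, here is a minimal sketch (in PyTorch) of that kind of pipeline: per-packet amplitude vectors pass through a Transformer encoder and are pooled into a unit-norm signature vector. The shapes and hyperparameters here are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSITransformerEncoder(nn.Module):
    def __init__(self, n_subcarriers=114, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Each time step is one packet's amplitude vector (one value per subcarrier).
        self.input_proj = nn.Linear(n_subcarriers, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, csi_amplitudes):
        # csi_amplitudes: (batch, time_steps, n_subcarriers)
        x = self.input_proj(csi_amplitudes)
        x = self.encoder(x)            # attends over long-range temporal patterns
        x = x.mean(dim=1)              # pool over time into a fixed-length vector
        return F.normalize(x, dim=-1)  # unit-norm "signature" embedding

# Example: 8 samples of 200 packets (illustrative; the paper records ~2,000),
# 114 subcarriers each -> (8, 128) signatures.
signatures = CSITransformerEncoder()(torch.randn(8, 200, 114))
```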
What the data say
The researchers evaluated WhoFi on the NTU-Fi Human ID benchmark, which contains recordings of 14 subjects performing short walks under different clothing conditions.
The NTU-Fi Human ID dataset includes measurements captured with higher resolution CSI tools that expose many subcarriers, allowing finer distinctions between people.
WhoFi’s top-line result is a 95.5 percent rank-1 identification rate, with a mean average precision of 88.4 percent on the test split.
That score came from the Transformer configuration that was trained on amplitude sequences and validated against a held-out set of test data.
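For readers unfamiliar with re-identification metrics, the sketch below shows how rank-1 accuracy and mean average precision are typically computed from query and gallery embeddings. It is a generic illustration rather than the paper's evaluation code, and it assumes unit-norm NumPy arrays with every query identity present in the gallery.

```python
import numpy as np

def rank1_and_map(query_emb, query_ids, gallery_emb, gallery_ids):
    """Rank-1 accuracy and mAP for re-identification (generic sketch)."""
    sims = query_emb @ gallery_emb.T            # cosine similarities (unit-norm inputs)
    rank1_hits, ap_scores = [], []
    for i, qid in enumerate(query_ids):
        order = np.argsort(-sims[i])            # gallery indices, best match first
        matches = gallery_ids[order] == qid     # True where the person matches
        rank1_hits.append(matches[0])           # is the top match the right person?
        hit_ranks = np.flatnonzero(matches)     # 0-based ranks of all true matches
        # Precision at each true-match rank: (# hits so far) / (rank position).
        precisions = (np.arange(len(hit_ranks)) + 1) / (hit_ranks + 1)
        ap_scores.append(precisions.mean())
    return float(np.mean(rank1_hits)), float(np.mean(ap_scores))
```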
The hardware was not unusual. The paper documents tests using two TP-Link N750 routers, one transmitter and one receiver, recording 114 subcarriers per antenna pair and roughly 2,000 packets per sample.
The sample size is small, which matters because biometric systems can drift when scaled. The authors acknowledge that training stability and overfitting are real risks when models get deeper than necessary.
How WhoFi is different
Early attempts tied Wi-Fi to cameras to get the job done. In 2020, EyeFi, a system that fused camera and Wi-Fi data, demonstrated about 75 percent accuracy in identifying people during live tests with group sizes ranging from two to ten.
WhoFi goes a different route. It removes cameras from the loop and learns directly from CSI dynamics, which travel with a person across spaces.
The change is not just academic. Removing cameras avoids face capture, reduces sensitivity to clothing variation, and uses infrastructure that already exists in homes and offices.
Why it matters
The promise of radio based sensing is not new. A decade ago, researchers at MIT showed that low bandwidth Wi-Fi could track people through walls, establishing that consumer frequencies propagate through common barriers.
WhoFi rides that physics. It is insensitive to lighting, it can operate when a person is not in direct line of sight, and it can keep working when cameras would be occluded.
The system is also quiet in operation. There is no need to ask users to wear anything or carry a device, which changes the conversation about consent.
How the WhoFi fingerprint forms
Wi-Fi routers send data across many narrow frequency slices called subcarriers.
When a person stands or walks between a transmitter and receiver, the amplitudes and phases on those slices shift in ways that correlate with their body and gait.
WhoFi ingests those time series and uses an encoder to create a fixed length vector that represents that person.
The encoder’s output is normalized, then matched against a gallery to see if there is a close neighbor from the same individual.
Training leans on in-batch negatives. Each training batch pairs queries and galleries so the model learns to push mismatched people apart while pulling the same person’s signatures together.
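A minimal sketch of that objective, assuming a generic contrastive formulation rather than the paper's exact loss: matching query/gallery pairs sit on the diagonal of a similarity matrix, and cross-entropy over each row pulls them together while pushing every other identity in the batch apart.

```python
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(query_sigs, gallery_sigs, temperature=0.1):
    # query_sigs, gallery_sigs: (batch, dim) unit-norm signatures, where row i
    # of both tensors comes from the same person.
    logits = query_sigs @ gallery_sigs.T / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy treats the diagonal (same person) as the correct class,
    # pulling matched signatures together and pushing the rest apart.
    return F.cross_entropy(logits, targets)
```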
Wi-Fi and WhoFi
The NTU-Fi benchmark is controlled, with short walks inside a defined area and set clothing conditions. That reduces real world noise from crowds, variable furniture, and reflections that can interfere in busy environments.
Sample diversity is another constraint. Fourteen subjects do not capture the range of body types, mobility aids, and cultural garments found in daily life.
“Wi-Fi signals offer several advantages over camera-based approaches: they are not affected by illumination, they can penetrate walls and occlusions, and most importantly, they offer a privacy preserving mechanism for sensing,” wrote Avola.
The paper’s conclusion stresses that the Transformer encoder was both accurate and efficient in this setting. The authors also note that common preprocessing steps, like amplitude filtering, did not always help.
Security and privacy questions
Accuracy numbers excite engineers, but deployment raises policy issues. A store could, in principle, use this technique to recognize returning customers without asking for permission.
Law enforcement and regulators have a stake as well. Radio based identifiers might bypass laws written with cameras and faces in mind, which means legal frameworks will need review.
Wi-Fi operates on a shared spectrum, which makes sensing cheap to scale. That lowers the barrier for third parties who want to track presence and movement across access points.
The flip side is that any re-identification tool needs a gallery. Without a reference signature, a system can say the same person appeared twice, but it cannot assign a civil identity.
What happens next with WhoFi
There are benign uses. Hospitals might want fall detection in dark rooms, and home routers already ship with motion sensing features that rely on CSI.
Industrial safety is another candidate. In zones where cameras are banned, radio could watch for entry violations and trigger alarms.
The WhoFi team kept the system purely academic so far. They trained with public data, documented every step, and compared encoders in a reproducible way.
Future validation will need larger cohorts, varied buildings, and longer time gaps to test stability. It will also need clearer rules on consent and retention.
The study is published on arXiv.
Internet personality xQc, who has over 12.2 million Twitch followers, has a unique perspective on games as both a content creator and a former Overwatch pro. As NBA 2K’s next title nears its official release date, xQc has tagged 2K Games in a callout post asking the franchise’s team to solve its hacker problem. Let’s recap the situation and what it means for the industry.
Streamer and content creator xQc has an extensive history as an Overwatch pro, but he is also an avid gamer across many other titles. One of these is 2K Games’ basketball franchise NBA 2K, where players compete on the court and climb leaderboards.
NBA 2K26 is not out yet, but it’s already drawing a divided reception. Steam reviews at the time of writing are “Mixed,” with 60% leaning positive. As the franchise’s newest installment approaches its September 5 release date, fans are sparking discourse about the series’ state.
On September 1, 2025, xQc shared his own opinion in an X.com post condemning NBA 2K’s cheating epidemic and calling on 2K Games to take action. Posting in the wee hours of the night, the streamer tagged NBA 2K’s official page and said:
“THERE IS NO WAY THAT IN 2025 THERE ARE NO SOLUTIONS TO THE CHEATER EPIDEMIC. Every single lobby is riddled full of Zen users. The game is unplayable, how is this acceptable? Your player base deserves better.”
xQc’s post has since received over 7,000 likes and over 700,000 views.
xQc’s “Zen” statement refers to a type of modded controller, the Cronus Zen. Many NBA 2K players use them to hit difficult shots perfectly using scripts.
Many NBA 2K community members agree with xQc’s take. In his replies, they call out the game’s development team for ‘neglecting the community’ and being unresponsive to player feedback. One user, @RioStaysTrue, says:
“They destroyed their community… on [release] weekend, only 30,000 viewers watching 2k […] they’ve neglected the entire community for YEARS and have allowed cheating every single year.”
NBA 2K is far from the only esports title with a cheating and hacking epidemic. Valve’s Counter-Strike 2 is notorious for hackers in every rank, and most Rainbow Six Siege players have seen AKs spinning in the sky like helicopters as an invisible player aimbots. However, most of these titles’ developers are aware of the situation and actively taking steps to combat the problem. Counter-Strike 2 players, for example, can turn to FACEIT, a separate matchmaking service that requires a dedicated anti-cheat and ID verification for all players.
NBA 2K’s franchise is a sports game titan, and it also operates successfully in the esports world as the NBA 2K League. Its lineup combines esports orgs like G2 with affiliates of traditional basketball teams like the 76ers, the Lakers and the Celtics.
At the end of the day, players want to see their own experiences reflected on the big screen. When cheaters run rampant, a 2K pro match or high-level stream will be less appealing, and players will lose their motivation to climb the leaderboards.
With high-engagement streamers like xQc calling out the issue, 2K Games will likely have to respond. Further developments may arise in the upcoming weeks.
Nintendo has rebranded its transmedia division, changing the name from Warpstar Inc. to Nintendo Stars.
Warpstar is a consolidated subsidiary launched just last April to manage its film, TV, and other transmedia business. Now, however, in a memo posted to the official Nintendo Japan website, the company said it was “strengthening” the “ancillary use” of films featuring Nintendo IP and expanding it to also encompass live events and merchandise, “through project development, licensing, and other means.”
“By implementing and licensing various ancillary uses in films, Nintendo Stars Inc. aims to familiarize people around the world with Nintendo IP and provide new ways to experience it,” the statement added (thanks, GameDeveloper).
The press release also hinted that Nintendo Stars, as it’s now known, will continue its development of the Kirby franchise by “utilizing the know-how cultivated in this business,” and will “expand the number of people who have access to Nintendo IP through cooperation with partners around the world, and strengthen the relationship between Nintendo and its customers.”
The statement also reminded investors that there’s a new animated film based on Super Mario Bros. scheduled to release on April 3, 2026, and a live-action film based on The Legend of Zelda slated to debut on May 7, 2027.
As Apple moves to diversify its manufacturing operations, it appears to be leaning on suppliers to shoulder the costs of automating their assembly lines. Here are the details.
The latest collateral effect of Trump’s trade war
In an exclusive report today, DigiTimes Asia says that Apple has been doubling down on industrial automation as it shifts manufacturing away from China.
The report says that while Apple has always incentivized its suppliers to invest in automation, the company “plans to intensify implementation starting in 2025.”
From the report:
“According to supply chain sources, Apple now mandates automation as a precondition for securing orders, requiring suppliers to independently invest in related equipment rather than relying on Apple to fund such upgrades.”
DigiTimes Asia says that this push touches on “all major product lines, including the iPhone, iPad, Apple Watch, and Mac,” and that while suppliers may see increased costs while spinning up their lines, the investment should be amortized over the long term through improved “yield rates and reduced production expenses.”
In practice, this means that as Apple looks to diversify its supply chain away from China due to Trump’s tariff wars, local workforces in other countries may not reap the benefits, as Apple’s reliance on automation is reportedly aimed at reducing labor dependency.
Although Apple’s automation push doesn’t come as a surprise, it does undercut one of the U.S. government’s talking points when it comes to Apple and the pressure to drive it away from China.
Here’s a statement made last April by Commerce Secretary Howard Lutnick, as the tariff war was still heating up:
“The army of millions and millions of human beings screwing in little screws to make iPhones. That kind of thing is going to come to America.”
To be fair, Apple has done its best to appease the U.S. government with flashy announcements of recycled infrastructure plans. Still, today’s report makes it clear that, regardless of where production goes, automation, not labor, will probably drive Apple’s move away from China.
Windows 11, the most-used consumer desktop operating system in the world, undoubtedly has its problems. Yet, despite those problems, it’s the most refined version of the company’s operating system, regardless of the unsavory additions that have seen many users either commit to Windows 10 for as long as possible or even make the jump to Linux instead. There’s one specific aspect of Windows that has always bugged me, though, and it’s Microsoft’s long-standing policy of requiring drivers to be digitally signed before the operating system will load them.
In simple terms, a driver is low-level code (often running in the operating system’s kernel) that lets hardware or software interact with Windows. A signed driver includes a cryptographic signature from a trusted authority (such as Microsoft’s own certificate or, in the past, a Microsoft-authorized Certificate Authority), which Windows verifies for authenticity and integrity before allowing it to run. This driver signature enforcement has evolved over the decades, becoming a mandatory gatekeeper in the process, and it has a dual nature.
On one hand, it can’t be denied that it drastically improves security by blocking malware from running at such a deep level (without a proper certificate anyway, which can be stolen), but on the other hand, it limits user control and demands compliance with Microsoft’s rules. It’s hostile to user freedom, yet has clear benefits, too. It’s one of the best security features of Windows, yet its existence is inherently anti-consumer.
What are driver signatures?
They have a long history
Driver signing is part of Microsoft’s Code Integrity security feature, first introduced in the Windows Vista era and made mandatory with Windows 10, version 1607. The concept is straightforward: any code that runs in the Windows kernel (known as “Ring 0”) must carry a valid digital signature from a trusted authority. According to Microsoft’s official documentation, Code Integrity “improves the security of the operating system by validating the integrity of a driver or system file each time it’s loaded into memory,” and on 64-bit versions of Windows, “kernel-mode drivers must be digitally signed.” In practice, this means Windows will refuse to load any driver that isn’t signed by a recognized certificate.
Like on other operating systems, the kernel (ntoskrnl.exe, or the Windows NT kernel) is the core of the OS with the highest privileges, so blocking unauthorized code from running in this region is critical. Digital signatures ensure that a driver was published by an identified developer and hasn’t been tampered with since. Unsigned or maliciously modified drivers simply won’t install under the default policy, and from a security perspective, this is a good thing that protects consumers and companies alike.
In practice, this means that legitimate hardware vendors and developers go through a signing process for their driver, and in modern Windows, this often involves getting an Extended Validation Certificate and submitting the driver to Microsoft for approval. If code tries to execute in the kernel without this approval, you’ll see an error along the lines of “Windows cannot verify the digital signature for the drivers required for this device.” This prevents a whole class of attacks where malware might install a rootkit or malicious driver to gain total control of the system. On modern 64-bit Windows, loading a device driver is essentially the only supported way to execute arbitrary code in the kernel, and this is completely disabled for unsigned executables.
What about an Administrator? Well, not even accounts with that level of privilege are exempt. No matter who you are, you can’t load an unsigned driver on 64-bit Windows. The only way to disable it is to use the “Disable driver signature enforcement” boot option, which will reset on your next boot, or use bcdedit to disable the checks entirely. It’s a safety net placed by Microsoft that not even the owner of the computer should ever cross.
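For reference, a minimal sketch of those bcdedit toggles, assuming an elevated Command Prompt; on modern systems with Secure Boot enabled, Windows typically blocks or ignores these settings, so treat this as illustration rather than a recommendation.

```
:: Run from an elevated Command Prompt. Secure Boot typically must be
:: disabled for these settings to take effect on modern systems.
bcdedit /set testsigning on          :: accept test-signed drivers (persists across boots)
bcdedit /set nointegritychecks on    :: turn off signature checks entirely
bcdedit /set testsigning off         :: revert to the default policy
bcdedit /set nointegritychecks off
```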
Microsoft has tightened the requirements over the years
It all started with a simple driver verifier
Microsoft’s path toward mandatory driver signatures began in the mid-2000s amid growing concern over spyware, rootkits, and OS stability. Starting with Windows 2000, Driver Verifier was a command-line program that could be used to test drivers for illegal functions and detect bugs, before it was updated with a GUI coinciding with the launch of Windows XP. Back then, driver signing was present but not strictly required, though a group policy option could be set to disallow installation entirely, warn the user but still allow installation, or just install silently.
This changed with the x64 editions of Windows. Starting with Windows Vista (and even Windows XP x64 Edition in a limited form, though you could self-sign a certificate), 64-bit Windows systems required kernel-mode drivers to be signed, as part of a broader security initiative that also included Kernel Patch Protection, informally referred to as PatchGuard. The introduction of mandatory signing in Vista x64 was controversial at the time, but Microsoft’s stated aim was to eliminate entire categories of malware and, according to some reports at the time, protect DRM.
It’s no secret that driver signature enforcement aligns with many industry interests, and it also meant at the time that Microsoft could essentially force companies to pay for a license to distribute their drivers. Otherwise, those drivers simply wouldn’t install on most machines. Since then, the requirements have only become more stringent, and as already mentioned, Windows 10 version 1607 enforced the requirement that all drivers be attestation-signed by Microsoft.
Windows 11, which requires UEFI Secure Boot and a TPM by default on new systems, doubles down on ensuring the boot process and drivers are trusted. In essence, modern Windows has a central authority (Microsoft) that says which low-level code is allowed to run. The result is a much harder target for attackers to penetrate, while conveniently placing Microsoft (and a handful of certificate vendors) as gatekeepers of the Windows platform.
Microsoft wants to protect the kernel at all costs
Even if it means regular developers can’t use it, either
To be clear, there’s a strong case to be made that driver signature enforcement has significantly improved security on Windows. By blocking unsigned drivers, all kinds of digital attacks, such as rootkits and kernel-level malware, that could otherwise hide from antivirus software are severely hampered. In the past, many of the most advanced malware would try to operate as a driver in order to access memory or alter the system at a deep level. Today, if a piece of malware doesn’t have a stolen or leaked digital certificate, it simply cannot load a driver on a fully-patched 64-bit Windows system, which is a pretty big barrier to cross when compared to the Windows XP days. If an unsigned driver is found at boot, the system simply won’t start.
Modern anti-cheat systems for online games have also become major beneficiaries of Windows’ driver signing requirements. In many competitive titles, the more advanced cheat developers will try to run their cheat programs in kernel mode to avoid detection by user-mode anti-cheat tools. This is why Easy Anti-Cheat, FACEIT, Riot Vanguard, and many more anti-cheat solutions all install their own kernel drivers as part of the anti-cheat suite. These anti-cheats operate with a level of privilege above even an Administrator user (remember how not even an Administrator can install an unsigned driver?) to monitor the system for any cheats, block memory access to the game, and ensure the game’s code isn’t being tampered with. Driver signing is a core part of this protective moat that developers build around their games. Because Windows will reject any driver that isn’t properly signed, cheat developers can’t simply build a custom kernel driver and load it on a whim to bypass anti-cheat, as the OS won’t even allow it.
In response, cheat providers and malware developers have looked for loopholes that prove the effectiveness of driver enforcement. One common technique is known as BYOVD, or Bring Your Own Vulnerable Driver, where attackers find an already-signed driver that has known security holes. The legitimate driver is loaded, accepted by Windows, and then its vulnerabilities are exploited in order to execute code in the kernel. One such example abuses the Lenovo Mapper driver, deploying an unsigned cheat driver and disabling the TPM check conducted by Riot’s Vanguard.
All of this relates back to Direct Memory Access (DMA) attacks and cheats, too. DMA allows hardware devices to access system memory directly, bypassing the CPU and potentially allowing a secondary computer to read from or write to the game’s memory. However, Windows has Kernel DMA Protection that makes use of the IOMMU to block unauthorized PCIe devices from accessing memory. Only devices with DMA Remapping-compatible drivers are able to, and again, this driver feature is protected as part of Microsoft’s signature enforcement. Combine this with Secure Boot, which prevents boot-time malware or cheat loaders from inserting themselves before Windows starts, plus TPM-based boot attestation, and what you’ve got is a considerably secure environment, considering it’s a user-controlled PC.
This technique isn’t unique to game cheats, either. There are many ransomware examples that have abused drivers to disable system security features and load malicious code into the kernel, essentially shifting the attack surface from the operating system to the low-level code that Microsoft has vetted and signed. Malware developers need to piggyback off of an existing driver already installed on the user’s computer, which means either finding a vulnerability in an extremely common driver or tricking the user into installing a piece of software with the vulnerability.
Driver signature enforcement is merely a cog in an overall security architecture, and alone, it’s not enough to stop everything. Yet, paired with these other technologies Microsoft also uses, there’s no denying that it raises the bar considerably. A stolen certificate will be running on borrowed time before it’s discovered and revoked, and hardware workarounds are often fleeting, too.
Why is driver signature enforcement anti-consumer?
It comes down to what you can and can’t run on your own hardware
If driver signing is so good for security, what makes it anti-consumer? The criticism comes from the fact that this security mechanism inherently restricts user freedom and control over their own system. There’s an implicit trade-off between security and openness, and Microsoft leans heavily on the former rather than the latter. The company chose a model where the operating system only trusts low-level code vetted by Microsoft, essentially centralizing authority in a way that also aligns with industry interests and makes them money on certification, too.
Going back to disabling driver signature verification: developing your own custom driver for personal use, be it for hardware you’ve built or a device you own, is a nuisance. You’ll need to either boot with the “Disable Driver Signature Enforcement” start-up option, disabling integrity checks entirely, or enable Windows test-signing mode, neither of which is particularly convenient. You don’t own your computer in the way “ownership” would typically imply; at the kernel level, Microsoft retains control.
What’s more, only big companies or well-resourced developers can easily meet the driver signing requirements. To get a driver properly signed for a modern Windows version, a developer must obtain an EV Code Signing Certificate, which requires rigorous identity verification and a hardware token and costs several hundred dollars per year. Notepad++ is a famous example of this code-signing debacle, a user-level program affected by its developer’s refusal to pay a yearly fee for certification. The same concept applies to drivers, too.
However, the requirement can also prevent regular consumers from using old hardware that never received a signed driver update. Let’s say you have an old PC peripheral you want to plug into a Windows 11 computer; if its driver is from the Windows XP era and never had a digital signature, it’ll be blocked outright. You can disable signature enforcement (with all of the issues that brings) or abandon the hardware. While you could have modified the driver in the past, DIY solutions are largely unheard of these days, given the associated cost and the complexity of it.
In fact, even when community members take matters into their own hands, things can backfire quickly. There are two well-known drivers developers can use to control system fans in their own applications: InpOut32 and WinRing0. The former conflicts with Riot’s Vanguard, so many opted for the latter, and it became the backbone of tools like Fan Control. However, it was discovered in 2020 that WinRing0 had a massive vulnerability, which saw it flagged and blocked by Windows Defender a few years later, leaving the applications that relied on it dead in the water.
This problem is compounded by the cost problem when it comes to developing and maintaining a valid driver that Microsoft will accept. Here’s a passage from an article on The Verge which spells out the problem:
SignalRGB founder Timothy Sun says the security risk is more complicated, though. “Since WinRing0 installs system-wide, we realized we were dependent on whatever version was first installed on a user’s system. This made it extremely difficult to verify whether other applications had installed potentially vulnerable versions, effectively putting our users at risk despite our best efforts,” he says.
That’s why his company invested in its own RGB interface instead, eventually ditching WinRing0 in 2023 in favor of a proprietary SMBus driver. But the developers I spoke to, including Sun, agree that’s an expensive proposition.
“I won’t sugarcoat it — the development process was challenging and required significant engineering resources,” says Sun. “Small open source projects do not have the financial ability to go that route, nor dedicated Microsoft kernel development experience to do so,” says OpenRGB’s [Adam] Honse.
WinRing0’s developer, OpenLibSys, appears inactive these days, and it’s unlikely that the same driver, even if updated, would be approved by Microsoft for signing under the company’s stricter guidelines. Microsoft also knew how many applications relied on it (Razer Synapse, SteelSeries Engine, and more), giving it a few more years of life before axing it in 2025.
What about Linux?
A very different ethos
Unlike Windows, Linux is an open system: there is no single authority that dictates what can run in the kernel. Linux distributions do have the ability to enforce module signing (particularly if Secure Boot is enabled, some distros require kernel modules to be signed by a key), but ultimately, the user can recompile the kernel or disable those checks. This is one of the many reasons anti-cheat software can’t be deployed on Linux in the same way that it can be on Windows.
On Linux, a cheater with root-level access is all-powerful. They could recompile the kernel to remove anti-cheat hooks or load their own kernel module with no central signing authority to stop them, and given that many cheaters found simply running their cheats as root from the /root directory was enough to avoid detection, you can see why game developers aren’t too keen on porting their anti-cheat software to Linux. Even if a game insisted on root access (which would arguably be worse than the anti-cheat equivalent on Windows), you could run it in a “fakeroot” environment so that the game thinks it has root access when it doesn’t.
All of this is to say that the open nature of Linux means any defensive measure can be counteracted by an equally privileged offense, and this reality is reflected in the current state of Linux gaming. While many popular titles are playable on Linux (oftentimes running even better than they would on Windows), many competitive games outright refuse to run on Linux as a result. These same concepts apply to malware, too, though the landscape on Linux when it comes to malicious software is very different.
Microsoft’s driver signature enforcement is attractive to companies. By locking down the kernel, Windows enables a level of security and control (for fighting cheats, malware, and more) that simply cannot be achieved on a more open system without those restrictions. For many gamers and companies, that trade-off is often worth it, even if it frustrates a segment of users. Linux users enjoy unparalleled control, but that very freedom means any client-side anti-cheat mechanism is usually futile. To even get the security benefits Windows has in this area on a Linux machine, you’d be recreating the same constraints that made you want to leave Windows in the first place.
As for why Linux users are still secure despite not having a central certificate authority? The answer lies in a combination of factors. Between Linux’s advanced permissions system, an open-source community that rapidly patches vulnerabilities as they appear (even when some major ones slip through, like xz-utils), a smaller market share that makes it a less interesting target, and software packages largely being installed through vetted repositories, a Linux user is nowhere near as attractive a target as a Windows user.
Freedom is not always compatible with security
Linux and Windows differ greatly
Microsoft’s driver signature policy is undeniably effective for security: by requiring all kernel drivers to be signed and vetted, Microsoft has built one of the most robust consumer OS defenses against low-level malware and cheating tools. Windows, as a platform, is uniquely capable of maintaining a kernel that users and software can trust. That’s why it’s one of the “best” security features: it works remarkably well and has made a real difference in protecting systems.
Yet that security comes at a cost to consumers. It takes away control and hands it to a central authority, and not being able to fully control what your operating system does doesn’t sit right with many who value open computing. In a sense, Windows 11 treats the user a bit like an untrusted participant when it comes to kernel code, assuming anyone (including you) could do something harmful if not prevented.
From a security standpoint, this particular feature is a great example of risk reduction. It significantly closes one of the most dangerous avenues of attack, but from a consumer rights standpoint, it can feel like we’re simply renting the functionality to use our own hardware, as we’re at the behest of what Microsoft does and does not allow. What if this concept is expanded to “protect” the operating system? What if debloating tools and scripts make modifications that Microsoft isn’t happy about, as they modify the system?
For the average user, enforcing driver signatures is a great move. Undeniably. Yet it doesn’t feel great that open-source developers are priced off the platform by the costs of developing their own software and sharing it with others, nor does it sit right that I don’t truly own my hardware so long as Windows is the primary way I interface with it.
Robert Whittaker’s career has taken a difficult turn.
For much of the current decade, Whittaker’s goal has been to recapture the UFC Middleweight crown, which he lost during the rise of Israel Adesanya in 2019. For five years, Whittaker dispatched numerous top contenders while in the hunt for gold, first earning a second title shot against “Stylebender” (which he lost) and then working to earn another chance after that disappointing rematch.
For a few years, Whittaker was the clear-cut second-best Middleweight behind only Adesanya. Then, he lost to Dricus Du Plessis, who went on to capture gold shortly afterward. It was a major upset and a tough loss, but “Stillknocks” proved to be a special fighter. Undeterred, Whittaker kept pushing forward and picked up two excellent wins, a clear decision over Paulo Costa and a tremendous knockout of the highly touted Ikram Aliskerov.
“The Reaper” was riding hot into his title eliminator versus Khamzat Chimaev back in Oct. 2024, but he wound up with his teeth shattered inside a round. He attempted to rebound versus red-hot Reinier de Ridder, but the split decision went against him after 25 grueling minutes.
At 34 years of age, Whittaker is no longer the No. 2-ranked guy. There are now quite a few fighters between him and the title, forcing the fan favorite to reevaluate his position and future plans. While Whittaker won’t retire after the consecutive losses, he also admits that retaking the UFC belt is no longer at the forefront of his mind.
“The belt is kind of like a pipe dream at the moment after losing to de Ridder,” Whittaker admitted on Submission Radio. “It’s another loss. It pushes me back into much further than I want to be from the title, the pathway to where I wanted to finish up. So right now, my trajectory has kind of changed. I’ve got a few fights left, I want to enjoy the journey. I want to enjoy the fights. I want to enjoy the camp process, fight week, the fight itself. I want my family to be part of that. I want my boys and my kids to see the show, see the background of the big corporation that is the UFC … That’s my biggest goal right now.”
As for his immediate future, Whittaker mentioned that he’s still considering a Light Heavyweight move and eyeing the Sean Strickland matchup. He’s discussed both options in the past, but the timing may be right for the “Tarzan” brawl, seeing as Strickland has lost two of his last three. There are lots of rising contenders at 185 pounds right now, so there are plenty of interesting matchups available either way.