OSAKA — Nintendo’s Switch 2 game console, which is selling at a record pace and remains elusive in shops a month after its release, may not become easier to purchase until the spring, say analysts who project sales of 18 million to 20 million units this fiscal year.
Nintendo’s official online store in Japan began accepting applications for a fifth lottery sale of the Switch 2 on Wednesday. Many people who lost previous lotteries appear to be applying, but a mood of resignation pervades social media, with some complaining that they keep striking out no matter how many times they apply.
Apple is reportedly developing smart glasses that could compete with the Meta Ray-Bans, but they are not expected to launch for a few more years.
Earlier this week, Apple supply chain analyst Ming-Chi Kuo said that he expects Apple’s smart glasses to enter mass production in the second quarter of 2027. Similar to the Meta Ray-Bans, he said that Apple’s glasses will allow users to take photos, record videos, and listen to music, with both touch and hands-free voice control. Smart glasses of this type are intended to let you capture a moment without needing to take your phone out of your pocket.
Kuo said that Apple plans to offer multiple frame and material options for its smart glasses, but he did not indicate if it will partner with a major glasses brand, such as Ray-Ban or Oakley. Meta’s smart glasses are offered with three different Ray-Ban frames, including the iconic Wayfarer style that has been popular for decades.
Like the Meta Ray-Bans, Kuo said Apple’s first glasses will not have built-in augmented reality displays. However, next-generation Meta Ray-Bans with such displays are expected to launch later this year, so Apple will remain well behind.
Meta’s glasses are equipped with a 12-megapixel camera with 1080p video capture, dual speakers, five microphones, a touchpad on the right arm, and an LED that indicates when video recording is active. Meta says the glasses last up to four hours on a single charge, and up to 36 hours with a fully charged carrying case.
Meta Ray-Bans were released in September 2023, with U.S. pricing starting at $299. In February, Ray-Ban owner EssilorLuxottica announced that it had sold more than two million pairs of the glasses, making them a relative hit in a growing device category.
For now, Apple’s only head-mounted device is the Vision Pro, which starts at a hefty $3,499. It is estimated that Apple has sold only 500,000 to 700,000 units of the Vision Pro, at best, since it launched in February 2024. Kuo believes that Apple’s smart glasses will be far more successful, with shipments reaching 3-5 million units or more in 2027.
The unfortunate part is that 2027 remains quite a while away, with Apple’s competitors in this space innovating at a much faster pace.
If you’re like me, you depend on a lot of systems and services, even within your home LAN. Because I work from home, that’s amplified to the point where I need certain applications available to me that aren’t hosted by a third party, for flexibility, ease of use, reliability and security.
Thankfully, Docker is there to make deploying those apps and services considerably easier; otherwise, I’d wind up having to first deploy a collection of virtual machines (VMs), keep them running and worry about upgrading/managing them efficiently.
Yeah, Docker makes this entire process easier. Even better, I can spin up those apps and services in seconds, instead of having to go the traditional route, which can often take quite a bit longer to deploy.
But what are the apps and services that I depend on for my LAN to keep me productive? Surprise, surprise: I have a list, and here it is.
Nextcloud
Nextcloud has essentially become my Google services for my home LAN. I began using Nextcloud in earnest on my LAN when I started fearing that Google would use my documents within Drive to train its AI. After that thought danced across the synapses of my mind, I pulled those documents and moved them to a Nextcloud deployment on my home network. Problem solved.
But Nextcloud isn’t just a document server; it’s much more. Nextcloud is an entire suite of applications that can be used for just about every need you have for a home office. There’s audio/video chat, calendars, email, whiteboard, AI assistant and agentic AI, file sharing, collaboration, file access control, versioning, machine learning (ML), tons of integrations, monitoring/auditing and so much more.
There’s even an app store, where you can extend the feature set to meet your exact needs.
Nextcloud is free to use and can be deployed with Docker from Docker Hub as simply as:
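A minimal single-container sketch might look like the following. The port mapping and volume name here are my own choices, and a production setup would typically add a separate database container:

```shell
# Pull and run the official Nextcloud image from Docker Hub.
# Port 8080 and the volume name are arbitrary choices — adjust as needed.
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud:latest
```

Once the container is up, pointing a browser at http://localhost:8080 walks you through the initial admin setup.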
Grocy
If you need to manage things in your home, Grocy is the way to go. As you might have suspected from the name, Grocy is all about groceries and meal planning. If you’re as busy as I am, planning meals isn’t always the easiest thing to do, but this handy Docker app makes it considerably easier. Not only can you keep track of the items you have in your kitchen or pantry, but you can also categorize them by location (e.g., fridge, freezer, pantry, garage, basement, etc.) and even keep track of recipes. On top of all this, Grocy even lets you keep track of chores you need to take care of around the house. You can even keep track of batteries, charging cycles and warranties so you can take the guesswork out of when you replaced those batteries in your smoke detectors.
Grocy can be deployed with Docker Compose using a docker-compose file that looks like this:
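Here is a sketch of such a compose file, using the community-maintained LinuxServer.io image. The image tag, port, and paths below are assumptions; check the image’s documentation for current values:

```yaml
# docker-compose.yml — minimal Grocy deployment sketch
services:
  grocy:
    image: lscr.io/linuxserver/grocy:latest
    container_name: grocy
    environment:
      - PUID=1000        # run as your user/group IDs
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./grocy-config:/config   # persists the Grocy database
    ports:
      - "9283:80"                # web UI on http://localhost:9283
    restart: unless-stopped
```

Bring it up with `docker compose up -d` from the directory containing the file.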
Tududi
If you want a task manager that can be accessed from any machine on your network, consider Tududi. Tududi can help manage those tasks and even projects with a well-designed, user-friendly UI. The Tududi feature list includes comments, due dates, project names, status, priorities, hierarchical structure for tasks and projects, smart recurring tasks, areas, notes, tags and Telegram integration.
With the Telegram integration, you get the ability to create tasks directly through Telegram messages, receive daily digests of your tasks and quickly capture ideas and to-dos on the go. You also get smart parent-child relationships for recurring tasks: when a recurring task generates a new instance, each generated task maintains a link to its parent and is displayed as a Recurring Task Instance with inherited settings; users can edit the parent recurrence pattern from the child task, and changes to the parent settings affect all future instances in the series.
Tududi can be installed from Docker Hub with the command:
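A sketch of that command follows. The image name, port, and volume path here are assumptions based on the community image; verify them on the project’s Docker Hub page before running:

```shell
# Run Tududi from Docker Hub; image name, port, and volume path are
# assumptions — check the project's Docker Hub page for current values.
docker run -d \
  --name tududi \
  -p 3002:3002 \
  -v tududi_data:/app/backend/db \
  chrisvel/tududi:latest
```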
Bitwarden
Bitwarden is one of the finest password managers on the market. The app/service enjoys one of the best feature lists of all password managers and uses industry-standard encryption. Even so, there are certain highly sensitive bits of information that I would prefer to retain on my home LAN. For that, I make use of the Bitwarden server, which can be easily deployed via Docker. The Bitwarden server acts almost identically to the standard service, only it’s housed privately, so it doesn’t have to be available beyond your LAN. With that in mind, you could house highly sensitive information there and, as long as your network is secure, you shouldn’t have to worry about anyone stumbling upon your vault or the items contained within.
Bitwarden can be deployed with Docker with the command:
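One way to do that is Bitwarden’s unified self-host image. The env file, port mapping, and data path below are placeholders; the image expects a settings file with database and instance configuration, as described in Bitwarden’s self-hosting documentation:

```shell
# Bitwarden unified self-hosted deployment sketch.
# settings.env must define the database and instance settings the
# image expects — see Bitwarden's self-hosting documentation.
docker run -d \
  --name bitwarden \
  --env-file settings.env \
  -p 8080:8080 \
  -v ./bwdata:/etc/bitwarden \
  bitwarden/self-host:beta
```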
Portainer
If you want to manage all of your containers with the help of a powerful, web-based GUI tool, Portainer is hard to beat. Portainer allows you to see all running containers, view all container logs, get quick console access to containers, deploy code into containers using a simple form and turn your YAML into custom templates for easy reuse. Oh, and you can deploy, stop, run and remove containers. In fact, there’s very little you can’t do with Portainer.
Portainer is considered one of the most popular container management systems in the world, but it does require a bit of work to get up and running. You can check out the official Portainer documentation to get up to speed on the process.
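For reference, the documented Community Edition install boils down to two commands. The volume name and port follow Portainer’s own docs, though you should double-check the current image tag before deploying:

```shell
# Create a named volume for Portainer's data, then run the CE image.
docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

The web UI then answers on https://localhost:9443.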
Although this is a short list of containers I regularly use on my LAN, there’s always room for more. Make sure to check out Docker Hub to see if there’s another app/service you could benefit from.
Jack Wallen is what happens when a Gen Xer mind-melds with present-day snark. Jack is a seeker of truth and a writer of words with a quantum mechanical pencil and a disjointed beat of sound and soul. Although he resides…
Launched in early preview last May, Gemma 3n is now officially available. It targets mobile-first, on-device AI applications, using new techniques designed to increase efficiency and improve performance, such as per-layer embeddings and transformer nesting.
Gemma 3n uses Per-Layer Embeddings (PLE) to reduce the RAM required to run a model while maintaining the same number of total parameters. The technique consists of loading only the core transformer weights into accelerated memory, typically VRAM, while the rest of the parameters are kept on the CPU. Specifically, the 5-billion-parameter variant of the model only requires 2 billion parameters to be loaded into the accelerator; for the 8-billion variant, it’s 4 billion.
Another novel technique is MatFormer (short for Matryoshka Transformer), which allows transformers to be nested so that a larger model, e.g. with 4B parameters, contains a smaller version of itself, e.g. with only 2B parameters. This approach enables what Google calls elastic inference and allows developers to choose either the full model or its faster but fully functional sub-model. MatFormer also supports a Mix-n-Match method to let developers create intermediate-size versions:
This technique allows you to precisely slice the E4B model’s parameters, primarily by adjusting the feed forward network hidden dimension per layer (from 8192 to 16384) and selectively skipping some layers.
In the future, Gemma 3n will fully support elastic inference, enabling dynamic switching between the full model and the sub-model on the fly, depending on the current task and device load.
Another new feature in Gemma 3n aimed at accelerating inference is KV cache sharing, which is designed to accelerate time-to-first-token, a key metric for streaming response applications. Using this technique, which according to Google is particularly efficient with long contexts:
The keys and values of the middle layer from local and global attention are directly shared with all the top layers, delivering a notable 2x improvement on prefill performance compared to Gemma 3 4B.
Gemma 3n also brings native multimodal capabilities, thanks to its audio and video encoders. On the audio front, it enables on-device automatic speech recognition and speech translation.
The encoder generates a token for every 160ms of audio (about 6 tokens per second); these tokens are then integrated as input to the language model, providing a granular representation of the sound context.
Google says it has observed strong results translating between English and Spanish, French, Italian, and Portuguese. While the Gemma 3n audio encoder can process arbitrarily long audio thanks to its streaming architecture, it will be limited to clips of up to 30 seconds at launch.
As a final note about Gemma 3n, it is worth highlighting that it supports resolutions of 256×256, 512×512, and 768×768 pixels and can process up to 60 frames per second on a Google Pixel device. In comparison with Gemma 3, it delivers a 13x speedup with quantization (6.5x without) and has a memory footprint that is four times smaller.
Scientists are striving to discover new semiconductor materials that could boost the efficiency of solar cells and other electronics. But the pace of innovation is bottlenecked by the speed at which researchers can manually measure important material properties.
A fully autonomous robotic system developed by MIT researchers could speed things up.
Their system utilizes a robotic probe to measure an important electrical property known as photoconductance, which is how electrically responsive a material is to the presence of light.
The researchers inject materials-science-domain knowledge from human experts into the machine-learning model that guides the robot’s decision making. This enables the robot to identify the best places to contact a material with the probe to gain the most information about its photoconductance, while a specialized planning procedure finds the fastest way to move between contact points.
During a 24-hour test, the fully autonomous robotic probe took more than 125 unique measurements per hour, with more precision and reliability than other artificial intelligence-based methods.
By dramatically increasing the speed at which scientists can characterize important properties of new semiconductor materials, this method could spur the development of solar panels that produce more electricity.
“I find this paper to be incredibly exciting because it provides a pathway for autonomous, contact-based characterization methods. Not every important property of a material can be measured in a contactless way. If you need to make contact with your sample, you want it to be fast and you want to maximize the amount of information that you gain,” says Tonio Buonassisi, professor of mechanical engineering and senior author of a paper on the autonomous system.
His co-authors include lead author Alexander (Aleks) Siemenn, a graduate student; postdocs Basita Das and Kangyu Ji; and graduate student Fang Sheng. The work appears today in Science Advances.
Making contact
Since 2018, researchers in Buonassisi’s laboratory have been working toward a fully autonomous materials discovery laboratory. They’ve recently focused on discovering new perovskites, which are a class of semiconductor materials used in photovoltaics like solar panels.
In prior work, they developed techniques to rapidly synthesize and print unique combinations of perovskite material. They also designed imaging-based methods to determine some important material properties.
But photoconductance is most accurately characterized by placing a probe onto the material, shining a light, and measuring the electrical response.
“To allow our experimental laboratory to operate as quickly and accurately as possible, we had to come up with a solution that would produce the best measurements while minimizing the time it takes to run the whole procedure,” says Siemenn.
Doing so required the integration of machine learning, robotics, and material science into one autonomous system.
To begin, the robotic system uses its onboard camera to take an image of a slide with perovskite material printed on it.
Then it uses computer vision to cut that image into segments, which are fed into a neural network model that has been specially designed to incorporate domain expertise from chemists and materials scientists.
“These robots can improve the repeatability and precision of our operations, but it is important to still have a human in the loop. If we don’t have a good way to implement the rich knowledge from these chemical experts into our robots, we are not going to be able to discover new materials,” Siemenn adds.
The model uses this domain knowledge to determine the optimal points for the probe to contact based on the shape of the sample and its material composition. These contact points are fed into a path planner that finds the most efficient way for the probe to reach all points.
The adaptability of this machine-learning approach is especially important because the printed samples have unique shapes, from circular drops to jellybean-like structures.
“It is almost like measuring snowflakes — it is difficult to get two that are identical,” Buonassisi says.
Once the path planner finds the shortest path, it sends signals to the robot’s motors, which manipulate the probe and take measurements at each contact point in rapid succession.
Key to the speed of this approach is the self-supervised nature of the neural network model. The model determines optimal contact points directly on a sample image — without the need for labeled training data.
The researchers also accelerated the system by enhancing the path planning procedure. They found that adding a small amount of noise, or randomness, to the algorithm helped it find the shortest path.
“As we progress in this age of autonomous labs, you really do need all three of these expertise — hardware building, software, and an understanding of materials science — coming together into the same team to be able to innovate quickly. And that is part of the secret sauce here,” Buonassisi says.
Rich data, rapid results
Once they had built the system from the ground up, the researchers tested each component. Their results showed that the neural network model found better contact points with less computation time than seven other AI-based methods. In addition, the path planning algorithm consistently found shorter path plans than other methods.
When they put all the pieces together to conduct a 24-hour fully autonomous experiment, the robotic system conducted more than 3,000 unique photoconductance measurements at a rate exceeding 125 per hour.
In addition, the level of detail provided by this precise measurement approach enabled the researchers to identify hotspots with higher photoconductance as well as areas of material degradation.
“Being able to gather such rich data that can be captured at such fast rates, without the need for human guidance, starts to open up doors to be able to discover and develop new high-performance semiconductors, especially for sustainability applications like solar panels,” Siemenn says.
The researchers want to continue building on this robotic system as they strive to create a fully autonomous lab for materials discovery.
This work is supported, in part, by First Solar, Eni through the MIT Energy Initiative, MathWorks, the University of Toronto’s Acceleration Consortium, the U.S. Department of Energy, and the U.S. National Science Foundation.
Amazon Prime Day kicks off next week on Tuesday, July 8, but one of the best early mobile offers I’ve seen is already here. That’s right — you don’t need to be a Prime member to snag these savings, and this smartphone is already under $1,000.
The new Nothing Phone 3 just launched (seriously, our expert is still in London post-launch party), and Amazon already has a tempting preorder offer.
If you preorder the Nothing Phone 3 on Amazon, you can scoop up the 512GB model for $799. That’s a $100 discount on the 16GB + 512GB model, and a chance to get double the storage for the same price as the 12GB + 256GB model, which is also available for preorder for $799. You save $100, get twice the storage, and get the new Nothing Phone 3 when it ships on July 15.
Expert Prakhar Khanna says that Nothing’s new flagship phone is the brand’s most expensive and risky product yet. “At $799, the Nothing Phone 3 no longer undercuts its midrange competitors. Instead, the handset takes on the likes of the iPhone Pro, Pixel Pro, and Galaxy S phones of the world with a striking design, functional AI features, and fine software tuning,” he says.
Khanna got a first look at the new device at Nothing’s launch party in London this week, and he says it’s “poised to make a splash” in a competitive mobile market.
The display on the Nothing Phone 3.
Prakhar Khanna/ZDNET
At 218 grams, Khanna says the new Phone 3 isn’t the heaviest flagship phone. It features flat sides with curved corners and a glass design that he says feels ergonomic to hold. It also supports IP68 dust and water resistance.
The phone features a 6.67-inch LTPS AMOLED display, which goes from 30Hz to 120Hz instead of going all the way down to 1Hz like LTPO panels. Khanna says the latter is more battery efficient, but that the Phone 3’s 5,150mAh battery should last a whole day.
Unlike other flagships, the Nothing Phone 3 is powered by Qualcomm’s Snapdragon 8s Gen 4 chipset, which doesn’t have the new Oryon CPU cores. The Nothing flagship features three 50MP cameras on the back and a 50MP selfie camera on the front. The Nothing Phone 3 also supports TrueLens Engine 4, which is said to process photos 125% faster than the previous Phone 2, improve HDR performance and real-time scene segmentation, and deliver lower noise and smoother motion.
It runs Android 15-based Nothing OS and is promised to get Android 16 later this year. Khanna says it also houses some handy AI features, like Essential Space, which holds everything that you record with the Essential Key, plus new features like Flip to Record, which records and transcribes audio when you put the phone face down.
Intrigued by Nothing’s latest release? Preorder the Nothing Phone 3 for a double storage upgrade and $100 savings at Amazon while you can.
How I rated this deal
This $100 savings offer translates to 11% savings, which isn’t typically a great deal. However, this phone releases July 15 and is brand new to the market, so it’s a first-time discount on a freshly launched product. Plus, when you factor in that the phone is already under $1,000 and the larger, 512GB model is selling for $799, it’s a pretty good bargain. While the 256GB model isn’t on sale, you can take advantage of a double-storage offer by grabbing the 512GB model for the same $799 price, boosting your storage and essentially saving you $100. Due to these factors, I’ve bumped up this first-time offer to a 4/5 Editor’s deal rating.
While many sales events feature deals for a set length of time, individual deals are offered on a limited-time basis and can expire at any time. ZDNET remains committed to finding, sharing, and updating the best offers to help you maximize your savings so you can feel as confident in your purchases as we feel in our recommendations. Our ZDNET team of experts constantly monitors the deals we feature to keep our stories up-to-date. If you missed out on this deal, don’t worry — we’re always sourcing new savings opportunities at ZDNET.com.
We aim to deliver the most accurate advice to help you shop smarter. ZDNET offers 33 years of experience, 30 hands-on product reviewers, and 10,000 square feet of lab space to ensure we bring you the best of tech.
In 2025, we refined our approach to deals, developing a measurable system for sharing savings with readers like you. Our editor’s deal rating badges are affixed to most of our deal content, making it easy to interpret our expertise to help you make the best purchase decision.
At the core of this approach is a percentage-off-based system to classify savings offered on top-tech products, combined with a sliding-scale system based on our team members’ expertise and several factors like frequency, brand or product recognition, and more. The result? Hand-crafted deals chosen specifically for ZDNET readers like you, fully backed by our experts.
Microsoft has begun rolling out a long-awaited feature to Copilot for Windows 11 and Windows 10, allowing the AI assistant to search for files stored locally or synced via OneDrive. Previously, this feature was only available to Windows Insiders, but is now available to all Copilot users, reports Windows Latest.
Copilot’s File Search feature uses Windows Search indexing to find documents by name, type, or date. Microsoft Office formats (DOCX, XLSX, PPTX), PDF, text files, and more are supported. However, developer-specific extensions, such as .dart, are not currently supported.
By default, Copilot only has access to the Documents and Downloads folders, but the search scope can be customized in Windows permissions settings. To activate the feature, users need to enable it in Copilot settings. Microsoft deliberately did not enable it by default for privacy reasons.
Microsoft says the feature will be rolling out gradually over several weeks and does not require a Copilot Pro subscription. The company sees local file search as a natural extension of Copilot’s desktop integration, helping users quickly find and interact with documents without leaving the chat interface.
Xiaomi has introduced new smart glasses to the global market. The eyewear is called the Smart Audio Glasses and is currently available on AliExpress for $86.54. While that’s an affordable price tag, the feature set is not as extensive as some of the other options in the market.
To be specific, as the name somewhat suggests, the wearable is basically a pair of glasses with the functionality of Bluetooth earbuds. Of course, given the design, these smart glasses can be considered an alternative to open-ear headphones, and as Xiaomi highlights, there are sound leakage protections for enhanced privacy in public spaces.
The company also notes that the smart glasses feature an SLS0820 ultrasonic speaker and sound cavity structure algorithm. This combination allows the eyewear to offer “optimized sound quality” and deliver “rich, deep bass” along with “vibrant, clear treble.” The wearable has an echo-cancelling algorithm as well, which is said to lower distortion and ensure clear audio output.
Another highlight of the Xiaomi Smart Audio Glasses is the battery life. The company claims that in standby mode, the runtime can reach 11 days, while they are said to offer up to 10 hours of battery life when listening to audio. Regarding charging the built-in battery, the wearable relies on a magnetic pogo pin charger, similar to what most smartwatches come with.
Design-wise, Xiaomi highlights that the smart glasses have a frame that weighs 40 grams. With a comfort-forward temple curve and adjustable nose pads, the wearable is said to offer all-day comfort. Other highlights include a detachable hinge design for interchanging frames, an IP54 water and dust resistance rating, and touch controls.
Abid Ahsan Shanto – Senior Tech Writer – 1767 articles published on Notebookcheck since 2023
Abid’s journey as a technophile began when he first assembled his PC. Since then, his insatiable curiosity has driven him to delve into every aspect of this rapidly evolving technological landscape. And as a tech reporter, he prioritizes transparency, accuracy, and unbiasedness.
Tower Defense is an immensely popular genre on Roblox, and there are several amazing titles like Anime Vanguards that you can play. In the game, you use various anime-based characters to defend your base from waves of enemies. Beyond playing regularly to obtain items in Anime Vanguards, one of the easiest methods is redeeming the codes released by the developers. These codes are simple to redeem and instantly provide the associated rewards. Here is a list of all the active Anime Vanguards codes that you can use for the freebies.
Working Anime Vanguards codes (July 2025)
Use the codes below to get free rewards (Image via Roblox)
The following are the active Anime Vanguards codes that you can use to get your hands on free rewards inside the game:
Kat – A cat animation plays
PysephBirthday – 1x Flower
Spring – 1500x Flowers + 1500x Gems + 5000x Gold
Sorry4Bugs – 40x Rerolls + 20x Stat Chips
Late – 5x Phoenix Shards + 5x Elemental Shards
You will be able to use them directly inside the game to get the aforementioned rewards. Keep in mind that these codes tend to expire after a set period, so you must redeem them before they do. All the latest codes for Anime Vanguards can be found by following the game and the developers on their respective social media handles.
Expired Anime Vanguards codes
Here are the Anime Vanguards codes that have expired and no longer work:
DELAY
FateUpdate
500MVISITS
enumaelish
SEASONOFLOVE
YT200K
LATEUPDATESORRY
100MVISITS
HALLOWEENWASLASTMONTH
10KLIKES
HeavenOrHell
UntilThenIsTheBestGame
CORRUPTION
600KLIKES
EXTENDEDMAINT
AURA
WECURSESHAVENOLIMITS
ALSISBEST
SLAYER
DOUBLEEVOLUTION
1MILLION
DELAYGUARDS
BYEDIVALO
800KLIKES
300KLIKES
10MVISITS
HAPPYNEWYEAR
ROST10K
AV50MIL
STEELBALL
THXFOR1MLIKES
LordShadow
200KLIKES
PvP
25MVISITS
PART7
300KPLAYERS
SALTERBOSS
THESYSTEM
OneInAVanguardillion
SORRY4SHUTDOWN
WinterUpdateSoon
RELEASE
100KLIKES
WHYISTHISNOTWORKING????????
100kSubs
2MLIKES
TIKTOK50K
ODYSSEYFIX
400KLIKES
23RDPRESIDENT
TURBONICLEGRANDMA
70MVISITS
NewLobby!
STANDPROUD
UPDATE1
AV500KLIKES
LotsofPresents!
Update3!
SHIBUYA
Steps to use Anime Vanguards codes
The process of redeeming Anime Vanguards codes is quite simple and only takes a few minutes. Listed below are the steps that you can follow:
Get started by opening Anime Vanguards on your device.
Once the game is open, click on the “Codes” icon. This is located on the right side of the screen.
A dialog box will appear, and you must enter the code in the text field.
Finally, click on the “Redeem Code” button. The redemption process will be complete.
You can then use the rewards to your benefit in Anime Vanguards.
Users still clinging on to PowerShell 2.0 just received notice to quit as the command-line tool is officially leaving Windows.
The confirmation came in a Windows Insider update.
The move away from PowerShell 2.0 is a long time coming; Microsoft has for years encouraged users to move to later versions. Version 5.1 is preinstalled on most modern editions of Windows, and there is a newer, cross-platform version in the form of PowerShell 7.x.
However, version 2 lingered on in the name of backward compatibility, despite the fact it was deprecated in 2017.
PowerShell is a command line tool with a rich scripting language. Admins could use command.com to scratch that Command Line Interface (CLI) itch in the early days of Windows and MS-DOS, and Windows Script Host and a variety of command line interpreters were also available, but it wasn’t until the debut of PowerShell that Windows administrators could properly flex their scripting muscles.
PowerShell 2.0 first arrived as a component in Windows 7 (“where it was not an optional feature”, according to Microsoft). It was also shipped to other versions of Windows, including Windows Server 2008 and 2003, Vista, and even XP.
Even when later versions superseded it, PowerShell 2.0 remained as an optional side-by-side component.
However, in 2017, Microsoft announced the application would be deprecated. Not removed, but no longer actively developed. At the time, it noted some of the company’s first-party products, such as some versions of SQL Server, still used PowerShell 2.0 “under the hood” and said “Windows PowerShell 2.0 will remain a part of Windows 10 and Windows Server 2016, and we have no plans to remove it until those dependencies are mitigated.”
Many years and one pandemic later, PowerShell 2.0 has finally come to the end of the road, at least as far as Windows 11 is concerned. While it is removed from most current Insider Preview builds, Microsoft said, “More information will be shared in the coming months on the removal of Windows PowerShell 2.0 in an upcoming update for Windows 11.”
PowerShell 2.0 has also long been deprecated for Windows Server, with administrators encouraged to move to a newer version. Microsoft has not yet provided a timeline for its removal from its server operating system. ®