South Korea’s FADU has announced it has signed deals to supply its next-generation SSD controllers to two of the world’s largest cloud service operators.
The Korea Herald reports that at a press conference in Seoul marking the company’s 10th anniversary, CEO Lee Ji-hyo revealed the news, saying, “We have been confirmed for mass production supply for two of the four global hyperscalers.”
“We are also in talks with another hyperscaler, and we expect to finalize that deal by the end of this year,” he added. “Within two to three years, we are confident we will be supplying to all four major companies.”
The four hyperscalers in question are of course AWS, Google, Microsoft and Meta – and while FADU has not said which two of these are under contract, Meta is widely viewed as a strong candidate to be one of them after appearing alongside the company at the recent Future of Memory and Storage 2025 conference.
At that event, FADU unveiled its PCIe 6.0 controller, codenamed Sierra, which supports capacities up to 512TB and sequential speeds of 28GB/s.
The product delivers random read performance of 6.9 million IOPS while drawing less than 9W.
ChosunBiz reported at the time that Meta engineer Ross Stenfort shared the keynote stage with Lee as FADU introduced the controller and detailed new energy monitoring features developed with industry partners to reduce costs in large-scale data centers.
Lee took the opportunity during the keynote to also underline the company’s long-term vision.
“Since our establishment in 2015, FADU has dedicated the past 10 years to technology development, striving to create the fastest and most innovative SSD controllers in the world targeting the global market, and we have validated our technological prowess with global clients,” he said.
“We will lead the storage market with SSDs that offer greater capacity, faster performance, and higher efficiency as demanded by the AI era.”
FADU shipped its Gen5 controller in late 2024 and expects its Gen6 line to launch in 2026.
The Lenovo Yoga Tab Plus may not boast any groundbreaking features, or even the latest silicon, but it delivers as a well-rounded productivity and entertainment machine at a fair asking price. Thanks to an ongoing sale at Lenovo, the Yoga Tab Plus can now be purchased at an attractive all-time low of just $444.59, courtesy of a $325 discount plus a couple of coupons.
The Yoga Tab Plus is a fairly large tablet at 12.7 inches, with a 2944 x 1840 resolution that works out to a decently sharp pixel density of 273 PPI. The IPS panel refreshes at 144 Hz, which will surely elate gamers and chronic scrollers. However, the benefits of an OLED panel are missing, such as true blacks, instant response times, and impressive color accuracy. Our review of the tablet noted 73% coverage of the DCI-P3 color space, which is passable but nothing to write home about.
That said, the majority of folks with casual gaming and content consumption needs will hardly find room for complaint with the Lenovo Yoga Tab Plus, considering the affordable pricing. The same goes for the tablet’s performance, which is decent courtesy of the Snapdragon 8 Gen 3 SoC but can’t really keep up with higher-end chips. Lenovo offers four years of OS upgrades, which is disappointing given that Samsung offers three more.
Of course, the Yoga Tab Plus is a productivity-focused machine, like many other Lenovo offerings, and ships with a keyboard with a touchpad, a stylus, and a cover with a kickstand. The list of bundled accessories is undoubtedly generous, something we were quite happy with in our detailed review.
To take full advantage of the ongoing sale, be sure to apply the coupons “EXTRAFIVE” and “LENOVOLIVE10” during checkout.
This week a new Ubuntu X1E Concept ISO was published for Ubuntu 25.04 ARM64 with the latest Qualcomm Snapdragon X Elite/Plus laptop optimizations. The new ISO moves to the Linux 6.17 kernel for the latest upstream kernel bits. Additionally, it is finally working again on the Acer Swift 14 AI laptop that I have used for my Snapdragon X Elite Linux testing.
The plucky-desktop-arm64+x1e-20250827.iso image is now available for those wanting the latest Ubuntu Linux experience catering to the Snapdragon X Elite laptops. The ISO is available for download from this directory.
Excitingly, this new ISO is working properly on the Acer Swift 14 AI laptop I’ve used for my Linux testing, without any Device Tree or boot issues post-install. It had been months since this “supported” laptop last worked nicely with the Ubuntu X1E Concept images – I believe the last time was when installing the Ubuntu 24.10 X1E Concept ISO and then manually upgrading to 25.04 packages. Also making this new image exciting is its transition to the Linux 6.17 development kernel for the very latest upstream support for the Qualcomm Snapdragon X hardware.
Post-install, you still need to install and run qcom-firmware-extract on laptops without upstream linux-firmware support (most models out there, unfortunately); the tool extracts the needed firmware files from your Windows on ARM partition.
In any event, now that I have a working laptop, I am preparing some fresh Ubuntu Snapdragon X Elite benchmarks. Stay tuned for a fresh comparison against AMD Ryzen and Intel Core laptops.
I am often heard complaining that similar genres seem to surge in popularity all at once. It certainly happened with cozy farming simulators and is currently happening with extraction shooters. After accepting my Life Below appointment at gamescom 2025, I was bombarded with city builders, too. It seems that as soon as I am looking for something, it pops up consistently. With that said, I am always happy to see a studio do something different with an oversaturated genre, and developer Megapop is doing just that with Life Below.
I was able to sit down with Game Director Lise Hagen Lie and have her walk me through the game in a quick 30-minute demo covering various stages of the game, from the opening moments to later stretches where players are more established. Though it doesn’t look like your typical city builder, Life Below is just that, only for marine life. You’re not building typical housing and commercial buildings. Instead, you’re building an entire ecosystem, the absolute base of life as we know it.
Sitting down with Hagen Lie, I could see her passion behind the game. Not only does she enjoy gaming and making games, but she also has a passion for the ocean and the life within it. Like many women my age, she went through a marine biologist stage, only she stuck with it and channelled it into a game. The result is a gamified learning experience, filled with both facts and fiction to make this underwater world a bit more exciting.
Life Below was created by a lover of all things underwater, but the team went a step further. Rather than relying on the developers’ knowledge alone, they consulted bona fide marine biologists to understand how these ecosystems really work and bring as much accuracy as they could to the game. Even creatures that are not biologically accurate are based on real ones, with made-up names paying homage to them. The team researched which creatures live alongside one another, which are natural enemies, and how other elements, like coral, affect the ocean floor.
There is, indeed, a story in Life Below, and the Steam page says it better than I could: “The ocean is dying. Coral reefs are vanishing, and ecosystems are breaking down. By the power of the mysterious reef heart, you must restore balance, using the ocean floor itself as the foundation to build vibrant havens for sealife.” A bleak world with a dash of hope, and one that feels a bit too real when you take a moment to look around.
Life Below has a lot going on under the surface…get it? You don’t just place the fish and creatures you want. You place their habitats, their food, and lures to call them to the world you’ve created. If you don’t have what they need, they simply won’t stay. There is more to it than that, though. Like the ocean, or even an aquarium you have at home, you have to control the water’s temperature and pH levels, too.
Some items heat things up, some cool them down, and later, you will have access to crafting items that can help counteract this. There will also be human-made obstacles, such as oil spills and garbage, that you have to take on. Do I see a 2026 Games for Impact Award for Life Below in the future?
The crafting system in Life Below seems pretty intense, and I am looking forward to seeing all the different lures, creatures and habitats I can build or unlock. Before I got too far into the game, I was a little disappointed thinking about the limits that would be put on Life Below, since you are building on the ocean floor and the game is meant to be semi-realistic. That means a lot of the ocean’s favourite creatures would be left out, since, you know, dolphins don’t scrounge around the bottom of the ocean.
Luckily, Megapop considered this and did not want any of its favourite aquatic friends left out. The studio created a visitor system that allows creatures such as dolphins and turtles (among others, though those were the examples discussed) to enter your habitat and spend some time there. Of course, summoning animals without reason would feel strange, so players can use lures to bring them in for specific purposes.
For example, during my demo, there was a significant jellyfish problem disrupting the ecosystem. I used lures to attract turtles, which are natural predators of jellyfish. The turtles arrived, snacked on the jellyfish, and were accompanied by a pop of red in the water. I was told that animation is being updated, but for now, it struck the right mix of dark and fun. In total, more than 40 wildlife species can be lured to your underwater sanctuary, and I am eager to see how they interact with one another.
Multiple biomes were highlighted, including a standard version, a colder variant and a hotter option, with more on the way. When I asked about the possibility of exploring dark, deep-sea monsters, nothing was confirmed or denied, though there was mention of an oceanic chasm. I am curious to see how that develops. Overall, Megapop appears committed to keeping Life Below fresh and engaging.
Don’t let the beautiful hand-drawn graphics fool you—Life Below is a city…biodiversity…builder that requires planning and strategy. I’m excited to try it out during a longer playthrough, and am even looking forward to learning more about our oceans.
Magnetic compasses have long been reliable tools for navigation, especially before the advent of satellites and digital GPS. These timeless instruments align with Earth’s magnetic field, guiding sailors, explorers, and adventurers for centuries. But what happens when a compass itself goes haywire?
There are strange and scientifically intriguing places on Earth where magnetic compasses become unreliable, spinning unpredictably or pointing in entirely the wrong direction. These failures aren’t caused by faulty instruments but by rare and extreme natural conditions such as magnetic anomalies, massive iron deposits, volcanic activity, and even shifts in Earth’s magnetic field. Such disruptions affect how the needle aligns with magnetic north, sometimes making navigation dangerous or even impossible using traditional means.
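As a rough illustration of how any shift in the local field translates into navigation error, here is a toy Python sketch (not a navigational tool) of the standard magnetic-to-true bearing correction. The declination value stands in for whatever the local field, anomaly included, actually produces; the function name is ours, not from any navigation library.

```python
# Toy illustration: converting a magnetic compass bearing to a true bearing
# using local magnetic declination (east of true north is positive).
# In anomaly zones the effective deflection can swing wildly, which is
# exactly why compass readings there become unreliable.

def true_bearing(magnetic_bearing: float, declination_east: float) -> float:
    """Apply the standard correction: true = magnetic + east declination."""
    return (magnetic_bearing + declination_east) % 360

# With a typical declination of +10 degrees, a reading of 355 means true 5.
print(true_bearing(355, 10))  # 5.0

# Near a strong local anomaly the needle might be deflected by, say,
# 40 degrees, turning the same reading into true 35 instead.
print(true_bearing(355, 40))  # 35.0
```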
While the modern world largely relies on GPS and satellite-based navigation, understanding why compasses fail in certain regions offers fascinating and important insights into Earth’s dynamic inner workings.
Here are five such locations that defy compass logic:
The Google Pixel range of phones is known for its industry-leading cameras, AI-powered tools, and deep integration with Google’s services. The latest version is one of the most refined and advanced phones available right now, and you can get yourself a free Google Pixel 10 when you activate a new line at T-Mobile.
Ordinarily, flagship phones cost the earth if you’re buying upfront, but this deal changes all that. The 24-month plan you sign up for at T-Mobile must be Essentials or above, but it still makes owning the phone eminently affordable. This is one of the best phone deals around at the moment, and I’d suggest moving quickly if you’re due an upgrade.
Our Google Pixel 10 review landed only a few days ago, and we called it “the closest Google has come to hitting the iPhone bullseye”. What Apple has thus far failed to do, Google has certainly delivered on.
The Pixel 10 phone features the latest Google AI, including the new Magic Cue feature. This innovative feature uses all the information Google has on you to provide handy links and ‘cues’ to information you might need. This includes dinner reservation emails, links to Google Map locations, and more. The impressive part of this is that these ‘cues’ appear exactly when you need them.
As well as AI, the Pixel 10 phone boasts a set of cameras that “take photos that look almost as good as those from the Pro Pixel phones”. Not only is that seriously impressive, but it also means that even more users will be fine without the Pro upgrade.
Our guides to the best Android phones and best iPhones will give you a rundown of all our favorite choices. We’ve categorized them to make it easier to find the perfect choice no matter your budget or feature preferences.
Amanara took the leap into High Goal polo without knowing if there would be water at the bottom — and there certainly was. Nicky Sen’s organization, which usually competes in the Medium Goal and was even crowned champion of both the Silver and Gold Cups last year, decided to step up to the top category for the very first time.
With a youthful team, Amanara first captured the Silver Cup, defeating Black Bears in the final, and this past Saturday lifted the Gold Cup after overcoming Dos Lunas.
Following the team’s great comeback —they were four goals down at the end of the second chukker— Lorenzo Chavanne spoke with Pololine about the secret behind Amanara’s success this season, the organization’s upcoming plans, and his participation in both the Jockey Club Open and the Qualification for the Argentine Open.
👉 Watch the full interview on Pololine.tv
There are many ways to test the intelligence of an artificial intelligence — conversational fluidity, reading comprehension or mind-bendingly difficult physics. But some of the tests that are most likely to stump AIs are ones that humans find relatively easy, even entertaining. Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean that they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI can take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.
One test designed to evaluate an AI’s ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid. Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit program that administers the test — now an industry benchmark against which all major AI models are measured. The organization also develops new tests and has routinely used two of them (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents — and is based on making them play video games.
Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI and why they are often challenging for deep-learning models even though many humans tend to find them relatively easy. Links to try the tests are at the end of the article.
[An edited transcript of the interview follows.]
Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can beat Go. But those models cannot generalize to new domains; they can’t go and learn English. So what François Chollet made was a benchmark called ARC-AGI — it teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We’re basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model’s ability to learn within a narrow domain. But our claim is that it does not measure AGI because it’s still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.
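To make the “teach a mini skill, then ask for it back” format concrete, here is a minimal Python sketch of an ARC-style task: a few demonstration grid pairs share a hidden transformation, and a solver searches a tiny hypothesis space for a rule consistent with all of them. The grids, the candidate rules, and every name here are invented for illustration; this is not the foundation’s actual task data or code, and real ARC tasks are far harder than this toy hypothesis search suggests.

```python
# A toy ARC-style task: infer the hidden rule from demonstration pairs,
# then apply it to a fresh test input. Grids are small 2D arrays of
# color indices (0-9), loosely mirroring ARC's grid format.

Grid = list[list[int]]

# Two demonstration pairs whose hidden rule is "mirror left-to-right".
train_pairs: list[tuple[Grid, Grid]] = [
    ([[1, 0, 0],
      [0, 2, 0]],
     [[0, 0, 1],
      [0, 2, 0]]),
    ([[3, 3, 0],
      [0, 0, 4]],
     [[0, 3, 3],
      [4, 0, 0]]),
]

# A tiny, hand-picked hypothesis space of candidate transformations.
candidates = {
    "identity":  lambda g: [row[:] for row in g],
    "mirror_lr": lambda g: [row[::-1] for row in g],
    "mirror_ud": lambda g: g[::-1],
}

def infer_rule(pairs):
    """Return the first candidate consistent with every demonstration pair."""
    for name, fn in candidates.items():
        if all(fn(x) == y for x, y in pairs):
            return name, fn
    raise ValueError("no candidate rule fits the demonstrations")

name, rule = infer_rule(train_pairs)
test_input = [[0, 5, 0],
              [6, 0, 0]]
print(name, "->", rule(test_input))  # mirror_lr -> [[0, 5, 0], [0, 0, 6]]
```

The point of ARC is precisely that a fixed menu of rules like this cannot cover the benchmark: each task demands composing a genuinely new rule from very few examples, which is the generalization ability being measured.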
There are two ways I look at it. The first is more tech-forward, which is ‘Can an artificial system match the learning efficiency of a human?’ Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don’t really have training data, other than a few evolutionary priors. So we learn how to speak English, we learn how to drive a car, and we learn how to ride a bike — all these things outside our training data. That’s called generalization. When you can do things outside of what you’ve been trained on, we define that as intelligence. Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot — that’s when we have AGI. That’s an observational definition. The flip side is also true: as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, we do not have AGI. One of the key factors about François Chollet’s benchmark… is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that’s so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that’s spiky intelligence. It still doesn’t have the generalization power of a human. And that’s what this benchmark shows.
One of the things that differentiates us is that we require our benchmark to be solvable by humans. That’s in opposition to other benchmarks, where they do “Ph.D.-plus-plus” problems. I don’t need to be told that AI is smarter than me — I already know that OpenAI’s o3 can do a lot of things better than me, but it doesn’t have a human’s power to generalize. That’s what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people contain the correct answers to all the questions on ARC-AGI-2.
There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and with maybe one or two examples, they can pick up the mini skill or transformation and they can go and do it. The algorithm that’s running in a human’s head is orders of magnitude better and more efficient than what we’re seeing with AI right now.
So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn’t touch it at all. It wasn’t even getting close. Then the reasoning models that OpenAI released in 2024 started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of getting solved within five seconds, humans may be able to do it in a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it’s the same concept, more or less…. We are now launching a developer preview for ARC-AGI-3, and that’s completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.
If you think about everyday life, it’s rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer. There’s a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we’re making 100 novel video games that we will use to test humans to make sure that humans can do them because that’s the basis for our benchmark. And then we’re going to drop AIs into these video games and see if they can understand this environment that they’ve never seen beforehand. To date, with our internal testing, we haven’t had a single AI be able to beat even one level of one of the games.
Each “environment,” or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.
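To illustrate how such a stateful benchmark differs from a single question-and-answer exchange, here is a minimal, hypothetical sketch of the kind of reset/step loop an interactive, level-based environment could expose to an agent. The class, the methods, and the toy game are all invented for illustration and are not ARC-AGI-3’s actual API.

```python
# A toy stateful environment: the agent starts at cell 0 of a 1D track and
# must reach the final cell within a move budget. Unlike a stateless Q&A
# benchmark, the outcome of each action depends on everything done so far.

import random

class ToyLevel:
    def __init__(self, size: int = 5, max_steps: int = 12):
        self.size, self.max_steps = size, max_steps

    def reset(self) -> int:
        self.pos, self.steps = 0, 0
        return self.pos  # observation: current cell

    def step(self, action: str):
        self.steps += 1
        if action == "right":
            self.pos = min(self.pos + 1, self.size - 1)
        elif action == "left":
            self.pos = max(self.pos - 1, 0)
        done = self.pos == self.size - 1 or self.steps >= self.max_steps
        won = self.pos == self.size - 1
        return self.pos, done, won

# A random agent, standing in for an AI that must explore, plan and
# intuit the goal with no prior knowledge of the environment.
env = ToyLevel()
obs, done, won = env.reset(), False, False
while not done:
    obs, done, won = env.step(random.choice(["left", "right"]))
print("level cleared" if won else "out of moves")
```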
Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations. Popular games have extensive training data publicly available, lack standardized performance evaluation metrics and permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games — unintentionally embedding their own insights into the solutions.
Try ARC-AGI-1, ARC-AGI-2 and ARC-AGI-3.
This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved.
We may not have a date for Stardew Valley‘s next major update, but we have confirmation that it’s happening. Eric Barone, the developer behind the hit farming sim, announced that there will be a 1.7 update during the Stardew Valley Symphony of Seasons concert in Seattle, later confirming the news with a post on X. Barone, better known as ConcernedApe, didn’t reveal a release date, nor any teasers about content.
Given that it’s a numbered update, we’re expecting more than just a patch, likely something similar to the fresh content added in the 1.6 update. The previous update, released in March of last year, delivered a ton of free content, including the Meadowlands Farm, a new three-day festival, more crops and novel NPC interactions.
Fans will always welcome more content for Stardew Valley, but some expressed concern about how this will impact the release timeline for Barone’s upcoming title, Haunted Chocolatier. The developer revealed the standalone title in 2021 and told PC Gamer in April of this year that he wouldn’t work on any more Stardew Valley updates until he’s done with Haunted Chocolatier. To offer some reassurance, Barone replied on X that the 1.7 update “will not hinder Haunted Chocolatier development.”