Author: admin

  • How to watch Loh Kean Yew live in badminton action in Paris — full schedule

    Loh Kean Yew’s schedule and draw at 2025 World Championships

    In the opening round, Loh will take on unfancied Mauritian Georges Julien Paul, ranked 234th in the world. This match will take place on either 25 or 26 August 2025.

    If, as expected, Loh gets past his first-round encounter, a last-32 matchup awaits against either England’s Harry Huang or Finnish hope Kalle Koljonen. That clash will take place on either 26 or 27 August.

    If Loh reaches the round of 16, the other seed in this eighth of the draw is a man who needs no introduction to Loh: Japan’s Naraoka Kodai. Loh has Naraoka’s number, having won all six of their career meetings, although they have yet to face each other this year. That match would take place on 28 August.

    Quarter-finals will be played on 29 August with potential opponents including Li Shifeng, Lu Guangzu, or Lee Zii Jia. The semi-finals occur on 30 August and the final on 31 August.

  • Chikitha Taniparthi wins U21 women’s compound gold medal

    India’s Chikitha Taniparthi etched her name in history by winning the Under-21 women’s compound crown at the World Archery Youth Championships 2025 in Winnipeg, Canada, on Saturday.

    It was India’s first-ever archery gold medal in this category at the Youth World Championships.

    The 20-year-old Indian archer was in imperious form throughout the knockout stages.

    She eased past Spain’s Paula Diaz Morillas 142-133 in the semi-finals before holding her nerve in the gold medal clash against Park Yerin of the Republic of Korea, prevailing 142-136.

    In the quarter-finals, she had defeated compatriot and 2023 senior Asian champion Parneet Kaur 146-143.

    India also produced a strong showing in the U18 compound events. In an all-Indian women’s semi-final, Prithika Pradeep edged out Surya Hamsini Madala 130-128, before going down narrowly 143-140 to the USA’s O’Donohue Savannah in the final to settle for silver.

    Surya missed out on the podium after losing the bronze playoff 139-127 to the Netherlands’ Fenna Stallen.

    Aditi Swami won the women’s U18 crown at the previous edition in Limerick, Ireland, two years ago. Now 19, she did not compete in the U21 division at the 2025 edition.

  • Bubble or not, the AI backlash is validating one critic’s warnings

    First it was the release of GPT-5 that OpenAI “totally screwed up,” according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported on comments by the OpenAI CEO. Then it was the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

    A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given that the index is increasingly dominated by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets: the S&P 500 snapped a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, where even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

    Gary Marcus has been warning of the limits of large language models (LLMs) since 2019, and of a potential bubble and problematic economics since 2023. His words carry particular weight: the cognitive scientist turned AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. That company was acquired by Uber in 2016, and Marcus left shortly afterward, working at other AI startups while offering vocal criticism of what he sees as dead ends in the AI space.

    Still, Marcus doesn’t see himself as a “Cassandra,” and he’s not trying to be, he told Fortune in an interview. In Greek tragedy, Cassandra uttered accurate prophecies but wasn’t believed until it was too late. “I see myself as a realist and as someone who foresaw the problems and was correct about them.”

    Marcus attributes the wobble in markets to GPT-5 above all. It’s not a failure, he said, but it’s “underwhelming,” a “disappointment,” and that’s “really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” he added, referencing artificial general intelligence, a hypothetical AI with human-like reasoning abilities. “It’s not a terrible model, it’s not like it’s bad,” he said, but “it’s not the quantum leap that a lot of people were led to expect.”

    Marcus said this shouldn’t be news to anyone paying attention, as he argued in 2022 that “deep learning is hitting a wall.” To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will deflate. He told Fortune that “crowd psychology” is definitely at work, and that he thinks every day about the John Maynard Keynes line, “The market can stay irrational longer than you can stay solvent,” and about Looney Tunes’ Wile E. Coyote chasing the Road Runner off the edge of a cliff and hanging in midair before falling to Earth.

    “That’s what I feel like,” Marcus says. “We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing.”

    Building warning signs

    The bubble talk began heating up in July, when Apollo Global Management’s chief economist, Torsten Slok, who is widely read and influential on Wall Street, issued a striking calculation while stopping short of declaring a bubble. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”

    In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investment contributed as much to GDP growth over the first half of 2025 as consumer spending did, which is notable since consumer spending makes up 70% of GDP. (The Wall Street Journal’s Christopher Mims had offered the calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times op-ed arguing that “it is uncertain how soon artificial general intelligence can be achieved.”
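    Slok’s observation can be illustrated with a stylized calculation: a sector’s contribution to GDP growth is roughly its GDP share times its own growth rate, so a small data center sector can only match consumer spending’s contribution by growing dramatically faster. The sector shares and growth rates below (other than the 70% consumer-spending figure from the article) are illustrative assumptions, not numbers from Slok’s note.

```python
# Stylized illustration of why equal growth contributions from a small
# sector and a 70%-of-GDP sector imply very different growth rates.
# All figures except the 70% consumer share are illustrative assumptions.
consumer_share = 0.70      # consumer spending ~= 70% of GDP (from the article)
datacenter_share = 0.02    # assumed small GDP share for data center investment
consumer_growth = 0.02     # assumed 2% growth in consumer spending

# Contribution to GDP growth ~= sector share x sector growth rate
consumer_contribution = consumer_share * consumer_growth

# Growth rate data center investment would need to match that contribution
required_dc_growth = consumer_contribution / datacenter_share

print(f"Consumer contribution: {consumer_contribution:.1%} of GDP growth")
print(f"Required data center growth rate: {required_dc_growth:.0%}")
```

    Under these toy numbers, a sector 35 times smaller than consumer spending would need to grow at roughly 70% a year to contribute equally, which is why the finding reads as a warning sign.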

    This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the “New Washington Consensus,” predicated in part on AGI being “right around the corner.” On his Substack, Farrell said Schmidt’s op-ed shows that his prior set of assumptions is “visibly crumbling away,” while caveating that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell’s title for that post: “The twilight of tech unilateralism.” He concluded: “If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to.”

    Finally, the vibe shifted in the summer of 2025 toward a mounting AI backlash. Darrell West warned in a Brookings commentary in May that the tide of both public and scientific opinion would soon turn against AI’s masters of the universe. Soon after, Fast Company predicted the summer would be full of “AI slop.” By early August, Axios had identified the slang “clunker” being applied widely to AI mishaps, particularly to customer service gone awry.

    History says: short-term pain, long-term gain

    John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace themselves for a crash but to prepare for a future “golden age” of AI nonetheless. He highlights the data center buildout: a staggering $750 billion investment from Big Tech over 2024 and 2025, part of a global rollout projected to hit $3 trillion by 2029. Thornhill turns to financial historians for some comfort and some perspective. Over and over, the historical record shows that this type of frenzied investment triggers bubbles, dramatic crashes, and creative destruction, but that durable value is eventually realized.

    He notes that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She identified AI as the fifth technological revolution to follow the pattern begun in the late 18th century, a sequence that has left the modern economy with railroad infrastructure and personal computers, among other things. Each revolution had a bubble and a crash at some point. Edward Chancellor, whom Thornhill didn’t cite in this particular column, documented similar patterns in his classic Devil Take the Hindmost, a book notable not just for its discussion of bubbles but for predicting the dotcom bubble before it happened.

    Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, when he argued that a key bubble moment had been passed: an unusually large number of market participants saying that prices are too high, but insisting that they’re likely to rise further.

    Wall Street banks are largely not calling this a bubble. Morgan Stanley recently released a note projecting huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, concurred with the caution flagged in the news-making MIT research. It warned investors to expect a period of “capex indigestion” accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI’s ChatGPT, Alphabet’s Gemini, and AI-powered CRM systems.

    Bank of America Research wrote a note in early August, before the launch of GPT-5, seeing AI as part of a worker productivity “sea change” that will drive an ongoing “innovation premium” for S&P 500 firms. Head of U.S. Equity Strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbo-charge this. “I don’t think it’s necessarily a bubble in the S&P 500,” she told Fortune in an interview, before adding, “I think there are other areas where it’s becoming a little bit bubble-like.” 

    Subramanian mentioned smaller companies and potentially private lending as areas “that potentially have re-rated too aggressively.” She’s also concerned about the risk of companies diving into data centers to too great an extent, noting that this represents a shift back toward an asset-heavier approach, instead of the asset-light approach that increasingly distinguishes top performers in the U.S. economy.

    “I mean, this is new,” she said. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers,” adding that she sees it as potentially marking the end of their asset-light, high-margin existence and basically transforming them into “very asset-intensive and more manufacturing-like than they used to be.” From her perspective, that warrants a lower multiple in the stock market. When asked if that is tantamount to a bubble, if not a correction, she said “it’s starting to happen in places,” and she agrees with the comparison to the railroad boom.

    The math and the ghost in the machine

    Gary Marcus also cited basic math as a reason for concern, with nearly 500 AI unicorns being valued at a combined $2.7 trillion. “That just doesn’t make sense relative to how much revenue is coming [in],” he said. Marcus cited OpenAI reporting $1 billion in revenue in July while still not being profitable. Speculating that OpenAI holds roughly half the AI market, he offered a rough calculation that the sector is generating about $25 billion a year in revenue, “which is not nothing, but it costs a lot of money to do this, and there’s trillions of dollars [invested].”
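    Marcus’s back-of-envelope reasoning can be sketched as a quick calculation. The revenue and valuation figures are the rough ones quoted above; the 50% market-share figure is his stated speculation, not reported data.

```python
# Sketch of Marcus's rough revenue-vs-valuation comparison.
# Figures are the approximate ones quoted in the article; OpenAI's ~50%
# market share is his speculation, not a reported number.
openai_monthly_revenue = 1e9                        # ~$1 billion reported for July
openai_annual_revenue = openai_monthly_revenue * 12 # annualized: ~$12 billion
openai_market_share = 0.5                           # assumption: roughly half the market

sector_annual_revenue = openai_annual_revenue / openai_market_share
unicorn_valuation = 2.7e12   # ~500 AI unicorns valued at ~$2.7 trillion combined

# Valuation-to-revenue multiple implied by these rough numbers
multiple = unicorn_valuation / sector_annual_revenue

print(f"Estimated sector revenue: ~${sector_annual_revenue / 1e9:.0f}B/year")
print(f"Implied valuation/revenue multiple: ~{multiple:.0f}x")
```

    The extrapolation lands at roughly $24 billion a year for the sector, in line with the “about $25 billion” Marcus quoted, against trillions of dollars in combined valuation and investment.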

    So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning people about this for years, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, and learned about the “anthropomorphization people do. … [they] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they wind up using them as a companion, and they wind up thinking that they’re closer to solving these problems than they actually are.” He said he thinks the bubble inflating to its current extent is in large part because of the human impulse to project ourselves onto things, something a cognitive scientist is trained not to do.

    These machines might seem like they’re human, but “they don’t actually work like you,” Marcus said, adding, “this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.”

    Subramanian, for her part, said she thinks “people love this AI technology because it feels like sorcery. It feels a little magical and mystical … the truth is it hasn’t really changed the world that much yet, but I don’t think it’s something to be dismissed.” She’s also become really taken with it herself. “I’m already using ChatGPT more than my kids are. I mean, it’s kind of interesting to see this. I use ChatGPT for everything now.”

  • Cheteshwar Pujara announces retirement from Indian cricket

  • Mexican boxer Chavez Jr. to be released pending trial

    MEXICO CITY: Mexican boxer Julio Cesar Chavez Jr. will be released while awaiting trial for alleged links to drug cartels, his lawyer said Saturday after a court hearing in Mexico.

    Chavez Jr., 39, son of boxing legend Julio Cesar Chavez, was deported from the United States on Monday and appeared before a federal judge in the northwestern state of Sonora on Saturday.

    “He will be released immediately as ordered by the judge,” lawyer Ruben Fernando Benitez told reporters.

    The attorney general’s office did not immediately respond to AFP’s request for comment.

    Chavez faces charges of “organized crime” without a leadership role, and for allegedly participating in the “clandestine introduction of weapons into Mexico,” the lawyer said.

    Benitez said “very strict measures,” including a travel ban, were imposed, but added that Chavez would comply.

    During the hearing, the attorney general’s office requested three additional months to gather evidence, according to local media.

    The next hearing is set for November 24.

    US authorities arrested Chavez in July for being in the United States illegally.

    They also said he was wanted in Mexico for alleged ties to the Sinaloa Cartel, one of six Mexican drug trafficking groups designated as terrorist organizations by the United States.

    After Chavez’s deportation, Mexican authorities transferred him to a prison in Hermosillo, the capital of Sonora state.

    Chavez’s arrest in July came days after his lopsided loss to YouTuber-turned-boxer Jake Paul in a cruiserweight bout before a sell-out crowd in California.

    If convicted, Chavez could face four to eight years in prison, his lawyer said.

  • How Sonos is making existing products more sustainable

    You would expect new audio devices to be more environmentally friendly than their predecessors. But did you know that some companies keep working on their existing products, refining their designs to make them more sustainable?

    Sonos is one such company. When the firm’s five-star Arc Ultra soundbar launched, it was made using 5 per cent recycled plastic. Thanks to work by the firm’s materials team, that figure will soon rise to 44 per cent.

  • Vlček wins, Biedrzyński leads in Master ERC

    On his first outing behind the wheel of a Hankook-equipped Škoda Fabia RS Rally2 having switched from driving a Hyundai i20 N Rally2, Vlček took a comfortable victory ahead of Pole Biedrzyński, who continues to rely on Hyundai power for his Master ERC bid.

    Tomáš Kurka, driving a Škoda Fabia RS Rally2 for J2X Rally Team, hit back from a crash in testing to complete the top three on his Master ERC debut.

    With two rounds remaining, Biedrzyński – who has finished second twice this season – is 13 points ahead of Vlček.

    JDS Machinery Rali Ceredigion hosts the penultimate round of the Master ERC season from 5 to 7 September.

  • Week in review: Covertly connected and insecure Android VPN apps, Apple fixes exploited zero-day

    Here’s an overview of some of last week’s most interesting news, articles, interviews and videos:

    Android VPN apps used by millions are covertly connected AND insecure
    Three families of Android VPN apps, with a combined 700 million-plus Google Play downloads, are secretly linked, according to a group of researchers from Arizona State University and Citizen Lab.

    Apple fixes zero-day vulnerability exploited in “extremely sophisticated attack” (CVE-2025-43300)
    Apple has fixed yet another vulnerability (CVE-2025-43300) that has apparently been exploited as a zero-day “in an extremely sophisticated attack against specific targeted individuals.”

    Using lightweight LLMs to cut incident response times and reduce hallucinations
    Researchers from the University of Melbourne and Imperial College London have developed a method for using LLMs to improve incident response planning with a focus on reducing the risk of hallucinations.

    Russian threat actors using old Cisco bug to target critical infrastructure orgs
    A threat group linked to the Russian Federal Security Service’s (FSB) Center 16 unit has been compromising unpatched and end-of-life Cisco networking devices via an old vulnerability (CVE-2018-0171), the FBI and Cisco warned on Wednesday.

    What happens when penetration testing goes virtual and gets an AI coach
    Cybersecurity training often struggles to match the complexity of threats. A new approach combining digital twins and LLMs aims to close that gap.

    AWS Trusted Advisor flaw allowed public S3 buckets to go unflagged
    AWS’s Trusted Advisor tool, which is supposed to warn customers if their (cloud) S3 storage buckets are publicly exposed, could be “tricked” into reporting them as not exposed when they actually are, Fog Security researchers have found.

    How security teams are putting AI to work right now
    AI is moving from proof-of-concept into everyday security operations. In many SOCs, it is now used to cut down alert noise, guide analysts during investigations, and speed up incident response.

    Alleged Rapper Bot DDoS botnet master arrested, charged
    US federal prosecutors have charged a man with running Rapper Bot, a powerful botnet that was rented out to launch large-scale distributed denial-of-service (DDoS) attacks around the world.

    Fractional vs. full-time CISO: Finding the right fit for your company
    In this Help Net Security interview, Nikoloz Kokhreidze, Fractional CISO at Mandos, discusses why many early- and growth-stage B2B companies hire full-time CISOs before one is needed.

    Commvault plugs holes in backup suite that allow remote code execution
    Commvault has fixed four security vulnerabilities that may allow unauthenticated attackers to compromise on-premises deployments of its flagship backup and replication suite.

    The AI security crisis no one is preparing for
    In this Help Net Security interview, Jacob Ideskog, CTO of Curity, discusses the risks AI agents pose to organizations.

    Exploit for critical SAP Netweaver flaws released (CVE-2025-31324, CVE-2025-42999)
    A working exploit chaining two critical SAP Netweaver vulnerabilities (CVE-2025-31324, CVE-2025-42999), both previously exploited in the wild, has been made public by VX Underground, Onapsis security researchers have warned.

    Password crisis in healthcare: Meeting and exceeding HIPAA requirements
    In 2025, healthcare organizations are facing a new wave of password security risks.

    Noodlophile infostealer is hiding behind fake copyright and IP infringement notices
    Attackers pushing the Noodlophile infostealer are targeting businesses with spear-phishing emails threatening legal action due to copyright or intellectual property infringement, Morphisec researchers have warned.

    Five ways OSINT helps financial institutions to fight money laundering
    Here are five key ways OSINT tools can help financial firms develop advanced strategies to fight money launderers.

    DevOps in the cloud and what is putting your data at risk
    In this Help Net Security video, Greg Bak, Head of Product Enablement at GitProtect, walks through some of the biggest security risks DevOps teams are dealing with.

    New NIST guide explains how to detect morphed images
    The National Institute of Standards and Technology (NIST) has published new guidelines on how organizations can use detection tools to catch morph attacks before they succeed.

    The 6 challenges your business will face in implementing MLSecOps
    As organizations start to establish more robust ML and AI security, they will face six major challenges. It’s important that leadership and security strategists know how to identify the problems and what to do if they suspect risks in their models.

    What makes airport and airline systems so vulnerable to attack?
    In this Help Net Security video, Recep Ozdag, VP and GM at Keysight Technologies, explains why airline and airport systems are so difficult to secure.

    Google unveils new AI and cloud security capabilities at Security Summit
    Google used its Cloud Security Summit 2025 today to introduce a wide range of updates aimed at securing AI innovation and strengthening enterprise defenses.

    The cybersecurity myths companies can’t seem to shake
    Cybersecurity myths are like digital weeds: pull one out, and another quickly sprouts in its place.

    LudusHound: Open-source tool brings BloodHound data to life
    LudusHound is an open-source tool that takes BloodHound data and uses it to set up a working Ludus Range for safe testing. It creates a copy of an Active Directory environment using previously gathered BloodHound data.

    Buttercup: Open-source AI-driven system detects and patches vulnerabilities
    Buttercup is a free, automated, AI-powered platform that finds and fixes vulnerabilities in open-source software.

    Review: Data Engineering for Cybersecurity
    Data Engineering for Cybersecurity sets out to bridge a gap many security teams encounter: knowing what to do with the flood of logs, events, and telemetry they collect.

    Cybersecurity jobs available right now: August 19, 2025
    We’ve scoured the market to bring you a selection of roles that span various skill levels within the cybersecurity field. Check out this weekly selection of cybersecurity jobs available right now.

    Webinar: Why AI and SaaS are now the same attack surface
    The lines between SaaS and AI are vanishing. AI agents are now first-class citizens in your SaaS universe: accessing sensitive data, triggering workflows, and introducing new risks that legacy SaaS security posture management (SSPM) tools miss.

    Product showcase: iStorage datAshur PRO+C encrypted USB flash drive
    The iStorage datAshur PRO+C is a USB-C flash drive featuring AES-XTS 256-bit hardware encryption.

    New infosec products of the week: August 22, 2025
    Here’s a look at the most interesting products from the past week, featuring releases from Doppel, Druva, LastPass, and StackHawk.


  • ‘Luke Combs has ruined Fast Car for me’: Martha Wainwright’s honest playlist

    The song that changed my life
    My mom was Kate McGarrigle, who formed folk duo Kate and Anna McGarrigle with my aunt. My dad is the folk singer Loudon Wainwright III. They were only married for five years, but wrote songs about each other and about my older brother Rufus and me. My dad wrote Five Years Old about missing my fifth birthday. That changed my life. Sometimes it’s easier to apologise in music than in person.

    The song I can no longer listen to
    Recently I’ve been upset that I can’t listen to Fast Car by Tracy Chapman, because the Luke Combs country version has ruined it for me.

    The first song I fell in love with
    When I was eight, I had my mom write out the lyrics to Walking on Sunshine by Katrina and the Waves so I could listen to it on repeat on this little boombox in our house and learn all the words.

    The song I’d like played at my funeral
    I’ve been learning The Kiss by Judee Sill. She was a remarkable artist, and it is considered her masterpiece. It is absolutely gorgeous, but also tragic, because she died at such an early age.

    The song I do at karaoke
    I’m really bad at karaoke. I guess I would do Dreams by Fleetwood Mac. The singer in me wants to sound like Stevie Nicks, but unfortunately Martha shows up pretty fast in my karaoke interpretations.

    The best song to have sex to
    I’ll tell you the worst: when you can hear your parents playing their own music – such as (Talk to Me of) Mendocino by Kate and Anna McGarrigle – while they are having sex. Don’t ever do that.

    The song I inexplicably know every lyric to
    I’m really bad with lyrics. I’ll sing songs thousands of times but still have to have them written down. When I was young, I learned all the background parts to I’m Your Man by Leonard Cohen, because I wanted to be his backing singer. There’s a really weird song on there called Jazz Police that I know all the lyrics to.

    The song I secretly like
    Memory from Cats is the sort of song I’d have made fun of in the past. But now it gets me, because I learned it with my son.

    The song that makes me cry
    Le Blues du Businessman – The Businessman’s Blues – from 70s Canadian-French rock opera Starmania.

    Martha Wainwright is touring until 30 August; the tour starts at Shrewsbury folk festival on 24 August.


  • Strange ripples frozen in Mars’ sands could hold keys to human survival

    On Mars, the past is written in stone — but the present is written in sand. Last week, Perseverance explored inactive megaripples to learn more about the wind-driven processes that are reshaping the Martian landscape every day.

    After wrapping up its investigation at the contact between clay and olivine-bearing rocks at “Westport,” Perseverance is journeying south once more. Previously, attempts were made to drive uphill to visit a new rock exposure called “Midtoya.” However, a combination of the steep slope and rubbly, rock-strewn soil made drive progress difficult, and after several attempts, the decision was made to return to smoother terrain. Thankfully, the effort wasn’t fruitless, as the rover was able to gather data on new spherule-rich rocks thought to have rolled downhill from “Midtoya,” including the witch-hat- or helmet-shaped rock “Horneflya,” which has attracted much online interest.

    More recently, Perseverance explored a site called “Kerrlaguna” where the steep slopes give way to a field of megaripples: large windblown sand formations up to 1 meter (about 3 feet) tall. The science team chose to perform a mini-campaign to make a detailed study of these features. Why such interest? While often the rover’s attention is focused on studying processes in Mars’ distant past that are recorded in ancient rocks, we still have much to learn about the modern Martian environment.

    Almost a decade ago, Perseverance’s forerunner Curiosity studied an active sand dune at “Namib Dune” on the floor of Gale crater, where it took a memorable selfie. However, the smaller megaripples, especially dusty and apparently inactive ones like those at “Kerrlaguna,” are also common across the surface of Mars. These older, immobile features could yield new insights into the role that wind and water play on the modern Martian surface.

    After arriving near several of these inactive megaripples, Perseverance performed a series of measurements using its SuperCam, Mastcam-Z, and MEDA science instruments in order to characterize the surrounding environment, the size and chemistry of the sand grains, and any salty crusts that may have developed over time.

    Besides furthering our understanding of the Martian environment, documenting these potential resources could help us prepare for the day when astronauts explore the Red Planet and need resources held within Martian soils to help them survive. It is hoped that this investigation at “Kerrlaguna” can serve as a practice run for a more comprehensive campaign at a more extensive field of larger bedforms at “Lac de Charmes,” further along the rover’s traverse.

    Written by Melissa Rice, Professor of Planetary Science at Western Washington University
