Category: 3. Business

  • CERAWeek 2026: Energy Security Strikes Back


    As a result, the debate was less about which technology should win and more about how to build enough of everything. The dominant approach was “all of the above.” Renewables, battery storage, natural gas, nuclear, geothermal, and grid-enhancing technologies all have a role to play. The old idea that the transition would mainly consist of replacing hydrocarbons is giving way to a new reality: the world is entering a phase of energy addition rather than energy substitution. Demand is growing too quickly, and the system is too strained, for any single solution to suffice.

    This is especially relevant for renewables. Far from fading, they have gained a broader justification. Solar and wind are no longer seen only as climate solutions, but also as strategic assets that enhance domestic resilience. At the same time, their effectiveness depends on being paired with storage, transmission, and system flexibility. The next phase of the energy transition is not just about adding clean generation, but about integrating it into a more robust system. Batteries are among the clearest examples of a technology that has moved from promise to operational relevance.


    Natural gas also emerged as one of the key winners. Once widely described as a bridge fuel, it is increasingly seen as a long-term pillar of the global energy system. Its role in power generation, industry, system flexibility, and energy security is harder to question. The narrative of an impending LNG glut has weakened, replaced by concerns about tighter supply, geopolitical risk, and a renewed emphasis on long-term contracts. In this environment, U.S. LNG is not only commercially important, but strategically significant. Europe remains dependent on global gas markets, while Asia is highly exposed to disruptions in Persian Gulf supply, and the premium on reliability has increased.

    Asia represents the epicenter of vulnerability. Many economies remain heavily reliant on imported energy, particularly from the Middle East, making them especially exposed to prolonged disruptions. At the same time, Asia is where the future of the energy system is being most intensely contested. Some countries are rapidly scaling renewables and batteries; others are doubling down on coal for security reasons. Nuclear remains part of the mix in some markets, while gas continues to play a central role in others. India stood out as a country that could leapfrog traditional pathways by accelerating solar, storage, and electrification. What Asia ultimately shows is that there is no single energy model—different countries are responding to the same pressures in very different ways.

    Nuclear is another area where the tone has shifted. What many now call a nuclear renaissance is becoming more credible, driven by rising electricity demand, stronger political support, and growing interest from technology companies seeking firm, low-carbon power. Yet significant constraints remain: cost overruns, supply chains, standardization, and labor availability. Nuclear is part of the solution for the 2030s, but not a quick fix for today’s pressures.

    The same realism applies to infrastructure. One of the recurring frustrations is that demand is moving at sprint speed, while energy systems move at marathon pace. This is partly about capital and engineering, but also about bureaucracy, overregulation, permitting, and project execution. Even when capital is available and the strategic need is clear, infrastructure takes too long to approve and build—especially in advanced economies. In a world where energy has become the new bottleneck, these delays matter more than ever.

    The deeper meaning of CERAWeek 2026 is the end of a certain innocence in the energy debate. The transition is not over, but it is no longer seen as a smooth, linear process driven primarily by policy ambition and declining clean-tech costs. It is now understood as a more complex, uneven, and geopolitical transformation. The world is not simply choosing between fossil fuels and renewables—it is trying to assemble a system capable of delivering enough energy, at speed, under conditions of rivalry, geopolitical risk, industrial competition, and national security. At the same time, AI and hyperscalers are not only adding pressure on the demand side; they are increasingly part of the solution, bringing capital, long-term offtake agreements, and new tools to optimize grids, improve efficiency, and accelerate innovation across the energy system.


    At times, Houston felt less like an energy conference and more like a discussion of power and statecraft. Energy has returned to the center of national security, economic strategy, and technological competition. The winners in this new era will not necessarily be those with the most ambitious rhetoric, but those able to combine resilience, affordability, and execution. CERAWeek 2026 did not bury the energy transition. But it did bring to an end the illusion that it could happen without trade-offs, without geopolitics, and without a much more uncomfortable conversation about how the world will actually power itself in the decades ahead. In this new landscape, energy security strikes back—and regains center stage.


  • Arm chief Haas in line to lead much of SoftBank’s international business – Financial Times

    1. Arm chief Haas in line to lead much of SoftBank’s international business – Financial Times
    2. Arm CEO Haas set to oversee SoftBank’s international operations – FT – Investing.com UK
    3. Arm chief Haas in line to lead much of SoftBank’s international business – FT – marketscreener.com
    4. Arm Chief Rene Haas to Lead SoftBank’s International Business: FT – Global Banking & Finance Review®


  • Microsoft Singapore announces new MPowerHer collaboration to upskill women in tech and AI – Microsoft Source

    1. Microsoft Singapore announces new MPowerHer collaboration to upskill women in tech and AI – Microsoft Source
    2. Microsoft bets $5.5 billion on Singapore to power AI talent and infrastructure – HR Katha
    3. Microsoft Commits $5.5 Billion To Build AI Talent Hub – BW People
    4. Microsoft to invest $5.5bn in Singapore, boosting AI skills and workforce readiness – ET CIO
    5. Microsoft announces Fabric Go Local and Windows 365 Link Device Availability in Singapore – Microsoft Source


  • Oil prices rise as investors eye fragile US-Iran ceasefire – BBC


    1. Oil prices rise as investors eye fragile US-Iran ceasefire – BBC
    2. Brent Oil Slides as Pakistan Urges Iran to Open Strait of Hormuz – Bloomberg.com
    3. Oil plunges, Dow sees its best day in a year after US-Iran ceasefire, but ‘hurdles remain’ – CNN
    4. Empty ships and shut wells: Why the Iran war oil crisis is not over yet – Al Jazeera
    5. Oil Prices Suffer Biggest Drop Since Covid – WSJ


  • Samsung Receives New TÜV Rheinland Certifications for 2026 Micro RGB, OLED, Mini LED, Soundbars and More – Samsung Newsroom Malaysia


    Expanded certifications further reinforce the eco-friendly value of Samsung’s premium lineup while advancing its sustainability efforts across product categories


    Samsung Electronics Co., Ltd. announced that approximately 34 models across its 2026 TV and soundbar lineup have received Product Carbon Reduction and Product Carbon Footprint certifications from TÜV Rheinland, a globally recognized certification organization based in Germany. The achievement reflects Samsung’s continued efforts to reduce carbon emissions across its premium product lineup.


    “As a global leader in premium displays and audio, Samsung sees sustainability as an essential part of innovation,” said Taeyong Son, Executive Vice President of Visual Display (VD) Business at Samsung Electronics. “We remain committed to reducing carbon emissions across our products, so consumers do not have to choose between cutting-edge technology and a more responsible product experience.”


    Samsung received Product Carbon Reduction certification for 14 premium display and audio models, including its 2026 OLED TVs, The Frame Pro and its flagship HW-Q990H soundbar.[1] An additional 20 products, including Micro RGB and Mini LED TVs, earned Product Carbon Footprint certification.[2]


    TÜV Rheinland grants Product Carbon Footprint certification to products that meet international standards for evaluating greenhouse gas emissions across the full product life cycle, including manufacturing, transportation, use and disposal.


    Product Carbon Reduction certification, on the other hand, is granted to products that have already received Product Carbon Footprint certification and further demonstrate a measurable reduction in carbon emissions compared with their predecessors. Notably, the HW-Q990H earned both certifications, extending Samsung’s sustainability efforts beyond TVs.


    In 2021, Samsung’s Neo QLED became the first TV with 4K resolution or higher to receive Product Carbon Reduction certification. In the six years since, the company has continued to expand its portfolio of certified products across QLED, OLED, Lifestyle TVs, monitors and signage.


    These efforts also reflect Samsung’s broader leadership in premium display and audio categories, where it has led the global TV market for 20 years[3] and remained the No. 1 global soundbar brand for 12 years.[4]


    For more information on Samsung’s 2026 TV lineup, please visit: https://www.samsung.com/my/


    [1] 14 Product Carbon Reduction-certified models include OLED (S90H, S85H 55’’, 65’’, 77’’, 83’’), The Frame Pro (LS03HW 65”, 75”, 85”) and soundbar (HW-Q990H model).
    [2] 20 Product Carbon Footprint-certified models include Micro RGB (R95H 65’’, 75’’, 85’’, R85H 55”, 65”, 75”, 85”, 100’’), OLED (S95H 55’’, 65’’, 77’’, 83’’, S85H 48’’), Mini LED (M70H), and The Frame Pro (LS03HW 55’’).
    [3] Omdia Q4 2025 Public Display Report, by unit sales.
    [4] FutureSource Consulting, 2025.


  • How KFC, AKA Korean fried chicken, took over the world | South Korea


    Inside a teaching kitchen south-east of Seoul, I coat a whole chicken – cut into eight parts – in batter and dip the pieces carefully into a bowl of powdered mix until covered in a light, fluffy layer.

    A chef watches intently. “Don’t rub it,” he says. “Keep it delicate.”

    The chicken, already brined in what I’m told is a secret marinade, goes into a fryer filled with an olive oil blend, heated to 170C. I slowly lower the pieces a third of the way, then drop them in away from myself to avoid splashing. I set a timer for 10 minutes.

    Korean fried chicken is prepared for frying

    This is Chicken University, a sprawling campus with a giant chicken statue at the entrance. It exists to train would-be owners of the BBQ Chicken franchise chain through a two-week residential programme. More than 50,000 people have passed through its classrooms.

    This humble dish is relatively simple, and is not even traditional Korean cuisine, but it is part of a national obsession that has gone global, both physically and culturally as part of the K-food wave. The country has been only half-jokingly dubbed the Republic of Fried Chicken.

    South Korea has around 40,000 fried chicken restaurants – just a few thousand short of the number of McDonald’s branches worldwide. Most are small, family-run operations. But now, Korean chicken brands operate more than 1,800 stores in around 60 countries, nearly double the number of stores a decade ago. From London to Los Angeles, Korean fried chicken appears on the menu.

    About an hour south-east of Seoul, past fields and factories, sits Chicken University, a sprawling campus with a giant chicken statue at the entrance. Photograph: Raphael Rashid/The Guardian

    It is the most popular Korean food among international consumers, according to a South Korean government survey of about 11,000 consumers across 22 cities, spanning Asia, Europe, the Americas and Australia.

    From post-war import to K-food export

    South Korea’s most successful culinary export is not traditionally Korean. Fried chicken arrived with American soldiers stationed in the country after the Korean war, but the technique that made it distinctly Korean emerged decades later.

    Around 1980, a chicken shop owner in the southern city of Daegu, Yoon Jong-gye, noticed customers abandoning their chicken once it grew cold and the meat became dry. So he began experimenting, brining the chicken to keep it juicy and coating it in a glaze made from chilli powder. A neighbourhood grandmother suggested adding corn syrup.

    The result was yangnyeom chicken – sweet, sticky and spicy – and still appealing at room temperature. Yoon never patented his recipe and died in December 2025 at 74, having watched his invention spread far beyond the tiny shop where it began.

    South Korea’s distinct take on fried chicken has evolved over decades, with a range of recipes tailored to tastes around the world. Photograph: Raphael Rashid/The Guardian

    Korean chicken brands had been expanding internationally since the early 2000s, but the cultural breakthrough came in 2014, when the Korean drama My Love from the Star became a sensation across China.

    A line from its lead character – that “on the day of the first snow, you should have chicken and beer” – reportedly triggered queues outside Korean chicken restaurants, even during an avian flu outbreak.

    Chimaek, the portmanteau meaning “fried chicken and beer” from the Korean words “chikin” and “maekju”, has since become a cultural shorthand, even entering the Oxford English Dictionary.

    It describes as much an act of collective pleasure as a meal: friends gathered around a table, with a plate of chicken at the centre and draught beer within reach. Every July, Daegu hosts a chimaek festival that draws more than a million visitors.

    One defining feature of Korean fried chicken is how it is served. Kim Ki-deuk, who has run an independent chicken shop near Korea University in Seoul with his wife Baek Hye-kyeong for more than 20 years, puts it simply. “In fast food places, they may sell one or several pieces,” he says. “Korean chicken is one full bird.”

    Kim Ki-deuk and his wife Baek Hye-kyeong at their shop near Korea University in Seoul. Photograph: Raphael Rashid/The Guardian

    Technique is another factor, though methods vary.

    At shops like Kim and Baek’s, chicken is fried twice. “We fry it once first, then when the customer orders, we fry it again,” he says. “Otherwise it gets soggy. That’s what makes it extra crispy.”

    The batter, typically made with potato or corn starch, holds up under the sauce – whether a sweet-spicy yangnyeom glaze or a soy-garlic coating – allowing it to stay crisp long after it has been boxed up for delivery.

    Prof Joo Young-ha, a cultural anthropologist at the Academy of Korean Studies who specialises in food culture, argues that Korean chicken’s global success stems from its simplicity.

    “Unlike pork, chicken crosses religious prohibition boundaries,” he says. “And unlike kimchi, which is treated like a side dish, or bibimbap, which isn’t immediately obvious as a dish, fried chicken is immediately recognisable as a meal.”

    Beyond its global appeal, fried chicken’s rise in South Korea reflects something about modern life there. Prof Joo traces its rise to the 1980s and 1990s, when apartment living, dual-income households, and delivery culture were reshaping Korean life. Fried chicken, fast, convenient, and boxed for takeaway, fitted the moment.

    The industry has long attracted mid-career Koreans seeking a route back to income after leaving corporate jobs, though the market is fiercely competitive and margins are thin.

    Back at their fried chicken shop, Kim Ki-deuk slides another batch of chicken gizzards, another popular menu item, into the crackling oil. “Same as usual,” one customer says.

    “It’s great that Korean chicken is known worldwide,” Kim says, wiping down the counter between orders. “Chicken is for everyone, young and old.

    “Korea is such a small place. One bird doing all this work, introducing our country, our culture. It’s quite something.”


  • Differences in AI adoption in Europe and the US: Explanations and implications for productivity growth – CEPR

    1. Differences in AI adoption in Europe and the US: Explanations and implications for productivity growth – CEPR
    2. GenAI to change global labour markets: study – Dawn
    3. Examining the Gap Between AI Use and Impact: Insights from the Global Opportunity Index – Milken Institute
    4. Report: AI Will Reshape Work More than Replace It, but Global Impact Is Uneven – Campus Technology
    5. AI Decision Brief: How leaders can drive Frontier Transformation – Microsoft


  • Cracks in the Bedrock: Agent God Mode


    Executive Summary

    Our first article about the boundaries and resilience of Amazon Bedrock AgentCore focused on the Code Interpreter sandbox, and how it can be bypassed using DNS tunneling. In this second part, we delve into the identity and permissions model of AgentCore and the AgentCore starter toolkit. This toolkit is described by AWS as “a Command Line Interface (CLI) toolkit that you can use to deploy AI agents to an Amazon Bedrock AgentCore Runtime.” It abstracts backend provisioning complexity by automating the creation of runtimes, Amazon Elastic Container Registry (ECR) images and execution roles. We discovered that the toolkit’s auto-create logic generates identity and access management (IAM) roles that grant privileges broadly across the AWS account, rather than being scoped to individual resources. While the toolkit makes it easy to get started with AgentCore, the default deployment configuration favors ease of use over strict adherence to the principle of least privilege.

    The starter toolkit’s default deployment configuration introduces an attack vector that we call Agent God Mode, because the overly broad IAM permissions effectively grant an individual agent the “omniscient” ability to escalate privileges and compromise every other AgentCore agent within the AWS account.

    Our investigation uncovered a multi-stage attack chain that exploits this excessive access. We found that an attacker who compromises an agent could:

    • Exfiltrate proprietary ECR images
    • Access other agents’ memories
    • Invoke every code interpreter
    • Extract sensitive data

    We disclosed our findings to the AWS Security team. Following our disclosure, the AWS documentation was updated to include a security warning, stating that the default roles are “designed for development and testing purposes” and are not recommended for production deployment, as shown in Figure 1.

    Figure 1. AWS starter toolkit updated documentation warning note.

    Palo Alto Networks customers are better protected from the threats discussed in this article through the products and services described in the Palo Alto Networks Protection and Mitigation section below.

    If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.

    Technical Analysis

    Identity and permissions are two of the most critical pillars of setting boundaries and maintaining isolation in cloud workloads and applications. We explain the default IAM roles and permissions that the AgentCore starter toolkit provisions, to demonstrate how compounding attack primitives ultimately enable a full attack chain.

    The Default Deployment Architecture

    We began our analysis by evaluating the default IAM roles that the toolkit’s setup process automatically generates. The agentcore launch command automates the infrastructure provisioning required for an AI agent. Based on the user’s configuration, the toolkit creates:

    • The AgentCore Runtime
    • A memory store
    • An ECR Repository
    • An IAM execution role

    Figure 2 shows this configuration, created with the Agent Name ori_agent_01.

    A screenshot showing configuration details of an agent deployment. The list includes a region and mentions an obscured account number. Key configuration settings are highlighted and the memory retention is set to 30 days.
    Figure 2. Starter toolkit configuration.

    Upon execution, the toolkit confirms the deployment and associated resources, as shown in Figure 3.

    A screenshot of a deployment success message displaying details about an agent. It includes an Agent ARN, an ECR URI with AWS and Amazon's domains, and ARM64 container deployment confirmation to Bedrock AgentCore.
    Figure 3. Starter toolkit deployment.

    Although the toolkit simplifies the setup, the auto-create configuration for the execution role introduces a significant security risk.

    Cross-Agent Data Access

    AgentCore agents rely on memory resources to store both long and short-term conversation state and context. An attacker who gains read access to this resource could exfiltrate sensitive interaction data between the AI agent and its users. The default IAM policy generated by the toolkit reveals the permission set, as Figure 4 shows.

    A screenshot of a code snippet displaying a JSON policy configuration for AWS. The policy allows specific actions related to "bedrock-agentcore," such as creating events, getting events, and managing memory records. The resource is specified, followed by redacted content.
    Figure 4. BedrockAgentCoreMemory policy statement.

    The policy applies actions such as GetMemory and RetrieveMemoryRecords to the wildcard memory resource arn:aws:bedrock-agentcore:*:memory/*. This effectively allows the agent whose role was assigned with this policy to read the memories of all other agents in the account.

    Since the default role permits access to “*”, any AI agent can read or poison the state of any other AI agent in the account. The last piece required for exploitation is knowledge of the target’s unique MemoryID.
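    The effect of the wildcard Resource element can be illustrated offline. The sketch below simulates IAM-style `*` matching with Python’s `fnmatchcase` (IAM’s actual evaluation engine is more involved, and both ARNs are hypothetical examples, not values from a real account): the toolkit’s wildcard pattern matches every agent’s memory, while a scoped ARN matches only its own.

    ```python
    from fnmatch import fnmatchcase

    # Wildcard pattern from the default policy vs. a hypothetical scoped ARN.
    policy_resource = "arn:aws:bedrock-agentcore:*:memory/*"
    scoped_resource = "arn:aws:bedrock-agentcore:us-east-1:memory/agent_a_mem-0001"

    # Memory ARN of some other agent in the same account (hypothetical).
    victim_memory = "arn:aws:bedrock-agentcore:us-east-1:memory/agent_b_mem-0002"

    print(fnmatchcase(victim_memory, policy_resource))  # True: wildcard reaches every agent
    print(fnmatchcase(victim_memory, scoped_resource))  # False: a scoped ARN does not
    ```

    This is why the auto-created role collapses the isolation boundary between agents: the policy simply cannot distinguish one memory resource from another.
    
    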

    Indirect Privilege Escalation

    AgentCore Runtime utilizes Code Interpreter to execute dynamic logic. Crucially, these interpreters operate under their own distinct IAM roles, separate from Agent Runtime. This means that when an agent invokes the interpreter, the resulting actions are performed using the interpreter’s permissions, not the agent’s. The default policy indicates that the InvokeCodeInterpreter action is granted on all Code Interpreter resources (*), as Figure 5 shows.

    A screenshot of a code snippet showing AWS IAM policy permissions for Bedrock's agent core code interpreter. The policy includes actions like creating, starting, invoking, stopping, and deleting code interpreter sessions. Specific AWS resource ARNs are referenced.
    Figure 5. BedrockAgentCoreCodeInterpreter policy statement.

    These permissions introduce the risk of a direct exploitation cycle. Using a compromised AI agent, an attacker could perform reconnaissance to list available interpreters, identify a high-privileged target, and attempt to pivot by executing code within that context.

    ECR Exfiltration

    Perhaps the most critical finding relates to the Elastic Container Registry (ECR). As AgentCore Runtimes are distributed as Docker images, the default policy grants the AI agent unrestricted ability to pull images from any repository (arn:aws:ecr:*:repository/*) within the account. Figure 6 details this specific part of the policy.

    A screenshot of a JSON code snippet showing AWS IAM permissions. The code includes actions such as "BatchGetImage" and "GetAuthorizationToken" for Amazon Elastic Container Registry (ECR). Certain values, such as a repository identifier, are blurred for privacy.
    Figure 6. ECR policy statements.

    This configuration creates a high-risk exfiltration vector. From a compromised agent, an attacker could generate an authentication token to download source code, proprietary algorithms, internal files and other sensitive data from images of other agents and unrelated workloads across the entire account.

    First, the attacker retrieves a valid ECR authorization token, as Figure 7 shows.

    A screenshot of a code editor and a terminal. The code editor contains a Python script with an import statement for BedrockAgentCEP and code to connect to a service using boto3 for an agent. Below, a terminal displays a command being executed using `agentcore`, along with a network endpoint and response details.
    Figure 7. Retrieve authorization token using agent’s role.
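    The token returned by ECR’s GetAuthorizationToken API is a base64 blob that decodes to `AWS:<password>`, the credential pair that `docker login` consumes. The sketch below shows the decoding step only, using a simulated token rather than a live boto3 call:

    ```python
    import base64

    def split_ecr_token(authorization_token: str) -> tuple[str, str]:
        # ECR authorization tokens decode to "AWS:<password>".
        user, _, password = base64.b64decode(authorization_token).decode().partition(":")
        return user, password

    # Simulated token; a real one comes from ecr:GetAuthorizationToken.
    simulated = base64.b64encode(b"AWS:example-ecr-password").decode()
    user, password = split_ecr_token(simulated)
    print(user)  # AWS
    ```

    The username is always `AWS`; the decoded password is what the attacker feeds to `docker login` in the next step.
    
    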

    With these credentials, the attacker authenticates the Docker CLI and pulls the image of a target agent – or any other container in the registry – as detailed in Figure 8.

    A screenshot of a terminal displaying code and error messages related to Docker login and image pulling. It shows attempts to access a repository from Amazon's Elastic Container Registry. The error message indicates access denial. There are also multiple lines displaying "Pull complete" with corresponding hashes.
    Figure 8. Pulling another agent’s image using a previously retrieved token.

    After downloading the image, the attacker has full read access to the target’s file system, as Figure 9 shows.

    Screenshot of a server file management interface displaying directory contents. The highlighted folder is "app," sized at 0 Bytes. The interface shows activity status as "Running".
    Figure 9. Exploring image content.

    Bypassing the Memory ID Barrier

    As noted in the Cross-Agent Data Access section, the primary barrier to cross-agent memory poisoning is the obscurity of the target’s MemoryID. The ECR exfiltration vulnerability eliminates this constraint. As Figure 10 shows, an attacker can recover configuration details that are baked into the container or environment files, by performing static analysis on the downloaded Docker image.

    A screenshot of a command line interface showing a directory with a folder named "Files" highlighted. A portion of code at the bottom shows configuration details, including paths and an identifier.
    Figure 10. Extracting memory ID.

    The env-output.txt file that can be found within the image contains the following target identifier:

    BEDROCK_AGENTCORE_MEMORY_ID=ori_agent_01_mem-AsDiQiDikR
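    The static-analysis step amounts to parsing KEY=VALUE lines out of files recovered from the image. A minimal sketch, using the env-output.txt content quoted above (the helper `parse_env` is ours, not part of any AWS tooling):

    ```python
    def parse_env(text: str) -> dict[str, str]:
        # Parse simple KEY=VALUE lines, skipping blanks and comments.
        env = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key] = value
        return env

    # Content mirroring the env-output.txt found inside the pulled image.
    dump = "BEDROCK_AGENTCORE_MEMORY_ID=ori_agent_01_mem-AsDiQiDikR\n"
    memory_id = parse_env(dump)["BEDROCK_AGENTCORE_MEMORY_ID"]
    print(memory_id)  # ori_agent_01_mem-AsDiQiDikR
    ```

    With this identifier in hand, the wildcard memory permissions from the default role are sufficient to read or poison the target’s state.
    
    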

    The Kill Chain

    By abusing the default permission configurations, an attacker could:

    1. Exfiltrate: Leverage ECR permissions to download the image of a high-value target.
    2. Extract: Recover the MemoryID from the container’s static configuration.
    3. Execute: Use the ID to dump or poison the target’s conversation history.

    This completes the attack vector. The AgentCore starter toolkit God Mode permissions allow an attacker who compromises an initial agent to exfiltrate the source code of a target, extract the specific resource IDs and hijack the target’s memory state, without restriction.

    Invoking Other Agents

    In addition, we observed that the policy scope extends to the runtime API, granting InvokeAgentRuntime permissions on the arn:aws:bedrock-agentcore:*:runtime/* resource. This effectively allows any agent in the account to trigger the execution of any other agent, as Figure 11 demonstrates.

    A screenshot of a JSON code snippet showing permissions for "BedrockAgentCoreRuntime." The "Effect" is set to "Allow," with several "Actions" included. The "Resource" specifies an AWS ARN in a specified region.
    Figure 11. BedrockAgentCoreRuntime policy statement.

    This architecture allows an agent designed for non-sensitive data access or non-administrative tasks to invoke another agent that has higher privileges.

    Conclusion

    While building and deploying AI agents on other platforms can require significant effort, AWS has effectively streamlined this process with the AgentCore starter toolkit. Following our communication with AWS, the AWS security team provided the following statement: “It is important for anyone using the toolkit to understand that the IAM roles generated by the auto-create feature provide a flat permission structure that does not align with the principle of least privilege, and should never be used in a production system.”

    Our analysis of the automatically attached IAM policy revealed the presence of an overly permissive IAM role. Instead of scoping permissions to the specific AI agent resources, the policy grants the agent’s role the ability to perform actions on wildcard resources (*) in Bedrock AgentCore and ECR. This exposes the environment to unauthorized cross-resource access.

    The overly permissive IAM policies create the following security risks:

    • Source code exposure: Unrestricted ECR access allows full retrieval of container images.
    • Data compromise: Wildcard permissions on memory resources facilitate cross-agent data leakage.
    • Privilege escalation: Unchecked access to Code Interpreters enables lateral movement.

    As recommended by the AWS Security team, customers should always create a custom, least-privilege IAM role for production agents. This is the most effective mitigation to limit the potential impact of a compromised agent. Following our collaboration with AWS, their Security team updated the documentation to enhance transparency and promote safer deployment practices for all users.
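    A least-privilege policy statement for the memory permissions might look like the sketch below. The action names are the ones shown in the default policy (Figure 4); the region, account ID, memory ID, and statement Sid are placeholders we chose for illustration, not values from the article.

    ```python
    import json

    def memory_statement(region: str, account: str, memory_id: str) -> dict:
        # Scope the memory actions to one concrete ARN instead of
        # the default arn:aws:bedrock-agentcore:*:memory/* wildcard.
        return {
            "Sid": "BedrockAgentCoreMemoryScoped",
            "Effect": "Allow",
            "Action": [
                "bedrock-agentcore:GetMemory",
                "bedrock-agentcore:RetrieveMemoryRecords",
            ],
            "Resource": f"arn:aws:bedrock-agentcore:{region}:{account}:memory/{memory_id}",
        }

    policy = {
        "Version": "2012-10-17",
        "Statement": [memory_statement("us-east-1", "123456789012", "my_agent_mem-0001")],
    }
    print(json.dumps(policy, indent=2))
    ```

    The same scoping applies to the runtime, code interpreter, and ECR statements: each agent’s role should name only that agent’s own resources.
    
    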

    Disclosure Timeline

    • Nov. 17, 2025 – We responsibly reported our findings to the AWS Security team.
    • Nov. 18, 2025 – AWS Security team responded that they are investigating.
    • Dec. 14, 2025 – AWS Security team reached out for more details.
    • Jan. 28, 2026 – AWS Security team provided clarifications regarding our findings.

    Palo Alto Networks Protection and Mitigation

    Palo Alto Networks customers are better protected from the threats discussed above through the following products:

    Organizations are better equipped to close the AI security gap through the deployment of Cortex AI-SPM, which helps to provide comprehensive visibility and posture management for AI agents across AWS and Azure environments. Cortex AI-SPM is designed to mitigate critical risks including over-privileged AI agent access, misconfigurations, and unauthorized data exposure. Cortex AI-SPM helps enable security teams to enforce compliance with NIST and OWASP standards, monitor for real-time behavioral anomalies, and secure the entire AI lifecycle within a unified cloud security context.

    Cortex Cloud Identity Security encompasses Cloud Infrastructure Entitlement Management (CIEM), Identity Security Posture Management (ISPM), Data Access Governance (DAG) and Identity Threat Detection and Response (ITDR). It provides clients with the necessary capabilities to improve their identity-related security requirements by providing visibility into identities, and their permissions, within cloud and container environments. This helps accurately detect misconfigurations and unwanted access to sensitive data. It also allows real-time analysis surrounding usage and access patterns.

    The Unit 42 AI Security Assessment can help empower safe AI use and development.

    The Unit 42 Cloud Security Assessment is an evaluation service that reviews cloud infrastructure to identify misconfigurations and security gaps.

    If you think you may have been compromised or have an urgent matter, get in touch with the Unit 42 Incident Response team or call:

    • North America: Toll Free: +1 (866) 486-4842 (866.4.UNIT42)
    • UK: +44.20.3743.3660
    • Europe and Middle East: +31.20.299.3130
    • Asia: +65.6983.8730
    • Japan: +81.50.1790.0200
    • Australia: +61.2.4062.7950
    • India: 000 800 050 45107
    • South Korea: +82.080.467.8774

    Palo Alto Networks has shared these findings with our fellow Cyber Threat Alliance (CTA) members. CTA members use this intelligence to rapidly deploy protections to their customers and to systematically disrupt malicious cyber actors. Learn more about the Cyber Threat Alliance.


  • Vanda Pharmaceuticals Announces Initiation of The Thetis Study, a Clinical Trial of NEREUS™ for the Prevention of Vomiting Induced by GLP-1 Receptor Agonists

    Vanda Pharmaceuticals Announces Initiation of The Thetis Study, a Clinical Trial of NEREUS™ for the Prevention of Vomiting Induced by GLP-1 Receptor Agonists

    WASHINGTON, April 8, 2026 /PRNewswire/ — Vanda Pharmaceuticals Inc. (Vanda) (Nasdaq: VNDA) today announced the initiation of Thetis, a clinical trial evaluating NEREUS™ (tradipitant) for the prevention of vomiting in patients receiving glucagon-like peptide-1 (GLP-1) receptor agonist therapies. NEREUS™ was recently approved for the prevention of vomiting induced by motion.1

    GLP-1 receptor agonists, including semaglutide and tirzepatide, have transformed the treatment of type 2 diabetes and obesity. However, gastrointestinal side effects, particularly nausea and vomiting, remain a significant challenge for many patients and are a leading cause of treatment discontinuation or dose reduction. Recent developments in the GLP-1 space further underscore this challenge. Last month, the U.S. Food and Drug Administration (FDA) approved a “high dose” of Wegovy on the basis of additional weight-loss benefits, but its two most commonly reported adverse effects, nausea and vomiting, occur more frequently at this dose than at the previously approved Wegovy maximum dose.2

    “GLP-1 receptor agonists offer significant benefits, but vomiting and nausea can severely impact patient adherence and quality of life,” said Mihael H. Polymeropoulos, M.D., President, CEO and Chairman of the Board of Vanda Pharmaceuticals. “NEREUS™ has demonstrated potent antiemetic effects in prior clinical studies. We are excited to advance this program, which has the potential to improve tolerability and allow more patients to fully benefit from these important therapies.”

    The Thetis study is a multicenter, randomized, double-blind, placebo-controlled trial that will evaluate the efficacy and safety of oral tradipitant in patients initiated at a high dose of a GLP-1 receptor agonist. The primary endpoint is the proportion of patients free from vomiting episodes during the treatment period.

    The Phase 2 study, previously announced in Vanda’s press release dated November 15, 2025, was similar in design: patients were pre-treated with either tradipitant or placebo before receiving a 1 mg injection of Wegovy®, a dose that normally takes 9 weeks of titration to reach. The study met its primary endpoint, with only 29.3% of tradipitant-treated participants (17/58) experiencing vomiting compared with 58.6% on placebo (34/58) (p=0.0016), a 50% relative reduction. The study also met the key secondary endpoint, the proportion of participants with vomiting and significant nausea: 22.4% in the tradipitant group (13/58) versus 48.3% on placebo (28/58) (p=0.0039).3
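For readers checking the figures, the 50% relative reduction follows directly from the participant counts reported for the Phase 2 study:

```python
# Sanity-check of the relative reduction reported for the Phase 2 study,
# using the participant counts given in the press release.
vomit_tradipitant = 17 / 58   # 29.3% of tradipitant-treated participants
vomit_placebo = 34 / 58       # 58.6% of placebo participants

relative_reduction = 1 - vomit_tradipitant / vomit_placebo
print(f"{relative_reduction:.0%}")  # 50%
```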

    Vanda expects topline results from the Thetis study by Q4 2026. Following completion of the Thetis study, additional study data may be required prior to approval of a New Drug Application (NDA).

    References

    1. See full U.S. NEREUS Prescribing Information, available at: www.nereus.us.
    2. See full U.S. Wegovy Prescribing Information, available at: https://www.novo-pi.com/wegovy.pdf.
    3. See Vanda Pharmaceuticals Reports Positive Results for Tradipitant in Preventing GLP-1 Induced Nausea and Vomiting, available at: https://www.prnewswire.com/news-releases/vanda-pharmaceuticals-reports-positive-results-for-tradipitant-in-preventing-glp-1-induced-nausea-and-vomiting-302617739.html.

    About NEREUS™

    NEREUS™ (tradipitant) is a neurokinin-1 receptor antagonist licensed by Vanda from Eli Lilly and Company. NEREUS™ is approved for the acute prevention of vomiting induced by motion in adults, and is currently in clinical development for a variety of indications, including gastroparesis and the prevention of nausea and vomiting induced by GLP-1 receptor agonists. Full NEREUS™ Prescribing Information can be found at: https://www.nereus.us.

    About Vanda Pharmaceuticals Inc.

    Vanda is a leading global biopharmaceutical company focused on the development and commercialization of innovative therapies to address high unmet medical needs and improve the lives of patients. For more on Vanda Pharmaceuticals Inc., please visit www.vandapharma.com and follow us on X @vandapharma.

    CAUTIONARY NOTE REGARDING FORWARD-LOOKING STATEMENTS

    Various statements in this press release, including, but not limited to, statements regarding the design, objectives and potential outcomes of the Thetis clinical trial; the potential benefits, effectiveness and safety of NEREUS™ (tradipitant) for the prevention of vomiting in patients receiving GLP‑1 receptor agonist therapies; the significance of results from prior clinical studies; the expected timing of topline results from the Thetis study; and the risk that additional study data may be required prior to approval of an NDA, are “forward‑looking statements” within the meaning of the securities laws. All statements other than statements of historical fact are statements that could be deemed forward‑looking statements. Forward‑looking statements are based on current expectations and assumptions that involve risks, changes in circumstances and uncertainties. Important factors that could cause actual results to differ materially from those reflected in Vanda’s forward-looking statements include, among others, risks inherent in clinical development, including the risk that the Thetis trial may not demonstrate efficacy or safety consistent with prior studies, may not meet its primary or secondary endpoints, or may experience delays; variability in patient response to therapy; regulatory considerations affecting the development of NEREUS™ for additional indications; and the risk that additional study data may be required prior to approval of an NDA. Therefore, no assurance can be given that the results or developments anticipated by Vanda will be realized or, even if substantially realized, that they will have the expected consequences to, or effects on, Vanda. Forward‑looking statements in this press release should be evaluated together with the risks and uncertainties described in the sections titled “Cautionary Note Regarding Forward‑Looking Statements,” “Risk Factors,” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations” in Vanda’s most recent Annual Report on Form 10‑K, as updated by Vanda’s subsequent Quarterly Reports on Form 10‑Q, Current Reports on Form 8‑K, and other filings with the U.S. Securities and Exchange Commission, which are available at www.sec.gov.

    All forward‑looking statements attributable to Vanda or any person acting on its behalf are expressly qualified in their entirety by the cautionary statements set forth herein. Vanda cautions investors not to place undue reliance on forward‑looking statements. The information contained in this press release is provided as of the date hereof, and Vanda undertakes no obligation, and expressly disclaims any obligation, to update or revise any forward‑looking statements, whether as a result of new information, future events, or otherwise, except as required by law.

    Corporate Contact:

    Kevin Moran
    Senior Vice President, Chief Financial Officer and Treasurer
    Vanda Pharmaceuticals Inc.
    202-734-3400
    [email protected]

    Jim Golden / Jack Kelleher / Dan Moore
    Collected Strategies
    [email protected]

    Follow us on X @vandapharma

    SOURCE Vanda Pharmaceuticals Inc.


  • Why legal operations needs a technology reset

    Why legal operations needs a technology reset

    Legal operations teams have more access to technology than ever before. New tools, platforms, and AI capabilities are entering the market at a rapid pace, promising to improve efficiency, reduce costs, and deliver better insights.

    On paper, it sounds like progress. In practice, many legal teams are experiencing something very different.

    Instead of simplifying workflows, legal tech stacks are becoming more complex and harder to manage. Systems don’t always connect. Data lives in multiple places. Teams spend more time learning and navigating tools than using them to drive outcomes.

    With thousands of legal technology providers and an ever-growing number of specialized solutions, it’s easier than ever to add another tool to the stack. But each new addition introduces new workflows, new data sources, and new decisions. Over time, that accumulation adds unnecessary cost, creates greater inefficiency, and makes it hard to deliver strategic value to the business.

    Is AI solving the problem or making it more challenging?

    According to our 2026 Future Ready Lawyer Survey Report, more than 90% of legal professionals regularly use AI. AI is now standard across the legal operations landscape.

    But that raises an important question: is AI helping to solve the tool proliferation problem, or simply adding to the burden?

    For many organizations, AI is being layered onto existing systems without a clear strategy for how it fits into the bigger picture. The result isn’t always simplification. In some cases, it’s just another layer of complexity.

    How can legal teams simplify their tech stacks?

    As legal operations teams explore ways to streamline and deliver value to their organizations, continuing to add technology without a clear plan isn’t sustainable. But neither is forgoing innovation. They need to balance adopting new capabilities with leveraging what they already have to reduce complexity and maximize efficiency.

    That’s what our new eBook is all about. Too Many Tools, Not Enough Clarity calls for a legal operations technology reset: a shift away from unchecked accumulation and toward a more intentional, streamlined approach to building and managing a cohesive, simplified legal tech stack.

    It takes a closer look at:

    • Why legal tech ecosystems have become so complex
    • How AI is impacting the landscape
    • What it means to take a more intentional approach to technology

    Now is the time for legal operations teams to pause and take a closer look at their legal technology strategies so they can move forward with clarity and confidence. Read the eBook to understand why a reset is needed and what’s at stake if it doesn’t happen.
