Category: 3. Business

  • Aker Solutions Secures First Contract with Gassco to Upgrade Franpipe Onshore Facility

    The project involves upgrading critical pipeline components at the Franpipe onshore facility in Dunkerque, France.  

    The upgrade will enhance reliability and safety, supporting the integrity of gas transport from the Norwegian continental shelf to the European market. Engineering activities will be led by Aker Solutions’ Bergen office, with work scheduled to begin in early 2026 and construction planned for 2027. 

    “Aker Solutions looks forward to partnering with Gassco and supporting our new customer in strengthening Europe’s energy security,” said Paal Eikeseth, Executive Vice President and head of Aker Solutions’ Life Cycle Business. 


  • Multi-agent AI could change everything – if researchers can figure out the risks

    You might have seen headlines sounding the alarm about the safety of an emerging technology called agentic AI.

    That’s where Sarra Alqahtani comes in. An associate professor of computer science at Wake Forest University, she studies the safety of AI agents through the new field of multi-agent reinforcement learning (MARL).

    Alqahtani received a National Science Foundation CAREER award to develop standards and benchmarks to better ensure that multi-agent AI systems will continue to work properly, even if one of the agents fails or is hacked.

    AI agents do more than sift through information to answer questions, like the large language model (LLM) technology behind tools such as ChatGPT and Google Gemini. AI agents think and make decisions based on their changing environment – like a fleet of self-driving cars sharing the road.

    Multi-agent AI offers innumerable opportunities. But failure could put lives at risk. Here’s how Alqahtani proposes to solve that problem.

    What’s the difference between the AI behind ChatGPT and the multi-agent AI you study?

    ChatGPT is trained on a huge amount of text to predict what the next word should be, what the next answer should be. It’s driven by human writing. For AI agents that I build – multi-agent reinforcement learning – they think, they reason and they make decisions based on the dynamic environment around them. So they don’t only predict, they predict and they make decisions based on that prediction. They also identify the uncertainty level around them and then make a decision about that: Is it safe for me to make a decision or should I consult a human?

    AI agents live in certain environments, and they react and act in these environments to change the environments over time, like a self-driving car. ChatGPT still has some intelligence, but that intelligence is connected to the text, to the predictability of the text, and not to acting or making a decision.

    You teach teams of AI agents through a process called multi-agent reinforcement learning, or MARL. How does it work? 

    There is a team of AI agents collaborating to achieve a certain task. You can think of it as a team of medical drones delivering blood or medical supplies. They need to coordinate and decide, on time, what to do next: speed up, slow down, wait. My research focus is on building and designing algorithms to help them coordinate efficiently and safely without causing any catastrophic consequences to themselves and to humans.

    Reinforcement learning is the learning paradigm that is actually behind even how we humans learn. It trains the agent to behave by making mistakes and learning from its mistakes. So we give them rewards and we give them punishments if they do something good or bad. Rewards and punishments are mathematical functions, basically a number, a value. If you do something good as an agent, I’ll give you a positive number. That tells the agent’s brain that’s a good thing.
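
    To make the reward-and-punishment idea concrete, here is a minimal sketch in Python. It is illustrative only and not taken from Alqahtani’s research: the action names, the reward function and the learning rate are all hypothetical, chosen just to show a single agent learning which action a toy reward signal favors.

    ```python
    import random

    # Hypothetical action values the agent is learning; the names are illustrative only.
    action_values = {"speed_up": 0.0, "slow_down": 0.0, "wait": 0.0}
    learning_rate = 0.1

    def reward_for(action: str) -> float:
        """Toy reward function: in this made-up scenario, waiting is the safe choice."""
        return 1.0 if action == "wait" else -1.0

    for step in range(1000):
        # Try an action and observe the reward: the positive or negative number
        # that tells the agent whether what it did was good or bad.
        action = random.choice(list(action_values))
        reward = reward_for(action)
        # Nudge the stored value toward the observed reward (learning from mistakes).
        action_values[action] += learning_rate * (reward - action_values[action])

    # After training, the agent prefers the action the toy reward signal favored ("wait").
    print(max(action_values, key=action_values.get))
    ```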

    We researchers anticipate the problems that could happen if we deploy our AI algorithms in the real world, and then we simulate these problems, deal with them, and patch the security and safety issues, hopefully before we deploy the algorithms. As part of my research, I want to develop the foundational benchmarks and standards for other researchers to encourage them to work on this very promising area that’s still underdeveloped.

    It seems like multi-agent AI could offer many benefits, from taking on tasks that might endanger humans to filling in gaps in the healthcare workforce. But what are the risks of multi-agent AI?

    When I started working on multi-agent reinforcement learning, I noticed that when we add small changes to the environment or the task description for the agents, they will make mistakes. So they are not totally safe unless we train them on the same exact task again and again, like a million times. Also, when we compromise one agent – and by compromise, I mean we assume there’s an attacker taking over and changing that agent’s actions away from its optimal behavior – the other agents will also be impacted severely, meaning their decision-making will be disrupted because one of the team is doing something unexpected.

    We test our algorithms in gaming simulations because they are safe and we have clear rules for the games, so we can anticipate what’s going to happen if the agents make a mistake. The big risk is moving them from simulations to the real world. That’s my research area: how to keep them behaving predictably and avoid making mistakes that could affect them and humans.

    My main concern is not the sci-fi part of AI, that AI is going to take over or AI is going to steal our jobs. My main concern is how we are going to use AI and how we are going to deploy it. We have to test and make sure our algorithms are understandable for us and for end users before we deploy them out there in the world.

    You have received an NSF CAREER award to make multi-agent AI safer. What are you doing?

    Part of my research is to develop standards, benchmarks, baselines that encourage other researchers to be more creative with the technology to develop new, cutting-edge algorithms.

    My research is trying to solve the transitioning of the algorithms from simulation to the real world, and that involves paying attention to the safety of the agents and their trustworthiness. We need to have some predictability of their actions, and then at the same time, we want them to behave in a safe manner. So we want to not only optimize them to do the task efficiently, we want them also to do the task safely for themselves, as equipment, and for humans. 

    I’ll test my MARL algorithms on teams of drones flying over the Peruvian Amazon rainforest to detect illegal gold mining. The idea is to keep the drones safe while they explore, navigate and detect illegal gold mining activity, and to keep them from being shot down by illegal gold miners. I work with a team with diverse expertise – hardware engineers, researchers in ecology, biology and environmental sciences, and the Sabin Center for Environment and Sustainability at Wake Forest.

    There’s a lot of hype about the future of AI. What’s the reality? Do you trust AI?

    I do trust the AI that I work on, so I would flip the question and say, do I trust the humans who work on AI? Would you trust riding with an Uber driver in the middle of the night in a strange city? You don’t know that driver. Or would you trust the self-driving car that has been tested in the same situation, in a strange city?

    I would trust the self-driving car in this case. But I want to understand what the car is doing. And that’s part of my research, to provide explanations of the behavior of the AI system for the end users before they actually use it. So they can interact with it and ask it, what’s going to happen if I do this? Or if I put you in that situation, how are you going to behave? When you interact with something and you ask these questions and you get to understand the system, you’ll trust it more. 

    I think the question should be, do we have enough effort going into making these systems more trustworthy? Do we spend more effort and time to make them trustworthy?




  • Attorney General Rayfield Announces Nearly $150 Million Settlement with Mercedes-Benz USA And Daimler Over Emissions Fraud – Oregon Department of Justice

    1. Attorney General Rayfield Announces Nearly $150 Million Settlement with Mercedes-Benz USA And Daimler Over Emissions Fraud  Oregon Department of Justice
    2. Mercedes reaches $150 million settlement with US states over diesel scandal  Reuters
    3. Mercedes-Benz to pay $150M for emissions cheating, misleading consumers  WRGB
    4. Attorney General Sunday Announces $6.6 Million Share for Pennsylvania from National Settlement with Mercedes-Benz Over Emissions Fraud  attorneygeneral.gov
    5. BREAKING: Mercedes, Daimler Ink $150M Deal In Emissions Cheating Claims  Law360


  • In-Depth: Privacy, Data Protection and Cybersecurity | Insights

    The 12th edition of Lexology In-Depth: Privacy, Data Protection and Cybersecurity (formerly The Privacy, Data Protection and Cybersecurity Law Review) provides an incisive global overview of the legal and regulatory regimes governing data privacy and security. With a focus on recent developments, it covers key areas such as data processors’ obligations; data subject rights; data transfers and localisation; best practices for minimising cyber risk; public and private enforcement; and an outlook for future developments. A number of lawyers from Sidley’s global Privacy and Cybersecurity practice have contributed to this publication. See the chapters below for a closer look at this developing area of law.

    • Global Overview: David C. Lashway
    • EU Overview: William RM Long, Francesca Blythe, Lauren Cuyvers, Matthias Bruynseraede
    • USA: David C. Lashway, Jonathan M. Wilan, Sheri Porath Rockwell
    • United Kingdom: William RM Long, Francesca Blythe, Eleanor Dodding


  • Financial Industry Forum on Artificial Intelligence workshop: interim report

    December 22, 2025

    Ottawa, Ontario

    On November 13, 2025, the Financial Consumer Agency of Canada (FCAC) and the Global Risk Institute (GRI) co-hosted a workshop on financial well-being and consumer protection. Issues discussed included the opportunities and emerging risks associated with AI adoption, best practices in the use of AI to empower and protect financial consumers, and the path forward for AI in the financial services sector in Canada. 

    The workshop was attended by over 55 representatives from a diverse array of organizations, from Canada’s largest banks and technology companies to consumer advocacy groups, law firms and academia.

    The event was the fourth in a series of workshops co-hosted by GRI and financial sector regulators as part of the second Financial Industry Forum on Artificial Intelligence (FIFAI II). 

    You can read the interim report summarizing the FCAC-GRI workshop here. A full report on the outcome of all 4 workshops will be available in spring 2026. 

    Associated links

    Interim report from FIFAI II Workshop 1: Security and Cybersecurity

    Interim report from FIFAI II Workshop 2: Financial Crime

    Interim report from FIFAI II Workshop 3: Financial Stability

    OSFI-FCAC Risk Report – AI Uses and Risks at Federally Regulated Financial Institutions


  • Yes, Climate Wins Are Still Happening! Let’s Keep It Up in 2026. – NRDC

    1. Yes, Climate Wins Are Still Happening! Let’s Keep It Up in 2026.  NRDC
    2. How Trump’s First Year Reshaped U.S. Energy and Climate Policy  The New York Times
    3. Why Are Wind, Solar and Nuclear Advocates So Optimistic in 2026?  Broadband Breakfast
    4. The Green Renaissance: Why ESG and Clean Energy are Defying the 2025 Slump  FinancialContent
    5. Clean Energy Faces Turbulent 2025 Amid Policy Shifts and Challenges  SSBCrack News


  • AHN, Highmark Health Aim to Bridge Language Gaps in Health Care with New ‘I Speak’ Card Program

    Each year, thousands of AHN patients and Highmark members require interpretation services to better understand their care journeys

    PITTSBURGH, Dec. 22, 2025 /PRNewswire/ — Allegheny Health Network (AHN), in partnership with its parent organization Highmark Health, today announced the launch of “I Speak,” a campaign to improve communication with patients whose first language isn’t English. The campaign uses hospital posters, handouts, and wallet cards to help caregivers quickly identify a patient’s preferred language and swiftly connect them to appropriate interpreter services upon arrival at an AHN medical facility.

    “I Speak” was originally introduced last year at Highmark Health’s insurance arm, Highmark Inc., in retail locations throughout Delaware. Today’s announcement marks a significant expansion of the program to AHN’s health care facilities in the western Pennsylvania region and to Highmark’s Direct Stores in Pennsylvania and New York.

    The wallet-sized cards, designed to mimic an insurance card, are printed in 29 languages including American Sign Language (ASL) and are placed in all high-traffic areas – like emergency departments and labor & delivery units – across the network’s 10 acute, full-service hospitals. They are also being made available at AHN’s outpatient medical facilities. 

    Each year, thousands of AHN patients and Highmark members require real-time translation services when receiving care, making an appointment, or seeking information about their insurance coverage. The U.S. Census Bureau estimates that more than 25 million people in the U.S. have limited English proficiency.

    According to research published by Midwestern University, communication barriers between patients and health professionals may negatively impact the quality of medical care. For example, patients who present at a health care facility with a language barrier are more likely to receive larger workups such as blood draws, have longer emergency department stays, be admitted to the hospital more often, and incur higher medical charges.

    Lack of appropriate medical interpretation has also been cited as a source of increased anxiety for patients, and patients with language barriers have been known to spend less time with certain therapies, experience more medical errors, and be much less likely to seek mental health counseling.

    The “I Speak” initiative is led by Highmark Health’s Institute for Strategic Social and Workforce Programs (S2W), a team focused on accessible, high-quality care across the region and beyond.

    “When we can better understand one another, we can better take care of one another,” said Veronica Villalobos, vice president of S2W at AHN and Highmark Health. “Our primary goal with every initiative is to ensure that all patients and members we serve receive the best possible care, regardless of their primary language. ‘I Speak’ cards at AHN facilities specifically empower patients to communicate their needs effectively and ensure that our care teams can provide appropriate support.”

    After a patient identifies his or her preferred language, AHN can engage its language interpreter service provider, Cyracom, which is available 24/7 and tailored for healthcare-specific needs and requests.

    “This program helps to ensure requests for interpretation services made by AHN caregivers take place with maximum efficiency and minimum misunderstanding,” Villalobos continued. “It’s an opportunity for our organization to further simplify care and improve the patient experience for everyone we serve.”

    Across AHN facilities from January through June of this year, Spanish was the most requested non-English language by a significant margin (25.7%), followed by Haitian Creole (22.7%), Nepali, Arabic, and ASL. Of all answered calls to 412-DOCTORS, 46% were from Spanish-speaking callers, with the second most requested non-English language being Nepali at 14.1%.

    SOURCE Allegheny Health Network


  • Driving growth in PEI’s tech industry

    CLIENT NAME | PROJECT | TOTAL | MEDIA CONTACT
    Discovery Garden | Support the company’s digital repository product evolution and development | $675,000 repayable | Gerry Lawless, CEO, gerry@discoverygarden.ca
    Stemble Learning | Develop and implement artificial intelligence to enhance its educational technology software | $550,000 repayable | Jason Pearson, CEO, jason@stemble.ca
    Tracktile | Support the development of its operations software for manufacturers | $250,000 repayable | Jordan Rose, CEO, jordan.rose@tracktile.io
    Thinking Big | Product development and marketing of a new service navigation tool | $222,200 repayable | Charley McGivern, Administrator, charley@thinkingbig.net
    Thinking Big | Support business development focused on export growth to new markets for digital products and professional services | $50,000 non-repayable | Charley McGivern, Administrator, charley@thinkingbig.net
    Iron Fox Games | Support the scale-up of its organization as it expands its video game development team | $200,000 repayable | Ryan Filsinger, CEO, ryan@ironfoxgames.com
    Causable | Support commercialization of fundraising software and export market expansion | $191,000 repayable | Joeanne Thomson, CEO, joeanne@causable.io
    3 Pie Squared | Develop the company’s digitalization, automation, and AI integration strategy to help ABA practices improve their systems | $50,000 non-repayable | Stephen Smith, CEO, stephen@3piesquared.com
    TOTAL | | $2,188,200 |


  • AG Jennings announces nearly $150 million emissions fraud settlement with Mercedes-Benz USA and Daimler AG

    NOTE: AG Jennings and Connecticut AG William Tong announced this settlement at a Zoom press conference this morning. Video of the Zoom announcement is available here.

    Attorney General Kathy Jennings, together with her counterparts in Connecticut and Maryland, led a coalition of 50 attorneys general announcing a nearly $150 million settlement with Mercedes-Benz USA and Daimler AG for violating state laws prohibiting unfair or deceptive trade practices by marketing, selling and leasing vehicles equipped with illegal and undisclosed emissions defeat devices designed to circumvent emissions standards. The settlement also includes more than $200 million in potential consumer relief.

    “For nearly a decade, Mercedes sold vehicles that were marketed as clean and environmentally responsible while secretly polluting far beyond legal limits,” said AG Jennings. “This settlement holds Mercedes accountable for deceiving consumers, evading emissions laws, and putting public health at risk. We expect honesty in the marketplace and clean air in our communities. Today’s agreement delivers both meaningful penalties and real relief for affected drivers.”

    “Vehicle emissions are one of the largest contributors to air pollution in Delaware, so our air quality depends on properly operated vehicle emission control systems,” said Delaware Department of Natural Resources & Environmental Control Sec. Greg Patterson. “All vehicle manufacturers need to do their part by meeting the emission requirements, and we appreciate Attorney General Jennings and her staff leading this multistate investigation and settlement concerning our air.”

    Beginning in 2008 and continuing to 2016, the states allege Mercedes manufactured, marketed, advertised, and distributed nationwide more than 211,000 diesel passenger cars and vans equipped with software defeat devices that optimized emission controls during emissions tests, while reducing those controls outside of normal operations. The defeat devices enabled vehicles to far exceed legal limits of nitrogen oxides (NOx) emissions, a harmful pollutant that causes respiratory illness and contributes to the formation of smog. Mercedes engaged in this conduct to achieve design and performance goals, such as increased fuel efficiency and reduced maintenance, that it was unable to meet while complying with applicable emission standards. Mercedes concealed the existence of these defeat devices from state and federal regulators and the public. At the same time, Mercedes marketed the vehicles to consumers as “environmentally-friendly” and in compliance with applicable emissions regulations.

    Today’s settlement requires Mercedes-Benz USA and Daimler AG to pay $120 million to the states upon the effective date of the settlement. An additional $29,673,750 will be suspended and potentially waived pending completion of a comprehensive consumer relief program. Delaware will receive $3.6 million through today’s settlement.

    The consumer relief program extends to the estimated 39,565 vehicles that had not been repaired or permanently removed from the road in the United States by August 1, 2023. Mercedes must bear the cost of installing approved emission modification software on each of the affected vehicles. The companies must provide participating consumers with an extended warranty and will pay consumers $2,000 per subject vehicle.

    The companies must also comply with reporting requirements, reform their practices, and refrain from any further unfair or deceptive marketing or sale of diesel vehicles, including misrepresentations regarding emissions and compliance.

    Today’s settlement follows similar settlements reached previously between the states and Volkswagen, Fiat Chrysler and German engineering company Robert Bosch GmbH over its development of the cheat software. Automaker Fiat Chrysler and its subsidiaries paid $72.5 million to the states in 2019. Bosch paid $98.7 million in 2019. Volkswagen reached a $570 million settlement with the states in 2016.

    Delaware co-led this multistate investigation and settlement with the attorneys general of Connecticut and Maryland. They were assisted by Alabama, Georgia, New Jersey, New York, South Carolina, and Texas.  The final settlement was also joined by Alaska, Arkansas, Colorado, the District of Columbia, Florida, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Dakota, Tennessee, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, and Puerto Rico.



  • Gibson Dunn Ranked in the 2026 Edition of Lexology Data 100 Global Elite

    Accolades  |  December 22, 2025

    Lexology


    Gibson Dunn has been ranked No. 7 in the 2026 edition of Lexology Data 100 Global Elite (formerly Global Data Review 100), which recognizes “the world’s best data law firms.” Highlighted as a “powerhouse in US litigation and contentious work,” the firm was ranked No. 2 in Litigation, and No. 6 for both Investigations and Advisory. The publication added that the “firm is routinely tapped to work for the biggest clients on global and truly business-critical issues.”
