Category: 3. Business

  • Stretching the boundaries of competition law too far – the rail fares litigation


    The Competition Appeal Tribunal (CAT) has handed down its much-awaited judgment in the “Boundary Fares” cases. It held that the Train Operating Companies had not abused any dominant position they may hold through their conduct in relation to so-called Boundary Fares.

    Collective Proceedings (or mass claims based on an infringement of UK Competition Law) have been much in vogue in recent years as private enforcement of Competition Law has become a reality in the English Courts. A number of claimants and their advisers have sought to bring Collective Proceedings which have a very tenuous basis under UK Competition Law.

    In the rail fares litigation, the claimant (or class representative) sought to broaden the types of behaviour deemed to be an “abuse of a dominant position” to include:

    1. Failure to make Boundary Fares sufficiently available for sale to customers holding a valid TfL Travelcard, and
    2. Failure to ensure that customers holding a valid TfL Travelcard were aware of the existence of Boundary Fares when buying tickets.

    Having reviewed the evidence, the CAT unanimously concluded that, on the assumption that the three Train Operating Company defendants each held a dominant position, none of the conduct alleged against them constituted an abuse of that position.

    Whilst “abuse” is a broad concept, and exploitative abuse by “unfair” conduct should develop to reflect new patterns of commerce, the CAT observed that the concept is not unlimited. It also observed that Competition Law is not a general law of consumer protection.

    The fact that the dominant company could have carried out a particular aspect of its business better, or in a different way that would have benefited consumers, does not mean that this conduct crosses the line to constitute “abuse”.

    Strong and compelling evidence is required to establish abuse of a dominant position, and this was lacking in the rail fares litigation. There were particular reasons why so-called Boundary Fares were made available in the way that they were by the three Train Operating Companies, and the fact that passengers did not buy a Boundary Fare, or bought smaller numbers of Boundary Fares compared with the number of Travelcards sold, did not establish that they were unaware of this option.

    The CAT noted that a dominant company has no duty under Competition Law actively to assist all its customers to pay the lowest price or to buy the optimal product for their needs. It was accepted that it would have been possible for the Train Operating Companies to do further marketing in relation to Boundary Fares. However, this was just one type of fare among many. Each company had to choose its priorities, both in terms of expenditure generally and as the subject of its marketing campaigns.

    Collective Proceedings are an expensive form of litigation often funded by litigation funders. When chosen appropriately, such proceedings can help to bring redress to persons who would otherwise find it difficult to pursue a valid claim. A number of Collective Proceedings claims in the English Courts have either failed or have recovered much smaller sums than were originally claimed. This is to be expected as a system develops and “finds its feet”.

    Litigation funders and lawyers alike can be expected to study carefully the findings of the CAT in the rail fares litigation. There are lessons to be learned from this litigation and other Collective Proceedings actions.

    Such claims will continue to be brought but may increasingly be limited to situations where the legal basis for the claim is clear and/or the level of loss to class members is readily ascertainable.

    In the meantime, the Train Operating Companies in the rail fares litigation will feel relieved that the concept of “abuse” has not been extended to cover situations not previously found to constitute a breach of the so-called “special responsibility” which applies to dominant companies.


  • High Representative Izumi Nakamitsu Delivers Keynote Remarks at the Singapore International Cyber Week "Shaping the Next Era of Global Cybersecurity" – United Nations Office for Disarmament Affairs

    1. Singapore International Cyber Week 2025: Shaping the Future of Cyber Resilience in the Indo-Pacific – Australian Cyber Security Magazine
    2. AISec @ GovWare 2025 to Lead Industry Dialogue and AI Security – ANTARA News
    3. S2W showcases AI security platforms at GovWare 2025 in Singapore – Chosun Biz
    4. Criminal IP to Showcase ASM and CTI Innovations at GovWare 2025 in Singapore – Yahoo Finance


  • Shared Residual Liability for Frontier AI Firms

    As artificial intelligence (AI) systems become more capable, they stand to dramatically improve our lives—facilitating scientific discoveries, medical breakthroughs, and economic productivity. But capability is a double-edged sword. Despite their promise, advanced AI systems also threaten to do great harm, whether by accident or because of malicious human use.

    Many of those closest to the technology warn that the risk of an AI-caused catastrophe is nontrivial. In a 2023 survey of over 2,500 AI experts, the median respondent placed the probability that AI causes an extinction-level event at 5 percent, with 10 percent of respondents placing the risk at 25 percent or higher. Dario Amodei, co-founder and CEO of Anthropic—one of the world’s foremost AI companies—believes the risk to be somewhere between 10 percent and 25 percent. Nobel laureate and Turing Award winner Geoffrey Hinton, the “Godfather of AI,” after once venturing a similar estimate, now places the probability at more than 50 percent. Amodei and Hinton are among the many leading scientists and industry players who have publicly urged that “mitigating the risk of extinction from AI should be a global priority” on par with “pandemics and nuclear war” prevention.

    These risks are far-reaching. Malicious human actors could use AI to design and deploy novel biological weapons or attempt massive infrastructural sabotage and disable power grids, financial networks, and other critical systems. Apart from human-initiated misuses, AI systems by themselves pose major risks due to the possibility of loss of control and misalignment—that is, the gap between a system’s behavior and its intended purpose. As these systems become more general purpose and able, their usefulness will drive mounting pressure to integrate them into increasing arenas of human life, including highly sensitive domains like military strategy. If AI systems remain opaque and potentially misaligned, as well as vulnerable to deliberate human misuse, this is a dangerous prospect.  

    Despite the risks, frontier AI firms continue to underinvest in safety. This underinvestment is driven, in large part, by three major challenges: AI development’s judgment-proof problem, its perverse race dynamic, and AI regulation’s pacing problem. To address these challenges, I propose a shared residual liability regime for frontier AI firms. Modeled after state insurance guaranty associations, the regime would hold frontier AI companies jointly liable for catastrophic damages in excess of individual firms’ ability to pay. This would lead the industry to internalize more risk as a whole and would incentivize firms to monitor each other to reduce their shared financial exposure.

    Three Challenges Driving Firms’ Underinvestment in Safety

    No single firm is financially capable of covering the full damages of a sufficiently catastrophic event. The cost of the coronavirus pandemic to the U.S., as a reference point, has been estimated at $16 trillion; an AI system might be used to deploy a virus even more contagious and deadly. Hurricane Katrina caused an estimated $125 billion in damages; an AI system could be used to target and compromise infrastructure on an even more devastating scale.

    No AI firm by itself is likely to have the financial capacity to fully cover catastrophic damages of this magnitude. This is AI’s judgment-proof problem. (A party is “judgment proof” when it is unable to pay the full amount of damages for which it is liable.) Two principal failures result: underdeterrence and undercompensation. Because liability is effectively capped at a firm’s ability to pay, firms lack any financial incentive to internalize risk beyond that cap. The shortfall between total damages and what firms can actually pay is accordingly externalized, absorbed by the now undercompensated victims of the harm that the firm causes.

    This judgment-proof problem is compounded by the perverse race dynamic that characterizes frontier AI development. There are plausibly enormous first-mover advantages to bringing a highly sophisticated, general-purpose AI model to market, including disproportionate market share, preferential access to capital, and potentially even dominant geopolitical leverage. These stakes make frontier AI development an extremely competitive affair in which firms have incentives to underinvest in safety. Unilaterally redirecting compute, capital, and other vital resources away from capabilities development and toward safety management risks ceding ground to faster-moving rivals who don’t do the same. Unable to trust that their precaution will not be exploited by competitors, each firm is incentivized to cut corners and press forward aggressively, even if all would prefer to slow down and prioritize safety. 

    Recent comments by Elon Musk, the CEO of xAI, illustrate this prisoner’s dilemma in unusually bald terms:

    You’ve seen how many humanoid robot startups there are. Part of what I’ve been fighting—and what has slowed me down a little—is that I don’t want to make Terminator real. Until recent years, I’ve been dragging my feet on AI and humanoid robotics. Then I sort of came to the realization that it’s happening whether I do it or not. So you can either be a spectator or a participant. I’d rather be a participant. Now it’s pedal to the metal on humanoid robots and digital superintelligence.

     The structure of competition, in other words, can render safety investment a strategic liability.

    Traditional command-and-control regulatory approaches struggle, meanwhile, to address these issues because the speed of AI development vastly outpaces that of conventional regulatory response (constrained as the latter is by formal legal and bureaucratic process). Informational and resource asymmetries fuel and compound this mismatch, with leading AI firms generally possessing superior technical expertise and greater resources than lawmakers and regulators, who are on the outside looking in. By the time regulators develop sufficient understanding of a given system or capability and navigate the relevant institutional process to implement an official regulatory response, the technology under review may already have advanced well past what the regulation was originally designed to address. (For example, a rule that subjects models only above a certain fixed parameter threshold to safety audits might quickly become underinclusive as new architectures achieve dangerous capabilities at smaller scales.) A gap persists between frontier AI and the state’s capacity to efficiently oversee it. This is AI regulation’s pacing problem. 

    Shared Residual Liability for Frontier AI Firms

    In a recent paper, I propose a legal intervention to help mitigate these challenges: shared residual liability for frontier AI firms. Under a shared residual liability regime, if a frontier AI firm causes a catastrophe that results in damages exceeding its ability to pay (or some other predetermined threshold), all other firms in the industry would be required to collectively cover the excess damages. 

    Each firm’s share of this residual liability would be allocated proportionate to its respective riskiness. Riskiness could be approximated with a formula that takes into account inputs like compute, parameter count, and revenue from AI products. This mirrors the approach of the Federal Deposit Insurance Corporation (FDIC), which calculates assessments based on formulas that synthesize various financial and risk metrics. The idea, in part, is to continue to incentivize firms to decrease their own risk profiles; the less risky a firm is, the less it stands to have to pay in the event one of its peers triggers residual liability. 
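    The pro rata, risk-weighted allocation described above can be sketched in a few lines. The metrics, weights, and firm figures below are illustrative assumptions, not the paper’s actual formula; the point is only that each firm’s contribution share scales with a composite risk score, so reducing one’s own risk profile directly reduces one’s exposure.

```python
# Illustrative sketch of a risk-weighted residual liability allocation.
# The risk proxies, weights, and firm data are hypothetical assumptions;
# the proposal describes the approach, not any particular formula.

def risk_score(compute, parameters, ai_revenue,
               w_compute=0.4, w_params=0.3, w_revenue=0.3):
    """Combine normalized risk proxies into a single composite score."""
    return w_compute * compute + w_params * parameters + w_revenue * ai_revenue

def allocate_residual(excess_damages, firms):
    """Split excess damages among peer firms pro rata by risk score."""
    scores = {name: risk_score(**metrics) for name, metrics in firms.items()}
    total = sum(scores.values())
    return {name: excess_damages * s / total for name, s in scores.items()}

# Hypothetical industry; metrics are pre-normalized to [0, 1].
firms = {
    "FirmA": {"compute": 0.9, "parameters": 0.8, "ai_revenue": 0.7},
    "FirmB": {"compute": 0.5, "parameters": 0.6, "ai_revenue": 0.4},
    "FirmC": {"compute": 0.2, "parameters": 0.1, "ai_revenue": 0.2},
}

# A hypothetical $100M shortfall left after the responsible firm pays out.
shares = allocate_residual(100_000_000, firms)
```

    The shares always sum to the full shortfall, and the riskiest firm pays the largest slice, mirroring the FDIC-style assessment logic described above.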

    The regime exposes all to a portion of the liability risk created by each. In doing so, it prompts the industry to collectively internalize more of the risk it generates and incentivizes firms to monitor each other.

    The Potential Virtues of Shared Residual Liability

    Shared residual liability has a number of potential virtues. First, it would help mitigate AI’s judgment-proof problem by increasing the funds available for victims and, therefore, the amount of risk the industry would collectively internalize. 

    Second, it could help counteract AI’s perverse race dynamic. By tying each firm’s financial fate to that of its peers, shared residual liability incentivizes firms to monitor, discipline, and cooperate with one another in order to reduce what are now shared safety risks. 

    Thus incentivized, firms might broker inter-firm safety agreements that commit, for instance, to the increased development and sharing of alignment technology to detect and constrain dangerous AI behavior. Currently, firms plausibly face socially suboptimal incentives to unilaterally invest in such technology, despite its potentially massive social value. This is because (a) alignment tools stand to be both non-rival (one actor’s use of a given software tool does not diminish its availability to others) and non-excludable (once released, these tools may be easily copied or reverse-engineered) and so are difficult to profit from, and (b) under the standard tort system, firms are insufficiently exposed to the true downsides of catastrophic failures (the judgment-proof problem). Shared residual liability changes this incentive landscape. Because firms would bear partial financial responsibility for the catastrophic failures of their peers, each firm would have a direct stake in reducing not only its own risk but also that of the industry more generally. Developing and widely sharing alignment tools (which lower every adopter’s risk) would, accordingly, be in every firm’s interest. 

    Inter-firm safety agreements might also commit to certain negotiated safety practices and third-party audits. Plausibly, collaboration of this sort, undertaken genuinely to promote safety, would withstand antitrust’s rule of reason; but, to remove doubt on this score, the statute implementing the regime could be outfitted with an explicit antitrust exemption. 

    Firms could also establish mutual self-insurance arrangements to protect themselves from the new financial risks residual liability would expose them to. An instructive analogue here is the mutuals that many of the largest U.S. law firms—despite the competitive nature of Big Law and the commercial availability of professional liability insurance—have formed and participate in. To improve efficiency and curb moral hazard, firms might structure these mutuals in tiers (e.g., basic coverage for firms that meet minimum safety standards, with further layers of coverage tied to additional safety practices) and scale contributions (premiums, as it were) to risk profiles. In seeking to efficiently protect themselves, firms—harnessing the safety-promoting, regulatory power of insurance—would simultaneously benefit the public.  

    To be sure, firms may well devise other, more efficient means of reducing shared risk. Shared residual liability’s light-touch approach embraces this likelihood. Firms, with their superior knowledge, resources, and ability to act in real time, are plausibly the best positioned to identify and stage effective and cost-efficient safety-management interventions. Shared residual liability gives firms the freedom (and with the stick of financial exposure, provides them with the incentive) to do just this, leveraging their comparative advantages for pro-safety ends. This describes a third virtue of shared residual liability: By shifting some responsibility for safety governance from slow-moving, resource-constrained regulators to the better-positioned firms themselves, it offers a partial solution to the pacing problem. 

    Finally, a fourth virtue of shared residual liability is its modularity. In principle, it is compatible with many other regulatory instruments; as a basic structural overlay, it is a team player that stands to strengthen the incentives of whichever other regulatory interventions it is paired with. This compatibility makes it particularly attractive in a regulatory landscape that is still evolving.

    Shared residual liability might, for instance, be layered atop commercial AI catastrophic risk insurance, should such insurance become available. This coverage would simply raise the threshold at which residual liability would activate. Or it might be layered atop reforms to underlying liability law. Shared residual liability is itself separate from and agnostic as to the doctrine that governs first-order liability determinations; it simply provides a structure for allocating the residuals of that liability once the latter is found and once the firm held originally liable proves judgment proof. (By the same token, as a second-order mechanism, an effective shared residual liability regime requires efficient underlying liability law. If firms are rarely found liable in the first instance, shared residual liability loses its bite, as firms would then have little reason to fear residual liability, and the regime’s intended incentive effects would struggle to get off the ground.) 

    What About Moral Hazard? 

    One might worry that shared residual liability, in spreading the costs of catastrophic harms, invites moral hazard. Moral hazard describes the tendency for actors to take greater risks when they do not bear the full consequences of those risks. It theoretically arises whenever actors can externalize part of the costs of their risky conduct. Under a shared residual liability regime, if a firm expects peers to absorb part of the fallout from its own failures, one concern might be that each firm’s incentive to individually take care will weaken. 

    With the right design specifications, however, moral hazard can be largely contained. Likely the cleanest way of doing so is with an exhaustion threshold: Residual liability would not activate until the firm that causes a catastrophe exhausts its ability to pay. This follows the model of state guaranty funds, which are not triggered until a member insurer goes insolvent. An exhaustion threshold minimizes moral hazard by ensuring that responsible firms bear maximum individual liability before any costs are transferred to peers; solvency functions as a kind of deductible.

    An exhaustion threshold, however, may not be optimal in the AI context. Requiring a frontier AI firm to fail before residual liability activates could be counterproductive if that firm is, for example, uniquely well positioned to mitigate future industrywide harm—perhaps because it is on the verge of a major safety breakthrough or some other humanity-benefiting discovery. Its failure not only might disrupt or set back ongoing safety efforts but also could lead to talent and important assets being acquired by less scrupulous or adversarial actors, including foreign competitors, raising greater safety and national security risks on net. All things considered, it might be better to keep such a firm afloat.

    Alternatives to an exhaustion threshold include a fixed monetary trigger (e.g., any judgment above $X), a percentage of the responsible firm’s maximum capacity, or a discretionary determination made by a designated administrative authority. Another approach might be to retain exhaustion as the default, but with exceptions permitted for exceptional cases such as those just gestured at above. 
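    The trigger alternatives above all reduce to choosing the threshold beyond which damages pass to peer firms. The following function is a hypothetical illustration, not part of the proposal itself; the threshold rules and the notion of “capacity” (a firm’s maximum ability to pay) are simplifying assumptions.

```python
# Illustrative sketch of alternative residual liability triggers.
# Thresholds and "capacity" (a firm's maximum ability to pay) are
# hypothetical simplifications, not terms defined by the proposal.

def residual_shortfall(damages, capacity, trigger="exhaustion",
                       fixed_threshold=None, capacity_fraction=None):
    """Return the amount passed to peer firms under a given trigger rule.

    - "exhaustion": peers pay only what exceeds the responsible firm's
      full ability to pay (solvency acts as a deductible).
    - "fixed": peers pay any damages above a fixed monetary threshold.
    - "fraction": peers pay damages above a set fraction of capacity.
    """
    if trigger == "exhaustion":
        threshold = capacity
    elif trigger == "fixed":
        threshold = fixed_threshold
    elif trigger == "fraction":
        threshold = capacity_fraction * capacity
    else:
        raise ValueError(f"unknown trigger: {trigger}")
    # The responsible firm never pays more than its own capacity.
    firm_pays = min(damages, threshold, capacity)
    return max(damages - firm_pays, 0)
```

    Lowering the trigger below exhaustion shifts more of the loss to peers sooner, which is exactly where the moral hazard concerns discussed next come into play.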

    Any moral hazard introduced from lowering the threshold below exhaustion can be addressed via further design decisions. Some moral hazard will be mitigated by a good residual liability allocation formula. When contribution rates are scaled according to a firm’s riskiness (the safer a firm is, the less it stands to owe), firms are incentivized to take care in order to reduce their obligations, countervailing moral hazard temptations notwithstanding. This is akin to how insurance uses scaled, responsive premium pricing to mitigate moral hazard. Additional design tools might include structuring residual payouts from nonresponsible firms as conditional loans that the responsible firm must pay back to the collective fund over time, and restricting access to government grants and regulatory safe harbors for firms that trigger residual liability. Moral hazard is, again, likely most neatly accounted for via an exhaustion threshold, but preexhaustion triggers might also be workable with the right design.   

    Finally, it is worth noting that moral hazard is not unique to residual liability. It is present under standard tort as well, only it goes by a different name: judgment proofness. Under the status quo, firms may engage in riskier behavior than is socially optimal because they know their liability is effectively capped by their own ability to pay. Any damages above that threshold are externalized onto victims. The cost of moral hazard is borne, in other words, by the public. Under a shared residual liability regime, by contrast, it is shifted to the rest of the AI industry. 

    Thus shifted, moral hazard—to the extent it persists—can in fact function as a sort of feature, not a bug, of the regime: If firms believe their peers are now emboldened to take greater risks, firms have all the more reason to pressure their peers against doing so. That is, the incentive to peer-monitor grows sharper as concerns about recklessness increase. The threat of moral hazard, thus redirected, can act as a productive force.

    ***

    Shared residual liability is not a panacea. It cannot by itself fully eliminate catastrophic AI risk or resolve all coordination failures. But it does offer a potentially robust framework for internalizing more catastrophic risk (mitigating AI development’s judgment-proof problem), and it would plausibly incentivize firms to coordinate and self-regulate in safety-enhancing directions (counteracting AI development’s perverse race dynamic and helping to get around AI regulation’s pacing problem). By aligning private incentives with public safety, a shared residual liability regime for frontier AI firms could be a valuable component of a broader AI governance architecture.


  • Barclays plays down £20bn exposure to private credit industry | Barclays


    Barclays has insisted it has the right controls in place to manage a £20bn exposure to the under-fire private credit industry despite warnings from the International Monetary Fund (IMF) and the Bank of England.

    The bank’s chief executive, CS Venkatakrishnan, said it ran a “very risk-controlled shop” and was comfortable with its lending standards for the private credit industry.

    That was despite taking a £110m loss over the US sub-prime auto lender Tricolor, which collapsed amid fraud allegations last month.

    Losses stemming from the dual collapse of Tricolor and the US auto parts company First Brands have raised fears over potentially weak lending standards in the private credit industry. There are concerns that the potential fallout could destabilise traditional banks that issue loans to the shadow banking sector.

    The governor of the Bank of England, Andrew Bailey, said this week that the recent failures had worrying echoes of the sub-prime mortgage crisis that kicked off the global financial crash of 2008. Last week the IMF warned that a downturn could have ripple effects across the financial system, given banks were increasingly exposed to a largely unregulated private credit industry.

    The chief executive, CS Venkatakrishnan, says Barclays runs a ‘very risk-controlled shop’. Photograph: Brendan McDermid/Reuters

    Venkatakrishnan said: “There are obviously connections between what non-bank financial institutions do and what banks do.” However, he suggested that the IMF report was pointing out probabilities and was ultimately “subjective”.

    When asked whether he agreed with the JP Morgan chief executive, Jamie Dimon, who said last week that more “cockroaches” could emerge from the private credit sector, the Barclays boss quipped: “I’m not an entomologist.”

    He said: “Whatever forms of lending you do, you should do it carefully and with the right controls.” When it came to private credit, he said Barclays limited lending to private credit loan portfolios “constructed by some of the largest, most experienced managers with a strong track record.

    “We have controls over them … [and] we think we run … a very risk-controlled shop when it comes to it, and that’s something we’ve instituted for a long, long time.”

    Venkatakrishnan said Barclays even turned down potential exposure to First Brands despite being approached multiple times.

    Venkatakrishnan said the loss on Tricolor itself was not a surprise. He said: “The surprise was the fraud. Now fraud is no excuse; we take our credit risk management very seriously at all points in the cycle.” However, he said lenders always had to be prepared for “all outcomes including fraud”.

    While Barclays revealed a £20bn exposure to the private credit sector, Venkatakrishnan noted it was “relatively small” compared with the £346bn of loans currently issued to consumers and business customers across the bank.


    His comments came as Barclays reported a 7% drop in pre-tax profits to £2.08bn in the three months to the end of September, down from £2.2bn during the same period last year.

    Alongside the Tricolor loss, Barclays’ earnings were also hit by a £235m provision to cover compensation over the car loan commissions scandal. This makes it the latest high street bank to set aside extra cash in response to the Financial Conduct Authority’s proposed £11bn redress programme.

    It takes Barclays’ total compensation pot to £325m. The company no longer provides car finance but is dealing with the fallout for the remaining loans on its books.

    However, that did not stop the bank from announcing fresh payouts for investors – another £500m worth of share buybacks. The bank also plans to switch to quarterly payouts for shareholders, rather than waiting for half-year and end-of-year earnings.

    “I continue to be pleased with the ongoing momentum of Barclays’ financial performance over the last seven quarters,” Venkatakrishnan said, adding that he was upgrading the profitability guidance – under a measure known as return on tangible equity – for the full year.


  • Jaguar Land Rover hack has cost UK economy £1.9bn, experts say | Jaguar Land Rover


    The hack of Jaguar Land Rover has cost the UK economy an estimated £1.9bn, potentially making it the most costly cyber-attack in British history, a cybersecurity body has said.

    A report by the Cyber Monitoring Centre (CMC) said losses could be higher if there were unexpected delays in returning the carmaker to the full production levels it achieved before the hack took place at the end of August.

    JLR was forced to shut down systems across all of its factories and offices after realising the extent of the penetration. The carmaker, Britain’s biggest automotive employer, only managed a limited restart in early October and is not expected to return to full production until January.

    As well as crippling JLR, the hack has affected as many as 5,000 organisations across Britain, given the wide extent of the carmaker’s complex supply chain. While JLR has been able to rely on its large financial buffers, smaller suppliers were immediately forced to lay off thousands of workers and contend with a painful pause in cashflow.

    “This incident appears to be the most economically damaging cyber event to hit the UK, with the vast majority of the financial impact being due to the loss of manufacturing output at JLR and its suppliers,” the CMC’s report said.

    The CMC is an independent non-profit organisation made up of industry specialists including the former head of Britain’s National Cyber Security Centre, Ciaran Martin. Martin said it looked like the most costly UK attack “by some distance”, and added that organisations needed to work out how to react if vital networks were disrupted.

    JLR, which is owned by India’s Tata Group, will report its financial results in November. A spokesperson for the carmaker declined to comment on the report.

    The luxury carmaker has three factories in Britain that together produce about 1,000 vehicles a day. The incident was one of several high-profile hacks to affect large UK companies this year. Marks & Spencer lost about £300m after a breach in April forced the retailer to suspend its online services for two months.

    JLR, which analysts estimated was losing about £50m a week from the shutdown, was promised a £1.5bn loan guarantee by the UK government in late September to help it support suppliers. However, before receiving that cash, the carmaker launched its own efforts to support its supply chain, paying for parts upfront.


    The CMC, which is funded by the insurance industry and categorises the financial impact of significant cybersecurity incidents affecting British businesses, ranked the JLR hack as a category 3 systemic event on a five-point scale.

    The £1.9bn estimate “reflects the substantial disruption to JLR’s manufacturing, to its multi-tier manufacturing supply chain, and to downstream organisations including dealerships”, the report said.


  • Powering the Next Space Age


    Government ambitions in space are approaching what some may still think of as science fiction. In August 2025, NASA set a 2030 target for construction of a lunar nuclear reactor to support the US-led and EU-supported Artemis program’s plan for a permanent moon base.  While this target faces significant technical and financial challenges, it reflects a real sprint to outcompete autocratic rivals. China and Russia are coordinating on a similar “International Lunar Research Station” powered by a nuclear reactor for completion in the mid-2030s.

    Emerging technologies are key drivers of this new space race. Analysts with experience at the European Space Agency, NASA, MIT, and in the private space sector argue that these technologies could make a cislunar economy—economic activity spanning Earth, the Moon, and the space between—feasible by mid-century, though experts debate when key capabilities will mature. Key technologies include:

    • artificial intelligence (AI), which could facilitate autonomous in-space servicing and assembly (ISAM), enabling individually launched modular components to self-assemble into mega-structures such as next-generation telescopes and orbital refueling stations. Even factories could be built this way, leveraging microgravity and space extremes to produce items impossible to make on Earth, with applications in fiber optics, semiconductors, and novel materials. The first pieces of this world are already here: US-based Varda Space Industries uses microgravity for biopharmaceutical drug development.
    • quantum technologies, which could safeguard military and commercial data in space using a network of quantum-encrypted satellites. Space-based atomic clocks developed by the European Space Agency could synchronize these systems and allow greater autonomous navigation in deep space. Emerging quantum sensors measure tiny gravitational fluctuations to identify denser and less dense materials below the Earth’s surface, enabling satellites to map aquifers and critical mineral deposits. The same measurements could identify high-value sites for mining on the Moon.
    • biotechnologies, which could be key to sustaining long-term human activity in cislunar space. Researchers are engineering lightweight, self-healing composites made from fungi to serve as radiation shields for space stations and Moon bases. Near-future synthetic biology applications could reduce the need to resupply space habitats through the use of bioregenerative life support systems that generate oxygen and food.

    The United States and the EU already support these industries; the EU’s draft Space Act and the Trump administration’s August executive order on commercial space development each signal backing for the industry. Yet, staying ahead of China demands more. Allies should leverage complementary strengths by investing in each other’s commercial space sectors and reducing barriers to integrating advanced capabilities. These steps will not suffice by themselves, but they would materially boost competitiveness—positioning the United States and the EU to outpace China and unlock the cislunar economy.



  • Apple and Google may be forced to change app stores

    Apple and Google may be forced to change app stores

    The way we download apps onto our phones could be about to change after a ruling from the UK’s competition regulator.

    The Competition and Markets Authority (CMA) has designated Apple and Google as having “strategic market status” – effectively saying they have a lot of power over mobile platforms.

    This means the two tech giants may have to make changes, after the CMA said they “may be limiting innovation and competition”.

    The ruling has drawn fury from the tech giants, with Apple saying it risked harming consumers through “weaker privacy” and “delayed access to new features”, while Google called the decision “disappointing, disproportionate and unwarranted”.

    “We simply do not see the rationale for today’s designation decision,” Google competition lead Oliver Bethell said.

    But the CMA said it did not “find or assume wrongdoing” from the firms.

    “The app economy generates 1.5% of the UK’s GDP and supports around 400,000 jobs, which is why it’s crucial these markets work well for business,” said Will Hayter, the CMA’s executive director for digital markets.

    The investigation into Apple and Google’s app stores, browsers and operating systems focused on how prominent their own apps are compared with rivals.

“Around 90-100% of UK mobile devices run on Apple or Google’s mobile platforms,” the CMA has previously said, adding this meant the firms “hold an effective duopoly”.

    According to analysis from Uswitch, 48.5% of UK users have an iPhone – which runs Apple’s iOS operating system (OS) – with the vast majority of the rest using Google’s Android OS.

    It comes after a separate decision taken in October, where the CMA designated Google’s search division as having strategic market status.

    It is unknown exactly what changes the regulator will look to request, but in July it published roadmaps outlining potential measures it would take if the firms were found to have strategic market status.

These include requiring the firms to make it easier for people to transfer data and switch between Apple and Android devices, and to rank apps “in a fair, objective and transparent manner” in their app stores.

    Apple specifically may be required to allow alternative app stores on its devices, and let people download programs directly from companies’ websites.

    Such a move would be a significant change to the so-called “closed system” which has defined iPhones since their inception, where apps can only be downloaded from Apple’s own App Store.

    Both of these things are currently possible on Android devices – but the roadmap said Google may have to “change the user experience” of downloading apps directly from websites, as well as “remove user frictions” when using alternative app stores, such as listing them directly on the Google Play Store.

    Android is an open-source operating system, which means developers can use and build on top of it for free.

    Google argues this means it opens up competition.

    Mr Bethell said “the majority of Android users” use alternative app stores or download apps directly from a developer’s website, and claimed there is a far greater range of apps available for Android users compared to those on Apple devices.

    “There are now 24,000 Android phone models from 1,300 phone manufacturers worldwide, facing intense competition from iOS in the UK,” he said.

Meanwhile, Apple warned the UK could lose access to new features – as has happened in the EU – which the company blames on tech regulation.

    For example, some Apple Intelligence features which have been rolled out in other parts of the world are not available in the EU.

    “Apple faces fierce competition in every market where we operate, and we work tirelessly to create the best products, services and user experience,” the company said in a statement.

    “The UK’s adoption of EU-style rules would undermine that, leaving users with weaker privacy and security, delayed access to new features, and a fragmented, less seamless experience.”

    But consumer group Which? said curbs on these companies’ power in other countries “are already helping businesses to innovate and giving consumers more choice”.

    “Their dominance is now causing real harm by restricting choice for consumers and competition for businesses,” said its head of policy and advocacy Rocio Concha.


  • “Future of Professionals Report” analysis: How AI can help corporate functions align with their organization’s strategy

    “Future of Professionals Report” analysis: How AI can help corporate functions align with their organization’s strategy

    Our research shows the critical importance of aligning departmental goals with the organization’s overall strategy to enhance efficiency, foster innovation, and drive long-term success 

    Key takeaways:

        • Alignment of goals — Aligning departmental goals with the organization’s overall strategy is crucial for enhancing efficiency, fostering innovation, and driving long-term success.

        • Role of AI — AI can play a pivotal role in achieving this alignment by helping corporate functions define value and align their goals with the organization’s strategy.

        • Top-down approach needed — Successful alignment often requires a top-down approach to AI implementation, ensuring that AI strategies are integrated across the broader enterprise.


    In today’s fast-paced and ever-evolving corporate landscape, the alignment of departmental goals with the overarching strategy of the organization is more crucial than ever. This alignment ensures that every in-house function is working towards a common objective, thereby enhancing the overall efficiency, innovation, and success of the organization as a whole.

Additionally, aligning departmental goals with the organization’s strategy can also eliminate the perception that certain corporate functions are merely cost centers, according to Thomson Reuters’ recently published 2025 Future of Professionals report. Indeed, many corporate functions — especially in areas like legal, risk, tax, and trade — are often seen as misaligned with the organization’s overall goals, which can lead to inefficiencies and a lack of strategic contribution from these departments.

Today, corporate leaders are under immense pressure to demonstrate how various functions contribute strategically to the value of the business rather than just managing costs. This urgency is also driven by the understanding that companies are navigating unprecedented regulatory and geopolitical complexity in the current environment, which underscores the need for new ways to address this situation.


    In today’s fast-paced and ever-evolving corporate landscape, the alignment of departmental goals with the overarching strategy of the organization is more crucial than ever.


    Increasingly, in-house function leaders are looking to AI tools and solutions to find a way to bridge this critical intersection of commerce and compliance.

    Yet the Future of Professionals report showed there is a strategic gap in AI usage. While nearly half (48%) of corporate professionals responding to the survey say they expect transformational AI-driven changes within their corporate functions this year, just 19% say that these functions have a departmental AI strategy in place.

    In most successful transformations, however, organizations adopt an end-to-end approach that starts at the top and has the AI strategies cascade down directly from overarching enterprise goals to departmental implementation. This ensures that AI is not implemented in isolation but is integrated into the broader organizational strategy, thereby maximizing its potential to drive alignment and strategic contribution.

    Empowering corporate functions with AI-driven tech

    However, for departments to align their goals with the organization’s strategy, they need to be empowered with advanced technology — and it’s up to the C-Suite to drive this empowerment. Corporate management needs to ensure that their in-house functions are equipped with the tools they need to contribute strategically to the organization’s success by enabling new business, driving operational efficiency, and maintaining strict compliance. By leveraging this advanced technology, departments then can move beyond managing costs and demonstrate their strategic value to the overall enterprise.

Not surprisingly, as the report addressed, there are barriers to these efforts, with the two major hurdles being organizational silos and insufficient leadership commitment. Silos themselves are a significant challenge to many corporate initiatives that require collaboration and a change of mindset. As our research has shown, when corporate functions implement AI in isolation or without a unified enterprise strategy, they will miss out on the full potential of AI to break down those internal barriers.

As for leadership commitment, corporate leaders should first assess where their organizations and key departments sit on their AI adoption journey. The goal should be to craft a custom-tailored AI strategy that will allow each function to secure additional ROI while acting in concert with the organization’s overall strategy.

All of this serves a greater purpose: organizations that can demonstrate a clarity of vision around AI will be the ones reporting better outcomes more quickly. It will be these organizational leaders who foster a culture in which former cost centers are seen as growth engines that drive their professionals and the overall organization forward. For that to happen, however, these leaders must think beyond the technology and focus on how their departments’ mindset — and that of the overall organization — needs to change.

    Achieving mindset shift and cultural change

    Not surprisingly, achieving alignment between departmental goals and organizational strategy requires a significant mindset shift and cultural change. Today, there is a growing understanding that in-house functions should not be viewed as cost centers but as strategic business partners — and this shift in mindset is crucial for fostering a culture in which AI is seen as a growth engine and a tool for achieving strategic goals.


    Not surprisingly, achieving alignment between departmental goals and organizational strategy requires a significant mindset shift and cultural change.


    In this way, in-house departments can become the type of business partners that can really add value and that can use AI in a manner that will truly empower their ability to achieve these goals. And this mindset shift needs to happen not only among the leaders of the enabling functions, but within the C-Suite itself. If all parts of the organization are focused on how each can create value and how they can leverage AI as a tool to do that, it becomes a powerful accelerator.

    AI itself also has a pivotal role to play in aligning departmental goals with organizational strategy by helping corporate functions define value, especially in today’s complex regulatory and geopolitical environment in which departments may have their hands full simply navigating these unprecedented challenges daily.

To demonstrate this, however, departments need to measure their progress as they move away from a focus on cost reduction and towards strategic value creation. Using specific success metrics — including those that measure a department’s ability to enhance foresight and prediction and to improve decision-making — departments can demonstrate how each in-house function contributes to the enterprise’s strategic goals.

In fact, many organizations and their in-house functions seem well on their way down this path toward tighter alignment. While some corporate executives are uncertain about AI and the level of change it will bring about, it’s clear that this is neither the time nor the environment to bury your head in the sand.

    Looking forward

Aligning departmental goals with the organization’s overall strategy is essential for driving efficiency, fostering innovation, and achieving long-term success. And to make this happen, C-Suite executives need to ensure that each of their corporate functions has its own AI strategy — one which complements the overall organization’s key goals. Further, departmental leaders need to develop AI strategies and then encourage collaboration with other function leaders to break down practical barriers and learn from each other.

    By empowering functions with advanced technology, adopting a top-down approach to AI implementation, and leveraging success metrics, organizations can ensure that all departments are working towards a common objective and contributing strategically to the overall success of the enterprise.



  • Workday Introduces New Custom AI Model Library to Power Smarter, Faster Contract Reviews

    Workday Introduces New Custom AI Model Library to Power Smarter, Faster Contract Reviews

    With 120+ Pre-Built AI Models, the Workday Contract Intelligence Agent Helps Customers Quickly Analyze Contracts, Flag Risks, and Gain Insights Across HR, Finance, Legal, IT, and More

    PLEASANTON, Calif., Oct. 22, 2025 /PRNewswire/ — Workday, Inc. (NASDAQ: WDAY), the enterprise AI platform for managing people, money, and agents, today announced a new Custom AI Model Library for the Workday Contract Intelligence Agent, powered by Evisort. The library includes more than 120 pre-built AI models trained to identify key clauses, risks, line items, and terms in contracts — from HR agreements to vendor contracts to sales deals. By giving organizations access to these specialized models, Workday is enabling faster contract reviews, earlier risk detection, and significantly less manual effort.

    The Workday Contract Intelligence Agent already helps legal and business teams make smarter decisions by reviewing contracts at scale to flag risks, track obligations, and uncover opportunities. With the addition of the Custom AI Model Library, customers can now automatically analyze a wider range of contract terms — from employment agreements and vendor security clauses to payment schedules, data privacy obligations, and renewal provisions — across HR, Finance, Legal, IT, and Sales. The new models are pre-trained and ready to deploy, but customers can also refine them further by simply providing feedback — no coding required.

    “AI in the enterprise often delivers piecemeal automation without true transformation,” said Jerry Ting, vice president, head of agentic AI & Evisort, Workday. “We aren’t just adding features; we are giving our Contract Intelligence Agent new skills that help solve real business problems. Our goal is to make deep, complex contract analysis fast and actionable for every team.”

    The Custom AI Model Library delivers a deeper level of contract analysis by enabling models to summarize, calculate, and classify key terms — turning complex documents into actionable insights. With these new models, teams can:

    • Summarize complex employment terms, like non-competes, in plain language and connect them across systems.
    • Identify and extract financial details such as dates, amounts, and addresses to enable faster, more accurate invoice processing.
    • Analyze terms related to data privacy, security, access, minimization, and deletion across a range of contract types.
    • Extract lease agreement terms such as square footage, property tax requirements, and rights of first refusal, and summarize detailed repair and maintenance obligations.
    • Analyze terms from sales agreements like publicity and logo rights, included products and services, and uplift on renewals — and share them with CRMs.

    For more information

    • Learn more about Evisort’s AI-powered contract intelligence and contract lifecycle management solutions available through Workday here.
    • Visit the Workday Marketplace to learn more about our contract intelligence agent here.

    About Workday
    Workday is the enterprise AI platform for managing people, money, and agents. Workday unifies HR and Finance on one intelligent platform with AI at the core to empower people at every level with the clarity, confidence, and insights they need to adapt quickly, make better decisions, and deliver outcomes that matter. Workday is used by more than 11,000 organizations around the world and across industries – from medium-sized businesses to more than 65% of the Fortune 500. For more information about Workday, visit workday.com.

    © 2025 Workday, Inc. All rights reserved. Workday and the Workday logo are registered trademarks of Workday, Inc. All other brand and product names are trademarks or registered trademarks of their respective holders.

    Forward-Looking Statements
    This press release contains forward-looking statements including, among other things, statements regarding Workday’s plans, beliefs, and expectations. These forward-looking statements are based only on currently available information and our current beliefs, expectations, and assumptions. Because forward-looking statements relate to the future, they are subject to inherent risks, uncertainties, assumptions, and changes in circumstances that are difficult to predict and many of which are outside of our control. If the risks materialize, assumptions prove incorrect, or we experience unexpected changes in circumstances, actual results could differ materially from the results implied by these forward-looking statements, and therefore you should not rely on any forward-looking statements. Risks include, but are not limited to, risks described in our filings with the Securities and Exchange Commission (“SEC”), including our most recent report on Form 10-Q or Form 10-K and other reports that we have filed and will file with the SEC from time to time, which could cause actual results to vary from expectations. Workday assumes no obligation to, and does not currently intend to, update any such forward-looking statements after the date of this release, except as required by law.

    Any unreleased services, features, or functions referenced in this document, our website, or other press releases or public statements that are not currently available are subject to change at Workday’s discretion and may not be delivered as planned or at all. Customers who purchase Workday services should make their purchase decisions based upon services, features, and functions that are currently available.

    SOURCE Workday Inc.

For further information: Investor Relations: ir@workday.com, Media Inquiries: media@workday.com


  • Insurance as a key catalyst for climate action: Insights from New York Climate Week

    Insurance as a key catalyst for climate action: Insights from New York Climate Week

    It was clear from New York Climate Week (NYCW) that the insurance industry is widely recognized as a cross-cutting enabler of climate action. In the context of resilience, the industry plays an important role in understanding and quantifying risk, pricing it accurately, promoting the adoption of solutions, and incentivizing measures that strengthen resilience against climate impacts. In the context of the energy transition, insurance can be an enabler of investment by mitigating risks associated with projects.

    While this was my first experience of NYCW, colleagues who have attended in previous years confirmed that insurance has always been part of the conversation, but it was more central to the dialogue this year. However, although insurance is a key part of the solution to climate change, it cannot provide all the answers.

    Rising expectations on the insurance industry

A central theme to emerge from NYCW was the growing challenges of insurability and affordability amid increasing climate-related losses. Insured losses are rising sharply, driven in part by more frequent and severe climate events, which is putting pressure on the availability and affordability of (re)insurance coverage. As the likelihood and severity of extreme weather events increase, so do the risk of loss and, consequently, insurance costs. While homeowners have been more affected to date, this same dynamic increasingly applies to businesses.

Wildfires in California are a case in point. Total insured losses from the California wildfires in January 2025 are estimated at between US$25 billion and US$39.4 billion. Meanwhile, the California FAIR Plan, the state’s insurer of last resort, had just US$377 million available to pay claims. These economics are not sustainable, emphasizing the need to focus on building resilience into homes, businesses, and communities.

    Given this, participants were keen to better understand how insurance can drive solutions through risk-based pricing and incentives, how to navigate insurer climate risk disclosures, and how to demonstrate the return on investment (ROI) of adaptation measures.

    Elevating resilience in business decisions 

During the week, we presented key findings from the Marsh Climate Adaptation Survey 2025, revealing that 60% of respondents report having sufficient funds allocated for climate adaptation. However, a general consensus at NYCW was that organizations do not have enough resources for these efforts. One missed opportunity, as revealed in our survey, is the failure to use cost-benefit analysis to build a compelling business case for these investments. Many organizations still struggle to frame such adaptation spending in terms of potential avoided future costs.
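The avoided-future-costs framing above can be sketched as a simple net-present-value calculation. The figures below are entirely hypothetical and purely illustrative — they are not drawn from the Marsh survey or any cited loss data:

```python
# Illustrative sketch (hypothetical figures): framing climate-adaptation
# spending as avoided future losses, per the cost-benefit argument.

def npv_of_avoided_losses(annual_loss_before, annual_loss_after,
                          upfront_cost, years, discount_rate):
    """Net present value of an adaptation investment, where the yearly
    'benefit' is the expected loss avoided by the measure."""
    avoided_per_year = annual_loss_before - annual_loss_after
    # Discount each year's avoided loss back to present value.
    pv_benefits = sum(avoided_per_year / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return pv_benefits - upfront_cost

# Hypothetical example: a $2M flood barrier cutting expected annual
# losses from $800k to $300k, evaluated over 10 years at a 5% rate.
npv = npv_of_avoided_losses(800_000, 300_000, 2_000_000, 10, 0.05)
print(f"NPV of adaptation investment: ${npv:,.0f}")
```

A positive NPV of this kind is the "potential avoided future costs" argument in numerical form, and it is the sort of figure a resilience business case can put in front of leadership.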

    Supporting this trend, industry feedback at the event emphasized the importance of including resilience-related questions in client requests for proposals (RFPs). This approach allows companies to underscore the importance of resilience to their leadership. By highlighting long-term financial risks, businesses can strengthen the economic rationale for proactive adaptation, potentially mitigating future losses.

    A practical example of successful adaptation investment is New York City’s redevelopment of Brooklyn Bridge Park. This mixed-use space combines residential areas, revitalized public spaces, and resilience-building infrastructure such as elevated structures and saltmarsh grasses that protect against flood waves. Funded partly through property taxes, these flood resilience measures also contribute to lowering insurance premiums, demonstrating a practical synergy between adaptation investment and risk reduction.

    Investing in nature-based solutions and quantifying their value

    The third key theme from NYCW was the growing interest in nature-based solutions. Academic research shows that natural measures — such as restoring mangrove forests — can protect communities and properties from coastal flooding and storm surges.

An inspiring story shared was the US Fish and Wildlife Service’s project in New Mexico’s Carson National Forest, which draws on Indigenous knowledge to manage forests so they are more resilient to wildfires and floods. This initiative integrates ecological restoration, cultural stewardship, and community engagement, highlighting the value of traditional practices in modern climate resilience efforts.

    However, scaling nature-based solutions can be challenging. As my colleague Lovey Sidhu explained during the webinar, Harnessing Nature for Resilience, “One of the major issues is the complexity of valuing nature benefits, which are often hard to quantify and accrue over long-time horizons, making them difficult to capture on traditional balance sheets.”

    Insurance can play an important role here. One way to quantify the benefits of nature-based solutions is to integrate them into catastrophe models. The reduced risk can potentially lead to lower insurance premiums and improved access to insurance for vulnerable areas.

    Challenges ahead, but optimism prevails 

    Enhancing how monetary value is assigned to nature-based solutions is key to addressing the three major themes highlighted in New York: making insurance more affordable by demonstrating measurable risk reductions, supporting adaptation initiatives, and encouraging broader adoption of effective solutions. Together, insurance, adaptation, and nature-based solutions create a powerful, mutually reinforcing pathway toward stronger climate resilience.
