
  • ‘Alien is a warning, isn’t it?’: Essie Davis on Alien: Earth and Tasmania’s ecological crisis | Television


    Essie Davis didn’t watch much horror growing up in Tasmania; the 55-year-old actor can still bitterly recall the moment when, aged four, she was left at home while her older siblings went to see Jaws at the local cinema in Hobart.

    “I stood by the back door going, ‘I will remember this day for the rest of my life!’” Davis recalls, speaking from her current family home, also in Tasmania.

    She finally saw the film on VHS years later, while dating a production designer she had met while performing at Belvoir St theatre. That designer was Justin Kurzel, now one of Australia’s most celebrated directors – and also her husband. Back in the mid-90s, Kurzel’s courtship rituals included a crash course in horror classics – Jaws was high on the list, followed closely by Ridley Scott’s 1979 space slasher Alien.

    “I love that first Alien film so much, I wish I’d seen it in a cinema,” Davis says. “They’re definitely a huge part of my film psyche.”

    It would take another few decades before Davis entered the Alien universe herself, in a new prequel series set shortly before the original film. Alien: Earth focuses on Wendy (Sydney Chandler), a “forever girl” whose consciousness is transferred from her terminally ill human body to a synthetic one, making her a world-first “hybrid”. Davis plays Dame Sylvia, one of the scientists responsible for Wendy’s second life. In one of many allusions to Peter Pan, series creator Noah Hawley named the character after Sylvia Llewelyn Davies, the real-life mother of the boys who inspired JM Barrie to write his Neverland saga.

    The show’s themes – and Sylvia’s attempts to balance Wendy’s humanity with her new, artificial immortality – felt particularly timely to Davis.

    “AI was a thing that was coming, but it wasn’t suddenly upon us,” she says. “And then we had the writers’ strike and the actors’ strike, and then ChatGPT suddenly was in the schools in Tasmania, and I was just going, ‘hang on a minute’.

    “There’s a tightrope of ethics and morality, and everyone has a different version of it. I really hope that people will enjoy this and get hooked into that quandary of genetic engineering and ethics and that strange quest to own everything and beat everyone and be younger than anyone.”

    Davis is a horror icon herself, thanks to a breakout role in Jennifer Kent’s 2014 film The Babadook. The low-budget Australian production became a global hit, with fans including The Exorcist director William Friedkin, who placed the film alongside Alien as one of the scariest films he had ever seen. It remains a modern cult classic 10 years later.

    “I remember watching a screening way before it was released, and just went, ‘Oh, this is great, but it’s not scary’,” she says. “And then we went to the Sundance film festival, and I sat up the back as people swore and leapt out of their seats.”

    Davis in The Babadook. Photograph: Icon Film Distribution/Sportsphoto/Allstar

    Davis credits the film’s enduring appeal – its top-hatted spook has even been embraced as an unlikely queer icon – to something deeper than jump scares. “It’s not just a horror film,” she says. “It’s in fact a kind of psychological thriller about mental health and grief and parenting and love.”

    It remains a defining role for Davis, alongside her star turn in Miss Fisher’s Murder Mysteries – the 1920s detective franchise that ran for three series and a film, based on the novels of Kerry Greenwood, who died in April. “A terrible loss, but she’s forever in us now,” says Davis.

    “I was crying, working out whether I should do it or not,” she adds, of donning Phryne Fisher’s signature black bob. “I’m really glad I did, because that character was such a positive force, and it’s just so fun to play someone so clever and positive and naughty and irreverent – and someone who really cares about social justice, and is not going to bow for anyone, and stands up for the underdog.”

    Davis as Phryne Fisher in the film Miss Fisher and the Crypt of Tears. Photograph: AP

    Along with roles in Game of Thrones, Babyteeth and Netflix’s One Day, Davis has also collaborated with her film-maker husband, whose work includes Snowtown, Nitram, the adaptation of Peter Carey’s True History of the Kelly Gang and, most recently, a television adaptation of Richard Flanagan’s The Narrow Road to the Deep North. Davis appeared in the latter three.

    Their kids were old enough to be watching Alien for a high school English class when the script for Alien: Earth hit Davis’s inbox; the series is led by Noah Hawley, the showrunner behind the award-winning small-screen adaptation of Fargo. She was intrigued; the show’s depiction of a future Earth carved up and controlled by mega-corporations – Dame Sylvia is employed by Prodigy, a rival to the franchise’s longstanding faceless villains, the Weyland-Yutani Corporation – particularly resonated with her.

    “It’s terribly prescient – the richest of corporations and the richest people taking over the world, essentially running the world,” she says.

    David Rysdahl as Arthur and Davis as Dame Sylvia in Alien: Earth. Photograph: Copyright 2025, FX. All Rights Reserved.

    For Davis, the perils of corporate profits have been plain to see from her home in Tasmania, where she and Kurzel returned to raise their family.

    “It is terrifying what is happening to our beautiful place here in Tassie, and the total corporate capture of our government by big industry,” she says of the controversy around the state’s fish farming industry, of which she has become one of many high-profile critics, alongside Richard Flanagan and former ABC journalist turned political candidate Peter George.

    These days, Davis doesn’t have to go to the cinema to witness coastal dread. “When you look out over the water from Bruny Island, everywhere you look you see rows and rows of fish pens, and huge, industrial factory ships,” she says. “We had mass fish mortalities, rotting salmon washing up on our beaches. And 53 cormorants got shot because they were fishing out of the pens.”

    Davis says the public opposition to such practices “began as lots of individuals around Tasmania making constructive criticism, and asking for a bit of negotiation on pollution”. It was being ignored by salmon companies and successive governments, she says, that connected and galvanised the far-flung island community.

    What began as a movement, Davis says, has now become an “insurrection”, evident in the rise of Peter George, who was elected to Tasmania’s state parliament as an independent days after our interview.

    “But we’re not going to stop,” she says. “We’re just going to keep on until we have people representing the people of Tasmania and not just corporations and party politics.

    “I guess Alien is a warning, isn’t it?” she adds. “A warning of what greed and money and this kind of pursuit of immortality can do to a planet.”

    Alien: Earth launches on Disney+ on 12 August in Australia and the US and on 13 August in the UK

    Continue Reading

  • Despite corporate hype, few signs that the tech is taking jobs — yet


    The job market has begun looking shakier. How much is artificial intelligence to blame?

    Not a whole lot. At least not yet.

    A review of employment surveys, interviews with labor market analysts and recent company earnings reports shows little evidence, so far, that would support assertions of a widespread economic impact from AI’s growing usage.

    “It’s such an emotional thing for people, many of whom are determined to see it in the data,” said Martha Gimbel, executive director and co-founder of the Budget Lab at Yale University and a former economic adviser to President Joe Biden. “And it’s just not there yet.”

    Much is riding on the payoff from AI. The stock market has been hitting record highs largely thanks to gains from tech giants like Nvidia, Google parent Alphabet, Facebook parent Meta and Microsoft, which have made enormous investments in pumping out AI-related products.

    For precisely that reason, analysts say, some businesses may be incentivized to hype AI’s potential as a disruptive force. Through the end of July, the term “AI” had been cited on about two-thirds of second-quarter earnings calls conducted by S&P 500 companies, according to the data provider FactSet. That’s up from less than half in the first quarter.

    Amid a downshifting economy, cost pressures are mounting, prompting corporate leaders to hype AI’s potential as a savings source — even if it’s not quite there yet.

    “In 2023, you’d have a high-profile public company do a job cut and cite rising interest rates or uncertain macro conditions,” Roger Lee, a tech entrepreneur who also runs a website that tracks tech industry layoffs, said. “Today, it’s AI.”

    The most extreme warning about AI’s short-term impact has come from Dario Amodei, co-founder and CEO of AI firm Anthropic. In May, he told Axios that he foresees half of all entry-level white-collar jobs being wiped out in the next one to five years, spiking unemployment to between 10% and 20%.

    So far, evidence for this scenario is mixed. All job openings, entry-level or otherwise, have been declining since 2023, according to labor market analytics company Revelio Labs, though the trend has not been linear. Revelio said entry-level jobs exposed to AI have been declining fastest — but senior roles exposed to AI have actually begun to recover.

    The broader picture for white-collar professions most at risk of disruption actually indicates fairly stable employment trends. Last week’s official jobs report showed office and administrative roles have actually returned to their pandemic-era highs, while employment in other professional sectors, like accounting and legal services, has held relatively steady.

    It’s a gloomier story in tech — but also a more nuanced one when it comes to AI’s impact. The leaders of Amazon and Microsoft have both signaled the ability to run their businesses with reduced headcount thanks to AI. Tech layoffs tracked by Lee’s website hit a three-month high in July, with three companies — Intel, Microsoft and Recruit Holdings, the parent of Indeed and Glassdoor — largely responsible.

    All three of those companies cited artificial intelligence as playing a role in the job reductions, Lee said. But he noted that in the case of Recruit Holdings, there were no specifics about how AI had impacted the lost positions. The company simply said the technology was “changing the world.”

    “It does seem like many of the roles being cut are in line with ones being used by AI,” Lee said. “But it’s still being used as a cover in other cases.”

    A representative for Recruit did not respond to a request for comment.

    The simple calculus behind AI is that businesses will be able to do more with less, increasing overall productivity while reducing hiring needs. Yet economists say it is difficult to calculate accurate changes in productivity over the short term — though so far, the broadest national measure has shown a deceleration in recent quarters.

    Most of the benefits of AI are instead accruing to consumers, not businesses, according to a forthcoming paper from researchers at Carnegie Mellon and Stanford University. If it feels like much of the value from the current generation of AI mostly lets ordinary people generate emails and papers faster, or do quicker research, you’re not imagining things.

    “Free goods are invisible in the GDP numbers, even if they make consumers better off,” the authors, Avinash Collis and Erik Brynjolfsson, wrote in a recent Wall Street Journal op-ed. They calculate consumers derived the equivalent of $97 billion in surplus welfare from generative AI in 2024, compared with $7 billion in revenues logged by the tech firms actually creating AI products.

    Economies typically see a “J-curve” effect when transformative technologies are introduced, Collis told NBC News. At first there is a bottleneck that can cause some disruptions, though these initial effects are often not captured in official figures. For example, the iPhone increased the total global volume of photos from billions to trillions, something that directly impacted workers at camera giant Kodak, but created incalculable opportunities elsewhere, Collis said.

    “There will likely be a lot of impact, perhaps on some sectors negatively,” Collis said. “But at the same time lots of new jobs could be created as well.”

    Other indicators do suggest the stirrings of a more pronounced AI effect on jobs. The July employment survey from consultancy Challenger, Gray & Christmas found companies have blamed “automation and AI implementation” for 20,000 job cuts in 2025, with another 10,000 or so directly attributable to artificial intelligence. Challenger said this shows “a significant acceleration in AI-related restructuring.”

    Those figures are dwarfed by cuts related to government spending declines and general economic and market conditions, which account for nearly 500,000 lost roles this year, Challenger said.

    Some companies appear to be keeping payroll counts steady in response to the broad uncertainty in the economy, and using any additional resources to explore AI’s potential to boost their bottom lines. Stacy Spikes, CEO of MoviePass, told NBC News that internal workflows at his company have become vastly more efficient thanks to AI. That’s made him more gun-shy about bringing on new workers in certain departments, like software. As of Tuesday, MoviePass’ careers page showed no open positions.

    “We haven’t seen headcount need to increase,” Spikes said.

    Businesses like MoviePass still appear to be the exception, however. Analysts at Goldman Sachs say only about 9% of all companies are regularly using new AI tools to produce goods or services. As a result, they see only limited effects at the moment.

    “When I look at the impact that AI has had on the overall labor market data so far, it looks pretty small to me,” Joseph Briggs, head of the global economics team at Goldman Sachs Research, said on a recent company podcast. Even for recent college grads, who have seen unemployment rates tick higher, “the anecdotes and the relationship that the anecdotes have to AI is often a little bit overstated,” Briggs said.

    JP Morgan analysts came to a similar conclusion, finding that, for now, their research “failed to find a significant impact on job growth.”

    But they cautioned that this could change at the next economic downturn.

    For white-collar workers, “we think that during the course of the next recession the speed and the breadth of the adoption of the AI tools and applications in the workplace might induce large scale displacement for occupations,” they said in a recent note to clients.

    Others remain more optimistic about the potential for new opportunities to overcome any negative effects. That’s how Nvidia co-founder and CEO Jensen Huang sees it. As the head of an AI giant, he may also have reason to hype its potential – but his outlook is notably rosier than that of Anthropic’s Amodei. Huang told Axios last month that the technology would ultimately lead to more jobs, even if there are some redundancies elsewhere.

    “Everyone’s jobs will change,” he said. “Some jobs will be unnecessary. Some people will lose jobs. But many new jobs will be created. … The world will be more productive.”

    Continue Reading

  • Machine Gun Kelly spills why he, Megan Fox split


    Machine Gun Kelly reveals real reason behind Megan Fox split

    Machine Gun Kelly has finally broken the silence on his and Megan Fox’s shocking split.

    On Friday, August 8, the 35-year-old rapper released his new album, Lost Americana, taking full responsibility for “breaking his home.”

    The Emo Girl rapper and the Jennifer’s Body alum announced their split in November 2024, while Fox was pregnant with their daughter, Saga Blade.

    Fans were left shocked at the time, and rumors were spiraling that Fox had discovered Kelly was talking to other women.

    Now, MGK, whose real name is Colson Baker, has confessed in his new track that the former couple ended their relationship because of him.

    The lyrics of the song Treading Water read, “This’ll be the last time you hear me say sorry / That’ll be the last tear you waste on me crying / I broke this home, and just like my father, I’ll die all alone.”

    Elsewhere in the song, MGK confessed his love for the actress and made a promise to change for his and Fox’s daughter, Saga, whom they welcomed in March.

    “The beast killed the beauty; the last petal fell from the rose / And I loved you truly, that’s why it’s hard to let it go,” the dad of two continued, adding, “I broke this home, but I’ll change for our daughter, so she’s not alone.” 


    Continue Reading

  • Oil holds steady on reports of US-Russia deal – Reuters

    1. Oil holds steady on reports of US-Russia deal  Reuters
    2. Oil steadies on reports of US-Russia deal, ends week about 5% lower  Reuters
    3. Brent Tick Higher, But Logs Weekly Loss  TradingView
    4. WTI tumbles to below $63.00 as tariff concerns mount  Mitrade
    5. Oil Updates — crude set for steepest weekly losses since June on tariffs, Trump-Putin talks  Arab News

    Continue Reading

  • Official announcement: Reinier – realmadrid.com


    1. Official announcement: Reinier  realmadrid.com
    2. Real Madrid €30 million flop breaks silence ahead of Brazil move – ‘I was not happy’  Madrid Universal
    3. Real Madrid’s €30m midfielder Reinier Jesus set to join Atletico Mineiro for free – The Athletic  The New York Times
    4. Real Madrid’s $30 million Brazilian flop joins Atletico Mineiro for free without playing a single official match  World Soccer Talk
    5. Real Madrid flop to leave Europe after five years at Santiago Bernabeu with no senior appearances as free transfer agreed  Goal.com

    Continue Reading

  • Industry supports NASA plans to accelerate work on lunar nuclear reactors


    WASHINGTON — A new NASA directive to accelerate development of a nuclear reactor for the moon has won a positive reaction from industry, which sees the plans as aggressive but achievable.


    Jeff Foust writes about space policy, commercial space, and related topics for SpaceNews. He earned a Ph.D. in planetary sciences from the Massachusetts Institute of Technology and a bachelor’s degree with honors in geophysics and planetary science…


    Continue Reading

  • Setting the standard of liability for self-driving cars


    To set policy around AI liability, it might be useful to resolve the question in a particular AI use case, such as self-driving cars, rather than approach the problem in a comprehensive way. Who should be responsible for injury or damage caused by self-driving cars, and what should be the standard of liability? This is likely to become an increasingly urgent issue as self-driving cars become more prevalent on roadways in the United States and around the world.

    First, an important clarification: Safety engineers and regulators have adopted a classification of cars based on their level of driving automation from Level 0, meaning no automation, to Level 5, where vehicles can drive safely in all conditions with no human involvement. Tesla’s Autopilot and Full Self-Driving capability are considered under Level 2 with partial automation capabilities that can perform both the steering and the acceleration/braking functions. But Tesla warns its users that these capabilities “are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment” and that these features “do not make the vehicle autonomous.” Despite these warnings, which seemed to shield Tesla from liability for accidents involving Autopilot, a Florida jury recently held Tesla partially responsible for a fatal accident involving a Tesla car operating in Autopilot mode and required it to pay $243 million in damages.

    Yet the key liability issues arise once cars reach Levels 4 and 5, where driving is handled entirely by the car and there might be no driver in the vehicle at all. Waymo’s self-driving taxis are an example of Level 4 autonomous vehicles. Within the specific conditions under which they are designed to operate safely, their “operational design domain” (ODD), Waymo self-driving cars operate completely autonomously with no human driver in the car. Tesla’s new robotaxis that the company launched in Austin, Texas, in late June similarly operate in autonomous mode. Absent a negligent, reckless, or malicious decision by a passenger to use the intervention button Tesla provides, the company would have liability if an accident resulted from the car’s poor performance.

    Policymakers seeking to address these liability issues might consider four answers that scholars have discussed. The first is the traditional product liability approach under a negligence standard in which the plaintiff must show that there was a design or manufacturing flaw in the self-driving car and this flaw led to the accident that caused the injury or property damage. Absent such a showing, victims would not be compensated. The second approach is a proposed strict product liability approach under which the self-driving car manufacturers would be liable for any damages their cars produced, regardless of whether the car was defective. Victims would be compensated without having to prove that a design or manufacturing defect caused the accident.  

    The third and fourth approaches tackle the liability issue outside the contours of product liability law. They rely on a new legal construct of a “computer driver” and ask under what conditions the computer driver should be held liable for accidents it causes. Under the “reasonable human driver” standard, the car manufacturer would be held liable for damages whenever its computer driver fails to avoid an accident that a reasonable (that is, competent, unimpaired, and attentive) human driver would have avoided. The victims would have to demonstrate that the computer driver’s behavior would have been unreasonable if engaged in by a human driver, but they would be compensated if they could make this showing.

    Under the fourth approach of a “reasonable computer driver” standard, the driving performance of the self-driving car would be compared to an industry yardstick—an average level of performance, an industry-determined level of expected performance, or a state-of-the-art standard focusing on what level of performance is technically and economically feasible.

    For the reasons outlined below, this discussion concludes that policymakers should maintain the traditional negligence product liability standard but supplement it with a negligent driving regime based on the reasonable human driver standard. The supplement of the reasonable human driver standard introduces a liability standard that judges and juries have the domain expertise to administer. The strict product liability regime turns out to resemble a negligent driving approach and would be workable if combined with the reasonable human driver standard. The computer driver approach turns out to be the product liability negligence approach under a different name. It would not work as a comprehensive response to the risks of self-driving cars but should remain available for plaintiffs to use in addition to litigation based on the reasonable human driver approach.  

    Law professor Bryant Walker Smith reached roughly the same conclusion. He argues that car manufacturers should be held liable whenever their cars perform unreasonably and then suggests that a self-driving car performs unreasonably in a particular situation if “either (a) a human driver or (b) a comparable automated driving system could have done better under the same circumstances.” 


    The traditional product liability negligence standard

    In a 2014 Brookings report, UCLA law professor John Villasenor summarized the case for allowing the courts to address the liability of self-driving cars under existing product liability standards. He applied the existing standards of a design or manufacturing defect, an information defect, or a failure to instruct humans on the safe and appropriate use of the product to the self-driving car case. 

    The nuances of product liability law are well-known to lawyers and might provide fertile ground for injured parties to seek compensation when car manufacturers have been careless or lacking in foresight. Given the high burden of proof in product liability cases, Villasenor is right to conclude that holding manufacturers responsible for their demonstrable failings should not be a significant barrier to deployment of reasonably safe self-driving vehicles. He is also right that preempting state laws in this area merely to make it easier for manufacturers to escape liability is not needed to spur innovation. Bryant Walker Smith has also made the case that existing product liability law is “probably compatible with the adoption of automated driving systems.” 

    However, Villasenor’s conclusion that existing product liability law is “well equipped to address and adapt to the autonomous vehicle liability questions that arise in the coming years” is not the end of the story. While traditional product liability law is one avenue for injured parties to seek redress for injuries or damages in self-driving car cases and should not be abandoned, it suffers from two defects as a comprehensive response to the risks created by self-driving cars.  

    The first is the enormous information asymmetry between the manufacturer and even the most knowledgeable and well-resourced plaintiff. The details of self-driving training, testing, mitigation measures, upgrades, and so on are confidential business information. Discovery in court proceedings could expose some of this information to plaintiffs if they knew what to ask for. But even then, demonstrating that the company failed to take reasonable precautions would require safety engineering expertise that is typically available only within the self-driving companies themselves. The chances of beating a car manufacturer determined to defend itself in court are slim.  

    Think what it would mean for plaintiffs to have to pass a “risk-utility” test in attempting to prove that the manufacturer was responsible for a self-driving car accident that caused injury or damage. This test, used in many cases to determine the presence of a design defect, requires plaintiffs to demonstrate that there was a reasonably available alternative to the system the car manufacturer used that would have avoided the accident. In effect, the plaintiff’s outside expert would face the daunting challenge of having to demonstrate that the car manufacturer missed an affordable upgrade that would have prevented the accident. To be sure, this is a difficulty in many product liability cases, but it is especially likely to arise when a product exists at a technological frontier as self-driving cars do.  

    But it is not just a matter of a difficult burden of proof. Maybe there was no reasonably available software alternative that would have avoided the self-driving car accident. Maybe that’s just as good as the systems get with current technology. The self-driving car accident occurred. It was caused by the misbehavior of the self-driving car. No fault can be traced to any person or legal entity. Still, some parties were injured or suffered property damage through no fault of their own. Will the legal system really leave them without recourse? 

    The second approach holds manufacturers liable in cases where something inexplicable went wrong with the car, but the manufacturer cannot be held to account for it under the negligence standard of product liability. In these cases, strict liability should apply. Plaintiffs would not have to show the manufacturer was at fault but would simply collect compensation from the manufacturer for injury or damage.  

    Law professor David Vladeck suggests four reasons for such a strict liability system for self-driving cars. First, it satisfies “basic notions of fairness, compensatory justice, and the apportionment of risk in society” to provide redress in these cases “for persons injured through no fault of their own.” Second, the self-driving car manufacturers “are in a position” to absorb the costs of these “inexplicable accidents” and it is not unreasonable that they should bear them since they benefit from the self-driving cars they create.

    Third, the strict liability system spares everyone the “enormous transaction costs” that can be expected if the only alternative is to litigate even in circumstances where fault cannot be established. Fourth, the predictable nature of the strict liability system is better for innovation than the uncertainties of endless product liability litigation.  

    Law professor Steven Shavell also embraces a strict liability regime, with the added twist that he thinks the payment should go to the state rather than the harmed individuals, since this would give purchasers of self-driving cars the incentive to demand greater safety from car manufacturers. But that, of course, leaves victims without compensation for injuries.  

    While attractive as a way to ensure fundamental fairness for injured parties and avoid pointless litigation, this strict liability approach has a fundamental defect. It can only function as a replacement for a product liability negligence regime for self-driving cars, not a supplement to it.  

    To see this, ask the question: When does the strict liability regime kick in, and when should the negligence product liability regime apply? It is easy to say that strict liability applies only when the self-driving car accident is truly “inexplicable” and untraceable to human fault. But no one can know this at the outset in a particular case. This can only be established as part of litigation. So all self-driving accidents would have to be litigated. But this defeats one of the purposes of the strict liability regime, which is to avoid pointless litigation. To achieve its anti-litigation purpose, it cannot be layered on top of a negligence product liability regime but must prevent lawsuits from starting and move parties harmed in a self-driving car accident directly into the no-fault system, where they simply claim compensation for injury or damage.

    In effect, this means that all self-driving car cases where the car failed to avoid an accident will be litigated under a strict liability standard. Car manufacturers will have to pay damages even when they have not been negligent in designing the car’s self-driving system. This might be all to the good, since as Vladeck notes, “the complexity and sophistication of driver-less cars, and the complications that will come with the fact patterns that are likely to arise, are going to make proof of wrongdoing in any individual case extremely difficult.” It might be simpler, as Vladeck says, to infer the presence of a defect in the self-driving car on the theory that the accident itself is proof of a defect.  

    But this puts too much of a burden on the manufacturer. What if the manufacturer could prove that its self-driving car, even though it failed to avoid the accident that produced injury or damage, performed in a way a reasonable human driver would have? Maybe it did not stop in time and ran into another car. But a competent, attentive human driver would have taken the same actions in those circumstances. Without an opportunity to prove that a human driver would not have been held liable for damages in a particular case, because they behaved reasonably in the circumstances, car manufacturers would face prohibitively high liability costs.


    The reasonable human driver standard

    The key to the third and fourth liability approaches is to adopt a negligent driving standard in assessing liability for accidents involving self-driving cars, rather than trying to run all claims for compensation through the product liability system. This is often the way accidents involving human drivers are assessed when plaintiffs allege injury or damage from an automobile accident. The court looks to whether the driver involved in the accident exhibited reasonable driving behavior and if not, then it holds the driver responsible for compensating the victims. The difference between the third and fourth approaches is how they define reasonable driving behavior.  

    Law professor William H. Widen and safety engineer Philip Koopman propose that policymakers create a new category of “computer driver” whose driving behavior can be evaluated as if it were a human driver. This applies to self-driving cars the same familiar standard that judges and juries already know how to apply, and asks them to do nothing different in a self-driving case than in a case involving a human driver. A self-driving car is responsible for an accident when its driving behavior would be deemed negligent if engaged in by a reasonable (that is, competent, attentive, unimpaired) human driver.  

    Current versions of self-driving cars are notorious for doing things that no reasonable human driver would do, such as driving on the wrong side of the road or driving into wet cement. In addition to providing recourse for injured parties, the reasonable human driver standard would provide an economic incentive for self-driving car manufacturers to provide cars at least capable enough to avoid these “stupid” mistakes. Indeed, it promotes innovation to create a self-driving car that performs at least as well as an unimpaired and competent human driver. 

    This approach does not get plaintiffs bogged down in endless product liability litigation where the chances of success are so limited. It also provides a car manufacturer with a way to defend itself in some circumstances involving accidents caused by one of its self-driving cars. In an accident involving a self-driving car, the computer driver of the car can be held liable in exactly the same way a human driver can. If the computer driver does not match or exceed “the driving safety performance outcomes we expect of an attentive and unimpaired” human driver, as Widen and Koopman put it, it is liable for any injuries or damages it produces. But if the computer driver behaved reasonably as measured by what a competent, attentive, unimpaired driver would have done, the car manufacturer would not be liable for damages. A new law could implement this idea, according to Widen and Koopman, by stating that computer drivers owe “a duty of care to automated vehicle occupants, road users, and other members of the public.” 
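    Widen and Koopman's test can be read as a simple decision rule. The sketch below is a minimal, illustrative model of that rule in Python; the `Accident` record and its fields are invented for illustration and are not part of their proposal:

```python
from dataclasses import dataclass

@dataclass
class Accident:
    # Stylized accident record; fields are hypothetical, for illustration only.
    injury_or_damage: bool
    # Would a competent, attentive, unimpaired human driver have
    # behaved the same way in these circumstances?
    reasonable_human_would_act_same: bool

def manufacturer_liable(a: Accident) -> bool:
    """Reasonable human driver standard: the computer driver is judged
    exactly as a human driver would be, and the manufacturer pays only
    when harm occurred AND the driving fell below that bar."""
    return a.injury_or_damage and not a.reasonable_human_would_act_same

# Driving into wet cement: harm done, and no reasonable human would do it.
print(manufacturer_liable(Accident(True, False)))   # liable
# An unavoidable crash that an attentive human also could not have escaped.
print(manufacturer_liable(Accident(True, True)))    # not liable
```

    The rule's two inputs mirror exactly the two questions a jury would answer in an ordinary negligent-driving case: was there harm, and did the driving fall short of what a reasonable driver would do.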

    This approach addresses the “inexplicable” self-driving car accidents that Vladeck seeks to deal with through his strict product liability approach. Even when a self-driving car accident cannot be traced to a design or manufacturing defect, plaintiffs can still recover damages if they can show that a reasonable human driver would not have caused the accident. There might be a design flaw that produced the accident, and if there is, the self-driving car manufacturer should seek to detect it and remedy it. But plaintiffs do not have to prove its existence and will not be denied a remedy if they cannot meet that burden or if a legally sufficient defect does not exist.  

    To hold a legal entity responsible for compensation, the new law that establishes the category of computer driver and creates a duty of care for computer drivers would also need to stipulate that, in cases where a computer driver is found liable, the financially responsible party is the manufacturer of the self-driving car. This avoids getting bogged down in philosophically interesting but practically useless speculation about when computer drivers have achieved enough autonomy to become legal actors in their own right.


    The reasonable computer driver standard

    Many would think that the reasonable human standard is too lenient. David Vladeck, for instance, assumes that self-driving cars generally outperform human drivers, and they should perform up to “the standards achievable by the majority of other driver-less cars.” He thinks car manufacturers should be held liable for accidents when their self-driving cars do not live up to this standard. In effect, his strict product liability approach can be thought of as a version of the negligent driving approach, where the driving standard is the reasonable computer driver.  

    Law professor Kevin Webb explicitly adopts a version of this “reasonable computer driver” standard, calling it the “reasonable car standard.” He thinks that the car manufacturer can be held liable “only when the car does not act in a way that another reasonable AV would act.” 

    There is considerable force to this idea. Why not expect more from computer drivers? Why go to all the trouble of developing and deploying self-driving cars if the result is only the current level of traffic safety? Shouldn’t the right liability standard give self-driving car manufacturers an incentive to produce cars that exceed human driving capabilities? 

    Under this “reasonable computer driver” standard, courts would hold a car manufacturer liable when the computer driver’s safety performance fell below what a reasonable computer driver would have done and plaintiffs suffered injury or damage as a result. This reasonable computer driver standard could be defined in principle through industry standards, average or typical driving performance of self-driving cars, or an assessment of what the state of the art of the current technology allows. 

    Such a standard might be more protective of plaintiffs in circumstances, such as speed of reaction to unforeseeable events, where self-driving cars typically perform better than human drivers. If brand X’s self-driving car would have avoided a collision but brand Y’s did not, why exonerate brand Y just because no human in that situation could have reacted fast enough to avoid the collision? This more stringent standard would force the industry to keep up with the latest developments or face liability consequences for failing to do so.  

    However, the reasonable computer standard would be less protective than the reasonable human driver standard in cases where the technology does not match human driving skills. As Widen and Koopman put it, it would allow “a potential outcome in which AVs much more dangerous than human drivers would be considered ‘reasonable’ if that is the best the industry can do.” It offers the industry a defense even when a currently deployed self-driving car does something wildly stupid that causes injury. It seems unreasonable to let a manufacturer escape liability by showing that no model of self-driving car on the road today could have avoided the same stupid mistake in those circumstances, even though a reasonable human would have avoided the problem easily.  
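    The two standards diverge in opposite directions, which a toy comparison can make concrete. The boolean inputs below are hypothetical and simply encode the counterfactual each standard asks about:

```python
def liable_human_standard(harm: bool, human_would_have_avoided: bool) -> bool:
    # Liability when a reasonable human driver would have avoided the accident.
    return harm and human_would_have_avoided

def liable_computer_standard(harm: bool, peer_av_would_have_avoided: bool) -> bool:
    # Liability when a reasonable (state-of-the-art) computer driver
    # would have avoided the accident.
    return harm and peer_av_would_have_avoided

# Case 1: a "stupid" mistake any human avoids, but the whole AV fleet makes.
# The human standard holds the manufacturer liable; the computer standard exonerates it.
print(liable_human_standard(True, True), liable_computer_standard(True, False))

# Case 2: avoiding the crash required superhuman reaction time, which peer
# AVs possess. The human standard exonerates; the computer standard holds liable.
print(liable_human_standard(True, False), liable_computer_standard(True, True))
```

    Which standard favors plaintiffs thus depends entirely on whether the fleet is ahead of or behind human skill in the circumstances of the particular accident.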

    The reasonable human standard seems to be the minimum standard policymakers and the public should demand from self-driving car manufacturers. It is also consistent with their promises and proclamations to regulators and the press. Substituting a standard of what the industry is capable of right now would undermine this minimum goal.  

    This ambiguity over whether a reasonable computer driver standard is more or less protective of plaintiffs illustrates the fundamental problem with adopting it. It is not at all clear what the reasonable computer driver standard would require in any particular case. Using it would inevitably involve courts and juries guessing what other self-driving cars might have done in similar circumstances, what they should have done to comply with an industry code, or whether the state of the art would have allowed car manufacturers to deploy cars that would have avoided the accident.  

    Indeed, the reasonable computer standard risks collapsing back into the product liability design defect standard by forcing plaintiffs to engage in an assessment of what self-driving capabilities are technically and economically feasible. If policymakers want a standard that avoids those litigation pitfalls, it would be better to stay with the reasonable human driver standard.  

    What about the consequence that the reasonable human standard would exonerate a self-driving car manufacturer in circumstances when the rest of the industry would have done better? The answer is that plaintiffs should still have the route of traditional product liability negligence litigation to handle such cases. It is true that such cases are hard to win, but no harder than cases that would be based on the proposed reasonable computer standard.  

    The best way forward, then, is to combine an approach that assesses the performance of the car under a reasonable human driver standard with the traditional negligence approach under product liability law. It is only fair to admit, however, that this combined liability system by itself does not create a very powerful incentive for car manufacturers to produce self-driving cars that exceed the current human safety record.  

    In seeking to move self-driving car manufacturers to a higher level of safety, policymakers should keep in mind that human drivers are already quite safe. Given the amount of driving on the nation’s roads (around 3.2 trillion miles in 2022) and the number of traffic fatalities (42,795 in 2022), human driving produces only about one fatality for every 75 million miles driven.  
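    The rate implied by those two figures can be checked directly; this is simple arithmetic on the 2022 totals quoted above:

```python
# 2022 totals quoted in the text.
miles_driven = 3.2e12   # vehicle miles traveled
fatalities = 42_795     # traffic fatalities

miles_per_fatality = miles_driven / fatalities
rate_per_100m = fatalities / (miles_driven / 1e8)

print(f"{miles_per_fatality / 1e6:.0f} million miles per fatality")
print(f"{rate_per_100m:.2f} fatalities per 100 million miles")
```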

    That is the admirable safety record policymakers and the public should expect self-driving cars to match or exceed. A liability standard for individual cases that holds self-driving cars to this minimum level of performance is no small thing.  

    Policymakers rightly will demand more from self-driving cars. Self-driving car manufacturers promise increased safety, and they should be held to that higher standard. But it is unlikely that this higher safety goal will be achieved through a liability standard applied in individual cases. As we have seen, industry is likely to be able to defeat any product liability litigation standard that relies on economic and technical feasibility analyses. As a result, litigation in individual cases should not be viewed as aiming to improve road safety. It is primarily a less ambitious attempt to compensate people injured, or whose property is damaged, in a self-driving car accident through no fault of their own. If it has any effect on the level of safety for the public, it would be to help prevent self-driving cars from degrading the current high level of safety provided by human drivers. 

    If policymakers want to move the self-driving car industry to a level of performance exceeding the current level of safety provided by human drivers, this might be done more effectively through a regulatory requirement, rather than through standards for liability in litigation of individual accident cases. For instance, if there is an expectation that computer drivers can and should react to a suddenly appearing pedestrian faster than a human driver would, regulators can design a test and make a specified faster-than-human response time a performance requirement for self-driving cars.  

    Regulators will have to move beyond the current recall systems operated by the National Highway Traffic Safety Administration and by some state regulators such as California’s Department of Motor Vehicles. More needs to be said about establishing a forward-looking and protective regulatory framework for self-driving cars. But if policymakers want to move the industry to a higher level of safety, they will have to devise an upgraded regulatory system that supports this goal.  

    The Brookings Institution is committed to quality, independence, and impact.
    We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).


  • The World Games Chengdu 2025: Final day Preliminary Round games to decide quarter-final line-ups


    Saturday (9 August) brings with it the final round of preliminary group games at The World Games Chengdu 2025.

    All 16 teams will move into the quarter-finals on Sunday (10 August) with the final games in the group stage deciding who will play who in the unforgiving knockout rounds.


    The top two teams at the last IHF Women’s Beach Handball World Championship – Germany (gold) and Argentina (silver) – go into the final day of preliminary group action knowing that wins in their third games will seal top spots in their respective groups and, potentially, the easiest quarter-final opponent – on paper at least.

    First up, the South Americans face debutants China, a team they beat in the main round in Pingtan last year. Armed with Chengdu 2025’s top scorer, Gisella Bonomi (33 points), they should beat the Asian side, who have one win and one draw to their name so far but boast the goalkeeper with the most saves in the tournament to date (Fan Wenna: 17 from 49 shots, 35%).

    It could be a fascinating clash – especially with strong support likely from both sets of fans, as the South Americans have a large contingent following them in Asia. “Our crowd is the best in the world,” Zoe Turnes told ihf.info. “They are really passionate and they feel the passion for beach handball which we do. It’s really nice to have that support and it really gives us that fire inside.”

    Germany are the tournament’s top scorers overall with 88 points. They face fellow Europeans Denmark, a team they know well, having beaten them 2-1 in the semi-finals at China 2024 after losing 2-0 to them in the main round.

    “It will be a hard game because Denmark are a good team. We take all our power to this game,” Germany’s Belen Gettwart, who won gold with the side at The World Games Birmingham 2022, told ihf.info.

    “It is really nice to be back. I really love The World Games and really love the Athletes’ Village. After the games here in China, when I see my phone, I see a lot of messages and I love that. I hope that will be the same after the next games.”

    The other games see Vietnam and Croatia both looking to get their first wins as they face Spain and Portugal respectively – the Portuguese with the meanest defence so far, conceding just 51 points in their two games. Spain beat Vietnam 2-0 in China 12 months ago, while Portugal defeated Croatia 2-1 in the main round at the same event.

    With two wins from two so far, Spain and Brazil lead the way in the men’s competition at Chengdu 2025 – somewhat surprisingly, considering the teams finished fifth and sixth respectively at the 2024 IHF Men’s Beach Handball World Championship on Pingtan Island in China just 12 months ago.

    Brazil face the biggest threat to their perfect record so far as they take on newly-crowned European champions Germany in the first pair of men’s games on day three. Germany will be reeling from their loss on day two to Portugal and looking to get back on track against the South Americans, who can boast the best defensive record so far, conceding just 72 points in their pair of games.

    “It’s the big match of the group to decide who is going to be the first place in the group and who will have the ‘easiest’ match with the fourth-placed team in the other group. It is never easy, but everyone wants to get the smallest road to the final,” said Brazil’s new senior team player Rai Goncalves to ihf.info, before going on to explain how he was selected for the team.

    “It’s a great pleasure to talk about the coach, Antonio Guerre Peixe. He is the greatest,” said the player who turned 20 years old in March. “He saw me play at the 2022 IHF Men’s Youth Beach Handball World Championship when he was with Iran, but he is a Brazilian and has a long story with them.

    “He returned back to coach the national team and gave me an opportunity at a four nations tournament with Paraguay, Uruguay and Argentina. There, I showed him that I can be part of this group and happily he brought me to this tournament. I am really glad that I am with the guys.”

    The other game in the first session will see The World Games title-holders Croatia face Portugal. The Croatians find themselves in unfamiliar territory – without points after two games. They will be desperate not to end the stage without a win while Portugal, with Ricardo Castro recording the most saves so far – 11 – will be hoping to add another win.

    “We are growing with the competition,” said Castro to ihf.info. “We want to continue to win, it’s match-by-match and we want to win to have the best placement for the quarter-finals.”

    Spain last played Tunisia as recently as June, defeating them 2-0 (22:16, 21:17) at stage one of the BHGT in North Africa. The two teams do battle once again, this time at Chengdu 2025, with Spain knowing a win will cement their top-spot finish in the group.

    The final pair of games sees the best attacking force in the men’s competition – Denmark – take on debutants China in the teams’ first-ever meeting.

    Denmark boast a number of top offensive statistics: the most points scored overall (99), the two top scorers in Christian Nielsen (36 points) and Martin Andersen (34), plus the most assists – Frederick Jensen’s 19, ahead of Bruno Oliveira’s 18.

    “I am super-excited. We have never played against China before, so it’s always fun to play against teams we haven’t before, playing the host nation and seeing the Asian perspective of the game,” said Denmark’s Martin Vilstrup Andersen to ihf.info.

    “It is on the central court too so there’s a lot of people back home in Denmark following us and they can watch us play down here in China.”

    Andersen also had the chance to reflect on being one of the flagbearers for Denmark at the opening ceremony on Thursday (7 August).

    “It was huge, I didn’t expect it, but it’s an honour,” he said. “Just a shout-out to the host nation for it as it was the coolest and most crazy – in a good way – opening ceremony I’ve ever seen. The fireworks, all the volunteers. It is something you’re going to remember and I’m happy I did it, because in four years’ time I don’t know if I will be at The World Games because of my age, or if we qualify. It was unbelievable.”

    The World Games Chengdu 2025 – Beach Handball: Day 3 schedule
    (All times local, CST)

    Saturday 9 August 

    Preliminary Group

    Women’s Competition
    1530    ARG vs CHN, POR vs CRO
    1710    GER vs DEN, ESP vs VIE

    Men’s Competition
    1620    CRO vs POR, GER vs BRA
    1800    DEN vs CHN, ESP vs TUN


  • Long-term cancer risk after kidney transplantation: a German primary care cohort study


  • White House to clarify tariffs for gold bars as industry stops flying bullion to US – Reuters

    1. White House to clarify tariffs for gold bars as industry stops flying bullion to US  Reuters
    2. US hits one-kilo gold bars with tariffs in blow to refining hub Switzerland  Financial Times
    3. Gold soars to record high after US tariff surprise  Dawn
    4. White House to Clarify Misinformation on Gold Tariffs  Bloomberg.com
    5. US gold futures pare gains after official says White House to clarify tariff policy on bullion bars  Reuters
