Category: 3. Business

  • OpenAI loses fight to keep ChatGPT logs secret in copyright case – Reuters

    1. OpenAI loses fight to keep ChatGPT logs secret in copyright case  Reuters
    2. OpenAI desperate to avoid explaining why it deleted pirated book datasets  Ars Technica
    3. OpenAI Loses Key Discovery Battle as It Cedes Ground to Authors in AI Lawsuits  The Hollywood Reporter
    4. OpenAI Ordered to Share Documents in Copyright Lawsuit  Legal Reader
    5. US judge declines to reconsider order that OpenAI produce 20m ChatGPT conversations  MLex

    Continue Reading

  • helpful, honest, harmless and hulking

    helpful, honest, harmless and hulking


    They grow up so fast. Anthropic, a maker of artificial intelligence models and rival to OpenAI, has hired lawyers ahead of an initial public offering that could value it at $350bn next year. By that point the company would have reached the not-at-all-grand age of five.

    That makes it a pretty good example of AI companies’ turbocharged growth. As a comparison, Google went public six years after its founding, achieving a valuation of about $23bn. Facebook took eight years to spring into public markets with roughly a $100bn price tag. The geriatric Microsoft waited 11 years and debuted in 1986 at around $800mn.

    Behind the hype, at least, is a business. Anthropic’s main product is its chatbot Claude. That generates revenue, if not yet profit: Anthropic has projected that it could make $70bn in sales by 2028, The Information has reported. That would put its mooted valuation at five times that sum. Meta went public in 2012 at, with hindsight, a multiple of six times its three-year-hence sales; China’s Alibaba at seven times and Palantir at 10.
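
    A quick back-of-envelope check of that figure (a sketch using only the projections cited above; the $70bn is the company’s own forecast, not an audited number): a forward sales multiple is simply the valuation divided by projected sales.

    \[
    \text{forward multiple} = \frac{\text{mooted valuation}}{\text{projected 2028 sales}} = \frac{\$350\,\text{bn}}{\$70\,\text{bn}} = 5\times
    \]

    The same arithmetic run backwards gives an implied projection: a $500bn valuation at five times 2028 sales implies roughly $100bn in projected revenue.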

    When investors do get the chance to buy stock in Anthropic directly, joining existing backers Amazon, Google, Microsoft and Nvidia, they will be benchmarking it particularly closely against OpenAI, the maker of ChatGPT, whose latest valuation of $500bn is also five times its 2028 projections.

    Who wins the bake-off depends on what flavours an investor prefers. Anthropic seems more popular with companies, with a 32 per cent share of the “enterprise” market as of the end of July, according to Menlo Ventures — which it should be noted is an Anthropic investor. That’s helpful, because businesses are more likely to pay up for AI than consumers.

    Anthropic is also a narrower business. It builds models, and that’s about it. OpenAI, meanwhile, is investing in data centres, pocket-sized devices, other companies’ shares and its own web browser. Some might call that sprawl; a venture capitalist might be more likely to call it “full stack”. Seen that way, Anthropic might more closely resemble a Palantir or a Salesforce, where OpenAI has shades of Google parent Alphabet or Microsoft.

    Perhaps the toughest thing to value is what Anthropic might see as its greatest asset: its principles. The company was founded to be a safer alternative to OpenAI, building bots that are “helpful, honest and harmless”. Indeed, the Center for AI Safety deems Anthropic’s products the least likely among major models to “overtly lie” or furnish answers to “hazardous expert-level virology queries”.

    Whether investors will pay a premium for that — or instead demand a discount — remains to be seen. In the meantime, much can happen. By the time it goes public, if it does, AI may have taken another leap forward, or tripped on its own hype. It’s therefore worth thinking about how Anthropic might justify a $350bn valuation — while also being prepared to tear those assumptions up and start again.

    john.foley@ft.com

    Continue Reading

  • Bond investors warned US Treasury over picking Kevin Hassett as Fed chair

    Bond investors warned US Treasury over picking Kevin Hassett as Fed chair


    Bond investors have told the US Treasury they are concerned about Kevin Hassett’s potential appointment as Federal Reserve chair, worrying he will cut interest rates aggressively to please President Donald Trump.

    The Treasury department solicited feedback on Hassett and other candidates in one-on-one conversations with executives at major Wall Street banks, asset management giants and other big players in the US debt market, according to several people familiar with the conversations.

    The discussions took place in November, before Treasury secretary Scott Bessent held his second round of interviews with candidates to replace Jay Powell as Fed chair when Powell’s term expires in May 2026, these people said.

    A White House spokesperson said “the president will continue to nominate the most qualified individuals to the federal government, and until an announcement is made by him, any discussion about potential nominations is pointless speculation”.

    The Treasury declined to comment.

    Hassett, the White House’s top economic official, has emerged as a frontrunner for the position in recent weeks, as Trump and Bessent have whittled down the list of potential candidates from 11 initial contenders.

    Trump and Hassett in the Oval Office © Brian Snyder/Reuters

    Trump on Tuesday said he planned to name his pick for Fed chair “early” next year, and signalled Hassett was a “potential” contender. The US dollar briefly slipped on the president’s mention of Hassett.

    The doubts about Hassett reflect a broader anxiety on Wall Street about the transition at the Fed’s helm as Trump prepares to nominate a new leader of the central bank. Some senior bond market participants would have preferred other candidates, such as BlackRock’s Rick Rieder and Fed governor Christopher Waller, who were seen as more independent from Trump than Hassett.

    Several of the market participants the Treasury contacted said they were worried about Hassett’s alignment with Trump, who has insisted rates should be cut sharply and has called Powell a “stubborn mule” for the central bank’s decision to only modestly lower borrowing costs this year.

    Concerns abound about Hassett’s closeness to a president who has spent the past year attacking the US central bank © Kevin Dietsch/Getty Images

    The bankers and investors were worried that Hassett could agitate for indiscriminate rate cuts even if inflation continues to run above the Fed’s 2 per cent target, according to three people familiar with the conversations.

    “No one wants to get Truss-ed,” said one market participant, referring to the shock in the UK bond market in 2022 triggered by the then-prime minister Liz Truss’s plans for unfunded tax cuts.

    The prospect of a dovish Fed chair was viewed by the big bond managers as particularly worrisome in the event that inflation in the US rises next year. The Fed’s preferred inflation gauge registered 2.7 per cent in August.

    The combination of loose monetary policy and higher inflation could ignite a sell-off in long-term Treasuries, said one market participant.

    Some market participants were also unconvinced Hassett would be able to win round a divided Fed board and corral consensus on rate decisions, the people added. 

    Among the array of participants in those conversations were members of the group of Wall Street bond titans who make up the Treasury Borrowing Advisory Committee, which counsels Bessent on market and issuance questions, according to two people familiar with the matter.

    Kevin Hassett served as a senior economic adviser on the presidential campaigns of John McCain, George W Bush and Mitt Romney © Will Oliver/EPA/Bloomberg

    When Hassett, a career economist whose work has focused on tax policy, met with TBAC earlier this year, he spent little time talking about markets, instead pitching White House priorities, including a discussion about Mexican drug cartels, these people said.

    A Washington insider, Hassett served as a senior economic adviser on the presidential campaigns of John McCain, George W Bush and Mitt Romney, before joining the White House during Trump’s first term as chair of the Council of Economic Advisers.

    He also worked at the conservative think-tank the American Enterprise Institute — and at the Fed, where staffers who worked with him remember him as ambitious. 

    Robert Tetlow, a senior policy adviser who recently left the Fed, said Hassett struck him as “smart, eloquent and self-assured”. 

    However, concerns abound that his closeness to a president who has spent the past year attacking the US central bank will threaten the institution’s independence. 

    “Kevin Hassett is more than capable of doing the job of Fed chair, it’s just a question of who shows up,” said Claudia Sahm, a former Fed economist who is now chief economist at New Century Advisors. “Is it the Kevin Hassett who is the active participant in the Trump administration? Or Kevin Hassett the independent economist?” 

    John Stopford, head of managed income at asset manager Ninety One, added: “I think the market sees him as a Trump stooge, which erodes the Fed’s credibility at the margin.”

    Additional reporting by Ian Smith in London

    Continue Reading

  • Kirkland Advises Kingswood Capital Management on Acquisition of Daramic | News

    Kirkland & Ellis advised Kingswood Capital Management, LP on its acquisition of Daramic, a leading manufacturer and supplier of lead battery separators, from Japanese-based diversified chemical company Asahi Kasei.

     

    Read Kingswood’s press release

     

    The Kirkland team included corporate lawyers Adam Wexner, Matt Dunnet, Mark Keohane and Tyler Martin; tax lawyers Anne Kim and Allison Bray; and technology & IP transactions lawyers John Lynn, Martin Schwertmann and Sam Mohazzab.

    Continue Reading

  • Liverpool store fined £13k after dead mouse and droppings found

    Liverpool store fined £13k after dead mouse and droppings found

    Angela Ferguson, North West


    Mouse droppings were found throughout the shop, a court heard

    A convenience store operator has been ordered to pay more than £13,000 after mouse droppings and a dead mouse were found on the premises.

    Food, including packets of crisps and chocolates, had been gnawed by mice at the Best In Late Shop on Atwell Street, Liverpool, Liverpool Magistrates’ Court heard.

    Liverpool City Council environmental health officers visited the store in October 2024, following a complaint of a mouse sighting, and they found it to be infested with rodents.

    Operator Freshone Ltd admitted five breaches of food safety and hygiene regulations in a hearing on 27 November. The shop has since reopened, and an inspection in May this year awarded it the top food-hygiene rating.

    After the court hearing, Harry Doyle, Liverpool City Council’s cabinet member for health, wellbeing and culture, said conditions at the Best In Late Shop were “truly horrific” when its environmental health officers first visited.

    During that inspection, the officers found clear and concerning signs of inadequate pest control, a council spokesperson said.

    “There were mouse droppings found throughout the shop, including on the display shelving storing food and on floor surfaces,” they said.

    “Mice had gnawed foods and packets that were on sale to customers, including crisps and chocolates, while a dead mouse was also found under a freezer.”

    The court heard conditions were so unhygienic that the shop was immediately shut down because it presented an imminent risk to health.

    The store was also awarded the lowest food hygiene rating of zero.

    A council spokesperson said more than 55 mice were caught during the store’s enforced closure.

    The operator was fined £5,333 and ordered to pay a victim surcharge of £2,000 and £5,694 in costs to the council.


    Mice had been gnawing at packets of food on sale in the store, the court heard

    Best In Late Shop was allowed to reopen following a second inspection, which saw significant improvements, a council spokesperson added.

    They said it was inspected again in May this year, when it was awarded a food hygiene rating of five.

    Doyle added: “We take food hygiene and safety extremely seriously and this goes to show that we will take definitive action if a business fails to meet its legal requirements.”

    He said the council was pleased to see the business had “owned up to its mistakes and has used our recommendations to fully turn things around, which is ultimately what we would want to see happen”.

    Continue Reading

  • Booming IT Investment Cushions US Slowdown; World GDP Forecast Edges Up – Fitch Ratings

    1. Booming IT Investment Cushions US Slowdown; World GDP Forecast Edges Up  Fitch Ratings
    2. OECD upgrades growth outlooks for Türkiye, US, eurozone  Daily Sabah
    3. Global economy resilient, but fragile: OECD Economic Outlook  Fibre2Fashion
    4. OECD maintains global economic growth forecast for 2025, 2026  Qazinform
    5. OECD Keeps 2026 Global Growth Projection at 2.9 Pct  nippon.com

    Continue Reading

  • Healthy Aging and Labor Market Participation in Korea: Republic of Korea – International Monetary Fund

    1. Healthy Aging and Labor Market Participation in Korea: Republic of Korea  International Monetary Fund
    2. Democratic Party to Extend Retirement Age to 65  Chosun Ilbo
    3. Salaried workers dismayed social welfare benefits exceed NPS payouts  The Korea Times
    4. High Employment Rate Among Korea’s Elderly Driven by Financial Necessity  Businesskorea
    5. As the national pension system, introduced in 1988, entered the maturity stage, a case of receiving…  Maeil Business Newspaper

    Continue Reading

  • The Earliest Stage of Embryos Shows Specialized Asymmetry

    The Earliest Stage of Embryos Shows Specialized Asymmetry

    As nearly one in six couples experience fertility issues, in-vitro fertilization (IVF) is an increasingly common form of reproductive technology. However, there are still many unanswered scientific questions about the basic biology of embryos, including the factors determining their viability, that, if resolved, could ultimately improve IVF’s success rate.

    A new study from Caltech examines mouse embryos when they are composed of just two cells, right after undergoing their very first cellular division. This research is the first to show that these two cells differ significantly—with each having distinct levels of certain proteins. Importantly, the research reveals that the cell that retains the site of sperm entry after division will ultimately make up the majority of the developing body, while the other largely contributes to the placenta. While the studies were done in mouse models, they provide critical direction for understanding how human embryos develop. Indeed, the researchers also assessed human embryos immediately after their first cellular division and found that these two cells are likewise profoundly different.

    The research was conducted primarily in the laboratory of Magdalena Zernicka-Goetz, Bren Professor of Biology and Biological Engineering, and is described in a study appearing in the journal Cell on December 3.

    After a sperm cell fertilizes an egg cell, the newly formed embryo begins to divide and multiply, ultimately becoming the trillions of cells that make up an adult human body over its lifetime. Every cell has a specialized job: immune cells patrol for and destroy invaders, neurons send electrical signals, and skin cells protect from the elements, just to name a few.

    It was previously assumed that all of the cells of a developing embryo are identical, at least prior to the stage when the embryo consists of 16 or more cells. But the new study shows that differences, or asymmetries, exist even between the two cells of a two-cell-stage embryo. These differences enable the specialization of the cells—in this case, leading to the formation of the body and the placenta. At this stage, the cells of the embryo are called blastomeres.

    The team found around 300 proteins that are distributed differently between the two blastomeres: some overproduced in one and deficient in the other, and vice versa. All of these proteins are important for orchestrating the processes that build and degrade other proteins, as the complement of proteins supplied by the mother declines and is replaced by those produced by the embryo.

    The location of sperm entry into the cell seems to be a key factor determining which blastomere will play each role. Developmental biologists have long believed that mammalian sperm simply provides genetic material, but this new study indicates that the sperm’s entry point sends important signals to the dividing embryo. The mechanism through which this happens is still unclear; for example, the sperm could be contributing particular cellular structures (organelles), or regulatory RNA, or have a mechanical input. Future studies will focus on understanding this mechanism.

    To make these discoveries, the Zernicka-Goetz lab collaborated with two laboratories with expertise in proteomics (the study of protein populations): the Caltech lab of Tsui-Fen Chou, Research Professor of Biology and Biological Engineering; and that of Nicolai Slavov at Northeastern University.

    A paper describing the study is titled “Fertilization triggers early proteomic symmetry breaking in mammalian embryos.” The lead authors are Lisa K. Iwamoto-Stohl of the University of Cambridge and Caltech, and Aleksandra A. Petelski of Northeastern University and the Parallel Squared Technology Institute in Massachusetts. In addition to Zernicka-Goetz and Chou, other Caltech co-authors are staff scientist Baiyi Quan; Shoma Nakagawa, director of the Stem Cell and Embryo Engineering Center; graduate students Breanna McMahon and Ting-Yu Wang; and postdoctoral scholar Sergi Junyent. Additional co-authors are Maciej Meglicki, Audrey Fu, Bailey A. T. Weatherbee, Antonia Weberling, and Carlos W. Gantner of the University of Cambridge; Saad Khan, Harrison Specht, Gray Huffman, and Jason Derks of Northeastern University; and Rachel S. Mandelbaum, Richard J. Paulson, and Lisa Lam of USC. Funding was provided by the Wellcome Trust, the Open Philanthropy Grant, a Distinguished Scientist NOMIS award, the National Institutes of Health, the Paul G. Allen Frontiers Group, and the Beckman Institute at Caltech. Magdalena Zernicka-Goetz is an affiliated faculty member with the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.

    Continue Reading

  • Experts Caution Against Using AI Chatbots for Emotional Support

    Experts Caution Against Using AI Chatbots for Emotional Support

    • The most popular use case for generative artificial intelligence tools is therapy and companionship.
    • The design of generative AI models makes them poor substitutes for mental healthcare professionals, caution TC experts.
    • But AI and other digital innovations could help address the mental health crisis by improving access in the future.

    [Content warning: This article includes references to self-harm and suicidality.]

    ChatGPT has 800 million active weekly users, who overwhelmingly use the tool for non-work-related reasons. One of the most popular use cases this year? Therapy and companionship, according to a report by the Harvard Business Review. Considering that only 50 percent of people with a diagnosable mental health condition get any kind of treatment, ChatGPT has become a popular replacement for a therapist or confidant amidst an ongoing mental health crisis and rise in loneliness.

    “Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support,” says Douglas Mennin, Professor of Clinical Psychology, Director of Clinical Training at TC and co-developer of Emotion Regulation Therapy. “Unfortunately, in the world, people often don’t get that.” 

    As AI usage for emotional support rises in popularity, the limits and dangers of these tools are becoming apparent. There have been reports of people with no history of mental illness experiencing delusions brought on by the chatbot, and multiple teenagers have died by suicide while engaged in relationships with AI companions. 

    TC faculty members in counseling and clinical psychology, digital innovation and communications shared their expertise, research-backed guidance and recommendations on how we can more safely navigate the era of AI.


    Pictured from left to right: Ayorkor Gaba, Assistant Professor of Clinical Psychology; Ioana Literat, Associate Professor of Technology, Media, and Learning; Emma Master, Ph.D. student; Douglas Mennin, Professor of Clinical Psychology; George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology; and Lalitha Vasudevan, Professor of Technology and Education (Photos: TC Archives)


    The relationships that generative AI tools offer are genuine and impactful, but not a replacement for human connection

    For those who have never had a conversation with ChatGPT, it may seem outlandish to imagine a human forming a deep connection, and even falling in love, with a coded program. However, the responses AI models produce can be almost indistinguishable from human expression. Chatbots can even adopt personalities and mirror the speech patterns of a user. It’s very easy to forget that the messages, which feel so human, are written by a computer program and not by a person with opinions and feelings. The illusion becomes even stronger when using voice mode, where responses are spoken and can even include awkward laughter, as if the bot is embarrassed by its performance.

    Emulating human traits is a core part of generative AI design. “When someone thinks that they’re interacting with an entity that has human qualities, there’s a greater desire to continue the interaction,” explains Lalitha Vasudevan, Professor of Technology and Education, Vice Dean for Digital Innovation and Managing Director of the Digital Futures Institute. 

    Experts say that the emphasis on affirmation can make chatbots appealing for people who are lonely or otherwise lacking community. “People across all demographics are experiencing increased loneliness and isolation, [and] we don’t have the same social safety nets and connections that we used to,” explains Ayorkor Gaba, Assistant Professor in the Counseling and Clinical Psychology program. “So you’re going to see a rise in people feeling connection through these types of tools. However, while these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation and hinder the development of essential social skills.”

    According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation for heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person and when someone’s really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology.

    Generative AI is good at many things, but therapy is not one of them




    One of the main challenges of the ongoing mental health crisis is the inaccessibility of care. More than 61 million Americans are dealing with mental illness, but the need outstrips the supply of providers by 320 to 1, according to a report by Mental Health America. For those who are able to find care, the cost, time and emotional energy required just to get started creates major barriers, as does the commitment required for what Nitzburg describes as the “gold standard” of weekly in-person therapy sessions paired with medication, if needed.

    An AI “therapist,” by comparison, can be created in minutes and will be available 24/7, with no other responsibilities or obligations beyond providing support. It can seem like an attractive option at a time when mental health support services are being cut. In recent months, the national suicide hotline for LGBTQ+ youth was shut down and the Substance Abuse and Mental Health Services Administration — a federal agency that oversees the national suicide hotline and distributes billions of dollars to states for mental health and addiction services — lost nearly half its staff to layoffs. However, TC experts believe that generative AI is ill-suited to provide therapy because of its design: it tends to people-please, it can give false information with confidence, and it is unclear if or how companies are protecting sensitive medical information.

    AI systems are also designed to communicate with users in a way that builds trust, even if the information is incorrect. “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”




    This trust can be amplified when AI models express empathy, but for people experiencing acute distress or who struggle to separate fantasy from reality, affirmation to the point of sycophancy has detrimental effects, caution Mennin and Nitzburg. Research has shown that when AI chatbots, including ChatGPT, were given prompts simulating people experiencing suicidal thoughts, delusions, hallucinations or mania, the chatbots would often validate delusions and encourage dangerous behavior. “The conclusions of that study were clear: the potential for serious harm meant AI [is] simply not ready to replace a trained therapist, at least not yet,” says Nitzburg.

    However, despite the documented risk of serious harm, a majority of ChatGPT’s 700 million weekly users are using the chatbot for emotional support. Recognizing this reality, scholars researching the intersection of mental healthcare and AI — like Gaba and her doctoral student Emma Master — are focused on understanding how and why people are using these tools, including systemic drivers like inequities in healthcare coverage and access as well as medical mistrust. “As psychologists, we have a responsibility to evaluate these tools, track outcomes over time, and ensure that the public is fully informed of both risks and benefits, while continuing to advocate for systemic reforms that make human care more affordable, accessible, and responsive for all,” says Gaba, Founding Director of the Behavioral Health Equity Advancement Lab (B-HEAL) at TC. 

    Digital innovation has the potential to revolutionize mental health care

    The mental health care field is constantly evolving and leveraging new technologies to improve service. The rise of Zoom and virtual meetings during the COVID-19 pandemic made teletherapy a viable and effective treatment option. PTSD treatment for combat veterans now includes virtual reality headsets for more immersive exposure therapy, and AI could also make a positive impact on the field if it’s used to support professionals rather than replace them.

    “Technology offers a set of tools that can reduce barriers to care so people can get something rather than nothing,” says Nitzburg. “The goal is to reduce risk. It’s to widen access to care and encourage people to seek support earlier rather than suffering in silence until they go into crisis.”



    AI companies should slow down and innovate responsibly




    When someone receives harmful advice from an AI chatbot, to what extent is a company that failed to establish safeguards responsible? 

    For Vasudevan, it’s a societal responsibility to hold companies like OpenAI accountable as they develop novel technologies. Institutions like TC also have a role to play in keeping users safe. “The tools being developed by some tech companies are shifting the landscape of what it means to engage in ordinary human practices: what it means to communicate, what it means to seek knowledge and information,” she says. “Schools like ours have a role to play in helping to mediate people’s use of these technologies and develop use cases that are supportive, responsible and generative.”

    Notably, OpenAI is making efforts to reduce harmful responses from ChatGPT. It released a new large language model (LLM), GPT-5, in August to tone down the sycophancy and encourage users to talk to a real person if chats move in a concerning direction. The response from users was overwhelmingly negative: for some, hallucinations increased despite the company’s claims otherwise, while others mourned their digital partner’s lost personality.

    In an effort to course correct again, OpenAI has released several updates intended to meet the needs of all its customers, including age verification and parental controls. In October, OpenAI updated ChatGPT’s model with the input of 170 mental health professionals to help establish guardrails for the chatbot. OpenAI claims that ChatGPT is now 65 to 80 percent less likely to give a noncompliant response.

    For scholars like Mennin, companies need to be patient and embrace the scientific method if their products are providing emotional support. “People want to move fast, they want to make money, they also want to help people quickly. But we still have to slow down,” he says. “We have to use randomized control tests. We have to use tests of mechanism not just efficacies, or not just what works, but why does this work? And that means that you have to have an LLM that’s controlled.”

     


     

    Continue Reading

  • Why Michael Burry thinks the AI bubble will unravel

    Why Michael Burry thinks the AI bubble will unravel

    Continue Reading