Blog

  • Kevin Durant officially traded to Rockets in NBA record seven-team deal | NBA

    Kevin Durant’s trade to the Houston Rockets is official and officially record-setting.

    The deal was approved by the NBA on Sunday as part of a seven-team transaction, a record number of organizations to be part of a single deal, one in which a slew of other trade agreements were folded into one massive package.

    “Kevin impacts the game on both ends of the court and is one of the most efficient scorers in the history of basketball,” Rockets general manager Rafael Stone said. “We liked the growth our team showed last season and believe Kevin’s skill set will integrate seamlessly.”

    Involved in the deal: Phoenix, Houston, Atlanta, Minnesota, Golden State, Brooklyn and the Los Angeles Lakers. It includes a total of 13 players – the headline moves include Durant going to Houston from Phoenix, the Rockets sending Jalen Green and Dillon Brooks to the Suns, and the Rockets acquiring Clint Capela from the Hawks.

    The seven-team involvement in the Durant trade tops the previous record, a six-team transaction last summer that most notably sent Klay Thompson to the Dallas Mavericks. Golden State – Thompson’s former team – obviously was another part of that trade, as were Charlotte, Minnesota, Philadelphia and Denver on varying levels.

    “One of the greatest to ever play the game, we are grateful for the impact Kevin made on our organization and in our community,” Phoenix general manager Brian Gregory said of Durant. “As a member of the Suns, he climbed the scoring charts to become just the eighth player in NBA history to score 30,000 career points, and we wish him the best as he continues his career in Houston.”

    The deal includes at least five second-round draft picks before all terms are satisfied, plus the potential for another second-round pick swap, and both the Hawks and the Timberwolves had to receive cash considerations to make the math work. Some of those draft picks won’t actually be made until 2032, which raises the serious possibility that some players who will go down in history as being part of the trade haven’t reached high school yet.

    Durant averaged 26.6 points last season, his 17th in the NBA — not counting one year missed because of injury. For his career, the 6ft 11in forward is averaging 27.2 points and seven rebounds per game.

    The move brings Durant back to the state of Texas, where he played his only year of college basketball for the Longhorns and was the college player of the year before Seattle took him with the No 2 pick in the 2007 draft.

    Houston becomes his fifth franchise, joining the SuperSonics (who then became the Oklahoma City Thunder), Golden State, Brooklyn and Phoenix. Durant won his two titles with the Warriors in 2017 and 2018, and last summer in Paris he became the highest-scoring player in US Olympic basketball history and the first men’s player to be part of four gold-medal teams.

    Durant is a four-time scoring champion, a two-time Finals MVP and one of eight players in NBA history with more than 30,000 career points.

    “Having played against Kevin and coached him before, I know he’s the type of competitor who fits with what we’ve been building here in Houston,” Rockets coach Ime Udoka said. “His skill level, love of basketball, and dedication to his craft have made him one of the most respected players of his generation, and my staff and I are excited to work with him.”

    Houston sent Green and Brooks to Phoenix, along with the rights to Khaman Maluach from last month’s draft, a second-round pick in 2026 and another second-rounder in 2032. The Hawks got David Roddy, cash and a 2031 second-round pick swap from the Rockets. Brooklyn gets a 2026 second-round pick and another in 2030 from the Rockets, and the Warriors received the rights to Jahmai Mashack from last month’s draft.

  • Alcaraz marches past Rublev while Khachanov and Fritz ease into Wimbledon last eight | Wimbledon 2025

    Every point in tennis is worth the same as the next, but some are more valuable than others. At 3-3 in the third set here on Sunday, after two and a half sets of outrageous hitting, Carlos Alcaraz held a break point to finally move ahead in the match for the first time. He then produced the kind of athleticism and shot-making that make him such an incredible champion, going side to side, sliding across the court and ripping an unstoppable forehand past the onrushing Andrey Rublev.

    Until that point, the Russian had played outstanding tennis, testing the Spaniard with big serving and huge ground strokes while staying calm, which has not always been the case. But Alcaraz, like all great champions, has an uncanny ability to turn it on when he needs to, and from that point on he pulled away for a 6-7 (5), 6-3, 6-4, 6-4 victory that takes his winning streak to 22 matches and secures a clash with Britain’s Cameron Norrie.

    Alcaraz hit 22 aces and even served and volleyed 15 times, winning 13 of those points, as he moved into the last eight for the ninth time in his past 10 slams. He has won 18 matches in a row here, too, and remains favourite to win the title for a third straight year.

    “Andrey is one of the most powerful players we have on tour,” Alcaraz said of Rublev. “You kind of feel he’s pushing you to the limit on every ball. I am just really happy with the way I moved today. I think I played intelligent, smart today, tactically, which I’m really proud about.”

    Taylor Fritz, meanwhile, may be beginning to believe that the tennis gods are on his side at Wimbledon this year. After a narrow escape against Giovanni Mpetshi Perricard of France in the first round, when he trailed by two sets to one and 5-1 in the fourth set tie-break, the American was given an easy passage through to the quarter-finals when his opponent, Jordan Thompson, pulled out due to a hamstring injury.

    The fifth seed was leading 6-1, 3-0 when Thompson called it quits. The Australian had been battling a lower back problem throughout the tournament and pulled up early on clutching his right hamstring. Clearly hampered, especially in his sideways movement, he took a medical timeout at 2-0 down in the second set but after playing one more game, he decided to give up.

    The match lasted just 41 minutes in all, including the timeout, which Fritz will doubtless be grateful for as he prepares to face Russia’s Karen Khachanov, who beat Kamil Majchrzak of Poland 6-4, 6-2, 6-3. Khachanov has won both his matches with Fritz, even if the most recent one was five years ago.

    “I think our games are quite similar overall,” Fritz said. “To be honest, we practise [together] all the time, so we’re pretty familiar with each other’s games. But I think I improved a ton and have become a much, much better player since the last time we played.”

  • T20 Blast: Essex finally post win as Somerset stay top and Notts edge thriller

    Nottinghamshire Outlaws grabbed a narrow and thrilling victory over Leicestershire Foxes, winning by one wicket with one ball to spare, on a day which saw Durham, Derbyshire and Worcestershire all post wins in T20 Blast North Group.

    Durham defended a big total, Derbyshire chased one down and Northants’ slide continued, this time at the hands of Worcestershire.

    Meanwhile, in the South Group, Essex claimed their first win in a rain-affected game with Surrey, while Somerset, Gloucestershire and Glamorgan were also victorious.

    But the prize for the most nerve-racking finish went to Trent Bridge.

    After visitors Leicestershire had posted 188-2 from 20 overs, with half-centuries from Rishi Patel (51) and Sol Budinger (56), Nottinghamshire had looked in control of the chase after 50 from Joe Clarke and a gutsy cameo from Tom Moores (42).

    But the loss of Moores, the seventh wicket to fall, saw the game tighten with 10 still needed from 14 balls and prompted a late wobble.

    Needing five to win and four to tie from the final over, Nottinghamshire managed three singles before a wicket brought last man Farhan Ahmed to the crease with two needed from two balls.

    His hit out to the cover boundary was gathered by his brother Rehan, but with Farhan gambling on a second run, the throw came in just too late to prevent Notts getting home.

  • Carney says new oil pipeline proposal in Canada is highly likely – Reuters

    1. Carney says new oil pipeline proposal in Canada is highly likely – Reuters
    2. Varcoe: Carney says it’s ‘highly likely’ an oil pipeline will make Ottawa’s major project list – Calgary Herald
    3. Stakes are high in Canada’s race to become an energy superpower – Financial Post
    4. Canada’s economy needs more than just pipelines and hydro – Toronto Star
    5. Canada awaits private sector move on Pacific crude pipeline, minister says – Global News

  • Bowers & Wilkins unveils Pi6 and Pi8 true wireless earbuds with improved ear fit design

    The Bowers & Wilkins Pi8 TWS earbuds have an improved ear fit from three years of research and design. (Image source: Bowers & Wilkins)

    The Bowers & Wilkins Pi6 and Pi8 earbuds offer improved fit and ANC performance backed by three years of ear research. The IP54-rated earbuds offer multipoint connectivity and high-resolution music playback.

    Bowers & Wilkins has unveiled the Pi6 and Pi8 in-ear true wireless earbuds with active noise cancellation and IP54 dust and water resistance.

    The company researched earbud fit over three years with over a hundred prototypes to improve the fit of the Pi6 and Pi8. The company says this, along with triple microphones per earbud, has helped improve the ANC performance over prior models. Both come with four ear tip sizes to help users find a good fit.

    The Pi6 supports the Qualcomm aptX Adaptive audio codec, enabling up to 24-bit 96 kHz transmission at up to 420 kbps. The earbuds reproduce music using 12 mm bio-cellulose drivers with 2-band equalization. With ANC on, the Pi6 has an eight-hour runtime, with an additional 16 hours from the charging case, according to B&W.

    The Pi8 supports the Qualcomm aptX Lossless codec, enabling 16-bit 44.1 kHz and 24-bit 96 kHz transmission at ~1 Mbps. Its charging case can also retransmit audio from wired USB-C and 3.5 mm sources to the earbuds using the aptX Adaptive codec. The earbuds reproduce music using 12 mm carbon cone drivers and support 5-band equalization. With ANC on, the Pi8 has a 6.5-hour runtime, with an additional 13.5 hours from the charging case, according to B&W.
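    To put those codec figures in perspective, the quick arithmetic below (our own back-of-the-envelope calculation, not something B&W publishes) compares the uncompressed stereo PCM data rates of the quoted formats with the Bluetooth link rates mentioned above.

    ```python
    # Rough arithmetic on the audio formats quoted above (stereo PCM, ignoring codec overhead).
    def pcm_kbps(bit_depth: int, sample_rate_hz: int, channels: int = 2) -> float:
        """Uncompressed PCM bitrate in kilobits per second."""
        return bit_depth * sample_rate_hz * channels / 1000

    cd_quality = pcm_kbps(16, 44_100)   # ~1411 kbps (16-bit/44.1 kHz)
    hi_res     = pcm_kbps(24, 96_000)   # ~4608 kbps (24-bit/96 kHz)

    aptx_adaptive_kbps = 420            # Pi6 link rate quoted above
    aptx_lossless_kbps = 1000           # Pi8 link rate, ~1 Mbps

    print(f"16/44.1 needs ~{cd_quality / aptx_lossless_kbps:.1f}:1 compression at ~1 Mbps")
    print(f"24/96   needs ~{hi_res / aptx_adaptive_kbps:.1f}:1 compression at 420 kbps")
    print(f"24/96   needs ~{hi_res / aptx_lossless_kbps:.1f}:1 compression at ~1 Mbps")
    ```

    In other words, CD-quality audio fits the Pi8’s ~1 Mbps link with only about 1.4:1 compression, which is why a lossless mode is plausible at that quality, while 24-bit 96 kHz material has to be squeezed by roughly 4.6:1 or 11:1 to fit those links.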

    The Bowers & Wilkins Pi6 has an MSRP of $249 and comes in cloud grey, forest green, glacier blue, and storm grey. The Pi8 has an MSRP of $399 and comes in anthracite black, dove white, jade green, and midnight blue.

  • Israel and Hamas begin ceasefire talks in Qatar as Netanyahu heads to Washington

    Delegations from Israel and Hamas have begun an indirect round of ceasefire talks in Qatar, as Israeli Prime Minister Benjamin Netanyahu heads to Washington to meet Donald Trump.

    Netanyahu said he thinks his meeting with the US president on Monday should help progress efforts to reach a deal for the release of more hostages and a ceasefire in Gaza.

    He said he had given his negotiators clear instructions to achieve a ceasefire agreement under conditions Israel has accepted.

    Hamas has said it has responded to the latest ceasefire proposal in a positive spirit, but it seems clear there are still gaps between the two sides that need to be bridged if any deal is to be agreed.

    For now, Hamas still seems to be holding out for essentially the same conditions it has previously insisted on – including a guarantee of an end to all hostilities at the end of any truce and the withdrawal of Israeli troops.

    Netanyahu’s government has rejected this before.

    The Israeli position may also not have shifted to any major degree. As he was leaving Israel for the US, Netanyahu said he was still committed to what he described as three missions: “The release and return of all the hostages, the living and the fallen; the destruction of Hamas’s capabilities – to kick it out of there, and to ensure that Gaza will no longer constitute a threat to Israel.”

    Qatari and Egyptian mediators will have their work cut out during the indirect talks between Israel and Hamas in trying to overcome these sticking points, which have derailed other initiatives since the previous ceasefire ended in March.

    Israel has since resumed its offensive against Hamas with great intensity, as well as imposing an eleven-week blockade on aid entering Gaza, which was partially lifted several weeks ago.

    The Israeli government says these measures have been aimed at further weakening Hamas and forcing it to negotiate and free the hostages.

    Just in the past 24 hours, the Israeli military says it struck 130 Hamas targets and killed a number of militants.

    But the cost in civilian lives in Gaza continues to grow as well. Hospital officials in Gaza said more than 30 people were killed on Sunday.

    The question now is not only whether the talks in Qatar can achieve a compromise acceptable to both sides – but also whether, at their meeting on Monday, Trump can persuade Netanyahu that the war must come to an end.

    Many in Israel already believe that ending the war is a price worth paying to save the remaining hostages.

    Once again, they came out on to the streets on Saturday evening, calling on Netanyahu to reach a deal so the hostages can finally be freed.

    But there are hardline voices in Netanyahu’s cabinet, including the national security minister Itamar Ben Gvir and the finance minister Bezalel Smotrich, who have once again expressed their fierce opposition to ending the war in Gaza before Hamas has been completely eliminated.

    Once again, there is the appearance of real momentum towards a ceasefire deal, but uncertainty over whether either the Israeli government or Hamas is ready to reach an agreement that might fall short of the key conditions they have so far set.

    And once again, Palestinians in Gaza and the families of Israeli hostages still held there are fervently hoping this will not be another false dawn.

    The Israeli military launched a campaign in Gaza in response to Hamas’s 7 October 2023 attacks, in which about 1,200 people were killed and 251 others were taken hostage.

    At least 57,338 people have been killed in Gaza since then, according to the territory’s Hamas-run health ministry.

  • Considering the Human Rights Impacts of LLM Content Moderation

    Audio of this conversation is available via your favorite podcast service.

    At Tech Policy Press, we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. I spoke to its primary author, ECNL senior legal manager Marlena Wisniak.

    What follows is a lightly edited transcript of the discussion.

    Justin Hendrix:

    Marlena, can you tell us a little bit about what the ECNL does?

    Marlena Wisniak:

    The Center, or ECNL as we call it, has been a human rights and civil liberties organization for over 20 years. We’ve been mostly focusing on civic space, making sure that civil society organizations, but also grassroots orgs and activists, have a safe space to organize and do their work. We’re mostly lawyers, but I have to say we’re the fun lawyers. So we also do advocacy, research, and basically anything that we can do to protect and promote an enabling civic space. My team was founded in 2020, I believe, and has from the beginning focused mostly on AI, but it’s more broadly the digital team at ECNL.

    So we look at how technologies, specifically emerging technologies, impact civic space and human rights. The core human rights and civil liberties we look at are typically privacy, freedom of expression, freedom of assembly – so the right to protest, for example – freedom of association, the right to organize, and non-discrimination. And our key areas of focus substantively are typically surveillance, AI-driven surveillance and biometric surveillance, and, more broadly, social media platforms, which play such a big role in civic space today. And that’s what I’ll be talking about today. But that’s, in a nutshell, my team and how it fits within the broader org.

    Justin Hendrix:

    Well, I’m excited to talk to you about this report that you have authored with help from a variety of different corners, but it’s called Algorithmic Gatekeepers, the Human Rights Impacts of LLM Content Moderation. So this is a topic that we’ve been trying to follow at Tech Policy Press fairly closely, because I feel like it is the sort of intersection of a lot of things that we care about around social media, content moderation, online safety, free expression, and then, of course, artificial intelligence. And LLMs, generative AI generally, the intersection of these two things, I think, is probably one of the most interesting and possibly under-covered or under-explored, at least to date, issues at the intersection of tech and democracy.

    So I want to just start by asking you how you did this report. It’s quite a significant document. A lot of research obviously went into this, including gathering some new information, not just combining citations, as many people do when they produce a PDF white paper. What got you started on this and how did you go about it?

    Marlena Wisniak:

    Yes, you’re right. It was really a heavy lift. It was a research project that went on for about a year, and we collaborated with great folks including yourself. I remember, I think last year we had a call to hear your thoughts, and shout out, before I go deep into context, to Isabelle Anzabi, who was our fellow last summer, and really helped me just go through a lot of papers, mostly in computer science, and the Omidyar Network, who provided generous support for this project.

    And so how it started: I accidentally fell into AI in 2016, but I come from content moderation in the 2010s – ’11, ’12 – so I’d say the early days of content moderation. So automated content moderation has always been a big focus of mine. I was also at Twitter at some point, overseeing their content governance and legal department. And living in San Francisco right now, really, the big talk on the street is always LLMs, and GenAI more broadly.

    And so I started hearing various use cases of LLMs for content moderation. Most of the talk, I will say, and most of the focus of the research and civil society community, is how to moderate LLM-generated content – so how to moderate ChatGPT, for example, or Claude. That is, of course, super important. And I wanted to look at it from another angle, which is: how are LLMs used for content moderation? And this, I will say, Justin, has been a little bit of a chicken-and-egg conversation, or reflection, because the way LLMs moderate content also impacts how LLM-generated content will be moderated, and how they’re moderated impacts how they moderate content. So it’s hard to separate one from the other, but I did choose to have that focus. And it’s interesting because it’s very much a nerdy topic and yet has real-world implications, and it’s very hyped up.

    So one of the things that I always love to do is go beyond the hype. I’m not a hype person. In fact, now I hate AI. It was fun working on AI in the 2016, ’17, ’18 years when nobody really cared about AI. Since ChatGPT was released, the only thing people ever want to talk about is AI. So now I want to scream. But all this to say that I did want to unpack some of the real implications – what the promises are, or really, I’d say, not even realistic promises, but the types of impacts we want to see – because I do believe that it can be helpful, and also, as this is an emerging space, prevent any possible harm. And I think folks listening to this podcast probably know that human moderation is extremely complicated. It’s horrible for workers. It doesn’t always, and “always” is an understatement, produce good outcomes.

    And so machine learning came as a solution to that. It was expanded during COVID, and sort of came as this silver-bullet solution. There has been increasing research showing the limitations of that. And so now LLMs are the new white horse – is that an expression? – the savior, and there’s a lot of hype. So that’s where I came from. It’s like, okay, let’s go deeper into this. Let’s review a lot of computer science papers where some of this more rigorous qualitative and quantitative work has been done, translate it for policy folks, and bring a human-rights approach, because that’s my background. I’m an international human rights lawyer.

    Justin Hendrix:

    So readers of Tech Policy Press have at least had multiple pieces that we’ve posted on the site about this issue, and often there’s been this sort of question about what is the promise potentially to offset the dangers to the labor force that currently engage in content moderation on behalf of platforms, but then also, of course, there are various perils that we can imagine as well. We’re going to get into those a little bit.

    But I think one thing that distinguishes your report that I wanted to just start with perhaps is that you include a technical primer. You’ve got a kind of set of definitions and, I think usefully, some distinctions between what’s going on with LLMs for content moderation versus more standard machine-learning classifiers and recommendation mechanisms and other types of algorithmic models that have been used in content moderation for quite some time now. What do you think the listener needs to know about the technical aspect of this phenomenon, of the use of LLMs for content moderation at scale from our vantage right now in June of 2025?

    Marlena Wisniak:

    Yeah, I mean, I encourage folks to look at the technical primer; the audience for that is mostly folks who aren’t very familiar with either industry jargon or technical terms. Leveling up, I think one thing that is really critical to consider when thinking about LLM-driven content moderation is that you typically have two layers. So one is the foundation model layer, or the LLMs. LLMs are – I should have started with this – large language models. And that’s pretty explicit: it’s a large language model, so people see it as God, or as sci-fi technology. It’s really not. It’s really big data sets.

    And I think we often forget that. So if we think about traditional machine learning, the difference is that this is a larger data set. And so there’s this implication, obviously, for privacy and other rights that we can explore later down the line. But one thing to consider: they’re very, very big, enormous data sets that require a lot of computing power. And so really, there’s only a handful of companies right now building these models.

    But the ones that we looked at in more depth are Llama by Facebook – sorry, Meta – ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. And then right as I was finishing up the report, DeepSeek came out. So there are emerging models as well, but it’s still a very small number of foundation models, given how much data and compute they use. And I often say they’re really not that technically complicated. They’re just bigger. And so from a concentration of power perspective, that has a lot of importance.

    And so then the platforms that I mentioned, they both develop and often use… they develop the LLMs and they use them for content moderation. But what happens for all the other ones, like Discord or Reddit, or Slack? Although Slack might be bought by one of them. But anyway, there are so many other platforms. Typically, they do not build their own LLMs. They will either have a license with one of the foundation-model companies and then fine-tune the model, or they will use one of the open-source models, like DeepSeek or Llama or something they find on Hugging Face, and then fine-tune it for their purposes.
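    [To illustrate the pattern described here – a smaller platform reusing an off-the-shelf open model as a policy classifier while keeping humans in the loop – below is a minimal hypothetical sketch using the open-source Hugging Face transformers library. The model choice, labels, and threshold are illustrative examples, not anything prescribed in the report.]

    ```python
    # Minimal illustrative sketch only: a platform wrapping an open, pre-trained model
    # from Hugging Face as a policy classifier, with low-confidence or violative cases
    # routed to human review. Model, labels, and threshold are hypothetical examples.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-classification",
        model="facebook/bart-large-mnli",  # stand-in for whatever open model a platform licenses or fine-tunes
    )

    POLICY_LABELS = ["harassment", "hate speech", "spam", "benign"]
    AUTO_ALLOW_THRESHOLD = 0.90  # below this, defer to a human moderator

    def moderate(post_text: str) -> dict:
        result = classifier(post_text, candidate_labels=POLICY_LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        return {
            "label": top_label,
            "score": top_score,
            # Never auto-remove: anything that isn't a confident "benign" call goes to a person.
            "action": "allow" if top_label == "benign" and top_score >= AUTO_ALLOW_THRESHOLD
                      else "flag_for_human_review",
        }

    print(moderate("You are all idiots and should leave this forum."))
    ```

    Note that the sketch deliberately routes everything except a high-confidence “benign” call to a human reviewer rather than removing content automatically, in keeping with the report’s caution about automated removal discussed later in this conversation.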

    So that’s one thing to understand from a technical standpoint: how LLMs work and how they’re used in practice. And then another thing I will flag is that LLMs are a subset of generative AI. That’s the term we most commonly know. Generative AI today has, to some extent, become synonymous with ChatGPT. It’s much broader than ChatGPT, but LLMs are one subset of that, or foundation models as well.

    And then maybe one last thing I’ll flag is multilingual language models, which are those that are trained on text data from dozens to sometimes hundreds of languages simultaneously. And so the idea is that they will be more capable of processing and generating inputs and outputs in multiple languages. And we can talk about whether that works or not. There’s fantastic research that has been done that inspired me a lot for my work.

    Justin Hendrix:

    Yeah, I’ll just say that what’s great about the primer, and again about this report, is that it details at least some of the technical ambition that people have for using large language models in content moderation, and the possibilities it opens up for the sorts of things that have been very hard or difficult to do, certainly at scale, with today’s mechanisms. A big one, as you say, is servicing many languages that social media firms might previously have decided were simply off limits for their consideration, because it’s just not feasible or profitable for them to address certain markets or certain languages at the type of scale that people in those places might believe is necessary in order to do a good job. So that’s been a kind of consistent complaint from civil society for a long time.

    But there are other things here that you point to that seem to fit in the category of promise, such as possibly more robust types of nudges or interventions in what people post, or after the fact in what they post. I know I’ve talked to people about the idea that we can imagine content moderation with LLMs starting before you ever post something – a kind of pre-facto content moderation – possibly in interaction with the content moderation system, as it were, at that point, before you even push something live to your feed. But I don’t know. What else did you learn in terms of just studying these things technically, about the possibilities they open up?

    Marlena Wisniak:

    Yeah, thanks for bringing it up. And the report does obviously highlight harms and also explore promises for each individual right. So I’m with you. One of the key findings of this research was that probably the most promise of LLM-driven content moderation is not in removing content, or even moderating it through a ranking system or curating content, but really anything on procedural rights. So it’s before posting, like you said – nudges. Even though I will say that we also don’t want this Big Brother-type experience where we’re typing something and, oh dear Lord, the algorithm has found that this may be controversial. But yeah, there can be nudges, there can be suggestions for reformulating something that could be abusive, which is less invasive. They could even suggest other information: for example, if someone is posting something clearly wrong – like, factually incorrect – about the Ukraine war, or elections, like civic integrity stuff, they could suggest checking out xyz.gov, right, elections.gov or something.

    And also, on the flip side, appeals and remedy. When a user posts content, they can immediately know that it has been flagged. Especially for content that would be automatically removed, they can get a more personalized automated notice that the content has been flagged for review or has been removed, and more personalized information about how they can appeal that decision. So there’s all this kind of user interaction that I think is pretty cool and exciting.

    There’s also the possibility for personalized content moderation. So let’s say one user really welcomes gore or sensitive content, borderline content that doesn’t violate either the platform’s policies or the law. They can adapt and adjust their own moderation, as opposed to someone who really doesn’t want to see anything remotely sensitive or related to some topics because of any trauma or just dislike or whatever. That can be helpful.

    And I’ll give a shout out here to Discord, with whom we collaborated for a year on engaging external stakeholders as they developed ML-driven interventions to moderate abuse and harassment on their platform, specifically for teens. And so we really worked with a broad range of stakeholders, including youth and children, which is interesting, in understanding how they would like to see AI-driven innovation and intervention. So it wasn’t only LLMs, it was also traditional machine learning.

    But yeah, so I think, to sum up, creative uses of LLMs for moderating content – and by moderating I mean broadly – are something that I support much more than automated removal. One of the conclusions of this report is that, at least today, automated removal is still too risky, potentially harmful, and just ineffective. There will be too many false positives and too many false negatives, both of which fall disproportionately on already marginalized groups, who tend to be, as most folks probably know, disproportionately silenced by platforms and also disproportionately targeted by violative content. So one of the main findings on how we can use LLMs safely is that it’s more on the procedural side than actually on the content.

    Justin Hendrix:

    We’ll see how platforms attempt to go about this. We’re already seeing some examples of uses of LLMs in the wild. There’s been reporting even this week about Meta’s new community notes program using LLMs in certain ways. So I think there’ll probably be as wide a variety of applications of LLMs as there have been of machine learning in content moderation, and we’ll just, I suppose, see how it goes. I mean, different platforms – who knows, maybe one or two will err towards creating an AI nanny state, a hyper-sanitized environment that people may recoil from, or regard as overly censorious or what have you. And yet it’s possible to imagine, as you say, lots of different types of interventions that people might regard as useful or helpful in whatever way.

    But I want to pause on maybe thinking about the benefits and dig a little more into the potential human rights impacts, because that’s where, of course, you spend the bulk of this report, concerned with things like privacy, freedom of expression and information and opinion, questions around peaceful assembly and association, non-discrimination, participation. Take us through a couple of those things. When you think about the most significant potential human rights impacts of the deployment of large language model systems in content moderation at scale, what do you think is most prominent?

    Marlena Wisniak:

    So I’ll just highlight some of the harms most specific to LLMs. One thing to consider is that LLMs often exacerbate and accelerate already-existing harms done by traditional machine learning, and traditional ML accelerates and exacerbates harms committed by humans. So scale can be good for accuracy and speed, obviously, but there’s another side to that coin.

    So some of the things you’ll find in the report are kind of an LLM-ified analysis of automated content moderation, but I will single out a few really new concerns related to LLMs. One of them, coming back to the concentration-of-power issue that I mentioned before, is that any decision made at the foundation model level, unless it is proactively fine-tuned away at the deployment level, will trickle down across platforms.

    So to give you an example, if Meta decides that pro-Palestinian content is considered violent or terrorist content – and there has been a lot of reporting to show that – then if another platform uses Llama without specifically changing that, any decision that Meta makes at the level of Llama will trickle down to the other platforms. So that’s something to consider from a freedom of expression angle: generalized censorship if it’s a false positive, and at the same time, content that should be removed will not be removed if the foundation model does not consider it harmful. That interaction is particularly important because we’ve never seen it before, to my knowledge at least, in the dynamics of content moderation.

    Another big thing, around freedom of information, for example, is hallucinations. That is a very stereotypical GenAI problem. And for folks who don’t know, hallucinations are content generated by the AI system that is just made up and wrong. The weird thing is that it does this in a way that seems so confident and so right, and it is just nonsense. So it’ll make up academic papers, or it’ll make up news articles or any kind of facts.

    So if platforms use this to moderate mis- and disinformation, that will just be inaccurate to begin with. And it’s hard sometimes to parse that when you have pretty convincing content. And even if, for example, human moderators were to use LLMs or GenAI to help them moderate content, if they see this really elaborate article about how, whatever, Trump won the 2020 elections, for example, and they’re not familiar with the topic, that could form the basis of their decisions, and it’s just plain wrong. So that’s a new harm and risk.

    Another one that I found super interesting – and this, I will say, was me: a lot of this paper was me trying to envision harms and then probe them with engineers or technical folks, and also non-technical folks, to ask them, does this make sense? Could this be true? – is around freedom of peaceful assembly and protest. One thing to consider is that protests and contrarian views are, by definition, anti-majority.

    You have a minority expressing themselves. You go against powerful interests like governments or companies, or even just the status quo. And what does that mean? This data is not well represented in the training datasets. Because machine learning, I often say, is just stats on steroids. So if a view is predominant in the dataset, even if it’s completely wrong, that is the output, right? Machine learning never gives a real decision. It gives a prediction about what is statistically probable. So when you have protestors or journalists – investigative journalists, for example – or anybody bringing up new stuff, that will not actually show up in the dataset and therefore will not be moderated well. And let’s give platforms and those deploying content moderation systems the highest benefit of the doubt: they probably want to moderate it well; it’s just that the system will not function unless specifically fine-tuned, because protests fall outside the curve of the statistical data.

    And that’s also the case for conflicts or exceptional circumstances, crises. These events are, by definition, exceptional, and therefore fall outside the statistical curve and are not moderated well. And that’s another key finding. For freedom of association, one thing that could be interesting here is that some organizations can be mislabeled. Actually, you know what, Justin, forget that one. It’s too long. I’ll skip it.

    Two last pieces I’d like to highlight that we found were really interesting. One was on participation. So on one side, LLMs actually have the potential to support more participatory design by enabling customizable moderation. Like I said before, users can have personalized content moderation, or perhaps in the future use AI agents to moderate their own content as they want. On the flip side, in practice, affected communities are largely excluded from shaping content moderation systems. That’s not new. That has also happened in machine learning.

    Generally, it’s mostly white tech bros in San Francisco or Silicon Valley, so especially marginalized groups and those in the global majority are excluded from that. The addition with LLMs is that they are typically “improved” through a method called reinforcement learning from human feedback, or red-teaming. So you have folks thinking about, in the case of red-teaming, what are potential bad, worst-case scenarios and testing it from an adversarial perspective.

    Same for reinforcement learning. They go through these models and they “fix” them; they reteach them how to learn. The problem is that the people who do that are usually Stanford graduates, those in Silicon Valley or other elitist institutions, and it’s very rare. I have never heard of folks from marginalized groups being invited to participate in these kinds of activities. I myself, for example, have been invited – but you have to be in a niche kind of AI group to do that, and I personally have not done it – and everybody who I’ve spoken to who has, I’ll say, is part of a very homogeneous group. And so basically what that means is, yes, there are efforts to reduce inaccuracy and improve the performance of the LLMs. However, the people who do that are typically very homogeneous. So it’s just a snowball effect.

    And the last piece, on remedy, is that while there are potential promises for better access to remedy – like I said before, around user notification, helping people identify remedy and appeals mechanisms, speeding up appeals – there’s also fundamentally a lack of explainability and transparency, and that can create barriers to remedy. Another issue, like I said before, is that there are these two layers: the layer of foundation models – the ChatGPT, the Claude, the Gemini – and then the social media platform that deploys them. And it’s hard to know where to appeal and how to appeal. The foundation model companies themselves don’t really know how a decision was made. Social media platforms know that even less. Do you appeal to the platform? Does the platform then appeal again to the third-party LLM? Where does the user fall into this? So accountability becomes fragmented, and there’s a lot of confusion and lack of clarity around how to go about that.

    Justin Hendrix:

    So I want to come to some of your recommendations, because you have recommendations both for LLM developers and deployers and, of course, for those who are potentially applying these things inside of social media companies. But your section on recommendations to policymakers is perhaps mercifully brief. You’ve only got a handful of recommendations there. It’s clear, I think, just from where the onus in the report’s recommendations falls, that you see it largely as something the private sector needs to sort out: how they are going to deploy these technologies, or not.

    But when it comes to policymakers, what are you telling them? I see you’re interested in making sure they’re refraining, on some level, from mandating the use of these things. I suppose it’s a possibility that somebody might come along and say, “Oh, in fact, we demand that you use them.” I was trying to think of a context for that, but then I found myself thinking about some of the laws we’ve seen put forward in the United States even, where there have been segments of those laws, I think in some of the must-carry laws, where there have been these ideas around transparency of moderation decisions. I feel like I’ve read amicus briefs where some of the people opposed to those laws would make arguments like, “Well, it’s just simply not possible to give explainable rationale to every single user for every single content-moderation decision that’s made.”

    Well, presumably, LLMs would make that quite possible, and you could imagine a government coming along and saying, “Somehow, in the interest of free expression, we would like to mandate that you use artificial intelligence to explain any content moderation decision that you might take.” You say that’s a bad idea, along with other things, but what would you tell policymakers to be paying attention to here?

    Marlena Wisniak:

    Yeah, I mean, that’s a very astute… you observe that well. Most of the recommendations are to LLM developers and deployers. The reason for this is not that we only see the onus on the private sector and not the public sector; it’s that, for practical reasons, this part of the research was mostly about assessing the human rights impacts. The recommendations piece was really the last part, and we didn’t have that much capacity to go deep. So hopefully we will continue with a second iteration of this report, and that will really zoom into the recommendations.

    I will say, then, on the policymaking side, that a lot of our work at ECNL is on policy and legal advocacy. We’ve been working behind the scenes on the AI Act for the past five years, and right now there are conversations around the GPAI code in the EU on general-purpose AI. And the reason why I didn’t go deep here is that, one, we don’t want a specific AI content moderation or LLM moderation law. We have the DSA in Europe. The US I’ll just set aside, because right now there’s a lot going on. So we’re not calling for a specific LLM content moderation law.

    And the EU already has the DSA and the AI Act. And I’d say a lot of the foundational asks to policymakers would be kind of basic AI governance: data protection, human rights impact assessments, stakeholder engagement. So I added these big categories; I didn’t go really deep into them. It’s mostly how can we not fuck it up, to be honest. And I think content moderation, as you know, is a very fraught topic where even folks who are well-intentioned just don’t understand it enough. So sometimes you will have these claims that, let’s say, platforms should remove problematic content within an hour. And it’s like, okay, cool. In theory, that sounds great. What does that mean in practice? It means a lot of false positives. It means that marginalized groups will be disproportionately impacted, including sex workers, racialized groups, queer folks. And you’ll have to use automated content moderation, which has all the false positives and false negatives that we’ve reported on.

    So the key recommendations we made to policymakers here are very foundational, I’d say. One, do not mandate LLM moderation, exactly for the reason that you expressed before. Two, maintain human oversight. So if LLMs are used to moderate content, and especially to remove content, there should still be a legal requirement for platforms to integrate human-in-the-loop systems, meaning that humans will review whatever decision the LLM makes. Three, kind of broad transparency and accountability metrics and requirements – that’s very DSA-esque… sorry, I should have said Digital Services Act. And a lot of that is really about transparency, like mandating disclosure of how LLM systems function and how they’re used. The reality, Justin, is we don’t know. I had several off-the-record calls with platforms – off the record meaning I couldn’t publish the names. They also gave me very vague information. There’s some information that was published by them, and you’ll see it in the report.

    OpenAI has said they use LLMs for content moderation. Meta says they’re beginning to play around with it, as has Google with Gemini. But overall, we don’t know. I definitely don’t know when it’s used when I use social media. We don’t know accuracy rates, we don’t know how they’re used in appeals, or how they’re enforced. So really, require platforms to notify users about LLM moderation actions. And again, that’s nothing new, I would say. That’s just taking the prior Santa Clara Principles on transparency and accountability, or the DSA, or kind of mainstream civil-society asks, and implementing them for LLMs.

    The two last ones that we highlighted, one is mandating human rights impact assessments. So hey, platforms, good news. I did one here, so you can use… My goal is that this will be a starting point for platforms to have basically an HRA handed to them on a silver platter, and then obviously use this as a starting point to look at how they implement LLMs on their platform. It’ll be specific to each platform.

    But on that note, one thing that policymakers could do is make HRAs mandatory for LLM developers and platforms that deploy them, both before deployment and throughout the LLM lifecycle. So for example, the pilot with Discord was at the ideation phase, so design, product design. Before they even developed the product, they consulted with a lot of folks to see how an LLM or machine-learning-driven system could be helpful. Is it nudges? Is it for removing content? Do we not want that? Why? And then continue that throughout all the way through development and deployment and make it accessible for external stakeholders.

    There’s so much expertise in the room, and I often say to platforms that, you know, trust and safety teams, policy teams, human rights teams tend to be underfunded. Just go to civil society that has such extensive knowledge, or journalists like yourself and academics and just drink their wisdom, because there’s a lot of stuff out there.

    Justin Hendrix:

    This report seems like a good jumping-off point for a lot of folks who might be interested in investigating or collecting artifacts of how the platforms are deploying these things or generally kind of trying to pay attention to these issues and trying to discern whether the introduction of LLMs in this context is on balance a good thing, a bad thing, or perhaps somehow neutral or indiscernible. With regard to the overall information integrity environment, what would you encourage people to do next? What would you encourage them to go and look at? What threads would you like to see the field pulled from here?

    Marlena Wisniak:

    Yeah. One core piece that we didn’t talk about much is the multilingual piece of it, so how this can work in different languages. And I’ll just give a shout out here to really cool efforts in the global majority to develop community-driven, local, smaller LLMs or data sets, like Masakhane in Africa. There are really cool community-driven initiatives that kind of go beyond the Silicon Valley profit-first massive monopolization and its dynamics. So that is one thing, and I really encourage not only researchers but also platforms to talk to them. A lot of the platforms that I spoke with didn’t even know these existed, and if I say that this report is an HRA handed to them on a silver platter, these initiatives have really cool data sets that would be really helpful, especially to smaller platforms, and they could just plug these into their own model.

    So that’s one thing that I hope research and industry will move towards: more languages, more dialects, understanding that it’s not only a difference between English and other languages. It really is a colonial, imperialist dynamic where English or French or German or Spanish will be much better moderated, languages that are close to these ones are better, and then obscure, poorly researched languages work very, very poorly. And you can read in the report that the reasons are both that the data sets do not exist and that they’re just bad quality, because there’s not enough investment. So I really would encourage platforms to invest more resources into that, more participation, and to proactively include stakeholders.

    The other area I would love to see is just open conversations between platforms and civil society. And like I said, this is a nerdy topic – LLMs and content moderation doesn’t roll off the tongue – so hopefully this report can give folks with expertise in content moderation more context. And then I would love to see more evidence, new ideas. Like I said, some of these things were my own kind of “How could this work?”, but probed through many conversations with folks, and I really would want to see more thinking, more assessment of impacts, and more evidence as well.

    Like this paper, it’s long, 70 pages. I didn’t mean to make it this long, but there’s a lot of stuff. And I’d say the last part is computer science papers move so fast, incredibly fast. It’s hard to keep up, and they’re very theoretical. So to the extent that there can be more collaboration between technical folks who come from the machine learning, AI, or computer science field, and policy and human rights, I think we’ll actually be able to build much better products and push policymakers to regulate this stuff better.

    Justin Hendrix:

    I appreciate you taking the time to speak to me about this, and I would encourage my readers to go and check out this full report, which is available on the ecnl.org website. I will include a link to it in the show notes. Thank you very much.

    Marlena Wisniak:

    Thanks, Justin.

  • Sanitation Challenges with Mobile Food Trucks

    The explosive growth of food trucks across American cities has introduced not only new and exciting cuisines, but also unique food safety complexities as a result of their compact, mobile kitchens. These operations face distinctive sanitation hurdles compared to traditional restaurants, primarily stemming from spatial constraints and operational mobility.  

    Temperature Control Vulnerabilities  

    Maintaining proper food temperatures remains a critical challenge, with Suffolk County, NY, citing improper holding temperatures in 43% of food truck violations.  Limited refrigeration space and power fluctuations during transit increase risks of foods entering the “danger zone” (40°F-140°F) where pathogens multiply rapidly. The confined workspace also complicates monitoring, as thermometers may be inaccessible during peak service.   

    Hand Hygiene Limitations  

    Inadequate hand washing accounted for nearly 19% of violations in the same study.  Tiny kitchens often accommodate only one handwashing sink, which may be obstructed during service. Water tank capacities limit available water for frequent washing, while high-volume periods pressure staff to skip proper 20-second protocols.   

    Cross-Contamination Threats  

    Proximity of raw and ready-to-eat ingredients in tight quarters elevates contamination risks. Suffolk County documented unprotected food storage in 17.8% of inspections.  Single cutting boards may handle proteins and produce consecutively, while utensil storage challenges,  such as knives kept in drawers rather than holders, further exacerbate risks.   

    Spatial and Operational Constraints  

    The average food truck kitchen spans 50 to 80 square feet, complicating:  

    • Separation of cleaning chemicals from food zones  
    • Implementation of first-in-first-out inventory systems  
    • Access to hidden surfaces for sanitation (e.g., under equipment)   

    Waste management is particularly challenging without dedicated disposal areas, increasing the risk of pest attraction.

    Regulatory and Inspection Gaps  

    Jurisdictional variations in codes create compliance complexity, says leading food poisoning law firm Ron Simon & Associates. Meanwhile, inspections frequently occur during non-operational hours, when temperature controls and handling practices can’t be evaluated. California researchers noted that 90 of 95 trucks had at least one critical violation during operational assessments – risks missed during stationary inspections. Additionally, 16.9% of Suffolk County violations involved absent certified managers.

    Innovative approaches, including mobile-specific manager certifications, unannounced operational inspections, and space-efficient sanitation protocols, are emerging to address these challenges without compromising the culinary innovation that defines the industry. 

  • Stalkerware seller exposed by sloppy SQL security • The Register

    Infosec In Brief A security researcher looking at samples of stalkerware discovered an SQL vulnerability that allowed him to steal a database of 62,000 user accounts. 

    Eric Daigle published a blog post this week detailing how he found a piece of stalkerware he wasn’t familiar with, Catwatchful, and then quickly proceeded to pwn it into temporary oblivion. 

    Stalkerware, or spyware, is a form of software used to track people’s computer activity. It is typically installed by parents, spouses, or employers with physical access to the user’s computer, and tends to be undetectable and very hard to remove. The number of stalkerware installations has been steadily on the rise, even as such apps have repeatedly been breached by online vigilantes and security researchers.

    According to Daigle, Catwatchful is a spyware kit that promises to be undetectable and unstoppable, with only the controller able to make use of it on an infected device or delete it. While it “works really well” for its intended purpose, Daigle also noted that Catwatchful made two POST requests to separate servers when he tried to log into the app. 

    One of the two servers, it turned out, had no appreciable security system installed, allowing Daigle to copy plaintext login details for all 62,000 Catwatchful accounts in the group’s system, including the administrator’s. Oops. 

    Working with reporters from TechCrunch, Daigle even managed to help identify the alleged administrator of Catwatchful, as well as get its hosters to take it down.

    Unfortunately for its stalkees, Catwatchful has remained online as of this week, Daigle says, with temporary sites stood up to replace seized domains and patches deployed to address the SQLi vulnerability.
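    The article doesn’t detail the flaw beyond calling it an SQL injection, so purely as a generic illustration of the bug class (hypothetical table and data, nothing from the actual service), here is the difference between a string-built query and a parameterized one:

    ```python
    # Generic SQL injection illustration using Python's built-in sqlite3 module.
    # Hypothetical schema; not Catwatchful's actual code or database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin@example.com', 'hunter2')")

    attacker_input = "' OR '1'='1"

    # Vulnerable: attacker-controlled text is concatenated into the SQL string,
    # so the OR '1'='1' clause matches every row and dumps the whole table.
    vulnerable = f"SELECT email, password FROM users WHERE email = '{attacker_input}'"
    print(conn.execute(vulnerable).fetchall())   # returns every account

    # Safer: a parameterized query treats the input as a value, not as SQL,
    # so the injection attempt simply matches nothing.
    safe = "SELECT email, password FROM users WHERE email = ?"
    print(conn.execute(safe, (attacker_input,)).fetchall())   # returns []
    ```

    Storing credentials in plaintext, as the article notes Catwatchful did, is a separate failure compounding the injection bug.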

    Critical vulnerabilities of the week: Chrome zero day patched

    Google moved fast this week to patch a zero-day in the V8 JavaScript engine after it was found being exploited in the wild, so don’t skip this stable channel update for Chrome Desktop on Windows, Mac, and Linux. 

    The patch addresses CVE-2025-6554 (CVSS 8.1), a type confusion vulnerability in V8 that allows a remote attacker to perform an arbitrary read/write via a specially crafted HTML page.

    Elsewhere:

    • CVSS 9.6 – CVE-2024-45347: Xiaomi Mi Connect Service APP contains a logic flaw that can allow an attacker to gain unauthorized access to a victim’s device.

    Another Swiss government partner gets ransomed

    The Swiss government said this week that the Radix foundation, an NGO dedicated to healthcare promotion, was hit by ransomware. Given Radix counts a number of government agencies among its customers, the government saw fit to report the matter even though no government data was stolen. 

    “As Radix has no direct access to Federal Administration systems, the attackers did not gain entry to these systems at any time,” the Swiss government said – but government data on Radix’s own systems isn’t necessarily safe, mind you. 

    While it hasn’t shared how many government documents may have been exposed this time around, it could be a sizable amount. The Play ransomware gang hit a Swiss government IT supplier last year and made off with some 65,000 government files among more than a million more stolen from the biz. 

    IDE extension verification is easy to spoof, say researchers

    Software supply chain security is a critical part of modern cyber hygiene, and that includes verification of extensions used in IDEs. Unfortunately, it’s easy to spoof such verification in several top IDEs, researchers from OX Security claim.

    The OX team, makers of application-level security products, published research this week showing that verification in VS Code, Visual Studio, and IntelliJ IDEA can all be spoofed, allowing a malicious IDE extension to pass itself off as a trustworthy one.

    “The ability to inject malicious code into extensions, package them as VSIX/ZIP files, and install them while maintaining the verified symbols across multiple major development platforms poses a serious risk,” the OX team said. 

    With verification marks no longer sufficient to judge authenticity of IDE packages, OX recommends only installing extensions directly from official marketplaces rather than from files, while extension developers and IDE makers should be sure there are multiple methods of extension signing available to ensure file security. 

    It wouldn’t be a roundup without a healthcare breach

    Healthcare providers are frequently targeted by data thieves, and for good reason: They’re soft targets, they possess valuable PII, and they often pay up in the case of ransomware. This week’s entrant involves US player Esse Health, based in St Louis, Missouri. 

    Esse began letting customers know this week that it had been breached in April, and that data belonging to some 263,601 people was possibly stolen. Data included names, addresses, dates of birth and healthcare information – all the usual stuff – though luckily medical records themselves weren’t stolen. 

    Reports from shortly after the breach indicate the attack affected Esse’s phone systems and forced offices to cancel some appointments due to other outages.

    As is often the case, customers in the firing line are being given some free identity protection service, and the assurance that none of their data has been misused in any way Esse can tell – at least not yet. 

    CVE program begs you to help it help itself

    Things have been a bit perilous for the Common Vulnerabilities and Exposures (CVE) program of late, with the Trump administration letting funding for the program expire until it was saved, for a moment, via a temporary contract extension. CVE board members were reportedly kept in the dark about the end of the program, and now Congress wants a review of the program to check for mismanagement.

    In other words, there’s enough to do without thinking about how the CVE program might be improved if it doesn’t vanish down the memory hole, which is where you, dear infosec professional, come in. 

    The CVE Program has created a pair of working groups, one for security researchers at CVE numbering authorities (CNAs) and another for consumers, which includes basically everyone else. 

    Research Working Group members will be working to establish research norms and advising other members of the research community with an aim to “promote the CVE program,” while consumers will work to identify what users of the CVE system want and need “to ensure that the CVE Program remains aligned with real-world use cases.”

    Make your voice heard at the links above. ®

  • Australia beat West Indies in second Test to seal series win

    Australia clinched a series win over West Indies with a resounding victory in the second Test in Grenada.

    After setting the home side 277 to win on a difficult pitch, the tourists ripped through the West Indies batters in 34.3 overs to take an unassailable 2-0 series lead.

    Having wrapped up the first Test inside three days in Barbados with a 159-run victory, Australia enjoyed a similarly emphatic win on day four in Grenada, this time by 133 runs.

    They can complete a series clean sweep when the two sides meet in Jamaica next week.

    Day four started with Shamar Joseph (4-66) and namesake Alzarri Joseph (2-52) cleaning up the Australia tail inside 45 minutes.

    But on a pitch offering the bowlers plenty, chasing 277 for victory was always going to be a daunting task for West Indies.

    And so it proved once John Campbell fell leg before for a duck in the second over off the bowling of Josh Hazlewood (2-33).

    Mitchell Starc had Keacy Carty caught for 10 for the first of his three wickets, while opener Kraigg Brathwaite was also caught off the bowling of Beau Webster for seven to leave West Indies 29-3.

    The home side were then four down at lunch as Pat Cummins dismissed Brandon King for 14.

    Shai Hope (34) and captain Roston Chase (17) provided some resistance in the afternoon session before both fell to Hazlewood and Starc respectively.

    Starc then dismissed Justin Greaves for two before brief fireworks from Shamar Joseph (24) and Alzarri Joseph (13) – hitting five sixes between them – were brought to an end by Nathan Lyon’s spin.

    Lyon then removed Jayden Seales to wrap up the win and end with figures of 3-42.

    It also leaves him on 562 Test wickets – just one behind Glenn McGrath’s 563 and second on the list of Australia’s all-time Test wicket-takers.

    The late, great Shane Warne remains out in front with 708 Test wickets.

    “The wickets have been challenging in this series so far but they have also been a lot of fun to play on because Test cricket can be a grind,” said Australia’s Alex Carey, who was named man of the match.

    “It was always a challenging task but you have to believe,” West Indies skipper Chase said.

    “The guys have to try and stay confident and keep believing in themselves.”
