Blog

  • Considering the Human Rights Impacts of LLM Content Moderation

    Considering the Human Rights Impacts of LLM Content Moderation

    Audio of this conversation is available via your favorite podcast service.

    At Tech Policy Press, we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. I spoke to its primary author, ECNL senior legal manager Marlena Wisniak.

    What follows is a lightly edited transcript of the discussion.

    Justin Hendrix:

    Marlena, can you tell us a little bit about what the ECNL does?

    Marlena Wisniak:

    The Center, or ECNL as we call it, has been a human rights and civil liberties organization for over 20 years. We’ve been mostly focusing on civic space, making sure that civil society organizations, but also grassroots orgs and activists, have a safe space to organize and do their work. We’re mostly lawyers, but I have to say we’re the fun lawyers. So we also do advocacy, research, and basically anything that we can do to protect and promote an enabling civic space. My team was founded in 2020, I believe, and has from the beginning focused mostly on AI, but it’s more broadly the digital team at ECNL.

    So we look at how technologies, specifically emerging technologies, impact civic space and human rights. The core human rights or civil liberties that we look at are typically privacy, freedom of expression, freedom of assembly, so rights to protest, for example, association, the right to organize, and non-discrimination. And our key areas of focus substantively are typically surveillance, AI-driven surveillance and biometric surveillance, and, more broadly, social media platforms, which play such a big role in civic space today. And that’s what I’ll be talking about today. But that’s, in a nutshell, my team and how it fits within the broader org.

    Justin Hendrix:

    Well, I’m excited to talk to you about this report that you have authored with help from a variety of different corners. It’s called Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation. So this is a topic that we’ve been trying to follow at Tech Policy Press fairly closely, because I feel like it sits at the intersection of a lot of things that we care about around social media, content moderation, online safety, free expression, and then, of course, artificial intelligence. And LLMs, generative AI generally, the intersection of these two things, I think, is probably one of the most interesting and possibly under-covered or under-explored, at least to date, issues at the intersection of tech and democracy.

    So I want to just start by asking you how you did this report. It’s quite a significant document. A lot of research obviously went into this, including gathering some new information, not just combining citations, as many people do when they produce a PDF white paper. What got you started on this and how did you go about it?

    Marlena Wisniak:

    Yes, you’re right. It was really a heavy lift. It was a research project that went on for about a year, and we collaborated with great folks including yourself. I remember, I think last year we had a call to hear your thoughts, and shout out, before I go deep into context, to Isabelle Anzabi, who was our fellow last summer, and really helped me just go through a lot of papers, mostly in computer science, and the Omidyar Network, who provided generous support for this project.

    And so how it started: I accidentally fell into AI in 2016, but I come from content moderation in 2010, ’11, ’12, so I’d say early days of content moderation. So automated content moderation has always been a big focus of mine. I was also at Twitter at some point, overseeing their legal department, sorry, content governance and legal department. And living in San Francisco right now, really, the big talk on the street is always LLMs, and GenAI more broadly.

    And so I started hearing various use cases of LLMs for content moderation. Most of the talk, I will say, and the focus of the research and civil society community, is how to moderate LLM-generated content, so how to moderate ChatGPT, for example, or Claude. That is, of course, super important. And I wanted to look at it from another angle, which is: how are LLMs used for content moderation? And this, I will say, Justin, has been a little bit of a chicken-and-egg conversation, or reflection, because the way LLMs can moderate content also impacts how LLM-generated content will be moderated, and how they’re moderated impacts how they moderate content. So it’s hard to separate those one from another, but I did choose to have that focus. And it’s interesting because it’s very much a nerdy topic and yet has real-world implications, and it’s very hyped up.

    So one of the things that I always love to do is go beyond the hype. I’m not a hype person. In fact, now I hate AI. It was fun working on AI in the 2016, ’17, ’18 years when nobody really cared about AI. Since ChatGPT was released, the only thing people ever want to talk about is AI. So now I want to scream. But all this to say that I did want to unpack some of the real implications and probe what the promises are, and not even realistic promises, really, but the types of impacts we want to see, because I do believe that it can be helpful, and also, as this is an emerging space, prevent any possible harm. And I think folks listening to this podcast probably know that human moderation is extremely complicated. It’s horrible for workers. It doesn’t always, and always is an understatement, produce good outcomes.

    And so machine learning came as a solution to that. It was expanded during COVID, and sort of came as this silver-bullet solution. There has been increasing research showing the limitations of that. And so now, LLMs are the new white horse, is that an expression? The savior, and there’s a lot of hype. So that’s where I came from. It’s like, okay, let’s go deeper into this. Let’s review a lot of computer science papers, where some of this more rigorous qualitative and quantitative work has been done, translate it for policy folks, and bring a human-rights approach, because that’s my background. I’m an international human rights lawyer.

    Justin Hendrix:

    So readers of Tech Policy Press have seen at least multiple pieces that we’ve posted on the site about this issue, and often there’s been this sort of question about what the promise is, potentially, to offset the dangers to the labor force that currently engages in content moderation on behalf of platforms, but then also, of course, there are various perils that we can imagine as well. We’re going to get into those a little bit.

    But I think one thing that distinguishes your report that I wanted to just start with perhaps is that you include a technical primer. You’ve got a kind of set of definitions and, I think usefully, some distinctions between what’s going on with LLMs for content moderation versus more standard machine-learning classifiers and recommendation mechanisms and other types of algorithmic models that have been used in content moderation for quite some time now. What do you think the listener needs to know about the technical aspect of this phenomenon, of the use of LLMs for content moderation at scale from our vantage right now in June of 2025?

    Marlena Wisniak:

    Yeah, I mean, I encourage folks to look at the technical primer; the audience for that is mostly folks who aren’t very familiar with either industry jargon or technical terms. Leveling up, I think one thing that is really critical to consider when thinking about LLM-driven content moderation is that you typically have two layers. So one is the foundation model layer, or the LLMs. LLMs are, I should have started with this, large language models. And that’s pretty explicit. It’s a large language model, so people see it as God, or as sci-fi technology. It’s really not. It’s really big data sets.

    And I think we often forget that. So if we think about traditional machine learning, the difference is that this is a larger data set. And so there are implications, obviously, for privacy and other rights that we can explore later down the line. But one thing to consider is that these are very, very, very big, enormous data sets that require a lot of computing power. And so really, there’s only a handful of companies right now building these models.

    But the ones that we looked at in more depth are Llama by Facebook, sorry, Meta; ChatGPT by OpenAI; Claude by Anthropic; and Gemini by Google. And then right as I was finishing up the report, DeepSeek came out. So there are emerging models as well, but it’s still a very small number of foundation models, given how much data and compute they use. And I often say they’re really not that technically complicated. They’re just bigger. And so from a concentration-of-power perspective, that has a lot of importance.

    And so then the platforms that I mentioned, they both develop and often use… they develop the LLMs and they use them for content moderation. But what happens for all the other ones, like Discord or Reddit, or Slack? Although Slack might be bought by one of them. But anyways, there are so many other platforms. Typically, they do not build their own LLMs. They will either have a license with one of the foundation-model companies and then fine-tune the model, or they will use one of the open-source models, like DeepSeek or Llama or something they find on Hugging Face, and then fine-tune it for their purposes.

    So that’s one thing, from a technical standpoint, to understand about how LLMs work and how they’re used in practice. And then another thing I will flag is that LLMs are a subset of generative AI. That’s the term we most commonly know. Generative AI today has, to some extent, become equivalent to ChatGPT. It’s much broader than ChatGPT, but LLMs are one subset of that, or foundation models as well.

    And then maybe one last thing I’ll flag is multilingual language models, which are those that are trained on text data from dozens to sometimes hundreds of languages simultaneously. And so the idea is that they will be more capable of processing and generating inputs and outputs in multiple languages. And we can talk about whether that works or not. There’s fantastic research that has been done that inspired me a lot for my work.
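
    To make the two-layer setup concrete, here is a minimal, hypothetical Python sketch of how a smaller platform might reuse an open-weights classifier from Hugging Face rather than building its own foundation model; the model ID, label name, and threshold are illustrative assumptions, not anything described in the report or by any platform.

    # Hypothetical sketch: a smaller platform wrapping an open-weights moderation
    # classifier instead of training its own foundation model. The model ID,
    # label name, and threshold are placeholders for illustration only.
    from transformers import pipeline

    # Load a pretrained text-classification model (placeholder checkpoint ID).
    moderator = pipeline(
        "text-classification",
        model="example-org/toxicity-classifier",  # hypothetical model
    )

    def review_post(text: str, threshold: float = 0.8) -> str:
        """Return a coarse moderation decision for a single post."""
        result = moderator(text)[0]  # e.g. {"label": "toxic", "score": 0.93}
        if result["label"] == "toxic" and result["score"] >= threshold:
            return "flag_for_human_review"  # route to a human rather than auto-remove
        return "allow"

    print(review_post("example post text"))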

    Justin Hendrix:

    Yeah, I’ll just say that the primer, I think what’s great about, again, this report is that it does detail at least some of the ambition, technically, that people have for using large-language models in content moderation, and the possibilities it opens up for sorts of things that have been very hard or difficult to do, certainly at scale, with today’s mechanisms. That includes, as you say, a big one, which is servicing many languages that previously social media firms might have decided are simply off limits for their consideration, because it’s just not perhaps feasible or profitable for them to address certain markets or certain languages with the type of scale that perhaps people in those places might believe is necessary in order to do a good job. So that’s been a kind of consistent complaint from civil society for a long time.

    But there are other things here that you point to that seem to fit in the category of promise, such as possibly more robust types of nudges or interventions in what people post, or after the fact in what they post. I know I’ve talked to people about the idea that we can imagine content moderation with LLMs starting before you ever post something, a kind of pre-facto content moderation, and possibly an interaction with the content moderation system, as it were, at that point, before you even push something live to your feed. But I don’t know. What else did you learn in terms of just studying these things technically about the possibilities they open up?

    Marlena Wisniak:

    Yeah, thanks for bringing it up. And the report does obviously highlight harms and also explores promises for each individual right. So I’m with you. One of the key findings of this research was that probably the most promise of LLM-driven content moderation is not in removing content, or even in moderating it through a ranking system or curating content, but really in anything on procedural rights. So it’s before posting, like you said, nudges. Even though I will say that we also don’t want this Big Brother-type experience where we’re typing something and, oh dear Lord, the algorithm has found that this may be controversial. But yeah, there can be nudges, there can be less invasive suggestions for reformulating something that could be abusive. They could even suggest other information, for example, if someone is posting something clearly wrong about… like, factually incorrect about the Ukraine war, or elections, like civic integrity stuff, they could check out xyz.gov, right, elections.gov or something.

    And also on the flip side, appeals and remedy. When a user posts content, they can immediately know that this has been flagged for… especially for content that would be automatically removed, they can get an automated notice that’s more personalized, saying that this content has been flagged for review or has been removed, and giving more personalized information about how they can appeal that decision. So there’s all this kind of user interaction that I think is pretty cool and exciting.

    There’s also the possibility for personalized content moderation. So let’s say one user really welcomes gore or sensitive content, borderline content that doesn’t violate either the platform’s policies or the law. They can adapt and adjust their own moderation, as opposed to someone who really doesn’t want to see anything remotely sensitive or related to some topics because of any trauma or just dislike or whatever. That can be helpful.

    And I’ll give a shout out here to Discord, with whom we collaborated for a year on engaging external stakeholders as they developed ML-driven interventions to moderate abuse and harassment on their platform, specifically for teens. And so we really worked with a broad range of stakeholders, including youth and children, which is interesting, to understand how they would like to see AI-driven innovation and intervention. So it wasn’t only LLMs, it was also traditional machine learning.

    But yeah, so I think, to sum up, creative uses of LLMs for moderating content, and by moderating I mean broadly, are something that I support much more than automated removal. One of the conclusions of this report is that, at least today, automated removal is still too risky, potentially harmful, and just ineffective. There will be too many false positives and too many false negatives, both of those disproportionately falling on already marginalized groups, who tend to be, as most folks probably know, disproportionately silenced by platforms and also disproportionately targeted by violative content. So one of the main findings is that the way we can use LLMs safely is more on the procedural side than actually on the content.
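
    As an illustration of the procedural uses discussed above (nudges before posting, more personalized notices), here is a minimal, hypothetical sketch using the OpenAI Python SDK; the prompt wording, placeholder policy text, and model choice are assumptions for illustration and do not describe any platform’s actual system.

    # Hypothetical sketch of a pre-posting "nudge": before a draft goes live, an
    # LLM is asked whether the text might violate a policy and, if so, to suggest
    # a gentler rewording. Prompt, model, and policy text are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    POLICY = "No harassment, threats, or targeted insults."  # placeholder policy

    def nudge_before_posting(draft: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": (
                     f"You help users follow this policy: {POLICY} "
                     "If the draft likely violates it, reply with a short, polite "
                     "nudge and a suggested rewording. Otherwise reply with OK."
                 )},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    print(nudge_before_posting("example draft post"))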

    Justin Hendrix:

    We’ll see how platforms attempt to go about this. We’re already seeing some examples in the wild of uses of LLMs. There’s been reporting even this week of Meta’s new community notes program using LLMs in certain ways. So I think there’ll probably be as wide a variety of applications of LLMs as there have been of machine learning in content moderation, and we’ll just, I suppose, see how it goes. I mean, different platforms, who knows, maybe one or two will err towards creating an AI nanny-state, hyper-sanitized environment that people may recoil from, or regard as overly censorious or what have you. And yet it’s possible to imagine, as you say, lots of different types of interventions that people might regard as useful or helpful in whatever way.

    But I want to pause on maybe thinking about the benefits and dig a little more into the potential human rights impacts, because that’s where, of course, you spend the bulk of this report, concerned with things like privacy, freedom of expression and information and opinion, questions around peaceful assembly and association, non-discrimination, participation. Take us through a couple of those things. When you think about the most significant potential human rights impacts of the deployment of large language model systems in content moderation at scale, what do you think is most prominent?

    Marlena Wisniak:

    So I’ll just highlight some of the harms most specific to LLMs. One thing to consider is that LLMs often exacerbate and accelerate already-existing harms done by traditional machine learning, and traditional ML often accelerates and exacerbates harms committed by humans. So I think it’s like, more scale can be good for accuracy and speed, obviously, but there’s also another side to that coin.

    So some of the things you’ll find in the report are kind of an LLM-ified analysis of automated content moderation, but I will single out a few really new concerns related to LLMs. And so one of them, coming back to the concentration-of-power issue that I mentioned before, is that any decision made at the foundation model level, unless it is proactively fine-tuned away at the deployment level, will trickle down across platforms.

    So to give you an example, if Meta decides that pro-Palestinian content is considered violent or terrorist content, and there has been a lot of reporting to show that, then if another platform uses Llama without specifically changing that, any decision that Meta makes at the level of Llama will trickle down to the other platforms. So that’s something to consider from a freedom of expression angle: generalized censorship, if it’s a false positive, and at the same time, content that should be removed will not be removed if the foundation model does not consider it harmful. So that interaction is particularly important, because we’ve never seen that before, to my knowledge at least, in the dynamics of content moderation.

    Another big thing, around freedom of information, for example, is hallucinations. So that is a very stereotypical GenAI problem. And for folks who don’t know, hallucinations are content that is generated by the AI system and that is just made up and wrong. The weird thing is that it does so in a way that seems so confident and so right, and it is just nonsense. So it’ll make up academic papers, or it’ll make up news articles or any kind of facts.

    So if platforms use this to moderate mis- and disinformation, that will just be inaccurate to begin with. And it’s hard sometimes to parse that when you have pretty convincing content. And even if, for example, human moderators were to use LLMs or GenAI to help them moderate content, if they see this really elaborate article about how, whatever, Trump won the 2020 elections, for example, and they’re not familiar with the topic, that could form the basis of their decisions, and it’s just plain wrong. So that’s a new harm and risk.

    Another one that I found was super interesting, and this, I will say, was me… a lot of this paper was me trying to envision harms and then probe them with engineers or technical folks, and also non-technical folks as well, to ask them, does this make sense? Could this be true? And for example, for freedom of peaceful assembly and protest, one thing to consider is that protests, and contrarian views, are by definition anti-majority.

    You have a minority expressing themselves. You go against powerful interests like governments or companies, or even just the status quo. And what does that mean? This data is not well-represented in the training datasets. Because machine learning, I often say, is just stats on steroids. So if a view is predominant in the dataset, even if it’s completely wrong, that is the output, right? Machine learning never gives a real decision. It gives a prediction about what is statistically likely. So when you have protestors or journalists, investigative journalists, for example, or anybody bringing up new stuff, that will not actually show up in the dataset and therefore will not be moderated well. And let’s give platforms and those deploying content moderation systems the highest benefit of the doubt. They probably want to moderate it well; it’s just that the system will not function unless specifically fine-tuned, because protests fall outside the curve of the statistical data.

    And that’s also the case for conflicts or exceptional circumstances, crises. These events are, by definition, exceptional, and therefore fall outside the statistical curve and are not moderated well. And that’s another key finding. For freedom of association, one thing that could be interesting here is that some organizations can be mislabeled. Actually, you know what, Justin, forget that one. It’s too long. I’ll skip it.

    Two last pieces I’d like to highlight that we found were really interesting. One was on participation. So on one side, LLMs actually have the potential to support more participatory design by enabling customizable moderation. So like I said before, users can have personalized content moderation, or perhaps in the future use AI agents to moderate their own content as they want. On the flip side, in practice, affected communities are largely excluded from shaping content moderation systems. That’s not new. That has also happened in machine learning.

    Generally, it’s mostly white tech bros in San Francisco or Silicon Valley, so especially marginalized groups and those in the global majority are excluded from that. The addition with LLMs is that they are typically “improved” through a method called reinforcement learning from human feedback, or red-teaming. So you have folks thinking about, in the case of red-teaming, what are potential bad, worst-case scenarios and testing it from an adversarial perspective.

    Same for reinforcement learning. They go through these models and they “fix” them, they reteach them how to learn. The problem is that the people who do that are usually Stanford graduates, those in Silicon Valley or other elitist institutions, and it’s very rare. I personally have never heard of folks from marginalized groups being invited to participate in these kinds of activities. I myself, for example, have been invited, but you have to be in a niche kind of AI group to do that, and I personally have not done it. And from everybody who I’ve spoken to who has done it, I’ll say that it’s a very homogeneous group. And so basically what that means is then yes, the LLMs will be improved… or, I’m sorry, let me rephrase. There are efforts to reduce inaccuracy and improve the performance of the LLMs. However, the people who do that are typically very homogeneous. So it’s just a snowball effect.

    And the last piece, on remedy, is that while there are potential promises to access remedy better, like I said before, giving most everyone a notification, helping them identify remedy and appeals mechanisms, speeding up the appeals, there’s also fundamentally a lack of explainability and transparency, and that can create barriers to remedy. Another issue, like I said before, is that there are these two layers: the layer of foundation models, so the ChatGPT, the Claude, the Gemini, and then the social media platform that deploys it. And it’s hard to know where to appeal, how to appeal. The foundation models themselves don’t really know how a decision was made. Social media platforms know that even less. Do you appeal to the platform? Does the platform then appeal again to the third-party LLM? Where does the user fall into this? So accountability becomes fragmented, and there’s a lot of confusion and lack of clarity around how to go about that.

    Justin Hendrix:

    So I want to come to some of your recommendations, because you have recommendations both to LLM developers and deployers as well as to, of course, those who are potentially applying these things inside of social media companies. But your section on recommendations to policymakers is perhaps mercifully brief. You’ve only got a handful of recommendations there. It’s clear, I think just from where the report places the onus in its recommendations, that you see it largely as something that the private sector needs to sort out: how are they going to deploy these technologies or not.

    But when it comes to policymakers, what are you telling them? I see you’re interested in making sure they’re refraining, on some level, from mandating the use of these things. I suppose it’s a possibility that somebody might come along and say, “Oh, in fact, we demand that you use them.” I was trying to think of a context for that, but then I found myself thinking about some of the laws we’ve seen put forward in the United States even, where there have been segments of those laws, I think in some of the must-carry laws, where there have been these ideas around transparency of moderation decisions. I feel like I’ve read amicus briefs where some of the people opposed to those laws would make arguments like, “Well, it’s just simply not possible to give explainable rationale to every single user for every single content-moderation decision that’s made.”

    Well, presumably, LLMs would make that quite possible, and you could imagine a government coming along and saying, “Somehow, in the interest of free expression, we would like to mandate the use of artificial intelligence to explain any content moderation decision that you might take.” You say that’s a bad idea, along with other things, but what would you tell policymakers to be paying attention to here?

    Marlena Wisniak:

    Yeah, I mean, that’s a very astute… you observe that well. Most of the recommendations are to LLM developers and deployers. The reason for this is not that we only see the onus on the private sector and not on the public sector; it’s that, for practical reasons, this part of the research was mostly on assessing the human rights impacts. And the recommendations piece was really the last part, and we didn’t have that much capacity to go deep. So the second iteration of this report, hopefully we will continue, will really be to zoom into the recommendations.

    And I will say, then, that on the policymaking side, a lot of our work at ECNL is on policy and legal advocacy. We’ve been working behind the scenes on the AI Act for the past five years, and right now, there are conversations around the GPAI code in the EU on general-purpose AI. And the reason why I didn’t go deep here is that, one, we don’t want a specific AI content moderation or LLM moderation law. We have the DSA in Europe. The U.S. I’ll just set aside, because right now there’s a lot going on. So we’re not calling for a specific LLM content moderation law.

    And the EU already has the DSA and the AI Act. And I’d say a lot of the foundational asks on LLMs to policymakers would be kind of basic AI governance, around, like, data protection, human rights impact assessments, stakeholder engagement. So I added these big categories. I didn’t go really deep into them. It’s mostly how can we not fuck it up, to be honest. And I think content moderation, as you know, is a very fraught topic where even folks who are well-intentioned just don’t understand it enough. So sometimes you will have these claims like, platforms should remove problematic content within an hour. And it’s like, okay, cool. In theory, that sounds great. What does that mean in practice? It means a lot of false positives. It means that disproportionately marginalized groups will be impacted, including sex workers, racialized groups, queer folks. And you’ll have to use automated content moderation, which has all the false positives and false negatives that we’ve reported on.

    So the key recommendations we made to policymakers here are very foundational, I’d say. One, do not mandate LLM moderation, exactly for the reason that you expressed before. Two, maintain human oversight. So if LLMs are used to moderate content, and especially to remove content, there should still be a legal requirement for platforms to integrate human-in-the-loop systems, meaning that humans will review whatever decision the LLM makes. Three, kind of broad transparency and accountability metrics and requirements, very DSA-esque… sorry, I should have said Digital Services Act. And a lot of that is really about transparency, like mandating disclosure of how LLM systems function and how they’re used. The reality, Justin, is we don’t know. I had several off-the-record calls with platforms, off-the-record means I couldn’t publish the names. They also gave me very vague information. There’s some information that was published by them, and you’ll see it in the report.

    OpenAI has said they use LLMs for content moderation. Meta says they’re beginning to play around with it, and Gemini, or Google. But overall, we don’t know. I definitely don’t know when it’s used when I use social media. We don’t know accuracy rates, we don’t know how they’re used in appeals, or how they’re enforced. So, really, requiring platforms to notify users about LLM moderation actions. And again, that’s nothing new, I would say. That’s just taking the prior Santa Clara Principles on transparency and accountability, or the DSA, or kind of mainstream civil-society asks, and implementing them for LLMs.

    The two last ones that we highlighted, one is mandating human rights impact assessments. So hey, platforms, good news. I did one here, so you can use… My goal is that this will be a starting point for platforms to have basically an HRA handed to them on a silver platter, and then obviously use this as a starting point to look at how they implement LLMs on their platform. It’ll be specific to each platform.

    But on that note, one thing that policymakers could do is make HRAs mandatory for LLM developers and platforms that deploy them, both before deployment and throughout the LLM lifecycle. So for example, the pilot with Discord was at the ideation phase, so design, product design. Before they even developed the product, they consulted with a lot of folks to see how an LLM or machine-learning-driven system could be helpful. Is it nudges? Is it for removing content? Do we not want that? Why? And then continue that throughout all the way through development and deployment and make it accessible for external stakeholders.

    There’s so much expertise in the room, and I often say to platforms that, you know, trust and safety teams, policy teams, human rights teams tend to be underfunded. Just go to civil society that has such extensive knowledge, or journalists like yourself and academics and just drink their wisdom, because there’s a lot of stuff out there.

    Justin Hendrix:

    This report seems like a good jumping-off point for a lot of folks who might be interested in investigating or collecting artifacts of how the platforms are deploying these things, or generally kind of trying to pay attention to these issues and trying to discern whether the introduction of LLMs in this context is, on balance, a good thing, a bad thing, or perhaps somehow neutral or indiscernible with regard to the overall information integrity environment. What would you encourage people to do next? What would you encourage them to go and look at? What threads would you like to see the field pull from here?

    Marlena Wisniak:

    Yeah. One core piece that we didn’t talk about much is the multilingual piece of it, so how this can work in different languages. And I’ll just give a shout out here to really cool efforts in the global majority to develop community-driven, local, smaller LLMs or data sets, like Masakhane in Africa. There are really cool community-driven initiatives that kind of go beyond the Silicon Valley, profit-first, massive monopolization dynamics. So that is one thing, and I really encourage not only researchers but also platforms to talk to them. A lot of the platforms that I spoke with didn’t even know these existed, and if I say that this report is an HRA handed over on a silver platter, they have really cool data sets that would be really helpful, especially to smaller platforms, and they could just plug these into their own model.

    So that’s one thing that I hope research and industry will move towards: more languages, more dialects, understanding that it’s not only a difference between English and other languages. It really is a colonial, imperialist dynamic where English or French or German or Spanish will be much better moderated, and languages that are close to these ones do better, and then obscure, poorly researched languages work very, very poorly. And you can read in the report that the reasons are both that the data sets do not exist and that they’re just bad quality, because there’s not enough investment. So I really would encourage platforms to invest more resources into that, more participation, and to proactively include stakeholders.

    The other area I would love to see is just open conversations between platforms and civil society. And like I said, this is a nerdy topic, LLMs and content moderation. It doesn’t roll off the tongue. So if folks with expertise in content moderation would like to engage, hopefully this report can give them more context. And then I would love to see more evidence, new ideas. Like I said, some of these things were my own kind of “How could this work?” thinking, probed through many conversations with folks, and I really would want to see more thinking, more assessment of impacts, and more evidence as well.

    Like this paper, it’s long, 70 pages. I didn’t mean to make it this long, but there’s a lot of stuff. And I’d say the last part is that computer science papers move so fast, incredibly fast. It’s hard to keep up, and they’re very theoretical. So to the extent that there can be more collaboration between technical folks who come from the machine learning, AI, or computer science field, and policy and human rights folks, I think we’ll actually be able to build much better products and push policymakers to regulate this stuff better.

    Justin Hendrix:

    I appreciate you taking the time to speak to me about this, and I would encourage my readers to go and check out this full report, which is available on the ecnl.org website. I will include a link to it in the show notes. Thank you very much.

    Marlena Wisniak:

    Thanks, Justin.

    Continue Reading

  • Sanitation Challenges with Mobile Food Trucks

    Sanitation Challenges with Mobile Food Trucks

    The explosive growth of food trucks across American cities has introduced not only new and exciting cuisines, but also unique food safety complexities as a result of their compact, mobile kitchens. These operations face distinctive sanitation hurdles compared to traditional restaurants, primarily stemming from spatial constraints and operational mobility.  

    Temperature Control Vulnerabilities  

    Maintaining proper food temperatures remains a critical challenge, with Suffolk County, NY, citing improper holding temperatures in 43% of food truck violations.  Limited refrigeration space and power fluctuations during transit increase risks of foods entering the “danger zone” (40°F-140°F) where pathogens multiply rapidly. The confined workspace also complicates monitoring, as thermometers may be inaccessible during peak service.   

    Hand Hygiene Limitations  

    Inadequate hand washing accounted for nearly 19% of violations in the same study.  Tiny kitchens often accommodate only one handwashing sink, which may be obstructed during service. Water tank capacities limit available water for frequent washing, while high-volume periods pressure staff to skip proper 20-second protocols.   

    Cross-Contamination Threats  

    Proximity of raw and ready-to-eat ingredients in tight quarters elevates contamination risks. Suffolk County documented unprotected food storage in 17.8% of inspections.  Single cutting boards may handle proteins and produce consecutively, while utensil storage challenges,  such as knives kept in drawers rather than holders, further exacerbate risks.   

    Spatial and Operational Constraints  

    The average food truck kitchen spans 50 to 80 square feet, complicating:  

    • Separation of cleaning chemicals from food zones
    • Implementation of first-in-first-out inventory systems
    • Access to hidden surfaces for sanitation (e.g., under equipment)

    Waste management is particularly challenging without dedicated disposal areas, increasing risks of pest attraction.

    Regulatory and Inspection Gaps  

    Jurisdictional variations in codes create compliance complexity, says leading food poisoning law firm Ron Simon & Associates. Meanwhile, inspections frequently occur during non-operational hours, when temperature controls and handling practices can’t be evaluated. California researchers noted that 90 of 95 trucks had at least one critical violation during operational assessments, risks that stationary inspections would have missed. Additionally, 16.9% of Suffolk County violations involved absent certified managers.

    Innovative approaches, including mobile-specific manager certifications, unannounced operational inspections, and space-efficient sanitation protocols, are emerging to address these challenges without compromising the culinary innovation that defines the industry. 

    Continue Reading

  • Stalkerware seller exposed by sloppy SQL security • The Register

    Stalkerware seller exposed by sloppy SQL security • The Register

    Infosec In Brief A security researcher looking at samples of stalkerware discovered a SQL injection vulnerability that allowed him to steal a database of 62,000 user accounts.

    Eric Daigle published a blog post this week detailing how he found a piece of stalkerware he wasn’t familiar with, Catwatchful, and then quickly proceeded to pwn it into temporary oblivion. 

    Stalkerware or spyware is a form of software used to track people’s computer activity. It is typically installed by parents, spouses, or employers with physical access to the user’s computer, and tends to be undetectable and very hard to remove. The number of stalkerware installations has been steadily on the rise, even as it’s repeatedly been breached by online vigilantes and security researchers. 

    According to Daigle, Catwatchful is a spyware kit that promises to be undetectable and unstoppable, with only the controller able to make use of it on an infected device or delete it. While it “works really well” for its intended purpose, Daigle also noted that Catwatchful made two POST requests to separate servers when he tried to log into the app. 

    One of the two servers, it turned out, had no appreciable security system installed, allowing Daigle to copy plaintext login details for all 62,000 Catwatchful accounts in the group’s system, including the administrator’s. Oops. 

    Working with reporters from TechCrunch, Daigle even managed to help identify the alleged administrator of Catwatchful, as well as get its hosters to take it down.

    Unfortunately for its stalkees, Catwatchful has remained online as of this week, Daigle says, with temporary sites stood up to replace seized domains, and patches deployed to address the SQLI vulnerability. 
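
    The Catwatchful code itself hasn’t been published, but as a generic illustration of the vulnerability class involved, the Python sketch below contrasts a SQL query built by string concatenation with a parameterized one; the table layout and inputs are invented for the example.

    # Generic illustration of SQL injection (not Catwatchful's actual code):
    # user input concatenated into the query text versus bound as a parameter.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('victim@example.com', 'hunter2')")

    def lookup_vulnerable(email: str) -> list:
        # BAD: attacker-controlled input becomes part of the SQL statement.
        # An input like "' OR '1'='1" returns every row in the table.
        query = f"SELECT email, password FROM users WHERE email = '{email}'"
        return conn.execute(query).fetchall()

    def lookup_safe(email: str) -> list:
        # GOOD: the driver binds the value as data, so it cannot alter the query.
        return conn.execute(
            "SELECT email, password FROM users WHERE email = ?", (email,)
        ).fetchall()

    print(lookup_vulnerable("' OR '1'='1"))  # dumps the whole credential table
    print(lookup_safe("' OR '1'='1"))        # returns nothing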

    Critical vulnerabilities of the week: Chrome zero day patched

    Google moved fast this week to patch a zero-day in the V8 JavaScript engine after it was found being exploited in the wild, so don’t skip this stable channel update for Chrome Desktop on Windows, Mac, and Linux. 

    The patch addresses CVE-2025-6554 (CVSS 8.1), a type confusion vulnerability in V8 that allows a remote attacker to perform an arbitrary read/write via a specially crafted HTML page.

    Elsewhere:

    • CVSS 9.6 – CVE-2024-45347: Xiaomi Mi Connect Service APP contains a logic flaw that can allow an attacker to gain unauthorized access to a victim’s device.

    Another Swiss government partner gets ransomed

    The Swiss government said this week that the Radix foundation, an NGO dedicated to healthcare promotion, was hit by ransomware. Given Radix counts a number of government agencies among its customers, the government saw fit to report the matter even though no government data was stolen. 

    “As Radix has no direct access to Federal Administration systems, the attackers did not gain entry to these systems at any time,” the Swiss government said – but government data on Radix’s own systems isn’t necessarily safe, mind you. 

    While it hasn’t shared how many government documents may have been exposed this time around, it could be a sizable amount. The Play ransomware gang hit a Swiss government IT supplier last year and made off with some 65,000 government files among more than a million more stolen from the biz. 

    IDE extension verification is easy to spoof, say researchers

    Software supply chain security is a critical part of modern cyber hygiene, and that includes verification of extensions used in IDEs. Unfortunately, it’s easy to spoof such verification in several top IDEs, researchers from OX Security claim.

    The OX team, makers of application-level security products, published research this week showing that verification in VSCode, Visual Studio, and IntelliJ IDEA can all be spoofed, allowing a malicious IDE extension to pass itself off as a trustworthy one.

    “The ability to inject malicious code into extensions, package them as VSIX/ZIP files, and install them while maintaining the verified symbols across multiple major development platforms poses a serious risk,” the OX team said. 

    With verification marks no longer sufficient to judge authenticity of IDE packages, OX recommends only installing extensions directly from official marketplaces rather than from files, while extension developers and IDE makers should be sure there are multiple methods of extension signing available to ensure file security. 

    It wouldn’t be a roundup without a healthcare breach

    Healthcare providers are frequently targeted by data thieves, and for good reason: They’re soft targets, they possess valuable PII, and they often pay up in the case of ransomware. This week’s entrant involves US player Esse Health, based in St Louis, Missouri. 

    Esse began letting customers know this week that it had been breached in April, and that data belonging to some 263,601 people was possibly stolen. Data included names, addresses, dates of birth and healthcare information – all the usual stuff – though luckily medical records themselves weren’t stolen. 

    Reports from shortly after the breach indicate the attack affected Esse’s phone systems and forced offices to cancel some appointments due to other outages.

    As is often the case, customers in the firing line are being given some free identity protection service, and the assurance that none of their data has been misused in any way Esse can tell – at least not yet. 

    CVE program begs you to help it help itself

    Things have been a bit perilous for the Common Vulnerabilities and Exposures (CVE) program of late, with the Trump administration letting funding for the program expire until it was saved, for a moment, via a temporary contract extension. CVE board members were reportedly kept in the dark about the end of the program, and now Congress wants a review of the program to check for mismanagement.

    In other words, there’s enough to do without thinking about how the CVE program might be improved if it doesn’t vanish down the memory hole, which is where you, dear infosec professional, come in. 

    The CVE Program has created a pair of working groups, one for security researchers at CVE numbering authorities (CNAs) and another for consumers, which includes basically everyone else. 

    Research Working Group members will be working to establish research norms and advising other members of the research community with an aim to “promote the CVE program,” while consumers will work to identify what users of the CVE system want and need “to ensure that the CVE Program remains aligned with real-world use cases.”

    Make your voice heard at the links above. ®

    Continue Reading

  • Australia beat West Indies in second Test to seal series win

    Australia beat West Indies in second Test to seal series win

    Australia clinched a series win over West Indies with a resounding victory in the second Test in Grenada.

    After setting the home side 277 to win on a difficult pitch, the tourists ripped through the West Indies batters in 34.3 overs to take an unassailable 2-0 series lead.

    Having wrapped up the first Test inside three days in Barbados with a 159-run victory, Australia enjoyed a similar winning margin on day four in Grenada, by 133 runs.

    They can complete a series clean sweep when the two sides meet in Jamaica next week.

    Day four started with Shamar Joseph (4-66) and namesake Alzarri Joseph (2-52) cleaning up the Australia tail inside 45 minutes.

    But on a pitch offering the bowlers plenty, chasing 277 for victory was always going to be a daunting task for West Indies.

    And so it proved once John Campbell fell leg before for a duck in the second over off the bowling of Josh Hazlewood (2-33).

    Mitchell Starc had Keacy Carty caught for 10 for the first of his three wickets, while opener Kraigg Brathwaite was also caught off the bowling of Beau Webster for seven to leave West Indies 29-3.

    The home side were then four down at lunch as Pat Cummins dismissed Brandon King for 14.

    Shai Hope (34) and captain Roston Chase (17) provided some resistance in the afternoon session before both fell to Hazlewood and Starc respectively.

    Starc then dismissed Justin Greaves for two before brief fireworks from Shamar Joseph (24) and Alzarri Joseph (13) – hitting five sixes between them – were brought to an end by Nathan Lyon’s spin.

    Lyon then removed Jayden Seales to wrap up the win and end with figures of 3-42.

    It also leaves him on 562 Test wickets – just one behind Glenn McGrath’s 563 and second on the list of Australia’s all-time Test wicket-takers.

    The late, great Shane Warne remains out in front with 708 Test wickets.

    “The wickets have been challenging in this series so far but they have also been a lot of fun to play on because Test cricket can be a grind,” said Australia’s Alex Carey, who was named man of the match.

    “It was always a challenging task but you have to believe,” West Indies skipper Chase said.

    “The guys have to try and stay confident and keep believing in themselves.”

    Continue Reading

  • Beats Headphones With 50-Hour Battery Life Are Almost Free, Amazon Sells Them at Cost to Clear Stock

    Beats Headphones With 50-Hour Battery Life Are Almost Free, Amazon Sells Them at Cost to Clear Stock

    Prime Day has become the biggest shopping moment of the year, far outpacing Black Friday, thanks to Amazon’s willingness to slash prices to levels you simply won’t find anywhere else. For this event, Amazon often drops its margins to zero or even sells at a loss, all to attract shoppers with deals that are too good to pass up.

    One of the best offers right now in the Electronics category is on the Beats Solo 4 wireless headphones and the best part is that this deal is open to everyone—Prime member or not. At just $99, down from $199, this is the lowest price ever for these headphones, and for a brand like Beats, it’s a deal you really can’t ignore. With a price like this, expect them to fly off the shelves until stock runs out.

    See at Amazon

    Wireless Headphones

    The Beats Solo 4 are wireless Bluetooth on-ear headphones that have earned themselves a loyal following through their sound, comfort and style. At their regular price of $199, they’re already a popular pick, but at $99, they become a true steal. If you’re looking for quality wireless headphones that deliver on both performance and battery life, this is the moment to act.

    One of the greatest aspects of the Beats Solo 4 is battery life: With 50 hours of playtime per battery charge, you can listen to music for days without needing to recharge. Such a degree of stamina is wonderful if you’re always on the move. And in the unlikely event you ever do run out, a Quick Charge mode gives you hours of music in a matter of minutes.

    Quality of sound is what differentiates Beats from the rest, and the Solo 4 deliver rich bass, sharp mids, and clear highs. You’ll enjoy the balanced sound profile and reliable Bluetooth connectivity. The headphones connect seamlessly with Apple and Android devices, and built-in controls allow you to manage volume, skip tracks, or pick up calls without having to reach for your phone.

    The on-ear variant features soft ear cushions and an adjustable headband which translates to wearing them for hours on end without feeling even a hint of discomfort. They’re light and foldable so they’re easy to store in a bag or backpack.

    This Prime Day deal at $99 is the lowest price these headphones have ever seen, so make sure you don’t miss this great opportunity.

    See at Amazon

    Continue Reading

  • ‘Jurassic World Rebirth’ Dominates Box Office With $318 Million Holiday Weekend—But Still Lags Previous Installments – Forbes

    1. ‘Jurassic World Rebirth’ Dominates Box Office With $318 Million Holiday Weekend—But Still Lags Previous Installments  Forbes
    2. ‘Jurassic World Rebirth’: Dinosaur Gets Bigger With $36M+ Saturday; 5-Day Opening Now Roaring To $147M+; Promo Campaign Clocks $150M – Update  Deadline
    3. Jurassic World Rebirth director knew dinosaurs can’t be the selling point anymore  The News International
    4. ‘Jurassic World: Rebirth’ box office collections day 2: Film clocks Rs 22 crore; Thanks to Saturday boost  Times of India
    5. Weekend Box Office: Dinosaurs stomp critics over the holiday weekend  JoBlo

    Continue Reading

  • OPEC+ to boost oil production even more than expected in August – MarketWatch

    1. OPEC+ to boost oil production even more than expected in August  MarketWatch
    2. OPEC+ members agree to larger-than-expected oil production hike in August  CNBC
    3. OPEC+ adds 548,000 bpd in August  The Express Tribune
    4. Oil falls slightly ahead of expected OPEC+ output increase  Business Recorder
    5. Oil prices hold steady amid strong US jobs data and tariff uncertainty  Daily Times

    Continue Reading

  • Signs of ‘dark stars’ discovered in the early universe

    Signs of ‘dark stars’ discovered in the early universe

    Astronomers poring over new data from the James Webb Space Telescope have reported the most compelling hints yet that “dark stars,” cosmic behemoths fed by dark matter, really existed.

    A new analysis of five ultra‑distant objects shows spectra and shapes that match simulations of dark stars rather than ordinary fusion‑powered suns.


    The candidates sit more than 13 billion light‑years away, meaning their light left them when the universe was only a few hundred million years old.

    The dark star concept was first introduced in 2007 by Katherine Freese and colleagues at the University of Texas at Austin.

    Dark matter powering giant stars

    Dark matter accounts for about 85 percent of the universe’s mass but reveals itself only through gravity. Two decades of underground detectors have yet to capture a single dark‑matter particle, yet its invisible pull sculpts galaxies and galaxy clusters.

    Freese proposed that in the first halos of gas, self‑annihilating dark‑matter particles could generate heat faster than the gas could cool, inflating stars that grow to millions of solar masses without ever igniting fusion.

    Once the local dark‑matter reservoir ran out, the stars would collapse, possibly forming black holes hefty enough to seed the earliest quasars.

    Webb’s infrared vision extends far enough redward to catch such ancient, swollen stars, and its sensors have already revealed dozens of unexpectedly bright objects in the epoch known as Cosmic Dawn.

    Webb spotted strange ancient stars

    Freese’s team studied JADES‑GS‑z11‑0, JADES‑GS‑z13‑0 and three even more distant specks, finding that each could be explained by a single dark star rather than a whole infant galaxy.

    “If it’s real, then I don’t know how else you’d explain it other than with a dark star,” said Freese. A second clue comes from a tentative absorption dip at rest‑frame 1,640 Å, the signature of He II, that appears in one spectrum and is predicted only for dark‑star atmospheres.

    Some doubt the dark star idea

    Daniel Whalen of the University of Portsmouth studies massive stars forming without dark matter. He is not convinced about the existence of dark stars.

    “They ignore an entire body of literature on the formation of supermassive primordial stars, some of which could give signatures very similar to the signatures that they show,” said Whalen.

    Whalen argues that rapid gas accretion alone can inflate stars to a million suns, and that Webb’s photometry cannot yet distinguish between such hot leviathans and Freese’s cooler, puffier dark stars.

    Dark stars may explain early black holes

    The debate matters because JWST and Chandra recently found a black hole in galaxy UHZ‑1 whose mass approaches ten billion Suns just 500 million years after the big bang. Traditional growth models struggle to build such giants so fast.

    A dark star collapsing after millions of years would supply a head start that ordinary Population III stars cannot match.

    Supermassive primordial stars could also create big seeds, but they rely on rare conditions – pristine gas streams and intense ultraviolet fields – that may not occur often enough to explain the abundance of early quasars.

    Light signals may prove dark stars

    The dark star candidates are expected to be much cooler and puffier than galaxies or fusion‑based suns, which means their spectral energy distribution should look unique.

    One telltale feature would be an absorption dip caused by singly ionized helium, known as He II, at 1,640 Å. If confirmed, this dip could act like a fingerprint. In the Webb spectrum of JADES‑GS‑z14‑0, researchers spotted a faint signal that lines up with where the He II dip should appear.

    However, the signal‑to‑noise ratio was just 2.4, which is barely above the threshold of what’s considered statistically meaningful. That makes it too early to declare the case closed.
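
    As a rough check on where that fingerprint should sit, using only the standard cosmological redshift relation and taking z ≈ 14 from the object’s designation, the rest-frame 1,640 Å He II line lands well inside Webb’s near-infrared coverage:

    \lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{rest}} \approx (1 + 14) \times 1640\,\text{\AA} \approx 24{,}600\,\text{\AA} \approx 2.46\,\mu\mathrm{m}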

    On top of that, observations from ALMA spotted oxygen nearby, which shouldn’t be present in pure dark stars. This could mean the dark star is part of a system with other stars, or something more complex is going on.

    Future research

    One wrinkle is that ALMA detects oxygen in one candidate’s vicinity, implying metal‑rich companions that classic dark‑star theory does not expect.

    Future deep spectra will need to confirm whether the He II feature is genuine and whether metals truly permeate the system.

    Webb keeps adding targets, and the Roman Space Telescope will survey wider areas, potentially catching hundreds of dark‑star‑like objects at redshifts beyond 14.

    For now, dark stars remain an intriguing, if unproven, explanation for several mysteries in the early universe.

    The study is published in the journal Proceedings of the National Academy of Sciences.

    Image Credit: Pierluigi Rinaldi / Rafael Navarro-Carrera / Pablo G. Pérez-González


    Continue Reading

  • Bald Baby J.D. Vance Meme Can Now Be Your Boarding Pass

    Bald Baby J.D. Vance Meme Can Now Be Your Boarding Pass

    The long, strange saga of viral memes distorting the appearance of J.D. Vance continues — and this time, it’s literally taking off.

    That’s because New York-based tech innovator James Steinberg, whose many fanciful projects include a detective agency for mundane mysteries and a crowdsourced New York City map of “public cats” available for petting, has designed an app that allows you to change the background of your digital airplane boarding pass to display a now-infamous image of the vice president as a bald, bearded baby-man. And if you’re wondering: yes, he’s tried it without getting arrested.

    “I checked with the TSA subreddit first to see what issues I might encounter,” Steinberg tells Rolling Stone. “Some thought it was stupid, some funny, a few asked how to make custom passes like that, but nobody said I would get on the flight ban list, so it seemed okay.” Last week, he went for it, posting a photo from JFK International Airport with the caption “can’t wait to use my big beautiful bald boarding pass to travel today.”

    He says there was no issue at security. “The TSA tried not to show emotion but looked mildly amused,” Steinberg says. “Maybe I should try in a less liberal airport.” While it is illegal to tamper with U.S. boarding passes, the enforcement of this law is more about security risks that could arise from changing personal information or altering a QR code, neither of which Steinberg’s app alters. There are also hundreds of U.S. airports where the TSA only needs to see your ID and won’t ask for your boarding pass. Still, Steinberg says, “Use at your own risk!”

    Steinberg’s gag was inspired by the story of a less fortunate traveler. Last month, Mads Mikkelsen, a 21-year-old Norwegian tourist (not the Danish actor), told his hometown newspaper that when he flew into Newark Liberty International Airport for an extended vacation in the U.S., he was detained by border control for hours and forced to unlock his phone, on which agents found the bald Vance meme in question. Mikkelsen’s entry into the country was denied, and he believed it was due to sensitivities over the doctored likeness of the veep. Customs and Border Protection later denied this, claiming that Mikkelsen’s “admitted drug use” — he had previously partaken of legal recreational cannabis — was the reason for his ejection.

    Either way, the meme got a lot more traction, raising questions about freedom of political speech. An Irish politician even waved around a printout of the bald Vance meme in parliament while warning of American censorship and repression. “In my opinion, it leads to a discussion at the heart of the issue,” Steinberg says of his app. “For everyone else, I guess they can just enjoy making fun of J.D. Vance.”

    The project has the approval of Dave McNamee, the creative consultant and humorist in Los Angeles who kicked off the Vance edits craze back in October of 2024. After Rep. Michael Collins of Georgia was mocked for posting a portrait of Vance that had been digitally manipulated to give him a more chiseled, “Chad”-like look, McNamee went in the opposite direction.

    “For every 100 likes I will turn J.D. Vance into a progressively apple cheeked baby,” he wrote on X, posting the same portrait but giving Vance a more bloated face. The post eventually racked up more than 200,000 likes and 16 million views, per X metrics, and McNamee indeed continued to make Vance’s head wider and redder in the sequence of posts that followed.

    “It was very weird and funny to see it take off,” McNamee says. About two weeks after his viral thread, another X user debuted the bald version of Vance using an image of the vice presidential nominee during his debate with Minnesota Gov. Tim Walz. But it was only months later, following a contentious White House meeting in February where Vance berated Ukrainian President Volodymyr Zelensky, that “it came back bigger than ever,” McNamee explains, with people altering a photo of Vance in the Oval Office to give him chubby cheeks.

    “That’s when it became a cryptocurrency and was everywhere,” McNamee says. Another one of the edits he made, of Vance with a propeller hat and a massive lollipop, became the basis of that meme coin, PWEASE. (Many of the Vance jokes also involved exaggerated baby-talk.) “People made millions,” McNamee claims, while he netted around $10,000 from the minting of his content for the blockchain. Though it has since fallen back to earth, at one point PWEASE had a market cap of approximately $60 million, having shot up more than 92,000 percent in value since its creation.

    It was around then that Vance told a reporter that he had seen the memes and found them amusing. That response — which some read as less than genuine — “only felt natural because once it became a crypto thing I knew it was in the right-wing internet zeitgeist,” McNamee says. As for the continued expansion of the Vanciverse, he feels that people should see these memes “every day until no one knows what J.D. Vance looks like.” Of Mikkelsen’s trouble with border security, he adds, “the possible suppression of rights because you have a meme is fucked,” and that Steinberg’s form of quiet protest “is funny.”

    Trending Stories

    Steinberg tells Rolling Stone that his app has already been used to create more than 300 digital plane boarding passes. And while it’s not clear how many were actually used, he expects to hear from some of those brave enough to go big, beautiful and bald for their next flight. “A person reached out and said [he is] going to try and film his journey,” he says. “Not clear how allowed that is.”

    Filming in the TSA line definitely isn’t permitted, it’s important to note, but it’s perfectly understandable that people are excited to show off this iconic work of art. Stay safe out there, and may that goofy meme take you wherever you want to go.

    Continue Reading

  • Sprint king and queen crowned at Pacific Mini Games

    Sprint king and queen crowned at Pacific Mini Games

    The sprint king and queen of the Pacific Mini Games in Palau were crowned over the weekend, with the men’s and women’s 100 metres finals taking place on Saturday night.

    PNG’s Pais Wisil claimed gold in the men’s event, while Australia’s Keyedel Smith beat out PNG’s Isila Apkup in the women’s.

    Action continues today, with the games’ showcase event, baseball, set to conclude in a gold medal showdown between Guam and home nation Palau.

    However, as for the ABC’s Declan Bryne and Sam Wykes, their time in Palau is finished, but not before some deep reflection on their experience.

    Continue Reading