Audio of this conversation is available via your favorite podcast service.
At Tech Policy Press, we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. I spoke to its primary author, ECNL senior legal manager Marlena Wisniak.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
Marlena, can you tell us a little bit about what the ECNL does?
Marlena Wisniak:
The European Center for Not-for-Profit Law, or ECNL as we call it, has been a human rights and civil liberties organization for over 20 years. We've mostly been focusing on civic space, making sure that civil society organizations, but also grassroots orgs and activists, have a safe space to organize and do their work. We're mostly lawyers, but I have to say we're the fun lawyers. So we also do advocacy, research, and basically anything we can do to protect and promote an enabling civic space. My team was founded in 2020, I believe, and has from the beginning focused mostly on AI, but it's more broadly the digital team at ECNL.
So we look at how technologies, specifically emerging technologies, impact civic space and human rights. The core human rights and civil liberties we look at are typically privacy, freedom of expression, freedom of assembly, so the right to protest, for example, freedom of association, the right to organize, and non-discrimination. And our key substantive areas of focus are surveillance, meaning AI-driven surveillance and biometric surveillance, and, more broadly, social media platforms, which play such a big role in civic space today. And that's what I'll be talking about today. But that's, in a nutshell, my team and how it fits within the broader org.
Justin Hendrix:
Well, I’m excited to talk to you about this report that you have authored with help from a variety of different corners. It’s called Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation. So this is a topic that we’ve been trying to follow at Tech Policy Press fairly closely, because I feel like it is the sort of intersection of a lot of things that we care about around social media, content moderation, online safety, free expression, and then, of course, artificial intelligence. And LLMs, generative AI generally, the intersection of these two things, I think, is probably one of the most interesting and possibly under-covered or under-explored, at least to date, issues at the intersection of tech and democracy.
So I want to just start by asking you how you did this report. It’s quite a significant document. A lot of research obviously went into this, including gathering some new information, not just combining citations, as many people do when they produce a PDF white paper. What got you started on this and how did you go about it?
Marlena Wisniak:
Yes, you’re right. It was really a heavy lift. It was a research project that went on for about a year, and we collaborated with great folks, including yourself. I remember, I think last year we had a call to hear your thoughts. And a shout out, before I go deep into the context, to Isabelle Anzabi, who was our fellow last summer and really helped me go through a lot of papers, mostly in computer science, and to the Omidyar Network, which provided generous support for this project.
And so, how it started: I accidentally fell into AI in 2016, but I come from content moderation in 2010, ’11, ’12, so I’d say the early days of content moderation. So automated content moderation has always been a big focus of mine. I was also at Twitter at some point, overseeing their legal department, sorry, their content governance and legal department. And living in San Francisco right now, really, the big talk on the street is always LLMs, and GenAI more broadly.
And so I started hearing about various use cases of LLMs for content moderation. Most of the talk, I will say, and the focus of the research and civil society community, is how to moderate LLM-generated content, so how to moderate ChatGPT, for example, or Claude. That is, of course, super important. And I wanted to look at it from another angle, which is: how are LLMs used for content moderation? And this, I will say, Justin, has been a little bit of a chicken-and-egg conversation, or reflection, because how LLMs moderate content also impacts how LLM-generated content will be moderated, and how that content is moderated impacts how LLMs moderate content. So it’s hard to separate one from the other, but I did choose to have that focus. And it’s interesting because it’s very much a nerdy topic and yet has real-world implications, and it’s very hyped up.
So one of the things that I always love to do is go beyond the hype. I’m not a hype person. In fact, now I hate AI. It was fun working on AI in the 2016, ’17, ’18 years when nobody really cared about AI. Since ChatGPT was released, the only thing people ever want to talk about is AI. So now I want to scream. But all this to say that I did want to unpack some of the real implications and probe what the promises are, and really, I’d say, not even realistic promises, but the types of impacts we want to see, because I do believe that it can be helpful, and also, as this is an emerging space, prevent any possible harm. And I think folks listening to this podcast probably know that human moderation is extremely complicated. It’s horrible for workers. It doesn’t always, and always is an understatement, produce good outcomes.
And so machine learning came as a solution to that. It was expanded during COVID, and sort of came as this silver-bullet solution. There has been increasing research showing the limitations of that. And so now, the LLM is the new white horse, is that an expression? It’s the savior, and there’s a lot of hype. So that’s where I came from. It’s like, okay, let’s go deeper into this. Let’s review a lot of computer science papers, where some of this more rigorous qualitative and quantitative work has been done, translate it for policy folks, and bring a human rights approach, because that’s my background. I’m an international human rights lawyer.
Justin Hendrix:
So readers of Tech Policy Press have seen at least multiple pieces that we’ve posted on the site about this issue, and often there’s been this sort of question about what the promise is, potentially, to offset the dangers to the labor force that currently engages in content moderation on behalf of platforms, but then also, of course, there are various perils that we can imagine as well. We’re going to get into those a little bit.
But I think one thing that distinguishes your report that I wanted to just start with perhaps is that you include a technical primer. You’ve got a set of definitions and, I think usefully, some distinctions between what’s going on with LLMs for content moderation versus more standard machine learning classifiers and recommendation mechanisms and other types of algorithmic models that have been used in content moderation for quite some time now. What do you think the listener needs to know about the technical aspect of this phenomenon, of the use of LLMs for content moderation at scale, from our vantage point right now in June of 2025?
Marlena Wisniak:
Yeah, I mean, I encourage folks to look at the technical primer. The audience for that is mostly folks who aren’t very familiar with either industry jargon or technical terms. Leveling up, I think one thing that is really critical to consider when thinking about LLM-driven content moderation is that you typically have two layers. One is the foundation model layer, or the LLMs. LLMs, I should have started with this, are large language models. And that’s pretty explicit: it’s a large language model. People see it as God, or as this sci-fi technology. It’s really not. It’s really big data sets.
And I think we often forget that. So if we think about traditional machine learning, the difference is that this is a larger data set. And so there are implications, obviously, for privacy and other rights that we can explore later down the line. But the one thing to consider is that they’re very, very, very big, enormous data sets that require a lot of computing power. And so really, there’s only a handful of companies right now building these models.
But the ones that we looked at in more depth are Llama by Facebook, sorry, Meta; ChatGPT by OpenAI; Claude by Anthropic; and Gemini by Google. And then right as I was finishing up the report, DeepSeek came out. So there are emerging models as well, but it’s still a very small number of foundation models, given how much data and compute they use. And I often say they’re really not that technically complicated. They’re just bigger. And so from a concentration of power perspective, that has a lot of importance.
And so then the platforms that I mentioned, they both develop and often use… they develop the LLMs and they use them for content moderation. But what happens with all the other ones, like Discord or Reddit, or Slack? Although Slack might be bought by one of them. But anyway, there are so many other platforms. Typically, they do not build their own LLMs. They will either have a license with one of the foundation model companies and then fine-tune the model, or they will use one of the open-source models, like DeepSeek or Llama or something they find on Hugging Face, and then fine-tune it for their purposes.
So that’s one thing to understand from a technical standpoint: how LLMs work and how they’re used in practice. And then another thing I will flag is that LLMs are a subset of generative AI. That’s the term we most commonly know. Generative AI today has, to some extent, become synonymous with ChatGPT. It’s much broader than ChatGPT, but LLMs, or foundation models, are one subset of that.
And then maybe one last thing I’ll flag: multilingual language models are those that are trained on text data from dozens to sometimes hundreds of languages simultaneously. And so the idea is that they will be more capable of processing and generating inputs and outputs in multiple languages. And we can talk about whether that works or not. There’s fantastic research that has been done that inspired me a lot for my work.
Justin Hendrix:
Yeah, I’ll just say that what’s great about, again, this report is that the primer does detail at least some of the technical ambition that people have for using large language models in content moderation, and the possibilities it opens up for the sorts of things that have been very hard to do, certainly at scale, with today’s mechanisms. That includes, as you say, a big one, which is servicing many languages that previously social media firms might have decided are simply off limits for their consideration, because it’s just not feasible or profitable for them to address certain markets or certain languages with the type of scale that people in those places might believe is necessary in order to do a good job. So that’s been a kind of consistent complaint from civil society for a long time.
But there are other things here that you point to that seem to fit in the category of promise, such as possibly more robust types of nudges or interventions in what people post, or after the fact in what they’ve posted. I know I’ve talked to people about the idea that we can imagine content moderation with LLMs starting before you ever post something, a kind of pre facto content moderation, possibly in interaction with the content moderation system, as it were, at that point, before you even push something live to your feed. But I don’t know. What else did you learn, in terms of just studying these things technically, about the possibilities they open up?
Marlena Wisniak:
Yeah, thanks for bringing it up. And the report does obviously highlight harms and also explores promises for each individual right. So I’m with you. One of the key findings of this research was that probably the most promise of LLM-driven content moderation is not in removing content, or even moderating it through a ranking system or curating content, but really in anything on procedural rights. So it’s before posting, like you said, nudges. Even though I will say that we also don’t want this Big Brother-type experience where we’re typing something and, oh dear Lord, the algorithm has found that this may be controversial. But yeah, there can be nudges, there can be suggestions for reformulating something that could be abusive, which is less invasive. They could even suggest other information. For example, if someone is posting something clearly wrong about… like, factually incorrect about the Ukraine war, or elections, like civic integrity stuff, they could suggest checking out xyz.gov, right, elections.gov or something.
And also, on the flip side, appeals and remedy. When a user posts content, they can immediately know that it has been flagged. Especially for content that would be automatically removed, they can get a more personalized automated notice that the content has been flagged for review or has been removed, along with more personalized information about how they can appeal that decision. So there’s all this kind of user interaction that I think is pretty cool and exciting.
There’s also the possibility of personalized content moderation. So let’s say one user really welcomes gore or sensitive content, borderline content that doesn’t violate either the platform’s policies or the law. They can adapt and adjust their own moderation, as opposed to someone who really doesn’t want to see anything remotely sensitive or related to some topics, because of trauma or just dislike or whatever. That can be helpful.
And I’ll give a shout out here to Discord, with whom we collaborated for a year on engaging external stakeholders as they developed ML-driven interventions to moderate abuse and harassment on their platform, specifically for teens. And so we really worked with a broad range of stakeholders, including youth and children, which is interesting, to understand how they would like to see AI-driven innovation and intervention. So it wasn’t only LLMs, it was also traditional machine learning.
But yeah, to sum up, creative uses of LLMs for moderating content, and by moderating I mean broadly, are something I support much more than automated removal. One of the conclusions of this report is that, at least today, automated removal is still too risky, potentially harmful, and just ineffective. There will be too many false positives and too many false negatives, both of which disproportionately fall on already marginalized groups, who tend to be, as most folks probably know, disproportionately silenced by platforms and also disproportionately targeted by violative content. So that’s one of the main findings: where we can use LLMs safely is more on the procedural side than on the content itself.
Justin Hendrix:
We’ll see how platforms attempt to go about this. We’re already seeing some examples of uses of LLMs in the wild. There’s been reporting even this week about Meta’s new community notes program using LLMs in certain ways. So I think there’ll probably be as wide a variety of applications of LLMs as there have been of machine learning in content moderation, and we’ll just, I suppose, see how it goes. I mean, different platforms, who knows, maybe one or two will err towards creating an AI nanny-state, hyper-sanitized environment that people may recoil from, or regard as overly censorious or what have you. And yet it’s possible to imagine, as you say, lots of different types of interventions that people might regard as useful or helpful in whatever way.
But I want to pause on maybe thinking about the benefits and dig a little more into the potential human rights impacts, because that’s where, of course, you spend the bulk of this report, concerned with things like privacy, freedom of expression and information and opinion, questions around peaceful assembly and association, non-discrimination, participation. Take us through a couple of those things. When you think about the most significant potential human rights impacts of the deployment of large language model systems in content moderation at scale, what do you think is most prominent?
Marlena Wisniak:
So I’ll just highlight some of the impacts most specific to LLMs. One thing to consider is that LLMs often exacerbate and accelerate already-existing harms done by traditional machine learning, and traditional ML often accelerates and exacerbates harms committed by humans. So with more scale, and scale can be good for accuracy and speed, obviously, there’s also another side to that coin.
So some of the things you’ll find in the report are kind of an LLM-ified analysis of automated content moderation, but I will single out a few really new concerns related to LLMs. One of them, coming back to the concentration of power issue that I mentioned before, is that any decision made at the foundation model level, unless it is proactively changed through fine-tuning at the deployment level, will trickle down across platforms.
So to give you an example, if Meta decides that pro-Palestinian content is considered violent or terrorist content, and there has been a lot of reporting to show that, then if another platform uses Llama without specifically changing that, any decision that Meta makes at the level of Llama will trickle down to the other platforms. So that’s something to consider from a freedom of expression angle: generalized censorship, if it’s a false positive, and at the same time, content that should be removed will not be removed if the foundation model does not consider it harmful. So that interaction is particularly important, because we’ve never seen that before, to my knowledge at least, in the dynamics of content moderation.
Another big thing, around freedom of information, for example, is hallucinations. That is a very stereotypical GenAI problem. And for folks who don’t know, hallucinations are content generated by the AI system that is just made up and wrong. The weird thing is that the system does this in a way that seems so confident and so right, and it is just nonsense. It’ll make up academic papers, or it’ll make up news articles or any kind of facts.
So if platforms use this to moderate mis- and disinformation, that will just be inaccurate to begin with. And it’s hard sometimes to parse that when you have pretty convincing content. And even if, for example, human moderators were to use LLMs or GenAI to help them moderate content, if they see this really elaborate article about how, whatever, Trump won the 2020 elections, for example, and they’re not familiar with the topic, that could form the basis of their decisions, and it’s just plain wrong. So that’s a new harm and risk.
Another one that I found super interesting, and this, I will say, was me… a lot of this paper was me trying to envision harms and then probe them with engineers or technical folks, and non-technical folks as well, to ask them: does this make sense? Could this be true? For example, on freedom of peaceful assembly and protest, one thing to consider is that protests, and contrarian views, are by definition anti-majority.
You have a minority expressing themselves. You go against powerful interests like governments or companies, or even just the status quo. And what does that mean? This data is not well represented in the training data sets. Because machine learning, I often say, is just stats on steroids. So if a view is predominant in the data set, even if it’s completely wrong, that is the output, right? Machine learning never gives a real decision. It gives a prediction about what is statistically likely. So when you have protestors or journalists, investigative journalists, for example, or anybody bringing up new stuff, that will not really show up in the data set and therefore will not be moderated well. And let’s give platforms and those deploying content moderation systems the highest benefit of the doubt. They probably want to moderate it well; the system just will not function unless specifically fine-tuned, because protests fall outside the curve of the statistical data.
And that’s also the case for conflicts or exceptional circumstances, crises. These events are, by definition, exceptional, and therefore fall outside the statistical curve and are not moderated well. And that’s another key finding. For freedom of association, one thing that could be interesting here is that some organizations can be mislabeled. Actually, you know what, Justin, forget that one. It’s too long. I’ll skip it.
Two last pieces I’d like to highlight that we found were really interesting. One was on participation. On one side, LLMs actually have the potential to support more participatory design by enabling customizable moderation. So like I said before, users can have personalized content moderation, or perhaps in the future use AI agents to moderate their own content as they want. On the flip side, in practice, affected communities are largely excluded from shaping content moderation systems. That’s not new. That has also happened in machine learning.
Generally, it’s mostly white tech bros in San Francisco or Silicon Valley, so especially marginalized groups and those in the global majority are excluded from that. What’s new with LLMs is that they are typically “improved” through methods like reinforcement learning from human feedback, or red-teaming. So you have folks thinking about, in the case of red-teaming, what the potential bad, worst-case scenarios are, and testing the model from an adversarial perspective.
Same for reinforcement learning. They go through these models and they “fix” them, they reteach them how to learn. The problem is that the people who do that are usually Stanford graduates, those in Silicon Valley or other elitist institutions. It’s very rare; I personally have never heard of folks from marginalized groups being invited to participate in these kinds of activities. I myself, for example, have been invited, but you have to be in a niche kind of AI group to do that. And based on everybody I’ve spoken to who has done it, I personally have not, I’ll say that it’s a very homogeneous group. And so basically what that means is, then, yes, the LLMs will be improved… or, I’m sorry, let me rephrase. There are efforts to reduce inaccuracy and improve the performance of the models, of the LLMs. However, the people who do that are typically very homogeneous. So it’s just like a snowball effect.
And the last piece, on remedy, is that while there are potential promises for better access to remedy, like I said before, through user notifications, helping people identify remedy and appeals mechanisms, and speeding up appeals, there’s also fundamentally a lack of explainability and transparency, and that can create barriers to remedy. Another issue, like I said before, is that there are these two layers: the foundation model layer, so the ChatGPT, the Claude, the Gemini, and then the social media platform that deploys it. And it’s hard to know where to appeal, how to appeal. The foundation model companies themselves don’t really know how a decision was made. Social media platforms know that even less. Do you appeal to the platform? Does the platform then appeal again to the third-party LLM? Where does the user fall in this? So accountability just becomes fragmented, and there’s a lot of confusion and lack of clarity around how to go about that.
Justin Hendrix:
So I want to come to some of your recommendations, because you have recommendations both to LLM developers and deployers and, of course, to those who are potentially applying these things inside of social media companies. But your section on recommendations to policymakers is perhaps mercifully brief. You’ve only got a handful of recommendations there. It’s clear, I think, just from where the onus of the recommendations in the report falls, that you see it largely as something that the private sector needs to sort out: how are they going to deploy these technologies or not.
But when it comes to policymakers, what are you telling them? I see you’re interested in making sure they’re refraining, on some level, from mandating the use of these things. I suppose it’s a possibility that somebody might come along and say, “Oh, in fact, we demand that you use them.” I was trying to think of a context for that, but then I found myself thinking about some of the laws we’ve seen put forward in the United States even, where there have been segments of those laws, I think in some of the must-carry laws, where there have been these ideas around transparency of moderation decisions. I feel like I’ve read amicus briefs where some of the people opposed to those laws would make arguments like, “Well, it’s just simply not possible to give explainable rationale to every single user for every single content-moderation decision that’s made.”
Well, presumably, LLMs would make that quite possible, and you could imagine a government coming along and saying, “Somehow, in the interest of free expression, we would like to mandate that you use artificial intelligence to explain any content moderation decision that you might take.” You say that’s a bad idea, along with other things, but what would you tell policymakers to be paying attention to here?
Marlena Wisniak:
Yeah, I mean, that’s a very astute… you observed that well. Most of the recommendations are to LLM developers and deployers. The reason for this is not that we only see the onus on the private sector and not on the public sector; it’s that, for practical reasons, this part of the research was mostly about assessing the human rights impacts. The recommendations piece was really the last part, and we didn’t have that much capacity to go deep. So the second iteration of this report, which hopefully we will continue, will really be about zooming into the recommendations.
And I will say, then, on the policymaking side, a lot of our work at ECNL is on policy and legal advocacy. We’ve been working behind the scenes on the AI Act for the past five years, and right now there are conversations around the GPAI code in the EU, on general-purpose AI. And the reason why I didn’t go deep here is that, one, we don’t want a specific AI content moderation or LLM moderation law. We have the DSA in Europe. The U.S. I’ll just set aside, because right now there’s a lot going on. So we’re not calling for a specific LLM content moderation law.
And the EU already has the DSA and the AI Act. And I’d say a lot of the foundational asks to policymakers on LLMs would be kind of basic AI governance: data protection, human rights impact assessments, stakeholder engagement. So I added these big categories. I didn’t go really deep into them. It’s mostly how can we not fuck it up, to be honest. And I think content moderation, as you know, is a very fraught topic, where even folks who are well-intentioned just don’t understand it enough. So sometimes you will have these claims that, let’s say, platforms should remove problematic content within an hour. And it’s like, okay, cool. In theory, that sounds great. What does that mean in practice? It means a lot of false positives. It means that marginalized groups will be disproportionately impacted, including sex workers, racialized groups, queer folks. And you’ll have to use automated content moderation, which has all the false positives and false negatives that we’ve reported on.
So the key recommendations we made to policymakers here are very foundational, I’d say. One, do not mandate LLM moderation, exactly for the reason that you expressed before. Two, maintain human oversight. So if LLMs are used to moderate content, and especially to remove content, there should still be a legal requirement for platforms to integrate human-in-the-loop systems, meaning that humans will review whatever decisions the LLM makes. Three, kind of broad transparency and accountability metrics and requirements that are very DSA-esque… sorry, I should have said Digital Services Act. And a lot of that is really about transparency, like mandating disclosure of how LLM systems function and how they’re used. The reality, Justin, is that we don’t know. I had several off-the-record calls with platforms, off the record meaning I couldn’t publish the names. They also gave me very vague information. There’s some information that has been published by them, and you’ll see it in the report.
OpenAI has said they use LLMs for content moderation. Meta says they’re beginning to play around with it, same with Gemini, or Google. But overall, we don’t know. I definitely don’t know when it’s used when I use social media. We don’t know accuracy rates, we don’t know how they’re used in appeals, or how they’re enforced. So really, requiring platforms to notify users about LLM moderation actions. And again, that’s nothing new, I would say. That’s just taking the prior Santa Clara Principles on transparency and accountability, or the DSA, or kind of mainstream civil society asks, and implementing them for LLMs.
The two last ones that we highlighted: one is mandating human rights impact assessments. So hey, platforms, good news. I did one here, so you can use it. My goal is that this will be a starting point for platforms, to have basically an HRIA handed to them on a silver platter, and then obviously use it as a starting point to look at how they implement LLMs on their platform. It’ll be specific to each platform.
But on that note, one thing that policymakers could do is make HRIAs mandatory for LLM developers and the platforms that deploy them, both before deployment and throughout the LLM lifecycle. So for example, the pilot with Discord was at the ideation phase, so design, product design. Before they even developed the product, they consulted with a lot of folks to see how an LLM or machine learning-driven system could be helpful. Is it nudges? Is it for removing content? Do we not want that? Why? And then continue that all the way through development and deployment, and make it accessible to external stakeholders.
There’s so much expertise in the room, and I often say to platforms that, you know, trust and safety teams, policy teams, human rights teams tend to be underfunded. Just go to civil society, which has such extensive knowledge, or to journalists like yourself and academics, and just drink in their wisdom, because there’s a lot of stuff out there.
Justin Hendrix:
This report seems like a good jumping-off point for a lot of folks who might be interested in investigating or collecting artifacts of how the platforms are deploying these things, or generally kind of trying to pay attention to these issues and trying to discern whether the introduction of LLMs in this context is, on balance, a good thing, a bad thing, or perhaps somehow neutral or indiscernible. With regard to the overall information integrity environment, what would you encourage people to do next? What would you encourage them to go and look at? What threads would you like to see the field pull on from here?
Marlena Wisniak:
Yeah. One core piece that we didn’t talk about much is the multilingual piece of it, so how this can work in different languages. And I’ll just give a shout out here to really cool efforts in the global majority to develop community-driven, local, smaller LLMs or data sets, like Masakhane in Africa. There are really cool community-driven initiatives that kind of go beyond the Silicon Valley, profit-first, massive monopolization dynamics. So that is one thing, and I really encourage not only researchers but also platforms to talk to them. A lot of the platforms that I spoke with didn’t even know these existed. And if I say that this report is an HRIA handed over on a silver platter, they have really cool data sets that would be really helpful, especially to smaller platforms, which could just plug these into their own models.
So that’s one thing I hope research and industry will move towards: more languages, more dialects, understanding that it’s not only a difference between English and other languages. It really is a colonial, imperialist dynamic where English or French or German or Spanish will be much better moderated, languages that are close to those do a bit better, and then obscure, poorly researched languages work very, very poorly. And you can read in the report that the reasons are both that the data sets do not exist or are just bad quality, and that there’s not enough investment. So I really would encourage platforms to invest more resources into that, more participation, and to proactively include stakeholders.
The other thing I would love to see is just open conversations between platforms and civil society. And like I said, this is a nerdy topic, LLMs and content moderation. It doesn’t roll off the tongue. So for folks with expertise in content moderation who would like to engage, hopefully this report can give them more context. And then I would love to see more evidence, new ideas. Like I said, some of these things were my own kind of “how could this work?” ideas, probed through many conversations with folks. And I really would want to see more thinking, more assessment of impacts, and more evidence as well.
This paper is long, 70 pages. I didn’t mean to make it this long, but there’s a lot of stuff. And I’d say the last part is that computer science papers move so fast, incredibly fast. It’s hard to keep up, and they’re very theoretical. So to the extent that there can be more collaboration between technical folks who come from the machine learning, AI, or computer science fields, and policy and human rights folks, I think we’ll actually be able to build much better products and push policymakers to regulate this stuff better.
Justin Hendrix:
I appreciate you taking the time to speak to me about this, and I would encourage my readers to go and check out this full report, which is available on the ecnl.org website. I will include a link to it in the show notes. Thank you very much.
Marlena Wisniak:
Thanks, Justin.