Wall Street’s blistering three-month rally on cooling trade tensions will be put to the test this week. It’s an exceptionally light week of economic data and earnings releases, which means every little piece of information investors do get will be magnified. Against that sparse backdrop, President Donald Trump’s trade policy ahead of a long-awaited deadline will likely be the dominant market story of the week. The S&P 500 enters this pivotal period at record highs, up 26% from its tariff-driven closing low on April 8.

1. Tariffs: What is going to happen with tariffs before Wednesday, when Trump’s 90-day “reciprocal” tariff pause is set to expire and the steeper, country-specific duty rates announced in the Rose Garden on April 2 take effect? That is the question on many investors’ minds, particularly after what appear to be mixed messages from the Trump administration. Trump said Tuesday that he was “not thinking about” extending the tariff pause beyond July 9. A baseline tariff of 10% has been in effect on most U.S. trading partners during the 90-day negotiation window.

The question took on greater salience early Friday, when Trump told reporters the U.S. would start sending out letters to countries informing them of their new tariff rates. “We have more than 170 countries, and how many deals can you make?” Trump said. “They’re very much more complicated.” Trump said the U.S. might still strike a couple more deals with trading partners, according to Reuters. Adding to the uncertainty, Treasury Secretary Scott Bessent said Sunday tariffs will “boomerang” back to April levels by Aug. 1 for countries without deals. During an interview on CNN’s “State of the Union,” Bessent said Aug. 1 was not a new deadline. “We are saying this is when it’s happening. If you want to speed things up, have at it. If you want to go back to the old rate, that’s your choice,” he said.

Indeed, Trump adviser Peter Navarro’s suggestion that the administration would ink “90 deals in 90 days” has thus far not panned out. The U.S. has made just a handful of trade announcements since the April 9 tariff pause. It has agreed to a trade framework with China that saw both sides reduce tariff rates and lift other retaliatory measures. Washington also struck a deal with the U.K., which took effect June 30. Last week, Trump unveiled some details of an agreement with Vietnam, which he said will grant U.S. exporters more access to that market, while Vietnamese imports into the U.S. will be subject to a 20% tariff — below the 46% “reciprocal” rate Trump threatened in April.

A potential deal between the U.S. and India is one investors are watching closely, particularly after Reuters reported that negotiators from both sides were pushing for an agreement. Meanwhile, The Wall Street Journal on Wednesday reported on disagreements between Washington and Tokyo that are complicating talks with Japan. Additionally, South Korean President Lee Jae Myung said Thursday he wasn’t sure whether the country would solidify a deal with the U.S. ahead of the Wednesday deadline, according to the Associated Press. It remains to be seen how the market will trade as Wednesday nears, but recent history suggests the more positive headlines on tariff agreements there are, the better it will be for sentiment.
At the same time, investors cannot rule out the potential for volatility if Trump’s rhetoric heats up at all — similar to what we saw last week, when he abruptly called off trade talks with Canada over its digital services tax, which weighed on stocks in the June 27 session. It proved to be a short-lived spat after Canada U-turned on the tax proposal, but the flare-up illustrates that the market doesn’t have tunnel vision on trade. While investors without a doubt are positioned for more good news, there could be fits and starts along the way. Trump’s comments favoring tariff letters over negotiated deals underscore that possibility. That’s especially true with the S&P Short Range Oscillator north of 8%, signaling very overbought conditions. Any tough talk on trade from Trump could lead to an outsized reaction in the market. A lot of money has been made very quickly since the April lows, so plenty of investors and traders could be looking for any reason to book profits. We’ve done exactly that in recent days, and Jim Cramer suggested on Thursday’s Morning Meeting that we may even look to do more profit-taking in big winners come Monday. If we get a big positive open on Monday, that would push us even further into overbought territory, which could be an opportunity for members looking to raise cash to do just that.

On the geopolitical front, we’ll also be keeping an eye on whether there’s any progress toward resolutions of the Russia-Ukraine and Israel-Hamas wars. While these haven’t been major market-moving stories lately, various headlines on ceasefire talks surfaced last week. Trump had planned to speak to Russian leader Vladimir Putin on Thursday.

2. Earnings: While there are no Club names reporting, there are three reports to note. Delta Air Lines and Conagra Brands, the owner of Healthy Choice, Hunt’s and other food brands, will report Thursday morning and offer insights into the health of consumer spending. More specifically, Delta will tell us about the appetite for travel and what role, if any, the weaker U.S. dollar has had on international trips. Conagra also will likely touch on inflationary pressures in the food and commodity complex. Levi Strauss on Thursday night will also offer a checkup on the consumer, as well as tariff and supply chain commentary. Consider these three earnings reports the calm before second-quarter earnings season really picks up the following week, when we begin to hear from the biggest U.S. banks.

Week ahead

Monday, July 7
- No earnings or economic releases of note

Tuesday, July 8
- NFIB Small Business Index at 6 a.m. ET
- Federal Reserve’s consumer credit report for May at 3 p.m. ET

Wednesday, July 9
- Census Bureau’s monthly wholesale trade report for May at 10 a.m. ET
- Federal Reserve meeting minutes released at 2 p.m. ET

Thursday, July 10
- Initial jobless claims at 8:30 a.m. ET
- Before the bell: Delta Air Lines (DAL), Conagra Brands (CAG), Simply Good Foods (SMPL), Helen of Troy (HELE)
- After the bell: Levi Strauss & Co. (LEVI), WD-40 (WDFC), PriceSmart (PSMT)

Friday, July 11
- Treasury Department’s Monthly Treasury Statement at 2 p.m. ET

(See here for a full list of the stocks in Jim Cramer’s Charitable Trust.)

As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust’s portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.
THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
-
Carney says new oil pipeline proposal in Canada is highly likely – Reuters
- Varcoe: Carney says it’s ‘highly likely’ an oil pipeline will make Ottawa’s major project list – Calgary Herald
- Stakes are high in Canada’s race to become an energy superpower – Financial Post
- Canada’s economy needs more than just pipelines and hydro – Toronto Star
- Canada awaits private sector move on Pacific crude pipeline, minister says – Global News
-
Considering the Human Rights Impacts of LLM Content Moderation
Audio of this conversation is available via your favorite podcast service.
At Tech Policy Press, we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. I spoke to its primary author, ECNL senior legal manager Marlena Wisniak.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
Marlena, can you tell us a little bit about what the ECNL does?
Marlena Wisniak:
The Center, or ECNL as we call it, has been a human rights and civil liberties organization for over 20 years. We’ve been mostly focusing on civic space, making sure that civil society organizations, but also grassroots orgs and activists, have a safe space to organize and do their work. We’re mostly lawyers, but I have to say we’re the fun lawyers. So we also do advocacy, research, and basically anything that we can do to protect and promote an enabling civic space. My team was founded in 2020, I believe, and has from the beginning focused mostly on AI, but it’s more broadly the digital team at ECNL.
So we look at how technologies, specifically emerging technologies, impact civic space and human rights. The core human rights and civil liberties we look at are typically privacy, freedom of expression, freedom of assembly (the right to protest, for example), association and the right to organize, and non-discrimination. Our key substantive areas of focus are surveillance, meaning AI-driven surveillance and biometric surveillance, and, more broadly, social media platforms, which play such a big role in civic space today. And that’s what I’ll be talking about today. But that’s, in a nutshell, my team and how it fits within the broader org.
Justin Hendrix:
Well, I’m excited to talk to you about this report that you have authored with help from a variety of different corners. It’s called Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation. This is a topic that we’ve been trying to follow at Tech Policy Press fairly closely, because I feel like it sits at the intersection of a lot of things that we care about around social media, content moderation, online safety, free expression, and then, of course, artificial intelligence. And LLMs, generative AI generally, the intersection of these two things is probably one of the most interesting and, at least to date, under-covered or under-explored issues at the intersection of tech and democracy.
So I want to just start by asking you how you did this report. It’s quite a significant document. A lot of research obviously went into this, including gathering some new information, not just combining citations, as many people do when they produce a PDF white paper. What got you started on this and how did you go about it?
Marlena Wisniak:
Yes, you’re right. It was really a heavy lift. It was a research project that went on for about a year, and we collaborated with great folks including yourself. I remember, I think last year we had a call to hear your thoughts, and shout out, before I go deep into context, to Isabelle Anzabi, who was our fellow last summer, and really helped me just go through a lot of papers, mostly in computer science, and the Omidyar Network, who provided generous support for this project.
And so how it started: I accidentally fell into AI in 2016, but I come from content moderation in 2010, ’11, ’12, so I’d say the early days of content moderation. So automated content moderation has always been a big focus of mine. I was also at Twitter at some point, overseeing their content governance and legal department. And living in San Francisco right now, really, the big talk on the street is always LLMs, and GenAI more broadly.
And so I started hearing various use cases of LLMs for content moderation. Most of the talk, I will say, and most of the focus of the research and civil society community, is how to moderate LLM-generated content, so how to moderate ChatGPT, for example, or Claude. That is, of course, super important. And I wanted to look at it from another angle, which is: how are LLMs used for content moderation? And this, I will say, Justin, has been a little bit of a chicken-and-egg conversation, or reflection, because how LLMs can moderate content also impacts how LLM-generated content will be moderated, and how they’re moderated impacts how they moderate content. So it’s hard to separate those one from another, but I did choose to have that focus. And it’s interesting because it’s very much a nerdy topic and yet has real-world implications, and it’s very hyped up.
So one of the things that I always love to do is go beyond the hype. I’m not a hype person. In fact, now I hate AI. It was fun working on AI in the 2016, ’17, ’18 years, when nobody really cared about AI. Since ChatGPT was released, the only thing people ever want to talk about is AI. So now I want to scream. But all this to say that I did want to unpack some of the real implications, and probe what the promises are, not even the realistic promises so much as the types of impacts we want to see, because I do believe that it can be helpful, and also, as this is an emerging space, prevent any possible harm. And I think folks listening to this podcast probably know that human moderation is extremely complicated. It’s horrible for workers. It doesn’t always, and “always” is an understatement, produce good outcomes.
And so machine learning came as a solution to that. It was expanded during COVID, and sort of came as this silver-bullet solution. There has been increasing research showing the limitations of that. And so now the LLM is the new white horse, is that an expression? The savior, and there’s a lot of hype. So that’s where I came from. It’s like, okay, let’s go deeper into this. Let’s review a lot of computer science papers, where some of the more rigorous qualitative and quantitative work has been done, translate it for policy folks, and bring a human-rights approach, because that’s my background. I’m an international human rights lawyer.
Justin Hendrix:
So readers of Tech Policy Press have at least had multiple pieces that we’ve posted on the site about this issue, and often there’s been this sort of question about what the promise is, potentially, to offset the dangers to the labor force that currently engages in content moderation on behalf of platforms, but then also, of course, there are various perils that we can imagine as well. We’re going to get into those a little bit.
But I think one thing that distinguishes your report that I wanted to just start with perhaps is that you include a technical primer. You’ve got a kind of set of definitions and, I think usefully, some distinctions between what’s going on with LLMs for content moderation versus more standard machine-learning classifiers and recommendation mechanisms and other types of algorithmic models that have been used in content moderation for quite some time now. What do you think the listener needs to know about the technical aspect of this phenomenon, of the use of LLMs for content moderation at scale from our vantage right now in June of 2025?
Marlena Wisniak:
Yeah, I mean, I encourage folks to look at the technical primer. The audience for that is mostly folks who aren’t very familiar with either industry jargon or technical terms. Leveling up, I think one thing that is really critical to consider when thinking about LLM-driven content moderation is that you typically have two layers. So one is the foundation model layer, or the LLMs. LLMs are, I should have started, large language models. And that’s pretty explicit. It’s a large language model, but people see it as God, or as sci-fi technology. It’s really not. It’s really big data sets.
And I think we often forget that. So if we think about traditional machine learning, the difference is that this is a larger data set. And so there are implications, obviously, for privacy and other rights that we can explore later down the line. But one consideration: these are very, very, very big, enormous data sets that require a lot of computing power. And so really, there’s only a handful of companies right now building these models.
But the ones that we looked at in more depth are Llama by Facebook, sorry, Meta; ChatGPT by OpenAI; Claude by Anthropic; and Gemini by Google. And then right as I was finishing up the report, DeepSeek came out. So there are emerging models as well, but it’s still a very small number of foundation models, given how much data and compute they use. And I often say they’re really not that technically complicated. They’re just bigger. And so from a concentration-of-power perspective, that has a lot of importance.
And so then the platforms that I mentioned, they both develop and often use… they develop the LLMs and they use them for content moderation. But what happens for all the other ones, like Discord or Reddit or Slack? Although Slack might be bought by one of them. But anyway, there are so many other platforms. Typically, they do not build their own LLMs. They will either have a license with one of the foundation-model companies and then fine-tune the model, or they will use one of the open-source models, like DeepSeek or Llama or something they find on Hugging Face, and then fine-tune it for their purposes.
So that’s one thing to understand technically about how LLMs work and how they’re used in practice. And then another thing I will flag is that LLMs are a subset of generative AI, the term we most commonly know. Generative AI today has, to some extent, become synonymous with ChatGPT. It’s much broader than ChatGPT, but LLMs, or foundation models, are one subset of it.
And then maybe one last thing I’ll flag is multilingual language models, which are those trained on text data from dozens to sometimes hundreds of languages simultaneously. And so the idea is that they will be more capable of processing and generating inputs and outputs in multiple languages. And we can talk about whether that works or not. There’s fantastic research that has been done that inspired me a lot for my work.
Justin Hendrix:
Yeah, I’ll just say that what’s great about the primer, again, is that it details at least some of the technical ambition that people have for using large language models in content moderation, and the possibilities it opens up for the sorts of things that have been very hard to do at scale with today’s mechanisms. A big one, as you say, is serving many languages that social media firms might previously have decided were simply off limits, because it just wasn’t feasible or profitable for them to address certain markets or certain languages at the scale that people in those places might believe is necessary to do a good job. That’s been a consistent complaint from civil society for a long time.
But there are other things here that you point to that seem to fit in the category of promise, such as possibly more robust types of nudges or interventions in what people post, or after the fact in what they post. I know I’ve talked to people about the idea that we can imagine content moderation with LLMs starting before you ever post something, a kind of pre-facto content moderation, possibly in interaction with the content moderation system at that point, before you even push something live to your feed. But I don’t know. What else did you learn in studying these things technically about the possibilities they open up?
Marlena Wisniak:
Yeah, thanks for bringing it up. And the report does obviously highlight harms and also explores promises for each individual right. So I’m with you. One of the key findings of this research was that probably the most promise of LLM-driven content moderation is not in removing content, or even moderating it through a ranking system or curating content, but really in anything relating to procedural rights. So it’s before posting, like you said, nudges. Even though I will say that we also don’t want this Big Brother-type experience where we’re typing something and, oh dear Lord, the algorithm has found that this may be controversial. But yeah, there can be nudges, there can be suggestions for reformulating something that could be abusive, which is less invasive. They could even suggest other information. For example, if someone is posting something clearly wrong about… like, factually incorrect about the Ukraine war, or elections, like civic integrity stuff, they could check out xyz.gov, right, elections.gov or something.
And also, on the flip side, appeals and remedy. When a user posts content, they can immediately know that it has been flagged. Especially for content that would be automatically removed, they can get an automated notice that’s more personalized, saying this content has been flagged for review or has been removed, with more personalized information about how they can appeal that decision. So there’s all this kind of user interaction that I think is pretty cool and exciting.
There’s also the possibility for personalized content moderation. So let’s say one user really welcomes gore or sensitive content, borderline content that doesn’t violate either the platform’s policies or the law. They can adapt and adjust their own moderation, as opposed to someone who really doesn’t want to see anything remotely sensitive or related to some topics because of any trauma or just dislike or whatever. That can be helpful.
And I’ll give a shout out here to Discord, with whom we collaborated for a year on engaging external stakeholders as they developed ML-driven interventions to moderate abuse and harassment on their platform, specifically for teens. And so we really worked with a broad range of stakeholders, including youth and children, which is interesting, to understand how they would like to see AI-driven innovation and intervention. So it wasn’t only LLMs, it was also traditional machine learning.
But yeah, so to sum up, creative uses of LLMs for moderating content, and by moderating I mean broadly, are something that I support much more than automated removal. One of the conclusions of this report is that, at least today, automated removal is still too risky, potentially harmful, and just ineffective. There will be too many false positives and too many false negatives, with both of those disproportionately falling on already marginalized groups, who tend to be, as most folks probably know, disproportionately silenced by platforms and also disproportionately targeted by violative content. So that’s one of the main findings: how we can use LLMs safely is more on the procedural side than actually on the content.
Justin Hendrix:
We’ll see how platforms attempt to go about this. We’re already seeing some examples of uses of LLMs in the wild. There’s been reporting even this week on Meta’s new community notes program using LLMs in certain ways. So I think there’ll be probably as wide a variety of applications of LLMs as there have been of machine learning in content moderation, and we’ll just, I suppose, see how it goes. I mean, different platforms, who knows, maybe one or two will err towards creating an AI nanny-state, hyper-sanitized environment that people may recoil from, or regard as overly censorious or what have you. And yet it’s possible to imagine, as you say, lots of different types of interventions that people might regard as useful or helpful in whatever way.
But I want to pause on maybe thinking about the benefits and dig a little more into the potential human rights impacts, because that’s where, of course, you spend the bulk of this report, concerned with things like privacy, freedom of expression and information and opinion, questions around peaceful assembly and association, non-discrimination, participation. Take us through a couple of those things. When you think about the most significant potential human rights impacts of the deployment of large language model systems in content moderation at scale, what do you think is most prominent?
Marlena Wisniak:
So I’ll just highlight some of the impacts most specific to LLMs. One thing to consider is that LLMs often exacerbate and accelerate already-existing harms done by traditional machine learning, and traditional ML often accelerates and exacerbates harms committed by humans. Scale can be good for accuracy and speed, obviously, but there’s another side to that coin.
So some of the things you’ll find in the report are kind of an LLM-ified analysis of automated content moderation, but I will single out a few really new concerns related to LLMs. One of them, coming back to the concentration-of-power issue that I mentioned before, is that any decision made at the foundation-model level, unless it is proactively fine-tuned away at the deployment level, will trickle down across platforms.
So to give you an example, if Meta decides that pro-Palestinian content is considered violent or terrorist content, and there has been a lot of reporting to show that, then if another platform uses Llama without specifically changing that, any decision that Meta makes at the level of Llama will trickle down to the other platforms. So that’s something to consider from a freedom of expression angle: generalized censorship, if it’s a false positive, and at the same time, content that should be removed will not be removed if the foundation model does not consider it harmful. That interaction is particularly important because we’ve never seen it before, to my knowledge at least, in the dynamics of content moderation.
Another big thing, around freedom of information, for example, is hallucinations. That is a very stereotypical GenAI problem. And for folks who don’t know, hallucinations are content generated by the AI system that is just made up and wrong. The weird thing is that it does this in a way that seems so confident and so right, and it is just nonsense. So it’ll make up academic papers, or it’ll make up news articles or any kind of facts.
So if platforms use this to moderate mis- and disinformation, that will just be inaccurate to begin with. And it’s hard sometimes to parse that when you have pretty convincing content. And even if, for example, human moderators were to use LLMs or GenAI to help them moderate content, if they see this really elaborate article about how, whatever, Trump won the 2020 elections, for example, and they’re not familiar with the subject, that could form the basis of their decisions, and it’s just plain wrong. So that’s a new harm and risk.
Another one that I found was super interesting, and this, I will say, was me… a lot of this paper was me trying to envision harms and then probe them with engineers or technical folks, and also non-technical folks as well, to ask them: does this make sense? Could this be true? For example, on freedom of peaceful assembly and protest, one thing to consider is that protests, and contrarian views, are by definition anti-majority.
You have a minority expressing itself. You go against powerful interests like governments or companies, or even just the status quo. And what does that mean? This data is not well-represented in the training datasets. Because machine learning, I often say, is just stats on steroids. So if a view is predominant in the dataset, even if it’s completely wrong, that is the output, right? Machine learning never gives a real decision. It gives a prediction about what is statistically likely. So when you have protestors, or investigative journalists, for example, or anybody bringing up new stuff, that will not actually show up in the dataset and therefore will not be moderated well. And let’s give platforms and those deploying content moderation systems the highest benefit of the doubt. They probably want to moderate it well; the system just will not function unless specifically fine-tuned, because protests fall outside the curve of the statistical data.
And that’s also the case for conflicts or exceptional circumstances, crises. These events are, by definition, exceptional, and therefore fall outside the statistical curve and are not moderated well. And that’s another key finding. For freedom of association, one thing that could be interesting here is that some organizations can be mislabeled. Actually, you know what, Justin, forget that one. It’s too long. I’ll skip it.
Two last pieces I’d like to highlight that we found were really interesting. One was on participation. So on one side, LLMs actually have the potential to support more participatory design by enabling customizable moderation. Like I said before, users can have personalized content moderation, or perhaps in the future use AI agents to moderate their own content as they want. On the flip side, in practice, affected communities are largely excluded from shaping content moderation systems. That’s not new. That has also happened in machine learning.
Generally, it’s mostly white tech bros in San Francisco or Silicon Valley, so especially marginalized groups and those in the global majority are excluded from that. The addition with LLMs is that they are typically “improved” through methods like reinforcement learning from human feedback and red-teaming. So you have folks thinking about, in the case of red-teaming, what the potential bad, worst-case scenarios are, and testing the model from an adversarial perspective.
Same for reinforcement learning. They go through these models and they “fix” them, they reteach them how to learn. The problem is that the people who do that are usually Stanford graduates, those in Silicon Valley or other elitist institutions. It’s very rare, I personally have never heard of folks from marginalized groups being invited to participate in these kinds of activities. I myself, for example, have been invited, but you have to be in a niche kind of AI group to do that, and everybody who I’ve spoken to who has done it, I’ll say, is part of a very homogeneous group. And so basically what that means is, yes, the LLMs will be improved… or, I’m sorry, let me rephrase. There are efforts to reduce inaccuracy and improve the performance of the models, of the LLMs. However, the people who do that are typically very homogeneous. So it’s just a snowball effect.
And the last piece, on remedy, is that while there are potential promises for better access to remedy, like I said before (user notifications, helping people identify remedy and appeals mechanisms, speeding up appeals), there’s also fundamentally a lack of explainability and transparency, and that can create barriers to remedy. Another issue, like I said before, is that there are these two layers: the layer of foundation models, so the ChatGPT, the Claude, the Gemini, and then the social media platform that deploys them. And it’s hard to know where to appeal, how to appeal. The foundation-model companies themselves don’t really know how a decision was made. Social media platforms know that even less. Do you appeal to the platform? Does the platform then appeal again to the third-party LLM? Where does the user fall in this? Accountability becomes fragmented, and there’s a lot of confusion and lack of clarity around how to navigate that.
Justin Hendrix:
So I want to come to some of your recommendations, because you have recommendations both to LLM developers and deployers and, of course, to those who are potentially applying these things inside of social media companies. But your section on recommendations to policymakers is perhaps mercifully brief. You’ve only got a handful of recommendations there. It’s clear, I think, just from where the report puts the onus in its recommendations, that you see it largely as something the private sector needs to sort out: how they are going to deploy these technologies, or not.
But when it comes to policymakers, what are you telling them? I see you’re interested in making sure they’re refraining, on some level, from mandating the use of these things. I suppose it’s a possibility that somebody might come along and say, “Oh, in fact, we demand that you use them.” I was trying to think of a context for that, but then I found myself thinking about some of the laws we’ve seen put forward in the United States even, where there have been segments of those laws, I think in some of the must-carry laws, where there have been these ideas around transparency of moderation decisions. I feel like I’ve read amicus briefs where some of the people opposed to those laws would make arguments like, “Well, it’s just simply not possible to give explainable rationale to every single user for every single content-moderation decision that’s made.”
Well, presumably, LLMs would make that quite possible, and you could imagine a government coming along and saying, “Somehow, in the interest of free expression, we would like to mandate, use artificial intelligence to explain any content moderation decision that you might take.” You say that’s a bad idea along with other things, but what would you tell policymakers to be paying attention to here?
Marlena Wisniak:
Yeah, I mean, that’s a very astute… you observe that well. Most of the recommendations are to LLM developers and deployers. The reason for this is not that we only see the onus on the private sector and not the public sector; it’s that, for practical reasons, this part of the research was mostly about assessing the human rights impacts. The recommendations piece was really the last part, and we didn’t have that much capacity to go deep. So hopefully we will continue with a second iteration of this report, which will really zoom into the recommendations.
And I will say, then, that on the policymaking side, a lot of our work at ECNL is on policy and legal advocacy. We’ve been working behind the scenes on the AI Act for the past five years, and right now there are conversations around the GPAI code in the EU on general-purpose AI. And the reason why I didn’t go deep here is, one, we don’t want a specific AI content moderation or LLM moderation law. We have the DSA in Europe. The U.S. I’ll just set aside, because right now there’s a lot going on. So we’re not calling for a specific LLM content moderation law.
And the EU already has the DSA and the AI Act. And I’d say a lot of the foundational asks to policymakers would be kind of basic AI governance: data protection, human rights impact assessments, stakeholder engagement. So I added these big categories. I didn’t go really deep into them. It’s mostly how can we not fuck it up, to be honest. And I think content moderation, as you know, is a very fraught topic where even folks who are well-intentioned just don’t understand it enough. So sometimes you will have these claims that, let’s say, platforms should remove problematic content within an hour. And it’s like, okay, cool. In theory, that sounds great. What does that mean in practice? It means a lot of false positives. It means that marginalized groups will be disproportionately impacted, including sex workers, racialized groups, queer folks. And you’ll have to use automated content moderation, which has all the false positives and false negatives that we’ve reported on.
So the key recommendations we made to policymakers here are very foundational, I’d say. One, do not mandate LLM moderation, exactly for the reason that you expressed before. Two, maintain human oversight. So if LLMs are used to moderate content, and especially to remove content, there should still be a legal requirement for platforms to integrate human-in-the-loop systems, meaning that humans review whatever decision the LLM makes. Three, broad transparency and accountability metrics and requirements, which is very DSA-esque… sorry, I should have said Digital Services Act. And a lot of that is really about transparency, like mandating disclosure of how LLM systems function and how they’re used. The reality, Justin, is we don’t know. I had several off-the-record calls with platforms; off-the-record means I couldn’t publish the names. They also gave me very vague information. There’s some information that was published by them, and you’ll see it in the report.
OpenAI has said they use LLMs for content moderation. Meta says they’re beginning to play around with it, as has Google with Gemini. But overall, we don’t know. I definitely don’t know when it’s used when I use social media. We don’t know accuracy rates, we don’t know how LLMs are used in appeals, or how decisions are enforced. So really, require platforms to notify users about LLM moderation actions. And again, that’s nothing new, I would say. That’s just taking the prior Santa Clara Principles on transparency and accountability, or the DSA, or kind of mainstream civil-society asks, and implementing them for LLMs.
The two last ones that we highlighted: one is mandating human rights impact assessments. So hey, platforms, good news. I did one here, so you can use it… My goal is that this will be a starting point for platforms, basically an HRIA handed to them on a silver platter, and then obviously they should use it as a starting point to look at how they implement LLMs on their platform. It’ll be specific to each platform.
But on that note, one thing that policymakers could do is make HRIAs mandatory for LLM developers and the platforms that deploy them, both before deployment and throughout the LLM lifecycle. So for example, the pilot with Discord was at the ideation phase, so design, product design. Before they even developed the product, they consulted with a lot of folks to see how an LLM or machine-learning-driven system could be helpful. Is it nudges? Is it for removing content? Do we not want that? Why? And then continue that all the way through development and deployment, and make it accessible to external stakeholders.
There’s so much expertise in the room, and I often say to platforms that, you know, trust and safety teams, policy teams, human rights teams tend to be underfunded. Just go to civil society that has such extensive knowledge, or journalists like yourself and academics and just drink their wisdom, because there’s a lot of stuff out there.
Justin Hendrix:
This report seems like a good jumping-off point for a lot of folks who might be interested in investigating or collecting artifacts of how the platforms are deploying these things, or generally trying to pay attention to these issues and discern whether the introduction of LLMs in this context is, on balance, a good thing, a bad thing, or perhaps somehow neutral or indiscernible with regard to the overall information integrity environment. What would you encourage people to do next? What would you encourage them to go and look at? What threads would you like to see the field pull on from here?
Marlena Wisniak:
Yeah. One core piece that we didn’t talk about much is the multilingual piece of it, so how this can work in different languages. And I’ll just give a shout out here to really cool efforts in the global majority to develop smaller, local, community-driven LLMs or data sets, like Masakhane in Africa. There are really cool community-driven initiatives that kind of go beyond the Silicon Valley profit-first monopolization dynamics. So that is one thing, and I really encourage not only researchers but also platforms to talk to them. A lot of the platforms that I spoke with didn’t even know these existed, and if I say that this report is an HRIA handed over on a silver platter, these initiatives have really cool data sets that would be really helpful, especially to smaller platforms, which could just plug them into their own models.
So that’s one thing that I hope research and industry will move towards: more languages, more dialects, understanding that it’s not only a difference between English and other languages. It really is a colonial, imperialist dynamic where English or French or German or Spanish will be much better moderated, languages that are close to these do better, and then obscure, poorly researched languages work very, very poorly. And you can read in the report that the reasons are both that the data sets do not exist and that they’re just bad quality, because there’s not enough investment. So I really would encourage platforms to invest more resources into that, more participation, and to proactively include stakeholders.
The other area I would love to see is just open conversations between platforms and civil society. And like I said, this is a nerdy topic, LLMs and content moderation. It doesn’t roll off the tongue. So for folks with expertise in content moderation who would like to engage, hopefully this report can give them more context. And then I would love to see more evidence, new ideas. Like I said, some of these things were my own kind of “How could this work?” probed through many conversations with folks, and I really would want to see more thinking, more assessment of impacts, and more evidence as well.
Like this paper, it’s long, 70 pages. I didn’t mean to make it this long, but there’s a lot of stuff. And I’d say the last part is computer science papers move so fast, incredibly fast. It’s hard to keep up, and they’re very theoretical. So to the extent that there can be more collaboration between technical folks who come from the machine learning, AI, or computer science field, and policy and human rights, I think we’ll actually be able to build much better products and push policymakers to regulate this stuff better.
Justin Hendrix:
I appreciate you taking the time to speak to me about this, and I would encourage my readers to go and check out this full report, which is available on the ecnl.org website. I will include a link to it in the show notes. Thank you very much.
Marlena Wisniak:
Thanks, Justin.
-
OPEC+ to boost oil production even more than expected in August – MarketWatch
- OPEC+ members agree to larger-than-expected oil production hike in August – CNBC
- OPEC+ adds 548,000 bpd in August – The Express Tribune
- Oil falls slightly ahead of expected OPEC+ output increase – Business Recorder
- Oil prices hold steady amid strong US jobs data and tariff uncertainty – Daily Times
-
Bald Baby J.D. Vance Meme Can Now Be Your Boarding Pass
The long, strange saga of viral memes distorting the appearance of J.D. Vance continues — and this time, it’s literally taking off.
That’s because New York-based tech innovator James Steinberg, whose many fanciful projects include a detective agency for mundane mysteries and a crowdsourced New York City map of “public cats” available for petting, has designed an app that allows you to change the background of your digital airplane boarding pass to display a now-infamous image of the vice president as a bald, bearded baby-man. And if you’re wondering: yes, he’s tried it without getting arrested.
“I checked with the TSA subreddit first to see what issues I might encounter,” Steinberg tells Rolling Stone. “Some thought it was stupid, some funny, a few asked how to make custom passes like that, but nobody said I would get on the flight ban list, so it seemed okay.” Last week, he went for it, posting a photo from JFK International Airport with the caption “can’t wait to use my big beautiful bald boarding pass to travel today.”
He says there was no issue at security. “The TSA tried not to show emotion but looked mildly amused,” Steinberg says. “Maybe I should try in a less liberal airport.” While it is illegal to tamper with U.S. boarding passes, the enforcement of this law is more about security risks that could arise from changing personal information or altering a QR code, neither of which Steinberg’s app touches. There are also hundreds of U.S. airports where the TSA only needs to see your ID and won’t ask for your boarding pass. Still, Steinberg says, “Use at your own risk!”
Steinberg’s gag was inspired by the story of a less fortunate traveler. Last month, Mads Mikkelsen, a 21-year-old Norwegian tourist (not the Danish actor), told his hometown newspaper that when he flew into Newark Liberty International Airport for an extended vacation in the U.S., he was detained by border control for hours and forced to unlock his phone, on which agents found the bald Vance meme in question. Mikkelsen’s entry into the country was denied, and he believed it was due to sensitivities over the doctored likeness of the veep. Customs and Border Protection later denied this, claiming that Mikkelsen’s “admitted drug use” — he had previously partaken of legal recreational cannabis — was the reason for his ejection.
Either way, the meme got a lot more traction, raising questions about freedom of political speech. An Irish politician even waved around a printout of the bald Vance meme in parliament while warning of American censorship and repression. “In my opinion, it leads to a discussion at the heart of the issue,” Steinberg says of his app. “For everyone else, I guess they can just enjoy making fun of J.D. Vance.”
The project has the approval of Dave McNamee, the creative consultant and humorist in Los Angeles who kicked off the Vance edits craze back in October of 2024. After Rep. Michael Collins of Georgia was mocked for posting a portrait of Vance that had been digitally manipulated to give him a more chiseled, “Chad”-like look, McNamee went in the opposite direction.
“For every 100 likes I will turn J.D. Vance into a progressively apple cheeked baby,” he wrote on X, posting the same portrait but giving Vance a more bloated face. The post eventually racked up more than 200,000 likes and 16 million views, per X metrics, and McNamee indeed continued to make Vance’s head wider and redder in the sequence of posts that followed.
“It was very weird and funny to see it take off,” McNamee says. About two weeks after his viral thread, another X user debuted the bald version of Vance using an image of the vice presidential nominee during his debate with Minnesota Gov. Tim Walz. But it was only months later, following a contentious White House meeting in February where Vance berated Ukrainian President Volodymyr Zelensky, that “it came back bigger than ever,” McNamee explains, with people altering a photo of Vance in the Oval Office to give him chubby cheeks.
“That’s when it became a cryptocurrency and was everywhere,” McNamee says. Another one of the edits he made, of Vance with a propeller hat and a massive lollipop, became the basis of that meme coin, PWEASE. (Many of the Vance jokes also involved exaggerated baby-talk.) “People made millions,” McNamee claims, while he netted around $10,000 from the minting of his content for the blockchain. Though it has since fallen back to earth, at one point PWEASE had a market cap of approximately $60 million, having shot up more than 92,000 percent in value since its creation.
It was around then that Vance told a reporter that he had seen the memes and found them amusing. That response — which some read as less than genuine — “only felt natural because once it became a crypto thing I knew it was in the right-wing internet zeitgeist,” McNamee says. As for the continued expansion of the Vanciverse, he feels that people should see these memes “every day until no one knows what J.D. Vance looks like.” Of Mikkelsen’s trouble with border security, he adds, “the possible suppression of rights because you have a meme is fucked,” and that Steinberg’s form of quiet protest “is funny.”
Steinberg tells Rolling Stone that his app has already been used to create more than 300 digital plane boarding passes. And while it’s not clear how many were actually used, he expects to hear from some of those brave enough to go big, beautiful and bald for their next flight. “A person reached out and said [he is] going to try and film his journey,” he says. “Not clear how allowed that is.”
Filming in the TSA line definitely isn’t permitted, it’s important to note, but it’s perfectly understandable that people are excited to show off this iconic work of art. Stay safe out there, and may that goofy meme take you wherever you want to go.
-
This face-tearing sports car could be built in Australia
Ariel was thrust into the spotlight in late 2004 when Jeremy Clarkson drove its second-generation Atom on Top Gear, with its lack of a windscreen creating an iconic moment, albeit an unpleasant one for the presenter.
That was less than five years after the company was founded, and Ariel continues to build the Atom a quarter of a century later, with the latest fourth-generation model still rolling down its production line.
To celebrate its 25th birthday, Ariel has unveiled the new Atom 4RR, the most potent version of its current model yet.
Powered by the turbocharged ‘K20’ 2.0-litre four-cylinder engine from the Honda Civic Type R, the Atom 4RR features a four-pot modified by Ariel to produce 391kW and 550Nm – a healthy 93kW increase over the ‘standard’ 4R, and 19kW more than Ariel’s famed Atom V8.
On top of the power increase, Ariel has made “a host of internal changes and new components, as well as optimised oil and fuel systems”; however, it’s yet to detail exactly what those changes are.
Apart from its unmissable fluorescent yellow livery, there don’t appear to be too many changes between the Atom 4R and the Atom 4RR, which means wild Formula 1-esque wings and sidepods, as well as exposed pushrod suspension.
Ariel says it’ll announce more details soon, expected to be at next weekend’s Goodwood Festival of Speed.
What we do know is just 25 examples of the Atom 4RR will be made, representing one-quarter of Ariel’s current annual production capacity.
However, there is an opportunity for the 4RR to be built outside of the UK, in Australia, with the Lightspeed Motor Company announcing in May it had secured the exclusive licence to the Atom and its off-road Nomad sibling.
Lightspeed said it plans to manufacture and export Ariels from Melbourne, claiming to have “secured the rights to sell the Ariel Atom and Nomad across Asia Pacific”, which it says “unlocks a powerful expansion pathway and local production means competitive pricing, shorter lead times with local compliance”.
The firm claims more than 4000 expressions of interest were taken for the Atom in late 2023 when Ariel partnered with Road and Track as its official dealer Down Under.
“By producing the Atom and Nomad domestically, Lightspeed will eliminate lengthy delays, reduce costs, deliver a strong return to investors and deliver these iconic machines into the hands of passionate Australian drivers faster and more affordably than ever before,” the company said.
-
China reroutes exports via south-east Asia in bid to dodge Trump’s tariffs
Chinese businesses are sending increasing volumes of goods to the US via south-east Asia in a bid to evade the tariff wall erected by Donald Trump as part of his trade war, data suggests.
The value of Chinese exports to the US dropped by 43 per cent year on year in May, according to figures published by the US census bureau — equivalent to $15bn-worth of goods.
But the country’s overall exports rose by 4.8 per cent in the same period, official Chinese data showed, as the shortfall in trade with the US was offset by a 15 per cent increase in shipping to the Association of Southeast Asian Nations trade bloc and a 12 per cent rise to the EU.
This week Washington struck a trade deal with Vietnam that includes a 40 per cent levy on goods that are trans-shipped through the country, in a move that was widely thought to be targeting Chinese re-exports to the US.
Scores of other countries have not yet reached trade deals with Washington. The pause on Trump’s “reciprocal” tariffs ends on Wednesday, and any future deals could also include additional trans-shipment levies. US Treasury secretary Scott Bessent said on Sunday that the higher tariffs would take effect in August.
Mark Williams, chief Asia economist at consultancy Capital Economics, said the data showed “a really striking pattern”.
“We saw this during the first US-China trade war. There was a fairly immediate shift. US imports from China dropped off, but they picked up from Vietnam and Mexico,” he said.
Trump’s imposition of tariffs on China during his first presidential term in 2018 significantly boosted Vietnam’s manufacturing industry and there is mounting evidence that the latest measures are giving it a fresh lift.
Separate research by Capital Economics estimated that $3.4bn of Chinese exports were rerouted through Vietnam in May, a rise of 30 per cent compared with the same month last year.
Indirect trade through Indonesia also increased markedly, with an estimated $0.8bn rerouted in May 2025, 25 per cent higher than May 2024.
Exports of electronic components such as printed circuits, parts of telephone sets and flat panel display modules to Vietnam were up by 54 per cent, or $2.6bn, in May 2025 compared with a year earlier, Chinese data shows.
In India, the effects of the Trump tariffs have been heavily concentrated in smartphones, driven in large part by Apple’s decision to shift the assembly of all US-sold iPhones to India as soon as next year.
Indian exports to the US jumped 17 per cent in May compared with a year earlier, while imports from China and Hong Kong rose 22.4 per cent, according to Ajay Srivastava, founder of the Global Trade Research Initiative, a research group.
“India’s import surge in electronics and machinery — much of it from China — and rising exports to the US suggest that global supply chains are adapting [to the tariffs] quickly,” Srivastava said.
Trump’s tariffs are also forcing manufacturers to seek other markets to sell output that is no longer reaching America.
In the United Arab Emirates, imports from China rose by $1.1bn in May 2025 from a year earlier, a 20 per cent increase, with smartphones, laptop computers and disposable vapes among the largest items.
Monica Malik, chief economist at Abu Dhabi Commercial Bank, said: “China is targeting other markets for its goods and demand in this region, which has a growing population, a strong investment programme, and little indigenous manufacturing, remains high.”
Anecdotally, Malik added, the visibility of Chinese-branded products including electric vehicles, smartphones and other consumer electronics had grown rapidly in the last couple of years. “You suddenly see a lot of Chinese EVs on the roads here,” she said.
In Europe, analysts say excess Chinese exports are more likely to be consumed than trans-shipped.
The European Commission on Friday reported sharp increases in imports of textiles, chemicals and machinery in the first five months of 2025 compared with the year before. But officials cautioned it was hard to draw conclusions yet.
The most visible early sign of trade redirection has been a sharp increase in low-value products arriving from China after Trump barred China from using the so-called “de minimis” rule, which allowed retailers like Temu and Shein to ship goods valued at less than $800 into the US tariff-free.
Since then there has been a sharp drop-off in air freight from China and Hong Kong to the US. Chargeable weight flown in the first week of June was down 19 per cent from a year earlier, according to data from WorldACD.
EU officials say they have detected increased advertising by the companies as they target European consumers instead. The bloc plans to abolish its own “de minimis” rule and levy a €2 handling charge on each packet.
Maria Demertzis of the Conference Board think-tank in Brussels said that the major trade redirection visible in Europe was in low-value packages from China.
“You can see it in the number of adverts now bombarding everyone for Chinese e-sellers,” she said. “Those items are being consumed in Europe, not re-exported.”
Continue Reading
-
Ford CEO sounds alarm on China’s EV dominance — what that means for you
Ford CEO Jim Farley didn’t mince words at the recent Aspen Ideas Festival, describing China’s rapid rise in the electric vehicle (EV) market as the “most humbling experience” of his career.
“Their cost, their quality of their vehicles is far superior to what I see in the west,” Farley said.
Chinese automakers like BYD have pulled ahead with vertically integrated supply chains, efficient production and government support. They’re pumping out reliable EVs at prices that make even budget-conscious U.S. models look expensive — including the new BYD Seagull, priced under $10,000 USD.
“We are in a global competition with China, and it’s not just EVs. And if we lose this, we do not have a future Ford,” Farley said.
For U.S. consumers, this isn’t just about bragging rights in the global auto race. It’s about what you’ll pay for your next vehicle.
If Chinese EVs enter Western markets without tariffs, they could undercut competitors by tens of thousands of dollars — bringing huge savings to consumers but serious threats to U.S. auto industry jobs. In response, the U.S. government has already imposed steep tariffs on Chinese electric vehicles, which may shield domestic automakers for now but also delay price competition.
Farley’s warning comes as more and more Americans are looking to switch to EVs amid soaring gas prices and expanding charging infrastructure. But since Tesla, Ford and GM EVs often start in the $40,000-$60,000 range, many Americans are priced out.
China’s advantage? It controls much of the world’s battery production and can bring new EVs to market in a fraction of the time it takes U.S. automakers.
Ford is working on a next-gen affordable EV platform intended to match China’s costs but says it won’t arrive until 2027. Tesla is also aiming to launch a $25,000 “Model 2,” but timelines remain uncertain. Until then, the price gap persists.
U.S. Dollar Decline Makes European Travel Pricier For Americans
Monday, July 7, 2025
For American tourists headed to Europe this summer, getting there can be cheaper than in summers gone by, but the same cannot be said for what awaits them on arrival. While last-minute London and Rome trips have become cheaper compared with last summer, exchange rates and inflation have made day-to-day travel expenses much higher.
The Dollar’s Decline and Its Impact on Travel
Travelers from the U.S. have long enjoyed the benefits of a strong dollar when traveling abroad. However, 2025 has seen a drastic shift. The U.S. dollar index, which measures the strength of the greenback against other major currencies, has dropped by 10.3% in the first half of the year. This marks the worst performance for the dollar since 1973. Analysts are cautiously optimistic that the dollar may recover slightly, but for now, the effects are being felt by American tourists.
The weaker dollar is particularly noticeable when converting U.S. dollars into euros and British pounds. As of mid-2025, $1 buys only about €0.85, a significant drop from the €0.93 it fetched this time last year. Similarly, the dollar has declined against the British pound, with $1 now worth about £0.73, roughly six pence less than at the start of July 2024. This devaluation has resulted in noticeably higher costs for travelers when they arrive in Europe.
Increased Costs on the Ground
While airfares may be lower this summer, the situation changes once travelers step off the plane. Many costs associated with European travel, from theater tickets to hotel bills, have risen in dollar terms because of the weaker currency. For instance, a ticket to a popular London play that cost £100 (roughly $135) at the beginning of June now converts to about $137. Similarly, a three-night hotel stay in Barcelona priced at €850 cost about $965 a month ago but now runs roughly $1,002; the euro price is unchanged, and only the exchange rate has moved.
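To make that arithmetic concrete, here is a minimal Python sketch of the conversions behind those figures. The exchange rates are simply the ones implied by the prices quoted above, not live market data:

```python
# Minimal sketch of the conversion arithmetic behind the examples above.
# Local prices are unchanged; only the dollar exchange rate has moved.
# Rates are implied by the article's figures, not live market data.

def usd_cost(local_price: float, usd_per_local: float) -> float:
    """Convert a fixed local-currency price into U.S. dollars."""
    return local_price * usd_per_local

# London play: £100, with the pound moving from ~$1.35 to ~$1.37
print(f"Ticket then: ${usd_cost(100, 1.35):,.2f}")  # ~$135
print(f"Ticket now:  ${usd_cost(100, 1.37):,.2f}")  # ~$137

# Barcelona hotel: €850, with the implied dollar-per-euro rate rising
rate_then = 965 / 850    # ~1.135 $/€ a month ago
rate_now = 1002 / 850    # ~1.179 $/€ today
print(f"Hotel then:  ${usd_cost(850, rate_then):,.2f}")  # ~$965
print(f"Hotel now:   ${usd_cost(850, rate_now):,.2f}")   # ~$1,002
```

The point of the sketch is that the dollar figures change even though nothing was repriced locally; the entire increase comes from the conversion rate.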
A Silver Lining: Cheaper Airfares
The drop in airfare prices is one of the few bright spots for travelers this summer. According to data from booking platform Hopper, international flight prices to Europe and Asia are down by 10% and 13%, respectively, compared to the same time last year. These lower fares are helping to offset some of the increased costs caused by the currency exchange issues. In fact, flights to destinations like Sydney, Rio de Janeiro, and Dublin are being offered at some of the lowest prices ever seen, providing opportunities for bargain hunters to secure deals even as other costs rise.
The decreasing price of airfares has been attributed to increased competition among airlines and the return to pre-pandemic pricing. Travel experts are hopeful that this trend will continue throughout the summer, despite the challenges posed by exchange rates.
Changing Spending Habits of U.S. Travelers
Despite the higher costs once they arrive, many U.S. travelers seem undeterred. A survey by Tourism Economics, a market research firm, found that travel volumes among U.S. citizens returning home increased by about 2% over the 28 days leading up to June 21 compared with the same period last year. While some may be altering their itineraries, opting for places where the dollar goes further, the general appetite for travel abroad remains strong.
Even so, experts suggest that economic uncertainty, rather than exchange rates, is playing the larger role in shaping travel decisions this year. For many consumers, the question of whether to take an international trip isn’t so much about the dollar’s value as it is about job security, inflation and the rising cost of living. Greg McBride, chief financial analyst at Bankrate, points out that cancellations are more likely to be driven by fear of job loss or financial strain than by the current exchange rate.
Continued U.S. Outbound Travel Demand
While the rising cost of living and global economic instability may deter some travelers, the overall demand for international travel among U.S. citizens remains robust. Tourism Economics has reported that U.S. travel spending abroad has risen by 8.6% in the first four months of 2025, signaling that U.S. travelers are still willing to make international trips despite economic pressures. This trend is likely to continue as consumers seek out opportunities to explore new destinations and experience different cultures.
Despite the economic uncertainties, some U.S. travelers are finding ways to mitigate the impact of a weaker dollar. This includes adjusting their budgets, seeking out cheaper accommodations, or shifting travel plans to countries with more favorable exchange rates.
Consumer Sentiment on Leisure Travel
The impact of the economy on leisure travel cannot be overstated. According to a survey conducted by Morning Consult, 31% of consumers reported that concerns about the U.S. economy and personal finances were reducing their interest in leisure travel over the next few months. These findings suggest that the rising costs of everyday goods and services are influencing many people’s travel decisions.
Additionally, with inflation affecting everything from gas prices to grocery bills, travel plans for many Americans are being put on the back burner. As Nicki Zink, deputy head of industry analysis at Morning Consult, notes, the state of the economy is weighing on travel demand more than any other factor. With household budgets tightening, some Americans are opting to stay closer to home rather than embark on expensive international vacations.
Conclusion: Travel Plans in Flux
As the summer continues, the weaker dollar will keep affecting U.S. travelers bound for Europe. Although Europe-bound flights may be cheaper than before, travelers will face higher prices on arrival because of unfavorable exchange rates. For most people, the decision to travel abroad will rest not only on the price of a plane ticket but also on personal finances and broader economic trends.
Even with airfares down, shifting traveler attitudes in response to inflation and economic concerns will shape travel patterns in the coming months, as travelers weigh the appeal of foreign destinations against the added costs and economic pressures.
Despite these difficulties, U.S. demand for international travel remains robust, and many Americans are adapting to these shifts while keeping their interest in exploring Europe and beyond alive and well.
Oil Market Heading For Surplus In 2025 On Latest OPEC+ Output Hike
An oil storage facility in Groot-Ammers, The Netherlands.
The global oil market is likely heading for a surplus this year following a higher than expected production hike by OPEC+ over the weekend.
At their meeting on Saturday, eight members of OPEC+, the alliance of the Saudi Arabia-led Organization of the Petroleum Exporting Countries (OPEC) and a group of Russia-led producers, opted to raise their collective production levels for August by another 548,000 barrels per day (bpd).
Producers Saudi Arabia, Russia, Iraq, United Arab Emirates, Kuwait, Kazakhstan, Algeria, and Oman cited “healthy oil market fundamentals and steady global economic outlook” as the reasons behind the move, indicating their belief that the global oil market can absorb the additional supply.
The action, which took the market by surprise, followed an initial increase of 138,000 bpd in April and three consecutive output hikes of 411,000 bpd announced by OPEC+ in recent months. The series of hikes is part of the group’s attempt to unwind 2.2 million bpd of cuts it has agreed to since 2022.
The latest hike implies that 1.92 million bpd, or over 87%, of those cuts have now been unwound. As in previous instances, OPEC+ said: “The gradual increases may be paused or reversed subject to evolving market conditions. This flexibility will allow the group to continue to support oil market stability.”
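As a sanity check on those numbers, here is a short Python sketch summing the monthly increases reported for April through August; the April figure is the initial 138,000 bpd increase noted above:

```python
# Quick check of the unwind arithmetic cited above (figures in bpd).
hikes = {
    "April": 138_000,    # initial increase
    "May": 411_000,
    "June": 411_000,
    "July": 411_000,
    "August": 548_000,   # the latest, larger-than-expected hike
}

total_cuts = 2_200_000   # the 2.2 million bpd of cuts being unwound

unwound = sum(hikes.values())
print(f"Unwound so far: {unwound:,} bpd")             # 1,919,000 ≈ 1.92m
print(f"Share of cuts:  {unwound / total_cuts:.1%}")  # ≈ 87.2%
```

The sum comes to roughly 1.92 million bpd, or just over 87% of the cuts, matching the figures above.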
An Oil Market Surplus Is Imminent
For all intents and purposes, the hike is an outsized one, demonstrating OPEC+’s intention to put more barrels on the market in pursuit of greater market share.
The hope is that summer demand in the Northern hemisphere will absorb the additional barrels. The problem, however, is that non-OPEC production is also rising at a record-breaking pace, led by the U.S., currently the world’s largest oil producer.
According to the Energy Information Administration, the statistical arm of the U.S. Department of Energy, the nation’s crude production came in at an all-time high of 13.47 million bpd in April, breaking the previous record of 13.45 million bpd set in October 2024.
The ranks of non-OPEC producers are also being boosted by higher output from Brazil, Canada, Guyana and Norway. Collectively, non-OPEC production is likely to grow by 1.4 million bpd this year, according to the International Energy Agency.
Even before any additional OPEC+ barrels, such levels of non-OPEC growth alone are more than sufficient to cover the global demand growth projections for this year put forward by various forecasters. These range from 0.72 million bpd to 1.3 million bpd, with the IEA and OPEC at opposite ends of that range.
With additional barrels flowing in from all corners, there are fears the oil market may end up with a surplus of as much as 500,000 to 600,000 bpd, perhaps even more. As it becomes increasingly apparent that OPEC+ wants to take the fight to non-OPEC producers in a bid for market share, oil prices will likely head lower.
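A rough balance sketch, using only the growth figures quoted above, shows why a surplus looks likely. The numbers are the article’s estimates, not an official forecast, and any extra OPEC+ barrels would sit on top of the result:

```python
# Rough supply-demand balance using the figures quoted above (million bpd).
# A surplus emerges whenever supply growth outpaces demand growth; the
# OPEC+ contribution is excluded here and would widen the gap further.
non_opec_growth = 1.4              # IEA estimate of non-OPEC supply growth
demand_growth_range = (0.72, 1.3)  # IEA (low) vs OPEC (high) demand views

for demand in demand_growth_range:
    surplus = non_opec_growth - demand
    print(f"Demand growth {demand:.2f} mbpd -> non-OPEC-only surplus "
          f"{surplus:+.2f} mbpd before any extra OPEC+ barrels")
```

Non-OPEC growth alone leaves a gap of roughly 0.1 to 0.68 million bpd; layering the August hike and earlier increases on top is what pushes estimates toward the 500,000-600,000 bpd surplus cited above.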
For instance, back in May, prior to the escalation of tensions in the Middle East the following month, Goldman Sachs was predicting sub-$60 average oil prices ($56 for Brent and $52 for West Texas Intermediate) for the second half of the year.
It was among a rising number of banks trimming their oil price predictions for 2025-26 to the $60s or below. Barring a major geopolitical escalation or macroeconomic event, OPEC+ has brought that world a lot closer with its latest hike.