Meta founder and CEO Mark Zuckerberg depicted in his company’s AI smart glasses. Shutterstock
This week, Meta unveiled three new types of smart glasses, each controlled by a wristband and boasting a “voice-based artificial intelligence assistant that can talk through a speaker and see through a camera,” according to the New York Times. Given the tech industry’s record of rolling out products before assessing potential harms and misuse, it is worth contemplating how these new glasses, which Meta founder and CEO Mark Zuckerberg claims will eventually “deliver personal superintelligence,” may be used for nefarious purposes.
The ability to livestream from Meta glasses is already a product feature, and as more of the devices reach the market, it should be anticipated that they will be used in extremist attacks, just as smartphones and GoPros have been in recent years.
Tech platforms have struggled to contain livestreamed extremist attacks. In March 2019, in Christchurch, New Zealand, a man motivated by white supremacist ideology fulfilled a promise he made on the extremist forum 8chan when he used a helmet-mounted GoPro to film himself killing 51 worshippers at two mosques while livestreaming on Facebook. While Meta reported that fewer than 200 people watched the initial livestream, the video propagated across the platform and onto other social media networks, eventually inspiring others to livestream, or attempt to livestream, their own attacks, such as the Poway synagogue shooter the following month, the Halle synagogue shooter in October 2019, and the Buffalo supermarket shooter in May 2022.
The Islamic State-inspired perpetrator of the deadly New Orleans truck attack on New Year’s Day 2025 wore Meta artificial intelligence (AI) glasses to scout the scene beforehand. While he did not initiate a livestream, he also wore the glasses during the attack, which killed 14 people and injured dozens more.
The use of the Meta device in this instance raises new questions about what may come as tech companies push hands-free, internet-connected, camera-equipped AI wearables to consumers. Do smart glasses create new vulnerabilities worth considering, or do they merely represent an evolution from the GoPro-enabled attack at Christchurch? And what technical and policy solutions might mitigate the potential for these devices to record and propagate violent material that could inspire other extremists?
Gamifying extremist violence
Meta has put immense effort into ensuring its Ray-Ban AI glasses become a success. Its initial push to market the glasses came in the form of two Super Bowl ads featuring celebrities sporting the devices. So far, it’s working: an estimated 700,000 units were sold in the first year, and according to one estimate, smart glasses are expected to see 7.6% compound annual growth from 2024 to 2028. Zuckerberg ventures that smart glasses will become the next big consumer technology, going so far as to say that those without smart glasses will one day be at a “significant cognitive disadvantage.” Following Meta’s success, Warby Parker and Google have plans to release “intelligent eyewear” products, demonstrating the expanding market.
These devices will create new opportunities and production methods for everyone who wishes to create content, including extremists. Creating and distributing violent content is an important aspect of extremism because it allows the perpetrators’ message to reach wider audiences, thereby radicalizing and inspiring others. As extremism expert Jacob Ware writes, “the genius of the modern social media radicalization model exists in so-called shitposting culture, which encourages users to post a flood of increasingly shocking and inflammatory content.” Posting and reposting violent and distressing content introduces and desensitizes individuals to ideology that justifies violent extremism, cementing social media’s pivotal role in the radicalization process.
Livestreamed attacks are, for now, a small part of the content that proliferates on the internet from and for extremists. And progress has been made at containing this material, in particular by the Global Internet Forum to Counter Terrorism (GIFCT), which created a “hash-sharing database” to help tech firms identify terrorist and extremist content that appears across platforms. But researchers have found that attacker-produced content is playing an increasingly important role, and that “gamified” livestreams shot in the style of a first-person shooter, while a “consistent but small” proportion of that content, are central to radicalization.
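To make the mechanism concrete, the sketch below is a simplified illustration of how a hash-sharing database works in principle, not GIFCT’s actual implementation: a platform that flags a piece of terrorist content contributes only a digest of the file to a shared set, and other platforms check new uploads against that set without ever exchanging the underlying footage. Real deployments rely on perceptual hashes that tolerate re-encoding and cropping; the exact-match hashing here is purely illustrative, and all function and variable names are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical in-memory stand-in for a shared hash database such as GIFCT's.
# Real systems use perceptual hashes (robust to re-encoding and cropping);
# SHA-256 only matches byte-identical files and is used here for illustration.
shared_hash_db: set[str] = set()


def fingerprint(path: Path) -> str:
    """Compute a SHA-256 digest of a media file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def contribute(path: Path) -> None:
    """A platform flags a file as terrorist content and shares only its hash."""
    shared_hash_db.add(fingerprint(path))


def check_upload(path: Path) -> bool:
    """Another platform checks a new upload against the shared hashes."""
    return fingerprint(path) in shared_hash_db
```

The design choice worth noting is that only fingerprints cross company boundaries, which is what lets competing platforms cooperate on removal without redistributing the violent material itself.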
Furthermore, researchers expect that the “immersive” potential of AI wearables, that is, using virtual reality to view real-time video of locations, will make planning attacks even easier by eliminating the risks and other challenges of scouting sites in person. As use of wearables continues to grow, extremists are bound to embrace these tools, making it easier to plan and livestream violence with limited premeditation.
What can technology companies do?
Technology companies can try to mitigate the role that smart glasses and other wearable devices may play in extremist attacks through technology, policy, and content moderation.
A proactive, wearables-oriented fix may prevent the use of smart glasses in livestreaming violence. One of the biggest draws of AI wearables is their ability to provide more information about a user’s surroundings. For instance, using the wake word “Hey Meta,” users can ask their Meta smart glasses about what they are looking at; the device’s camera sends a photo to Meta’s cloud, where it is processed by AI, and the glasses deliver an audible response in return. Given the wearables’ ability to perceive their surroundings, it may eventually be possible to train them to recognize that an attack is taking place and to cut off livestreaming, or the live connection to social media platforms, until a moderator can review the situation. Care will have to be taken to reduce false positives, and the recognition system will have to be trained ethically, potentially using synthetic data from first-person video games.
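A minimal sketch of what such a safeguard could look like follows. It assumes a hypothetical classifier (represented here by a stub) that scores sampled frames from an active livestream for signs of violence, and hypothetical hooks for suspending the broadcast and alerting human reviewers; none of these names correspond to real Meta interfaces, and the threshold and sampling interval are placeholder values.

```python
import time
from dataclasses import dataclass

# Hypothetical tuning values; a real deployment would calibrate these to
# balance false positives against response time.
VIOLENCE_THRESHOLD = 0.9
SAMPLE_INTERVAL_SECONDS = 2.0


@dataclass
class Frame:
    """A single video frame captured from the glasses' camera (placeholder)."""
    pixels: bytes


def score_frame(frame: Frame) -> float:
    """Placeholder for a classifier, trained ethically (e.g., on synthetic
    first-person footage), that estimates the probability a frame depicts
    an attack. A stub value is returned here."""
    return 0.0


def monitor_livestream(get_next_frame, suspend_stream, notify_moderators) -> None:
    """Periodically sample frames from an active livestream and suspend the
    broadcast for human review if the classifier's score crosses the
    threshold. The three callables are hypothetical platform hooks."""
    while True:
        frame = get_next_frame()
        if frame is None:  # stream has ended
            break
        if score_frame(frame) >= VIOLENCE_THRESHOLD:
            suspend_stream()          # pause distribution to viewers
            notify_moderators(frame)  # route the flagged frame to a reviewer
            break
        time.sleep(SAMPLE_INTERVAL_SECONDS)
```

The key property of this design is that the stream is paused rather than deleted on a positive detection, so a human moderator decides quickly whether the footage is an attack or a false alarm.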
With AI smart glasses and other wearables, technology companies still have the same solutions used to moderate all other violative content: policies and content moderation. Platform policies can be used to minimize the ability of livestreamed violence to reach unsuspecting users. For instance, the Anti-Defamation League recommends allowing only users who reach certain benchmarks to livestream without friction, granting gradual access to wider viewership, preventing livestream links to and from problematic websites, requiring more engagement from the livestreamer to continue livestreaming, delaying livestreams for viewers, and investing in tools that better detect violative content as it is livestreamed.
Fast and accurate content moderation is necessary once violative content is already on a platform, and GIFCT’s hash-sharing database should help minimize the cross-platform proliferation of extremist content if technology companies implement it properly. Policy changes and content moderation, while useful for keeping violent extremist content from spreading to unsuspecting users, do not resolve the issue of extremists using AI wearables to livestream attacks.
As AI wearables become more commonplace, social media platforms should prepare to see these products used to livestream violent extremist acts. And we should be concerned not only about extremist access to this technology, but also about any actor who seeks to create fear among communities. Recently, a Customs and Border Protection agent was seen wearing AI wearables at an immigration raid. Although there is no evidence of a recording, quick access to hands-free livestreaming and recording technology could play a role in propaganda for state aggression, creating content that adds to the “spectacle of intimidation” against immigrants or other vulnerable groups.
Livestreamed content can be dangerous when it is used to radicalize and inspire others to violence. For instance, while the shooter at Annunciation Catholic School in Minneapolis did not livestream the attack, he did upload videos to YouTube ahead of the attack describing his desire for infamy, and included markings on his weapons and gear that invoked previous attacks, such as those at Christchurch and Buffalo. Similarly, in Colorado, the Evergreen High School shooter’s online activity before the attack earlier this month suggests he “intended or hoped” to livestream his attack.
Recognizing the potential for contagion, tech companies have a duty to invest in the technology, policies, and content moderation tools needed to minimize the risk that their products will be used to advance extremism. This process must be repeated with the launch of any new device, especially those that can connect users to livestreams, effectively allowing extremists to connect directly with their audiences.