WASHINGTON, D.C. — As artificial intelligence continues to grow in capability, Americans say the government should prioritize maintaining rules for AI safety and data security. According to a new nationally representative Gallup survey conducted in partnership with the Special Competitive Studies Project (SCSP), 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly.
In contrast, 9% say the government should prioritize developing AI capabilities as quickly as possible, even if it means reducing rules for AI safety and data security. Eleven percent of Americans are unsure.
Support for maintaining rules for AI safety and data security reaches majority levels across all key subgroups of U.S. adults, including across party lines: 88% of Democrats and 79% of Republicans and independents favor maintaining rules for safety and security. The poll did not explore which specific AI rules Americans support maintaining.
This preference is notable against the backdrop of global competitiveness in AI development. Most Americans (85%) agree that global competition for the most advanced AI is already underway, and 79% say it is important for the U.S. to have more advanced AI technology than other countries.
However, there are concerns about the United States’ current standing: more Americans say the U.S. is falling behind other countries (22%) than moving ahead (12%) in AI development. Another 34% say the U.S. is keeping pace, while 32% are unsure. Despite ambitions for U.S. AI leadership, and doubts about achieving it, Americans still prefer maintaining rules for safety and security, even if development slows. This view aligns with their generally low levels of trust in AI, which is correlated with low adoption and use.
Only 2% of U.S. adults “fully” trust AI’s capability to make fair and unbiased decisions, while 29% trust it “somewhat.” Six in 10 Americans distrust AI somewhat (40%) or fully (20%), although trust rises notably among AI users (46% trust it somewhat or fully).
Among those who favor maintaining rules for AI safety and data security, 30% trust AI either somewhat or fully, compared with 56% among those who favor developing AI capabilities as quickly as possible.
Robust Support for Shared Governance and Independent Testing
Almost all Americans (97%) agree that AI safety and security should be subject to rules and regulations, but views vary on who should be responsible for creating them. Slightly over half say the U.S. government should create rules and regulations governing private companies that develop AI (54%), roughly matching the percentage who think companies should work together to create a shared set of rules (53%).
Relatively few Americans (16%) say each company should be allowed to create its own rules and regulations. These findings indicate broad support for both government and industry standards.
Americans are more emphatic about independent testing and evaluation of the safety of AI systems before they are released. A majority (72%) say independent experts should conduct safety tests and evaluations, significantly more than the shares who think the government (48%) or each company (37%) should conduct tests.
Multilateral Advancement Preferred to Working Alone
The spirit of cooperation extends to how people think the U.S. should develop its AI technology. Americans favor the U.S. advancing its AI technology in partnership with a broad coalition of allies and friendly countries (42%) over collaborating with a smaller group of its closest allies (19%) or working independently (14%).
This preference for AI multilateralism holds across party lines. Although Democrats are nearly twice as likely as Republicans (58% vs. 30%, respectively) to favor the U.S. collaborating with a larger group of allies, Republicans still favor working with either a large or small group of allies over working independently (19%).
Bottom Line
Findings from Gallup’s research with SCSP highlight important commonalities in how Americans wish to see AI governance evolve. Americans favor U.S. advancement in AI while also prioritizing rules for AI safety and data security. Majorities favor government regulation of AI, company collaboration on shared rules, independent expert testing, and multilateral cooperation in development. As policymakers and companies chart the future of AI, public trust, which is closely tied to adoption and use, will play an important role in advancing AI technology and shaping which rules are maintained.
Read the full Reward, Risk, and Regulation: American Attitudes Toward Artificial Intelligence report.
Learn more about how the Gallup Panel works.