SEC establishes AI Task Force to drive internal innovation and efficiency

The US Securities and Exchange Commission (SEC) has announced the formation of a dedicated Artificial Intelligence (AI) Task Force, marking a significant step in the agency’s ongoing efforts to modernize its operations and regulatory oversight.

According to the press release, the goal of the task force is to facilitate the use of AI technology within the SEC, which will “enable internal cross-agency and cross-disciplinary collaboration to navigate the AI lifecycle, remove barriers to progress, focus on AI applications that maximize benefits, and maintain governance.”

This initiative is designed to accelerate the responsible integration of AI across the SEC, with the goal of enhancing innovation, efficiency, and the agency’s ability to fulfill its core mission of protecting investors, maintaining fair and orderly markets, and facilitating capital formation.

The launch of the task force dovetails with the White House’s recent AI Action Plan, which calls for accelerated adoption of trustworthy AI across the federal government, reflecting a coordinated push to modernize oversight through responsible AI.

Leadership and structure

The AI Task Force will be led by Valerie Szczepanik, the SEC’s newly appointed Chief AI Officer. Szczepanik brings extensive experience to this role, having previously served as Director of the SEC’s Strategic Hub for Innovation and Financial Technology, Senior Advisor for Digital Assets and Innovation, and in senior positions within the Division of Corporation Finance and the Division of Enforcement.

Objectives and scope

According to the press release, the AI Task Force has several key objectives:

  • Centralizing AI efforts: The task force will serve as a hub for cross-agency and cross-disciplinary collaboration, aligning and centralizing AI initiatives throughout the SEC.
  • Accelerating AI integration: By promoting the adoption of AI-enabled tools and systems, the SEC aims to augment staff capacity, streamline workflows, and improve the accuracy and timeliness of its regulatory and enforcement activities.
  • Supporting innovation: The task force will facilitate responsible AI integration across the agency’s divisions and offices, supporting innovation and the development of trustworthy, effective, and mission-enhancing AI solutions.
  • Maintaining governance: A core focus will be ensuring robust governance, transparency, and ethical standards in the deployment of AI, addressing potential biases, and maintaining public trust in the SEC’s processes.

Strategic implications

SEC Chairman Paul S. Atkins has emphasized that ingraining innovation into the agency’s culture is key to advancing the SEC’s mission. The AI Task Force is expected to equip staff with advanced tools – enabling the agency to more efficiently identify issues for potential rulemaking and enforcement, and to respond more effectively to emerging challenges in the financial markets.

This initiative aligns with broader federal efforts to leverage AI for government efficiency and regulatory modernization. The SEC’s approach is to foster responsible AI use internally, setting a potential precedent for other regulatory bodies and signaling a shift toward greater technological sophistication in financial regulation.

Recent developments and broader context

The SEC’s internal embrace of AI follows its February 20, 2025 announcement of the creation of a Cyber and Emerging Technologies Unit (CETU), which investigates and pursues enforcement actions for offenses involving the use of AI and other emerging technologies.

The agency’s increased use of AI also represents an evolution of its longstanding interest in leveraging technology, including prior initiatives to utilize “big data” for identifying and investigating potential violations of federal securities laws.

The SEC’s ongoing and future use of AI for investigations and compliance underscores the agency’s commitment to staying at the forefront of technological advancements in regulatory oversight.

Implications for market participants

While the immediate focus of the AI Task Force is on internal operations, the SEC’s embrace of AI will likely have downstream effects on the regulatory environment. Enhanced efficiency and innovation within the SEC may lead to more timely and effective enforcement actions, as well as the potential for new rulemaking initiatives informed by advanced data analytics.

Market participants should be aware that the SEC’s increasing reliance on AI may result in more sophisticated surveillance and oversight capabilities. They are also encouraged to consider how best to leverage AI as part of their own internal controls.

The expansion of the SEC’s use of AI technology underscores the importance of evaluating internal compliance programs, including exploring how AI and other emerging technologies can strengthen those programs.

While AI has already been used extensively to automate routine compliance procedures such as document review, many corporate compliance programs are now incorporating AI to proactively identify compliance risks.

For example, AI-enabled compliance tools can quickly analyze large volumes of transactional and other data to identify anomalies that might indicate possible fraud, money laundering, or other issues requiring closer examination. AI can also support more thorough, targeted diligence by flagging anomalies in diligence responses, assess whether employees have received proper training and are adhering to company compliance standards and policies, and surface patterns and concerns that may not be immediately apparent to human compliance specialists.
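As a simple illustration of the transaction-anomaly screening described above, the sketch below flags payment amounts whose z-score exceeds a threshold. The function name, sample data, and threshold are hypothetical; production compliance tools use far richer features and models than a single statistical test.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of amounts whose z-score exceeds the threshold.

    A deliberately minimal stand-in for the anomaly detection that
    AI-enabled compliance tools perform over transactional data.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # All values identical: nothing stands out.
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Routine payments with one outsized transfer at index 5.
payments = [120.0, 135.5, 118.0, 142.0, 129.5, 25_000.0, 131.0, 127.5]
suspicious = flag_anomalies(payments, threshold=2.0)  # → [5]
```

In practice, a flagged index like this would trigger escalation for human review rather than any automated action.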

Meeting DOJ expectations and practical considerations for compliance programs

The US Department of Justice’s (DOJ) guidance on the Evaluation of Corporate Compliance Programs (ECCP), last updated in September 2024, directs prosecutors to assess a company’s use of AI, including whether controls are in place to ensure AI’s trustworthiness and reliability, whether AI is used only for its intended purpose, and whether human decision-making is used to assess AI outputs.
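The human-review expectation in the ECCP guidance amounts to a human-in-the-loop gate: no AI-generated finding drives action until a person has assessed it. The sketch below shows one way such a gate might be structured; all names (`AIFinding`, `human_review`, `actionable`) and the sample alerts are hypothetical, not drawn from any DOJ-specified design.

```python
from dataclasses import dataclass

@dataclass
class AIFinding:
    """A single AI-generated compliance alert awaiting human review."""
    description: str
    model_score: float        # Model confidence, 0.0-1.0.
    reviewed: bool = False
    approved: bool = False
    reviewer_notes: str = ""

def human_review(finding: AIFinding, approve: bool, notes: str) -> AIFinding:
    """Record a human decision on an AI output before any action is taken."""
    finding.reviewed = True
    finding.approved = approve
    finding.reviewer_notes = notes
    return finding

def actionable(findings):
    """Only findings a human has reviewed AND approved may drive action."""
    return [f for f in findings if f.reviewed and f.approved]

alerts = [
    AIFinding("Unusual payment pattern to new vendor", 0.91),
    AIFinding("Possible duplicate invoice", 0.64),
]
human_review(alerts[0], approve=True, notes="Escalate to investigations")
# alerts[1] has not been reviewed, so actionable(alerts) excludes it.
```

Keeping the reviewed/approved state and reviewer notes on each finding also creates the audit trail a prosecutor applying the ECCP would expect to see.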

The guidance also emphasizes the need for compliance personnel to access company data to detect non-compliance and measure program effectiveness. DOJ expectations now make the use of AI and advanced analytics “table stakes” for demonstrating adequate resourcing and program effectiveness, especially in large organizations where data volumes are significant.

DOJ guidance also asks whether compliance and control personnel have sufficient access to relevant data sources to allow for timely and effective monitoring and testing of policies, controls, and transactions, and whether the company is appropriately leveraging data analytics tools to create efficiencies in compliance operations and measure the effectiveness of compliance program components.

In most large organizations, where commercial teams already use AI to meet their objectives, it will likely be difficult to demonstrate adequate resourcing of a compliance program without showing that the program can access and analyze data to identify risk. The volume and complexity of data in many organizations will likely require AI or other advanced analytics tools to make meaningful use of that data.

To meet these expectations, companies are encouraged to consider whether they have the appropriate tools, resources, and skills to identify data sources across the organization; work with the data to allow for useful analysis and insights; and paint a picture of risk that is usable for the program overall. This may require new investments in resources or tools, outsourcing, or a combination of both.

Importantly, working with partners who understand both the compliance space and the substantive legal risks at issue can help ensure that AI tools are tailored to the specific needs of the organization and that the output is actionable and legally protected, especially when work is conducted under privilege. As a final note, companies that are investing heavily in AI will be expected to have a proportional investment in controls.

Conclusion

Given the SEC’s increased adoption of AI – and the likelihood that other federal agencies will follow suit – companies risk falling behind regulators if they fail to implement these technologies, as appropriate for their industry or sector.

However, companies are encouraged to incorporate AI into their compliance programs responsibly, accounting for the limits and risks of the technology. The right approach involves not only adopting advanced tools, but also ensuring robust governance, human oversight, and alignment with evolving regulatory expectations.

We will continue to monitor the SEC’s AI initiatives and provide updates on any developments that may affect our clients’ compliance and regulatory strategies. Please contact us with any questions regarding the SEC’s AI Task Force or its potential implications for your business.

For more information on how we are guiding clients in the adoption of AI for proactive compliance, contact Michele Adeleye.
