How Congress and the White House Have Led the Way on AI Governance

As AI accelerates across the private sector, federal policymakers have built an AI governance framework to leverage the benefits of this new technology while controlling the risks. Federal regulators have not only implemented this framework for their own operations but also adapted it for use with regulated firms. For regulated firms, understanding the AI debate in Congress and the White House clarifies how regulators will address AI-driven tools, models, marketing claims, and risk-management practices within the context of their respective regulatory missions.

ACA examines how Congress, the White House, and the Office of Management and Budget (OMB) have established the architecture of U.S. AI governance, and what this means for compliance, operations, and supervisory expectations.

Congressional Actions That Laid the Foundation for Federal AI Governance

Congress has spent the last decade evaluating AI’s impact on competitiveness, national security, privacy, and civil rights. In 2020, following several years of hearings and research, Congress adopted the AI in Government Act, which directed federal agencies to promote responsible AI adoption and mitigate algorithmic bias, discrimination, and privacy harms. In 2023, Congress took a further step to balance AI risks and rewards with the Advancing American AI Act.

This law promoted AI implementation by strengthening AI transparency and cooperation across the federal government, mandating government-wide AI inventories, AI risk mitigation plans, and the development of four new AI use cases within a year. Agencies’ compliance is monitored through hearings and oversight from the Government Accountability Office and the OMB.

These mandates created the structural blueprint that federal agencies—and increasingly financial regulators—are now using to frame expectations for responsible AI.

1. The 2020 Trustworthy AI Executive Order

President Trump’s Executive Order 13960 established principles requiring federal AI to be lawful, transparent, and accountable, and mandated semi-annual AI use case inventories across agencies.

2. The 2023 Safe, Secure, and Trustworthy AI Executive Order

President Biden’s Executive Order 14110 expanded governance requirements, created a federal AI Council, encouraged global cooperation, and elevated frameworks such as the Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework as governance models.

3. The 2024 OMB Governance Framework

In its 2024 report to Congress, the Administration highlighted OMB’s M-24-10, which created a unified federal AI governance framework, mandated Chief AI Officers (CAIOs) across agencies, and required agencies to test, scale, and inventory AI use cases. This inventory reveals that AI use cases doubled between 2023 and 2024 across the federal government, and generative AI use cases grew ninefold.

4. The 2025 Competitive AI Shift

In January 2025, President Trump issued Executive Order 14179, scaling back the risk management requirements in the two previous executive orders to further promote U.S. AI competitiveness.

OMB followed with M-25-21 and M-25-22, which:

  • Reduce and simplify the criteria for “high-impact AI,” effectively narrowing the set of AI use cases subject to enhanced governance
  • Permit agencies to develop their own risk management frameworks in place of the NIST AI Risk Management Framework and the NIST Generative AI Profile
  • Prefer AI technologies developed and produced in the U.S.
  • Retain CAIO leadership and transparency requirements but expand the CAIO mission to include removing barriers to AI innovation, as well as mitigating risks of high-impact AI
  • Prohibit the filtering of AI output to adjust for dataset bias reflecting historical inequities
  • Prohibit restrictions on AI output filtering that hinder civil rights, diversity, or fairness objectives, and remove the requirement for AI‑based biometric systems to undergo NIST evaluation
  • Remove requirements for vendors to submit testing results that demonstrate AI systems are secure and robust
  • Remove environmental considerations from procurement decisions

Agencies across the federal government have complied with the OMB’s mandates by putting AI tools into practice, compiling AI inventories, and tasking internal working groups with refining the agencies’ AI strategies and governance. The SEC began publishing an inventory of AI use cases for its own operations in 2024 and appointed a CAIO and an Artificial Intelligence Task Force in 2025. The internal SEC AI task force is charged with accelerating AI integration while maintaining appropriate governance.

The key elements of the SEC’s AI effort to comply with OMB guidance include:

  • Identifying and removing barriers to AI innovation
  • Imposing strong governance to ensure AI is used responsibly and fosters public trust
  • Updating internal AI policies to align with federal standards
  • Creating AI use-case inventories across departments
  • Establishing termination protocols for non-compliant AI systems
  • Documenting risk management practices and assigning oversight responsibilities

Why AI Policy for Federal Agencies Matters to Financial Firms

Federal agencies have developed their understanding of AI risk management by working to comply with Congressional, White House, and OMB mandates. The agencies that regulate financial services firms are now applying that knowledge and experience to determine how AI intersects with firms’ obligations under existing law.

What Federal AI Policy Means for SEC-Regulated Firms

ACA expects the regulatory environment to mature rapidly, reflecting the lessons regulators have learned in placing controls around their own AI use. Based on recent SEC examinations and enforcement activity, firms should prepare for heightened expectations in several areas:

  • Accuracy and substantiation of AI claims: Marketing, client communications, and model-driven service descriptions must be fully supported and consistently monitored.
  • Model documentation and testing: Testing and monitoring data quality, bias, model explainability, drift, and hallucinations will become table stakes.
  • Vendor governance: Firms must validate third-party AI capabilities, training data provenance, security controls, and change management processes.
  • Recordkeeping and auditability: Documentation must be sufficient for examinations, particularly as regulators increase scrutiny of automated tools.

These are not theoretical expectations. SEC examiners are closely reviewing AI controls at registered firms, issuing deficiency findings, and pursuing enforcement actions against firms that fail to implement AI tools consistently with their regulatory obligations.

Compliance-Ready AI Solutions for Financial Services

ACA delivers an integrated suite of AI governance and compliance services designed specifically for financial services firms. Our support spans the full lifecycle of AI adoption, from strategy and risk assessment to model validation, vendor oversight, marketing compliance, and exam readiness.

Learn how emerging AI expectations apply to your firm.
Our team can help you evaluate your AI use cases, tighten controls, and demonstrate compliance with evolving SEC and Federal Trade Commission (FTC) standards.

Coming Up Next: The SEC’s Evolving AI Expectations

In subsequent installments of this series, we will review the different types of AI tools being deployed in financial services firms, the nuts and bolts of AI governance, and the expectations of the SEC and FINRA for wealth managers, private fund managers, and broker-dealers.