How Investment Advisers Should Approach AI Risk and Governance

Across industries, AI continues to accelerate operational transformation, offering opportunities to enhance decision-making, streamline processes, and drive innovation. Adoption is becoming mainstream: a June–July 2025 McKinsey survey of nearly 2,000 companies across 105 countries found that 88% were piloting AI in at least one business function, up from 72% in 2024 and 55% in 2023. Meanwhile, 79% reported using generative AI, a dramatic rise from 33% just two years earlier. Yet despite this momentum, most organizations remain in the experimentation phase, with almost two-thirds reporting that they have not begun scaling AI across the enterprise.

While many sectors have rapidly embraced AI, the wealth management and investment advisory industry has traditionally approached emerging technology with more caution. Robo-advisers have integrated AI to build and manage low-cost, customized portfolios aligned to client goals and risk profiles. However, most registered investment advisers (RIAs) have proceeded more deliberately as they assess how AI aligns with their fiduciary and regulatory obligations.

How Investment Advisers Are Using AI Today

Insights from the 2025 Investment Management Compliance Testing (IMCT) Survey of 577 investment advisers illustrate this measured approach:

  • 40% adopted AI for internal use only (e.g., investment research, portfolio testing and monitoring, IT support).
  • 25% were developing AI use cases but had not yet implemented tools.
  • 18% allowed employees to use AI informally.
  • 8% banned or restricted AI use.
  • 4% used AI externally for simple client interactions (e.g., chatbots).
  • 1% used AI to support complex client interactions (e.g., providing investment advice).

These findings highlighted an industry still in early exploration and risk-assessment mode.

However, newer data suggests momentum is accelerating.

ACA’s 2025 AI Benchmarking Report found:

  • 60% are now using AI internally (up from 37% in 2024).
  • 11% are using AI both internally and externally (up from 8% in 2024).
  • 23% continue to explore use cases (down from 38% in 2024).
  • 4% still maintain a ban (down from 15% in 2024).

This jump reflects a broad shift: compliance teams and business leaders are increasingly recognizing the potential benefits of AI, but also the urgency of implementing appropriate governance and controls.

Why AI Adoption Takes Longer for Investment Advisers

The slower pace of AI adoption among U.S. investment advisers is both expected and appropriate. Advisers operate under a fiduciary duty to act in their clients’ best interests, which requires careful evaluation before deploying any new technology.

Key considerations include:

1. Evaluating the Cost–Benefit for Clients

Advisers must determine whether AI tools truly improve client outcomes relative to their risks and costs. This includes assessing tool accuracy, potential performance impacts, and suitability for different client segments.

2. Integrating AI Into Supervision and Compliance Frameworks

AI tools, especially generative AI and predictive technologies, require thoughtful integration into:

  • supervisory procedures
  • compliance testing
  • oversight workflows
  • recordkeeping and disclosure frameworks

Firms must understand how the technology works, its limitations, and how to monitor it effectively.

3. Addressing Conflicts of Interest

AI tools may contain embedded biases or commercial incentives that could inadvertently favor the adviser over the client. Firms must analyze tools for:

  • data conflicts
  • algorithmic bias
  • vendor incentives
  • potential misalignment between the adviser’s objectives and client interests

4. Managing the Risks of Fast-Changing Technology

AI evolves rapidly, which poses operational, cybersecurity, and compliance challenges. Firms must ensure they have governance frameworks capable of keeping pace with an environment where tools, rules, and risks change frequently.

The Road Ahead

AI is no longer an emerging trend; it is becoming an operational reality across financial services. While investment advisers have traditionally adopted new technologies more cautiously than other industries, recent survey data suggests accelerated momentum. Advisers are increasingly exploring how AI can enhance research, portfolio management, internal operations, and even client interactions.

But with opportunity comes responsibility. Firms must balance innovation with compliance obligations, ensuring that any AI use aligns with fiduciary duties, regulatory expectations, and robust governance practices.

As the industry continues evolving, investment advisers who approach AI thoughtfully, grounded in strong risk management and compliance oversight, will be best positioned to unlock its potential.

How ACA Can Help

ACA helps investment advisers adopt AI responsibly by building the governance, oversight, and compliance structures regulators expect. Our team supports firms with:

  • AI risk assessments and governance design to evaluate use cases and embed appropriate controls.
  • Policies, procedures, and supervisory integration tailored to AI tools, data use, vendor oversight, and model governance.
  • Training and change management support to ensure employees understand how to use AI safely and effectively.
  • Benchmarking insights from ACA’s industry-leading AI surveys to help firms evaluate their position relative to peers.

With ACA’s guidance, firms can embrace innovation while maintaining their fiduciary duty, protecting clients, and reducing operational and regulatory risk.

Connect with an expert to learn more about AI in compliance and how ACA can help your firm build a responsible AI strategy.