As private fund advisers integrate AI into diligence, portfolio management, and investment decision‑making, they take on new regulatory, data governance, and fiduciary responsibilities. AI tools often rely on large volumes of proprietary and sensitive information, increasing the risk of unauthorized access or disclosure when that information is processed through external or even enterprise‑grade platforms.
Human oversight remains critical: AI models can introduce errors, overlook qualitative factors, or behave unpredictably in volatile markets. Together, these risks underscore the need for disciplined governance and clear controls around AI use.
Regulatory Focus Is Rising
The SEC continues to focus on advisers’ use of AI in examinations and enforcement. Examiners are reviewing how firms describe AI in disclosures and marketing, how AI‑enabled tools are supervised, and whether controls address functions such as fraud detection, back‑office processes, anti‑money laundering, and trading. Compliance programs should document each AI use case and map associated controls to the applicable requirements under the Advisers Act.
Although the SEC withdrew its proposed rule on predictive data analytics and AI, advisers should not assume AI use is unregulated. Section 206 of the Investment Advisers Act of 1940 imposes fiduciary duties of loyalty and care, obligations that apply equally to AI used in portfolio management and investment processes. Other Advisers Act provisions, including the compliance and marketing rules, also apply. Federal privacy and cybersecurity regulations, such as Regulation S‑P, further govern advisers’ handling of customer information, with amended requirements now in effect for firms with $1.5 billion or more in assets under management.
Regulatory scrutiny is expected to intensify as both the industry and the SEC expand their use of AI tools. The SEC’s 2026 Examination Priorities confirm that examiners will continue to assess the accuracy of firms’ AI‑related disclosures and marketing, as well as the adequacy of policies and procedures governing AI use. The Division of Examinations is also evaluating how firms deploy AI to automate internal regulatory processes and improve operational efficiency.
AI in Investment Decision-Making
Private equity, private credit, and hedge fund managers increasingly use AI to assess targets, model improvements, and identify opportunities. These applications introduce new risks:
- Human oversight can diminish as investment professionals rely on AI.
- Poor or biased input data can degrade outcomes.
- Models may overlook qualitative factors such as management credibility, cultural context, and market sentiment.
- Sensitive company information can be exposed when routed through external or third-party platforms.
- Proprietary strategies may leak through vendor interactions without strong confidentiality controls.
- Material nonpublic information (MNPI) can be ingested or inferred by AI systems if datasets are not segmented and access-controlled.
Regulation S-P and Data Governance
Recent amendments to Regulation S-P set higher expectations for safeguarding customer information. Ensure your AI platforms, whether developed in-house or procured from a third party, are covered by the following controls:
- Incident response: Include AI systems in the written incident response program, with real-time monitoring, prompt detection, rapid containment, and preparedness to notify affected individuals within required timeframes. Test plans and train staff on AI-specific policies and risks.
- Vendor oversight: Identify service providers that use AI, assess their data safeguards and validation processes, and confirm contract terms cover security and breach notifications. Update due diligence procedures accordingly.
- Data mapping: Map where customer information resides across AI tools and workflows, and apply tailored protections based on data sensitivity and use (a minimal inventory sketch follows this list).
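For illustration only, the Python sketch below shows one way a firm might represent such a data map. The record fields, sensitivity tiers, and control names are hypothetical assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIDataMapEntry:
    """One row in a hypothetical data map for an AI tool or workflow."""
    system: str                  # e.g., "deal-screening model"
    vendor: str | None           # None for tools developed in-house
    data_categories: list[str]   # e.g., ["customer PII", "portfolio holdings"]
    sensitivity: str             # "public" | "internal" | "customer NPI" | "MNPI"
    controls: set[str] = field(default_factory=set)

def missing_control(inventory: list[AIDataMapEntry], control: str) -> list[str]:
    """Flag systems holding sensitive data that lack a required safeguard."""
    return [
        entry.system
        for entry in inventory
        if entry.sensitivity in {"customer NPI", "MNPI"} and control not in entry.controls
    ]

inventory = [
    AIDataMapEntry("deal-screening model", "VendorX",
                   ["target financials"], "MNPI", {"encryption", "access logging"}),
    AIDataMapEntry("investor-reporting assistant", None,
                   ["customer PII"], "customer NPI", {"encryption"}),
]
print(missing_control(inventory, "access logging"))  # ['investor-reporting assistant']
```

Tying each system to its sensitivity tier and controls in one inventory makes it easier to evidence tailored protections when examiners ask how safeguards map to data use.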
MNPI Controls
Integrating AI into investment management heightens the risk of inadvertent exposure or misuse of MNPI. Prohibit entering MNPI into external or public AI tools, enforce information barriers and access controls, and monitor employees’ use of AI systems through surveillance, in addition to existing communications and trading surveillance for indicators of MNPI misuse.
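As one illustration of such a control, a firm might screen prompts against a compliance-maintained restricted list before they reach any external AI tool. The sketch below is deliberately simple and assumption-laden; the term list, matching logic, and escalation path are hypothetical:

```python
# Hypothetical restricted list maintained by compliance: issuer names and
# internal deal codes currently behind an information barrier.
RESTRICTED_TERMS = {"acme corp", "project falcon"}

def screen_prompt(prompt: str) -> str:
    """Refuse to forward a prompt to an external AI tool if it references a
    restricted issuer or deal code; escalate instead of transmitting."""
    hits = [term for term in RESTRICTED_TERMS if term in prompt.lower()]
    if hits:
        raise PermissionError(
            f"Prompt blocked (restricted terms: {hits}); route to compliance review."
        )
    return prompt  # safe to forward to the approved AI platform

try:
    screen_prompt("Summarize the diligence memo for Project Falcon.")
except PermissionError as err:
    print(err)
```

Keyword screening is only a coarse first line of defense; dataset segmentation and access controls remain the primary safeguards, since MNPI can also be inferred from combinations of individually innocuous inputs.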
Fiduciary and Marketing Rule Considerations
AI use must not conflict with the adviser’s duties of care and loyalty. Avoid misrepresenting AI capabilities, or the firm’s reliance on AI, in ways that could violate the antifraud provisions of the Advisers Act. Work with legal counsel to align contracts, disclosures, and investor communications, and confirm compliance with privacy laws (e.g., the General Data Protection Regulation and the California Consumer Privacy Act) and Regulation S-P requirements, especially for cross-border data transfers.
Operational Oversight and Model Governance
Effective oversight is essential to ensure that AI tools operate reliably and in line with regulatory expectations. Key governance practices include:
- Establish a cross-functional committee (legal, compliance, technology, and business owners) to oversee AI implementation.
- Validate and document models, monitor performance and drift, and define thresholds for human review (see the drift-monitoring sketch after this list).
- Implement human-in-the-loop workflows for investment decisions and escalations.
- Maintain inventories of AI use cases, data sources, and vendors with periodic reviews.
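To make the drift-monitoring bullet concrete, here is a minimal sketch using the population stability index (PSI), a common drift statistic. The 0.2 escalation threshold and the synthetic data are illustrative assumptions that a governance committee would set and source for itself:

```python
import numpy as np

REVIEW_THRESHOLD = 0.2  # hypothetical escalation threshold set by the AI committee

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between the feature distribution seen at
    model validation and the distribution of live inputs; larger = more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # stand-in for validation-time inputs
live = rng.normal(0.4, 1.2, 5_000)      # stand-in for shifted production inputs

score = psi(baseline, live)
if score > REVIEW_THRESHOLD:
    print(f"PSI {score:.2f} exceeds {REVIEW_THRESHOLD}: escalate to human review.")
```

Logging each check alongside the AI use-case inventory helps evidence that human-review thresholds are defined and actually enforced, not just documented.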
Building Trust and Compliance Through Effective AI Oversight
AI can accelerate analysis and enhance decision quality when governed thoughtfully. By validating models, monitoring performance, and keeping human oversight central, firms can meet regulatory expectations and protect investors while realizing AI’s benefits.
How ACA Helps Firms Strengthen AI Governance and Compliance
ACA supports private fund advisers in navigating the rapidly changing AI regulatory landscape with practical, scalable solutions. Our team helps firms:
- Assess AI-related risks within investment, trading, and due‑diligence processes.
- Strengthen compliance programs to meet SEC expectations, including AI oversight and model governance.
- Implement robust Reg S‑P controls, including incident response, data mapping, and vendor oversight.
- Enhance safeguards to prevent MNPI exposure and protect proprietary information.
- Align disclosures, marketing materials, and internal policies with fiduciary obligations.
By partnering with ACA, firms can adopt AI confidently while maintaining strong regulatory, operational, and investor‑protection standards.
Download the 2025 AI Benchmarking Report for more insights from ACA. Contact an expert to refine your AI compliance strategy.
References
- SEC Risk Alert: Observations from Examinations of Investment Advisers and Broker-Dealers (2023)
- Investment Advisers Act of 1940, Section 206 – Antifraud Provisions
- General Data Protection Regulation (GDPR), Regulation (EU) 2016/679
- California Consumer Privacy Act (CCPA), Cal. Civ. Code § 1798.100 et seq.
- SEC Final Rule: Amendments to Regulation S-P (May 2024)