SEC 2026 AI Exam Priorities: What RIAs Need to Know About Reg S-P Compliance
February 25, 2026 · Dev Sec · 8 min read
The SEC's FY2026 exam priorities make AI a live examination topic. Specifically, whether your disclosures about AI are accurate and whether you have controls to supervise how AI is actually used. The priorities state that exam staff will assess whether firms have implemented "adequate policies and procedures to monitor and/or supervise their use of AI technologies."
That scrutiny lands at the same time many smaller advisers are racing toward the June 3, 2026 compliance date for the SEC's amended Regulation S-P incident response and customer notification requirements. (Larger advisers, those with $1.5B+ AUM, already faced a December 3, 2025 deadline.)
Meanwhile, adoption is already widespread: Schwab's late-2025 survey of 533 RIAs found 63% using AI tools in some capacity, but only about one in ten fully integrating AI into business strategy. The governance gap is real. The Investment Adviser Association's 2025 IMCT survey found that among firms that have adopted AI, 44% have no formal testing or validation of their AI outputs. And Schwab's 2025 Independent Advisor Outlook Study found just over a third (35%) of AI-using advisers have formal policies in place.
This Isn't New, But 2026 Is Different
To be clear: AI has been on the SEC's radar for multiple exam cycles. FY2024 priorities included AI inside "Crypto Assets and Emerging Financial Technology." FY2025 went further, stating that exams may look "in-depth" at compliance policies and disclosures when advisers integrate AI into advisory operations. The Division also built specialized capabilities and teams to address emerging risks including AI.
What makes 2026 different isn't that AI is new to examiners. It's that two forces stack tightly for smaller advisers. AI scrutiny plus a near-term Reg S-P compliance deadline means "evidence readiness" is urgent in a way it wasn't before. You need to show both that you govern AI tools and that your incident response program covers AI-related data exposure. And examiners have already signaled they'll be checking.
The SEC has also demonstrated it will act. In 2024, the Commission charged Delphia (USA) Inc. and Global Predictions Inc. with false and misleading statements about their purported use of AI, what the industry now calls "AI-washing." Both cases involved Marketing Rule violations. If your firm claims to use AI, your disclosures need to match what actually happens in practice.
What Reg S-P Actually Requires
The 2024 Reg S-P amendments respond to what the SEC describes as "expanded use of technology and corresponding risks" since the rule was originally adopted in 2000. The core requirements that matter for AI governance:
Incident response program: Written policies and procedures reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information. This includes procedures to assess and contain incidents, plus customer notification obligations (with limited exceptions).
Service provider oversight: You must ensure service providers protect against unauthorized access to or use of customer information, and that they notify your firm "as soon as possible," but no later than 72 hours, after becoming aware of a breach affecting a customer information system they maintain.
Expanded disposal and recordkeeping: The disposal requirements now cover customer information more broadly, and new recordkeeping obligations require you to document compliance with the amended requirements.
The rule defines "customer information" broadly: any record containing nonpublic personal information about a customer in your possession or handled on your behalf. That definition is a clean bridge to AI tooling. If an AI vendor or AI-enabled service is handling customer information on behalf of your firm, it lands squarely inside the compliance perimeter.
Most RIAs have information security policies written before AI tools were part of their workflow. Those policies likely don't address how an AI copilot accesses client portfolios, who approved that access, or what your response procedure is if an AI system exposes client data it shouldn't have touched.
What Examiners Will Ask, Based on How Exams Actually Work
SEC risk alerts describe how adviser exams are scoped, including sample document request lists. FY2026 priorities state that exam staff will "engage firms" about progress in preparing incident response programs, and after compliance dates, will examine whether firms have "developed, implemented, and maintained policies and procedures consistent with the new provisions." The SEC even ran outreach events for Reg S-P that included a mock examination segment.
Based on the priorities, enforcement trends, and standard exam procedures, expect these questions:
Inventory: "Provide a list of all AI tools used in your advisory operations, including third-party services."
Safeguards evidence: "Show us your electronic access controls, specifically which AI systems can reach customer information and under what conditions."
Approval process: "Who approved the deployment of each AI tool? What were the evaluation criteria?"
Disclosure accuracy: "Are your representations about AI capabilities and use fair and accurate? Show us how the AI actually works versus what you told clients."
Testing: "How do you test AI outputs for accuracy? Show us the results."
Vendor oversight: "For third-party AI services that handle customer information, how do you ensure they protect against unauthorized access? Do you have the 72-hour breach notification requirement in place?"
Incident response: "Does your incident response program cover AI-related data exposure scenarios?"
Every one of these requires documented evidence, not a general policy statement. The firms that handle these examinations well are the ones that can produce timestamped, verifiable answers on demand.
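To illustrate what "timestamped, verifiable" can mean in practice, here is a minimal sketch (the field names and hashing scheme are our assumptions, not anything the rule prescribes): each exam answer is captured with a UTC timestamp and a content hash, so after-the-fact edits to the record are detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(question: str, answer: str, artifacts: list[str]) -> dict:
    """Package an exam answer with a UTC timestamp and a content hash."""
    body = {
        "question": question,
        "answer": answer,
        "artifacts": artifacts,  # e.g. file paths or IDs of logs and approvals
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hashing the canonical JSON makes later edits to the record detectable.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "sha256": digest}

record = evidence_record(
    question="Which AI systems can reach customer information?",
    answer="Two of five inventoried tools; see attached access map.",
    artifacts=["access-map-2026-05.csv", "cco-approval-2026-01-14.pdf"],
)
```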
A Different Way to Think About This
Here's an observation from building governance systems: most compliance questions aren't really policy questions. They're data path queries.
When an examiner asks "What AI tools can access client SSNs?", the answer isn't in a policy document. It requires tracing actual data paths: which AI tools connect to which data stores, what permissions they have, which customer information fields they can read, and whether those access patterns align with your stated policies.
This is fundamentally a graph traversal problem. Your AI tools, data stores, customer records, and regulatory controls form a knowledge graph. Compliance questions are queries against that graph. The answer is deterministic. It's either correct or it isn't. And it should be computable, not something you reconstruct from memory during an exam.
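To make that concrete, here is a minimal sketch in Python with an entirely hypothetical graph (the node names, edge semantics, and tool list are all assumptions for illustration): the examiner's question becomes a breadth-first traversal with a deterministic answer.

```python
from collections import deque

# Hypothetical access graph: an edge means "can reach / can read".
# Nodes are AI tools, data stores, and customer-information fields.
ACCESS_GRAPH = {
    "copilot_notes":    ["crm_store"],
    "portfolio_ai":     ["custodian_feed", "crm_store"],
    "marketing_drafts": [],                      # no data-store access
    "crm_store":        ["client_name", "client_ssn"],
    "custodian_feed":   ["account_positions"],
}

AI_TOOLS = ["copilot_notes", "portfolio_ai", "marketing_drafts"]


def reachable(graph: dict, start: str) -> set:
    """Breadth-first traversal: every node the start node can reach."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen


# The examiner's question as a query: which AI tools can reach client SSNs?
exposed = [t for t in AI_TOOLS if "client_ssn" in reachable(ACCESS_GRAPH, t)]
print(exposed)  # ['copilot_notes', 'portfolio_ai']
```

The same structure answers the inverse query (what can a given tool read?), and because the graph is the system of record for access grants, an empty result actually means "none," which is the completeness property a pile of spreadsheets can't give you.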
Firms that model their AI governance this way can answer examiner questions quickly and prove their answers are complete. Firms that rely on spreadsheets and PDFs will spend weeks reconstructing answers they're never confident are accurate.
What to Do Before June 3
If you're a smaller adviser facing the June 2026 deadline, here's a practical sequence. If you're a larger adviser who already hit the December 2025 date, this still applies to the AI-specific governance layer that examiners will test separately.
Week 1-2: AI tool inventory. Document every AI tool in use: commercial products, internal models, third-party APIs, copilots, and assistants. Include shadow AI that employees may be using without formal approval. Schwab's data showing 57% of firms allow employees to "explore" AI solutions suggests unauthorized tools are more common than most compliance teams realize.
Week 3-4: Data access mapping. For each AI tool, map what customer information it can access, under what conditions, and who authorized that access (a minimal record structure for this is sketched after this list). Focus on the Reg S-P definition: any record containing nonpublic personal information in your possession or handled on your behalf.
Week 5-6: Safeguards and control documentation. Map each AI tool's data access to your Safeguards Rule obligations and Rule 206(4)-7 compliance requirements. Document the electronic access controls and produce evidence those controls work, not just that they exist on paper.
Week 7-8: Vendor oversight. For every third-party AI service handling customer information, confirm you have contractual provisions for breach notification within 72 hours. Document your oversight procedures.
Week 9-10: Incident response update. Update your incident response program to cover AI-specific scenarios: an AI tool accessing data beyond its authorized scope, an AI vendor breach, an AI system generating outputs based on data it shouldn't have seen. Run a tabletop exercise.
Week 11-12: Disclosure review and training. Review all client-facing disclosures about AI use for accuracy (remember Delphia and Global Predictions). Finalize updated written supervisory procedures. Train relevant staff.
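As promised above, here is a minimal sketch of what weeks 1 through 4 can produce, with illustrative field names that are our assumptions rather than rule text: one structured record per AI tool, capturing the facts the examiner questions above ask for.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool (weeks 1-4 of the sequence above)."""
    name: str                        # e.g. "copilot_notes" (hypothetical)
    vendor: str                      # third party, or "internal"
    approved_by: str                 # who signed off on deployment
    approved_on: str                 # ISO date of approval; "" if shadow AI
    data_stores: list[str] = field(default_factory=list)  # stores it can reach
    npi_fields: list[str] = field(default_factory=list)   # nonpublic personal info it can read
    breach_notice_72h: bool = False  # 72-hour notification clause in the contract?
    shadow: bool = False             # found in use without formal approval

inventory = [
    AIToolRecord(
        name="copilot_notes", vendor="ExampleVendor",
        approved_by="CCO", approved_on="2026-01-14",
        data_stores=["crm_store"], npi_fields=["client_name", "client_ssn"],
        breach_notice_72h=False,     # a gap: NPI access with no 72-hour clause
    ),
    AIToolRecord(name="meeting_notes_bot", vendor="unknown",
                 approved_by="", approved_on="", shadow=True),
]

# The gap an examiner finds first: customer information access
# without the Reg S-P 72-hour vendor notification provision.
gaps = [t.name for t in inventory if t.npi_fields and not t.breach_notice_72h]
print(gaps)  # ['copilot_notes']
```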
The Uncomfortable Math
EY's survey found that 86% of wealth and asset management firms adopting GenAI were surprised by regulatory and compliance complexity. That tracks with what we see in practice: firms underestimate how much work AI governance actually requires because they think of it as a policy exercise. It isn't. It's an evidence exercise.
The SEC isn't asking "do you have an AI policy?" They're asking "show me that your AI governance actually works, that the controls you describe are real, that the access patterns you claim are accurate, and that you can prove it."
That's the real shift. Not that AI appeared on the exam priority list. It's been there since at least FY2024. The shift is that examiners now have enough enforcement precedent, enough specialized capability, and enough regulatory framework to ask specific, evidence-based questions. Your answers need to be equally specific.
Sources: SEC Division of Examinations FY2026 Examination Priorities; SEC FY2025 and FY2024 Examination Priorities; SEC Final Rule and Small-Entity Compliance Guide for 2024 Regulation S-P Amendments; Schwab Advisor Services 2026 RIA & AI Study (n=533, October 2025); Investment Adviser Association / ACA Group 2025 IMCT Survey; EY GenAI in Wealth & Asset Management Survey; Schwab 2025 Independent Advisor Outlook Study; SEC Enforcement Actions re: Delphia (USA) Inc. and Global Predictions Inc.
Assess Your AI Governance Readiness - scorecard.efeeo.com
Efeeo's AI Governance Readiness Assessment maps your current AI tools and data flows against SEC examination priorities and Reg S-P requirements, showing exactly where the gaps are before examiners find them.
See How Efeeo Can Help
Discover our AI governance solutions designed to help you navigate compliance and build trust.