SEC examiners are already asking RIAs three specific questions about AI governance. Forty percent of investment adviser firms have implemented AI tools internally. Forty-four percent of those firms have no formal validation of what their AI is actually doing. Here's the playbook for the next three weeks.

Two notes before we get into this. First: I'm not a lawyer, and nothing below is legal advice. If you're running compliance at an RIA, you should be talking to your own counsel about anything specific to your firm. Second: I am the founder of a growth agency that builds AI systems for financial services clients — RIAs, wealth managers, 1031 exchange firms, financial advisors. That's the perspective I'm writing from. I sit in a lot of meetings where the marketing team, the compliance officer, and the founder are trying to figure out what they're allowed to do with AI without ending up on the SEC's quarterly enforcement update.

So this is a practitioner's view of what's actually happening in the field as of May 2026. Some of it you may already know. Some of it you may not be hearing from your compliance vendor, because the regulatory tempo has changed in the last six months and a lot of firms are catching up.

What changes on June 3

The 2024 Regulation S-P amendments — the SEC's update to its rules on customer information protection — created new obligations around third-party service provider oversight. Larger firms had to comply by December 2025. Smaller advisers have a compliance deadline of June 3, 2026.

The amendments matter for the AI conversation because almost every AI tool an RIA touches is, technically, a third-party service provider. That includes:

  • AI meeting note-takers (Otter, Fathom, Fireflies, embedded note-takers inside CRMs)
  • AI email assistants and drafting tools
  • AI proposal and report generators
  • AI inside your CRM or financial planning software
  • Public LLMs your team uses for research, drafting, or analysis
  • AI-enhanced marketing tools, including the ones running outbound on your firm's behalf

The compliance question isn't whether you can use these tools. You can. The question is whether you have documented controls, documented vendor due diligence, and documented evidence of human supervisory review for anything client-facing.

If your answer to any of those three is "we'll get to it" — that's the work that needs to happen in the next three weeks.

The three things examiners are asking RIAs about, right now

SEC examiners are already running this playbook in 2026. The asks in recent exams have been consistent enough that the pattern is now public knowledge. Three things, in order:

  1. A documented inventory of AI tools and use cases at the firm. Not a vague list — a real inventory. What tools are in use, by which teams, for which purposes, with which data inputs and outputs, under which contracts. Including the AI features embedded in tools you didn't think of as AI tools (the AI note-taker built into your video conferencing platform; the AI summary feature inside your CRM; the AI-assisted drafting in your email client).
  2. Vendor due diligence documentation for any AI-enabled service provider. Under the new Regulation S-P framework, you need to be able to show that you understood what each vendor's AI was doing with your data, what their security posture looks like, what their incident response process is, and how you would know if something went wrong. A vendor that can't answer those questions is itself a finding.
  3. Evidence of human-in-the-loop review for AI-assisted recommendations and client communications. "Human in the loop" is no longer a buzzword. It's a documented review process. Examiners want to see records — not a policy statement that says you review, but evidence that the review is actually happening on the AI outputs that matter.
  • 40% of investment adviser firms have implemented AI tools internally
  • 44% of those firms have no formal testing or validation of AI outputs
  • 47% increase in regulatory scrutiny of RIAs since 2023
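To make the first examiner ask concrete, here's a minimal sketch of what one inventory record might capture, plus a gap check against the three asks. The field names and the check are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of the firm's AI inventory. Field names are illustrative."""
    tool: str                # e.g. the note-taker embedded in the video platform
    teams: list[str]         # which teams use it
    purpose: str             # what it's used for
    data_inputs: list[str]   # what goes in (client PII? portfolio data?)
    data_outputs: list[str]  # what comes out, and where it's stored
    contract_on_file: bool   # signed agreement covering data handling?
    ddq_completed: bool      # vendor answered a due diligence questionnaire?
    human_review: str        # who reviews client-facing outputs, and how ("" = nobody)

def exam_gaps(inventory: list[AIToolRecord]) -> list[str]:
    """Tools that would be findings under the three examiner asks:
    no contract, no vendor due diligence, or no documented human review."""
    return [r.tool for r in inventory
            if not (r.contract_on_file and r.ddq_completed and r.human_review)]
```

Even a spreadsheet with these columns, dated and signed, is a stronger artifact than the "we'll get to it" most firms are sitting on today.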

The marketing rule is the part nobody's worried about — and they should be

This is where I have something to say that most compliance content isn't saying clearly enough.

The SEC Marketing Rule prohibits RIAs from releasing untrue or unsubstantiated advertisements. That's been the rule since it was finalized in 2020. The enforcement actions that have followed have hit firms for AI-related claims, performance claims with insufficient substantiation, and testimonials that didn't meet the new disclosure requirements.

What's new in 2026 is that AI can manufacture untrue or unsubstantiated advertisements at industrial scale. An autonomous AI SDR running outbound on an RIA's behalf can, in the space of an afternoon, send several thousand emails containing claims about the firm's performance, its track record, its services, or its differentiation that haven't been reviewed by anyone qualified to substantiate them.

This is a much bigger compliance exposure than most firms have thought through. And it doesn't matter whether the AI is your tool or your agency's tool — if the email goes out under your firm's name, the marketing rule violation is your firm's violation.


Shadow AI is the largest exposure most firms haven't measured

"Shadow AI" is the term of art for AI tools your team is using that the firm doesn't know about. A relationship manager pasting client portfolio details into ChatGPT to summarize a meeting. An advisor using Claude to draft a planning narrative. A marketing coordinator running campaign copy through Gemini before sending. Each of those interactions has a data-handling question, a vendor due diligence question, and — depending on the model and account — a question about whether client information is being used to train someone else's AI.

The honest answer at most firms in May 2026 is that nobody knows the full footprint. The team has been using AI tools at the individual-employee level for two years. Some of those tools are governed. Most aren't. The June 3 deadline is the forcing function that turns this into an examinable issue.

What works: a 30-minute "AI in your workflow" interview with every team member, run as a non-punitive inventory exercise. Not a witch hunt. The goal is visibility, not punishment. The pattern that emerges from those interviews is almost always the same — a few tools the leadership team didn't know were in heavy use, a few use cases that need to be brought under governance, and a small number that should be turned off entirely.

The fiduciary prompt framework

One of the more useful behavioral frameworks circulating in RIA compliance circles right now is what some practitioners call the "fiduciary prompt." The idea: every time someone at the firm uses a generative AI tool for anything client-facing or compliance-touching, they preface their request with a short statement of context that sets the AI's behavioral guardrails.

It looks something like this:

Before (no framing):

"Write a follow-up email to a prospect about our investment management services and our recent performance."

The AI generates something that may include unsubstantiated performance claims, comparative language, or guarantees.

After (fiduciary prompt):

"I am a fiduciary and an SEC-registered RIA. Any communication I send must comply with the SEC Marketing Rule. Do not include performance claims, comparative statements, or testimonials. Use only the substantiated facts I provide. Flag anything that may require disclosure."

This isn't a substitute for human review. It's a behavioral nudge that catches the easiest category of mistakes — the ones where a generative tool, asked nicely, will fabricate exactly the language that gets firms in trouble.
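A firm that wants to operationalize this nudge could wrap it in its own tooling. The sketch below prepends the fiduciary preamble to every request and adds a crude pre-screen for risky language; the risk-term list and function names are illustrative assumptions, and nothing here replaces the human review step:

```python
FIDUCIARY_PREAMBLE = (
    "I am a fiduciary and an SEC-registered RIA. Any communication I send "
    "must comply with the SEC Marketing Rule. Do not include performance "
    "claims, comparative statements, or testimonials. Use only the "
    "substantiated facts I provide. Flag anything that may require disclosure."
)

# Illustrative list only; a real firm's compliance team would own and expand this.
RISK_TERMS = ("guarantee", "outperform", "best-in-class", "top-performing", "risk-free")

def fiduciary_prompt(request: str) -> str:
    """Prefix every generative request with the behavioral guardrails."""
    return f"{FIDUCIARY_PREAMBLE}\n\n{request}"

def flag_for_review(draft: str) -> list[str]:
    """Crude pre-screen: surface risky terms so a human reviews them first.
    This supplements human review; it never substitutes for it."""
    lower = draft.lower()
    return [t for t in RISK_TERMS if t in lower]
```

The point of the wrapper is consistency: nobody has to remember to type the preamble, and the flagged-terms list gives the reviewer a starting point rather than a blank page.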

What we actually do for RIA clients

I'll be direct about how we run this. Our AI SDR system, our LLM authority program, and our workflow automation were built for compliance-aware deployment in financial services from the start. That's not a marketing line — it's because four of our last seven clients were RIAs or wealth firms and we had to.

Specifically, for RIA clients:

Every outbound communication is reviewable before send. No autonomous AI is making the final call on what goes out under the firm's name. The AI drafts, sequences, and prioritizes. A qualified human reviews the substantiation on any claim before send. The full audit trail — what was drafted, what was edited, what was approved, by whom — is retained for the firm's compliance records.
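As a minimal sketch of what one retained audit record could look like (the schema and field names here are illustrative assumptions, not a description of any particular system):

```python
import datetime
import json

def audit_entry(drafted: str, sent: str, approved_by: str) -> str:
    """One retained record per outbound send: what the AI drafted, what
    actually went out after human edits, who approved it, and when.
    Serialized to JSON so it can sit in the firm's compliance archive."""
    return json.dumps({
        "drafted": drafted,
        "sent": sent,
        "approved_by": approved_by,
        "approved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```

The specific storage format matters less than the property examiners are looking for: a record that shows the review actually happened, per message, with a name and a timestamp attached.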

Vendor due diligence is documented from day one. Every tool in our stack has a data processing agreement, a documented security posture, and a written description of how it handles client-related information. We don't make firms reconstruct this six weeks before an exam.

The LLM authority program is structured around substantiated content. When we build authority content designed to be cited by ChatGPT and Perplexity, every claim is sourced. Every statistic is linked. Every assertion can be traced back to a primary reference. This is what makes the content citeable — and it's also what makes the content compliant.

The compliance officer gets the same view the marketing team does. Not after the fact — in real time. The system we run has a compliance dashboard. Compliance can see what's queued, what's been sent, what's pending review, and what's been flagged.


What to do in the next three weeks

If you're an RIA principal, compliance officer, or chief marketing officer reading this in mid-May 2026, here's a defensible action list for the next twenty-one days:

  1. Run the inventory. Every AI tool, every team member, every use case. Document it in a single artifact. Date it. Sign it. That document is now your compliance baseline.
  2. Pull every vendor contract for any tool with AI features. Read the data handling and AI provisions. Note where you don't have written assurance. Send a due diligence questionnaire to any vendor whose contract doesn't already cover it.
  3. Build the human-in-the-loop documentation. Pick the three most exam-likely workflows — client communications, marketing outbound, AI-assisted recommendations — and write down what the human review actually consists of, who does it, how it's recorded, and what happens to the record.
  4. Train the team on the fiduciary prompt. A 30-minute lunch session covers it. Make it part of every new hire's onboarding from this point forward.
  5. If you're running outbound through an agency, audit them. Ask the three examiner questions. If they can't answer all three with documentation, you have a vendor problem and you have it now.

If you're an RIA running outbound and you're not sure whether your stack is examiner-ready — let's audit it together.

We run compliance-aware AI growth systems for RIAs, wealth firms, and 1031 exchange operators. Every send reviewable. Every claim substantiated. Every interaction logged. Same 90-day guarantee as our standard AI SDR program.

Book a compliance-aware audit →

The bottom line

The firms that built their AI governance program in 2024 and 2025 are walking into June 3 with documentation in hand and no anxiety about it. The firms that didn't are spending the back half of May doing the work that should have happened a year ago.

The deadline isn't the point. The deadline is the point at which the SEC starts holding you to a standard the leading firms in your category have already been operating to for twelve months. AI is going to be the dominant productivity layer in wealth management for the next decade. Governing it well is now table stakes for running an RIA, the same way cybersecurity became table stakes between 2015 and 2020.

Three weeks isn't long. But it's long enough.