Regulated Intelligence Brief

Dorsey's AI Management Vision: What It Means for Compliance Teams

Jack Dorsey just laid out his vision for AI replacing middle management after Block cut 4,000 jobs. This isn't regulatory news, but if you're a CCO watching the industry shift toward AI-driven operations, you need to be thinking about what this means for supervisory structures and compliance oversight.

Regulated Intelligence Brief · Digital Assets · GiGCXOs Editorial

Jack Dorsey made headlines this week with a bold claim: AI should replace the corporate hierarchy that has traditionally routed information through large organizations. This comes on the heels of Block cutting 4,000 jobs, and it's getting attention across fintech and financial services.

This isn't a rule change or enforcement action. But if you're running compliance at a broker-dealer, RIA, or fintech firm, Dorsey's comments point to a conversation you're going to have sooner than you think.

What Dorsey Actually Said

According to CoinDesk's reporting, Dorsey argued that "corporate hierarchy has always existed to solve one problem: routing information through organizations too large for any single person to oversee." His thesis is that AI can now handle that routing function, making traditional management layers obsolete.

Whether you buy that vision or not, the underlying reality is undeniable: AI tools are already changing how information flows through organizations, and that includes compliance-relevant information.

The Compliance Problem This Creates

Here's where I get concerned. Every supervisory system we've built — at broker-dealers under FINRA Rule 3110, at investment advisers under the Advisers Act — assumes human supervisors at defined points in the organizational structure. Your supervisory procedures name specific individuals. Your org charts define reporting lines. Your annual compliance reviews evaluate whether those humans are actually doing their jobs.

If AI starts replacing the information-routing function that middle managers perform, your supervisory structure doesn't automatically adapt. You still need:

  • Designated supervisory principals who are actually supervising
  • Clear escalation paths for red flags and exceptions
  • Documentation that a human reviewed AI-surfaced issues
  • Evidence that your supervisory system is reasonably designed

Regulators aren't going to accept "the AI handled it" as a supervision defense. They're going to ask who was responsible for overseeing what the AI did.

The Practical Reality

I've seen firms get excited about AI efficiency gains without thinking through the compliance architecture. They deploy tools that aggregate data, flag anomalies, even draft responses — and then they discover in an exam that nobody can explain who was supervising the process.

If your firm is experimenting with AI in operations, your written supervisory procedures need to address it explicitly. Who reviews the AI's output? How are exceptions escalated? What's the audit trail? These aren't hypothetical questions anymore.
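To make the audit-trail question concrete, here is a minimal sketch of the kind of record a firm might keep for each AI-surfaced item. This is illustrative only, not from any regulator's guidance; the tool name, field names, and reviewer title are hypothetical, and a real implementation would live in your firm's case-management or books-and-records system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIReviewRecord:
    """One audit-trail entry: a human principal's review of an AI-surfaced item."""
    tool: str        # which AI tool produced the item (hypothetical name below)
    item_id: str     # identifier of the flagged item or exception
    reviewer: str    # the designated supervisory principal -- a human
    disposition: str # e.g. "cleared" or "escalated"
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def requires_escalation(self) -> bool:
        # Escalated items should route to the named supervisor per the WSPs
        return self.disposition == "escalated"

# Example: an anomaly flagged by a (hypothetical) surveillance tool,
# reviewed and escalated by a named principal.
record = AIReviewRecord(
    tool="trade-surveillance-ai",
    item_id="ALERT-2024-0117",
    reviewer="J. Smith, Series 24 principal",
    disposition="escalated",
)
print(record.requires_escalation())  # True
```

The point of the structure is that every field an examiner would ask about — which tool, which item, which human, what they decided, and when — is captured at review time rather than reconstructed later.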

What To Do Now

You don't need to panic, but you do need to be proactive. If your firm is using or considering AI tools that affect information flow or decision-making:

  • Map where AI touches compliance-relevant processes
  • Ensure your WSPs identify human supervisors for each AI-assisted function
  • Document your firm's rationale for how AI fits into your supervisory system
  • Brief your principals on their ongoing oversight obligations

Dorsey's vision may or may not come to pass. But the regulatory expectation of human supervision isn't going anywhere — and your compliance program needs to account for that, regardless of how much AI you deploy.

Jay Proffitt


Key Takeaways

Do regulators allow AI to perform supervisory functions?

No. FINRA Rule 3110 and SEC guidance require designated supervisory principals — humans — to perform supervision. AI can assist by surfacing information, but a registered principal must review and act on it. Your WSPs need to reflect this.

How should firms document AI use in compliance processes?

Your written supervisory procedures should explicitly identify any AI tools used, what functions they perform, who is responsible for reviewing their output, and how exceptions are escalated to human supervisors. Examiners will ask.

What's the risk of deploying AI without updating supervisory procedures?

You create a gap between what your firm actually does and what your procedures describe. That's a supervision deficiency waiting to be found in an exam. Update procedures before or concurrently with any AI deployment.

Tags: AI compliance, supervisory procedures, fintech regulation, FINRA Rule 3110, compliance technology

The content in this blog is for informational purposes only and does not constitute legal advice, regulatory guidance, or an offer to sell or solicit securities. GiGCXOs is not a law firm. Compliance program requirements vary based on business model, customer base, and regulatory classification.

Published in Regulated Intelligence Brief — AI-powered compliance intelligence for broker-dealers, RIAs, FinTech, and digital asset firms.

Fractional CCO and compliance staff outsourcing, paired with AI compliance software

For broker-dealers, investment advisers, FinTech, digital asset firms, and prediction markets. Experienced leadership. Accelerated by AI.