AI’s Next Frontier: Autonomy, Scaling Struggles, and Legal Risks

From agentic breakthroughs to adoption gaps and in-house pitfalls: vital insights for strategic leadership

Good morning. Welcome to the Executive Brief: 10+ hours of AI breakthroughs, distilled into a crisp 5-minute read. We deliver intelligence for executives who want to stay ahead. First-mover insights only.

In this week’s edition, we unpack the Big Four’s leap into autonomous AI agents, the scaling snag tripping up two-thirds of businesses, and the legal minefield of homegrown AI tools threatening compliance and control.

Let's get right to it.

Today’s Brief

  • The Agentic Shift: Big Four deploy autonomous AI “workers”.

  • AI Scaling Stalls: Why two-thirds of firms struggle with adoption.

  • In-House AI Risks: The hidden dangers lurking in DIY solutions.

Read time: 5 minutes

AI News

Autonomous AI agents transform operations and billing

The Brief:
The major professional services firms (Deloitte, EY, PwC, KPMG) are advancing beyond basic AI into "agentic AI"—systems capable of autonomous task completion. Deloitte and EY recently launched new platforms built with Nvidia, aiming to deploy AI agents alongside human employees for tasks like finance and tax compliance, signaling a potential transformation in how work is done and how services are billed.

The details:

  • The Big Four firms are heavily investing in the next wave of AI, known as agentic AI, which involves intelligent systems or "agents" that can perceive, reason, and act autonomously without constant human input.

  • Deloitte launched Zora AI, providing “intelligent digital workers” initially for its finance team to handle tasks like expense management and trend analysis, targeting a 25% cost reduction and a 40% productivity gain.

  • EY introduced the EY.ai Agentic Platform, deploying 150 tax agents to assist 80,000 tax professionals with data collection, document analysis, and compliance tasks, targeting millions of cases and processes.

  • Both firms collaborated with Nvidia on their platforms and anticipate these agents will "liberate thousands of hours" and fundamentally transform operations.

  • KPMG is also integrating AI agents as "digital team-mates" across Audit, Tax, and Advisory services, focusing on areas like customer service, reporting, and efficiency.

  • PwC is developing agents for data ingestion/cleansing and customer communication, emphasizing responsible AI and exploring impacts on efficiency, customer experience, and profitability.

  • This shift is prompting firms like EY to reconsider traditional hourly billing in favor of outcome-based or “service-as-a-software” pricing.

  • Firms like Deloitte are emphasizing a necessary shift in employee mindset towards being "technologists and engineers first."

Why it matters:
The Big Four's aggressive move into agentic AI signals a major operational shift towards human-AI collaboration and autonomous task execution, setting a benchmark for other large organizations. C-suite executives should closely monitor these developments, as they demonstrate the potential for significant efficiency gains, cost reductions, and fundamentally new business models (like outcome-based pricing). This requires strategic planning around workforce adaptation, technology integration, and rethinking value delivery in an increasingly automated enterprise landscape.

Siloed adoption stalls enterprise-wide impact

The Brief:
A recent study by Asana reveals a significant challenge in AI adoption: two-thirds of organizations are failing to scale AI effectively across their business two years into the AI boom. While adoption exists, it's often siloed within leadership or individual tasks, limiting broader organizational impact and collaboration potential.

The details:

  • The study comes from Asana’s Work Innovation Lab, surveying over 3,000 workers in the US/UK and analyzing data from 112,000 workers using Asana's AI.

  • It found 67% of companies struggle to scale AI, with usage often stuck in a "leadership bubble" – senior leaders are 66% more likely to be early adopters than their teams.

  • Key barriers include employee skepticism (individual contributors are 39% more skeptical than leaders) and job-security fears (individual contributors are 32% more worried than executives).

  • Companies prioritize tracking financial ROI (59%) over employee satisfaction with AI (23%), suggesting a communication gap regarding AI's benefits and role.

  • Nearly half (49%) of AI workflows are designed for individual use, resulting in minimal (6%) adoption by colleagues.

  • Conversely, integrating AI into team workflows boosts adoption by 30%, and embedding it into cross-functional processes increases adoption by 46%.

  • The vision proposed is for AI to evolve from a reactive tool into a proactive “orchestrator of work” that suggests status updates and initiates tasks within team contexts.

Why it matters:
This highlights that simply adopting AI tools isn't enough; the strategy for scaling is crucial and currently flawed in many organizations. C-suite focus should shift from isolated use cases to integrating AI into core team and cross-functional workflows, actively addressing employee concerns through communication and training to bridge the adoption gap. Measuring employee sentiment alongside ROI is vital for successful, organization-wide AI integration.

Custom AI tools spark compliance chaos

The Brief:
While corporate legal departments and law firms are increasingly developing AI tools in-house for greater control and customization, this trend introduces significant risks. These include potential litigation, regulatory compliance issues, information security vulnerabilities, and the persistent problem of AI generating incorrect information (hallucinations).

The details:

  • Wall Street firms like Citigroup and Morgan Stanley have publicly warned investors about the increased information security and regulatory risks associated with their in-house AI development.

  • A notable case involved personal injury firm Morgan & Morgan, whose attorneys were sanctioned for submitting court filings with case citations fabricated by an AI tool.

  • Developing these tools requires specialized expertise that many in-house legal teams lack, diverging from their core industry-specific knowledge.

  • Proponents argue in-house tools offer better data control, customization, and potentially lower hallucination risk thanks to constrained datasets, often cost-effectively building on foundation models such as those behind ChatGPT.

  • However, significant risks remain, including legal penalties (such as fines and paying opposing counsel’s fees), an evolving compliance landscape, and data-security vulnerabilities exacerbated by human error, any of which can outstrip existing risk-management frameworks.

  • Mitigation strategies include rigorous attorney training focusing on skepticism, implementing "human-in-the-loop" processes with verification steps like deep linking to sources, and using multiple AI tools to cross-check outputs.

  • Another approach involves limiting AI use to lower-risk tasks like document review within constrained datasets ("walled gardens") to substantially reduce hallucination potential.

Why it matters:
The allure of bespoke, controlled AI solutions carries substantial legal and reputational dangers, particularly the risk of costly hallucinations and regulatory scrutiny. C-suite leaders must ensure that any in-house AI development is accompanied by rigorous governance, specialized training emphasizing verification, robust human oversight protocols, and a clear assessment of risk tolerance based on the AI's intended application. Simply building it internally doesn't eliminate risk; it merely shifts the locus of responsibility.

Your Next Move

This week, ask your team:

  • “Are we chasing autonomous AI agents before mastering basic AI scaling and adoption across our teams?”

  • “In-house AI brings control but high risk; external AI has its own perils. How do we de-risk our AI strategy without sacrificing competitive edge?”

  • “Agentic AI promises a ‘liberated’ workforce. Are we truly ready to manage, integrate, and even bill for this new digital labor force, or are we just creating new silos?”

Need a deep-dive playbook? Reply and I’ll share a strategy for navigating scaling, risk, and the agentic future.

That’s it for today!

See you next time,

Executive Brief Editorial Team