How AI Agents Are Transforming Professional Services, and How to Implement Successfully

The opportunity for professional services

AI agents represent a step change from the generative AI tools most firms already use. Rather than responding to prompts, agents pursue goals: planning steps, executing them, using tools as needed, and delivering outputs with minimal human orchestration. A proposal agent, for example, could pull client data from the CRM, retrieve previous proposals from a knowledge base, assemble the latest team bios and firm boilerplate, review notes and call transcripts, then build a draft presentation. For a detailed explanation of how agents differ from traditional AI and why investment is accelerating, see our Executive Guide to AI Agent Strategy.
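The proposal workflow just described can be sketched as a plan-and-execute loop. Everything below is illustrative: the tool functions (`fetch_crm_record`, `search_proposals`, and so on) are hypothetical stand-ins for real CRM and knowledge-base integrations, not any particular product's API.

```python
def fetch_crm_record(client_id):
    # Stand-in for a CRM lookup; a real agent would call the CRM's API
    return {"client": client_id, "sector": "professional services"}

def search_proposals(query, top_k=3):
    # Stand-in for knowledge-base retrieval of similar past proposals
    return [f"proposal_{i}" for i in range(top_k)]

def latest_bios(client_id):
    # Stand-in for pulling current team bios and firm boilerplate
    return ["bio_partner", "bio_manager"]

def transcripts_for(client_id):
    # Stand-in for retrieving call notes and transcripts
    return ["kickoff_call_notes"]

def draft_presentation(rfp_text, gathered):
    # Stand-in for the single generative drafting step at the end
    return {"rfp": rfp_text, "sections": sorted(gathered)}

def build_proposal_draft(client_id: str, rfp_text: str) -> dict:
    """Plan the steps, execute each with the appropriate tool, return a draft."""
    plan = [
        ("client_context", lambda: fetch_crm_record(client_id)),
        ("past_proposals", lambda: search_proposals(rfp_text)),
        ("team_bios", lambda: latest_bios(client_id)),
        ("call_notes", lambda: transcripts_for(client_id)),
    ]
    gathered = {name: step() for name, step in plan}
    return draft_presentation(rfp_text, gathered)
```

The point of the shape, not the stubs: retrieval steps gather grounded inputs, and generation happens once at the end, over material the agent actually fetched.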

The business case is compelling. Gartner predicts 40% of enterprise applications will integrate task-specific AI agents by end of 2026, up from less than 5% in 2025.¹ PwC found 88% of executives plan to increase AI-related budgets specifically because of agentic AI.²

But for professional services firms (consultants, lawyers, accountants, and agencies), generic use cases aren't enough. The question is whether your firm will capture the value agents offer in your specific context, or watch competitors pull ahead while you're still running pilots.

The prize: What agents can deliver for professional services

Professional services run on expertise, but expertise takes time. Partners and senior staff spend hours on work that requires their judgment for minutes. Agents change that equation, not by replacing expertise, but by compressing the time it takes to get from question to insight.

Legal: From document mountains to actionable intelligence

Law firms and in-house teams are deploying agents across the legal lifecycle. In due diligence, agents review virtual data rooms, flag material contracts, identify non-standard terms, and compile risk summaries that would take junior associates days. For contract analysis, agents compare executed agreements against playbooks, highlight deviations, and suggest redlines aligned with firm or client standards.

Legal research, historically a junior lawyer's rite of passage, is being transformed. Agents scan case law, identify relevant precedents, summarise holdings, and draft research memos in a fraction of traditional time. For litigation teams, agents assist with document review at scale, categorising materials, identifying privileged content, and surfacing documents responsive to specific discovery requests.

The adoption data confirms the shift: corporate legal AI adoption more than doubled in one year, from 23% to 52%.³ But the speed of adoption brings its own risks, a point we'll return to.

Accounting and audit: Turning compliance into competitive advantage

For accounting and advisory firms, agents are moving beyond basic automation into intelligent workflow orchestration. In tax research and monitoring, agents track regulatory changes across jurisdictions, flag client-specific implications, and draft advisory memos. This work previously required specialists to manually track dozens of sources.

M&A due diligence sees similar gains. Agents parse financial statements across multiple subsidiaries, identify discrepancies, calculate adjusted EBITDA under various assumptions, and compile findings into deal-ready summaries. For audit preparation, agents categorise transactions, reconcile accounts, flag anomalies for human review, and prepare documentation packages that accelerate the path to sign-off.
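As a rough illustration of the "adjusted EBITDA under various assumptions" step, the sketch below applies a different set of add-backs per scenario. The scenario names, add-back categories, and figures (in £m) are invented for the example.

```python
def adjusted_ebitda(reported_ebitda: float, addbacks: dict,
                    accepted: set) -> float:
    """Apply only the add-backs the assumption set accepts."""
    return reported_ebitda + sum(v for k, v in addbacks.items() if k in accepted)

# Hypothetical add-backs identified during diligence, in £m
addbacks = {"one_off_legal": 0.4, "owner_salary_excess": 0.6, "restructuring": 1.0}

# Each scenario accepts a different subset of the add-backs
scenarios = {
    "conservative": {"one_off_legal"},
    "base":         {"one_off_legal", "owner_salary_excess"},
    "aggressive":   set(addbacks),
}

results = {name: adjusted_ebitda(10.0, addbacks, accepted)
           for name, accepted in scenarios.items()}
```

The spread between scenarios is exactly what an agent should surface for human review rather than resolve on its own.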

Mid-market firms stand to benefit disproportionately. Where large firms have armies of junior staff for routine work, smaller practices face bandwidth constraints that limit growth. Agents offer a path to capacity without proportional headcount, if deployed thoughtfully.

Consulting and advisory: From proposal factories to insight engines

Consulting firms live and die by their ability to win work and deliver it profitably. Both stand to benefit from agent-enabled transformation.

Bid and proposal production, traditionally a scramble of recycled content, late-night formatting, and last-minute subject matter expert contributions, is being reimagined. Agents can analyse RFP requirements, search content libraries for relevant case studies and capability statements, draft compliant responses, and flag gaps requiring human input. Firms using agent-enabled proposal tools report cutting bid preparation time significantly while improving consistency and compliance.

Beyond business development, agents are enhancing delivery itself. Market intelligence agents monitor client industries in real-time, surfacing competitor moves, regulatory changes, and emerging risks. Knowledge management agents make firm-wide expertise accessible on demand, answering questions like "what did we recommend to similar clients facing this challenge?" by synthesising past deliverables, methodologies, and lessons learned.

Across all three domains the pattern is the same: agents handle the work that takes time but not judgment, freeing professionals to focus on insight, strategy, and trusted advice, the things clients actually pay for.

The risks of getting it wrong (and how to manage them)

Agent deployment in professional services isn't like rolling out a new productivity tool. The stakes are higher. Errors can harm clients, damage reputations, and create legal liability. Three categories of risk need to be addressed, and each can be managed.

Fabrication risk: When agents make things up

Large language models, the technology underlying most AI agents, can generate plausible-sounding content that is entirely fabricated. In legal contexts, this has produced a stream of high-profile failures. A database tracking AI hallucinations in legal proceedings has identified over 800 instances worldwide.⁴

The consequences are serious. Courts have sanctioned lawyers, struck filings, and denied fee awards. In one recent case, attorneys from two major firms submitted a brief containing nine incorrect citations (including two completely fabricated cases) after using AI tools without adequate verification. The special master found their conduct "tantamount to bad faith" and imposed $31,100 in sanctions.⁵

But fabrication isn't inevitable. It's a design problem with engineering solutions. Retrieval-augmented generation (RAG) architectures that ground agent outputs in verified source documents dramatically reduce hallucination risk. Automated citation checking catches errors before they reach clients or courts. Human-in-the-loop review at critical decision points provides a final quality gate.
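One of these engineering controls, automated citation checking, can be sketched in a few lines: every citation in a draft must match a document in a verified source set, and anything unmatched is held back for human review. The case names and data model here are hypothetical.

```python
# Hypothetical verified source set, e.g. documents actually retrieved by a
# RAG pipeline or confirmed against an authoritative citator
VERIFIED_SOURCES = {
    "Smith v Jones [2019] EWCA Civ 112",
    "Re Acme Holdings [2021] EWHC 45 (Ch)",
}

def check_citations(draft_citations: list) -> list:
    """Return citations that cannot be grounded in the verified set."""
    return [c for c in draft_citations if c not in VERIFIED_SOURCES]

# A fabricated citation is caught before the draft reaches a client or court
unverified = check_citations([
    "Smith v Jones [2019] EWCA Civ 112",
    "Doe v Roe [2020] UKSC 99",   # hypothetical fabricated case
])
assert unverified == ["Doe v Roe [2020] UKSC 99"]
```

A production system would match citations fuzzily and verify the cited proposition, not just the citation string, but the gate itself is this simple: no match, no release.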

Firms implementing agents for research and drafting need architectures purpose-built for accuracy, not general-purpose chatbots repurposed for professional work. See our article "How we designed a zero-fabrication research agent".

Workforce anxiety: The fear that changes everything

As London Mayor Sadiq Khan warned in his Mansion House speech this week, AI could become "a weapon of mass destruction of jobs" if organisations don't manage the transition responsibly.⁶ His concern isn't hypothetical. Research presented alongside his speech found over half of London workers expect AI to change their jobs within the next 12 months. Khan cited estimates that 70% of skills in the average job will have changed by 2030.

Professional services firms face a specific challenge: much of the work agents can automate has historically been how junior professionals learn the craft. If document review disappears as a training ground for young lawyers, how do they develop judgment? If junior consultants no longer build financial models from scratch, where does modelling expertise come from?

The answer isn't to avoid automation (competitors won't wait), but to redesign career paths alongside workflows. This means being explicit about which tasks agents will handle, which humans will retain, and how development pathways adapt. Firms that communicate transparently and involve their people in the transition will retain talent. Those that deploy agents without explanation will face resistance, attrition, and the quiet sabotage that comes from a workforce that feels threatened rather than empowered.

Governance gaps: The questions no one wants to answer

Ask a firm deploying AI agents basic governance questions and the answers often reveal uncomfortable gaps:

  • When an agent produces output that harms a client, who is accountable? The partner who approved the work, the team that configured the agent, or the vendor that built it?

  • What level of human oversight is required before agent-generated work goes to clients? Does it vary by risk level?

  • How do you ensure agents don't leak confidential client information across matters or to third-party model providers?

  • How do you audit what agents actually did, after the fact?
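The last question, auditability, is the most tractable to engineer, assuming you control the agent runtime. A minimal sketch: record every tool call in an append-only, chain-hashed log, so "what did the agent actually do, and under which matter?" has an answer after the fact. The field names are illustrative, not a standard.

```python
import json, hashlib, datetime

class AuditLog:
    """Append-only record of agent tool calls, chain-hashed so tampering
    with an earlier entry invalidates every later hash."""

    def __init__(self):
        self.entries = []

    def record(self, matter_id: str, tool: str, inputs: dict, output_summary: str):
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "matter": matter_id,       # confidentiality boundary the call ran under
            "tool": tool,
            "inputs": inputs,
            "output": output_summary,
        }
        # Include the previous entry's hash so the log forms a chain
        prev = self.entries[-1]["hash"] if self.entries else ""
        payload = prev + json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append(entry)

log = AuditLog()
log.record("M-1042", "document_search", {"query": "indemnity clause"}, "12 documents")
```

Logging by matter also gives you a concrete hook for the confidentiality question above: any tool call that crosses matter boundaries is visible in the trail.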

These questions aren't theoretical. Regulators are beginning to ask them, and clients will increasingly expect answers. Our Executive Guide to AI Agent Strategy covers governance frameworks in depth, including human oversight models and the five strategic enablers that separate successful AI initiatives from failed pilots. The firms that build these frameworks now (before an incident forces the issue) will have a significant advantage over those scrambling to respond reactively.

What governance-ready adoption looks like

The Executive Guide covers the broader strategic framework for AI governance, including AI Centres of Excellence, automated model monitoring and retraining systems, and the five enablers of successful transformation. For professional services firms specifically, three elements deserve particular attention.

Define human oversight models before deployment

Not all agent outputs require the same level of scrutiny. A draft email can tolerate lighter review than advice to a client facing litigation. Effective governance distinguishes between human-in-the-loop (approval required before any agent action), human-on-the-loop (monitoring with ability to intervene), and human-out-of-the-loop (rarely appropriate in professional services given fiduciary duties). Define these before deployment, not after something goes wrong.
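One way to make these tiers operational, sketched here with assumed output types, is to hold the oversight policy as governance-owned data rather than logic buried in each workflow:

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "approval required before release"
    ON_THE_LOOP = "monitored; humans can intervene"
    OUT_OF_LOOP = "autonomous (rarely appropriate)"

# Governance-owned policy table: output type -> required oversight.
# The output types are illustrative examples.
POLICY = {
    "internal_draft_email": Oversight.ON_THE_LOOP,
    "client_advice":        Oversight.IN_THE_LOOP,
    "litigation_filing":    Oversight.IN_THE_LOOP,
}

def release_allowed(output_type: str, human_approved: bool) -> bool:
    # Unknown output types default to the strictest tier
    required = POLICY.get(output_type, Oversight.IN_THE_LOOP)
    return required is not Oversight.IN_THE_LOOP or human_approved
```

Defaulting unknown work to the strictest tier means a new agent capability cannot quietly bypass review just because no one updated the policy.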

Redesign roles and teams for human-agent collaboration

Bolting agents onto existing team structures captures a fraction of potential value. Leading firms are rethinking how human and agent capabilities combine: what work should agents handle entirely, what should agents draft for human refinement, and what requires human judgment that agents cannot replicate.

Some firms are going further, questioning the traditional pyramid model itself. If agents can handle work historically done by junior staff, what does that mean for partner-to-associate ratios? For the economics of leverage? For how work gets priced and delivered? These aren't distant concerns. They're strategic questions that forward-thinking firms are already working through.

The challenge goes beyond operations. Firms need new management approaches, different performance metrics, and evolved accountability structures.

Invest in training (it's now a legal requirement)

Under the EU AI Act, Article 4 requires providers and deployers of AI systems to ensure "a sufficient level of AI literacy" among staff and anyone else operating AI systems on their behalf.⁷ This obligation has applied since February 2025, with enforcement beginning August 2025.⁸

The requirement isn't prescriptive. There's no mandated certification or specific curriculum. But European Commission guidance makes clear that simply telling employees to "read the manual" won't suffice.⁹ Organisations must consider employees' technical knowledge, experience, and the context in which AI systems are used. For high-risk AI systems (from August 2026), human oversight requirements under Article 26 demand even more specialised training.

For UK firms serving EU clients or operating EU subsidiaries, compliance is mandatory. For all firms, the business case is clear: untrained staff using AI agents create risk, not value. Training should cover not just how to use specific tools, but how to evaluate agent outputs critically, when to escalate, and what governance processes apply.

Serpin's AI Readiness Programme covers all the essentials and can be tailored to each organisation's specific needs. Meanwhile, our Licence to Operate provides proof of competence, linked to robust training, policies and governance.

The bottom line

AI agents offer professional services firms real competitive advantage: faster delivery, better consistency, expanded capacity without proportional headcount growth. The use cases in legal, accounting, and consulting are proven, and the adoption data confirms the direction of travel.

But the firms capturing that value aren't those deploying agents fastest. They're those pairing technology with governance, redesigning workflows rather than automating broken processes, and investing in workforce readiness alongside tool deployment.

Gartner predicts over 40% of agentic AI projects will be cancelled by end of 2027, due to escalating costs, unclear business value, or inadequate risk controls.¹⁰ The failures won't be about technology. They'll be failures of implementation, governance, and change management.

For professional services firms, the path forward is clear: treat agent deployment as a transformation programme to deliver business value, not a technology project. Start by identifying where AI agents can deliver the most benefit, often in areas with repetitive, labour-intensive workflows. Define governance before selecting tools. Design human oversight into every workflow. Invest in training that builds genuine AI literacy. Bring your people along rather than deploying around them.

The prize is significant, but so are the risks of getting it wrong. The difference lies in how you approach implementation, not whether you have the best underlying technology.

References

¹ Gartner (2025) 'Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026, Up from Less Than 5% in 2025', Gartner Newsroom, 26 August. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025

² PwC (2025) 'PwC's AI Agent Survey'. Available at: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html

³ Association of Corporate Counsel and Everlaw (2025) 'Generative AI's Growing Strategic Value for Corporate Law Departments', October. Available at: https://www.everlaw.com/resources/acc-genai-survey-2025/

⁴ Charlotin, D. (2025) 'AI Hallucination Cases Database'. Available at: https://www.damiencharlotin.com/hallucinations/

⁵ Ambrogi, R. (2025) 'AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations', LawSites, 14 May. Available at: https://www.lawnext.com/2025/05/ai-hallucinations-strike-again-two-more-cases-where-lawyers-face-judicial-wrath-for-fake-citations.html

⁶ ITV News (2026) 'AI risks becoming "weapon of mass destruction of jobs", Sadiq Khan warns', ITV News London, 15 January. Available at: https://www.itv.com/news/london/2026-01-15/ai-risks-becoming-weapon-of-mass-destruction-of-jobs-mayor-of-london-warns; Financial Times (2026) 'Sadiq Khan warns AI could be "weapon of mass destruction" for jobs', 15 January. Available at: https://www.ft.com/content/6f92844e-6eb6-48dc-a36a-fd63115e45b5

⁷ European Union (2024) Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 4. Available at: https://artificialintelligenceact.eu/article/4/

⁸ Inside Privacy (2025) 'European Commission Provides Guidance on AI Literacy Requirement under the EU AI Act', 6 March. Available at: https://www.insideprivacy.com/artificial-intelligence/european-commission-provides-guidance-on-ai-literacy-requirement-under-the-eu-ai-act/

⁹ European Commission (2025) 'AI Literacy - Questions & Answers', Shaping Europe's Digital Future. Available at: https://digital-strategy.ec.europa.eu/en/faqs/ai-literacy-questions-answers

¹⁰ Gartner (2025) 'Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027', Gartner Newsroom, 25 June. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027


Category

Insights

Written by

Scott Druck


What next?

Let's have a conversation.

No pressure. No lengthy pitch deck. Just a straightforward discussion about where you are with AI and whether we can help.

If we're not the right fit, we'll tell you. If you're not ready, we'll say so. Better to find that out in a 30-minute call than after signing a contract.
