EU AI Act Seven Months Out: What Professional Services Firms Need to Know



Recent headlines suggest the EU AI Act is delayed. The Digital Omnibus package, published in November 2025, would extend the high-risk AI deadlines to December 2027 for standalone systems and August 2028 for regulated products. But the proposal faces opposition: 127 civil society organisations have urged the Commission to halt it, and it still requires approval from the European Parliament and Council. Organisations shouldn't assume the delay will pass.
Meanwhile, implementation is ramping up. The first harmonised standard, prEN 18286, has been released for public consultation. Member States must establish regulatory sandboxes by August 2026. The AI Office is monitoring GPAI model compliance and expects informal collaboration from providers, with full enforcement powers from August 2026.
Provider or Deployer?
Under the Act, providers develop AI systems or place them on the market, while deployers use them professionally: a deployer is anyone using an AI system under its authority in a professional capacity. If you're using third-party AI tools, you're a deployer, and the Act still applies to you.
Does This Apply to You?
Not all AI use is regulated equally. The Act focuses on high-risk systems affecting fundamental rights, health, or safety. AI used for recruitment, candidate screening, or performance evaluation falls into this category. For financial services firms, AI for credit scoring or insurance pricing is similarly classified.
But much of what professional services firms use AI for falls outside high-risk: document analysis, research assistance, bid production, marketing content. These operational uses face lighter transparency requirements rather than the full compliance regime.
Article 26 Sets Out What Deployers Must Do By August 2026:
✅ Use AI systems according to provider instructions, with documented processes
✅ Assign human oversight to people with competence, training and authority to intervene
✅ Ensure input data is relevant and representative for the intended purpose
✅ Keep automatically generated logs for at least six months
✅ Inform workers before using high-risk AI in the workplace
Several AI governance platforms can help here, providing automated usage tracking and audit trails. Fundamental Rights Impact Assessments are mandatory for public bodies and private entities providing public services, plus deployers of credit-scoring and insurance-pricing AI.
AI Literacy Applies to Everyone:
This one isn't just for high-risk systems. Article 4 requires all staff operating or using AI systems to have AI literacy suited to their role, their technical knowledge, and the context of use. It has been in force since February 2025. If your teams are using AI tools without structured training, you're already behind.
Whether August 2026 holds or shifts to late 2027, the obligations aren't changing. Organisations preparing now will be ready either way.
Written by
Scott Druck
Let's have a conversation.
No pressure. No lengthy pitch deck. Just a straightforward discussion about where you are with AI and whether we can help.
If we're not the right fit, we'll tell you. If you're not ready, we'll say so. Better to find that out in a 30-minute call than after signing a contract.






