
Posted on Apr 13, 2026 in Contact Center

EU AI Act Article 50: What Contact Centre Owners Need to Know Before August


I’m spending a lot of my time at the moment talking to contact centre owners. Most of them have heard of the EU AI Act.

Not many of them have read Article 50. Fewer still have checked whether their AI agents are ready for it. And the deadline is 2 August 2026.

That’s less than four months away. So I want to lay out what it actually says, what it means for you, and what you should probably be doing about it right now.

I want to credit Tim Banting, whose LinkedIn post reminded me that although we talk about this with our customers, plenty of people out there haven’t come across it yet, and the topic deserves wider attention. That’s what this blog post hopes to achieve.

What does Article 50 actually say?

Article 50 is the transparency provision of the EU AI Act. It applies to every AI system that interacts directly with people. Not just high-risk ones. Every chatbot, every voice bot, every virtual assistant.

The core requirement is simple to state: if someone is talking to an AI, you have to tell them. Not in the terms and conditions. Not on a FAQ page somewhere. At the point of first interaction, clearly and accessibly.

There’s a second part too. If your AI generates content (and it does, if it’s producing text or voice responses), those outputs need to be marked in a machine-readable format so they’re detectable as artificially generated. The technical standards for this are still being finalised through the EU’s Code of Practice, with the final version expected around June 2026.
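Since the Code of Practice isn’t final, nobody can tell you the exact marking format yet. But to make the idea concrete, here’s an illustrative sketch of one possible approach: wrapping each generated response in a JSON envelope that carries a machine-readable “this was AI-generated” signal. Every field name here is a hypothetical placeholder, not a standard.

```python
import json
from datetime import datetime, timezone

def mark_ai_output(text: str, model_id: str) -> str:
    """Wrap an AI-generated response in a machine-readable envelope.

    Illustrative only: the actual marking format will be defined by
    the EU's Code of Practice, which is still being finalised. All
    field names here are hypothetical.
    """
    envelope = {
        "content": text,
        "ai_generated": True,           # the core machine-readable signal
        "generator": model_id,          # which system produced the output
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

marked = mark_ai_output("Your order has shipped.", "example-model-v1")
```

The point is less the format and more the plumbing: whatever the final standard looks like, your architecture needs a place where every generated output passes through a marking step before it reaches the user.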

The penalties for non-compliance are fines of up to €15 million or 3% of global annual turnover. That’s the same order of magnitude as GDPR fines.

What’s the deadline?

This is where it gets a bit confusing, so bear with me.

The disclosure requirement, telling people they’re interacting with AI, comes into force on 2 August 2026. No delay. Even under the Digital Omnibus proposals that are currently working through the European Parliament, this date is not moving.

The machine-readable labelling requirement gets a short extension under the Omnibus. Systems already on the market before August 2026 would have until 2 February 2027 to comply with the labelling rules. But anything new deployed from August onward has to meet both requirements from day one.

You might also have seen headlines about the EU pushing AI Act deadlines back to 2027 or 2028. That’s real, but it only applies to the high-risk system rules, which are a completely different part of the Act. Article 50 transparency is not affected by those delays.

Transparency vs Human Oversight – they’re different

I keep seeing these two things mixed up, so it’s worth spelling out the difference.

Article 50 transparency is the requirement I’ve been describing: tell people they’re talking to AI, label AI-generated content. It applies to all AI systems that interact with people. It sits in Chapter IV of the Act.

Human-in-the-loop (HITL) oversight is a different obligation entirely. It lives in Chapter III, under the rules for high-risk AI systems. Article 14 says that if your AI is making or influencing decisions about things like access to essential services, healthcare triage, creditworthiness, or employment screening, a qualified human must be able to monitor the system, understand its outputs, and override or stop it.

Most contact centre AI falls under Article 50, not the high-risk rules. Your standard chatbot or voice assistant that answers questions and routes calls is limited-risk. But if your AI is triaging healthcare queries for an NHS trust, or screening job applicants, or making decisions about someone’s access to a service, you could be in high-risk territory without having thought about it.

The reason this distinction matters practically is that Article 50 requires a different set of technical responses than HITL oversight does. If you’re planning for one when you should be planning for the other, you’ll waste time and money.

What is everyone else doing?

I went looking for what the major CCaaS vendors and partners are doing to help their customers with Article 50 compliance.

Genesys is probably furthest ahead. Late last year they achieved certification against ISO/IEC 42001, the first international standard for AI management systems, and they publish AI model cards for their products. NICE has strong compliance tooling and audit trails, which seems to help them in regulated sectors.

The ISO 42001 cert is interesting. It’s an organisational certification. It certifies that Genesys the company has mature governance processes around AI. It does not certify that a specific customer’s deployment is Article 50 compliant. Those are two different problems. Your vendor having good governance is necessary but not sufficient. What matters is whether your specific setup, your actual AI touchpoints, your configuration, is doing the right things.

AWS, who provide Amazon Connect (the platform I’m most familiar with), have published guidance on their approach to the EU AI Act and were among the first signatories of the EU’s AI Pact. But they’re very clear about the shared responsibility model. AWS is responsible for security of the cloud. You are responsible for compliance in the cloud. There is no specific Amazon Connect + Article 50 guidance from AWS yet.

What this looks like in Amazon Connect

I can speak most specifically about Amazon Connect because that’s what we work with every day. And the good news is that Connect already has the building blocks for Article 50 compliance natively. They’re just not wired up for it by default.

Lex can declare that the caller is interacting with an AI system at the start of every conversation. That’s the requirement under Article 50(1). You can do this in the bot’s opening message, in the IVR flow, or both. It’s configuration, not custom development.
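To make that concrete, the disclosure is really just a matter of making sure it’s the first thing the caller sees or hears. Here’s a minimal sketch of the pattern, assuming you build your bot’s opening message in code (say, in a Lambda behind the flow); the wording is illustrative, not legal advice.

```python
# Hypothetical disclosure text; get your actual wording reviewed.
AI_DISCLOSURE = (
    "You're talking to an automated AI assistant. "
    "You can ask to speak to a person at any time."
)

def opening_message(greeting: str) -> str:
    """Prepend the Article 50 disclosure to the bot's greeting.

    The key property: the disclosure comes at the point of first
    interaction, not in terms and conditions or an FAQ page.
    """
    return f"{AI_DISCLOSURE} {greeting}"

msg = opening_message("How can I help you today?")
```

In practice you’d put the same text in the Lex bot’s welcome intent or the first prompt block of the Connect flow; the mechanism matters less than the placement.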

Contact Lens can capture and tag AI-generated responses, giving you a record of what was generated and when. Kinesis streams can feed that into an immutable audit log in S3. If you’re using Bedrock for generative AI responses, the invocation logging captures which model generated what, when, and with what parameters. If you’re using Connect’s built-in AI Agents then Contact Lens might well have everything you need, as long as you’ve made sure that long-term record storage is in place.
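For the audit trail, the shape of the record matters more than the transport. Here’s a sketch of the kind of entry you might stream via Kinesis into an S3 bucket (ideally one with Object Lock enabled, so the log is genuinely immutable). The field names are illustrative assumptions, not a Contact Lens or Bedrock schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(contact_id: str, model_id: str, response_text: str) -> dict:
    """Build one audit-trail entry for an AI-generated response.

    Illustrative sketch: field names are hypothetical. In a real
    deployment this dict would be serialised and written to the
    Kinesis stream feeding your S3 audit bucket.
    """
    return {
        "contact_id": contact_id,
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Store a hash rather than the raw text if the transcript
        # itself already lives in Contact Lens.
        "response_sha256": hashlib.sha256(response_text.encode()).hexdigest(),
    }

record = json.dumps(audit_record("contact-123", "example-model", "Your balance is..."))
```

The design choice worth noting: hashing the response lets you prove later that a specific output existed at a specific time without duplicating transcript storage you already pay for.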

The architecture supports it. The individual services support it. It just needs someone who understands the requirements to look at your specific deployment, tell you where the gaps are, and get the right things turned on before August.

I’m in the UK or the US. I don’t have to worry, right?

If you’re a UK or US-based organisation, you might be thinking this doesn’t apply to you. The UK hasn’t adopted the EU AI Act, and it doesn’t apply to the US either. The UK’s approach to AI regulation is principles-based, through existing regulators like the ICO and Ofcom, and a comprehensive AI Bill isn’t expected until late 2026 at the earliest. I’m less familiar with US regulation, but as far as I know there’s nothing federal, just a patchwork of state-level laws. The US situation actually sounds like it’s getting messier by the minute, so that’s probably one for another blog post!

But here’s the catch. Article 50 applies based on who your AI interacts with, not where your company is based. If your contact centre AI handles queries from EU citizens, you’re in scope. And plenty of UK organisations fall into that category: financial services firms with European operations, NHS trusts that deal with EU nationals, universities, any business that serves customers across the Channel. We could also see global US firms implement these requirements across the board as it’s easier than trying to make sure you correctly segment by region, similar to the cookie consent stuff.

Even where the legal obligation is arguable, there’s a pragmatic case. UK organisations in regulated sectors, particularly healthcare and financial services, tend to align with the highest applicable standard. If you’re going through NHS procurement, or dealing with EU-headquartered clients, being able to demonstrate Article 50 compliance is going to start appearing in questionnaires and tender requirements. So, even though you might be sure you don’t need to, it might be better to get things in place anyway so that you’re ready if needed.

What should you do right now?

If I were running a contact centre with AI agents serving any EU audience, here’s what I’d want to know before August:

Which of my AI touchpoints interact directly with people? Chatbots, voice bots, virtual assistants, automated email responses. Map them.

Which of those touchpoints could be interacting with EU citizens? Even if you’re UK-based, if the answer isn’t definitively “none,” you need to plan for Article 50.

Do any of those touchpoints clearly declare that the user is interacting with an AI? Not in the small print. In the conversation itself, at first contact.

Is there an audit trail? Can you show what AI-generated content was produced, when, and by what system? If someone from a compliance team asked to see that evidence tomorrow, could you produce it?

If the answer to any of those is “I’m not sure,” that’s the gap. Three months is enough time to close it, but not if you wait until July.
