Discussion about this post

Mahdi Assan:

I think you're right to highlight the debate around the AI Act's applicability to agents. It could be argued that the definition of an 'AI system' under the Act assumes a human-in-the-loop at all points of the system's operation. On that reading, simple, single-turn input-and-output prompting of AI systems is certainly in scope. But are AI agents still in scope if they can execute multi-step tasks without human verification, where those steps may involve tools and resources that affect the agent's wider environment?

And the European Commission's initial answer to this is... intriguing: it says the AI Act's risk-based framework applies "to the extent that AI agents are AI systems," a phrasing some could read as suggesting the Commission is open to the possibility that certain agents fall outside the scope of the Act's primary risk-based framework.

Regardless of whether the Act applies to an organisation's deployment of agents, the governance implications you set out here still exist. And if organisations do not get a handle on them, serious headaches await.
