As someone deeply immersed in tech public policy, especially where data science and artificial intelligence are concerned, I've been on the edge of my seat watching the development of the new EU-wide regulation, the AI Act. And let me tell you, the Parliament's swift approval on Wednesday caught many of us by surprise. 📅✨
The Act is complex and, in many places, frustratingly vague. A heads-up for my corporate readers: if your organization uses even the most basic machine learning models (say, a linear model for risk assessment), this Act will significantly affect your operations, and soon.
Before diving in, I'd like to share a video. I was a guest on Bloomberg HT's "Artificial Intelligence Center" program, where we exchanged views on artificial intelligence, technology policy, and the technologies of the future. Sharing my experiences and insights was a great pleasure. I invite everyone interested to watch the program (English subtitles are available) and share your thoughts.
So, let's dive into what this means for us.
Is it Truly Artificial Intelligence (AI)?
Whenever you encounter a so-called "smart" system or algorithm, the first question to ponder is, "Is this truly AI?" From my standpoint, the boundaries defining AI are somewhat nebulous at present. I posit that the essence of AI hinges on its capacity for "adaptiveness," a criterion open to various interpretations. I conjecture that any system employing self-learning techniques could satisfy this criterion, potentially extending to simpler rule-based or expert systems as well.
Title I, Article 3:
‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
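To make that "adaptiveness" point concrete, here's a minimal sketch in Python (my choice of illustration; the Act prescribes nothing of the sort) of the kind of basic linear risk model I mentioned earlier. Because it keeps updating its weights on data it sees after deployment, it arguably ticks every box in the definition above. All names and numbers are hypothetical:

```python
import numpy as np

# A hypothetical risk scorer that updates its weights on every labelled
# observation it sees after deployment -- i.e., it "exhibits adaptiveness",
# the property my reading of the definition hinges on.
class OnlineRiskScorer:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x: np.ndarray) -> float:
        # "Infers, from the input it receives, how to generate outputs..."
        return float(1.0 / (1.0 + np.exp(-(self.w @ x + self.b))))

    def update(self, x: np.ndarray, y: int) -> None:
        # One online logistic-regression step: the system adapts in production.
        error = self.predict_proba(x) - y  # gradient of the log-loss
        self.w -= self.lr * error * x
        self.b -= self.lr * error

scorer = OnlineRiskScorer(n_features=3)
applicant = np.array([0.4, 1.2, 0.0])   # made-up feature vector
print(scorer.predict_proba(applicant))  # a prediction that can influence a decision
scorer.update(applicant, y=1)           # post-deployment learning step
```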
Who are you?
The Act delineates clear roles and responsibilities, focusing primarily on the 'providers' (developers) of high-risk AI systems. 'Deployers', on the other hand, refer to entities, either individuals or organizations, that utilize an AI system within a professional domain, distinct from the ultimate consumers. While deployers of high-risk AI systems do shoulder certain obligations, these are notably less stringent compared to those imposed on providers.
The EU's Regulatory Scope
This regulation mandates control over the deployment or introduction of high-risk AI systems within the EU, irrespective of the provider's geographical location. It also encompasses third-country providers if the outputs of their systems are employed within the EU.
Risk-Based Approach
The AI Act categorizes AI systems by their risk level, drawing parallels with traffic signal colors for easier understanding:
Banned (Red): AI practices deemed unacceptable due to the risks they pose, such as social scoring and manipulative behavior modification, are outright prohibited.
High-risk (Yellow): These AI systems are subject to stringent regulatory scrutiny, encompassing a broad spectrum of legal and technical obligations.
Limited risk (Green): Systems falling into this category must meet basic transparency guidelines, but otherwise, the regulatory burden is significantly lighter.
In essence, the legislation urges entities to rigorously evaluate their AI systems: immediately eliminate anything that engages in a banned practice; for systems classified as high-risk, prepare for an extensive set of requirements; for more benign, low-risk applications, the regulatory implications are minimal.
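As a first mental model (and emphatically not a legal determination), that triage can be sketched as a short decision procedure. The three boolean flags below compress questions that in reality require legal review against Title II and Annexes II-III:

```python
from enum import Enum

class RiskTier(Enum):
    BANNED = "red"
    HIGH = "yellow"
    LIMITED = "green"
    LOW = "minimal"

# Illustrative flags only -- a real assessment is a legal exercise,
# not three booleans.
def triage(uses_banned_practice: bool,
           matches_high_risk_criteria: bool,
           interacts_with_people: bool) -> RiskTier:
    if uses_banned_practice:
        return RiskTier.BANNED    # must be discontinued within 6 months
    if matches_high_risk_criteria:
        return RiskTier.HIGH      # the full set of obligations applies
    if interacts_with_people:
        return RiskTier.LIMITED   # basic transparency duties
    return RiskTier.LOW           # voluntary codes of conduct

print(triage(False, True, True))  # -> RiskTier.HIGH
```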
The Act's list of prohibited practices deserves close attention, not only for the outright bans themselves but also for the possibility that an AI system inadvertently drifts into such a practice. While the prohibitions primarily target intentionally harmful uses (as specified in Title II, Article 5), the text explicitly covers unintended consequences as well ("...or the effect of..."). A proactive audit of your AI systems within the first two months after finalization is therefore advisable, so that any such risks are identified and addressed before the bans bite.
P.S.: It's important to note that these prohibitions take effect six months after finalization, a significantly shorter timeline than the 24 months allocated for the bulk of the regulation's provisions, amplifying the urgency for early compliance efforts.
Banned practices, drawn from Title II (paraphrased; a checklist sketch follows the list):
Manipulative AI: This refers to systems that deceptively alter behavior, ultimately harming an individual’s ability to make informed decisions. Such practices can lead to unintended consequences and erode trust.
Exploiting Vulnerabilities: AI should not target individuals based on factors like age, disability, or economic status for harmful purposes. Responsible AI development ensures fairness and avoids exacerbating existing inequalities.
Intrusive Biometric Tracking: While biometrics can enhance security, using them to deduce sensitive personal information (such as ethnicity, beliefs, or sexual orientation) must be done with care. Exceptions may exist for approved law enforcement purposes.
Social Scoring Systems: AI systems that judge individuals based on their social behavior can have negative consequences. Striking a balance between accountability and privacy is essential.
Profiling for Crime Prediction: AI should not solely predict criminal behavior without supporting evidence. Fairness, transparency, and due process are critical in any crime-related applications.
Indiscriminate Facial Recognition: Building facial recognition databases from untargeted internet or CCTV image collection can infringe on privacy rights. Proper regulation and consent mechanisms are necessary.
Emotion Tracking Without Consent: Deducing emotional states in workplaces or schools without valid health or safety reasons can be intrusive. Respect for individual privacy and consent is paramount.
Unrestricted ‘Real-Time’ Biometric Identification (RBI): The use of RBI in public areas should be carefully regulated. Balancing security needs with privacy rights is essential.
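If it helps your audit, the list above collapses naturally into a checklist you can walk every system through. A toy sketch, with wording paraphrased by me rather than taken from the legal text:

```python
# Paraphrased Title II checklist -- illustrative wording, not the Act's.
BANNED_PRACTICES = [
    "manipulative or deceptive behaviour modification",
    "exploiting vulnerabilities (age, disability, economic situation)",
    "biometric inference of sensitive attributes",
    "social scoring",
    "profiling-only crime prediction",
    "untargeted scraping for facial recognition databases",
    "emotion inference in workplaces or schools",
    "unrestricted real-time biometric identification in public",
]

def audit(system_name: str, findings: dict[str, bool]) -> list[str]:
    """Return the banned practices a given system was flagged for."""
    return [p for p in BANNED_PRACTICES if findings.get(p, False)]

print(audit("cctv-analytics",
            {"emotion inference in workplaces or schools": True}))
```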
Discerning High-Risk AI under the EU AI Act:
Under Title III, the EU AI Act lays out the criteria to determine whether AI systems fall into the high-risk category. Essentially, high-risk AI systems are those integrated as safety components within products subject to specific EU regulations, listed in Annex II. These systems must undergo a detailed external evaluation according to the stipulated regulations, encompassing a wide range of products from machinery and medical devices to toys and civil aviation security measures.
Moreover, AI applications related to the scenarios described in Annex III are also classified as high-risk, with a few notable exceptions. For instance, if the AI system performs a distinct procedural task, augments the results of tasks previously performed by humans, detects patterns or anomalies in decision-making processes without replacing or significantly influencing prior human judgment, or is involved in a preparatory step for an assessment critical to the objectives outlined in Annex III, it may not be considered high-risk.
If you're developing an AI system that falls under a scenario in Annex III but you believe it doesn't qualify as high-risk, it's imperative to document that assessment before placing the product on the market or offering it as a service.
A pivotal note: AI systems dedicated to profiling (automatically processing personal data to analyze or predict aspects of personal life such as job performance, economic situation, health, personal preferences, reliability, behavior, location, or movements) are always considered high-risk.
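Here's that assessment logic as a hedged sketch. The parameter names are my own paraphrase of the exceptions described above; the binding test is the Act's text, not this function:

```python
def annex_iii_high_risk(matches_annex_iii_use_case: bool,
                        involves_profiling: bool,
                        narrow_procedural_task: bool,
                        improves_prior_human_work: bool,
                        flags_patterns_without_replacing_humans: bool,
                        preparatory_step_only: bool) -> bool:
    """Hedged sketch of the Annex III classification described above."""
    if not matches_annex_iii_use_case:
        return False
    if involves_profiling:
        return True   # profiling is always high-risk, no exceptions
    # The exceptions -- document this reasoning before going to market.
    if (narrow_procedural_task or improves_prior_human_work
            or flags_patterns_without_replacing_humans
            or preparatory_step_only):
        return False
    return True

print(annex_iii_high_risk(True, False, True, False, False, False))  # -> False
```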
Special Focus: Annex III High-Risk AI Use Cases
Annex III of the Act elaborates on specific use cases for AI systems regarded as high-risk but not outright prohibited. These encompass a wide array of applications:
Creditworthiness Assessment: AI systems engaged in evaluating an individual's credit score, excluding those used for detecting financial fraud.
Insurance Risk Assessment and Pricing: The application of AI in determining risks and setting prices for health and life insurance.
Recruitment and Performance Monitoring: Utilizing AI in the recruitment process, including job advertisements, resume screening, allocating tasks, and evaluating employee performance.
Remote Biometric Identification: Systems that extend beyond simple identity checks, potentially raising privacy issues.
Identifying Sensitive Attributes: AI that deduces protected characteristics such as ethnicity or religious beliefs using biometric data.
Emotion Detection Systems: The use of AI to infer emotional states, which can lead to ethical dilemmas.
Access to Essential Services: AI determining eligibility for critical public benefits, significantly affecting individuals' lives.
Critical Infrastructure Management: The role of AI in managing crucial infrastructure, including utilities like electricity and water.
Education and Training: AI applications in student admissions, assessments, and monitoring behavior.
Law Enforcement and Migration: AI used in predicting criminal behavior, conducting lie detection tests, and processing asylum applications.
Administration of Justice and Democratic Processes: The deployment of AI in legal decision-making and its potential to influence democratic procedures.
For developers of AI systems identified in Annex III, the legislation mandates a rigorous documentation process to assess and categorize their products as high-risk accurately. This assessment is crucial for aligning with the AI Act’s requirements and ensuring responsible AI deployment within the EU.
The Yellow Light: Navigating Through High-Risk Waters
When your AI venture finds itself under the 'high-risk' spotlight, the EU AI Act lays out a comprehensive suite of obligations in Articles 8-25. These guidelines are designed not just to ensure compliance but to foster a culture of safety, accountability, and transparency. Here's a distilled overview for the guardians of these high-risk AI systems:
Forge a Robust Risk Management Plan: It's imperative to foresee and plan for potential risks throughout the lifecycle of your AI system. Think of it as a safety net that evolves as your system does.
Champion Data Quality: The bedrock of your AI system's training and testing phases should be data sets that are not just vast but are accurate, relevant, and unbiased. Quality over quantity always pays off.
Keep Impeccable Records: Detailed documentation isn't just paperwork; it's your proof of compliance and your best ally in case of audits. It's about writing the story of your AI system's journey in a way that's transparent and accountable.
Enable Continuous Monitoring: Logging key events isn't just about tracking changes; it's about proactive risk management. Think of it as your AI system's heartbeat monitor (a minimal logging sketch follows this list).
Clarify User Instructions: Your role includes guiding those who deploy your system, ensuring they understand its scope, capabilities, and limits. Clear instructions can significantly reduce misuse or misinterpretation.
Prioritize Human Oversight: This act reiterates the importance of human control and intervention. AI should enhance, not replace, human decision-making, ensuring that technology remains our tool, not our replacement.
Assure System Integrity: Aiming for accuracy, robustness, and security means building an AI system that not just performs well but is also resilient against errors and cyber threats. It's about being prepared for the digital storm.
Establish a Quality Assurance Framework: Compliance isn't a one-time badge; it's a continuous journey. Setting up a quality control system ensures your AI system remains in line with regulatory standards and ethical considerations.
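To make the monitoring point tangible, here's a minimal event-logging sketch. The schema is entirely my invention; the Act imposes the duty to keep logs, not this particular format:

```python
import json
import logging
import time
import uuid

# One structured, timestamped record per prediction, appended to a file.
logging.basicConfig(filename="ai_events.jsonl", level=logging.INFO,
                    format="%(message)s")

def log_prediction(model_version: str, inputs: dict,
                   output: dict, operator: str) -> str:
    """Append a key event to the system's log and return its id."""
    event_id = str(uuid.uuid4())
    logging.info(json.dumps({
        "event_id": event_id,
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,      # or a hash, if the inputs are personal data
        "output": output,
        "operator": operator,  # who was in the loop (human oversight)
    }))
    return event_id

log_prediction("credit-scorer-1.3.0",
               {"income_band": "B", "tenure_months": 18},
               {"score": 0.72, "decision": "refer_to_human"},
               operator="analyst_42")
```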
Limited-Risk AI Systems
Limited-risk AI systems, such as generative AI models including ChatGPT, are recognized for posing a moderate level of risk. Consequently, these systems are subject to specific obligations to mitigate their risk levels. These obligations include but are not limited to:
Disclosure that content has been generated by AI, ensuring transparency (see the sketch after this list).
Design measures to prevent the generation of illegal content.
Publication of summaries regarding the copyrighted data used for training, enhancing accountability.
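As a trivial illustration of the first obligation, generated content can be stamped with both a machine-readable flag and a human-readable notice. The function and field names here are mine, not the Act's:

```python
def disclose(generated_text: str, model_name: str) -> dict:
    """Wrap generated text with an AI-generated disclosure."""
    return {
        "content": generated_text,
        "ai_generated": True,  # machine-readable flag for downstream systems
        "notice": f"This content was generated by {model_name}.",
    }

print(disclose("Your claim has been pre-approved.", "acme-llm-v2")["notice"])
```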
Furthermore, the regulation acknowledges the potential systemic risks posed by general-purpose AI models, including large generative AI systems, and stipulates special provisions to address these concerns.
Low-Risk AI Systems
AI systems not falling under the above category are classified as low-risk AI systems. These systems are generally permitted for free use and encompass AI-powered video games, spam filters, and other similar applications. Providers of these low-risk AI systems may voluntarily choose to adhere to the behavioral codes outlined in the regulation, promoting ethical use and development practices.
By distinguishing between limited-risk and low-risk AI systems, the EU AI Act aims to balance the promotion of AI innovation with the need for safety, transparency, and accountability across different types of AI applications.
Cost Implications: An Early Estimation
Anticipating the financial outlay for these requirements is difficult this early in the game, but here's one data point: I'm developing a training course for professionals in this domain, and simply covering the essential practices takes 3-5 full days. Extrapolating, implementing these practices in a real organization could easily run to 30-50 person-days per AI use case: a significant investment in ensuring your AI system not only complies with the EU AI Act but also sets a benchmark in ethical AI development.
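For a back-of-the-envelope budget, multiply that range by your use-case count and a day rate. The rate below is a placeholder assumption on my part, not a market figure:

```python
def compliance_estimate(n_use_cases: int, day_rate_eur: float = 800.0):
    """Rough cost band using the 30-50 person-days-per-use-case estimate."""
    low, high = 30 * n_use_cases, 50 * n_use_cases
    return low * day_rate_eur, high * day_rate_eur

low, high = compliance_estimate(n_use_cases=4)
print(f"EUR {low:,.0f} - {high:,.0f}")  # -> EUR 96,000 - 160,000
```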
Timeline and Next Steps:
The EU's AI Act sets a clear timeline for compliance:
Prohibited practices must be eliminated within 6 months of the Act's finalization.
High-risk AI systems are given a 24-month grace period to meet compliance standards.
A Reflection on Readiness:
History with GDPR suggests a cautionary tale: ample time can still lead to last-minute scrambles for compliance, causing chaos. The sudden surge in GDPR compliance efforts in early 2018 is a testament to the tendency of companies to delay action until absolutely necessary. This often leads to a rush, potential shortages in specialist availability, and inadequate preparation. Learning from this, the proactive approach to AI Act compliance cannot be overemphasized.
Practical Steps for Teams and Organizations:
Prohibited Practices: With only 6 months to comply, prioritize reviewing AI systems for any banned practices. An early audit can prevent the need for hasty discontinuation of critical use cases.
Role Clarification: Identify whether your role concerning AI systems is as a 'deployer' or a 'provider'. Each carries distinct responsibilities under the AI Act.
Documentation: Maintain comprehensive records of all AI-related activities, decisions, and compliance efforts, even if informally. This documentation could be crucial for demonstrating compliance in audits (a minimal register sketch follows this list).
Regulatory Engagement: Keep abreast of your national regulatory developments. Engage with any forums, workshops, or 'sandbox' initiatives they offer to better understand compliance expectations.
Knowledge Sharing: Organize educational sessions on AI Act compliance for your data and AI communities. Sharing insights and strategies can collectively raise the preparedness level.
Team Mobilization: Assemble a dedicated team, combining developers and auditors, to deep dive into the AI Act's implications for your operations. This team should emerge as the focal point for driving compliance efforts within your organization.
Embrace AI Responsibly: Avoid the knee-jerk reaction to sidestep AI usage to dodge AI Act compliance. Opting out could mean forfeiting significant competitive advantages that AI technologies offer. Instead, leverage this as an opportunity to refine and enhance your AI initiatives within the new regulatory framework.
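For the documentation point above, even a simple machine-readable register goes a long way in an audit. A hypothetical sketch, with a schema I made up purely for illustration:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AISystemRecord:
    name: str
    role: str                  # "provider" or "deployer"
    purpose: str
    risk_tier: str             # "banned" / "high" / "limited" / "low"
    annex_iii_category: str | None = None
    compliance_notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord(name="cv-screener", role="deployer",
                   purpose="resume shortlisting", risk_tier="high",
                   annex_iii_category="recruitment",
                   compliance_notes=["assessment documented pre-launch"]),
]
print(json.dumps([asdict(r) for r in register], indent=2))
```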
Remember, starting early not only ensures compliance but also positions your organization as a leader in ethical and responsible AI use.
Want to share this newsletter? Go right ahead! The share button's right there for you.
Until next issue – stay innovative, stay inspired!
Annex II: List of Union Harmonisation Legislation
Annex IIa: List of Criminal Offences Referred to in Article 5 (1)(iii)
Annex III: High-Risk AI Systems Referred to in Article 6(2)
Annex IV: Technical Documentation Referred to in Article 11 (1)
Annex V: EU Declaration of Conformity
Annex VI: Conformity Assessment Procedure Based on Internal Control
Annex IX: Union Legislation on Large-Scale IT Systems in the Area of Freedom, Security and Justice