The pending challenges in the regulation of Artificial Intelligence in the European Union
As with all disruptive technology, the rapid evolution of AI presents a series of ethical, legal, and socioeconomic challenges
NOTE: This post is a faithful translation of the original Portuguese article, written by me and published on this date by Forbes PT ❤ Check out the original!
Artificial Intelligence (AI) has emerged as a driving force in global technological advancement, bringing undeniable benefits. However, as with any disruptive technology, the rapid evolution of AI presents a series of ethical, legal, and socioeconomic challenges. The European Union (EU), recognizing the growing importance of regulating AI adequately, has been working continuously on the Artificial Intelligence Act (AI Act) – a proposal that classifies AI systems by risk and establishes differentiated requirements for their development and use. This legislation, the first of its kind on a global scale, is currently under review, with completion expected by the end of 2023.

A notable development occurred in June 2023, when the European Parliament adopted amendments and proposed harmonized rules on AI. Among these, the prohibition of AI use in biometric surveillance and the requirement that generative AI systems mark the content they generate stand out. The proposal also prohibits emotion recognition and predictive policing systems, and establishes specific regimes for general-purpose AI and foundation models. These updates signal the EU's commitment to formulating robust AI regulation.
Despite these significant advances, major challenges remain. First, a clear definition of the concept of AI is needed, one that covers its different forms and distinguishes it from simple automation. Also needed are the refinement of risk categories in response to security, privacy, and fundamental-rights challenges, and specialized analysis of use cases by application area. Above all, proper regulation should not be seen as static but as an opportunity to create a safe and trustworthy environment by establishing ethical standards for the development and use of the technology. The balance between protecting citizens and stimulating innovation must be maintained, and for this, international cooperation and the sharing of best practices are essential.
And what about accountability? A crucial point in AI regulation is establishing appropriate responsibility in the event of damages or negative consequences. This means clearly defining who is responsible – the manufacturer, the supplier, the operator, or all of them – and developing an effective accountability system. Moreover, the traceability of AI systems must be ensured, allowing errors to be identified and responsibility duly attributed.
Above all, it is vital to remember that protecting individual rights and ensuring equity are fundamental pillars of AI regulation, and legislation must address algorithmic bias, discrimination, and data privacy. For all of this to occur in a balanced and fair environment, establishing a robust supervisory body with an appropriate structure is essential.
AI can reach its true potential and contribute significantly to progress, but its regulation must be a dynamic and adaptable process. Dialogue among stakeholders – companies, governments, civil society, and AI experts – is crucial to ensure that regulations are effective, proportionate, and suited to technological advances and social needs.