Regulation of Artificial Intelligence in the European Union

July 26, 2023
Fernando Stefano Maseda

Front-end Engineer / TypeScript Developer with 5+ years of experience providing software development services to B2B and B2C companies in the US and LATAM.

Introduction

Interest in the latest developments in Artificial Intelligence, such as the use of ChatGPT or the automation of certain processes, is growing day by day. In response, Europe is preparing a Regulation on Artificial Intelligence that is about to take final shape. The new legal framework addresses ethical issues and enforcement challenges in various sectors by focusing on key aspects such as data quality, transparency and human oversight. But it still has some issues to resolve.

The European Commission published a draft of the Artificial Intelligence Regulation, or 'AI Act' (AIA), in April 2021. After long months of analysis, once the EU Parliament ratifies its plenary position in June, the Council, the Parliament and the Commission will be in a position to negotiate the final text. Although the Regulation may still undergo changes during the negotiations, it is expected to be definitively approved by the end of the year.

With the aim of positioning Europe as a world center of excellence in AI, taking advantage of its potential for scientific and industrial progress, the new regulation aims to strengthen the rules on the development and use of this technology within the framework of respect for the values and legislation of the European Union.

Aware that the regulation risks becoming obsolete as the technology evolves, the Commission has chosen to regulate the application of AI to specific use cases rather than the technology itself. The risk that a given use of the technology may pose to a person's fundamental rights, health and safety is therefore evaluated, and AI systems are classified into four risk levels: unacceptable, high, limited and minimal.

Risk Levels

AI systems with a high probability of physically or psychologically harming people pose an unacceptable risk, and their production and use are prohibited by the regulation. These are applications that subliminally manipulate, socially classify or indiscriminately surveil citizens. Among them are systems used for real-time remote biometric identification of people in public places for police purposes, and social credit systems that evaluate citizens and can limit their access to public services; a type of discrimination already prohibited in the European Union through the General Data Protection Regulation (GDPR).

Next come high-risk systems. Annex III of the regulation details eight areas of application, such as the management of critical infrastructure or migration control, as well as access to education or employment, among others. For example, the use of AI for personnel management in a company, from hiring and the assignment of tasks through to dismissal, is possible in the EU as long as transparency, non-discrimination and other requirements are guaranteed.

Finally, there are the systems considered to be of limited risk and those of minimal risk. The first are those that interact with people through an automated agent, such as chatbots; recognize emotions; categorize people biometrically; or manipulate audiovisual content. These systems are required to inform citizens about how they operate. For systems in the minimal-risk category, neither the Commission nor the Council imposes additional requirements, although the Parliament proposes establishing some general principles to guide their development.
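
For readers who think in code, the four-tier scheme boils down to a lookup from risk level to the kind of obligation attached to it. The sketch below is illustrative only: the tier names follow the regulation, but the one-line obligations are my paraphrases, not the legal text.

// A minimal sketch of the AIA's four risk tiers and the kind of obligation
// attached to each. The obligation strings are paraphrases, not legal text.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

const obligations: Record<RiskLevel, string> = {
  unacceptable: "Prohibited: production and use are banned",
  high: "Strict requirements: data quality, transparency, human oversight",
  limited: "Transparency: people must be told they are dealing with an AI system",
  minimal: "No additional requirements (Parliament proposes general principles)",
};

function obligationFor(level: RiskLevel): string {
  return obligations[level];
}

console.log(obligationFor("limited"));
// -> "Transparency: people must be told they are dealing with an AI system"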

In addition, in recent weeks, given the rise of tools such as ChatGPT, the Parliament has proposed introducing specific requirements for the technology on which such tools are based: foundation models. These requirements would apply to any such system, whether or not it is used in a high-risk use case.

Undoubtedly, this will be one of the most relevant issues in the negotiation of the final text, since it runs contrary to the initial objective that the standard not regulate specific technologies. If the requirements prove difficult for the developers of this type of technology to meet, they risk impeding the development and use of a technology with great potential in the European Union; as things stand, the governance requirements imposed by the Parliament could hinder its adoption.

AIA Loopholes

The regulation's very categorization is criticized for its arbitrariness. Aparna Viswanathan, a technology jurist, notes that "the criteria for each risk category do not provide any justiciable basis for determining the category of an AI system." She argues that the recommendation systems used in social networks and audiovisual platforms suggest different content to users based on their gender, age, ethnicity or personality; therefore, "they should enter the category of unacceptable risk due to the documented risk that they have created both for democracy and for the mental health of adolescents." The lawyer, who has practiced in the US, cites as examples the use of Facebook to influence the 2020 American elections and the use of Twitter to instigate the Capitol riots in 2021. "The legal doctrine of 'intent', specifically the 'intended to be used' formulation by the AI system developer, key to determining whether an AI system is 'high risk', is prima facie unfeasible."

The ambiguity also extends to techniques that can be identified as AI systems even though they arguably are not. This is the case of linear regressions, commonly used in the granting of bank credit, which under the Commission's proposal, though not under the positions of the Council and Parliament, could be considered high-risk AI.
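
To make the point concrete, a credit-scoring "model" of this kind can be nothing more than a weighted sum of a few applicant attributes. Everything in the sketch below (features, weights and threshold) is hypothetical; the point is only how little machinery is involved in what the Commission's text could label high-risk AI.

// A hypothetical linear credit scorer: score = bias + w · features.
interface Applicant {
  annualIncomeEur: number;
  existingDebtEur: number;
  yearsEmployed: number;
}

// Hypothetical weights, of the kind fitted by ordinary least squares.
const WEIGHTS = { bias: 0.2, income: 0.000004, debt: -0.000006, tenure: 0.01 };

function creditScore(a: Applicant): number {
  return (
    WEIGHTS.bias +
    WEIGHTS.income * a.annualIncomeEur +
    WEIGHTS.debt * a.existingDebtEur +
    WEIGHTS.tenure * a.yearsEmployed
  );
}

// Credit is granted above a fixed, equally hypothetical threshold.
const applicant = { annualIncomeEur: 45_000, existingDebtEur: 10_000, yearsEmployed: 6 };
console.log(creditScore(applicant) > 0.3); // true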

Violation of the regulation carries penalties of up to 30 million euros or 6% of the offending company's global annual revenue, as well as fines if false or misleading documentation is presented to regulators. Bearing in mind that some of the cases covered by the regulation are also covered by the GDPR, it remains to be clarified whether the sanctions for non-compliance with the AIA would be added to those provided for in the GDPR.
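
As a rough illustration of how the two caps interact: on my reading of the Commission's draft, the ceiling for the most serious infringements is whichever amount is higher, the fixed cap or the revenue share, though the final text may change.

// Sketch of the draft's penalty ceiling for the most serious infringements:
// EUR 30 million or 6% of global annual revenue, whichever is higher
// (reading of the Commission draft; the final text may differ).
const FIXED_CAP_EUR = 30_000_000;
const REVENUE_SHARE = 0.06;

function maxFineEur(globalAnnualRevenueEur: number): number {
  return Math.max(FIXED_CAP_EUR, REVENUE_SHARE * globalAnnualRevenueEur);
}

// A company with EUR 2 billion in revenue faces a ceiling of EUR 120 million.
console.log(maxFineEur(2_000_000_000)); // 120000000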

AIA Potential

Despite its limitations, the AIA draft represents the first systematic and harmonized approach to Artificial Intelligence from a legal perspective. Leading countries in this technology, such as China, Japan and Canada, have national plans and strategies in this regard, including independent advisory committees in the case of the latter two. But none has a regulation as ambitious as the AIA. Nor does the US, where there is no specific federal law, only related regulations such as the California Consumer Privacy Act (CCPA).

According to Viswanathan, "the gaps in the AIA stem from the fact that it has been conceived in terms of the legal doctrine of product liability. But AI is not a product; it is a system with its own life cycle." The lawyer would like to see regulators rigorously analyze all aspects of the AI life cycle, in collaboration with technical experts, to devise regulation that prevents damage before it occurs, instead of trying to assess the level of damage after the system has been commercialized.