...and the regulation of artificial intelligence

Leturia Azkarate, Igor

Computer scientist and researcher

Elhuyar Hizkuntza eta Teknologia

In the previous issue we talked about the latest laws regulating the web. This time, continuing that thread, we will look at the regulatory moves that have been made, or are under way, in the field of artificial intelligence both in the United States and in the European Union. Artificial intelligence has taken on a major role in recent years and is having a huge influence on the economy, education, people's lives... It is therefore necessary, indeed essential, to establish legal frameworks for it.

Ed. 3rdtimeluckystudio/Shutterstock.com

The regulation of artificial intelligence (AI) has been a recurring theme over the last year. In March, over a thousand AI and technology experts (including Steve Wozniak and Elon Musk) called for a 6-month moratorium on the development of systems more advanced than GPT-4, in order to regulate the sector and develop risk-mitigation mechanisms.

Sam Altman, head of OpenAI, the creator of ChatGPT and DALL-E, has himself insisted on drawing attention to the risks of AI and on regulating and limiting them: he has done so in the media, on a global tour, and last May before the US Congress (and again in September, together with Mark Zuckerberg and others). He also signed a statement published in May: “Mitigating the risk of the destruction of humanity by artificial intelligence must be a priority worldwide, on the level of pandemics and nuclear wars.”

But what credibility do these demands have, coming from the main players in the sector themselves? Since when have the entrepreneurs of a sector been seen calling for the regulation and limitation of their own field? If it is really so dangerous, they could simply stop, couldn't they?

The risk of destruction is not real, at least for now; rather, by stoking that fear they aim to obtain a regulation made to measure for AI. This supposed risk and the complexity of the matter would justify having the regulation drawn up by experts (themselves, of course). That way, other general laws would not apply to them, and they would escape the consequences of the real harms that are already occurring (environmental damage, copyright infringement, violations of privacy, biases, the proliferation of misinformation and disinformation, impersonation and deception...).

Furthermore, if the field is regulated so that only certain accredited companies and safety experts can operate in it, they prevent the emergence of new competitors or open-source systems. A clever play all round.

Real regulation in Europe

In the European Union, on the other hand, the so-called AI Act, the artificial intelligence law, has long been under way. The European Commission proposed it in April 2021, the Council of the EU adopted its position in December 2022, the European Parliament approved its negotiating position in June 2023, and the three institutions reached agreement in December 2023. Over this period the text has undergone several changes to reach its near-final form, and it may still be modified in the negotiations with the member states before the final wording is definitively approved in 2024 and the law enters into force in 2025 or 2026. Companies will then have 6 months to comply with it, or one year in the case of general-purpose artificial intelligence systems.

Initially, the law classified AI systems into three groups according to their risk, with obligations for each group in proportion to that risk:

  • Unacceptable risk. Systems that use subliminal manipulation or exploit human vulnerabilities, remote real-time biometric identification systems, and social scoring systems. These are strictly prohibited, although exceptions are allowed for criminal investigation and law enforcement.
  • High risk. These include systems used in products covered by product-safety legislation, as well as systems used in areas such as biometrics, critical infrastructure, education, employment, essential private and public services, law enforcement, migration, justice, medicine, autonomous cars... These systems must be assessed before being placed on the market, to ensure safety, transparency, traceability, non-discriminatory behaviour, respect for the environment, and human oversight.
  • Low risk. Systems not covered by the other categories must meet a number of minimal transparency requirements: basically, making it clear to the user that an artificial intelligence system is being used. Examples include spam filters and recommendation systems.

The EU will set up an office to enforce the law, and there are heavy penalties for non-compliance: the most serious infringements will be punished with fines of 35 million euros or 7% of global turnover (whichever is higher).

Constant changes

But changes have already been made to that initial plan, and not all of them in the right direction. In particular, there have been changes in the issues that affect the big US AI players. Logically, the AI systems mentioned above (ChatGPT and the like) should fall into the high-risk group, since they are used in many of the sectors listed there. And they would have difficulty guaranteeing compliance with some of the principles, such as transparency, traceability and non-discrimination, but above all respect for the environment, because the energy consumption of these gigantic neural models is enormous. Yet the very companies that ask for regulation in their own country have been lobbying in the EU (OpenAI even threatened to stop operating in the EU), and in the version adopted by the European Parliament in June they obtained the classification they wanted: a fourth group, aimed specifically at generative artificial intelligence:

  • Generative and general-purpose artificial intelligence. These systems have only a few transparency-related obligations: disclosing that content has been generated by AI, not generating illegal content, and publishing summaries of the copyrighted data used for training.

There is a total contradiction between what Altman, OpenAI and the others say in one place and the other: in the US they describe the technology they develop as very dangerous and in need of regulation, while in the EU they have fought, successfully, to avoid falling into the high-risk category, thereby lowering the level of requirements imposed on them. As I said, what they want is a regulation tailor-made by themselves, not one made by others.

Ed. Vitor Miranda/Shutterstock.com

And it is almost certain that the law will undergo further changes during the negotiation phase with the member states. France, for example, has already indicated that it does not like the law, because it believes it will be a brake on innovation.

Therefore, although it will still take time to enter into force, and there will surely be further changes, the intentions, principles and objectives of the EU Artificial Intelligence Act are good, and it is well ahead of what they say they want to do in the US. A legal framework for artificial intelligence is necessary to protect and enforce people's rights, avoid real risks and limit abuses by companies. Some say there is a risk of stifling innovation; I do not think so, but in any case it is important to do what is necessary to make technology socially responsible and safe, and not to allow just anything in the name of innovation.

There are those who say that it will harm European industry to the benefit of the US, but it is just the opposite: the law affects all companies that want to market AI in the EU, including foreign ones. So the law will mainly constrain the tech giants, which enjoy huge resources and a position of advantage and act without many scruples, and local companies will have better opportunities and a more level playing field. And since the law is a pioneer, it may well set the path for many other countries, as happened with the General Data Protection Regulation, conditioning and improving the quality of the artificial intelligence of the future for everyone.

Over the coming months and years we will see how the issue evolves on both sides of the Atlantic and around the world.

Sponsors
Basque Government Department of Industry, Trade and Tourism