Artificial intelligence (AI) is far from perfect. A society full of problems will inevitably produce products with problems, so, although AI is sometimes sold as a celestial being that will give us all the answers, we cannot forget that it has been created by us and therefore has defects. In this article we reflect on these problems, errors or knots.
Since OpenAI released ChatGPT on November 30, 2022, a tool that lets us hold a conversation with an AI system, we have seen many reflections on artificial intelligence and its usefulness.
But ChatGPT is just one AI model; there are many more (although most are not public) that can be useful in many processes. For example, an AI capable of recognising traffic lights, pedestrians, etc. plays a very important role in self-driving cars; and one able to detect a tumor more precisely and quickly than a doctor could represent a major medical breakthrough.
In view of this, it is clear that AI has a great deal to contribute in many areas and is likely to be a "revolution", or at least a major change. But artificial intelligence also has problems, creates problems and reproduces problems. This article will address some of these problems, related mainly to the socio-political field, but also, to a lesser extent, to truthfulness and to the human essence (although, of course, these areas are related). To do so, we will work through some problems or knots that we can find in each of these areas, drawing on the work of different experts and what they have told us.
The first area we are going to work on is therefore the socio-political one; specifically, the biases we can find in society. It is clear that our society is still far from idyllic, especially for anyone who is not a white, cis-heterosexual man. Inequality remains one of the main social problems and, although more and more voices are being raised against it, the hegemony of this white, cis-heterosexual man is still alive. This affects all areas, not only the social but also the productive. Just as everything we produce affects our society, our society also shapes those products, which reinforce and reproduce, among other things, the problematic prejudices we transmit to them. Artificial intelligence is no exception.
Two examples of this can be found in the filters offered by the social networks TikTok and Instagram (tools to play with or transform your face). One of them, the Anime AI filter that became very popular in December 2022, takes a photograph of the user and redraws them as a Japanese anime character. But, as many users reported, the artificial intelligence behind this filter was not able to detect black people: instead of drawing them as anime characters, it excluded them, drew them as black objects or, in the worst cases, as monkeys. In addition, there have been cases where people with "Asian" traits, such as monolid eyes, cannot use these filters, as the artificial intelligence behind them is unable to recognize their eyes. These two examples highlight the racism or eurocentrism of AI.
But the problem does not end there: in AI systems we can also find sexism. Many machine translators, for example, when translating traditionally female or male professions, such as blacksmith or nurse, follow that tradition. Consequently, we will receive phrases like "my mother is a blacksmith" or "my father is a nurse" rendered with the traditional gender when translating from Basque (where these words are gender-neutral). The problem is not only that the translation is wrong; it makes clear that these AIs have fully internalized gender roles and therefore reproduce them. This can reinforce such harmful biases, so it is very important to address them, especially as AI systems become increasingly important in our lives.
Ander Corral Naves, a researcher at Orai, is working on this. We talked to him, and he says that AI systems reproduce these biases as a result of the data we provide them: the data is racist and sexist, not the AI. This goes back to a change in how AI was trained around 2018, when huge amounts of data (so-called big data) started to be fed in without being analyzed beforehand, so AI systems absorbed a great deal of biased data. But now the paradigm is changing. There is already a great deal of data in AI systems, so the time has come to analyze and correct it. This is where Corral and the Orai researchers come in: they are retraining AI models with gender-paired data.
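The idea of retraining with paired data can be illustrated with a minimal sketch of counterfactual data augmentation: for each training sentence, a gender-swapped copy is added, so the model sees each profession associated with both genders equally often. This is only an illustration of the general idea, not Corral and Saralegi's actual method; the word list and corpus below are hypothetical toy examples.

```python
# Toy sketch of counterfactual data augmentation for gender bias.
# Each sentence in the corpus is paired with a gender-swapped copy,
# so professions appear with both genders equally often in training.

SWAPS = {
    "he": "she", "she": "he",
    "father": "mother", "mother": "father",
    "man": "woman", "woman": "man",
}

def gender_swap(sentence: str) -> str:
    """Return a copy of the sentence with gendered words swapped."""
    swapped = []
    for word in sentence.split():
        repl = SWAPS.get(word.lower(), word)
        # Preserve the capitalization of the original word.
        if word[0].isupper() and repl != word:
            repl = repl.capitalize()
        swapped.append(repl)
    return " ".join(swapped)

corpus = ["my father is a nurse", "She is a blacksmith"]
# The augmented corpus contains each sentence plus its counterpart.
augmented = corpus + [gender_swap(s) for s in corpus]
```

A real system would of course need morphological analysis rather than word lists, especially for languages with grammatical gender, but the principle of balancing the training data is the same.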
But, intentionally or not, we all reproduce to some extent the racist and sexist prejudices of our society, so it is very easy to miss many of the biases present in that data, or to overlook many realities. For example, in a machine translation model, what happens to non-binary people? How do we teach an AI to talk about them?
Artificial intelligence, like everything produced in our system, depends on our prejudices. Without addressing them, without building a truly egalitarian society, it will be very difficult for artificial intelligence to avoid these biases, even if it patches over the most visible ones.
Moreover, since artificial intelligence is set to play an important role in learning in the coming years, it is very important that it be as inclusive as possible and avoid inequality, so that future generations do not further internalize these biases. I therefore think it is very important that developers and researchers be educated in equality and understand that they bear a major responsibility here. It would also help if an ethics committee or similar institution reviewed artificial intelligences, and the programs or applications that use them, before their publication. At least while we are still building a society based on equality, an inclusive AI can be useful along that path.
Our society is largely based on images. We relate to each other through images, and we get most of our information through them. Therefore, AI's ability to transform and create images can generate many problems.
For starters, it is already very common to see "fake" pictures. As AI makes it easier to create realistic-looking images, the internet will quickly fill with them. So how would we distinguish "real" photos from "false" ones? Images would lose their reliability, and we would tend to dismiss as lies whatever we have not seen directly or whatever strikes us as "strange".
On the other hand, some AI systems are able to create a video in which you appear (a so-called deepfake) from just a few pictures and a model of your voice. It is therefore very easy to harm someone by creating a video that puts them in a bad light (making racist comments, for example), or one that shows them in a vulnerable situation. This can endanger dignity and privacy.
Finally, as Professor Beñat Erezuma told us in an interview, AI can also aggravate bullying. The internet has already meant that victims of bullying suffer harassment even at home; AI can take this problem to the extreme, since anyone can now make a hyperrealistic video humiliating a bullying victim.
The above examples show that the creative capacity of AI can be very dangerous, especially for already vulnerable groups (women, victims of bullying, etc.). But these problems were not created by AI; they were already present in our society. AI reinforces them, facilitates their appearance and takes them to the extreme. Most of the problems mentioned originate in the way we relate to one another. That is why I think it is more appropriate to seek healthier ways of building relationships with others than to fight phenomena that are nothing but a reflection of those relationships.
However, these are not the only problems with AI. Artificial intelligence has also called the human itself into question. Intelligence and creative ability, for example, have often been considered traits that make humans "special". But even though the intelligence of AI cannot yet be compared to ours, we are already seeing that it is not so far away. In addition, we have seen that AI also has creative capacity, and although it is often said that it merely repeats and copies, are we humans not doing the same thing? Is our creation, like artificial intelligence's, not a consequence of what we consume? What sets us apart?
On the other hand, regarding our dominance on Earth, many thinkers have already pointed out that AI could change it too. It is speculated that at some point artificial intelligence will surpass human intelligence, radically changing the current situation; this moment has been called the technological singularity. The scientist Raymond Kurzweil, for example, in his book The Singularity Is Near (2005), places this moment in 2045 and states that its consequences are unpredictable. In any case, what is clear is that our superiority is at risk; then again, in light of everything above, perhaps that is not such a bad thing.
For all these reasons, it is clear that we must act before these problems become insurmountable. Fortunately, there are already movements that denounce the risks of artificial intelligence and make proposals about it. A group of thinkers and experts of different races, genders, ages and nationalities, for example, has written a manifesto criticising the eurocentric character of AI. In addition, an open letter, already signed by nearly 2,000 people, asks companies to pause for six months the training of AI systems more powerful than GPT-4 in order to prepare regulation for them. Whatever their shortcomings, these are two examples that call for reflection on the direction in which artificial intelligence is being developed.
It is impossible to talk about a world that moves so fast without leaving things out, but given even the few problems we have seen, it is already clear that artificial intelligence is not so perfect, and that, instead of putting all our effort into constantly improving it, it may be more important to think about how we can improve our society and what role AI can play in it. A harder job, perhaps.
 A company dedicated to AI research, which aims to contribute, without profit motive, to the development of an AI that can help society.
 The AI has far more often encountered the word blacksmith referring to a man, so it will always translate it in the masculine form (herrero rather than herrera, in Spanish). That is what needs to be changed.
 This idea of paradigm shift has much to do with the work of the physicist and philosopher of science Thomas Kuhn. He argues that science advances through paradigm shifts, developing new premises and ways of doing science, and creating new approaches to problems that the previous paradigm could not solve (Kuhn, 2004).
 For the details of how they do this, see Corral and Saralegi, 2022.
 On artificial intelligences and LGBTQ people, see Leufer, 2023.
 The subject of prejudice is complex and extensively studied. See, for example: Allport, G.W. (1955). The Nature of Prejudice. Addison-Wesley Publishing Company, Inc.
 For example, a streamer known as "QTCinderella" suffered a deepfake case. Someone inserted her face into a porn video, which led to harassment that reportedly caused her a feeling of violation and dysmorphia.
 Beñat Erezuma is a passionate AI user who has created a poem and a podcast with it.
Abdilla, A., Adamson, C., Affonso Souza, C., “A Jung Moon”, Buse Çetin, R., Chilla, R., Dotan, R., Fjeld, J., Ghazal, F., Havens, J.C., Jayaram, M., Jordan, S., Krishnan, A., Lach, E.M., Mhlambi, S., Morrow, M., Ricaurte Quijano, P., Rizk, N., Rosenstock, S., Taylor, J. (2022). "Artificial Intelligence. A Decolonial Manifesto." Manifesto VI. https://manyfesto.ai/index.html. (30/03/2023).
Allport, G.W. (1955). "The Nature of Prejudice." Addison-Wesley Publishing Company, Inc.
Corral, A. and Saralegi, X. (2022). "Gender Bias Mitigation for NMT Involving Genderless Languages". Proceedings of the Seventh Conference on Machine Translation (WMT), pp. 165–176, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Galarraga, A. (2022). "They have created a novel technique to correct gender bias in machine translation." Elhuyar. https://aldizkaria.elhuyar.eus/albisteak/itzulpen-automatikoan-genero-alborapena-zuzentzeko/ (04/03/2023).
Kuhn, T. (2004). "The Structure of Scientific Revolutions." Fondo de Cultura Económica.
Kurzweil, R. (2005). "The Singularity Is Near: When Humans Transcend Biology." United States: Viking.
"Pause Giant AI Experiments: An Open Letter (2023)". Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/. (30/03/2023).