Artificial intelligence at stake

Azkune Galparsoro, Gorka

Researcher and lecturer

Faculty of Informatics, University of the Basque Country (UPV/EHU)

Helena Matute Greño

Professor of Experimental Psychology

University of Deusto

Artificial intelligence is one of the great technological challenges facing humanity: to create machines with our capacities, machines that can communicate with us, process huge amounts of data, and learn and make decisions by themselves. Although it may still seem far off to some, artificial intelligence is already integrated into our lives, and in the coming decades it will reach every field: transportation, education, culture, finance, agriculture…

Within a few years we will make purchases through personal digital assistants, which will suggest how to improve our learning or make a medical diagnosis after asking about our symptoms. Artificial intelligence will make us much more efficient, but some are concerned that robotization will take work and space away from human beings.

Some say that this technology is developing too quickly and see many questions that need to be answered first. In fact, artificial intelligence is an expression of our society's worldview. It is therefore important that society forms a clear and considered opinion about this technology before the laws are set, so that it does not end up being a technology that controls us.

To understand the current situation and what lies ahead, we address three basic questions: What are the real potential and the main challenges of artificial intelligence? What shortcomings does it have at present? What risks does it pose?

We have put these questions to two experts working on artificial intelligence: Gorka Azkune Galparsoro, computer scientist and artificial intelligence researcher, and Helena Matute Greño, an experimental psychologist who studies artificial intelligence from the perspective of psychology.

Ed. Sdecoret/Shutterstock.com

“Can man create an intelligent machine that avoids his mistakes?”

Gorka Azkune Galparsoro

Researcher in artificial intelligence. UPV/EHU

What are the real potential and the main challenges of artificial intelligence?

The potential of artificial intelligence is enormous. It is fundamental to the development of science, and we already use it for thousands of everyday tasks: from automatically detecting dangerous emails to automatically creating albums from our photographs. In the future, artificial intelligence could surpass human intelligence in every area, which would expand its possibilities even further. I don't know if we will ever get there, but in theory there is no reason why it should not happen.

But the scientific challenges are great. Even with the advances of recent years, I would say that artificial intelligence is still in its infancy. Current systems learn from data and then, when given similar data, they work properly. But they still have little capacity to generalize: when given new kinds of data they do not work well, unlike humans. To achieve that capability we will foreseeably have to address other challenges, such as learning algorithms, memory management and symbolic reasoning, among others.

What shortcomings does it have at present?

Current systems have many shortcomings. Besides the lack of capacity to generalize, there are other problems. For example, a system that knows how to play chess can beat the best chess players, but that is all it knows: it cannot drive a car or solve a sudoku. Human beings are able to do many things, and to do them very well (I admit this claim is very debatable), but our systems are still very specialized. How can we build systems that handle many tasks?

Another clear gap is the use of data. The best current systems use gigantic sets of labeled data. Despite the progress of the last two years, we still do not know how to make good use of unlabeled data (unsupervised learning). Seeing photos while someone tells you what is in each one is not the same as seeing the photos with no explanation at all. In the second case, humans are able to extract a lot of useful information; machines cannot yet.
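For readers who want a concrete picture of that distinction, here is a minimal sketch (not part of the interview; it assumes Python with scikit-learn and its small built-in digits dataset) contrasting learning from labeled images with only being able to group unlabeled ones:

```python
# Minimal sketch: labeled (supervised) vs. unlabeled (unsupervised) learning,
# using scikit-learn's small built-in digits dataset. Illustrative only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # images flattened into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: every training image comes with a label
# ("someone says what is in each photo").
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the same images with the labels removed; the algorithm can only
# group similar-looking images, and is never told what a "3" or a "7" means.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X_train)
print("first ten cluster ids (arbitrary numbers):", clusters[:10])
```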

What risks do you see?

Like any technology, artificial intelligence can be used well or badly. This is not a technological problem but a human one: what do we want to use technologies based on artificial intelligence for? We can use them to fight climate change, but also to manipulate people's political opinions. I think the law should regulate the purpose for which a technology is used, not the technology itself. I don't care what technology someone uses to manipulate my political opinions; what I care about is that they want to manipulate my political opinions, and that is what we should penalize. In that sense, I do not think artificial intelligence is different from other technologies.

Someone might say: but what if a system decides by itself to carry out harmful actions? That scenario is still very far away technologically, but if we think it can happen, then if the machine owns its actions it should be responsible for their consequences, just as humans are.

From a scientific point of view, I am more concerned about other things. For example, since systems learn from data and experience, and both the data and the experience come from our hands, aren't we going to reproduce and spread our own biases? We have already seen that systems trained on texts written by humans show sexist biases. That is not an error of the system, but of the data. To put it more philosophically: can man create an intelligent machine that avoids his mistakes?
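As an illustration of that point, here is a minimal hypothetical sketch (not from the interview; the hiring scenario and the data are invented, and it assumes Python with scikit-learn): a model trained on biased decisions faithfully reproduces the bias it was given.

```python
# Minimal sketch with invented data: a model trained on biased hiring decisions
# reproduces that bias. The error is in the data, not in the learning algorithm.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, gender], where gender 0 = man, 1 = woman.
# Labels: 1 = hired, 0 = rejected. In this toy history, equally qualified
# women were systematically rejected.
X = [[5, 0], [6, 0], [7, 0], [8, 0], [5, 1], [6, 1], [7, 1], [8, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two candidates identical except for gender: the model typically predicts
# "hire" for the man and "reject" for the woman, mirroring its training data.
print(model.predict([[6, 0], [6, 1]]))
```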

 

“We have to assign responsibilities in artificial intelligence”

Helena Matute Greño

Director of the Laboratory of Experimental Psychology. University of Deusto

What are the real potential and the main challenges of artificial intelligence?

Artificial intelligences already have many capacities that until recently were considered exclusively human. They can beat humans at highly intuitive and creative games (for example, Go), drive cars and pilot planes, make medical diagnoses (in some respects better than humans), compose symphonies, paint pictures that sometimes win contests, predict who will be a good worker or who will commit a crime, influence what we buy and what news we read, and create fake videos. It is a very powerful technology and, like all powerful technologies, it has to be used very carefully. It can reinforce the progress of humanity, but it can also bring great risks.

What shortcomings does it have at present?

Today, artificial intelligences still make many mistakes, and that is an important risk to take into account. We tend to rely too much on their recommendations, for example when they tell us whom not to hire or who should not be released from jail. But sometimes they fail. We should not trust them blindly.

Another problem is that their objectives are sometimes not transparent and often do not coincide with ours. This can be dangerous. It is true that we are still far from the conscious machines of science fiction, but that is often used as an excuse to say there is no reason to worry. Yet how much does it matter whether the artificial intelligence itself realizes it is hurting us? Imagine a machine with the capacity to do harm in some way: increasing inequality; denying mortgages to the most vulnerable without explaining why; denying health insurance; exploiting the cognitive biases of adults and children so that we click on a particular advertisement and keep watching the ever more extreme videos it shows us… If it has the capacity to do harm, should we really care whether it is aware of it? Obviously, it is not aware.

Machines can already hurt us today, because they are built to maximize the objective they are given (for example, maximizing a company's profits), an objective that may not coincide with ours. Right now, consciousness is the thing that should matter to us least. What should matter when drawing up laws is the damage they can cause.

What risks do you see?

There are many issues to address urgently. We must ensure that artificial intelligences do what we want and respect both people and the values we want to promote as a society (for example, democracy). The protection of children is also fundamental. And before advancing in areas that may carry greater risk, we must make sure we develop adequate systems for controlling artificial intelligence: cybersecurity, adequate shutdown mechanisms and so on. We must also make progress on the assignment of responsibility: to whom should we attribute responsibility for the damage caused by artificial intelligences? Finally, certain uses of artificial intelligence need to be questioned, such as constant surveillance, facial recognition in the street and in schools, autonomous weapons, or the exploitation of human tendencies and weaknesses. Can we as a society accept the malicious uses being made of artificial intelligence? We must think now about what future we want to build and start acting accordingly.
