Human cognitive biases in artificial intelligence under study

Galarraga Aiestaran, Ana

Elhuyar Zientzia

Artificial intelligence algorithms internalize and amplify human cognitive biases. Ed. Danqing Wang

In an article published in RIEV, the evidence of the cognitive biases that appear in artificial intelligence algorithms has been compiled and analyzed so that, once their influence is known, corrective measures can be taken.

It is already well known that artificial intelligence algorithms internalize and amplify the biases present in their data, and work is under way to detect and correct them. For example, researchers at Orai (Elhuyar) have developed a novel technique for correcting bias in machine translation.

However, according to the authors of the article, human cognitive biases influence the entire development of artificial intelligence algorithms and all the participants involved, from researchers to users. The authors have therefore compiled evidence of these biases and explained, through examples, how they act.

The first is the representativeness bias: the closer an individual is to the prototype of its category, the more representative it seems. For example, a robin is more readily placed in the bird category than a penguin. In artificial intelligence, this bias strongly influences the training phase: in some activities men appear far more often than women, and white people far more often than racialized people. As a result, those groups are poorly represented in programs such as medical diagnosis systems.
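To make this concrete, here is a minimal Python sketch (not from the article; the groups, the feature and the 95/5 split are invented for illustration) of how a model trained mostly on one group ends up serving the under-represented group poorly.

```python
# Minimal sketch with hypothetical data: under-representation in the training
# set skews the model toward the over-represented group.
import random
from collections import Counter, defaultdict

random.seed(0)

def make_example(group):
    # Hypothetical relation: the same feature value maps to different labels
    # depending on the group, so a single global rule cannot fit both groups.
    x = random.randint(0, 1)
    y = x if group == "A" else 1 - x
    return group, x, y

# Group A makes up 95% of the training data, group B only 5%.
train = [make_example("A") for _ in range(950)] + [make_example("B") for _ in range(50)]

# "Model": for each feature value, predict the majority label seen in training.
votes = defaultdict(Counter)
for _, x, y in train:
    votes[x][y] += 1

def predict(x):
    return votes[x].most_common(1)[0][0]

# Evaluate separately per group.
for group in ("A", "B"):
    test = [make_example(group) for _ in range(1000)]
    accuracy = sum(predict(x) == y for _, x, y in test) / len(test)
    print(f"accuracy on group {group}: {accuracy:.2f}")
# Typical output: near 1.00 for group A, near 0.00 for group B.
```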

Confirmation bias leads us to remember, and give more weight to, the information that matches our predictions or hypotheses. In artificial intelligence, during the tests of the development phase, it means accepting the results that confirm the researcher's convictions and underestimating the rest. The authors cite a 2013 study by Calikli and Bener, according to which 60% to 93% of software defects are attributable to confirmation bias.

The primacy effect is also analyzed: when we receive information, we tend to give more weight to what comes first, to the detriment of what arrives later. Its consequences are very evident in internet searches, and artificial intelligence amplifies this influence by learning from previous trends.
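A small simulation (purely illustrative; the click probabilities and the number of sessions are assumptions, not from the article) shows how a system that learns from previous clicks keeps reinforcing whatever happened to appear first.

```python
# Minimal sketch: a ranker that re-ranks by accumulated clicks amplifies the
# primacy effect, because top positions attract clicks regardless of quality.
import random

random.seed(0)

items = ["A", "B", "C", "D", "E"]                     # five equally good results
clicks = {item: 0 for item in items}
position_click_prob = [0.50, 0.25, 0.12, 0.08, 0.05]  # users favour top positions

ranking = items[:]                                    # initial, arbitrary order
for _ in range(10_000):                               # simulated search sessions
    for pos, item in enumerate(ranking):
        if random.random() < position_click_prob[pos]:
            clicks[item] += 1
            break                                     # at most one click per session
    # The system "learns from previous trends": re-rank by accumulated clicks.
    ranking = sorted(items, key=lambda i: clicks[i], reverse=True)

print(ranking)
print(clicks)
# The item that happened to start on top keeps most of the clicks,
# even though all items were equally good to begin with.
```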

The anchoring effect leads us to compare any information obtained later with the first piece we anchored on; that anchor becomes the reference. Here too, artificial intelligence inherits and spreads a bias unintentionally introduced by the user, creating a vicious circle.

Finally, the illusion of cause and effect is described. This bias is also inherent in human beings: if something happens after something else, we tend to think that the one is a consequence of the other. It is reflected in artificial intelligence models as well, which therefore run the risk of producing misleading results.
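The risk can be illustrated with a tiny synthetic example (the variables and numbers are invented, not taken from the article): two quantities driven by a common hidden cause look strongly associated, so a model fitted to such data can suggest a causal link that does not exist.

```python
# Minimal sketch with synthetic data: correlation produced by a confounder.
import random

random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A hidden common cause (e.g. "season") drives both observed variables.
season = [random.gauss(0, 1) for _ in range(5000)]
ice_cream = [s + random.gauss(0, 0.3) for s in season]
sunburns = [s + random.gauss(0, 0.3) for s in season]

print(f"correlation(ice_cream, sunburns) = {pearson(ice_cream, sunburns):.2f}")
# ~0.9: a model trained on this data happily predicts sunburns from ice-cream
# sales, but banning ice cream would not prevent a single sunburn.
```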

The authors warn that, although artificial intelligence systems have internalized human cognitive biases, most users consider them objective and neutral. They therefore consider it important to stay alert in order to identify these biases and to avoid or minimize their consequences.
