At the mercy of algorithms

Leturia Azkarate, Igor

Computer scientist and researcher

Elhuyar Hizkuntza eta Teknologia

In the first months of 2016, Twitter and Instagram stopped showing tweets and photos in chronological order and began showing first whatever supposedly interests us most. To do so, they use an algorithm that takes into account the number of interactions, the attention time and other metrics we have produced on previous occasions with similar content or with the same users. Google and Facebook had already taken similar steps, and algorithms will go on guiding more and more processes in our lives (cars, purchases...). It is convenient and in most cases effective, but not everything about it is an advantage.
Ed. © Dollarphotoclub/kavzov

Using an algorithm to decide which results a web service shows its users is nothing new. From the beginning, Google has used the well-known PageRank algorithm to order its results, based on the importance it assigns to each web page. That algorithm is what set Google apart from other search engines and made it what it is today. In the case of a search engine, a good result-ranking algorithm is necessary, since it would be madness for the user to check thousands of results until finding the most suitable one.
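To give a rough idea of the principle behind PageRank, here is a minimal toy sketch: a page's importance is estimated by repeatedly spreading each page's score over its outgoing links. The damping factor, iteration count and example links are illustrative conventions, not Google's actual implementation.

```python
# Toy sketch of the PageRank idea: a page is important if important pages link to it.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start with equal importance
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                          # dangling page: share score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                     # spread score over outgoing links
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Example: page C is linked by both A and B, so it ends up with the highest score.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```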

But it also has its downsides. On the one hand, we cannot be sure that genuinely interesting results are not hidden further down. On the other, the results shown at the top are the most read, and being read is itself one of the factors that puts them at the top of subsequent searches, which makes it much harder for lower-ranked results or new content to rise.

Later, Google began offering personalized results to each user. To do this, it takes into account our previous searches, the results we have clicked on and so on, but also information about the users in our social network. This too has two main disadvantages. One is the lack of privacy: Google keeps a great deal of information about us in order to offer these personalized results. The other is that we always receive information of one kind, from one environment; that is, we live inside a comfortable bubble defined by ourselves and our surroundings, without knowing what lies outside it, which reinforces our opinions and weakens critical thinking. Concern about this issue is growing, and proof of that is the relative success of the DuckDuckGo search engine, whose main selling points are respect for privacy and showing the same results to all users.

End of the timeline

Facebook, Twitter and Instagram originally showed us our contacts' updates, tweets and photos in chronological order, in the so-called timeline. But Facebook a few years ago, Twitter in February of this year and Instagram in March have changed this, and they now show at the top whatever an algorithm judges most interesting to us. It may well be a better service, but it also has the negative effects mentioned above: the comfortable bubble, and the fact that popular authors and content become ever more popular while the marginal become ever more marginal. These changes have drawn more protests; the truth is that we find it easier to accept that the thousands of results from the anonymous mass of a search engine are filtered than that someone else decides over the not-so-long list of our friends' posts.
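To picture what such ranking can look like, here is a purely hypothetical sketch of a feed-scoring function; the signal names, weights and decay constant are invented for illustration and are not the actual formulas used by Twitter, Instagram or Facebook.

```python
# Hypothetical feed ranking: combine engagement, affinity with the author and recency
# into a single score, then show posts in descending score order instead of newest first.
import math
import time

def feed_score(post, weights=None):
    """post: dict with like/reply counts, author affinity (0..1) and a Unix timestamp."""
    w = weights or {"likes": 1.0, "replies": 2.0, "affinity": 5.0, "recency": 3.0}
    age_hours = (time.time() - post["timestamp"]) / 3600.0
    recency = math.exp(-age_hours / 24.0)             # decays over roughly a day
    return (w["likes"] * math.log1p(post["likes"])
            + w["replies"] * math.log1p(post["replies"])
            + w["affinity"] * post["affinity"]        # how often we interact with the author
            + w["recency"] * recency)

posts = [
    {"likes": 500, "replies": 20, "affinity": 0.1, "timestamp": time.time() - 86400},
    {"likes": 3,   "replies": 1,  "affinity": 0.9, "timestamp": time.time() - 1800},
]
for p in sorted(posts, key=feed_score, reverse=True):
    print(round(feed_score(p), 2), p["likes"], "likes")
```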

Well, if algorithms only decided our leisure reading or our searches, it would not be so bad. But automatic processes guided by algorithms are increasingly used in the critical processes of our lives as well. There are fewer and fewer employees in banks, and algorithms already decide who gets credit and who does not. Thanks to credit cards, algorithms can draw on our entire spending history, but also on our activity on social networks, our relationships… Likewise, LinkedIn's ranking algorithm determines whether or not we are suitable for a job.

And then there is surveillance: Snowden revealed that governments spy on all our online activity, but it is impossible for human beings to process all the information they collect, so algorithms analyze it and decide our risk index. Say what you will, it makes me very uncomfortable to know that an algorithm decides whether I am a terrorist, creditworthy or suitable for a job… And that fear becomes a tool of social control that pushes us to self-censor our online activity.

The main concern these algorithms cause us is that we do not know what lies behind them and that the companies using them have too much power over us. For me, there is something even more worrying. So far, at least in most cases, algorithms have been created by humans: someone decides which factors the algorithm takes into account and what weight to give each of them. Those weights can also be changed, new factors introduced… When the algorithm seems to have got a certain decision wrong, one can analyze why that happened and why the decision should change or, if it is wrong too often, change the algorithm. But, as explained in the previous issue of this magazine, machine learning is increasingly used to make decisions in complex processes. In these systems, a model is trained to produce the desired results for a large number of inputs, but the workings of the resulting system are opaque because of their complexity. That is, the company that created the algorithm does not know for certain why it made a specific decision, nor how to modify or improve it other than by training it with more data, and even then it still would not know how it works.
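The difference can be illustrated with a toy sketch: a hand-written scoring rule, whose factors and weights someone chose and can inspect or change, next to a rule whose weights are learned from invented data. In the far more complex models the article refers to, there are millions of such learned numbers, and explaining a single decision becomes practically impossible.

```python
# 1) Hand-written rule: someone chose the factors and weights, and can change them.
def credit_score_by_hand(income, debt, late_payments):
    return 2.0 * income - 1.5 * debt - 4.0 * late_payments   # weights are explicit choices

# 2) Learned rule: the weights come out of training data, not out of a human decision.
def train_perceptron(samples, labels, epochs=100, lr=0.01):
    weights = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
            error = y - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights   # nobody chose these numbers; explaining one decision is already hard

# Invented training set: (income, debt, late payments) -> credit granted (1) or not (0).
data = [(5.0, 1.0, 0.0), (1.0, 4.0, 3.0), (4.0, 2.0, 1.0), (0.5, 3.0, 2.0)]
labels = [1, 0, 1, 0]
print("hand-written score:", credit_score_by_hand(4.0, 2.0, 1.0))
print("learned weights:", train_perceptron(data, labels))
```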

There is no doubt that in the future algorithms will control our lives more and more. They will do a great deal of work and make many decisions in our place, and that will undoubtedly be very convenient. But a time may come when we are so used to algorithms making decisions for us, without knowing why those decisions are made, that we will no longer be able to ask whether that is the best way or the best decision. Because if the algorithm has said so…
