Whether as a robo-recruiter screening job applications, as a chatbot in customer service or as a big-data analyst in police operations, artificial intelligence is increasingly involved in important decisions – and that can become a problem.
Artificial intelligence is not free from prejudice, misinformation and hatred. That is why algorithmic systems can also discriminate against people and restrict their rights, says a KIT researcher. This is the conclusion of a recent study by the KIT Institute for Technology Assessment and Systems Analysis (ITAS), carried out on behalf of the federal government.
Artificial intelligence reflects society
The crux of the matter: although we call it artificial intelligence, AI in reality consists of algorithms that draw on existing knowledge. That means algorithms are only as free from errors as the information they learn from. After all, every algorithm is programmed by humans and fed with data that comes from humans – be it scientific knowledge, posts from social media or the whole tangle of information and disinformation floating around on the Internet. All of this is riddled with human error, mistakes and malice, and so no algorithm in the world can be entirely free of them.
The big Internet companies have already learned this the hard way. A few years ago, Microsoft had to stop its experiment with the chatbot Tay because it could be manipulated. Instead of adopting the jargon of teenagers and chatting in a friendly manner, Tay ended up insulting women and denying historical facts. And the problem of manipulable learning systems has still not been solved. Most recently, Google drew the anger of women and equality officers because its search engine automatically completed the query “women can” with suggestions such as “… not drive” and “… not draw squares”. (As a side note: if you type in “men can”, Google adds, among other things, “… not drink” and “… not listen”.) What may sound merely odd to some is a real problem in many applications.
AI doesn’t provide more objective data
Algorithms can capture and process far larger volumes of data much faster than humans. This is what makes algorithmic systems so valuable to companies, public institutions and banks. Once the learning system has been programmed, the organizations using it save a great deal of time and money. That is why more and more AI systems are in use. They decide on the granting of loans, assess the suitability of applicants in recruiting processes and scan the Internet for content of criminal relevance.
Such a system either prepares human decisions, i.e. it makes a preselection that human colleagues then continue to work with, or humans are taken out of the equation altogether and no longer actively intervene in the decision-making process. “The assumption that this inevitably leads to more objective and therefore fairer decisions often proves to be a fallacy today,” ITAS scientist Carsten Orwat sums up. It becomes particularly critical when the algorithms base their decisions on features that are legally protected – for example when they include age, gender, ethnic origin, religion, sexual orientation or disability, even though they should not take this information into account for certain decisions.
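What “not taking this information into account” could mean in practice is easy to illustrate. The following minimal Python sketch (the column names and the list of protected attributes are made up for the example and are not taken from the study) simply removes protected attributes from the data a model is allowed to see:

```python
import pandas as pd

# Hypothetical applicant data – the column names are invented for illustration.
applicants = pd.DataFrame({
    "age": [23, 41, 35],
    "gender": ["f", "m", "f"],
    "ethnic_origin": ["A", "B", "A"],
    "income": [2800, 3500, 3100],
    "years_employed": [2, 10, 7],
})

# Attributes that are legally protected (illustrative, not an exhaustive list).
PROTECTED = {"age", "gender", "ethnic_origin", "religion",
             "sexual_orientation", "disability"}

# Use only the non-protected columns as model input.
features = applicants[[c for c in applicants.columns if c not in PROTECTED]]
print(list(features.columns))  # ['income', 'years_employed']
```

Of course, dropping the columns alone is no guarantee of fair decisions – the remaining data can still carry the same patterns – which is exactly why the study warns against simply assuming objectivity.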
Discrimination in early release and lending
In his study “Risks of Discrimination through the Use of Algorithms”, study author Orwat lists numerous cases in which individuals or groups of people were put at a disadvantage by algorithms in learning systems. In the United States, for example, a computer system supports decisions on early release from prison. Human rights organizations criticize that it systematically overestimates the risk of reoffending for Americans of African descent, while underestimating it for the white population.
In Finland, a credit institution used algorithms in its web-based lending process. However, the scoring systematically disadvantaged men compared with women and Finnish speakers compared with Swedish speakers. The institution was subsequently fined for violating Finnish anti-discrimination law.
“If data is processed that contains assessments of some people by other people, existing inequalities and discrimination can spread or even intensify,” Orwat summarizes. His recommendation: anyone deploying algorithm-based systems should have both the programming specialists and the employees who will later work with the system advised by anti-discrimination agencies. This minimizes the risk of prejudice, stereotypes and misinformation entering the system’s database and influencing decisions from there.
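What such external advice or review might involve can be sketched with a simple check: do the system’s decisions differ noticeably between groups? The following Python snippet (with entirely made-up data, not taken from the study) compares approval rates by group – one of the most basic indicators used to spot unequal treatment:

```python
import pandas as pd

# Made-up decision log of an algorithmic lending system (illustrative only).
decisions = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f", "m", "f"],
    "approved": [1,   0,   1,   1,   1,   0,   1,   0],
})

# Approval rate per group – a figure an anti-discrimination review might check.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)

# Ratio of lowest to highest approval rate; values far below 1 suggest that
# one group is approved much less often than another.
print("disparity ratio:", round(rates.min() / rates.max(), 2))
```

A figure like this proves nothing on its own, but it is the kind of warning sign that the advisory bodies Orwat has in mind would follow up on.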
The complete study can be downloaded from the website of the Federal Anti-Discrimination Agency.