Algorithmic Injustice: Mend it or End it

Comment

Computers are often thought of as a neutral technology. However, it is becoming alarmingly clear that machines learn from training data that reflects outdated social norms, values and attitudes towards race and gender. This can have insidious consequences.

Image: a man and a woman facing unequal flights of stairs

As the world moves by fits and starts towards greater racial and gender equality, there is a surprising new obstacle in its path: computer systems tasked with automating decisions. This is perhaps the most insidious form of inequality in our history. Computers are often thought of as a neutral technology that should eliminate human biases in many important areas of our lives. Unfortunately, a growing body of evidence makes it alarmingly clear that automated decision algorithms are propagating gender and race discrimination throughout our global community and perpetrating injustices on the poor and the vulnerable.

One cause of the problem is the unconscious bias, or even thoughtlessness, of programmers who inhabit a tech world dominated by white men. But the most common cause of algorithmic injustice comes from the use of deep learning algorithms, in which decisions are based on patterns found in very large quantities of data. While societal values, norms and attitudes towards race and gender are still evolving, their footprint remains on the internet, from which much of the training data for machine learning algorithms is derived. For example, studies have demonstrated that ‘man’ is associated with boss, president, leader and director, whereas ‘woman’ is associated with helper, assistant, employee and aide. Thus, the societal push for greater fairness and justice stumbles on the historical values about poverty, gender, race and ethnicity locked into big data.
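Associations of this kind can be surfaced with a simple probe of off-the-shelf word embeddings. The sketch below is illustrative only: it assumes the gensim library and its downloadable GloVe vectors, not the specific models or methods used in the studies mentioned above.

```python
# Illustrative probe of gender associations in a pre-trained word embedding.
# Assumes the gensim library and its downloadable GloVe vectors (an assumption;
# not the exact data or method of the studies cited in the text).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pre-trained embedding

roles = ["boss", "president", "leader", "director",
         "helper", "assistant", "employee", "aide"]

for role in roles:
    # A positive gap means the role vector sits closer to 'man' than to 'woman'.
    gap = vectors.similarity("man", role) - vectors.similarity("woman", role)
    print(f"{role:>10}: man-vs-woman similarity gap = {gap:+.3f}")
```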

Machine learning can cast out minorities

One of the benefits of using machine learning systems in an engineering context is that they reduce or remove the impact of outliers in the training data. In the context of a decision algorithm, however, ‘outliers’ can equate to minorities such as ethnic groups or other groups with low representation in the data. This is a well-known problem for deep learning, and Google provides a very simple example using a system trained on drawings of shoes. Most of the drawings were of plain shoes or trainers, but a few people drew high heels. Because these were a minority in the data, the trained algorithm misclassified high heels as ‘not shoes’. This is a simple example where the misclassification can be clearly seen, but racial and gender bias can be much more subtle and difficult to show.
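The underlying mechanism is easy to reproduce with a toy classifier: when one class makes up only a sliver of the training data, a model that optimises overall accuracy can get that class almost entirely wrong and still look good on paper. A minimal sketch, using scikit-learn on synthetic data rather than the shoe drawings themselves:

```python
# Toy illustration of how a rare class can be 'cast out' by a standard classifier.
# Synthetic data stands in for the drawings example; this is not Google's dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# 98% of examples belong to class 0 ('plain shoes'), 2% to class 1 ('high heels').
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Overall accuracy can look respectable even when the minority class is largely
# missed; the per-class recall in this report is where that shows up.
print(classification_report(y_test, model.predict(X_test), digits=3))
```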

In this regard, it is extremely difficult and often impossible to work out which features in the data are being used to determine decisions. The end result of learning is a large matrix of numbers that is used to generate the decisions. No one has, as yet, found a general means to probe the matrices to ascertain why a particular decision has been made. This is a dark recess where bias resides.
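To make the point concrete, here is what a trained network actually hands back: arrays of learned weights. This is only a small sketch with scikit-learn on synthetic data, but the shape of the problem is the same at any scale.

```python
# A trained network is, in the end, a stack of numeric weight matrices.
# Small sketch on synthetic data; real deployed systems are vastly larger.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000,
                    random_state=0).fit(X, y)

for i, W in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
# These matrices are all the system gives back: nothing in them states which
# features drove the decision for any particular person or case.
```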

Decision algorithms may well improve over time, but there have been no magic-bullet solutions despite massive efforts by some large tech companies. Many of the companies developing software, particularly for policing, insist that their systems performed well in in-house testing. It has been left to other organisations to collect the evidence and demonstrate the biases, yet the systems keep on being rolled out. It is the familiar old story: once there has been huge investment in a technology, it continues to be used despite its failings.

Technology, and particularly AI, has always got ahead of itself, with ambition outstripping achievement. In my long experience of working on the subject and reviewing many research proposals, ambition often wins the day. Indeed, ambition is often a positive step towards achievement, and in many cases the work can still be worthwhile even if the achievement falls well short of the ambition. However, when it comes to using algorithms to make decisions that impact on people’s lives, we need to be considerably more cautious about ambitious claims for speculative technology that can lead us down the wrong path.

We cannot simply ignore the types of discriminatory algorithmic bias appearing in the civilian world and pretend that we can just make them go away when it comes to weapons development and use. These are just some of the problems that have come to light since the use of AI in society has increased. We don’t know what further problems are around the corner or what further biases are likely to occur in targeting technologies.

The moral of this tale is simple. We must take a precautionary approach to the outsourcing of decisions to computer algorithms. We must not rely on the possibility of future fixes but instead look at what the technology is capable of today. We cannot let the injustices continue to proliferate throughout the world. We need strong regulation mandating large-scale algorithmic testing regimes, akin to drug testing, in which companies are responsible for demonstrating compliance with equality laws.
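What such a testing regime might check need not be mysterious: even a crude audit can compare outcome rates across protected groups before a system is allowed anywhere near deployment. Below is a minimal sketch of one such check, the ‘four-fifths’ disparate-impact ratio used in US employment guidance, computed on hypothetical audit data.

```python
# Sketch of one simple compliance check a testing regime could require:
# the 'four-fifths' disparate-impact ratio, computed on hypothetical audit data.
import pandas as pd

# Hypothetical audit log: one row per automated decision, recording the
# protected group and the outcome (1 = approved, 0 = rejected).
audit = pd.DataFrame({
    "group":    ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 300 + [0] * 200 + [1] * 180 + [0] * 320,
})

rates = audit.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```

A real regime would of course need far more than this single ratio, but the point stands: such tests are cheap to run compared with the harms of skipping them.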

Unless and until it can be shown that decision algorithms can be validated, we need to develop new legally binding human rights instruments to ensure that decisions that significantly impact on human lives are made by humans.

The security tipping point

When we put all of this together, we must ask ourselves if we want to live in a world where our lives and the way we behave are entirely controlled and dominated by automated decisions. All of our life data is currently being tracked by tech giants like Google, Microsoft and Amazon (Alexa). But what happens when states legislate to obtain access to all of that data?

It is already possible to see that China is setting up the technology to monitor its entire population. It is now a requirement that, to buy a smartphone, people register with their official ID and allow the authorities to photograph them for automated face recognition. The authorities have already set up experiments to monitor the entire Uighur Muslim population in Xinjiang in order to repress them and keep them under control. China has also developed a social credit system reminiscent of an episode of Black Mirror: if you do something deemed socially unacceptable, you cannot rent a car, take the fast train or get a loan. Is this where we want to end up?

China is an authoritarian dictatorship that does not require permission or consent from its population, and the population has no means to investigate or overturn decisions made by the state. But just think about this for a moment. The police and security services in a number of western nations are beginning to use automated face recognition and biased algorithms for predictive policing. Some public services are using unjust algorithms for welfare payments, and we have racist algorithms automating passport applications. And this is just the tip of the iceberg compared with the data and records of our lives that tech companies hold on us.

What would it take to tip the balance and allow a nation to change its laws so that it could access all of our tracked data to keep us under surveillance, to prevent peaceful protest and strikes, and to create new offences that enforce uniform social behaviour?

It could be an emergency situation, like a terror attack, that pushes us over the edge. The public may then accept the removal of another sliver of their civil liberties in a trade-off for perceived safety and security.

One of my deepest concerns is that, even as I write this article, people are already suffering injustices at the hands of algorithms, and that this is just the beginning. A precautionary approach is urgently needed. We need strong new regulation at the international level to stop the use of decision algorithms until their fairness can be demonstrated through large-scale testing, as in the pharmaceutical industry.

Another concern is that if all of this is happening where it can be clearly seen and scrutinised with freedom of information requests, what injustices could be meted out in conflict zones, refugee crises and border control? We need to find instruments to ensure that the technology does not derogate from the human rights of victims of warfare or famine. We must also ensure that international humanitarian law is upheld by preventing biased algorithms from being used in the technologies of violence.