developing bigoted algorithms.

Allan Vikiru.
4 min read · Apr 12, 2021
A vector image of a computer displaying an “OOPS!” error message

Time and again, we’ve heard stories of machine learning algorithms and their developers being accused of discrimination.

Recently, I watched this video explaining how algorithms can produce racist results, which then feed into crucial decision-making systems. YouTube’s demonetisation tool was found to be biased against queer content, flagging video titles containing LGBTQ-related vocabulary. In the tweet below, a Google Translate user shows how the tool missed the mark by exhibiting gender stereotypes when translating some Kiswahili phrases to English.

Credit: Vivian on Twitter

All these stem from a huge problem in artificial intelligence known as bias.

Just as humans can be influenced by prejudice when making decisions, ML algorithms can also be trained to produce biased results. This has been a pressing issue since the advent of AI. In 1976, Joseph Weizenbaum, one of the pioneers of modern AI, argued that machines should not replace humans in fields such as healthcare, customer service and policing, since these demand that professionals not only empathise with people but also uphold their human dignity.

Jones describes how bias comes about in various forms during algorithm development. The first source is the data used to train the program. Sample bias arises from using a dataset that doesn’t reflect the program’s actual environment, for example training music recognition software on songs from only one genre. Another instance is where the data itself encodes a stereotype, e.g. portraying male health workers as doctors and female health workers as nurses. This is referred to as prejudicial bias.
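To make the sample bias case a little more concrete, here is a minimal sketch of the kind of sanity check a team might run before training. The file name, the “genre” column and the 60% threshold are all assumptions for illustration, not something from Jones’ article.

```python
import pandas as pd

# Hypothetical training set for a music recognition model;
# the file name and "genre" column are assumptions for illustration.
tracks = pd.read_csv("training_tracks.csv")

# Share of each genre in the training data.
genre_share = tracks["genre"].value_counts(normalize=True)
print(genre_share)

# Flag possible sample bias if a single genre dominates the dataset,
# e.g. more than 60% of all examples (threshold chosen arbitrarily).
if genre_share.iloc[0] > 0.6:
    print(f"Possible sample bias: '{genre_share.index[0]}' makes up "
          f"{genre_share.iloc[0]:.0%} of the training data.")
```

A check this simple obviously won’t catch prejudicial bias, which hides in the content of the examples rather than their counts, but it illustrates how skew can be spotted before the model ever sees the data.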

Before model training, data is cleaned, i.e. checked for correctness, completeness and relevance. In this process, exclusion bias can arise when features deemed irrelevant are removed from the dataset. After model training and testing, measurement bias can develop when real-world data differs significantly from the training data. For instance, image recognition software may fail to recognise a photo of a black cat because the images it was trained on were brighter than the new one. Lastly, algorithmic bias comes from the functioning of the algorithm itself, arising from issues such as the development approach or the training techniques used.
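The brightness example above can be monitored in code. The sketch below is only an illustration under assumed data and thresholds: it compares the average brightness of incoming images against the training set and warns when the two drift apart, which is one rough signal of measurement bias.

```python
import numpy as np

def mean_brightness(images):
    """Average pixel intensity across a batch of grayscale images (0-255)."""
    return float(np.mean([img.mean() for img in images]))

# Placeholder batches; in practice these would be real image arrays.
train_images = [np.random.randint(150, 256, (64, 64)) for _ in range(100)]  # bright training photos
live_images = [np.random.randint(0, 100, (64, 64)) for _ in range(100)]     # darker real-world photos

train_mean = mean_brightness(train_images)
live_mean = mean_brightness(live_images)

# Warn if real-world data differs markedly from what the model was trained on.
if abs(train_mean - live_mean) > 50:  # threshold is an arbitrary example
    print(f"Possible measurement bias: training brightness {train_mean:.0f} "
          f"vs. live brightness {live_mean:.0f}")
```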

Despite this, bias still has a place in machine learning. In ‘The Need for Biases in Learning Generalizations’, Tom Mitchell argues that algorithms need to learn certain biases so that this information can be applied to new but similar situations. Returning to the image recognition example, the model can be trained on both bright and dull images of black cats so that it can handle both in the real world. You can liken this to human experience: after slipping on a wet floor, that memory guides future decisions, such as walking more cautiously on subsequent wet floors.
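One simple way to give a model that “bright and dull” experience is brightness augmentation. The sketch below uses Pillow to generate darker and brighter variants of a training image; the file names and brightness factors are hypothetical, chosen purely for illustration.

```python
from PIL import Image, ImageEnhance

def brightness_variants(path, factors=(0.5, 1.0, 1.5)):
    """Return darker, original and brighter copies of one training image."""
    image = Image.open(path)
    return [ImageEnhance.Brightness(image).enhance(f) for f in factors]

# Hypothetical usage: augment a single photo of a black cat so the
# model also sees dimmer and brighter versions during training.
for i, variant in enumerate(brightness_variants("black_cat.jpg")):
    variant.save(f"black_cat_aug_{i}.jpg")
```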

However, as demonstrated earlier, applying uncontrolled, biased algorithms has destructive effects. Besides causing discontent among users, they also taint decision-making processes. In 2016, ProPublica tested COMPAS, software used across the U.S. to estimate the likelihood that a convicted criminal will reoffend, for racial bias. Among the test cases, they found that for a pair of offenders arrested for a similar crime, the Black offender registered a higher risk score than the white offender, even though the white offender had committed more violent crimes before the arrest and, unlike the Black offender, went on to reoffend. Such scores shape a person’s criminal record, which has the potential to limit their access to employment, adoption and international travel.

All in all, it’s very much possible to minimise the chances of algorithms producing biased results. This can be done by:

  1. Using third-party tools such as Google’s What-If Tool and IBM’s AI Fairness 360 to detect and remove instances of bias (a simplified example of the kind of check these tools perform appears after this list).
  2. Incorporating diverse technical teams in algorithm development and maintenance. Individuals with varying backgrounds in terms of race, sex, gender, sexual orientation etc. can greatly assist in resolving errors that arise and avoiding potential ones.
  3. Providing the model with as much varied and relevant training data as possible. This, however, can run up against the issue of data scarcity in ML.
  4. Embracing a human-based approach, where communities that are or could be affected by the program are continually included in the development process. Their insight could help fill gaps missed by developers.
  5. Calling for public policies that promote the ethical implementation and use of algorithms. In her book ‘Algorithms of Oppression’, Dr. Safiya Noble insists that corporations and governments bear the greatest responsibility for enforcing reforms that address the causes of algorithmic bias.
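As a taste of what the bias-detection tools in point 1 measure, the sketch below computes one common fairness check, the disparate-impact ratio, by hand with pandas. The column names, toy data and the 0.8 cut-off (the widely cited “four-fifths rule”) are assumptions for illustration, not output from either tool.

```python
import pandas as pd

# Hypothetical model decisions with a protected attribute; for illustration only.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = results.groupby("group")["approved"].mean()

# Disparate-impact ratio: the lower group's rate over the higher group's rate.
ratio = rates.min() / rates.max()
print(f"Approval rates:\n{rates}\nDisparate-impact ratio: {ratio:.2f}")

# The "four-fifths rule" flags ratios below 0.8 as potentially discriminatory.
if ratio < 0.8:
    print("Potential bias detected; inspect the model and its training data.")
```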
