17th January 2018

Why AI is biased

Critics of using artificial intelligence (AI) in the recruitment process say that programs are imbued with the biases of their creators. With coding and software development roles still dominated by white males, this presents a huge problem for employers wanting to use technology to eliminate unconscious bias. But what is the truth?

Types of AI

AI is now a catch-all term for everything from Amazon’s Alexa to Google’s AlphaGo. However, many products marketed as AI are in fact no more complex than a common calculator. They rely on pre-programmed instructions, even though they use concepts and technologies that have emerged from the study of AI, such as voice recognition. Apple’s Siri assistant, for example, is programmed to run a web search for any command that it doesn’t recognise.

This type of AI is particularly susceptible to its creator’s biases, as it isn’t intelligent in its own right. It relies on pre-scripted instructions to output a result defined by its creator.
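To make this concrete, here is a minimal, purely hypothetical sketch of such a pre-scripted system. The rules, the favoured-university list and the scores are all invented for illustration; the point is that every outcome is fixed in advance by whoever wrote the rules, so the author’s preferences are reproduced on every run.

    # Hypothetical sketch of a pre-scripted, rule-based screener.
    # Nothing here is "intelligent": every outcome is fixed in advance
    # by whoever wrote the rules, so any bias in those rules is
    # applied uniformly, forever.

    PREFERRED_UNIVERSITIES = {"Oxford", "Cambridge"}  # the author's choice, hard-coded

    def score_candidate(candidate: dict) -> int:
        """Return a score using only hard-coded rules."""
        score = 0
        if candidate.get("university") in PREFERRED_UNIVERSITIES:
            score += 10  # this rule is the author's bias, not a learned pattern
        if candidate.get("years_experience", 0) >= 5:
            score += 5
        return score

    print(score_candidate({"university": "Oxford", "years_experience": 2}))     # 10
    print(score_candidate({"university": "Sheffield", "years_experience": 8}))  # 5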

Machine learning, which uses algorithms to learn from patterns in data, is closer to true artificial intelligence. It can use either supervised learning, in which the system is fed data and told what the patterns in it mean, or unsupervised learning, in which it identifies patterns by itself.
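The difference is easiest to see in code. The following sketch uses scikit-learn and invented toy data: the supervised model is given labels to reproduce, while the unsupervised model must find structure in the data on its own.

    # A minimal sketch of the two learning modes, using scikit-learn
    # and toy data (the features and labels are invented for illustration).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Toy feature matrix: [years_experience, interview_score]
    X = np.array([[1, 3], [2, 4], [8, 9], [9, 8], [7, 9], [1, 2]])

    # Supervised learning: we also supply labels ("what the patterns mean"),
    # here past hire/no-hire decisions. The model learns to reproduce them.
    y = np.array([0, 0, 1, 1, 1, 0])
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[6, 8]]))  # predicts a label like the ones it was taught

    # Unsupervised learning: no labels at all; the model finds its own
    # groupings, and we must interpret what they mean afterwards.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # cluster assignments discovered from the data alone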

AI in recruitment

For employers wanting a bias-free recruitment solution, AI can seem like an easy way to avoid human bias and eliminate discrimination. However, there are a number of problems with this:

  • The most basic AIs are not intelligent. They simply do what they are told, which means they inherit the biases of whoever wrote the instructions.
  • Supervised machine learning relies on its teacher, and will make judgements based on the patterns it has been taught. If the teacher has certain biases, these will be reflected in the feedback given to the AI.
  • Unsupervised machine learning doesn’t acquire biases from its creators, but it can (and usually does) develop its own biases. For example, Microsoft’s chatbot Tay spent a day learning unsupervised on Twitter. It then began repeating the anti-Semitic opinions it had seen on the platform.

AI needs data to learn, and without completely unbiased data it will learn biases. This is a particular problem in recruitment, where most decisions are made with some degree of conscious or unconscious bias. Another key issue with artificial intelligence is its dependence on patterns. Where there is a pattern of Russell Group graduates tending to perform better, AI will learn that Russell Group graduates are good and other graduates are bad. In this it acts much like the human brain, which also learns biases from the patterns it sees.
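As an illustration, the sketch below trains a simple model on synthetic data in which past reviewers favoured Russell Group graduates regardless of ability. The feature names and numbers are entirely made up, but the learned coefficients show the model inheriting exactly that historical preference.

    # Sketch of how biased training data produces a biased model,
    # using synthetic data invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    russell = rng.integers(0, 2, n)   # 1 = Russell Group graduate, 0 = other
    skill = rng.normal(0, 1, n)       # actual ability, identical across both groups

    # Biased historical outcome: past reviewers rated Russell Group
    # graduates higher regardless of skill.
    hired = (skill + 1.5 * russell + rng.normal(0, 0.5, n)) > 0.75

    model = LogisticRegression().fit(np.column_stack([russell, skill]), hired)
    print(model.coef_)  # large positive weight on the 'russell' feature:
                        # the model has learned to prefer Russell Group graduates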

Attend our conference to learn more: Diversity & Inclusion: The Changing Landscape