Machine learning is suddenly everywhere, transforming, improving and threatening our lives and livelihoods. Apparently, the machine revolution is here. But why has machine learning come to the forefront now? What are we really talking about with machine learning? And is machine learning really autonomous?

Machine learning is a distinct area of Artificial Intelligence (AI). While the goal of AI is ultimately to create a machine that can mimic the human mind, machine learning is focused on writing software that can learn from past experience. The ideas and techniques behind AI and machine learning have been around for decades, but the rate of development over the past year has been immense. Our computational power has dramatically increased and we have access to far more, and far better, data, bringing us to a tipping point.

In 1959, Arthur Samuel defined machine learning simply as the field of study that gives computers the ability to learn without being explicitly programmed. In 1998, Tom Mitchell, a professor at Carnegie Mellon University, gave a more formal definition: a computer program is said to learn from experience with respect to a given task if its performance at that task, as measured by some chosen metric, improves with experience. Basically, if a program can improve how it performs a task based on past experience, then it has learned. This is different from a computer carrying out a task because it has been programmed to do so.
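
To make Mitchell's definition concrete, here is a minimal sketch in Python. The task T is labelling messages as spam, the experience E is a stream of labelled examples, and the performance measure P is accuracy on a small test set; all of the messages are invented purely for illustration.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose past words best match the message."""
    def score(label):
        return sum(counts[label][word] for word in text.lower().split())
    return max(("spam", "ham"), key=score)

# Experience E: a stream of labelled examples (invented for illustration).
experience = [
    ("win a free prize now", "spam"),
    ("free money claim now", "spam"),
    ("lunch at noon tomorrow", "ham"),
    ("notes from the meeting", "ham"),
]
# Performance P: accuracy on a small held-out test set for the task T.
test_set = [("claim your free prize", "spam"), ("meeting moved to noon", "ham")]

for n in (1, 2, 4):
    model = train(experience[:n])
    accuracy = sum(classify(model, t) == y for t, y in test_set) / len(test_set)
    print(f"after {n} examples: accuracy {accuracy:.0%}")
```

Nobody tells the program which words signal spam; its accuracy on the task improves simply because it has seen more examples, which is what Mitchell means by learning.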

Machine learning can be applied to learning a game, to recognising, classifying or categorising items from visual or measurement data, or to making predictions of future values or actions based on previous ones, like the cost of a car or what a person will do next. At its core, machine learning is the extraction of knowledge from data. If you have a question you are trying to answer, and you think the answer is in the data, you might apply machine learning. So really, machine learning is an automated statistical model that says, 'based on these factors, this is the likely outcome'. And this is where we start to get into trouble.
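
As a sketch of that idea, the snippet below fits an ordinary least-squares model to a handful of past car sales and then answers 'based on these factors, this is the likely price' for a new car. The figures are hypothetical, invented only to show the mechanics.

```python
import numpy as np

# Past experience: (age in years, mileage in tens of thousands) -> sale price.
# These numbers are invented for illustration.
factors = np.array([[1.0, 1.2], [3.0, 4.0], [5.0, 6.5], [8.0, 9.0]])
prices  = np.array([18000.0, 14000.0, 10500.0, 7000.0])

# Ordinary least squares: find weights w so that factors @ w is close to prices.
X = np.hstack([factors, np.ones((len(factors), 1))])  # add an intercept column
w, *_ = np.linalg.lstsq(X, prices, rcond=None)

# The "model" is just the learned weights; a prediction is a dot product.
new_car = np.array([4.0, 5.0, 1.0])  # 4 years old, 50k miles, intercept term
print(f"predicted price: ${new_car @ w:,.0f}")
```

The learned weights are the entire model: a prediction is nothing more than multiplying the new car's factors by the weights that the past data produced.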

Machine learning has impressive power to identify patterns and generate results and suggestions. It can save huge amounts of time and make solid predictions: many Netflix users love the recommendations on what to watch, and Amazon users might find the list of products that they might like illuminating, or at least amusing. However, machine learning models are built on past experience and, like any statistical model, depend on the features and characteristics that are included in the model, and on those that are left out. The decisions about which characteristics to include in a machine learning model, and how to represent and weight them, are deeply important, and they are not neutral or autonomous. What you choose not to count defines the shape of your model as much as what you do count.
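
A small experiment makes the point. The sketch below fits the same least-squares model twice on the same invented loan records; the only difference is whether a hypothetical zip-code-derived score is counted as a feature, and the two models score the same new applicant differently.

```python
import numpy as np

def fit_and_predict(X_train, y_train, x_new):
    """Least-squares fit, then a prediction for one new case."""
    X = np.hstack([X_train, np.ones((len(X_train), 1))])  # intercept column
    w, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    return np.append(x_new, 1.0) @ w

# Columns: income (in $10k), years at current job, a hypothetical
# zip-code-derived score. All records are invented for illustration.
applicants = np.array([
    [5.0, 1.0, 0.2],
    [5.5, 8.0, 0.9],
    [9.0, 2.0, 0.9],
    [4.0, 6.0, 0.2],
])
repaid = np.array([0.3, 0.9, 0.8, 0.5])  # past repayment outcomes

new_applicant = np.array([5.0, 5.0, 0.2])

# Model A counts all three factors; model B leaves the zip-code score out.
score_a = fit_and_predict(applicants, repaid, new_applicant)
score_b = fit_and_predict(applicants[:, :2], repaid, new_applicant[:2])
print(f"with zip-code score:    {score_a:.2f}")
print(f"without zip-code score: {score_b:.2f}")
```

Same applicants, same outcomes, same algorithm; the modeller's choice of what to count is what changed the answer.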

We need to ask why we told the algorithm to look for what it looks for, and why we selected the characteristics that we did. We also need to look at who writes the programs. Machine learning itself might be neutral, but its application will reflect its creators. In the tech industry, many of the people working on machine learning share key characteristics with one another, like gender, educational experience and many lifestyle norms. It is hard to believe that such a homogeneous environment will not embed many of its shared conscious or unconscious assumptions and value judgements into the programs it creates, potentially projecting a certain worldview out onto millions of others.

We can look at another fundamental problem from the perspective of the individual. When machine learning models are applied to people, the question being asked is, 'how have people like you behaved in the past?' There are two problems there. First, each of us is judged, and our behaviour predicted, based on 'people like us'. How are 'people like us' defined? By our race, as identified by a computer or a computer programmer? Our gender? Age? Zip code? And what are the endless associations and correlations that correspond to our race, gender, zip code, and so on? Second, what if the past is exactly what we do not want used as a predictor of future behaviour? We are judged on precisely the things individuals so often strive not to be judged on.
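
The mechanics are easy to see in a nearest-neighbour sketch, the most literal form of 'people like you': a new person's score is simply the average outcome of the most similar past records. The records and features here are invented for illustration.

```python
def euclidean(a, b):
    """Distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(history, person, k=3):
    """Average the outcomes of the k most similar past records."""
    nearest = sorted(history, key=lambda rec: euclidean(rec[0], person))[:k]
    return sum(outcome for _, outcome in nearest) / len(nearest)

# Past records: (age, income in $10k, years in zip code) -> outcome score.
# Invented for illustration.
history = [
    ((25, 4.0, 2), 0.2),
    ((27, 4.5, 3), 0.3),
    ((26, 4.2, 1), 0.1),
    ((52, 9.0, 20), 0.9),
    ((48, 8.5, 15), 0.8),
    ((55, 9.5, 22), 0.95),
]

# The individual never chose the comparison group; the features chose it.
print(predict(history, (26, 4.3, 2)))   # pulled toward the young group
print(predict(history, (50, 8.8, 18)))  # pulled toward the older group
```

Neither person is asked anything about themselves beyond those three features; the score they receive belongs to the group the distance function assigned them to.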

What makes this even more important is that, practically speaking, machine learning models are often employed at key points of transition in our lives, determining things like jail sentences and applications for jobs, college places and loans. Because these models are built on selected characteristics, they can reflect particular goals and ideologies; because they are built on history, they can, and often do, function to reaffirm and justify existing biases. These trends are the opposite of progress.