
Out of Control: Knowledge Problems in AI

Artificial intelligence is rapidly shaping the twenty-first century.


Yet, like all transformative technologies, its implementation won’t be painless. In this article, we’ll explore some knowledge problems in AI, and what this means for how we use this technology.

The last few centuries have been shaped by successive waves of technological innovation.

The first three were driven by innovations in coal and steam, electricity and the automobile, and then computing. Now, in the twenty-first century, the fourth is unfolding before our eyes, with the global economy increasingly powered by mobile internet, automation, and AI.

Within this, we can see AI having immense transformative potential.

Recent reports from PwC suggest that AI could add $15.7 trillion to the global economy by 2030.

That’s roughly the size of the combined outputs of China and India.

By implementing this technology, businesses can enhance performance and productivity, across industries and functions, from maintenance and manufacturing to logistics and sales. Meanwhile, we can see whole economies benefit from enhanced labour productivity and innovation, both fundamental to long-term economic growth.

Accordingly, vast sums of money are being put into developing this technology, from start-ups to the world’s biggest firms. And already, most aspects of our lives are touched by AI in some way.

AI now decides who gets a loan, recognises faces, creates art, finds cancerous cells in medical scans, and diagnoses eye problems with 94.5% accuracy (Google’s DeepMind); it detects fraud, helps fight terrorism (Facebook), sifts through Arctic bird songs, and wins game shows (IBM’s Watson, on Jeopardy).

Yet, despite the immense range of applications, the widespread adoption of AI is creating significant challenges for the world economy – in transforming labour markets by automating huge swathes of jobs, in the risk of exacerbating inequalities if the benefits are not effectively dispersed, and in the costs of infrastructural development.

More fundamentally, for the first time, we’ve created a technology that has the power to develop itself. And this poses significant challenges for how we use this technology.

First, because of our limited understanding of how AI systems function. And, in turn, because of the challenges around ethics, trust and autonomy that these systems create.

Knowledge Problems in AI: Automotive Industry

In 2016, a company called Nvidia unveiled an experimental autonomous car that functioned differently from any we’d seen before.

Relying solely on an algorithm that had taught itself to drive by watching a human do so, the car followed no instructions written by an engineer or programmer.

This exciting and impressive feat demonstrates the rising power of AI.
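Nvidia hasn’t published that system’s code, but the general approach – often called behavioural cloning, or end-to-end learning – can be sketched. The snippet below is a minimal, hypothetical PyTorch illustration: the network is shown pairs of camera frames and the steering angles a human chose, and adjusts itself to imitate them. The architecture, image size and data are placeholders, not Nvidia’s actual design.

```python
# A minimal, hypothetical sketch of "behavioural cloning": a network learns to
# map camera frames to steering angles purely from recordings of human driving.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(48, 1)  # predicts a single steering angle

    def forward(self, frames):
        return self.head(self.features(frames))

model = SteeringNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for a real loader of (camera_frame, human_steering_angle) pairs.
frames = torch.randn(8, 3, 66, 200)   # a batch of RGB road images
angles = torch.randn(8, 1)            # the angles the human driver chose

for _ in range(10):                        # abridged training loop
    loss = loss_fn(model(frames), angles)  # objective: imitate the human
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Nothing in the code defines a lane, a pedestrian or a stop sign; whatever “rules” the car follows emerge from the data, which is exactly why it is so hard to say afterwards why it steered the way it did.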

Yet, it’s not actually clear how the system powering Nvidia’s car works. The system is so complicated that even the engineers who designed it struggle to isolate specific reasons for any single action. And so far, there’s no way to ask it for an explanation.

So, what happens when something goes wrong? If such an autonomous car crashes into another vehicle, who is responsible? And who or what is the car choosing to protect, the passengers or those in the other vehicle?

Whatever the decision, in such an instance the AI system will implicitly be making an ethical choice on our behalf.

This is a well-trodden ethical issue for technology. In fact, MIT has a website dedicated to precisely this problem, positing different scenarios about what choices an AI should make. And as Danah Boyd – a principal researcher at Microsoft Research – notes, there are serious questions about the values being written into such systems, and who is ultimately responsible for them.

Yet, the problem is that such ethical quandaries aren’t rare – they’re everywhere, from decisions about how to spend public money to how products function.

It’s vital we understand the process behind such decisions. Otherwise, gaps in our knowledge about how AI systems function leave us open to lives governed by decisions that carry an implicit ethical dimension, but in which we take no part.

Knowledge Problems in AI: Healthcare

In 2015, at Mount Sinai Hospital in New York, an AI programme called Deep Patient was used to analyse patient database records. Having learnt using the data from 700,000 patients, Deep Patient was tested on new patients. Without instruction, it discovered patterns hidden in the data that allowed it to successfully predict the incidence of a wide variety of diseases and ailments.
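Mount Sinai hasn’t released Deep Patient’s code, but published descriptions suggest a two-stage approach: first learn a compressed representation of raw patient records without any labels, then predict future illness from that representation. The sketch below is a heavily simplified, hypothetical PyTorch version with synthetic data and a single-layer autoencoder standing in for the real system’s much larger stacked architecture.

```python
# Stage 1: learn a compact code for each patient record without any labels.
# Stage 2: predict future disease from that code. All data here is synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

records = torch.rand(1000, 500)                        # 1,000 patients x 500 coded record features
future_disease = torch.randint(0, 2, (1000,)).float()  # synthetic outcome labels

encoder = nn.Sequential(nn.Linear(500, 64), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(64, 500), nn.Sigmoid())
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

# Stage 1: unsupervised; the network only learns to reconstruct noisy records.
for _ in range(50):
    noisy = records * (torch.rand_like(records) > 0.1).float()  # denoising-style corruption
    loss = F.mse_loss(decoder(encoder(noisy)), records)
    ae_opt.zero_grad(); loss.backward(); ae_opt.step()

# Stage 2: a simple classifier on the learned "deep patient" representation.
classifier = nn.Linear(64, 1)
clf_opt = torch.optim.Adam(classifier.parameters())
for _ in range(50):
    codes = encoder(records).detach()                  # 64 numbers per patient
    loss = F.binary_cross_entropy_with_logits(classifier(codes).squeeze(1), future_disease)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
```

Everything the classifier knows about a patient is contained in those 64 learned numbers – powerful, but not something a clinician can readily interpret.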

However, Deep Patient had also made surprisingly accurate predictions about the onset of severe psychiatric disorders like schizophrenia. This success was puzzling because such disorders are notoriously difficult to predict.

The problem was, the medical professionals at the hospital didn’t understand how this was possible. Deep Patient was unable to offer any clue as to the rationale for the prediction.

Perhaps this was merely a demonstration of the power of AI in finding patterns not before found by humans.

Yet, as AI is increasingly applied across the health sphere – in the UK, AI is being implemented in multiple medical centres around the country, while in the US more than a third of hospitals plan to adopt AI within the next two years – it’s vital that we can understand how decisions are made and conclusions reached.

First, this is to enhance our learning and knowledge development within healthcare. But more than this, medicine is a discipline where there is little to no room for error. We need to be able to understand the reasoning behind any decision made. This is so we can confirm the accuracy of any decision, and so that we can effectively respond when something goes wrong.

And something always goes wrong.

When AI goes awry

Like all technologies or systems, AI can malfunction – from Amazon’s Alexa giving a user the chilling advice to “kill your foster parents”, to cameras missing the mark on racial sensitivity.

But as we see AI applied in higher-stakes settings, the social costs of any malfunction will be magnified.

One such setting is the US criminal justice system. Here, AI has been used to build risk assessment tools. These tools provide scores indicating the probability of someone committing further crimes.

The most widely used risk assessment tool in the US is COMPAS, developed by a private company called Northpointe. Tools like COMPAS are used to support decision-making at numerous points in the justice process, from assigning bond amounts and granting parole requests to sentencing.

Such tools are so popular that the US Justice Department encourages their use at every stage of the justice process. And a currently pending reform bill would mandate their use in all federal prisons.

Yet, despite such tools’ widespread use, they’re increasingly controversial.

This is because COMPAS has been injecting bias into the courts, discriminating against African American and Hispanic men even more severely than existing human decision-making does.

In effect, the AI system within COMPAS has reflected and exaggerated existing human biases: through its self-learning process it has absorbed prejudices present in society, and then made decisions based on them.

Even though former US Attorney General Eric Holder and numerous independent studies have highlighted these issues, the government has ignored the problem. Meanwhile, Northpointe refuses to disclose the calculations and processes used to arrive at the risk scores.

In short, a malfunctioning AI is making major decisions, which are highly biased and prejudicial. These decisions shape our lives. And yet, we’re unable to explore in depth the reasoning behind those decisions.
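Even when a vendor won’t open up its model, the model’s outputs can still be audited from the outside. One common check – the kind independent studies of risk scores rely on – is to compare error rates across demographic groups. The sketch below uses pandas with entirely made-up data and hypothetical column names, purely to illustrate the shape of such an audit.

```python
# Compare false positive rates across groups: among people who did NOT reoffend,
# how often were they labelled high risk? Data and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,    1,   0,   1,   0,   1,   0,   0],   # the tool's label
    "reoffended": [0,    1,   0,   1,   0,   1,   0,   0],   # what actually happened
})

def false_positive_rate(sub):
    did_not_reoffend = sub[sub["reoffended"] == 0]
    if len(did_not_reoffend) == 0:
        return float("nan")
    return did_not_reoffend["high_risk"].mean()

print(df.groupby("group")[["high_risk", "reoffended"]].apply(false_positive_rate))
# A markedly higher rate for one group than another is the kind of disparity
# the independent studies of COMPAS reported.
```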

Knowledge Problems in AI: The Knowledge Itself

AI has immense potential in its application. Yet, across sectors, we have significant gaps in our knowledge about how AI systems work. This makes it near impossible for us to work out why AI systems make specific choices.

As we come to rely more and more on such technology, we need to be able to understand how AI decisions are made. This is for a number of key reasons.

First, this is to build trust in such systems in preparation for when malfunctions occur. This requires creating mechanisms to audit and provide oversight, to reassure us as to the reliability of the system.

Second, AI systems make decisions that are implicitly ethical, as the autonomous car quandary shows us. If we don’t understand how AI decisions are made, ethical choices will be made on our behalf that we take no part in shaping.

Finally, as AI grows in influence, a lack of knowledge about the system threatens our autonomy. By failing to understand the system that increasingly governs our lives, we risk losing all control.

The complexity of such systems is a product of how they’re built, emerging from the nature of the system itself.  

The creation of today’s AI systems has been possible only in the last couple of decades, the product of technological developments in deep neural networks and deep learning. These systems learn – through observation and experience – using artificial neural networks loosely modelled on the way interactions take place in the brain. Nowhere does a programmer write commands to solve a problem. Rather, the system generates its own algorithm, essentially programming itself based on data and the desired output.
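The point is easiest to see in a toy example. In the hypothetical sketch below, no one writes a rule for computing XOR; the network is shown only inputs and the desired outputs, and adjusts its own weights until its behaviour matches the data.

```python
# No rule for XOR is written anywhere here; the network derives its own from data.
import torch
import torch.nn as nn

inputs  = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])     # the desired output

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
optimiser = torch.optim.Adam(net.parameters(), lr=0.05)

for _ in range(2000):                                 # "programming itself" from data
    loss = nn.functional.mse_loss(net(inputs), targets)
    optimiser.zero_grad(); loss.backward(); optimiser.step()

print(net(inputs).detach().round())   # the learned behaviour: usually 0, 1, 1, 0
print(list(net.parameters()))         # the "algorithm" itself: just opaque numbers
```

The learned “program” is nothing more than the final set of weights – numbers that reproduce the right answers without explaining them. Scale this up to millions of weights and you have the knowledge problem in miniature.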

These deep neural networks are the source of AI’s power, but they’ve meant that these systems have become deeper and more complex. And they’ve done so in ways we can’t as yet understand.

The problem is, we can’t just look inside a deep neural network to see how it works. The reasoning is embedded in the behaviour of thousands of simulated neurons. These are then arranged into hundreds of intricately interconnected layers. And these layers then enable the network to recognise patterns across different levels of abstraction.

How do we resolve this?

So far, the AI we’ve seen has been primarily ‘narrow AI’, where machine learning techniques are used to solve specific problems. The next step is creating ‘general AI’ systems, which can tackle general problems in the same way as humans. But this is decades away, at least.

In the meantime, little is likely to dampen the rapid advance and implementation of present and future AI systems. Yet, it’s vital that as this technology is implemented and developed, it’s done in ways that we can harness for social good, rather than as another means to exaggerate existing wealth and power inequalities.

This means such technology needs to be designed to work with human beings, putting people at the centre as a way to augment the workforce and allowing us to focus on high-value analysis, decision-making, and innovation.

Despite the importance of such issues, and as is often the case, regulatory bodies are slow to respond to such developments. There are the beginnings of regulation in the space, for instance, GDPR in the EU (protecting consumer personal data), and countries like South Korea and France working to set up specific AI and robotics regulation. But it’s not enough. There are currently no commonly accepted approaches to regulate or use AI. And there are no industry standards for testing, even as AI’s application spreads rapidly.

In the absence of proper regulation, we need to find ways to monitor and audit AI systems, lest they escape scrutiny altogether.

Specifically, this means building AI systems that have the capacity to provide clear explanations for the decisions made – without this, our trust in such processes cannot be effectively built.

This idea is gaining traction in the EU, for instance, where arguments are being made that it should be a fundamental right to interrogate AI systems to understand how they reached their conclusions. And efforts are being made to build such capabilities into the technology.

In 2015, Google altered a deep-learning-based image recognition algorithm so that, instead of spotting objects, it would generate and modify them. The resulting project, called Deep Dream, produces images that give us a glimpse of the features the program uses to recognise different objects.

Image: a Deep Dream pattern-recognition visualisation.
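Google’s exact implementation isn’t reproduced here, but the underlying trick – often called activation maximisation – can be sketched. Instead of updating the network’s weights, you run gradient ascent on the input image itself so that it increasingly excites a chosen layer. The hypothetical PyTorch snippet below assumes a recent torchvision install (it downloads pretrained VGG16 weights); the layer index and step size are arbitrary choices.

```python
# Gradient ascent on the image, not the weights: nudge the pixels so that one
# layer of a pretrained network responds more and more strongly.
import torch
import torchvision

layers = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
layer_of_interest = 10                                   # an arbitrary mid-level layer

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise

for _ in range(50):
    x = image
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == layer_of_interest:
            break
    x.norm().backward()                                  # how strongly the layer responds
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.norm() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)

# The resulting image hints at the textures and shapes this layer "looks for".
```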

Other efforts include those at the University of Washington, led by Carlos Guestrin. Guestrin’s team have developed a way for machine learning systems to provide a rationale for their outputs. Specifically, the computer finds examples from a data set and serves them up with a short explanation – for instance, highlighting significant emails or choices in image recognition. But necessarily, these are simplified explanations, and part of the information is lost along the way.
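One widely used technique from Guestrin’s group is LIME, released as an open-source Python package. The sketch below shows the general idea on a stand-in dataset and classifier (it assumes the lime and scikit-learn packages are installed): LIME perturbs a single input, fits a simple local model around it, and reports the handful of features that most influenced that one prediction.

```python
# Explain a single prediction of a black-box classifier by fitting a simple
# local surrogate model around it (LIME). Dataset and classifier are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Why did the model classify this particular record the way it did?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")   # a short, human-readable rationale
```

The output is a short, readable rationale – but it is an approximation of the model’s behaviour near one point, not a full account of its reasoning, so some information is necessarily lost along the way.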

Glimpses of AI decision making are a start, but they’re not enough.

We need to be able to make sense of the complex interplay of calculations inside deep neural networks to resolve knowledge problems in AI. It is this interplay that is crucial to higher level pattern recognition and decision-making. But this is a mathematical and statistical quagmire, and it means we have no easy solution.

As Guestrin describes, “we’re a long way from having truly interpretable AI”, and  “we haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain”.

And this ability to explain is key to eliminating knowledge problems in AI. As Russ Salakhutdinov (Apple’s Director of AI Research) notes, this ‘explainability’ is at the core of the evolving relationship between humans and intelligent machines.

Resolving these knowledge problems in AI is vital.

This will first allow us to verify AI decisions, building trust and reliability in AI systems. Second, to understand any ethical implications of such decisions. And third, to ensure human beings don’t lose all autonomy as AI increasingly shapes our lives.

AI has the potential to radically and positively transform many aspects of how we live and organise as a species. But to ensure this potential is met, we need to fully understand the AI systems to which we’re increasingly handing control of our lives.

A Terminator-style meltdown might be a long way away, for now. But without resolving these knowledge problems in AI, who is to say where we will end up on the age-old spectrum between a utopian, AI-driven future and the Luddite’s Armageddon?

Originally published here.
