Putting AI in the driving seat: why machines might soon resemble humans 23.09.16

Earlier this week I attended the New Scientist Live event. Amongst the range of topics in technology, space and human biology, what I learnt most about was the advances being made in artificial intelligence.

Making a car that can drive itself requires a number of components. Visual mapping involves firing lasers from the front of the car into its immediate surroundings to discern the space on the road around it. Visualisation software measures sunlight and shadows, allowing the car to work out where it is and at what time of day.

The car uses both to build a bank of ‘memories’ it can draw upon to keep to the road and ‘know’ where it’s going. This is what I learnt from Paul Newman, Professor of Information Engineering at the University of Oxford, in his talk on self-driving cars.
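To make the laser-mapping idea a little more concrete, here’s a toy sketch in Python. It’s entirely my own illustration rather than anything from Professor Newman’s actual software, but it shows how readings of angle and distance can be turned into a rough picture of the space ahead of the car.

```python
# A toy sketch of laser mapping (my own illustration, not real driverless-car
# software): each laser reading is an angle and a distance, which can be
# converted into a point around the car. Here we simply check whether
# anything sits in a narrow corridor directly ahead.
import numpy as np

# Pretend laser sweep: angles in radians (0 = straight ahead), ranges in metres.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 30.0)   # open road everywhere...
ranges[88:93] = 7.5                   # ...except an object about 7.5 m straight ahead

# Convert polar readings into x (forward) and y (sideways) coordinates.
x = ranges * np.cos(angles)
y = ranges * np.sin(angles)

# 'Discern the space on the road': is anything inside a 2 m-wide, 10 m-long corridor?
in_corridor = (x > 0) & (x < 10.0) & (np.abs(y) < 1.0)
if in_corridor.any():
    print(f"Obstacle roughly {x[in_corridor].min():.1f} m ahead - slow down")
else:
    print("Corridor ahead is clear")
```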

What I also learnt from him is that AI takes the process one step further. Deep learning software (a form of AI) allows the car to build models of the objects around it from the images it records and to understand how it should react to them. This ensures that if a person were to step suddenly into the middle of the road, the driverless car would stop.

The more images the deep learning software records and analyses, the more the car learns about the objects in those images and how to react to them. Just like a human child, the more it interacts with the world around it, the better it gets at understanding it.
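As a rough illustration of that point, the Python sketch below (entirely made up, using synthetic ‘image features’ rather than real camera images) trains a very simple classifier on progressively larger sets of examples. Its accuracy on unseen examples tends to climb as the training set grows, which is the same effect the deep learning software relies on, only at a far smaller scale.

```python
# A toy illustration of 'more examples means better recognition'. The data
# and class names are invented; the classifier just averages what each
# class looks like and assigns new examples to the nearest average.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class):
    # Two imaginary object classes, each described by two made-up image features.
    pedestrians = rng.normal(loc=[1.0, 0.5], scale=1.0, size=(n_per_class, 2))
    lamp_posts = rng.normal(loc=[-1.0, -0.5], scale=1.0, size=(n_per_class, 2))
    X = np.vstack([lamp_posts, pedestrians])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def nearest_centroid_accuracy(n_train):
    X_train, y_train = make_data(n_train)
    X_test, y_test = make_data(1000)
    # 'Learning' here is just averaging the examples of each class.
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return (dists.argmin(axis=1) == y_test).mean()

for n in (3, 30, 300):
    print(f"{n:3d} examples per class -> accuracy {nearest_centroid_accuracy(n):.2f}")
```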

One of the first machines to be considered ‘artificial intelligence’ was Deep Blue, a chess-playing computer developed by IBM that won both a game and a match against a reigning world champion. Rather than learning, the system worked by searching through vast numbers of possible moves and counter-moves and evaluating the positions they led to.
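For a flavour of how that kind of search works, here is a minimal minimax sketch in Python over a tiny, made-up game tree. Deep Blue’s real engine was vastly more sophisticated and evaluated millions of chess positions per second, but the idea of weighing up moves against the opponent’s best replies is the same.

```python
# A minimal minimax sketch over a tiny hand-made game tree, just to show the
# 'moves and counter-moves' idea. It is a toy, not how Deep Blue was built.

def minimax(node, maximising):
    # A leaf is just a score for us; an inner node is a list of possible moves.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Made-up tree: we choose one of three moves, the opponent then replies,
# and each number says how good the final position is for us.
game_tree = [
    [3, 5],   # move A: the opponent will pick the reply that leaves us with 3
    [2, 9],   # move B: the opponent will pick 2
    [4, 6],   # move C: the opponent will pick 4 - the best worst case for us
]

best = max(range(len(game_tree)), key=lambda i: minimax(game_tree[i], False))
print("Best move:", "ABC"[best])   # prints 'C'
```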

Artificial intelligence has developed since then. Deep learning tools have changed the way machines ‘figure stuff out’: there is no longer a predefined route to a correct answer for the machine to follow. The AI we know today is built from artificial networks that mirror the neural networks in a human brain.

Instead of mapping out the route from problem to solution, the machine is given the problem and its correct answer, and works out for itself why that answer is the right one.

This is known as ‘backpropagation of errors’ and is similar to the way humans learn. The artificial pathways that allowed the machine to work out why the answer it was given is correct are then strengthened by repeating the exercise. A training set is built for that particular problem and solution; the technique works in much the same way as, for example, a human child revising for an exam.
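For anyone curious what that looks like in code, here is a bare-bones sketch of backpropagation in Python, training a tiny network on the classic XOR exercise. It’s my own illustration rather than anything demonstrated at the event: the network is shown the problems and their correct answers, the error is pushed backwards through the network, and the weights are nudged towards values that give the right answers.

```python
# A bare-bones backpropagation sketch (a toy illustration with made-up sizes).
import numpy as np

rng = np.random.default_rng(0)

# Toy 'training set': the XOR problem and its correct answers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny network: 2 inputs -> 8 hidden units -> 1 output, plus biases.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the network's current guess at the answers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass ('backpropagation of errors'): push the error at the
    # output back through the network and nudge every weight to reduce it.
    d_out = output - y                              # error at the output layer
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # error passed back to the hidden layer
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

# After repeating the exercise, the answers should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```

Each pass through the training set is the ‘repeating the exercise’ step described above; every repetition moves the weights a little closer to values that produce the correct answers.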

It’s these artificial networks that distinguish an AI from an ordinary machine. They offer a huge advantage: the more the machine learns, the better it gets at learning, especially when certain networks are tuned to increase the speed at which it does so. The human brain works in the same way. The more we teach ourselves about a particular topic, the easier it becomes to learn about something similar.

With machine learning, the potential of AI is enormous. However, as Daniel Glaser, Director of Science Gallery at King’s College London, pointed out in his talk ‘What does AI look like?’, there are massive implications. What happens when an AI learns something that it wasn’t meant to? How can you keep track of how much a machine knows if you’ve given it an open playing field to learn anything?

One of the earliest tests of machine learning involved the military. The army supplied engineers with images of tanks, either fully or partially hidden behind trees. The AI was used to quickly identify which of the images contained tanks and which didn’t. It was given a set of rules defining a tank by its features, such as the turret or tracks.

What worried the engineers overseeing the AI was when it identified a tank completely hidden behind a set of trees. No visible features were showing for any imaging software to pick up; however, the AI had still managed to identify the tank. It had done this by teaching itself a new set of rules: it had begun to measure the shadows cast by the tanks at different times of the day. Essentially, it had become smarter.
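The toy Python sketch below (with entirely made-up data) recreates the flavour of that story: a simple classifier is trained on two features, one for visible tank parts and one for shadows, and ends up leaning on the shadow cue so heavily that it ‘spots’ a tank with no visible features at all.

```python
# A toy version of the hidden-tank story, with invented data: because a long
# shadow appears in every tank image, the classifier learns to trust the
# shadow cue - and can then 'spot' a tank that is completely hidden.
import numpy as np

rng = np.random.default_rng(2)

n = 1000
has_tank = rng.integers(0, 2, size=n)
# Feature 1: a visible part (turret or tracks) - absent when the tank is hidden.
visible_part = has_tank * rng.integers(0, 2, size=n)   # half the tanks are hidden
# Feature 2: a long shadow, cast whether the tank is hidden or not.
shadow = has_tank + 0.1 * rng.normal(size=n)

X = np.column_stack([visible_part, shadow]).astype(float)
y = has_tank.astype(float)

# A simple logistic-regression classifier trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

print("Learned weights [visible part, shadow]:", np.round(w, 2))

# A tank completely hidden behind trees: no visible part, only its shadow.
hidden_tank = np.array([0.0, 1.0])
p = 1.0 / (1.0 + np.exp(-(hidden_tank @ w + b)))
print("Probability the image contains a tank:", round(float(p), 2))  # high
```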

Already, scientists are building AI neural networks only to switch off certain sections in order to understand how machines learn. The future of AI will depend very much on how well scientists can measure, control and apply the process of machine learning.

If you have any questions about my blog, please email me at sam.middleton@aprilsixproof.com