DARPA: New Drones That ‘Think’ Like Humans Deployed To War Zones

DARPA says that new drones that can 'think like humans' will be deployed to war zones

The Defense Advanced Research Projects Agency (DARPA) is developing a new microchip that, when installed in a drone, will make it capable of learning in real time, without the need for human analysis.

The Pentagon says the chip, known as ‘Eyeriss’, is small enough to fit inside a wide range of mobile devices, has 168 cores (compared with the four found in a typical smartphone) and has been modelled on the human brain.

It says that the drones, when equipped with this new technology, will be able to strike targets in war zones without the need for human assistance.

Dailymail.co.uk reports:

For instance, a drone powered by Eyeriss could alert soldiers on the ground once it has identified a target.

This could make it more efficient than teams of human analysts scrutinising imagery, a capability highly valued in modern warfare.

The chip has been developed by researchers at MIT in collaboration with DARPA, the Pentagon’s advanced research agency, and graphics chip firm Nvidia.

Neural networks usually run on graphics processing units (GPUs), the chips that handle graphics in most computing devices with screens.

These brain-like artificial intelligence systems depend on large, multi-core graphics processors, which aren’t practical for mobile devices.

This latest breakthrough is the first time scientists have managed to shrink the technology so that it uses less power and can be built into smaller devices.

Researchers at MIT said Eyeriss is 10 times as efficient as a mobile GPU, so it could also enable mobile devices such as smartphones and tablets to run powerful artificial-intelligence algorithms locally, rather than having to upload data to the cloud for processing.

This would help to speed up apps, because the chip has its own memory and keeps data movement to a minimum, meaning a future handset wouldn’t rely so heavily on the internet.

‘Deep learning is useful for many applications, such as object recognition, speech, face detection,’ said Vivienne Sze, a professor at MIT’s department of electrical engineering and computer science whose group developed the new chip.

‘Right now, the networks are pretty complex and are mostly run on high-power GPUs.

‘You can imagine that if you can bring that functionality to your cell phone, you could still operate even if you don’t have a Wi-Fi connection.’

For years, engineers have been developing ‘deep neural networks’ designed to mimic the way our minds work, and Google recently made its research in this field open to the public with the release of TensorFlow.

Machine learning and AI are being used to make search more accurate, and Google hopes that by releasing TensorFlow the software will become even more advanced and widespread.

Last year it also released a set of images to help explain how its systems learn over time, and what happens when they get things wrong.

The images were created by feeding a picture into the neural network, and asking it to emphasise features it recognised – in this case, animals.
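As a rough illustration of that ‘emphasise what you recognise’ step, the sketch below nudges an image’s pixels so that one intermediate layer of an off-the-shelf pretrained network fires more strongly. The choice of InceptionV3, the ‘mixed3’ layer and the step settings are illustrative assumptions, not details of the network Google actually used.

```python
import tensorflow as tf

# Illustrative sketch only: model, layer name and settings are assumptions,
# not the configuration Google used for its published images.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
feature_model = tf.keras.Model(base.input, base.get_layer("mixed3").output)

def emphasise(image, steps=50, step_size=0.01):
    """Nudge the pixels so the chosen layer's activations grow stronger.

    `image` is assumed to be a float tensor of shape (1, height, width, 3),
    already scaled to the [-1, 1] range InceptionV3 expects.
    """
    image = tf.Variable(image)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activations = feature_model(image)
            score = tf.reduce_mean(activations)        # how strongly the layer 'fires'
        grads = tape.gradient(score, image)
        grads /= tf.math.reduce_std(grads) + 1e-8      # normalise the step size
        image.assign_add(step_size * grads)            # push pixels toward recognised patterns
        image.assign(tf.clip_by_value(image, -1.0, 1.0))
    return image
```

Run for enough steps, whatever patterns the chosen layer responds to begin to surface in the picture, which is the effect the released images illustrate.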

Google trains an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications the team want.

The network typically consists of 10 to 30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the ‘output’ layer is reached.

The network’s ‘answer’ comes from this final output layer.
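To make that description concrete, here is a minimal sketch of such a stack of layers being trained in TensorFlow, the library mentioned above. The layer sizes, the random stand-in data and the ten output classes are arbitrary choices for illustration, not Google’s actual setup.

```python
import numpy as np
import tensorflow as tf

# Stand-in training data: random noise in place of millions of real examples,
# with made-up labels drawn from ten imaginary classes.
x_train = np.random.rand(1000, 64).astype("float32")
y_train = np.random.randint(0, 10, size=1000)

# A small stack of layers: an input layer, hidden layers that each feed the
# next, and a final output layer that produces the network's 'answer'.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training gradually adjusts the network's parameters until its outputs
# line up with the labels it is shown.
model.fit(x_train, y_train, epochs=5)
```

Real systems use far more layers and millions of labelled images, but the loop is the same: show examples, compare the output layer’s answer with the label, and adjust the parameters.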

By repeatedly emphasising the features it recognises in this way, the software builds up an idea of what it thinks an object looks like.

Elsewhere, Google is using artificial intelligence to deliver more accurate search queries using a system called RankBrain.
