If there is one thing the United States is increasingly adept at, it's finding a way to make international warfare even scarier and more unpredictable than it already is by its very definition. According to the Daily Beast, the US military has tested, on a number of occasions, a brain implant that allows a human operator to control up to three unmanned aerial vehicles (UAVs) simultaneously with their own thoughts.
According to the Defense Advanced Research Projects Agency (DARPA), the agency which oversaw the computer-simulated trials, the tests were conducted between June 2016 and January 2017.
The Daily Beast’s account states that the test subject, being partially paralyzed, was able to complete the tests by channelling his thoughts through a medical implant embedded in his skull, which used electroencephalogram (EEG) signals to interface with a computer simulation of a drone carrying out a mission in the company of two simulated unmanned support aircraft. The support aircraft were apparently based loosely on fictional aircraft.
And how was this done? Through surgery. That’s right. The volunteer test subject, Mr Nathan Copeland, had to undergo surgery to have electrodes placed in his brain to complete the neural interface. Through this interface, Copeland was able to send brain signals to the drones, and the drones were able to send signals back to him so that he could perceive their environment. A drone was then itself able to scan its surroundings, detect an obstacle in its path, and send a signal back to the operator with its recommendation.
In a separate phase of the experiment, a test subject was able to perceive the physical space and feel what the drone saw at the same time that the drone responded to the commands it was given. You might say that man and drone became one.
While this technology is still in its early stages, it certainly gives us a glimpse of the direction the United States is headed in: robotics controlled purely by the mind of a human operator. As of right now, the available commands are limited and mainly pertain to turning the drone in a particular direction. Obviously, making the leap from something that can be achieved in a computer simulation to something that can actually be implemented in reality is another matter entirely, but the fact remains that this is the direction the future of warfare has been moving in for some time, with very little media coverage.
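To make that "limited set of turn commands" concrete, here is a deliberately toy sketch of how such a decoder could work in principle. This is not DARPA's actual system: the feature layout, centroid values and command names are all invented for illustration. The idea is simply nearest-centroid classification, mapping a small EEG feature vector to one of a handful of discrete commands.

```python
import math

# Purely illustrative: a nearest-centroid decoder that maps a two-number
# EEG feature vector (say, band power measured over the left and right
# motor cortex) to one of a few discrete drone commands. The feature
# layout, centroid values and command names are invented for this sketch.

# Hypothetical per-command centroids, as might be learned from a short
# calibration session with the operator.
CENTROIDS = {
    "turn_left": (0.8, 0.2),
    "turn_right": (0.2, 0.8),
    "hold_course": (0.5, 0.5),
}

def decode_command(features):
    """Return the command whose centroid lies closest to `features`."""
    return min(CENTROIDS, key=lambda cmd: math.dist(CENTROIDS[cmd], features))

if __name__ == "__main__":
    print(decode_command((0.75, 0.25)))  # closest to the "turn_left" centroid
```

Real brain-computer interfaces are vastly more elaborate, of course, but the small, discrete command vocabulary is exactly why current systems are restricted to simple actions like turning.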
In June 2013, a professor at the University of Minnesota was the first person to demonstrate a small quadcopter drone flying through balloon hoops inside a gym, controlled only by a computer and a cap fitted with 64 electrode sensors worn on a person’s head.
According to the International Business Times, scientists at the University of Texas at San Antonio were also already working on this technology in 2014. The scientists were being funded by the Office of the Secretary of Defense and the US Department of Defense.
In February 2015, DARPA made a further breakthrough when it announced that another volunteer, this time a quadriplegic, had flown a simulated F-35 stealth fighter using only her thoughts.
Approximately a year later, 16 people at the University of Florida used EEG headsets to steer drones along a 10-yard indoor course.
Even if the US military is not going to be physically capable of advancing mind-controlled bombers anytime soon, the future of the immediate technology at hand is still pretty damn nerve-wracking. According to the Marine Corps Times, the Corps is looking to advance a “swarm of suicide drones,” whereby the intention is to give a single operator control over 15 suicide drones with “minimal operator burden.” These drones come with a range of capabilities, including lethal warheads as well as electronic attack payloads to interfere with enemy communications.
In 2014, Panagiotis Artemiadis, director of the Human-Oriented Robotics and Control Lab at Arizona State University, began developing technology that allowed a single soldier to control multiple drones simultaneously, having received $860,000 from DARPA and the US Air Force. At the time, an operator could only control four drones at once, but the aim of the research was for an operator eventually to be able to handle hundreds of drones at one time.
Further to this, the US military has also conducted tests involving over 100 tiny remote-controlled aircraft – known as Perdix micro-drones – launched from three F/A-18 Super Hornets, as part of this “swarm” technology. Rather than being given individual orders, the drones are given broad tasks, such as observing an enemy airfield. The swarm then executes the mission with as little human input as possible, making the drones technically semi-autonomous.
As the Washington Post astutely observed, it should be pretty clear where this technology will end up in just a few short years:
“Although the Perdix is billed as a surveillance tool, it would take little to imagine the small devices as half-foot-long bombs launched to overwhelm a target by sheer volume or to decimate an enemy airfield or ship without endangering pilots.”
Imagine if the technology for these swarms were hacked or manipulated in any way by a foreign power. Or imagine if they simply malfunctioned due to a software error. In February 2016, Paul Scharre, senior fellow and director of the 20YY Future of Warfare Initiative at the Center for a New American Security, released a report warning about these very risks. In this report, Scharre warned that the technology poses “a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces.”
I don’t know about the rest of you, but just the use of the word “Robocalypse” in this report alone is enough to convince me we should not be pursuing this technology at all unless we want every terrible sci-fi movie ever made to become reality.
Right now, stationed along the South Korean side of the border is Samsung’s SGR-A1 sentry gun, reportedly capable of firing autonomously as well as performing surveillance, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. According to NBC, the robot can automatically detect North Korean soldiers walking over the border and can open fire with no human input (though this is allegedly not done in practice).
In the UK, Taranis, a top-secret unmanned aircraft that can travel at supersonic speeds, is intended to be used by the British military to carry out premeditated attacks with full autonomy.
No one talks about it, but Israel also uses unmanned ground vehicles (UGVs) not only to patrol its borders but also to replace soldiers on missions. Israel’s Border Patroller model can be armed with remote-controlled weapons, reconnaissance equipment, and additional components that cannot be fitted on the traditional Guardium model it had been using for years prior. Israel has also developed an unmanned patrol ship that is currently defending Israel’s strategic ports and patrolling the coastline.
Not to mention the ‘Harpy’, a loitering munition developed by Israel Aerospace Industries with a brain of its own, programmed to cruise until it detects hostile activity.
Israel further claims to have advanced a “drone that can reach Iran,” an unmanned aerial vehicle that can stay airborne for 24 hours and is believed to be capable of launching air-to-surface missiles. In similar fashion, the UK is also designing a fully autonomous drone that can stay airborne for 24 hours, which has already been tested in Australia.
As far as one can see, the world record appears to be held by QinetiQ’s Zephyr, an ultra-lightweight, solar-powered UAV which conducted an endurance flight lasting a whopping 14 days while flying at 70,000 feet.
I could go on, but I am getting dangerously close to my word limit.
Joking aside, the amount of online papers, studies and reports warning against this type of technology is astounding.
Don’t fret though; Professor Ronald Arkin from the Georgia Institute of Technology has an idea: a weapons system controlled by what he terms an “ethical governor.” In other words, killer robots that are programmed to comply with the international laws of war and rules of engagement.
Yeah, because that’s how ISIS will be using the technology: to comply with international law. Forget ISIS; even the Trump administration is quite open about its complete disregard for the rules of international warfare.
Maybe it’s not such a bad idea though, because you know what a killer robot that was programmed to respect the international laws of war and rules of engagement would do in the first instance? It would pack its bags and go home, never to illegally invade the rest of the planet to begin with.