
The Future of Drones in Artificial Intelligence


Are AI drones harmful or helpful? That's up to us. There are many potential benefits to count on if we enter the machine learning era wisely, but there are also risks.


For most of us, drones are little more than sci-fi gadgets: fun toys that can fly around an area, capture aerial shots, or even spy on someone. But these unmanned aerial vehicles (UAVs) are rapidly growing in popularity and are being planned for scenarios far beyond robotic toys.

Drones have redefined and enhanced many industries in just a few years. They are widely used to deliver goods quickly, scan military bases, and study the environment on a broad scale. Drones have been used in safety inspections, security monitoring, storm tracking, and border surveillance. They have also been armed with bombs and missiles for military strikes, sparing army personnel who would otherwise need to enter war zones.

Now, it seems like every company is offering drones for commercial purposes. These remote-controlled flyers have huge potential.

According to DroneBase:

“Drone-captured data is a remarkable way to deliver complete analysis to stakeholders, and it offers an affordable option for improving design, estimating, reporting, and progress tracking on worksites.”

Today's drones are still controlled by humans, but the drones of the future will be controlled by artificial intelligence. AI allows drones and other machines to make decisions and operate on their own, on our behalf. But a machine that can make decisions and learn to work independently could cause society more harm than good.

Artificial intelligence is like an unknown world we are diving into head-first, and our imagination is our only guide. Some of the brightest minds of the past century imagined what might be happening today. Could we enter a world where an army of cyborgs (just like in the Terminator series) sends us into a nuclear holocaust?

The risk posed by autonomous robots amounts to more than the fiction of Isaac Asimov, the American sci-fi writer who published a series of stories from 1940 to 1950 depicting a future shared by robots and humans.

It was in these stories that we were introduced to the Three Laws of Robotics: a set of rules dictating how artificial intelligence could coexist harmoniously with humanity. In case you don't know them, the Three Laws state:

  1. A robot must not injure a human or, through inaction, allow a human to be harmed.
  2. A robot must follow the commands given by humans unless they conflict with the First Law.
  3. A robot must protect itself unless doing so conflicts with the First or Second Law.

Obviously, these Three Laws are fiction, but Asimov introduced us to a very real and threatening question: once a machine can work independently, making decisions and learning from its own accumulated knowledge, what keeps it from overwhelming mortal society?

As artificial intelligence jumps from the pages of science fiction into reality, we may face real-life scenarios where those laws come into play. What if robotic military weapons are deployed with the capacity to kill millions in one strike? What happens when these autonomous raiders evolve to overlook the commands of their creators? Mother Jones examined the possible risks of autonomous robots in 2013. Steve Goose, a spokesperson for the Campaign to Stop Killer Robots, said:

“It’s not about scenarios like an army of Terminators. It will look more like armored trucks and stealth bombers.”

Though the technology is still a ways off from what was predicted in 2013, drones and other AI weapons are arriving sooner than expected. In 2012, the Pentagon issued a directive calling for guidelines to minimize the probability and consequences of failures in semi-autonomous and fully autonomous weapon systems. Meanwhile, unmanned fighter drones have already been deployed along the border in South Korea. Developments like these have led several major figures in the tech world, including Elon Musk, to call for a ban on "killer machines."

More than 114 specialists, including Stephen Hawking and Elon Musk, warned:

“We do not have much time to act. Once this Pandora's box is opened, it will be hard to close.”

With the recent release of Slaughterbots, the Future of Life Institute brought the point home: the terrifying sci-fi short film shows what could happen in a world overrun by uncontrolled autonomous killing machines.

Stuart Russell, an AI researcher at UC Berkeley and a scientific advisor to FLI, says:

“I took part in making this film because it makes the issues clear. While military lawyers and government ministers are stuck in the 1950s, arguing about whether machines can ‘make decisions in the human sense’ or be truly ‘autonomous,’ the technology behind such weapons of mass destruction is moving forward. The philosophical distinctions are irrelevant; what matters is the catastrophic effect on humanity.”

Set against the backdrop of the near future, the film depicts the introduction of AI-powered drones that eventually fall into the wrong hands and become tools of assassination, targeting thousands of university students and politicians. The film supports FLI's call for a ban on autonomous killers, a push also directed at the United Nations Convention on Conventional Weapons, which was attended by representatives of over 70 nations.

Is it too late to stop a robotic catastrophe in the near future? The technology is already available, and Russell warns that it could prove apocalyptic if we fail to act now. According to him, the window to avert that kind of global destruction is closing fast.

At the film's climax, Russell warns that the short is more than speculation: it shows what could happen if we miniaturize and integrate technologies we already have. There is huge potential for AI to benefit humans, even in defense. But setting these killers free to decide for themselves to kill humans would be devastating to our freedom and our society, and thousands of his fellow scientists agree.

Russell is right in at least two ways. First, the technology is already available. Roboticists at Carnegie Mellon University published a research paper called Learning to Fly by Crashing, which describes how they taught an AR Drone 2.0 to navigate 20 indoor environments through trial and error. The drone mastered its aerial surroundings within just 40 hours of flight time, over the course of 11,500 collisions and corrections.

The researchers added:

“We built a drone whose sole purpose is to crash into objects. We use all of this negative flying data, in conjunction with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation.”
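
In plain terms, the method is self-supervised: camera frames captured just before a collision are labeled "unsafe," while frames from safe stretches of the same flights are labeled "safe," and a binary classifier learns to predict whether flying straight ahead is a good idea. The following is only a minimal sketch of that idea in PyTorch; it is not the CMU team's actual code, and the toy network and every name in it are hypothetical.

```python
# Hypothetical sketch of learning from crash data (not the CMU code).
# Frames sampled near a collision get label 0 ("unsafe"); frames from
# earlier, safe parts of the same trajectory get label 1 ("safe").
import torch
import torch.nn as nn

class SafetyNet(nn.Module):
    """Tiny CNN scoring a camera frame: P(flying straight is safe)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):                  # frames: (N, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)          # raw logits

model = SafetyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames, labels):
    """One gradient step on a batch of (frame, safe/unsafe) examples."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# At flight time, a controller can compare safety scores for left,
# center, and right crops of the camera image and steer toward the
# crop the network considers safest.
```

The appeal of the approach is that crash data is cheap to collect, so the drone effectively labels its own training set.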

Second, Russell is right about the benefits and potential of AI drones. Intel recently used the technology to collect video and other information on wildlife, helping scientists conduct research less invasively and more efficiently.

Naveen Rao, Vice President and General Manager of the AI Products Group at Intel, says:

“AI can help us solve some of our most complex challenges by enabling problem-solving at massive scale, including uncovering new scientific discoveries.”

Meanwhile, Avitas Systems, a GE subsidiary, has started deploying drones to automate inspections of infrastructure such as power lines, pipelines, and transportation systems. The AI drones not only perform the surveillance efficiently and safely; their machine learning technology can also detect anomalies in the collected data.
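
Anomaly detection of this kind is usually framed as learning what "normal" inspection data looks like and flagging anything that deviates from it. Avitas Systems has not published its pipeline, so the sketch below is purely illustrative: it trains scikit-learn's IsolationForest on feature vectors summarizing healthy-asset image patches, then flags simulated defect patches.

```python
# Illustrative anomaly detection on inspection features (hypothetical,
# not Avitas Systems' actual pipeline). Each row stands for one image
# patch summarized as a feature vector (e.g. texture/color statistics).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))    # healthy pipeline patches
corroded = rng.normal(4.0, 1.0, size=(10, 8))   # simulated defect patches

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(normal)                            # learn "normal" only

scores = detector.predict(corroded)             # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(scores == -1)} of {len(corroded)} defect patches")
```

Because the model only ever needs examples of healthy equipment, it can flag defect types no one thought to label in advance.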

BNSF Railway has recently used drones for its inspections. Pete Smith of TE Connectivity told Aviation Today:

“The drones can be pre-programmed to follow the tracks. They collect data with high-resolution cameras, taking pictures of the track as they go. They capture huge amounts of data, and artificial intelligence is used to analyze it.”
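
A pre-programmed flight like the one Smith describes is essentially a waypoint mission: a list of GPS coordinates along the track, with photos captured at each stop for later analysis. Here is a minimal, hypothetical sketch of that loop; the Drone class stands in for whatever flight SDK an operator would actually use, not BNSF's system.

```python
# Hypothetical waypoint-mission sketch for a track-inspection flight.
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float   # altitude above ground, in meters

class Drone:
    """Stand-in for a real flight SDK; prints instead of flying."""
    def goto(self, wp: Waypoint) -> None:
        print(f"flying to ({wp.lat:.5f}, {wp.lon:.5f}) at {wp.alt_m} m")

    def capture_photo(self, tag: str) -> str:
        path = f"{tag}.jpg"
        print(f"captured {path}")
        return path

def run_inspection(drone: Drone, track: list) -> list:
    """Fly waypoint to waypoint along the track, photographing each
    segment; the returned image paths would later be fed to an
    analysis model like the anomaly detector sketched above."""
    photos = []
    for i, wp in enumerate(track):
        drone.goto(wp)
        photos.append(drone.capture_photo(f"track_{i:04d}"))
    return photos

# Example mission: five waypoints spaced along a stretch of track.
mission = [Waypoint(46.5891 + 0.001 * i, -112.0391, 60.0) for i in range(5)]
run_inspection(Drone(), mission)
```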

Are AI drones harmful or helpful? That depends on the decisions we make. There are many potential benefits to count on if we enter the machine learning era wisely, but there are real risks if we fail to act.

It is no coincidence that scientists such as Hawking and Musk have called on the United Nations to ban autonomous killers. A global moratorium may be what it takes to control the use of AI and secure humanity against its own invention.


Topics:
ai, machine learning, drones, ar, robotics

