As machines branch out into ever more unknown and unpredictable environments, so too is their ability to understand those environments improving.
The platform was developed by professors at the Stanford Artificial Intelligence Lab and aims to tackle some of the toughest questions in computer vision, with the eventual goal of developing machines that can understand what they see.
Knowing What You See
Researchers at Boston University are conducting work on the same topic, and have developed a robot that is capable of recognizing specific objects and then maneuvering around them without human support.
The ability of robots to navigate for themselves is hugely important and feeds into a vast range of possible applications. The Boston project used a deep neural network capable of processing huge amounts of data in order to recognize simple objects.
“There’s an algorithm that will take a ton of pictures of one object and will put it in and compile it all,” they say. “Then we basically assign a number to it.” The robot “will come upon an object and it will say, ‘Oh, there’s an object in front of me, let me think about it.’ It will…find a picture that corresponds with the object, pick that number, and then it will be able to use that as a reference, so it can exclaim, ‘Oh, it’s a ball,’ ‘It’s a cone,’ or whatever object I had decided to teach it.”
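The researchers don't publish their implementation, but the idea they describe — compile many example images of each object into a reference, assign each object a number or label, and match new observations against the closest reference — can be sketched in a few lines. Everything below is a hypothetical illustration: the feature vectors, function names, and the nearest-reference matching rule are assumptions, standing in for the deep network the project actually used.

```python
import numpy as np

def compile_references(examples_by_label):
    """'Compile' the many example vectors for each object into
    one averaged reference vector per label."""
    return {label: np.mean(vecs, axis=0)
            for label, vecs in examples_by_label.items()}

def classify(references, observation):
    """Match a new observation to the label whose reference
    vector lies closest to it."""
    return min(references,
               key=lambda label: np.linalg.norm(references[label] - observation))

# Toy "features" standing in for processed camera data (assumed).
examples = {
    "ball": [np.array([1.0, 0.1]), np.array([0.9, 0.2])],
    "cone": [np.array([0.1, 1.0]), np.array([0.2, 0.9])],
}
refs = compile_references(examples)
print(classify(refs, np.array([0.95, 0.15])))  # → ball
```

In practice the "compiling" step would be done by a trained neural network producing learned features rather than hand-averaged vectors, but the reference-and-match structure is the same.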
Of course, while this project achieved some nice results, it is still at a very early stage, but other projects highlight the tremendous progress being made.
Foremost amongst these is undoubtedly the high-profile video of Boston Dynamics’ Atlas robot that was published last month.
The video, shared below, shows the robot carrying out a range of tasks under a variety of distractions.
“It uses sensors in its body and legs to balance and LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain and help with navigation,” the company explains.
It’s clearly an area that is growing at a considerable pace, with detectors in driverless cars proving particularly adept at identifying objects, whether other cars, pedestrians, or cyclists.
What’s more, these detectors are becoming so good at identifying such objects that they are beginning to predict what those objects will do next. Hopefully, having companies such as Boston Dynamics operating within the Google stable will ensure a cross-fertilization of ideas between the two fields.
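The article doesn't say how that prediction is done, but the simplest version of "predicting what an object will do next" is extrapolating its tracked motion. The sketch below is a hypothetical illustration using a constant-velocity model; real driverless-car systems use far richer motion and behavior models.

```python
def predict_next(positions, steps=1):
    """Extrapolate an object's next position from its last two
    tracked positions, assuming constant velocity (an assumption
    for illustration; the article names no specific method)."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per frame
    return (x1 + vx * steps, y1 + vy * steps)

# A pedestrian tracked over three frames, walking steadily right.
track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(predict_next(track))  # → (3.0, 0.0)
```

Even this crude extrapolation captures the core idea: once a detector can both identify an object and follow it over time, anticipating its near-term behavior becomes possible.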