NVIDIA researchers have built a robot that uses deep neural networks
It can replicate human actions
The networks were trained on NVIDIA’s high-performance Titan X GPU
Don’t worry, it’s not the age of the Terminator or robots taking over the world yet. We’re getting closer, but for now you’re safe. Thanks to breakthroughs in machine learning and deep learning, though, our robots are getting smarter every day.
How does this work?
Researchers at NVIDIA have developed a robot driven by deep learning algorithms that learn and train themselves by observing human actions. NVIDIA had already developed an AI that can detect objects, pick them up and move them. This goes a step further.
The framework consists of deep neural networks built to perceive objects, generate programs and execute them. In simple words, the robot watches human actions, learns from them and tries to replicate them! Somewhat like training a baby, only much faster.
The neural networks were trained on NVIDIA’s Titan X GPU. An illustration of how it works is given below:
- First, the human demonstrates the task the robot is to perform, such as standing or sitting
- The robot watches the actions via a camera and perceives the position and action of the human
- The neural network then generates a plan on how to execute the perceived task
- Finally, it carries out the execution
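To make the steps above concrete, here is a minimal sketch of that perceive → plan → execute pipeline in Python. All names (`Observation`, `perceive`, `generate_program`, `execute`) are hypothetical illustrations of the idea, not NVIDIA’s actual code or API; the real system uses trained neural networks at each stage.

```python
# A toy sketch of the observe -> perceive -> plan -> execute pipeline.
# Every class and function here is a hypothetical stand-in, not NVIDIA's API.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """What the perception stage extracts from a camera frame."""
    objects: List[str]   # objects detected in the scene
    action: str          # the human action inferred from the frame


def perceive(frame: dict) -> Observation:
    """Stand-in for the perception network: raw frame -> structured state."""
    return Observation(objects=frame["objects"], action=frame["action"])


def generate_program(obs: Observation) -> List[str]:
    """Stand-in for the program-generation network: turns the perceived
    demonstration into a sequence of primitive robot commands."""
    steps = [f"locate {obj}" for obj in obs.objects]
    steps.append(f"replicate {obs.action}")
    return steps


def execute(program: List[str]) -> List[str]:
    """Stand-in for the execution stage: here we just log each step."""
    return [f"executed: {step}" for step in program]


# A toy demonstration frame: a human picks up a cube.
frame = {"objects": ["red cube"], "action": "pick up red cube"}
for line in execute(generate_program(perceive(frame))):
    print(line)
```

The point of the sketch is the separation of stages: perception produces a structured description, planning turns that description into an executable program, and execution runs it, which mirrors the modular design the researchers describe.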
You can also check out their research paper in detail here.
There is also a video that shows this brilliant work in action.
As you can imagine, the applications of this kind of technology are numerous, from gaming to home helpers that can perform a variety of tasks.
The future definitely looks like something out of a sci-fi movie. And personally, I’m pretty excited about it.
Read Further: