A branch of Facebook dedicated to robotics teaches robots to learn from their mistakes

In a rooftop garden at Facebook’s Menlo Park headquarters, a six-legged robot named Daisy whirs and clatters as it staggers across sandy ground.

Daisy, which looks like a giant robotic spider, is part of a robotics research project within Facebook’s Artificial Intelligence Research (FAIR) group. Since last summer, FAIR scientists have been helping robots learn to walk and grasp objects. The goal is for them to acquire these skills the same way people do: by exploring the world around them and learning through trial and error.

Many people may not know that the world’s largest social network is also tinkering with robots. The group’s work is not meant to show up in, say, the Facebook news feed. Rather, the hope is that the project will help researchers make artificial intelligence learn more independently, and let robots learn from less data than humans typically must collect before an AI can function properly.

In theory, this work could eventually help improve the kinds of AI tasks many technology companies (including Facebook) are working on, such as translating text from one language to another or recognizing people and objects in images.

In addition to Daisy, Facebook researchers are working with robotic arms: multi-jointed limbs and robotic hands equipped with tactile sensors on the fingertips. They are using a machine learning technique called “self-supervised learning”, in which robots must figure out for themselves how to do things by repeatedly attempting an action and then using the data collected by their sensors to keep improving.
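To make the idea concrete, here is a minimal sketch of such a self-supervised loop in Python. The `Robot` class and its `execute` method are hypothetical stand-ins for real hardware, not FAIR's actual system; the point is that each attempt labels itself, since the sensor readings serve as the learning signal.

```python
import numpy as np

class Robot:
    """Hypothetical stand-in for real hardware."""

    def execute(self, command: np.ndarray) -> np.ndarray:
        # Run a motor command and return sensor readings.
        # Stubbed here as a noisy echo of the command.
        return command + np.random.normal(scale=0.05, size=command.shape)

def collect_self_supervised_data(robot: Robot, trials: int = 100):
    dataset = []  # (command, observed outcome) pairs the robot gathers itself
    for _ in range(trials):
        command = np.random.uniform(-1.0, 1.0, size=4)  # explore an action
        outcome = robot.execute(command)                # see what happened
        # No human labeling step: the sensed outcome is the training signal.
        dataset.append((command, outcome))
    return dataset

data = collect_self_supervised_data(Robot())
```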

Research is still at an early stage: FAIR research scientist Franziska Meier said the robots are just beginning to reach for objects, and have not yet worked out how to grasp them. Like children, who must learn to use their muscles before they can crawl, let alone push themselves up to stand, robots must go through the same process of discovery.

But why make a robot figure out these kinds of tasks on its own?

The robot has to understand the consequences of its actions, Meier told CNN Business. “As humans we can learn this kind of thing, but we have to be able to teach a robot how to learn it,” she said. Researchers were also surprised to find, she added, that letting robots explore and work things out for themselves can speed up the learning process.

In a demonstration last week, Daisy was running in demo mode, but through self-supervised learning it has been slowly learning to walk. The six-legged robot, which the researchers bought off the shelf, was chosen for its stability, said research scientist Roberto Calandra. Before starting to move, the robot felt out the various surfaces it was placed on, which included smooth floors inside Facebook’s offices as well as other terrain. Slowly it learned to move forward, using the sensors on its legs to account for factors such as balance and body position.
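A rough illustration of how leg-sensor feedback alone might refine a gait follows. Everything here is an assumption made for the sketch: the three gait parameters, the `stability_score` stand-in for a real walking trial, and the simple hill-climbing search are illustrative, not FAIR's actual method.

```python
import numpy as np

def stability_score(params: np.ndarray) -> float:
    """Stand-in for a trial on the robot: walk with these gait parameters
    and score forward progress minus wobble, as read from the leg sensors."""
    target = np.array([0.4, 0.1, 0.8])  # pretend-optimal gait (stub)
    return -float(np.sum((params - target) ** 2))

def refine_gait(steps: int = 200, sigma: float = 0.05) -> np.ndarray:
    params = np.zeros(3)                # e.g. stride length, leg lift, phase
    best = stability_score(params)
    for _ in range(steps):
        candidate = params + np.random.normal(scale=sigma, size=params.shape)
        score = stability_score(candidate)
        if score > best:                # keep changes that walk better
            params, best = candidate, score
    return params

learned_gait = refine_gait()
```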

The researchers also tested another robot: an articulated arm with a gripper for grasping objects. Given target coordinates, it analyzed the point in space the researchers wanted it to reach and, over about five hours, learned to reach the object, trying different movements each time and adding the data from each attempt to what it had sensed before.
“Every time, in practice, it tries something, gets more data, optimizes the model. And that’s just the beginning,” said Meier.
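The cycle Meier describes (try, collect data, optimize the model, plan again) can be sketched for a reaching task roughly like this. The linear forward model, the `execute` stub, and the random-shooting planner are illustrative assumptions, not the actual FAIR system.

```python
import numpy as np

rng = np.random.default_rng(0)
true_map = rng.normal(size=(3, 3))   # unknown arm response, stubbed

def execute(command: np.ndarray) -> np.ndarray:
    """Stand-in for the real arm: motor command in, hand position out."""
    return true_map @ command + rng.normal(scale=0.01, size=3)

target = np.array([0.5, -0.2, 0.3])  # point the researchers want reached
commands, outcomes = [], []

for attempt in range(50):
    if len(commands) < 5:
        # Too little data yet: explore with a random movement.
        cmd = rng.uniform(-1.0, 1.0, size=3)
    else:
        # Fit a linear forward model to everything sensed so far...
        A, *_ = np.linalg.lstsq(np.array(commands), np.array(outcomes),
                                rcond=None)
        # ...then pick the candidate command predicted to land closest.
        candidates = rng.uniform(-1.0, 1.0, size=(256, 3))
        predicted = candidates @ A
        cmd = candidates[np.argmin(np.linalg.norm(predicted - target, axis=1))]
    commands.append(cmd)
    outcomes.append(execute(cmd))    # every attempt adds to the dataset
```

As attempts accumulate, the fitted model improves and the planned movements land closer to the target, mirroring the loop of trying, gathering data, and re-optimizing.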

Calandra said one reason it is exciting to work on this kind of AI with physical robots, rather than purely in software on a computer, is that it forces the algorithms to use data efficiently. They have to learn tasks in hours or days, in real time, rather than in software simulations that can be sped up to compress months or years of experience.
“If you start out knowing that it’s just a simulation, you can lean on the ability to perform hundreds of risk-free attempts. That approach is scientifically interesting, yes, but it does not apply to the real world,” said the researcher.