3D vision allows robots to take on challenging terrain
Researchers led by the University of California San Diego have developed a model that trains four-legged robots to see more clearly in 3D, enabling them to easily cross challenging terrain such as stairs, rocky ground and gap-filled paths, while clearing obstacles in the way.
Xiaolong Wang, senior author of the study and professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, said the robots could be deployed in more complex real-world environments.
The robot is equipped with a forward-facing depth camera on its head, which is tilted downwards at an angle that gives it a view of both the scene in front of it and the terrain beneath it.
To improve 3D perception, the researchers developed a model that takes 2D images from the camera and translates them into 3D space. It does this by looking at a short video sequence consisting of the current frame and a few previous frames, and extracting pieces of 3D information from each 2D frame. Alongside this visual data, the model incorporates proprioceptive information about the robot's leg movements, such as joint angles, joint velocities and distance from the ground. The model then compares the information from each previous frame with the current frame to estimate the 3D transformation between them.
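The article does not give implementation details, but the idea of keeping a short rolling window of camera frames fused with proprioceptive readings can be sketched as follows. The feature extractor, the buffer length and the fusion-by-concatenation are illustrative assumptions, not the team's actual architecture:

```python
import numpy as np
from collections import deque

def extract_features(depth_frame: np.ndarray) -> np.ndarray:
    """Toy visual feature: a downsampled depth patch flattened to a vector.
    (A stand-in for whatever learned encoder the real model uses.)"""
    return depth_frame[::4, ::4].ravel()

class FrameMemory:
    """Short-term memory over the last few frames, pairing each depth image
    with the robot's proprioceptive state at that moment."""

    def __init__(self, horizon: int = 4):
        # Only the most recent `horizon` frames are kept.
        self.buffer = deque(maxlen=horizon)

    def add(self, depth_frame: np.ndarray, proprioception) -> None:
        # proprioception: e.g. joint angles, joint velocities, height above ground
        self.buffer.append(
            (extract_features(depth_frame), np.asarray(proprioception, dtype=float))
        )

    def fused_state(self) -> np.ndarray:
        """Concatenate visual and proprioceptive features across the horizon,
        giving one vector a downstream policy could consume."""
        return np.concatenate([np.concatenate(pair) for pair in self.buffer])
```

In this toy version a 32x32 depth frame yields 64 visual features, so with three proprioceptive values per frame and a four-frame horizon the fused state has 4 x (64 + 3) = 268 entries.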
The model fuses all the information together so it can use the current frame to synthesise the previous frames. As the robot moves, the model checks the synthesised frames against the frames already captured by the camera. If they do not match, the model is able to make corrections.
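The synthesise-and-check loop described above can be illustrated with a deliberately simplified sketch. Here the "3D transformation" is reduced to a horizontal pixel shift, and the correction step is a small local search, both illustrative assumptions rather than the study's method:

```python
import numpy as np

def synthesise_previous(current: np.ndarray, shift: int) -> np.ndarray:
    """Synthesise the previous frame from the current one by undoing the
    estimated motion (toy model: a circular horizontal shift)."""
    return np.roll(current, -shift, axis=1)

def refine_shift(current: np.ndarray, previous: np.ndarray,
                 initial_shift: int, search: int = 2) -> int:
    """Correct the motion estimate: pick the shift whose synthesised frame
    best matches the frame actually captured by the camera."""
    best_shift, best_err = initial_shift, np.inf
    for s in range(initial_shift - search, initial_shift + search + 1):
        err = np.mean((synthesise_previous(current, s) - previous) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

If the initial estimate is slightly off, the mismatch between the synthesised and captured frames drives the correction, which mirrors the self-checking behaviour the researchers describe.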
This 3D representation drives the robot’s movement, allowing it to remember both previous visual information and the actions its legs have taken before.
“Our approach allows the robot to build a short-term memory of its 3D surroundings so that it can act better,” Wang said.
The study builds on previous work by the team, where researchers developed algorithms that combine computer vision with proprioception to enable a four-legged robot to walk and run on uneven ground while avoiding obstacles. The 3D perception advancements allow the robot to walk on more challenging terrain than before.
According to Wang, this makes the robot more versatile across different scenarios.
There are limitations to the approach, however, as it does not guide the robot towards a specific goal or destination. When deployed, the robot walks in a straight path and, if it encounters an obstacle, avoids it by switching to another straight path.
“The robot does not control exactly where it goes,” Wang said. “In future work, we would like to include more planning techniques and complete the navigation pipeline.”