Robo-picker can think for itself


Wednesday, 28 February, 2018



Smarter than traditional ‘pick and place’ robots, a newly developed robo-picker prototype decides how to grasp an object, picks it up, determines what it is and where it belongs, and puts it there. Ultimately, this robotic system could be extremely useful in warehouse sorting and other picking and clearing tasks.

MIT and Princeton University engineers who developed the system received some sponsorship from ABB, Mathworks and Amazon.

Their system consists of a standard industrial robotic arm outfitted with a custom gripper and suction cup. An ‘object-agnostic’ grasping algorithm enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.

Once grasped, the robot lifts the selected item from the bin and a set of cameras takes images of the object from various angles. The robot compares these images with a library of other images to find the closest match and so identifies the item. Once identified, the item can be stowed in the appropriate place.

The ‘grasp-first-then-recognise’ workflow turns out to be more effective than the sequences used by other pick-and-place technologies.
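The sequence described above can be sketched in a few lines. This is a toy illustration only: every function name here is invented, and the function bodies are trivial stand-ins for the robot's actual perception and control.

```python
# Toy sketch of the grasp-first-then-recognise workflow.
# All names and data are illustrative, not the team's actual API.

def choose_grasp(bin_items):
    """Grasp step: pick an item without knowing what it is."""
    return bin_items.pop()

def photograph(item):
    """Lift the item clear of the clutter and image it (stand-in: identity)."""
    return item

def recognise(image, library):
    """Recognise step: match the image against a library of known products."""
    return image if image in library else "unknown"

def stow(label, shelves):
    """Place the now-identified item in its appropriate spot."""
    shelves.setdefault(label, []).append(label)

bin_items = ["masking tape", "duct tape"]
library = {"duct tape", "masking tape"}
shelves = {}
while bin_items:
    item = choose_grasp(bin_items)        # grasp first...
    label = recognise(photograph(item), library)  # ...then recognise
    stow(label, shelves)
print(shelves)
```

The point of the ordering is that recognition happens only after the item is isolated from the clutter, which is what makes matching tractable.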

Building a library of successes and failures

Currently, most pick-and-place systems are designed to function only in tightly controlled environments, with the robots performing one specific, repetitive task, such as gripping a package off an assembly line, always in the same, carefully calibrated orientation.

The robo-picker technology, however, will enable robots to be more flexible, adaptive and intelligent, able to work in unstructured settings where they can recognise and sort thousands of items amid the clutter.

How to grasp the object

The researchers employed four main grasping behaviours:

  • Suctioning onto an object vertically.
  • Suctioning onto an object from the side.
  • Gripping the object vertically like the claw in an arcade game.
  • Gripping vertically, then using a flexible spatula to slide between the object and the wall (for objects that lie flush against a wall).

The robots were shown images of bins cluttered with objects, captured from the robot’s vantage point. They were then shown which objects were graspable, with which of the four main grasping behaviours, and which were not, with each example marked as a success or failure. After hundreds of trials a library of picking successes and failures was created. This library was incorporated into a deep neural network, a class of learning algorithms that enables the robot to match the problem it currently faces with a successful outcome from the past.
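A minimal sketch of choosing among the four grasping primitives from such a library is shown below. The primitive names and trial data are invented placeholders, and the real system scores candidates with a deep neural network over image data, not the simple success-rate lookup used here.

```python
# Hypothetical sketch: pick the grasping primitive with the best track
# record in a library of past successes and failures. The real system
# learns this mapping with a deep neural network; this is a toy stand-in.

PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def record_trial(library, primitive, success):
    """Append one trial outcome (True = success) for a grasping primitive."""
    library.setdefault(primitive, []).append(success)

def best_primitive(library):
    """Return the primitive with the highest observed success rate."""
    def rate(p):
        trials = library.get(p, [])
        return sum(trials) / len(trials) if trials else 0.0
    return max(PRIMITIVES, key=rate)

library = {}
record_trial(library, "suction-down", True)
record_trial(library, "suction-down", False)   # suction-down: 1/2
record_trial(library, "grasp-down", True)      # grasp-down: 1/1
print(best_primitive(library))  # grasp-down
```

The design point carried over from the article is the same: behaviour selection is driven by accumulated success/failure evidence rather than hand-coded rules per object.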

Ultimately, the robots could predict which items were graspable or suctionable, and which combination of these picking behaviours was likely to succeed. Once an item was gripped and lifted clear of the clutter, it was easier for the robot to recognise it, ready for stowing.

From pixels to labels

A perception system was developed in a similar way to the grasping algorithm, enabling the robots to recognise and classify objects after they had been grasped.

To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labelled each image with the correct identification — for instance, duct tape versus masking tape — and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.
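The labelled-library idea can be illustrated with a nearest-neighbour sketch. The feature vectors and labels below are invented; the actual system learns its pixel-to-label mapping with a trained network rather than the hand-made vectors and Euclidean matching used here.

```python
import math

# Illustrative sketch of matching a new image against a labelled library.
# Each "image" is a made-up 3-number feature vector; the real system uses
# learned features from product photos, not these toy values.

LIBRARY = {
    "duct tape":    [0.9, 0.1, 0.4],
    "masking tape": [0.8, 0.7, 0.3],
    "scissors":     [0.1, 0.2, 0.9],
}

def classify(features):
    """Return the library label whose feature vector is closest (Euclidean)."""
    return min(LIBRARY, key=lambda label: math.dist(features, LIBRARY[label]))

print(classify([0.85, 0.15, 0.35]))  # duct tape
```

This captures the duct-tape-versus-masking-tape distinction the researchers describe: two visually similar products stay separable as long as their library entries differ in feature space.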

Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the Amazon Robotics Challenge, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Sixteen teams took part in a competition to pick and stow objects from a cluttered bin.

In the end, the MIT/Princeton robot had a 54% success rate in picking objects up using suction and a 75% success rate using grasping, and was able to recognise novel objects with 100% accuracy. The robot also stowed all 20 objects within the allotted time.

The team is now working to further improve the pick-and-place technology, particularly speed and reactivity. Tactile sensors have been added to the robot’s gripper and a new training regime is already underway.

Image credit: Melanie Gonick/MIT




  • All content Copyright © 2024 Westwick-Farrow Pty Ltd