Grasping Part Two - This Time It's Personal

More on grasping

06 Sep, 2022


In our previous post, we introduced the challenge of universal grasping and explored just how complex picking something up can be. This week we’ll explore the latest technology in robot picking and separate reality from hype. We’ll discuss -

  • The hottest start-ups in grasping
  • How Robot Olympiads benchmark progress
  • What technologies are working in the industry today

State-of-the-art start-ups

The last two years have seen an explosion in universal grasping companies and, from appearances, they’re having a lot of success - let's look at our top picks.

DeepMind, OpenAI and Dyson

First, we have the three giants of the field - Google’s DeepMind, OpenAI and Dyson. One of our first bulletins covered Google’s approach to flexible robotics, and they have really been ramping up their focus on robotics with spin-outs (Intrinsic, Everyday Robots), acquisitions (Vicarious) and a whole lot of research. Their approach uses machine vision, Deep Reinforcement Learning (DRL) and simulation to train their systems. DeepMind is a major proponent of combining simulation with DRL, and it seems like many other companies are following their lead. They also have a basement filled with different robots undertaking real-life DRL. OpenAI also uses vision, DRL and simulation to tackle grasping and manipulation. If you want to learn more about DRL, subscribe to the newsletter - we will be diving into the topic in more detail.
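To make the "DRL plus simulation" recipe more concrete, here is a deliberately tiny sketch of the loop these systems are built around: attempt a grasp in simulation, receive a success/failure reward, update the policy, repeat. This toy uses a tabular bandit over a handful of discretised grasp points rather than a physics engine and a deep network, and it isn't a description of any company's actual stack:

```python
# Toy sketch of the simulate -> grasp -> reward -> update loop behind DRL
# grasping. The hidden success probabilities stand in for physics; real
# systems use simulators and deep networks, not a tabular bandit.
import numpy as np

rng = np.random.default_rng(0)
N_GRASP_POINTS = 16                                 # discretised candidate grasps
success_prob = rng.uniform(0, 1, N_GRASP_POINTS)    # hidden "physics" of each grasp

q = np.zeros(N_GRASP_POINTS)       # learned value estimate per grasp
counts = np.zeros(N_GRASP_POINTS)

for episode in range(5000):
    # Epsilon-greedy: mostly exploit the best-known grasp, sometimes explore.
    if rng.random() < 0.1:
        a = int(rng.integers(N_GRASP_POINTS))
    else:
        a = int(np.argmax(q))
    reward = float(rng.random() < success_prob[a])   # simulated grasp attempt
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]              # incremental mean update

print("best learned grasp:", int(np.argmax(q)),
      "| true best:", int(np.argmax(success_prob)))
```

Swap the table for a neural network, the grasp index for a continuous gripper pose and the hidden probabilities for a physics simulator, and you have the skeleton of the approach described above.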


Dyson announced a move into home robotics, and this has been followed by a lot of marketing and a huge recruitment campaign. Their 3-minute intro video shows a range of custom end effectors and vision systems tackling household tasks like loading a dishwasher and cleaning. It’s unclear what their first product will be or what control strategy they are using, but from research papers sponsored by the company, we can see they use vision-based systems with DRL and simulation. If you want to learn more, ReorientBot and SafePick are great examples of research they have sponsored.

Next, we have the new entrants -

Covariant - is a Californian robotics company developing a universal AI for robots. The company has a few universal picking solutions and although it's unclear what tech stack they use, their engineers have a background in imitation learning and DRL. Covariant is one of the best regarded in the space as they beat 20 other AI companies in ABB’s global picking challenge.

Dexterity - another Californian company, they produce warehouse "AI-enabled robots that can intelligently pick, pack, stack, palletize, and depalletize without changing your workflows."

RightHand Robotics - is a Boston-based robotics company developing a universal picking robot. Their system combines AI software with intelligent grippers and machine vision. It's unclear what type of learning system they use, but one thing that really sets them apart is their gripper - a very cool hybrid mechanical-pneumatic tool.

Neura - a German / Chinese company developing an AI-empowered cobot. Their system has been receiving a lot of hype on the trade show circuit and integrates vision, force feedback, voice control and “smart skin”. It's unclear what flavour of AI they are wielding.

Agile Robots - is a Munich-based company with a very similar value proposition to Neura. The company is a spin-out of the German Aerospace Centre and doesn't have a huge amount of information publicly available.

Micropsi Industries - is a Berlin-based company that is developing a vision system and AI controller that can be retrofitted onto robot arms. The system can be used to make your average robot highly flexible and universal.

Sewts - another Munich-based company, Sewts are tackling the hardest problems first by focusing on the robotic manipulation of highly conformal objects like fabrics. They use 2D and 3D cameras plus an unspecified AI stack, as well as leaning heavily on simulation.

Unfortunately, companies can be a bit sneaky. First, they don't like to publish their methods. That's fair enough - they don't want everyone to know the recipe for their secret sauce. Secondly, they like to big up their results. When you’re trying to drag the present into the future, there is always a bit of fake-it-till-you-make-it.

Both are understandable (within reason), but it means demo videos and case studies can obfuscate reality. It’s impossible to tell how much manual intervention, hard coding and so on there has been. This issue also exists with published research, where academics have full control of the environment and you never know what's happening behind the scenes.

As it’s impossible to look under the hood, we need another source of truth.

Challenges

Luckily, we have Robotic Olympiads. Every year, top research departments and companies compete in a number of robotics events across the world. Since 2006 there have been around 15 robotics events featuring robotic manipulation, including the Amazon Picking Challenge, the DARPA Robotics Challenge (which only ran for a few years) and the RoboCup, which has run every year since 2006.

Sun et al. reviewed all of these events and synthesised the results. Rather than relying on corporate marketing, this gives us a more balanced view of what is possible. Achieving results in a controlled environment is impressive; doing it in front of a crowd is far more so.

Looking at the top eight challenges in robotic picking -

Research Challenges in Perception

1. Objects with shiny surfaces

Why it's hard - Shiny surfaces and reflected light confuse vision systems, making it difficult to capture accurate data on an object.

Progress - In 2016, teams struggled to pick silverware due to its shiny surface. By 2020, several teams could reliably segment a shiny spoon from other spoons and estimate its pose for grasping, managing the reflections by controlling the lighting with a multi-flash setup. Objects with shiny surfaces may not be a significant challenge if the object model is known and its location variation is limited.
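To see why multi-flash lighting helps: a specular highlight moves when the light source moves, while diffuse surfaces look roughly the same from frame to frame, so combining images captured under different flash positions can cancel the reflections. A minimal NumPy sketch of that idea - the competition pipelines are of course more sophisticated:

```python
# Sketch of multi-flash specular suppression: highlights relocate as the
# flash moves, so a per-pixel minimum across the stack keeps the diffuse
# signal and drops the bright reflections.
import numpy as np

def suppress_speculars(frames: np.ndarray) -> np.ndarray:
    """frames: (N, H, W) grayscale stack of one static scene, one image per flash."""
    return frames.min(axis=0)   # speculars are bright outliers -> the minimum removes them

# Toy demo: a flat grey scene with the highlight in a different spot per frame.
frames = np.full((4, 8, 8), 100, dtype=np.uint8)
for i in range(4):
    frames[i, i, i] = 255       # moving specular highlight
clean = suppress_speculars(frames)
assert clean.max() == 100       # all highlights suppressed
```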

2. Translucent or transparent objects

Why it’s hard - Vision systems struggle to differentiate a transparent object from its background and surrounding objects, making it tough to identify edges and surfaces.

Progress - Translucent objects still pose a significant challenge to perception. As an example, no team has been able to successfully pick ice cubes.

3. Insertion

Why it's hard - Tasks with tight tolerances usually require highly precise perception.

Progress - This challenge has been largely solved, with many teams successfully plugging a USB light into a socket. However, the process can be very slow, and teams have relied on perfect knowledge of the object’s model.
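Sun et al. don't spell out every team's strategy, but a classic way to absorb small perception errors during tight-tolerance insertion is a compliant spiral search: press down gently and spiral outward from the estimated hole position until the part drops in. A simplified sketch of generating such a search path (the parameter values are purely illustrative):

```python
# Sketch of an Archimedean spiral search for peg-in-hole insertion. The robot
# visits these XY offsets around the estimated hole position while pressing
# down lightly; insertion is detected when the vertical force drops.
import math

def spiral_search_points(pitch_mm=0.5, step_mm=0.2, max_radius_mm=3.0):
    """Return (x, y) offsets along an outward spiral covering the uncertainty region."""
    points, theta = [], 0.0
    while True:
        r = pitch_mm * theta / (2 * math.pi)    # radius grows one pitch per turn
        if r > max_radius_mm:
            return points
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += step_mm / max(r, step_mm)      # roughly constant arc-length steps

for x, y in spiral_search_points()[:5]:
    print(f"offset: ({x:+.2f} mm, {y:+.2f} mm)")
```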

Research Challenges in Grasping

4. Grasping with imperfect perception

Why it's hard - Perception errors cause the majority of failures in grasping. If the object’s pose is estimated incorrectly, the robot could knock over the object, push it away, end up with an unstable grasp, or completely miss the object.

Progress - Teams have gradually incorporated approaches to handle perception uncertainty, having found it to be one of the major reasons their solutions were slow and unreliable. It’s still a significant challenge that requires more work.
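One generic way to cope with pose error - a standard robustness trick, not a claim about any particular team's method - is to score each candidate grasp against many sampled perturbations of the estimated pose and execute the grasp that succeeds most often. A toy one-dimensional sketch, where grasp_succeeds is a hypothetical geometric check:

```python
# Sketch of robust grasp selection under pose uncertainty: rather than trust a
# single pose estimate, sample plausible true poses from the perception error
# distribution and pick the grasp that works across the most samples.
import numpy as np

rng = np.random.default_rng(1)

def grasp_succeeds(grasp_x_mm: float, object_x_mm: float) -> bool:
    # Toy 1-D check: the grasp works if it lands within 5 mm of the object centre.
    return abs(grasp_x_mm - object_x_mm) < 5.0

def most_robust_grasp(candidates_mm, est_x_mm=0.0, sigma_mm=4.0, n_samples=500):
    samples = rng.normal(est_x_mm, sigma_mm, n_samples)   # plausible true poses
    scores = [np.mean([grasp_succeeds(g, s) for s in samples]) for g in candidates_mm]
    return candidates_mm[int(np.argmax(scores))], max(scores)

grasp, p = most_robust_grasp([-6.0, 0.0, 6.0])
print(f"chosen grasp offset: {grasp} mm, estimated success rate: {p:.0%}")
```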

5. Objects with challenging shapes and surfaces

Why it's hard - Even with perfect vision, some objects can still be challenging to pick up and manipulate. This could be due to minimal surfaces for grasping, curved or narrow shapes, roughness or conformal shapes.

Progress - Teams have explored several options and made good progress. Several tried an automatic tool-change approach that allows them to swap grippers; others designed grippers specifically for challenging shapes and surfaces, or developed a gripper incorporating multiple fixtures. Although impressive, many of these approaches were tailored and calibrated to known objects, and this remains a significant challenge for unknown objects.

6. Grasping objects in clutter

Why it's hard - A constrained environment can make pose planning difficult and block the hand from grasping suitably. Objects can also become tangled, or shift within the gripper as they are moved.

Progress - Teams have made progress with objects lying close together, but even moderately cluttered environments still slow performance dramatically. This remains a major challenge, and it is much more difficult when the objects are unknown and the situations are new.
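One common-sense heuristic picking pipelines use in clutter is to rank the segmented objects and take the highest, least-occluded one first, so that each pick uncovers the pile beneath it. A toy sketch with made-up segment data (a real system would get these values from 3D segmentation):

```python
# Sketch of a declutter heuristic: pick the most visible, highest segment
# first; heavily occluded objects wait until earlier picks uncover them.
segments = [
    {"id": "box",    "top_z_mm": 120, "occluded_fraction": 0.0},
    {"id": "cable",  "top_z_mm": 95,  "occluded_fraction": 0.6},
    {"id": "bottle", "top_z_mm": 110, "occluded_fraction": 0.2},
]

def pick_order(segs):
    # Prefer visible objects, then taller ones (lower occlusion sorts first).
    return sorted(segs, key=lambda s: (s["occluded_fraction"], -s["top_z_mm"]))

for s in pick_order(segments):
    print("pick:", s["id"])   # box, bottle, cable
```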

7. Re-grasping

Why it's hard - Regrasping an object by placing it on the floor is relatively simple but manipulating it in the air requires very dexterous manipulators and control.

Progress - Most teams have avoided in-air regrasping and favoured placing the object on the floor, even if it is much less efficient.

8. Grasping for in-hand manipulation

Why it's hard - Tasks like using a robot to operate scissors or a syringe require complex object models, high dexterity, significant feedback and course correction.

Progress - In-hand manipulation remains the biggest challenge, where teams have been unable to complete tasks that utilize scissors, syringes, and tongs. In many cases, these tasks have been removed from competition due to their difficulty.

What can we learn from these competitions?

As progress has been made, organisers have made the events more challenging. Generally, the challenges have moved from being highly predetermined to including more randomness and uncertainty. In 2016, teams were allowed to predefine the locations of the objects with little to no randomness. From 2017 onwards, teams could only define object regions within the robot’s workspace, and the organisers randomly placed the objects in those regions.
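In code terms, the post-2017 protocol looks roughly like the snippet below: teams declare legal regions within the robot's workspace, and the organisers sample object poses uniformly inside them. The region bounds here are hypothetical:

```python
# Sketch of post-2017 object placement: teams define regions, the organisers
# randomise object poses inside them. All values are made up for illustration.
import random

random.seed(42)
regions = {"shelf": ((0.0, 0.4), (0.0, 0.3)),    # x range (m), y range (m)
           "tote":  ((0.5, 0.8), (0.1, 0.4))}

def random_pose(region):
    (x0, x1), (y0, y1) = regions[region]
    return (random.uniform(x0, x1),              # x position (m)
            random.uniform(y0, y1),              # y position (m)
            random.uniform(0, 360))              # yaw (degrees)

for obj, region in [("spoon", "shelf"), ("bottle", "tote")]:
    print(obj, "->", tuple(round(v, 2) for v in random_pose(region)))
```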

As a result, the approaches have also shifted. In the early editions, teams tried to solve challenges with highly engineered solutions. The arrival of learning-based approaches has resulted in fewer and fewer hard-coded routines, less use of predefined grasp points and more effective, reliable solutions. Unfortunately, the generality of such approaches is still relatively poor, and many of the challenges above remain unsolved for unknown objects.

There is a lot of hype around generalisable and flexible manipulation in industry, but this remains to be proven in the open environment of robotic competitions. How well it works on the factory floor is an open question.

What’s really working for industrials?

Although these technologies show a lot of promise and may well be on the verge of bringing great value, they still aren't robust enough for many industrial clients. Certain applications such as sorting packages have enough object standardisation that flexible systems are gaining traction but in manufacturing, there is still a long way to go.

How can companies deal with these limitations? They should follow the Pareto Principle of Automation -

Don’t aim for 100% automation if you don't have to. The last 20% can incur 80% of the difficulty and cost.

Instead, try and find shortcuts to reduce complexity -

  • Dissect your process - Breaking a complex process down into small steps allows you to determine which tasks are trivial and have a good ROI for automation. Focus only on these - there are no prizes for over-automation.
  • Constrain flexibility - Do you really need a fully flexible system? Can you simplify by pre-triaging products or using a modular approach? It may not be as sexy but it may save headaches and cost.
  • Constrain the input - Rather than presenting an object to your robot in any possible orientation, you can often ensure it is fed from bulk and constrained in a single orientation using clever design and simple mechanisms like static bars, linear actuators and conveyors. Doing this can allow you to remove the need for complex sensing and vision.
  • Stick with simple manipulators - You can pick up 90% of objects with a simple 2-finger gripper and a pneumatic vacuum system. It may be tempting to get excited by soft robotics and multi-finger systems but quite often these add unnecessary complexity to programming and control. They’re also very expensive.
  • Use clever tools - If you need something complex there are a lot of tried and tested technologies that can make your life easier. The Asycube or FlexiBowl vibrate parts and make it easier for a vision system to identify the objects to pick - a typical cycle around such a feeder is sketched below.
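A sketch of what the feed-vibrate-look-pick cycle around such a feeder typically looks like. The feeder, camera and robot objects are hypothetical stand-ins, not the vendors' actual APIs:

```python
# Sketch of the feed-vibrate-look-pick loop used with flexible feeders such as
# the Asycube or FlexiBowl. All interfaces below are invented placeholders.
import random

class Feeder:
    def vibrate(self, duration_s): print(f"vibrate {duration_s}s")  # reshuffle parts
    def refill(self): print("refill from bulk hopper")

class Camera:
    def find_pickable_parts(self):
        # Pretend vision: sometimes a part lies flat and unobstructed.
        return ["part_at_(0.12, 0.30)"] if random.random() < 0.5 else []

class Robot:
    def pick(self, target): print("pick", target)

def feeder_pick_cycle(feeder, camera, robot, max_tries=5):
    for _ in range(max_tries):
        parts = camera.find_pickable_parts()
        if parts:
            robot.pick(parts[0])           # take the best candidate
            return True
        feeder.vibrate(duration_s=0.5)     # reshuffle the parts and look again
    feeder.refill()                        # nothing pickable after max_tries
    return False

feeder_pick_cycle(Feeder(), Camera(), Robot())
```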

There are many processes where these options won't work, but there are even more situations where throwing AI at a problem isn't needed. That said, DRL is taking the lead in robotics AI.
