Grasping at straws (pt2)

Welcome to the Bulletin by Remix Robotics, where we share a summary of the week's need-to-know robotics and automation news.

In today's email -

  • Why AI CEOs are terrified of… AI?
  • Tesla - Robot hype vs reality
  • Robot grasping is hard - how are the best in class solving it?
  • A mean meme to end the article 😲

Snippets

AI Misalignment - Imagine if climate protestors were also the founders of oil companies - according to Scott Alexander, this is the current state of AI. The biggest AI companies - Deepmind, OpenAI and Anthropic - were founded by those most frightened of super-intelligent AIs. His entertaining blog post explores this paradox and suggests what we can do about it. (Long read)

X-ray vision - Google has developed an algorithm, X-Ray, that enables robots to search through heaps of objects and correctly grasp a target.

Making fabric a reality - Researchers at Carnegie Mellon have recently proposed a new computational technique that could allow robots to better understand and handle fabrics. Their system uses tactile sensors and is much less sensitive to fabric patterns, changes in lighting, and other visual discrepancies.

Who needs a mansion anyways - Robotic furniture is being used to provide multifunctional living spaces for compact homes. With the touch of a button, the room can transform from bedroom to wardrobe, to office, to living room.

Ford revamps plant - The South American plant is set to become the most advanced automotive factory in the region. Ford’s planned transformation will include automated welding robots, a high-speed press and automated internal logistics.

Hyundai launches AI institute - Hyundai is to invest over $400 million to establish the new institute with the goal of “solving the most important and difficult challenges facing the creation of advanced robots.” The main focus areas will be cognitive AI, athletic AI, organic hardware design, and ethics and policy.

Walkie talkie robots - Google Research and Everyday Robots have teamed up to create a language-model-based planner that makes it possible to communicate with helper robots via text or speech and have them execute more complex tasks. Check out this video to see the future helper robots in action.

Hype vs reality - Since the announcement of the Tesla robot in August 2021, there has been a huge amount of hype surrounding its release. For all those wondering what the robot will be like, this video details everything that Elon Musk has ever said about it. Let's see how these details compare to the actual robot when it is unveiled in September!

The Big Idea

Grasping Part Two - This Time It's Personal

Last week we introduced the challenge of universal grasping and explored just how complex picking something up can be. This week we’ll explore the latest technology in robot picking and separate reality from hype. We’ll discuss -

  • The hottest start-ups in grasping
  • How Robot Olympiads benchmark progress
  • What technologies are working in the industry today

State-of-the-art start-ups

The last two years have seen an explosion in universal grasping companies and from appearances, they’re having a lot of success - let's look at our top picks.

Deepmind, OpenAI and Dyson

First, we have the three giants of the field - Google’s Deepmind, OpenAI and Dyson. One of our first bulletins covered Google’s approach to flexible robotics, and they have really been ramping up their focus on robotics with spinouts (Intrinsic, Everyday Robots), acquisitions (Vicarious) and a whole lot of research. Their approach uses machine vision, Deep Reinforcement Learning (DRL) and simulation to train their systems. Deepmind is a major proponent of combining simulation with DRL, and it seems like many other companies are following their lead. They also have a basement filled with different robots undertaking real-life DRL. OpenAI also uses vision, DRL and simulation to tackle grasping and manipulation. If you want to learn more about DRL - subscribe to the newsletter as we will be diving into the topic in more detail.
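To make the simulate-then-learn recipe a bit more concrete, here is a deliberately tiny sketch of the training pattern. Everything in it is our own illustrative assumption - a one-dimensional "bin" of five cells, a reward of 1 for grasping at the object's position, and a tabular Q-learning update in place of a deep network. Real systems use physics simulators, camera input and deep policies, but the loop structure (simulate a scene, attempt a grasp, reward success, update the policy) is the same idea.

```python
import random

# Toy illustration of DRL-style grasp training in simulation.
# The "simulator" drops an object into one of N_CELLS bin positions;
# the agent picks a grasp cell and is rewarded 1 for a successful grasp.
N_CELLS = 5
ALPHA, EPSILON, EPISODES = 0.5, 0.1, 5000

# Q[state][action]: learned value of grasping at `action` when the
# object is observed at position `state`.
Q = [[0.0] * N_CELLS for _ in range(N_CELLS)]

random.seed(0)
for _ in range(EPISODES):
    state = random.randrange(N_CELLS)            # simulator: object position
    if random.random() < EPSILON:                # explore a random grasp
        action = random.randrange(N_CELLS)
    else:                                        # exploit the best-known grasp
        action = max(range(N_CELLS), key=lambda a: Q[state][a])
    reward = 1.0 if action == state else 0.0     # grasp succeeds on a match
    Q[state][action] += ALPHA * (reward - Q[state][action])

# After training, the greedy policy grasps wherever the object is seen.
policy = [max(range(N_CELLS), key=lambda a: Q[s][a]) for s in range(N_CELLS)]
print(policy)  # → [0, 1, 2, 3, 4]
```

The appeal of doing this in simulation is cheap, safe trial and error - the loop above runs 5,000 grasp attempts in milliseconds, where a physical robot would need days and broken grippers to gather the same experience.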

Dyson announced a move into home robotics and this has been followed by a lot of marketing and a huge recruitment campaign. Their 3-minute intro video shows a range of custom end effectors and vision systems tackling household tasks like loading a dishwasher and cleaning. It’s unclear what their first product will be or what control strategy they are using, but from research papers sponsored by the company, we can see they use vision-based systems with DRL and simulation. If you want to learn more, ReorientBot and SafePick are great examples of research they have sponsored.

Next, we have the new entrants -

Covariant - is a Californian robotics company developing a universal AI for robots. The company has a few universal picking solutions and although it's unclear what tech stack they use, their engineers have a background in imitation learning and DRL. Covariant is one of the best regarded in the space as they beat 20 other AI companies in ABB’s global picking challenge.

Dexterity - another Californian company, they produce warehouse “AI-enabled robots that can intelligently pick, pack, stack, palletize, and depalletize without changing your workflows.”

Right Hand Robotics - is a Massachusetts-based robotics company developing a universal picking robot. Their system combines AI software with intelligent grippers and machine vision. It's unclear what type of learning system they use, but one thing that really sets them apart is their gripper - a very cool hybrid mechanical-pneumatic tool.

Neura - a German / Chinese company developing an AI-empowered cobot. Their system has been receiving a lot of hype on the trade show circuit and integrates vision, force feedback, voice control and “smart skin”. It's unclear what flavour of AI they are wielding.

Agile Robots - is a Munich-based company with a very similar value proposition to Neura. The company is a spin-out of the German Aerospace Centre and doesn't have a huge amount of information publicly available.

Micropsi Industries - is a Berlin-based company that is developing a vision system and AI controller that can be retrofitted onto robot arms. The system can be used to make your average robot highly flexible and universal.

Sewts - another Munich-based company, Sewts are tackling the hardest problems first by focusing on the robotic manipulation of highly conformal objects like fabrics. They use 2D and 3D cameras plus another unspecified flavour of AI, as well as leaning heavily on simulation.

Unfortunately, companies can be a bit sneaky. First, they don't like to publish their methods. It's fair enough, they don't want everyone to know the recipe to their secret sauce. Secondly, they like to big up their results. When you’re trying to drag the present into the future, there is always a bit of fake it till you make it.

Both are understandable (within reason) but it means demo videos and case studies can obfuscate reality. It’s impossible to tell how much manual intervention, hard coding, etc. there has been. This issue also exists with published research, where academics have full control of the environment and you never know what's happening behind the scenes.

As it’s impossible to look under the hood, we need another source of truth.

Challenges

Luckily we have Robotic Olympiads. Every year the top research departments and companies compete in a number of different robotics events across the world. Since 2006 there have been around 15 robotics events that feature robotic manipulation. These include the Amazon Picking Challenge, the DARPA Robotics Challenge (which only ran for a few years) and the RoboCup, which has run every year since 2006.

Jack Pearson

London