This Week in Robotics 11.11.22

Welcome to the Bulletin by Remix Robotics, where we share a summary of the week's need-to-know robotics and automation news.

In today's email -

  • Robots are about to get much easier to work with - speech-to-task programming and AI end-effector design
  • AIs can program themselves now
  • Amazon’s new robots
  • Mechanical neural networks (this is cool)
  • Why AI art is important


Amazon reveals the new design for Prime Air’s delivery drone -  The design focuses on increased range, expanded temperature tolerance, the ability to fly in light rain (don’t expect to see this in the UK any time soon), and a 25% reduction in noise. The project has been ongoing for just over a decade and we can’t wait to see it get off the ground.

Amazon introduces Sparrow -  Sparrow is the first robotic system in Amazon’s warehouses that can detect, select, and handle individual products. There are many workflows where it could be used, including stowing, picking, consolidation, and packing. It seems it will initially be used for order consolidation, the process of combining batch-picked items into their respective orders. This accounts for approximately 5% of total labour demand in fulfilment centres - tiny in comparison to order picking, at 53% of total labour. P.S. the announcement ends by plugging their Mechatronics and Robotics Apprenticeship… We know what they’re doing, but it still sounds like a very cool job.

$5 million Avatar XPRIZE awarded to NimbRo VR telepresence robot -  The competition called upon teams to develop human-operated robot avatars that can replicate a person’s senses, actions, and presence and execute tasks in a remote location. The challenges included picking up a drill and using it to unscrew a bolt, and feeling a series of objects out of view to determine which had the roughest texture. NimbRo won the competition and walked away with $5M.


What if robots could write their own code - Google has developed Code as Policies (CaP), a language-based code generator for robots. How does it work? You give the robot a text instruction, and it uses PaLM (a Large Language Model, or LLM) to convert it to code. The model is tuned to robot tasks and generalises to new instructions. It can also translate ambiguous descriptions (“faster”, “to the left”) into precise values (e.g., velocities) depending on the context, eliciting behavioural common sense. Programming robots is currently one of the biggest barriers to easy robot integration - what happens when it’s as easy as talking to a colleague?
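The instruction-to-code loop can be sketched in miniature. This is a toy illustration of the pattern, not Google's actual API: a canned lookup table stands in for the PaLM call, and the `ToyRobot` class and its methods are invented for the example.

```python
# Toy sketch of the Code-as-Policies idea: a natural-language instruction
# becomes Python that calls a small robot API, and the generated code is
# executed directly. A canned lookup stands in for the real LLM call.

class ToyRobot:
    """Invented minimal robot API for the sketch."""
    def __init__(self):
        self.x = 0.0      # position along one axis, metres
        self.speed = 1.0  # nominal speed multiplier

    def move_left(self, distance):
        self.x -= distance

    def set_speed(self, speed):
        self.speed = speed

# In the real system, example instruction -> code pairs like this are
# packed into the LLM prompt so it can generalise to new instructions.
CANNED_RESPONSES = {
    "move a bit to the left, faster":
        "robot.set_speed(robot.speed * 1.5)\n"
        "robot.move_left(0.2)",
}

def instruction_to_code(instruction):
    """Stand-in for the LLM: map an instruction to robot control code."""
    return CANNED_RESPONSES[instruction]

robot = ToyRobot()
code = instruction_to_code("move a bit to the left, faster")
exec(code, {"robot": robot})   # run the generated policy code
print(robot.x, robot.speed)    # -0.2 1.5
```

Note how “faster” became a concrete multiplier and “a bit” a concrete distance - in CaP, that grounding of vague language into numbers is done by the model itself.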

What if robots could design their own tools - The second biggest barrier to robot integration is custom tool design. Of course, you can buy two-finger grippers or basic pneumatics, but as soon as the task becomes a bit niche, a mechanical designer is needed to design tooling. Researchers have developed a generalisable process for automatically designing and optimising new tools. They tested it on tasks like winding ropes, flipping a box, and pushing peas onto a spoon. Another nail in the coffin for integrators?

Large Language Models Can Self-Improve -  Proponents of the singularity believe that once AIs can self-improve, we’ll see an explosion in progress leading us straight to AGI… Well, researchers have demonstrated that an LLM can self-improve using only unlabelled datasets, matching state-of-the-art performance. I, for one, look forward to our robot overlords.

Mechanical neural networks - Mechanical engineers were feeling left out by all the progress in AI, so they decided to copy their computer scientist colleagues. The result is a mechanical neural network capable of learning different tasks - machine learning indeed. How does it work? The strength of connections between nodes is mimicked by variable-stiffness springs, tuned by motors that adjust tension during the training phase. Instead of processing digital data, the mechanical neural network processes forces applied to it, twisting and morphing its shape depending on the stiffness of its beams. The goal is for this to enable aircraft wings that morph during flight to maintain efficiency or minimise turbulence.
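The core idea - stiffnesses as trainable weights - can be shown with a deliberately simple toy model of my own, not the researchers' method: two springs in series, whose displacement under a force F is d = F · (1/k₁ + 1/k₂), trained by gradient descent so a 1 N input produces a target displacement.

```python
# Toy "mechanical learning" sketch: spring stiffnesses are the weights,
# the applied force is the input, and the resulting displacement is the
# output. We tune stiffness by gradient descent, loosely mimicking how
# the real network's motors retune spring tension during training.

def displacement(force, stiffnesses):
    # Springs in series: compliances (1/k) add.
    return force * sum(1.0 / k for k in stiffnesses)

def train(force, target, stiffnesses, lr=1.0, steps=5000):
    k = list(stiffnesses)
    for _ in range(steps):
        err = displacement(force, k) - target
        for i in range(len(k)):
            grad = err * (-force / k[i] ** 2)   # d(error)/dk_i
            k[i] = max(k[i] - lr * grad, 1e-3)  # keep stiffness positive
    return k

# Train two unit-stiffness springs so 1 N gives 0.5 m of displacement.
k = train(force=1.0, target=0.5, stiffnesses=[1.0, 1.0])
print(round(displacement(1.0, k), 2))  # 0.5
```

The real system learns richer behaviours (shape morphing under distributed loads), but the principle is the same: physical parameters are adjusted until the structure’s mechanical response matches the desired one.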


Growth in AI and robotics research accelerates - (From a few months ago) The number of AI and robotics papers published in 82 high-quality science journals has been increasing exponentially, according to a review by Nature. The US is in the lead with nearly 2,750 papers published in 2021, but China is growing the fastest, with a 1,174% increase in its annual Share of published papers since 2015.
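To put that percentage in perspective, here is a quick back-of-the-envelope conversion (my own arithmetic, assuming the 1,174% rise spans the six years 2015–2021): it works out to roughly 53% compound annual growth.

```python
# A 1,174% increase means the 2021 figure is 12.74x the 2015 figure.
# Convert that to a compound annual growth rate over six years.
ratio = 1 + 1174 / 100          # 12.74x overall
years = 2021 - 2015             # 6
cagr = ratio ** (1 / years) - 1
print(round(cagr * 100, 1))     # ~52.8 (% per year)
```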

A free advanced robotics course from UC Berkeley - We’ve spoken at length about the cool projects coming out of UC Berkeley, and now anyone can get a taste of their teaching. The university has uploaded 25 lectures from its 2019 Advanced Robotics course for free on YouTube.

What to Watch in AI - A few weeks ago we spoke about AI’s history of over-hyping and then failing to meet expectations. The Generalist has brought together 10 experts to share their thoughts on the industry. Some takeaways -

  • Copilot for everything. AI is already streamlining illustration, writing, and coding. It may soon become an assistant for all knowledge workers. In the future, we may have versions of GitHub’s “Copilot” feature for lawyers, financial analysts, architects, and beyond.
  • Improving interfaces. Interactions with AI typically take the form of a basic text box in which a user enters a “prompt.” While simple to use, greater control may be needed to unlock the technology’s power. The challenge will be to enable this potential without introducing needless complexity. Applications will need smooth, creative interfaces to thrive.
  • Addressing the labour shortage. Skilled labourers are in short supply as society’s need increases. For example, while demand for skilled welders increases by 4% per year, supply declines by 7%. AI-powered robots may be part of the solution, automating welding, construction, and other manual tasks.

Shots fired - We’ve mentioned Musk’s hatred of LiDAR as a self-driving car sensor; he’s called it a “fool's errand” and said that “Anyone relying on lidar is doomed. Doomed!” Well, an opinion piece has come forward in defence of LiDAR: “The economics can make sense, especially on the higher-end vehicle, given the price point. And then you have all the benefits of providing a 3D sensor to the vehicle and its ADAS, as well as autonomous systems.” More interestingly, they commented on the state of Tesla’s autopilot division since the departure of Senior Director of AI Andrej Karpathy: “They’re left with a very weak team of fewer than a hundred. These other companies have thousands of engineers on their teams, so from every which way you look at it, Tesla’s not going to do it where they’re at right now”. I wouldn't take any of this as fact, but as evidence that the self-driving car wars are far from over.

Blinded by the light - Unrelated to the above, researchers have found that LiDAR systems can be blinded to the presence of pedestrians using targeted lasers…

Robotaxis are great apparently - A blogger details what it’s like to take 26 rides in driverless robotaxis. Their (very positive) summary -

  • The technology is amazingly robust in the current operational domain.
  • After just a few minutes, a ride in a robotaxi appears to be completely normal and spectacularly unspectacular.
  • Some drivers feel provoked by the robotaxis and their driving style and behave aggressively.
  • Just as we had to learn how to deal with traffic around cars, we will have to learn how to deal with robotaxis. Knowing their limitations and new capabilities makes them easier for humans to use.
  • Fares today don’t yet reflect the cost of operation and development, but in the future, they could be well below the prices of today’s transportation systems. Estimates range from 80 to 90% lower fares or even free rides in some cases.

Video / Podcast

Quite an arty video about AI art. Emergent Garden explains how AI art generators can be used to simulate evolutionary processes. It's a cool video exploring big ideas.

Google may be the best search engine for things that exist. But AI image generators can be thought of as search engines for things that don't exist. If you’ve used Stable Diffusion or Midjourney, you’ll know they’re not the best if you have a very clear idea of what you want. Instead, they are great for open-ended exploration - following threads of interest to see where they go.

Evolutionary exploration leads you down unexpected paths, but it often gets you better results - even if they’re not the ones you set out to find.

Jack Pearson