Some Random Thoughts on Polypod

Feb 1999

It has come to my attention that some people think that this work is primarily simulation. In fact, the bulk of the work here was in building the hardware and implementing the control. Out of the 7 simple straight and turning gaits presented on these web pages, 4 have been implemented on the physical robot. I'm working to convert the video of these gaits into machine-readable form. In addition, two more gaits have recently been implemented at Xerox PARC on the PolyBot (the next generation of Polypod).

On July 12, 1996 a message was posted to comp.robotics.misc:

In article <>,
Dirk Schwarzmann wrote:
>I'm pretty new to robots and I want to change this :-)
>A year ago, I heard of a concept to build a complex robot from many small and
>simple ones which are all the same. The small robots have some basic functions
>and abilities (like a little RAM, some movable or turnable parts and so on)
>and can be assembled in many different ways (because they are uniform) so that
>you get a different robot from the same small ones if you re-assemble it in a
>different way. The goal is to achieve a specialized robot for a special job
>with the ability to turn itself into another robot when it gets another job.

I realized that this was perhaps a better way to talk about Polypod: building a complex robot from many small, simple ones. Looking back on the design of Polypod, though, there was a great temptation to build a complex robot from many small complex robots. The reasoning was that small but more capable robots should combine to make an even more capable composite robot. It's easy to add just one more piece of functionality to a module, and then one more... (complexity creep).

I think the problem here is reliability or robustness.

One way to look at it is that when a system made up of a single complex robot fails, it may fail catastrophically. A system made up of many repeated redundant modules will have some gradation of failure if designed properly. When something fails, most likely the failure won't be catastrophic. The fundamental problem is that the more parts you have, the more parts you have that can fail. As the number of modules increases, the question won't be whether something has failed (something will definitely fail) but how many have failed.

The main factor in the robustness issue is the likelihood that any single module will fail, that is, how robust each module is. Given a set of n modules that each survive independently with probability p, the probability that none have failed is p to the nth power, so the probability that at least one has failed is 1 - p^n. As n grows, this approaches certainty even when p is close to 1.
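The arithmetic above can be sketched in a few lines of Python (a minimal illustration; the 0.99 per-module survival probability is a made-up number, not a measured value for any real module):

```python
# Probability that at least one of n independent modules has failed,
# given each module survives with probability p: 1 - p^n.
# The value p = 0.99 below is purely illustrative.

def prob_any_failed(p_survive: float, n: int) -> float:
    """P(at least one failure) among n independent modules."""
    return 1.0 - p_survive ** n

if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"n = {n:3d}: P(any failure) = {prob_any_failed(0.99, n):.3f}")
```

With p = 0.99, the chance that something has failed rises from 1% for a single module to roughly 63% for a hundred modules, which is the point made above: past some scale, the question is not whether something has failed but how many.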

After reading the above paragraph, Nyles Nettleton of Sun Microsystems sent me an email suggesting I use MTBF (mean time between failures) as a robustness metric. This is an excellent idea. I hope he doesn't mind my quoting him:

    "My understanding of it is that the likelihood of failure of a
    system is more like the inverse of a summation of probabilities. I
    calculate it as if it were a parallel resistive network - for a
    system of three components of equal MTBF, for example, the
    aggregate MTBF would be a third of the component value."

Which brings us back to complex modules versus simple modules: simple almost always means more robust and reliable.

"Everything should be made as simple as possible... But not simpler." - Albert Einstein

More ramblings on statically stable locomotion.

Back to Polypod.

Comments may be sent to Mark Yim at

last updated February 1999