
Daisy Driver 2.2

Today I’m delighted to bring you an update to my earlier post about the Daisy Driver motor controller. The latest version meets all my requirements and now I can focus on the next parts of the Sixi 3 robot arm project. I’m making it available to you right here, right now. Join me in building a world with more interesting robots and read on for all the details.


What’s in the board?

The Daisy Driver can do so much!

  • The STM32F405 brain is huge and crazy fast: 168 MHz CPU, 210 DMIPS, and up to 1 MB of Flash.
  • The TMC2130 motor driver can handle up to 1.5 amps and includes current change sensing, which could be used for collaborative robots that sense collisions.
  • The IPS2200 rotation sensor is absolute. There is no need to “home” (touch off a limit switch) when the robot turns on; it always knows where it is.
  • The rotation sensor is also hollow-shaft. A wire can go through the sensor, through the center of the gearbox, and out to the next Sixi 3 actuator.
  • The board can be connected to its neighbors in a linked list of actuators. They talk to each other over a CANbus network using the CANopen protocol. This makes the electrical work a breeze.
  • Since every CAN device needs a unique address, six DIP switches make it easy to set up. No pads to solder! (See the sketch after this list.)
  • It also has a USB connection so you can use a normal serial interface from your favorite app to talk to the device.
  • The JTAG programming pins make it simple to upload new code from Arduino. Check out our Daisy Driver Arduino firmware.
  • The RGB LED is a handy status indicator. Have it display CANbus traffic, sensor angle, or Morse code!
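
To make the DIP-switch addressing concrete, here is a minimal Arduino-style sketch of the idea. The pin numbers, names, and switch polarity are made up for illustration; they are not taken from the actual Daisy Driver firmware.

    // Hypothetical pin assignments for the six address DIP switches.
    // The real Daisy Driver firmware may use different pins and polarity.
    const int DIP_PINS[6] = {2, 3, 4, 5, 6, 7};

    // Read the switches once at boot and pack them into a 6-bit node ID (0-63).
    uint8_t readNodeId() {
      uint8_t id = 0;
      for (int i = 0; i < 6; ++i) {
        // Assume a switch pulls its pin low when flipped "on".
        if (digitalRead(DIP_PINS[i]) == LOW) {
          id |= (1 << i);
        }
      }
      return id;
    }

    void setup() {
      for (int i = 0; i < 6; ++i) {
        pinMode(DIP_PINS[i], INPUT_PULLUP);
      }
      Serial.begin(115200);
      // The ID would then be handed to the CANopen stack so every actuator
      // in the chain answers at its own address.
      Serial.print("CANopen node ID: ");
      Serial.println(readNodeId());
    }

    void loop() {}

Six switches give 64 possible addresses, which is far more than a chain of six actuators needs.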

So why is this important?

Sixi 2 with inside wiring
Sixi 3 with outside wiring

The Sixi 2 robot was my fourth robot arm. It was big, strong, and gorgeous. But it was also 1300 parts in 250 types, and it had long cables that ran from the wrist, through the elbow, and into the base. If I’m going to make robots easy for everyone then this is not workable. So I went back to the drawing board again and decided to build a modular robot arm out of identical pieces. The Sixi 3 actuator has fewer than 50 parts in about 20 types. Even with six actuators in a chain to make a Sixi 3 robot arm, that’s fewer than 300 parts total and none of them are one-offs. Production and maintenance at scale! The factory must grow.

Why Daisy Chain?

There are three main ways to wire the joints in a robot arm: all-outside, all-inside, and daisy chain.

All-outside wiring is easy to maintain but hard to manage while the robot is running. I’ve had robots accidentally try to strangle themselves on their own cabling.

All-inside wiring is tidy and elegant, but doing any service work means dismantling the entire arm.

Daisy chain is the best of both worlds: no outside wiring and easy maintenance. I’ve tested replacing the elbow of a Sixi 3. It takes about 10 minutes.

So now what?

Next up is to share the goodness with you. The board is available now. More are on the way and I can now refocus my energy on the next step in the bigger picture: using one arm to assemble another arm. As that happens I’m going to need more actuators, more boards, and more gearboxes, plus the Robot Overlord software to run everything.

I’m very interested in talking to anyone who would like to help make an awesome harmonic gearbox, anyone into creating data sheets for electronics, and anyone who sees the potential.


DaisyDriver 2.0

The Sixi 3 robot arm is the result of many years of study and research. I’m trying to build my dream machine, one that avoids every problem discovered in my previous robot arms. My short list of desirable things includes:

  • Easy manufacturing = use fewer unique parts = use repeating components.
  • Easy maintenance = daisy chain components.
  • Safer = no loose wires, always know where the arm is, responsive behaviour. (safety third!)

To that end I’m on my second attempt to design and program a board that meets my needs. Here’s where I’m at today, December 31, 2022. Read on for all the details.

(more…)

Friday Facts 17: Sixi 3 actuator FAQ

Here are all the latest details on development of the Sixi 3 actuator.

Over the last ten years I’ve built four completely different robot arms. The last (and biggest) had 1300 total parts in 250 types. That counts every unique screw, printed part, length of wire, crimp connector, PCB component… everything.

The Sixi 3 actuator is designed to be a single motorized LEGO block that can be repeated to build robot arms. It has fewer than 50 parts in about 10 types. Also! Building many little identical things is easier than building one huge unique thing.


Friday Facts 16: UpDog (Unity+ML)

Unity has a Machine Learning component. Can I teach it to roll over and stand up?

My robots always start as a simulation. Instead of hand-rolling a physics engine and a machine learning system, I decided this time to try to teach a quadruped bot a gait strategy (aka get up and walk) using Unity ML-Agents, which uses TensorFlow. My scene and all related material are here:

http://github.com/MarginallyClever/DogML

What is Machine Learning?

Machine Learning is a magic box. On one side you feed it some inputs and then give/withhold rewards until the other side produces the result you want. The first trick is to design the inputs well. The second is to design the reward well. There may be other tricks after that but I haven’t found them yet. Part of writing this is to document what I’m trying. Maybe you have ideas about how to do it better? Talk with me.

The quadruped model

The root Environment lets me put everything in a Prefab which I can then instance to run many bots in parallel. Dog contains Torso, which is the root of the skeleton. Each bone of the skeleton is an ArticulatedBody because they most closely resemble the behavior I’d expect from my servos: set a desired angle, cross fingers.

Each joint has one degree of rotational freedom and some reasonable limits.  I have no idea what joint stiffness would mirror real life.

Each joint contains one Model that has the mesh of the bot, the material, etc.

On the end of each leg is a foot which is ignored by physics. I paint them red when I detect contact with the terrain.

Behavior Parameters set the inputs and outputs of the network. Space Size is the number of inputs. Stacked Vectors stacks recent inputs so that the network has a kind of memory. Actions are the outputs, and Continuous Actions are decimal numbers usually in the range -1…1. Behavior Type is where you can set drive-by-ML or heuristic (aka drive it yourself).
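
For a sense of what the controller has to do with those outputs, here is a small C++ sketch that stretches one continuous action in -1…1 onto a joint’s angle range. The joint limits below are example numbers, not the real quadruped’s limits.

    #include <algorithm>
    #include <cstdio>

    // Map one continuous action in [-1, 1] onto a joint angle in degrees.
    float actionToJointAngle(float action, float minDeg, float maxDeg) {
        // Clamp in case the network output drifts slightly outside the range.
        action = std::clamp(action, -1.0f, 1.0f);
        // Linear rescale: -1 maps to minDeg, +1 maps to maxDeg.
        return minDeg + (action + 1.0f) * 0.5f * (maxDeg - minDeg);
    }

    int main() {
        // Example: a knee joint limited to -30..+90 degrees.
        std::printf("%.1f\n", actionToJointAngle( 0.0f, -30.0f, 90.0f)); // 30.0
        std::printf("%.1f\n", actionToJointAngle(-1.0f, -30.0f, 90.0f)); // -30.0
        return 0;
    }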

Dog Controller is the script that collects the Observations – it feeds the ML with sensory input (more on that later). Dog Controller also receives instructions from the network and then applies them – in this case, it sets the Drive Targets of each ArticulatedBody, same as I would do with a servo on a quadruped. Lastly, Dog Controller usually also gives a Reward to the network to teach it.
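
Stripped of the Unity specifics, each decision step is one observe → act → reward cycle. The sketch below is plain C++ with made-up names; it mirrors what the Dog Controller does, but none of these calls are real ML-Agents APIs.

    #include <cstddef>
    #include <vector>

    // Made-up stand-in for one joint of the skeleton.
    struct Joint {
        float angleDeg;   // current angle reported by the physics engine
        float velocity;   // current angular velocity
        float targetDeg;  // drive target, like telling a servo where to go
    };

    struct DogController {
        std::vector<Joint> joints;

        // 1. Gather sensory input for the network (the "Observations").
        std::vector<float> collectObservations() const {
            std::vector<float> obs;
            for (const Joint& j : joints) {
                obs.push_back(j.angleDeg);
                obs.push_back(j.velocity);
            }
            return obs;
        }

        // 2. Apply the network's outputs as drive targets, one per joint.
        void applyActions(const std::vector<float>& actions) {
            for (std::size_t i = 0; i < joints.size() && i < actions.size(); ++i) {
                joints[i].targetDeg = actions[i]; // in practice, rescaled to the joint's limits first
            }
        }

        // 3. Score the current pose so the network learns what "good" looks like.
        //    (The uprightness/height/speed terms are described further down.)
        float computeReward() const { return 0.0f; }
    };

    int main() {
        DogController dog;
        dog.joints.resize(8);                          // e.g. two joints per leg
        std::vector<float> obs = dog.collectObservations();
        dog.applyActions(std::vector<float>(8, 0.0f)); // pretend network output
        float reward = dog.computeReward();
        (void)obs; (void)reward;                       // just demonstrating the cycle
        return 0;
    }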

Decision Requester sets how often the network should make a decision. The value here is in Academy Steps, which isn’t Unity Update ticks, it isn’t frames, it isn’t FixedUpdate physics steps, it’s… something else.

OnEpisodeBegin

At the start of each episode (learning cycle) I have to reset the position of the quadruped. I’ve found the easiest way is to preserve the original Torso untouched and instead spawn a completely new clone of the entire Torso hierarchy each time.

Calculating Rewards

I have now experimented with many automatic styles of reward. Mostly my thinking is a variation on these themes:

  • The reward function first gives points for “uprightness” (read torso.transform.up.y approaching 1).
  • Add to that height off ground (max height 3.6, so height/3.6 will never be more than 1).
  • Add to that horizontal movement (maximum XZ speed of 10, then divided by 10, never more than 1).

These three elements are further scaled by values that can be tweaked from Dog Controller.
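
Put together, one reward sample per decision step might look roughly like this in C++. The inputs and the scale factors are stand-ins for what the Dog Controller actually reads from the scene and from its tweakable values.

    #include <algorithm>

    // upY     ~ torso.transform.up.y, approaching 1 when the dog is upright
    // height  ~ torso height off the ground (3.6 is the assumed maximum)
    // speedXZ ~ horizontal speed (10 is the assumed maximum)
    float computeReward(float upY, float height, float speedXZ,
                        float uprightScale, float heightScale, float speedScale) {
        float uprightTerm = std::clamp(upY,             0.0f, 1.0f);
        float heightTerm  = std::clamp(height / 3.6f,   0.0f, 1.0f);
        float speedTerm   = std::clamp(speedXZ / 10.0f, 0.0f, 1.0f);
        // Each term is normalized to 0..1, then weighted by a tweakable scale.
        return uprightScale * uprightTerm
             + heightScale  * heightTerm
             + speedScale   * speedTerm;
    }

With all three scales set to 1, the best possible score per decision step is 3; in practice the scales let one term dominate while the others are tuned down.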

I have also tried a manual voting style where the user (me) can click on bots that appear to be doing well to feed them a reward. I also added a button to end all episodes and start over, in case they’re all terrible. Machine Learning Eugenics :S

Results so far

nine bots running in parallel while my CPU gently weeps

So far it has managed to roll over a few times but never figured out how to stand up. It doesn’t even stay rolled over.

Things I know I don’t know

v1

I have no idea if my network is shaped/configured well. Currently I’m using three layers with 128 neurons each.

I have no idea when rewards are applied to the network, so I’m not sure when is best to manually feed them. I don’t know if I should reward continuously while it is running or do as in the codemonkey example (reward and immediately reset).

ArticulatedBody is not great to work with.  GetJointPositions is in radians and SetJointTargets is in degrees? Physically I have no idea what stiffness to set for ArticulatedBody joints. Too high and they can launch the quad off the ground, too low and they don’t have the strength to roll the dog over.

Setting a good Joint Speed in the Dog Controller is also part of the problem. High stiffness and a low joint speed (under 0.5 degrees per step) seems to be the most workable combination so far.

Why do I have to restart the TensorBoard results graphing script every time I restart the mlagents-learn training tool? My parameters haven’t changed, only the results that are being graphed.

Final thoughts

I’d really like to talk to some passionate and experienced people about training this successfully. You can find me on Discord or GitHub all the time. If you have specific comments or questions about this article, let me know and I’ll try to address them in an edit.