Robot Arm

Record & Playback 4

I have been building a robot arm. You may have seen it on my Instagram. I also have an open source Java app called Robot Overlord, which can simulate the arm in 3D. Robot Overlord can drive the simulation AND the real arm from a joystick. All the moves in this video were done with the PlayStation controller:

In a previous post on Hackaday.io, I briefly covered a system I wrote into Robot Overlord that captures all the joystick data and plays it back on command. Technically, that worked. A qualified success.


However! Driving this way is wildly inefficient. New instructions are sent to the arm 30 times a second. The arm can't see where it is going, so it can't plan ahead to reach high speeds. It's like a very fast game of Marco Polo. Also, if you're a novice driver like me, it's really easy to go the wrong way. It would be great if I could move the simulated arm to a destination pose BUT only update the real robot once I'm sure I like that destination pose. The arm would then move in a straight line from start pose to end pose at top speed.

First I needed a way to save any pose to a file on disk and bring it back, so that I could save and load single poses. Then I could play those poses back to the real robot, the same as I did with the joystick model. Then I could repeat tests, which helps me confirm things work correctly.
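Robot Overlord's actual file format isn't shown here, but as a minimal sketch (assuming a pose is a javax.vecmath.Matrix4d, and PoseFile is a hypothetical helper name), saving and loading can be as simple as writing the 16 matrix elements to a text file:

    import javax.vecmath.Matrix4d;
    import java.io.*;
    import java.util.Scanner;

    public class PoseFile {
        // Write the 16 elements of a 4x4 pose matrix as plain text, row by row.
        public static void save(Matrix4d pose, File file) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
                for (int row = 0; row < 4; ++row) {
                    for (int col = 0; col < 4; ++col) {
                        out.print(pose.getElement(row, col));
                        out.print(col < 3 ? " " : "\n");
                    }
                }
            }
        }

        // Read the 16 elements back into a new pose matrix.
        public static Matrix4d load(File file) throws IOException {
            Matrix4d pose = new Matrix4d();
            try (Scanner in = new Scanner(file)) {
                for (int row = 0; row < 4; ++row) {
                    for (int col = 0; col < 4; ++col) {
                        pose.setElement(row, col, Double.parseDouble(in.next()));
                    }
                }
            }
            return pose;
        }
    }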

If I have a start pose and an end pose then I can find a way to interpolate between them – I can split that line into as many sub-poses as needed. I can already send poses to the robot. So what I can do is find the happy trade-off between too many poses (the arm never gets time to accelerate) and too few (less accurate movement).
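Here is a sketch of the interpolation step, again using vecmath types and hypothetical helper names, not Robot Overlord's actual code. The position part can be linearly interpolated, but the orientation is better slerped through a quaternion so the hand turns smoothly instead of the matrix getting distorted:

    import javax.vecmath.Matrix4d;
    import javax.vecmath.Quat4d;
    import javax.vecmath.Vector3d;

    public class PoseInterpolator {
        // Return the pose a fraction alpha (0..1) of the way from start to end.
        public static Matrix4d interpolate(Matrix4d start, Matrix4d end, double alpha) {
            Vector3d p0 = new Vector3d();  start.get(p0);  // translation parts
            Vector3d p1 = new Vector3d();  end.get(p1);
            Quat4d q0 = new Quat4d();  start.get(q0);      // rotation parts
            Quat4d q1 = new Quat4d();  end.get(q1);

            Vector3d p = new Vector3d();
            p.interpolate(p0, p1, alpha);  // lerp the position
            Quat4d q = new Quat4d();
            q.interpolate(q0, q1, alpha);  // slerp the orientation

            Matrix4d result = new Matrix4d();
            result.set(q, p, 1.0);  // rebuild a pose: rotation, translation, scale 1
            return result;
        }

        // Split the move into n sub-poses, ending exactly on the end pose.
        public static Matrix4d[] subdivide(Matrix4d start, Matrix4d end, int n) {
            Matrix4d[] steps = new Matrix4d[n];
            for (int i = 1; i <= n; ++i) {
                steps[i - 1] = interpolate(start, end, (double) i / n);
            }
            return steps;
        }
    }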

Looking through my daily notes, I see I started on the new system some time before 2019-8-13, because that was when the weirdness started: I found cases where a recording saved to disk and loaded back didn't match the original. Not identically 1:1. Discombobulated. When I tried to play back a recording, the hand of the robot (J5) was always turned 90 degrees from the original recording. As I began to dig into why, I opened a whole can of worms. Bigguns.

Worm: The robot sim in Robot Overlord was wrong.

When Jin Han and I built the computer model of the robot arm in Fusion360 back in November 2018, we started with it facing the wrong direction.

Arm designed pointing at -Z

When I say it was built facing the wrong direction, I mean that I imagined that both Fusion360 and Robot Overlord would have the hand pointing at +X with up as +Z. In fact, in Fusion360 the hand points at -Z with up as +Y, and in Robot Overlord I reassembled the arm with the hand facing -Y and up as +Z. Copying the model over was stupid hard, and I didn't realize that was partly because I was doing it the wrong way, turned 90 degrees on two axes. It would have been easier if it had been upside down and backwards!

My method to solve it was to load one joint at a time starting at the base, get it turned to face upwards, then add the next link, and so on. Once all the bones were in their relative positions, I could build D-H parameters that matched.

Worm: The D-H model of the arm was wrong.

The Sixi was the first robot arm I ever coded that used Denavit–Hartenberg parameters. One of the reasons I used D-H parameters is that they're well documented and widely used by other people in robotics. I can easily use D-H to calculate Forward Kinematics (FK), where I know the angle of every joint in the arm and I want to get the pose of the hand. (A pose is a position in space and an orientation. One common way to describe this combo is with a 4×4 matrix.)
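As a sketch of how FK falls out of a D-H table (this is the textbook recipe with hypothetical names, not Sixi's actual code): each link contributes one 4×4 transform built from its four D-H parameters, and the hand pose is the product of all of them.

    import javax.vecmath.Matrix4d;

    public class DHForwardKinematics {
        // One link's transform from classic D-H parameters:
        // theta = joint angle, d = offset along Z, a = length along X, alpha = twist.
        public static Matrix4d linkTransform(double theta, double d, double a, double alpha) {
            double ct = Math.cos(theta), st = Math.sin(theta);
            double ca = Math.cos(alpha), sa = Math.sin(alpha);
            return new Matrix4d(
                ct, -st * ca,  st * sa, a * ct,
                st,  ct * ca, -ct * sa, a * st,
                 0,       sa,       ca,      d,
                 0,        0,        0,      1);
        }

        // FK: the hand pose is the product of every link transform, base first.
        // Each row of dhTable is one link: { theta, d, a, alpha }.
        public static Matrix4d forwardKinematics(double[][] dhTable) {
            Matrix4d pose = new Matrix4d();
            pose.setIdentity();
            for (double[] link : dhTable) {
                pose.mul(linkTransform(link[0], link[1], link[2], link[3]));
            }
            return pose;
        }
    }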

I could also use YouTube videos that explained how to calculate Inverse Kinematics (IK) for a robot arm with D-H parameters. The spherical wrist is especially tricky: I found the videos on spherical wrists were incomplete, and it wasn't until I stumbled on these notes from York University in Canada that I found the missing piece.

Worm: Inverse Kinematics calculations were wrong.

Of course my code didn't quite match the stuff I'd been taught, because my model was facing -Y instead of +X – a 90 degree turn. Every time the tutorials said to use atan2(y,x) I had to write atan2(-x,y).
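In code, the substitution looks like this (a toy example; x and y stand in for whatever the tutorial's formula measures in the base plane):

    public class BaseAngle {
        // The tutorials assume the arm faces +X, so the angle to a target
        // point (x, y) in the base plane is:
        static double tutorialFrame(double x, double y) {
            return Math.atan2(y, x);
        }

        // My model faces -Y, a 90 degree turn, so the inputs must be
        // rotated before the same formula gives the same angle:
        static double myFrame(double x, double y) {
            return Math.atan2(-x, y);
        }
    }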

Not yet knowing that I'd done all this stuff wrong, I had to diagnose the problem. I built a JUnit test in src.test.java.com.marginallyclever.robotOverlord.MiscTests.java:TestFK2IK(). This test sweeps the arm through the set of all joint angles, keyframe0. Every instance of keyframe0 creates some possible pose, m0. Some m0 can be solved with Inverse Kinematics to make some other keyframe1. keyframe1 can create a pose m1. m1 should ALWAYS match m0. I got reams of data, all of which told me yes, there's definitely something wrong. It took about a week of nail-biting research until I figured out and unscrambled each of those worms.
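The real test lives in MiscTests; this sketch just shows the shape of the idea, with a hypothetical ArmModel interface, and samples random keyframes instead of sweeping the full set. Note that keyframe1 is allowed to differ from keyframe0 (an arm can often reach one pose more than one way); only the poses must agree:

    import static org.junit.Assert.assertTrue;
    import java.util.Random;
    import javax.vecmath.Matrix4d;

    public class FkIkRoundTrip {
        // Hypothetical hooks; the real FK/IK live inside Robot Overlord.
        public interface ArmModel {
            Matrix4d forwardKinematics(double[] jointAngles);  // keyframe -> pose
            double[] inverseKinematics(Matrix4d pose);         // pose -> keyframe
        }

        // FK(IK(FK(keyframe0))) must reproduce the original pose.
        public static void assertRoundTrip(ArmModel arm, int samples) {
            Random rng = new Random(1);  // fixed seed so failures are repeatable
            for (int i = 0; i < samples; ++i) {
                double[] keyframe0 = randomJointAngles(rng);
                Matrix4d m0 = arm.forwardKinematics(keyframe0);
                // keyframe1 may legitimately differ from keyframe0
                // (elbow up vs elbow down) - only the poses must match.
                double[] keyframe1 = arm.inverseKinematics(m0);
                Matrix4d m1 = arm.forwardKinematics(keyframe1);
                assertTrue("m1 != m0 at sample " + i, m1.epsilonEquals(m0, 1e-6));
            }
        }

        static double[] randomJointAngles(Random rng) {
            double[] angles = new double[6];
            for (int j = 0; j < angles.length; ++j) {
                // roughly +/- 170 degrees on each of six joints
                angles[j] = Math.toRadians(rng.nextDouble() * 340.0 - 170.0);
            }
            return angles;
        }
    }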

So what does all that mean? It means I can now build meaningful recordings, and I can start the search for the right happy trade-off.