Today I’d like to show you how to use Gradient Descent to solve Inverse Kinematics for your robot arm, with a practical example. I used GDIK to improve the movement planning of the Sixi robot arm and I was blown away by how much simpler it was to understand and implement. Read on to see how well it works.

## For those new to the conversation…

There are basically two ways to move an arm. The first is forward kinematics: I turn one or more joints and figure out where the fingertip went. The second is inverse kinematics: I move the fingertip and then figure out where the joints went. The second way is very desirable because then I can make the fingertip follow a line, which is super easy to program. The problem is that the second method has, until now, been crazy hard to solve.
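To make the forward-kinematics direction concrete, here is a minimal sketch for a hypothetical two-joint arm moving in a plane. The link lengths, angle convention, and names are invented for illustration and have nothing to do with Sixi’s real geometry:

```java
// Forward kinematics for a made-up 2-link planar arm: turn the joints,
// then compute where the fingertip went. Illustrative only.
public class ForwardKinematicsDemo {
    static final double L1 = 10.0;  // length of the first link (assumed)
    static final double L2 = 7.0;   // length of the second link (assumed)

    // Given the two joint angles (radians), return the fingertip (x, y).
    static double[] fingertip(double theta1, double theta2) {
        double x = L1 * Math.cos(theta1) + L2 * Math.cos(theta1 + theta2);
        double y = L1 * Math.sin(theta1) + L2 * Math.sin(theta1 + theta2);
        return new double[]{ x, y };
    }

    public static void main(String[] args) {
        double[] tip = fingertip(Math.toRadians(30), Math.toRadians(45));
        System.out.printf("fingertip = (%.3f, %.3f)%n", tip[0], tip[1]);
    }
}
```

Inverse kinematics is the hard direction: given a desired `(x, y)`, find `theta1` and `theta2` that produce it.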

## In the Before Time

I tried to solve inverse kinematics analytically: by drawing models of the robot, solving the geometry, and programming those geometric solutions into a solver class. The analytical solution falls completely apart in special cases called singularities. If you can move your elbow without moving your hand or fingertip, that’s a singularity: a place where there’s a whole range of valid solutions.

Big sad story: we spent a really, really long time trying to make this work right. We studied the great masters, old and new. The IK solver was 350 dense lines of matrix transformations and vector algebra, and there were always places where it didn’t satisfy or straight up didn’t work.

I first heard about Gradient Descent in conversations about Machine Learning, then found this clear and simple blog post.

The gist of it is that the arm blindly wiggles toward a target. It takes the time it takes, BUT it also doesn’t have any problem with singularities. Nice!

Have you ever played “warmer, colder”? That’s Gradient Descent. For every joint in the arm, move a tiny amount. If the result is better, move that way. If it is worse, move the other way! Repeat until we are close enough to the target.
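The “warmer, colder” loop can be sketched in a few lines. This is a toy version for the same made-up 2-link planar arm as before, not Robot Overlord’s actual solver; the step size, shrink rate, and iteration cap are all arbitrary choices:

```java
// "Warmer, colder" descent: wiggle each joint a tiny amount, keep the wiggle
// only if the error shrinks, and slowly reduce the wiggle size. Toy example.
public class GradientDescentIK {
    static final double L1 = 10.0, L2 = 7.0;  // assumed link lengths

    static double[] fingertip(double t1, double t2) {
        return new double[]{
            L1 * Math.cos(t1) + L2 * Math.cos(t1 + t2),
            L1 * Math.sin(t1) + L2 * Math.sin(t1 + t2)
        };
    }

    // Squared distance from the fingertip to the target: our error term.
    static double error(double[] joints, double tx, double ty) {
        double[] tip = fingertip(joints[0], joints[1]);
        double dx = tip[0] - tx, dy = tip[1] - ty;
        return dx * dx + dy * dy;
    }

    // Returns joint angles that bring the fingertip close to (tx, ty).
    static double[] solve(double tx, double ty) {
        double[] joints = { 0.1, 0.1 };  // arbitrary starting pose
        double step = 0.05;              // how far to wiggle each joint (radians)
        for (int i = 0; i < 10000 && error(joints, tx, ty) > 1e-6; i++) {
            for (int j = 0; j < joints.length; j++) {
                double before = error(joints, tx, ty);
                joints[j] += step;                 // wiggle one way...
                if (error(joints, tx, ty) >= before) {
                    joints[j] -= 2 * step;         // colder? wiggle the other way
                    if (error(joints, tx, ty) >= before) {
                        joints[j] += step;         // neither helped: put it back
                    }
                }
            }
            step *= 0.999;  // slowly shrink the wiggle so we can settle in
        }
        return joints;
    }

    public static void main(String[] args) {
        double[] joints = solve(12.0, 5.0);
        double[] tip = fingertip(joints[0], joints[1]);
        System.out.printf("tip = (%.3f, %.3f)%n", tip[0], tip[1]);
    }
}
```

Notice the solver never needs the arm’s geometry spelled out analytically; it only needs to evaluate the error, which is why singularities don’t bother it.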

## A setback is overcome

The tutorial linked above is excellent and has great gifs. Unfortunately, it doesn’t say what to do if the robot arm, like Sixi, has a flexible wrist that has to turn to match a desired orientation. One of their follow-up posts, about tentacles, uses a method that only exists in the Unity engine called Quaternion.Angle. Since I’m not working in Unity, this is no help. I had to cook up my own answer.

Since my target and current position are both 4×4 matrices, the first attempt was

`error = sum( abs( target[x][y] - current[x][y] ) );`

Because the 3×3 rotation part of the matrix never has entries outside the range [-1…1], its contribution to the sum could not compete with the translation part, which is often much, much bigger.
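A quick numeric sketch shows the mismatch. The two poses below are invented for illustration: one is rotated a full 90°, the other is merely shifted 100 units, yet the naive error says the shift is twenty-five times worse:

```java
// Why naive sum-of-absolute-differences fails: rotation entries live in
// [-1, 1], while translation entries can be hundreds of units.
public class NaiveErrorDemo {
    static double naiveError(double[][] target, double[][] current) {
        double sum = 0;
        for (int x = 0; x < 4; x++)
            for (int y = 0; y < 4; y++)
                sum += Math.abs(target[x][y] - current[x][y]);
        return sum;
    }

    public static void main(String[] args) {
        double[][] identityAtOrigin = {
            {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
        };
        // Same position, but rotated 90 degrees about Z: a big orientation error.
        double[][] rotated90 = {
            {0, -1, 0, 0}, {1, 0, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
        };
        // Same orientation, but 100 units away: a big position error.
        double[][] translated100 = {
            {1, 0, 0, 100}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
        };
        System.out.println(naiveError(identityAtOrigin, rotated90));      // 4.0
        System.out.println(naiveError(identityAtOrigin, translated100));  // 100.0
    }
}
```

The solver would happily trade a completely wrong wrist orientation for a millimeter of position, so the two terms need to be put on the same scale.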

My solution is to treat two normals of each matrix as “handles”, like so:

```java
public double distanceToTarget() {
    Matrix4d currentMatrix = endEffector.getPoseWorld();

    // linear difference in centers
    Vector3d c0 = new Vector3d();
    Vector3d c1 = new Vector3d();
    currentMatrix.get(c0);
    targetMatrix.get(c1);
    c1.sub(c0);
    double dC = c1.lengthSquared();

    // linear difference in X handles
    Vector3d x0 = MatrixHelper.getXAxis(targetMatrix);
    Vector3d x1 = MatrixHelper.getXAxis(currentMatrix);
    x1.scale(CORRECTIVE_FACTOR);
    x0.scale(CORRECTIVE_FACTOR);
    x1.sub(x0);
    double dX = x1.lengthSquared();

    // linear difference in Y handles
    Vector3d y0 = MatrixHelper.getYAxis(targetMatrix);
    Vector3d y1 = MatrixHelper.getYAxis(currentMatrix);
    y1.scale(CORRECTIVE_FACTOR);
    y0.scale(CORRECTIVE_FACTOR);
    y1.sub(y0);
    double dY = y1.lengthSquared();

    // now sum these to get the error term.
    return dC + dX + dY;
}
```

I picture xHandle and yHandle as the red and green drag controls in the gif at the top of the post, scaled up by `CORRECTIVE_FACTOR`. Gradient Descent has to match the position of the center and both handle tips to get everything right. Since three points in space are all one needs to pin down a pose, there’s no need to test a zHandle.
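A quick check of that claim: for a right-handed, orthonormal rotation matrix, the Z axis is fully determined by X and Y, since Z = X × Y. This little demo (frame chosen arbitrarily for illustration) writes the cross product out by hand:

```java
// For a right-handed orthonormal frame, Z = X cross Y, so once the X and Y
// handles match, the Z axis matches for free.
public class ZAxisDemo {
    static double[] cross(double[] a, double[] b) {
        return new double[]{
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    public static void main(String[] args) {
        // X and Y axes of a frame rotated 90 degrees about the world Z axis.
        double[] xAxis = { 0, 1, 0 };
        double[] yAxis = { -1, 0, 0 };
        double[] zAxis = cross(xAxis, yAxis);
        System.out.printf("Z = (%.0f, %.0f, %.0f)%n", zAxis[0], zAxis[1], zAxis[2]);  // Z = (0, 0, 1)
    }
}
```

Testing a zHandle would only re-measure information the X and Y handles already carry.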

## Where’s the code?

My Gradient Descent implementation is in the dev branch of Robot Overlord. You can check out the dev branch before release by going to our GitHub page. At the time of this post, the exact file is at a very long url.