3D Printing Tutorials

Making beautiful time lapse videos of 3D prints

It would be easy to set a camera next to the printer, set a fixed interval between shots, and let it run while the printer does its thing… but I want better than that: I want the growing shape to be the only moving thing in the shot, with the printer moving as little as possible.  The difference is striking.

Without printer stabilization
With printer stabilization

This is done with a Raspberry Pi that controls both the 3D printer and the web camera.  So enough talk, let’s get to it.

The time lapse tools

  • A Raspberry Pi (mine is a B+) running OctoPrint
  • The Octolapse plugin for OctoPrint
  • A web camera
  • A 3D printer (mine is a Prusa)

The time lapse setup

  1. Follow the steps in the above video until the 6:45 mark. 
  2. Settings > Software Update > Update All.  Restart OctoPrint when it asks and reload the page.
  3. Settings > Plugin Manager > Get More… > Search: Octolapse > Install.  Be patient here: the download was fast, but the install took nearly an hour on my Pi B+.  When it asks, restart OctoPrint and reload the page.  The top of the page will now say OCTOLAPSE instead of OctoPrint.  The Octolapse settings are hidden in a drop-down menu at the top right, under the user login.  Set your Printer and Stabilization settings; I chose Back Right because my camera is at the front left.
Octolapse settings
This picture was taken during a time lapse.  Some details may differ.

Time lapse usage

  1. Load a model in Slic3r Prusa Edition and process it as normal.  I used a Tesla wall charger bracket from Thingiverse.
  2. Save the gcode file to your computer.
  3. In the OctoPrint web panel, expand the left side Files box and click Upload (not Upload to SD!).  Choose the gcode file you saved to your computer.
  4. To the right of Files is a wrench icon.  Click it and choose “sort by upload date (descending)”.  Your new file will be at the top of the list, which also includes whatever is on your Prusa’s SD card.  Time lapses will not come out right if printed from the SD card: the file must be uploaded to the Pi so that Octolapse can inject the gcode that moves the printer when it’s time to take a photo.
  5. Hit print!

Time lapse results

It looks pretty good!  For better results I will move the filament out of the shot, add a backdrop to hide irrelevant noise, and maybe play with a better camera angle.  Surely Mr. Robot Guy can build a rail for a panning time lapse, right?

See Also

OctoPrint’s list of webcams known to work

If you like these open source projects, please show your support with a donation.

If you have updates to this post, please comment below!

Robot Arm

Can Machine Learning Improve Robot Kinematics?

I’ve tried several times to hand-code inverse kinematics for industrial robot arms in Robot Overlord.  To make a long story short, there are a lot of complicated edge cases that break my brain.  Many modern methods involve a lot of guesswork in the path planning.  I believe a well-trained Machine Learning agent could do the job much better, but to date there are none I can download and install in my robot.  So I’m going to try to do it myself.  Join me!

The problem I’m trying to solve with Machine Learning

I have a 3D model of my arm in the Java app called Robot Overlord.  The 3D model is fully posable: at any given pose I can calculate the angle of every joint and the exact location and orientation of the finger tip.  I have Forward Kinematics (FK), a tool that translates joint angles into a finger tip pose, and Inverse Kinematics (IK), a tool that translates in the reverse direction.
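
To make the FK direction concrete, here is a toy Java sketch for an imaginary 2-joint planar arm.  None of these names come from Robot Overlord, and my real arm has six joints and works in 3D; this is just the idea:

    // Toy forward kinematics for an imaginary 2-joint planar arm.
    public class PlanarFK {
        static final double L1 = 10, L2 = 7;  // link lengths, arbitrary units

        // joint angles in radians -> finger tip position (x, y)
        static double[] fingerTip(double shoulder, double elbow) {
            double x = L1 * Math.cos(shoulder) + L2 * Math.cos(shoulder + elbow);
            double y = L1 * Math.sin(shoulder) + L2 * Math.sin(shoulder + elbow);
            return new double[] { x, y };
        }

        public static void main(String[] args) {
            double[] tip = fingerTip(Math.toRadians(30), Math.toRadians(45));
            System.out.printf("finger tip at (%.2f, %.2f)%n", tip[0], tip[1]);
        }
    }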

A robot arm is programmed with a series of poses: go to pose A, close the gripper, go to pose B, insert the part, go to pose C, and so on.  The robot software has to calculate the movement of the arm between poses and then adjust every motor simultaneously to drive the finger tip along the path between the two poses.  I’ve already solved the firmware part that drives six motors given sets of joint angles.

The problem is that for one finger tip pose there might be many combinations of joint angles – sometimes infinitely many.  To illustrate this, hold a finger tip on the table and move your elbow: the finger tip doesn’t move, yet the wrist and shoulder run through many possible configurations.  As the arm moves through space it can cross a singularity – one of the spots with infinite solutions – and when it comes out the other side the hand-written solution flips some or all of the arm 180 degrees around.  A smarter system would recognize the problem coming and (for instance) keep the elbow to the side.  I have tried to write better IK code but have not had any success.

My Machine Learning Plan of Attack

My plan is to use a deep neural network (DNN).  The DNN is a bit of a black box: on one side there is a layer of inputs, on the other a layer of outputs, and in between there are one or more hidden layers.  Inputs filter through the layers and come out as outputs.  The magic is in the filtration process!  The filter can be trained with gradient descent using a cost function: if I can score every input/output combination, I can let the DNN play with the virtual arm while the cost function watches and says “good, bad, better, worse” until the two work out the best possible movements.

My Machine Learning Network design

I believe my inputs should be:

  • Arm starting pose: 6 random angle values.  Because DNN inputs are values in the range 0…1, I’ll say 0 is 0 degrees and 1 is 360 degrees.
  • Arm ending pose: 3 random position values and 3 random angle values.  Position values are scaled over the total movement range of the robot.  So if the robot can move on the X axis from -50 to +50, (x/100+0.5) would give a value 0…1 (see the sketch after this list).
  • Interpolation between the two poses: 1 decimal number.  0 means at the start pose and 1 means at the end pose.
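
To make the scaling concrete, here is a small Java sketch of that normalization.  The limits are the example numbers from the text above, not my robot’s real limits, and the class and method names are made up for the example:

    // Map raw pose numbers into the 0…1 range the network inputs expect.
    public class InputScaling {
        // an angle in degrees (0..360) becomes a network input in 0..1
        static double normalizeAngle(double degrees) {
            return degrees / 360.0;
        }

        // an X position in -50..+50 becomes a network input in 0..1
        static double normalizeX(double x) {
            return x / 100.0 + 0.5;
        }

        public static void main(String[] args) {
            System.out.println(normalizeAngle(180.0));  // 0.5
            System.out.println(normalizeX(-50.0));      // 0.0
            System.out.println(normalizeX(25.0));       // 0.75
        }
    }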

I want my outputs to be:

  • Arm joint angles: 6 angle values.
  • Confidence flag: 1 number.  0 means “I definitely can’t reach that pose” and 1 means “I can reach that pose”.

The cost function should work in two steps (a sketch follows this list):

  1. make sure there is no error in the pose – that is to say, the finger tip is actually on the path where it should be, and
  2. seek to reduce joint acceleration – adjust the elbow and the wrist ahead of time to avoid the need to suddenly twist.
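
Here is a hedged sketch of that two-step cost, reusing the toy 2-joint FK from the sketch above.  The 0.1 weight is an arbitrary starting point, not a tuned value:

    // A sketch of the two-step cost.  pathError keeps the finger tip on the
    // path; jointAcceleration punishes sudden twists between time steps.
    public class CostSketch {
        // step 1: distance from the finger tip to where the path says it should be
        static double pathError(double shoulder, double elbow,
                                double targetX, double targetY) {
            double[] tip = PlanarFK.fingerTip(shoulder, elbow);
            return Math.hypot(tip[0] - targetX, tip[1] - targetY);
        }

        // step 2: sum of squared joint accelerations over three consecutive steps
        static double jointAcceleration(double[] prev2, double[] prev1, double[] now) {
            double sum = 0;
            for (int i = 0; i < now.length; ++i) {
                double a = (now[i] - prev1[i]) - (prev1[i] - prev2[i]);
                sum += a * a;
            }
            return sum;
        }

        static double cost(double[] prev2, double[] prev1, double[] now,
                           double targetX, double targetY) {
            return pathError(now[0], now[1], targetX, targetY)
                 + 0.1 * jointAcceleration(prev2, prev1, now);  // 0.1: arbitrary weight
        }
    }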

I’m going to try first with two hidden layers, then experiment from there.  My intuition says it will take at least two layers because there are two parts to the cost function.
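
For concreteness, here is a minimal sketch of such a network in DL4J.  The layer widths, activations, and the stock MSE loss are my starting guesses, not settled choices; wiring in the custom two-step cost above would take a custom loss function, which I haven’t written yet:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class ArmNetworkSketch {
        public static void main(String[] args) {
            // 13 inputs: 6 start angles + 3 end positions + 3 end angles + 1 interpolation
            // 7 outputs: 6 joint angles + 1 confidence flag, all in 0..1
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(1234)
                .list()
                .layer(0, new DenseLayer.Builder().nIn(13).nOut(64)
                    .activation(Activation.RELU).build())
                .layer(1, new DenseLayer.Builder().nIn(64).nOut(64)
                    .activation(Activation.RELU).build())
                .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                    .nIn(64).nOut(7)
                    .activation(Activation.SIGMOID)  // keeps every output in 0..1
                    .build())
                .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            System.out.println("parameters: " + net.numParams());
        }
    }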

My Machine Learning code setup

Robot Overlord’s source code is already written in Java, so I’ve added TensorFlow and DL4J.  Currently I’m still walking through the MNIST quickstart tutorials and asking the DL4J chat room for help.  The people there have already solved a few head-scratching differences between the DL4J quickstart tutorials and the up-to-date DL4J examples.  You can find my first test in Robot Overlord’s code at /src/main/java/com/marginallyclever/robotOverlord/DL4JTest.java

Next

I hope that I’ve described my challenge thoroughly.  Please feel free to look at the code and make pull requests, or comment below or in the forums with any tips and advice you might have.  If you’re feeling helpful but not sure how, please share this far and wide so that I can reach the people who have the DNN know-how.

Stay awesome!

Opinion Robot Arm

Why Would I Need Robot Arms?

Robot arms can decouple (separate) the arm doing the work from the mind thinking about the work.  Here are a few ideas of how you can benefit.

Robot arms decoupling for Health and Safety

Robots can work where humans cannot, and this is a great health benefit.  They don’t need to breathe, drink, or worry about low levels of radiation.

Perhaps the most common industrial uses are grinding, welding, and cutting robots.  Humans can’t be hurt if they are nowhere near the danger.  How do you avoid shark attacks?  Stay out of the water!  Glues, acids, paints, and other dangerous chemicals also come to mind.  Weak human flesh is weak!  OSHA probably loves robot arms.

Consider: what if affordable robot arms could be put into hospitals?  Imagine a West African country dealing with an Ebola outbreak.  Today in 2018 the doctors have to wear full-body protection and clean themselves very carefully or risk infection.  It’s a sealed, full-body plastic suit, in the African summer, while trying to care for patients!  Heat stroke is a real problem.  Boots full of sweat are just gross.  If the doctors could work remotely they would be both safer and more comfortable.

Robot arms decoupling for Time

A robot arm is always ready to work; the human operator is not.  But suppose the humans work in shifts?  Now the arm is running around the clock, employing people on both sides of the globe at the same time.

Of course there is the far more traditional time decoupling where a robot arm is given a highly repeatable and very boring job.  This saves the expensive time of the human trainer.  It also makes sure that the entire job is done with the patience, consistency, and precision needed.  No falling asleep and missing one item in the middle of a million.

Robot arms decoupling for Security

For some, a robot arm brings peace of mind.

Suppose your business has a lot of very valuable, very portable stock.  Diamonds, marijuana, whatever.  As the business owner your big concern is making sure employees don’t take your stuff home.  What if they could work from home and never know where the job is being done?  They can’t directly touch the merchandise.  They don’t know the location of the merchandise!  It’s very hard to commit a crime without means and opportunity.

Robot arms decoupling for Extreme Distance

Working remotely can be taken to any extreme.  There are robot arms on the Mars rovers Curiosity and Opportunity.  One day soon Marginally Clever robots will be assembling structures on the Moon while humans do the thinking for those arms from here on Earth.

Final Thoughts

Have you got other reasons to use robot arms?  Personal examples?  Picture to illustrate these examples?  Share below or in the forums.

Tutorials

The Best way to program an Arduino Microcontroller from the Linux Command Line Interface

Updating Arduino code over the internet hasn’t been easy for me before now.  In this post I will show you how to update 200+ different microcontrollers from the Linux Command Line Interface (CLI) using PlatformIO.  My specific example (see the Instagram video below) will be a Teensy 3.2 and a Raspberry Pi, but you can repeat it for many other combinations with a little effort.  This method is specifically for the times when you have no Graphical User Interface (GUI) access to the remote machine (no TeamViewer or similar) and all you have is a bare-bones text interface to the remote device.


News Opinion

CES 2018 Robots? Mostly Garbage

Right!  I’m in a bad mood today and the crap coming out of CES 2018 isn’t helping.  My short review is “most of these are garbage, and the rest have some brain damage.”  Here’s a quick rundown of the big-ticket items and why I do or do not like them.