Jigsolve: see all the pieces at once with the new Google Map system

The Jigsolve project is up and running at Science World in Vancouver. I felt it could be a little easier to play, so, building on the existing camera and movement code, I taught the robot a new trick: at the click of a button it traverses a grid over the table and takes photos of the whole surface. The photos are then checked into the GitHub repository along with an index.html; surf to that index page and you can browse all the photos as a Google Map.
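The grid scan described above can be sketched in a few lines. This is a hedged illustration, not the project's actual code: the step size, tile-naming scheme, and the idea of a serpentine (back-and-forth) path are my assumptions about how such a scan would be laid out.

```python
def grid_positions(cols, rows, step):
    """Yield (x, y) table coordinates in serpentine order, so the
    gantry sweeps each row and never makes a long return pass.
    cols/rows are the grid dimensions; step is the spacing in mm."""
    for row in range(rows):
        # Even rows go left-to-right, odd rows right-to-left.
        span = range(cols) if row % 2 == 0 else range(cols - 1, -1, -1)
        for col in span:
            yield col * step, row * step

def tile_name(col, row):
    # One photo per grid cell; an index page (or a Google Maps-style
    # tile viewer) can then address each tile by its grid position.
    return f"tile_{col}_{row}.jpg"
```

At each yielded position the robot would stop, take a photo, and save it under the tile name; the index.html just lays the tiles out in grid order.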



Jigsolve: First jigsaw pieces attached!

As part of filming the Kickstarter video, a team member suggested a shot of the robot putting two pieces together.  Of course!  So we got down to it… and it was the most nail-biting thrill I’ve had all day.  The first try was close, the second was on the money but didn’t press in, and the third pressed the piece into place.  Phew!

…OK, I tried to make a GIF of the action and it came out at 560 MB.  Something new to learn.


Jigsolve: It works!

The jigsolve robot works!

This morning I tested movements against the software limits, the picking, and the placing, all through the IRC bot. I then spent three hours trying to configure my (&/#*\+~€{! routers to let the camera video out to the internet.
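"Software limits" here just means clamping every requested move to the machine's travel envelope before it ever reaches the motors. A minimal sketch, assuming made-up limit values and a hypothetical `safe_move` helper (not the project's real API):

```python
# Hypothetical travel limits in mm; the real machine's envelope differs.
X_MIN, X_MAX = 0.0, 900.0
Y_MIN, Y_MAX = 0.0, 600.0

def clamp(value, lo, hi):
    """Pin a single axis value inside [lo, hi]."""
    return max(lo, min(hi, value))

def safe_move(x, y):
    """Return the (x, y) target actually sent to the motors.
    An out-of-range request is clamped rather than rejected,
    so a typo'd IRC command can't drive the gantry off the table."""
    return clamp(x, X_MIN, X_MAX), clamp(y, Y_MIN, Y_MAX)
```

With something like this in place, any command arriving over the IRC bot is safe to execute as-is.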

Above is a quick vid I shot just before I started the testing.

The networking issue is actually moot and I should let it go. I need to integrate the Twitch API and get the Kickstarter video done. Once the machine moves to its temporary home, wherever it ends up running for the duration of the game, I’ll have all-new networking… fun.

Wow! Writing a log on my phone is painful. Seriously, Hackaday, get your shit together.

Like these posts?  Tell your friends about marginallyclever.com.  Selling electronic parts keeps me building weird stuff like this.  Thanks!