Paul came to the shop with a 2′ carnival light arrow sign that had five LED light bulbs inside. He asked if I could make them light one at a time to animate the pointing effect. Read on to see how it was done.
TBH I have no idea if the camera can see the nozzle tip from there or if the image will be in focus. More than a little important! I’d be very grateful to our lizard overlords if I did NOT have to build a mirror system to get the image to the camera.
The M2*20 screws were hidden when I took this screenshot. ( •_•)>⌐■-■ (⌐■_■)
I bought and assembled an X-Carve CNC. I am now designing a few parts to hold the rotating air nozzle that will pick up jigsaw pieces (purple plate, pin, plate, yellow tube). Once I can turn and suction up pieces I’ll add a few more bits to hold a Raspberry Pi camera, which will be mounted to look down at the nozzle.
Here’s the X-Carve mostly assembled.
This has been on the backburner for too long, sitting in a corner and taunting me with its unfinished business. The problem is not the suction nozzle or the camera or the code – it’s the janky gantry I built, partly to save money and partly to learn about CoreXY systems.
Business is good and the money is there, so I’ve purchased an Inventables X-Carve machine. Once it arrives I’ll mount the suction nozzle instead of a cutter and run the system like that. Later when the thrill of jigsolving is gone I’ll repurpose the X-Carve as a traditional CNC machine.
More when the X-Carve assembly happens.
In this post I’m going to talk about what I consider a robot (and not a robot), cover some of the basics to start with robotics, and give some examples from a successful class I have been leading at a local makerspace.
Programming robots visually is proving to be a big challenge. I thought I laid all the groundwork but now I’m not so certain.
Each subclass of Robot has a matching RobotMotionState subclass (a snapshot of the robot at a moment in time). By comparing RobotMotionStates I hoped to test for robot/robot collisions.
I’ve also got a system to command a robot to move, which changes the RobotMotionState, checks that it’s internally valid (robot can reach that far) and externally valid (doesn’t hit anything in the world).
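A minimal sketch of the snapshot-and-validate idea, with hypothetical names (Robot Overlord’s real classes may differ):

```java
// Hypothetical sketch of the RobotMotionState idea described above: a snapshot
// of one robot at a moment in time, plus the two validity checks. The class
// and method names are illustrative, not Robot Overlord's real API.
public class MotionStateDemo {
    // A snapshot of a two-joint arm: joint angles in degrees.
    public static class RobotMotionState {
        public final double[] jointAngles;
        public RobotMotionState(double... angles) { jointAngles = angles.clone(); }

        // "Internally valid": every joint is within its mechanical limits.
        public boolean isInternallyValid(double min, double max) {
            for (double a : jointAngles) if (a < min || a > max) return false;
            return true;
        }
    }

    // "Externally valid": a crude robot/robot collision test made by comparing
    // two snapshots (here a toy 1D distance between first joints).
    public static boolean collides(RobotMotionState a, RobotMotionState b, double clearance) {
        return Math.abs(a.jointAngles[0] - b.jointAngles[0]) < clearance;
    }

    public static void main(String[] args) {
        RobotMotionState s1 = new RobotMotionState(10, 45);
        RobotMotionState s2 = new RobotMotionState(12, 30);
        System.out.println(s1.isInternallyValid(-90, 90)); // true
        System.out.println(collides(s1, s2, 5.0));         // true: first joints only 2 apart
    }
}
```

A real state would carry much more (gripper, tool, pose in the world), but the shape of the checks is the same.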
I’ve been stuck for months trying to build a system of RobotPrograms. Something like the Adobe Premiere or Flash timeline, where blocks of time have one command each. I couldn’t wrap my brain around it enough to come up with generalizable cases that could be written once and work for all robots.
In a dream last night I flipped things around a bit. What if a timeline has keyframes and each keyframe is a RobotMotionState?
- Move the timeline to a new position, move the robot to the new state, keyframe is automatically created.
- Move to an existing keyframe, adjust the robot, done.
- Delete an existing keyframe.
- Drag a keyframe along the timeline.
- Clone a keyframe.
Ideally there should be a way to interpolate between two RobotMotionStates. That way as the read head is moved along the timeline the robot(s) move between their states. There must also be a way to calculate the difference between two RobotMotionStates and send *just that change* to the robot as a command.
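The two operations above could look something like this toy sketch – states reduced to bare joint-angle arrays, all names made up:

```java
// A toy sketch of the two operations the timeline needs: interpolating between
// two keyframed RobotMotionStates, and diffing them to get a command. States
// are reduced to bare joint-angle arrays for illustration.
import java.util.Arrays;

public class KeyframeDemo {
    // Linear interpolation between two snapshots, t in [0,1].
    public static double[] interpolate(double[] a, double[] b, double t) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] + (b[i] - a[i]) * t;
        return out;
    }

    // The difference between two states: *just the change* to send to the robot.
    public static double[] diff(double[] from, double[] to) {
        double[] out = new double[from.length];
        for (int i = 0; i < from.length; i++) out[i] = to[i] - from[i];
        return out;
    }

    public static void main(String[] args) {
        double[] k0 = {0, 0}, k1 = {90, -45};
        // Read head halfway between the two keyframes:
        System.out.println(Arrays.toString(interpolate(k0, k1, 0.5))); // [45.0, -22.5]
        System.out.println(Arrays.toString(diff(k0, k1)));             // [90.0, -45.0]
    }
}
```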
Oy. I’m not sure this is any easier.
The Robot Overlord (RO) Java app contains lots and lots of classes, some of which are robots and their GUIs.
I want robot developers to have an easy time adding their robots to RO. A simple interface, minimal distraction, and examples to work from are Good Things. I’m told that RO can use a Service Provider Interface (SPI) to load a jar it has never seen before, a jar that contains an implementation of the interface. I would then:
- make a RobotInterface,
- make every robot I’ve already got use that interface
- move each robot to a separate jar and load said jars through SPI
- make a separate github project for each robot
- advertise these plugins via tutorials so that you can fork a repo, adjust to taste, and publish your new thing.
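In Java, SPI means the standard ServiceLoader mechanism. Here’s a hedged sketch of how discovery might look – only ServiceLoader itself is real; RobotInterface and DemoRobot are invented stand-ins for the plan above:

```java
// A hedged sketch of Java's standard SPI mechanism, ServiceLoader. A provider
// jar would ship a text file META-INF/services/<fully.qualified.RobotInterface>
// naming its implementation class; ServiceLoader finds and instantiates it.
import java.util.ServiceLoader;

public class SpiDemo {
    // The contract every robot plugin would implement (hypothetical).
    public interface RobotInterface {
        String getName();
    }

    // An example provider, normally living in its own jar.
    public static class DemoRobot implements RobotInterface {
        public String getName() { return "DemoRobot"; }
    }

    public static void main(String[] args) {
        // ServiceLoader scans provider-configuration files on the classpath
        // and instantiates each listed implementation lazily.
        for (RobotInterface robot : ServiceLoader.load(RobotInterface.class)) {
            System.out.println("found robot: " + robot.getName());
        }
    }
}
```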
What I’m discovering is that SPI is tricky tricky.
- I can’t find any online examples where someone has done this successfully.
- I have not yet got RO to load my first robot’s jar file, though I’m trying. Is the jar packaged wrong? Maybe it doesn’t say “yes, I have that interface!” in the right way.
- Is RO not even seeing the jar? I’m told SPI looks for any jar on the classpath. I printed out the classpath, then put the robot jar in one of those classpath folders and ran the app again. Nothing.
There are several possible points of failure, and none of them can be clearly eliminated. Worse, I’m not sure how these plugins would be debugged. Running RO would not give much insight into the plugins’ inner workings. Would I still be able to tweak code in real time? That is a must.
So I ask you, dear reader: am I way off track? What do?
I should note here that I do not want to have to run RO from the command line with a custom classpath. While I’m able to do it, I doubt that the people who buy robots and use them will even know how to open a command line. Imagine a grade school teacher trying to set up for their students, or your aged mother who’s used to OSX. It ain’t happening. You don’t want that tech support phone call and neither do I.
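One possible way around the classpath problem (an assumption on my part, not something RO does today): load each plugin jar explicitly from a known folder with a URLClassLoader, then hand that loader to ServiceLoader. The folder name, jar name, and RobotInterface type below are all invented for illustration.

```java
// Sketch: sidestep the classpath by loading a plugin jar explicitly.
// ServiceLoader.load(Class, ClassLoader) searches the given loader, so
// plugins can live in a known folder and the user never opens a shell.
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ServiceLoader;

public class PluginJarLoader {
    // Hypothetical plugin contract.
    public interface RobotInterface {
        String getName();
    }

    public static void main(String[] args) throws Exception {
        File jar = new File("plugins/myrobot.jar"); // hypothetical plugin location
        URL[] urls = { jar.toURI().toURL() };
        try (URLClassLoader loader =
                 new URLClassLoader(urls, PluginJarLoader.class.getClassLoader())) {
            // Passing an explicit loader makes SPI search the jar itself,
            // not just whatever happens to be on the classpath.
            ServiceLoader<RobotInterface> robots =
                ServiceLoader.load(RobotInterface.class, loader);
            for (RobotInterface robot : robots) {
                System.out.println("loaded: " + robot.getName());
            }
        }
    }
}
```

If the jar is missing or lists no providers, the loop simply finds nothing – which also makes a handy test for the “is RO even seeing the jar?” question.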
To make RO easier for developers, I have published the API at http://marginallyclever.github.io/Robot-Overlord-App/
Next, the robots currently supported by RO will be moved to separate projects with their own github repositories.
A crucial feature here is keeping it easy for the end user, who I assume (in the worst case) knows nothing about computers. They shouldn’t have to modify the classpath or open a shell. I’d like it to be as easy as Arduino’s board support installer – pick from a list of plugins online, download on demand, and go.
Next, one or more tutorials will be made showing how to fork a repo, modify the robot type to your needs, and then publish your new plugin so that RO can find and install it. Much easier than having to wade through the entire RO project and make a pull request.
After that I run out of ideas. Comment with your suggestions, please.
Here’s the latest robot I’ve added to RO: a new Stewart platform.
I’ve shown you how to use shift registers to drive an LED grid, including how to draw pictures on the screen from memory. Now we’re going to use those tools to make a game similar to the classic Tetris. I’ll show you the circuit, how to draw pieces, how to create animations, respond to user input, and more. Learning how to build complex behavior from simple parts is a great start to thinking about how robots behave.
In recent posts I’ve covered how to use LEDs and how to use shift registers and even how to combine shift registers and LEDs to control numeric displays. In this post we’re going to use 64 LEDs in an 8×8 LED grid.