Improving image resolution


Viewing 4 posts - 1 through 4 (of 4 total)

    Well…. I spent about 5 hours today trying to improve the image resolution.

    What I got was a better way to do Floyd–Steinberg dithering, so the output pictures look more like the thing they are portraying. I *was* calculating the dither with a fixed 50% threshold: anything brighter than mid-gray became white, anything darker became black. Now I take the average intensity of the entire image and use that as the dividing line.
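    The Makelangelo source isn't shown here, but the idea can be sketched in a few lines of NumPy (names and details are mine, a minimal version for illustration):

    ```python
    import numpy as np

    def dither_floyd_steinberg(gray):
        """Floyd-Steinberg dithering with an adaptive threshold.

        `gray` is a 2D float array in [0, 255]. Instead of a fixed
        50% cutoff, the threshold is the average intensity of the
        whole image, so dark images don't come out solid black.
        """
        img = gray.astype(float).copy()
        threshold = img.mean()          # the adaptive dividing line
        h, w = img.shape
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 255.0 if old >= threshold else 0.0
                out[y, x] = new
                err = old - new
                # push the quantization error onto unvisited neighbours
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return out
    ```

    The only change from the textbook algorithm is the `threshold = img.mean()` line; everything else is the standard error-diffusion kernel.
    
    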

    Can anyone explain Weighted Voronoi Stippling to me? I want to add this to the app and so far I don’t grok it.


    Ok, I think I have the code working to increase the resolution. It very quickly adds a *lot* of dots, which makes processing take much more time.

    You can grab the update 0.9.4 from


    Hi there, is there an explanation of what the image resolution setting does?

    My machine draws onto A4/letter paper, and I had the problem that the detail in the output was tied to the machine size, not the input image size, so this may be quite helpful to me.

    I also looked at Weighted Voronoi Stippling a little while ago and spent far too long reading the original paper. This is how I understand it:

    1. Randomly drop x points over the image
    2. Calculate the Voronoi diagram for the points you have (the Voronoi diagram is just a case of working out, for each pixel, which of your random points it is nearest to. Each point now has a list of pixels which are nearer to it than to any other point. If you draw along the boundaries of each point’s set of pixels, that’s your Voronoi diagram).
    3. Calculate the intensity centroid for each point’s set of pixels. Normally the centroid is the centre of mass: if you were to balance an object on a pin, the point where it balances flat is its centroid. For the intensity centroid you calculate where the balance of intensity (darkness) is. The centroid is somewhere within the set of pixels, but probably not where the random point was.
    4. Update your random points to be at the intensity centroid
    5. GOTO 2

    The points ‘migrate’ to where the image’s darkest features are. You stop the algorithm when the user gets bored, or when you detect that the movement of the points between iterations has become sufficiently small.
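    The steps above can be sketched brute-force in NumPy (no graphics-card speedup; function name and parameters are mine, just to make the loop concrete):

    ```python
    import numpy as np

    def weighted_voronoi_stipple(density, n_points=50, iterations=10, seed=0):
        """Steps 1-5 above, brute force.

        `density` is a 2D array where larger values mean darker pixels.
        Returns the (row, col) positions of the stipple points.
        """
        rng = np.random.default_rng(seed)
        h, w = density.shape
        # 1. randomly drop points over the image
        pts = rng.uniform(0, [h, w], size=(n_points, 2))
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
        weight = density.ravel().astype(float)
        for _ in range(iterations):
            # 2. Voronoi assignment: which point is each pixel nearest to?
            d2 = ((pix[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
            owner = d2.argmin(axis=1)
            # 3./4. move each point to the intensity centroid of its cell
            for i in range(n_points):
                mask = owner == i
                total = weight[mask].sum()
                if total > 0:
                    pts[i] = (pix[mask] * weight[mask, None]).sum(axis=0) / total
            # 5. repeat (a real version would also check for convergence)
        return pts
    ```

    The all-pairs distance matrix makes this O(pixels × points) per iteration, which is exactly the cost the cone-rendering trick is meant to avoid.
    
    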

    The original authors of the method also used a trick to improve the processing time in step 2. Imagine you’re looking down over all your random points. Each one has a different coloured cone hung below it. The cones can slide through each other, but you’ll only see the colour of the highest cone, as it masks the others. As you look straight down, the colour you see at any pixel will be the one belonging to the cone whose point is nearest. If you set that up on a 3D graphics card, you can assign a colour to each point, let the card render the scene, then read it back, using the colour at each pixel to look up the nearest point.
    I got an implementation of this working in the browser using three.js (happy to share if it’s of any use), but I ran out of steam when it came to intensity centroids!

    Do shout if I can be of any assistance.


    Image resolution affects the number of dots per inch when generating the dithered image, which in turn means a longer, more complex TSP line. The higher the resolution, the more dots, and the longer the TSP tour.

    That sounds great! I’m working on adding new styles of drawing to the software and having Voronoi stippling would be an awesome addition. Plus I think it could make the TSP line drawings look better.
