A long time ago I wrote a useless but fun program that attempts to programmatically recreate a source image by randomly placing randomly sized, randomly coloured rectangles onto a canvas. If the result of this random application of colour is closer to the source image than the previous canvas, it's kept; otherwise the change is discarded. Over time, it gets reasonably close to the source image.
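The accept/reject loop is tiny. Here's a rough sketch in plain JavaScript of one iteration, assuming the canvas and source are flat RGB arrays of the same size (the helper names are mine, not from the original program, and a real version would only rescore the changed pixels rather than the whole image):

    // Sum of squared per-channel differences between two flat RGB arrays.
    function distance(a, b) {
      let sum = 0;
      for (let i = 0; i < a.length; i++) {
        const d = a[i] - b[i];
        sum += d * d;
      }
      return sum;
    }

    // One iteration: paint a random rectangle in a random colour onto a copy
    // of the canvas, and keep the copy only if it is closer to the source.
    function step(canvas, source, width, height) {
      const candidate = canvas.slice();
      const x = Math.floor(Math.random() * width);
      const y = Math.floor(Math.random() * height);
      const w = 1 + Math.floor(Math.random() * (width - x));
      const h = 1 + Math.floor(Math.random() * (height - y));
      const colour = [Math.random() * 255, Math.random() * 255, Math.random() * 255];
      for (let row = y; row < y + h; row++) {
        for (let col = x; col < x + w; col++) {
          const i = (row * width + col) * 3;
          candidate[i] = colour[0];
          candidate[i + 1] = colour[1];
          candidate[i + 2] = colour[2];
        }
      }
      return distance(candidate, source) < distance(canvas, source) ? candidate : canvas;
    }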
Sort of similarly (sorry, I don't have a link handy, but could find one tomorrow if anyone is interested), someone posted an interesting program in an Ask HN thread a year or two ago. It took an input image and started with a blank circle made up of a configurable number of "pins" evenly spaced around its perimeter. Starting from one pin, it iteratively found the next pin that, when connected to the current pin by a line, made the generated image look most like the input image compared to the other possible lines, then moved on to that pin, and the next, and so on until the result roughly resembled the input. The point of the original program was to figure out the best way to connect pins inserted around a circular frame with a single long thread and then weave the image with it. The original post had pictures of the guy's finished artwork and everything.
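If I remember right, the greedy step boiled down to something like the JavaScript sketch below. This is not the OP's actual code; the line rasterisation and scoring are abstracted into drawLine and distance helpers that you'd supply yourself, and all the names are my own:

    // Greedily pick the next pin: try a line from the current pin to every
    // other pin and keep whichever brings the render closest to the input.
    function nextPin(currentPin, pins, render, input, drawLine, distance) {
      let best = null;
      let bestScore = Infinity;
      for (let p = 0; p < pins.length; p++) {
        if (p === currentPin) continue;
        const trial = drawLine(render.slice(), pins[currentPin], pins[p]);
        const score = distance(trial, input);
        if (score < bestScore) {
          bestScore = score;
          best = p;
        }
      }
      return best;
    }

    // Main loop: start at pin 0 and keep adding lines until the limit is hit.
    // The returned path is the order in which to wrap the physical thread.
    function weave(pins, input, render, maxLines, drawLine, distance) {
      let pin = 0;
      const path = [pin];
      for (let i = 0; i < maxLines; i++) {
        const next = nextPin(pin, pins, render, input, drawLine, distance);
        render = drawLine(render, pins[pin], pins[next]);
        path.push(next);
        pin = next;
      }
      return path;
    }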
I don’t remember what language he wrote it in, but I converted it to a React app that handled everything client-side using HTML canvas and bitmap images. Not to publish or anything, just for fun.
It wasn’t, but that is VERY similar haha. Wonder if this person had the same idea as I did. Namely “hey this idea is cool! I MUST build it myself to tinker with.” Lmao
After much sleuthing, I finally found the original comment. It has a link to images of the built-out project, which is nice, and in the replies the OP posts links to the GitHub gist. If there's any interest in my React version I can make that public too; just know it was quick and dirty, but it does work.
I was feeling a bit self-conscious about my 2015 JavaScript, and gave it a bit of a polish! So it's slightly more than 114 now. I'm glad you appreciated this! The whole idea is built around a very simple mechanism.
I've seen a similar thing used to generate small versions of images using arbitrary SVG primitives (large triangles or circles rather than small blocks), for use as placeholders while high-res images load. I can't find any of them in a quick search, but there were a couple of F/OSS tools that implemented the trick.
Fun. At a very high level this is similar to what Gaussian splats do. You nudge Gaussians, which start out as random blobs in 3D space, in a direction that minimizes a cost between their projections into known camera poses and the input images.
I’m pretty sure it is, except that instead of comparing the result to a source image, the result is analyzed and scored by a model that was trained to recognize images.
It's kind of the opposite. Stable Diffusion starts with random noise, changes it, asks the model if it's representative of the goal, and repeats. But there's much more to it.
Thank you very much! The browser landscape was very different when the original code was written, around 10 years ago. Maybe I'll revisit it and see if it can be optimised!
https://ajxs.github.io/pbp/