https://dreamandcolor.com/ has been a fun solo bootstrapped side project for me for the past 2.5ish years - it specializes in converting photos to coloring pages for parents, educators, etc.
I started it primarily wanting to take a shot at productizing an image diffusion model (Stable Diffusion 1.5 when I started) in a novel (at the time) way, and it ended up growing legs of its own.
She's steadily chugging along, growing about 10-20% per month with minimal marketing, exceeding all the expectations I had for the project when I set out.
You can get great results with Nano Banana nowadays (e.g. "convert this image to a coloring page"). I'd say we focus on: 1. consistency with our base style from image to image, 2. likeness (still really tough to get 100% right, but we've come a long way since our MVP), and 3. fun alternatives (South Park-inspired coloring pages, Minecraft style, etc.).
We also handle all the post-processing (upscaling, image cleaning, etc.) you need to get great printed results. With Gemini (Nano Banana) or ChatGPT you've got to pull each image out, possibly remove the watermark, set the curves/levels in Photoshop/GIMP, upscale it, and then print the page; on our site you just hit Export and download a print-ready PDF.
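To give a rough idea of what the "set the curves/levels" cleanup step does, here's a minimal sketch of a grayscale levels remap in TypeScript. The black/white set points and function names are illustrative, not the site's actual pipeline:

```typescript
// Sketch of a levels adjustment for line-art cleanup: crush near-black
// pixels to solid black lines and blow near-white scan/model noise out
// to paper white, stretching the midtones in between. Thresholds here
// are illustrative defaults, not production values.

function applyLevels(value: number, black = 60, white = 200): number {
  if (value <= black) return 0;   // shadows -> solid black
  if (value >= white) return 255; // highlights -> pure white
  // Linearly stretch the remaining midtones across the full range.
  return Math.round(((value - black) / (white - black)) * 255);
}

// Apply the remap to a whole grayscale buffer (one byte per pixel).
function cleanPage(pixels: Uint8Array): Uint8Array {
  return pixels.map((v) => applyLevels(v));
}
```

After a pass like this, upscaling and printing tend to come out much cleaner because there's no gray fuzz around the lines.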
I'm working on something similar, focused on being able to easily jump between the two (cloud and fully local) using a Bring Your Own [API] Key model – all data/config/settings/prompts are stored fully locally, and provider API calls are routed directly (they never pass through our servers). Currently using mlc-llm for fully local in-browser models and inference (Qwen3-1.7B has been working great).
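The BYOK routing idea can be sketched in a few lines of TypeScript. Everything below (the `Settings` shape, the sentinel URL for local mode, the OpenAI-style path) is my own illustrative naming, assuming a cloud provider with a bearer-token API:

```typescript
// Sketch of BYOK request routing: settings (including the API key)
// stay client-side, and cloud requests go straight to the provider's
// endpoint rather than through an intermediary server. All names and
// shapes here are illustrative assumptions.

type Mode = "local" | "cloud";

interface Settings {
  mode: Mode;
  apiKey?: string;       // user-supplied key, kept in local storage
  cloudBaseUrl?: string; // the provider's API base URL
}

function resolveEndpoint(s: Settings): { url: string; headers: Record<string, string> } {
  if (s.mode === "local") {
    // Local mode: inference runs in the browser, so there is no
    // network endpoint at all; a sentinel URL marks that branch.
    return { url: "browser://local-engine", headers: {} };
  }
  if (!s.apiKey || !s.cloudBaseUrl) {
    throw new Error("cloud mode requires an API key and base URL");
  }
  // Cloud mode: the user's key is attached directly to the provider
  // request, so it never touches a middleman server.
  return {
    url: `${s.cloudBaseUrl}/chat/completions`,
    headers: { Authorization: `Bearer ${s.apiKey}` },
  };
}
```

The nice property of this split is that the "cloud" branch is just a direct fetch from the user's browser, so the BYOK privacy claim falls out of the architecture rather than a policy.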
I mean publish it on the npm registry (https://www.npmjs.com/signup). That way, it would be easy to install, just by adding some lines to claude_desktop_config.json:
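For concreteness, the entry would look something like this (the server name and package name below are placeholders for whatever you publish):

```json
{
  "mcpServers": {
    "your-server-name": {
      "command": "npx",
      "args": ["-y", "your-npm-package-name"]
    }
  }
}
```

With the package on npm, `npx -y` fetches and runs it on demand, so users never have to clone or build anything.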
Real big fan of Continue for VS Code with Claude; you can @-reference files or add snippets for specific context, and you can disable indexing + tab autocomplete to save on resources.
Congrats to the Cursor team, but their offering just seemed too closed-box for me. Continue was a nice breath of fresh air, being OSS.
You could try out our site [1] – we use an AI model, but not a diffusion model (as OP does), to convert photos to line art. The resulting lines are less "smooth", but they better represent the original image.
We use a model, but not diffusion + ControlNet, which is what I'm assuming this website uses.
Would love any feedback on the results + site [2] – before it's asked: we do not use any user uploaded images for model training, they're only stored to show you the color version of your originally uploaded image
It still has the vibe of janky edge-detection that can be done without AI. Admittedly, less janky than pure edge detection but still not quite smooth enough that I'd pay for the results.
Also, just FYI, asking three different times across these comments for people to check out your site feels a bit spammy to me. I'd suggest you do your own 'Show HN' if you are trying to engage with the community here for feedback.
Fair, thanks for your feedback – some images work better than others in terms of artifacting. There are lots of improvements we'd like to make; for now, the best balance I've found is letting users choose their level of detail on the site and add any masking lines as needed before PDF export.
Definitely, both sizing and layout - on paper, we thought we'd be able to design our current product in a reasonable form factor with a symmetric layout.
During the modeling phase we quickly switched to an asymmetric layout to maintain the form factor we were going for.