Hacker News | mkummer's comments

https://dreamandcolor.com/ has been a fun solo bootstrapped side project of mine for the past 2.5ish years (we specialize in converting photos to coloring pages for parents, educators, etc.)

I started it primarily wanting to take a shot at productizing an image diffusion model (Stable Diffusion 1.5 when I started) in a novel (at the time) way, and it ended up growing legs of its own.

She's been steadily chugging along, growing about 10-20% per month with minimal marketing and exceeding all the expectations I had when I set out.


Well done! How does it compare to using built-in image models like nano banana?


You can get great results with nano banana nowadays (ex: "convert this image to a coloring page") - I'd say we focus on 1. consistency with our base style from image to image, 2. likeness (still really tough to get 100% right but we've come a long way since our MVP) and 3. offering fun alternatives (South Park inspired coloring pages, Minecraft style, etc)

We also handle all the post-processing (upscaling, image cleaning, etc.) needed to get great printed results. With Gemini (Nano Banana) or ChatGPT you've got to pull each image out, possibly remove the watermark, set the curves/levels in Photoshop/GIMP, upscale it, and then print the page; on our site you can just hit Export and download a print-ready PDF.
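The curves/levels and upscaling steps mentioned above can be sketched in pure Python as a toy illustration of the idea (this is not the actual pipeline; the function names and default values are made up):

```python
def apply_levels(pixels, black=40, white=215):
    """Levels adjustment (the "curves/levels" step in GIMP/Photoshop
    terms): remap 0-255 values so anything <= black clips to 0 and
    anything >= white clips to 255, pushing gray line art toward
    pure black-and-white for clean printing."""
    span = white - black
    return [max(0, min(255, round((p - black) * 255 / span))) for p in pixels]

def upscale_nearest(gray, factor=2):
    """Nearest-neighbour upscale of a 2D grayscale grid (list of rows),
    the simplest way to bump resolution for print DPI."""
    return [[px for px in row for _ in range(factor)]
            for row in gray for _ in range(factor)]
```

A real pipeline would use a proper imaging library and a smarter upscaler, but the remap-then-resize order is the gist.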


This is fantastic, a perfect example. Good stuff!


Is the web interface open sourced anywhere? Looks great, excited to try it out


Super cool and well thought out!

I'm working on something similar, focused on being able to easily jump between the two (cloud and fully local) using a Bring Your Own [API] Key model – all data/config/settings/prompts are stored locally, and provider API calls are routed directly (they never pass through our servers). Currently using mlc-llm for fully local in-browser models & inference (Qwen3-1.7B has been working great)

[1] https://hypersonic.chat/


Agreed, I'd been working on a Google Sheets specific MCP last week – just got it published here: https://github.com/mkummer225/google-sheets-mcp


This is cool. You should submit this as a 'Show HN'.

Also consider publishing it so people can use it without having to use git.


Publishing it where? It can't be a GitHub page, it's too complex; anything else incurs real costs.


I mean publish it on the npm registry (https://www.npmjs.com/signup). That way, it would be easy to install, just by adding some lines to claude_desktop_config.json:

  {
    "mcpServers": {
      "ragdocs": {
        "command": "npx",
        "args": [
          "-y",
          "@qpd-v/mcp-server-ragdocs"
        ],
        "env": {
          "QDRANT_URL": "http://127.0.0.1:6333",
          "EMBEDDING_PROVIDER": "ollama",
          "OLLAMA_URL": "http://localhost:11434"
        }
      }
    }
  }
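For `npx` to be able to run the package like this, the published package.json needs a `bin` field pointing at an executable entry script. A hypothetical sketch (the version and file path here are illustrative, not from the actual package):

```json
{
  "name": "@qpd-v/mcp-server-ragdocs",
  "version": "0.1.0",
  "bin": {
    "mcp-server-ragdocs": "build/index.js"
  }
}
```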


Continue.dev's VS Code extension is fantastic for this


Real big fan of Continue for VS Code with Claude; @ reference files or add snippets for specific context, and you can disable indexing + tab autocomplete to save on resources

Congrats to the Cursor team, their offering just seemed too closed box for me. Continue was a nice breath of fresh air being OSS


You could try out our site [1] – we use an AI model, but not a diffusion model (like OP does), to convert photos to line art; the resulting lines are less "smooth" but better represent the original image

[1] https://dreamandcolor.com/


With less "artistic license" [1]

We use a model, but not a diffusion model + ControlNet, which is what I'm assuming this website uses.

Would love any feedback on the results + site [2]. Before it's asked: we do not use any user-uploaded images for model training; they're only stored to show you the color version of your originally uploaded image

[1] https://static.dreamandcolor.com/aa828ad1-5696-4d4e-95eb-bcd...

[2] https://dreamandcolor.com/


It still has the vibe of janky edge detection that can be done without AI. Admittedly less janky than pure edge detection, but still not smooth enough that I'd pay for the results.
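For reference, the kind of no-AI edge detection being compared against can be written in a few lines – a minimal sketch using a Laplacian kernel over a plain 2D grid of 0-255 grayscale values (the function name and threshold are made up for illustration):

```python
def find_edges(gray, threshold=32):
    """Classic non-AI line art: run a 4-neighbour Laplacian over a 2D
    grayscale grid and threshold the response, producing black lines
    on a white background wherever intensity jumps sharply."""
    h, w = len(gray), len(gray[0])
    out = [[255] * w for _ in range(h)]  # start all-white
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Laplacian: large magnitude at edges, ~0 in flat regions
            lap = (4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
                   - gray[y][x - 1] - gray[y][x + 1])
            if abs(lap) > threshold:
                out[y][x] = 0  # mark as a line pixel
    return out
```

This is roughly what "pure edge detection" produces: hard, pixel-level boundaries with no smoothing, which is where the jankiness comes from.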

Also, just FYI, asking three different times across these comments for people to check out your site feels a bit spammy to me. I'd suggest you do your own 'Show HN' if you're trying to engage with the community here for feedback.


Fair, thanks for your feedback – some images work better than others in terms of artifacting. There are lots of improvements we'd like to make; for now, the best balance I've found is letting the user choose their level of detail on the site and add any masking lines as needed before PDF export


Definitely, both sizing and layout – on paper we thought we'd be able to design our current product in a reasonable form factor with a symmetric layout.

During the modeling phase we quickly switched to an asymmetric layout to maintain the form factor we were going for.

