
Since this topic is closely related to my new project, I’d love to hear your opinion on it.

I’m thinking of building an AI IDE that helps engineers write production-quality code quickly when working with AI. The core idea is to introduce a new kind of collaboration workflow.

You start with the same kind of prompt, like “I want to build this feature...”, but instead of the model making changes right away, it proposes an architecture for what it plans to do, shown from a bird’s-eye view on a 2D canvas.

You collaborate with the AI on this architecture to ensure everything is built the way you want. You’re setting up data flows, structure, and validation checks. Once you’re satisfied with the design, you hit play, and the model writes the code.
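
To make that concrete, here’s a rough sketch (in Python) of what the proposal behind the canvas might look like as data. The field names and node kinds are just illustrative on my part, not the final design; the point is that you’re editing nodes, data flows, and validation checks before any code exists.

    # Illustrative sketch only: the kind of graph a user would edit on the
    # 2D canvas before pressing play. Names and structure are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        id: str             # e.g. "TopicsService"
        kind: str           # e.g. "view", "service", "store"
        responsibility: str

    @dataclass
    class Edge:
        source: str         # node the data flows from
        target: str         # node the data flows to
        payload: str        # what travels along this data flow

    @dataclass
    class ArchitectureProposal:
        nodes: list[Node] = field(default_factory=list)
        edges: list[Edge] = field(default_factory=list)
        checks: list[str] = field(default_factory=list)  # validation checks to enforce

    proposal = ArchitectureProposal(
        nodes=[
            Node("FeedView", "view", "renders the topic feed"),
            Node("TopicsService", "service", "fetches and caches topics"),
        ],
        edges=[Edge("FeedView", "TopicsService", "topic query")],
        checks=["views talk to services, never to storage directly"],
    )
    print(len(proposal.nodes), "nodes,", len(proposal.edges), "data flow")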

Website (in progress): https://skylinevision.ai

YC video showing the prototype I just finished yesterday: https://www.youtube.com/watch?v=DXlHNJPQRtk

Karpathy’s post that talks about this: https://x.com/karpathy/status/1917920257257459899

Thoughts? Do you think this workflow has a chance of being adopted?


I quite liked the video. Hope you get to launch the product and that I can try it out some day.

The only thing I kept thinking about: if a correction is needed, you have to make it entirely by hand, finding and mapping everything yourself. If the first try was way off, I’d like to enter the correction I want from a "midpoint", so instead of fixing 50% I’d only be left with maybe 10 or 20. Don’t know if you get what I mean.


Yes, the idea is to ‘speak/write’ to the local model to fix those little things so you don’t have to do them by hand. I actually already have a fine-tuned Qwen model running on Apple’s MLX to handle some of that, but given the hard YC deadline, it didn’t make it into the demo.

Eventually, you’d say, ‘add an additional layer, TopicsController, between those two files,’ and the local model would do it quickly without a problem, since it doesn’t involve complicated code generation. You’d only use powerful remote models at the end.
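
To give a feel for how lightweight that local step is, here’s a rough sketch of the kind of call involved, using the mlx-lm package. The model name, prompt, and JSON-edit format below are just illustrative assumptions, not what’s actually in the prototype.

    # Illustrative sketch (my assumptions, not the product code): ask a locally
    # hosted Qwen model, via Apple's MLX / mlx-lm, to emit a small JSON edit to
    # the architecture graph instead of generating any application code.
    from mlx_lm import load, generate

    # Hypothetical model choice; any MLX-converted Qwen instruct model is similar.
    model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

    prompt = (
        "You edit an architecture graph. Reply with JSON only.\n"
        "Current edges: FeedView -> TopicsService\n"
        "Instruction: add an additional layer, TopicsController, "
        "between those two files."
    )

    # Illustrative expected reply: {"add_node": "TopicsController",
    #   "rewire": [["FeedView", "TopicsController"],
    #              ["TopicsController", "TopicsService"]]}
    edit = generate(model, tokenizer, prompt=prompt, max_tokens=128)
    print(edit)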


Looks like an antidote to "vibe coding". I like it. When are you planning to release something that can be tried? Is this open source?

I believe we can have a beta release in September, and yes, we plan to open-source the editor.

PS. I’m stealing the ‘antidote to “vibe coding”’ phrase :)


The video was a good intro to the concept. As long as it has repeatable memory for the corrections shown in the video, the answer to your question about adoption is “yes!”

It looks interesting, but I couldn't really follow what you were doing in the video or why. And then just as you were about to build, the video ends?

Just watched the demo video and thought it was a very interesting approach to development. I will definitely be following this project. Good luck.