Right now it's just a tool you can use or not, and if you're smart enough, you figure out very quickly when using it is efficient and when it isn't.
I do not vibe code my core architecture, because I control it and know it very well. I do vibe code some web UI I don't care about, or a hobby idea, in 1-4 hours on a weekend, because otherwise it would take me two full weekends.
I fix emails, I get feedback, etc.
When I do experiments with vibe coding, I'm very aware of what I'm doing.
Nonetheless, it's 2025. In 2026 alone we will add so much more compute, and the progress we're seeing is just crazy fast. In a few months there will be the next versions of Claude, GPT, Gemini, and co.
And this progress will not stop tomorrow. We don't know yet how fast it will advance, or when it will suddenly be a lot better than we are.
Additionally, you do need to learn how to use these tools. Through vibe coding I learned, for example, that I have to spell out specific things I would otherwise just assume the "smart" LLM will get right without being told.
Now I'm thinking about doing an experiment where I record everything about a small project I want to do, transcribe it into text, and then feed it into an LLM to structure it and build that thing for me. I could walk around outside with a headset to do it, and it would be a fun experiment to see how that feels.
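Roughly, the pipeline could look something like this. Just a sketch under my own assumptions: local Whisper for the transcription step, an OpenAI-style chat API for the structuring step, and "walk_recording.m4a" as a placeholder file name.

```python
# Sketch of the record -> transcribe -> structure idea.
# Assumes openai-whisper is installed for local transcription and the
# openai package plus an API key for the structuring step.
import whisper
from openai import OpenAI

def transcribe(audio_path: str) -> str:
    # Small local Whisper model; bigger models are more accurate but slower.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]

def structure_notes(raw_text: str) -> str:
    # Ask the LLM to turn rambling voice notes into a structured project spec.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "Turn these voice notes into a structured project "
                           "spec: goals, features, constraints, open questions.",
            },
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = transcribe("walk_recording.m4a")  # placeholder recording from the walk
    print(structure_notes(notes))
```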
I can imagine myself having some non-intrusive AR goggles where the AI sometimes shows me results and I basically just give feedback.