Claude Code is more user-friendly than Cursor with its CLI-like interface. File modifications are easy to view, and it automatically runs psql, cd, ls, and grep commands. Command output is shown in a more user-friendly fashion. Agents and MCPs are easy to organize and use.
I feel just the opposite. I think Cursor's output is actually in the realm of "beautiful." It's well formatted and shows the user snippets of code and reasoning that helps the user learn. Claude is stuck in a terminal window, so reduced to monospaced bullet lines. Its verbose mode spits out lines of file listings and other context irrelevant to the user.
What are you working on? In my industry it fails half of the time and comes up with absolute nonsense. The data just don't exist for our problems; it can only work when you guide it and ask for a few functions at most.
This sounds like my experiences with it. I'm writing embedded firmware in C and Rust. I'd describe further, but Claude seems incompetent at all aspects of this space.
This. Every "AI is great" response seems to be from someone doing web development - something I've intentionally avoided ever since I got tired of it around 2001, and hope to never have to do again.
We write C++ code in a very customized internal idiom to drive our hardware. Claude is great at filling in debugging statements / iterating over standard data structures to dump their contents, but not much else.
That seems to be a great example of precisely the sort of program an AI would be good at. A small focused product that only does one thing. Mainly gluing together other people's code. It's a polished greenfield project that does one tiny bit of focused functionality.
Interestingly, this guy has been making pretty much the same app as you, and live-streamed making it on YouTube:
Looks like he's now pivoted to selling access to his Discord server for vibe-coding tips, as I can't find a link to his product.
But if we're honest here, it's not going to take a ton of code to make that. All the functionality to do it is well documented.
Many people here could make a competitor in a week, without agentic AI, just using AI as a super-charged SO. The limiter pre-AI (aside from AI transcribing it) would have been reading and implementing/debugging all the documentation of the libraries you're using, which AI is great at circumventing.
Your product looks really good, and is an excellent example of what vibe coded AI is great at. I hope you're getting good traction.
Ah, I’ve tried that one, but I must be doing something wrong. I give it a fully specified working program, and oftentimes it gives me back one that only works 50% of the time!
Does Claude Code provide some kind of "global memory" the LLM refers to, or is this just a request you make within the LLM's context window? Just curious; I hadn't heard the term used.
EDIT: I see, you're asking Claude to modify claude.md to track your preference there, right?
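For context: Claude Code treats a CLAUDE.md file in the project root as persistent project memory that is read into context each session, which is the "global memory" being described. A hypothetical preference entry (the specific items here are made up for illustration) might look like:

```markdown
# CLAUDE.md

## Preferences
- Always run the test suite before committing.
- Prefer `rg` over `grep` when searching the codebase.
- Ask before modifying files outside `src/`.
```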
What does the playwright MCP accomplish for you? Is it basically a way for Claude to play with your app in the browser without having to write playwright tests?
Your routine seems completely normal. This is not YouTube addiction and procrastination. You work 8 hours - that's already your productivity. The idea that your personal time also needs to be optimized and highly productive is tech-bro nonsense.
After a full workday, your brain wants easy dopamine. That is being human. You're not a startup. The people posting about 5am routines are just performing productivity online.
Maybe go to bed earlier or be more mindful when it comes to digital doomscrolling.
If agentic requests originate from a server, it is easy for receivers to block the server. Write a browser plugin and you are playing in somebody else's garden.
Write a browser and now you get to pretend like a human!
YT transcripts definitely lack speaker ID. LLMs can infer speakers from context but miss nuance without proper speaker recognition.
I have been tackling this while building VideoToBe.com.
My current pipeline is Download Video -> Whisper Transcription with diarization -> Replace speaker tags with AI-generated speaker ID + human fallback.
Reliable ML speaker identification is still surprisingly hard.
For podcast summarization, speaker ID is a game-changer vs basic YT transcripts.
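The "replace speaker tags" step above boils down to aligning transcript segments with diarization turns by timestamp. A minimal sketch of that alignment, assuming Whisper-style segment dicts (`start`, `end`, `text`) and diarization turns (`start`, `end`, `speaker`) - these shapes are illustrative, not any specific library's output:

```python
# Assign each transcript segment the speaker whose diarization
# turn overlaps it the most; fall back to "UNKNOWN" if none do.

def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def label_segments(segments, turns):
    labeled = []
    for seg in segments:
        best, best_ov = "UNKNOWN", 0.0
        for turn in turns:
            ov = overlap(seg["start"], seg["end"], turn["start"], turn["end"])
            if ov > best_ov:
                best, best_ov = turn["speaker"], ov
        labeled.append({**seg, "speaker": best})
    return labeled

segments = [{"start": 0.0, "end": 4.0, "text": "Welcome to the show."},
            {"start": 4.0, "end": 9.0, "text": "Thanks for having me."}]
turns = [{"start": 0.0, "end": 3.8, "speaker": "SPEAKER_00"},
         {"start": 3.9, "end": 9.5, "speaker": "SPEAKER_01"}]

for seg in label_segments(segments, turns):
    print(f'{seg["speaker"]}: {seg["text"]}')
# SPEAKER_00: Welcome to the show.
# SPEAKER_01: Thanks for having me.
```

The human-fallback step would then only need to rename the `SPEAKER_NN` labels, not re-touch the alignment.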
I'm implementing a similar workflow for VideoToBe.com
My Current Pipeline:
Media Extraction - yt-dlp for reliable video/audio downloads
Local Transcription - OpenAI Whisper running on my own hardware (no API costs)
Storage & UI - Transcripts stored in S3 with a custom web interface for viewing
After reading your blog post, I will be testing the effect of speeding up audio on locally-hosted Whisper models. Running Whisper locally eliminates the ongoing cost concerns since my infrastructure is already a sunk cost. Speeding up audio could be an interesting performance enhancement to explore!
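One common way to speed up audio before transcription is ffmpeg's `atempo` filter. Each `atempo` stage is traditionally limited to the 0.5x-2.0x range, so larger factors are chained. A small sketch that builds the ffmpeg command (file names are placeholders; this only constructs the command, it doesn't run ffmpeg):

```python
import shlex

def atempo_chain(speed):
    """Build an ffmpeg -filter:a expression for a speed factor,
    chaining atempo stages since each stage is commonly limited
    to the 0.5-2.0 range."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    stages = []
    remaining = speed
    while remaining > 2.0:
        stages.append("atempo=2.0")
        remaining /= 2.0
    while remaining < 0.5:
        stages.append("atempo=0.5")
        remaining /= 0.5
    stages.append(f"atempo={remaining:g}")
    return ",".join(stages)

def speedup_cmd(src, dst, speed):
    """Full ffmpeg invocation as an argv list (safe for subprocess.run)."""
    return ["ffmpeg", "-i", src, "-filter:a", atempo_chain(speed), dst]

print(atempo_chain(3.0))   # atempo=2.0,atempo=1.5
print(shlex.join(speedup_cmd("in.mp3", "fast.mp3", 1.5)))
```

Whether the transcription-quality drop at 1.5x-2x is acceptable is exactly the thing worth measuring per model size.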
10% Daily time to learn about various AI tools and improve my workflow.
20% Procrastination (this might be way more than I'm willing to accept, but this is HN and I want to appear smart)
20% Writing detailed description of features and breaking down task lists, writing acceptance tests.
20% AI Coding (Claude Code)
20% Testing + Production