I appreciate the author taking the time to share his workflow even though I really dislike the way this article is written. My dislike stems from sentences like this one: "I’ve been using Claude Code as my primary development tool for approx 9 months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools." There is nothing radically different in the way he's using it (quite the opposite), and there are so many people who have written about their workflows, which are almost exactly the same; here's just one example [1]. Apart from that, the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."
There's no way I'd call what I do "radically different from what most people do" myself, under any circumstances. Yet in my last cross-team discussions at work, I realized that a whole lot of people were using AI in ways I'd consider either silly or mostly ineffective. We had a team boasting "we used Amazon Q to increase our projects' unit test coverage", and a principal engineer talking about how he uses Cursor as some form of advanced auto complete.
So when I point Claude Code at a ticket, hand it read-only access to a QA environment so it can see what the database actually looks like, chat about implementation details, and then tell it to go implement the plan, running unit tests, functional tests, linters and all that, they look at me like I have three heads.
So if you ask me, explaining reasonably easy ways to get good outcomes out of Codex or Claude Code is still necessary evangelism, at least at companies that haven't invested in tooling to do things like what Stripe does. There are still quite a few people out there copying and pasting from the chat window.
> We had a team boasting "we used Amazon Q to increase our projects' unit test coverage"
Well, are the tests good or not? Did it help the work get done faster or more thoroughly than without?
> how he uses Cursor as some form of advanced auto complete
Is there something wrong with that? That's literally what an LLM is, why not use it directly for that purpose instead of using the wacky indirect "run autocomplete on a conversation and accompanying script of actions" thing. Not everyone wants to be an agent jockey.
I don't see what's necessarily silly or ineffective about what you described. Personally, I don't find it particularly efficient to chat about and plan out a whole bunch of work with a robot for every task; often it's faster to just sketch out a design on a notepad and then go write code, maybe with advanced AI completion help to save keystrokes.
I agree that if you want the AI to do non-trivial amounts of work, you need to chat and plan out the work and establish a good context window. What I don't agree with is your implication that any other less-sophisticated use of AI is necessarily deficient.
> the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."
Any comment complaining about using AI deserves a downvote. First of all, it reads like a witch hunt: an accusation without evidence, based only on some common perceptions. Secondly, whether it’s written with AI’s help or not, that particular sentence is clear, concise, and communicative. It’s much better than a lot of the human-written mumbling prevalent here on HN.
Anyone wants to guess if I’m using AI to help with this comment of mine?
Anecdotally, I haven't been excited by anything published on Show HN recently (the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.
I'm usually the first one to bash Facebook and its main liar in chief, but the first couple of paragraphs mix damning evidence with weak stretches (as mentioned by others here). The bad stuff is genuinely bad enough.
Meta's own research found Instagram worsened body image for 1 in 3 teen girls [1]. They killed a deactivation study when results looked bad, with one employee comparing it to tobacco companies burying research [2]. They had a 17-strike policy for accounts involved in sexual solicitation [3]. And they ran growth strategies explicitly targeting kids under 13, segmenting youth into "Kid (6-10), Tween (10-13), and Teen 13+" [4].
[1] 2019 Instagram slide presentation, "Teen Mental Health Deep Dive"
[2] Meta internal deactivation study (unnamed employee quote from unsealed docs)
[3] Testimony of Vaishnavi Jayakumar, former Instagram Head of Safety and Well-being
Honestly, well done and thanks for sharing it. I also really appreciate the fact that you included multiple screenshots of the UI, as well as some of the agent plans. Reading the code and project structure, it feels like you put in the work.
Email migration is genuinely painful and I am sure there's a real market here, so I am not trying to discourage you. But why should I trust a third party with my IMAP credentials?
"Credentials encrypted in memory only and deleted immediately after migration".
I have no way to audit/verify this claim. You're essentially asking users to hand over the keys to their entire email history on faith.
I remember, about 20 years ago, writing a relatively simple tool in Perl with IMAP::Client to migrate a university's staff mail from Courier (I think) to Communigate Pro, and then another one to migrate from Communigate Pro to Microsoft Exchange a few years later.
I was at the beginning of my career, and it was pretty easy. It went almost flawlessly, moving thousands of people's email.
Where is the "painful" part? It's just moving blobs of text around.
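To make the "moving blobs of text around" point concrete, here is a hedged sketch of a per-mailbox copy using Python's stdlib `imaplib` (the mailbox name is a placeholder, and flag preservation, pagination, and error handling are deliberately omitted; this is an illustration of the idea, not the commenter's original Perl tool):

```python
import imaplib

def migrate_mailbox(src, dst, mailbox="INBOX"):
    """Copy every message in `mailbox` from src to dst, keeping the
    original internal date. src/dst are already-logged-in imaplib
    connections (e.g. imaplib.IMAP4_SSL instances)."""
    src.select(mailbox, readonly=True)
    dst.create(mailbox)  # harmless if the folder already exists
    _typ, data = src.search(None, "ALL")
    nums = data[0].split()
    for num in nums:
        # Fetch the raw RFC 822 bytes plus the metadata holding INTERNALDATE.
        _typ, parts = src.fetch(num, "(INTERNALDATE RFC822)")
        meta, raw = parts[0]
        date = imaplib.Internaldate2tuple(meta)
        dst.append(mailbox, None, date, raw)
    return len(nums)
```

The pain in real migrations tends to live at the edges this sketch skips: folder-name encodings, server-specific flags, rate limits, and quota errors mid-run.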
From the article: "I added --dry-run on a whim early on in the project. I was surprised at how useful I found it to be."
Not to be overly critical (I think it's great OP found value in adding and using --dry-run), but I am willing to bet that this was a suggestion/addition from a coding agent (most likely Claude Code/Opus). Having used it myself to build various CLI tools in different languages, it almost always creates that option when iterating on CLIs. To the point where it's almost a tell. I wonder if we're entering a moment of convergence where all the tools will have similar patterns/options because they are similarly written by agents.
> Early in the development process, when testing the incomplete application, I remembered that Subversion (the version control system after CVS, before Git) had a --dry-run option.
> I remembered how helpful that was, so I decided to add it to my command as well.
He mentions the reason he added it, and it's a compelling enough story to be true.
Of course and I am not trying to point fingers. But I do think it's interesting because it's also possible that it is confabulation. Not lying, but genuinely constructing coherent explanations for decisions whose true origins are different than we recall. I think working with coding agents has already made this immensely more common.
I had the equivalent of --dry-run in my kdecvs-build script from 2003 (where it was called --pretend), so it's not such a spontaneous idea that it must have been dreamed up by an AI.
Any time you have a script that needs to run for a long time or might involve destructive actions, having a way to run the script in a "tell me what you would do without actually doing it" mode is a fairly obvious user story to throw in.
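The pattern under discussion is small enough to sketch. A minimal, hedged version in Python with stdlib `argparse` (the action names are made up for illustration and are not from OP's tool): route every destructive step through one helper that checks the flag.

```python
import argparse

def perform(action, dry_run):
    """Announce the action; only execute it when dry_run is False."""
    if dry_run:
        print(f"[dry-run] would {action}")
    else:
        print(f"{action}...")
        # the real side effect would go here

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--dry-run", action="store_true",
                        help="show what would be done without doing it")
    args = parser.parse_args(argv)
    # Illustrative destructive actions a long-running script might take.
    for action in ("delete old snapshots", "rotate logs"):
        perform(action, args.dry_run)

if __name__ == "__main__":
    main()
```

The key design choice is a single chokepoint: once every side effect flows through `perform`, the flag can never be forgotten in one code path and honored in another.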
Again, it's completely possible that OP and you are the wonderful exceptions (untouched and uninspired by coding agents) that have been using these patterns for as long as you can remember. My comment revolved around the psychological phenomenon, not whether dry-run is a clever/novel idea. It's about how we might tell ourselves stories about the origin of our ideas when working with those tools.
And my point is simply that if the idea was obvious enough that I thought of it after first using my tool, you'll probably want to look for more realistic examples of a person thinking they came up with an idea that was actually prompted back to them in an AI chat.
This isn't something with surprising nuance like how a McDonald's milkshake serves a non-food "job to be done" during a shopper's morning commute. As evidenced by all the others in this thread pointing out other tools that do similar things, it's a fairly obvious idea to come up with after actually using a new tool.
You'd be more likely to learn about it doing product comparisons of other tools, although since there is a lot of common art for AI training to draw from, yes it is also possible to hear about it from your AI first.
It's great and reassuring to know that, in this day and age, products still get made entirely by one individual.
> Hi, Felix from the team here, this is my product - let us know what you think.
> We're on purpose releasing this very early, we expect to rapidly iterate on it.
> (We're also battling an unrelated Opus 4.5 inference incident right now, so you might not see Cowork in your client right away.)
Hey there! I find the idea super relevant and I think compliance tools that can be used like this are the way forward.
Given the timeline of the commits and some other tells (e.g. using forwardRef despite using React 19 which deprecates it), it seems like you used coding assistants extensively. That's a personal preference, but I would mention that explicitly (if that's the case), if only for intellectual honesty.
Hard disagree from me there. I don’t care what language a tool is built with; I’m interested neither in their choice of code editor nor in whether they used AI in the process. It’s a means to an end, not some flaw to be ashamed of and forced to disclose.
If something gets built with AI or not at all, that’s a net positive as far as I’m concerned.
You’re right that the commit history doesn’t fully reflect the raw development process. I did some cleanup and squashing before publishing, since this is an open-source project and I wanted the history to be readable and reviewable.

I do use coding assistants as part of my workflow, mostly for iteration speed and boilerplate, but the architectural decisions, evaluation logic, and compliance mapping are intentional and manually reasoned through.
Happy to clarify any part of the implementation or assumptions if something looks odd.
The irony of an article defending vibe coding that reads like it was vibe-written. The four "Why do we want X code?" paragraphs all end with the same sentence, the headers follow a neat structure no human would naturally land on, and it wraps up with a safely noncommittal "is this good or bad? Depends on who you are."
I long for articles that have been written in more time than it takes to read them.
The D-U-N-S requirement is the real killer here. It's a business identifier that costs money and requires a registered business entity. Even with the promised 'student/hobbyist' path, this fundamentally changes Android from a platform where anyone can distribute software to one where Google decides who's allowed to code. They're further normalizing the idea that installing software requires permission.
[1] https://github.com/snarktank/ai-dev-tasks