
I think there's something poetic about the fact that you can go on some AI prompt subreddits and see folks there making posts about turning ChatGPT into a "super business consultant," and then come over here to read about how it's actually pretty bad at that.

But back on point, I've found AI works best when given a full set of guardrails around what it should do. The other day I put it to work generating copy for my website. It typically goes off the deep end if you ask it to generate entire paragraphs, but for small pieces of text (I'd say up to 3 sentences) it does surprisingly well, and because it's outputting such small amounts of text you can quickly edit out places where it made a bad word choice or didn't describe something quite right.

But I would say I only got ChatGPT to do this after uploading 3-4 large documents that outline my product in excruciating detail.

As for coding tasks, again it works great when given maximum guardrails. I had several pages that pulled their strings out of an object, and I wanted those strings inlined back into the code and removed from the object. The object has ~500 lines in it, so doing it by hand would have taken all day, but I finished in about an hour by having AI do most of the work and just going in afterwards to verify (there's a rough sketch of what I mean below). This worked really well, but I would caution folks that it was a very, very specific use case. I've tried vibe coding once for shits and giggles, and I got annoyed and stopped after about 10 minutes. IMHO, if you're a developer at the "Senior" level, dealing with AI output is more cumbersome than just writing the damn code yourself.
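Roughly the shape of that refactor, with made-up names for illustration rather than my actual code:

    // strings.ts -- before: one big object holding UI strings (hypothetical names)
    export const STRINGS = {
      checkoutTitle: "Review your order",
      checkoutSubmit: "Place order",
      // ...roughly 500 more lines of entries
    };

    // page.ts -- before: every string is looked up through the object
    import { STRINGS } from "./strings";
    const submitButton = document.querySelector<HTMLButtonElement>("button")!;
    document.title = STRINGS.checkoutTitle;
    submitButton.textContent = STRINGS.checkoutSubmit;

    // page.ts -- after: the literals are inlined at each call site and the
    // corresponding entries are deleted from STRINGS
    document.title = "Review your order";
    submitButton.textContent = "Place order";

The AI did the tedious lookup-and-replace across all the pages; my hour was mostly the verification pass making sure nothing got swapped or dropped.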


