
I have the opposite feeling.


I don't; he's using his body like a lab. One of those experiments is likely to go terribly wrong, compared to just eating optimally, exercising, crossing your fingers, and hoping you won the genetic lottery.


i'm fine with it. happy to seed.


have you ever hired humans?


Depends on which human you tried :) Do not underestimate yourself!


I had the same thought when I was dogfooding a CLI tool I've been vibe coding. It's a CLI for deploying Docker apps on any server. And here is the exact PR "I" did.

https://github.com/elitan/lightform/pull/35

One of the advantages of vibe coding CLI tools like this is that it's easy for the AI to debug itself, and it's easy to see where it gets stuck.

And it usually gets stuck because of:

1. Errors

2. Not knowing what command to run

So:

1. Errors must be clear and *actionable*, like: `app_port` ("8080") is a string, expected a number. (See the sketch below.)

2. The command should have simple, actionable, and complete help (`--help`) sections for all commands.
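To make the first point concrete, here's a minimal sketch of what an actionable config error can look like (TypeScript; the function and field names are illustrative, not the actual lightform code):

  // Hypothetical sketch: validate a config field and fail with an
  // error that says what is wrong and how to fix it.
  function validateAppPort(config: Record<string, unknown>): number {
    const raw = config["app_port"];
    if (typeof raw !== "number" || !Number.isInteger(raw)) {
      throw new Error(
        `app_port (${JSON.stringify(raw)}) is a ${typeof raw}, expected a number. ` +
        `Set it like: app_port: 8080`
      );
    }
    return raw;
  }

An error like that tells the AI (or the human) exactly which field is wrong, what type it is, and what to change, so it can fix the config and retry without guessing.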


Sounds like it applies to humans using CLIs as well.


For sure.


Thank you! :)


I hope all our competitors find your answer inspirational!


This is the #1 feature of my Meta Ray-Ban glasses.


I follow the same structure, and I use Notion, Google Calendar, and Flow App.


What's the equivalent rate for humans?


LLMs are only capable of hallucinating, whereas humans are capable of hallucinating but are also capable of empirically observing reality. So whatever the rate is for humans, it's necessarily lower than that of LLMs.


It works well here in Sweden: https://en.wikipedia.org/wiki/Freedom_to_roam

