
They're good autocomplete, they can sometimes beat Google at searching for solutions (thanks to SEO spam), you can use one as a rubber duck, and you can have it autofill trivial stuff that would take you a few minutes to write out manually, like test scaffolding. I would never use it to actually complete a non-trivial task, and I always verify its answers. And yeah, sometimes it sucks - it's a tool with a learning curve, part of which is knowing its limitations.
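To make the "test scaffolding" point concrete, here is a sketch of the kind of boilerplate an LLM can autofill in seconds. Everything here is hypothetical, `slugify` is a made-up helper used purely for illustration:

```python
# Hypothetical example of LLM-autofilled test scaffolding.
# `slugify` is an invented helper; the tests are the trivial,
# repetitive part you'd rather not type by hand.
import re

def slugify(text: str) -> str:
    """Lowercase, then replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapsed():
    assert slugify("a -- b!!") == "a-b"

def test_empty():
    assert slugify("") == ""
```

The value isn't that the tests are clever, it's that they're mechanical, so the human review cost is low.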

The reason there's so much money and time being poured in is that even semi-competent AI is relatively new and the methods are still extremely crude, and yet it's already this advanced. This looks like a path to AGI, and if anyone were to even approach that point, it would radically change the world forever and could lead to either really good things or really bad things.

Now, GPT-4 isn't considered the best at specialized tasks. It's a master of many, but there are much smaller models that outperform it at things like complex symbolic/geometric math proofs, code generation, and translation. A lot of current ideas are about building systems that combine many of those specialists with a generalist, like the segmentation of a brain.
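The "specialists plus a generalist" idea can be sketched as a simple dispatcher. This is only an illustration of the routing concept, not any real system; all names (`math_expert`, `code_expert`, `generalist`, the keyword table) are invented:

```python
# Illustrative sketch of routing tasks to specialist models,
# falling back to a generalist when no specialist matches.
# All function names and the keyword table are hypothetical.

def math_expert(task: str) -> str:
    return f"[math expert] {task}"

def code_expert(task: str) -> str:
    return f"[code expert] {task}"

def generalist(task: str) -> str:
    return f"[generalist] {task}"

# Crude keyword routing; a real system would use a learned router.
EXPERTS = {
    "proof": math_expert,
    "code": code_expert,
}

def route(task: str) -> str:
    for keyword, expert in EXPERTS.items():
        if keyword in task.lower():
            return expert(task)
    return generalist(task)
```

Real mixture-of-experts systems learn the routing rather than hard-coding it, but the shape is the same: cheap dispatch in front of expensive specialists.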

Anyway:

> I seriously consider people who use it daily to either be straight up incompetent, or maybe their domain is so trivial that the LLM actually does well.

This kind of radical thinking about a significant proportion of enthusiastic professionals (in any industry) who aren't reporting the same experience as you is a red flag for introspection. It's easy to fall into the "enlightened me" trap.

I appreciate you asking for more information!


