
My take is that even without major improvements over the current level of LLMs, you are going to see a significant impact on jobs and society over the coming years. There's so much low-hanging fruit just in improving the UI around the existing models; I've seen prototypes that can automate a huge amount of grunt work and common manual workflows that office workers do every day. They just need to be integrated at the web browser or OS level so they can be used across apps, with some ability to preview/rollback any changes via a human sanity check in cases where the AI agent messes up. Microsoft and Apple can destroy most of these AI startups by streamlining that workflow; Microsoft is already rolling stuff out. Apple really needs to back one of the open-source models to support its on-device privacy message and ship it with its devices as an improved Siri.
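Roughly what I mean by the preview/rollback sanity check, as a toy sketch in Python (every name here, like ProposedChange and review_and_apply, is made up for illustration; this isn't any real browser or OS API):

    # The agent proposes a change, a human sees a diff-style preview, and nothing
    # is applied until they confirm. The old value is kept around for rollback.
    from dataclasses import dataclass

    @dataclass
    class ProposedChange:    # hypothetical type, not a real API
        description: str     # what the agent wants to do, in plain language
        before: str          # current value of the affected field
        after: str           # value the agent wants to write

    def review_and_apply(change: ProposedChange, document: dict, key: str) -> None:
        # Human sanity check: show the preview and wait for explicit approval.
        print(f"Agent proposes: {change.description}")
        print(f"- {change.before}")
        print(f"+ {change.after}")
        if input("Apply this change? [y/N] ").strip().lower() != "y":
            print("Skipped; nothing was modified.")
            return
        old_value = document[key]
        document[key] = change.after
        # Rollback path: restore the saved value if the human changes their mind.
        if input("Keep it? [Y/n] ").strip().lower() == "n":
            document[key] = old_value
            print("Rolled back.")

    if __name__ == "__main__":
        doc = {"subject": "meeting tmrw"}
        review_and_apply(
            ProposedChange("Fix the typo in the email subject",
                           before=doc["subject"], after="Meeting tomorrow"),
            doc, "subject",
        )
        print(doc)

The point is just that nothing touches the underlying data until the human approves it, and the previous value sticks around so the change can be undone.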

And all of this assumes we don't see the exponential improvement toward AGI that a lot of the AI bros hype up. If we continue to see jumps on the level of GPT-2 to GPT-3 to GPT-4, then the economy as we know it changes forever. I've already seen research prototypes using LLMs to drive automated reinforcement learning and optimization for robotics, so you could rapidly see automation there as well.



It's true that LLMs can be used to automate things that couldn't be automated before. But even without LLMs, there are untold millions of jobs that could be replaced with a simple shell script. Staggering inefficiency is everywhere. So maybe the GPTs won't be that much of a game-changer?


The reason the scripts are not in use is that they're costly to maintain due to sprawling edge cases. One argument for LLMs is that they can handle those ambiguities better, without expensive developer support.
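As a rough sketch of that argument (assuming the openai Python package and an API key in the environment; the regex, prompt, and model name are placeholders made up for illustration), the brittle rule keeps handling the common case and only the ambiguous leftovers fall through to a model call:

    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_invoice_total(text: str) -> str:
        # Happy path: the brittle script every office already has.
        match = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", text)
        if match:
            return match.group(1)
        # Edge-case path: let the model deal with odd layouts, typos, missing labels.
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Return only the invoice total as a number, nothing else."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content.strip()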

I for one look forward to the 737 Max onboard flight-critical LLM; it'll help with our emissions goals.


Unless the reason these jobs weren't replaced with a simple shell script was that we didn't have enough people for whom such a script was "simple".

Maybe LLMs will change that: maybe they will help write all those "simple" shell scripts?
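Something like this, maybe (a rough sketch that assumes the openai Python package and an API key; the task description and model name are placeholders), with a person still reading and testing the draft before it runs anywhere:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    task = ("Write a bash script that moves every *.csv in ~/reports older than "
            "30 days into ~/reports/archive, creating the directory if needed.")

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "Reply with a single bash script, no prose."},
            {"role": "user", "content": task},
        ],
    )

    draft = response.choices[0].message.content
    print(draft)  # a human still reviews this before it touches anything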


I think there's a lot of truth to the meme going around that our jobs are safe because effective LLM use requires people to know what they want.


I think the exact opposite is true.

The more you play with these tools, the more you realize that they are not nearly as robust as they need to be to have a significant impact on society. We need much more reliability.

We will undoubtedly continue to see new generative AI tools rolling out, but I think the end result is going to be a lot less disruptive than many claimed when ChatGPT was initially released.

Of course, if there is exponential improvement, all bets are off, but we have had nearly a year of stagnation in which GPT-4 hasn’t been beaten. This year we’ll see whether that is an actual ceiling.



