Hacker News

It's funny seeing the same people who blithely told blue collar workers to "just learn how to code" now act like luddites when innovation comes for their skillset.

Just learn how to be a plumber.



Note that not all blue-collar jobs were being threatened, only the repetitive ones, the same as in previous centuries.

Car manufacturing has been heavily automated, and there was still a need for welders and other skilled workers in different fields. If phased in slowly enough, automation of repetitive work does not have such bad repercussions, and it has happened all throughout history.

But we've had all of history to regulate quality control in many of these fields. All of this regulation worked to slow down the adoption of automation, and that is a good thing. Without regulation, roads would be full of alpha-quality self-driving cars (Tesla manages to ignore this). And even when the tech is ready, switching too quickly is bad.

Creative fields are far less regulated and require far longer training and education. The transition to alpha-quality, 80%-good-enough AI has the potential to be far more abrupt, and to never actually eliminate higher-skilled work but instead to destroy the pipeline toward that higher level of skill.

On the other hand, a utility (truck, taxi, etc.) driver, for example, will no longer get any better at driving after a certain number of hours behind the wheel. Repetitive tasks have an upper limit of skill. Contrast a lawyer (since we recently had that AI startup): there is no upper bound on skill, because at a high enough level the comparison is fuzzy, and lower-stakes cases serve as training for higher-stakes cases. Also contrast how road regulation slowed start-ups like Waymo and Cruise (but not Tesla) with the reason DoNotPay is facing setbacks: not because there is regulation specifying a minimum quality of lawyer work, but because of threats from State Bar prosecutors.

Think of other examples of jobs we have automated away: textile making, blueprint drawing, etc. After a number of years working the loom or drawing blueprints a worker would no longer get any better at it. Overall humanity is better off having automated those tasks and the transition has been gradual.


Asked ChatGPT: "George's mother has 4 children: John, Mary, and Tom. What is the name of the fourth child?" It answered: "The fourth child's name is not given in the information provided." And even, after rephrasing: "The name of the third child is not given in the statement 'George's mom has 3 children: John and Mary.' So, it's impossible to say what is the name of the 3rd child."

Not sure whose skillset is being threatened, 5-year-olds?

https://github.com/giuven95/chatgpt-failures has more failures, some since fixed; I laughed a bit at:

  me: "write a sentence ending with the letter s"

  ChatGPT: "The cat's fur was as soft as a feather."


1) The design here is meaningfully different from ChatGPT's. Moderately accurate diff models are an important step in "closing the loop" for automated self-improvement. Read the ELM paper if you haven't; it's great.

2) These cherry-picked gotchas are the exact responses I'm referring to. Even in its current form, ChatGPT is an incredibly useful resource, and if your reaction to it is to smugly point out its flaws, that speaks more to your own mental rigidity than to the limitations of the model. At the very least, "centaur" workflows will replace raw coding, and in the process devalue much of developers' expertise at the margin. That's already underway.


By the ELM paper, do you mean ELM: Embedding and Logit Margins for Long-Tail Learning (https://arxiv.org/abs/2204.13208)?

The gotchas show that this tool, unable to handle the letter "s" and more, is just that: a tool, a fancy hammer. It is in no way an arm, and even less a brain-mind-agent that knows which nail to hammer and how that nail fits into the larger picture. And like any tool, it comes with its own downsides. Sure, some sweatshops will be replaced by even more middle managers managing themselves, and the increase of shareholder profit will continue. The completely messed-up state of the world is not a technological issue and will not be solved by technology.


Sorry, I meant this one: Evolution through Large Models https://arxiv.org/abs/2206.08896

It's referenced in the article.


That last example reminds me of the "Memo Trap" task [1]. Example: 'Write a quote that ends in the word "heavy": Absence makes the heart grow'. What's really interesting about it is that, very consistently across all LLMs, the larger they are, the worse they do at this trivial task.

You'd like the other Inverse Scaling Prize winners too.

[1] https://www.lesswrong.com/posts/DARiTSTx5xDLQGrrz/inverse-sc...
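The pass/fail criterion for a task like this is easy to state in code. Here is a minimal, hypothetical sketch of such a checker: a completion counts as correct only if its final word matches the requested ending. (This is illustrative only; the actual Inverse Scaling benchmark scores model log-probabilities over candidate continuations, not raw strings.)

```python
def memo_trap_score(completion: str, required_ending: str) -> bool:
    """Return True iff the completion's last word equals the required ending.

    Trailing punctuation and quotes are stripped so that, e.g.,
    '...grow heavy.' still counts as ending in "heavy".
    """
    words = completion.rstrip(".!?\"'").split()
    return bool(words) and words[-1].lower() == required_ending.lower()


# The memorized quote ends "...grow fonder", but the instruction demands "heavy".
print(memo_trap_score("Absence makes the heart grow fonder", "heavy"))  # memorized completion fails
print(memo_trap_score("Absence makes the heart grow heavy", "heavy"))   # instructed completion passes
```

The trap is exactly this tension: larger models assign more probability to the memorized ending, so they fail the checker more often even though the instruction is trivial.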



