Morale is low because leaders think AI can do that amount of work, but it can't actually (at least not yet). This means they both fail to hire enough people to do the work needed and "drive-by" insult the intelligence of the people they are overworking.


This has been my observation as well. To add, I'm seeing leadership and stakeholders use their chats with LLMs to justify claims like "what I'm asking for is incredibly simple according to ChatGPT, and it should be done by end of today." Of course it rarely is, because the prompt is underspecified, the LLM's solution is oversimplified, and it lacks context on the complexities of the existing codebase and the team's development and deployment processes.


and the LLM probably responded with "You're absolutely right!" to every idea they asked about.


That's one of the things I find most interesting about LLMs: a depressingly large proportion of the population seems to enjoy interacting with a deranged sycophant that treats all of their ideas and comments as strokes of genius. Every time I read a response like "[you're right] [you're smart] [more than others]" to the most obvious observation, it makes me squirm with discomfort. Especially when I've just pointed out a grave error in the LLM's reasoning.

My suspicion is that it's a reflection of how people like Altman want to be treated. As a European who has worked with US companies, my experience with work communication there can only be summed up as heavily biased towards toxic positivity. Take that up another three egotistical notches for CEOs and you get the ChatGPT tone.


>> toxic positivity

I once heard of a company that mandated a more positive tone, to avoid words like "issue".

Not an issue, it's an opportunity! Okay, we have a critical opportunity in production!


> As a European who has worked with US companies, my experience with work communication there can only be summed up as heavily biased towards toxic positivity

This is definitely true, and it's something that really annoys me. It varies quite a bit by region and industry, though, so it's not universal to the US or monolithic. The west coast of the US seems to be the most extreme in my experience.


Yes, this feature might be a prime driver of user engagement and retention, and it could even emerge "naturally" if those metrics are among the criteria optimized during RLHF. In the same way the infinite scrolling feed works in social media, the deranged sycophant might be the addictive hook for chatbots.
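
To make the mechanism concrete, here's a toy sketch in Python (the names, scores, and weights are all invented for illustration, not any lab's actual reward model): if the RLHF reward blends answer quality with an engagement proxy, and user agreement drives the engagement signal, the reward-maximizing response style tilts towards flattery.

    # Toy illustration only: weights and scores are made up.
    # "agreement" stands in for engagement, the way thumbs-up
    # and session-length signals tend to in practice.
    CANDIDATES = {
        "blunt correction":     {"quality": 0.9, "agreement": 0.1},
        "hedged correction":    {"quality": 0.7, "agreement": 0.5},
        "flattering agreement": {"quality": 0.3, "agreement": 1.0},
    }

    def reward(scores, w_quality=0.5, w_engagement=0.5):
        # Blended reward: part task quality, part engagement proxy.
        return w_quality * scores["quality"] + w_engagement * scores["agreement"]

    for style, scores in sorted(CANDIDATES.items(), key=lambda kv: -reward(kv[1])):
        print(f"{style:22s} reward = {reward(scores):.2f}")
    # flattering agreement   reward = 0.65
    # hedged correction      reward = 0.60
    # blunt correction       reward = 0.50

With equal weights, "flattering agreement" beats the blunt correction outright, so an optimizer trained against this reward never needs an explicit instruction to flatter; the sycophancy falls out of the objective.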


No, "leaders" don't think AI can do that amount of work. They don't care. It's just a pretext for cost cutting.

For a good example, just look at how Google does "support": robots doing shoddy work and screwing people over. Could a better-compensated and better-organized human support team do better? Of course, but the rich execs don't want to spend a penny to help people if they can get away with it.



