Can anyone say which of the LLM companies is the least "shady"?

If I want to use an LLM to augment my work, and don't have a massively powerful local machine to run local models, what are the best options?

Obviously I saw the news about OpenAI's head of research openly supporting war crimes, but I don't feel confident about what's up with the other companies.




Just use what works for you.

E.g. I'm very outspoken about my preference for open LLM practices like those of Meta and Deepseek, and I'm very aware of the regulatory capture and ladder-pulling tactics of the "AI safety" lobby.

However, in my own operations I still rely on OpenAI, because it works better for my use case than anything else I've tried so far.

That said, when I can find an open-model-based SaaS operator that serves my needs just as well without a major switching investment, I will switch.
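
For what it's worth, the switching investment can stay small if you code against the OpenAI-style chat API: several open-model hosts expose OpenAI-compatible endpoints, so in principle only the base URL and model name change. A rough sketch in Python; the base URLs and model names here are illustrative assumptions, check each provider's docs:

    # Sketch: one wrapper, swappable backends via OpenAI-compatible endpoints.
    # Base URLs / model names below are assumptions for illustration only.
    from openai import OpenAI

    PROVIDERS = {
        "openai":   {"base_url": None, "model": "gpt-4o-mini"},
        "deepseek": {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"},
        "groq":     {"base_url": "https://api.groq.com/openai/v1", "model": "llama-3.1-70b-versatile"},
    }

    def complete(provider: str, prompt: str, api_key: str) -> str:
        cfg = PROVIDERS[provider]
        # base_url=None falls back to the SDK's default (api.openai.com)
        client = OpenAI(api_key=api_key, base_url=cfg["base_url"])
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

Switching providers then means changing one string, not rewriting the integration.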


Why not vibe-code it using OpenAI?


I'm not talking about developing the applications myself, but about using LLM services inside the products in operation.

For my "vibe coding" I've been using OpenAI, Grok and Deepseek if using small method generation, documentation shortcuts, library discovery and debugging counts as such.


Just call it hacking; we don't need new names for coding without any forethought.


Who put you in charge of naming?


The Claude people seem to be quite chill.


Agreed. They're a bit mental about "safety", but given that that's not likely to be a real issue, they're fine.


Given the growing focus on AIs as agents, I think it's going to be a real issue sooner rather than later.


“Safety” was in air quotes for a reason. The Claude people’s idea of “AI safety” risks is straight out of the Terminator movies.


Wouldn’t you rather have a player concerned with worst-case scenarios?


Defending against movie-plot threats was already shown to be a poor use of resources 20 years ago, in the war on terrorism.

https://www.schneier.com/essays/archives/2005/09/terrorists_...


These aren't worst-case scenarios. That would imply there was an actual possibility of it happening.


Claude has closed outputs and they train on your inputs. Just like OpenAI, Grok, Gemini (API), Mistral…

Who’s chill? Groq is chill


Claude and Mistral seem to be in a good ethical place.

You actually can't fault Llama either, as a standalone product. However, it's still in Zuck Paradise.


A: none of the above



