Hacker News

Closed-weight LLMs are an unethical form of theft: they privatize profits from models trained on virtually all of humanity's digital written work, and as they grow more sophisticated and start eliminating jobs, they will ultimately heighten wealth inequality.

The only path forward is open model weights, Sam Altman is on the wrong side of history here, and I hope he fails to convince regulators.




Sure, but we should avoid talking about open access as a self-evidently beneficial end rather than a means to make things that benefit society. Coming from an academic library background, I saw numerous open access arguments fall flat in front of decision makers because their champions didn't have any existing analog to point at, and hadn't bothered to consider concrete ways an inaccessible data set stopped them from improving society.

While open data is very important to me personally, as someone who knows how to use raw data to do cool things, I'll be able to connect a lot of the loose wires I see sticking out of the "for the benefit of humanity" angle once we start making genuinely end-user-friendly applications that require neither payment nor an understanding of Python module dependencies to install. I've got some sketches kicking around, but I'm tapped for time for at least a couple of months.


While I agree with your message, it needs more nuance. Second-order effects exist: most open models have been boosted with data generated by closed SOTA models. GPT-4, closed as it is, has been the teacher of a whole generation of AI models, even though OpenAI officially opposed the practice.


> The only path forward is open model weights, Sam Altman is on the wrong side of history here, and I hope he fails to convince regulators.

The third path would be an uprising against AI that would ultimately lead to an outright ban of it, a dissolution of all AI companies, and a moratorium on research on AI.


Which is impossible at this stage?


Why? I am talking about a revolution against AI.


Because people can build this stuff at home or in small underground labs, and militaries around the world will research it in secret, so how do you police it?



