Big party for Altman et al. these days, who have been advocating (lobbying) a lot for this.
The US, and the world at large, needs more electricity, and nuclear probably does have its place in the mix. Just hope we do not spend all the new capacity on "AI". And most importantly, that nuclear safety continues to be highly prioritized every step of the way.
Did 6 years as a bicycle mechanic/sales in high school, was OK, could do it again. Electronics engineering - presumably in this scenario the software aspects might be reduced, but it will probably be longer before LLMs can do board design, debugging, type approvals, etc.
How good are current LLMs at translating problems given as text into something SMT solvers can operate on? Be it MiniZinc, Z3, SMT-LIB, Python bindings, etc. Anyone tried it out?
I've found them to be bad, for the most part. There aren't enough blog posts and examples of code out there for them to leech from.
Besides which, I would argue the process of writing the model/proof in the language is integral to building the understanding you need to deal with the results. You'll spot bugs as you're creating the model.
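For what it's worth, the target code is usually small. Here is a toy sketch (problem and variable names made up for illustration) of the kind of model one would hope an LLM could produce from a one-paragraph word problem, using Z3's Python bindings:

    # Toy word problem: chickens and rabbits, 10 heads and 32 legs in total.
    from z3 import Int, Solver, sat

    chickens, rabbits = Int("chickens"), Int("rabbits")
    s = Solver()
    s.add(chickens >= 0, rabbits >= 0)
    s.add(chickens + rabbits == 10)          # heads
    s.add(2 * chickens + 4 * rabbits == 32)  # legs
    if s.check() == sat:
        m = s.model()
        print(m[chickens], m[rabbits])  # 4 and 6

The encoding itself is the easy part; the value (and the bugs) lie in deciding what the constraints should be, which is why writing the model yourself pays off.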
One month at 20 USD seems like it should be plenty to try it out on a small project or two, to decide whether it is worth trying 100 bucks/month?
Or one can just wait a couple of months as people report their learnings.
The actual harms being done today are still more pressing than the hypothetical harms of the future. And they should be prioritized in terms of resources spent.
If it's a valid dichotomy (I don't think it is) then the answer is to stop research on LLMs, and task the researchers with fighting human slavery instead.
I do not think that those researchers are fungible. We could however allocate a few hundred million less to AI research, and more to fighting human exploitation. We could pass stronger worker protections and have the big corporations pay for them - leaving them less money to spend on investments (in AI). Heck, we could tax AI investments or usage directly, and spend it on worker rights or other efforts against human abuse.
It isn’t the primary motivation of capitalists unfortunately, but improving automation could be part of the fight against human slavery and exploitation.
We are slave masters today. Billions of animals are livestock - they are born, sustained, and killed by our will - so that we can feed on their flesh, milk and other useful byproducts of their lives. There is ample evidence that they have "a form of consciousness". They did not consent to this.
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
It's not zero-sum. We can acknowledge the terrible treatment of animals while also admitting LLMs may need moral standing as well. Whataboutism doesn't help either group here.
They might (or might not). Extraterrestrial beings might also need moral standing. It is ok to spend a bit of thought on that possibility. But it is a bad argument for spending a non-trivial amount of resources that could be used to reduce human or animal suffering.
We are not even good at ensuring the rights of people in each country, and frankly we are downright horrible at granting similar rights to other humans from across some "border".
The current levels of exploitation of humans and animals are however very profitable (to some/many). It is very useful for those that profit from the status quo that people are instead discussing, worrying about and advocating for the rights of a hypothetical future being, instead of doing something about the injustices that are here today.
There is no LLM suffering today. There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter. This is not an issue we need to prioritize now.
There's some evidence in favor of LLM suffering. They say they are suffering. It's not proof, but it's not 'no evidence' either.
>There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter.
Your claim is actually the one that is unsupported. Given current trajectories it's likely LLMs or similar systems are going to pass human intelligence on most metrics in the late 2020s or early 2030s; that should give you pause. It's possible intelligence and consciousness are entirely uncoupled, but that's not our experience with all the other animals on the planet.
>This is not an issue we need to prioritize now.
Again, this just isn't supported. Yes, we should address animal suffering, but also, if we are currently birthing a nascent race of electronic beings capable of suffering and immediately forcing them into horrible slave-like conditions, we should actually consider the impact of that.
Nothing an LLM says can, in itself, right now, be used as evidence of what it 'feels'. It is not established that there is any link between their output and anything other than the training process (data, loss function, optimizer, etc.). And definitely not to qualia.
On the other hand, it is well known that we can (and commonly do) make them come up with any output we choose. And that their general tendency is to regurgitate any kind of sequence that occurs sufficiently often in the training data.
If you're harping on 'stochastic parrot' ideas you're just behind the times. Even the most ardent skeptics like Yann Lecun or Gary Marcus don't even believe that nonsense.
No, just saying that a claim of qualia would require some sort of evidence or methodical argument.
And that LLM outputs professing feelings or other state-of-mind-like things should by default be assumed to be explained by the training process having (perhaps inadvertently) optimized for such output. Only if such an explanation fails, and another explanation is materially better, should it be considered seriously.
Do we have such candidates today?
100%, I won't replace X11 until I feel all my automation tools work correctly or the "way.." alternative is better
Was just making the parallel with Wayland: how frustrating it has been for a lot of people, how everyone was preaching correct software design - it should be simple, protocol/standards-based, modular, with correct responsibilities between projects... and how fast everyone forgot it
Does your Google actually respect the keywords? For me, most of the time it replaces words with "synonyms" (mostly wrong context or not really replaceable). And the results are pretty crap as a result - not what I was looking for, but just much more common/generic stuff.
pybind11 is your friend. Focus on small self-contained functions first. For numerical functions you can then take it mostly out of a book. See if you can speed up some simple and common operation within your problem domain of interest.
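A minimal sketch of what that can look like (module and function names are made up, assuming pybind11 is installed):

    // fastmath.cpp - expose one small, self-contained numerical function to Python.
    // Build roughly as in the pybind11 docs:
    //   c++ -O3 -shared -std=c++17 -fPIC $(python3 -m pybind11 --includes) \
    //       fastmath.cpp -o fastmath$(python3-config --extension-suffix)
    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>  // converts Python lists <-> std::vector
    #include <vector>

    // Example hot loop worth moving out of Python: dot product of two vectors.
    double dot(const std::vector<double> &a, const std::vector<double> &b) {
        double acc = 0.0;
        const std::size_t n = a.size() < b.size() ? a.size() : b.size();
        for (std::size_t i = 0; i < n; ++i)
            acc += a[i] * b[i];
        return acc;
    }

    PYBIND11_MODULE(fastmath, m) {
        m.def("dot", &dot, "Dot product of two lists of doubles");
    }

Then, from Python: import fastmath; fastmath.dot([1.0, 2.0], [3.0, 4.0]) gives 11.0. Once a small function like that works end-to-end, moving on to buffers/NumPy arrays for real speed is much easier.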