This strikes me as "end game" type behavior. These companies see the writing on the wall and are willing to throw everything they have left at retaining relevance in the coming post-AGI world. If anything, I'm more alarmed than shocked at the pay packages.
I'm a designer by profession but the majority of my actual schooling was in photography. Capturing a great visual moment feels second nature to me and the process feels so involuntary that you'd rarely even notice I've taken a photo. You can absolutely live in the moment and still have something to show for it.
If you're a fan of the holographic brain, you could postulate that perhaps the brain's usual filtering mechanisms are being sufficiently degraded to allow consciousness to tap into nonlocal holographic information. Or perhaps it's a feature, returning you to the "cosmic source" of all life and knowledge.
Not sure if this was posted as humour, but I don't feel that way. In today's world, where I certainly would consider taking the blue pill, I'm having a blast with LLMs!
It has helped me learn stuff incredibly fast. I especially find them useful for filling gaps in my knowledge and exploring new topics in my own way and language, without needing to wait for an answer from a human (who could also be wrong).
Why does it feel to you that "we are entirely inside the bubble"?
In the early days of ChatGPT where it seemed like this fun new thing, I used it to "learn" C. I don't remember anything it told me, and none of the answers it gave me were anything that I couldn't find elsewhere in different forms - heck I could have flipped open Kernighan & Ritchie to the right page and got the answer.
I had a conversation with an AI/Bitcoin enthusiast recently. Maybe that already tells you everything you need to know about this person, but to hammer the point home, they made a claim similar to yours: "I learn much more and much better with AI". They also said they "fact check" things it "tells" them. Some moments later they told me "Bitcoin has its roots in Occupy Wall Street".
A simple web search tells you that Bitcoin was conceived a full two years before Occupy. How could one have its roots in the other?
It's a simple error that can be fact checked simply. It's a pretty innocuous falsity in this particular case - but how many more falsehoods have they collected? How do those falsehoods influence them on a day-by-day basis?
How many falsehoods influence you?
A very well meaning activist posted a "comprehensive" list of all the programs that were to be halted by the grants and loans freezes last week. Some of the entries on the list weren't real, or not related to the freeze. They revealed they used ChatGPT to help compile the list and then went down one-by-one to verify each one.
With such meticulous attention to detail, incorrect information still filtered through.
I guess the real learning happens outside the AI, here in real life. Does the code run? Sure, it's on my local machine and not in production, but I would never have had the patience to get "that new thing working" without AI as an assistant.
Does the food taste good? Oops, there are a few too many vegetables here; they're never gonna fit in this pan of mine. Not a big deal, next time I'll be wiser.
AI is like a hypothesis machine. You're gonna have to figure out if the output is true. A few years ago, testing any machine's "intelligence" was done pretty quickly, and the machine failed miserably. Now the accuracy is astonishing in comparison.
> How many falsehoods influence you?
That is a great question. The answer is definitely not zero. I try to live with a hacker mentality, and I'm an engineer by trade. I read news and comments, which I'm not sure is good for me. But you also need some compassion towards yourself. It's not like ripping everything open will lead to salvation. I believe the truth does set you free, eventually. But all in one's own time...
Anyway, AI is a tool like any other. Someone will hammer their fingers with it. I just don't understand the hate. It's not like we're drinking any AI Kool-Aid here. It's just like it was 30 years ago (in my personal journey): you had a keyboard and a machine, you asked it things and got gibberish. Now the conversation has just started to get interesting. Peace.
>It has helped me learn stuff incredibly fast. I especially find them useful for filling gaps in my knowledge and exploring new topics in my own way and language
and then you verify every single fact it tells you via traditional methods by confirming them in human-written documents, right?
Otherwise, how do you use the LLM for learning? If you don't know the answer to what you're asking, you can't tell if it's lying. It also can't tell if it's lying, so you can't ask it.
If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?
Occasionally an LLM helps me surface unknown keywords that make traditional searches easier, but they can't teach anything because they don't know anything. They can imagine things you might be able to learn from a real authority, but that's it. That can be useful! But it's not useful for learning alone.
And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?
I guess it all depends on the topic and on levels of trust. How can I be certain that I even have a brain? I have to take something for granted, don't I? Of course I will "verify" the "important stuff", but what is important? How can I tell? Most of the time the only thing I need is a pointer in the right direction. Wrong advice? I'll know when I get there, I suppose.
I can remember numerous things I was told while growing up, that aren't actually true. Either by plain lies and rumours or because of the long list of our cognitive biases.
> If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?
What is the old-fashioned way? I mean, people learn "truths" these days from TikTok and YouTube. Some of the stuff is actually very good; you just have to distill it against what you were taught at school. Nobody has yet declared LLMs a substitute for schools - maybe they soon will - but neither "guarantees" us anything. We could just as well be taught political agendas.
I could order a book about construction, but I wouldn't build a house without asking a "verified" expert. Some people build anyway, and we get some catastrophic results.
Levels of trust: it's all fun and games until it gets serious, like deciding what to eat or doing something that involves life-threatening physics. I take it as playing with a toy. Surely something great has come from only a few pieces of Lego?
> And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?
I guess you shouldn't do it that way. But really, so far the topics I've rigorously explored with ChatGPT for example, have been better than your average journalism. What is real?
Saying you need to verify "literally everything" both overestimates the frequency of hallucinations and underestimates the amount of wrong found in human-written sources. e.g. the infamous case of Google's AI recommending Elmer's glue on pizza was literally a human-written suggestion first: https://www.reddit.com/r/Pizza/comments/1a19s0/my_cheese_sli...
> without needing to wait for an answer from a human (who could also be wrong).
The difference is that you have some reassurance the human is not wrong: their expertise and experience.
The problem with LLMs, as demonstrated by the top-level comment here, is that they constantly make stuff up. While you may think you're learning things quickly, how do you know you're learning them "correctly", for lack of a better word?
Until an LLM can say "I don't know", I really don't think people should be relying on them as a first-class method of learning.
"Occasional nonsense" doesn't sound great, but would be tolerable.
Problem is, LLMs pull answers from their behind, just like a lazy student on an exam. "Hallucinations" is the word people use to describe this.
Those are extremely hard to spot - unless you happen to know the right answer already, at which point - why ask? And those are everywhere.
One example: recently there was quite a discussion about LLMs being able to understand (and answer) base16 (aka "hex") encoding on the fly, so I went on to try base64, gzipped base64, zstd-compressed base64, etc...
To my surprise, the LLM got most of those encodings/compressions right, decoded/decompressed the question, and answered it flawlessly.
But with a few encodings, the LLM detected base64 correctly, identified the compression algorithm correctly, and then... instead of decompressing, made up a completely different payload and proceeded to answer that. Without any hint that anything sinister was going on.
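If you want to try this kind of test yourself, here's a rough sketch of how such payloads can be generated (Python; the sample question and variable names are mine, not from the original experiment):

```python
import base64
import gzip

question = "What is the capital of France?"

# Plain base64: models generally decode this on the fly.
b64 = base64.b64encode(question.encode()).decode()

# Gzip, then base64: the kind of layered payload where a model may
# correctly identify both layers and still hallucinate the content.
gz_b64 = base64.b64encode(gzip.compress(question.encode())).decode()

print(b64)
print(gz_b64)

# Sanity check: round-trip the layered payload locally,
# so you know the true answer before pasting it into a chat.
assert gzip.decompress(base64.b64decode(gz_b64)).decode() == question
```

Having the locally verified plaintext in hand is what makes the hallucination easy to spot: the model's "decoded" payload can be compared against the real one.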
We really need LLMs to reliably calculate and express confidence. Otherwise they will remain mere toys.
I think as these things get more integrated into customer service workflows - especially for things like insurance claims - there's gonna start being a lot more buyer's remorse on everyone's part.
We've tried for decades to turn people into reliable robots, now many companies are running to replace people robots with (maybe less reliable?) robot-robots. What could go wrong? What are the escalation paths going to be? Who's going to be watching them?
I put the word "some" in front of "crypto" for a reason.
There is some crypto that we know how to break with a sufficiently large quantum computer [0]. There is some we don't know how to break that way. I might be behind the state of the art here, but last I checked, we really only knew how to use a quantum computer to break the cryptography that Shor's algorithm breaks.
Nope. Any crypto you can break with a real, physical, non-imaginary quantum computer, you can break faster with classical. Get over it. Shor's doesn't run yet and probably never will.
You are misdirecting and you know it. I don't even need to discredit that paper. Other people have done it for me already.
This is like asking whether $500 billion to fund warp drives would yield better returns.
Money can't buy fundamental breakthroughs: money buys you parallel experimental volume - i.e. more people working from the same knowledge base, and presumably an increase in the chance that one of them advances the field. But at any given point in time, everyone is working from the same baseline (money can also improve this - by funding things you can ensure knowledge is distributed more evenly, so everyone is working at the state of the art rather than playing catch-up in proprietary silos).
True quantum computing in the sense that most people would imagine it, using individual qubits in an analogous (ish) way to classical computers, has not reached a useful scale. To date only “toy problems” to demonstrate theoretical results have been solved.
Saving the planet doesn't make the stock prices go up, so no one will care.
Private companies are now getting their own nuclear power stations to power AI. We can't get new nuclear power for public use, but private for profit initiatives? Absolutely.
> Saving the planet doesn't make the stock prices go up, so no one will care.
I mean, it _could_, if you set up a market structure to incentivize it. CAISO (California) has done this, and now solar and storage costs are plummeting and associated industries are booming as the solar+storage solution starts outcompeting other forms of energy production.
Heck, solar+storage is even booming in ERCOT (Texas), which has no specific market incentives for it. Their spot market swings so wildly that storage makes money on power arbitrage and transmission easing.
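The arbitrage math behind this is simple enough to sketch. The prices and battery parameters below are made up for illustration; they are not real ERCOT data:

```python
# Toy battery arbitrage on a volatile spot market.
# Hypothetical $/MWh prices over one day's trading intervals.
hourly_prices = [22, 18, 15, 20, 35, 60, 140, 95, 40, 25]

capacity_mwh = 10   # illustrative battery size
efficiency = 0.90   # assumed round-trip efficiency

# Charge at the cheapest interval, discharge at the priciest one.
buy_price = min(hourly_prices)
sell_price = max(hourly_prices)

cost = capacity_mwh * buy_price
revenue = capacity_mwh * efficiency * sell_price
profit = revenue - cost

print(f"buy at ${buy_price}/MWh, sell at ${sell_price}/MWh, "
      f"profit ${profit:.0f}")
```

The wider the daily price swing, the bigger the spread between `buy_price` and `sell_price`, which is exactly why volatile markets reward storage even without targeted incentives.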
Any nuclear power plants being built decreases the marginal cost of building another. If private companies are willing to front the cost of building the first one in recent times, it may help.
I have such fond memories of Dungeon Keeper, Dungeon Keeper 2, Fable, Black & White, Populous.
I think the biggest takeaway from Molyneux's work is that regardless of how seriously he talked the games up, the games themselves never once took themselves too seriously. There was a level of playful whimsy that just didn't exist back then (and probably still doesn't today). You could tell he wanted to say more and do more, but was always limited by the technology available at the time. It felt like he was searching for something in the games he developed, and I was always happy to go searching with him.
Populous! That's a name I haven't heard for a long time. It was so much fun building out the land, smiting people, and then saving the day after earthquakes, etc. Good times.
I have yet to see a modern version that was half as interesting.
I’ve never understood the people who took him at face value but I’ve also never understood people who didn’t like the guy.
He made some of the most interesting, original, and fun games out there. What, he can't puff up his chest once in a while? If anything, I want more games from him.
When it's vicious and cunning, sure. But the guy's pathological, and still extremely endearing despite that.
I had a high school friend who was lying all the time. His father had access to unheard of cpu prototypes and whatever else. We nicknamed him "C. The Mythomaniac" and called out his bullshit everyday.
I don't think people started really disliking him until Curiosity and Godus. And IMO neither of those are good games.
He also promised the winner of Curiosity 1% of all revenue from Godus, then retconned the deal to be 1% of profit after the game failed to become profitable.
Actually it was retconned to 1% of the profit contingent on a specific feature, which they then never implemented. It was just a massive PR scam.
Of course, I am not exactly excusing him. But as a customer you should also not trust advertisement blindly, especially when it is pie-in-the-sky, too-good-to-be-true stuff. Most reviews of the time would not miss the opportunity to joke about Molyneux's serial overpromising. It was a running gag before long.
> But as a customer you should also not trust advertisement blindly
That’s an excuse for his lies. As a customer you should be able to trust advertisement; that’s why we have advertising laws. Blaming people for having believed a scam artist's lies is madness. The fact that he made games you like doesn’t excuse his blatant lies to investors and customers alike, and you really should listen to yourself and stop making excuses for him.
Video games are simply artistic creations meant to entertain, as are other forms of entertainment media. Were you formerly under the impression that video games were a portal to another reality?