Hacker News

"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, DeepMind researcher.

Maybe your calibration isn't poor. Maybe they really are all wrong. There's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people really, genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.



The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.

The argument presented in the quote there is: "everyone at the AI foundation companies is putting money into AI, therefore we must be near AGI."

The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.


Absolutely. Look at how Sam Altman speaks.

If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc.). It encourages people to misjudge the capabilities of the models and ascribe to them all the traits of AI they've seen in TV and movies.

Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.

[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...

[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...


Oh, surely Larry Ellison is a trustworthy sort of fellow, right? :)


>The problem is, they are hugely incentivised to hype to raise funding.

Hype is extremely normal. Everyone with a business gets the chance to hype for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour in billions.

Satya just said he has "his 80 billion ready". Is Microsoft an "AI foundation company"? Is Google? Is Meta?

The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.

And I'm not saying this means the investment is guaranteed to be worth it.


The newest US president announced this within 48 hours of assuming office. Hype alone couldn't set such a big wheel in motion.


Again, that’s a discussion about enthusiasm not technology.

I just want an objective opinion from someone who has a deep understanding of the cutting edge.

It’s maddening to try to plan for a future which everyone is incentivised and permitted to fabricate.


> there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people really, genuinely believe they're going to get there.

I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.


NFTs couldn't pass the Turing test. LLMs can, something I didn't expect to witness in my lifetime.

The two are qualitatively different.


Worth pointing out: the Turing test is pretty much just a thought experiment. Turing never considered it a test of "intelligence", or any other human quality. Many people have criticized its use as a measure of such.

[1] https://en.wikipedia.org/wiki/Turing_test#Weaknesses


Yep - the "value" of a Turing-capable system has been questioned for a while now. We watched Markov chains and IRC bots clear the Turing test on a regular basis in the mid-2000s, and all we got out of that was better automated scamming.

Even now, as we have fully capable conversational models, we don't really have any great immediate applications. Our efforts at making them "think" are yielding marginal returns.


I'm not so sure it passes the Turing test, since you can trivially determine that the conversation partner is a machine by asking it a trick question or offering it a "jailbreak" style prompt.


You're missing the point. The hype is the same, because the incentives are the same.

I agree with you that there is significantly more there there with AI, but I agree with the parent that the hype cycles are essentially indistinguishable.


I agree, AI is fucking spectacular and NFTs have no substance. But at the same time, neither AI nor NFTs have substantially affected my life so far, so I experience a very weird cognitive dissonance when the AI hype crowd gasps on twitter. This is exactly the same feeling I had when I felt like I was the only one in my twitter bubble who didn't think NFTs were the shizzle.

I mean, AI the tech can be spectacular and the hype can be overblown, right? I'm not even sure that the hype is overblown, but it sure feels like the kind of hype that we'll say, a few years from now, was overblown.


It's identical energy. A significant number of people are attaching their hopes and dreams to a piece of technology while deluding themselves about the technical limitations of that technology. It's all rooted in greed. Relatively few are in it to push humanity forward, most are just trying to "get theirs."


Well Crypto had nowhere near the uptake [0] and investment (even leaving this announcement aside, several of the biggest tech giants are pouring billions into this).

At any rate, I'm not saying this means that all this investment is guaranteed to pay off.

[0] With 300 million weekly active users/1 billion messages per day and #8 in visits worldwide the last few months just 2 years after release, ChatGPT is the software product with the fastest adoption ever.


Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"


Yeah, people don't rush to say "don't waste money on this dead end", but think about it for a moment.

A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, and your funders must be really damn convinced it's worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project scales of investment. Careers have been nuked for far less.


The Manhattan project cost only $2 billion (about $30 billion adjusting for inflation to today).


It would probably be more reasonable to adjust for US GDP. That would put $2 billion back then at around the same as $250 billion today. So only about 2x off.
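The GDP scaling works out like so (a rough sketch; the ~$228B 1945 GDP and ~$28T present-day GDP figures are approximations used for illustration):

```python
# Scale the Manhattan Project's ~$2B cost by its share of US GDP,
# rather than by inflation. Both GDP figures are rough assumptions.
cost_1945 = 2e9        # Manhattan Project cost, ~1945 dollars
gdp_1945 = 228e9       # approximate 1945 US GDP
gdp_today = 28e12      # approximate present-day US GDP

share_of_gdp = cost_1945 / gdp_1945          # ~0.9% of 1945 output
equivalent_today = share_of_gdp * gdp_today  # same share of today's output

print(f"${equivalent_today / 1e9:.0f}B")     # ≈ $246B, i.e. roughly $250B
```

Which is how a $500B announcement ends up only about 2x a GDP-scaled Manhattan Project.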


We're talking about Masayoshi Son here lol.


I am not qualified to make any assumptions but I do wonder if a massive investment into computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.

Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?


I'm not a cryptographer, nor am I good with math (actually I suck badly; consider yourself warned...), but I am curious about how threatened password hashes should feel if the 'AI juggernauts' suddenly fancy themselves playing on the red team, so I quickly did some (likely poor) back-of-the-napkin calculations.

'Well known' password notwithstanding, let's use the following as a password:

correct-horse-battery-staple

This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password was protected by SHA2-256 (assuming no cryptographic weaknesses exist; I haven't checked, this is purely academic), and that at least 50% of hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).

I looked up a random benchmark for hashcat and found an average of 20 gigahashes/second (GH/s) for a single RTX 4090.

If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s (combined firepower of 2,000 GH/s, i.e. 2 × 10^12 hashes per second) and absolutely perfect running conditions, it would take around eleven sextillion (1.1 × 10^22) years to crack. Earth will be long gone by the time that rolls around.

Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): around one quintillion (1.1 × 10^18) years.

Using some recommended password specifications from NIST - 15 characters comprising upper- and lower-case letters, numbers, and special characters - let's try:

dXIl5p*Vn6Gt#BH

Despite the higher complexity, this password only just ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around 326 years to reach a 50% chance of success.

More alarming still is when 1,000,000 RTX 4090s get sicced on the shorter hashed password: roughly 12 days to reach that same 50% chance.

I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.
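For anyone who wants to check my napkin, the whole estimate fits in a few lines of Python (the 20 GH/s per-GPU figure is the hashcat benchmark above, and the ~32-symbol alphabet for the passphrase is an assumption that reproduces the 1.39 × 10^42 keyspace):

```python
# Expected brute-force time: half the keyspace, divided by total hash rate.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
GH = 1e9  # one gigahash/second, in hashes per second

def crack_time_years(keyspace: float, gpus: int, ghs_per_gpu: float = 20.0) -> float:
    """Expected years to crack one hash, testing 50% of the keyspace on average."""
    rate = gpus * ghs_per_gpu * GH        # total hashes per second
    return (keyspace / 2) / rate / SECONDS_PER_YEAR

# 28-char passphrase over a ~32-symbol alphabet: 32**28 ≈ 1.39e42 combinations
print(crack_time_years(32**28, 100))          # ≈ 1.1e22 years
print(crack_time_years(32**28, 1_000_000))    # ≈ 1.1e18 years

# 15-char "complex" password: ~4.11e22 combinations
print(crack_time_years(4.11e22, 100))                   # ≈ 326 years
print(crack_time_years(4.11e22, 1_000_000) * 365.25)    # ≈ 12 days
```

(The moral survives the arithmetic: length beats complexity, and GPUs only matter against short keyspaces.)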

All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.


>Maybe they really are all wrong

All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.


You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.

And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.


Doesn't OpenAI explicitly have a "definition" of AGI that's just "it makes some money"?


>You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught in the hype is just blatantly false.

No, it doesn't have to be literally everybody to make the point.


Here's why I know that OpenAI is stuck in a hype cycle. For all of 2024, the cry from employees was "PhD level models are coming this year; just imagine what you can do when everyone has PhD level intelligence at their beck and call". And, indeed, PhD level models did arrive...if you consider GPQA to be a benchmark that is particularly meaningful in the real world. Why should I take this year's pronouncements seriously, given this?

OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint... it's not model capability in a vacuum).

Yann indeed does believe that AGI will arrive in a decade, but the important thing is that he is honest that this is an uncertain estimate and is based off of extrapolation.


I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money on them as well.

It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.


Yeah, in my mind, the distinction worth making is where the inflection point from exponential growth to plateau in the s-curve of usefulness is. Have we already hit it? Are we going to hit it soon? Is it far in the future? Or is it exponential from here straight to "the singularity"?

Hard to predict!

If we've already hit it, this has already been a very short period of time during which we've seen incredibly valuable new technology commercialized, and that's nothing to sneeze at, and fortunes have and will be rightly made from it.

If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.

But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.


It's obviously not taken to mean literally everybody.

Whatever LeCun says (and really, even he has said "AGI is possible in 5 to 10 years" as recently as 2 months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and is pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to focus their money on tells a whole lot.


Who says they will stick to autoregressive LLMs?


I think it will be in between, like most things end up being. I don't think they are charlatans at all, but I think they're probably a bit high on their own supply. I think it's true that "everything is about to change", but I think that change will look more like the status quo than the current hype cycle suggests. There are a lot of periods in history when "everything changed", and I believe we're already a number of years into one of those periods now, but in all those cases, despite "everything" changing, a perhaps surprising number of things remained the same. I think this will be no different than that. But it's hard, impossible really, to accurately predict where the chips will land.


My prediction is Apple loses to OpenAI, who release a phone like the one in the movie Her. She is seen on your lock screen, a la a FaceTime-call UI/UX, and she can be skinned to look like whoever; i.e. a deceased loved one.

She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from... what's my friend's bday? his agent tells yours) automagically, and she is like a friend. Always there for you, at your beck and call, like in the movie Her.

Zuckerberg's glasses, which cannot take selfies, will only be complementary to our AI phones.

That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).


My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to be available whenever your device is. If I can't rely on it when my signal is weak, or the network/service is down or saturated, its chances of working itself into people's core routines are minimal. So either the model runs locally, in which case I'd argue OpenAI have no moat, or they uncover some secret sauce they're able to keep contained to their research labs and data centres that's simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they've come to depend on isn't available to them. Let's also not discount the fact that Apple is one of the largest smartphone manufacturers globally, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.


Sure, but Microsoft has the expertise, and they own 49 percent of OpenAI if I'm not mistaken. OpenAI uses their expertise and access to hardware to create a GPT-branded AI phone.

I can see your point re: running locally, but there's no reason OpenAI can't release version 0.1, and how often are you left without an internet connection on your current phone?

Overall I hate Apple now; it's so stale compared to the ChatGPT iPhone app. I nerd-rage at dumbass Siri.


I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get to it.


Do you use ChatGPT many times throughout your day (I do)? If so, did you ever want it to find the best hotel and book it for you? You can't do this with ChatGPT now, as the travel websites don't have their own AI agents for GPT to communicate with. Once they do, you can type back and forth with GPT, or talk to it, to get anything and everything you now use the web for. Your human-like friend (AI friend/agent) will do it for you. You don't have to talk to it, but if it's just like a human, why not talk to it to do everything for you and use it as a knowledge base? If you aren't aware, you can now have a full back-and-forth conversation with ChatGPT (it's not dumbass Siri).

All technological advances that get adopted are ones that made life easier, and for some, cooler than what they were once using (the cell phone to the iPhone put the web in our pocket; using your iPhone while driving is dangerous, but talking to your human-like friend isn't). Check out the movie Her; what I'm describing above is mostly what it depicts.

Time will tell if any of what I'm saying comes to fruition, but Silicon Valley has been all abuzz about AI agents for the last month or two, and going forward.


Every single point you listed seems like a worse version of what I already have.

I want to interact with real people, not bots. I'm already spending most of my time wasting my life in front of a fucking screen for work.


Sure, but the AI genie is out of the bottle, and the tech billionaires are barreling us towards AI friends/agents/assistants. If we stop the momentum, China picks it up, and American prosperity and security are at risk.

What I describe is much like the movie Her. Sam Altman, ChatGPT's CEO, asked Scarlett Johansson (the voice of the AI in that movie) to be the voice of GPT. GPT is now a little like Her in that you can have a full conversation with it, unlike Siri. At the moment you just don't see how GPT looks; it doesn't look like a FaceTime call with a human AI friend/assistant (one that could look and sound like a deceased loved one; that's my own crazy idea, not from the movie). Maybe in the future it will. I'm betting it will, but it's only a guess and time will tell.

I'm awaiting your downvote :) but will revisit this thread in a few years or more. Well, if I'm right ;)


> If we stop the momentum China picks it up and American prosperity and security is at risk.

How so ?


AI is the biggest new technology, and America controls and drives it while all other countries follow along. We are, and continue to be, number one in AI technology, and other countries coming to us or following us brings America billions to trillions of dollars. The more money we have, the more secure and prosperous America is and continues to be. As well, we create new AI tech and use it to further protect America from adversaries.

The genie is out of the bottle, and America must keep its momentum in AI up, ahead of all other countries, for its continued prosperity and security!


Sorry, you live in a different world; Google Glass was aggressively lame, and the Ray-Bans only slightly less so.

But pulling out your phone to talk to it like a friend...


Well, I use GPT daily to get things done and use it as a knowledge base. I text and talk to it throughout the day. I also think it's called "Chat"GPT for a reason: it will evolve to the point where you feel like you are talking to a human. This human is your assistant and does everything for you, interfacing with other AI agents to book travel, learn your friends' and family's schedules, and so on; for anything you now do on the web, there will be an AI agent for your AI agent to interface with.

Maybe you have not seen the 2013 movie Her? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.

Overall this is what I see happening, and I'm excited for some of it, or possibly all of it, to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen... we'll see :)


I suppose most anybody talking about this topic has seen Her by now (and if they haven't, they should, it's both a good movie and very relevant). The problem is rather that not everybody shares your enthusiasm about the utopia it depicts.

This is because it's also a dystopia in disguise. It's a social criticism and a cautionary tale about the way fetishizing technology is emotionally crippling us as individuals in a society. It kind of amazes me that this aspect seems to go over some people's heads.

It's obviously true what Booker said: What one person considers an ideal dream might to another person seem a nightmare.


Indeed, yet ChatGPT is already like Her; there's just no human-like face to it ATM.

I'm just jumping ahead, using what was seen in Her to envision where we are (possibly) headed, as well as adding my own crazy idea: your AI assistant/friend, seen on your lock screen via a FaceTime-call UI/UX, looks and sounds like a deceased loved one. Mom, still guiding you through life.


That sounds creepy as hell to me. Are you serious, or is that an idea for a horror movie?

Edit: aaaand right after posting I stumble across a documentary running on TV in this very moment, in which a dying guy trained an AI on himself to accompany his widow after his death. Seems you're not the only one to find that desirable...


Very insightful take on agents interacting with agents, thanks for sharing.

Re the Her phone - I see people already trying to build this type of product; one example: https://www.aphoneafriend.com


I am hoping it is just the usual ponzi thing.


How would this be a Ponzi scheme? Who are the leaf nodes ending up holding the bag?


Investors, mostly private - eg. SoftBank and all the other deep pockets funneling money into this - but also public, because lots of people are invested in Nvidia, Microsoft, and Google, who will be directly affected if the bubble bursts, and just everyone invested in the markets generally, as this bubble bursting would already probably be more broadly damaging than even the dot com bust was.

Personally, I do expect a big correction at some point, even if it never reaches the point of bubble bursting. But I have no idea when I expect it to happen, so this isn't, like, an investable thesis.


So unlike with a regular Ponzi scheme most of the money just is wasted?


Well Madoff funnelled it into lifestyle.

Technically you are correct. A Ponzi is a single entity paying returns from new marks. It is a straight con.

But some systems can be Ponzi-like in that they require more and more investment, and people get rich by selling into that. Bitcoin is an example.


Not sure what you mean by "wasted"? Like a regular Ponzi scheme, there are many opportunities for the people at the top to extract value out into cash, while people who "got in" on the scheme later are left holding the bag when the bubble bursts.


Yes, usually the people at the top extract the cash; Bernie Madoff just spent the money on his own enjoyment. In this case the money goes to the people building the data centers, the suppliers of resources, and the engineers at OpenAI who are actually working for it.


A lot of it does - and that's great! - but a lot of it accrues to the owners of the businesses involved.

If this is a bubble and it bursts in a few years, a lot of investors in specific companies, and in the market broadly, will lose a lot of money, but Sam Altman and Jensen Huang will remain very wealthy.

I'm a capitalist and I think there are good reasons for wealth to accrue to those who take risks and drive toward technological progress. But it also is the case that they are incentivized to hype their companies, even if it risks getting out over their skis and leading to a bubble which eventually bursts. There are just lots of ways to extract wealth prior to a bubble bursting, so the downsides of unwarranted hype are not as acute as they might otherwise be.


https://www.aboutamazon.com/news/aws/amazon-invests-addition...

Not this specifically, but this kind of thing. If I'm getting billions like this, I want to keep the gravy going. And it comes from shareholders ultimately.


It's just being spent though, no? Sounds more like a potential waste of money than a Ponzi scheme.


So they're either wrong or building Skynet.



