Hacker News | bobxmax's comments

I thought Alex Wang was a very curious choice. There are so many foundational AI labs with interesting CEOs... I get that Wang is remarkable in his own right, but he basically just built MTurk and timed the bubble.

Doesn't really scream CEO of AGI to me.


A lot of people also don't know that many of the well-known papers are just variations on small-time papers with a fuck ton more compute thrown at the problem. Probably the feature that correlates most strongly with being a successful researcher is compute. Many have taken this to claim that the GPU-poor can't contribute, but that ignores so many other valid explanations... and we wonder why innovation has slowed... It's also weird because if compute were all you need, there's a much cheaper option than what Zuck paid. But he's paying for fame.


> A lot of people also don't know that many of the well-known papers are just variations on small-time papers with a fuck ton more compute thrown at the problem.

I worked for a small, research-heavy AI startup for a bit, and it was heartbreaking how many people I interacted with in that space who had worked hard and passionately on research, only to have been beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.

There were also more than a few instances of high-probability plagiarism. A paper of my team's that had existed for years was basically rewritten without citation by a major lab. After some complaining they added a footnote. But it doesn't really matter, because no big lab is going to have to defend itself publicly against some small startup, and their job at the big labs is to churn out papers.


  > only to have been beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.
This added at least a year to my PhD... Reviewers kept rejecting my work with comments like "add more datasets". That's nice and all, but on the few datasets I did use I beat out top labs with a tenth of the compute. I'd have loved to add more datasets, but even at a tenth of the compute I had blown my entire compute budget. Guess state-of-the-art results, a smaller model, higher throughput, and third-party validation were not enough (we used an unpopular model architecture).

I always felt like my work was being evaluated as an engineering product, not as research.

  > a few instances of high-probability plagiarism
I was reviewing a paper once and I honestly couldn't tell whether the researchers knew they had ripped me off. They compared against my method, citing it and showing figures using it, but then dropped its performance metrics from the table. So I asked. I got the numbers in return and saw that there was no difference... So I dove in and worked out that they were doing 99% my method, with additional complexity (computational overhead) on top. I was pretty upset.

I was also upset because otherwise the paper was good. The results were nice, and they even tested our work in a domain we hadn't. Had they just been upfront, I would have gladly accepted the work. Though I'm pretty confident the other reviewers wouldn't have, due to "lack of novelty."

It's a really weird system that we've constructed. We're our own worst enemies.

  > their job at the big labs is to churn out papers.
I'd modify this slightly. Their job is to get citations. Churning out papers really helps with that, but so does all the tweeting and evangelizing of their work. It's an unfortunate truth that as researchers we have to sell our work, and not just on the scientific merit it holds. People have to read it, after all. But we should also note that it is easier for some groups to get noticed than others. Prestige doesn't make a paper good, but it sure acts as a multiplier on all the metrics we use for determining whether it is good.


It’s funny.

I learnt the hard way that communications/image/signal processing research basically doesn’t care about Computer Architecture at the nuts and bolts level of compiler optimization and implementation.

When they encounter a problem whose normal solution requires excessive amounts of computation, they reduce complexity algorithmically using mathematical techniques, and quantify the effects.

They don’t quibble about a 10x speed-up, they reduce the “big O()” complexity. They couldn’t care less whether it was implemented in interpreted Python or hand-optimized assembly code.

On one hand, I know there’s a lot of talent in AI today. But throwing hardware at the problem is the dumbest way forward.

Wi-Fi adapters would be the size of wheeled luggage if we'd had the same mentality during their development.
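To make that concrete, here's a toy numpy sketch (the `naive_dft` helper is illustrative only, not from any particular paper): the direct DFT does O(n²) work, while the FFT computes the identical transform in O(n log n). No amount of hand-optimized assembly closes that gap for large n.

```python
import numpy as np

def naive_dft(x):
    """Direct DFT: builds the full n x n twiddle matrix -> O(n^2) work."""
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.default_rng(0).standard_normal(64)
# np.fft.fft computes the exact same transform in O(n log n)
assert np.allclose(naive_dft(x), np.fft.fft(x))
```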


At some point it becomes difficult to improve the O() complexity. How do you do better than the O(n²) of the Transformer, with acceptable tradeoffs? Many big brains in all the big labs are very aware of the importance of algorithmic advances. There is no low-hanging fruit, but they're doing their best.

Then, in parallel to that, there are compiler optimizations and other higher-level algorithmic innovations such as Flash Attention (a classic at this point), which had a drastic impact on performance due to cache awareness without changing the O() complexity.
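For reference, the quadratic term is easy to see in a naive numpy sketch of scaled dot-product attention (toy code, not Flash Attention itself): the scores matrix is n x n in the sequence length, and Flash Attention's contribution is computing the same result without ever materializing that matrix in slow memory.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Scaled dot-product attention. The scores matrix is
    (n, n) in the sequence length -- the quadratic term."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # O(n^2) time and memory
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
n, d = 128, 16
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = naive_attention(Q, K, V)
assert out.shape == (n, d)
```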


Sometimes it's the theory, sometimes it's the engineering, and often it's both.


  > They don’t quibble about a 10x speed-up, they reduce the “big O()” complexity. They couldn’t care less whether it was implemented in interpreted Python or hand-optimized assembly code.
I can at least say that's not all of us. But you're probably right that this is the dominant attitude. I find it so weird, since everyone stresses empirics yet also seems not to care about them. It took me my entire PhD to figure out what was really going on. I've written too many long-winded rants about it on this site though.


You make it sound like reducing the big O complexity is a dumb thing to do in research, but this is really the only way to make lasting progress in computer science. Computer architectures become obsolete as hardware changes, but any theoretical advances in the problem space will remain true forever.


No, my point was the opposite, I agree with you. But the commercial focus on throwing hardware at the problem seems to have gotten entirely out of hand.


Frankly, this is the reason why I'm not convinced the current LLM movement will yield anything close to the dream.

The right people to deliver immense progress don't exist right now.


  > The right people to deliver immense progress don't exist right now.
I wouldn't go this far. But I would say that we're not giving them a good shot.

The people are always there, you just need to find them and enable them.

  How do you manage genius? You don’t.
  — Mervin Kelly


The reporting at the time said he was Mark’s 5th choice or so. It is fairly clear he would have preferred Ilya, Murati, Mark Chen, and perhaps others, but they said no, and Alex Wang was the first one to say yes.


Why in the world would he want Murati? She has absolutely no technical chops and was not functionally CTO of OpenAI.


> was not functionally CTO of OpenAI.

Why do you say that?


Her history was entirely non-technical up until OpenAI.


I think that's total BS, based on this article about her, https://fortune.com/2025/10/03/mira-murati-career-ai-thinkin...

1. She has 2 BAs, one in math and one in mechanical engineering.

2. She was an "Advanced Concepts Engineer at Zodiac Aerospace from 2012 to 2013".

3. She was a product manager at Tesla on the Model X.

4. She was VP of product and engineering at Leap Motion.

Going from the fact that she wasn't a deep learning researcher to "her history was entirely non-technical up until OpenAI" is plainly false. Plus, the job of a CTO is 90%+ people management, and she appears more than smart enough and experienced enough to evaluate the technical decisions of her team.


I think you haven't been in tech long enough to know what that resume is.


Because she was CTO of OpenAI.


Pretty ironic when access to trade secrets and people skills is seen as more important in a technical field than technical competence.


For the record, I doubt the CTO of OpenAI is the best person to fund if you're looking for trade secrets on training and deploying SOTA LLMs. They are two levels too far from reality to know anything useful.


What technical chops does Sam Altman have?


He started coding at age 8


The self-supervised mesa-optimizer strikes again


Alexandr Wang is not interesting, and he's a few steps short of a fraud whom Mark had to bail out because he was so co-invested.

Shareholders should be livid if they knew a single thing about what was going on.


Tell me more


Scale promised cutting-edge data pipelines and model-training infra but mostly sold outsourced labeling with a tech veneer. Great margins, weak moat — classic Valley overclaim, not outright fraud.


Quite an odd request to ask Hamas to disarm.


It absolutely isn't. They attacked Israel, got defeated and usually you would require an unconditional surrender. Basic laws of warfare.

Hamas is an exception because it believes in martyr death and would drag the whole population in Gaza with them. That is new and unprecedented.


You're right. Mandela and the ANC should've disarmed. Basic laws of warfare.


I am very excited to hear how that is relevant or comparable.


You are wondering why a case of so-called terrorists using violence against apartheid occupiers is relevant?


There are no odd requests, just requests that will be fulfilled or not. Hamas could have asked Israel to disarm. The agreement is that Hamas will disarm.

Most likely scenario is similar to Hizbollah in Lebanon. They didn't disarm as per agreement, hence no international aid flowing to Lebanon. Israel still controls South Lebanon and kills a couple of hizbollocks fighters on a daily basis.


Thank god. Finally an end (for now) to the bloodshed. Hopefully an opportunity to take that psycho Netanyahu out of office (and to the Hague, where he should rot)


Elections in Israel are 1 year away.

Pretty sure the Second Coming is more likely than Bibi being tried for anything in The Hague.


As someone from (and who lives in) the developing, non-aligned part of the world, I'm always amazed at how ingrained hatred for anything non-western and non-democratic is for Americans.

Anything that isn't a so-called democracy with so-called western values (which change every 10 years) is patently evil.

Is this a relic of cold war propaganda? I hate to call it "brainwashing" because that's a very politically charged word but it's a belief that's so ubiquitous and immediate it surprises me. Especially for a culture with such anti-authoritarian cultural history.

Not defending the CCP by any means, just always find it funny from the vantage point of being surrounded on both sides by extremely large pots and kettles.


Dude. The CCP sucks. Just ask Jack Ma or Gedhun Choekyi Nyima

https://en.wikipedia.org/wiki/Gedhun_Choekyi_Nyima


The American "democracy" also sucks. Just ask anyone in Latin America who had to live under US-backed dictatorships, or those in Middle Eastern countries that were destabilised or destroyed by American influence. Or the immigrants (or citizens that happen to look like immigrants) under siege in the country right now. I could go on for a long, long time.


Whataboutism is boring and lame.


Yes, ask the billionaire how the anti-billionaire country treated him.

Here's a hint: not well.


What about the 6 year old child the CCP abducted for political purposes?

And what makes you say that the CCP is anti-billionaire?


Yes. They suck. They've also never done anything remotely as bad as the US did in Iraq.

That's my point. Americans act like China is the great evil... it's quite strange.


"They've also never done anything remotely as bad as the US did in Iraq."

Ask Tibetans about that. The US left Iraq, but the CCP still controls Tibet and oppresses native Tibetans. Or ask the Uyghurs, whom the CCP is brutally persecuting. Or the Falun Gong. The CCP is also a strong ally of the despicable North Korean government and sends North Koreans in China back to North Korea to face long prison sentences or execution.

The CCP is an egregious violator of individual human rights. And smart people defending its behavior is very odd.


Nobody is defending their behavior. You're just proving the brainwashing point I made by instinctively going into attack dog mode making silly arguments.

And no, China's oppression of Tibet is nothing close to a million dead Iraqis and an ancient country turned into a failed state. The fact you'd even make such a goofy comparison shows how deep American indoctrination runs.

Your tax dollars are still torturing brown people without trial in Gitmo and genociding Palestinians btw.


"Nobody is defending their behavior"

You are by minimizing it.

"China's oppression of Tibet is nothing close to a million dead Iraqis"

At least the US got rid of Saddam. China is still oppressing the hell out of Tibet.

"The fact you'd even make such a goofy comparison shows how deep American indoctrination runs."

The fact that you consider this to BE a "goofy comparison" shows how deep your pro-CCP indoctrination runs.

"genociding Palestinians"

If you consider what Israel is doing to Palestinians to be genocide then you have to consider what the CCP is doing to the Uyghurs to be a genocide also. But you seem very selective with your outrage.


I'm not going to debate someone who compares Tibetan political oppression with a million dead Iraqis, brutal carpet bombing, and an ancient country and people turned into a failed state. Or who thinks the Uyghur re-education camps, however terrible, are in any way comparable to 20,000 dead Gazan children.

You have psychological problems and are beyond reasoning with. Like I said, brainwashed.


Why don't you ask some Tibetans how they feel about being taken over and ruled by China?


Tibetan freedom of religion, language and movement being restricted is terrible. It's like what your government is doing to Gaza, except without all of the murder and ethnic cleansing.

Like I said, you have psychological issues if you think it's remotely comparable to millions of Iraqis getting senselessly tortured and slaughtered.

Like a Nazi who thinks they're not so bad because the Americans have racial segregation.


And your definition of thinking is?


Not arithmetic or boolean algebra. What's your definition?


> Not arithmetic or boolean algebra.

That isn’t a definition or even a coherent attempt.

For starters, what kind of cognition or computation can’t be implemented with either logic or arithmetic?

What is or is not “cognition” is going to be a higher-level property than whichever basic, universally capable substrate is used, given that such substrates can easily simulate each other and be substituted for each other.

Even digital and analog systems can be used to implement each other to arbitrary accuracy.

Cognition is a higher level concern.


I'm not the one making obtuse claims and desperately trying to trigger reactions.


Address the substance of my statements or save yourself the time & further keystrokes responding to my posts.


There is no substance to address lol.


Then you should have saved yourself the time from any further responses. That would be the rational course of action.


The dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude code hit $500m in ARR in 3 months.

The fact people don't see the difference between the two is unreal. Hacker News has gone full r* around this topic; you'll find better nuance even on Reddit than here.


Do you mean pre-profit/without ever making a profit? I found an article about their IPO:

> Pets.com lost $42.4 million during the fourth quarter last year on $5.2 million in sales. Since the company's inception in February of last year, it has lost $61.8 million on $5.8 million in sales.

https://www.cnet.com/tech/tech-industry/pets-com-raises-82-5...

They had sales, they were just making a massive loss. Isn’t that pretty similar to AI companies, just on a way smaller scale?

We haven’t seen AI IPOs yet, but it’s not hard to imagine one of them going public before making profit IMO.


You'd think after all this time nerds would stop obsessing over profit. Profit doesn't matter. It hasn't mattered for a long time, because tech companies have such fat margins that they could go profitable in months if they wanted to.

Yes, $5m in sales. That's effectively pre-revenue for a tech company.


They're not claiming that it's like the dot com boom because no one is actually making money. They're claiming that this is more like the dot com boom than the housing bubble, which I think is true. The dot com crash didn't cause Jane-on-the-street to lose her house while she worked a factory job, though the housing crisis did have those kinds of consumer-affecting outcomes.


It's nothing like the dot com bubble because that was based on speculative future value and zero present value. There is more present value in AI than at any point in software in the last 30 years.


The Internet bubble was also based on something real, but that didn't stop it from being a bubble.

For example, Cisco parked at over $500B in market cap during the boom. Its current market cap is around half that, at $250B.


What you're missing is how that value comes about. People seem to think it's an infinite fountain but it's more like strip mining the commons.

We also know that AI hype is holding up most of the stock market at this point, including ticker symbols you don't think of as being for "AI companies". Market optimism at large is coming from the idea that companies won't need employees soon, or that they can keep using AI to de-leverage and de-skill their workforce.


So that $500m in ARR in 3 months is from hype? That's what you're contending?


1. Claude Code is claimed to have hit $500m ARR in 3 months.

2. What is the Claude Code profit for the same period?

3. What is the Claude Code profit per request served when excluding fixed expenses such as training the models?


Who cares? Are you not seeing the exponential decrease in GPU inference costs? H100s cost 3x what they did a year ago.


I'm not sure I understand how B (H100s cost 3x what they did a year ago) follows from A (an exponential decrease in GPU inference costs).


You have a good point. Pets.com would have fared much better if investors gave them several billion dollars in 1998, 1999 and then again in 2000


I can see Cramer now: "buy Pets.com! General revenue is just around the corner!"


Pets.com could have traded at a significant multiple of the entire combined revenue of the pet space if investors simply poured infinite dollars into it.

They could even have gotten into the programming space with all that capital. Pawed Code.


No, that's not my point. It helps to get out of the HN echo chamber to see it though.


That’s a good point. Pets.com raised $82 million from its IPO pre-revenue (bad) and Anthropic raised $500 million from Sam Bankman-Fried pre-revenue (good)


Yes, because Anthropic makes revenue. You're having a hard time grasping how business works I think.


I’m admittedly not very good at math. You pointed out that Claude Code got to $500mm ARR in 3 months, but it kind of looks like it actually took Anthropic over four years and many billions of dollars to make a product that generates significant revenue (I appreciate their modesty in not bragging about the net profit on that revenue). I’d say that users bragging about costing Anthropic orders of magnitude more in compute than what they pay kind of makes the ARR less impressive, but I’m not a fancy management scientist.

But I’m bad at math and grasping things. If you simply pick a point in time to start counting and decide which costs you want to count, then the business looks very exciting. The math is much more reassuring, and the overall climate looks much saner than the dot com bubble, because we simply don’t know how much money is being lost, which is fine.


What in gods name are you talking about


We've banned this account for repeatedly violating the site guidelines and ignoring our requests to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


A good sign that an article is another pointless, naive AI doomerism piece is that they cite that atrocious MIT 95% "study".


A good sign someone is an AI huckster is they ignore the rest of the article and citations.


Plenty of hucksters around and for them I say three words: show me the money (free cashflows).


Um, that's four words.

Not saying you're wrong, though...


No, what's driving much of the valuations is the biggest leap in human technology since the internet, and skyrocketing revenues as a result.

Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuation.


Revenues mean nothing without positive equity earnings, especially without a viable path to get there. Without a clear path, how do you justify the valuation? Lol.

Uber and Amazon had a very logical path to get there.

The reinvestment is so high that once you tack it onto the earnings you're in a deep negative. What does that mean? You will eat into the cash balance and eventually have to go raise more.


> No, what's driving much of the valuations is the biggest leap in human technology since the internet, and skyrocketing revenues as a result

That is a 1999-like bubble, and it's how you get 75-90% of these companies crashing when the music stops.

> Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuation.

We don't even know if that is real to begin with. Even if it is, that revenue can be lost as quickly as it is gained.

This happened to Hopin and other companies who grew extremely quickly and then their valuations crashed.

The question you should be asking yourself, even after looking at the competition, is: what are the retention and the switching costs of these "$100m ARR in 12 months" companies if a competitor moves into their core business?


We don’t know how sticky that revenue is, or if it’s going to be a commodity in the long run. Similar things used to happen in ad-tech before investors got wise that there was no moat.


It's only disrespectful if you're a gatekeeper.


Defending the idea that complex, nuanced effort for the sake of coherent creation is a demonstration of skill is gatekeeping?

I'd love to see programmers' reactions to having the measure of their work reduced in such a way as more people vibe-code past all the technical nonsense.


Last I checked, programming isn't judged as an art.

Your supposed judgment of skill has nothing to do with something's value as an art form.


And so sampling and DJ'ing aren't skills? This isn't music?


Sampling is not just cutting a fragment from a song and calling it a day. Usually (look at the Prodigy's tracks, for example) it involves transformation, so that the result doesn't sound much like the original. For example, you can sample a single note and make a melody from it, or turn a soft violin note into a monster's roar.

As for DJ'ing I would say it is pretty limited form of art and it requires lot of skill to create something new this way.


Yes, that's what people are doing with AI music as well. Acting like there's some obvious "line" of what constitutes meaningful transformation is silly.


I see DJing as more akin to being a skillful curator than being an artist. They are related but not equivalent.


I was a trance and DnB DJ; I definitely never claimed I was writing the songs I played, and I think it would have been dishonest to do so.

