
I'm seeing less and less point in this whole AI thing.

I'm seeing companies spending tens of megabucks on idling GPU farms without a clear idea what to do with them.

Saw that when we did a subcontract for a datacentre for Alibaba. They held a huge reception for all kinds of dignitaries, showing them CGI movies of their alleged AI supposedly crunching data right there in the DC, all while every piece of hardware in the DC was shut down...

The moment I cracked a joke about that during an event, there was dead silence and the faces on stage started to turn red. The guy defused the situation with a joke of his own, and the party went on.



The downvotes you're getting are really disappointing, because before AI there was the walled garden of HPC, with temple-like supercomputers where gaining access involved magic incantations and politically savvy groveling from academics hoping to get out of grad school in less than a decade. GPUs disrupted that garden in a big way. Or: all of this has happened before(tm).

But in the past few years, the one-two punch of AI and Python has rebuilt that garden. So now not only do the big guys control the training of the big models, but the insistence on describing those computations entirely through a Python abstraction is leading to insanely inefficient use of those GPUs (and, coming soon, dedicated ASICs with even worse software libraries for anything beyond ResNet, BERT and a few other acceptable strawmen).
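To make the Python-dispatch point concrete, here's a minimal sketch (assuming PyTorch 2.x on a CUDA machine; the toy function and kernel counts are illustrative, not measurements):

    import torch

    def eager_chain(x):
        # In eager mode every op below goes through the Python dispatcher and
        # launches its own CUDA kernel: roughly 150 tiny launches for this loop.
        for _ in range(50):
            x = torch.nn.functional.gelu(x) * 0.5 + 0.1
        return x

    # torch.compile traces the Python and emits fused kernels instead,
    # so the GPU spends its time computing rather than waiting on launches.
    compiled_chain = torch.compile(eager_chain)

    x = torch.randn(4096, 4096, device="cuda")
    eager_chain(x)     # many small kernels, Python in the loop
    compiled_chain(x)  # a few fused kernels after the first (compile) call

The point isn't that torch.compile fixes everything, just that the gap between "Python describing the computation" and "what the GPU actually runs" is where a lot of those cycles go.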

That said, one can do a lot with AI/ML models that fit on a single sub-$10,000 machine built entirely from consumer parts, and doubly so if one is willing to profile and low-level optimize one's code as intensely as one stares at the data going into the model (you're in grad school, you have time for this; it will save you lots of time in the long run). For inspiration: all the big guys keep GPU code monkeys on staff to micro-optimize as needed. One might want to take a cue from that and DIY.
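A minimal sketch of what that profiling looks like, assuming PyTorch and a single consumer CUDA GPU (the model here is a throwaway placeholder):

    import torch
    from torch.profiler import profile, ProfilerActivity

    # Toy model standing in for whatever fits on one consumer GPU.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()
    data = torch.randn(256, 1024, device="cuda")

    # Profile a few forward/backward passes to see where the time actually
    # goes before blaming (or buying) hardware.
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        for _ in range(10):
            model(data).sum().backward()

    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

Ten minutes with a table like that usually tells you whether you're bound by compute, memory, or Python overhead, before you spend a dime on more hardware.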


AI has too perfect a sales pitch and surrounding narrative for business people to treat it rationally. The fact that philosophers who don't understand AI are debating AI philosophy is strong evidence of that. A crash is less likely this time around because the spending is funded by established companies rather than the public market, but there has to be a moment of reckoning for the baloney eventually... I hope.


>philosophers who don't understand AI

If you mean "philosophers who don't understand ML/gradient descent/linear algebra/statistics etc", I don't think many philosophers are debating about that stuff.

If you mean "philosophers who don't understand artificial intelligence", that would be all of them, and everyone else too, because no-one understands artificial intelligence yet. And a lot of the people who come closest to understanding it are in philosophy departments.


I more or less agree with your second point. At present the philosophy of artificial intelligence is a bit like theology: it is molded by how people like to think more than it is determined by reality. Sales pitches also tend to be molded by how people think. AI has that theological quality that makes people love it even when they don't understand it, and that makes it a lot easier to sell big GPU farms to companies that have no legitimate use for them.


> "without a clear idea what to do with them."

Of course they know what to do with them: show us more effective ads. That's the cutting edge of technological progress now.



