


My advisor also worked on this ML project for estimating electron density and temperature within tokamaks: https://www.cs.wm.edu/~ppeers/showPublication.php?id=Ozturk:...

Technically that counts as "AI in nuclear fusion", but it isn't any sort of breakthrough. In almost every case the effects of AI are marginal. Not zero exactly, but nowhere near the breathless hype.
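For a sense of what that kind of model looks like, here is a minimal sketch (Python/sklearn) of a feed-forward regressor mapping diagnostic signals to density and temperature estimates. The channel count, shapes, and synthetic targets are my own illustrative assumptions, not the paper's actual setup:

    # Minimal sketch: regress plasma quantities from diagnostic signals.
    # All shapes and targets are illustrative stand-ins, not the paper's data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 12))           # 12 synthetic diagnostic channels
    y = np.stack([X @ rng.normal(size=12),    # stand-in for electron density
                  X @ rng.normal(size=12)],   # stand-in for electron temperature
                 axis=1)

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X, y)
    print(model.predict(X[:3]))               # (density, temperature) estimates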


Depends on what you count as ML and what counts as AI, and the distinction is meaningless. It's just that ML techniques have lost their magic and are now simply part of the toolkit. But I wouldn't say the effects are zero, because you need a lot of small wins all over the place, compounded over time, to get the true win. It's a hard multidisciplinary project with unknown physics on top of engineering. The theoretical side is largely known physics at this point, but the practical physics of it, and how to scale it up and make it work, is not solved; that's real R&D that has to be done. There are glimmers of progress here and there, achievements we never had before, but it all still feels far away because it's not coalescing as fast as the advertised wins might suggest.


When I worked in grid computing almost 20 years ago, we were already running fusion experiments, streaming the data to the grid in real time, where it would be rapidly analyzed to compute new parameters for the next run (I think they had about 20 minutes of downtime between runs). I don't think it was considered machine learning at the time, though (and it was certainly not deep learning as we practice it today).
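The shape of that loop, as a runnable toy (the quadratic "plant" and the update rule here are hypothetical stand-ins, not the actual grid stack):

    # Toy sketch of the run -> analyze -> new-parameters cycle described above.
    def run_experiment(param):
        """Pretend shot: returns a quality score that peaks at param == 3.0."""
        return -(param - 3.0) ** 2

    def analyze(param, score, eps=1e-3):
        """Finite-difference probe of which way to nudge the parameter."""
        grad = (run_experiment(param + eps) - score) / eps
        return param + 0.1 * grad             # parameter for the next run

    param = 0.0
    for shot in range(20):                    # each iteration is one run
        score = run_experiment(param)
        param = analyze(param, score)
    print(round(param, 2))                    # converges toward 3.0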


Remember, AI is just procedurally generated data analysis.


Computers are just electron tunneling machines.


You’re just procedurally generated data analysis.


Fascinating! Reminds me of the new generation of AR headsets (eg Orion) that are making the impossible possible simply by adding an ANN(-derived) layer above some of their device controllers. I wonder how many problems will fundamentally change in the face of mature brute-forcing techniques…


Very interesting if you consider life as a complex chemical reaction that tries to self-sustain.


AI is not a buzzword; try beating Go without machine learning. Ignore the whole enterprise-speak and you'll see a lot of really cool things that are possible almost exclusively by means of neural nets.


Almost every technology buzzword is based on something interesting and useful that then gets used in as many unsuitable applications as possible.


I really like the idea of blockchains and think they're a pretty clean and clever solution for many problems outside of crypto.


Like what?


Anything requiring an immutable record of provenance, for instance.
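The tamper-evidence half of that is just a hash chain, which needs no consensus machinery at all; a minimal sketch in Python (the consensus question raised in the reply below is a separate, harder problem):

    # Append-only, hash-chained provenance log: editing any past record
    # breaks every later hash. Tamper-evidence only; no consensus here.
    import hashlib, json

    def append(chain, record):
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        chain.append({"record": record, "prev": prev,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(chain):
        prev = "0" * 64
        for entry in chain:
            body = json.dumps({"record": entry["record"], "prev": prev},
                              sort_keys=True)
            if entry["prev"] != prev:
                return False
            if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

    log = []
    append(log, "part 123 machined")
    append(log, "part 123 shipped")
    print(verify(log))                        # True; edit any record and it flips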


How's that not crypto? Such a network needs to be run and achieve consensus, right? How's that work without the crypto part? It seems like what you're describing is Ethereum.


"Neural nets" is a useful term that refers to a class of algorithms, though. "AI" doesn't appear to have any use beyond marketing to people who don't know what they want.


Genuinely curious what you mean by "beating Go". Do you mean "beating exceptional human Go players"?


Yes, I meant the latter. But given that, pre-AlphaGo, Go was spoken of as just about the only two-player board game "not beaten by AI", it's maybe implied here.


Neural networks have been used in industrial process control for many years. This is just another industrial control problem, perhaps a difficult one.


That's really interesting! Do you have other specific examples? I would have guessed that most industrial control problems involved simpler sets of differential equations that could be estimated directly.


I recently heard about a local packaging factory that uses an ML-first system for messaging its clients about what they will need to order soon, based on recent orders and global criteria. It's not a simple problem, apparently. They signal the clients and start pre-producing; they're basically algotrading themselves.

Not really industrial control, though, but close to it.


PID control your customers :)
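Since PID came up, the whole textbook controller fits in a few lines; a minimal sketch (the gains and the toy first-order plant are illustrative, not tuned for anything real):

    # Minimal PID loop driving a toy first-order plant toward a setpoint.
    def pid(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
        err = setpoint - measured
        state["i"] += err * dt                # integral term accumulates
        d = (err - state["e"]) / dt           # derivative of the error
        state["e"] = err
        return kp * err + ki * state["i"] + kd * d

    level = 0.0
    state = {"i": 0.0, "e": 0.0}
    for _ in range(200):
        u = pid(10.0, level, state)
        level += 0.1 * (u - 0.5 * level)      # toy plant dynamics
    print(round(level, 2))                    # settles near the setpoint of 10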


Furnace control was one of the first practical applications of multi-layer neural networks: https://www.researchgate.net/publication/282185187_Artificia...


(Small) neural networks have been discussed for use in computer chips for branch prediction. I can't find a good source on whether that really landed in production, though. Here was some discussion:

https://news.ycombinator.com/item?id=12340348
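The best-documented proposal along these lines is the perceptron predictor (Jiménez & Lin, 2001). A minimal software model of the idea, with the table size and history length as illustrative choices:

    # Perceptron branch predictor: predict taken if the dot product of
    # per-branch weights with the global history is non-negative.
    HIST = 16                                  # global history length
    THETA = int(1.93 * HIST + 14)              # training threshold (per paper)
    table = {}                                 # branch index -> weights (bias first)
    history = [1] * HIST                       # +1 = taken, -1 = not taken

    def predict(pc):
        w = table.setdefault(pc % 1024, [0] * (HIST + 1))
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
        return y, y >= 0                       # raw output and taken/not-taken

    def update(pc, taken, y):
        t = 1 if taken else -1
        w = table[pc % 1024]
        if (y >= 0) != taken or abs(y) <= THETA:   # mispredict or low confidence
            w[0] += t
            for i, hi in enumerate(history):
                w[i + 1] += t * hi
        history.pop(0); history.append(t)      # shift in the actual outcome

    y, guess = predict(0x400d3a)
    update(0x400d3a, True, y)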


Back in ~2004, when I was looking for a job for the industrial placement year of my degree, one of the options was a nut packing factory using computer vision systems. I was intrigued, but went for the job processing satellite images instead.

Even well before that, ML is very closely related to statistics, so early practical applications would have been as simple as gathering data points on widget production and doing the kinds of analyses that are now baked into free spreadsheet software.
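For example, the spreadsheet-grade version of that is an ordinary least-squares trend line over production data, the same thing LINEST or a chart trendline gives you (numbers made up):

    # OLS trend line over made-up widget production data.
    import numpy as np

    machine_temp = np.array([60, 65, 70, 75, 80, 85], dtype=float)
    defect_rate  = np.array([1.2, 1.3, 1.7, 2.1, 2.6, 3.2])

    slope, intercept = np.polyfit(machine_temp, defect_rate, 1)
    print(f"defects ~ {slope:.3f} * temp + {intercept:.2f}")
    print("predicted at 90:", round(slope * 90 + intercept, 2))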


From what I remember reading at the time, most industrial vision applications 20 years ago had very little to do with neural networks, or even with ML in general, relying instead on bespoke feature detectors.
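For illustration, "bespoke feature detectors" means hand-built filters rather than learned features; a Sobel edge detector in plain numpy is representative of that era's toolbox:

    # Sobel edge detector: hand-designed 3x3 gradient filters, no learning.
    import numpy as np

    def sobel_magnitude(img):
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T                              # vertical-gradient kernel
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = img[i:i + 3, j:j + 3]
                out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
        return out

    img = np.zeros((8, 8)); img[:, 4:] = 1.0   # toy image: one vertical edge
    print(sobel_magnitude(img).round(1))       # strong response along the edge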


CNNs are the gold standard, and they are neural networks. Not 'AI', though.


Perhaps in 2024. Not in 2004.



I know the history and the first USPS applications. My point is that in any random industrial vision setup at the turn of the millennium, you'd be unlikely to find any sort of NN.


What would you find instead?



None of those work for many common cases, which is why CNNs revolutionized the actual use of computer vision.


I'm not sure why you're arguing with me; my point was about the historical reality at the time. Machine vision used feature detectors, cellphones had keyboards, teenagers wore low-cut jeans.


My 'argument' (not sure we're really in disagreement, actually) is that the actual use cases were so limited back then, and the economic impact accordingly so much smaller, that they're really not equivalent, even if they might technically be called the same thing.

CNNs literally changed the game.

It's like talking about cell phones: we could compare the Motorola DynaTAC [https://en.m.wikipedia.org/wiki/Motorola_DynaTAC] with a modern flagship Apple or Samsung smartphone. Technically they can both make mobile phone calls, yes.

And it isn’t really wrong to compare them.

Technically, if we limit the specific scope of what we're comparing, squint a lot, and ignore the context, they're equivalent.

But they aren’t really the same type of thing either, eh?

In fact, if we take a broader view, the Motorola DynaTAC was a portable radio connected to the phone system, not what we’d call a phone.


Yeah, this is precisely the kind of stuff I was talking about 48 hours ago, when everyone was telling me that ML would never find any kind of practical application.

There are so many fundamental fields - engineering, chemistry, biology, physics - that stand to make quantum leaps in knowledge and capability with this technology.


Yeah, machine learning is more or less just a very complex application of control theory techniques, and notably it is usually done by people without formal control theory backgrounds.

Super useful for control applications but obviously you really want to know control theory so that you aren't just using ML to throw darts at a wall.


> using ML to throw darts at a wall

A bit off topic, but I had a laugh at that; it reminded me of that ridiculous Meta commercial where the girl at a pool table asks Meta which ball she should hit.


More like optimization. Signal processing and control theory are basically the same maths with different applications; optimization is a bit different but has some overlap (especially with, e.g., optimal control techniques).


Eh, to a certain degree. The architecture of the models is very much control theory, and the training is control-system tuning (which is of course an optimisation problem, like you said).

I would definitely agree that optimisation fits the definition in part, but I find that only control theory covers the entire span of signal processing, optimisation, and decision-making systems.

And importantly, because ML touches on all of those to some degree, control theory tends to fit better, as it focuses so heavily on providing a comprehensive framework for reasoning about all of those elements together.
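To make the overlap concrete: "training" is gradient descent on a loss, which is the same machinery you would use to tune a controller offline. A minimal sketch with synthetic data (all values illustrative):

    # Plain gradient descent on mean-squared error recovers the true weights.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=200)

    w = np.zeros(3)
    for _ in range(500):
        grad = 2 / len(X) * X.T @ (X @ w - y)  # gradient of the MSE loss
        w -= 0.1 * grad                        # the "tuning" step
    print(w.round(2))                          # roughly [ 2. -1.  0.5]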



