It's all bunk. Hyperloop is a gadgetbahn without any practical way of deployment.
Sure, in theory there's nothing preventing anyone from building such a thing, but Hyperloop doesn't fail for lack of technology; it fails on economics that just don't add up.
For a lesson from reality, just look at Musk's Loop project in Las Vegas: slick autonomous buses for 10+ people turned into Model 3s on autonomous sleds, which were replaced by autonomous Model 3s with extra wheels, and the project ended up with regular Model 3s that have drivers (you know, for safety, even though they supposedly drive autonomously).
The concept of a vactrain isn't new, and whenever someone presents an old, failed idea as the next big thing, you just need to stop and ask one simple question: what changed?
I wouldn't place too much confidence in announcements and plans - there are lots of projects in the Middle East that were halted, never finished, or quickly abandoned.
Announcements like that can safely be filed in the same drawer as Virgin Galactic's suborbital tourist flights.
VG even had a working prototype back in 2004(!) and in 16 years hasn't managed to turn it into a regular service.
Hyperloop, on the other hand, doesn't even have a working prototype that demonstrates its capabilities, yet some seriously believe it can enter commercial operation in just 3 years? So far, all they ("they" being Hyperloop TT) have shown is an empty shell.
Nope. I'm calling it right now: three years from now, in January 2024, there won't be an operational Hyperloop (<100 Pa vacuum tubes, maglev) carrying passengers anywhere in the world.
“Academic rumblings about the limits of Deep Learning” (by 2017)
“The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play.”
“VCs figure out that for an investment to pay off there needs to be something more than ‘X + Deep Learning’”
...and a few more.
Every single AI-related one had to do with what people were saying and thinking. No hard statements about what a system will not be able to do? Such as: easily fool humans into thinking generated text and videos are real by 20xx? Or any other concrete claim that isn't just about what people talk about?
I can't remember who it was, but an AI expert challenged AI sceptics to name a future year and a feat they were sure no AI would achieve by that year.
The sceptics were reluctant to give any such prediction, but he managed to persuade one to try.
Their prediction was that no AI would achieve a score above 90% on the Winograd Schema Challenge before 2025 (or something similar). The AI expert writing about this pointed out that a couple of months after the sceptic had made their prediction, in 2019, this threshold was passed by the BERT language model.
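For anyone unfamiliar: a Winograd schema is a sentence with an ambiguous pronoun whose referent flips on a single word, e.g. "The trophy didn't fit in the suitcase because it was too big/small" - easy for people, historically hard for machines. Purely as a sketch, and not the official WSC evaluation or necessarily how the model in that anecdote was tested, here's roughly how you might probe a masked language model like BERT on such a sentence with the Hugging Face transformers library; the sentence, candidates, and scoring here are my own toy example:

  # Toy sketch: ask BERT which referent it prefers for the masked pronoun.
  # Assumes both candidate words are single wordpieces in BERT's vocab.
  import torch
  from transformers import BertTokenizer, BertForMaskedLM

  tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
  model = BertForMaskedLM.from_pretrained("bert-base-uncased")
  model.eval()

  # Winograd-style sentence with the ambiguous pronoun replaced by [MASK].
  sentence = "The trophy didn't fit in the suitcase because the [MASK] was too big."
  candidates = ["trophy", "suitcase"]  # the intended referent is "trophy"

  inputs = tokenizer(sentence, return_tensors="pt")
  mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

  with torch.no_grad():
      logits = model(**inputs).logits[0, mask_pos]

  for cand in candidates:
      cand_id = tokenizer.convert_tokens_to_ids(cand)
      print(cand, logits[cand_id].item())  # higher score = preferred referent

Whatever model actually cleared the 90% mark presumably did something far more careful than this, but it gives a feel for what the benchmark tests: resolving "it" here takes a bit of world knowledge, not just syntax.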
Lord, I hate the Rationalists' tendency to write long articles with so little structure. I don't understand how people make it through a post like this, which has a ton of juicy parts but doesn't stick to a thesis and doesn't commit to clarity.
Notice he won't commit to any quantifiable short-term predictions that could come back to bite him in the future. No references to any ML benchmarks at all. Just a lot of hand-waving and vague arguments.
A has-been who has nothing to contribute to ML or AI. Ignore him.
I really don't see this technology as unattainable in the way that robot self-awareness or the mass adoption of flying cars are.
There are plans for commercial operation to start in 2023 between Dubai and Abu Dhabi[0], and last year it received approval from the US Congress[1].
This could be a lot of hype, or even a scam, but that doesn't mean the technology is 30 years away.
[0] https://www.constructionweekonline.com/projects-and-tenders/...
[1] https://gulfnews.com/uae/transport/uae-hyperloop-now-a-step-...