Hacker News | mythbuster2001's comments

Driverless trains are not a problem worth solving in the first place. If a train carries hundreds of people (or a trainload of cargo), it is economically irrelevant whether it has a human operator or not. So it is safer to have a human there just in case.


The tech for automated road trains is vastly closer to reality than self-driving cars are. If you feel this way about trains, you should feel the same way about highway-specific automobile autonomy.

We also need to focus on the actual problem we face: cities are hitting limits to growth due to congestion, and we can't build more roads. The current state of the tech is capable enough to solve that problem by increasing the throughput of the major arteries of LA, DC, etc.


I'm not sure anything that has come out of OpenAI so far justifies that sort of compensation... Maybe they have some aces up their sleeves...


These numbers are actually much worse than one would think based on the PR.


Wait till your MacBook dies and you try to pull some data out of the soldered-on SSD. I find dongles annoying and the lack of MagSafe irritating, but the soldered-on SSD is what really puts me off.


But you have backups, right? Betting on data recovery is a really poor strategy; you might drop your laptop under a truck.


Yeah, sure. But sometimes you are traveling without access to your backup. Or something hasn't yet made it to the backup. Or any of 100 other situations. Soldering the SSD on instead of fitting a $3 connector just adds fragility. On a $3k laptop. No thanks.


And the only reason that decision was made was to stop you from DIY-upgrading the SSD and getting a few more years out of your $3k laptop.


No. It was done to save motherboard space.

Those upgradable connector ports have a cost associated with them.


Well which is it, space or cost?


So a connector is /less/ fragile than a soldered connection? I don’t think so.

If you’re really depending that much on ad-hoc access to a backup, drop an additional external SSD in your bag and add it to Time Machine. That’s even more secure. I highly doubt you are currently packing an external adapter to transplant your internal SSD into in case your notebook fails...


Let me explain: a motherboard has 1000 components. If one of them fails, the machine is likely a brick. The SSD is only a few of those chips; most probably what fails will not be the SSD but some capacitor on the CPU power line. With a separate SSD, you buy a connector (or another laptop), pull the drive out of the bricked machine, and you are good to go. With a soldered-on SSD, you are F^%$ed. All for a $3 connector.

I'm not paying $3k to be inconvenienced by such a braindead design and to carry yet another SSD along with my bag of dongles, because Apple decided to get cheap on a $3 connector. Capisce?


This is a great example of how one can get fooled by optimizing the wrong measure. It should be introductory material in machine learning, before any algorithm is introduced.
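The classic toy illustration (a sketch of my own, not from the thread): on a heavily imbalanced dataset, a model that optimizes raw accuracy can "win" by always predicting the majority class, while being useless by the measure that actually matters.

```python
import random

random.seed(0)

# Hypothetical imbalanced dataset: roughly 1% positives (rare-event detection).
labels = [1 if random.random() < 0.01 else 0 for _ in range(10_000)]

# A "model" that always predicts the majority class.
predictions = [0] * len(labels)

# Accuracy looks excellent because negatives dominate...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...but recall on the class we actually care about is zero.
recall = sum(
    p == 1 and y == 1 for p, y in zip(predictions, labels)
) / max(1, sum(labels))

print(f"accuracy: {accuracy:.2%}")  # high, yet meaningless here
print(f"recall:   {recall:.2%}")    # the model never catches a positive
```

Optimizing accuracy here rewards the degenerate model; a measure like recall (or precision/recall together) exposes it immediately.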


This is not rain. This is a drizzle on a very well-maintained, center-divided freeway, where the radar and ultrasonic sensors have a very nice reference (the divider), and with cars being tracked in front. For reference, here is an example of (non-autonomous) driving in real rain: https://www.youtube.com/watch?v=L3xKT98a3og


you mean this

https://www.youtube.com/watch?v=bAxoo6JLmgg

or this

https://www.youtube.com/watch?v=JoGgE55qNBA

and this is just an L2 system... come on brah, come harder


How about a haboob near Phoenix: https://www.youtube.com/watch?v=8vQMuwRjI6s


You are dogmatically defending the Turing test, which I think is the primary source of this confusion. The Turing test says: if it fools humans into thinking it's intelligent, it is intelligent. That is fair. But once humans understand the inner workings of some simple "AI" mechanism, it no longer fools them, since they now know what adversarial questions to ask to uncover it. It consequently fails the Turing test, and we get the AI effect. This test is just a bad idea, and it impairs research (for a number of reasons stated in the post, which you prematurely dismiss).

The coffee test for AGI (https://en.wikipedia.org/wiki/Artificial_general_intelligenc...) is much better, since it requires the ability to creatively interact with unpredictable reality as a test for intelligence. It avoids all the philosophical bullshit and all the smoke and mirrors, since you cannot fool physics. Somehow the so-called "AI researchers" avoid robotics like fire, since there their stuff actually needs to work (not just statistically) and outrageous BS claims cannot be made.

And yes, ultimately the human brain may be smoke and mirrors. But frankly, it is quite sophisticated smoke and mirrors, not anywhere close to the crap being put forward right now.


No, I'm defending the Chinese room thought experiment. It's cheating to look at the implementation and then claim it isn't AI; you can't look at the human implementation, which could very well also be based on simple math we simply haven't figured out yet. It's only fair to judge by inputs and outputs. And you are confusing AI with AGI; something does not have to have human-level intelligence to be AI. The Turing test is about AGI, not AI.

Useful, relevant, world-changing AI will happen long before AGI, which could very well be a pipe dream. A car that drives far better than humans is useful AI, yet it falls short of AGI; a robot that can clean my house is useful AI, but it could fall way short of AGI. There are vast, world-changing things to be done by AI long before AGI ever becomes a reality, and the fact that we understand how something works DOES NOT disqualify it from being AI, even if it boils down to little more than some statistical inference.

Saying it isn't AI because you understand how it works is like saying submarines can't swim; it doesn't have to work like nature to be valid, nor does it have to be like human intelligence to be intelligent, and any intelligence we build is by definition artificial intelligence. Machine learning that can diagnose better than a doctor is AI; no matter how well you understand that it's just math, it's still AI. Those who conflate AI with consciousness are the ones in error. AI does not and has not ever meant artificial self-aware consciousness; while such a thing would be AI, it would be the pinnacle of AI: AGI.

