Comma.ai, self-driving cars and indefinite optimism (mleverything.substack.com)
109 points by bko on Feb 16, 2021 | 65 comments


I think the author is guilty of exactly what he’s accusing everyone else of. The major car companies are delivering relatively high-quality products at scale (we have decent certainty it’s high quality because the presence of many of these driver assist features correlates with reduced accidents). And they’re rolling out slowly without a ton of stories of cars going haywire and flying off the road. And of course certain cars will only have certain features (addressing his Ford complaint) because those features require the correct mix of sensors and compute that is not yet present on every vehicle. That’s how technology that can affect personal safety should be rolled out. Slowly, carefully, and very boringly.

Releasing “alpha” level software and saying essentially “lol have fun idk” when it can significantly impact customer safety is childish, bad business, morally wrong, and something we as an industry need to move away from. It doesn’t matter if your customer base is made up of early adopters, hackers, or open-source supporters. You still owe them their safety and a reasonable and coherent privacy policy.


> And they’re rolling out slowly without a ton of stories of cars going haywire and flying off the road.

Automakers and Comma are operating at completely different scales. Orders of magnitude apart. You’re going to hear about more autopilot issues from major manufacturers because there are far, far more of them on the road. Comma only has a small group of loyal customers who make the leap to buy and modify their car.

Comma also incentivizes customers to keep quiet if the autopilot fails. The drivers can’t blame a multinational automaker if something goes wrong. They have to admit that they modified their car with an aftermarket device that hasn’t been safety tested. Comma even provides disclaimers that their product isn’t production ready. If it fails, the customer doesn’t have much incentive to broadcast to the world that they bought, installed, and used a product on public roads that was advertised as being not ready for prime time.

It’s false to equate Comma’s software strategy with that of major automakers. At minimum, automakers are working with safety regulators. Comma has gone to great lengths to specifically avoid dealing with anything safety related.


> Automakers and Comma are operating at completely different scales. Orders of magnitude apart.

Super Cruise has 7 million miles driven [0]. Comma has 35 million miles driven [1]

> Comma also incentivizes customers to keep quiet if the autopilot fails

How do you figure? The weird hatred people have towards comma far exceeds anything directed at Cadillac. You can find literally hours of real-life video of comma driving cars. Professional reviewers like Consumer Reports don't have a pro-comma bias. Comma doesn't have a PR guy or teams of lawyers. They're not a hundred-billion-dollar company. They're a dozen or so engineers. They have no influence over trade publications or blogs. I think people and reviewers would judge comma much more harshly than they would big auto.

> At minimum, automakers are working with safety regulators

> openpilot is developed in good faith to be compliant with FMVSS requirements and to follow industry standards of safety for Level 2 Driver Assistance Systems. In particular, we observe ISO26262 guidelines, including those from pertinent documents released by NHTSA. In addition, we impose strict coding guidelines (like MISRA C : 2012) on parts of openpilot that are safety relevant. We also perform software-in-the-loop, hardware-in-the-loop and in-vehicle tests before each software release. [2]

Please point me to the safety documents of the competitors. Most don't even have driver monitoring like comma does.

[0] https://www.cadillac.com/world-of-cadillac/innovation/super-...

[1] comma.ai

[2] https://github.com/commaai/openpilot/blob/master/SAFETY.md


>> Automakers and Comma are operating at completely different scales. Orders of magnitude apart.

>Super Cruise has 7 million miles driven [0]. Comma has 35 million miles driven [1]

Tesla has likely exceeded 4,000 million miles with Autopilot at this point, versus Comma's 35 million. That certainly qualifies as "different scales" and "orders of magnitude apart".

It is also frankly bizarre that the author of this article barely even mentions Tesla while treating Cadillac and Audi as the primary competitors for Comma.


I'm the author. I agree Tesla is miles ahead, but they're not offering the same product.

Comma is comfortably level 2 self-driving, essentially lane keeping. They don't detect red lights or stop signs, they don't have navigation or even maps. It just keeps you going in a straight line, with or without lane markings. It's much more comparable to something like Super Cruise.

My point was that lane keep assist technology exists as features in new cars. That's the target market and that's what comma should be benchmarked against. If you have a problem with comma's safety features you should have an even bigger issue with that of the large auto makers.


Sorry if this comes off as rude, but this comment shows you don't really know what you are talking about. Tesla offers two driver assistance products. There is Autopilot and Full Self-Driving Capability. All Autopilot does is autosteering and adaptive cruise control[1]. The more advanced features, like detecting red lights and stop signs, are part of the FSD package, which needs to be purchased separately. Autopilot and Comma seem like natural fits for a direct comparison.

[1] - https://www.tesla.com/support/autopilot


That's fair. I honestly didn't look into Tesla because I'm almost certain they're more advanced and a comparison would be kind of pointless. Tesla is better than comma.ai, but from what I've seen, comma.ai is at least comparable to the non-Tesla adaptive cruise control.


We believe Tesla is 1-2 years ahead of us, and that's held fairly true since the beginning of comma.


The man himself. You're a legend Hotz. Check out my sound cloud. Wait, I forgot, I don't have a sound cloud.


How do you factor HD mapping into the equation, like the one used by the Cadillac CT6?


>> The drivers can’t blame a multinational automaker if something goes wrong. They have to admit that they modified their car with an aftermarket device that hasn’t been safety tested. Comma even provides disclaimers that their product isn’t production ready. If it fails, the customer doesn’t have much incentive to broadcast to the world that they bought, installed, and used a product on public roads that was advertised as being not ready for prime time.

Well this is just terrifying.

I'm wondering if there have been any serious accidents where a driver was using his software and got taken to court over the repercussions of the accident.

Thinking about tens of thousands of people using this software on the road is pretty scary.


The situation in, say, a Tesla or any other new car isn't much better under most driving-rule regimes out there today. Fault for an accident virtually always lies with the driver, regardless of whether automated or assist systems were in use. This includes Tesla's Autopilot when used in the United States: you don't get to blame Tesla, at least not today.

Personally, I can't wait for wider adoption of mature "assisted driving" products in the future - I find the idea of that future a lot less scary than the millions of not so great human drivers we have to deal with right now.

The biggest problem I have with Comma is that the quality of the camera installation will vary enormously between users, even if we assume the software to be perfect. At least in a car certified for production you can mandate that the cameras, sensors, etc. be attached permanently to the vehicle in relatively sound locations, with a vastly reduced risk of them falling off and causing a bad steering or other control input.


I didn't mean to shit on big auto, and I like their features. My car has collision detection that I find pretty reliable and useful. My main complaint is what I see as unfair attacks on comma.ai by engineers, when we should be applauding Hotz's efforts and openness, especially compared to the competition.

I don't see the Ford complaint as them introducing features slowly and carefully. To me it's a code smell. But my biggest complaint is that it's opaque. If they had a detailed description of the features, what sensors are used, and their limitations, then I would be much happier. But instead they have nothing more than a name and a trademark.

Again, the "alpha" level doesn't mean anything. Hotz is immature and some of his actions are regretful. It certainly doesn't live up to the thought and care he puts into his products. Of course, I don't know the guy but I have heard him speak for hours and was very impressed.


Why would having different features be a code smell? I would compare it more to purchasing version X or Y of Photoshop, or something similar. If you buy an older or cheaper version of the software, you get a different feature set.

I can completely agree the naming and descriptions are horrendous though. I think that as more of these technologies start to provide assistance in more areas of driving, more of the time, education and transparency are nearly as important as safety. And that’s an area where every one of these projects is failing to a greater or lesser degree.


It just strikes me as a patchwork.

Consider BLIS (Blind Spot Information System) with Cross-Traffic Alert.

I imagine there's an engineer who created a function isBlindSpot(state, "right"), as opposed to a more comprehensive object-detection system that could tell you whether something is in your blind spot but also feed into Intersection Assist (a separate feature in the 2021 Mustang Mach-E).

Of course I could be wrong, and the features could be an interface to a more comprehensive system, but my hunch is that it's more segmented.
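
To make the contrast concrete, here's a hypothetical sketch (every name is invented for illustration; this is not Ford's actual code) of the integrated architecture: several features as thin queries over one shared tracked-object layer, instead of a bespoke isBlindSpot(state, "right") per marketed feature.

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        x: float  # meters ahead of the rear axle (negative = behind)
        y: float  # meters left of the centerline (negative = right)

    def objects_in_region(objects, x_rng, y_rng):
        # Generic query over the shared detection list.
        return [o for o in objects
                if x_rng[0] <= o.x <= x_rng[1] and y_rng[0] <= o.y <= y_rng[1]]

    def blind_spot_warning(objects, side):
        # BLIS as a thin query over shared detections.
        y_rng = (1.0, 3.5) if side == "left" else (-3.5, -1.0)
        return bool(objects_in_region(objects, (-5.0, 1.0), y_rng))

    def cross_traffic_alert(objects):
        # Cross-Traffic Alert reuses the same detections, different region.
        return bool(objects_in_region(objects, (-10.0, -3.0), (-15.0, 15.0)))

    # One car sitting in the right blind spot:
    detections = [TrackedObject(x=-2.0, y=-2.0)]
    print(blind_spot_warning(detections, "right"))  # True
    print(cross_traffic_alert(detections))          # False

If the shipped features are instead a patchwork of per-feature helpers, each new feature has to re-solve detection rather than reuse it.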


I detest geohot/comma for doing this as much as Musk.

Plus, it's a toyish feature.


Python straw man aside, Comma could improve their safety dramatically, today, by documenting their safety systems, requiring a specific, audited checklist for new vehicles to be introduced, producing a real change management and detailed test result report for each release, utilizing a real safety processor in their "Panda" CAN intercept product, and working to reduce their use of bus effects like message-stuffing or first-in-wins racing with CAN control messages where possible.
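
For readers unfamiliar with the "first-in-wins" criticism, here is a minimal illustrative sketch of the pattern, assuming the python-can library and a made-up arbitration ID. This is the generic shape of the technique, not Comma's actual code.

    # "First-in-wins" racing on a CAN bus: an intercept device injects
    # its own control frame with the same arbitration ID as the stock
    # ECU's frame, and whichever one a receiver happens to act on "wins".
    import can

    STEERING_CMD_ID = 0x2E4  # hypothetical steering-command ID

    bus = can.interface.Bus(channel="can0", interface="socketcan")

    while True:
        frame = bus.recv()  # block until the next frame arrives
        if frame.arbitration_id == STEERING_CMD_ID:
            # Race the stock command with our own frame. The outcome
            # depends on bus timing, not on any documented contract --
            # which is exactly the safety concern raised above.
            override = can.Message(arbitration_id=STEERING_CMD_ID,
                                   data=bytes(8),  # placeholder payload
                                   is_extended_id=False)
            bus.send(override)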

But there's no way I would bring this up in their GitHub, for example, because it isn't the culture of the project - and that's the problem. For example, their "leader" shows up and makes commits to their safety code like "WTF WHY WAS THIS SHIT EVERYWHERE."

To Comma's credit, they have made great strides in safety since the first two years of their product - since 2018, they've introduced safety checks that (while undocumented), seem to be somewhat consistent (or trying to be) across vehicles, are written in MISRA C which is at least something that's actually used in the field, and exist at the low level of the system. Commits are signed now, and their founder seems to make fewer non-documenting troll commits to the product. These are all signs that the company is growing up (and, if we look at the history of the company and their leadership, we certainly see some signs as to why this may be the case...).

But, these are things that should have been in place since day one, and the founders and company were certainly not unaware. It is absolutely fair to judge products and companies for the decisions they make, ESPECIALLY when their product is purportedly "world-changing," and in the case of Comma, safety does not seem ingrained into the decision-making process.

As an aside, this blog post is killing me, between extensive Peter Thiel quotes, random jabs at California's COVID policy, Elon worship, and a lionization of a startup company with an immature founder building safety-related hardware.


I cannot edit my original post any more, but a helpful HNer "DesolationJones" was able to provide a little bit more insight into Comma's safety strategy using information trawled from Reddit comments and Medium posts: https://news.ycombinator.com/item?id=26158158 .

It seems that Comma are actively working on documenting their safety systems (via a new hire to own this task) and creating an audited checklist. And, there is a "new vehicle" checklist enumerated in a Medium post including some safety constraints, although in reading the Panda "safety" tree, I found no actual acknowledgement that these specific constraints were audited for each vehicle addition.

So, it seems that the great strides since 2018 continue - once these promises come true, they will have almost reached what I would consider the baseline level for a safety-related automotive system. Perhaps they will be able to secure an exit this way - I am actually rather surprised they haven't, given the froth level in this space as well as the valuable training data they have collected from their users.


We aren't interested in an exit. Our mission is to "solve self driving cars while delivering shippable intermediaries"


> For example, their "leader" shows up and makes commits to their safety code like "WTF WHY WAS THIS SHIT EVERYWHERE."

Every self-driving product that deploys to consumers is going to end up in court eventually. It's inevitable.

Imagine trying to defend comma.ai to a lay jury... the prosecution won't even have to try.


I'd pessimistically imagine the real commit trees of actual SDC companies then are no different, given the "move fast and break things" ethos many of these companies have.

Meanwhile, taking these extra rote precautions (e.g. cut down on global state, throw unit tests on everything, don't use gotos) has no real effect when the elephant in the room is that even correctly-implemented state-of-the-art machine learning models only have classification accuracies in the mid-90%s at best.

That's (relatively) fine if you're deciding which ad to show a teenager on TikTok. When we're talking about driving a car, it loosely means an accident every 100 miles. Publicly available disengagement data corroborates this (only Waymo has surpassed 1,000 miles per disengagement; GM might have recently, too).
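
As a back-of-the-envelope illustration of that claim (the inputs here are assumptions chosen for the arithmetic, not measured figures):

    # Rough sanity check of "mid-90% accuracy ~ an accident every 100 miles".
    # Assumed: one safety-critical decision every 5 miles, each decided
    # correctly 95% of the time.
    accuracy = 0.95
    miles_per_critical_decision = 5.0

    miles_between_errors = miles_per_critical_decision / (1 - accuracy)
    print(miles_between_errors)  # 100.0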

Meanwhile, verification in this domain only means replaying actual recorded sensor data in a "simulation" using one-off example scenarios, not doing any kind of formal analysis or exhaustive proofs -- which simply don't exist yet for ML. There's no real way currently to take an arbitrary mis-classified example and determine its root cause -- or even correct it within the existing model without reducing accuracy elsewhere!

It's literally impossible to avoid being unsafe, so code style rules and procedures amount to mere hoop-jumping.


I agree with you overall - in my post, I was trying to accept the premise of the overall safety model: "the ML will break but as long as the driver is paying attention it's OK." This was out of an effort to be fair to Comma, as, to your point, this is the overall state of the industry.

I disagree, however, that code style rules and procedures amount to hoop-jumping, as not every failure mode is the same (especially if you do accept the driver-attention / driver-in-control premise). An unexpected failure at the control layer (especially the control override layer) is a different class of failure to an unexpected direction command from the ML model, for example.


I think it's a matter of personal responsibility on one hand, but rest assured, when this product gets someone killed, Comma will be in court.

I don't think they have the money to fight multiple court challenges; even if you're in the right, you still need to fight. I doubt this company will be around in 5 years, and I at least hope anyone who installs this thing in their car and then gets into an accident loses their license.


It's not a matter of personal responsibility when drivers using this software can crash into other drivers. Just like we don't let people drive drunk.

I'm actually not anti-Comma, just responding to this specific point.


First off, just because the HN community lays on criticism doesn't mean that the people making the criticism don't think the technology is a good idea. If I spend the energy to criticize $technology, it means that I care about it to an extent. There's a baseline level of interest. If I didn't care about $technology, then I wouldn't spend the energy to break it down into concrete, detailed criticism.

Also, the author quotes Peter Thiel, claiming that the current cynicism towards self-driving technology is a type of "indefinite optimism" that hampers people's ability to plan the future. This is juxtaposed with 1950s-1960s America, where Thiel claims "definite optimism" reigned. That type of definite optimism, at least in 1960s America, did not account for minorities, people of color, people with disabilities, etc. There was no future plan for them. Indefinite optimism, though it may appear lethargic and slow, at least is able to investigate the multitudes of edge-case scenarios that any $technology will face.


How is Comma's behaviour substantially different from Tesla's?

Arguably Comma is being a little more upfront about the user being a guinea pig for their software.


And Comma, if you read their website, shows how Consumer Reports compared them to Tesla Autopilot as well as Nissan, Ford, and other car manufacturers' stock offerings for what is essentially adaptive cruise control. CR looked at features and safety together and Comma scored #1, so from Comma's perspective, they're doing better than what car manufacturers are currently selling, and they are certainly far more upfront than those manufacturers that you are a guinea pig.


I don't know how they "best Tesla", but it's not that hard to do better than the OEM lane-assist features. Scoring better than Super Cruise is surprising too, but perhaps that's because Super Cruise is geofenced.


> but perhaps that's because Super Cruise is geofenced.

Exactly.

The Consumer Reports test wasn't a comprehensive safety analysis; it was a superficial "how will this feel to a consumer based upon a few drives" analysis. We are talking about a consumer reviews magazine... no regulator or jury is going to give a shit what a Microwave/Refrigerator review website said after performing 37 "tests".

In other words, the other products are "worse" because those are production-quality products for which automakers with substantial assets to forfeit are willing to accept a degree of real liability.

I'm sure if you compared comma.ai to GM's equivalent "alpha quality software" -- aka Cruise's L5 prototype -- the latter would come out way on top. But GM isn't stupid enough to ship alpha quality self-driving software, even as an aftermarket add-on.

Including comma.ai in this comparison was a huge oversight on the part of Consumer Reports. They should be comparing comma.ai to competitors' "alpha" products, which are all L5 prototypes.


I'm biased, but OP hasn't done any due diligence. Here's a page from 2017 regarding Audi: https://www.audi-mediacenter.com/en/technology-lexicon-7180/...; there is also newer info out there.

"But most importantly let’s just stop being so derisive when it comes to world changing technologies." You just did it yourself.


The Audi page you linked doesn't contain the words "liable" or "liability".


What is comma.ai's business model here?

I mean, whose auto insurance (both collision and, more importantly, liability) has a clause that lets anyone hook up a random aftermarket self-driving toolkit to their car?

Who has liability once you start using their solution?

I'm sure comma.ai will have a clause that their solution is only for fun and you can't blame them if their self driving solution drives your car into a bus full of orphans.

So what's the point of what they are selling if you can't actually use it?

Or put another way, a self-driving, or even driver-assist, add-on only has value if the vendor absorbs all liability when you use their system.


> I mean, whose auto insurance (both collision and, more importantly, liability) has a clause that lets anyone hook up a random aftermarket self-driving toolkit to their car?

On the one hand, if I get wasted and crash my car into a tree, insurers will generally cover that even though I was negligent. On the other hand, if I get cheaper home insurance by lying about having smoke detectors, and my house burns down, I might be in violation of the contract and not be covered.

So it seems like where this SHOULD end up is that insurers could charge more (or less, as it gets better) for driving with these features. That said, I wouldn't fault an insurance company for not wanting to cover a system that says in all caps, as the linked article points out, "THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY." I'd possibly equate that warning to somebody who got into an accident after researching how well they could steer with their teeth, and it'd make me consider broadening the definition of "intentional" damage.


As a slight tangent, Hotz has joked (?) that his real business will be providing auto insurance to his drivers, since he's confident that they'll be safer drivers. He'd basically pick off the best drivers, and he has more data than anyone about what constitutes a safe driver.

He makes this interesting observation that essentially all good drivers are alike; each bad driver is bad in his own way.

It's an interesting insight and great for self driving since the "good" drivers would be clustered while the "bad" drivers would essentially be noise.
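
A toy sketch of that intuition (illustrative only; this reflects nothing about comma's actual pipeline): with a density-based clusterer like DBSCAN, the tight mass of similar "good" drivers forms a cluster, while the dissimilar "bad" drivers fall out as unlabeled noise.

    # "Good drivers cluster, bad drivers are noise" with DBSCAN.
    # Per-driver features are fabricated for the example:
    # [mean headway in seconds, hard-brake rate per 100 miles].
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    good = rng.normal(loc=[2.0, 0.5], scale=0.1, size=(50, 2))         # tight cluster
    bad = rng.uniform(low=[0.3, 2.0], high=[4.0, 15.0], size=(10, 2))  # scattered

    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(np.vstack([good, bad]))
    print((labels == -1).sum(), "drivers flagged as noise/outliers")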


Vision guided missiles are hot right now.


They want to avoid thinking about non-engineering things like liability for as long as possible and focus on the engineering problem. It's a valid approach.

As for the "who has liability" question it's simple: the driver does. Or at least it's intended to be simple in the absence of lawyers twisting the wording around. Regular, non self driving cars have basic driving assists. For example, cruise control will happily ram your car into the back of another one. Adaptive cruise still has a chance of doing that, plus it has the additional risk of causing an accident by randomly slamming on the brakes for no reason at all. For all these failure modes the liability is understood to lie with the driver: it is up to them to be alert enough to override the automation when it malfunctions. However, if you do a long distance drive with and without cruise control you'll quickly admit that it has value, even with all the imperfections. Same for Comma.


> They want to avoid thinking about non-engineering things like liability for as long as possible and focus on the engineering problem. It's a valid approach.

When you write software that can kill people, you don't get to roll your eyes at questions like "who is responsible when someone dies?".

These sorts of questions ARE engineering questions, and answering these questions with thought and care is important! Why? Because if the answer is "this is alpha quality and we might be liable" then you wait to deploy the feature. Which is why comma.ai is "ahead" of its competitors -- because they aren't doing real engineering. Thinking about the real-world context into which your system is being deployed is the thing that separates real engineering from R&D/hacking.

What you're describing is not engineering; it's R&D. Even comma.ai admits this. And, look, R&D is perfectly okay! Everyone else is also doing R&D on real roads! E.g., all of the major auto manufacturers are putting cars on real roads with full L5 (and safety driver backups where appropriate). In fact, the major auto manufacturers are all WAY ahead of comma.ai when it comes to R&D-quality systems! Compare Cruise or Argo or Uber ATG or Waymo to comma.ai.

But it's a terrible idea to ship R&D to paying customers when lives are on the line. If comma.ai wants to drive their system with their own safety drivers, that's fine.

This isn't even (just) a normative or ethical statement. It's simply a statement of fact, at least in the USA. For some reason software never became a "real" engineering discipline. But automotive engineering did.


As long as the product demands that the user be looking at the road (and intends to keep it that way for a while), there is no reason to spend time figuring out how to solve legal issues in case the user isn't responsible.


This is only true for a tiny subset of possible flaws.

Toyota was found legally responsible for its unintended-acceleration bug from the mid-2000s, back when "the driver should pay attention 100% of the time" wasn't even something you would think to say, because there were exactly zero driver-assist features.


I don't know. How is this fundamentally different than any other vehicle modification? Or even any other form of driver distraction.

The driver is always responsible. This whole exercise feels like people trying to remove themselves from liability so that they can feel better about their absurd behavior.

Why can't we keep this simple? The driver is responsible.


They first famously tried to sell the technology to Tesla.

When that didn’t work, I think they set out to develop a proof of concept that they could use to generate hype, then wait until an automaker wanted to buy their technology to catch up.

I get the impression they cursed future acquisition opportunities by refusing to even engage with safety regulators. They’re now selling their product in clever ways designed to shift all liability to the user.


> They’re now selling their product in clever ways designed to shift all liability to the user.

In "clever ways" that completely fail to succeed in this goal, legally speaking. Product liability (law) is very expansive and generally covers every person in the chain from user to manufacturer.


Not exactly. Elon made an offer to geohot to build a better Mobileye. Some work was done, but geohot wasn't okay with the deal. He was still interested in the problem, though, so he founded comma.


Bingo. Liability is a huge risk here, and they are using their "customers" as alpha testers who are playing Russian roulette with their own lives and the lives of those around them. I want Comma to succeed, but not by shipping alpha products to a general public that doesn't understand the risks.


Wait, they're shipping it already? Wtf, all I did was look at their codebase and I assumed they were just making some news about it. Distributing code and shipping a product are very different, and I'll be surprised if nothing bad comes out of it soon.


Comma thinks they are ready because it's essentially adaptive cruise control, and, as they state on their website, they scored #1 for best at that by Consumer Reports compared to the stock offerings from Ford, Nissan, and other carmakers. Frankly, I don't know if _anyone_ should be shipping this kind of tech this early, but Comma is currently far from the only one offering what it does. And because it scored #1 and the code is open source, I'd be much more likely to trust Comma than the stock offering from my manufacturer.


> best at that by Consumer Reports compared to the stock offerings from Ford, Nissan, and other carmakers.

Read through the Consumer Reports PDF [1]. It's abundantly clear that they have no idea how to properly evaluate adaptive cruise systems.

What this report tells you is exactly what you would expect from a Consumer Reports publication: the generic impression that a typical consumer will have after a few hours of use. This sort of superficial analysis is completely unrelated to anything approaching a real safety analysis.

[1] https://data.consumerreports.org/wp-content/uploads/2020/11/...


Yep. It's not "this is less likely to kill you"; it's "this has a better UI and some sort of ML process that's supposed to check you're looking at the road". Somehow they score it highly for telling you when you should and shouldn't use it, for a product whose disclaimer says you shouldn't really be using it at all.

It's a bit like declaring your bike outperforms cars in crash safety because a magazine gives it 8/10 for handling and 9/10 for the visibility of your yellow jacket and your rear light having a "flash" mode.


Man, you have a lot of hate for CR. I think their report makes it very clear that these aren't recommendations of systems in any way but rather an attempt to start formalizing testing and the state of various systems that are out there. In multiple places they say this is a report FOR THE INDUSTRY, not consumers.

They even have a specific callout for Comma:

> A determination was made to include the Comma Two Open Pilot system manufactured by Comma.ai. Although Consumer Reports does not endorse after-market modifications to all consumers, we feel that it is important to include the test results in this report to the industry. The direct comparison of this system to the other OEM systems will hopefully provide insight on this alternative approach and highlight the areas across the industry that have room for improvement.

It is at least SOMETHING. I haven't seen anything else even close to as formalized that looks across all the various systems out there. Do you have better ones to look at?


> these aren't recommendations of systems in any way

This is a deflection. I'm responding to a comment that stated:

>> as they state on their website, they scored #1 for best at that by Consumer Reports compared to the stock offerings

CR is irresponsible for issuing ranked scores while saying in the fine print that these scores aren't actually recommendations.

That they didn't foresee comma.ai using their recommendation in marketing only makes them that much more irresponsible.


>It enables your car to steer, accelerate, and brake automatically within its lane. Drive to a highway, press the cruise control SET button, and openpilot will engage

Ah, it isn't marketed as a full autopilot. I'm not 100% confident that it should be used in the current state but I'm also not an auto regulator, so we shall see.


I have one. Works pretty well but requires a fair bit of tweaking. It's called a dev kit for a reason.

Many car models limit the amount (and the rate at which) the computer system can turn the wheel. The system is piggybacking off of this engineering for most of its 'safety'. There were people spoofing park assist on some models to get very strong control of the wheel, but such discussion got banned on the official Discord.
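
For context, the kind of limit being described looks roughly like this (a minimal sketch with invented numbers; panda's real checks are vehicle-specific MISRA C, not this Python):

    # Illustrative steering-torque limiter: clamp the absolute command
    # and the rate at which it may change between control ticks.
    MAX_TORQUE = 300       # hypothetical absolute limit, vehicle units
    MAX_TORQUE_DELTA = 10  # hypothetical max change per tick

    def clamp_torque(requested, last_sent):
        # Absolute limit first...
        torque = max(-MAX_TORQUE, min(MAX_TORQUE, requested))
        # ...then the rate limit relative to the last command sent.
        return max(last_sent - MAX_TORQUE_DELTA,
                   min(last_sent + MAX_TORQUE_DELTA, torque))

    print(clamp_torque(500, 0))  # 10 -- the rate limit dominates

An aftermarket device that leans on limits like these inherits whatever the OEM chose, which is the "piggybacking" described above.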

Which leads me to my big problem with Comma.ai: it's open source but not very open. Look at the code, go back several releases and watch the number of comments regress.

There's zero documentation; the wiki is a fucking joke that for some reason doesn't come up in a Google search, and it's missing critical information. No, I don't want to watch a five-fucking-minute video to find a 10-second clip with the information I need. Write that shit down.

Instead you ask a question and some asshole who is gunning for a job at comma posts a passive-aggressive screenshot of themselves using the Discord search. It's almost like if you put effort into having information recorded instead of being a dick, you wouldn't have to be a dick. But that wouldn't be very fun or inflate egos much, so it doesn't happen.


> Who has liability once you start using their solution?

Their FAQ just says "ask your insurance company" lol


I think their strategy is selling the technology to car manufacturers and investors, not average people.


They have a "Shop" button on their homepage that lets anyone buy their product for installation on their own car. I had a coworker purchase Comma.ai + a specific make/model solely for its Comma.ai compatibility last year. Average people are getting in on this, for the better or (likely) worse.


But isn't the real value the data they gather to train their models? That's where the money is.


I just watched George Hotz's interview on Lex Fridman's podcast from last October: three hours talking about many interesting things, including comma.ai and how it compares with Tesla's approach.

He even mentions HN a couple of times:

https://www.youtube.com/watch?v=_L3gNaAVjQ4&t=53m35s

https://www.youtube.com/watch?v=_L3gNaAVjQ4&t=2h18m17s


Since I didn't know who George Hotz was until yesterday, I thought I'd mention he's aka geohot, of iOS jailbreaking and PS3 reverse-engineering fame:

https://en.wikipedia.org/wiki/George_Hotz


Still one of my favorite raps of all time https://www.youtube.com/watch?v=9iUvuaChDEg&ab_channel=geoho...


OK, first, I'm impressed someone is monetizing their HN comments. But this entire argument is ridiculous: Comma AI's transparency isn't a positive compared to traditional car companies. Traditional car companies aren't transparent because the expectation of a customer is "I buy the product and it works". Transparency isn't a substitute for well-designed and well-tested products. Comma AI has a massive advantage over car companies: they have no intention of following industry-standard design procedures to ensure they don't kill people. Instead, they sell their software with a sticker saying "if you kill someone, you don't know us".

I won't be derisive, as this author claims I would be; I'll be serious. I've worked on autonomous-driving solutions with traditional car companies. They move slower because BMW means something, and they know it'll mean death if they fuck up. Comma AI hasn't learned from the industry it's trying to replace. This is the continual edge that Silicon Valley has over the rest of the world: they're happy to just use legal arbitrage to gain an advantage and then pivot once their advantage is entrenched. It's this ridiculous "how were we supposed to know this would happen" excuse that's trotted out all the time, and it's absolute self-serving bullshit.


I think something that is completely missed by the industry, and that comma does quite well in comparison, is driver monitoring.

Sure, self-driving cars are the engineering end goal. But really, aren't we trying to cut fatalities?

Good driver monitoring is a pretty damn good way to prevent distracted driving from causing an accident.


Now that every car company has its own self-driving arm and is also working on its own separate driver assist, the limited go-to-market window for Comma AI under its current approach is closing.


You know the comma two exists, right?


Yes, and why would someone ever use what is essentially a rehash of the comma one if their new Toyota already has driver assist built in?


Because the end-result user experience is better, plus OTA updates?



