
So, what’s it gonna take for Waymo to start selling retrofit kits for existing cars?

If a $10,000 investment reduces the chances of a serious accident by 90%, the corresponding reduction in insurance rates might have a payoff within a few years. Especially if adoption starts to push rates up for customers who don’t automate. I can’t take a taxi everywhere, but I’d sure like it if my car drove me everywhere and did a better job than me at it too.
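Back-of-envelope, with a made-up premium (the $2,500/yr figure below is purely an assumption), the payback works out something like this:

    // Hypothetical numbers -- actual premiums and discounts will vary.
    const retrofitCost = 10_000;                      // assumed cost of the kit
    const annualPremium = 2_500;                      // assumed current premium
    const premiumCut = 0.9;                           // if rates fell in line with the 90% risk reduction
    const annualSavings = annualPremium * premiumCut; // $2,250/yr
    console.log(retrofitCost / annualSavings);        // ~4.4 years to break even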


They did just sign a deal with Toyota[1]. Probably no retrofitting, but it looks like they at least intend to license the tech.

[1] https://waymo.com/blog/2025/04/waymo-and-toyota-outline-stra...


I had another post in this thread with the same information, but here again is the current Waymo sensor suite...

>With 13 cameras, 4 lidar, 6 radar, and an array of external audio receivers (EARs), our new sensor suite is optimized for greater performance...it provides the Waymo Driver with overlapping fields of view, all around the vehicle, up to 500 meters away, day and night, and in a range of weather conditions.


It honestly feels like they spend years validating each new platform. The Zeekr, for example, was announced years ago and is only now very occasionally seen on the streets, and only as a Waymo Engineering mule. Likewise the transition from the Chrysler Pacifica to the I-Pace took a while. Hopefully they figure out how to scale up to more platforms soon. They announced a partnership with Hyundai a few years ago, also with nothing to show for it yet.

Of course, safety first, so they should take their time and not rush things...


The Zeekr vehicles were heavily affected by changing regulations and tariffs around Chinese-made automobiles. The Hyundai vehicles haven't started production yet. They're in an awkward situation because they made a bad prediction about the political direction without a backup plan, and it went south right as they were entering production.

They need to work everywhere. How do they do in snow and ice? Humans do really badly there, but where I live it happens often enough that we can't just stay home, or we'd spend weeks in the house.

Don't get me wrong, I'm hoping it's soon. However, they have a lot of work left.


Perhaps, but in the same way you may not need an air conditioner (due to your location), it would probably not be best for Waymo to market itself as 'suitable for all conditions, ymmv, lol'.

In this case, chain of custody needs to extend to the capture device itself, and to any software that exists in the supply chain for the video content.

There are some experimental specifications that exist to provide attestation as to the authenticity of media. But most of what I’ve seen so far is a “perjury based” approach that just requires a human to say that something is authentic.
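As a rough illustration (not any particular spec, and the key handling below is entirely hypothetical): hardware-rooted attestation would mean the capture device signs a hash of the footage with a key only it holds, so anyone downstream can verify the bytes haven't changed:

    import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

    // Hypothetical device key; in a real camera it would live in a secure element.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    const footage = Buffer.from("...raw video bytes...");        // stand-in for the file
    const digest = createHash("sha256").update(footage).digest();

    // Signed at capture time, inside the device.
    const signature = sign(null, digest, privateKey);

    // Verified later by anyone holding the device's public key.
    console.log(verify(null, digest, publicKey, signature));     // true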


Chain of custody isn't real as long as the judiciary gives the government a 'good faith' pass when chain of custody isn't maintained or documentable in court. Go into LexisNexis and look up 'good faith' related to 'chain of custody'. Any 'protections' that can be waived away at a judge's whim when the government fails to meet the standard are not actually real; they're pure theater that lends the American judicial system a legitimacy it doesn't deserve.


> In this case, chain of custody needs to extend to the capture device itself, and to any software that exists in the supply chain for the video content.

There are two major problems with this.

First, is all footage from existing surveillance systems going to be thrown out because it doesn't use this technology? Answer: No, because it would be impractical. But then nobody cares to adopt the technology because using it isn't required. How's that IPv6 transition going?

Second, that sort of thing doesn't actually work anyway. Surveillance cameras are made by the lowest bidder. Their security record is appalling. They're going to publish their private keys on github and expose buffer overflows to the public internet and leave a telnet server running on the camera that gives you a root shell with no password. Does it sound like hyperbole? Those are all things that have actually happened.

There is only one known way to prevent this from happening: Do not allow the hardware vendor to write the software. Any of the software. Instead, demand hardware documentation so that the firmware can be written by open source software people instead of lowest bidder hardware companies. This is incompatible with using the hardware vendor as the root of trust, which is a natural consequence because the hardware vendors are completely untrustworthy.

But let's suppose we find some way to do it. We'll pass a law imposing a $100 fine on any company that has a security vulnerability. Then there will never be a security vulnerability again, because security vulnerabilities will be illegal; I'm assured this is how laws work. At that point the forger takes the camera and points it at a high-resolution playback of the forgery, and the camera records and signs the forgery.

I kind of wish people would stop suggesting this. It's completely useless but it creates the false impression that it can be solved this way and then people stop trying to find a real solution.


On a 3090 (24GB VRAM), same prompt & quant, I can report more than double the tokens per second, and significantly faster prompt eval.

    total_duration:       10530451000
    load_duration:        54350253
    prompt_eval_count:    36
    prompt_eval_duration: 29000000
    prompt_token/s:       1241.38
    eval_count:           460
    eval_duration:        10445000000
    response_token/s:     44.04
Fast prompt eval is important when feeding larger contexts into these models, which is required for almost anything useful. GPUs have other advantages for traditional ML, whisper models, vision, and image generation. There's a lot of flexibility that doesn't really get discussed when folks trot out the 'just buy a mac' line.
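For reference, the token/s rates above appear to be just count divided by duration; Ollama reports its durations in nanoseconds:

    // Derived from the Ollama response fields above (durations are nanoseconds).
    const promptTps = 36 / (29_000_000 / 1e9);      // ≈ 1241.4 tokens/s
    const evalTps   = 460 / (10_445_000_000 / 1e9); // ≈ 44.0 tokens/s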

Anecdotally I can share my revealed preference. I have both an M3 (36GB) and a GPU machine, and I went through the trouble of putting my GPU box online because it was so much faster than the Mac. And doubling up the GPUs allows me to run models like the DeepSeek-tuned Llama 3.3, with which I have completely replaced my use of ChatGPT 4o.


Thanks for the numbers! People should include their LLM runner as well, I think, since there are differences in hardware optimization support. For example, I haven't tested it myself, but I've heard MLX is noticeably faster than Ollama on Macs.


FAA launch licenses require substantial liability insurance. $500 million in this case.

https://drs.faa.gov/browse/excelExternalWindow/DRSDOCID17389...


The flag you want to see from a senior is reasoned examples of how they use it effectively. Ask for stories about successes and failures. By now, everyone has some.


So it’s websockets, only instead of the Web server needing to handle the protocol upgrade, you just piggyback on HTTP with an in-band protocol.

I’m not sure this makes sense in 2024. Pretty much every web server supports websockets at this point, and so do all of the browsers. You can easily impose the constraint on your code that communication through a websocket is mono-directional. And the capability to broadcast a message to all subscribers is going to be deceptively complex, no matter how you broadcast it.


Yes, most servers support WebSockets. But unfortunately most proxies and firewalls do not, especially on big company networks. Suggesting that my users use SSE for my database replication stream solved most of their problems. Also, setting up an SSE endpoint is like 5 lines of code. WebSockets require much more, and you also have to do things like pings to make sure the connection automatically reconnects. SSE with the JavaScript EventSource API has all you need built in:

https://rxdb.info/articles/websockets-sse-polling-webrtc-web...
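To illustrate how little code SSE needs on either side (the /stream path and port below are made up):

    // Server (Node): set the event-stream headers and keep writing.
    import { createServer } from "node:http";
    createServer((req, res) => {
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      });
      const timer = setInterval(() => res.write(`data: ${Date.now()}\n\n`), 1000);
      req.on("close", () => clearInterval(timer));
    }).listen(8080);

    // Client (browser): EventSource reconnects on its own, no ping/pong needed.
    const source = new EventSource("/stream");
    source.onmessage = (event) => console.log("got:", event.data);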


SSE also works well on HTTP/3 whereas web sockets still don’t.


I don't see much point in WebSockets for HTTP/3. WebTransport will cover everything you would need it for and more.
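For illustration, a minimal WebTransport sketch looks like this (the endpoint URL is made up, and it needs a server that actually speaks WebTransport over HTTP/3):

    const wt = new WebTransport("https://example.com:4433/transport");
    await wt.ready;

    // Streams are independent: no ordering or head-of-line blocking between them.
    const stream = await wt.createBidirectionalStream();
    const writer = stream.writable.getWriter();
    await writer.write(new TextEncoder().encode("hello"));

    // Unreliable, unordered datagrams are available too.
    const dgram = wt.datagrams.writable.getWriter();
    await dgram.write(new TextEncoder().encode("ping"));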


That might very well be, but the future is not today.


But why add it to HTTP/3 at all? HTTP/1.1 hijacking is a pretty simple process. I suspect HTTP/3 would be significantly more complicated. I'm not sure that effort is worth it when WebTransport will make it obsolete.


It was added to HTTP/2 as well and there is an RFC. (Though a lot of servers don’t support it even on HTTP/2)

My point is mostly that SSE works well and is supported, and that has a meaningful benefit today.


To have multiple independent websocket streams, without ordering requirements between streams.


Going slightly off on a tangent here: do FaaS cloud providers like AWS, Cloudflare, etc. support SSE?

Last time I checked, they don't really support it.


Congrats on the launch, I bought one.

This is a really cool example of ambient technology. Typically when people talk about ambient technology they're talking about something like an e-ink display that is pushing information to you, but in a way that doesn't require interactivity and isn't screaming for your attention. This is a little different in that it's always _receiving_ information from you without any need for interaction or maintenance, except on your terms.

The interaction model is pretty clever too. Since it's collecting data from the instrument they've found a way to cue the device to perform an action without the user needing to open an app. (black keys to bookmark) There is an app of course, but it connects directly to the device, with no annoying setup requirements. I've seen this same approach with several other devices - Xbloom coffee maker, Combustion thermometers, Week Aqua lights - it works really well. I'm understating it. It's astounding how pleasant it is to use devices like this.

As hardware continues to improve I expect this will be the default mode for pretty much every new technology appliance. Ambient operation, local data, local app, with cloud and accounts as _options_ to extend functionality if it's necessary.


You’ll be warned when you accept the terms and conditions. And after the first lawsuits, there will be some fine print on the bottom of the page for you to ignore.


This is unlikely. I work with people all over the world, and generally we communicate in our own time zone plus the most common time zone for our audience.

In an organization where people are accustomed to indexing their activities to other people who live in a different time zone, it's actually easier to use the most common time zones than it is to switch everyone to UTC. When using UTC, you have to do the mental gymnastics twice: first relate your own time zone to UTC, then relate your audience's time zone to it. But what happens in practice is that you quickly learn what times in your local time zone correspond to the movements of the day in your peer time zones. And because everybody has the same mental model, it's easier (and less error prone) to just use local time zones.


Our team has members in every US timezone.

I find I'll often try and use relative times in conversation, e.g. "after the stand-up" or "at the top of the hour" rather than specifying a wall clock time.


I agree for simple cases like this.

But the more timezones the participants are in, the harder this approach gets.


I'm actually surprised more chat programs don't automatically "internationalize and localize" dates. It seems like the same effort they put into spotting a username could go into spotting dates written in the sender's locale and rendering them in each reader's locale. We'd maybe need some convention to indicate that a date should not be converted, but most uses would probably be fine without that.
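Rendering is the easy half; a sketch with the standard Intl API (the timestamp below stands in for whatever the parser extracted):

    // Render a parsed timestamp in whatever locale/timezone the reader is using.
    const when = new Date("2024-06-03T17:00:00Z"); // hypothetical parsed value
    const local = new Intl.DateTimeFormat(navigator.language, {
      dateStyle: "medium",
      timeStyle: "short",
    }).format(when);
    console.log(local); // e.g. "Jun 3, 2024, 10:00 AM" for a reader on US Pacific time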


This reads as though the cable companies aren’t aware of the limitations of their tech and that couldn’t be farther from the truth. The last mile isn’t the same as it used to be: DOCSIS technology continues to improve, more RF channels are being allocated to high-speed internet, and cable companies are wholesale replacing their CMTS infrastructure with higher-frequency (read: more channels) equipment.

The truth is that only some cable companies make these investments - you can look up “fiber node size” for the respective performance across different companies. A fiber node is the place where the optical signal is terminated and switched to coax. These have been getting smaller everywhere, but it only makes sense to invest there when the upstream infrastructure can support it. So from a consumer perspective, your “Linux ISOs” will be slow to download in any case until the upstream network is upgraded and your node is split to offer higher performance.


> This reads as though the cable companies aren’t aware of the limitations of their tech and that couldn’t be farther from the truth.

I know, and you know, that the people doing the more serious engineering for DOCSIS-based cable last-mile segments are well aware of the limitations of the tech. What I was saying is that they are milking every last dollar of ROI out of the existing physical plant, because overbuilding your entire network with XGSPON (it would be dumb to do 2.5G PON in 2024/2025) is a very capital-intensive endeavour.

The shareholder value and profits of the company are increased in the short term by continuing to do copper as long as possible, even at the cost of thousands of unhappy customers dragging your company's name into the mud.

It's a fundamental business-model problem. Executives at the big dinosaur coax-operator telecoms have decided to do it this way for as long as possible, until the coax/oversubscription situation becomes completely untenable in an area, or until a real XGSPON operator (maybe Lumen, or Ziply, or similar) that overlaps with your historical cable TV footprint rolls out a better product and you have no choice but to spend the money to keep up with the local competition.

You and I also know that no matter how much they mess about with DOCSIS 3.1 and channel sizes and different RF configurations, the aggregate capacity of a few strands of fiber (using even the lowest-cost and most rudimentary WDM) is much greater than RF over coax. Squeezing 2048/4096-QAM RF into coax is polishing the brass doorknobs on the Titanic. It's not a viable long-term solution!
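A back-of-envelope with round, illustrative numbers (real plants vary a lot) shows the gap:

    // Illustrative only; real-world spectrum, modulation, and optics vary.
    const coaxSpectrumHz = 1.2e9;                     // DOCSIS 3.1 downstream spectrum
    const bitsPerHz = 10;                             // ~4096-QAM under ideal conditions
    const coaxSharedBps = coaxSpectrumHz * bitsPerHz; // ~12 Gbps, shared by the whole node
    const cwdmChannels = 18;                          // basic CWDM grid
    const perChannelBps = 10e9;                       // commodity 10G optics
    const fiberBps = cwdmChannels * perChannelBps;    // 180 Gbps on one pair, before DWDM
    console.log(fiberBps / coaxSharedBps);            // fiber wins by >10x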



