adgjlsfhk1's comments

Lots of industrial processes produce waste heat that can't easily be turned into useful energy, so the comparison isn't to a boiler, but to not having the heat at all.

It is true that the heat can be used if it is there anyway, but usually not in a big city-wide network. A more localized, larger consumer is far better, because running the hot-water network is far too expensive. For example, large heat producers around here, like data centers, dairy processors, or chemical plants, deliver their heat to public swimming pools, schools, or greenhouses that are intentionally built nearby.

Even the grandparent's article says so if you read carefully: "A large portion of the town’s own buildings, including the municipal school, town hall, and library, are connected to the district heating network." They didn't even connect all of the public buildings, never mind the rest of the town.


When doing GPS stuff, you don't use local time.

It actually is somewhat of an HDR problem, because the HDR standards made some dumb choices. SDR standardizes relative brightness, but HDR uses absolute brightness (the PQ transfer function is defined in absolute nits), even though that's an obviously dumb idea, and in practice no one with a brain actually implements it literally.
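
To make the distinction concrete, here's a minimal sketch (my own illustration, not any standard's reference code) contrasting a simple gamma-2.2 SDR transfer with the PQ EOTF from SMPTE ST 2084; the SDR value only means something relative to whatever peak the display happens to have, while the PQ value is a fixed number of nits:

    # SDR: relative. A code value maps to a fraction of the display's own peak,
    # whatever that peak happens to be.
    def sdr_luminance(code_value, display_peak_nits):
        return display_peak_nits * code_value ** 2.2

    # HDR PQ (SMPTE ST 2084): absolute. A code value maps to a specific luminance
    # in nits, regardless of what the display can actually produce.
    def pq_luminance(code_value):
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        e = code_value ** (1 / m2)
        return 10000 * (max(e - c1, 0) / (c2 - c3 * e)) ** (1 / m1)

    print(sdr_luminance(0.5, 200))  # ~44 nits on a 200-nit display
    print(sdr_luminance(0.5, 400))  # ~87 nits on a 400-nit display: same signal, different light
    print(pq_luminance(0.5))        # ~92 nits, nominally the same on every display

In practice, a display that can't hit the mastered peak tone-maps the PQ signal back down, which is the "no one implements it literally" part.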

It seems plausible that it's a similar-size model and that the 3x drop is just additional hardware efficiency and/or lowered margins.

Or just pressure from Gemini 3

Maybe it's AWS Inferentia instead of NVidia GPUs :)

I don't think anyone is arguing that Kyber is purposefully backdoored. They are arguing that it (and basically every other lattice-based method) has lost a minimum of ~50-100 bits of security in the past decade (and half of the round 1 algorithms were broken entirely). The reason I can only give ~50-100 bits as the amount Kyber has lost is that attacks are progressing fast enough, and the analysis of those attacks is complicated enough, that no one has actually published a reliable estimate of how strong Kyber is once all known attacks are put together.

I have no knowledge of whether Kyber at this point is vulnerable given whatever private cryptanalysis the NSA definitely has done on it, but if Kyber is adopted now, it will definitely be in use 2 decades from now, and it's hard to believe that it won't be vulnerable/broken then (even with only publicly available information).


Source for this loss of security? I'm aware of the MATZOV work, but you make it sound like there's been a continuous and steady improvement in attacks, and that is not my impression.

Lots of algorithms were broken, but so what? Things like Rainbow and SIKE are not at all based on the hardness of solving lattice problems.


To start with, you could not lie about what the results were.

The really big difference between named loops and cryptography is that if one gets approved and is bad, a couple of new programmers get confused, while with the other, a significant chunk of the internet becomes vulnerable to hacking.

Just because a feature is standardized does not mean it gets implemented. This is actually even more true for cryptography than it is for programming language specifications.

The situation is actually somewhat the opposite here: the code points for these algorithms have already been assigned (go to https://www.iana.org/assignments/tls-parameters/tls-paramete... and search for draft-connolly-tls-mlkem-key-agreement-05) and Chrome, at least, has it implemented behind a flag (https://mailarchive.ietf.org/arch/msg/tls/_fCHTJifii3ycIJIDw...).

The question at hand is whether the IETF will publish an Informational (i.e., non-standard) document defining pure-MLKEM in TLS or whether people will have to read the Internet-Draft currently associated with the code point.


> Just because a feature is standardized does not mean it gets implemented.

This makes no sense. If you think it actually has a high chance of remaining unimplemented anyway, then why not just concede the point and take it out? It sure looks like you're not fine with leaving it unimplemented, and you're doing this because you want it implemented, no? It makes no sense to die on that hill if you're gonna tell people it might not exist.

Also, how do you just completely ignore the fact that standards have been weakened in the past precisely to get them implemented? This isn't a hypothetical he's worried about; it has literally happened. You're just claiming it's false despite history blatantly showing the opposite because... why? Because trust me bro?


The problem with standardizing bad crypto options is that you are then exposed to all sorts of downgrade attack possibilities. There's a reason TLS 1.3 removed all of the bad crypto algorithms that earlier versions had supported.

There were a number of things going on with TLS 1.3 and paring down the algorithm list.

First, we wanted both to get rid of static RSA and to standardize on a DH-style exchange. This also allowed us to move the first encrypted message in 1-RTT mode to the server's first flight. You'll note that while TLS 1.3 supports KEMs for PQ, they are run in the opposite direction from TLS 1.2, with the client supplying the public key and the server signing the transcript, just as with DH.
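
As a toy illustration of that direction (nothing here is real TLS or a real KEM; the function names are placeholders and the "KEM" below is an insecure stand-in purely to show who sends which key — in the real protocol it's ML-KEM carried in the key_share extension):

    import os

    # Toy "KEM": NOT a real KEM, just enough structure to show the message flow.
    def kem_keygen():
        sk = os.urandom(32)
        pk = sk                      # stand-in; a real KEM derives pk from sk
        return pk, sk

    def kem_encaps(pk):
        shared = os.urandom(32)
        ct = bytes(a ^ b for a, b in zip(shared, pk))
        return ct, shared

    def kem_decaps(sk, ct):
        return bytes(a ^ b for a, b in zip(ct, sk))

    # TLS 1.2 static RSA (old direction): the client encrypts a premaster secret to the
    # server's long-term certificate key, and the server never signs the handshake.
    #
    # TLS 1.3 with a KEM (new direction): the client supplies an ephemeral public key,
    # the server encapsulates to it, and the server separately signs the transcript.
    client_pk, client_sk = kem_keygen()        # client -> server: key_share
    ct, server_secret = kem_encaps(client_pk)  # server -> client: key_share, plus CertificateVerify
    client_secret = kem_decaps(client_sk, ct)
    assert client_secret == server_secret      # both sides now hold the same shared secret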

Second, TLS 1.3 made a number of changes to the negotiation which necessitated defining new code points, such as separating symmetric algorithm negotiation from asymmetric algorithm negotiation. When those new code points were defined, we just didn't register a lot of the older algorithms. In the specific case of symmetric algorithms, we also only use AEAD-compatible encryption, which restricted the space further. Much of the motivation here was security, but it was also about implementation convenience, because implementers didn't want to support a lot of algorithms for TLS 1.3.

It's worth noting that at roughly the same time, TLS relaxed the rules for registering new code points, so that you can register them without an RFC. This allows people to reserve code points for their own usage, but doesn't require the IETF to get involved and (hopefully) reduces pressure on other implementers to actually support those code points.


TLS 1.3 did do that, but it also fixed the ciphersuite negotiation mechanism (and got formally verified). So downgrade attacks are a moot point now.
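
A toy sketch of what that fix buys you (this is not real TLS: the suite names and the shared secret are placeholders, and real TLS 1.3 binds the transcript via the Finished MAC and the CertificateVerify signature):

    import hashlib, hmac

    STRONG, WEAK = "TLS_AES_256_GCM_SHA384", "EXPORT_GRADE_RC4_40"  # names only illustrative

    def server_pick(offered):
        # An honest server picks the best option it was shown.
        return STRONG if STRONG in offered else WEAK

    # Downgrade: a man-in-the-middle strips the strong option from the client's offer.
    client_offer = [STRONG, WEAK]
    tampered_offer = [WEAK]
    choice = server_pick(tampered_offer)

    # TLS 1.3-style fix (simplified): both sides MAC the entire handshake transcript
    # they saw with a secret the attacker doesn't know, so any tampering is detected.
    handshake_secret = b"derived-from-the-key-exchange"
    def finished(transcript):
        return hmac.new(handshake_secret, "|".join(transcript).encode(), hashlib.sha256).digest()

    client_view = client_offer + [choice]      # what the client thinks was negotiated
    server_view = tampered_offer + [choice]    # what the server actually saw
    print(hmac.compare_digest(finished(client_view), finished(server_view)))
    # False: the Finished check fails, so the downgrade attempt is caught.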

IMO Khan was by far the best we've had in at least 2 decades. Her FTC even got a judge to rule to break up Google! The biggest downside Khan had was being attached to a one-term president. There just aren't that many court cases against trillion-dollar companies that you can take from investigation all the way to winning the appeal in 4 years.

All true, and I'm not making a value statement about whether her influence was good or bad. However, Khan only threatened the oligarchs' companies, while Harris point-blank threatened their fortunes.

Don't pick a fight with people who buy ink by the barrel and bandwidth by the exabyte-second. Or at least, don't do it a month before an election.


The oligarchs hated Khan with the intensity of a thousand burning suns. If you listened to All In, all they were doing was ranting about her and Gary Gensler.

That being said, Kamala's refusal to run on Khan's record definitely helped cost her the election. She thought she could play footsie with Wall Street and SV by backchanneling that she would fire Khan, so she felt like she couldn't say anything good about Khan without upsetting the oligarchs, even though what Khan was doing was really popular.


No. YouTube and Netflix both use H.264 + AV1 as their codec options. Netflix seems to use H.265 for a small subset (but it's somewhat unclear).

That's incorrect.

YouTube detects your capabilities and sets the codec automatically. Unless you're using an obsolete potato network or watching low-resolution stuff, you'll likely get H.265.

https://support.google.com/youtube/answer/2853702?hl=en#:~:t...

Netflix is similar. It defaults to H.265 for Netflix content (because they want it to look good). Partner/licensed content uses inferior codecs that use more bandwidth to achieve worse quality.


YouTube has never supported H.265 and never will; they even tried to block support in Chrome because they hate it that much. They support H.264, VP8/VP9, AV1, and soon AV2. They literally started an entire organization, AOMedia, to take on MPEG.

Ahh. You're right, sort of.

Users can choose H.265 for live streams, and YouTube allows HEVC uploads, but it then transcodes them to worse codecs before broadcast.

I wonder how much they would save on bandwidth by switching to HEVC? I think it's something like 40% more efficient on average.
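
Back-of-the-envelope with made-up numbers (neither bitrate is anything YouTube has published), just to get a feel for what a ~40% bitrate reduction means per viewer:

    # Hypothetical figures purely for illustration.
    h264_bitrate_mbps = 8.0                  # assumed 1080p60 H.264 stream
    efficiency_gain = 0.40                   # the "~40% more efficient" figure above
    hevc_bitrate_mbps = h264_bitrate_mbps * (1 - efficiency_gain)   # 4.8 Mbps

    hours_watched = 2.0                      # one viewer, one evening
    saved_gb = (h264_bitrate_mbps - hevc_bitrate_mbps) * 3600 * hours_watched / 8 / 1000
    print(hevc_bitrate_mbps, round(saved_gb, 1))   # 4.8 Mbps and ~2.9 GB saved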

I guess AV1 is even better, but what percentage of hardware supports it?


> what percentage of hardware supports it?

Pretty much everything modern except Apple: Intel since 11th gen (2021), AMD since Zen 4 (2022), Samsung phones since 2021, Google phones since 2021, MediaTek since 2020.

With modern lifecycles the way they are, that's probably ~60-80% of everything out there.

Also software decoding works just fine.


Thanks! I guess I have some catching up to do.
