Congestion pricing seems like a pretty liberal policy to me: it uses supply and demand to set a price.
Sure, you could crank the Friedman dial to 11 by, say, privatizing the roads and letting the operators set prices based on competition.
But the policy is liberal at its core. A “lefty, collectively enforced” policy would be something like a quota or permit system.
A key difference is that anyone who wants to drive on the road can do so as long as they pay. It isn’t “everyone with odd license plate numbers can drive today, evens can drive tomorrow” but rather “you can drive today if it’s worth $9 to you”.
I think you and the parent comment are confusing the term "liberal". He refers to "liberal" in the classical sense: free markets, limited government, rule of law, etc. You mean "liberal" in the North American sense: lefty, social justice, etc.
Additionally all those emergency vehicles are going to have an easier time shuttling patients to hospitals and firefighters to fires. The whole spectrum benefits from that, not just the rich.
> The study found that producing the electricity to train this model produced an air pollution equivalent of more than 10,000 round trips by car between Los Angeles and New York City.
I am totally on board with making sure data center energy usage is rational and aligned with climate policy, but "10k trips between LA and NY" doesn't strike me as outrageous on its face.
Isn't the goal that these LLMs provide so much utility they're worth the cost? I think it's pretty plausible that efficiency gains from LLMs could add up to 10k cross-USA trips' worth of air pollution.
Of course, this excludes the cost of actually running the model, which I suspect could be far higher.
I'm not sure I understand why secure boot is user-infantilizing? I think there are some legitimate concerns about where attestation could be headed, but I like the ability to force my machine to only run signed executables.
It seems like the immediate problem here is that most people will never enroll their own keys, and if every vendor's crappy EFI binary gets signed by Microsoft, there will be a huge library of garbage vendor code which is all an attack surface.
The problem here is that the signature doesn't do anything for you.
Suppose you want to be assured of the software running on your machine. You go into the firmware, point it at your boot loader and say "only this one". It makes a hash of the boot loader and refuses to use any other one until you change the setting, which requires your firmware password. Your boot loader then only loads the operating systems you've configured, and so on.
That doesn't require any certificates, and you get 100% of the benefits. The firmware verifies the boot loader, the boot loader verifies the OS, and so on. The OS doesn't need to verify the firmware, and it couldn't anyway: if the firmware or boot loader were compromised, the validation code in the OS would be just as compromised.
The only thing the signature gets you is remote attestation, which is the evil to be prevented. Simple hashing would get you everything else.
And then you also don't get this "garbage code is nonetheless trusted" problem because there is no global root of trust and you never told your firmware to trust this random firmware update utility for somebody else's hardware.
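The hash-pinning idea above is simple enough to sketch. This is a minimal, hypothetical model of the firmware-side check, not any real firmware API: the owner pins a hash when designating the bootloader, and the firmware refuses to boot anything that doesn't match.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Hash of a boot component, as the firmware would compute it."""
    return hashlib.sha256(data).hexdigest()

def firmware_allows_boot(bootloader_image: bytes, pinned_hash: str) -> bool:
    # Hypothetical firmware policy: boot only the one image whose
    # hash the owner pinned; changing pinned_hash requires the
    # firmware password in the scheme described above.
    return image_hash(bootloader_image) == pinned_hash

trusted = b"trusted bootloader image"
pinned = image_hash(trusted)  # recorded when the owner designates it

assert firmware_allows_boot(trusted, pinned)
assert not firmware_allows_boot(b"tampered image", pinned)
```

The same pattern repeats down the chain: the pinned bootloader can carry its own pinned hashes (or any verification scheme the owner chooses) for the OS it loads.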
> The problem here is that the signature doesn't do anything for you.
For your own personal machine, sure. But say you're a sysadmin in a company that has thousands of units. Suddenly, a CA infrastructure is much more appealing than having to deal with component hashes.
How is it any different? You install the hash of the boot loader when you issue the machine, then use the trusted system to update the hash if necessary.
Also, the concern is that the system comes from the factory with private keys the owner doesn't have access to, allowing the device to defect by informing on them to a third party. Keys installed by the owner rather than the manufacturer are fine, and then such keys also wouldn't be trusting random third party code either.
> How is it any different? You install the hash of the boot loader when you issue the machine, then use the trusted system to update the hash if necessary.
With your private CA you can skip the "update the hash" part, removing a crucial step that one might forget in a hurry or that simply might go wrong because of whatever sort of bug or power outage... and brick thousands of machines as a result.
The "update hash" part is the counterpart to the "sign the binary" part, so if you forget to do it you're going to have problems either way. Also, this is the sort of thing that large organizations would have automated tooling to do anyway.
>Suppose you want to be assured of the software running on your machine. You go into the firmware, point it at your boot loader and say "only this one". It makes a hash of the boot loader and refuses to use any other one until you change the setting, which requires your firmware password. Your boot loader then only loads the operating systems you've configured, and so on.
What if you need to update the bootloader?
>The only thing the signature gets you is remote attestation, which is the evil to be prevented. Simple hashing would get you everything else.
TPMs can do remote attestation without signatures just fine, by measuring the hash of the bootloader. It'd be clumsy, but doable, just like your idea of using hashes for verification.
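The measurement mechanism referred to here works by hash chaining: a PCR starts at zero and each boot stage is folded in, so the final value commits to every component and the order they ran in. A rough sketch of the extend operation (the hypothetical stage names are just placeholders):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCRs update as PCR_new = H(PCR_old || H(measurement));
    # they can only ever be extended, never set directly.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs reset to zeros at power-on
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, stage)

# A verifier holding the expected per-stage hashes can recompute this
# value; any substituted or reordered component yields a different PCR.
```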
> How does the system know whether the new bootloader is legitimate or not?
However it wants to. Maybe the existing bootloader (chosen by the owner rather than the vendor) or the OS it loads has its own signature verification system for update packages, like apt-get. Maybe the OS downloads it from a trusted URL via HTTPS and relies on web PKI. Maybe it uses Kerberos authentication to get it from the organization's own update servers. Maybe it just boots an OS that allows the operator to apply any update they want from a USB stick, but only after authenticating with the OS.
None of that is the firmware's problem, all it has to do is disallow modifications to itself unless the owner has entered the firmware password or the system is booted from the owner-designated trusted bootloader.
> All TPMs have private keys from the factory. They're entirely unrelated to the secure boot keys.
The point isn't which device has the keys, it's that it shouldn't contain any from the factory. Nothing good can come of it.
The situation you're protecting against is one where someone who compromises the OS can make that compromise persistent by replacing the bootloader. That means you can't place any trust in any component after the bootloader, since an attacker could just fake whatever mechanism you're enforcing.
> The point isn't which device has the keys, it's that it shouldn't contain any from the factory. Nothing good can come of it.
TPMs have private keys, and are not involved in enforcing secure boot. The firmware validating the signatures only has public keys.
> The situation you're protecting against is one where someone who compromises the OS can make that compromise persistent by replacing the bootloader. That means you can't place any trust in any component after the bootloader, since an attacker could just fake whatever mechanism you're enforcing.
Isn't that kind of pointless?
Suppose the attacker gets root on your OS, i.e. what they would need to supply the firmware with a new hash. That OS install is now compromised, because they can now change whatever else they want in the filesystem. If you boot the same OS again, even using the trusted bootloader, it's still compromised.
If you don't realize that it's compromised, you're now using a compromised system regardless of the bootloader. If you do realize it's compromised then you do a clean reinstall of the OS and designate your bootloader as the trusted one again instead of whatever the compromised OS installed.
What does the bootloader really get them that root didn't already?
> The firmware validating the signatures only has public keys.
Having the keys installed from the factory still seems like the thing causing the problem:
If it only trusts e.g. Microsoft's public key, they now get to decide if they want to sign something you might want to use. If they don't, secure boot prevents it from working, which causes problems for you if you want it to work.
Which then puts them under pressure to sign all kinds of things because people want their firmware updaters etc. to work, and then you get compromised by some code they signed which wasn't even relevant to you.
Whereas what you want is some way of designating what can run on your machine, regardless of what someone else would like to run on theirs. But then that's a machine-specific determination rather than something somebody should be deciding globally for everyone.
I'm working on an embedded system right now that has two CDC ethernet devices. One shows up as ethX and the other shows up as usbX. Maybe it's because one is CDC EEM and the other is CDC ECM? But I don't think this is generally true for all CDC ethernet.
With GPL you don't have to actively work to upstream your patches, but in practice you can't withhold your patches from upstream. If you add a feature, they get to have it too.
Unlike permissively licensed software, where you can add proprietary features.
Depends how savvy your users are, and what your users lose if they do send your patches upstream. For example, grsecurity and Red Hat both drop you as a customer (so no security updates) if you republish their patches publicly. And a paid iPhone app's users probably wouldn't know what source code is, let alone where or how to republish it for the benefit of other users.
> So the point is that we need another license that does give open source rights to individuals, yet does not permit corporations to take everything and give nothing.
I have a much bigger issue with "legitimate" spam these days. Every service makes you give an email address, and they all force you to check a box allowing them to email you whatever they want. Then if they even have an "opt out" link, it takes you to a list of 500 different types of notifications and forces you to opt out of each one individually.
Usually I will just disable the iCloud Hide My Email address I used for a site, but sometimes there are legitimate emails mixed in with the stream of crap. I opted out of marketing emails from my credit card company, and now they instead send me emails asking me to re-evaluate my email preferences...
It would be nice to see more done to fix this, but I guess it doesn't make anyone money. I guess I'll just have to use AI to filter signal from noise.
Do you know of any data on this? It seems like the kind of thing that could be studied and measured. I'm inclined to believe the opposite about the viability of e-stamps, but I will readily admit I have no data to back that opinion up.