There is no such thing as a politically neutral place. Especially not for an organization that has financial ties to many other organizations around the world.
Is it any different from the fact that updating your distro fetches newer drivers, a new kernel, and patched versions of all the software you installed? Microcode is loaded at runtime; it's not permanently modifying your system.
The question is still: are "magic incantations" in packages ok, considering that they allow the issuer to control your hardware more than if the code were baked into firmware just once?
Also, these packages allow vendors to keep quiet about security issues, because they can silently fix them in the next update.
Is it any more of a "magic incantation" than the linux-image-XYZ package which controls which OS kernel is installed? Or the linux-firmware package which controls what firmware gets loaded on various devices?
But I don't do that automatically, as the parent described. As far as I am aware, I need to manually issue a command to perform the update, which is ok as long as I have that control.
Distros prompt you when upgrading packages, not binaries. You get a prompt to update the Firefox package, not to replace the actual binary on disk.
Typically, packages containing microcode behave the same way: a prompt to update the package, no prompt to apply that update (replace the individual files).
"automatically" as in "automatically when you update the rest of your distro", not an extra step as the poster above clearly seemed to expect. That context also was clearly from the other comments already.
I don't seem to be able to reply directly. My question was meant as a clarification, because simply saying something is "automatic" is very ambiguous. "Automatic" to me means giving up control, and what's being described here is not "automatic" in that sense.
It's just a package. It's neither automatic nor hidden. If you've never noticed it being updated, then you probably haven't been paying enough attention to the package list when you did choose to issue a system-wide upgrade command.
Worth noting that BIOS updates frequently ship with ucode updates that are applied at boot before UEFI executes the operating system. So if GP is diligent about keeping the BIOS up to date, it's conceivable that Linux's ucode update has never had any work to do. At the very least this seems to be the case with Thinkpads.
The "updated microcode" package is installed on your system manually (like any other package). The "automatic" part comes in when your system boots up, after this package is installed. On boot, when the package is installed, your system will automatically use the installed package to update the processor's microcode.
If it's not automatic or hidden, then it's ok, as long as you have that control. To be clear, I think GNU has every right to forbid non-free software, just as anyone has the right to not use GNU software if they don't want to. What should be known are the consequences and responsibilities of either (any) decision. If GNU says "we don't want to support this security update", it needs to clearly state the risk its users face. If it does that, then it's ok as far as I am concerned.
They do not state anything like that; that's the entire problem with that linux-libre patch. It removes a message that informs users that their computer is at risk without an update. They don't want people to know, because if people knew, most would choose to install the proprietary update to the proprietary microcode they are already running, and that would expose the existence of that microcode. The FSF's utopia only exists in the minds of people who aren't aware of all the proprietary firmware they're running anyway.
It's all a big lie. There's proprietary firmware everywhere. The FSF just doesn't want users to know about it, so they can live happily in their blissful ignorance believing they are freer than everyone else.
In games with hidden information, yes. In open games? There is an optimal path from the start to the best achievable outcome (a win or at least a draw) for one of the players. No need to deviate randomly.
If you always play for the optimal move, give it a thousand games or so, and your opponents will know the optimal game.
They will learn the best opening move from your first move with white, the best reply from your response to it when they play it, white’s best second move from playing that against your best opening move, etc.
Once they know that, playing against them, all games will be the same and they’ll _look_ as good at chess as you.
Especially if the optimal move isn't far better than the next-best move(s), varying your moves can be used to show that they aren't actually as good as you, and to delay the time it takes your opponents to learn that optimal game.
That may well be true, but it isn't relevant. Game-playing engines are designed to win the position being played, not to maximize winning probability over their playing careers.
Neither Stockfish nor AlphaZero cares one bit about what happens in the next game it plays.
It isn't necessary either. In chess there's no need to worry about interactions between players' information sets, so a player can safely apply pure strategies. Career winnings are maximized by matching, as closely as possible, the subgame-perfect equilibrium from whatever node each game starts at.
That is likely what should happen, and it is already the case to a point. Many of the existing programs are funded at the state level. But the federal government is already meddling, so you can't exactly do what works best for you: no matter what you implement, you have to follow the federal ACA rules at this point.
In the Netherlands some/many people work 40 hours on a 36-hour contract, resulting in almost 10 weeks of paid vacation as well. It's worth a lot to me.
That is what I do. The backup server can access the production server, but production can't access the backup. But my setup requires the backup to go to disk on the production server and not stream out directly, so that aspect is annoying.
People usually don't go to a status page to see if a page is reachable from their computer. They know that already. They want to know if it's a global issue, and if the company is aware that it's down, and when it will be back up.
Yeah, but I think that largely depends on the degree of security you want your blockchain to have. For example, you could use a multi-party signing scheme for consensus that regularly stamps hashes of its own table into the Bitcoin blockchain. This kind of scheme would be significantly less expensive to implement, since you could pack thousands of transactions into a single Bitcoin transaction. I'm not really sure micropayments are the answer to ad blocking to begin with, though.
> You can have multiple inputs / outputs per bitcoin transaction, so presumably that's what he's talking about.
Actually, that's not it. What I'm saying is that you can use any ordinary multi-party signature scheme, and every N signing rounds ensure some measure of canonical ordering by publishing a hash in the Bitcoin blockchain. I think sidechains are formulated in a similar fashion, but it's been a while since I've looked at exactly what they do; if I remember correctly, the trickier parts are transfers of value on and off the original chain.
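A sketch of the batching idea (hypothetical; the state strings and the use of SHA-256 are stand-ins, and a real scheme would publish the digest via something like an OP_RETURN output): hash each round's signed state, build a Merkle root over N rounds, and only that single 32-byte root needs to land in a Bitcoin transaction:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest, used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Collapse leaf hashes pairwise until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# N signing rounds' worth of (made-up) signed table states:
rounds = [f"round-{i}:signed-state".encode() for i in range(1000)]
anchor = merkle_root(rounds)            # one digest to stamp on-chain
print(anchor.hex())
```

The Merkle structure also means a participant can later prove any single round was included under the anchored root with a logarithmic-size path, rather than republishing all N rounds.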