
I usually just pull off at a truck stop for sleep if I'm traveling, but even those have been getting filled up lately. Plenty of exits will just have semis lined up on the shoulder because there's no room anywhere else and they can only drive so many hours by law.


How do you even represent HDR colors? I've tried Googling this and I can never really find an answer. Is it basically just eight hex digits instead of six?


Depends on the format, but 10 or 12 bits per color primary is typical; moreover, HDR video standards typically use limited-range YCbCr encodings instead of full-range RGB, so certain low and high values are defined as "blacker than black", "whiter than white", or are otherwise reserved.

So in terms of hex digits: three sets (Y, Cb, Cr) of three hex digits each (a 10-bit component only uses values up to 0x3FF), with not all values representing valid colors.
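
To make the "not all values are valid" part concrete, here's a tiny sketch (my own illustration; the ranges follow the usual BT.709/BT.2020 limited-range conventions):

    # Nominal code-value ranges for limited ("video") range YCbCr.
    def limited_range(bit_depth: int):
        """Return (luma, chroma, reserved) code-value ranges."""
        scale = 1 << (bit_depth - 8)         # 4 for 10-bit, 16 for 12-bit
        luma = (16 * scale, 235 * scale)     # Y: 64..940 in 10-bit
        chroma = (16 * scale, 240 * scale)   # Cb/Cr: 64..960 in 10-bit
        # The lowest and highest few codes are reserved for timing/sync.
        top = (1 << bit_depth) - 1
        return luma, chroma, (scale - 1, top - scale + 1, top)

    luma, chroma, (low, high, top) = limited_range(10)
    print(luma, chroma)                             # (64, 940) (64, 960)
    print(f"reserved: 0..{low} and {high}..{top}")  # 0..3 and 1020..1023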

Compared to SDR standards like sRGB, HDR formats also typically use larger color spaces (Rec. 2020[1] is typical) and far more extreme transfer ("gamma") functions (PQ[2] or HLG[3]).
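
For a feel of how extreme PQ is, here's a sketch of the ST 2084 EOTF (the constants are from the published standard; the wrapper code is just my illustration):

    # SMPTE ST 2084 (PQ) EOTF: normalized code value -> absolute nits.
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875

    def pq_eotf(e: float) -> float:
        """Luminance in cd/m^2 for a normalized PQ signal e in [0, 1]."""
        p = e ** (1 / m2)
        return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

    print(pq_eotf(1.0))   # 10000.0 nits, PQ's absolute peak
    print(pq_eotf(0.5))   # ~92 nits

Note how the bottom half of the code range covers less than 1% of peak luminance: PQ spends most of its codes on the dark end, roughly tracking human contrast sensitivity.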

Finally, note that it is common for the encoded values to represent colors and intensities that far exceed the capabilities of most, if not all, display hardware, so the mapping from encoded values to actual displayed pixels can be rather complicated. Google "HDR tone mapping" for more than you ever wanted to know.

[1] https://en.wikipedia.org/wiki/Rec._2020

[2] https://en.wikipedia.org/wiki/Perceptual_quantizer

[3] https://en.wikipedia.org/wiki/Hybrid_log–gamma


HDR is less about the colors and more about achieving a wider range of brightness levels (hence increasing the total dynamic range). It's typically encoded using 10- or 12-bit color depth, and metadata is added to the media describing how to map the colors' luminance/brightness values (e.g., what the darkest and brightest values of an image/video should be). This is then used to transform HDR content into appropriate color and luminance values for your specific monitor (e.g., the reference monitor used for grading might have a peak brightness of 1000 nits, but yours might be different, have a different luminance-response curve, or support a different color space).
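
To illustrate that last step, here's a toy sketch (mine; real tone mappers are far more sophisticated, and the extended-Reinhard curve used here is just one common choice) of squeezing a 1000-nit grade onto a 400-nit display:

    # Map luminance mastered for `peak_in` nits onto a `peak_out`-nit display.
    def tone_map(nits: float, peak_in: float = 1000.0, peak_out: float = 400.0) -> float:
        x = nits / peak_out                  # luminance in units of display peak
        w = peak_in / peak_out               # mastering peak in the same units
        y = x * (1 + x / (w * w)) / (1 + x)  # extended Reinhard: x == w -> 1.0
        return y * peak_out                  # back to nits on the target display

    for v in (10, 100, 400, 1000):
        print(f"{v:5d} nits in grade -> {tone_map(v):6.1f} nits on display")

Dark values pass through almost untouched while highlights are progressively compressed, which is the general shape of most tone-mapping operators.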


In SDR content, colors are encoded with 8-bit ranges, so 24 bits across the three color channels. With HDR, we usually use 10 bits per channel instead (this is where the "10" comes from in "HDR10"). See https://en.wikipedia.org/wiki/Color_depth#Deep_color_(30-bit....


Both QR codes are using #ffffff.

The "bright" one has metadata that tells the operating system to render white at the maximum possible brightness, instead of whatever brightness it would normally render white at.

It's broadly supported on Apple devices, though how well it works depends on the hardware you have.


You're correct in part, as both image sources (the picture and the video) have maxed-out pixel data. If it's HDR10, though, it has 30 bits of color rather than 24, so it maxes out at #3fffffff.
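
A quick sanity check of that number (illustrative packing only; actual HDR10 streams carry YCbCr, not packed RGB words):

    # Three 10-bit channels pack into a 30-bit word; all-ones is 0x3FFFFFFF.
    r = g = b = (1 << 10) - 1           # 1023, i.e. 0x3FF per channel
    packed = (r << 20) | (g << 10) | b
    print(hex(packed))                  # 0x3fffffff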


I wouldn't fuck with the FAA; their interpretation of what they say is what's really the law, not what's written in the FARs.

About 30 years ago a pilot in a floatplane was on step as he went under a bridge (he was still on the water, but on step the floats ride closer to the surface than at near-idle speeds). When the FAA somehow got wind of this, they really put the screws on him, because in their interpretation he was flying under the bridge (very illegal), even though he wasn't airborne. Nowhere to my knowledge do the FARs make this distinction between floating and flying. You'd think in a case like this they'd let it go, maybe update the rules to clarify, but they really went after him for it.


I have to say, re: pipewire, that it's one of the few (Red Hat?) projects of the past 15-20 years that doesn't seem to have generated a ton of hate from users, so it must be pretty good all around.


I love Pipewire; Pipewire fixed bluetooth headphone reliability for me. For years I had frequent trouble where the bluetooth headphones and bluetoothctl would both say they were connected, but there was no audio sink for pulseaudio. Sometimes there would be audio, but it would be crappy mono audio in "headset mode". For three or four years I had this problem several times a week. I believed this was an issue with the linux bluetooth drivers until one day, after several unsuccessful cycles of reloading, restarting, and rebooting everything, I got fed up enough to install Pipewire (which was unexpectedly painless). From that moment on, not once have my headphones failed to connect on the first attempt. I've concluded that there's something fundamentally wrong with the way Pulseaudio recognizes and responds to a bluetooth speaker being connected, and Pipewire obviously doesn't have this flaw.

There's a popular narrative that "linux enthusiasts hate anything that's new"; you hear this a lot from people defending Pulseaudio and systemd from "the trolls". It was never true. People love new things when the new thing solves their problems, and hate the new thing when it introduces new problems. The haters narrative is little more than cope: a way for the authors of buggy software to rationalize the negative response to it.


This is me learning that pipewire is a RH project.

For context as to why it's not getting hate: the alternative is Pulseaudio (another RH project, headed by Poettering, who is incidentally now at Microsoft), which was egregiously difficult to configure, heavy, and seemed to dominate any system that tried to interface with it even slightly.

The exact same concerns that people level at systemd and GNOME.

Pipewire is light, interfaces with programs on their terms, and seems to follow the philosophy that "it's just a tool", meaning it should not be something you have to care about as an application developer or as a user.

I have a lot of respect for that.


Pipewire also worked out of the box for me, and abstracts away ALSA and PulseAudio. It can even handle JACK. It's very nice.


I didn't even know it existed until the other day when I found out I was already using it.

I can't imagine giving better praise for a piece of software.


Pulseaudio was adopted too early, and I think distros have learned their lesson this time around.

Honestly, it fell under "just worked" for me. One time I had been reading about PA using too much CPU, so I checked and indeed it was using a reasonable amount of processing power "just" to feed data from the media player to ALSA. So I tried turning it off, and the media player used more CPU to play audio direct to ALSA without PA than both the media player and PA running together.


I had tons of problems with PA. Jitter, noise, insensible defaults. E.g., why upmix stereo to 5.1 just because I have the speakers, and why make disabling that an override?

I never used onboard audio; I always had some higher-end card with more resolution than a bog-standard onboard audio chip, and PA struggled to deal with them for a long time.

Also, if the daemon crashed or needed a restart, it was a dance of restarting things in an exacting order, with other details to get right.

Pipewire is just invisible. It works the way it should and doesn't bend the system to fit in.


Pipewire is invisible because Pulseaudio (PA) exercised a lot of sound stack features, exposing bugs (which were often attributed to PA) and prompting a lot of bug fixes. It wouldn't be nearly as good if it weren't building on PA's foundations.

Pipewire also doesn't need to bend the system to fit in, because the system is already the right shape.

(And I'm perennially annoyed that my work Mac won't upmix to 5.1. I've got the speakers, why only use two of them?)


I'd need to read the ALSA bugfixes prompted by PA to see and believe that.

On the other hand, I've used multichannel audio on Linux for the last 20 years or so, and it always worked with what I have. First with Live and Audigy cards, then with an Asus card.

No, pipewire is much more gentle in how it handles and takes the streams from other applications. It doesn't make a heavy-handed attempt to replace ALSA and the drivers at the same time. It's a much thinner layer and does what it should. Most importantly, it doesn't alter the streams that go through it.

(Sorry, but upmixing a good stereo sound source to 5.1 is just butchering the sound. The resulting sound stage is an abomination of what it should be. For a musically inclined person (read: ex-orchestra player), it's just torture. It's so wrong on so many levels.)


GNOME Shell uses 0.75% of my CPU and 136 MB of memory. All processes in gnome-system-monitor that are somehow related to GNOME clock in around 400-500 MB of memory. How is that heavy?


Pipewire has made Linux the best audio stack in the world by some margin.


It's barely usable when paired with the Realtime patchset.

An actual OS that constitutes a "good audio stack" would be able to provide hard realtime (i.e. formally guaranteed maximum latencies), like seL4 does.

Linux-rt is just probabilistic. It behaves much better than mainline, but latency could spike anytime.


Classes like this were funny. I often found that people who just knew the answer immediately had a lot of difficulty doing proofs, and the people good at doing proofs usually didn't arrive at the solution very quickly or seem to have an intuitive understanding of the problem at first.

Some people are just wired differently.


That’s interesting. In your context, what is “knowing the answer”? To me it seems like they “knew” a given statement was true but didn't know how to prove it, which makes me wonder how they knew.


The teacher might start off a class with a question like "what is the most efficient way to satisfy this problem given these constraints?" Some people would know the answer immediately but could hardly do a proof for it at all. Others could find a proof in class almost every time, but never really saw the answer to the problem until they'd sat on it for a while and chewed it over.

Can't really give an example question; it's been well over a decade (closer to two) since I took it.


Honestly, every time the FBI is in the news I get confirmation that they're a bunch of lying, incompetent sleazebags backed and protected by the state.


Even back to J. Edgar Hoover's days...


They tried to blackmail MLK to get him to kill himself. A department created originally to fight the mob started acting exactly like it. There's no way a reasonable person's faith in an organization like that could ever be restored.

Imagine the information that isn't public about their misdeeds.


For over a decade the FBI repeatedly denied that the Mob existed, until a patrol cop accidentally stumbled on a meeting of Mob bosses from across America.

From J. Edgar Hoover's Wikipedia page:

> From the mid-1940s through the mid-50s, he paid little attention to criminal vice rackets such as illegal drugs, prostitution, extortion, and flatly denied the existence of the Mafia in the United States. In the 1950s, evidence of the FBI's unwillingness to investigate the Mafia became a topic of public criticism.

Story of 1957 Mafia meeting:

https://www.smithsonianmag.com/history/1957-meeting-forced-f...


This is ridiculous, and it really only happens because the cost of securing your customer (or citizen) data is higher than the cost of losing control of it. If the cost of losing data to hacks was, say, three times higher than the estimated cost to secure it, the problem would become much less common very quickly.

As it is, states and corporations externalize the costs of hacks to the victims of their incompetence. They have no reason to take opsec seriously because they aren't held liable in even the most egregious cases. Data should be a liability.


>> Data should be a liability.

This is the crux of it, methinks. "Data is the new oil" has been a common refrain, and as long as the externalities of poor security hygiene can be completely outsourced while these companies make mountains of cash by monetizing your every scrap of behavior, attention, and information, this will only get worse as every entity seeks to hoard more information on you.

Keeping more data than absolutely necessary for critical business operations should be an existential threat for any entity. Those businesses built on this data ought to take Fort Knox level pains to secure it. Anything short of that and we will continue to exist in a society of deteriorating trust and social contract.


A framing I often use is, "Data is like holding uranium". It can be incredibly valuable, but also very dangerous. You should be very sure that the data you're holding is worth the cost of safely protecting it (a high cost), and if it is not, get rid of it.

Stripe is a good mental model here: I don't want a person's credit card data, I want to charge them for my product. I'm happy storing a Stripe customer ID; if a hacker were to grab that table, I wouldn't lose (a lot of) sleep, since they couldn't do much with it. If that table held credit card data... I would.

That farms out a lot of responsibility to Stripe, but for a side project I don't have the time necessary to do nearly as good a job as Stripe does.
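
The pattern, roughly (a Python sketch from memory; check Stripe's current docs before trusting the parameter names):

    import stripe

    stripe.api_key = "sk_test_..."  # placeholder test key

    # Card details go straight to Stripe (normally via their client-side
    # form, so raw numbers never touch your server). You keep only the ID.
    customer = stripe.Customer.create(email="buyer@example.com")
    my_db = {42: customer.id}  # all we persist: an opaque customer ID

    # Later: charge the stored customer. An attacker who dumps my_db gets
    # IDs that are useless without your secret API key.
    stripe.PaymentIntent.create(
        amount=1999,        # in cents
        currency="usd",
        customer=my_db[42],
    )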


FWIW, most other merchant credit card processors also handle this using tokens, encrypting credit card keypads, etc.


And they have for 25 years or so. Authorize.net predates a lot of this, and tokens were always their main feature: they hold the card data, and you get back the OK and an ID.


I think GDPR was a great step in this direction, even with the annoying cookie popups. There are some states in the US with similar laws, basically saying "it's legal for you to collect this data, but if you do, you need to build systems for people to request all the data tied to them and to have it deleted". Hopefully the next step would be to make data sharing opt-in; while it's somewhat limited, it is really nice that iOS makes apps request access to different data.

I'm really curious how effective these are in practice if someone has logs or backups, but it at least gives people a path to know what data is there and to remove the active copies.


We also need changes so that possession of identifying information is not sufficient to establish identity. That sounds like a tall order, but clearly this information is leaking all over the place, and the fact that someone has my identifying numbers and date of birth and mother's maiden name and signature and fingerprints and whatever else should not be adequate to gain access to bank accounts or execute contracts and other legal agreements.


The hard part of this isn't that we don't know how to do it, it's that people don't like the consequences of it.

Your bank can give you a bank card with cryptographic keys in it and then you need the card to make a transaction. But then if you lose the card...

At which point we fall back to birth certificates and things because there's nothing else available. The alternative would be that if you lose your bank card, you lose your money. Which could be mitigated by e.g. having backup cards that you keep at home in a safe, but some people would lose those too, and what then?


Why would losing your bank card mean losing your money? The bank card would be there to establish identity when performing a transaction. Going to a branch in person with government-issued photo id would be the way to establish identity when generating a bank card. It’s a pain to do, but it only needs to be done for a new account or to revoke/replace a card.

By analogy, the cryptographic key on the bank card is a cross between a session token and a private key. Like a private key, it is never directly exposed for verification. Like a session token, it can be replaced.
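
A minimal sketch of that idea (my illustration, using the Python `cryptography` package): the card holds a private key and signs a fresh challenge; the bank stores only the public half:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    import os

    # Enrollment (in-branch): key pair lives on the card; the bank keeps
    # only the public key. The private key never leaves the card.
    card_key = Ed25519PrivateKey.generate()
    bank_records = {"account-123": card_key.public_key()}

    # Transaction: the bank issues a fresh random challenge, the card signs.
    challenge = os.urandom(32)
    signature = card_key.sign(challenge)

    try:
        bank_records["account-123"].verify(signature, challenge)
        print("transaction authorized")
    except InvalidSignature:
        print("rejected: not the enrolled card")

Revoking a lost card is then just deleting the old public key and enrolling a new one; the account itself is untouched.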


> Going to a branch in person with government-issued photo id would be the way to establish identity when generating a bank card.

You need to bootstrap it all somehow. All you've done is move the authentication problem to how you get a government id.

Suppose your house burns down and you're standing on your lawn in your pajamas with no identity documents of any kind. What now?


At least walking into a bank with a fake birth certificate and other forged identity is not a form of impersonation that can be done remotely and at scale.


I think there are generally considered to be three classes of authentication methods:

- something you know, like a password
- something you have, like an RFID card
- something you "are", like a fingerprint

You can add multiple of these and choose from different categories to add security, but each time you do it also gets less convenient. You could require a birth certificate, DNA test, and social security number for any access to a bank account, but then it wouldn't really work as a checking or savings account, and if you lose your birth certificate you're locked out of your account.

Definitely worth considering the other side: when you need to access the account, how much inconvenience and delay are you willing to put up with before you can? For a checking account, it seems like people usually just want a single one of them: the debit card, account login, or face/fingerprint to authenticate.
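
As a toy example of stacking two factors (assuming the third-party pyotp library for the TOTP part; the rest is stdlib):

    import hashlib, hmac, os
    import pyotp

    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
    totp_secret = pyotp.random_base32()   # provisioned into a phone app

    def authenticate(password: bytes, code: str) -> bool:
        knows = hmac.compare_digest(      # something you know
            hashlib.pbkdf2_hmac("sha256", password, salt, 600_000), stored)
        has = pyotp.TOTP(totp_secret).verify(code)  # something you have
        return knows and has              # require both factors

    print(authenticate(b"hunter2", pyotp.TOTP(totp_secret).now()))  # True

Every factor you add is another thing that can be lost, forgotten, or left at home, which is exactly the convenience trade-off above.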


> "Data is the new oil"

The common usage of this phrase isn't too inaccurate. Keep in mind what oil does to the environment, not just during spills but even in normal refining!


When oil is refined, it feeds energy consumption, which can have both positive and negative effects. When data is refined, it feeds AI, which can have both positive and negative effects.

When oil spills, it causes toxic damage to the environment. When data spills, it causes damage to society's individuals and the firms that should have kept the data secure.

It's not a perfect analogy, but there are some similarities.


Would be good to get a value for this liability onto a company’s balance sheet. This could be done as a value at risk, or as the cost of insuring against data exposure (carrying such insurance should be govt-mandated). If Lloyd's can insure against weather and piracy, then someone should be able to underwrite insurance against data breaches.


For government IT in particular, the cost of security is basically infinite because they aren't organizationally mature enough to do anything right. There's no way to make the cost of being hacked infinite; no court or legislature is going to order the DMV to be disbanded.


> There's no way to make the cost of being hacked infinite; no court or legislature is going to order the DMV to be disbanded

Disbanding the DMV doesn’t make the cost to any actor infinite (“DMV” is an abstraction, and state agencies are routinely created and destroyed, sometimes as political damage control due to IT scandals [0], but that’s not an infinite cost on anyone).

[0] e.g., the California Department of Information Technology in 2002: https://www.google.com/amp/s/www.computerworld.com/article/2...


In 1995 the California State Police were disbanded and their role was given to the California Highway Patrol.

The part of the DMV that performs driver testing isn't the part that loses all your data. It wouldn't be impossible to disband their IT department and give the role to some other government agency.

They could also just, you know, stop collecting it. Print your height and hair color etc. on your driver's license and don't store it anywhere else. Instead store a hash of it at the DMV with the salt stored on the license itself, so you can revalidate the license without being able to reconstitute it.
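
A sketch of how that could look (my reading of the scheme, not an existing system): the card carries the attributes plus a random salt, and the DMV keeps only the salted hash, so its database alone can't reconstitute the printed fields:

    import hashlib, os

    def issue_license(attrs: str):
        salt = os.urandom(16)             # stored on the card itself
        digest = hashlib.sha256(salt + attrs.encode()).hexdigest()
        return {"attrs": attrs, "salt": salt}, digest  # card, DMV record

    def revalidate(card: dict, dmv_digest: str) -> bool:
        check = hashlib.sha256(card["salt"] + card["attrs"].encode()).hexdigest()
        return check == dmv_digest

    card, dmv_record = issue_license("height=180cm;hair=brown")
    print(revalidate(card, dmv_record))   # True; altered attrs would fail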


The cost-benefit calculation also includes the odds of being hacked. Enormous numbers of organizations are at risk, but most survive by security through obscurity. Most are content to hope they remain obscure.

Especially since the cost of actual security is very high. You have to build it into every aspect of the system. It makes development an order of magnitude more expensive and constrains usability... and you'll still never really be certain.

When you take employees into account, the cost becomes almost insurmountable. Keeping bank-style security means tightly limiting access, making even simple operations more work.

That's not an excuse. That's a warning. We are at grave risk, and we need to completely reconsider how almost every piece of software is written. Competence is hard and expensive.


>Data should be a liability.

Exactly. Mandate financial compensation for any and all value derived from data that an individual creates, whether they opted in or not.

If YC ran banner ads and my comment is viewed on the same page as an ad, then I should receive some significant percentage of that ad revenue. If an ad is targeted to a customer on IG through an ad campaign based on the user's data, then the user should get a significant percentage of that ad revenue too.

That should clear everything up.


You have a lot of opinions here, but this was a zero-day attack. Would love to hear your opinion on how Louisiana could have safeguarded against that.


On Qubes OS even a zero-day would likely not be sufficient. It relies on hardware virtualization and runs everything in VMs. It's my daily driver; can't recommend it enough.


You should pay attention to Pwn2Own every year, where it is common for competitors to chain a browser RCE with a VM escape exploit.


The last time a VT-d VM escape was discovered was in 2006, and it was done by the Qubes founder: https://en.wikipedia.org/wiki/Blue_Pill_(software).


It doesn't have to be VT-d: https://keenlab.tencent.com/en/2018/04/23/A-bunch-of-Red-Pil...

I can confidently tell you that your understanding of security mitigations is flawed, and I say that based on experience, not just baseless opinion. Silver bullets in security don't exist.

Let's say every MOVEit instance was run in a container, in a VM, and in a DMZ (actually, MOVEit Transfer is usually deployed in a DMZ, isolated from everything else). But the entire purpose of the software is to hold all these important files and expose them to authorized parties; it's basically a file server (it even has SFTP!). The threat actors in this case didn't even bother compromising the OS: they just got a session ID by abusing a SQLi and .NET deserialization flaw, logged into the web UI, and downloaded the files. At no point could a VM have stopped any of this.

I said your understanding is flawed because your mindset is solution-centric, not data-centric. If all an attacker cares about is access to your Gmail, a Qubes VM with strict SELinux rules is useless if they get you to click on a link that exploits Firefox to steal your Gmail cookies, defeating any YubiKey 2FA you may have.


You are too confident. First, Qubes does not use VMware. It uses Xen, which has far fewer vulnerabilities, most of which do not affect Qubes [0]. And none of them have led to arbitrary code execution on the host since 2006, except maybe for VMs with PCI passthrough [1].

Also, Qubes does not trust the hardware emulation [2]. It keeps the trusted computing base as small as possible. Of course, covert-channel attacks are still possible [3], but they are much weaker and can be mitigated through isolation. Qubes does not implement ordinary copy-paste functionality; its implementation is much more secure, see [4,5].

Hardware-virtualized VMs without any devices attached are extremely hard to escape from or to access from other VMs. I am not aware of any successful attempts in the last >10 years.

> a Qubes VM with strict SELinux rules is useless if they get you to click on a link that exploits Firefox to steal your Gmail cookies

This is wrong, because clicking on a link in my email would open a browser in a dedicated, disposable VM. Any attachment would likewise open in a disposable VM.

Having said that, you are probably right that in this case Qubes itself would not help as the whole database had to be available online.

[0] https://www.qubes-os.org/security/xsa/

[1] https://www.qubes-os.org/news/2021/06/08/qsb-069/

[2] https://www.qubes-os.org/faq/#is-the-io-emulation-component-...

[3] https://www.qubes-os.org/doc/data-leaks/

[4] https://www.qubes-os.org/doc/how-to-copy-and-move-files/#sec...

[5] https://www.qubes-os.org/doc/how-to-copy-and-paste-text/#sec...


I'll just say that the entire point of a zero-day is that you are not aware of it yet, that zero-click attacks exist, and that while selecting a product based on its historical performance under scrutiny is valid, you cannot use that history as evidence that an attack is not possible.

And again, you miss the whole point of a zero-day: it is an unknown. You can create an attacker-hostile environment, but you cannot make guarantees or claim a solution is best practice against an unknown/undefined threat.

As far as I am aware, for example, Spectre/Meltdown or *-hammer attacks could have been used against your disposable VM to read your email VM's memory. There are also hardware attacks against radio controllers/chips that could be used to wirelessly dump memory without your interaction.

You know, when I started working in security I was just telling someone how silly I thought those movie scenes were where the hacker guy types really fast and hacks everything. Now I know that if he is just the guy buying/trading exploits and has spent a lot of time automating stuff and setting up infra, it is indeed possible to make it all look that easy (but still, some movies/shows take it too far). Especially in government work, the guys using the tools are rarely the guys who develop them or maintain their infra. The guys who develop access might also be different from the guys who action objectives or work on exfil or shell management.


I do know what zero-days are. And I do not use historical performance alone as evidence of strong security.

Did you ever consider that the strong track record Xen and Qubes have is not mere coincidence? What if I told you it is not just pure luck? Statistically, it can't be luck: in the Linux kernel, serious vulnerabilities are found all the time, whereas here they're extremely rare.

Let me tell you that it is actually the result of good architecture. In Xen, the trusted computing base, i.e., the code responsible for isolation and security, is really small: on the order of 100k lines of code. In Linux it's millions of lines. All code has bugs, that's unavoidable, but you simply can't have a similar number of bugs in 100k lines as in millions of lines. There are no "guarantees" here, just statistics.

In addition, Xen is very popular among big companies, so its code is constantly checked for bugs by the best people in the field. And zero-days in it are so expensive that you would not waste one on leaking people's personal data for no reason. Security is not about guarantees, it's about probabilities. And those probabilities are very small on Qubes if you are not a big target. Even if you are, they are smaller than for any other system, thanks to the security-oriented design and reliance on compartmentalization.

Yes, Qubes is vulnerable to Spectre, Meltdown, and *-hammer attacks. But it never attempted to protect the user from the hardware it runs on; that is physically impossible. Also, hardware attacks like these are the exception, not the rule, and even they usually don't lead to VM escapes.


State government is tricky. Pay sucks and doesn’t attract a lot of talent. Same with the education sector.


Government IT is outsourced at prices that could pay top salaries (think Palantir). They just don't.


> Government IT is outsourced at prices that could pay top salaries

Mostly, it's not: a lot is not outsourced, and much of what is is outsourced on personal-services contracts at rates that, even if there were no vendor overhead, wouldn’t pay for a top-flight pay and benefits package.


> If the cost of losing data to hacks was, say three times higher than the cost estimated to secure it

I’d estimate that if this happened, the cost to secure data would grow by about a factor of 3.


Great. Let's see what sort of response this gets from the gov't.

At some point we need to start treating data as a liability.


I agree that we should, but why do we need to?


GP mixed up vim with emacs.


Mag isn't too bad; I believe you also get acclimated to it (haven't taken it in a while).

IME it's not much different from having a fiber-heavy diet.



