Hacker News | hyperhopper's comments

So the hawkers selling overpriced trinkets at every major tourist attraction in Europe or the US aren't scams? I disagree.


Unless they're trying to force them on you, no, they're not. Them being annoying as fuck doesn't mean they're dishonest.


The United States also said not to buy masks and that they were ineffective during the pandemic.

Placing absolute trust in these organizations and restricting freedom of speech based on that is a very bootlicking, anti-freedom stance


Fauci was trying to prevent a run on masks, which he believed were needed by health care workers. So he probably justified his lie to the US public to himself because it was for the "greater good" (the ends justify the means is not my view, BTW).

It turns out that masks ARE largely ineffective at preventing COVID infection. It's amazing how many studies have come up with vastly different results.

https://egc.yale.edu/research/largest-study-masks-and-covid-...

(Before you tell me that the story I cited above says the opposite, look at the effectiveness percentages they claim for each case.)

There's also this: https://x.com/RandPaul/status/1970565993169588579


Actual (N95/FFP2/FFP3) masks DO work; your comment is misleading. The study you've linked says:

> Colored masks of various construction were handed out free of charge, accompanied by a range of mask-wearing promotional activities inspired by marketing research

"of various construction" is... not very specific.

If you just try to cover your face with a piece of cloth, it won't work well. But if you use a good mask (N95/FFP2/FFP3) with proper fit [0], then you can decrease the chance of being infected (see e.g. [1]).

[0] https://www.mpg.de/17916867/coronavirus-masks-risk-protectio...

[1] https://www.cam.ac.uk/research/news/upgrading-ppe-for-staff-...


They claim a 5% reduction in spread with cloth masks and a 12% reduction with surgical masks. I think one fewer case out of every 10 or 20 is pretty acceptable?

Especially at the time when many countries were having their healthcare systems overloaded by cases.


I didn't want to be the one to have to say it, but neither masks nor social distancing had any scientific backing at all. It was all made up, completely made up. The saddest thing I see all the time is the poor souls STILL wearing masks in 2025 for no reason. I don't care how immunocompromised they are, the mask isn't doing anything to prevent viral infection at all. They might help against pollen. I also can't believe how many doctors and nurses at my wife's cancer clinic wear masks all the damn time even though they are not in a surgical environment. It's all been foisted upon them by the management of those clinics, and the management is completely insane. Nobody speaks up about it because they'd lose their job if they did, so the insanity just keeps rolling on and on, and it is utterly dehumanizing and demoralizing. If a cancer patient wants to wear a mask because it affords them some tiny comfort, then fine, but that is purely psychological. I've seen it over and over and over because I've been at numerous hospitals this past year trying to help my wife survive a cancer that I think Pfizer may be to blame for.


I'm sorry about your wife.

There was a scientific basis for N95 and similar masks. If you are talking about cloth and paper masks, I mostly agree. Even then, there were tests done using those surgical masks with 3D-printed frames. I remember this as one example of people following this line of thinking.

https://www.concordia.ca/news/stories/2021/07/26/surgical-ma...

As for dehumanization, I used to live in Tokyo and spent years riding the train. I think blaming masks for dehumanization when we have entire systems ragebaiting us on a daily basis is like blaming the LED light for your electric bill.

Social Distancing having "no scientific backing" is very difficult to respond to. Do you mean in terms of long term reduction of spread, or as a temporary measure to prevent overwhelming the hospitals (which is what the concern was at the time)?

I do agree that it was fundamentally dishonest to block people from going to church and then tell other people it was OK to protest (because somehow these protests were "socially distanced" and outdoors). They could have applied the same logic to church groups and helped them find places to congregate, but it was clearly a case of having sympathy for the in-group vs the out-group.


Basically, yes. However, if we make a distinction between respirators (e.g. N95 masks) and masks (including "surgical" masks, which don't really have a meaningfully better fitted filtration efficiency than cloth masks), then at least respirators offer some protection to the wearer, provided they also still minimize contact. But, in keeping with this distinction, yes, masks were never seriously scientifically supported. It is incredibly disheartening to see mask mandates still in cancer wards, despite these being mandates for (objectively useless) cloth/surgical masks.


> I didn't want to be the one to have to say it, but neither masks nor social distancing had any scientific backing at all.

This is false. Even a quick search shows multiple papers from pre-COVID times that show masks being effective [0][1]. There are many more post-COVID studies showing that N95/FFP2/FFP3 masks actually work if you wear them correctly (most people don't know how to do this). Educate yourself before sharing lies.

[0] https://pubmed.ncbi.nlm.nih.gov/21477136/

[1] https://pubmed.ncbi.nlm.nih.gov/19652172/


Yeah they burned a lot of trust with that, for sure.


They burned it down to the ground and beyond. And many of you on here willfully continue to trust them and argue vehemently against people who try to tell you the actual truth of the matter. RFK Jr. is a flawed human being, but he's doing some good work in unwinding some of the web of lies we live under right now.


It's good RFK is more willing to question things but he seems just as guilty when it comes to spinning webs of lies.

If we think Tylenol might cause autism, why doesn't he run/fund a large, clean randomized controlled trial? Instead he spreads conjecture based on papers with extremely weak evidence.


He’s just bringing different lies with new sponsors.


I think the problem is that apparently some people discovered there is a profitable business model in spreading misinformation, so a trustworthy (even if not always right), non-malicious reference source of information might be needed.

But who watches the watchmen?


100k is the number of active cards. It is being reported that they had 2-3x as many cards in total.

Seems like a nation-state-level attack from somebody who has millions to spend to keep this up their sleeve.


Why even have physical cards when there are eSIMs? Maybe cards have some advantages in terms of deniability or something.

It sounds sophisticated, but is it a nation state, a cartel, or something else big?


Probably because it's way easier to pull a SIM out of the package and stuff it into the reader than it is to go through the QR code/web site/phone app you need to get the eSIM up and running for your provider.

What I'm really curious about is the money trail. These cards weren't bought in one off cash purchases or via some penny ante crypto reseller. Someone bought in bulk using real money. They probably had to talk with the salesguy at the MVNO to make an order that large. This kind of thing must leave a footprint.


The bar to getting access to MVNO sales is actually extremely, extremely low.

They're ordering and activating maybe 20-50 at a time, and ordering that number of SIM activation kits from dealer supply houses is extremely normal. Activation typically also is at little to no cost as well to dealers in this market.

FWIW: at sixteen, I somehow managed to get dealer access to a CDMA MVNO. I was able to activate accounts on the fly with $2 of "free" credit to start the user off, with zero cost to me. I still get emails to this day over a decade and a half later from various cellular resellers offering me bulk cellphones...


Yeah this should have triggered some serious KYC flags at the carrier(s)...


Is SIM card KYC mandatory in the land of the free? I thought it was more of a European thing


Most of these SIM card stations require physical SIM cards, just because they always have.


You could buy normal SIM cards with cash.


Could simply be a propaganda bot farm. Each of these SIM cards registers on Facebook, YouTube, and Reddit, and the faraway propaganda teams use them to relay messages.


Yeah, it feels like it could be related to various propaganda efforts on social media networks, stuff like this:

https://readsludge.com/2025/09/15/democratic-pr-firm-to-run-...


Oh-my-zsh adds most of that while still being POSIX compliant


In my personal experience, oh-my-zsh slows things down too much. You're better off just taking whatever you really like about oh-my-zsh and configuring it yourself.



"zsh is just fine, you just need to add a megabyte of scripts on top" is not a good advertisement :)


Your own quote shows the source of the confusion. OC was asking how Google will handle apps that have somebody else signing for them. Your quote talks about letting devs who go through a verification process still sideload (though that has no real benefit at that point, since Google still holds control over you).



This is the real news. It should be illegal to call something deleted when it is not.


> It should be illegal to call something deleted when it is not.

I don't disagree, but that ship sailed at least 15 years ago. Soft delete is the name of the game basically everywhere...
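
For anyone who hasn't seen the pattern, "soft delete" usually just means setting a flag and filtering it out of queries; the row itself never goes away. A minimal sketch, with made-up table and column names:

    import sqlite3, time

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE chats (id INTEGER PRIMARY KEY, body TEXT, deleted_at REAL)")

    def soft_delete(chat_id):
        # "Delete" as the user sees it: the row stays, queries just hide it.
        db.execute("UPDATE chats SET deleted_at = ? WHERE id = ?", (time.time(), chat_id))

    def visible_chats():
        return db.execute("SELECT id, body FROM chats WHERE deleted_at IS NULL").fetchall()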


Consequently all your "deleted chats" might one day become public if someone manages to dump some tables off OpenAI's databases.

Maybe not today, in its heyday, but who knows what happens in 20 years once OpenAI becomes the Yahoo of AI, or loses much of its value, gets scrapped for parts, and is bought by less sophisticated owners.

It's better to regard that data as already public.


At work we dutifully delete all data on a GDPR request


How do you manage deleting data from backups? Or do you not take backups?


"When data subjects exercise one of their rights, the controller must respond within one month. If the request is too complex and more time is needed to answer, then your organisation may extend the time limit by two further months, provided that the data subject is informed within one month after receiving the request."

Backup retention policy of 60 days: respond within a week or two, telling the person that you have purged their data from the main database, that backups exist which cannot be changed, and that those backups will be automatically deleted within 60 days.

The only real difficulty is that if those backups are actually restored, the user deletion needs to be replayed, which is something that would be easy to forget.
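
One way to make that replay step harder to forget is to record erasure requests in a tombstone log that lives outside the backed-up database and re-apply it after any restore. A rough sketch, assuming hypothetical user_data and erasure_log tables:

    import sqlite3, time

    def record_erasure(main_db: sqlite3.Connection, log_db: sqlite3.Connection, user_id: int):
        # The log must NOT be rolled back together with main_db, otherwise
        # erasures requested after the backup was taken would be lost too.
        log_db.execute("INSERT INTO erasure_log (user_id, requested_at) VALUES (?, ?)",
                       (user_id, time.time()))
        log_db.commit()
        main_db.execute("DELETE FROM user_data WHERE user_id = ?", (user_id,))
        main_db.commit()

    def replay_erasures(main_db: sqlite3.Connection, log_db: sqlite3.Connection):
        # Run this as the last step of any restore-from-backup procedure.
        for (user_id,) in log_db.execute("SELECT user_id FROM erasure_log"):
            main_db.execute("DELETE FROM user_data WHERE user_id = ?", (user_id,))
        main_db.commit()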


Probably most just ignore backups. But there were some good proposals where you encrypt every user's data with their own key. So a full delete is just deleting the user's encryption key, rendering all of their data everywhere, including backups, inaccessible.
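
A minimal sketch of that "crypto-shredding" idea, assuming a per-user key store kept separate from the data; all names here are hypothetical:

    from cryptography.fernet import Fernet

    user_keys = {}       # user_id -> key (in practice: a key-management service)
    encrypted_rows = {}  # user_id -> ciphertexts (copies may also live in backups)

    def store(user_id, plaintext: bytes):
        key = user_keys.setdefault(user_id, Fernet.generate_key())
        encrypted_rows.setdefault(user_id, []).append(Fernet(key).encrypt(plaintext))

    def read_all(user_id):
        f = Fernet(user_keys[user_id])  # raises KeyError once the key is gone
        return [f.decrypt(token) for token in encrypted_rows[user_id]]

    def gdpr_delete(user_id):
        # Destroying only the key renders every copy of the ciphertext,
        # including copies sitting in old backups, permanently unreadable.
        del user_keys[user_id]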


Deletion via encryption only works if every user’s data is completely separate from every other user’s data in the storage layer. This is rarely the case in databases, indexes, etc. It also is often infeasible if the number of users is very large (key schedule state alone will blow up your CPU cache).

Databases with data from multiple users largely can’t work this way unless you are comfortable with a several order of magnitude loss of performance. It has been built many times but performance is so poor that it is deemed unusable.


The entire mess isn't with data in databases, but on laptops for offline analysis, in log files, backups, etc.

It's easy enough to have a SQL query to delete a user's data from the production database for real.

It's all the other places the data goes that's a mess, and a robust system of deletion via encryption could work fine in most of those places, at least in the abstract with the proper tooling.


Some of these issues could perhaps be addressed by having fixed retention of PII in the online systems, and encryption at rest in the offline systems. If a user wants to access data of theirs which has gone offline, they take the decryption hit. Of course it helps to be critical about how much data should be retained in the first place.

It is true that protecting the user's privacy costs more than not protecting it, but some organizations feel a moral obligation or have a legal duty to do so. And some users value their own privacy enough that they are willing to deal with the decreased convenience.

As an engineer, I find it neat that figuring out how to delete data is often a more complicated problem than figuring out how to create it. I welcome government regulations that encourage more research and development in this area, since from my perspective that aligns actually-interesting technical work with the public good.


> As an engineer, I find it neat that figuring out how to delete data is often a more complicated problem than figuring out how to create it.

Unfortunately, this is a deeply hard problem in theory. It is not as though it has not been thoroughly studied in computer science. When GDPR first came out I was actually doing core research on “delete-optimized” databases. It is a problem in other domains. Regulations don’t have the power to dictate mathematics.

I know of several examples in multiple countries where data deletion laws are flatly ignored by the government because it is literally impossible to comply even though they want to. Often this data supports a critical public good, so simply not collecting it would have adverse consequences to their citizens.

tl;dr: delete-optimized architectures are so profoundly pathological to query performance, and a lesser extent insert performance, that no one can use them for most practical applications. It is fundamental to the computer science of the problem. Denial of this reality leads to issues like the above where non-compliance is required because the law didn’t concern itself with the physics of computation.

If the database is too slow to load the data then it doesn’t matter how fast your deterministic hard deletion is because there is no data to delete in the system.

Any improvements in the situation are solving minor problems in narrow cases. The core theory problems are what they are. No amount of wishful thinking will change this situation.


Instantaneous deletes might be impossible, but I really doubt that it’s physically impossible to eventually delete user data. If you soft delete first to hide user data, and then maybe it takes hours, weeks, months to eventually purge from all systems, that’s fine. Regulators aren’t expecting you to edit old backups, only that they eventually get cleared in reasonable time.

Seems that companies are capable of moving mountains when the task is tracking the user and bypassing privacy protections. But when the task is deleting the user's data, it's "literally impossible".


It would be interesting to hear more about your experience with systems where deletion has been deemed "literally impossible".

Every database I have come across in my career has a delete function. Often it is slow. In many places I worked, deleting or expiring data cost almost as much as or sometimes more than inserting it... but we still expired the data because that's a fundamental requirement of the system. So everything costs 2x, so what? The interesting thing is how to make it cost less than 2x.


You can use row-based encryption and store the encrypted row key alongside each row. You use a master key to decrypt the row encryption key and then decrypt the row each time you need to access it. This is the standard way of implementing it.

You can instead switch to a password-based key derivation function for the row encryption key if you want the row to be encrypted by a user-provided password.
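
A rough sketch of that envelope-encryption pattern, assuming Fernet for the symmetric layer and PBKDF2 for the password variant; the names and parameters are illustrative, not a reference implementation:

    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    MASTER_KEY = Fernet.generate_key()  # in practice: held in a KMS/HSM

    def encrypt_row(plaintext: bytes):
        row_key = Fernet.generate_key()                # per-row data key
        wrapped = Fernet(MASTER_KEY).encrypt(row_key)  # stored alongside the row
        return {"wrapped_key": wrapped,
                "ciphertext": Fernet(row_key).encrypt(plaintext)}

    def decrypt_row(row):
        row_key = Fernet(MASTER_KEY).decrypt(row["wrapped_key"])
        return Fernet(row_key).decrypt(row["ciphertext"])

    def password_wrapping_key(password: str, salt: bytes) -> bytes:
        # Variant: derive the wrapping key from a user-supplied password
        # instead of MASTER_KEY (losing the password then loses the rows).
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))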


This has been tried many times. The performance is so poor as to be unusable for most applications. The technical reasons are well-understood.

The issue is that, at a minimum, you have added 32 bytes to a row just for the key. That is extremely expensive and in many cases will be a large percentage of the entire row; many years ago PostgreSQL went to heroic efforts to reduce 2 bytes per row for performance reasons. It also limits you to row storage, which means query performance will be poor.

That aside, you overlooked the fact that you'll have to compute a key schedule for each row. None of the setup costs of the encryption can be amortized, which makes processing a row extremely expensive computationally.

There is no obvious solution that actually works. This has been studied and implemented extensively. The reason no one does it isn't because no one has thought of it before.


You're not wrong about the downsides. However, you're wrong about the costs being prohibitive in general. I've personally worked on quite a few applications that do this, and the additional cost has never been an issue.

Obviously context matters, and there are some applications where the benefit does not outweigh the cost.


I think you and the GP are probably talking about different scale orders of magnitude.


Very likely!

But I think there must also be constraints other than scale. The profit margins must also be razor thin.


Smart. How do you back up the users' encryption keys?


A set of encryption keys is a lot smaller than the set of all user data, so it's much more viable to have both more redundant hot storage and more frequently rotated cold storage of just the keys.


Most companies don't keep all backups in perpetuity, and instead have rolling backups over some period of time.


Backups can have a fixed retention period.


Sure, but now when the backup is restored two weeks later, is the user redeleted or just forgotten about?


Depends on the processes in place at the company. Presumably if a backup is restored, some kind of replay has to happen after that, otherwise all the other users are going to lose data that arrived in the interim. A catastrophic failure where both two weeks of user data and all the related events get irretrievably blackholed could still happen, sure, but any company where that is a regular occurrence likely has much bigger problems than complying with GDPR.

The point is that none of these problems are insurmountable - they are all processes and practices that have been in place since long before GDPR and long before I started in this industry 25+ years ago. Even if deletion is only eventually consistent, even if a few pieces of data slip through the cracks, it is not hard to have policies in place that at least provide a best effort at upholding users' privacy and complying with the regulations.

Organizations who choose not to bother, claiming that it's all too difficult, or that because deletion cannot be done 100% perfectly it should not even be attempted at all, are making weak excuses. The cynical take would be that they are just covering for the fact that they really do not respect their users' privacy and simply do not want to give up even the slightest chance of extracting value from that data they illegally and immorally choose to retain.


Purely out of interest, how do you verify that the GDPR request is coming from the actual user and not an imposter?


> The organisation might need you to prove your identity. However, they should only ask you for just enough information to be sure you are the right person. If they do this, then the one-month time period to respond to your request begins from when they receive this additional information.

https://ico.org.uk/for-the-public/your-right-to-get-your-dat...


In my domain, our set of services only authorizes the Customer Centre system to do so. I guess I'd need to ask them for details, but I always assumed they have checks in place.


That won't work in this case, because I doubt GDPR requests override court orders.


This is very, very hard in practice.

With how modern systems, languages, databases and file systems are designed, deletion often means "mark this as deleted" or "erase the location of this data". This is true on all possible levels of the stack, from hardware to high-level application frameworks.

Changing this would slow computers down massively. Just to give a few examples, backups would be prohibited, so would be garbage collection and all existing SSD drives. File systems would have to wipe data on unlink(), which would increase drive wear and turn operations which everybody assumed were O(1) for years into O(n), and existing software isn't prepared for that. Same with zeroing out memory pages, OSes would have to be redesigned to do it all at once when a process terminates, and we just don't know what the performance impact of that would be.


You just do it the way fast storage wipes do it. Encrypt everything, and to delete you delete the decryption key. If a user wants to clear their personal data, you delete their decryption key and all of their data is burned without having to physically modify it.


That only works if you have a single key at the block level, like an encryption key per file. It essentially doesn’t work for data that is finely mixed with different keys such as in a database. Encryption works on byte blocks, 16-bytes in the case of AES. Modern data representations interleave data at the bit level for performance and efficiency reasons. How do you encrypt a block with several users data in it? Separating these out into individual blocks is extremely expensive in several dimensions.

There have been several attempts to build e.g. databases that worked this way. The performance and scalability was so poor compared to normal databases that they were essentially unusable.


It would be very hard to change technically, yes.

But that's not the only solve. It's easy to change the words we use instead to make it clear to users that the data isn't irrevocably deleted.


Or maybe it should be illegal to have a court order requiring that the privacy of millions of people be infringed? I'm with OpenAI on this one, regardless of their less-than-pure reasons. You don't get to wiretap all of the US population, and that's essentially what they are doing here.


They are preserving evidence in a lawsuit. If you are concerned, you can try petitioning the court to keep your data private. I don't know how that would go.


The privacy of millions of people should take precedence over ease of evidence collection for a lawsuit.


You can use that same argument for wiretapping the US, because surely someone did something wrong. So we should just collect evidence on everyone on the off chance we need it.


That's already the case. Ever looked into the Snowden leaks?


"Marked" for deletion.


The concept of “deleted” is not black and white, it is a continuum (though I agree that this is a very soft delete). As a technical matter, it is surprisingly difficult and expensive to unrecoverably delete something with high assurance. Most deletes in real systems are much softer than people assume because it dramatically improves performance, scalability, and cost.

There have been many attempts to build e.g. databases that support deterministic hard deletes. Unfortunately, that feature is sufficiently ruinous to efficient software architecture that performance is extremely poor such that no one uses them.


> I like Mozilla and thought Pocket's future would be relatively safe in their hands.

Never trust a company like this. You'll always get burned. If it's not FOSS, it's not reliable and will likely burn you.


> For VCs, why go anywhere else if the best of the world are flocking towards you already?

Very false. A lot of the best in the world have no intention of ever living in Silicon Valley. In my circles, people dread even a week-long trip there.


The test was to scientifically figure out which one he likes the most

