The blurred version feels honest -- it's not showing you anything more than what has been encoded.
The sharp image feels confusing -- it's showing you a ton of detail that is totally wrong. "Detail" that wasn't in the original, but is just artifacts.
Why would you prefer distracting artifacts over a blurred version?
The details are quite real, and they make the image far more comprehensible.
Get a picture of grass, save it as a JPEG at 15% quality... It still looks like grass. Then run it through jpeg2png... The output looks like a green smear. You might not even be able to tell that it's supposed to be grass. jpeg2png just blurs the hell out of images.
What does bigfoot have to do with conspiracy? Doesn't bigfoot qualify as folklore/urban legend/pseudoscience/hoax/mythology? Is there widespread belief the government is actively covering up its existence for some reason?
Nothing in the linked story explained it. Did someone make a whole documentary and couldn't get the most basic info right? Or did the reporter mangle the article write-up?
I've implemented it, too, and didn't run into any problems. The user inputs the zip code; if there are multiple city matches, they select the correct one from a drop-down (or you auto-complete the city name after they type the first four letters).
The fact that "a city can also exist in multiple zip codes, and there can be multiple cities with the same name in the same state" is a good point IN FAVOR of asking for the zip code first, not against it, because you certainly can't do the lookup the other way around.
And if you just leave it to the user to free-type all that info in, you have to verify it afterwards... Users are going to make typos, and the USPS will kick your butt if you don't correct them (and credit card payments won't go through, either). So it may be less work for web-form creators, but pushing the verification downstream just makes it all worse for the company using it.
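The zip-first flow described above can be sketched roughly as follows. The lookup table and zip codes here are stand-ins for illustration; a real form would query a USPS or commercial address API:

```python
# Minimal sketch of the zip-code-first flow. ZIP_TO_CITIES is a made-up
# stand-in for a real lookup table or address-verification API.
ZIP_TO_CITIES = {
    "10001": [("New York", "NY")],
    "42223": [("Fort Campbell", "KY"), ("Clarksville", "TN")],
}

def cities_for_zip(zip_code):
    """Return candidate (city, state) pairs; the user picks from a
    drop-down when there is more than one match."""
    return ZIP_TO_CITIES.get(zip_code, [])

# Single match: fill the city/state fields automatically.
assert cities_for_zip("10001") == [("New York", "NY")]
# Multiple matches: present a drop-down to the user.
assert len(cities_for_zip("42223")) == 2
# Unknown zip: fall back to free-form entry plus later verification.
assert cities_for_zip("00000") == []
```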
If my memory serves me right, it was meant to be "copy convert", but "cc" was already taken by the C compiler, so they went to the next letter of the alphabet, hence "dd". Thanks for listening to my TED talk of useless and probably false information :)
According to Dennis Ritchie, the name is an allusion to the DD statement found in IBM's Job Control Language (JCL), where DD is short for data definition https://en.wikipedia.org/wiki/Dd_%28Unix%29#History
Borg is available for download as a standalone binary, easily dropped onto any Linux system even with very limited privileges. And it's in the repos of every distro, easily installed and kept up to date.
By avoiding that one step and using rsync instead, you're resigning yourself to "send 600MiBs" over the network for every tiny config change. Not a good trade-off.
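The difference is deduplication: borg splits files into chunks and only stores chunks the repository hasn't seen, so a tiny config change uploads kilobytes rather than the whole tree. A toy illustration of the idea with fixed-size chunks (borg actually uses content-defined chunking plus encryption, so this is only the concept, not its implementation):

```python
import hashlib

CHUNK = 4      # toy chunk size; borg's chunks are much larger
store = {}     # chunk hash -> chunk bytes (the "repository")

def backup(data):
    """Store only chunks not already in the repository; return bytes uploaded."""
    uploaded = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:
            store[h] = chunk
            uploaded += len(chunk)
    return uploaded

first = backup(b"config=1;lots-of-other-data")   # initial backup: everything uploads
second = backup(b"config=2;lots-of-other-data")  # one-byte change: only the changed chunk uploads
assert second < first
```

Fixed-size chunking breaks down when bytes are inserted (every later chunk shifts), which is why borg uses content-defined chunk boundaries instead.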
In 1996? OpenBSD and Apache had been around for a year. PGP had been around for several years. HTTPS was used where needed. SecurID tokens were common for organizations that cared about security.
Admittedly SSH wasn't around, but kerberos+rlogin and SSL+telnet were available. Organizations that cared about security would have SecurID tokens issued to their employees and required for login.
Dial-in over phone lines, and requiring a password, was much less discoverable or exploitable than services exposed to the internet, today.
SSH was around, but not nearly as pervasive as it is today. I have memories of having to shake my mouse around during the Windows client installation to generate entropy. Fun times.
I believe your recollection is off by several years...
What you're describing is PuttyGen. According to Wikipedia, the first Putty release was in 1999. Archive.org doesn't have any snapshots of the Putty website before 2000, so that checks out.
The RSA patent didn't expire in the US until September 2000, so that's when free implementations like OpenSSH first became widely available. That's precisely when I started using it...
The original SSH was first released mid-1995. There would have been a small number of installations in 1996, but absolutely negligible. It was not well-known until later, circa 2000.
> There would have been a small number of installations in 1996, but absolutely negligible.
On HN there's always a good chance you're talking to some of the people involved in those "negligible" installations. I know that I submitted some patches to Tatu Ylönen for Ssh to compile on Ultrix, so that must have been in 1995 or early 1996 because after that I didn't have access to any Ultrix machines. I may have been an early adopter, but it didn't take long for ssh to take over the world, at least among Unix system administrators; at Usenix within a year everybody was using ssh because there wasn't any alternative and in terms of security it was a life-saver.
As for the RSA patent... I don't know what license the original Ssh was released under, but it was considered "freeware" when it came out and nobody cared about the US RSA patent. Maybe technically in the USA you shouldn't have used it? Nobody cared.
And the mouse-jiggling thing... not specifically a PuttyGen thing. On Linux, the /dev/random device doled out a few bits at a time, stingily, and only after it had gathered enough entropy, so it was common for programs that needed good randomness to ask you to jiggle the mouse: that was one of the sources of entropy, so the random bits would come faster. I'm pretty sure that was still the case well into the 2000s.
So I was running an SVN server on a decommissioned PC somewhere in a startup as an intern. The whole company ended up using it, and out of nowhere it would freeze; I'd go check whether it had rebooted or crashed, and everything was fine.
It would fix itself, without any intervention on my part. This happened many times.
I asked a senior for help; he ran strace and found a read blocked on /dev/random. And of course it resolved itself every time I checked, because I was moving the mouse!
Controversially but acceptably, we symlinked it to urandom and moved on.
How fast that guy used strace and analyzed the syscalls inspired me to get better at Linux.
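For context on the diagnosis: on the older Linux kernels in this story, a read() on /dev/random could block until the kernel's entropy estimate refilled (mouse movement being one input), while the urandom pool never blocks. A minimal sketch of the non-blocking side:

```python
import os

# os.urandom draws from the kernel CSPRNG (the urandom pool) and never
# blocks -- which is why pointing the server at urandom un-froze it.
data = os.urandom(16)
assert len(data) == 16

# On pre-getrandom kernels, the equivalent read on /dev/random could
# hang until someone jiggled the mouse; strace showed the server
# process parked in exactly that read() syscall.
```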
> it didn't take long for ssh to take over the world
That doesn't seem to be accurate. Wikipedia says, by the end of "2000 the number of users had grown to 2 million"
> everybody was using ssh because there wasn't any alternative
I already listed TWO of the most popular alternatives.
> the mouse-jiggling thing... not specifically a PuttyGen thing. On linux
Parent specifically said "windows client installation." Putty was very common on Windows. PuttyGen specifically and prominently told the user to move their mouse... etc. etc.
Even back in 1996, OpenBSD emphasized security. By 2000 they claimed "Three years without a remote hole in the default install!" at the very top of their website. Qmail was released in Dec 1995 and its security withstood scrutiny for quite a lot of years. I'd be interested in seeing just how many RCEs a modern security researcher could actually come up with from a 1996 release of BSDi, OpenBSD, Solaris, AIX, etc. I'd bet on just a handful.
I can understand how, if your whole world was Windows 3.1 and 95, you'd feel that way about security at the time.
Work life is quite a lot different for a working-stiff than it is for a CEO. In large part, their company is an extension of themselves. Work whatever hours you want to. Private plane to take you wherever you want to go for free, if you can come up with a work-related excuse to go there (with no need to justify not coming back for weeks). Multiple folks acting more-or-less as your personal assistants. An office bigger than your house, filled with anything you want in it, on the company's dime. A big pool of cash you can order the company to throw at whatever interests you. etc.
Based on this description, you could say that what the CEO does in no way resembles what real work looks like for 90% of the population. Which I think is true. It's a pity they make so much more than people who do actual work.
> developers of free-ish as in freedom products OWE it, not only to themselves, but their community to be as profitable as possible
Wikipedia seems to do just fine without.
Commercializing a product is a whole other field, and it's not reasonable to expect everyone to be good at that, and not reasonable to expect developers to all take on a second job of commercializing their hobby projects.
Why don't YOU commercialize your fork of their service, and use the proceeds to hire developers to maintain the code? That would be infinitely more useful than armchair criticism of others.
Because donations are a system that works very much in their favor and not at all in favor of other types of projects. Look at the OpenSSL Software Foundation having received less than $2k in yearly donations during the leadup to heartbleed.
> Commercializing a product is a whole other field, and it's not reasonable to expect everyone to be good at that, and not reasonable to expect developers to all take on a second job of commercializing their hobby projects.
I very much want to disagree with you, but I do not know how. Achieving some commercial success isn't too difficult if you look for it where others with your skill set have been successful (see the trades), but the whole point of such projects is the exact opposite: doing things differently and pushing accepted boundaries to where you think they should be.
On the other hand, I think this is acceptable. As I wrote in another comment, the obligations in these projects mostly arise from what the developer wants to commit to (or, sadly, commits to mistakenly). It is very reasonable to, e.g., not value the long-term success of your project highly.
You might want to just share an idea; maybe someone else will carry on your project, or maybe if in 5 years someone shows a picture of you proudly presenting your project, you're like "AI has gotten really impressive, if I didn't know better, I don't think I could tell that this is a fake". And if you're anything like me, strong commitments to internet strangers might be life-threatening. 2 out of 3 times that a promise I made got upvoted, I got hit by a car within 48 hours of making it, and not once otherwise. An up-arrow has just one pointy end; a GitHub star has five. I'm not taking chances.
They pay fair wages because they have enough scale that pestering for donations once a year is enough to cover their costs and then some. And even then, this forum is very famous for shitting on such large-scale not-for-profits, with many justifying their decision not to donate by how much money the non-profit already has in its pockets. The only reason we even know how much money the non-profit has in its pockets is that non-profits are legally obliged to disclose it publicly, while for-profits are not (until they go public, of course).
My point being that it's a mountain to climb, and just because those at the top have already climbed it doesn't mean everyone can. It takes a whole lot of effort and probably some public grants, but getting those public grants is a whole different skill set than actually building the thing. And you can only get a public grant after you've already created something useful, so your idea of a non-profit quickly turns into an inescapable hole in your pocket that you're desperately trying to fill for at least a year or two.
This is why while our lists might vary, every single one of us can only name like 5, maybe 10 non-profits that have "made it" (however we define that success).
All that said, go set up a recurring $2/month donation to your favourite non-profit right now. Whether you choose Wikimedia or something else, I'm sure it's well worth 10% of a monthly subscription you're already paying for an LLM or whatever. Unlike your for-profit subscriptions, if money becomes tight you can always cancel these without losing anything.