
That's by design: if you use a proof of work that's validated outside of the blockchain, the same work could be valid for multiple forks, which would make it a really bad proof-of-work system.
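
A toy sketch of why in-chain proof of work doesn't have this problem (simplified: SHA-256 and string concatenation stand in for a real block header format, and `mine` is illustrative only): the previous block's hash is an input to the puzzle, so a solution found on one fork is worthless on any other.

    const crypto = require('crypto');
    const sha256 = s => crypto.createHash('sha256').update(s).digest('hex');

    // The solution commits to prevHash, so the work only counts on the
    // fork that actually contains that previous block.
    function mine(prevHash, payload, prefix = '0000') {
      for (let nonce = 0; ; nonce++) {
        const h = sha256(prevHash + payload + nonce);
        if (h.startsWith(prefix)) return { nonce, hash: h };
      }
    }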


If you take into account herd immunity the actual effectiveness should be >99%.


Yeah, that's a good point. Obviously vaccinations benefit from network effects as the number of vaccinated individuals in the population increases - so over time this will skew the results towards higher efficacy.


And sunscreen is also a thing.


I don’t trust most people to apply sunscreen effectively. You really need a lot of it for adequate protection, and by the time you apply that much, you become an oily, slippery mess leaving sunscreen on every surface you touch. It’s easier to just stay the hell away from the sunlight and any surface that reflects light.


No you can't, you can only hide your last seen status.


You can actually do it (I'm not saying it's a good idea); there's a way to differentiate between -0 and 0:

    > Object.is(0, -0)
    false
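
Before Object.is (added in ES2015), the usual trick was division, since IEEE 754 preserves the sign of zero:

    > 1 / -0
    -Infinity
    > 1 / 0
    Infinity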


I was going to make a comment about this not working for integer types; but then I remembered that JavaScript doesn't have integers, so I guess it technically works.
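
For context, JavaScript has a single number type (an IEEE 754 double), so even integer-looking values are floats:

    > typeof 42
    'number'
    > 42 === 42.0
    true
    > Number.MAX_SAFE_INTEGER  // 2^53 - 1; past this, integer precision is lost
    9007199254740991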


I think you meant GNU/Linux instead of "real".


Android is a real Linux in the same way that iOS is a real BSD Unix.


No, I don't think that's true. Android is just Linux with an Android userland.

Richard Stallman was quite correct to call it GNU/Linux, as much as I dislike the guy.


It's a long-running pattern that keeps playing out: people saying "Richard Stallman was quite correct <insert topic here>, as much as I dislike the guy."

Maybe we should pay more attention to what he says!


Except that you are only allowed to use these Linux system calls:

https://github.com/aosp-mirror/platform_bionic/blob/master/d...

Anything not whitelisted in the SELinux and seccomp configurations simply results in the "naughty" app being killed.


iOS is just BSD/Mach with an iOS userland. Here is the kernel: https://github.com/apple/darwin-xnu


Yes, it's accurate to describe iOS as iOS/Darwin, and macOS as macOS/Darwin. But "BSD" implies a BSD userland to most people, which macOS has but iOS doesn't.


Linux implies a GNU userland to most people which Android does not have.


Alpine Linux is unequivocally a Linux despite not having much to do with GNU in userland.


It has a work-alike of the GNU userland.


Well, that's precisely what Stallman was trying to address, wasn't he? Trying to separate out the Linux kernel from the GNU userland.


Yes, but he had failed before he even started trying to get people to say GNU/Linux instead of Linux.

That’s my point - when people say Linux they generally mean GNU/Linux not Android.

Equally, people are generally not talking about only the kernel, when they say Linux. Usually when someone is talking about only the kernel they say ‘the Linux kernel’.

GNU/Linux usually only gets used by the FSF or during discussions of this kind.

“Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy.” [1]

[1] https://en.wikipedia.org/wiki/Linux


The opposite tho. Darwin uses a BSD userland (and a proprietary GUI stack) atop a custom kernel while Android is a Linux kernel with a custom userland (and again a proprietary GUI stack.)

They do both use bash tho ;)


Used to. Macs switched to zsh :-)


What? Android’s kernel is a pretty close derivative of upstream Linux while iOS’ kernel has almost nothing to do with any current BSD distribution.


iOS kernel is Darwin, a current BSD: https://github.com/apple/darwin-xnu


Darwin is only half BSD. It uses a Mach kernel and ships Bash, but with BSD utilities instead of coreutils. Its development and administrative utilities, along with its init system, are custom as well.


Fair enough, but that still means this statement (made by ArgylSound): “iOS’ kernel has almost nothing to do with any current BSD distribution.” is false.


XNU, Darwin's kernel, is Mach + FreeBSD[1].

[1] https://github.com/apple/darwin-xnu


For more background, see https://developer.apple.com/library/archive/documentation/Da...

There it clarifies that it is "mostly" FreeBSD. The Mach being used also differs significantly from CMU Mach and what you might find by looking for Mach3/4 source dumps.

One of the biggest challenges in "converting" NeXTSTEP (MacOS's predecessor) to OS X was both updating software to newer versions and eliminating expensive licenses from AT&T and Adobe.

NeXTSTEP was a "capital U" UNIX with AT&T proprietary code, based on 4.3/4.4BSD (encumbered). Every copy needed a UNIX license and royalties to AT&T.

NeXT was based on Mach2, which had 4.3BSD deeply integrated into the kernel source tree. Device drivers were both native BSD ones along with a "DriverKit" interface that used Mach messages to write userland device drivers.

CMU Mach v3 and v4 cut out all the BSD code and put it into a userland "UX Server", a model incompatible with NeXT. So instead, Apple took the OSF/1 Mach kernel, derived from Mach 2.5, and replaced the BSD subsystem code with 4.4BSD-lite, gradually updating its subsystems with FreeBSD ones.

So TLDR, Darwin/XNU has both a BSD userland and essentially a FreeBSD kernel. When you make a "UNIX-y" syscall from C in MacOS, you're "talking" to a "FreeBSD kernel".


Not having to deal with the security theater is also a big plus.

When you take into account all the time it takes to get from your place to the airport, and then from the airport to your hotel, there's usually not much of a time difference in taking the train anyway.


Assuming one is traveling up or down a coast. The coast to Chicago is a different matter.


It is until you read the terms and conditions and see that you have a 1TB data cap, or a more generic: we'll cut your access if you have "excessive" data usage.


I don't have a data cap though. In fact I specifically picked a provider without a data cap (out of the 2 available at my location). My understanding is data caps are more popular in e.g. Canada and Australia and such, and are hardly a thing anywhere else. At least not to the point where they'd be enforced.


Once everyone using your provider starts filling up their connections, it won't be long until they implement one.

What you've got is a 200 Mbps connection to your provider (and even that's probably a lie; it's probably shared before it reaches their endpoint); after that it's fully shared with every other customer... That's just how the internet is built: you can't have a dedicated 1 Gbps link to every single server, that just doesn't make sense.

Thing is, the higher the requirements, the more expensive they are to support; that's simple math... If you've got 10,000 clients that each download at 1 Gbps, you need 10 Tbps, and it's even worse if they're all hitting the same service; that link won't support it, believe me.


ISPs in the USA have been sneakily adding them for years. It's almost standard now.


There's a lot of immigration from Latin America though.


It still helps though. Let's assume they're using MD5 without a salt: if the attacker has a rainbow table, this leads to a speedup, since the attacker only has to guess the hash prefix bit by bit instead of trying every possibility.


Right, what you've got is an oracle that will tell you how many prefix bits of MD5(password) were correct. I suspect bytes are more realistic than bits, but it's the same principle.

Rainbow tables don't seem like a very applicable technology for this. I think you'd want to pre-process a dictionary of passwords you want to be able to guess (either from known passwords or just obeying some rule) so that you can group and order them by MD5 prefix.

Using the oracle lets you narrow down 50% at a time, so if you've got a dictionary of 1 billion passwords, after 30 iterations you've either rejected all the possibilities or found a correct password.

NB This makes no difference against a strong random password, since a hypothetical dictionary of such passwords is far too large; it only impacts ordinary human passwords that you could brute-force if you had the hash.
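
To make that narrowing concrete, here's a minimal sketch in Node.js. `oracle(guess)` is hypothetical: it stands in for however many leading bytes of MD5(guess) you can confirm via the timing side channel, and byte granularity is assumed.

    const crypto = require('crypto');
    const md5 = s => crypto.createHash('md5').update(s).digest('hex');

    // Returns a password from `dictionary` matching the stored hash, or null.
    function search(dictionary, oracle) {
      let candidates = dictionary.slice();
      while (candidates.length > 0) {
        const guess = candidates.shift();
        const matched = oracle(guess);     // leading bytes of md5(guess) that matched
        if (matched === 16) return guess;  // all 16 MD5 bytes matched: found it
        const h = md5(guess);
        const prefix = h.slice(0, matched * 2);              // confirmed hex prefix
        const wrong = h.slice(matched * 2, matched * 2 + 2); // byte now ruled out
        // Keep only passwords whose hash shares the confirmed prefix but
        // differs from the guess at the first unconfirmed byte.
        candidates = candidates.filter(p => {
          const ph = md5(p);
          return ph.startsWith(prefix) &&
                 ph.slice(matched * 2, matched * 2 + 2) !== wrong;
        });
      }
      return null; // every dictionary entry has been eliminated
    }

Each query confirms a prefix and rules out one value of the next byte, so the candidate set collapses quickly; as noted above, none of this helps if the true password isn't in the dictionary.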


What you’re talking about is more like a hash table, not a rainbow table. If you’re looking for a pre-image for the first 8 bytes of MD5, that would be a table with 2^64 rows * at least 8 bytes each, or roughly 148 exabytes of data.

Using such a table would allow you to select guesses that matched the first n bits of the stored hash and then use timing to try to guess the right choice for the n+1’th bit... up to the first 64 bits. You would then have limited the number of guesses down to just another 2^64 possibilities. So that’s not really a valid approach.

A different approach would be to try to figure out the first ~32 bits of the hash using a much smaller (34GB) table and use that knowledge to screen potential candidate passwords offline.

Again, all this works only as long as there isn’t a salt, and the value you are trying to discover is a guessable password and not a randomly generated key.
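
For reference, the storage arithmetic (Node REPL, BigInt to keep the values exact):

    > 2n ** 64n * 8n        // bytes for a full 8-byte-prefix table
    147573952589676412928n  // ~148 EB
    > 2n ** 32n * 8n        // bytes for a 32-bit-prefix table
    34359738368n            // ~34 GB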

