Unfortunately, my AppleID password is one I _do_ need to remember - I need to use it often, and in places where 1Password won't autofill. At least: the iCloud website login, various iDevices when using the App Store, and iTunes on several machines (all on the home sharing network). The alternative seems to be to have all those devices "remember" my AppleID password, which seems like a security lose.
A random string of characters in a wallet doesn't have a lot of value. Of course, don't write down what that string is for, and make sure you have another copy at home. If you are really concerned, leave off the first character of your password and remember that.
The truth is that 2-factor authentication is the real solution. But one has to make do with imperfect solutions.
Writing your password / passphrase down allows you to choose a good, strong password. You don't write down anything that links that password to a particular service.
Most people will only need to refer to the written password for a week or so, and then they will remember it.
You put the piece of paper in the wallet because you want people to treat it like a 50 dollar bill. People leave bits of paper lying around everywhere, but they don't leave 50 dollar bills lying around.
It is baffling to me that authenticating to computers, software, and services is still so weirdly broken. Especially since there are now billions of dollars involved in it.
I know that you're not suggesting that people should reuse one password across multiple services. In your model:
1) I have to lose my wallet and
2) Not change my password and
3) You have to know my login email address and
4) You have to find which service the email and password work for
...and all of these have to happen in the time between setting a new strong password and learning that strong password, because once you've learnt the password you stop carrying it around.
If you lose your wallet there's a bunch of stuff you need to do. You need to cancel your cards, for example. Keeping a single password in there (for the short time it takes you to remember it) means that there is one more step added - you need to change that password.
You're also failing to do a sensible risk analysis. The threat model for passwords is "hackers, anywhere in the world". In the Venn diagram, that very big set has a teeny tiny intersection with the much smaller set of "people who have access to my wallet if I happen to lose it".
Writing down a good password means that you get to use a good password. You get to choose a properly strong password, with many characters of mixed case including numbers and specials; or a 6 word passphrase.
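To put rough numbers on that (a back-of-the-envelope estimate, assuming the passphrase words are drawn uniformly from a Diceware-style list of 7,776 words and the password characters from the ~94 printable ASCII characters):

    import math

    # 6 random words from a 7,776-word (Diceware-style) list
    passphrase_bits = 6 * math.log2(7776)    # ~77.5 bits

    # 15 random characters from ~94 printable ASCII characters
    password_bits = 15 * math.log2(94)       # ~98.3 bits

    print(f"6-word passphrase:       {passphrase_bits:.1f} bits")
    print(f"15-char random password: {password_bits:.1f} bits")

Either one is far beyond anything you could hope to memorise cold on day one, which is exactly why writing it down for the first week is so useful.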
If you label the passwords you're probably doing it wrong. If someone pulls a piece of paper that says "QWhXnLv0qzi1h1m" out of my wallet, how are they going to use it?
If you're worried about someone stealing it, just shift the password over, so it's now "mQWhXnLv0qzi1h1 > 1" on paper.
Any tech-savvy person knows that has a strong possibility of being a password. Grab an ID, google "your name gmail", log in.
The kind of weak encryption scheme you can remember is easily defeated; this is still very vulnerable even if you leave one or two letters off (which you'll have to remember in addition to the scheme). So, going back to the parent: no, this isn't safer than a password in your head.
But this isn't the password for gmail; this is the password for the password manager account. So you need to know which password manager they are using and the username that goes with the password. And you have to find all of that out in the window before we realise we've lost our wallet and change the password.
Obviously this is still less secure than no password in the wallet at all, but I don't think it's "very vulnerable" as you are claiming.
Having a 20-30 character password is fairly easy. It may not be 100% random, but (correct me if I'm wrong) a password that long, with just a handful of random extra letters and numbers, is going to be rather easy to remember and probably just as hard to brute-force.
You have to be careful when adding some random characters, because most cracking software includes dictionary mangling options.
Thus "option", "0ption", "opt1on", etc. all get mangled into the wordlist, while ")*&HD@IHU" doesn't. Yes, it still increases the difficulty, and mangled words are much easier to remember, but people need to be careful.
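To make that concrete, here's a toy sketch of the kind of rule-based mangling a cracker runs over a wordlist (real tools like John the Ripper or hashcat have far richer rule engines; the substitutions below are only illustrative):

    # Toy wordlist mangler: apply common "leet" substitutions and case toggles.
    SUBS = {"o": "0", "i": "1", "e": "3", "a": "@", "s": "$"}

    def mangle(word):
        variants = {word, word.capitalize(), word.upper()}
        for plain, leet in SUBS.items():
            variants |= {v.replace(plain, leet) for v in list(variants)}
        return variants

    print(sorted(mangle("option")))
    # ['0pt10n', '0pti0n', 'OPTION', 'Opt10n', ..., 'opt1on', 'option']
    # A random string like )*&HD@IHU never falls out of any wordlist rule.

Real rule sets also append years, swap in punctuation, and so on, so a mangled dictionary word buys far less extra entropy than it looks like it should.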
Software: Don't install shit you don't trust. Don't trust shit you can't verify.
This one is pretty tricky. There's a lot of little tools out there that I find invaluable, and haven't screwed me over yet (as far as I know), but fall firmly in the "downloaded it off someone's little personal website" category.
I'd say we need better fine-grained permission systems for software, so people can install programs without needing to trust them, safe in the knowledge that they'll get the opportunity to deny any malicious behaviour before it actually happens.
That's what the Mac App Store is starting to do, but unfortunately, it's "completely sandboxed in the store" or "not in the store". I'd like a model that started completely sandboxed but let me choose if I want to let it out of the sandbox in certain, well-monitored cases.
> let me choose if I want to let it out of the sandbox in certain, well-monitored cases
That's exactly what I mean. I envisage something kinda like Windows 7's UAC dialogs, but more specific than "this program wants root! [allow] [deny]" -- more along the lines of "this program wants to install a driver / write to such-and-such protected files (its own program folder/anywhere in Program Files/the Windows folder/...) / low-level disk access / to run at startup / etcetc [allow] [deny]".
Actually, I'd specifically forbid "all permissions" as an option; an enumeration of every permission a program wants would make the user more likely to notice unreasonable requests than a single item would, even if that single item's actually "everything". I get the impression, from seeing ordinary users dealing with UAC, that they don't usually appreciate quite how much power they're giving programs when they hit "allow".
I believe that's similar to what SELinux does, although I've never used it beyond observing its presence on university-owned computers.
That's what Apple's shooting for with the App Store's sandboxing requirements, but I'm sure the typical HNer will continue to have a few programs that need to operate outside the limited entitlements that the App Store allows. Still, it'll be better to have a single digit count of those on your computer instead of anything being able to erase your home folder without asking.
Yes, but you don't fix it by shutting them all down and making people pay for something they could provide themselves. You make it free, 'cos it's the Olympics and people need to communicate freely.
ps. wasn't my downvote, I think they're useless
edit-- well, it seems you removed the part of your comment that called out your down-voter, and now I seem pedantic
> making people pay for something they could provide themselves
They could. At the expense of everyone else around them.
> You make it free, 'cos it's the Olympics and people need to communicate freely.
Go to a club on Thursday night (Thirsty Thursday) and try to have a conversation with your friend. The same thing happens with wifi. Considering the amount of wifi equipment used by the teams at the Olympics, I think it's more pragmatic for the IOC to heavily regulate the spectrum to be sure the coaches can do their job. When it's all said and done, the spirit of the Olympics is about the athletes, not the fans.
Making everyone pay isn't a solution, it's a business plan.
When I go out to a loud club I can still communicate and no one comes around telling me it's against the club rules to try to.
The Olympics would be nothing without our ability to communicate about them. IMHO that's one of the biggest failings of NBC this year. They're doing a horrible job of communicating the Olympics. Opening up the communication channels just brings more interest, more coverage and more excitement.
True, but how many people (apart from journos, who are almost certainly going to already have a separate wifi network provided) really need to be using their laptops to communicate from within the Olympic site? If you want to tweet or whatever, the chances are that you'll already have a smartphone with you that you can do that from.
The NBC debacle is a big deal, and if I was American it would have a big impact on my enjoyment. But this is vastly different, and I'm not finding my enjoyment affected one bit by this ban (I doubt that I will be bothered when I'm down there next week either).
If you're coming from a foreign country, you've probably got a foreign phone/data contract, and probably don't want to be streaming huge amounts of data over it at roaming rates.
Whether or not that reddit post is correct in this situation, they are banning regular wifi and 3G hotspots created by personal cell phones, which IMHO isn't really draconian but is pretty dumb. The Olympics should be about athletics, communication, and celebration, not brand recognition.
Seems bogus to me. Most of the important radios are in the low UHF band (<600 MHz).
(I happen to know this, since I was involved in the frequency assignments for the stage management system of the main stadium, and the company I work for delivered all the base stations pre-programmed.)
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
This comes close to dancing around the Open Source Definition (http://opensource.org/osd.html/) I think. The restriction on "not sold by themselves" would seem to be a violation of the "No Restriction on Fields of Endeavor" requirement. But it's close.
It seems like a silly restriction anyway. What advantage would someone gain over Adobe by selling freely available fonts "by themselves" with no added value?
I think it's designed to keep someone from taking the work and selling it as their own. If you look at fonts, there are hundreds of thousands of unique fonts being sold individually.
Well, sure. But these fonts are already open source. So who would buy it when they can get it for free from Adobe? This is like the resellers of free software on ebay. Sure, you can sell to a few dummies. And it's a little annoying. But it's not something that's going to hurt Adobe or its image meaningfully. Why bother polluting your license with this restriction?
If it's only going to affect people selling free stuff to ignorant people, why complain about the restriction? Wouldn't reducing people essentially scamming be a worthy reason for having that clause?
One good reason to complain about the restriction would be that it's a potential violation of the open source definition and a GPL-incompatible "additional restriction" on redistribution. And because it's just dumb.
If there is no added value, by definition, there would not be an advantage for the buyer.
The only added value I can think of would be the traditional reasons why people used to buy CDs with freely available software:
- the seller delivers the stuff cheaper than if the buyer downloaded it themselves (highly unlikely nowadays, but maybe there are corners of the world where this still applies)
- the seller acts as a curator, sifting the gold from the junk, so that the buyer need not do that.
The 'curator' role might still be worth something. For example, a site could have a link 'do you like the template/icons/font we use? Buy it here'. That would be forbidden by this license (but selling all three in a package, or even two fonts with this license in one package, would be fine, at least in a literal interpretation of the license)
If what you're implying is that you believe in the ability of the free market to solve the problem, then we would first need to free the wireless phone market.
Exactly. I'm a huge free market proponent, but the problem is that the wireless spectrum is a public good/utility. As such, we the public should put stipulations on companies who want to use that public resource.
It's not about what kind of software they are considered to be; it's about code running on your system. In RMS's eyes, if code runs on your system and you can't read/modify that code yourself, it's unethical.
...everyone knows child porn is bad. Its existence, though, does not negate the usefulness of a system. Not that cryptosphere is going to be the open government revolution, but your comment is a useless vomit of emotional blather.
How do we design a system that is anonymous and un-censorable where users can opt out of being relays for certain types of data?
Hard problem. Trying to solve it would be interesting. Not trying to solve it would make you identical to FreeNet and Tor and all the other efforts in this area, and thus less interesting.
I agree that there is no 100% solution to this problem, since all data can be converted to any format. There is also no 100% solution to pollution in a city, for example, or public health, or usability of a GUI. But there are 90% solutions that could make the problem marginal rather than severe.
BTW, on my "emotional vomit:"
Tell me. If these networks are for real human beings to engage in open communication, what happens when one of these real human beings comes across... say... a picture of a little girl being cooked over an open fire like a pig. (I didn't see this, but I was discussing the Tor .onion network on Reddit and someone claimed they came across this. I believe them.) Do you really think that person is going to return to this network to discuss... say... politics or economics or their local election?
It is a problem. To strip away the "emotional vomit," let's call it a usability problem. How do we make a freenet that is usable for non-psychopaths?
Edit: what I'm really saying is this:
Freenets have been done. It's a solved problem. Add some PK crypto and some hashing and some onion routing and shake.
What isn't a solved problem is: make a darknet/freenet that your mom would feel comfortable using. Make one that your average person -- maybe one with kids and thus really turned off by CP -- would want to one-click install from the Mac app store and browse.
THAT would make a serious political impact. Now you'd have hordes of average people using an utterly uncensorable chat system that was also hard to data-mine and tie to identity.
Right now, most people are going to start browsing the offerings that already exist (Tor is pretty easy to set up) and see stuff like "world's largest archive of hard-core lolita!", close the app, delete it, and never return. That's why these networks are not very popular, and it severely limits their political impact.
The problem is thinking that there is a way. You can't. Data can easily be obfuscated. As long as bits are flowing through your machine from sources that you don't control, you could be middle man to anything.
api stated it was a hard problem. Hard problems dealing with cryptography can take years and sometimes produce PhDs when they are cracked. It is easy to tell that it is a hard problem; it is hard to say for sure that it is not possible. If you have done the hard work of showing it impossible, please share and save others from repeating your work.
There is at least one free Ph.D thesis topic in this thread. I think solving this problem and creating a darknet that most people would want to use would be at least as impactful as the development of Bitcoin. People said that was impossible too.
I see a lot of people saying that you cannot analyze the data flowing through a system - and yet, I have to wonder, how then do people find the data they are looking for? It seems to be a rather fundamental theorem that if you know a URL to a file, no, there is no way to stop you passing whatever you want. But if you only have search, then surely the search itself implies that you have a way to classify the data!
Personally, I think this is enough to stop the spread (if not the storage) of horror. In other words, someone might safely store their cache on my computer without my knowledge (heaven help me) but I refuse to store anything that is searchable as a horror.
I don't think this system is searchable. You'll be storing a bunch of encrypted blobs, each of which can be retrieved and decrypted by anyone who knows a short key. One of them might be a CP picture (if you store enough of them, statistically one of them will be a CP picture). One of them might be an index full of other short keys, pointing to a whole host of CP pictures spread across the mesh.
If you're fine with that, that's good enough; you can run this system, and political activists and perverts alike will be able to stick their blocks there, accessible to anyone to whom they can pass the relevant short keys. But many people will be uncomfortable with even this much.
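For the curious, a minimal sketch of what those blobs and "short keys" might look like, assuming a convergent-encryption-style scheme roughly in the spirit of Freenet's CHK keys (the function names are made up, and the XOR keystream is a stand-in for a real cipher, not something to actually use):

    import hashlib

    def xor_stream(data, key):
        # Illustrative keystream only -- NOT a real cipher.
        ks = hashlib.sha256(key).digest()
        while len(ks) < len(data):
            ks += hashlib.sha256(ks).digest()
        return bytes(a ^ b for a, b in zip(data, ks))

    store = {}   # what a relay node holds: locator -> opaque ciphertext

    def put(plaintext):
        decrypt_key = hashlib.sha256(plaintext).digest()   # derived from content
        ciphertext = xor_stream(plaintext, decrypt_key)
        locator = hashlib.sha256(ciphertext).hexdigest()   # what the node indexes by
        store[locator] = ciphertext
        return locator, decrypt_key                        # the "short key" you share

    def get(locator, decrypt_key):
        return xor_stream(store[locator], decrypt_key)

    loc, key = put(b"leaked memo, or something awful -- the node can't tell which")
    print(get(loc, key))

The node storing the block sees only ciphertext and its locator; anyone you hand the locator plus key to can fetch and read it, which is exactly the property being discussed.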
Yeah, from the quick glance I took this system does not appear to be searchable, but api's desired system does not need to model this one.
api's requirements:
> How do we design a system that is anonymous and un-censorable where users can opt out of being relays for certain types of data?
As long as we are reasonable and take "un-censorable" to mean "very difficult to censor" and "users can opt out of certain types of data" to mean "heavily limit traffic of data type <x>", it seems like a hard problem until proven impossible.
How then can it be used to share anything? I can see how it could be used as a secure, distributed backup (which itself is rather handy) but I'm not sure how it can be used to distribute data.
It's (AIUI) meant as a replacement for pastebin etc. - you host something on this mesh, and then you only have to spread a short hash key around. It also lets you do the wikileaks thing of publishing a bunch of encrypted data which you could later release the key to.
In the "OMG, think of the children" case: a number of cases that went public (some even linked here) agreed that legally, child porn is "I recognize it when I see it" kind of subjective. I'm obviously talking about teenagers here and different moralities or a missing context (such as those 'taken for fun' or 'sent to a friend, privately and deliberately' cases).
api might, from his subjective view, decide that this as-yet-never-encountered image is bad/evil/perverse. How would you ever create an algorithm for that, other than 'api, please press a button that says "fine by me" or "no way in hell", right next to the image in question'?
It is not clear to me that a system could be created that would be fine-grained enough to take individual preferences into account without general AI. I can imagine a coarse-grained system where none of the standards match perfectly but some get close. A user would have to pick a standard and live with the good and the bad that came with it. Not perfect, but more choice than what you have if you sign up to be a Tor relay or run Freenet now.
This isn't about cryptography; this is about what data can be pulled out of bits transferred through your computer. There is no requirement that the bits be encrypted: they can be obfuscated, hidden, or just not recognizable as illegal to you. There is no way for you to verify that a collection of bits, put on your computer by a third party, does not in fact represent something illegal. You would have to have access to every existing and theoretical encoding, encryption and obfuscation technique, and use them in every theoretical combination, to verify such a thing.
The hard work has already been done and it is easy to demonstrate the problem and the impossibility of telling 'good data' from 'bad data', assuming the system is cryptographically secure, as by definition a cryptographically secure system resists analysis of content.
Consider the problem of one time pads. If I have two messages the same length, one made of 'good data' and the other consisting of 'bad data' and I encode them both with different one time pads, then it is possible for the resulting ciphertext version of each message to be identical. Another way of putting this is that for any given ciphertext that has been properly encoded with a one time pad, the only information available about the plaintext is the length of the message (assuming you know already that a one time pad was used) and nothing else.
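To make the one-time-pad point concrete, here's a tiny worked example (the ciphertext bytes are fixed arbitrarily for reproducibility; a real OTP uses truly random, never-reused pads):

    # Two different 9-byte plaintexts...
    good = b"meet at 9"
    bad  = b"the plans"

    # ...and one arbitrary 9-byte ciphertext.
    ciphertext = bytes([0x42, 0x17, 0xA9, 0x03, 0xFF, 0x68, 0x2C, 0x91, 0x5E])

    # Work backwards to the pad that maps each plaintext onto that ciphertext.
    pad_good = bytes(c ^ p for c, p in zip(ciphertext, good))
    pad_bad  = bytes(c ^ p for c, p in zip(ciphertext, bad))

    # XORing each plaintext with its own pad yields the identical ciphertext.
    assert bytes(p ^ k for p, k in zip(good, pad_good)) == ciphertext
    assert bytes(p ^ k for p, k in zip(bad,  pad_bad))  == ciphertext

Given only the ciphertext, every 9-byte plaintext is equally consistent with it, so the relay learns nothing but the length.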
api:
> How do we design a system that is anonymous and un-censorable where users can opt out of being relays for certain types of data?
So not necessarily following any of the specifications of the system in the article.
I read his specification to mean that users are anonymous: they can post data and it cannot be tracked back to them. I do not see this as necessarily requiring that the data be filtered in an encrypted state, only that it cannot be tracked back to a submitter who took reasonable precautions.
It is. What «api» has been proposing is literally, "I want a no-censorship network which I can censor."
That it is paradoxical does not necessarily make it impossible, though. The goals are certainly contrary but I am not certain that they are contradictory.
If you think about community-based censorship, this could probably be arranged even in an anonymity community, as long as it had active-enough participation. A popular search engine like Google can have tremendous ability to censor others even on a network like Tor where people cannot easily be censored.
The chief problem «api» faces is that his/her aspirations are too individualistic and unimaginative. You could always put the to-be-censored material in an encrypted archive and distribute the link to the material with the password to it -- this sometimes happens with BitTorrent (and then you'd have to click on ads to get the password and it becomes a nightmare). Then nodes cannot inspect the content. So what are you going to do, limit content-types? This was done by Napster, where only MP3s would be shared -- but a piece of software quickly came out called Wrapster which "wrapped" other files in MP3s. There exist JPEG steganography tools as well, both hiding files within the least-significant bits of the image data as well as in parts of the JPEG which do not get interpreted by a normal JPEG reader (e.g. appending a RAR archive to the end of the JPEG image).
I say "too individualistic" as well because any sort of relay net where the nodes themselves inspect the content that they trade is going to expose itself to a possibility of systematic censorship. "I know that you know what you were sending me" is a horrible way to start your cryptosystem.
Nonetheless, there might be hope for a sort of global data-store which the nodes collectively take responsibility for, which nodes collectively trade and where nodes can vote to "veto" certain indexed files. The idea would be that you can't take down the data store by taking down individual nodes, you can't prove which node "uploaded" a file, and you can't necessarily fault the nodes for failing to down-vote a file tracked by the community since hosting the file is a collective decision, not an individual one. It would have to use central aspects of the design of BitCoin alongside central aspects of anonymity networks, but I don't see why it would be impossible.
Well, if no one opted out of any data type then it would be just like some systems we have today. If most relays opted out of data type x then the result would probably be that data type x would be less anonymous than other data types - it would take fewer conspirators to subvert the system for data type x. Similarly, anonymity from government Z is effectively lost if it controls m% of the nodes on Tor.
Though I do not study cryptography professionally, that would be my current guess.
I doubt this is easily possible, with Tor as designed.
Maybe you could establish a blacklist of CP sites, and that could be applied at the entry and exit nodes of the Tor network. This blacklist would have to be public and checked by many that it didn't contain non-CP sites, so in effect it would be a public directory of CP, which is problematic already.
Then, those that run entry and exit nodes could voluntarily apply the blacklist. In this way the Tor community could have its own values, while still being independent of any authority.
But this assumes that CP will remain restricted to certain domains in the .onion system, or the traditional DNS system. Which of course they won't. Maybe there will be one .onion domain per picture. Maybe there will be a Flickr of .onion where it's not so easy to figure out who's doing what. Then you'd have to lean on that service to police its own members' content.
I can imagine various messy and imperfect ways to limit the amount of CP in the world, or at least make it harder to find, but we just don't have good legal models for dealing with true freedom of speech. And our institutions today would rather persist in the fantasy that they can completely control speech, than accept that their role might just be to advise the citizens on how to police themselves.
> How do we design a system that is anonymous and un-censorable where users can opt out of being relays for certain types of data?
If you do really want true anonymity and un-censorability as guarantees of the system design, then no, I don't think users can decide what they don't want to store or transmit. For, if they can, then their governments can coerce them into making the same "choice." Any preference that can be set by a user, can also be forced upon said user by a system administrator, operating system vendor, etc.
My real question is, do we need cryptography and anonymity built in at a protocol level to have something that's useful for political activism? It seems to me that there are only two real "innovations" these networks bring over, say, pushing encrypted blobs to people over SFTP drops (these, by coincidence, are both factors I've only really seen on Freenet):
1. That you have the ability to "push" content into the network, such that it will then replicate and spread through the network as it is accessed, without the possibility of an audit trail leading back to the source peer (even though the original source may know which client uploaded it, each peer only knows which other peer they got it from, so all you need to ensure anonymity is an internet cafe);
2. That content cannot be removed from the network easily--as there can always be dark peers who have copies of your data block, who will come online later and repopulate the network even if it has been seemingly purged of a block (by, say, all involved homes and data-centers being raided by the feds)--and that this happens pretty much transparently to the people involved, since people are always joining, leaving, and re-joining the mesh/swarm/whatever-it-is.
Encryption need only happen on a layer above this system, where and when it's desired. Anonymity need only happen at the end-points: the users can just access the system over Tor if they don't have the requisite internet cafe/seven proxies handy.
As long as you're just passing cat pictures around, why not just throw them onto a simple, infinitely-sized, everyone-can-create-files-but-nobody-can-delete-them DHT-based "disk"? And if you're passing political activism around, just encrypt and sign it like you were going to send it over email, then drop it in the mesh and email the URN instead. (This is presuming a stable PKI key-publishing/querying infrastructure as well, of course.)
And if you want to make it convenient for end-users, just make a browser extension that can load those URNs through the mesh as if they were regular HTTP URLs, and does the decryption and signature-validation automatically--and have the mesh software install that browser extension--and then you'll have something.
>As long as you're just passing cat pictures around, why not just throw them onto a simple, infinitely-sized, everyone-can-create-files-but-nobody-can-delete-them DHT-based "disk"? And if you're passing political activism around, just encrypt and sign it like you were going to send it over email, then drop it in the mesh and email the URN instead. (This is presuming a stable PKI key-publishing/querying infrastructure as well, of course.)
The latter is basically equivalent to this system, and so your system would have exactly the same problems: the only way you could avoid being a relay for child porn is to refuse to relay any encrypted content, at which point your node is not helping the political activism. Allowing unencrypted content also leaves you much more open to traffic analysis (if only a small fraction of data is encrypted, it's much easier to find the nodes that are inserting the political activism data).
Right, I'm not claiming that my suggested alternative is able to avoid what I might provisionally call the AUE theorem†; I was just suggesting the alternative as a way to separate out the "dumb" everyone-shares-one-infinite-distributed-hard-disk block-transfer layer, from the "smart" only-certain-people-can-see-certain things layer. The lower layer is an infrastructure protocol, with about as much Ethical impact as the Internet itself (to be specific, it could be compared to, say, BitTorrent.) The upper layer is where ethical responsibility comes into play.
We can probably convince your mother to download an app from the App Store that integrates just with the lower layer--hey, it's just like Dropbox, but bigger! [Well, as long as anyone and everyone can read random samples of your data if they like...]--because the upper layer, with the encryption and signing, will siphon off all the stigma of not-so-above-board usage of the protocol and attach it to itself. It's no different technically, but it is very different socially.
The advantage of having one reviled app on a larger infrastructure is that that reviled app gets to "hide" its blocks among all the above-board usage of the infrastructure. Like another poster in the thread said, if you go onto Freenet or the Tor Directory, the links to CP sites are plain and obvious, because it's a large part of what's going on there. But if you could look at your own disk usage as a node in this network, I imagine the number of encrypted blocks as compared to, say, plain-old MPEG frames of TV shows, would be vanishingly small. (And it'd be relatively impossible to define which is which, either, since this infrastructure has no "index" or metadata; it merely is a big bucket of blocks named by their content hashes, of which most--not just the encrypted ones--are meaningless unless you have another block giving the order in which to string them together to make a file.)
† "Anonymous, Uncensorable, Ethical: pick two."--named after the CAP theorem of database design. Well, it would really be the AUE conjecture for now--but I'd love to see someone prove it either way; it seems like the sort of thing that is amenable to that.
> it'd be relatively impossible to define which is which, either, since this infrastructure has no "index" or metadata; it merely is a big bucket of blocks named by their content hashes, of which most--not just the encrypted ones--are meaningless unless you have another block giving the order in which to string them together to make a file
If you can't tell whether a given block is encrypted data or just part of an mpeg, how can you choose to store only unencrypted data? I suppose you could make an argument for building this system on top of a nonencrypting distributed data store, like bittorrent, for the sake of looking like that nonencrypted protocol to anyone intercepting the traffic. But there would have to be some metadata that let the encrypting protocol know where to find its stuff, and if the user who's downloading it can tell, so can anyone intercepting the unencrypted stream. Wouldn't you just end up with a situation where the upper layer is to the lower layer as freenet is to the internet?
> If you can't tell whether a given block is encrypted data or just part of an mpeg, how can you choose to store only unencrypted data?
I didn't say you could :) The point of this alternative is that it separates the stigma 1%-99% toward the upper layer, but puts the implementation 95% into the lower layer--and therefore we get a stable, un-censorable distributed storage network on the lower layer with the "abuses" of the upper layer (CP and political activism both) being an unavoidable free rider, but not something "visible" (in the sense of seeing CP sites listed in your index directory) to people only using the lower layer.
This situation, of course, also describes the Internet as it is today: protocols like HTTP and SMTP are used by everyone, and also by some unethical people who send their stuff over those same protocols in encrypted containers using anonymizing proxies.
The difference here is that the two big hurdles--of identity-diffusion over time after initial data seeding, and of guaranteeing data persistence as long as there continue to be consumers of the data becoming persistent-caching peers--are taken care of by the lower layer, allowing the upper layer to just handle transparent encryption in whichever way it sees fit.
(And thus can we also replace the upper layer if we come up with a better way to anonymously and securely get the right metadata into the right hands, without having to throw out the network effect of all the extant peers. They simply start transmitting-and-caching blocks representing the new kind of metadata exchanges along-side the blocks representing the old kind.)
Regarding the social implications and imperatives...
I think that groups like LulzSec provide a public service (see "Why the Joker and Not Batman is the Savior of [sic] Us All" http://thisorthat.com/blog/why-the-joker-and-not-batman-is-t... ) in that they show the importance and the need for everyone to be security conscious. I wish there were more groups like this out there raiding and dumping stuff periodically.
I wonder whether it'd be politically or legally feasible to have a law enforcement agency that just trolled around the internet and attempted to crack services that citizens depend upon.
The reason why this is relevant and important to dark nets, is that currently the only folks who use dark nets are folks who have something to hide. That might be folks who are illegitimately persecuted by governments, or folks who are legitimate criminals. These two groups are functionally indistinguishable, even if their intents and causes are different. They both have data they're trying to hide and communicate, without exposing themselves to authorities.
You could require receivers to calculate a difficult problem where the difficulty was based on the "hate" people had for the content, similar to what Zed Shaw tried to do a while ago with IRC (?).
Basically, each person that didn't like the content would spend some CPU time to up the difficulty of transferring it. After awhile, it'd take someone who wanted the content so long that they would give up, in which case the content wouldn't be transferred any more.
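Something hashcash-flavoured could implement that; here's a rough sketch (the vote-collection part is hand-waved entirely -- the point is just that each "hate" vote bumps the number of leading zero bits a would-be downloader has to grind out):

    import hashlib
    from itertools import count

    def required_bits(dislikes, base=8, per_vote=1):
        # Each recorded "hate" vote raises the proof-of-work difficulty.
        return base + per_vote * dislikes

    def prove(content_id, bits):
        # Hashcash-style search for a nonce whose hash has `bits` leading zero bits.
        target = 1 << (256 - bits)
        for nonce in count():
            digest = hashlib.sha256(f"{content_id}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(content_id, nonce, bits):
        digest = hashlib.sha256(f"{content_id}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))

    bits = required_bits(dislikes=6)      # a few votes: still cheap to fetch
    nonce = prove("blob-1234", bits)      # at 40+ bits this becomes crippling
    assert verify("blob-1234", nonce, bits)

Whether you could collect and tally those votes without re-introducing a censorship authority is, of course, the hard part.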
I disagree. I found his post to be very informative. I like the idea of an encrypted anonymous network precisely because I have this romantic notion of people of the world sharing important information with each other that can't be stopped or even tracked by governments or big corporations.
If the reality is that my contribution of resources to a project mostly just benefits a bunch of child-porn creators doing horrible things to innocent human beings, then I have to admit that my romantic notion is naive and behave correspondingly.
Sometimes you just can't have nice things because the worst members of society will criminally abuse them.
The first one with the 2-tone metal back is still the prettiest IMHO. And it seemed the sturdiest. A drop resulted in a dent, not a crack or finger-shaving glass shards.
Data: Back it up. Back up your backups. Stop fucking around. If you don't get hacked, your storage will fail.
Software: Don't install shit you don't trust. Don't trust shit you can't verify.
Passwords: Don't try to remember them!!
It's 2012, not following these simple rules is inexcusable.