Hmm. My Mom was a trained engineer. She's no longer with us, but if she'd lived long enough to select a Bitcoin wallet, I have no reason to believe she'd have picked a poor one...
(And yes, I get the irony that they probably picked this headline out of fear that "not your dad's..." was sexist.)
actually, we chose the title because it made us giggle a bit more than "your dad's". the phrase "not your mom's" is relatively common and normally applied to stuff like sandwiches, jeans, or some other somewhat-sexist object.
my mom coded cobol for harris bank for many moons :)
Without NewsBlur support, I'm not really likely to rebuy.
I respect the work involved in moving what had been a Google Reader client into a true multi-provider client, so I would rebuy it if it supported the provider I actually use. :-)
The author mentioned before that NewsBlur won't be supported, as the service works in a different way from Feedly, Feedbin & Co. -- I imagine this could be related to NewsBlur's constantly live-updating nature, but then again I only tried NewsBlur briefly, so maybe I just got that wrong.
This doesn't seem to me to mean as much as the author thinks.
By the same reasoning, after all, email as a communication mechanism is "dominated by" spam, much of it similarly shady in nature.
There is an interesting difference, though -- while spam actively hurts the usability of email, botnet traffic that parasitizes the TOR network makes the system as a whole stronger, by adding more noise in which the important uses of TOR can hide.
Yes. A central idea of the American experiment is that there are exchanges of liberty for security which it is not acceptable to make even if the people collectively decide to make them.
That is why certain liberties are enshrined in the Constitution, which is intentionally made hard to amend, rather than merely in law, which is relatively easy to change.
Looking at the number of totalitarian systems which started out as expressions of the collective will of their societies in the two-plus centuries since, I'd say the founders got this one right...
Collectively we can pass laws and change the constitution.
The Supreme Court has ruled, in Smith v. Maryland, that the government can capture message metadata from a third party, without a warrant, when a message is conveyed by that third party. So the Constitution doesn't help much here -- either we need to collectively change the Constitution, or collectively pass a lot of laws, or collectively get different Supreme Court justices appointed and have the ruling overturned.
> Looking at the number of totalitarian systems which started out as expressions of the collective will of their societies in the two-plus centuries since
Which ones are you talking about? I can't think of many. I can think of quite a few where authoritarian movements got substantial support, but I can't name many for sure where such regimes were ever anywhere near a majority, and I'd think at least a majority would be required before it becomes reasonable to talk about "expressions of the collective will of their societies".
Ok, from that point of view, it can make some sense. Even so, it seems the issue in the current debate is that the trade-off was more or less made unilaterally by the executive branch, without much consultation, which is actually worse than a "collective" decision, whatever that means.
Someone more knowledgeable than I should confirm or deny this, but my understanding was that TOR should be resilient to a single-point attack like that. On the other hand, if they can watch packet timing on a significant fraction of intermediate nodes, there is a problem.
And, hypothetically, the FISA-authorized box was only getting traffic from the one site, and not the entirety of network traffic. The Room 641A attack is far more problematic.
As I understand it, TOR uses encrypted layers, each of which tells a node where to send the partially unwrapped message next.
So, if I encrypt something to, say, three layers and send it through some TOR nodes:
1-2-3
1 knows it came from me and knows to send it to 2
2 knows it came from 1 and to send it to 3
3 knows it came from 2 and to send it to wherever
No one point in the system, IIRC, is meant to have the information necessary to compromise the entire chain. Though, if you could compromise a sizeable portion of the network, you'd be in with a significant chance of compromising any given message -- and I find it hard to believe that the government hasn't done that.
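A toy sketch of that layering in Python (purely illustrative -- real TOR negotiates per-hop keys with public-key crypto and uses fixed-size cells; the pre-shared Fernet keys and the "next-hop|payload" format here are made up to keep it short):

    from cryptography.fernet import Fernet  # pip install cryptography

    # Pretend the sender already shares a symmetric key with each node.
    keys = {n: Fernet.generate_key() for n in ("node1", "node2", "node3")}

    def wrap(message, route):
        # Add layers innermost-first, so the exit node's layer is deepest.
        for i in reversed(range(len(route))):
            next_hop = route[i + 1] if i + 1 < len(route) else "destination"
            message = Fernet(keys[route[i]]).encrypt(
                next_hop.encode() + b"|" + message)
        return message

    def unwrap_at(node, blob):
        # A node peels exactly one layer and learns only the next hop.
        next_hop, _, inner = Fernet(keys[node]).decrypt(blob).partition(b"|")
        return next_hop.decode(), inner

    onion = wrap(b"hello", ["node1", "node2", "node3"])
    hop, onion = unwrap_at("node1", onion)   # hop == "node2"
    hop, onion = unwrap_at("node2", onion)   # hop == "node3"
    hop, plain = unwrap_at("node3", onion)   # hop == "destination", plain == b"hello"

Each node learns its successor from the decrypted layer and its predecessor only from whoever handed it the blob; nothing inside a layer identifies the original sender.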
From how I read it, he did that after the experience, when it wasn't under surveillance anymore (well, at least not from inside the datacenter), as a sort of atonement.
I think the only reasonable thing to do would be to somehow encourage (without directly encouraging) people to try a variety of "test traffic" on your box, with the subtly-concealed intention of rooting the feds' box with traffic addressed to your own box (and thus not "intentional", and done by other people anyway) -- for a "hacking contest" or something.
Except that before "just supporting Linux (and maybe MacOS)" was the thing to do, it was "just support Sun (and maybe HP-UX)". And before that, it was "just support BSD/VAX (and maybe SVR2)".
So, when you take a broader view than "just support what everyone uses", you're not just helping niche platforms -- you're future-proofing.
Beyond the question of how meaningful these metrics are, wouldn't it be more useful to look at the complexity of the code you write when you use each framework, rather than the complexity of the code that implements each framework?
I suspect -- but don't know -- that these might even turn out to be inversely correlated for some types of frameworks. DSL-based frameworks often require some interesting tricks to get right.
So which is better? A complex framework that lets you write simple, concise code? Or a simple framework that requires a lot of boilerplate and complexity to use?
I think the answer to your question, and probably the point of asking it, is that It Depends. If you're writing a very simple web service that will be accessed by millions of clients for a small set of straightforward operations, you'll probably want to pick the framework with the smallest footprint and write a very little bit of custom code. If you're writing a vastly complex system to mingle support for new and legacy data and integrate with many disparate third party services, you will want a framework that does a lot behind the scenes to make your own code as painless as possible.
For everything else, aside from Mastercard, there's room for argument and discussion... but I think the only universal takeaway is that most or all of the frameworks mentioned have solid use cases, and none of them work for everyone in every circumstance.
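To make the trade-off concrete, here's a toy contrast in Python (not a benchmark, just an illustration of where the complexity lives; Flask stands in for the "complex framework"):

    # "Complex framework, simple user code": Flask handles routing,
    # headers, and the rest of the HTTP plumbing for you.
    from flask import Flask

    app = Flask(__name__)

    @app.get("/hello")
    def hello():
        return "hello"

    # "Simple framework, more user code": the stdlib http.server makes
    # you do routing and headers yourself (pass Handler to HTTPServer
    # to actually serve it).
    from http.server import BaseHTTPRequestHandler

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/hello":
                body = b"hello"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

Same endpoint either way; the question is only whether the surrounding complexity lives in the framework's code or in yours.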
The [[:print:]] class will match printable non-ASCII characters like å and ä, while [ -~] will not.
I used this once as another safeguard against pushing binary data into the database. It was a poor system to begin with if that was even a possibility... and it happened at least once before the fix and my safeguard were in place.
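Python's re module doesn't support POSIX classes like [[:print:]], but str.isprintable() is a close enough stand-in to show the difference:

    import re

    ascii_printable = re.compile(r'[ -~]*')  # space (0x20) through tilde (0x7E)

    for s in ("hello", "åä"):
        print(s,
              ascii_printable.fullmatch(s) is not None,  # [ -~]: ASCII only
              s.isprintable())                           # roughly [[:print:]]
    # hello True True
    # åä False True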
There will be situations where you need to check specifically for 7-bit ASCII printable characters only. I've worked with APIs that require everything outside that range to be escaped/encoded into it.
Email could be an example, I guess, although I haven't worked with it enough to know whether the whole "7-bits only" thing is still an issue these days.
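For what it's worth, quoted-printable is one such scheme, and MIME email still uses it to squeeze non-ASCII text into 7-bit ASCII:

    import quopri

    # Bytes outside 7-bit ASCII get escaped as =XX hex sequences.
    print(quopri.encodestring("smörgåsbord".encode("utf-8")))
    # b'sm=C3=B6rg=C3=A5sbord'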
This would seem a problem that could be solved by a cloudish sort of solution, though -- have an iPad app which hands actual typesetting of a document off to a server somewhere, sending TeX sources, and receiving a PDF in return.
A simple starter implementation might require your TeX to be a single file, and could just run TeX in a mostly-readonly chroot jail (bearing in mind that TeX can shell out, read/write arbitrary files readable by the executing user, and so on), and serve up a PDF, all wrapped in a web service of some sort.
This wouldn't be terribly fast, but given TeX's performance, the actual typesetting is likely to be the bulk of the time spent, even considering the round trip.
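A minimal sketch of the server side, assuming Flask and pdflatex are installed (the chroot jail and resource limits are omitted, though as noted above you'd absolutely want them; the /typeset endpoint and file names are made up for illustration):

    import os, subprocess, tempfile
    from flask import Flask, Response, abort, request

    app = Flask(__name__)

    @app.route("/typeset", methods=["POST"])
    def typeset():
        with tempfile.TemporaryDirectory() as workdir:
            tex_path = os.path.join(workdir, "job.tex")
            with open(tex_path, "wb") as f:
                f.write(request.get_data())  # single-file TeX source
            # -no-shell-escape blocks \write18; batchmode keeps TeX from
            # waiting on interactive input when the source is broken.
            result = subprocess.run(
                ["pdflatex", "-no-shell-escape", "-interaction=batchmode",
                 "-output-directory", workdir, tex_path],
                capture_output=True, timeout=30)
            pdf_path = os.path.join(workdir, "job.pdf")
            if result.returncode != 0 or not os.path.exists(pdf_path):
                abort(422, "TeX run failed")
            with open(pdf_path, "rb") as f:
                pdf = f.read()
        return Response(pdf, mimetype="application/pdf")

    if __name__ == "__main__":
        app.run()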
I think you're not considering enough ways in which one technology can be superior to another.
Ruby wins for the web not because it is faster _at executing code_, but because the combination of Ruby and Rails is faster _at iterating new features_; the optimization which has led it to win is performance of programmers, not code.
Likewise, Windows is inferior to Unix as an OS design -- but it is vastly superior to (non-MacOS) Unix in the time it takes to get a semi-technical user from a machine still in the box to a machine which can do the things most users care about (browse the web, edit documents, and so on).
In the long run, which matters more? Machines are getting faster much more quickly than programmers or users are getting smarter...