Do you believe that you have a moral right to force them to do anything?
Is there a moral imperative - are they morally required to secure their systems, such that others could or should demand that they do so? In certain cases there definitely could be (for example, a hospital storing its customers' confidential data), but in the usual situation, where it's just their own data and their own money, isn't it their moral right to decide how high a fence (if any!) they want to build around their property?
Telling someone "hey, you forgot to lock your door" is a good thing, but ultimately IMHO it's their decision if they want to lock the door or accept those risks.
Yeah, I agree 100%. But in a lot of the cases mentioned in this thread, the private data of the company's customers was at risk. For example, the system in the original article let you access other people's names, addresses and national ID numbers. I was thinking only of situations like these; there's no reason to threaten a company if they're the only ones at risk.
Okay, if the private data of the company's customers is at risk, then that is a reason to push for action, but it matters how you do it. In this case I don't see a need to reinvent the wheel - this is a common issue for which all the options, pros, cons and risks have already been discussed, and there is a reasonably clear consensus (with some debate about nuances) on the expected ethical action: https://en.wikipedia.org/wiki/Responsible_disclosure or http://www.cert.org/vulnerability-analysis/vul-disclosure.cf...? . Many nations also have more specific guidelines, issued e.g. by their local CERT, that are adapted to the local legal situation.
The process works reasonably well even if the vendor is uncooperative. In that case it is somewhat similar to the message proposed above, but substantially different: first, the threat is not that you'll destroy or publish their data (which is extortion) but that you'll publish your description of the vulnerability (which generally is not); second, the threat is not that you yourself might damage the data (i.e., stating that you'd be willing to do an immoral thing) but that some other, immoral people might; and third, the disclosure is not conditional on receiving money from them.
I can see that the proposed threat was meant in the same spirit, and is somewhat similar to the "threat" implied in ordinary responsible disclosure - i.e., "if you don't fix this within 45 days, we'll publish information that will most likely get you hacked." But it is substantially different, the details matter a great deal, and you'd need a good reason to deviate from the standard responsible disclosure guidelines.
I mean, what do you do when, after sending a message like "I have hacked your system, accessed <this information> and modified <that bit of data>, using <this procedure>. You have <this time> to send <this much> Bitcoin to <this wallet>, or I <copy or trash> your database. Thank you for your attention.", you see that they have not fixed the issue but have transferred the requested Bitcoin? That would be a possible direct result of your actions. Is it a desirable outcome? Is it an ethical one?