
I love Mickens' work, and think this is overall a great presentation, but I feel like it misses (or maybe just doesn't fully explore) an important point.

Start with the Internet of Things example. He chalks up the abysmal security record of IoT devices to two factors: insecurity keeps IoT devices cheap, and IoT vendors don't understand history. And there's a lot of truth in both these assertions! But they are both just expressing facets of a deeper, more fundamental reason: IoT devices aren't secure because their customers don't demand security.

This deeper problem completely explains why the two higher-level problems he observes exist. Making your product secure makes it more expensive and slower to come to market than just leaving it wide open, and the IoT vendors know their customers care about cost and availability and don't care about security. So they do the rational (in the homo economicus sense of the term) thing and optimize for things their customers are actually willing to pay for.

The same causality can be observed in the ML world. Mickens asks why people are hooking ML systems whose operation isn't fully understood to important things like financial decisionmaking and criminal justice systems. The answer is that the customers demand it. ML is trendy and buzzworthy, so if you're a vendor of (say) financial systems, and you can find some way to incorporate ML into your offerings with a straight face, now you have an attractive new checkbox on the feature list your salespeople dangle in front of potential customers. And once the effectiveness of having that box checked becomes clear, you kind of have to do it, even if you know it'll be ineffective or even harmful, or risk losing business to a competitor with fewer scruples.

All of which is to say that what we see playing out in both these scenarios isn't really the vendors' fault. They are instead classic examples of market failure. People end up buying shoddy products because spotting their shoddiness requires technical expertise they don't have; responsible vendors who try not to make shoddy products lose sales to irresponsible vendors who don't; eventually all the responsible vendors are out of business and the only products available to buy are shoddy ones. There are lessons to learn from this, but they're economic rather than technological.



This is like saying doctors should push cheap drugs that may or may not make your testicles explode because customers don't demand non-testicle-exploding drugs.

We trust doctors to take into account all the nuances of medicine that laymen have never even heard of, and give us good advice. Because not everyone can be an expert on everything.

It's the same with software. We can't expect everyone to be an expert; it's up to our industry to act responsibly.

Sure, it's "market failure" insofar as duping uninformed people is a good way to make a quick buck, but the deeper issue is moral failure / failure to take responsibility.


...and we don't just rely on drug makers, for example, to be moral and take responsibility. We have government agencies that _require_ strict testing of their safety and effectiveness. If we left it up to the market, we would get inferior results. The problem is, we have no FDA equivalent for tech security.


That's a great point made with a pretty suspect example. The FDA is very subject to regulatory capture.


Regulatory capture is certainly an important problem, but the pre-FDA record suggests strongly that the FDA we have is much better than not having one. I'm not suggesting regulatory capture isn't a real problem, just that the current situation in tech security (no FDA equivalent) is worse.


Regulatory capture is certainly an issue, but it doesn't mean that the FDA isn't better than not having a regulatory regime at all.


The cost of the FDA is that the process is slower.

The benefit is that medicine is effective and measurably safe. It’s obviously necessary, and the supplement industry shows why.


I agree, this is a core reason we have a government.


We have the FTC/FCC in the US and the GDPR in the EU.


Doctors would totally push cheap testicle-exploding drugs on their patients if there wasn't extensive regulation preventing them from doing that.

They do push life-explodingly addictive and harmful painkillers on their patients, despite knowing the harm it does, because regulations don't prevent them from doing that.

What would be the consequences of an FDA for IoT? Huge price increases, sudden workability of patents as a means of protection, but more security and better products?


> Doctors would totally push cheap testicle-exploding drugs on their patients if there wasn't extensive regulation preventing them from doing that.

And who would have had some of the most input into said extensive regulation?


Customers don't demand non-testicle-exploding drugs because that's already the standard, in the same way that customers don't demand software that doesn't wipe their disks at random intervals, because software already doesn't (careless usage of dd notwithstanding).

If drugs started exploding testicles you can bet customers would start demanding they didn't (male customers at least). Just look at the Thalidomide incident: I've seen it in the news within the last decade, and it happened nearly 60 years ago at this point.


I think consumers are a little more savvy than people in this thread are giving them credit for. Sure, nobody wants exploding gonads, but most folks couldn't give a whit if some overseas teenager manages to sneak a look at the contents of their driveway. People just want a cheap camera to catch their neighbors letting the dog poop on their lawn, and if it means becoming part of a botnet, who cares.

The market has spoken: cheap wins over secure, time after time. The consumers know, and they don't care, because to them the stakes are just not that high. Their genitals will be fine, and who wouldn't mind an extra set of eyes on the front yard.


Consumers are not savvy as a group. There is always an "eternal september", new suckers born every minute, that can be abused. Beyond that, there are plenty of ways that you can maintain consumer trust while abusing it at the same time. You can sell them products that hurt them in ways they don't understand, and you can control the media surrounding your product enough to ensure that they don't understand. Advertising has a basic purpose of making people aware of products, but it also serves to mislead them on the value of things, overstating the benefits and understating the costs.

This idea that people understand the total consequence of what they do with their money is so simplistic that it's stupid. Markets don't "speak" from a vacuum; they demand what their constituents are convinced is valuable, regardless of the accuracy of the valuation. Lies sell garbage all the time, and the consumer isn't to blame for wanting it; the professional, skilled, psychology-wielding liars who sold them on it are.

Hypothetical example: pay for a bunk study that concludes eating apples prevents hair loss, benefit for decades, with negligible repercussions to your business when the lie is uncovered. I'd be skeptical if you claim you can't identify several real examples yourself.


I think you are missing a crucial point. I as a consumer really do not care in the least if someone hacks my device. Worst comes to worst, I either do some sort of factory reset or just throw it out; I was probably looking to buy the shinier version anyways. Who cares?

I really don't care if my tea kettle is part of some botnet. I can't even imagine a reason why I should care. I guess it sorta sucks for the people getting DDoSed :/

For instance, my router cost me $20. It is probably full of security holes. I don't care. To buy a router that was secure I would have to pay more than $20. I would not feel any benefit for spending that extra money. So I don't do it.

On the other hand, as a developer I'm always thinking about security because it is fun and I feel a sense of responsibility for the things I make.


yup - the "consumer" is not a source of moral force. It's an approximation of whatever purchase decisions people make.

So consumers would of course be happy if you made plastic straws - look at how many get sold!

Now if you told people they would not have plastics, and everything would cost 5x more because we don't have a cheap packaging option, or that they couldn't transport liquids anymore because we don't have bottles - well, you can imagine those customers and consumers would be upset.

The economy is not moral.

Morality is laws and regulations which impose restrictions on the system to make it fair, environmentally friendly, less exploitative, etc.

We try and let the market resolve as much of this on its own, so that we can have market efficiency without tying it up with regulations.


Consumers are happy about plastic straws because it was conveniently (for the producer of straws) not communicated to them how manufacture of plastic straws is irresponsible and creates external costs to the environment at no cost to the producer.

You can't honestly believe it's both okay to mislead people in commerce and okay to put the onus of good judgement on them.


That's leaving aside the fact that important information can easily be stolen.

Personally, I think the consumer should face financial liability when IoT devices are used in massive attacks that create problems for others.

Just because you chose a shitty vendor with a shitty product doesn't mean the entire internet should suffer.

I am a fan of things like BrickerBot and I hope that sort of thing continues aggressively.


How can you reasonably ask a consumer to evaluate the security of a product when many don’t have basic education? Also, many reputable companies that make “good products” have security breaches, so you can’t just rely on reputation.


Force the consumer to force manufacturers to make less shitty products. Until that happens, I hope BrickerBot-type attacks continue to happen against the cheapo crap.

Sure, good products can have a security flaw. But IoT and home routers are complete garbage. The consumer should be held liable for being a part of massive disruption of the internet.

It's the equivalent of manslaughter: you might not have intended it, but in this case you didn't do anything to stop it and helped cause millions in damage. I quite frankly don't care about their education. That's their responsibility, just the same as you need to learn to drive.


The way the consumer would force manufacturers to do this is by passing laws that would make manufacturers liable.

They would do this because of information asymmetry and the collective action problem. At the point of purchase, consumers don't have the information to make a choice, and they don't have the ability to will an alternative into existence so they can choose it. Improvement in collective outcomes is often hard to achieve purely through market means, which is why we don't rely on markets to solve all these problems.


Why make millions liable when a few hundred bad actors can be trivially dealt with?


Would you similarly not care if drug dealers sold drugs in your driveway, Viagra sellers sent spam from your email, and so on? I think you're being disingenuous.


Not just that, but the customers informed enough to care can do it themselves. Most people on this site care about security, and most people on this site can set up their own home automation, servers, security cameras, and/or speaker systems. So the people buying these future botnet nodes inevitably end up being the unsavvy.


Doctors have done that - and pharma firms would and have done worse.

The reason they don't is that there are regulations and trials which have to be passed before you can go forward.

And those are things which people on HN regularly criticize - pointing out that life saving drugs would be on the market faster if these regulations were not so "onerous".


What you are describing is an example of customers demanding non-testicle-exploding drugs being why we don't have them.

When a drug causes problems, customers often end up suing the manufacturer/developer of said drug. If doctors prescribe said drugs after it becomes common knowledge that it could cause a problem, they also might be sued for malpractice.

Are people suing IoT companies for poor security practices? If so, are they winning? Without that, what incentive is there for those companies to do anything more than they already are? It's not like there's actually any level of brand awareness for the vast majority of these devices, so it's easy enough to just ignore complaints and rely on the fact that nobody pays attention to your track record when it comes to this market.


nobody's ever sued me for leaving flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly. yet, every day, i resist the overriding temptation to do exactly that. why? well, gosh darn it, because it's the right thing to do!

i think the drive to reduce every bit of human behavior to economic incentives backed by a government force structure is ultimately counterproductive. would you agree?


>leaving flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly. yet, every day, i resist the overriding temptation

If there were millions of dollars to be made in the flaming dog shit Segway getaway business, I am positive many would succumb to the temptation.

So your comparison is unfair: it's easy for you to avoid such behavior because you gain nothing from it. Not securing a device is a significant economic win for the manufacturer, as explained by the thread originator. You get a device that "just works", as opposed to one with complex key setup instructions that by necessity must default to the misconfigured state (else, you can bet everybody is using the defaults).


If you're trying to explain the behaviour of unusually upstanding, moral people, sure. If you're trying to deal with anything larger than a small and highly committed group, no.

> there are three classes of humans 1) those who will throw the rock at you with the mob 2) those who will not throw the rock and avert their eyes 3) those who will speak out against throwing the rocks

> the ratio is probably 90:9:1


I’d be a little more optimistic and put the ratio at more like 9:90:1.

We don’t typically go around continually throwing actual rocks at each other, so it is possible to make progress on these issues.


https://www.politico.com/blogs/media/2015/03/new-york-times-...

The author of the tweet quoted was speaking metaphorically based on his own experience. Virtually no one supported him publicly when he needed it.


Good for you, but I (and I'm sure many other people) would happily leave flaming bags of dog poop on your front porch before ringing your doorbell and making a getaway by segway while cackling madly if nobody ever sued me.


How much money do you earn by leaving flaming bags of dog poop? Can you get rich that way?


I agree, but in this system we indoctrinate our children to operate on profit motives. It took me many decades to understand that money is, ironically, worthless.


if you consider how doctors are happy to prescribe drugs that are not ideal (understatement) for their patients' health for money from pharmaceutical companies, your argument falls apart. consider the opioid epidemic


There aren't enough doctors in medicine to go around. There probably aren't enough doctors (as in PhD) in all the other technology industries supporting medicine.


Customers can't evaluate the security of IoT devices and, furthermore, they can't even evaluate what the downside of an insecure device is. So my printer is insecure - what does that mean for me? How much should I care?

At least with cars, you know what an unsafe car can do (kill you) and it still took Ralph Nader's book and citizen pressure to set up a federal agency to oversee car safety. Also, even when most people know that seatbelts are a good idea, we still have seatbelt laws because they mean fewer people die.

https://en.m.wikipedia.org/wiki/Unsafe_at_Any_Speed?wprov=sf...


They can, for some of them, if you give them this pic from Brian Krebs:

https://krebsonsecurity.com/2012/10/the-scrap-value-of-a-hac...

Got through to a lot of them that way. They were more likely to practice better computer security or buy less "smart" products that don't need to be smart.


That pic made my eyes glaze over. It's a good concept, poor execution.


Maybe they shouldn't have those devices then.


Let's evaluate what makes more sense: OEMs and programmers that have an understanding of the software and hardware and the programming they undertake being responsible for their own work.

Or blaming the users for not understanding what is essentially a black box, an entirely unknown quantity before (and after) you buy it, often even when the user has very high technical skill.

I know a lot of programmers are allergic to taking responsibility for their products; maybe it's time that changed.


> They are instead classic examples of market failure.

The way to fix market failure is well understood, though: regulation. You're arguing for regulation of the software industry, just as we have regulation of the medical industry or the oil industry.

(The software engineering industry is, I would argue, drastically under-regulated.)


> The way to fix market failure is well understood, though: regulation. You're arguing for regulation of the software industry, just as we have regulation of the medical industry or the oil industry.

That's an excellent idea. I hope your country regulates the hell out of your nation's software industry. Meanwhile I'll buy a rake to help me gather all the money your economy will throw my way when developing software in your nation suddenly becomes cost-prohibitive and your economy has no alternative but to outsource it to nations unencumbered by regulation.


Do you really think that the selfish drive to sell insecure, under-regulated software is an argument against regulation?

I don't think anybody denied that capturing an unregulated space by selling shoddy and cheap products is a great way for a ruthless actor to make a ton of money. I'm really not sure what point you're trying to make here.


We have regulations for software and software services in the EU (e.g. GDPR) and the US (e.g. DMCA, HIPAA), and the economy has not collapsed.


You don't understand. We should have a global government, and then regulate software all over the world at the same time


Not necessarily. It could also be done by allowing people to sue makers of insecure software or hardware.


This is worse. This leads to lawyers making the critical decisions instead of regulators and auditors. The latter group at least has some familiarity with the subject area.


No, judges and juries decide lawsuits. They have the benefit of being harder to bribe than regulators.


One leads to the other. Lawyers start making a bunch of decisions on corporate strategy and product design because they have to anticipate the rulings of judges and juries. Their decisions are usually going to come late in the game though, leading to lots of last minute, shoehorned changes because they aren't able to review early enough in the development process (since lawyers are expensive and you can't get enough of them be involved early).


Only if you have money to get to court. Everybody else would be left depending on the good will of big companies. That's why courts should be the last resort, not the first. We need regulation, and if everything else fails the courts should be the way to go.


Liability is generally a much better approach than specific regulation. Lawsuits happen after the fact and concern actual harm suffered by actual people. Damages are assigned based on this actual harm. That means that in liability system the price of bad behavior is approximately the harm it causes, which is exactly what you want. Liability doesn't require everyone to actually go to court, because almost all lawsuits or threats thereof are settled based on expectations shaped by previous cases that did go to court. Further, class action lawsuits allow large numbers of harmed people to be represented in a single action at no cost to themselves.

Regulation, on the other hand, is an ex ante affair. It involves some central planning authority, whether Congress or some administrative agency, trying to create rules that they believe will prevent future problems. The regulator will always get it wrong to some extent, often to a very large extent. Rules can be too specific, stifling innovations that would allow actors to achieve the same or better results with different methods. They can be too strict or too loose. The rule making process is also necessarily slow, so regulations tend to come too late and linger too long after technology has moved on. Finally, regulations are ultimately political, driven by what will translate into votes, not necessarily efficiency. If they represent a right-wing constituency, that will mean looser regulation; if a left-wing constituency, tighter regulation.

What's interesting about liability is that companies will buy insurance for it. The insurance companies will demand compliance with certain rules in order to be covered--essentially private regulations. But unlike government regulation, there are multiple competing insurance companies. The resulting market for insurance means that the market searches for the optimal balance between harm prevention and profitability. Insurance companies have a strong incentive to devise the rules that provide the optimum level of security for lowest cost possible.


I agree that liability is probably the best approach and is long overdue for software. The problem is the standard for proving security nonfeasance. My thought is that if your product was found to have a security problem and you did not have a security audit performed by a licensed security auditor, then you are liable. But I'm not sure there are licensed security auditors in the way that, for instance, a CPA is licensed. Over time, if a security issue is publicly reported (e.g. a CVE) and you haven't fixed it within a certain amount of time, then you are also liable. The length of time a vendor must provide free security updates for a product should probably be defined in law, e.g. 2 years.


> It could also be done by allowing people to sue makers of insecure software or hardware.

What about free/open source software? Should society punish those idiots who had the gall to contribute their free time to a project that everyone can use free of charge?


Cap it at the value paid for the product.

If it's given for nothing, then that's what can be charged for its failure: nothing.


> The way to fix market failure is well understood, though: regulation

No, the way to fix market failure is to increase the aspects that cause markets to function and reduce the aspects that cause market dysfunction, and if that doesn't do the trick, then you fall back to regulation.

Markets change in small ways constantly which results in large changes over time, and even regulation that fits perfectly initially is doomed to affect the market negatively given enough time.

When it's important enough, we use regulation to ensure minimal levels of some attribute are maintained for the benefit of everyone, such as privacy or safety. Regulation might end up being a good response for a part of the problem, but so could actually holding some companies liable for negligence. I suspect some combination might be best.

I think if you approach the problem of market failure with the idea that the only and best fix is regulation, you're likely to just punt problems down the road a decade or two (if you're lucky).


The DO-178B and now DO-178C regulations appear to be doing well. A whole ecosystem of quality-supporting tools, certified components, and QA experts have formed. Likewise, most or all of the early, secure products were designed for the TCSEC regulations. Although it had issues, the parts that increased assurance worked fine.

So, given TCSEC half worked and DO-178C currently works, I'd say regulation is the answer on this stuff. It just can't be too prescriptive. The situation would vastly improve if just a few things like checking inputs, avoiding unsafe code where possible, fuzzing, and so on were required.

And we also sue their ass in court for not doing this easy, provably useful stuff. That's how to get things done when regulators aren't acting, along with using legal damages to force companies to take action.


South Korea regulated their software security!

That's why even this decade, people were required to use Internet Explorer 6 with ActiveX enabled, to access online banking, because it was the only system the government considered secure enough. We're talking well after IE6 had become a distant memory in the rest of the world.

Are you sure you want governments to regulate software security?


Good regulation doesn't get made because regulation as a practice is broken by malicious actors who sponsor our non-representative elected officials.

Remember campaign contribution limits? Yeah.


Here is the most classic and widely cited paper ever on market failure when customers can't tell what's good and what's a lemon:

The Market for "Lemons": Quality Uncertainty and the Market Mechanism

https://www.sas.upenn.edu/~hfang/teaching/socialinsurance/re...
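To make Akerlof's mechanism concrete, here's the standard toy version (numbers purely illustrative, not from the paper):

    A good used car is worth $10,000; a lemon is worth $5,000; half of each are offered for sale.
    Buyers can't tell them apart, so they rationally offer the average: $7,500.
    Owners of good cars won't sell at $7,500, so they withdraw from the market.
    Now only lemons remain, so the rational offer falls to $5,000.
    The market for good used cars unravels purely from information asymmetry.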

It's strikingly prescient that Akerlof mentions 'group insurance' as another market that is ripe for failure due to a slightly different mechanism. Here we are 50 years later, still failing to understand this economic lesson.


That paper is interesting because it proved that the used car market doesn't exist. A proof of a false result is not a good proof.


It didn't 'prove' used car markets don't exist; it showed that naive car markets with high amounts of information asymmetry can't exist, and in fact they don't.

All real used car markets have multiple layers of either testing and warrantying (which solves or reduces the asymmetry to a manageable level), legal remedies (many states have 'lemon laws' that push liability back to the seller), or are filled with sophisticated buyers (e.g. car auctions) who can actually tell which car is a 'lemon' because they bring a trained mechanic who will inspect the car in person.


> many states have 'lemon laws' that push liability back to the seller

It’s not that many, and they don’t work that well.

https://www.edmunds.com/auto-warranty/my-used-cars-a-lemon-a...


But how can a customer demand security? There is nothing that a customer can do to choose a more secure IoT device over a less secure one. Even if you look at known vulns, simply having vulns in the past is not necessarily reflective of current security posture. Beyond pentesting an app, how does a consumer act on their desire for a secure device?


There have been security evaluations of products where evaluators do both checklist stuff and try to hack the product. Consumers could buy the stuff that gets cleared through those processes. For instance, there are products on the market like INTEGRITY-178B and LynxSecure designed specifically for securely partitioning systems. They have networking stacks available, too. On occasion, a company would make things like routers with them. Virtually nobody bought them because they cost more than insecure devices or lacked unnecessary Thing X, Y, or Z. Intel tried with the iAPX 432, BiiN with the i960 CPU (a nice one), and Itanium with the security enhancements Secure64's SourceT uses. They lost a billion dollars or something over the three. So, those companies usually folded, withdrew the products, or switched to selling for outrageous amounts to the defense sector.

So far, almost no money is going into stuff with higher assurance of correctness, and those companies are losing money when they try. The market naturally responded to the demand. I strongly discourage anyone from even trying again, given the cost and the fact that users won't buy it.

Instead, I recommend making a product that's decently secure and that can be secured further later. Make it good enough to sell on its own with great marketing and so on. As money comes in, move a percentage of it toward improving its overall assurance. It basically takes a nonprofit and/or ideological group that wants strong security to happen at a loss, or at least at an opportunity cost, to get it done. CompSci people also make strong designs with FOSS code that often needs polish. Companies can pick up their ideas or prototypes to convert into something that can sell. Alternatively, they can team up with them to split the work into what each can financially sustain and is good at. That's happening with CompCert, whose innovations come from CompSci but which is sold by AbsInt. The K Framework people and Runtime Verification Inc. are another good example, with one coming from the other.


OK, so now you have one good security certification and a dozen BS phony ones, and plenty of international drop-ship / Amazon FBA sellers happy to counterfeit the legit certification. Now what?


> The answer is that the customers demand it.

I have to say that this is even more of a non-answer than the motivations Mickens offers.

Sure, customers want X because it's trendy and seems to provide some vague value. But the underlying answer is that customers are willing to buy the latest crap, damn the consequences, because these particular customers are buying products whose failure mode is going to cost society a lot but isn't going to cost them all that much. IoT is a prime example. Internet light bulbs knocking out hospitals or whatever - no one is holding anyone accountable, and that's great for someone.

Software failures and security failures so far involve remarkably low costs to companies compared to costs to society. Liability provides some disincentive for dumping battery acid in a river (though that seems to be lessening, sadly) but liability for running or selling crappy software is the stuff that dreams are made of.


i agree with everything. however.

i recently subscribed to Curiosity Stream. its like netflix but only academic-ish documentaries. its "curated" by human beings. i can almost feel the lack of "algorithm". its weird how i feel about it, compared to youtube or whatever.

it reminds me a little bit of going to a "health food store" in the mid 1990s. they were all tiny, tiny niche shops usually owned by one person or a family. they sold weird stuff like organic tofu and soy milk. nowadays, you can buy both of those products in walmart and target.

something very strange happened... somehow the shitty mass market moved towards the tiny, higher quality, higher price niche products.

how did that happen?


That's how it always happens. Something becomes perceived as high quality and desirable. Due to its high quality, it is expensive. But many people want it, so there's an opening for a product that is similar enough for the "layperson", but doesn't cost what the "connoisseur" is willing to pay. Nine times out of ten, that means lower quality.


>Start with the Internet of Things example. He chalks up the abysmal security record of IoT devices to two factors: insecurity keeps IoT devices cheap, and IoT vendors don't understand history. And there's a lot of truth in both these assertions! But they are both just expressing facets of a deeper, more fundamental reason: IoT devices aren't secure because their customers don't demand security.

It's not just price though. You can't just make the devices more expensive to be able to do proper security; the bottleneck in a lot of cases is energy consumption, and that doesn't really scale with more expensive hardware. If your device needs to run from a coin cell for the next 10 years, you will be cautious with how much security you can afford. Even worse off are energy-harvesting products without even such a little battery.


>IoT devices aren't secure because their customers don't demand security.

Apple offers the most secure devices, yet only a tiny fraction of its consumer base demands security or is even aware of how secure their products are.


My understanding of regulated industries (e.g. CE products sold internationally) is that there are two sides of the coin.

1) properly understanding history as a motivation for risk management and properly funding that quality control.

2) technical ability to implement solutions to the risks identified from step one.

For example, the founder of the company that designs and builds a medical device does not necessarily understand the negatives of pressing CTRL+ALT+DELETE when the software from the manufacturer freezes. People can do so many things wrongly in just a few simple steps.

We can think of dozens of ways to fix the problem but the C levels might only understand 0.5 to 1 of those solutions.

There simply isn't enough quality work going into a proprietary/closed system that is profit-driven.

In my little dream world if all businesses were open-source (code, process, profit margins, all of it) we'd be better at building off of past work and innovation would literally be cheaper. Maybe it's a pipe dream.


> IoT devices aren't secure because their customers don't demand security.

Customers cannot evaluate security, just as with cars and many other technologies.

Vendors need to be held accountable and fined by 3rd parties.


> The same causality can be observed in the ML world. Mickens asks why people are hooking ML systems whose operation isn't fully understood to important things like financial decisionmaking and criminal justice systems. The answer is that the customers demand it. ML is trendy and buzzworthy

But that's the same as with the testicle-exploding argument: ML is nowadays called AI, it can drive cars and beat humans at tasks like Jeopardy or Go. So people assume from their experience that it just works, even better than any human. Of course, a big mystery bubble is also created around it, both by marketing people and ML practitioners (oh, and IBM).

Being an engineer working on "normal" systems myself, I somehow feel pressed to do something fancier like ML as well - according to some survey, 40% of engineers already do. But on the other hand, I realize most of this stuff is, as already pointed out in the talk, just there to target ads or work on meaningless financial systems. I was recently listening to a talk by an AI expert who was using AI for fraud detection in an online payment system. At the end of the talk somebody asked a really interesting question: so how do you connect that to your online system? He answered: we don't, it's just for compliance reporting. That's just stupid; I feel misled. It's cool to do statistics and simulations on your data, but calling that AI is incredibly misleading.


The bigger hurdle is that security actually works against usability, because you have to build something that works on arbitrary networks with who-knows-what configured and no guarantee that the consumer has access let alone knowledge of how to fix random networking issues. Granted there is plenty of low-hanging fruit with minimal usability impact, but if we want to talk about actual decent security that is a very difficult proposition for a plug-and-play consumer product regardless of customer demand.


Customers barely grasp identity theft with respect to bank accounts. Nobody understands the risks of a magic light switch.

We're living in an era of laissez-faire commerce in the US. The biggest, most influential retailer routinely ships counterfeit products and nobody really cares.

That is a failure of the regulatory environment — economic forces aren't powerful enough to deal with these issues. The pushback from government will be brutal and overreaching when it happens.


You assert that "IoT devices aren't secure because their customers don't demand security."

I'll assert that customers can "demand" recycling all they want but companies are going to continue to package their products in the cheapest thing possible without regard to its ability to be recycled. Speaking with your dollar only works if there is at least one company doing what you want.


Apple takes security (and privacy, its natural extension) very seriously. It's not an open-source process unfortunately, but they've shown a clear financial and strategic commitment to hardware- and software-level security. They've also done an excellent job communicating this to users in the way that they ask for permissions, etc.

A lot of consumers explicitly choose this option, but it’s all wrapped up in “quality”. When I buy a MacBook I know they won’t cheap out on the casing, or the user experience, or the security, and I pay a premium for that.


I'm not sure that companies responding to obvious market failures isn't the companies' fault. We're not some group of mindless automatons min/maxing for profit (that's the purview of the ML under discussion here). Selling shoddy, dangerous wares should come with consequences.


>We're not some group of mindless automatons min/maxing for profit

Haven’t spent much time around investment bankers huh?


Another subtle point is that "operation poorly understood" could in fact be a desirable feature for a system that makes sensitive decisions, such as who's taken to the black room at border crossings.


With your comment about IoT security...

I believe HomeKit devices are a great example of devices that can be almost perfectly secured. A lot of IoT devices support multiple IoT platforms; for example, the Philips Hue supports IFTTT, Google Home, Amazon, and of course HomeKit, but the first three options only allow your IoT devices to work in your home with permanent wide-area-network access. Latency issues aside, this is bad for security because it simply opens more attack vectors to your devices and relies on third parties to manage your security. What's the benefit of relying on Amazon to manage your IoT devices? Well, for the average Joe, it means he won't have to buy a home "hub" (Apple TV/iPad) for allowing remote access of some sort, and the setup process is generally easier. Problems arise because the IoT device is now responsible for accessing the Internet and has to contain a much larger codebase.

HomeKit's design is that each IoT device talks to your local devices, i.e. an iPhone, an iPad, an Apple TV. If and only if you set up an iDevice as a home "hub" do you allow remote access. HomeKit keeps things modular, which means that if a serious bug is found in remote-access code, you can be confident that Apple will update the Apple TV's firmware, as opposed to an IoT device from a will-be-bankrupt company.

Now what if you have a rogue device on your local network that is hacking other devices? Well, this is where a firewall, as Mickens suggests in his talk, can help. Keep in mind that this is a problem for any style of IoT device, and can only really be protected against using a firewall. You can actually create something called a bridging firewall that inspects each packet passing through its network interfaces. Currently, I've bought a small WiFi router from MikroTik just for this purpose (only 25 USD). All of my IoT devices (and my less secure devices like printers and audio receivers) are plugged into or associated with my MikroTik device, and the bridging firewall acts as follows:

a) drops ethernet packets sent to my main router's MAC address (this stops any WAN access)

b) drops ethernet packets sent to my home server's MAC address, except for ports 67-68 (this allows DHCP)

c) drops packets sent to any other IoT device
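For the curious, here is a rough sketch of what those three rules might look like as MikroTik RouterOS bridge filter rules. The MAC addresses are placeholders and the exact matcher syntax can vary between RouterOS versions, so treat this as illustrative rather than a copy-paste config:

    # (a) drop frames addressed to the main router's MAC - no WAN access for IoT
    /interface bridge filter add chain=forward dst-mac-address=AA:BB:CC:00:00:01/FF:FF:FF:FF:FF:FF action=drop

    # (b) let DHCP (UDP ports 67-68) through to the home server, drop everything else sent to it
    /interface bridge filter add chain=forward dst-mac-address=AA:BB:CC:00:00:02/FF:FF:FF:FF:FF:FF mac-protocol=ip ip-protocol=udp dst-port=67-68 action=accept
    /interface bridge filter add chain=forward dst-mac-address=AA:BB:CC:00:00:02/FF:FF:FF:FF:FF:FF action=drop

    # (c) drop frames from one IoT device to another (one rule per pair, or match on bridge ports)
    /interface bridge filter add chain=forward src-mac-address=AA:BB:CC:00:00:10/FF:FF:FF:FF:FF:FF dst-mac-address=AA:BB:CC:00:00:11/FF:FF:FF:FF:FF:FF action=drop

Note that rule order matters: the DHCP accept rule has to sit above the drop rule for the same MAC.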

And that's it! I can generally assume my Linux desktop and my MacBook are secure enough. A few reasons why this is not overkill: First, it separates my two networks without using any VLAN nonsense (and avahi/Bonjour nonsense), and creates a powerful firewall in between the two. Second, it allows my IoT WiFi network to have a different password from my home WiFi network. Third, it doesn't slow down my main router's WiFi speed, and I would hate to have an 802.11g device slowing down my wireless network. Fourth, I believe the firewall can be set up to stop ARP spoofing.

Finally, HomeKit devices are among the few IoT device standards that allow you to truly own a device. In fact, after buying the device, you can set up your own local HomeKit controller in Python (https://github.com/jlusiardi/homekit_python), meaning you don't need to buy anything at all from Apple.


I have to ask, because IoT and lack of security are synonymous: who is providing secure IoT platforms to consumers? Is there a history of such companies failing?


At Azure Sphere we're trying! Dev boards ship in a month.


Such a company would be maybe 5 years late to the market, so...likely would never happen.


Misapplications of ML are not demanded, they are sold. Demand is for clean solutions, not dirty ones that make new problems. The selling is where false confidence in broken solutions gets made. Greed is sitting at the root of institutional incompetence in most situations.


No. There are potential regulatory principles here that have nothing to do with the customer's demands. You sound like you are assuming a free-market, free-enterprise situation. That is not reality. There are hundreds of years of regulatory norms in other domains. https://en.wikipedia.org/wiki/Precautionary_principle


>IoT devices aren't secure because their customers don't demand security.

This is hard for me to agree with, because as a consumer I notice, literally ALL THE TIME, small things product designers do because they know better, things I am sure none of their customers noticed or read about in reviews or anything.

Producers often know better and do the right thing just because they're the experts, even though nobody demands it.

It's just that IoT security is not something that these experts can do.

To use a recent cupcake analogy, it's as though every single bakery in the entire world that sold cupcakes sold ones that, to the few people who actually have good taste (which includes you and me), taste like shit. Why do the bakers only sell cupcakes that taste like shit? Because nobody demands cupcakes that don't taste like shit? No, because if the bakers knew how, then at least some of them would be selling good cupcakes. It's because a good cupcake recipe doesn't exist anywhere on the planet. Anybody who is making a cupcake is making a shit cupcake. This is the state of IoT security: the experts are shit at it. You and I notice.

If the experts figured it out then bakeries would follow. What, you don't think anyone who goes through the trouble of manufacturing and boxing a product bothers to Google "how to make a secure IoT device" and read what they find? Of course they do. What they find is "hahaha whatever."

It's as though if you Googled "best cupcake recipe" all of the top hits said "I don't know mix some flour and butter and bake for a while, put some frosting on it. Whatever, it's a cupcake."

Here is the link: https://www.google.com/search?q=how+to+make+a+secure+iot+dev...

Do you see a single useable recipe there? I don't. All I see is "I don't know, mix some flour and butter and bake it? Put frosting on it. Beats me."

An actual cupcake requires milk, sugar, baking powder, eggs, and an actual recipe. Maybe some vanilla essence. These aren't even listed.

If the state of the art is shit, blame the state of the art.

A secure IoT device is like a watermelon soufflé. You're on your own.



