Are you familiar with the academic field of security and the notion of trust in trusted computing? The IoT devices being discussed in the article are for industrial control systems, not necessarily your home lightbulb. The threat model is different. Do you want every municipal power company to be trusted to properly vet the code they put on these devices, or do you want to trust the device manufacturer to be the one who can put code on them?
An owner is still an owner, whether that's someone who lives in a single-family residence or a municipality.
In my area, tornado sirens are triggered by an unencrypted radio signal that can simply be recorded and replayed. The cost to add an encrypted radio link is $100k for the base station and $25k per siren, and there are 80+ sirens.
If these were open source, a simple computer could be retrofitted to do this. But because they are highly proprietary, the county would be on the hook for $2.1M ($100k plus 80 × $25k) just to defend against an asshole with a HackRF.
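To make the retrofit point concrete, here's a minimal sketch in Python of what a cheap single-board computer at each siren could do: verify an HMAC tag on every activation frame before keying the siren. The frame layout, key provisioning, and counter persistence are all assumptions for illustration, not any vendor's actual protocol:

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"per-siren-provisioned-secret"  # hypothetical; installed per siren
last_counter = 0  # would be persisted across reboots in a real system

def verify_frame(frame: bytes) -> bool:
    """Assumed layout: 8-byte big-endian counter | 1-byte command | 32-byte HMAC-SHA256."""
    global last_counter
    if len(frame) != 41:
        return False
    counter, _command = struct.unpack(">QB", frame[:9])
    expected = hmac.new(SHARED_KEY, frame[:9], hashlib.sha256).digest()
    # Constant-time tag check, plus a monotonic counter so replayed frames are rejected.
    if not hmac.compare_digest(frame[9:], expected) or counter <= last_counter:
        return False
    last_counter = counter
    return True
```

A shared-secret HMAC keeps the per-siren hardware dumb and cheap; asymmetric signatures would avoid putting the same secret in every siren, at the cost of a bit more compute. Either way it's commodity crypto, not a $2.1M problem.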
FLOSS and open principles should matter to governments as well as individuals. Trading short-term convenience for zero long-term usability is utterly ridiculous, and you end up with a doorstop in the end either way.
And who can push new code after the manufacturer's bankruptcy? I've worked in IoT, and I'd say the biggest security problems are, in order:
- Devices requiring Internet access for functionality that could have been done locally (see the sketch after this list)
- Hardware SDKs that are basically abandoned forks maintained by the chip manufacturers, so IoT companies ship stone-age kernels and device drivers
- The usual stuff: too much complexity, lack of tests, bad documentation, meaning old parts of the software get forgotten (but remain exploitable)
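On the first point, local control is not exotic. Here's a hedged sketch, with made-up endpoint names, of a device answering commands directly on the LAN, so it keeps working after the vendor's cloud (or the vendor) disappears:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

RELAY_STATE = {"on": False}  # stand-in for whatever the device actually switches

class LocalControl(BaseHTTPRequestHandler):
    def do_POST(self):
        # Hypothetical endpoints: any controller on the LAN can flip the relay.
        if self.path in ("/on", "/off"):
            RELAY_STATE["on"] = self.path == "/on"
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # No cloud round-trip, no vendor account, no remote kill switch.
    HTTPServer(("0.0.0.0", 8080), LocalControl).serve_forever()
```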
Waxing theoretical about trusted computing and remote attestation seems disingenuous when problems with non-certified firmware are probably not even in the top 10 in the real world. Notice how the article's author mentions some scary attacks but conveniently omits how the attackers actually gained access?