
And I feel like it undermines any effort to make free, featureful applications if the hardware itself can't be trusted.


You can trust hardware and software that's easy to inspect.

If you can't be sure what's going on, and you're unable to inspect or debug the hardware and software, how can you trust that it's doing what you want?

Proprietary hardware and software are already known to work against the interests of the user, and that opacity is being taken advantage of at large scale.

Let's put it this way: if you could choose between making your own lasagna from a good recipe and a ready-made microwave lasagna, which would you pick? How about your suit? And would you trust an open pacemaker that's known to work well, or the latest Motorola or Samsung pacemaker? Would you rather verify the device independently, or pay up for an SLA?


No software is "easy to inspect". Only a tiny fraction of users will ever even try. When things are inspected and problems are found, you need a way to revoke the malicious bits. You'll never notify everyone, which is one of the roles app stores play.

You trust hardware and software by establishing boundaries. We figured this out long ago with the kernel mode/user mode privilege check and other things. You want apps to be heavily locked down/sandboxed, and you want the OS to enforce it, but every time you do, you go up against the principles of open source absolutists like the FSF. "What do you mean my app can't dig into the storage layer and read the raw image files? So what if apps could use that to leak user location data, I need that ability so I can tell if it's a picture of a bird"

For sensitive information - such as financial transactions - the rewards for bad actors are simply too high to trust any device which has been rooted. The banks - who are generally on the hook if something goes wrong, or at least have to pay a lot of lawyers to get off the hook - are not interested in moral arguments, they want a risk-reduced environment or no app for you - as is their right.


> For sensitive information - such as financial transactions - the rewards for bad actors are simply too high to trust any device which has been rooted

In practice, that just means you're trusted when running a black-box Android ROM from some random Chinese manufacturer, but not a fresh LineageOS. To run some banking apps on the latter, you have to root the phone and install all kinds of crap to hide the fact that it's running an OS you actually can trust.

I don't think that's right. I don't think non-manufacturer ROMs, or rooted phones, are a real danger in practice; this is all just security theater and an excuse to control what people do on their own devices.


> The banks - who are generally on the hook if something goes wrong, or at least have to pay a lot of lawyers to get off the hook - are not interested in moral arguments, they want a risk-reduced environment or no app for you - as is their right.

If they pay for the phone and ship it to you then I agree. Otherwise, they have an obligation to serve their community (part of their banking charter) and that may include meeting their customers where they are, rather than offering an app with unreasonable usage requirements.


No charter requires allowing access from any device. The charters don't even require banks to be open during hours most of their customers are off work.


The charters aren't that specific (nor should they be). But they do oblige the banks to serve their customers to a certain extent.


> You trust hardware and software by establishing boundaries. We figured this out long ago with the kernel mode/user mode privilege check and other things. You want apps to be heavily locked down/sandboxed, and you want the OS to enforce it, but every time you do, you go up against the principles of open source absolutists like the FSF. "What do you mean my app can't dig into the storage layer and read the raw image files? So what if apps could use that to leak user location data, I need that ability so I can tell if it's a picture of a bird"

Well, no. The objection isn't to sandboxing apps, but to sandboxing the user, as it were. On my laptop, I run my browser in a sandbox (e.g. bubblewrap, though the implementation of choice shifts with time), but as the user I control that sandbox. Likewise, on my phone, I'm still quite happy that my apps have to ask for assorted permissions; it's just that I should be able to give permission to read my photos if I choose.
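
To make that concrete, here's a minimal sketch of a user-controlled sandbox, written as a Python wrapper around bwrap. The policy (read-only system, throwaway home, only ~/Downloads writable) and the choice of firefox are illustrative assumptions, not a recommended profile:

    import os
    import subprocess

    HOME = os.environ["HOME"]

    # Policy chosen by the user, not the vendor.
    cmd = [
        "bwrap",
        "--ro-bind", "/", "/",               # whole filesystem, read-only
        "--dev", "/dev", "--proc", "/proc",  # fresh /dev and /proc
        "--tmpfs", HOME,                     # hide the real home directory...
        "--bind", f"{HOME}/Downloads",
                  f"{HOME}/Downloads",       # ...except Downloads, my choice
        "--unshare-all", "--share-net",      # drop all namespaces but network
        "--die-with-parent",
        # A real profile also binds display-server and D-Bus sockets;
        # elided here for brevity.
        "firefox",
    ]
    subprocess.run(cmd, check=True)

The point is that the knobs face the owner: I can tighten the policy, loosen it, or drop it entirely, which is exactly the control the locked-down phone model takes away.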


Users can't be trusted. They don't read. You can put a popup that flashes in all caps saying "THIS WILL GIVE ACCESS TO YOUR BANK ACCOUNT" and users will blindly click OK to get to whatever they think they want, be that an Instagram feed, a game, or whatever.

https://devblogs.microsoft.com/oldnewthing/20030901-00/?p=42...

It was true 22 years ago and is even more true today.


That's not a good example. My bank issued a token device that scans a code from their site, asks for my PIN, shows me what's about to happen, and asks for confirmation. Only then do I enter the resulting digits to proceed.

This is reasonably secure. If you hijack my account, you still don't have the hardware device or the random secret that was set up between the device and the bank.

You need to actually hack into the bank itself to transfer my money elsewhere.
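
To illustrate, the gist is a challenge-response over a shared secret. Here's a rough Python sketch, assuming a simple HMAC scheme (real deployments such as EMV CAP readers differ in the details, and the function names and IBAN are made up):

    import hashlib
    import hmac
    import os

    SECRET = os.urandom(32)  # provisioned into the token when issued;
                             # known only to the token and the bank

    def bank_challenge(transaction: str) -> bytes:
        # The bank encodes the transaction into the code the token scans.
        return transaction.encode()

    def token_response(challenge: bytes, pin_ok: bool) -> str:
        # The token displays the transaction, checks the PIN, then signs.
        if not pin_ok:
            raise PermissionError("wrong PIN")
        mac = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()
        return mac[:8]       # the short code the user types back in

    def bank_verify(challenge: bytes, code: str) -> bool:
        expected = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()[:8]
        return hmac.compare_digest(expected, code)

    c = bank_challenge("pay 50 EUR to NL00BANK0123456789")
    assert bank_verify(c, token_response(c, pin_ok=True))

Hijacking my credentials gets an attacker nothing here: the confirmation code depends on both the secret inside the token and the exact transaction, so the only other place it can be computed is the bank itself.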

Meanwhile, I only access the bank from my own computers. That means I installed the OS myself and have root. Not a problem at all.


Not really.

If their security depends on enslaving the user, their security sucks.

Real security, be it for your financial transactions or for keeping your bird pictures safe, doesn't depend on keeping the algorithm secret. That's Kerckhoffs's principle: the system should stay secure even when everything about it except the key is public.


The threat models aren't secret algorithms; they're apps reading the contents of the screen, stealing keystrokes, MITM attacks against 2FA, and much more.


Apple, Google and Microsoft created that problem.

I don't have this problem on my computers; they run free software. My wife's ThinkPad runs free software. The friends I've given computers running various GNU+Linux distros don't have this problem.

Add Google Chrome with its spammy extensions to the mix and they start getting problems.


So, things that can be exploited on a stock Pixel with no user root? This is a weird argument to make at the same time as https://news.ycombinator.com/item?id=45588594 is on the front page.


There’s no way I’d trust open source with my health. And I’m not sure there’s a single open, known-to-work-well project, let alone a pacemaker, that could plausibly be funded in the open source world. What open source hardware is actually more usable than the closed source alternative for most people?


Trusted to do what? Work against the user's interests? Prevent the user from even expressing those interests?


Should the app builder’s ability to “trust” that the hardware will protect them from the user supersede the user’s ability to trust that the hardware will protect them from the app?

In other words, should the device be responsible for enforcing DRM (and more) against its owner?



