Hacker News | formerly_proven's comments

Both of these jurisdictions have low prosecution and high conviction rates, because the conviction rate is an artifact of prosecutors only going to trial if they know they'll win. In the US this is heavily confounded by plea bargains, since prosecutors can get punishments without even having to go to trial.

As opposed to thoughts and prayers-based ABS?

Some cars are going with entirely electrically actuated brakes, either inboard or on-hub, compared to the E-Tron, which uses traditional hydraulically actuated brakes. One uses an electric motor to wind a mechanism that pulls the spring clip tight, which then pushes the pads against the rotor; the other uses hydraulic pressure to overcome the spring, compressing it and pushing the pads against the rotor. I'm guessing Audi didn't go with entirely electric brakes because they have a reputation for being harsh and difficult to modulate with the pedal, and Audi is supposed to be both a luxury and sport brand where pedal feel is important.

Are you talking about brake-by-wire? Where brakes are controlled electrically only, and if the electrics die, no brakes?

These are dangerous. Cars are not maintained to aircraft standards and will never be.


With electrically actuated brakes the default power-off state is fully engaged, meaning if the power dies the brakes lock up. That causes its own issues, obviously, but a sudden deceleration is better than no deceleration at most road speeds.

edit: as formerlyproven below states, the ones currently for sale also have a hydraulic backup.


Insane take. They will be vastly more reliable than hydraulics.

Brake by wire passenger car brake systems are still hydraulic... and all of them have a mechanical backup. There is not a single car on the market today using electromechanical brakes.

Unless you're talking about electric parking brakes in a thread about ABS.


Look up cybercab. Their new disassembled manufacturing method cannot support hydraulic lines.

Just remember to pack a bottle of spare magic smoke in case something leaks!

Which cars?

No, just any combination of electric regenerative braking combined with electrically controlled brakes.

It is an electric car after all


These are all page cache poisoning attacks (dirtyfrag, copyfail, dirtypipe). Maybe the page cache should have defense-in-depth measures for SUID binaries?

SUID mitigations have nothing to do with the vulnerability itself - just the exploit.

If there's a root cronjob that runs a world readable binary, you could modify it in the page cache and exploit it that way.

Modifying the page cache is a really strong primitive with countless ways to exploit it.


True! Building protections just for executables (e.g. physical pages in the page cache are not writeable 100% of the time) of course has countless circumventions as well (e.g. config files). Yeah, there is probably not that much to be done there, actually. Looking at some of the diffs, it seems to me like the kernel makes it really not particularly obvious when/how this goes wrong. E.g. the patch for this looks at an additional flag on the socket buffer to fix an arbitrary page cache write. This feels rather like action at a distance. Logically it of course makes sense: the whole point of splice et al. is to feed data from one file-like into another file-like, whatever those ends might be. That erases the underlying provenance of the data.

splice() should maybe generally refuse to operate on things you can't write to.

splice is documented to return EBADF if "One or both file descriptors are not valid, or do not have proper read-write mode."

So it seems surprising to me that you can call it when the out fd is not writable? But I didn't retain the information about the vulnerability, so I'm missing something. There was something about copy on write, IIRC?


"proper read-write mode" for the input fd is reading only. The exploit is writing to the splice() input fd.

Also, NB, I said permission check, not mode check. The input fd to splice can and will be open for only reading quite often. Doesn't mean the kernel can't still do a write permission check.

(Except I didn't say that here. Oops. Getting confused with my posts.)


OK, I likely have too much sleep debt to understand, but given the bug is that splice can write to the input fd, you're suggesting maybe splice should only let you use an input fd if the process has permission to write to it?

But splice is more or less a generalization of sendfile, and sendfile is often used for web serving, where the serving process does not have ownership of the documents it is serving. It doesn't make sense to limit splice such that it can't do the task it was built for. Maybe splice should just not write to the input fd? :P


> But splice is a more or less a generalization of sendfile

Not really; splice(2) is actually more limited. It's an optimisation for reading and writing data between files and pipes without needing to make copies.

sendfile(2) works with any fds because it just exists to remove a fair bit of the copy overhead when doing a userspace read/write loop, but it does actually do a copy.


Yes, it'd curtail splice() usage quite heavily. Maybe too much.

But apparently we can't be trusted with the page cache…

Maybe the kernel using supervisor-read-only flags could be made to work, only issue then is what happens if something does in fact need to write…


Aren’t you just saying “don’t write bugs?”

Very few / almost no EVs are direct drive. The industry-standard layout is an integrated package of a transverse motor with two-stage reduction and final drive, giving an overall ratio somewhere in the vicinity of 10:1 (longer ratios are directionally more efficient, while shorter ratios are usually more cost-efficient). That's the ballpark of 2nd gear in many cars (final drive 3-4:1, 2nd gear often around 2-3:1).

So if you mount just the electric motor from an EV (insofar as it exists as a separable part) to a manual gearbox and weld that into 2nd gear you have something which broadly matches the design envelope of that motor. You could reduce gearbox losses by also removing the now unneeded but still idling gears.
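The ratio arithmetic is easy to sanity-check. A back-of-the-envelope sketch (all figures are assumed typical values, not from any specific car): a common final drive times a common 2nd gear lands right next to the ~10:1 single-speed EV reduction, and the resulting motor speed at highway pace is in normal EV-motor territory.

```python
import math

# Assumed typical ratios: final drive ~3.7:1, 2nd gear ~2.5:1.
final_drive = 3.7
second_gear = 2.5
overall = final_drive * second_gear      # 9.25:1, near the ~10:1 EV norm

# Motor speed at 120 km/h on an assumed 0.65 m diameter tire:
speed_m_per_min = 120_000 / 60
wheel_rpm = speed_m_per_min / (math.pi * 0.65)
motor_rpm = wheel_rpm * overall          # ~9000 rpm, typical for an EV motor
print(round(overall, 2), round(motor_rpm))
```
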


I'll bite.

What's the point of 800V with a battery this small? 400V already enables around 200 kW of charging, which is 3C with a pack this small. So charging is not limited by the voltage level, because the pack assuredly isn't reaching 3C anyway. At 200 hp the efficiency gains are marginal, if they exist at all. So what would the benefit of 800V be, apart from higher costs?
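The claim that 400 V already covers ~3C on a small pack is simple arithmetic (pack size and current assumed for illustration; 500 A is roughly where liquid-cooled CCS cables top out):

```python
# Assumed figures: ~60 kWh pack, 500 A practical connector/cable limit.
pack_kwh = 60
volts, amps = 400, 500
power_kw = volts * amps / 1000   # 200 kW at 400 V without exotic cabling
c_rate = power_kw / pack_kwh     # ~3.3C, beyond what such a pack sustains
print(power_kw, round(c_rate, 1))
```
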


It's a very standard defense-in-depth technique to put secrets between guard pages and only make the secret page readable when needed. That way any inadvertent access, be it programming error or exploit, simply causes a segfault, unless it's raced with a valid access (in a multithreaded or shm context) or the exploit explicitly changed the permission bits. Most memory disclosure vulnerabilities don't allow you to do that.

That being said any single password, when used, passes through so many layers and components that it's likely impossible to even just wipe the contaminated memory locations. But that's fine, the password database is opened for most of the browser's lifetime, any given password actively being used is a rare event in comparison.


Wouldn’t a guard page be readable in Linux with /proc/self/mem? (At least read-only pages are writable with it.)

> It's a very standard defense-in-depth technique

Is there any software we’d be aware of which uses this technique?


Windows and OpenSSL both do this

In what I assume is GP's general area coal furnaces are quite common for heating.

yuck. worst possible option.

The facelift/software that was introduced with the ID.7 is really good (especially the navigation system with AR HUD), but you kinda have to consider that the HN user population is extremely US-centric and IDs aren't really available in the US, so I don't think it's surprising that the opinions on HN lag behind reality by a couple of years there.


(mostly design clout though)


If it actually was v4 with more bits and different ARP it wouldn't take 30+ years to be deployed.

It's the more bits that are the problem. Anything with more bits is incompatible with the whole internet and anything that's incompatible with the whole internet won't be deployed quickly if ever. NAT is way worse than IPv6 but it got deployed quickly because it was compatible.
