Hacker News | mtklein's comments

I am also not looking forward to the system transitioning from "big experiment, burn money to make it good" to "established business unit, tweak it to death for incrementally more money / personal promotion." We're still in the honeymoon period and I very much expect to hate Waymo in 10 or 15 years when they reach a steady state.

What levers are there, really? Waymo has a monopoly and it seems like they will for a while, so they have a lot of power, but all I really see them doing is making it expensive. Anything that makes the experience worse takes away from their ability to take market share away from Uber/Lyft.

Ads in the car.

Forced “safety breaks” due to the newly proven dangers of sitting in a car for more than 20 minutes, taking place at our safety partner, McDonald's.

Deliberately taking certain routes and encouraging you to stop at partner stores.

Making you pay rent for the self driving.

Increasing the subscription costs continuously.


Enshittification should be a new certainty, along with death and taxes.

That worries me.

Self-driving vehicles need aircraft-type maintenance. Yet there's nothing like the FAA to enforce a minimum equipment list, maintenance intervals, or signoffs by approved mechanics.

Is there a scratch or chip in the scanner dome? Are both the primary and backup steering actuators working? Is there any damage to the vehicle fender sensors? Is dispatch allowed with some redundant components not working? If so, for how long?

Here's the FAA's Minimum Equipment List for single-engine aircraft.[1] For each item, you can see whether it has to be working to take off and, if not, how long you have to fix it. There's nothing like that for self-driving land vehicles.

What's the fleet going to look like at 8 years of wear and tear?

[1] https://www.faa.gov/sites/faa.gov/files/MMEL_SE_Rev_2_Draft....


> Self-driving vehicles need aircraft-type maintenance.

That's a hyperbolic false equivalence.

Aircraft typically carry hundreds of people and can crash to the ground. As long as a self-driving car can detect when it is degraded, it can just stop with the blinkers on, usually with 0–2 people inside.


The question is how broken can a car be when dispatched. What's the safe floor? See the other article today about a Tesla getting into an accident because of undetected sensor degradation.

> Aircraft typically carry hundreds of people and can crash to the ground.

Cars are more numerous and could spontaneously either plow into pedestrians or rear-end someone, causing chain-reaction damage and, quite often, a spill of toxic chemicals (e.g., a tanker carrying acid, fuel, or pesticide).

Plus, you have a problem of hostile actors having easier access to cars compared to planes.


It’s just death and taxes combined.

If I remember correctly, the AVX2 feature set is a fairly direct widening of SSE4.1 to 256 bits. Very few instructions even allowed interaction between the top and bottom 128 bits, I assume to make implementation on existing 128-bit vector units easier. And the most notable new things that arrived alongside that widening, fp16 conversion (F16C) and FMA support, are also present in NEON, so I wouldn't expect those to be the issue either.

So I'd bet the issue is either the newness of the codebase, as the article suggests, or perhaps that it's harder to schedule the work in 256-bit chunks than in 128-bit ones. It's got to be easier when you have more than enough NEON q registers to cover the xmms, and harder when you have only exactly enough to pair up for the ymms.


> Very few instructions even allowed interaction between the top and bottom 128 bits

That would be plain AVX; AVX2 has shuffles across the 128-bit boundary. To me that seems like the main hurdle for emulation with 128-bit vectors. In my experience, compilers are very eager to emit shuffle instructions when allowed, and emulating a 256-bit shuffle with 128-bit operations would require two shuffles and a blend for each half of the emulated register.

EDIT: I just noticed that the benchmark in the article is pure math which probably wouldn't hit this particular issue, so this doesn't explain the performance difference...


There are also mode switching and calling convention issues.

The way that the vector registers were extended to 256-bit causes problems when legacy 128-bit and 256-bit ops are mixed. Doing so puts the CPU into a mode where all legacy 128-bit ops are forced to blend the high half, which can reduce the throughput of existing SSE2-based library routines to as little as 1/4. For this reason, AVX code has to aggressively use the VZEROUPPER instruction to ensure that the CPU is not left in AVX 256-bit vector mode before possibly returning to any library or external code that uses SSE2. VZEROUPPER sets a flag to zero the high half of all 256-bit registers, so it's cheap on modern x86 CPUs but can be expensive to emulate without hardware support.

The other problem is that only the low 128 bits of vector registers are preserved across function calls due to the Windows x64 calling convention and the VZEROUPPER issue. This means that practically any call to external code forces the compiler to spill all AVX vectors to memory. Ideally 256-bit vector usage is concentrated in leaf routines so this isn't an issue, but where used in non-leaf routines, it can result in a lot of memory traffic.


I agree with you, but we must admit that The Expanse has all of that: spaceships bouncing around, explosion sounds, and superhumans.


Ah, maybe I remembered it wrong... wasn't there some movie/show in the news for not having space explosion sounds?

Maybe it was Interstellar.


BSG had notably muted sounds from interstellar explosions.


This is astonishingly bad power usage for a laptop, a complete dealbreaker: "...early tests show that the SoC already draws about 16 watts at idle..."


For some context, my 12-core Intel laptop consumes 1.5 to 2 watts at idle for the SoC. Apple M silicon might consume even less.


Yeah, that is impressively bad. Perhaps it's a reporting error and it's 16 W at full tilt?


Maybe it's a typo and it's 1.6W.


> upgrade kit

> makes your laptop slower

Hmm...


This was a nice surprise when learning to code for NES, that I could write pretty much normal C and have it work on the 6502. A lot of tutorials warn you, "prepare for weird code" and this pretty much moots that.


Zig is so good at this, it is also probably the easiest way to cross-compile C.


And it can be used as a drop-in replacement for gcc/clang:

https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...


I think this is _Alignas/alignas.

    struct foo {
        _Alignas(64) float x,y;  /* the specifier applies to both declarators:
                                    x at offset 0, y at offset 64 */
        _Alignas(64) int     z;  /* z at offset 128 */
    };
    /* 128 + 4 = 132, rounded up to the struct's 64-byte alignment */
    _Static_assert(sizeof(struct foo) == 192, "");


The example I linked uses alignas, but the key is knowing what value to pass. std::hardware_destructive_interference_size tells you what the current/target hardware's correct align value is, which is the challenge.


I very much used to agree with this, but some time this summer the ChatGPT iOS app started to change this for me. I have definitely had days where I've felt as coding-creative as I can be on a laptop but instead just texting my AI interns to handle the execution while I'm out for a walk.


I don't understand why this article invents and explains a phony ranged-float fix when the real fix from the footnotes would have been just as simple to explain. The deception needlessly undermines the main point of the article, which I completely agree with.


The real fix felt more complicated when I drafted this. Seems like it isn't; I'll think about updating the post.


That fix has limited applicability. x * x is also a non-negative float, but abs(x * x) is not optimized; neither is abs(abs(x) + 1). GCC, for example, does know that.


You're not wrong that he wasn't physically fit, but he was also one of the most human people many of us in this thread have ever met.

