Hacker News | flohofwoe's comments

Raises the question whether a bug in code that's never called actually exists ;)

Code that is never normally used can sometimes still be made to run by an attacker, and can therefore still be a security risk.

But that code would have to be selected in menuconfig, compiled, and the module loaded. I assume that nobody does that for bus mice, and even if someone, by mistake, selects one of the drivers, that's 1 machine in a billion. Who would target that?

Same argument for any retro-tech. What hacker would spend hours/days to hack my bare-metal DOS box running Arachne + a packet driver just to mine bitcoins on a K6-2 for a couple of hours until I turn it off from the AT power switch (not button).


Good point. I guess I have this issue in the back of my mind [1], which was widely shipped with ffmpeg despite being basically never needed.

[1] https://x.com/FFmpeg/status/1983949866725437791


From my understanding, that isn't how drivers in Linux work. Almost no kernels will have that code compiled in, because kconfig won't call for it. It's "opt-in", and it's so niche that few distros would have enabled it.

Linux only ships with a tiny sub-set of the drivers in the source tree.



Instruction slots are extremely valuable in 8-bit instruction sets. The Z80 has some free slots left in the ED-prefixed instruction subset, but being prefix instructions they could at best run at half the speed of one-byte instructions (8 vs 4 clock cycles).

I doubt any of that is measurable, since all ALU operations are usually implemented with the same logic (e.g. see https://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-...)

Depending on what's stone-age for you, a SUB with a register was also only one byte, and was the same cost as XOR, at least in the Intel/Zilog lineage all the way back to the 70s ;)

That comment is not very useful without pointing to real-world CPUs where SUB is more expensive than XOR ;)

E.g. on the Z80 and the 6502, both instructions have the same cycle count.


The 6502 doesn't support XOR A or SUB A, and in fact doesn't have a SUB opcode at all, only SBC (subtract with carry, requiring an extra opcode to set the carry flag beforehand).

I was handwaving over the details; SBC is identical to SUB when the carry flag is clear, so it's understandable why the 6502 designers didn't waste an instruction slot.

EOR and SBC still have the same cycle counts though.


Sure, in some contexts you would know that the carry flag was set or clear (depending on what you needed), and it was common to take advantage of that and skip the explicit clc or sec, although you'd better comment the assumption/dependency on the preceding code.

However the 6502 doesn't support reg-reg ALU operations, only reg-mem, so there simply is no xor a,a or sbc a,a support. You'd either have to do the explicit lda #0, or maybe use txa/tya if there was a free zero to be had.
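To illustrate the zeroing options, here's a hypothetical mini-model (not a real emulator; class and method names are invented, but the byte/cycle counts are the documented 6502 ones):

```python
# Toy model of two 6502 ways to zero the accumulator. The 6502 has no
# reg-reg ALU ops, so there is no 'xor a,a'-style idiom as on the Z80.

class Cpu6502:
    def __init__(self):
        self.a = 0xFF   # accumulator, deliberately non-zero
        self.x = 0x00   # X index register

    def lda_imm(self, value):
        """LDA #imm: 2 bytes, 2 cycles - the explicit way to zero A."""
        self.a = value & 0xFF
        return 2        # cycle count

    def txa(self):
        """TXA: 1 byte, 2 cycles - shorter, but only valid if X holds 0."""
        self.a = self.x
        return 2        # cycle count

cpu = Cpu6502()
assert cpu.lda_imm(0) == 2 and cpu.a == 0   # LDA #0: always correct
cpu.a = 0xFF
assert cpu.txa() == 2 and cpu.a == 0        # TXA: saves a byte if X is known-zero
```

Both take 2 cycles; the TXA/TYA trick only saves the one byte of the immediate operand.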


On the Cortex-A8, vsub reads the second source register a cycle earlier than veor, so that can add one cycle of latency.

Not scalar, but still sub vs xor. Though you’d use vmov immediate for zeroing anyway.


With more bits, SUB becomes more and more expensive to fit in the same number of clocks as XOR. So with an 8-bit CPU like the Z80, it probably makes design sense to have XOR and SUB both take one cycle. But if a CPU uses, for instance, 128-bit registers, the propagate-and-carry logic for ADD/SUB might take so much longer than XOR that the designers might not try to fit ADD/SUB into the same single clock cycle, and might instead do multi-cycle pipelined ADD/SUB.

A real-world CPU example is the Cray-1, where S-Register Scalar Operations (64-bit) take 3 cycles for ADD/SUB but still only 1 cycle for XOR. [1]

[1] https://ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1-HRM...
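To make the gate-depth argument concrete, here's a toy Python sketch (function names invented): a ripple-carry ADD has a serial carry chain whose depth grows with word width, while XOR computes every bit independently.

```python
def ripple_add(a: int, b: int, n_bits: int = 8) -> int:
    """Bit-serial addition that makes the carry chain explicit."""
    carry, result = 0, 0
    for i in range(n_bits):  # bit i must wait for the carry from bit i-1
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return result  # wraps modulo 2**n_bits, like a real n-bit ALU

def xorw(a: int, b: int) -> int:
    """Every XOR output bit depends only on its own two inputs: depth 1."""
    return a ^ b

assert ripple_add(200, 100) == 44     # 300 mod 256
assert xorw(200, 100) == 200 ^ 100
```

With 64-bit words a naive carry chain is 64 gate delays deep (real designs use carry-lookahead, but the asymmetry remains), which fits the Cray-1's 1-cycle XOR vs 3-cycle ADD.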


Harvard Mark I? Not sure why people think programming started with Z80.

The article is about x86, and x86 assembly is mostly a superset of 8080 (which is why machine language numbers registers as AX/CX/DX/BX, matching roughly the function of A/BC/DE/HL on the 8080—in particular with respect to BX and HL being last).
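For reference, the 16-bit x86 register numbering (the encoding itself is from the ISA; the Python dict and asserts are just an illustrative sketch):

```python
# Illustrative only: the x86 ModRM 16-bit register numbering.
X86_REG_NUM = {"AX": 0, "CX": 1, "DX": 2, "BX": 3,
               "SP": 4, "BP": 5, "SI": 6, "DI": 7}

# With AX in slot 0, CX/DX/BX follow in the same order as the 8080's
# BC/DE/HL register pairs (encoded 0/1/2 there) - BX/HL last in both.
first_four = [r for r, n in sorted(X86_REG_NUM.items(), key=lambda kv: kv[1])][:4]
assert first_four == ["AX", "CX", "DX", "BX"]
```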

So you're saying x86 wasn't made ex nihilo, but evolved from previous designs? When did this evolution begin? The 8080 followed the 8008, code for which was written in MACRO-11 https://en.wikipedia.org/wiki/PDP-11_architecture#Example_co...

My WW2-era assembly is a bit rusty, but I don't think the Harvard Mark 1 had bitwise logical operations?

Pausing a game is not as simple as freezing its entire state (and saving is also not as easy as dumping the entire game state to disc; at the least you'd end up with gigabyte-sized savegames).

Many game systems need to remain active in the paused state (the UI needs to remain working, for example, and actions in the UI may also manipulate game state: inventory, crafting, levelling up...). There are also plenty of games with 'active pause' where the user can issue commands to units while in the paused state.


It's usually not as simple as that. You'd still want to at least keep the UI alive, and you also need to continue rendering while the game is paused because the swapchain surfaces might lose their content (window resizing, changing the display mode, alt-tabbing to the desktop etc).

E.g. when you open the in-game menu or the inventory (etc.), you usually want to pause the gameplay but still interact with the UI. Sometimes that means at least some of the gameplay logic also needs to remain alive (inventory management, crafting, levelling up, etc...).

There are also a lot of games which need some sort of 'active pause', e.g. the gameplay needs to stop while the user issues commands to units (traditional example: real-time combat with pause, as in Baldur's-Gate-style RPGs).

Sometimes the underlying engine also doesn't properly separate gameplay logic from rendering, e.g. you can't skip one without also skipping the other (which is an engine design bug, but similar situations may also happen up in gameplay code).

Finally: pausing and the save-game implementation are often an afterthought, but really should be implemented as the very first thing. It's quite easy to fall into the trap of assuming that every rendered frame also advances game time. If the game has the concept of a fixed-duration 'game logic tick' which is independent of the frame rate, you're already halfway there; but many games simply use a variable-length game tick identical to the frame duration.
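A minimal sketch of the fixed-tick idea (all names invented; 50 Hz chosen arbitrarily): frame time is fed into an accumulator, gameplay advances in whole fixed ticks, and pausing simply stops feeding in time while rendering and UI keep running every frame.

```python
TICK = 0.02  # fixed game-logic tick: 50 Hz, independent of frame rate

def pump(accumulator: float, frame_dt: float, paused: bool):
    """Return (leftover_time, number_of_logic_ticks_to_run_this_frame)."""
    if not paused:               # pausing = simply stop feeding in time
        accumulator += frame_dt
    ticks = 0
    while accumulator >= TICK:   # run whole ticks; carry the remainder over
        accumulator -= TICK
        ticks += 1
    return accumulator, ticks

acc, ticks = pump(0.0, 0.05, paused=False)   # a slow 50 ms frame
assert ticks == 2                            # two whole 20 ms ticks ran
acc, ticks = pump(acc, 0.05, paused=True)    # paused: still render, no game time
assert ticks == 0
```

The nice property is that "pause" and "save" only have to reason about the discrete tick state, never about partial frames.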


Yep, this comment does a good job of illustrating the (surprising) complexity of this.

I'll add that the notion of the "time scale" variable as mentioned in the article is something that's only solidified/codified since Unity and the like came about. And at the same time, the way Unity et al. works[0] doesn't really encourage thinking about what I'd call "main loop logic" in the bottom-up way that's required to build a really robust system for managing states. You can do it, of course, (you can write as much code as you want) but by default, everything in the scene is "active" and somewhat independent from everything else, and you don't have direct control over the various major systems.

[0] I guess I should say "worked" -- I mostly used 3.x and a little bit of early version 4 -- I'm sure it's improved but I wouldn't expect anything drastically different.


It also saved memory with an "odd" number of bits, e.g. 3 bitplanes for 8 colors per pixel.

Yes, I too remember the memory savings being the key aspect of bitplane design choices in the early and mid 80s. The increased granularity in trading off memory for graphics vs program code was the secondary aspect. The downside of bitplanes vs chunky pixel formats was having to write to four different memory addresses to fully change one 16-color pixel.

One case where bitplanes could be faster was high-res bitmapped text. As long as your text was monochrome (all in one bitplane), you could write an 8 pixel wide character with one byte. This was a big deal when it came to scrolling a screen full of bitmapped text.
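A toy Python model of that trade-off (the layout and names are invented for illustration): one 16-color pixel is spread across four bitplanes, so changing it needs four memory writes, while a monochrome glyph row is a single one-byte write covering eight pixels.

```python
# 16 colors -> 4 bitplanes; each plane holds one bit of every pixel.
# One byte per plane row in this tiny example (8 pixels wide).

def set_pixel_planar(planes, x, color):
    """Write one 16-color pixel at column x: touches all four planes."""
    writes = 0
    for p in range(4):
        bit = (color >> p) & 1
        mask = 0x80 >> x                      # leftmost pixel = MSB
        planes[p] = (planes[p] & ~mask & 0xFF) | (mask if bit else 0)
        writes += 1
    return writes

planes = [0, 0, 0, 0]
assert set_pixel_planar(planes, x=3, color=0b1010) == 4   # 4 memory writes
# Reading the color back: gather one bit from each plane.
color = sum(((planes[p] >> (7 - 3)) & 1) << p for p in range(4))
assert color == 0b1010

# Monochrome text is the win: one byte write paints 8 pixels of a glyph
# row in a single plane, vs 8 writes in a 1-byte-per-pixel chunky layout.
glyph_row = 0b00111100
planes[0] = glyph_row   # one write, eight pixels
```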


Memory savings and by extension memory bandwidth for low color modes, as you indirectly point out with the text example.

The early Amigas had a memory bandwidth of ~7MB/s, and the chip RAM bandwidth was shared with the custom chips... Anything that'd reduce needed memory transfers was a big deal.


Remind me, what part of Europe do you live in again?



So... did you get robbed or stabbed in Munich, or are you just racist?



You really did not explain what changed, or what happens there now that isn't "normal" anymore. Treating you as a racist is the easy part; the problem is how you, as a racist, are treating people.

This was pretty much a big media nothingburger.

The rule isn't new, it existed for decades all the way back to the beginnings of the Cold War. Nobody cared back then (neither the people nor the army), nobody should care now (there are no sanctions). I guess some journalist was actually reading through the conscription law (probably as the only person on the planet), stumbled over that passage and turned it into an elephant.


The rule was literally changed in January. No, it didn't exist all the way back to the beginnings of the cold war!

https://www.tagesschau.de/inland/innenpolitik/wehrdienst-aus...

"Die Regelung habe bereits in den Zeiten des Kalten Krieges gegolten "und hatte keine praktische Relevanz", teilte das Ministerium mit. Sie sei auch nicht sanktioniert. Im Gegensatz zur alten Fassung gilt die Genehmigungspflicht nun auch außerhalb des Spannungs- und Verteidigungsfalls."

The rule existed, but apparently they broadened the scope. In any case, even if the rule is ignored nothing happens - so the question is of course why that rule exists in the first place.


When there's a rule with a condition that meant the rule hasn't applied for decades, and then the condition is removed so that the rule always applies, it's no longer the same rule.

Rules that are not enforced are bad as they create space for arbitrariness and corruption. It was a mistake by gov't, opposition & media that this wasn't spotted at the time the law was revised.

The most surprising thing is that the ministry didn't figure this out itself. You'd expect the people drafting laws to consider such things. Thus, it's an indicator of ministerial sloppiness. Not a nothingburger.


It may not be sloppiness. Consider the official statement as shared in this comment: https://news.ycombinator.com/reply?id=47789061. The ministry of defense will issue an 'exception' that generally applies. Presumably, revoking this exception is straightforward and much easier than passing a new law.

If it wasn't sloppiness they would have issued the directive on January 1.

> The most surprising thing is that the ministry didn't figure this out itself. Thus, it's an indicator of ministerial sloppiness.

This I agree with. It might have to do with law changes requiring a two-thirds majority in parliament. They could have communicated earlier and better though.


The way laws work there is that each law either enumerates the penalties itself or the law of penalties enumerates them. So for each law you only have to check two places to know what the penalties are.

In this case, there are none.

