This is cool, don't get me wrong, but surely overcomplicated? Why not just record audio to disk the whole night and then eyeball the waveform for loudness spikes? If you just don't connect it to any network at all, there's no data breach risk (or am I misunderstanding the justification for the noise-detection toggle thing?).
Thx for the feedback about the hero image. I just removed it. (you weren’t the only one pointing it out)
The intention was to have something less detailed than the screenshot in the post.
About the other thing: yes, this would have worked for a night or so, but I wanted to be able to go back and forth between nights and compare. I also had concerns about SD-card durability and storage capacity. Still, an hour into letting the coding agent do its thing, I was impressed by the result, so more and more ideas popped into my head.
As you add more code between the "open" and the "close", you introduce more opportunities for control flow to accidentally skip the "close" (leak), or call it more than once (double-free). It forces you to use single-return style, which can make some things very awkward to express.
You're basically doing "defer"-style cleanup manually; you may as well just use the real "defer" if your compiler supports it. It's supposed to be official in a future standard, too.
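If it helps, here's a minimal sketch of that in C, using the GCC/Clang cleanup attribute as the "real defer" today (the wrapper and macro names here are made up, and the eventual standardized defer syntax will look different):

    #include <stdio.h>

    /* runs automatically when the annotated variable leaves scope */
    static void close_file(FILE **fp) {
        if (*fp)
            fclose(*fp);
    }
    #define SCOPED_FILE __attribute__((cleanup(close_file))) FILE *

    int count_lines(const char *path) {
        SCOPED_FILE f = fopen(path, "r");
        if (!f)
            return -1;        /* early return is fine: cleanup still runs */

        int lines = 0, c;
        while ((c = fgetc(f)) != EOF)
            if (c == '\n')
                lines++;
        return lines;         /* no explicit fclose, no single-return dance */
    }

However much code accumulates between the open and the eventual return, there is exactly one close and control flow can't skip it or run it twice.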
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.
Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.
Moreover:
>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.
I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.
Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.
I absolutely cannot stand people who recite this quote but have no knowledge of the sentences that come before or after it: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
> In many cases, simpler code is faster, and fast code makes for simpler systems. (...)
I wholeheartedly agree with you here. You mentioned a few architectural/backend issues that emerge from bad performance and introduce unnecessary complexity.
But this also happens in UI: optimistic updates, client-side caching, bundling/transpiling, code splitting, etc.
This is what happens when people always answer performance problems by adding stuff rather than removing stuff.
> I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
Just a little historical context will tell you what Knuth was talking about.
Compilers in the era of Knuth were extremely dumb. You didn't get things like automatic method inlining or loop unrolling; you had to do that stuff by hand. And yes, it would give you faster code, but it also made that code uglier.
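To make that concrete, here's a hypothetical sketch (names made up) of the kind of hand-unrolling that era demanded; an optimizing compiler can now typically derive the second form from the first on its own:

    #include <stddef.h>

    /* what you'd like to write */
    long sum_plain(const int *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* what you had to write for speed: four elements per iteration,
       plus a tail loop for the leftovers */
    long sum_unrolled(const int *a, size_t n) {
        long s = 0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4)
            s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
        for (; i < n; i++)
            s += a[i];
        return s;
    }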
The modern equivalent would be seeing code working with floating point and jumping to SIMD intrinsics or inline assembly because the compiler did a bad job (or you presume it did) with the floating-point math.
That is such a rare case that I find the premature optimization quote to always be wrong when deployed. It always seems to be an excuse to deploy linear searches and to avoid using (or learning?) language data structures which solve problems very cleanly in less code and much less time (and sometimes with less memory).
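The gap is visible even in plain C, which has almost no built-in data structures; here's a hedged sketch (names made up) where the only change is sorting once and letting the standard library do the lookups:

    #include <stdlib.h>
    #include <stdbool.h>
    #include <stddef.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* the habit being complained about: O(n) per lookup,
       painful when this sits inside another loop */
    bool contains_linear(const int *tab, size_t n, int id) {
        for (size_t i = 0; i < n; i++)
            if (tab[i] == id)
                return true;
        return false;
    }

    /* assumes the caller has sorted tab once with
       qsort(tab, n, sizeof *tab, cmp_int); each lookup is then O(log n) */
    bool contains_sorted(const int *tab, size_t n, int id) {
        return bsearch(&id, tab, n, sizeof *tab, cmp_int) != NULL;
    }

In languages with real sets and maps the fast version is also the shorter one, which is the point.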
Yes, this is (more or less) how we regenerate the system state, when necessary. But keep in mind that the fuzzing target is a network of containers, plus a whole Linux userland, plus the kernel. And these workloads often run for many minutes in each timeline. Regenerating the entire state from t=0 would be far too computationally intensive on the "read path", when all you want are the logs leading up to some event. We only do it on the "write path", when there's a need to interact with the system by creating new branching timelines. And even then, we have some smart snapshotting so that you're not always paying the full time cost from t=0; we trade off more memory usage for lower latency.
Oh one other thing: the "fuzzer" component itself is not fully deterministic. It can't be, because it also has to forward arbitrary user input into the simulation component (which is deterministic). If you decide to rewind to some moment and run a shell command, that's an input which can't be recovered from a fixed random seed. So in practice we explicitly store all the inputs that were fed in.
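For anyone trying to picture the shape of that (this is emphatically not their system, just a toy sketch with invented names): a deterministic step function, periodic snapshots so restores don't replay from t=0, and an explicit log of external inputs because those can't be re-derived from a seed.

    #include <stdint.h>

    #define MAX_T          1000
    #define SNAPSHOT_EVERY 100    /* trade memory for restore latency */

    typedef struct { uint64_t rng; long counter; } State;

    static State snapshots[MAX_T / SNAPSHOT_EVERY + 1];
    static int   inputs[MAX_T];   /* external/user inputs, stored explicitly */

    /* deterministic: same state + same input always yields the same result */
    static void step(State *s, int external_input) {
        s->rng = s->rng * 6364136223846793005ULL + 1442695040888963407ULL;
        s->counter += (long)(s->rng >> 60) + external_input;
    }

    /* "write path": advance the simulation, recording the input and
       taking a snapshot every SNAPSHOT_EVERY steps */
    static void advance(State *s, int t, int external_input) {
        if (t % SNAPSHOT_EVERY == 0)
            snapshots[t / SNAPSHOT_EVERY] = *s;
        inputs[t] = external_input;
        step(s, external_input);
    }

    /* "read path": rebuild the state at time t from the nearest
       snapshot, replaying only the recorded inputs since then */
    static State restore(int t) {
        int base = t / SNAPSHOT_EVERY;
        State s = snapshots[base];
        for (int i = base * SNAPSHOT_EVERY; i < t; i++)
            step(&s, inputs[i]);
        return s;
    }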
The problem is that it encourages people to use Excel for things that should never be in a spreadsheet in the first place. I mean, if you're reaching for VBA, building complex PowerQuery pipelines, and writing nested LAMBDA functions just to process your data, imho you have outgrown Excel. Just because you can build an entire solution in Excel because you already know the interface doesn't mean you should...
Also, don't get me started on the newer functions such as XLOOKUP and Dynamic Arrays... Relational data belongs in a relational database. If you are joining tables and filtering massive arrays, you should be using standard SQL; it makes it so much easier to troubleshoot long term.
>The only worthwhile change in desktop environments since the early 2000s has been search as you type launchers.
Add to that: Unicode handling, support for bigger displays, mixed DPI, networking and device discovery being much less of a faff, better sound mixing, and much improved power management and sleep modes. And some other things I'm forgetting.
There are some people who would exclude all of those as enhancements because they don't care about them (yes, even Unicode; I've seen some people on here argue against supporting anything other than ASCII).
Unicode is a fair point; I do speak a language that has a couple of letters that are affected, and of course many, many more people across the world are affected far more. I didn't really consider that part of the desktop environment, though I could see the argument for why it might be (the file manager, for example, will need to deal with it, as would translations in the menus, etc.).
I was primarily thinking about enhancements to the user interactions, things you see on a day-to-day basis. You really don't see whether you're using Unicode, ASCII, one of the ISO encodings, Shift JIS, etc. (except when transferring information between systems).
Also, the AI-generated hero image looks vile.