Zig 0.15 is pretty stable. The biggest issue I face daily is silent compiler crashes (SIGBUS) on trivial mistakes, e.g. a typo in an import path. I've yet to find out exactly why this [only sometimes] causes such a crash, but they're a real pain to figure out over a large changeset. `zig ast-check` sometimes catches the error; otherwise Claude's pretty good at spotting where I accidentally re-used a variable name (again, 90% of the time I do that, it's an easy error, but the other 10%, I get a message-less compiler crash). It sounds like the changes in the OP might be specifically addressing these types of issues.
Also, my .zig-cache is currently at 173GB, which causes some issues on the small Linux ARM VPS I test with.
As for upgrades: I upgraded lightpanda to 0.14 and then 0.15, and it was fine. I think for lightpanda, the 0.16 changes might not be too bad, with the only potential issue coming from our use of libcurl and our small websocket server (for CDP connections). Those layers are relatively isolated / abstracted, so I'm hopeful.
As a library developer, I've given up following / tracking 0.16. For one, the changes don't resonate with me, and for another, it's changing far too fast. I don't think anyone expects 0.16 support in a library right now. I've gotten PRs for my "dev" branches from a few brave souls and everyone seems happy with that arrangement.
> The biggest issue I face daily is silent compiler crashes (SIGBUS) on trivial mistakes, e.g. a typo in an import path.
I don't use zig. My experience has been that caches themselves are sources of bugs (not talking about zig only, but in general). Clearing all relevant caches occasionally is useful when you're experiencing weird bugs.
I don't know why I was downvoted here. One day, I was experiencing weird compilation errors, and clearing the `ccache` C/C++ compiler cache helped get past the problem. Yes, I could have investigated what the issue was and whether ccache had a bug, but sometimes you don't have the luxury of investigating everything your toolchain throws at you.
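For anyone hitting the same thing: ccache has a built-in flag for exactly this, so no manual directory deletion is needed (a minimal sketch; assumes ccache is installed and configured in your environment):

```shell
# wipe ccache's entire cache; -C / --clear is ccache's own flag for this
ccache -C

# optionally zero the hit/miss statistics too, so later diagnosis starts clean
ccache -z
```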
Never mind that the previous poster’s insight about caches is correct.
Zig has had caching bugs/issues/limitations that could be worked around by clearing the cache. (Has had, and more than likely still has, and will have.)
> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.
> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.
> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space
Like what's going on? This doesn't seem normal at all. I also read somewhere that zig stores every version of your binary as well? Can you shed some light on why it works like this in zigland?
AFAIK garbage collection is basically not implemented yet. I myself set `ZIG_LOCAL_CACHE_DIR=~/.cache/zig` so I only have to nuke a single directory whenever I feel like it.
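For reference, that setup is just an environment variable plus an occasional delete (a minimal sketch, assuming a POSIX shell; `ZIG_LOCAL_CACHE_DIR` overrides the per-project `.zig-cache` location):

```shell
# point every project's local cache at one shared directory
export ZIG_LOCAL_CACHE_DIR="$HOME/.cache/zig"

# when disk space runs away, there's only one place to nuke
rm -rf "$HOME/.cache/zig"
```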
20 seconds each time. Last time I tried to enable incremental builds, they weren't working for us. It was a while ago, but I think it had to do with something in our v8 bridge.
The diagnostic struct could contain a caller-provided allocator field for the callee to use, plus a deinit() function on the diagnostic struct that frees everything.
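A minimal sketch of that shape, in illustrative Zig (all names here are assumptions, not an existing API):

```zig
const std = @import("std");

// The caller seeds the diagnostic with an allocator; the callee
// allocates any detail it wants into it; deinit() frees everything.
const Diagnostic = struct {
    allocator: std.mem.Allocator,
    message: ?[]u8 = null,

    pub fn setMessage(self: *Diagnostic, comptime fmt: []const u8, args: anytype) !void {
        self.message = try std.fmt.allocPrint(self.allocator, fmt, args);
    }

    pub fn deinit(self: *Diagnostic) void {
        if (self.message) |m| self.allocator.free(m);
        self.message = null;
    }
};
```

The nice property is that ownership is unambiguous: whoever constructed the Diagnostic calls deinit(), regardless of what the callee allocated into it.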
Claude Code has those "thoughts" you say it never will. In plan mode, it isn't uncommon for it to ask: do you want to do this the quick and simple way, or would you prefer to "extract this code into a reusable component"? It will also back out and say "Actually, this is getting messy, 'boss', what do you think?"
I could just be lucky that I work in a field with a thorough specification and numerous reference implementations.
I agree that Claude does this stuff. I also think the Chinese menus of options it provides are weak in their imagination, which means that for thoroughly specified problem spaces with reference implementations you're in good shape, but if you want to come up with a novel system, experience is required, otherwise you will end up in design hell. I think the danger is in juniors thinking the Chinese menu of options provided are "good" options in the first place. Simply because they are coherent does not mean they are good, and the "a little of this, a little of that" game of tradeoffs during design is lost.
I recently asked Claude to make some kind of simple data structure and it responded with something like "You already have an abstraction very similar to this in SourceCodeAbc.cpp line 123. It would be trivial to refactor this class to be more generic. Should I?" I was pretty blown away. It was like a first glimpse of an LLM play-acting as someone more senior and thoughtful than the usual "cocaine-fueled intern."
Fair enough. That’s a more accurate way of putting it. Corporations are shields for the ruling elite to get away with pretty much anything. And in addition to that the corporation can be sued by shareholders if it doesn’t maximize shareholder value by externalizing costs. It’s the kind of policy that would be very popular among cancer cells.
Decompress's Reader shouldn't depend on the size of the buffer of the writer passed in to its "stream" implementation.
So that's a bug in the Decompress Reader implementation.
The article confuses a bug in a specific Reader implementation with a problem with the Writer interface generally.
(If a reader really wants to impose some chunking limitation for some reason, then it should return an error in the invalid case, not go into an infinite loop.)
So how would the Decompress Reader be implemented correctly? Should it use its own buffer that is guaranteed to be large enough? If so, how would it allocate that buffer?
At the very least, the API of the Readers and Writers lends itself to implementations that have this kind of bug, where they depend on the buffer being a certain size.
> So how would the Decompress Reader be implemented correctly? Should it use its own buffer that is guaranteed to be large enough?
yes
> If so, how would it allocate that buffer?
as it sees fit. or, it can offer mechanisms for the caller to provide pre-allocated buffer(s). in any case the point is that this detail can't be the responsibility of the producer to satisfy, unless that requirement is specified somehow explicitly
in general the new zig io interfaces conflate behavior (read/write) with implementation (buffer size(s))
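A hedged sketch of that "own buffer" approach in illustrative Zig (the names and shapes here are assumptions, not the actual std.Io interfaces): the decompressor owns a window sized for the format, so the caller's buffer size only affects how output is chunked, never correctness.

```zig
const std = @import("std");

const Decompress = struct {
    // Format-mandated window, e.g. 1 << 15 bytes for raw deflate.
    window: []u8,
    // How many decoded bytes are currently waiting in the window.
    pending: usize = 0,

    fn init(allocator: std.mem.Allocator, window_len: usize) !Decompress {
        return .{ .window = try allocator.alloc(u8, window_len) };
    }

    fn deinit(self: *Decompress, allocator: std.mem.Allocator) void {
        allocator.free(self.window);
    }

    // Copy out whatever fits; a small `dest` just means more calls,
    // never an infinite loop or an assertion failure.
    fn read(self: *Decompress, dest: []u8) usize {
        const n = @min(dest.len, self.pending);
        @memcpy(dest[0..n], self.window[0..n]);
        // Shift any remainder to the front of the window.
        std.mem.copyForwards(u8, self.window[0 .. self.pending - n], self.window[n..self.pending]);
        self.pending -= n;
        return n;
    }
};
```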
I ... don't disagree with you. Thanks. It helps my understanding.
I know this is moving the goalpost, but it's still a shame that it [obviously] has to be a runtime error. Practically speaking, I still think it leaves a lot of friction and edge cases. But what you say makes sense: it doesn't have to be unsafe.
Makes me curious why they asserted instead of erroring in the first place (and I don't think that's exclusive to the zstd implementation right now).
Blog posts are collaboration (1). I did get the sense that Andrew doesn't see it that way. (And for this post in particular, and writegate in general, I have been discussing it on the Discord channel. I know that isn't an official channel.)
My reasons for not engaging more directly don't have anything to do with my confidence / knowledge. They are personal. The linked issues, which I was aware of, are only tangentially related. And even if they specifically addressed my concerns, I don't see how writing about it is anything but useful.
But I also got the sense that more direct collaboration is welcome and could be appreciated.
(1) - I'm the author of The Little MongoDB Book, The Little Redis Book, The Little Go Book, etc... I've always felt that the appeal of my writing is that I'm an average programmer. I run into the same problems, and struggle to understand the same things that many programmers do. When I write, I'm able to write from that perspective.
No matter how inclusive a community you have, there'll always be some opinions and perspectives which get drowned out. It can be intimidating to say "I don't understand", or "it's too complicated" or, god forbid, "I think this is a bad design"; especially when the experts are saying the opposite. I'm old enough that I see looking the fool as both a learning and mentoring experience. If saying "io.Reader is too complicated" saves someone else the embarrassment of saying it, or the shame of feeling it, or gives them a reference to express their own thoughts, I'm a happy blogger.
I don't even like Zig but I read your blog for the low level technical aspects. I agree completely that blog posts are collaborative. I read all kinds of blogs that talk about how computers work. I can't say if it brings value to the Zig people, but it certainly brings value to me regardless!