I had a color scheme plugin yanked from my IDE a while back after it went malicious (Material Theme). It's just a bunch of hex codes; how is that even possible? Baffling and disappointing indeed.
> git on the command line is the power tool. The VS Code plugin is the training wheels version.
I don't disagree with your underlying point, but git is perhaps the worst example. It is a horrible tool in several ways. Powerful yes, but it pays too large a price in UX.
I've only ever used git through the CLI as well, but having switched to jujutsu (also CLI) I am not going back. It is quite eye-opening how much simpler git should be for the average user (I realize "average user" is doing some heavy lifting here -- git covers an enormous number of diverse use cases).
What jujutsu-CLI is for me (version control UX that just works) might be VSCode's GUI git integration for other people, or magit, or GitButler, or whatever other GUI or TUI.
Who cares about training wheels? If the real deal is a unicycle with one pedal and no saddle, I will keep using training wheels, thank you very much.
All IDEs, including VSCode, have fuzzy finding for files and for symbols. Between these, I never find myself using the file tree except when it's genuinely the best tool for the job: "what other files are in this directory?", moving and renaming files (which IDEs recognize you doing, adjusting imports for you!), and so on.
I do notice that while this pattern is very fast, I lose the mental map of a code base. Coworkers might take longer to open any individual file but have a much better idea of the repo layout as a whole, which makes them effective in other ways.
I have worked in monorepos for large codebases. In those, file trees are completely useless because you can't fit the repo layout in your head. They are also so large that when I'd ask a coworker using VSCode to open a file, they'd usually spend a few seconds trying to locate it (over remote SSH, even expanding a directory takes a moment to load).
For smaller code bases, I usually read the repo first and I'm able to learn what exists while working through it.
One of the primary things shells are supposed to excel at is file system navigation and manipulation, but the experience is horrible. I can never get `cp`, `rsync`, `mv`, `find` right without looking them up, despite trying to learn. It's way too easy to mess up irreversibly and destructively.
One example is flattening a directory. You accidentally git-cloned one level too deep, into `some/dir/dir/`, and want to flatten its contents into `some/dir/`, deleting the then-empty `some/dir/dir/`. Trivial and reversible in any file manager, needlessly difficult in a shell. I get it wrong all the time.
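For the record, the least-bad incantation I know (a sketch, and it still has a sharp edge):

```sh
# the glob skips dotfiles by default, so a hidden .git/ gets left behind
# and the rmdir then fails -- exactly the kind of silent trap I mean
mv some/dir/dir/* some/dir/ && rmdir some/dir/dir
```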
Similarly, iterating over a list of files, an important yet ostensibly trivial task. `find` is arcane in its invocation and differs between Unix flavors; you will want `xargs`, but its `{}` substitution is also very error-prone. And don't forget `-print0`! And don't you even dare a bare `for f in *.pdf`, it will blow up in more ways than you can count. Also prepare to juggle quotes and escapes and hope you get them right. Further, bash defaults are insane (pray you don't forget `set -euo pipefail`).
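To be concrete, the supposedly correct patterns look something like this (a sketch; GNU vs BSD `find` caveats still apply):

```sh
# NUL-separated so filenames with spaces/newlines survive the pipe
find . -name '*.pdf' -print0 | xargs -0 -I{} cp -- {} /some/dest/

# or staying in pure bash:
shopt -s nullglob               # otherwise an empty match passes the literal '*.pdf'
for f in ./*.pdf; do
    cp -- "$f" /some/dest/      # quotes and -- guard against weird filenames
done
```

Every one of `-print0`, `-0`, `--`, `nullglob` and the quoting exists to paper over a separate footgun.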
How can we be failed by our tools so much, for these use cases? Why aren't these things trivial and safe?
I have trouble gauging the effects of `*`, aka "will I cp/mv the dir itself or all contents of the dir?". Then shell expansion gets in the way, e.g. expanding to too many files ("argument list too long"). I try to use rsync where possible, where an intermittent `.` in paths is significant, as is a trailing slash... and don't forget `-a`! Then I forget whether the same rules apply to cp/mv. Then I cp one directory too deep and need to undo the mess -- no Ctrl+Z!
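The trailing-slash thing alone is a classic (as far as I remember the semantics):

```sh
rsync -a src/ dst/   # copies the *contents* of src into dst
rsync -a src  dst/   # copies the directory src itself, yielding dst/src
```

One character, a completely different result, and no warning either way.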
The widespread use of `rm -rf some-ostensibly-empty-dir/` is also super dangerous, when `rm -r` or even `rmdir` would suffice, but you need to remember those exist. I find it strange there's no widely available (where applicable) `rm`-to-trash, which would be wildly safer and is table stakes for desktop work otherwise.
Then there's `dd`...
I use terminals a lot, but a GUI approach (potentially with pre-operation feedback) plus undo/redo functionality for file system work is just so much easier to work with. As dumb as drag-and-drop with a mouse is, a single typo can't wreck your day.
If you're using dd to mess with filesystems and partitions, then I don't know how much more a GUI will save you. It's an inherently scary task that needs some thinking first. Personally, that stuff always scares me, GUI or not.
What is there to get wrong in your dir-flattening example? I'd `mv` the internal dir out under some other name, say `dir2`, then `rm -rf` the outer dir and rename `dir2` back. Unless there is something wrong with this approach.
I would agree with the rest; I always have to look up `find` and `xargs` syntax.
You've never mis-dragged something in the GUI, or had a flaky mouse button let go randomly mid-drag, leaving your folder "somewhere" that you now have to find?
The things I want to get done on my computer are so much richer than moving files back and forth in a terminal all day. Simple things should be simple. Tools should enable us. Moving files is a means to an end and these commands having so many sharp edges makes me unhappy indeed.
But yes, of course I am an IDE toddler and cannot even tie my shoes without help from an LLM, thanks for reminding me.
Most of the commands you've mentioned have worked fairly well for me. Are you sure you're looking up cp, mv and find every time before using them? Sounds pretty weird.
Terminal emulators are a crutch you kids are spoiled by. In my day we had to create programs on punch cards. Copy a file from one directory to another? fifteen cards full of custom assembler code. Print out a file listing on the teletype? Ten cards... uphill... in the snow... both ways.
LLMs make all those commands trivial to learn. Back in the day we had to study whole man pages meant as reference. Like keeping attention for minutes at a time. The horror!
To some extent, yes. If all you have is two GitHub Actions YAML files, you are not going to reap massive benefits.
I'm a big fan of CUE myself. The benefits compound as you need to output more and more artifacts (= YAML config). Think of several k8s manifests, several GitHub Actions files, e.g. for building across several combinations of OSes, settings, etc.
CUE strikes a really nice balance between being primarily data description rather than a Turing-complete language (e.g. cdk8s can get arbitrarily complex and abstract), reducing boilerplate (having you spell out the common bits once only, and each non-common bit once only) and being type-safe (validation at build/export time, with native import of Go types, JSON Schema and more).
They recently added an LSP which helps close the gap to other ecosystems. For example, cdk8s being TS means it naturally has fantastic IDE support, which CUE has been lacking in. CUE error messages can also be very verbose and unhelpful.
At work, we generate a couple thousand lines of k8s YAML from roughly a tenth of that in CUE. The CUE is commented liberally, with validation imported from native k8s types and sprinkled in by hand where needed otherwise (e.g. we know that for our application the FOO setting needs to be between 5 and 10). The generated YAML is clean, without any indentation and quoting worries. We also generate YAML-in-YAML, i.e. our application takes YAML config, which itself sits inside an outer k8s YAML ConfigMap. YAML-in-YAML is normally an enormous pain and easy to get wrong. In CUE it's just `yaml.Marshal`.
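To illustrate (a hand-wavy sketch with made-up names, not our actual config):

```cue
package example

import "encoding/yaml"

// validation lives right next to the data
appConfig: foo: int & >=5 & <=10
appConfig: foo: 7

configMap: {
	apiVersion: "v1"
	kind:       "ConfigMap"
	metadata: name: "app-config"
	// the inner YAML is rendered by CUE; no manual quoting or indenting
	data: "app.yaml": yaml.Marshal(appConfig)
}
```

`cue export --out yaml` then emits the ConfigMap with the app config embedded as a properly escaped string.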
You get a lot of benefit for a comparatively simple mental model: all your CUE files form just one large document, and for export to YAML it's merged. Any conflicting values and any missing values fail the export. That's it. The mental model of e.g. cdk8s is massively more complex and has unbounded potential for abstraction footguns (being TypeScript). Not to mention CUE is Go and shipped as a single binary; the CUE v0.15.0 you use today will still compile and work 10 years from now.
You can start very simple, with CUE looking not unlike JSON, and add CUE-specific bits from there. You can always rip out the CUE and just keep the generated YAML, or replace CUE with e.g. cdk8s. It's not a one-way door.
The cherry on top is CUE scripts/tasks. In our case we use a CUE script to split the one large document (tens of thousands of lines) into separate files, according to some criteria. This is all defined in CUE as well, meaning I can write ~40 lines of CUE (this has a bit of a learning curve) instead of ~200 lines of cursed, buggy bash.
A similar problem applies to Go, just inverted. Take iteration. The vast majority of use cases for iterating over containers are map, filter, reduce. Go doesn't have these functions. That's very simple! All Go developers are aligned here: just use a for loop. There's no room for "10% of concepts corners"; there's just that one corner.
But for loops get tedious, so people write helper functions. Generic ones today, non-generic in the past. The result is that you have a zoo of iteration-related helper functions all throughout. You'll need to learn those when onboarding to a new code base as well. Go's readability makes this easier, but by definition everything's entirely non-standard.
> The result is that you have a zoo of iteration-related helper functions all throughout. You'll need to learn those when onboarding to a new code base as well.
This is overblown, imo. Now that generics exist, you just define Map(), Filter(), and Reduce() in your internal util package. So yes, a new dev needs to find the util package. But they need to do that anyway.
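For example, such a util package tends to be little more than this (a minimal sketch):

```go
package util

// Map applies f to each element of in and collects the results.
func Map[T, U any](in []T, f func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, f(v))
	}
	return out
}

// Filter keeps only the elements for which keep returns true.
func Filter[T any](in []T, keep func(T) bool) []T {
	var out []T
	for _, v := range in {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

// Reduce folds in into a single value, starting from acc.
func Reduce[T, U any](in []T, acc U, f func(U, T) U) U {
	for _, v := range in {
		acc = f(acc, v)
	}
	return acc
}
```

A few dozen lines, written once; hardly a zoo.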
What’s more, these particular functions don’t spread into the type signatures of other functions. That means a new dev only has to go looking for them when they themselves want to use those functions.
Sure, maybe it's not entirely ideal. But the tone and content of your comment make it sound a zillion times worse than it is.
> This is a non sequitur. Both Rust and Zig and any other language has the ability to end in an exception state.
There are degrees to this though. A panic + unwind in Rust is clean and _safe_, thus preferable to segfaults.
Java and Go are a similar example. Only in the latter can races on multi-word data structures lead to "arbitrary memory corruption" [1]. Even among GC languages there are degrees of memory safety.
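A sketch of the kind of race meant here, if I remember the mechanics right: a Go interface value is two words (type, data), written non-atomically, so a racing reader can observe a torn pair and dispatch one type's method on the other type's data.

```go
package main

type writer interface{ write() }

type small struct{}

func (small) write() {}

type big struct{ buf [1 << 16]byte }

func (big) write() {}

var w writer = small{}

func main() {
	go func() {
		for { // unsynchronized two-word writes
			w = small{}
			w = big{}
		}
	}()
	for {
		w.write() // torn read: small's method may run on big's data
	}
}
```

(Deliberately loops forever; `go run -race` flags it immediately.)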
> A panic + unwind in Rust is clean and _safe_, thus preferable to segfaults
Curious about safety here: Are kernel / cross-thread resources (ex: a mutex/futex/fd) released on unwind (assuming the stack being unwound acquired those)?
But if Rust panics, the entire process is dead, so everything gets reclaimed on exit by the kernel. Total annihilation.
All modern OSes behave this way. When your process starts, it is assigned an address space; it can balloon, but it starts somewhere. When the process ends, that space is reclaimed.
A DB action and audit emission have to run transactionally anyway.
So on cancellation, the transaction times out and nothing is written. Bad but safe.
The problem is the same on other platforms. For example, what if you're on Python and writing to the DB throws an exception? Your app just dies, and the transaction times out. Unfortunate but safe.
If it does not run transactionally you have a problem in any execution scenario.
So, regarding transactions, absolutely you can throw them away on cancellation. But there's an interesting wrinkle here: if you use a connection pool like most users, and you were going to do the ROLLBACK at the end of your future on error, then that ROLLBACK wouldn't run if the future is cancelled! Then future operations reusing the same connection would be stuck in transaction la-la land.
(This is related to the fact that Rust doesn't have async drop — you can't run async code on drop, other than spawning a new task to do the cleanup.)
This is prong 3 of my cancel correctness framework (that the cancellation violates a system property, in this case a cleanup property). The solution here is to ensure the connection is in a pristine state before handing it out the next time it's used.
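Concretely, something like this (a sketch with hypothetical `Pool`/`Conn` types, not any particular crate's API):

```rust
struct Conn;

impl Conn {
    fn in_transaction(&self) -> bool {
        false // placeholder: e.g. track BEGIN/COMMIT, or ask the server
    }
    async fn execute(&mut self, _sql: &str) {}
}

struct Pool;

impl Pool {
    async fn take_idle(&self) -> Conn {
        Conn
    }

    // Checkout, not the (possibly cancelled) previous future, is
    // responsible for restoring a pristine connection.
    async fn acquire(&self) -> Conn {
        let mut conn = self.take_idle().await;
        if conn.in_transaction() {
            conn.execute("ROLLBACK").await;
        }
        conn
    }
}
```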
When I set up n8n last year I was thoroughly confused by the lack of state management. I ended up storing state as JSON in a remote blob "database", a horrible hack.
Node-RED has three levels of state: global state shared across all flows, flow-scoped state visible only within one flow, and node-scoped state/storage tied to a single node.
Any node can access global state, only nodes contained in a flow can access that flow's state, and a node's own state is accessible to that node alone.
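In a function node it looks roughly like this (a sketch; the context/flow/global get-set API is the part I'm sure of):

```javascript
// node-scoped: private to this function node
const seen = context.get("seen") || 0;
context.set("seen", seen + 1);

// flow-scoped: shared with every node on this flow/tab
const cfg = flow.get("config");

// global: shared across all flows
const apiKey = global.get("apiKey");

msg.payload = { seen: seen + 1, cfg: cfg, apiKey: apiKey };
return msg;
```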