This is quite insensitive and a poor analogy. Telling someone who's struggling to "Get your head in the game" is unhelpful in the extreme. It's like telling someone who's drowning to "Just swim better".
Free soloing is a high-risk pursuit. A free soloist goes on occasional expeditions where they solo a route, but before then, they train with others, have spotters and partners during route practice, and whole communities to provide help, support and emergency contact should things go wrong. Alex Honnold doesn't just rock up to a mountain (if you'll forgive the pun) and solo it every day for months on end.
It's very difficult to just put everything aside and focus on a singular task for the amount of time it takes to complete a significant software project. Once the initial enthusiasm wanes, keeping going can be a real challenge when you're on your own. The OP's concerns are absolutely valid and deserve respect.
What you just said about free soloing is perfectly applicable to building something solo. Except unlike climbing solo, building a product is easier because you can get support the whole way. Also, you won't die if you fail.
You should never expect enthusiasm to last. How many people are still as enthusiastic about going to the gym years in as they were when they first started out? These are all the things people need to realize before embarking on these solo journeys.
You may not like it, but if a person was drowning and they actually do start to “just swim better”, they will survive. Just do what you need to do.
Who would look at a drowning person and say "just swim better and you'll survive"?
A drowning person can't "just swim better". That's exactly why they're drowning in the first place.
What you're saying is true but not helpful. There's a whole process that goes into "just do it" that is weirdly unpopular to talk about. You have to become capable first, and that only happens in community where you can safely make a lot of mistakes.
I think the key here is the ability to focus on the privacy-first nature of local LLMs. A cloud-based service will always be more powerful (and markedly so), but Apple is very cautious about pursuing cloud-based solutions when user data is involved - privacy is a selling point of their products, after all. This is a double-edged sword, as you get to sell your services as privacy-friendly, but your offerings can be significantly less capable than your competitors' (see the iOS messages summary debacle, for example). The advantage of waiting is that smaller AI models are becoming much more powerful all the time.
Of course, Microsoft is also at this with its Copilot+ programme for laptops, where the onboard Neural Processing Unit has to hit a particular performance threshold (40 TOPS) to qualify. This lets you do local AI things like content-aware image snipping, text summaries and...er, Recall.
As to whether Apple will come out of this looking good or not, I think they're currently regretting rolling out a shitty initial AI offering, and will get better with the next release. It'll be like Apple Maps. Or the butterfly keyboard. Or any number of other broken version 1 Apple things.
An interesting question is whether Apple Intelligence can be cancelled or pared back now that the landscape is so AI-dominated, i.e. will the lack of AI offerings be seen as a competitive disadvantage, or are people so sick of AI by now that it isn't a factor?
The reality is more akin to me agreeing to have my car covered in your bumper stickers (and anyone else's, as long as they abide by my particular set of bumper sticker rules), and then having to remove some without notice at a later date because the chap at the gas station got offended and wouldn't serve me any more.
And there being only one gas station.
And the guy having not objected to the exact same bumper stickers for the last 15 years.
Visa have said this is because of 'enhanced risk' caused by this content, but they've been fine with it up to now. It's only because of the Australian group's censorious actions that they've decided to act. That's the frustrating thing, at least to me.
Whether they have the legal right to do it or not, it's still a dick move.
What you gain in performance, you somewhat sacrifice in flexibility, at least in comparison with OpenEXR.
OpenEXR was designed for modularity, allowing efficient access to individual layers or channels. This is crucial in VFX workflows where only specific passes (like normals or diffuse) might be needed at any one time. This access is possible because EXR stores channels separately and supports tiled or scanline-based access.
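To make that concrete, pulling a single pass out of a multi-layer EXR with the OpenEXR C++ API looks roughly like this. The "diffuse.R" channel name in the usage note is a made-up example; the base-pointer idiom follows the OpenEXR docs:

    #include <ImfInputFile.h>
    #include <ImfFrameBuffer.h>
    #include <ImathBox.h>
    #include <cstddef>
    #include <vector>

    // Read one channel from a multi-layer EXR. The reader only fills
    // the slices you register, so unrelated passes are never copied out.
    std::vector<float> read_one_channel(const char* path, const char* channel)
    {
        Imf::InputFile file(path);
        Imath::Box2i dw = file.header().dataWindow();
        int width  = dw.max.x - dw.min.x + 1;
        int height = dw.max.y - dw.min.y + 1;

        std::vector<float> pixels(std::size_t(width) * height);

        // Base pointer offset per the OpenEXR docs' slice idiom.
        Imf::FrameBuffer fb;
        fb.insert(channel,
                  Imf::Slice(Imf::FLOAT,
                             (char*)(pixels.data() - dw.min.x
                                     - std::ptrdiff_t(dw.min.y) * width),
                             sizeof(float),            // xStride
                             sizeof(float) * width));  // yStride
        file.setFrameBuffer(fb);
        file.readPixels(dw.min.y, dw.max.y);
        return pixels;
    }

So a compositor wanting only the diffuse pass can call read_one_channel("render.exr", "diffuse.R") and never touch the other layers.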
The custom compression method Aras proposes - using meshoptimizer on 16K-pixel chunks, followed by zstd as a second compression step - achieves significantly faster decompression and better compression speeds than EXR ZIP, HTJ2K, or JPEG-XL lossless. However, it trades off random access and requires decompressing the entire image at once, which increases memory usage. Individual frames for a VFX production can be multiple gigabytes (e.g. dozens of 32-bit layers at 4K resolution).
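If I've read the post right, the encode side has roughly the following shape: meshoptimizer's vertex codec as a byte-filtering pre-pass, then zstd over the result. The 16-byte RGBA pixel layout and exact chunking below are my assumptions for illustration, not Aras's actual code:

    #include <cstdint>
    #include <vector>
    #include <meshoptimizer.h>
    #include <zstd.h>

    // Two-stage chunk compressor: treat each pixel as a 16-byte "vertex"
    // for meshopt's codec, then run zstd over the codec output. A chunk
    // here would hold up to 16K pixels.
    std::vector<uint8_t> compress_chunk(const float* pixels, size_t pixel_count)
    {
        const size_t kPixelSize = 4 * sizeof(float); // RGBA

        std::vector<uint8_t> stage1(
            meshopt_encodeVertexBufferBound(pixel_count, kPixelSize));
        size_t stage1_size = meshopt_encodeVertexBuffer(
            stage1.data(), stage1.size(), pixels, pixel_count, kPixelSize);

        std::vector<uint8_t> out(ZSTD_compressBound(stage1_size));
        size_t out_size = ZSTD_compress(out.data(), out.size(),
                                        stage1.data(), stage1_size,
                                        /*level=*/1); // check ZSTD_isError in real code
        out.resize(out_size);
        return out;
    }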
The author's proposal is still compelling, and I wonder if a variant could find its way into some sort of archival format.
(author here) I think yes and no -- while it is true that the "MOP" quick test I tried does not allow accessing/decompressing individual EXR channels, it does allow access to "chunks" of the image. Unlike, say, EXR ZIP, which splits the image into 16-scanline chunks where each is independent, this splits it into 16K-pixel chunks that are completely independent from each other. So you can access a chunk without decompressing the whole image.
That said, if someone were to investigate ideas like this further, then yes, making "layers" within EXR decompressable independently would be a thing to look at. Making individual "channels" independent, perhaps not so much; it is very likely that if someone needs, say, the "indirect specular" layer, then they need all the channels inside of it (R, G, B).
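Roughly, per-chunk access under a scheme like this would look as follows -- note the per-chunk index is hypothetical; my quick test doesn't define a container format:

    #include <cstdint>
    #include <fstream>
    #include <vector>
    #include <meshoptimizer.h>
    #include <zstd.h>

    // Hypothetical index entry describing where one compressed chunk lives.
    struct ChunkEntry {
        uint64_t offset;          // byte offset in the file
        uint64_t compressed_size; // zstd output size
        uint64_t stage1_size;     // meshopt codec output size (pre-zstd)
        uint64_t pixel_count;     // pixels in this chunk (up to 16K)
    };

    // Decode exactly one chunk; nothing else is read or decompressed.
    std::vector<float> read_chunk(std::ifstream& file, const ChunkEntry& e)
    {
        const size_t kPixelSize = 4 * sizeof(float); // RGBA

        std::vector<uint8_t> compressed(e.compressed_size);
        file.seekg(e.offset);
        file.read(reinterpret_cast<char*>(compressed.data()), compressed.size());

        std::vector<uint8_t> stage1(e.stage1_size);
        ZSTD_decompress(stage1.data(), stage1.size(),
                        compressed.data(), compressed.size());

        std::vector<float> pixels(e.pixel_count * 4);
        meshopt_decodeVertexBuffer(pixels.data(), e.pixel_count, kPixelSize,
                                   stage1.data(), stage1.size());
        return pixels;
    }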
There seems to be a mistaken belief that having an AI (or indeed someone else) help you achieve a task means you aren't learning anything. This is reductionist. I suggest instead that it's about degrees of autonomy. The person you're responding to made a choice to get the AI to help integrate a library. They chose NOT to have the AI edit the files itself; rather, they spent time reading through the changes, understanding the integration points, and tweaking the code to make it their own. That is much different to vibe coding.
I do a similar loop with my use of AI - I will upload code to Gemini 2.5 Pro, talk through options and assumptions, and maybe get it to write some or all of the next step, or to try out different approaches to a refactor. Integrating any code back into the original source is never copy-and-paste, and that's where the learning is.

For example, I added Dexie (a library/wrapper for accessing IndexedDB) to a browser extension project the other day, and the AI helped me get started with a minimal amount of initial knowledge, yet I learned a lot about Dexie and have been able to expand upon the code myself since. If I were on my own, I would probably have barrelled ahead and just used IndexedDB directly, resulting in a lot more boilerplate code and time spent doing busywork. It's this sort of friction reduction that I find most liberating about AI. Trying out a new library isn't a multi-hour slog; instead, you can sample it and possibly reject it as unsuitable almost immediately, without having to waste a lot of time on R&D. In my case, I didn't learn 'raw' IndexedDB, but I got the job done with a library offering a more suitable level of abstraction, and saved hours in the process.
This isn't lazy or giving up the opportunity to learn, it's simply optimising your time.
The "not invented here" syndrome is something I kindly suggest you examine, as you may find you are actually limiting your own innovation by rejecting everything that you can't do yourself.
It's not reductionist, it's a fact. If you, instead of learning Python, ask an LLM to code you something in Python, you won't learn a line of Python in the process - even if you read the produced code from beginning to end. Because (and honestly I'm surprised I have to point this out, here of all places) you learn by writing code, not by reading code.
I encourage you to try this yourself and see how you feel.
Recently I used an LLM to help me build a small application in Rust, having never used it before (though I had a few years of high performance C++ experience).
The LLM wrote most of the code, but it was no more than ~100 lines at a time, then I’d tweak, insert, commit, plan the next feature. I hand-wrote very little, but I was extremely involved in the design and layout of the app.
Without question, I learned a lot about Rust. I used tokio's async runtime, its mpsc channels, and streams to make a high-performance crawler that worked really well for my use case.
If I needed to write Rust without an LLM now, I believe I could do it - though it would be slower and harder.
There’s definitely a “turn my brain off and LLM for me” way to use these tools, but it is reductive to state that ALL usage of such tools is like this.
Of course you have learned a lot about Rust. What you haven't learned is to program in Rust. Try, a month from now, to write that application in Rust from scratch, without any LLM help. If you can, then you truly learned to program in Rust. If you can't, then what you learned is just generic trivia about Rust.
I think your project is at far too early a stage to share. You are currently literally using 'fs::recursive_directory_iterator()' to search one file at a time on the selected drives. You aren't even using multi-threading for solid-state volumes or caching the file system details during iteration, meaning every search is going to take several seconds on even modestly sized SSDs. I can't believe this is faster than a comparable Windows 11 search, as you claim on your GitHub - I suspect the Windows search was for a non-indexed location and/or from a cold start, meaning the file system entries weren't already cached by the OS. (By the way, if you're going to make claims like this, you should publish reproducible benchmarks so people can test your methodology.)
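To illustrate the multi-threading point, here's a minimal sketch of a parallel walk using only the standard library (C++17): a shared queue of directories fed to a worker pool. The matching predicate and termination logic are deliberately simplified, and this is still nowhere near what an MFT-reading indexer does, but it's the baseline I'd expect:

    #include <condition_variable>
    #include <deque>
    #include <filesystem>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    namespace fs = std::filesystem;

    // Minimal parallel walk: a shared queue of directories and N workers.
    // skip_permission_denied avoids dying on protected folders.
    std::vector<fs::path> parallel_find(const fs::path& root,
                                        const std::wstring& needle)
    {
        std::deque<fs::path> queue{root};
        std::vector<fs::path> results;
        std::mutex m;
        std::condition_variable cv;
        size_t busy = 0;
        bool done = false;

        auto worker = [&] {
            for (;;) {
                fs::path dir;
                {
                    std::unique_lock lock(m);
                    cv.wait(lock, [&] { return !queue.empty() || done; });
                    if (queue.empty()) return; // done flag set, nothing left
                    dir = std::move(queue.front());
                    queue.pop_front();
                    ++busy;
                }
                std::error_code ec;
                for (fs::directory_iterator it(dir,
                         fs::directory_options::skip_permission_denied, ec);
                     !ec && it != fs::directory_iterator(); it.increment(ec)) {
                    std::error_code entry_ec;
                    bool is_dir = it->is_directory(entry_ec) && !entry_ec;
                    std::wstring name = it->path().filename().wstring();
                    std::lock_guard lock(m);
                    if (is_dir) {
                        queue.push_back(it->path());
                        cv.notify_one();
                    } else if (name.find(needle) != std::wstring::npos) {
                        results.push_back(it->path());
                    }
                }
                std::lock_guard lock(m);
                if (--busy == 0 && queue.empty()) {
                    done = true; // no work left anywhere: wake everyone to exit
                    cv.notify_all();
                }
            }
        };

        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
        for (auto& t : pool) t.join();
        return results;
    }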
You also say that you want a solution more integrated with Windows, yet your UI is in SFML and your screenshots look nothing like a native Windows application.
Your project website doesn't render correctly on non-wide screens.
You are using a font which won't render many Unicode filenames correctly, e.g. Japanese or Chinese characters.
You have given no thought to accessibility or localisation.
Most of the time your app will run as non-administrator, where it will be unable to index NTFS properties - an OS-level restriction. You will face the same problem when you attempt to read those properties, and the same goes for accessing protected folders - currently you just skip them.
You mention that you also want to feature file encryption and file transfer features in the same application. You are taking on too much here; each of the three features (file search, file transfer and file/folder encryption) is challenging and could take months to years to develop on its own. There are already accomplished apps satisfying these needs. Why would anyone use your solution over 7-zip?
You have no tests. If you want to feature encryption or file transfer, you will need an extensive test suite. Users will not be happy if your tool deletes or mangles their files.
Looking at your "Da-file-encrypter" repo, you don't seem to have any knowledge of how to actually encrypt files - you are XOR-ing the data against the bytes of a key that is derived from the same bytes every time, which is trivially breakable (a single known plaintext recovers the key). Please NEVER ROLL YOUR OWN encryption strategy. Even the experts get it wrong. Instead, use a proper cryptography library. If you take anything away from this, please let it be that - never implement your own encryption.
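To point you somewhere concrete rather than just saying "don't": with libsodium, password-based authenticated encryption is a handful of calls. A minimal whole-file sketch - a real tool would stream large files with crypto_secretstream, and the output layout here is just illustrative:

    #include <sodium.h>
    #include <algorithm>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Derive a key from the password with Argon2id, then encrypt with an
    // authenticated cipher. Fresh random salt and nonce for every file.
    std::vector<unsigned char> encrypt(const std::vector<unsigned char>& plain,
                                       const std::string& password)
    {
        if (sodium_init() < 0) throw std::runtime_error("libsodium init failed");

        unsigned char salt[crypto_pwhash_SALTBYTES];
        unsigned char nonce[crypto_secretbox_NONCEBYTES];
        unsigned char key[crypto_secretbox_KEYBYTES];
        randombytes_buf(salt, sizeof salt);
        randombytes_buf(nonce, sizeof nonce);

        if (crypto_pwhash(key, sizeof key, password.data(), password.size(),
                          salt, crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE,
                          crypto_pwhash_ALG_DEFAULT) != 0)
            throw std::runtime_error("key derivation failed");

        // Illustrative layout: salt || nonce || ciphertext+MAC.
        std::vector<unsigned char> out(sizeof salt + sizeof nonce +
                                       plain.size() + crypto_secretbox_MACBYTES);
        std::copy(salt, salt + sizeof salt, out.begin());
        std::copy(nonce, nonce + sizeof nonce, out.begin() + sizeof salt);
        crypto_secretbox_easy(out.data() + sizeof salt + sizeof nonce,
                              plain.data(), plain.size(), nonce, key);
        return out;
    }

Note the MAC: if anyone tampers with the ciphertext, decryption fails loudly instead of silently producing garbage - exactly the property your users need before trusting a tool with their files.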
Voidtools' Everything has an API and people have used that to integrate with launchers like PowerToys Run. It may be better to use the API and to add the features you think it is missing. It is very unlikely you will be able to beat its performance.
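For a flavour of what building on it looks like (function names are from voidtools' published SDK header - double-check the docs rather than trusting my recollection of the exact signatures):

    #include <windows.h>
    #include <string>
    #include <vector>
    #include "Everything.h" // voidtools SDK; link against Everything64.dll

    // Query a running Everything instance over its IPC interface.
    std::vector<std::wstring> everything_search(const std::wstring& query)
    {
        Everything_SetSearchW(query.c_str());
        Everything_QueryW(TRUE); // block until the live index answers

        std::vector<std::wstring> paths;
        DWORD n = Everything_GetNumResults();
        for (DWORD i = 0; i < n; ++i) {
            wchar_t buf[MAX_PATH];
            Everything_GetResultFullPathNameW(i, buf, MAX_PATH);
            paths.emplace_back(buf);
        }
        return paths;
    }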
Judging by your licence and the Donate button on your website, you seem to have recently switched licences and want to make money out of this. In my personal opinion, that's not a good look for a project in this state.
The problem is that this is the first time any of us are hearing of these inefficiencies. Were these concerns raised with the author's state representative? The oversight committee? Why was the press not informed if lives were at risk? Why was pressure not heaped upon senior management and the powers that be to effect actual change years ago when the problems were first observed? Merely griping about how frustrating your job was categorically does no fucking good, and neither does writing about it after the fact. You may think you're fighting the good fight, but this isn't a grassroots protest if the only people you were complaining to were your manager and/or your colleagues.
Yes, inefficiency and bureaucracy suck. No, the answer is not to scrap it all without knowing what the hell you're doing and "just wing it", which is exactly what Musk and co are doing. Strangely enough, the solution is probably a compromise. Compromise takes two sides actually talking to each other, with people familiar with the matter present, and an acceptance that the goal is to make the process better without negatively impacting important things like security, safety and data integrity.
Please forgive the tone of this message, but I can't help but wonder how many thousands of people are going to die because of the effective abolishment of USAID and other programmes.
If it's your first time hearing about it, you haven't been paying attention. The healthcare.gov debacle was headline news, and the USDS that DOGE subsumed was established to address these problems but was ineffectual. The American people voted for results, for better or worse.
To be clear, I meant the specific issues the OP was talking about in the article, not the overall inefficiencies. I was speaking to the fact that the author did not seem to have raised the matters with anyone actually in a position to do something about them. I apologise for the lack of clarity.
The OP misunderstands (perhaps deliberately or for humorous effect) what a co-pilot is. This is telling:
"I learned that I need to stay firmly in the driver’s seat when tackling new tech."
Er, that's pretty much what a pilot is supposed to do! You can't (as yet) just give an AI free rein over your codebase and expect to come back later that day to discover a fully finished implementation - unless, maybe, your prompt was "Make a snake game in Python". A pilot would be supervising their co-pilot at all times.
Comparing AIs to junior devs is getting tiresome. AIs like Claude and newer versions of ChatGPT have incredible knowledge bases. Yes, they do slip up, especially with esoteric matters where there are few authoritative (or several conflicting) sources, but the breadth of knowledge in and of itself is very valuable. As an anecdote, neither Claude nor ChatGPT was able to accurately answer a question I had about file operation flags yesterday, but when I told ChatGPT that its answer wasn't correct, it apologised and said the Raymond Chen article it had sourced wasn't super clear about the particular combination I'd asked about. That's like having your own research assistant, not a headstrong, overconfident junior dev. Yes, they make mistakes, but at least now they'll admit to them. This is a long way from a year or two ago.
In conclusion: don't use an AI as one of your primary sources of information for technology you're new to, especially if you're not double-checking its answers like a good pilot.