> The deeper problem is that Microsoft keeps trying to solve GUI consistency at the framework layer
I really don't think that's the fundamental issue.
TFA points out, and I agree, that the fundamental issue is political: competing teams across different divisions coming up with different solutions to solve the same problem that are then all released and pushed in a confusing mishmash of messages.
I haven't written a line of code for a Windows desktop app or extension since early 2014, when the picture was already extremely confusing. I have no idea where I'd begin now.
My choice seems to be either a third party option (like Electron, which is an abomination for a "native" Windows app), or something from Microsoft that feels like it's already deprecated (in rhetoric if not in actuality).
It's a million miles from the in-the-box development experience of even the late 2000s, when the correct and current approach was still readily apparent and everything you needed to move forward with development was available from the moment you opened Visual Studio.
There's just so much friction nowadays, starting with the mental load of figuring out which tech is most acceptable, least annoying, and most likely to still be supported in 5-10 years for solving the problem.
Honestly, things like Electron are quite literally the problem!
All of people's modern desktop woes begin and end at the browser. Here's why: the late-2010s push into the cloud made JavaScript all the rage. A language its creator famously threw together in about ten days.
There are naturally major business incentives powering this. SaaS made delivering software MUCH easier.
Fast forward 15 years and MSFT is all in on TypeScript. It's a disease that starts with MS Office and percolates through the whole OS (same as what's happening with Copilot).
.Net is actually elegant in many ways. You have PowerShell, VB .Net, C#, F# etc. languages of many paradigms all targeting the same bytecode (and supported by the OS).
And this is being replaced by a fun little JavaScript thingy.
That may be how JavaScript started, but unless your claim is that JavaScript hasn't changed at all in the thirty years or so since then, your argument is a complete non-sequitur.
Yeah, thank you. Also, JavaScript today means TypeScript, an arguably extremely capable type system actively developed by Microsoft, and several modern runtimes with big standard libraries and solid asynchronous primitives. There are a lot worse scripting languages out there.
Folks misunderstand the whole point just because I mention TypeScript. Sure, it's a capable and elegant language. That doesn't change the fact that it's a bloated monstrosity on the desktop.
Think about it: it transpiles to JavaScript. Even if it were the most elegant language in the world, that wouldn't change the fact that it's a world of bloat.
Stacks on stacks on stacks. And yet people are complaining about .Net? Come on. Lol
Transpilation and bloat are orthogonal. Whether JavaScript is bloated is also relative: consider Python, which is much slower than JS and much more memory-hungry.
To further your original point: Chrome and Electron are the only reason the desktop is still around. Both Microsoft and Apple tried their very hardest to build walled gardens of GUI frameworks, rejecting the very idea of compatibility, good design, and ease of use, until they were surpassed by the web, and particularly Google, which showed that delivering functioning applications to a computer does not require gigantic widget libraries, outdated looks, or complicated download-and-install processes, but is in fact nothing more than a bit of standardization and a couple of MBs of text.
All this Electron and web hate is so incredibly misplaced I don't even know where to begin. Have you tried making a cross-platform Mac/Windows native app? I have; it's like being catapulted into the Stone Age but asked to build a skyscraper.
Why would transpiling change anything? C++ was once transpiled into C. I appreciate that you personally think JavaScript is poorly designed (I mostly agree!) but that doesn't mean it's slow. V8 can do miracles nowadays.
> They said our docs were too big and for some reason their chunking process was failing.
Why would the size of your docs have any bearing on whether the chunking process works? That makes no sense, unless of course they're operating on the document entirely in memory, which seems unwise unless you're very confident of the maximum document size you're going to be dealing with.
(I implemented a RAG process from scratch a few weeks ago, having never done so before. For our use case it's actually not that hard. Not trivial, but not that hard. I realise there are now SaaS RAG solutions but we have almost no budget and, in any case, data residence is a huge concern for us, and to get control of that you generally have to go for the expensive Enterprise tier.)
I agree it makes no sense. The whole point of chunking is to handle large documents. If your chunking system fails because a document is too big, that seems like a pretty glaring omission. I just chalked it up to the tech being new and novel and therefore having more bugs/people not fully understanding how it worked/etc. It was a vendor and they never gave us more details.
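For what it's worth, a chunker that streams its input shouldn't care how big the document is. Here's a minimal sketch in Python; the function and parameter names (`chunk_stream`, `max_chars`, `overlap`) are my own, not from any vendor's API:

```python
def chunk_stream(lines, max_chars=1000, overlap=200):
    """Yield fixed-size character chunks without ever holding the whole
    document in memory: only a small rolling buffer is kept."""
    buf = ""
    for line in lines:
        buf += line
        while len(buf) >= max_chars:
            yield buf[:max_chars]
            # Keep a small overlap so context isn't cut off mid-thought
            # at chunk boundaries.
            buf = buf[max_chars - overlap:]
    if buf:
        yield buf

# Works the same whether `lines` is a list or a lazy file iterator,
# e.g. chunk_stream(open("big_doc.txt")).
```

The point is that memory usage is bounded by the chunk size, not the document size, so "document too big" should never be a failure mode.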
Not all problems have to be solved. We just fell back to using older, more proven technology, started with the simplest implementation and iterated as needed, and the result was great.
That's good. I think if you can get the result you need with a technology that's already familiar to you then, in cases where that tech is still supported, that's going to be a win.
RAG worked well for us in this recent case but, in 3+ years of developing LLM backed solutions, it's the first time I've had to reach for it.
On your parenthetical point, I also agree: some really weird camera selections, and frustrating dropouts, during the crucial moments of the launch.
Nevertheless, a real triumph, and I particularly enjoyed the "full send" remark from (I think) the commander. I also really enjoy the fact that the livestream is relatively light on commentary and that most of what you hear is from mission control and the crew.
> in a short while able to use coding agents will be the new able to use Excel.
Yeah, but there's "able to use Excel", and then there's *"able to use Excel."*
There is a vast skill gap between those with basic Excel, those who are proficient, and those who have mastered it.
As an intermittent user of Excel I fall somewhere in the middle, although I'm probably a master of knowing how to find out how to do what I need with Excel.
The same will be true for agentic development (which is more than just coding).
I agree. Also good for small changes that need to be applied consistently across an entire codebase.
I recently refactored our whole app from hard deletes to soft deletes. There are obviously various ways to skin this particular cat, but the way I chose needed all our deletions updated, and also needed queries updated to exclude soft-deleted rows, except in specific circumstances (e.g., admins restoring accidentally deleted data).
Of course, this is not hard to do manually, but it is a bloody chore and tends to be error-prone. The agent made short work of it, for which I was very grateful.
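For anyone unfamiliar with the pattern: the gist is a nullable timestamp column instead of an actual DELETE. A minimal sketch using SQLite; the table and column names (`notes`, `deleted_at`) are illustrative, not from the actual app:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        body TEXT NOT NULL,
        deleted_at TEXT  -- NULL means the row is live
    )
""")
conn.executemany("INSERT INTO notes (body) VALUES (?)",
                 [("first",), ("second",), ("third",)])

def soft_delete(note_id):
    # Instead of DELETE, stamp the row; foreign keys pointing at it
    # stay valid, and nothing is physically lost.
    conn.execute(
        "UPDATE notes SET deleted_at = datetime('now') WHERE id = ?",
        (note_id,))

def live_notes():
    # Every ordinary query must remember this predicate -- that's the
    # "sharp edge" of the approach.
    return conn.execute(
        "SELECT body FROM notes WHERE deleted_at IS NULL").fetchall()

def restore(note_id):
    # The admin use case: un-delete by clearing the stamp.
    conn.execute(
        "UPDATE notes SET deleted_at = NULL WHERE id = ?", (note_id,))
```

In Rails the same filtering is typically centralised with a `default_scope` or a gem, rather than repeated in every query by hand.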
Do you not end up breaking half the value of referential integrity doing it that way? (E.g., you had to update all the queries, but now all future queries need to remember to be soft-delete aware. Not a blocker for sure, just a sharp edge.)
You know your system better than me for sure, a random commenter on a website :-D Your comment just shocked me out of my daze enough for my brain to say "but I always move the record to another table rather than soft delete", and I felt compelled to give an unsolicited and likely wrong opinion.
Yeah, I did consider moving records to shadow tables, but - because of the nature of our data - it requires moving a lot of child records as well, so it's quite a lot of additional churn in WAL, and the same for restore. And this approach has its own challenges with referential integrity.
More than that, though: lots of queries for reporting and the like suddenly need JOINs. Same for admin use cases where we want a unified view of archived and live data. The conclusion I came to is that it doesn't really eliminate complexity for us: it just moves it elsewhere.
Totally valid approach though. I'd also considered different views for live versus archived (or live+archived) data. Again, it solves some issues, but moves complexity elsewhere.
The other key point: it's a Ruby on Rails system so the moment you start doing funky stuff with separate tables or views, whilst it is doable, you lose a lot of the benefits of Active Record and end up having to do a lot more manual lifting. So, again, this sort of played against the alternatives.
As I say, not to diss other approaches: in a different situation I might have chosen one of them.
My conclusion - not for the first time - is that soft delete obviously adds some level of irreducible complexity to an application or system versus hard delete no matter how you do it. Whether or not that extra complexity is worth it very much depends on the application and your user/customer base.
For some people, just the ability to restore deleted rows from backup would be enough - and in other cases it's been enough for me - but that is always a bit of a faff so not a great fit if you're optimising for minimal support overhead and rapid turnaround of any issues that do arise.
Thanks for taking the time to write such a high quality reply; this is something I've wondered about for a long time and I appreciate the thought and detail you've shared here. :)
No worries - I'm glad it's helpful. Like anything, it's incredibly context specific, and you're always weighing up trade offs that may or may not turn out to be valid over the long term based on the best information you have right now.
This. Make sure the 'active' flag (or deleted_at timestamp) is part of most indexes and you're probably going to see very small impacts on reads.
It then turns into a slowly-growing problem if you never ever clean up the soft-deleted records, but just being able to gain auditability nearly immediately is usually well worth kicking the can down the road.
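In databases that support them, a partial index makes this concrete: only live rows get indexed, so reads on live data stay fast even as soft-deleted rows pile up. A sketch in SQLite (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        deleted_at TEXT  -- NULL means live
    )
""")

# Partial index: only rows with deleted_at IS NULL are indexed, so the
# index doesn't grow as soft-deleted rows accumulate.
conn.execute("""
    CREATE INDEX idx_orders_live_customer
    ON orders (customer_id)
    WHERE deleted_at IS NULL
""")

# A query whose WHERE clause includes the same predicate can use the
# partial index; EXPLAIN QUERY PLAN shows whether it does.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM orders
    WHERE customer_id = 42 AND deleted_at IS NULL
""").fetchall()
```

PostgreSQL supports the same `CREATE INDEX ... WHERE` syntax; the trade-off is that queries must repeat the predicate exactly for the planner to pick the index up.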
This is what gives me the warm fuzzies about the HN community: people jumping to wild conclusions about your domain and systems based on a 4 sentence comment. /s
The thing you'll start to notice is this happens A LOT on every subject.
HN tends to think of itself as smarter than the average for every topic. But it turns out there is a lot of bad and factually wrong information in every thread.
Yeah, I know, and I know I've been guilty of it myself at times. It's a trap that's too easy to fall into.
Something about aughts dev culture as well: I remember it being really common back then. Everybody had to appear to be smart by second guessing everything everyone else was doing. Exhausting.
Entitlement, and the portion of this that really crosses the line into bullying of the foundation and the maintainers, should be dealt with robustly. It will help reset expectations about what's reasonable in the relationship between those developing LibreOffice and the community of users.
People need to recognise that they get a huge amount of value out of LibreOffice, for which they aren't required to pay a penny, so it's not unreasonable to be asked if they would like to contribute something back in return.
But amongst large populations of people, when it comes to free things, some portion will always undervalue that free thing, fail to recognise how much benefit they get from it, and start acting entitled. There's nothing wrong with calling that out.
> It doesn't look professional when you loose your temper, this article is comparable to that.
Nobody's lost their temper. In no world does the article read like anyone has. That's you applying your own interpretive lens to the text, not what the text actually says.
(But actually, alienating the troublesome portions of their userbase might help them and the LibreOffice community over the longer term. Cf. firing customers.)
> Media coverage has largely omitted the fact that LibreOffice has been displaying donation requests for years.
Throwing Thunderbird under the bus:
> Nobody is making the comparison with Mozilla Thunderbird, which has asked its users for donations practically every time it starts up, with clearly visible banners
And then Wikipedia
> The same logic applies to Wikipedia.
Responding to 'comments':
> Some comments have even suggested
C'mon, don't tell me it's professional. It looks amateurish.
First rule: you don't give out names.
Second rule: you don't push the fault onto others, even when it's theirs.
Third rule: you don't respond to 'comments', 'tweets', and so on. You say 'we heard feedback that this and this'.
I'll say it again: it feels like it was written by one guy alone, with no supervision whatsoever, who didn't take the necessary step back.
Making a statement of fact about media coverage isn't "bashing". And when you start off your argument by characterising it that way you've already lost.
I've been using "slopocalypse". People already know AI is responsible, but slop existed before — e.g. conventionally generated SEO spam. It's just... so much worse now.
Sorry, I realise this comment isn't up to HN's usual standards for thoughtfulness and it is perhaps a bit inflammatory but... look, I'd bet the majority of us on this site rely on GitHub and I can't be the only one becoming incredibly frustrated with its recent unreliability[0]?
(And, yes, I did enough basic data analysis to confirm that it IS indeed getting worse versus a year, two years, and three years ago, and is particularly bad since the start of this year.)
[0] EDIT: clearly not from looking at the rest of the comments in this discussion.
@KaiserPro has pasted the link to someone else's heatmap, which is really good. Mine was just an Excel spreadsheet with a graph that I'd intended to turn into a blog post, but then I got demotivated because I was too busy with other things, and I saw that heatmap as well. Maybe I will do a proper write-up next time GitHub has an outage and I'm blocked by it.