
We've ended up in a world where power users have been forgotten. Not out of malice, but out of a misguided aim to reduce complexity and achieve consistency with the web.

I would argue that desktop is the platform for power users, and its future depends on them. The keyboard shortcuts, the micro-interactions, the window management -- this stuff is all important when you're using a system for 8+ hours per day.

Yet we risk desktop experiences becoming less useful due to the UI becoming "dumber" as we keep shoehorning websites onto the desktop. Website UI is dumb. It's mouse driven, keyboard is an afterthought. There's no consistency, and you have to re-invent the wheel every time to get the details right (almost never happens).


>We've ended up in a world where power users have been forgotten.

I think it's more like the OS vendors have stopped being operating system vendors, and are now - instead - vendors of eyeballs to advertisers.

The less the user is GUI'ing, the more they are just placidly watching whatever else is on their screen.

For native apps to survive, they need to not be platform-specific - and not web apps, which require a browser and all its responsibilities - but rather cross-platform: reliable and predictable on all platforms - i.e. dissuaded from using native, but rather bespoke, UI frameworks.

This is attainable and there are many great examples of apps which are in fact, old wheels not re-invented, which still work for their particular user market.

I have the most respect for apps I can use on MacOS, Windows, and Linux - with the same hotkey/user experience on all platforms, equitably - and the least respect for apps which 'only run on one of them', since that is of course nonsense in this day and age.

The cognitive load of doing a web app that can do all the things a native app can do is equivalent to the load required to build a cross-platform app using native frameworks, so ..


>i.e. dissuaded from using native, but rather bespoke, UI frameworks.

Based on my experience, I would be quite reluctant to rely on any non-native cross-platform desktop UI framework that is not web-based. These tend to either be less performant, look outdated, or be bug-ridden.


What about Qt? It is the gold standard for cross-platform desktop UI frameworks.

It is (1) performant (C++-based), (2) does not look outdated, and (3) not bug-ridden.


Qt apps don't feel great on macOS, though of the cross-platform toolkits it's by far the best for mac-ish UI. Dropbox was Qt for a long time and I'd argue it worked well for them. It's easy to fall into the "uncanny valley".

On Linux, Qt apps feel a bit off in GNOME, though you can never satisfy everyone as it's the wild west.

I think Qt also suffers from not really being anyone's favourite.

On the one hand, you have web developers who tend to not really appreciate the nuance of the desktop as a platform. They're not going to advocate for Qt, it's not CSS/HTML/JS.

On the other hand, you have native Mac developers who love Apple's toolkits (AppKit, maybe SwiftUI). They're not going to advocate for Qt either.

Lastly, you have native Windows developers who have been burned so many times they don't advocate for anything in life anymore.


I think Qt is only missing well-written, feature-complete bindings for a major JS runtime, including support for hot reload.

Developing UIs without hot reloading is too painful.


I think what you're asking for has existed for a long, long time. QML.

QML doesn't have a way to define interfaces with JSX and doesn't integrate with the wider JS tooling. From my very limited experience, it still feels too close to the C++ world.

In my experience:

- Qt Widgets worked fine, but looked like a piece of software made in 2013;

- QML looks stylish and is a very nice language, but had a lot of weird bugs.

Neither of these are issues I'd run into if I were to make a web app.


> - Qt Widgets worked fine, but looked like a piece of software made in 2013;

That's too bad, because I prefer software which looks like it was made in 1999.


Ah, the magic times when screen resolutions were large enough to display lots of information, in proper 4:3 aspect ratio, just before they got flattened and the industry started treating them as short view distance TVs.

Widgets look like whatever you want them to look like; if they feel like they're from the 2010s, it's because the implementer made that choice, not because of a limitation in Qt Widgets.

Could be, but I am mostly speaking about the fact that making a web app look stylish feels infinitely easier to me.

Qt uses CSS. It's no different from what you're describing.

That's your prerogative, but web-based UIs have their hard limits, and native cross-platform desktop UIs are no more/less problematic than the browser.

It seems you fumbled your start menu click; the start menu will be right back, right after these messages.

> I have the most respect for apps I can use on MacOS, Windows, and Linux - with the same hotkey/user experience on all platforms, equitably - and the least respect for apps which 'only run on one of them', since that is of course nonsense in this day and age.

No. I want things like keyboard shortcuts to reflect the platform norms of where the app is running (macOS in my case). A shared core is fine, but the UI framework must be native to be acceptable. Ghostty is a "gold standard" there.

This is why most web apps are lowest-common-denominator annoyances that I will not use.


Indeed, if the framework is sensible, keyboard shortcuts that reflect platform norms are entirely attainable, in a way developers mostly don't have to think about if they don't want to.

There are plenty of examples of cross-platform UIs surviving the hotkey dance and attaining user satisfaction. There are of course poor examples too, but that's a reflection of care, not effort.
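
A rough sketch of what I mean, in Swift, with made-up names rather than any real framework's API -- the toolkit owns the modifier-key norms so app code never has to:

  // Hypothetical cross-platform shortcut type; the toolkit picks the modifier.
  struct Shortcut {
      let key: Character
      var label: String {
          #if os(macOS)
          return "⌘\(key)"         // macOS norm: Command
          #else
          return "Ctrl+\(key)"     // Windows/Linux norm: Control
          #endif
      }
  }

  let find = Shortcut(key: "F")
  print(find.label)                // "⌘F" on a Mac, "Ctrl+F" elsewhere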


Mozilla removed a lot of power-user features and customization from Firefox, claiming that their telemetry showed that few users used them. That's the reality now: nobody wants to develop and maintain things for the 1%.

>their telemetry showed that few users used them

I wonder if they ever stopped to think that power users are the ones that disable telemetry immediately upon install.


That's not remotely universal, but they did consider that. It's immaterial.

Sometimes this is a self-fulfilling prophecy. It is the novice users who, over time, become power users through repetitive usage. If there are no user efficiency gains to be had through experience in a UI, then it just prevents the emergence of power users. Users just have to wait until a product manager or designer somewhere notices their pain and creates a new feature through 10x the effort it would have taken to simply maintain the lower-level shortcuts (e.g. keyboard accelerators, simple step automations).

Was it the same 1% that was using each of the long-tail features? I suspect that by refusing to invest effort in at least some amount of niche features, we essentially alienate _everybody_.

Browsers like Vivaldi that cater to power users are gaining in popularity. They are not trying to be the next Chrome, they are just out to serve their niche well.

Firefox has nothing to differentiate itself from Chrome at this point.


Container tabs, independent proxy config (chrome only respects system-wide proxy), vertical tabs, and functional adblockers are the four big features for me.

Try installing Sidebery or a good adblocker on Chrome.

I use AdBlock on Chrome. It is excellent. Do you not like it?

Go to an adblock test page in Chrome and compare it to Firefox with uBlock Origin. Chrome can't block some ads, and some of the ads it can block leave behind empty containers.

>Firefox has nothing

Not only that, but for a time, Firefox seemed to be copying everything Chrome did, maybe as a way to stop the exodus of users. But people who wanted Chrome-y things were already using it, and people who didn't might as well, because Firefox was becoming indistinguishable from it.

God I wish Mozilla would be made great again. It's tragic how mismanaged it is.


> It's tragic how mismanaged it is.

Is it mismanaged? Sure, they spend a fair amount on administration. Sure, they spend about 10% on Mozilla Foundation stuff. But they still spend ~2/3 of revenue on software development.

And they're somewhat stuck between a rock and a hard place.

If they try to evolve their current platform, power users bitch. If they don't evolve their current platform, they lose casual users to ad-promoted alternatives (Chrome and Edge).

And they don't really have the money to do a parallel ground-up rewrite.

The most interesting thing I could see on the horizon is building a user-owned browsing agent (in the AI sense), but then they'd get tarred and feathered for chasing AI.

Part of Mozilla's problem is that the browser is already pretty figured out. After tabs and speed and ad blocking, there weren't any killer features.


To a first approximation, nearly everyone who installed Chrome did so because of Google putting "Runs best in Chrome" on every page they own and including it with every single possible download, including things like Java updates!

Almost nobody chose Chrome. Microsoft had to change how defaults were managed because Chrome kept stealing defaults without even a prompt.

People use "the internet", they don't give a fuck about browsers. Firefox only got as high a usage as it did because of an entire decade of no competition, as Internet Explorer 6 sat still and degraded.

Chrome was installed as malware for tens of millions of people. It used identical processes as similar malware. It's insane to me how far out of their way lots of "Tech" people go to rewrite that actual history. I guess it shouldn't be surprising since about a thousand people here probably helped make those installer bundling deals and wrote the default browser hijacking code.

It should be a crime what Google did with Chrome. They dropped Chrome onto unsuspecting users who never even noticed when malware did the exact same thing with a skinned Chromium a couple days later. Microsoft was taken to court for far less.

How was Mozilla supposed to compete with millions of free advertising Google gave itself and literal default hijacking?


Personally, it's not so much about customisation as it is consistency, quality, and attention to detail.

Being able to keyboard through menus as standard. Focus being deeply considered and always working as expected.

Compact UI elements -- in the 90s/00s we decided buttons should be about 22px tall. Then suddenly they doubled in size.


This resonates deeply. I build native macOS apps in Swift/AppKit and the difference in keyboard-first design between native and web is night and day.

On macOS, if you use standard NSResponder chain and menu items properly, you get Cmd+Z undo, text field navigation, menu bar keyboard access, and accessibility basically for free. The framework was designed around the assumption that users would become experts.
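
For anyone who hasn't worked with it, "properly" mostly just means nil-targeted menu actions; a minimal sketch (illustrative names):

  import AppKit

  final class CanvasViewController: NSViewController {
      // A menu item with action #selector(zoomToFit(_:)), target nil, and a
      // Cmd+= key equivalent travels the responder chain to whoever is focused.
      // The shortcut, menu-bar keyboard access, and automatic enable/disable
      // of the item all come along without any extra code here.
      @objc func zoomToFit(_ sender: Any?) {
          // ...fit the canvas to the window...
      }
  }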

Web apps actively fight this. Every Electron app I use has broken Cmd+` (window cycling), inconsistent text selection behavior, and that characteristic 50-100ms input lag that you stop noticing until you switch back to a native app and remember what "responsive" feels like.

The sad irony is that making a power-user-friendly desktop app is actually less work if you go native, because the frameworks already handle the hard parts. Going web means you have to manually reimplement every platform convention, and almost nobody does.


If you become a power user you realize that nothing matches the power of the command line. And at that point you also realize that there are better OSes that allow you to fully exploit the true computing power that is terribly limited and constrained by a GUI.

>And at that point you also realize that there are better OSes

Nothing beats Windows 11+WSL2; literally the best of both worlds.


Nonsense. Do you read and write your email using the command line? I use Mutt and Vim for that, and that’s not the command line. GUI with power-user support is just as efficient as Mutt and Vim. Did you use curl to read this thread and submit your comment? I use Firefox with Vimium C, which allows most web pages to be navigated and operated efficiently by keyboard.

Wait, mail clients other than mutt exist?

> We've ended up in a world where power users have been forgotten. Not out of malice, but out of a misguided aim to reduce complexity and achieve consistency with the web.

Power users are less susceptible to suggestion and therefore less profitable. They have largely moved to OSes that do not interfere with their wishes, allowing them to make their own choices about what they can or can't do/run (Eg. Linux).


I know this isn't really your main point, but I don't think they've been trying to reduce complexity so much as increase ease-of-use for the end-user*. Those things are often completely at odds with each other in software, as I'm sure you know.

*well, that seems to have been their goal in the past; nowadays it just seems like they've been trying to funnel Windows users to their other products and force Copilot into everything.


> Not out of malice, but out of a misguided aim to reduce complexity and achieve consistency with the web.

The web is not consistent itself. Lots of sites, and most web apps, invent their own UI.


The issue is that everyone wants a full-featured remote with only "on, volume, and channel changing" buttons.

I'm planning on writing a desktop (I've written about some of my goals here before) precisely for this reason.

> We've ended up in a world where power users have been forgotten.

I think the world changed. "Power users" in the traditional sense use Linux and BSD now. Microsoft and Apple dropped them when they realized how lucrative it would be to dumb things down and make computers more like cable TV.


This is why, for me, year of the Linux desktop was 2008. It's been atrocious since then.

In the future it will all be done by AI, no need for GUI. Just write or say what you want to do

Hopefully /s


Desktop, especially Windows, is such a mess.

It's 2026. We're running 8+ cores and 32gb ram as standard. We can run super realistic video games at high frame-rates.

Yet on the same machine, resizing a window with rectangles in it is laggy on every platform except macOS (where there's a chance it's not laggy).


The bloat is pretty incredible. Consider that my Amiga 500 could resize windows without lag on a ~7.1 MHz 68000 and 512K of RAM, almost 40 years ago.

Resizing was based on a wireframe system and windows weren't repainted during resizes.

Yes, I agree, it's not apples-to-apples... but we're talking orders of magnitude in CPU, RAM, and "GPU" power.

4-6 cores and 8GB is standard*; 8 cores and 32GB+ is the higher end.

imo the resizing test is useful not because it's a test of a common operation that needs to be optimized, but because it flexes every major subsystem of the GUI framework.

Another example is startup time. Time to first frame on screen should be less than 20ms. That doesn't mean time until first content is rendered, but time until _all_ content is rendered (loading dialogs, placeholders, etc are better than nothing but entirely miss the point of being fast).

The second example is why, even though I understand why developers pick tauri/electron/webviews/etc, I can't get over how fucking slow the startup time is for my own work. None of them could show a blank window in under a second the last time I tried.
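
A crude way to put a number on it -- a sketch that measures from AppKit launch rather than process exec, so it flatters the real figure:

  import AppKit

  final class AppDelegate: NSObject, NSApplicationDelegate {
      static var launchStart: CFAbsoluteTime = 0
      func applicationWillFinishLaunching(_ notification: Notification) {
          Self.launchStart = CFAbsoluteTimeGetCurrent()   // earliest easy hook
      }
  }

  final class ContentView: NSView {
      private var reportedFirstFrame = false
      override func draw(_ dirtyRect: NSRect) {
          super.draw(dirtyRect)
          // ...draw everything the first screen actually needs...
          if !reportedFirstFrame {
              reportedFirstFrame = true
              let ms = (CFAbsoluteTimeGetCurrent() - AppDelegate.launchStart) * 1000
              NSLog("first full frame after %.1f ms", ms)
          }
      }
  }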


Sorry, but neither Linux nor Windows lags when resizing windows on any of my 4 machines.

They range from old laptops to a Ryzen 7 9800X3D workstation.

Just yesterday a friend's father needed help setting up their second-hand old laptop with an old i5 processor. I slapped KDE on it and there was no lag to be seen.

Bonus point that Windows and some Linux distros have sane, intuitive window management. Whereas with macOS I keep seeing people suggest some arcane combination of steps to do some basic things, with replies to the effect of "OMG thank you so much, this needs to be known by more people!!!"


I see frame drops when opening the start menu on a clean Windows 11 install on my work laptop (Intel quad-core with 32GB memory from two years ago). I have seen the same on 3D V-Cache Ryzens on systems from people who claimed there was no lag or sluggishness. It was there and they saw it once pointed out; the standards for Windows users have simply gotten so low that they’ve gotten used to the status quo.

On macOS, meanwhile, Finder refuses to reflect major changes done via CLI operations without a full Finder restart, and the search indexing is currently broken, after prior versions of Ventura were stable functionality-wise. I am however firm that Liquid Glass is a misstep, made more by the Figma crowd than by actual UX experts. It is supposed to look good in static screenshots rather than have any UX advantage or purpose compared to e.g. skeuomorphism.

If I may be a bit snarky, I’d advise anyone who does not see the window corner inconsistencies on current macOS or the appalling lag on Windows 11 to seek an ophthalmologist right away…

KDE and Gnome are the only projects that are still purely UX focused, though preferences can make one far more appealing than the other.


If we're talking a simple "hello world" window then sure, you can resize that at 60fps on pretty much any system.

But most nontrivial apps can't re-layout at 60fps (or 30fps even).

They either solve it by (A) allowing the window to resize faster than the content, leaving coloured bars when enlarging [electron], or (B) stuttering or dropping frames when resizing.

A pleasant exception to this I've noticed is GTK4/Adwaita on GNOME. Nautilus, for me at least, resizes at 60fps, even when in a folder of thumbnails.

On the Mac side, AppKit, especially with manual `layout()` math, easily hits 60fps too. Yes it was more complex, but you had to do it and it was FAST.
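
For anyone who hasn't tried it, the manual version is roughly this (a sketch with illustrative view names):

  import AppKit

  final class SplitCanvasView: NSView {
      let sidebar = NSView()
      let content = NSView()

      override init(frame frameRect: NSRect) {
          super.init(frame: frameRect)
          addSubview(sidebar)
          addSubview(content)
      }
      required init?(coder: NSCoder) { fatalError("not used") }

      override func layout() {
          super.layout()
          // Plain frame math, no constraint solver: runs once per resize pass.
          let sidebarWidth: CGFloat = 240
          sidebar.frame = NSRect(x: 0, y: 0,
                                 width: sidebarWidth, height: bounds.height)
          content.frame = NSRect(x: sidebarWidth, y: 0,
                                 width: bounds.width - sidebarWidth, height: bounds.height)
      }
  }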


For all the grief they are getting, GTK4/Adwaita/GNOME is doing a lot for performance and consistency of experience.

My GNOME desktop is more coherent than my Mac one at this point

My work laptop will stall on resize constantly, and I suspect it is due to the mess of security and backup software. Windows does have an ecosystem problem.

I am also baffled by the multiple control points. I can log in to mail in 3 places. Settings exist in 3 places with different UIs... it is gross.


...but then feeling out of place on GNOME / GTK4 / LibAdwaita-land

Linux is a mess, but at least it's unapologetically so.


They've got unified themes that make GTK and Qt applications look nice alongside each other. Users who care will be using those. Users who really care might refuse to use your application because it isn't the toolkit they like, but you shouldn't lose sleep over satisfying such particular and demanding users. They're not paying you anyway.

Re: buggy GNOME extensions, it drives me nuts that GNOME has no built in support for menu bar icons/app indicators.

There's a whole class of GUI apps that should run in the background until needed, and GNOME just has no solution here. I really don't get why they removed this functionality.

I don't want a "service" model where you start/stop gui apps via systemd. And I don't want to keep a window around for no good reason.


GNOME 44 has built-in support finally, but they're hidden in the quick settings menu. I prefer having them in the tray so I can see if they're running without having to click around.

I love the naiveté of this approach.

Unlike <arbitrary heuristic>, it's so easy to reason about. I wish this kind of approach was still viable.


Sloppy technical design ends up manifesting in bugs, experiential jank, and instability.

There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.

Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.

I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.


> the quality really does matter.

If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?

Or were you thinking about "matter" in some other sense than business/product success?


Yes, I think the market will enforce this. A bit. Eventually. But the time horizon is long, and crummy software with a strong business moat can out-compete great software.

Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.


How long does that take though? Technical debt from sloppy code doesn't show up in the product until way later. By the time users notice, the team is already three features deep and can't back out.

All these arguments somehow disregard that we’ve all been adding technical debt left and right, every other day, to every single codebase in existence. Humans also write sloppy code.

Same as when writing notes by hand, the information is internalized. When you pass the thinking to the LLM, you become the copilot who doesn't even know the code, nor the minute decisions made. Good luck rearchitecting a bad decision in such a design, and prepare your double d6 for a full functionality reroll.

A lot of software is forced upon people against their will, and purchased by people who will never use it.

This obscures things in favour of the “quality/performance doesn’t matter” argument.

I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.


Even if you're confident you can stop your own company from shipping terrible products, I worry the trend is broad enough and hard enough to audit that the market will enforce it by pulling back on all purchases of such software. If gamers learn that new multiplayer games are just always laggy these days, or CTOs learn that new databases are always less reliable, it's not so easy to convince them that your product is different than the rest.

Yes, there's every reason to believe the market will weed out the AI slop. The problem is, just like with stocks, the market can stay irrational longer than you can stay solvent. While we all wait for executives to learn that code rigor matters, we still have bills to pay. After a year when they start trying to hire people to clean up their mess, we'll be the ones having to shovel a whole new level of shit; and the choice will be between that and starving.

As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.


Yes, both the article and GP are making that exact point about it mattering from a customer's perspective.

When I ask the LLM to try and solve a problem that turns out to be difficult or impossible to solve, I've found it will absolutely lose the plot.

I feel like a human would give up a lot quicker and start to learn where the limits are. Claude spins in circles convinced it's finally found a solution. Again. And again. And eventually gets back to where it started.


That's my read too.

Swift was feeling pretty exciting around ~v3. It was small and easy to learn, felt modern, and had solid interop with ObjC/C++.

...but then absolutely exploded in complexity. New features and syntax thrown in make it feel like C++. 10 ways of doing the same thing. I wish they'd kept the language simple and lean, and wrapped additional complexity as optional packages. It just feels like such a small amount of what the Swift language does actually needs to be part of the language.


I get this feeling with C#. I have been here since its release. I looked at Swift and then they moved very quickly at the beginning, so the book I had to teach me was out of date moments after it was printed. With all the complexity being thrown in, I stuck with C++ because at least it was only 1 language I had to keep track of (barely)!

C# is the other direction, IMO.

I've been using C# since the first release in 2003/4 timeline?

Aside from a few high profile language features like LINQ, generics, `async/await`, the syntax has grown, but the key additions have made the language simpler to use and more terse. Tuples and destructuring for example. Spread operators for collections. Switch expressions and pattern matching. These are mostly syntactic affordances.

You don't have to use any of them; you can write C# exactly as you wrote it in 2003...if you want to. But I'm not sure why one would forgo the improved terseness of modern C#.

Next big language addition will be discriminated unions and even that is really "opt-in" if you want to use it.


> Next big language addition will be discriminated unions and even that is really "opt-in" if you want to use it.

I was excited for DU until I saw the most recent implementation reveal.

https://github.com/dotnet/csharplang/blob/main/proposals/uni...

Compared to the beauty of Swift:

https://docs.swift.org/swift-book/documentation/the-swift-pr...
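
For those who haven't clicked through: the linked chapter is essentially this (the Barcode example from the Swift docs), which is the bar the C# proposal is being measured against:

  enum Barcode {
      case upc(Int, Int, Int, Int)
      case qrCode(String)
  }

  func describe(_ code: Barcode) -> String {
      switch code {                                // exhaustiveness is enforced
      case .upc(let numberSystem, let manufacturer, let product, let check):
          return "UPC: \(numberSystem), \(manufacturer), \(product), \(check)."
      case .qrCode(let productCode):
          return "QR code: \(productCode)."
      }
  }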


The C# impl is still early and I think what will end up happening is that a lot of the boilerplate will end up being owned by source generators in the long term. C# team has a habit of "make it work, make it better". Whatever v1 gets released is some base capability that v2+ will end up making more terse. I'm happy and OK with that; I'd rather have ugly unions than no unions (yes, I already use OneOf)

Ah, Source Generators: after all these years still badly documented; when searching you will most likely find the original implementation, which is meanwhile deprecated; they have poor tooling (string concatenation), and only a few great blog posts from .NET MVPs to rely on.

:shrug: we're using them very effectively and there are plenty of resources at this point.

Very useful for reducing boilerplate and we can do some interesting things with it. One use case: we generate strongly typed "LLM command" classes from prompt strings.


There are plenty of resources, outside Microsoft Learn that is, and the content is mostly understandable by those of us who have either watched conference talks or listened to podcasts on the matter.

Now having someone diving today into incremental code generators, with the best practices not to slow down Visual Studio during editing, that is a different matter.

They are naturally useful. As a user and as a provider, though, Microsoft could certainly improve the experience.


Which keywords would you get rid of and why? You don't have to use all of them!

I would remove result builders and all other uses of @attributes that change the semantics of the code (e.g property wrappers).

I would remove the distinction between value types and reference types at the type level. This has caused so many bugs in my code. This distinction should be made where the types are used not where they are defined.

I would remove everything related to concurrency from the language itself. The idea to let code execute on random threads without any explicit hint at the call site is ridiculous. It's far too complicated and error prone, which is why Swift designers had to radically change the defaults between Swift 6.0 and 6.2 and it's still a mess.

I would remove properties that are really functions (and of course property wrappers). I want to see at the call site whether I'm calling a function or accessing a variable.
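
A contrived sketch of that complaint (hypothetical type) -- both reads below look identical at the call site, but the second one runs arbitrary code every time:

  struct Account {
      var name: String                        // stored property: a cheap read
      var formattedStatement: String {        // computed property: really a function
          // could allocate, hit the disk, take milliseconds...
          return name.uppercased()
      }
  }

  let account = Account(name: "test")
  _ = account.name                  // field access
  _ = account.formattedStatement    // looks the same, is a call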

I would probably remove async/await as well, but this is a broader debate beyond Swift.

And yes you absolutely do have to know and use all features that a language has, especially if it's a corporate language where features are introduced in order to support platform APIs.


I agree with you about result builders, silly feature that only exists for SwiftUI.

But a lot of what you said, except for the concurrency and property wrapper stuff, largely exists for Obj-C interop. The generated interface is more readable, and Swift structs act like const C structs. It’s nice.


I'm not a Swift user, but I can tell you from C++ experience that this logic doesn't mitigate a complex programming language.

* If you're in a team (or reading code in a third-party repo) then you need to know whatever features are used in that code, even if they're not in "your" subset of the language.

* Different codebases using different subsets of the language can feel quite different, which is annoying even if you know all the features used in them.

* Even if you're writing code entirely on your own, you still end up needing to learn about more language features than you need to for your code in order that you can make an informed decision about what goes in "your" subset.


But you have to know all of them to read other people's code.

To answer your question: I would immediately get rid of guard.

Also, I think the complexity and interplay of structs, classes, enums, protocols and now actors is staggering.


I'm surprised, guard is really useful, especially when unwrapping optionals. It's terse, explicit and encourages defensive programming.

internal should definitely go though.


The absence of guard in Kotlin is one of those things that regularly trips me up when bouncing between it and Swift. Rather than Swift losing guard I’d prefer if Kotlin gained it.

I think the ?: operator ends up being a decent alternative, e.g.

  // Swift
  guard let foo = maybeFoo else {
    print("missing foo")
    return false
  }

  // Kotlin
  val foo = maybeFoo ?: run {
    print("missing foo")
    return false
  }
Unless there's a use case for guard I'm not thinking of

It’s a decent alternative, but to someone not familiar with the language what’s going on isn’t as clear.

1. You don't have to use it all, but someone will. And there are over 200 keywords in the language: https://x.com/jacobtechtavern/status/1841251621004538183

2. On top of that many of the features in the language exist not because they were carefully designed, but because they were rushed: https://news.ycombinator.com/item?id=47529006


That number is unfairly exaggerated. The list includes ~40 internal keywords used only by language developers, plus dozens of tokens that would be called preprocessor directives, attributes, or annotations in other languages (e.g. `canImport` as in `#if canImport(...) #endif`; `available` and `deprecated` as in `@available(*, deprecated) func`).

are there actually 217 keywords? Just wondering what the difference is between that file and https://docs.swift.org/swift-book/documentation/the-swift-pr... (a mere 102 keywords)

That file is the compiler's list of reserved keywords, so some of them may not have been added to docs, or they're experimental/internal/...

I'm not 100% sure but I think the swift doc you linked is missing at least a dozen keywords so the truth probably lies in the middle


Ah makes sense, personally I wouldn't consider reserved but unused words as keywords in the sense that you don't need to know them to read the language (even though they're keywords in some other technical sense). I was curious because I just tried counting number of keywords by language and it seemed surprisingly ambiguous/subjective/up to the language to say what's a "keyword" vs some type of core module. So my attempt (https://correctarity.com/keywords) probably has mistakes...

> You don't have to use all of them!

You sure pay for the language complexity in high compile times though. Swift is slow, like really slow. I’ve been with it since like v1.2, and it's been getting progressively worse for a while IMO. Complex language features (Let's do a borrow checker! Let's do embedded!) and half of the shit isn’t being used internally as far as I can tell.


i would get rid of associatedtype, borrowing, consuming, deinit, extension, fileprivate, init, inout, internal, nonisolated, open, operator, precedencegroup, protocol, rethrows, subscript, typealias, #available, #colorLiteral, #else, #elseif, #endif, #fileLiteral, #if, #imageLiteral, #keyPath, #selector, #sourceLocation, #unavailable, associativity, convenience, didSet, dynamic, indirect, infix, lazy, left, mutating, nonmutating, postfix, precedence, prefix, right, unowned, weak, and willSet

It's true that internal is pointless.

Focusing on the keywords rather than the macros, I think the rest of them have legitimate use cases, though they're often misused, especially fileprivate.


this is gonna sound ranty, but it's straight from the heart:

i think most of them are pointless. not every feature needs to be a new keyword. stuff could be expressed within the language. if the language is so inflexible in that regard that it's impossible to express stuff without a keyword, use macros for gods sake.

why is there a need to have a "convenience func" declaration?

why is "didSet" a keyword?

what about "actor"? most other languages don't have nearly as many keywords and manage to express the idea of actors just fine!


You can take this approach in personal projects - with teams you need to decide on this and then on-board people into your use of the language. This does not work.

Yes exactly, it’s easy to blame a language when really it’s a team problem.

I felt that too many smart people were getting involved in the evolution of the language. There should have been a benevolent dictator to say NO.

Interestingly, Kotlin has a pretty solid cross-platform story.

I'd pick it over Swift if targeting Android since it can build and run on the JVM as well as natively -- and has Swift/ObjC interop. It's also very usable on the server if you wanted to, since you can use it in place of Java and tap into the very mature JVM ecosystem. If that's what you're into.

And I have a lot more faith in JetBrains being good stewards of the language rather than Apple, who have a weird collection of priorities.


Kotlin is practically a no-brainer when you have JVM at your finger tips, versus something like Swift which is comparatively young.

I tried to use Vapor with Swift recently and struggled to get something working because the documentation looked comprehensive, but had a lot of gaps. I ended up throwing it out because I didn't have the time to dig through the source to understand how to do something, when I could use a mature framework in any other language instead.

The promise is there but I'm just not ready to invest. My youthful days of unbounded curiosity are coming to an end and these days I just want to get something done without much faff.


Mind you, Kotlin/Native (which is what gets used when you're compiling for iOS) doesn't have access to the JVM.

However, the Kotlin community is fundamentally all about open source, whereas Apple & iOS devs have an allergy to it. The quality and quantity is already miles above the vast majority of what's in the Swift ecosystem. https://klibs.io has all the native-compatible libs. And if you're targeting a platform where the JVM is available then yeah, it's massive. Compose makes UI tolerable compared to AWT too. Even large projects like Spring are Kotlin-first nowadays.


JetBrains has a monetary interest in promoting Kotlin beyond Android; Apple has zero incentive to promote Swift as the language outside of iOS and Mac. They don’t need to capture the minds of devs to get them to develop for Apple devices.

"Where's the rest of it?"

