SpacetimeDB (the tech behind this server) looks interesting as a concept, but I could never suss out how good it would be in actual practice. I've always been interested in post-mortems or reflections on the tech from companies other than the founders'.
Personally I think they launched their 1.0 prematurely. The community seems a bit mired in constant rewriting and immature tooling. Because this setup owns your whole stack, when they ship a breaking change it ripples through the whole thing and you're on the hook for it as a consumer (probably even more so if they control your hosting). Someday they might reach a stable state, but for now, if you don't want to bleed for them, I'd be wary.
(I think there are technical and marketing reasons to be wary as well, but the degree to which those matter is application specific. The above is universal and, if I had to guess, will continue to be the case through at least a few more major revisions.)
Yes! I love this framing and it's spot on. In the successful projects I've been involved in, either someone cared deeply and resolved the details in real time, or we figured out the details before we started. I've seen it outside software as well: someone says "I want a new kitchen", but unless you know exactly where you want your outlets, counter depths, size of fridge, type of cabinets, location of lighting, etc. ad infinitum, your project is going to balloon in time, cost, and likely frustration.
Is your kitchen contractor an unthinking robot with no opinions or thoughts of their own, who has never used a kitchen? Obviously if you want a specific cabinet to go in a specific place in the room, you're going to have to give the kitchen contractor specifics. But assuming your kitchen contractor isn't an utter moron, they can come up with something reasonable if they know it's supposed to be a kitchen. A sink, a stove, dishwasher, refrigerator. Plumbing and power for the above. Countertops, drawers, cabinets. If you're a control freak (which is your prerogative, it's your kitchen after all), that's not going to work for you. The same goes for generated code. If you absolutely must touch every line of code, code generation isn't going to suit you. If you just want a login screen with parameters you define, there are so many login pages the AI can crib from that nondeterminism isn't even a problem.
At least in the case of the kitchen contractor, you can trust that all the electrical equipment, plumbing, etc. is going to be connected in such a way that disasters won't happen. And if it is not, at least you can sue the contractor.
The problem with LLMs is that it is not only the "irrelevant details" that are hallucinated. It is also "very relevant details" which either make the whole system inconsistent or full of security vulnerabilities.
The login page example was actually perfect for illustrating this. Meshing polygons? Centering a div? Go ahead and turn the LLM loose. If you miss any bugs you can just fix them when they get reported.
But if it's security critical? You'd better be touching every single line of code and you'd better fully understand what each one does, what could go wrong in the wild, how the approach taken compares to best practices, and how an attacker might go about trying to exploit what you've authored. Anything less is negligence on your part.
Your kitchen contractor will never cook in your kitchen. If you leave the decisions to them, you'll get something that's quick and easy to build, but it for sure won't have all the details that make a great kitchen. It will be average.
Which seems like an apt analogy for software. I see people all the time who build systems and they don't care about the details. The results are always mediocre.
I think this is a major point people don't mention enough in these "AI vs. developers" debates: the business/stakeholder side is completely fine with average and mediocre solutions as long as those solutions are delivered quickly and priced competitively. They will gladly use a vibecoded solution if it kinda sorta mostly works. They don't care about security, performance, or completeness... such things are to be handled when/if they reach the user/customer in significant numbers. So while we (the devs) think back to all the times we used gpt/grok/claude/... and can't see how the business could possibly arrive at our solutions with AI alone and without us in the loop... the business doesn't know any of the details, nor does it care. When it comes to anything IT related, your typical business doesn't know what it doesn't know, which makes it easy to fire employees/contractors for redundancy first (because we have AI now) and ask questions later (uhh... because we have AI now).
That still requires you to evaluate all the details in order to figure out which ones you care about. And if you haven't built a kitchen before, you won't know what the details even are ahead of time. Which means you need to be involved in the process, constantly evaluating what is currently happening and whether you need to care about it.
Maybe they have a kitchen without a dishwasher. So unless asked, they won't include one, or even make it possible to include one. Seems like a real possibility. Maybe eventually, after building many kitchens, they learn they should ask about that one.
I think the Rust community is sleeping on the potential of iced for traditional desktop gui. I monitor the gui space in Rust closely and have seen many toolkits come and go. In my opinion a desktop gui library/framework needs to solve two things to be useful: architecture and advanced widgets.
egui has served me well and is eagerly recommended in "what gui should I use" threads, since it solves the widget problem well in an easy-to-use package. However, any sufficiently advanced application ends up needing a nice architecture to maintain development speed and enjoyment. I've found that whether you use egui/slint/fltk/etc., you end up having to roll your own system. When you start needing things like undo/redo, you suspiciously start architecting something that smells like the Elm architecture.
Iced is the only Rust toolkit that I track that solves the architecture part upfront. The message pattern is hugely powerful but it is hard to appreciate until you've really gotten in the weeds on larger applications. Once iced reaches a point where there is an advanced set of widgets available I suspect its popularity will rise accordingly.
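For anyone who hasn't tried it, here's a minimal sketch of the message pattern, in the shape of iced's well-known counter example (the exact `run` signature shifts between releases, so treat the details as illustrative):

```rust
use iced::widget::{button, column, text};
use iced::Element;

// All application state lives in one place...
#[derive(Default)]
struct Counter {
    value: i64,
}

// ...and every possible state change is an explicit message.
#[derive(Debug, Clone, Copy)]
enum Message {
    Increment,
    Decrement,
}

// update is the single choke point where state mutates.
fn update(counter: &mut Counter, message: Message) {
    match message {
        Message::Increment => counter.value += 1,
        Message::Decrement => counter.value -= 1,
    }
}

// view is a pure function of the state; interactions emit Messages.
fn view(counter: &Counter) -> Element<'_, Message> {
    column![
        button("+").on_press(Message::Increment),
        text(counter.value),
        button("-").on_press(Message::Decrement),
    ]
    .into()
}

fn main() -> iced::Result {
    iced::run("Counter", update, view)
}
```

Because every mutation funnels through `update` as an explicit `Message`, things like undo/redo fall out naturally: you record the messages (or state snapshots) at that one choke point instead of chasing mutations all over the codebase.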
As a comparison, one of the most successful desktop gui toolkits of all time (Qt Widgets) solved the architecture/widget duality long ago with the signal/slot system and advanced widgets like treeviews, datagrids, etc. Since then we must have had hundreds of "desktop" toolkits across all languages that can draw buttons and dropdowns, but nobody has toppled the king yet for building advanced desktop GUIs (there were a few close competitors in C# with WPF and Java with Swing, but in my opinion they only solved the widget part). I like to think iced can take this mantle one day; best of luck to them, and congrats on the 0.14 release.
Alternatively, Flutter desktop with flutter_rust_bridge works great. Sure, not fully Rust, but Flutter has way more packages on the GUI side than, I'd say, all of the Rust GUIs combined, simply due to how much older it is.
It feels a little cringe how Elm and the "Elm way of thinking" are so heavily emphasised, but after using Iced, IMO this is probably one of the best gui models, without having to implement signals, message passing, or IPC the way Dioxus/Tauri do with their frontend/backend split. Other key advantages Iced has include daemon mode, native multi-threading support, and multi-window support.
That said, for actually building most apps, what matters more is likely ease of learning and using the framework, platform/OS support, publishing support, etc., none of which Iced itself directly addresses.
> When you start needing things like undo/redo you suspiciously start architecting something that smells like the elm architecture.
Well, Iced itself claims to be inspired by the Elm Architecture, so that checks out (see the first line under the Overview section of the README: https://github.com/iced-rs/iced )
I would much rather see web apps become canvas-rendered WASM versions of desktop apps than desktop apps become webview apps. The latter is what we have been seeing in recent years, unfortunately.
Canvas-rendered cross-platform UI frameworks like Flutter & Avalonia targeting browsers (WASM) might shift the balance back in favor of desktop UI.
> By switching to fetch(), we can take advantage of its support for readable streams, which allow for a stream of content to be swapped into the DOM, rather than a single response.
Based on this section, it will be interesting to see how this evolves. I've used HTMX a bunch, but after stumbling on Datastar I've come to prefer it: partly because I don't need something like alpine.js to get some frontend goodies, but also because I've come to appreciate the power of streaming morphable targets to the browser over SSE.
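To make that concrete, here's a minimal, hypothetical sketch of the server side in Rust using axum's SSE support (the route, fragment markup, and timing are all invented for illustration; on the client, a library like Datastar morphs each incoming fragment into its matching DOM target instead of replacing the whole page):

```rust
use std::{convert::Infallible, time::Duration};

use axum::response::sse::{Event, Sse};
use axum::{routing::get, Router};
use futures::stream::Stream;
use tokio_stream::StreamExt as _; // needs tokio_stream's `time` feature for throttle

// Streams a few HTML fragments as server-sent events. Each fragment
// targets the element with the matching id on the page.
async fn updates() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    let fragments = tokio_stream::iter(1..=3)
        .map(|i| {
            let html = format!(r#"<div id="status">update {i}</div>"#);
            Ok(Event::default().data(html))
        })
        // Space the events out so the progressive swaps are visible.
        .throttle(Duration::from_secs(1));
    Sse::new(fragments)
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/updates", get(updates));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

The appeal is that the server stays in charge of rendering: the browser just holds one open connection and keeps morphing whatever fragments arrive.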
As far as I’ve seen, Apple is to blame here as they usually make it harder to target their platform and don’t really try to cooperate with the rest of the industry.
As a game developer, I have to literally purchase Apple hardware to test rather than being able to conveniently download a VM
I run Linux and test my Windows releases on a VM. It works great.
Sure, I'm not doing performance benchmarking and it's just smoke tests and basic user stories, but that's all that 98% of indie developers do for cross platform support.
Apple has been intensely stupid as a platform to launch on, though I did do it eventually. I didn't like Apple before and now I like it even less.
I develop a game that easily runs on much weaker hardware, and it runs fine in a VM; I'd say most simple 3D & 2D games would work fine in a VM on modern hardware.
However, these days it's possible to pass hardware through to your VM, so I would be able to pass a second GPU through to macOS... if it would let me run it as a guest.
On Linux, KVM provides passthrough for GPUs and other hardware, so the VM "steals" the passed-through resources from the host and gets near-native performance.
I'm not a subject matter expert, but I do find it a little odd to read the second half of that. I'd expect, beyond development/debugging, there's certainly a phase of testing that requires hardware that matches your target system?
Like, I get if you develop for consoles, you probably use some kind of emulation on your development workstation, which is probably running Windows. Especially for consoles like XBOX One or newer, and PS4 or newer, which are essentially PCs. And then builds get passed off to a team that has the hardware.
Is anyone developing games for Windows on Apple hardware? Do they run Parallels and call it a day? How is the gaming performance? If the answers to those 3 questions are "yes, yes, great", then Apple supports PC game development better than they support Apple game development?
> Like, I get if you develop for consoles, you probably use some kind of emulation on your development workstation
I don’t think anybody does this. I haven’t heard about official emulators for any of the mainstream consoles. Emulation would be prohibitively slow.
Developers usually test on dedicated devkits which are a version of the target console (often with slightly better specs as dev builds need more memory and run more slowly). This is annoying, slow and difficult, but at least you can get these dev kits, usually for a decent price, and there’s a point to trying to ship on those platforms. Meanwhile, nobody plays games on macs, and Apple is making zero effort to bring in the developers or the gamers. It’s a no-chicken-and-no-egg situation, really.
Basically you are correct: macOS has to be treated like a console in that way, except you get all the downsides of that development workflow with none of the upsides. The consoles provide excellent debugging and other tools for targeting their platforms; I can't say the same for macOS.
As for testing, I can do a large amount of it in a VM for my game. Maybe not 100%, and not full user testing; nothing beats running on the native hardware and doing alpha/beta with real users.
Also, since I can pass hardware through to my VM, I can get quite good performance by passing through a physical GPU, for example. This is possible and quite straightforward to do on a Linux host; I'm not sure if it's possible using Parallels.
I don't hate React developers. I hate developers who build consumer-facing software and use top-of-the-line hardware and networks to test it, while ignoring the fact that most of their users will be running it on 8+ year old consumer-grade hardware over spotty 3G.
No, my experience is the inverse. The type of library you describe is nice for basic queries, but once you start needing CTEs, subqueries, Postgres JSON queries, etc., it just becomes easier to manage it all in SQL directly.
I'm using JOOQ. Not saying that JOOQ is the greatest library, but all of what you just mentioned works in there without problem. Including CTEs and json stuff.
With a library such as SQLx, you can never really factor anything out. Or at least it's very hard, and you lose the actual type safety. I've been there and done that with doobie [https://typelevel.org/doobie/], which is basically the same thing in green.
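To illustrate the trade-off (hypothetical `users` table; sqlx's macros check queries against a dev database reachable via `DATABASE_URL` at build time): the compile-time verification only works when the SQL is a single string literal, which is exactly what makes factoring out shared fragments so painful:

```rust
use sqlx::PgPool;

// Hypothetical schema: users(id BIGINT, name TEXT NOT NULL).
async fn load_name(pool: &PgPool, user_id: i64) -> Result<String, sqlx::Error> {
    // Checked at compile time against the dev database, but only
    // because the SQL is one literal; it can't be assembled from parts.
    let user = sqlx::query!("SELECT name FROM users WHERE id = $1", user_id)
        .fetch_one(pool)
        .await?;
    Ok(user.name)
}

// Factor anything out (a dynamic table name, a shared WHERE clause)
// and you drop down to the unchecked, stringly-typed API:
async fn load_name_dynamic(
    pool: &PgPool,
    table: &str,
    user_id: i64,
) -> Result<String, sqlx::Error> {
    let sql = format!("SELECT name FROM {table} WHERE id = $1");
    let row: (String,) = sqlx::query_as(&sql).bind(user_id).fetch_one(pool).await?;
    Ok(row.0)
}
```

The second function compiles no matter what `table` turns out to be; any mistake only surfaces at runtime, which is the "you lose the actual type safety" part.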
My original title was edited after submission, but here the lead developers of bevy/iced/dioxus have an interesting discussion about the ethics of code reuse vs. recognition in open source projects. I thought it could trigger some interesting wider discussion.