
OK, I suppose I should write to this.

As I've mentioned before, I'm writing a high performance metaverse client. Here's a demo video.[1] It's about 40,000 lines of Rust so far.

If you are doing a non-crappy metaverse, which is rare, you need to wrangle a rather excessive amount of data in near real time. In games, there's heavy optimization during game development to prevent overloading the play engine. In a metaverse, as with a web browser, you have to take what the users create and deal with it. You need 2x-3x the VRAM a comparable game would need, a few hundred megabits per second of network bandwidth to load all the assets from servers, a half dozen or so CPUs running flat out, and Vulkan to let you put data into the GPU from one thread while another thread is rendering.

So there will be some parallelism involved.

This is not like "web-scale" concurrency, which is typically a large number of mini-servers, each doing their own thing, that just happen to run in the same address space. This is different. There's a high priority render thread drawing the graphics. There's an update thread processing incoming events from the network. There are several asset loading and decompression threads, which use up more CPU time than I'd like. There are about a half dozen other threads doing various miscellaneous tasks - handling moving objects, updating levels of detail, purging caches, and such.

There's considerable locking, but no "static" data other than constants. No globals. Channels are used where appropriate to the problem. The main object tree is single ownership, and used mostly by the update thread. Its links to graphics objects are Arc reference counted, and those are updated by both the update thread and the asset loading threads. They in turn use reference counted handles into the Rend3 library, which, via WGPU and Vulkan, puts graphics content (meshes and textures) into the GPU. Rendering is a loop which just tells Rend3 "Go", over and over.

This works out quite well in Rust. If I had to do this in C++, I'd be fighting crashes all the time. There's a reason most of the highly publicized failed metaverse projects didn't reach this level of concurrency. In Rust, I have about one memory related crash per year, and it's always been in someone else's "unsafe" code. My own code has no "unsafe", and I have "unsafe" locked out to prevent it from creeping in. The normal development process is that it's hard to get things to compile, and then it Just Works. That's great! I hate using a debugger, especially on concurrent programs. Yes, sometimes you can get stuck for a day, trying to express something within the ownership rules. Beats debugging.

I have my complaints about Rust. The main ones are:

- Rust is race condition free, but not deadlock free. It needs a static deadlock analyzer, one that tracks through the call chain and finds that lock A is locked before lock B on path X, while lock B is locked before lock A on path Y. Deadlocks, though, tend to show up early and are solid problems, while race conditions show up randomly and are hard to diagnose.

- Async contamination. Async is all wrong when there's considerable compute-bound work, and incompatible with threads running at multiple priorities. It keeps creeping in. I need to contact a crate maintainer and get them to make their unused use of "reqwest" dependent on a feature, so I don't pull in Tokio. I'm not using it, but it's there.

- Single ownership with a back reference is a very common need, and it's too hard to do. I use Rc and Weak for that, but shouldn't have to. What's needed is a set of traits to manage consistent forward and back links (that's been done by others) and static analysis to eliminate the reference counts. The basic constraints are ordinary borrow checker restrictions - if you have mutable access to either parent or child, you can't have access to the other one. But you can have non-mutable access to both. If I had time, I'd go work on that.

- I've learned to live without objects, but the trait system is somewhat convoluted. There's one area of asset processing that really wants to be object oriented, and I have more duplicate code there than I like. I could probably rewrite it to use traits more, but it would take some bashing to make it fit the trait paradigm.

- The core graphics crates aren't finished. There was an article on HN a few days ago about this. "Rust has 5 games and 50 game engines". That's not a language problem, that's an ecosystem problem. Not enough people are doing non-toy graphics in Rust. Watch my video linked below.[1] Compared to a modern AAA game title, it's not that great. Compared to anything else being done in Rust (see [2]) it's near the front. This indicates a lack of serious game dev in Rust. I've been asked about this by some pro game devs. My comment is that if you have a schedule to meet, the Rust game ecosystem isn't ready. It's probably about five people working for a year from being ready.

[1] https://video.hardlimit.com/w/tp9mLAQoHaFR32YAVKVDrz

[2] https://gamedev.rs/



We've been building our robotic simulators in Rust for the past 3 years and I have the exact same experience. So far, I think, we've encountered maybe 5 actual runtime bugs over the last 3 years. Sure, Rust has some problems, and yes, async isn't fully there yet, but overall the benefits outweigh the problems.


Async as a paradigm seems so at odds with what GP was discussing. If I understood correctly, and from my experience, we're talking more about concurrent execution with carefully designed priorities, locks, and timing requirements. This is closer to embedded / systems-level concurrency, if I understand it right. Are we really expecting a coroutine/async style to just lift into this world?


Threads are for doing your own work in parallel. Async is for waiting on others to do their work in parallel.

Your own work would be some CPU-intensive operations you can logically divide and conquer.

Others' work would be waiting for file I/O from the OS, waiting for a DB result set following a query, waiting for a gRPC response, etc.

Conceptually quite distinct, and there are demonstrated advantages and drawbacks to each. Right tool for right job and all that.


All correct. An additional comment is that, when I was coming up, parallelism in its many forms was of the variety "I need to do this job in parallel" or "I need to handle exactly 32 concurrent workers". After the web and such, it became common to just think of parallelism as "I declare this one method as returning a promise" and then "async def", which semantically is very different from managing threads. As pointed out, it's now more like "This function is basically a server for any and all uncontrolled calls from elsewhere".


This was my thought as well: async is just ONE valid approach to the ultimate problem of "do multiple things at once"; it is not the be-all and end-all of approaches.


Out of curiosity, is this robotics simulator open source/available?


> Rust is race condition free, but not deadlock free. It needs a static deadlock analyzer, one that tracks through the call chain and finds that lock A is locked before lock B on path X, while lock B is locked before path A on path Y.

That sounds like a great idea. Something in the style of lockdep, that (when enabled) analyzes what locks are currently held while any other lock is taken, and reports any potential deadlocks (even if they haven't actually deadlocked).

That would require some annotation to handle cases of complex locking, so that the deadlock detection knows (for instance) that a given class of locks is always obtained in address order and therefore can't deadlock. But it's doable.


There's tracing-mutex, which builds a DAG of your locks as you acquire them and panics (at runtime) if a deadlock is possible: https://github.com/bertptrs/tracing-mutex

parking_lot has a deadlock detection feature that, IIRC, tells you what deadlocked when it happens (so you're not trying to figure it out with a debugger and a lot of time): https://amanieu.github.io/parking_lot/parking_lot/deadlock/i...

I also just found out about https://github.com/BurtonQin/lockbud which seems to detect deadlocks and a few other issues statically? (seems to require compiling your crate with the same version of rust as lockbud uses, which from the docs is an old 1.63 nightly build?)


Google has tackled this before: https://abseil.io/docs/cpp/guides/synchronization#thread-ann...

It's quite nice, but it's for C++, not Rust.


I wonder if locks could keep some thread-local registry, at least in debug builds.

If locks can be numbered or otherwise ordered, it would be easy to enforce a strict order of taking locks and an inverse strict order of releasing them, by looking up in the registry which locks your thread is currently holding. This would prevent deadlocks.

This, of course, would require having an idea of all the locks you may want to hold, and their relative order (at least partial), as Dijkstra described back in the day. But thinking about locks ahead of time is a good idea anyway.


I'm doing basically the same thing in Java for an MMO and the JDK makes it so easy. Just move objects via concurrent queues from network to model creation to UI threads. It's actually quite boring, and fast!


Is there video or a demo?


I don't have anything recorded from the past few years. Here's an old video:

https://youtu.be/L7XIFC2SawY?si=qN7TNxZi-P05uXVa

It's basically a custom 3D multithreaded OSM renderer, and the assets are a custom binary format. Uses very little network bandwidth.

Hoping to have an update this year that shows the updated graphics. I wrote a UI framework to improve my productivity (live hot reloading of UI components written in HTML, with one-way data binding). I had to do this because the game is going to have so many UIs and I got tired of writing them in Java-8-style Java. Soon I can resume work on the game, once sidewaysdata.com is done-ish (it also uses the UI library to build the desktop/mobile timing application).


You can sign up to be notified if I ever get it done :) here https://tdworldgame.com/


Nice.

The "many UI" problem is large in Rust. Egui needs far too much Rust code per dialog box. Someone was working on a generator, but I haven't looked in on that project in a while.


I actually quite liked egui. It was Rust that felt too slow to write. Also the egui template project with eframe and no app code yet took 15 seconds for an incremental compile. The entire game so far compiles and starts faster than that in Java, so...


Non-blocking I/O is quite mature in Java, and it shows. Unfortunately Java is still a rabid devourer of memory. Its RAM consumption tends to be the biggest con whenever evaluating the pros of using Java. Sometimes it's worth it. More and more often it's not anymore.


I think the game takes a few hundred MB to run while zooming out of a city right now.

FastComments' pubsub system in Java takes less than 500 MB of heap for around 100k subscribers.

But yes, you have to worry about object field count.


Good luck on the metaverse app! I'd love to see more interesting metaverse takes.

One quibble though. Rust isn't race condition free, it's data race free. You can still end up with race conditions outside of data access. https://news.ycombinator.com/item?id=23599598


> Async is all wrong when there's considerable compute-bound work, and incompatible with threads running at multiple priorities

The priority thing is relatively easy to fix:

Either create multiple thread pools, and route your futures to them appropriately.

Or, write your own event loop, and have it pull from more than one event queue (each with a different priority).

It should be even easier than that, but I don’t know of a crate that does the above out of the box.

One advantage of the second approach is (if your task runtime is bounded) that you can have soft realtime guarantees for high priority stuff even when you are making progress on low priority stuff and running at 100% CPU.


This doesn't help with priority inversions; since you don't know who is waiting on a future/promise until it starts waiting on it, you can't resolve them until then, which means you can have work running at too low a priority. It's not structured enough.


> Single ownership with a back reference is a very common need, and it's too hard to do.

I've been collecting a list[1] of what memory-management policies programmers actually want in their code; it is far more extensive than any particular language actually implements. Contributions are welcome!

I already had back reference on the list, but added some details. When the ownership is indirect (really common) it is difficult to automate.

One thing that always irritates me: Rust's decision to make all objects moveable really hurts it at times.

[1] https://gist.github.com/o11c/dee52f11428b3d70914c4ed5652d43f...


Yes, back-linked objects are probably going to have to be pinned.


Cheering for your metaverse app. Hope to hear more about it. I suspected you might be doing gamedev but this is the first time you’ve shown extensive work.

One challenge with rust is that (for better or worse) most gamedev talent is C++. If you ever open source it I’d be interested in contributing, though I’m not sure how effective the contributions would be.

Good luck!


Email sent.

I'm not that interested in self-promotion here as I am in getting more activity on Rust graphics development. I think the Rust core graphics ecosystem needs about five good graphics people for a year to get unstuck. Rust is a good language for this sort of thing, but you've got to have reliable heavy machinery down in the graphics engine room.

Until that exists, nobody can bet a project with a schedule and a budget on Rust. The only successful commercial high-detail game title I know of that uses Rust is a sailing race simulator. They simply linked directly to "good old DX11" (Microsoft DirectX 11) and wrote the game logic in Rust, bypassing Rust's own 3D ecosystem completely.


Is it the one by the same guy who made the gold-standard moddable racing simulator?


It's "Hydrofoil Generation".[1] The only game on the "Released" page of the Rust gaming group that looks post-2000.

[1] https://arewegameyet.rs/games/released/


He co-founded and was the lead dev of Kunos Simulazioni, which made Assetto Corsa (https://store.steampowered.com/app/244210/Assetto_Corsa/).

I miss his Twitch streams! https://www.twitch.tv/kunosstefano


Any pointers on what exactly is missing?

I am neither a Rust guy or a graphics guy, but I have some interest in what is missing in the ecosystem.


> Any pointers on what exactly is missing?

Yes. [1]

[1] https://www.reddit.com/r/rust_gamedev/comments/13qt6rq/were_...


    "I've learned to live without objects, but the trait system is somewhat convoluted. There's one area of asset processing that really wants to be object oriented, and I have more duplicate code there than I like. I could probably rewrite it to use traits more, but it would take some bashing to make it fit the trait paradigm."
Can you expand on this? I come from the C# world and the Rust trait system feels expressive enough to implement the good parts of OOP.


I understand this not as objects being missing (after all, structs with methods and traits are objects, aren't they?) but more as the lack of hierarchical inheritance, which is most often used in OOP to conveniently share common code with added specialization: override only the methods you want. You can do it with traits, of course, but it's much more verbose. You can technically use the Deref trait to simulate a sort of method inheritance, but that is frowned upon, as it should be reserved for smart-pointer-like objects (so the docs say).


That's about what I was going to say. Traits have no data of their own. If you need that, you have to construct it, with a data object in each trait instance and access functions for it. It turns the notion of inheritance inside out. Awkward enough that it's only done if absolutely necessary.


I'm from the C# world and am working through learning Rust... in C# we've largely moved away from using inheritance. Not sure whether that's a good thing, but "best practice" results in serialisation being implemented differently (serialisers which use attributes, or, for more advanced teams, serialisation wired in at compile time targeted by attributes; the advantage there being that the state doesn't have to be public).


I still use inheritance in C# although it is only used for is-a relationships and those aren't that common. But when you need it for that, it's usually pretty important.

I also think it's much more common to see it in library / framework code and not in application code.


UI in Rust without inheritance is tricky. There's still no great UI framework written in Rust yet, though not for lack of trying! I'm interested to see how Bevy's UI turns out. They're currently exploring the design space and requirements for production-grade UI, actually.


Wouldn’t something akin to SwiftUI work well in this situation? I can understand that not having a “component” class to inherit would make building custom components difficult, but if most layout and skinning can be accomplished via functions then you can sidestep the issue for most cases, I think…


I think people are looking to SwiftUI for inspiration. It'll still take some time to build and evaluate these solutions.


It's "prefer composition over inheritance" though, not "never use inheritance".

There is a time and a place for it.


Which, as I said, results in you *usually* using composition :)


Does delegate support delegating everything (except what you're specifically implementing for your own struct) yet? That's the way to do it.


> Async contamination

I've always wondered why the "color" of a function can't be a property of its call site instead of its definition. That would completely solve this problem - you declare your functions once, colorlessly, and then can invoke them as async anywhere you want.


> I've always wondered why the "color" of a function can't be a property of its call site instead of its definition. That would completely solve this problem - you declare your functions once, colorlessly, and then can invoke them as async anywhere you want.

If you have a non-joke type system (which is to say, Haskell or Scala) you can. I do it all the time. But you need HKT and in Rust each baby step towards that is an RFC buried under a mountain of discussion.


You can do it without HKTs with an effects system, which you can think of as another kind of generics that causes the function to be compiled in different ways depending on how it's called. There is movement in Rust to try to do this, but I wish it had been done before async was implemented, considering async could have been implemented within it...


The Rust folks are working on this very problem with the keyword generics proposal: https://blog.rust-lang.org/inside-rust/2022/07/27/keyword-ge...


If a function calls something async, it can't simply be evaluated synchronously, because 1) there's no setup: it could be async I/O that requires being called in the context of an async runtime (a library feature, not a language feature), and 2) blocking synchronously on an async task inside an async runtime can deadlock: the task waits on the runtime's I/O polling, while the blocking wait prevents the runtime from being polled.


> could be async IO and require being called in the context of an async runtime

The compiler already has knowledge that a function is being called as async - what prevents it from ensuring that a runtime is present when it does?

> blocking synchronously on an async task in an async runtime can result in deadlocks from task waiting on runtime IO polling but the waiting preventing the runtime from being polled

What prevents the runtime from preempting a task?


> what prevents it from ensuring that a runtime is present when it does?

The runtime being a library instead of a language/compiler-level feature. Custom runtimes are necessary for systems languages, as they can have specialized constraints.

EDIT: Note that it's the presence of a supported runtime for the async operation (e.g. it relies on runtime-specific state like non-blocking IO, timers, priorities, etc.), not only the presence of any runtime.

> What prevents the runtime from preempting a task?

Memory efficient runtimes use stackless coroutines (think state machines) instead of stackful (think green threads / fibers). The latter comes with inefficiencies like trying to guess stack sizes and growing them on demand (either fixing pointers to them elsewhere or implementing a GC) so it's not always desirable.

To preempt the OS thread of a stackful coroutine (i.e. to catch synchronously blocking on something) you need to have a way to save its stack/registers in addition to its normal state machine context which is the worst of both worlds: double the state + the pointer stability issues from before.

This is why most stackful coroutine runtimes are cooperatively scheduled instead, requiring blocking opportunities to be annotated so the runtime can workaround that to still make progress.


> _green thread inefficiencies_

Ron Pressler (@pron) from Loom @ Java had an interesting talk on the Java Language Summit just recently, talking about Loom’s solution to the stack copying: https://youtu.be/6nRS6UiN7X0


Thank you for your explanation of the trade-space around preemptible coroutines, that greatly helped my understanding. I am still unclear on one thing:

> The runtime being a library instead of a language/compiler level feature. Custom runtimes is necessary for systems languages as they can have specialized constraints.

Compilers link against dynamic libraries all the time. What prevents the compiler from linking against a hypothetical libasync.so just like any other library? (alternatively, if you want to decouple your program from a particular async runtime, what prevents the language from defining a generic interface that async runtimes must implement, and then linking against that?)


This would imply a single/global runtime along with an unrealistic API surface:

For 1) It's common enough to have multiple runtimes in the same process, each setup possibly differently and running independently of each other. Often known as a "thread-per-core" architecture, this is the scheme used in apps focused on high IO perf like nginx, glommio, actix, etc.

For 2) runtime (libasync.so) implementations would have to cover a lot of aspects they may not need (async compute-focused runtimes like bevy don't need timers, priorities, or even IO) and expose a restrictive API (what's a good generic model for a runtime IO interface? something like io_uring, dpdk, or epoll? what about userspace networking as seen in seastar?). A pluggable runtime mainly works when the language has a smaller scope than "systems programming" like Ponylang or Golang.

As a side note; Rust tries to decouple the scheduling of Futures/tasks using its concept of Waker. This enables async implementations which only concern themselves with scheduling like synchronization primitives or basic sequencers/orchestration to be runtime-agnostic.


I did some reading up on this, and found more detail about the "unrealistic API surface" (e.g. [1]), and I think I understand the problem, at least at a surface level (and agree with the conclusions of the Rust team).

So then to tie this back to my earlier question - why does this make a difference between "async declared at function definition site" vs "async declared at function call site"?

Libraries have to be written against a specific async API (tokio vs async-std, to reference the linked Reddit thread) - that makes sense. But that doesn't change regardless of whether your code looks like `async fn foo() {...}` or `async foo();`. The compiler has ahead-of-time knowledge of both cases, as well...

[1] https://old.reddit.com/r/rust/comments/f10tcq/confusion_with...


In most runtimes you can just call something like `block_on`. There are some things to be careful about to avoid starving other tasks, but most general-purpose runtimes will spawn more threads as needed. Similarly, blocking in an async task is generally not much of an issue for these runtimes, for the same reasons.

It isn't like JavaScript where there is truly only one thread of execution at a time and blocking it will block everything.


`std::thread::spawn()` and `.join()` are the ultimate async implementation.


>The normal development process is that it's hard to get things to compile, and then it Just Works. That's great! I hate using a debugger, especially on concurrent programs. Yes, sometimes you can get stuck for a day, trying to express something within the ownership rules. Beats debugging.

This is a far superior workflow when you factor in outcomes. More up-front time to get a "correct"/more-reliable output scales infinitely better than churning out crap that you need to wrap in 10,000 lines of tests to keep from breaking/validate (see: the dumpster fire that is Rails).


> This is a far superior workflow when you factor in outcomes.

I’m a strong-typing enthusiast, too, but still, I’m not fully convinced that’s true.

It seems you can’t iterate fast at all in Rust because the code wouldn’t compile, but can iterate fast in C++, except for the fact that the resulting code may be/often is unstable.

If you need to try out a lot of things before finding the right solution, the ability to iterate fast may be worth those crashes.

Maybe using C++ for fast iterations, and only using various tools to hunt down the issues the borrow checker would catch on the iteration you want to keep, beats using Rust.

Or do Rust programmers iterate fast using unsafe where needed and then fix things once they’ve settled on a design?


> It seems you can’t iterate fast at all in Rust because the code wouldn’t compile

Yup, this is correct - and the reason is that Rust forces you to care about efficiency concerns (lifetimes) everywhere. There's no option to "turn the borrow checker off" - which means that when you're in prototyping mode, you pay this huge productivity penalty for no benefit.

A language that was designed to be good at iteration would allow you to temporarily turn the borrow checker off, punch holes in your type system (e.g. with Python's "Any"), and manage memory for you - and then let you turn those features on again when you're starting to optimize and debug. (plus, an interactive shell and fast compilation times - that's non-negotiable) Rust was never designed to be good at prototyping.

I heard a saying a few years ago that I like - "it's designed to make hardware [rigid, inflexible programs], not software". (it's from Steve Yegge - I could track it down if I cared)


>There's no option to "turn the borrow checker off" - which means that when you're in prototyping mode, you pay this huge productivity penalty for no benefit

That’s not really true. The standard workaround for this is just to .clone() or Rc<RefCell<>> to unblock yourself, then come back later and fix it.

It is true that this needs to be done with some care otherwise you can end up infecting your whole codebase with the “workaround”. That comes with experience.


> It is true that this needs to be done with some care otherwise you can end up infecting your whole codebase with the “workaround”

It's a "workaround" precisely because the language does not support it. My statement is correct - you cannot turn the borrow-checker off, and you pay a significant productivity penalty for no benefit. "Rc" can't detect cycles. ".clone()" doesn't work for complex data structures.


You can’t turn off undefined behavior in C++ either. Lifetimes exist whether the language acknowledges them or not.

Except if you go to a GC language, but then you’re prototyping other types of stuff than you’d probably pick Rust for.


You can use unsafe if you really want to "turn the borrow-checker off", no?


No, because that doesn't give you automatic memory management, which is the point. When I'm prototyping, there's zero reason for me to care about lifetimes - I just want to allocate an object and let the runtime handle it. When you mark everything in your codebase unsafe (a laborious and unnecessary process that then has to be laboriously undone), you still have to ask the Rust runtime for dynamic memory manually, and then track the lifetimes in your head.


If you're saying you want GC/Arc then that's more than just "turning off the borrow checker".


Pedantry. Later on in my comment I literally say "manage memory for you" - it should be pretty clear that my intent was to talk about a hypothetical language that allowed you to change between use of a borrow checker and managed memory, even if I didn't use the correct wording ("turn off the borrow checker") in that particular very small section of it.


Bit much to complain about pedantry with how prickly your tone has been in this whole thread. If you only want this functionality for rapid iteration/prototyping, which was what you originally said, then leaking memory in those circumstances is not such a problem.


You're right, I have been overly aggressive. I apologize.

> If you only want this functionality for rapid iteration/prototyping, which was what you originally said, then leaking memory in those circumstances is not such a problem.

There's use-cases for wanting your language to be productive outside of prototyping, such as scripting (which I explicitly mentioned earlier in this thread[1] - omission here was not intentional), and quickly setting up tools (such as long-running web services) that don't need to be fast, but should not leak memory.

"Use Rust, but turn the borrow checker off" is inadequate.

[1] https://news.ycombinator.com/item?id=37441120


Yeah, I do think the space where manual memory management is actually desirable is pretty narrow - and so I'm kind of baffled that Rust caught on where the likes of OCaml didn't. But seemingly there's demand for it. (Either that, or programming is a dumb pop culture that elevates performance microbenchmarks beyond all reason)


> There's no option to "turn the borrow checker off" - which means that when you're in prototyping mode, you pay this huge productivity penalty for no benefit.

Frankly I think this is a good thing! And I disagree with your "no benefit" assertion.

I don't like prototyping. Or rather, I don't like to characterize any phase of development as prototyping. In my experience it's very rare that the prototype actually gets thrown away and rewritten "the right way". And if and when it does happen, it happens years after the prototype has been running in production and there's a big scramble to rewrite it because the team has hit some sort of hard limit on fixing bugs or adding features that they can't overcome within the constraints of the prototype.

So I never prototype. I do things as "correctly" as possible from the get-go. And you know what? It doesn't really slow me down all that much. I personally don't want to be in the kind of markets where I can't add 10-20% onto a project schedule without failing. And I suspect markets where those sorts of time constraints matter are much rarer than most people tell themselves.

(And also consider that most projects are late anyway. I'd rather be late because I was spending more time to write better, safer code, than because I was frantically debugging issues in my prototype-quality code.)


The dev cycle is slower, yes, but once it compiles, there is no debug cycle.


Wildly false. Rust's design does virtually nothing to prevent logic errors.


So you've found someone who never introduces logic errors, and discovered a way to use dependent types in Rust. /s


95% of my "logic errors" are related to surprise nulls (causing a leak of sensitive data) or surprise mutability. The idea that there is no debug cycle is ridiculous, but I am confident that there will be fewer of them in Rust.


I bet it won't survive a pentest, and there are more ways for a program to violate its expectations than nullability alone.

On the type system theory, Rust still has quite something to catch up to theorem provers, which even those aren't without issues.


So then tests are optional?

Most bugs are elementary logic bugs expressible in every programming language.


> So then tests are optional?

Yes and no. You're gonna write far fewer tests in a language like Rust than in a language like Python. In Python you'll have to write tests to eliminate the possibility of bugs that the Rust compiler can eliminate for you. I would much rather just write logic tests.

> Most bugs are elementary logic bugs expressible in every programming language.

I don't think that's true. I would expect that most bugs are around memory safety, type confusion, or concurrency issues (data races and other race conditions).


Python is not a language I would consider to be meaningfully comparable to Rust. They have very different use cases.

In modern C++, memory safety and type confusion aren’t common sources of bugs in my experience. The standard idiomatic design patterns virtually guarantee this. The kinds of concurrency issues that tend to cause bugs can happen in any language, including Rust. Modern C++, for all its deficiencies, has an excellent type safety story, sometimes better than Rust. It doesn’t require the language to provide it though, which is both a blessing and a curse.


I think mostly you just need less iteration with Rust because the language seems to guide you towards nice, clean solutions once you learn not to fight the borrow checker.

Rust programmers don't iterate using unsafe because every single line of unsafe gives you more to think and worry about, not less. But they might iterate using more copying/cloning/state-sharing-with-ref-counting-and-RefCell than necessary, and clean up the ownership graph later if needed.


> I think mostly you just need less iteration with Rust because the language seems to guide you towards nice, clean solutions once you learn not to fight the borrow checker.

That's not iteration. That's debugging. "Iteration" includes design work. Rust's requirement to consider memory management and lifetimes actively interferes with design work with effectively zero contributions towards functional correctness (unlike types, which actually help you write less buggy code - but Rust's type system is not unique and is massively inferior to the likes of Haskell and Idris), let alone creating things.


> Rust's requirement to consider memory management and lifetimes actively interferes with design work with effectively zero contributions towards functional correctness

I don't really agree with that. If you've decided on a design in Rust where you're constantly fighting with lifetimes (for example), that's a sign that you may have designed your data ownership wrong. And while it's not going to be the case all the time, it's possible that a similar design in another language would also be "wrong", but in ways that you don't find out until much later (when it's much harder to change).

> Rust's type system is not unique and is massively inferior to the likes of Haskell and Idris

Sure, but few people use Haskell or Idris in the real world for actual production code. Most companies would laugh me out of an interview if I told them I wanted to introduce Haskell or Idris into their production code base. That doesn't invalidate the fact that they have better type systems than Rust, but a language I can't/won't use for most things in most places isn't particularly useful to me.


No, I was not talking about debugging or correctness. The point was that Rust does not merely guide you towards correct code, it tends to guide you towards good code.


The point of iteration is not typically to find the best implementation for a given algorithm, it's to find the best algorithm for solving a given problem.

I can see the argument that Rust encourages by its design a clean implementation to any given algorithm. But no language's design can guide you to finding a good algorithm for solving a given problem - you often need to quickly try out many different algorithms and see which works best for your constraints.


I don't think iteration is usually about testing different algorithms – it's much more about finding out what the problem is in the first place (that is, attaining a good enough understanding of the problem to solve it), and secondarily about finding out a solution to the problem that satisfies any relevant boundary conditions.


In Rust you have the option to Box and/or Rc everything. That gets you out of all the borrowing problems at the cost of runtime performance (it basically puts you in the C++ world). This is a perfectly reasonable way to program, but people forget about it because of the more "purist" approach that's available. But it's a good way to go for iteration and simplicity, and (in my opinion) still miles better than C++, due to the traits, pattern matching, error handling, and tooling.


I tend to agree, but pro game dev is a hell where people demand that a new feature be demoed for the producer by 1 PM tomorrow. I have the luxury of not being under such pressure.


> Yes, sometimes you can get stuck for a day, trying to express something within the ownership rules.

This is a big problem. Fast iteration time is very valuable.

And who likes doing this to themselves anyway? Isn't it a very frustrating experience? How is this the most loved language?


> And who likes doing this to themselves anyway? Isn't it a very frustrating experience? How is this the most loved language?

The thing is, these dependencies do exist no matter what language you use if they stem from an underlying concept. In that case Rust just makes you write them explicitly, which is a good thing, since in C++ all these dependencies would be more or less implicit, and every time somebody edits the code they need to think all these cases through and build a mental model (if they see the dependencies at all!). In Rust you at least have the lifetime annotations, which A: make it obvious there is some special dependency going on, and B: show the explicit lifetimes.

So what I'm saying, you need to put in this work no matter which language you choose, writing it down is then not a big problem anymore. If you don't think about these rules your program will probably work most of the time but only most of the time, and that can be very bad for certain scenarios.


> So what I'm saying, you need to put in this work no matter which language you choose

This is very false. Managed-memory languages don't require you to even think about lifetimes, let alone write them down.

Yes, I understand that this is for efficiency - but claiming that you have to think about lifetimes everywhere is just wrong, and irrelevant when discussing topics (prototyping/design work/scripting) where you don't care about efficiency.


Lifetimes are still important in managed languages. You just have to track them in your head, which is fallible. The difference is that if you get it wrong in a managed language, you get leaks or stale objects or other logic bugs. In Rust you get compile-time errors.


While this is correct, it's still much easier to think about lifetimes in managed languages. The huge majority of allocated objects gets garbage-collected after a very short time, when they leave the context (similar to RAII).

Mostly you need to think about large and/or important objects, and avoid cycles, and avoid unneeded references to such objects that would live for too long. Such cases are few.

The silver lining is that if you make a mistake and a large object would have to live slightly longer, you won't have to wrangle with the lifetime checker for that small sliver of lifetime. But if you make a big mistake, nothing will warn you about a memory leak, before the prod monitoring does.


> The huge majority of allocated objects gets garbage-collected after a very short time, when they leave the context (similar to RAII).

Those objects are also virtually no problem in languages like Rust or C++. Those are local objects whose lifetimes are trivial and they are managed automatically with no additional effort from the developer.


> The difference is that if you get it wrong in a managed language, you get leaks or stale objects or other logic bugs.

Can you provide concrete examples of this? I've literally never had a bug due to the nature of a memory-managed language.


Once upon a time (at least through IE7) Internet Explorer had separate memory managers for javascript and the DOM. If there was a cycle between a JS object and a DOM object (a DOM node assigned as a property of a JS object, with another property of that object assigned as an event handler on the DOM node) then IE couldn't reclaim the memory.

Developers of anything resembling complex scripts (for the time) had to manually break these cycles by setting to null the attributes of the DOM node that had references to any JS objects.

Douglas Crockford has a little writeup here[0] with a heavy-handed solution, but it was better than doing it by hand if you were worried another developer would come along and add something and forget to remove it.

Other memory managed languages also have to deal with the occasional sharp corners. Most of the time, this can be avoided by knowing to clean up resources properly, but some are easier to fall for than others.

Oracle has a write up on hunting Java memory leaks [1] Microsoft has a similar, but less detailed article here[2]

Of course, sometimes a "leak" is really a feature. One notorious example is accidental globals in the bad old days of JS prior to the advent of strict mode. I forget the name of the company, but someone's launch was ruined because a variable referencing a shopping cart wasn't declared with `var` and was treated as a global variable, causing concurrent viewers to accidentally see other users' shopping cart data, since node runs a single main thread and concurrency was handled only by node's event loop.

[0] https://www.crockford.com/javascript/memory/leak.html

[1] https://docs.oracle.com/en/java/javase/17/troubleshoot/troub...

[2] https://learn.microsoft.com/en-us/dotnet/core/diagnostics/de...


My question was about the nature of a memory-managed language causing "leaks or stale objects or other logic bugs". This issue is not that - this is due to a buggy implementation causing memory leaks.

To be more precise: this is a bug, that was fixable, in the runtime, not in user applications that would run on top of it.

Assume a well-designed memory-safe language and implementation. What kinds of memory hazards are there?


> separate memory managers

Notwithstanding the rest of your comment, this doesn't seem like a good example of the problem, since most GCs have a complete view of their memory.


ConcurrentModificationExceptions (CMEs) in Java are a constant thorn in the side of many Java programmers, as a lifetime-violation bug. Hell, even NPEs are, for that matter, lol.


You can certainly get memory abandonment, which is like a leak but for memory that's still referenced and is just never going to be used again.


I will note that in GC literature at least, that is still considered a leak.

In an ideal world, we could have a GC that reclaimed all unused memory, but that turns out to be impossible because of the halting problem. So, we settle for GCs that reclaim only unreachable memory, which is a strict subset of unused memory. Unused reachable memory is a leak.


> Managed-memory languages don't require you to even think about lifetimes, let alone write them down.

Memory is only one of many types of resources applications use. Memory-managed languages do nothing to help you with those resources, and effectively managing those resources is way harder in those languages than in Rust or C++.


What? Rust doesn't do anything to "help you with those resources", either - you can still create cycles in Rc/Arc objects or allocate huge amounts of memory and then forget about it.

In both languages you have to rely on careful design, and then profile memory use and manage it.

However, Rust requires you to additionally reason about lifetimes explicitly. Again - great for performance, terrible for design, prototyping, and tools in non-resource-constrained environments*.


> The thing is, these dependencies do exist no matter what language you use

Sure, but in a lot of cases, these invariants can be trivially explained, or intuitive enough that it wouldn't even need explanation. While in Rust, you can easily spend a full day just explaining it to the compiler.

I remember spending literal _days_ tweaking intricate lifetimes and scopes just to promise Rust that some variables won't be used _after_ a thread finishes.

Some things I never even managed to express in Rust, even though they were trivial in C, so I just rely on having a C core library for the hot path, and use it from Rust.

Overall, performance sensitive lifetime and memory management in Rust (especially in multithreaded contexts) often comes down to:

1) Do it in _sane_ Rust, and copy everything all over the place, use fancy smart pointers, etc.

2) Do it in a performant manner, without useless copies, without over the top memory management, but prepare a week of frustrating development and a PhD in Rust idiosyncrasies.


>use fancy smart pointers, etc.

The thing is, you think your code is safe, and it most likely is, but mathematically speaking, what you are doing is difficult or even impossible to prove correct. It is akin to using an algorithm for an NP-complete problem on instances that are actually easy: most practical problem instances are easy to solve, but the worst case, which can't be ruled out, is utterly, utterly terrible, which forces you to use a more general solution than is actually necessary.


Don't let perfect be the enemy of good.

Since smart pointers became ubiquitous in C++, I've (personally) had only a handful of memory and lifetime issues. They were all deducible by looking at where we "escape hatched" and stored a raw pointer to something that was actually owned by a unique pointer, or something similar. I'll take having one of those every 18 months over throwing away my entire language, toolchain, ecosystem, and iteration times.


I don't think it's a matter of putting one versus the other.

If you can get away with smart pointers and such, life is beautiful, nothing wrong there!

The debate here is rather for the cases where you cannot afford such things.


> Some things I never even managed to express in Rust, even though they were trivial in C, so I just rely on having a C core library for the hot path, and use it from Rust.

i can’t think of anything you can do in c that you can’t do in unsafe rust, and that has the advantage that you can narrow it down to exactly where you need it and only there, and you can test it in Miri to find bugs


To be fair, unsafe Rust has an entirely new set of idiosyncrasies that you have to learn for your code not to cause UB. Most of them revolve around the many ways in which using references can invalidate raw pointers, and using raw pointers can invalidate references, something that simply doesn't exist in C apart from the rarely-used restrict qualifier.

(In particular, it's very easy to inadvertently trigger the footgun of converting a pointer to a reference, then back to a pointer, so that using the original pointer again can invalidate the new pointer.)

Extremely pointer-heavy code is entirely possible in unsafe Rust, but often it's far more difficult to correctly express what you want compared to C. With that in mind, a tightly-scoped core library in C can make a lot of sense; more lines of unsafe code in either language leave more room for bugs to slip in.


> i can’t think of anything you can do in c that you can’t do in unsafe rust

That is not my point.

There is a world between "you can do it" and "you will do it".

Some things in Rust are doable in theory, but end up being so insane to implement that you won't do it in practice. That is my point.


> How is this the most loved language?

Personal preference and pain tolerance. Just like learning Emacs[1] - there's lots of things that programmers can prioritize, ignore, enjoy, or barely tolerate. Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that. I, myself, have wasted a lot of time trying to get the types in some of my programs just right - but I enjoy it, so it's worth it, even though my productivity has decreased.

Plus, Rust seems to have pushed out the language design performance-productivity-safety efficiency frontier in the area of performance-focused languages. If you're a performance-oriented programmer used to buggy programs that take a long time to build, then a language that gives you the performance you're used to with far fewer bugs and faster development time is really cool, even if it's still very un-productive next to productivity-oriented languages (e.g. Python). If something similar happened with productivity languages, I'd get excited, too - actually, I think that's what's happening with Mojo currently (same productivity, greater performance) and I'm very interested.

[1] https://news.ycombinator.com/item?id=37438842


> even if it's still very un-productive next to productivity-oriented languages (e.g. Python).

The thing is, for many people, including me, Rust is actually a more productive language than Python or other dynamic languages. Actually writing Python was an endless source of pain for me - this was the only language where my code did not initially work as expected more times than it did. Whereas in Rust it works fine on the first go in 99% of cases once it compiles, which is a huge productivity boost. And quite surprisingly, even writing the code in Rust was faster for me, due to the more reliable autocomplete / inline docs features of my IDE.


I think part of the problem is "developer productivity" is a poorly-defined term that means different things to different people.

To some, it means getting something minimal working and running as quickly as possible, accepting that there will be bugs, and that a comprehensive test suite will have to be written later to suss them all out.

To others (myself included), it means I don't mind so much if the first running version takes a bit longer, if that means the code is a bit more solid and probably has fewer bugs. And on top of that, I won't have to write anywhere near as many tests, because the type system and compiler will ensure that some kinds of bugs just can't happen (not all, but some!).

And I'm sure it means yet other things to other people!


> Python or other dynamic languages

I should have stated that I'm comparing Rust to typed Python (or TypeScript or typed Racket or whatever). Typed Python gives you a type system that's about as good as Rust's, and the same kinds of autocompletion and inline documentation that you would get with Rust, while also freeing you from (1) being forced to type every variable in your program upfront, (2) being forced to manage memory, and (3) having no interactive shell/REPL/Jupyter notebooks - Rust simply can't compete with that.

Your experience would likely have been very different if you were using typed Python.


> Typed Python gives you a type system that's about as good as Rust's

No, it absolutely does not.

Also consider that Python has a type system regardless of whether or not you use typing, and that type system does not change because you've put type annotations on your functions. It does allow you to validate quite a few more things before runtime, of course.


> Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that.

I look at it a little differently: I'm fine with the fact that I'm prototyping my code 10x more slowly (usually the slowdown factor is nowhere near that bad, though; I'd say sub-2x is more common) than in another language because I enjoy the fact that when my code compiles successfully, I know there are a bunch of classes of bugs my code just cannot have, and this wouldn't be the case if I used the so-called "faster development" language.

I also hate writing tests; in a language like Rust, I can get away with writing far fewer tests than in a language like Python, but have similar confidence about the correctness of the code.


> Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that.

Disclaimer: I've sort of bounced off of Rust 3 or so times, and while I've created both long-running services in it and smaller tools, I've mostly had a hard time (not enjoying it at all, feeling like I'm paying a lot in terms of development friction for very little gain, etc.), and if you're the type to write off most posts with "You just don't get it" this would probably just be one more on the pile. I would argue that I do understand the value of Rust, but I take issue with the idea that the cost is worth it in the majority of cases, and I think that there are 80% solutions that work better in practice for most cases.

From personal experience: You could be prototyping your code faster and get performance in simpler ways than dealing with the borrow checker by being able to express allocation patterns and memory usage in better, clearer ways instead and avoid both of the stated problems.

Odin (& Zig and other simpler languages) with access to these types of facilities are just an install away and are considerably easier to learn anyway. In fact, I think you could probably just learn both of them on top of what you're doing in Rust since the time investment is negligible compared to it in the long run.

With regards to the upsides in terms of writing code in a performance-aware manner:

- It's easier to look at a piece of code and confidently say it's not doing any odd or potentially bad things with regards to performance in both Odin and Zig

- Both languages emphasize custom allocators which are a great boon to both application simplicity, flexibility and performance (set up limited memory space temporarily and make sure we can never use more, set up entire arenas that can be reclaimed or reused entirely, segment your resources up in different allocators that can't possibly interfere with eachother and have their own memory space guaranteed, etc.)

- No one can use one-at-a-time constructs like RAII/`Drop` behind your back so you don't have to worry about stupid magic happening when things go out of scope that might completely ruin your cache, etc.

To borrow an argument from Rust proponents, you should be thinking about these things (allocation patterns) anyway and you're doing yourself a disservice by leaving them up to magic or just doing them wrong. If your language can't do what Odin and Zig does (pass them around, and in Odin you can inherit them from the calling scope which coupled with passing them around gives you incredible freedom) then you probably should try one where you can and where the ecosystem is based on that assumption.

My personal experience with first Zig and later Odin is that they've provided the absolute most productive experience I've ever had when it comes to the code that I had to write. I had to write more code because both ecosystems are tiny and I don't really like extra dependencies regardless. Being able to actually write your dependencies yourself but have it be such a productive experience is liberating in so many ways.

Odin is my personal winner in the race between Odin and Zig. It's a very close race but there are some key features in Odin that make it win out in the end:

- There is an implicit `context` parameter primarily used for passing around an allocator, a temp-allocator and a logger that can be implicitly used for calls if you don't specify one. This makes your code less chatty and lets you talk only about the important things in some cases. I still prefer to be explicit about allocators in most plumbing but I'll set `context.allocator` to some appropriate choice for smaller programs in `main` and let it go

- We can have proper tagged unions as errors and the language is built around it. This gives you code that looks and behaves a lot like you'll be used to with `Result` and `Option` in Rust, with the same benefits.

- Errors are just values but the last value in a multiple-value-return function is understood as the error position if needed so we avoid the `if error != nil { ... }` that would otherwise exist if the language wasn't made for this. We can instead use proper error values (that can be tagged unions) and `or_return`, i.e.:

    doing_things :: proc() -> ParsingError {
        parsed_data := parse_config_file(filename) or_return
        ...
    }
If we wanted to inspect the error this would instead be:

    // The zero value for a union is `nil` by default and the language understands this
    ParsingError :: union {
        UnparsableHeader,
        UnparsableBody,
    }

    UnparsableHeader :: struct {
        ...
    }

    UnparsableBody :: struct {
        ...
    }

    doing_things :: proc() {
        parsed_data, parsing_error := parse_config_file(filename)
        // `p in parsing_error` here unpacks the tag of the union
        // Notably there are no actual "constructors" like in Haskell
        // and so a type can be part of many different unions with no syntax changes
        // for checking for it.
        switch p in parsing_error {
        case UnparsableHeader:
            // In this scope we have an `UnparsableHeader`
            function_that_deals_with_unparsable_header(p)
        case UnparsableBody:
            function_that_deals_with_unparsable_body(p)
        }

        ...
    }
- ZVI or "zero-value initialization" means that all values are by default zero-initialized and have to have zero-values. The entire language and ecosystem is built around this idea and it works terrifically to allow you to actually talk only about the things that are important, once again.

P.S. If you want to make games or the like Odin has the absolute best ecosystem of any C alternative or C++ alternative out there, no contest. Largely this is because it ships with tons of game related bindings and also has language features dedicated entirely to dealing with vectors, matrices, etc., and is a joy to use for those things. I'd still put it forward as a winner with regards to most other areas but it really is an unfair race when it comes to games.


Debugging rare crashes and heisenbugs is more frustrating, and in non-safe languages, a chronic problem.

Whereas after you prove the safety of a design once, it stays with you.


It stays with you until you need to change something and find yourself unable to make incremental changes.

And in many use cases people are throwing Rust (and especially async Rust) on problems solved just fine with GC languages so the safety argument doesn’t apply there.


> change something and find yourself unable to make incremental changes

why do you believe this becomes the case with rust code?


The safety argument is actually the reason why you can use Rust in those cases to begin with. If it was C or C++ you simply couldn't use it for things like webservers due to the safety problems inherent to these languages. So Rust creeps into the part of the market that used to be exclusive to GC languages.


What do you think nginx and Apache are written in?


How few severe vulnerabilities and other major defects (memory corruption or crashes) do you think Nginx and Apache have had over the years?


Sort of. Do you want someone that doesn't understand the constraints that likely is creating a bug that will cause crashes? Or do you want to block them until they understand the constraints?


So you use a safe, garbage-collected language like Python, and iterate 5x as fast as Rust. Problem solved. It's 2023 - there are at least a dozen production-quality safe languages.


> and iterate 5x as fast as Rust.

I've been involved in Java, Python, PHP, Scala, C++, Rust, JS projects in my career. I think I'd notice a 5x speed difference in favor of Python if it existed. But I haven't.


You're probably just using Python wrong, then. You can use a Jupyter notebook to incrementally develop and run pieces of code in seconds, and this scales to larger programs. With Rust, you have to re-compile and re-run your entire application every time you want to test your changes. That's a 5x productivity benefit by itself.


You’re seriously suggesting writing a game engine in Python?


You accidentally responded to the wrong comment. I never mentioned a game engine.


This thread is about writing a game engine, so GP didn't "accidentally" respond to the wrong comment. Their question is on-topic.

If your comments aren't relevant to writing a game engine, then they're not relevant to this thread.


> This thread is about writing a game engine

This is false. This "thread" is not "about" anything. The top-level comment was about writing a game engine, and various replies to that thread deviated from that topic to a greater or lesser extent. Nobody has the authority to decide what a thread is "about".

Additionally, the actual article under consideration is about Rust's design in general. That makes my comments more on topic than one about game engines in particular, and so it should be pretty clear that if you're going to assume anything about my comments, then it would not be that they're about game engines.


It doesn't really matter, there doesn't exist a problem space where both Rust and Python are reasonable choices.

Case in point, I once wrote a program to take a 360 degree image and rotate it so that the horizon followed the horizontal line along the middle, and it faced north. I wrote it in python first and running it on a 2k image took on the order of 5 minutes. I rewrote it in rust and it took on the order of 200ms.

Could I iterate in Python faster? Yes, but the end result was useless.


> there doesn't exist a problem space where both Rust and Python are reasonable choices

This thread, and many other threads about Rust, are filled with people arguing the exact opposite - that Rust is a good, productive language for high-level application development. I agree with you, there's relatively little overlap - that's what I'm arguing for!


Both qualify for writing tiny web servers, cli/byte-manipulation scripts, server automation jobs, in-house GUI applications, and other small stuff. Could technically argue that these are a "relatively little overlap" depending on what you do though..


The “beats debugging” part I took as meaning “it is better than spending that day debugging”.

I have fought the ownership rules and lost (replaced references with integer indices into a common vector; ugly stuff, but I was time constrained). But I have seen people spend several weeks debugging a single problem, and that was really soul-crushing.


I think you may be misunderstanding what GP means. It's about spending a day working on issues. You're either doing it before you launch your iteration, or you're doing it after. GP thinks it's better to spend the time before you push the change. From a quality perspective it's hard to see how anyone could disagree with that, but I can certainly see why there would be different preferences from programmers.

I don't personally mind debugging too much, but if your goal is to avoid bugs in your running software, then Rust has some serious advantages. We mainly use TypeScript, which isn't really comparable to Rust, but we do use C when we need performance. We looked into Rust, even did a few PoCs on real-world issues, and ended up in a situation similar to GP's: Rust is great, though a bit "verbose" to write, but its ecosystem is too young to be "boring" enough for us, so we're sticking with C for the time being.

Being able to avoid crashes by doing the work before you push your code is immensely valuable in fault-intolerant systems. We do financial work in C; it cannot fail. So we're still doing a lot of the work up front, and then handling the rest by rigorously testing everything. Because C is mainly used for small performance enhancements, our C programs are small enough that this isn't an issue, but it would be a nightmare with 40,000 lines of C code.


I agree that fast iteration time is valuable, but I don't think this has to hold 100% of the time.

I would much rather bang my head against a compiler for N hours and then finally have something that compiles (and thus be fairly confident it works properly) than have something that compiles and runs immediately, but then later find I have to spend N hours (or, more likely, >N hours) debugging.

Your preferences may differ on this, and that's fine. But in the medium to long term, I find myself much more productive in a language like Rust than, say, Python.


I’m working on an unrelated project that does some stuff similarly to you. I’m at 4k lines right now.

Just wondering, how long did it take you to hit 40k lines? I’m a new Rust developer and it’s taken me ages to get this far.

I totally relate to your experience though. When I finally get my code to compile, it “just works” without crashes. I’ve never felt so confident in my code before.


> how long did it take you to hit 40k lines?

3 years.


Impressive dedication! I hope I can make it that long. The project looks cool and the technical details sound even cooler.

Thanks for the perspective.


>When I finally get my code to compile, it “just works” without crashes. I’ve never felt so confident in my code before.

This desirable state isn't a new idea. I had the same experience with Modula-2 three decades ago: a page or more of compiler errors to clear, then suddenly everything just worked. A very satisfying experience.


I don’t know what you mean by web-scale, but you’d be mistaken if you meant “the multi-threaded services that power giant internet properties”.

If you want extremely low contention and extremely high utilization, you’re doing threading and event-driven simultaneously. There are no easy answers for heavily contended data structures, because you can’t just duplicate data to exploit immutability, even when the alternative is insane complexity, and mistakes cost millions in real time.

There’s a reason why those places scout so heavily for the lock-free/wait-free galaxy brains the minute they finish their PhDs.
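For flavor: the entry-level move in that lock-free style is a compare-and-swap retry loop. This sketch only shows the primitive; the structures those PhDs build (wait-free queues, hazard pointers, etc.) are enormously harder to get right:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

/// Add `delta` to the counter without ever taking a lock: read the current
/// value, attempt to swap in the updated one, and retry if another thread
/// won the race in between. (fetch_add does this in one instruction; the
/// explicit loop is the general pattern for non-trivial updates.)
fn lock_free_add(counter: &AtomicU64, delta: u64) {
    let mut current = counter.load(Ordering::Relaxed);
    loop {
        match counter.compare_exchange_weak(
            current,
            current + delta,
            Ordering::SeqCst,
            Ordering::Relaxed,
        ) {
            Ok(_) => return,
            // Lost the race: `actual` is the fresh value; retry from there.
            Err(actual) => current = actual,
        }
    }
}
```

Under contention every failed exchange burns a retry, which is exactly why the hard part of this field is designing structures where threads rarely collide at all.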


> There was an article on HN a few days ago about this. "Rust has 5 games and 50 game engines".

That's not a serious article. That's a humorous video.

Source: https://youtu.be/TGfQu0bQTKc?t=169


It has some truth to it, still.


Not really. To get to 50 game engines you need some creative accounting. The real joke would be 3 engines and 0 profitable games.


Out of curiosity, why not go all in on Tokio? Make everything a future, including writing to the GPU.

And are you using an ECS based architecture? Do you feel you’d have a different opinion if you were?


As a past active SecondLife user back in the day (circa 15 years ago), and a short-stint OpenSimulator dev, I had been thinking a lot about how much better SecondLife could be if it absorbed modern tech - thanks for doing this! :-) I did a short return to SL recently, and the lagginess of the viewer made me sad.

Is there a mailing list to subscribe to, to learn when the viewer is more generally available for testing? Thanks again!


What’s the server for a metaverse client? Is there a standardized protocol, or a particularly popular one you’re targeting?


It's a client for Second Life or Open Simulator.


Rust is not race-condition free; it does guarantee no data races, though.


This is very interesting. How do you manage latency of events coming over the network?

Do... you... wind up having to set TCP_NODELAY?

•͡˘㇁•͡˘


Embarrassingly, yes, because I can't turn off delayed ACKs from Rust.


I learned about Nagle's algorithm 20 years ago when I helped write a networked game server and had to turn it off so that input packets would be sent quickly. Thank you for your response to my troll, easily a top 5 career highlight.


> If I had to do this in C++, I'd be fighting crashes all the time.

Why? I'd take modern C++ over Rust every day of the week.


> As I've mentioned before, I'm writing a high performance metaverse client.

Why? (Serious question)


Started 3 years ago during COVID, when the metaverse looked attractive. In 3 years, many of these AI applications will face the same questions.


Will they? AI already has adoption; the metaverse is still waiting for meaningful adoption. You could argue that it’s never coming.


What you've just described is basically every networked video game, the majority of which are happily running on C++.

(Plus some increase in content loaded over the network, which does exist in games too, a la runtime mod loading, streaming, etc.)


Yes? Not architecturally different, but with fewer bugs. People are always complaining about bugs in videogames.


Looks great!

Without judgment I must ask, what made you decide to target metaverse specifically? Is it more of a fun challenge, or do you see it having a bright/popular future?


I was really bored during COVID lockdown and needed a hard problem. I may say more about metaverse stuff in another place, but don't want to derail the Rust issues.


Rust is not race-condition free unless the compiler does formal verification like Ada/SPARK, right?

It is data-race free, however.
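Right. A check-then-act split across two separate lock acquisitions is the classic illustration: it compiles fine and contains no data race (every access is synchronized), yet it is still a race condition. A minimal sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Each access is individually synchronized, so there is no data race,
/// but the read-modify-write is split into two critical sections. Two
/// threads can both read the same old value and write back the same
/// result: a lost-update race condition the compiler cannot see.
fn racy_increment(counter: &Mutex<u64>) {
    let current = *counter.lock().unwrap(); // lock #1: read, then unlock
    // ...another thread may increment here, and its update gets lost...
    *counter.lock().unwrap() = current + 1; // lock #2: write stale value + 1
}
```

The fix is to hold one lock across the whole operation (`*counter.lock().unwrap() += 1;`); Rust's guarantee is only that you can't touch the data without *some* lock, not that your locking protocol is logically correct.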


Props on doing this work! That being said is it just me or does the video seem to stutter?


The big WGPU project to improve concurrency wasn't finished then. All the texture loading that's requested from other threads currently goes into a work queue inside Rend3 executed by the refresh thread. Because the application is frantically loading and unloading textures at various resolutions as the camera moves, there's a stutter. There's too much texture content for it all to be in VRAM at high resolution all at once. Vulkan allows concurrent loading of data. WGPU now does. (As of this morning, unless someone finds another blocking bug.) Rend3 next. Then I'll probably have to change something in my code. That's what I mean about problems down in the graphics engine room.

This is the metaverse data overload problem - many creators, little instancing. No art director. No Q/A department. No game polishing. It's quite solvable, though.

Those occasional flashes on screen are the avatar (just a block in this version) moving asynchronously from the camera. That's been fixed.
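The work-queue arrangement described (loader threads enqueue, the refresh thread drains) is roughly an mpsc channel drained with a per-frame budget. A hypothetical sketch of the shape, not Rend3's actual internals:

```rust
use std::sync::mpsc;

/// A unit of work produced by an asset-loading thread, destined for the GPU.
struct TextureUpload {
    id: u32, // stand-in for mesh/texture data
}

/// Drain at most `budget` pending uploads per frame so that GPU uploads
/// can't starve rendering. When this budget (or the single-threaded upload
/// path behind it) becomes the bottleneck, frames stutter - which is why
/// moving uploads to a concurrent Vulkan path matters.
fn drain_uploads(rx: &mpsc::Receiver<TextureUpload>, budget: usize) -> Vec<TextureUpload> {
    let mut batch = Vec::new();
    while batch.len() < budget {
        match rx.try_recv() {
            Ok(upload) => batch.push(upload),
            Err(_) => break, // queue empty (or senders gone): render the frame
        }
    }
    batch
}
```

With concurrent upload support in WGPU, the loader threads can push data into the GPU themselves and this frame-time budget stops being the choke point.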


>It's probably about five people working for a year from being ready.

The trouble is, we actually have tens/hundreds of people, all working on their own. The blessing and curse of open-source development.


aside: I love egui



