
Reading and hearing talks from people building game-server backends, there are many half-insane things being done to keep servers and persistence in check, and everyone still rolls their own custom solution.

Some common scenarios:

- "Game-servers" using "regular" server+DB frameworks; this often works, but seldom in real-time scenarios.

- Realtime game-servers (with in-memory state, sometimes supporting rollback, etc.) that then have persistence bolted on more or less as an afterthought (duplicating state management between client <-> server <-> persistence).

What SpacetimeDB tries to do (and what my current experiment does) is merge the server and DB to move persistence _performance_ concerns into the backend process.

We _rightfully_ moved databases out of server processes in the 90s for easier management and safety, but since then there has been a bit of a resurgence of in-process databases, e.g. SQLite, where the user code lives in a managed language and isn't as likely to cause a corruption issue.

WASM is a great enabler here because it allows the server/DB to sandbox the logic code, even in "unsafe"/realtime languages, away from the more precarious DB parts of the process, so we don't have to risk data safety even as we move back to co-locating server state and persistence data within the same process.
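To make the sandbox boundary concrete, here is a minimal sketch in plain Rust (not SpacetimeDB's actual API, and using an ordinary trait instead of a real WASM runtime): the "guest" game logic only sees a narrow host-call interface, so it cannot reach the storage engine's internals directly. A WASM sandbox enforces the same shape of boundary, but at the machine level.

```rust
use std::collections::HashMap;

/// Host-side storage: only the host process touches this directly.
struct Storage {
    rows: HashMap<u64, String>,
}

/// The narrow interface exposed to sandboxed logic code.
trait HostCalls {
    fn get(&self, key: u64) -> Option<String>;
    fn put(&mut self, key: u64, value: String);
}

impl HostCalls for Storage {
    fn get(&self, key: u64) -> Option<String> {
        self.rows.get(&key).cloned()
    }
    fn put(&mut self, key: u64, value: String) {
        self.rows.insert(key, value);
    }
}

/// "Guest" logic: in the real design this would be compiled to WASM
/// and could only call the host through imported functions like these.
fn guest_tick(host: &mut dyn HostCalls) {
    let hp = host.get(1).and_then(|v| v.parse::<i32>().ok()).unwrap_or(100);
    host.put(1, (hp - 10).to_string());
}

fn main() {
    let mut storage = Storage { rows: HashMap::new() };
    guest_tick(&mut storage);
    println!("{}", storage.get(1).unwrap()); // prints "90"
}
```

The point is that even if `guest_tick` misbehaves, it can only corrupt rows through the sanctioned `get`/`put` calls, never the host's memory.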



I have also experimented in this space but came to the conclusion that it's not good for games. Not necessarily because it's a bad idea but because it's not how games are made. To make a game with something like SpacetimeDB you would need to build a game engine. You'd also need to build out tooling for managing assets for the server.

One thing about SpacetimeDB that I'm not sure about is whether reducers can run concurrently. I would hope so, but it's very possible they can't, and that would be a big regression vs. current game servers.


Agreed. I'm going for a custom engine anyhow in my scenario, so that's fine.

It seems they're built to run serially to achieve reproducible simulations, so my guess would be no. This is one thing my experiment does differently, although formally it might just be a naming difference.
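The reproducibility argument for serial execution can be sketched in a few lines (illustrative only, not SpacetimeDB's internals): if state is purely a fold over an ordered log of reducer calls, replaying that log after a crash rebuilds identical state.

```rust
// Toy reducers over an i64 state; a real system would mutate tables.
#[derive(Clone, Debug, PartialEq)]
enum Reducer {
    Add(i64),
    Mul(i64),
}

/// Deterministic state transition: same input order => same result.
fn apply(state: i64, r: &Reducer) -> i64 {
    match r {
        Reducer::Add(n) => state + n,
        Reducer::Mul(n) => state * n,
    }
}

fn main() {
    let log = vec![Reducer::Add(5), Reducer::Mul(3), Reducer::Add(1)];
    // Live execution and crash-recovery replay fold over the same log.
    let live = log.iter().fold(0, |s, r| apply(s, r));
    let replayed = log.iter().fold(0, |s, r| apply(s, r));
    assert_eq!(live, replayed);
    println!("{}", live); // (0+5)*3+1 = 16
}
```

Concurrent reducers break this guarantee unless their interleaving is itself recorded, which is exactly what makes parallelism awkward here.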

It's not a major issue, though, I think: the stuff I work on could shard work by area, so single areas are single-threaded but the world as a whole runs in parallel (EVE Online has done this for ages).
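A minimal sketch of that area-sharding idea (names and the 2x2 grid are made up for illustration): entities are bucketed by a shard function, each bucket is simulated on its own thread without locks, and only the bucket assignment is shared.

```rust
use std::thread;

const SHARDS: usize = 4;

/// Map an entity's grid cell to an area shard (2x2 grid of areas).
fn shard_for(cell_x: i64, cell_y: i64) -> usize {
    (cell_x.rem_euclid(2) * 2 + cell_y.rem_euclid(2)) as usize
}

fn main() {
    // Partition entities by area up front.
    let entities = vec![(0i64, 0i64), (1, 0), (0, 1), (1, 1), (2, 2)];
    let mut buckets: Vec<Vec<(i64, i64)>> = vec![Vec::new(); SHARDS];
    for e in entities {
        buckets[shard_for(e.0, e.1)].push(e);
    }

    // Each shard simulates its own entities single-threaded; shards
    // run in parallel with no shared mutable state.
    let handles: Vec<_> = buckets
        .into_iter()
        .map(|bucket| thread::spawn(move || bucket.len()))
        .collect();

    let total: usize = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("{total}"); // 5 entities processed across shards
}
```

The hard part this sketch omits is entities crossing area boundaries, which needs a handoff protocol between shards.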


Serial execution makes the most sense from a database perspective because databases work off a transaction log. Parallel execution may require separate transaction logs (basically durable queues) to accomplish sharding, but that introduces a whole new set of issues because tables can only be mutated from a single transaction log. Code that needs to access multiple logs suddenly gets a lot more complex. There was actually a post on HN the other day about this: https://news.ycombinator.com/item?id=43661181
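A toy illustration of that per-shard log problem (in-memory and deliberately simplified; real logs would be durable queues): single-shard writes get one clean ordering, but a logical transaction spanning two logs suddenly needs entries in both plus some coordination protocol.

```rust
/// One append-only log per shard; each log totally orders its own tables.
struct ShardLog {
    entries: Vec<String>,
}

impl ShardLog {
    fn append(&mut self, entry: &str) {
        self.entries.push(entry.to_string());
    }
}

fn main() {
    let mut logs = vec![
        ShardLog { entries: Vec::new() },
        ShardLog { entries: Vec::new() },
    ];

    // Single-shard write: trivial, one log, one ordering.
    logs[0].append("player:1 move");

    // A cross-shard "transaction" needs an entry in BOTH logs plus a
    // protocol (e.g. two-phase commit) to keep them consistent --
    // this is the extra complexity the comment points at.
    let txid = 42;
    logs[0].append(&format!("tx {txid}: debit shard0"));
    logs[1].append(&format!("tx {txid}: credit shard1"));

    println!("{} {}", logs[0].entries.len(), logs[1].entries.len());
}
```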


It's not meant as a regular SQL database with full ACID semantics (even if that is an interesting stretch target).

Rather, the main thing is coupling storage and sharding decisions with the simulation: since both a sharded simulator and a sharded storage system need logic for sharding and time-versioning, it's silly to duplicate that work (or worse, have out-of-sync semantics). In addition, co-locating simulation and storage reduces latency and other issues that come from separating the two.
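The "don't duplicate sharding decisions" point reduces to something very small in code (all names here are hypothetical): the simulator and the storage engine call the same shard-key function, so they can never disagree about which node owns an entity.

```rust
/// The single shared placement decision.
fn shard_key(entity_id: u64, num_shards: u64) -> u64 {
    entity_id % num_shards
}

struct Simulator;
struct StorageEngine;

impl Simulator {
    /// Which shard simulates this entity.
    fn owner(&self, id: u64) -> u64 {
        shard_key(id, 8)
    }
}

impl StorageEngine {
    /// Which partition persists this entity's rows.
    fn partition(&self, id: u64) -> u64 {
        shard_key(id, 8)
    }
}

fn main() {
    let (sim, store) = (Simulator, StorageEngine);
    // Co-located systems share one placement function instead of two
    // implementations that can drift out of sync.
    assert_eq!(sim.owner(123), store.partition(123));
    println!("{}", sim.owner(123)); // 123 % 8 = 3
}
```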

Now, it's entirely possible that I'm barking up the wrong tree, but the fact that many are exploring this design space suggests there is interest and merit, even if only SpacetimeDB and a few others have taken the leap so far.



