I'm not using Rails, but I now have several sites using my own little thing: a single docker container where all state + content is in a single sqlite file, and it's very nice to be able to just move that single file around. I love Postgres, but doing the equivalent of that with Postgres is a lot more hassle.
While I'm fine with using SQLite for these things, I would counter that a docker-compose file makes using a db with your app roughly as easy as a sqlite file; the only difference is that you'd have a data directory as a volume mount for the db. PostgreSQL and MySQL/MariaDB in particular are pretty easy to launch via docker/compose with a user/pass configured specifically for the db/app.
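Something like this minimal compose sketch is about all it takes (image tag, credentials, and volume path are placeholders, not from anyone's actual setup):

  services:
    db:
      image: postgres:16
      environment:
        POSTGRES_USER: app        # user/pass configured for the app specifically
        POSTGRES_PASSWORD: secret
        POSTGRES_DB: app
      ports:
        - "5432:5432"
      volumes:
        - ./data:/var/lib/postgresql/data   # the data directory as a volume mount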
If you're comparing to SQLite, sure... if you're comparing to installing and configuring an RDBMS server on a host OS for development work, I'm going to hard disagree. Most services already have an official docker image published, usually by the developers of said service. Getting it running is often as simple as googling "service-name docker-compose example" and running "docker compose up".
And once you do understand docker-compose, it becomes second nature. I'd be willing to state that resolving a merge conflict in source control is more difficult than dealing with docker-compose.
It is until you realize that using SQLite means you don't have to worry about N+1 queries, which actually does make a pretty big difference in Rails code.
Not sure I understand this point, how does SQLite fix the N+1 query problem? Just by having the data co-located with the app and avoiding the round-trip latency hit?
If so, I'd argue you still have N+1 problems, you just won't notice them until N gets a bit larger.
For others, the short-ish answer is that doing hundreds of SQL queries in response to a request (loading nested timeline elements in their case) in SQLite is fine because of the lack of networking/IPC overhead. The nature of N+1 queries is unchanged.
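To make the shape concrete, here's a rough sketch in Node (assuming the better-sqlite3 library and a hypothetical posts/comments schema, neither of which is from the original discussion):

  const Database = require('better-sqlite3');
  const db = new Database('app.db');

  // One query for the parent rows...
  const posts = db.prepare('SELECT * FROM posts LIMIT 100').all();

  // ...then one query per parent: the classic N+1 shape.
  // In-process, each .all() is a function call rather than a network
  // round trip, so the per-query overhead that makes N+1 painful
  // over a wire largely disappears.
  const commentsFor = db.prepare('SELECT * FROM comments WHERE post_id = ?');
  for (const post of posts) {
    post.comments = commentsFor.all(post.id);
  }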
The other half of it, which the sqlite page doesn't mention, is that sqlite lacks the query-engine optimizations that would make one large query faster than many smaller queries. If you had a hypothetical in-process version of postgres with no IPC or networking overhead, you'd still see benefits from performing fewer, larger queries, because the query planner adds some overhead to small queries but makes complex queries faster.
I'm wondering how it would perform if we could compile https://pglite.dev/ as a native library and use it as an in-process Postgres... I know Node folks already use it as a Wasm module, so it shouldn’t be too tricky?
I would suggest that sometimes you actually want N+1 with a collapsed data set (JSON column data): when you have limited request sizes, separate stores/services, and/or otherwise clear primary-key lookups for the secondary data. I've seen these types of queries run faster as separate lookups, especially depending on caching, in environments where the alternative would mean one DBMS connecting to another DBMS for the extra data, which is much slower.
I kind of have to agree. I recently thought I'd use sqlite in Rails for my new project to keep things simple, but then realized it's actually more annoying for my use case: I'd need a persistent volume with the right permissions, and I lose being able to just connect from my local machine to a PG instance running on the server to run some queries.
I'm sure it makes things easier for some use cases but it's not a given.
Yeah, it's about 5x slower than realtime with the current configuration. The good news is that diffusion models and transformers are constantly benefitting from new acceleration techniques. This was a big reason we wanted to take a bet on those architectures.
Edit: If we generate videos at a lower resolution and with fewer diffusion steps than what's used in the public configuration, we are able to generate videos at 20-23 fps, which is just about real time. Here is an example: https://6ammc3n5zzf5ljnz.public.blob.vercel-storage.com/fast...
It's something we're thinking about. Our main focus right now is to make the model as good as it can be. There are still many edge cases and failure modes.
Depending on the setup, it can be a pain to get nested containers working. There is, e.g., Docker-in-Docker, but this generally requires a privileged container, which is often not allowed in CI/CD pipelines.
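For reference, the usual invocation with the official docker:dind image looks something like this; the --privileged flag is exactly what restricted CI runners won't grant:

  docker run --privileged -d --name dind docker:dind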
Only insofar as it's "kinda weird" to run a service where you notice interesting new things that happen in the world, and then write professional, mostly-reliable words about them, to share with your general audience.
In past centuries, this was a fairly common business model, but I understand the concept sounds pretty jarring to modern ears.
JavaScript's version is a little bit different in that you are renaming the destructured variable - it's not a function-specific thing - whereas Swift's version is a property of the function.
e.g.
const user = {id: "123"};
const { id } = user; // This extracts the `id` field
const { id: userId } = user; // This also extracts the `id` field, but renames it to userId
console.log(userId); // logs "123"
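And because it's just destructuring, the same renaming works anywhere a destructuring pattern does, including function parameters (a made-up greet function for illustration):

const greet = ({ id: userId }) => console.log(`Hello, user ${userId}`);
greet(user); // Hello, user 123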