
SQLite is incredible. If you are struggling to beat the "one query per second" meme, try the following 2 things:

1. Only use a single connection for all access. Open the database once at startup. SQLite operates in serialized threading mode by default, so the only time you need to lock is when you are trying to obtain the LastInsertRowId or perform explicit transactions across multiple rows. Trying to use a one-connection-per-query approach with SQLite is going to end very badly.

2. Execute one time against a fresh DB: PRAGMA journal_mode=WAL
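For illustration, here is a minimal sketch of both points using Python's stdlib sqlite3 module (the table, function names, and locking discipline are my own assumptions, not from the parent comment):

    import sqlite3
    import threading

    # One connection for the life of the process; check_same_thread=False
    # lets multiple threads share it. SQLite's serialized threading mode
    # makes individual calls thread-safe, but multi-statement sequences
    # (like insert-then-read-rowid) still need an application-level lock.
    conn = sqlite3.connect("app.db", check_same_thread=False)
    conn.execute("PRAGMA journal_mode=WAL")  # persistent: survives reopen

    write_lock = threading.Lock()

    def insert_row(name):
        # Hold the lock across INSERT + lastrowid so another thread's
        # insert cannot slip in between the two.
        with write_lock:
            cur = conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
            conn.commit()
            return cur.lastrowid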

If at this point you are still finding that SQLite is not as fast or faster than SQL Server, MySQL, et al., then I would be very surprised.

I do not think you can persist a row to disk faster with any other traditional SQL technology. SQLite has the lowest access latency that I am aware of. Something about it living in the same process as your business application seems to help a lot.

We support hundreds of simultaneous users in production with 1-10 megs of business state tracked per user in a single SQLite database. It runs fantastically.



You've just moved a bunch of problems to whatever is accessing SQLite. How do you scale out the application server compute if it needs to make transactionally conditional updates?

E.g.:

    with db.transaction():          # hypothetical transaction helper
        if expensive_function(query()):
            update()
(My applications always get much faster when I plug them into Postgres after SQLite. But then I do run the odd sort and GROUP BY, not OLAP, but because they express the computation I want, and Postgres is simply much better at that.)


> If at this point you are still finding that SQLite is not as fast or faster than SQL Server, MySQL, et al., then I would be very surprised.

What about large aggregation queries, which modern DBMSes parallelize?

Does it still scale that well if you have many concurrent read and write transactions (e.g. on the same table)?


We aren't running any reports on our databases like this. I would argue it is a bad practice in general to mix OLTP and OLAP workloads on a single database instance, regardless of the specific technology involved.

If we wanted to run an aggregate that could potentially impact live transactions, we would just copy the SQLite db to another server and perform the analysis there. We have some telemetry services which operate in this fashion. They go out to all of the SQLite databases, make a copy and then run analysis in another process (or on another machine).
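(If it helps, a safe way to take such a copy of a live database is SQLite's online backup API, exposed in Python 3.7+ as Connection.backup; the file names here are invented:)

    import sqlite3

    def snapshot(src_path, dest_path):
        # The backup API yields a consistent copy even while the source
        # database is being written, unlike a raw file copy of a hot DB.
        src = sqlite3.connect(src_path)
        dest = sqlite3.connect(dest_path)
        with dest:
            src.backup(dest)
        dest.close()
        src.close()

    # Heavy aggregates then run against the copy, on another machine if desired.
    snapshot("live.db", "analysis-copy.db")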

I am not aware of any hosted SQL technology which is capable of magically interleaving large aggregate queries with live transactions and not having one or both impacted in some way. At the end of the day, you still have to go to disk on writes, and this must be serialized against reads for basic consistency reasons. After a certain point, this is kinda like trying to beat basic information theory with ever-more-complex compression schemes. I'd rather just accept the fundamental truth of the hardware/OS and have the least amount of overhead possible when engaging with it.


> At the end of the day, you still have to go to disk on writes, and this must be serialized against reads for basic consistency reasons.

No, absolutely not.

That's why modern databases use multi-version concurrency control (MVCC). You can run (multiple) queries on the same table that is updated by multiple transactions at the same time, without one blocking the others (assuming the write transactions don't block each other). Of course they are fighting for I/O, but there is no need to serialize anything.
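(SQLite's WAL mode gives you the reader-facing half of this idea too; here's a small self-contained demo, my own and not from the thread, showing a reader's snapshot surviving a concurrent committed write:)

    import sqlite3

    setup = sqlite3.connect("demo.db", isolation_level=None)  # autocommit
    setup.execute("PRAGMA journal_mode=WAL")
    setup.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
    setup.execute("DELETE FROM t")
    setup.execute("INSERT INTO t VALUES (1)")

    reader = sqlite3.connect("demo.db", isolation_level=None)
    reader.execute("BEGIN")  # snapshot is taken at the first read below
    before = reader.execute("SELECT x FROM t").fetchone()

    writer = sqlite3.connect("demo.db", isolation_level=None)
    writer.execute("UPDATE t SET x = 2")  # commits without waiting on the reader

    after = reader.execute("SELECT x FROM t").fetchone()
    print(before, after)  # (1,) (1,): the reader still sees its snapshot
    reader.execute("COMMIT")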

Mixing OLTP and OLAP is becoming increasingly "normal" these days as the capabilities of the database products and the hardware improve. With modern high-end hardware (hundreds of CPUs, lots of SSDs, large RAM) this actually scales quite nicely.


OLTP databases are optimized for mutable data. OLAP databases are optimized for immutable data. There's a big difference between appropriate data structures for each use case that has little to do with hardware capabilities.

OLAP databases tend to write columns in large blocks and apply sort orders to improve compression. This type of structure works well if you write the data once and read it many times. It's horrendously inefficient for concurrent updates to things like user session contexts. (Or even reading them for that matter.) You are better off using a row store with ACID transactions and relatively small pages.

The dichotomy has been visible for decades and shows no sign of disappearing, because the difference is mostly how you arrange and access data, not so much the hardware used.


"serialized" here doesn't really mean processed in serial, it means "serializable" in the context of database information theory. Databases have special concurrency control requirements in order to create hard guarantees on database consistency. You can process queries in parallel and still have a serializable result, because of transaction coordination. Doing this on one server is much easier than doing this across a cluster of servers.

So in your case, MVCC is what you're talking about, which is not the same level of consistency guarantee as serializable; rather, it is based on snapshot isolation. Some database vendors consider them effectively the same isolation level because the anomalies associated with other common non-serializable isolation levels aren't typically present in most MVCC implementations, but there's a lot more complexity here than you are acknowledging.
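(The textbook anomaly separating the two is write skew, which snapshot isolation permits and serializable forbids. A runnable sketch against Postgres via psycopg2, assuming a local instance with a two-row doctors table; everything here is invented for the sketch, not from the thread:)

    import psycopg2

    # Assumed setup:
    #   CREATE TABLE doctors (name text PRIMARY KEY, on_call bool);
    #   INSERT INTO doctors VALUES ('alice', true), ('bob', true);
    DSN = "dbname=demo"  # hypothetical connection string

    def take_off_call(conn, name):
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM doctors WHERE on_call")
            (n,) = cur.fetchone()
            if n >= 2:  # invariant: someone else must stay on call
                cur.execute("UPDATE doctors SET on_call = false WHERE name = %s",
                            (name,))

    # In Postgres, REPEATABLE READ is implemented as snapshot isolation.
    for level in ("REPEATABLE READ", "SERIALIZABLE"):
        c1 = psycopg2.connect(DSN)
        c2 = psycopg2.connect(DSN)
        with c1.cursor() as cur:  # reset state between runs
            cur.execute("UPDATE doctors SET on_call = true")
        c1.commit()
        c1.set_session(isolation_level=level)
        c2.set_session(isolation_level=level)
        try:
            take_off_call(c1, "alice")  # both read the same snapshot, and
            take_off_call(c2, "bob")    # each thinks the other stays on call
            c1.commit()
            c2.commit()
            print(level, "committed both: zero doctors on call (write skew)")
        except psycopg2.Error as exc:
            print(level, "aborted one transaction:", exc.pgcode)  # 40001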

Mixing OLTP and OLAP workloads on the same database is pretty much always a bad idea. This is why it's common practice to use ETL jobs to move data from an OLTP optimized database like Postgres or MySQL to a separate database for OLAP (which could be another MySQL or PG instance, or could be something like ClickHouse or another columnar database optimized for OLAP). Just because you /can/ do something, doesn't mean you /should/ do something...


There are MVCC systems with serializable, strictly serializable, and even externally consistent transactions. FoundationDB and Spanner are both externally consistent (with geo-replication in Spanner’s case). CockroachDB is serializable, though not strictly serializable. Single-master Postgres can do serializable transactions as well.


I'd be very interested in you providing an example of an MVCC system which is fully serializable. Conventional wisdom is that, while possible, it is prohibitively expensive to make a snapshot isolation system like MVCC fully serializable, and it is explicitly most expensive for analytics/OLAP workloads, because you must keep track of the read set of every transaction. Such a system could exist, then, but it would pay such a performance penalty as to be a painfully bad choice for a workload where OLAP and OLTP are mixed, bringing us back to the point I originally made.

In most cases, snapshot isolation is sufficient, and some database vendors even conflate snapshot isolation with serializable, but they're not the same thing. I'd be hesitant to believe any vendor claim of serializable MVCC without validating it via testing. As we've been shown by Jepsen, database vendors make many claims, some of which are unsubstantiated. Spanner is very cool technology; however, I have personally heard some very interesting claims from folks on the Spanner team that would violate the laws of physics, so again, without a demonstration of their claims, I'd take them with a grain of salt.


Both the FoundationDB docs and Spanner whitepapers are very clear that their definitions of strict serializability match with the conventional academic one. FoundationDB does keep track of the read set of every transaction: there are nodes dedicated to doing that in their architecture. You can even add things to the read set of each transaction without actually reading them to get additional guarantees about ordering (e.g. to implement table-level locks). FoundationDB (and obviously Spanner as well) don’t have any massive penalties for this; FoundationDB clusters can handle millions of operations per second with low single digit read and commit latencies: https://apple.github.io/foundationdb/performance.html.
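(For the curious, the add-to-read-set trick looks roughly like this in FoundationDB's Python bindings; the key names are made up:)

    import fdb

    fdb.api_version(630)
    db = fdb.open()  # assumes a running cluster / default cluster file

    @fdb.transactional
    def guarded_write(tr, lock_key, data_key, value):
        # Put lock_key in this transaction's read conflict set without
        # actually reading it: any concurrent transaction that writes
        # lock_key will force this one to retry, approximating a
        # table-level lock.
        tr.add_read_conflict_key(lock_key)
        tr[data_key] = value

    guarded_write(db, b"/users/lock", b"/users/42", b"alice")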

If your condition for believing these claims is approval from Jepsen (i.e. Kyle Kingsbury), he apparently didn't bother testing FoundationDB because their test suite is "waaaay more rigorous": https://twitter.com/aphyr/status/405017101804396546. In particular, their test suite can produce reproducible tests of any variation in messaging (dropped messages, reordered messages) across their single-threaded nodes, which is extremely useful in narrowing down places where serialization violations can hide. He also seems to believe Spanner's claims: https://www.youtube.com/watch?v=w_zYYF3-iSo

I’m not sure where this “conventional wisdom” about serializability having an unavoidable large performance hit is coming from; the databases I mentioned are very well known in the distributed systems field and not some random vendors making outlandish claims.


There's an enormous gap between something which is slow on SQLite and something which requires ETL into a data warehouse or similar tech (columnar store, etc.).

I mean, at least three orders of magnitude, minimum.

It's just a ludicrous argument. SQLite is fine as a file format, and in very specific dumb CRUD scenarios it's just about OK. But it's not worth sticking with if you need anything interesting over any volume of data, far, far below the point that would warrant moving from an RDBMS to a different DB tech.

Ironically, you're more representative of the "we'll need Hadoop" crowd.


> Ironically, you're more representative of the "we'll need Hadoop" crowd.

That seems an especially odd assertion to make. What is your basis for this comment? I'd suggest it makes clear you know nothing about me.


A defense of SQLite on unsuitability-to-OLAP grounds that dives straight to append-only data pipelines with ETL, when many aggregate and analytic functions can be performed perfectly acceptably on RDBMSes like Postgres (and even MySQL, with care).


> Mixing OLTP and OLAP becomes increasingly "normal"

Just because it's "normal" doesn't mean it's correct. Just because you can doesn't mean you should.

All hail bob1029:

> We aren't running any reports on our databases like this. I would argue it is a bad practice in general to mix OLTP and OLAP workloads on a single database instance, regardless of the specific technology involved.


I think this is specifically because random reads are scaling much better than writes, even though you won't see it on standard "so many MB/s" benchmarks, where read throughput is "just" a few multiples of write throughput.

Persisting a transaction to the database is still (and especially in MVCC): send the data to be written, wait for the write to flush, toggle a metadata bit to mark the write as completed, wait for that bit to flush. That still serializes transaction commits, while reads can complete in parallel as fast as the device can handle them.

Especially now that the reads and writes don't have to share the disk head, it makes sense for random reads to keep on scaling better than writes.


This is actually the opposite of current limitations. Stick a capacitor and some DRAM on the NVMe and you can "instantly" flush to disk, but there's no way to anticipate where the next read will come from and therefore no way to accelerate it.

You'll see modern NVMe disks with sustained writes greatly outpacing reads until the write cache is saturated, at which point what you say is true and reads will greatly outpace writes. But you don't want your disks to ever operate at that threshold.


I think we're seeing the difference between someone who is a programmer and someone who plays a programmer at work. :) Arguing that some modern, complex feature is "better than" a simpler system is crazy talk. Any time I can simplify a system rather than add complexity, I will choose simplicity. Adding in plug-ins and features goes a long way toward vendor lock-in, which prevents your org from being agile when you have to swap systems because you run into limits or have catastrophic failures.


> I am not aware of any hosted SQL technology which is capable of magically interleaving large aggregate queries with live transactions and not having one or both impacted in some way. At the end of the day, you still have to go to disk on writes, and this must be serialized against reads for basic consistency reasons.

I do sometimes wonder if dirty reads are what the business folks actually want.

Not necessarily unconstrained dirty reads. But if it were possible to say, "The statistics in your reports may only be accurate to (say) +/- x%," would that be good enough?

Going really philosophical, might they even make better decisions if they had less precision to work with? There are certainly plenty of studies that suggest that that's basically how it works when people are managing their investments.


My experience is that customers/executives will accept latency but not inaccuracy. In practice this may be the same thing, it just depends how you position it. “Reports are accurate but may be up to 5 minutes out of date” is a very easy sell to a corporate worker who logs in to check a dashboard once a month.

Primary/replica is probably the correct way to solve this. In some places, I have also shunted writes through SQS queues, which in practice protects us from a locking operation in one place impacting other operations in a customer-facing way. I don’t think this is strictly necessary but it is a nice technical guard against the sociological problem of a contractor merging code like that. They don’t feel the pain of locks gone bad because they (per contract) can’t be on call.
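(For concreteness, the queue-shunting pattern might look like this with boto3; the queue URL and the JSON contract are invented for the sketch:)

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/db-writes"

    def enqueue_write(sql, params):
        # The web tier never touches the primary directly; it enqueues.
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody=json.dumps({"sql": sql, "params": params}))

    def drain(apply):
        # A single consumer applies writes in order, so one bad locking
        # operation stalls the queue, not customer-facing requests.
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                body = json.loads(msg["Body"])
                apply(body["sql"], body["params"])
                sqs.delete_message(QueueUrl=QUEUE_URL,
                                   ReceiptHandle=msg["ReceiptHandle"])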


You're absolutely right. But I also find that what customers/executives actually want, and what they say they want, turn out to be different things once you start unpacking them.

And sometimes it's just a matter of framing. Don't say accurate, say, "Accurate to within x%" or "Rounded to the nearest $x" type of thing. But I certainly would never actually pick an argument over it. Sometimes they do know what they want. Other times they really don't, but you still don't get to decide for yourself how the problem is going to be solved.


We effectively do dirty reads when we just go around and copy the databases for purposes of offline analysis. Our use cases for aggregation queries are all tolerant to incomplete recent data. Most of the time we are running analysis 24 hours after whatever event, so we never get into a situation where we can't answer important business questions because data hasn't been flushed to disk yet.

The fact that most of the stuff we care about is time-domain and causal means that we can typically leverage this basic approach. Very rarely does a time-series aggregate query need to be consistent with the OLTP workloads in order to answer a business question.


Are you living on another planet? Are you seriously suggesting people replace SqlServer or MySQL with SQLite?

If you can come to my company and replace our 96-core SqlServer boxes with SQLite I'll pay you any salary you ask for.


I can assure you that I live on the same planet as everyone else posting here.

Whether or not I could perform this miracle depends entirely on your specific use cases. Many people who have this sort of reaction are coming from a place where there is heavy use of the vendor lock-in features such as SSIS and stored procedures.

If you are ultimately just trying to get structured business data to/from disk in a consistent manner and are seeking the lowest latency and highest throughput per request, then SQLite might be what you are looking for.

The specific core counts or other specifications are meaningless. SQLite scales perfectly on a single box, and if you have some good engineers you might even be able to build a clustering protocol at the application layer in order to tie multiple together. At a certain point, writing your own will get cheaper than paying Microsoft for the privilege of using SQL Server.


This is a great answer. The details REALLY matter. One of my best early tech success stories was rewriting a SQL query that took 27 hours into one that took ~5 seconds. This was running on a very large Oracle cluster. They had poured more and more money into hardware and licensing trying to solve this. In the end, it was a matter of turning a cursor-based query into a set-based one.


In some respects, I think the constraints of something like SQLite can focus people's attention on making things work properly rather than throwing hardware at the problem.

I can think of a couple of places I've worked where they had simple problems that could have been solved by some thinking and coding but instead were solved* by more expensive hardware.


This is precisely my favorite part of SQLite. The constraints (aka lack of features) are what make it so compelling. We experienced some serious revelations going down this path. The biggest thing from a devops perspective is that you can push very consolidated releases. There is no need to install anything for SQLite to function.

For instance, we use .NET Core Self-Contained Deployments combined with SQLite. As a result, we ship a zip file containing the dotnet build artifacts to a blank Windows or Linux host and have a working application node within a matter of seconds. The databases are created and migrated automatically by our application logic, and the host is automatically configured using OS interop mechanisms.

So, when you really look at it, the constraints imposed upon us by SQLite encouraged us to completely sidestep the containerization game. Our "container" is just a single platform-specific binary path that has all of its dependencies & data contained within.

Without SQLite, we would need some additional process for every environment that we stand up. This is where the container game starts to come in, and I feel like it's a band-aid for a problem that could have been avoided much further down in the stack (i.e. SQLite vs. MySQL/SQL Server/Postgres). Sure, there are applications where you absolutely must use a hosted solution for one reason or another, but for many (most) others where you do not, it's really the only thing stopping you from having a single-process/binary application distribution that is absolutely trivial to install and troubleshoot.

You literally just zip up the prod bin path and load it on a developer workstation to review damages. 100% of the information you need will be there every time. No trying to log into a SQL server, or losing track of which trace folder is for what issue # and SQL dump. It keeps things very well organized at all levels. We can just launch the production app with --console to see its exact state at the time of the copy operation, and then attach debuggers, etc.


The last time I reduced database server load by 70% by just spending a week tuning indexing strategies and ill-performing queries, nobody thanked me. Sure, a new database server costs way more than a week of my time, but that is completely beside the point.

The point is that I robbed someone of the chance to buy a shiny new computer.


> The last time I reduced database server load by 70% [...] nobody thanked me.

It's crazy. I could maybe understand if there's a time crunch where it's quicker and easier to get more hardware in order to make a sale that'll keep the company alive (which I have experienced once) but that's maybe 1% of the cases.

Anyway, in lieu of their gratitude, I offer my thanks because I appreciate the effort.


That's a fantastic point. Most people in technology these days look at a problem and immediately think things like "more hardware" or "cloud deployment" all to get scalability. Scalability can come in forms other than throwing lots of money at an issue... Oftentimes, money can be saved if one throws more intelligence at the problem. :)


Is SQLite likely to be faster than Postgres? In terms of ease of use / admin overhead I consider them mostly equivalent. I thought the main problem with SQLite was that it was slow with concurrent writers, whereas the "bigger" SQL databases have code that allows concurrent writes.


The issue with SQLite and concurrent writers isn't that it's slow, it's that it just can't do it. WAL mode lets you have as many readers as you want concurrent with a single writer, but it doesn't give you multiple concurrent writers. If you really need concurrent writes, use PostgreSQL or another RDBMS.
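(In practice you also give writers a busy timeout, so a second would-be writer briefly waits instead of failing immediately; a two-line sketch, assuming Python's sqlite3:)

    import sqlite3

    # `timeout` makes a blocked writer retry for up to 5 seconds before
    # raising sqlite3.OperationalError ("database is locked").
    conn = sqlite3.connect("app.db", timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")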

In my experience, SQLite is likely to be faster when you have lots of reading. Being in-process gives SQLite a natural advantage in read-heavy situations.


WAL mode is how you address this problem with SQLite.

See: https://www.sqlite.org/wal.html

"Write transactions are very fast since they only involve writing the content once (versus twice for rollback-journal transactions) and because the writes are all sequential. Further, syncing the content to the disk is not required, as long as the application is willing to sacrifice durability following a power loss or hard reboot."


As long as you don't mind volatile memory issues, that is.


I think there are a lot of places where SQLite can outperform Postgres: read-heavy and latency-critical apps, where the additional network hop is costly, for example.


> come to my company and replace our 96-core SqlServer boxes with SQLite I'll pay you any salary you ask for.

I also had a server with 96 cores, until we realized a developer had inadvertently made a query happen every time a scroll event fired... it was a nice chunk of change saved.


Did you look at the article in question? Expensify uses (built on) SQLite as the RDBMS for their application.

What people are conveniently leaving out is they wrote a serious wrapper around it that makes it very similar to other conventional large scale systems like MSSQL or MySQL: https://bedrockdb.com/


How do you handle access from multiple worker processes? Some languages/frameworks have poor multi-threading performance and must be deployed in a multi-process setup (e.g. Python webapps). Or is that not a good fit for SQLite?


Multiprocess can work, but you may want to develop an intermediary process that is the exclusive owner of the database for performance reasons, and then delegate to it. I.e., you could have:

database.db <=> SQLiteAgent <=> localhost HTTP JSON <=> MyPythonWebApps

This exercise would also encourage development of a concise and generic schema for storing your business data (presumably because changes to the above JSON contract would be time consuming).
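(A minimal sketch of such an agent, using only the stdlib; the JSON contract and names are invented. A plain single-threaded HTTPServer conveniently serializes all database access:)

    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    conn = sqlite3.connect("database.db")
    conn.execute("PRAGMA journal_mode=WAL")

    class SQLiteAgent(BaseHTTPRequestHandler):
        def do_POST(self):
            # Assumed contract: {"sql": "...", "params": [...]}
            length = int(self.headers["Content-Length"])
            body = json.loads(self.rfile.read(length))
            rows = conn.execute(body["sql"], body.get("params", [])).fetchall()
            conn.commit()
            payload = json.dumps(rows).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    # The Python web-app workers POST their queries to this one process.
    HTTPServer(("127.0.0.1", 8080), SQLiteAgent).serve_forever()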


Where are these databases persisted? HA (multi-process)? Failover? Backup? How do you handle containerisation? Some details would be seriously welcome.


Why a single connection? SQLite in WAL mode allows a writer and readers to coexist.



