
I.e. a commit can complete successfully before the data has actually been persisted? Or just the part about all data being kept in memory as well? If the former, seems like a bad idea, big e-commerce company or no.



The former. I think most serious databases cache and manipulate recently-used data in memory (in addition to maintaining a logical log).

Bad idea or no, it's what they do - performance gains are large. There is an ersatz logical log in the form of application logs, and these have been used to piece together transactional information before.


Yes. As the GP stated, all data added/updated/deleted since the last snapshot will be lost. If your data is that critical you can use AOF with "appendfsync always", which fsyncs every write operation into a log that is replayed on startup (at a speed cost). Setting it to fsync every second instead of always is a good compromise.
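For reference, the relevant redis.conf directives (the values shown are just an example, not a recommendation for every workload):

    # redis.conf -- append-only file durability
    appendonly yes

    # always:   fsync after every write; safest, slowest
    # everysec: fsync once per second; at most ~1s of writes lost on crash
    # no:       let the OS decide when to flush; fastest, least safe
    appendfsync everysec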


innodb_flush_log_at_trx_commit=0 or 2 are very common MySQL performance tweaks that trade a window (about a second) of data vulnerability for speed.
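Roughly what the three values trade off, sketched as a my.cnf excerpt (1 is the default):

    # my.cnf, [mysqld] section -- InnoDB redo-log flush policy
    # 1: write and fsync the log at every commit (full durability, default)
    # 2: write the log at commit, fsync ~once a second; an OS crash or
    #    power loss can drop about a second of committed transactions
    # 0: write and fsync ~once a second; even a mysqld crash can drop ~1s
    innodb_flush_log_at_trx_commit = 2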


Hmm, the thing I'm not really clear on is how this helps performance. The database has to do the same amount of work whether you commit before or after the sync, so why not just make all the commits wait until the sync has happened?


fsync() is a blocking call, which requires confirmation from the disk before your process continues. Roughly, this means that Redis can't do anything else while the fsync() happens. While this isn't any more work (CPU-wise), it is significantly slower because of all the time spent waiting.
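A minimal POSIX C sketch of the difference (the file name and log entry are made up; timings will vary by disk):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* hypothetical append-only log, standing in for the AOF */
        int fd = open("aof.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char *entry = "SET key value\n";
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        write(fd, entry, strlen(entry)); /* fast: lands in the page cache */
        fsync(fd);                       /* blocks until the disk confirms */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("write+fsync: %.0f us\n", us);

        close(fd);
        return 0;
    }

Drop the fsync() and the write() returns at memory speed, since it only copies the data into the kernel's page cache.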


Hmm... as long as the requests are not being handled on the same thread as the one doing the fsync, this shouldn't stop work in the process or make anything appreciably slower.
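For instance, a toy pthreads sketch of that pattern (not Redis's actual code, though its everysec mode does something similar with a background thread; compile with -pthread):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int log_fd; /* shared append-only log */

    /* Background thread: flush roughly once a second. Only this
       thread ever blocks waiting on the disk. */
    static void *fsync_loop(void *arg) {
        (void)arg;
        for (;;) {
            sleep(1);
            fsync(log_fd);
        }
        return NULL;
    }

    int main(void) {
        log_fd = open("aof.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (log_fd < 0) { perror("open"); return 1; }

        pthread_t syncer;
        pthread_create(&syncer, NULL, fsync_loop, NULL);

        /* "Request" thread: write() returns as soon as the data is in
           the page cache, so a commit is acknowledged up to ~1s before
           it is durable -- exactly the trade-off discussed above. */
        for (int i = 0; i < 5; i++) {
            const char *entry = "SET key value\n";
            write(log_fd, entry, strlen(entry));
            usleep(200 * 1000);
        }
        return 0;
    }

The catch is the acknowledgement: a write confirmed to the client before its fsync completes can still be lost in a crash, which is the durability gap this thread is about.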



