
That's why you always test your backups. I back up the full sqlite.db every day and test the Litestream replication every week. So far Litestream has been solid.
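For anyone curious, the weekly test is roughly this (a minimal sketch; the replica URL and paths are placeholders for whatever you actually use):

    # restore the latest replica to a scratch path (s3://my-bucket/db is a placeholder)
    litestream restore -o /tmp/restore-test.db s3://my-bucket/db

    # confirm the restored copy opens and is structurally sound
    sqlite3 /tmp/restore-test.db "PRAGMA integrity_check;"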


By the time the TB is restored, it's time to start the next test.

How do you detect restored but bit-flipped data?


I do this in backup testing:

    sqlite3 /path/to/db
    sqlite> PRAGMA integrity_check;

See SQLite3 documentation: https://www.sqlite.org/pragma.html#pragma_integrity_check
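For the bit-flip question specifically: integrity_check only verifies the file's internal structure, so to compare actual content against the source you can run the sqlite3 shell's .sha3sum dot-command on both copies (a sketch, assuming the source and the restored file are both reachable locally; paths are illustrative):

    # hash the logical table content of each copy, not the raw file bytes
    sqlite3 /path/to/db ".sha3sum"
    sqlite3 /path/to/restored.db ".sha3sum"

Identical hashes mean the restored rows match the source.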


Sounds like that will take a while for a TB-scale database, and it checks the database's structural integrity, not the integrity of your data.


Would you care to tell us what your backup and restore policy would be for 1.7 PB of data?


I'm replying to the question of how one would replicate SQLite 3 in production for redundancy. Personally, I'd consider 10 GB the limit for read/write SQLite 3 in production; beyond that I'd switch to PostgreSQL.


That's a huge discrepancy. One half of HN wants to put petabytes in SQLite, while your limit is only 10 GB.


Why not use SQLite's own guidance on where SQLite probably isn't appropriate:

- Client/Server applications (Check)

- High-volume websites (Check)

- Very large datasets (Check)

- High concurrency, particularly for writes (Check)

https://www.sqlite.org/whentouse.html


There's only one person in this thread trying to put petabytes in SQLite. Everyone else is telling them the myriad reasons why it's a terrible idea.



