ClickHouse computes and validates checksums for both compressed blocks and the uncompressed data inside them; it also checksums and validates data during network transfers. Additionally, because computations such as merges are deterministic, every replica can validate its checksums against those of the other replicas.
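As a rough illustration of the per-block scheme, here is a minimal sketch, not ClickHouse's actual on-disk format: the `write_block`/`read_block` names, the 16-byte truncated hash, and the header layout are all illustrative assumptions. The idea is simply that every compressed block carries a checksum of its compressed bytes, verified before decompression:

```python
import hashlib
import zlib

def write_block(raw: bytes) -> bytes:
    """Pack one block: checksum of the compressed payload, then length, then payload."""
    compressed = zlib.compress(raw)
    checksum = hashlib.sha256(compressed).digest()[:16]  # 128-bit checksum (illustrative choice)
    return checksum + len(compressed).to_bytes(4, "little") + compressed

def read_block(block: bytes) -> bytes:
    """Verify the checksum before decompressing; refuse corrupted blocks."""
    checksum = block[:16]
    length = int.from_bytes(block[16:20], "little")
    compressed = block[20:20 + length]
    if hashlib.sha256(compressed).digest()[:16] != checksum:
        raise ValueError("checksum mismatch: block is corrupted")
    return zlib.decompress(compressed)

data = b"example row data" * 100
block = write_block(data)
assert read_block(block) == data

# Flip one byte in the compressed payload and the read fails loudly
# instead of returning silently corrupted rows.
corrupted = block[:25] + bytes([block[25] ^ 0xFF]) + block[26:]
try:
    read_block(corrupted)
except ValueError as e:
    print(e)  # checksum mismatch: block is corrupted
```

The same principle applies on the wire: checksum before sending, verify on receipt, so a bad NIC or switch cannot silently feed garbage into a query.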
It is the only reasonable way to work with data, and if someone thinks otherwise, I will happily provide motivating examples.
Fully agree. There are many bad things that can happen to data. Another good property is that ClickHouse is fairly tolerant of torn blocks and handles many failures automatically. That fault tolerance actually makes some corner cases hard to illustrate in demos: ClickHouse repairs them silently, so users never see them.
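To see why a torn block is recoverable rather than fatal, here is a hedged sketch reusing the hypothetical block format above (again, not ClickHouse's real recovery logic). A crash mid-write leaves a truncated final block; a reader that validates checksums can drop the torn tail and keep everything before it:

```python
def read_stream(stream: bytes) -> list[bytes]:
    """Read consecutive blocks, stopping cleanly at the first torn or corrupt one."""
    blocks, pos = [], 0
    while pos + 20 <= len(stream):
        length = int.from_bytes(stream[pos + 16:pos + 20], "little")
        end = pos + 20 + length
        if end > len(stream):
            break  # torn tail: the last write never completed
        try:
            blocks.append(read_block(stream[pos:end]))
        except ValueError:
            break  # corrupt block: stop here; everything earlier is intact
        pos = end
    return blocks

stream = write_block(b"block one") + write_block(b"block two") + write_block(b"block three")
torn = stream[:-7]  # simulate a crash partway through writing the last block
print(read_stream(torn))  # [b'block one', b'block two'] -- the torn tail is dropped
```

ClickHouse's real behavior is more sophisticated than this toy reader; for example, a replica that detects a broken part can refetch it from another replica. But the core principle is the same: checksummed blocks turn silent corruption into a detectable, skippable condition, which is exactly why these failures are hard to show in a demo.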