That’s with their delta-compressed format. Postgres isn’t that. An empty row on its own is 24 bytes of tuple header. In PostGIS, a point is 32 bytes; if you implemented it as two float8 columns instead, that’s still 16 bytes. A timestamp is 8 bytes. Assuming a bigint PK (8 bytes) and perhaps an int tenant id (4 bytes), that’s at best 60 bytes per row, ignoring any alignment padding. Likely more in practice.
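The arithmetic above, spelled out (the column sizes are the standard Postgres ones; the two-float8 point is the cheapest variant from the paragraph):

```python
# Rough per-row storage estimate for a naive Postgres schema.
# Sizes are fixed per-column costs; alignment padding and index
# overhead would only push the real number higher.
tuple_header = 24   # empty heap tuple overhead in Postgres
point = 16          # two float8 columns (a PostGIS point would be 32)
timestamp = 8       # timestamp / timestamptz
pk_bigint = 8       # bigint primary key
tenant_int = 4      # int tenant id

total = tuple_header + point + timestamp + pk_bigint + tenant_int
print(total)  # 60 bytes per row, best case
```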
If they’re instead using something more random as a PK, like a UUIDv4, the page splits alone will bloat that well past the raw storage.
Similarly, presumably they’re indexing at least one other column, so add that to the overhead.
That overhead is still only around a 2x factor over the raw data, so it might still be fine?
Anyway, TimescaleDB adds compression (including delta encodings) to Postgres. Pretty sure it would handle this load just fine, as long as the inserts are batched.
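A minimal sketch of what that setup could look like, assuming a hypothetical `readings` table (the table and column names are illustrative, not from the original):

```sql
-- Hypothetical schema matching the row layout discussed above.
CREATE TABLE readings (
  id        bigint GENERATED ALWAYS AS IDENTITY,
  tenant_id int NOT NULL,
  ts        timestamptz NOT NULL,
  lon       float8 NOT NULL,
  lat       float8 NOT NULL
);

-- Turn it into a TimescaleDB hypertable partitioned on time.
SELECT create_hypertable('readings', 'ts');

-- Enable native compression; TimescaleDB picks per-column encodings
-- (delta-of-delta for timestamps, Gorilla for floats, etc.).
ALTER TABLE readings SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'tenant_id',
  timescaledb.compress_orderby   = 'ts DESC'
);

-- Compress chunks older than a week automatically.
SELECT add_compression_policy('readings', INTERVAL '7 days');
```

With `segmentby` on the tenant id, rows for one tenant compress together, which is where the delta encodings pay off.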