The biggest table contains 30B records. A query that uses a B-tree index completes in tens of microseconds.
EXPLAIN ANALYZE SELECT * FROM table_name WHERE id = [ID_VALUE];
Index Scan using table_name_pkey on table_name (cost=0.71..2.93 rows=1 width=32) (actual time=0.042..0.042 rows=0 loops=1)
  Index Cond: (id = '[ID_VALUE]'::bigint)
Planning Time: 0.056 ms
Execution Time: 0.052 ms
For analytical queries, index-backed point look-ups are not the problem; the problem is minimizing disk I/O for large scans whose column list and predicates still leave a large share of the data to read.
Once you've optimized for the obvious filters like timestamp and primary key, for example by partitioning so you avoid needing an index in the first place, you're left having to aggregate over many gigabytes of data, and an index doesn't help because the query is going to touch virtually every page in the filtered partitions.
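For illustration, a minimal sketch of that situation; the table, column, and partition names (events, created_at, amount) are hypothetical, not from the original post:

-- Range-partitioned by month so the planner can prune whole partitions
-- instead of relying on an index.
CREATE TABLE events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    amount     numeric     NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Partition pruning keeps the scan to one month, but the aggregate still
-- has to read essentially every page of that partition; a B-tree index
-- on id adds nothing here.
SELECT date_trunc('day', created_at) AS day, sum(amount)
FROM events
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01'
GROUP BY 1;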
You can solve some of these problems in Postgres, partitioning among them, but you're still stuck with row-by-row access within each page, performing non-SIMD aggregations in a loop. That approach has a ceiling that column-oriented engines like ClickHouse do not.
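For contrast, a rough sketch of the same hypothetical table on ClickHouse, where column-oriented storage and vectorized execution mean the scan only reads the columns the query touches:

CREATE TABLE events
(
    id         UInt64,
    created_at DateTime,
    amount     Decimal(18, 2)
)
ENGINE = MergeTree
ORDER BY created_at;

-- Only created_at and amount are read from disk, and the sum is computed
-- over column batches rather than row by row.
SELECT toDate(created_at) AS day, sum(amount)
FROM events
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01'
GROUP BY day;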
If the original commenter is talking about saving money and running in Google Cloud, could they be using virtual machines and HDDs, while both you and the parent (who are talking about good speeds, not money) are using physical servers and SSDs?
Not OP, but OP seemed to be talking about analytical queries, which typically summarise data over a period of time. For year-on-year comparisons that means summarising data over two years, so the query would have at least one aggregate function.
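As a concrete (hypothetical) example, a year-on-year comparison would look something like the query below, and it has to scan two full years of data no matter what indexes exist:

-- Year-on-year totals over the same hypothetical events table;
-- the B-tree index on id is of no use for this shape of query.
SELECT extract(year FROM created_at) AS year, sum(amount) AS total
FROM events
WHERE created_at >= '2023-01-01' AND created_at < '2025-01-01'
GROUP BY 1
ORDER BY 1;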