Generally indexes are cheap if built concurrently, so for relatively modest data types, load, and table sizes I often build all the plausible indexes and then look at the planner statistics in production to see which ones the query load actually uses. That only works if you have a read-heavy usage pattern; for write-heavy patterns it can sometimes be better not to have an index at all (one of the things he alludes to in the article about missing foreign key indexes: indexes come with a write cost, so on a write-only table, foreign key indexes are a bad idea). But as with everything, real timing data is the key.
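For example, the per-index scan counters are a quick way to spot indexes the planner never touches; a minimal sketch (the output is whatever your schema contains):

```sql
-- Indexes that are never or rarely used since the statistics were last reset.
-- A large index with a low idx_scan is a candidate for dropping, but the
-- counters are per-server, so check replicas too before trusting them.
SELECT schemaname,
       relname       AS table_name,
       indexrelname  AS index_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM   pg_stat_user_indexes
ORDER  BY idx_scan ASC, pg_relation_size(indexrelid) DESC;
```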
For new table designs in Serious Business Databases I do it in two or three steps: first rehearse on an offline replica with the same data and settings (close, though never 100% identical), then apply the change, usually inside a transaction, on a logical replica or on main production (postgres supports transactional DDL, which is a big help, but be aware that altering a table inside a transaction can hold locks that block tons of other transactions).
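A rough sketch of that transactional dry run (the table and column are hypothetical): a lock_timeout keeps a blocked ALTER from stalling everyone queued behind it, and ROLLBACK turns the whole thing into a rehearsal.

```sql
BEGIN;
SET LOCAL lock_timeout = '2s';  -- give up quickly instead of waiting behind long transactions

-- Hypothetical schema change; swap in your real DDL here.
ALTER TABLE orders
    ADD COLUMN shipped_at timestamptz;

-- Inspect the result, run EXPLAIN on the queries you care about, etc.
ROLLBACK;  -- or COMMIT once you're happy with it
```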
It's important, of course, to construct your alter table / create table / create index statements carefully so that everything runs concurrently and won't block; there are plenty of pitfalls there, but it's definitely doable.
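A couple of the usual non-blocking patterns, as a sketch with made-up names (and note one of those pitfalls: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so it doesn't combine with the transactional dry run above):

```sql
-- Build the index without taking a long write-blocking lock.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);

-- Add a foreign key as NOT VALID: existing rows aren't checked yet, so the
-- exclusive lock is only held briefly, while new writes are still enforced.
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- Validate later; this scans the table under a weaker lock that doesn't block writes.
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_fkey;
```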