In enterprise clients you commonly run into situations where the company thinks you have to keep all data forever. Very often this follows a pattern: the application is lightly used at first, then uptake increases over time, and then you hit the slowness issue. They typically expand the DB sizing, but eventually reach the point where archiving is needed. This can be a huge problem when it's an afterthought instead of part of the primary design. All kinds of fun when you have key relationships between different tables.
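To illustrate the "key relationships" pain: archiving a parent table means the child rows have to be moved first (or the foreign keys break). Here's a minimal sketch using SQLite; the table and column names (`orders`, `order_items`, `created_at`, the `_archive` tables) are made up for the example, not from any real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at INTEGER);
CREATE TABLE order_items (id INTEGER PRIMARY KEY,
                          order_id INTEGER REFERENCES orders(id));
-- archive tables mirror the live ones
CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, created_at INTEGER);
CREATE TABLE order_items_archive (id INTEGER PRIMARY KEY, order_id INTEGER);
""")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i) for i in range(10)])
conn.executemany("INSERT INTO order_items VALUES (?, ?)",
                 [(i, i % 10) for i in range(30)])

cutoff = 5
with conn:  # one transaction: copy children, then parents; delete in reverse
    conn.execute(
        "INSERT INTO order_items_archive "
        "SELECT i.id, i.order_id FROM order_items i "
        "JOIN orders o ON o.id = i.order_id WHERE o.created_at < ?",
        (cutoff,))
    conn.execute(
        "INSERT INTO orders_archive "
        "SELECT id, created_at FROM orders WHERE created_at < ?",
        (cutoff,))
    conn.execute(
        "DELETE FROM order_items WHERE order_id IN "
        "(SELECT id FROM orders WHERE created_at < ?)", (cutoff,))
    conn.execute("DELETE FROM orders WHERE created_at < ?", (cutoff,))

live_orders = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
archived_orders = conn.execute(
    "SELECT COUNT(*) FROM orders_archive").fetchone()[0]
live_items = conn.execute("SELECT COUNT(*) FROM order_items").fetchone()[0]
print(live_orders, archived_orders, live_items)  # 5 5 15
```

With two tables this is manageable; with a dozen interrelated tables, getting the copy/delete ordering right after the fact is exactly the "all kinds of fun" above.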
It's more that once it gets to a certain size (say, hundreds of thousands of rows), doing anything with the table is painfully slow, often requiring you to take your application offline for considerable periods. Even deleting rows can take tens of minutes at a time, and it can certainly take a very long time to work out which indexes need to be added and whether they're actually helping.
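One common mitigation for the slow-delete problem is to delete in small batches, so each transaction stays short and locks are released between rounds. A minimal sketch with SQLite (the `events` table and `created_at` column are invented for the example; on a real server DB you'd likely add a sleep between batches too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.executemany("INSERT INTO events (created_at) VALUES (?)",
                 [(t,) for t in range(1000)])
conn.commit()

# Delete "old" rows (created_at < 500) in batches of 100 instead of
# one giant DELETE, committing after each batch.
BATCH = 100
while True:
    cur = conn.execute(
        "DELETE FROM events WHERE id IN "
        "(SELECT id FROM events WHERE created_at < ? LIMIT ?)",
        (500, BATCH))
    conn.commit()
    if cur.rowcount == 0:  # nothing left to delete
        break

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # 500
```

Each batch is quick, so the table is never locked for tens of minutes at a stretch, and the job can be stopped and resumed safely.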
Yes, sometimes the pressure comes from management etc., but more often than not it would be premature optimisation to add archiving up front, so it's a matter of finding a balance and "predicting" at what point the archiving needs to happen.