This is sort of the reverse of the "curse of dimensionality", for those interested in machine learning (http://en.wikipedia.org/wiki/Curse_of_dimensionality). From an ML perspective, as you add dimensions to a dataset, the amount of data you need to model it accurately without overfitting (i.e. without memorizing specific details of the sample rather than learning the underlying trends) grows very fast: the combinatorics leave you with extremely sparse coverage of the overall possibility space, even with huge data sets.
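To see how fast coverage collapses, here's a minimal sketch (Python with numpy; the sample size and bin count are arbitrary choices for illustration): discretize each dimension into 10 bins and count how many grid cells a million points actually land in.

    import numpy as np

    rng = np.random.default_rng(0)
    n_points, n_bins = 1_000_000, 10
    for d in (1, 2, 3, 6, 9):
        pts = rng.random((n_points, d))           # uniform points in the unit cube
        cells = (pts * n_bins).astype(int)        # which bin each point lands in
        occupied = len(np.unique(cells, axis=0))  # distinct cells actually hit
        total = n_bins ** d                       # cells in the full grid
        print(f"d={d}: {occupied:,}/{total:,} cells occupied "
              f"({occupied / total:.2%})")

At d=1 through 3 every cell gets hit; by d=9 the grid has a billion cells and a million points can cover at most 0.1% of them.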
The reverse of that phenomenon is that a data set in a high-dimensional space (even 3 dimensions, if each dimension carries more than a few bits of entropy) covers that space so sparsely, even if the data set is large, that most records are nearly unique, and it becomes relatively easy to recover specific details of the sample from the aggregate statistics.
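A minimal sketch of that uniqueness (Python with numpy again; the sizes are illustrative assumptions): 100k records with 3 attributes, each attribute drawn from 1000 possible values (~10 bits of entropy each).

    import numpy as np

    rng = np.random.default_rng(0)
    records = rng.integers(0, 1000, size=(100_000, 3))
    _, counts = np.unique(records, axis=0, return_counts=True)
    unique_fraction = (counts == 1).sum() / len(records)
    print(f"{unique_fraction:.2%} of records have a unique "
          f"attribute combination")  # typically > 99.9%

The attribute space holds a billion combinations, so 100k records almost never collide: knowing a person's 3 attribute values effectively singles out their record.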
The Curse of Dimensionality is a key piece of knowledge to keep in mind anytime you're building some software that searches/sorts/filters/etc. a large dataset and your users keep asking for things like "but I want to search based on <some new thing>!"
Someone has to be the gatekeeper on your dimensions.
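As a back-of-the-envelope illustration (plain Python; the facet names and cardinalities are made up), each searchable dimension multiplies the number of distinct filter combinations your dataset has to cover:

    facets = {"category": 50, "region": 30, "price_band": 10,
              "brand": 200, "rating": 5}  # hypothetical cardinalities
    combos = 1
    for name, cardinality in facets.items():
        combos *= cardinality
        print(f"with {name!r} added: {combos:,} possible filter combinations")

Five modest facets already yield 15 million combinations, nearly all of which will match few or no records.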
edit: Well, I was hoping this might be a new insight, but in fact there's a good 2005 paper exploring that connection in much more detail: http://www.vldb2005.org/program/paper/fri/p901-aggarwal.pdf