This reminds me of some Information Theory:

[Here](http://planetmath.org/entropyofapartition) we have a definition of the Shannon entropy of a *partition* (developed further in [this book](http://www.cambridge.org/9780521883894)). The book shows that the finer a partition is, the higher the resulting entropy.
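
Spelling out the definition as I understand it (writing \\(\mu\\) for the probability measure): for a finite partition \\(P = \{A_1, \dots, A_n\}\\) of a probability space \\((X, \mu)\\),

\\[ H(P) = -\sum_{i=1}^{n} \mu(A_i) \log \mu(A_i), \\]

and if \\(Q\\) refines \\(P\\) (every block of \\(P\\) is a union of blocks of \\(Q\\)), then \\(H(Q) \ge H(P)\\).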

And on slide 7 [here](http://math.ucr.edu/home/baez/networks_oxford/networks_entropy.pdf), the Shannon entropy of a finite probability measure \\(p\\) is interpreted as

> How much information you learn, on average, when someone tells you an element \\(x \in X\\), if all you’d known was that it was randomly distributed according to \\(p\\).

So this would quantify how much more you learn by moving to finer partitions.
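
Here is a quick numerical sketch of that entropy gain (Python, with a made-up four-point distribution; the names and numbers are just for illustration):

```python
import math

# Assumed toy example: sample space X = {1, 2, 3, 4} with a made-up measure p
p = {1: 0.4, 2: 0.1, 3: 0.3, 4: 0.2}

def partition_entropy(blocks, p):
    """Shannon entropy of a partition: -sum_B p(B) log2 p(B), with p(B) summed over the block."""
    h = 0.0
    for block in blocks:
        p_block = sum(p[x] for x in block)
        if p_block > 0:
            h -= p_block * math.log2(p_block)
    return h

coarse = [{1, 2}, {3, 4}]        # a coarse partition of X
fine = [{1}, {2}, {3}, {4}]      # a refinement of the coarse partition

h_coarse = partition_entropy(coarse, p)
h_fine = partition_entropy(fine, p)

print(h_coarse)           # 1.0 bit
print(h_fine)             # about 1.85 bits
print(h_fine - h_coarse)  # the extra information gained by refining
```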

I take it that [this](https://math.stackexchange.com/questions/381986/prove-that-it-is-a-random-variable-iff-it-is-constant-on-each-partition) Math.StackExchange question helps with viewing real-valued functions that are constant on the partition blocks as random variables. Problems may arise for infinite sample spaces, though.
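
Concretely, the statement there (as I read it) is: for a finite or countable partition \\(P\\) of \\(X\\), a function \\(f : X \to \mathbb{R}\\) is measurable with respect to the \\(\sigma\\)-algebra generated by \\(P\\) if and only if it is constant on each block of \\(P\\). For uncountable partitions that equivalence can fail, which is where the infinite-sample-space issues show up.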