There should be a lot of interesting things to say about the information theory of partitions.

The equation \$$H(\sigma \wedge \tau) = H(\sigma) + H(\tau)\$$ only holds for independent partitions, so I wouldn't say the information-theoretic meaning of the meet of partitions is "understood" based on just that.
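To make the additivity concrete, here is a minimal sketch (my own illustration, not from the talk or paper) checking that for two independent partitions of a finite set under the uniform distribution, the entropy of the common refinement is the sum of the entropies. The helper names `partition_entropy` and `common_refinement` are hypothetical:

```python
import math
from itertools import product

def partition_entropy(partition, n):
    # H(P) = -sum over blocks B of (|B|/n) log(|B|/n),
    # the entropy of P under the uniform distribution on an n-element set
    return -sum((len(B) / n) * math.log(len(B) / n) for B in partition)

def common_refinement(sigma, tau):
    # blocks of sigma ∧ tau are the nonempty pairwise intersections of blocks
    return [B & C for B, C in product(sigma, tau) if B & C]

# X = a 2x3 grid; sigma = rows, tau = columns: independent partitions
X = {(i, j) for i in range(2) for j in range(3)}
sigma = [{(i, j) for j in range(3)} for i in range(2)]   # 2 rows
tau   = [{(i, j) for i in range(2)} for j in range(3)]   # 3 columns

n = len(X)
H_meet = partition_entropy(common_refinement(sigma, tau), n)
H_sum  = partition_entropy(sigma, n) + partition_entropy(tau, n)
assert abs(H_meet - H_sum) < 1e-12   # log 6 = log 2 + log 3
```

For dependent partitions the refinement's entropy falls strictly below the sum, which is exactly why additivity alone doesn't pin down the meaning of the meet.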

I'd tackle the overall problem this way. You cited my [Oxford talk about information, entropy and Bayesian networks](math.ucr.edu/home/baez/networks_oxford/networks_entropy.pdf). There I describe the category \$$\mathrm{FinProb}\$$, where the objects are finite sets equipped with probability distributions and the morphisms are stochastic maps. In my paper with Fritz and Leinster we show how to associate an entropy to each morphism, and how to characterize entropy very naturally in these terms: the formula for entropy is not postulated but derived.
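For readers unfamiliar with stochastic maps: one can represent a stochastic map between finite sets as a column-stochastic matrix and push a distribution forward along it. This is a minimal sketch with hypothetical numbers, not anything from the paper:

```python
# A stochastic map f : X -> Y between finite sets can be encoded as a
# matrix M where M[y][x] is the probability of landing at y given input x
# (so each column sums to 1).
def push(M, p):
    # the pushforward distribution: q(y) = sum over x of M[y][x] * p(x)
    return [sum(M[y][x] * p[x] for x in range(len(p))) for y in range(len(M))]

# Hypothetical example: X = {0,1,2}, Y = {0,1}
M = [[1.0, 0.5, 0.0],    # probabilities of y = 0
     [0.0, 0.5, 1.0]]    # probabilities of y = 1
p = [1/3, 1/3, 1/3]      # a distribution on X
q = push(M, p)           # the induced distribution on Y
assert abs(sum(q) - 1) < 1e-12
```

An ordinary function is the special case where every column of \$$M\$$ has a single 1 in it, which is the case relevant to partitions below.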

Here's one way that theory interacts with partitions. Any finite set \$$X\$$ can be equipped with a uniform probability distribution \$$u_X\$$. Any partition \$$P\$$ of \$$X\$$ gives rise to an onto function \$$f : X \to Y\$$, where \$$Y\$$ is the set of parts of \$$P\$$, and then to a stochastic map \$$f : (X,u_X) \to (Y,p)\$$ for a uniquely determined probability distribution \$$p\$$: it simply assigns to each part of the partition its measure. This stochastic map \$$f\$$ has an entropy as given by my paper with Fritz and Leinster - but this is equal to [the entropy of the partition as defined on PlanetMath](http://planetmath.org/entropyofapartition)!
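The construction in this last paragraph can be sketched in a few lines: form the distribution \$$p\$$ that assigns each part its measure under \$$u_X\$$, then take its Shannon entropy, which matches the PlanetMath formula \$$H(P) = -\sum_i \mu(A_i) \log \mu(A_i)\$$. The helper names and the example partition are my own, and I use natural logarithms:

```python
import math
from fractions import Fraction

def pushforward(partition, n):
    # p assigns to each part its measure under the uniform distribution
    # on an n-element set: |B| / n
    return [Fraction(len(B), n) for B in partition]

def entropy(p):
    # Shannon entropy H(p) = -sum_i p_i log p_i (in nats)
    return -sum(float(q) * math.log(float(q)) for q in p if q)

# Hypothetical example: X = {0,...,5}, P has parts {0,1,2}, {3,4}, {5}
X = set(range(6))
P = [{0, 1, 2}, {3, 4}, {5}]
p = pushforward(P, len(X))   # the distribution [1/2, 1/3, 1/6] on the parts
h = entropy(p)               # the entropy of the partition P
```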