In the discussion of sparsity of models, an example to concretize the ideas would be helpful.

Aren't attempts to _make_ the models sparse biases introduced by the researchers? I understand the issues of computational complexity, etc., but one has to be careful about compromising models for reasons that have nothing to do with the subject matter.

On the other hand, one could argue that any model is a simplification, and hence compromises part of the truth in order to be understandable or computable by us.

But still, I feel that one needs to be careful about how and why models are compromised.

For instance, there are all kinds of models for the semantics of logic programs that involve negation. The purely declarative interpretation of such logic programs will not in general have a unique minimal model. And answering queries using the semantics of logical consequence is not computationally feasible. In response, there's a whole literature devoted to different ways of choosing "the" model for the logic program, and using this to answer queries. In this pursuit, the complexity of computing the model is an important factor. But what's the point of these variant notions of truth and consequence? If it's because they are of interest in themselves, that's one thing; but if it's because the thing we want is uncomputable, that's another. If the latter, then I'd rather go through the stages of mourning and acceptance, and then move on to other pursuits that are feasible.
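
To make the multiple-minimal-model point concrete (my own toy example, not one from the post): the two-rule program `p :- not q.` / `q :- not p.`, read purely declaratively (each rule as the classical implication "not body implies head"), has two incomparable minimal models, {p} and {q}, so there is no single "the" model to pick. A brute-force sketch in Python that checks this:

```python
from itertools import product

# Toy program with negation, read purely declaratively:
#   p :- not q.      (classically: q or p)
#   q :- not p.      (classically: p or q)
# Each rule is a pair (head, negated_body_atom).
ATOMS = ["p", "q"]
RULES = [("p", "q"), ("q", "p")]

def satisfies(model, rules):
    # A rule (h, b), i.e. "h :- not b", holds classically iff b is true or h is true.
    return all(b in model or h in model for h, b in rules)

# Enumerate every subset of atoms, keep the classical models,
# then keep the minimal ones (those with no proper subset that is also a model).
subsets = [frozenset(a for a, v in zip(ATOMS, bits) if v)
           for bits in product([False, True], repeat=len(ATOMS))]
models = [m for m in subsets if satisfies(m, RULES)]
minimal = [m for m in models if not any(n < m for n in models)]

print([set(m) for m in minimal])   # -> [{'p'}, {'q'}]: two minimal models, no unique one
```

The various proposals in the literature (stable models, well-founded semantics, and so on) are, in effect, different policies for resolving exactly this kind of tie, and they differ in how expensive that resolution is to compute.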