Today I attended a tutorial by logician Jean-Yves Béziau at the [6th Universal Logic School](http://www.uni-log.org/start6s.html). This issue came up and we seemed to agree with the problematic nature of the way math is being done.

If we think of math as just dealing with mental abstractions, such as labels, symbols, points, sequences, etc., then we can always specify the underlying alphabet or symbol set that we are drawing from. From this point of view, we never have a "set of dogs" but rather a "set of symbols" where the symbols refer to particular dogs. The key thing is that the symbols have a context. So, for example, we need to know, when we deal with Zero, which Zero we're talking about. Now the Zero may be polymorphic - it may be the Zero for the Integers and also for the Reals, and there is, by default, one set of Integers, and one set of Reals. It's the responsibility of the mathematicians to have a coherent system of symbols.
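As an illustration (my own, not from the tutorial), typed programming languages make this polymorphism of Zero explicit: the same written symbol `0` denotes different objects depending on its context, and the language keeps those contexts coherent for us.

```python
# The literal "0" is polymorphic: context determines which Zero we mean.
int_zero = 0       # the Zero of the integers
real_zero = 0.0    # the Zero of the reals (here, floating point)

# Numerically, the two Zeros agree...
assert int_zero == real_zero

# ...but they are symbols drawn from different systems,
# and the type records which system each belongs to.
assert type(int_zero) is int
assert type(real_zero) is float
```

Here the type system plays the role of the "coherent system of symbols" that mathematicians are responsible for: every symbol comes tagged with the context that gives it meaning.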

But in the approach that you are describing, if I understand you correctly, the objects are in the real world. Well, the real world can have infinitely many different Zeros. The zero in {0,1} need not be the same as the zero in {0,1,2}, because the two symbols may be referring to different zeros, or perhaps to the same zero. The responsibility has been pushed out to the real world. But the real world doesn't offer any particular notion of Zero. The real world doesn't offer any labels or any systems. And so if we think about what {0,1} could mean in the real world, well, it could mean so many different things, depending on the context for 1 and the context for 0.
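To sketch the worry in code (again my own illustration, under the assumption that real-world referents are plain objects with no built-in notion of equality): two sets can carry identical labels yet refer to entirely different things, so the label alone settles nothing.

```python
class Referent:
    """A stand-in for a real-world object that a symbol might refer to.

    It carries a label but defines no equality of its own, so two
    referents are "the same" only if they are literally the same object.
    """
    def __init__(self, label):
        self.label = label

    def __repr__(self):
        return self.label

# Two distinct real-world things, both labelled "0":
zero_a = Referent("0")
zero_b = Referent("0")

set_one = {zero_a}   # written "{0}", referring to zero_a
set_two = {zero_b}   # also written "{0}", referring to zero_b

# The two sets look identical on paper...
assert repr(set_one) == repr(set_two)

# ...yet they are different sets, because their elements are
# different referents that merely share a label.
assert set_one != set_two
```

The symbol "{0}" is ambiguous until some system fixes which referent the label "0" picks out, and the real world by itself supplies no such system.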

I wonder whether I'm making myself understood. But at least Jean-Yves Béziau understood me and seemed to agree. My point is that in our minds we can build models - limited, partial models - but in the world there is no such thing, and we are lost.