I'm having some trouble understanding the conceptual differences between string
diagrams and the "usual" categorical diagrams with morphisms drawn as arrows, especially when it comes to drawing monoidal products.

In other words, how would I draw the following diagram:

![](https://image.ibb.co/jB7krz/Screenshot_20181006_175300.png)

using the notation where morphisms are arrows?
Sure, I can always do this:

![](https://image.ibb.co/h8RdBz/Screenshot_20181006_175314.png)

where I just say the function accepts a product as its input. But I feel this
just raises another question: how did I end up with \\( A \times B \\) in the first place?
A possible answer is that we can specify the product via its universal
property and we somehow just "have" it.
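To make the universal property concrete, here is a minimal sketch in Haskell (my own illustration, not part of the question): for any \\( f : Z \to A \\) and \\( g : Z \to B \\), there is a mediating arrow into the product that commutes with the two projections. The name `pair` is my choice; in the standard library this combinator exists as `(&&&)` from `Control.Arrow`.

```haskell
-- Universal property of the product in Hask (a sketch):
-- given f :: z -> a and g :: z -> b, there is a mediating arrow
-- pair f g :: z -> (a, b) such that the two triangles commute:
--   fst . pair f g == f
--   snd . pair f g == g
pair :: (z -> a) -> (z -> b) -> z -> (a, b)
pair f g z = (f z, g z)
```

Note that `pair f g` is the unique arrow with this property: any `h :: z -> (a, b)` satisfying `fst . h == f` and `snd . h == g` is pointwise equal to `pair f g`.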

But I feel this doesn't get to the gist of the answer. To translate a monoidal
product into the usual notation, we'd need an arrow that accepts two things as input.
But arrows are inherently one-dimensional, and their sources and targets are
zero-dimensional points, so a single arrow has no room for two separate inputs.
I suspect that using two-dimensional shapes (boxes with several wires attached)
in place of one-dimensional arrows could alleviate the problem. Which is exactly
what string diagrams are, in the end!
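One standard way to make "string diagrams exploit the second dimension" precise (this is a well-known fact about monoidal categories, not something from the question itself) is the interchange law: in arrow notation it is an equation that must be stated and used explicitly, while in a string diagram both sides are literally the same picture, one box drawn beside and below another.

```latex
% Interchange law in a monoidal category:
% for f : A \to B,\; f' : B \to C,\; g : D \to E,\; g' : E \to F,
(f' \circ f) \otimes (g' \circ g) \;=\; (f' \otimes g') \circ (f \otimes g)
```

Sliding the \\( f \\)-box past the \\( g \\)-box in the plane is exactly what this equation licenses, which is why the two-dimensional notation absorbs it silently.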

Is this sort of reasoning valid? Where can I read more about this?
Are there higher-dimensional generalizations of string diagrams?

This seems like an important thing to know, but I haven't been able to find good resources.
Category theory is usually introduced as points with arrows between them, so does this mean arrow notation has an inherent limitation?
It took me quite a while of trying to draw products using arrows before I realized this might not be possible.