
Ontological foundations of collaboration - do we understand each other?

edited October 2015 in Azimuth Forum

Hi People.

I'm new to this place, coming in from a very integral or "holistic" perspective, and very motivated to explore ways that "scientists, mathematicians and engineers" can work together to "save the planet" (and, along the way, of course, human civilization). After looking around a bit, I thought I'd take a crack at posting a new discussion.

Of course it's true that there are major scientific issues associated with maintaining a healthy planet -- and many of you have no doubt heard the phrase "Planetary Boundaries" (if not, there are good essays on the Great Transition Initiative, such as http://www.greattransition.org/publication/bounding-the-planetary-future-why-we-need-a-great-transition -- "Bounding the Planetary Future", by Johan Rockstrom, Professor of Environmental Science at Stockholm University) -- but for me, what might be even more of a human emergency is the inability of the scientific community to fully persuade the political communities of the world that prompt action is needed. Have you seen video clips of the floods in South Carolina, USA? We've got to learn to live within our limits -- within our boundaries -- and if we can't do it, we're going to reap the whirlwind.

[PS -- here's a brand-new article from Rockstrom: http://www.socialeurope.eu/2015/10/leaving-our-children-nothing/ ]

If we want to hang on to this planet, we human beings have to find ways to work together -- effectively, directly, correctly, with substantial influence and impact. But the reality is that human beings at the grassroots level these days tend to bicker or fight with each other about just about everything. The human community isn't just "divided". We're atomized, along almost every possible dimension of difference (there are a number of influential books on this theme, like The Big Sort, by Bill Bishop), and our collective failure at the large-scale task of collective governance puts us all in hot water with the mythical "boiling frog" (https://en.wikipedia.org/wiki/Boiling_frog). We see this problem all over the world -- and we absolutely see it in the gridlock and paralysis of the US Congress, on just about any issue more serious than naming a post office. If you've been watching the US news, this is the number-one topic right this minute: the paralysis of our Congress.

I'm a network builder with a background in algebraic semantics, and I want to work on building models of shared understanding that fully embrace "diversity" and support vital disagreement or discussion on critical issues -- but hold the entire conversation together in "co-creative" and respectful/constructive ways that lead to creative solutions. As regards "apples and oranges" arguments -- I've heard it said recently that a major reason for crazy health-care costs in the USA isn't simply the avarice of health-care providers or pharmaceutical companies -- it also emerges in large part from the sheer fragmentation and internal disconnects of the health-care delivery system. We're living in a world of mismatched taxonomies. It doesn't work.

DIMENSIONALITY

Many years ago, I started working on generalizations of epistemology and category theory, in terms mostly defined by dimensionality. Today, I'm feeling a burst of enthusiasm for this field, thinking that some co-creative work by passionate analysts might provide what I believe could become an amazing "breakthrough" theory in general cognitive and semantic theory -- with big implications for database processing, cognitive science, and any kind of taxonomy or process that involves classification. There might be serious implications for the hard sciences. There might be serious implications for collaboration in a diverse culture. Can we defuse the problems of "Babel" with a new integral vision?

Obviously, we're living in a highly networked world -- where building smooth mappings between cultures and systems -- and branches of science -- looks to me like an increasingly essential process. We gotta get "people" AND "computers" talking to each other with less confusion. In this context, it looks to me like a theorem with significance comparable to Gödel's proof is out there, ripe for the picking. There are currently no widely accepted "industry standards" for ontological fundamentals -- or for the so-called "foundations of mathematics" -- but there absolutely should be. The right theorem might sweep away centuries of cobwebs.

I want to post a few ambitious ideas on the fundamentals of scientific method and the language of process description. It looks to me like we are living in an era of high convergence -- a convergence across a very wide spectrum of interconnected elements -- and I'd like to see that idea tested and grown under a sharp, constructive, and motivated scientific critique.

Whether this possibility goes anywhere here might depend on what kind of response it gets. There's a lot to talk about, and some critically-important scientific and technical issues in play. And there's an opportunity to do something great. But nobody can do this stuff alone. Creativity takes cross-fertilization. So let's see what happens when I post this. I might go get a theme or two from an interesting current discussion started on GooglePlus by John Baez, on the theme "A Moebius strip in the space of concepts" -- at https://plus.google.com/117663015413546257905/posts/jkqH5e48w6L

This Moebius thread gets into two areas I find fascinating: the dimensionality of conceptual structure -- and maybe (???) how something like a topological deformation of this space along the lines of Moebius might "fully integrate" the dimensionality of conceptual form -- "closing the space", or something like that. Personally, I think it's possible -- and could produce an amazing and very significant theorem. I'd love to talk about it here.

Thanks!

Bruce Schuman, Santa Barbara CA

Comments

  • 1.
    edited October 2015

    I don't have a perfect order for these ideas, and this is an evolutionary creative process -- but I have a few principles that I think are foundational for high shared coherence. Of course, these ideas, and anything else I might post, are highly subject to critique and testing --


    FUNDAMENTALS OF GOOD METHOD

    There's a basic method that to me seems essential for avoiding messy confusions in semantic and symbolic representation, and that method involves a couple of principles I think are critically important.

    1) "Reality is continuous, concepts are discrete."
    Reality itself is an "undifferentiated continuum" -- with no boundaries, characteristics, or properties. There are no "objects" in reality, because it is not possible to draw an accurate and realistic boundary around an object. The mystics have told us this forever, and the best concept theorists (see Sowa, cited below) have explained why it's true. The correct and proper way to proceed is to recognize a primary distinction between "reality itself" (with no properties) and the "model of reality" that is created by a human mind operating with symbols and distinctions in some representational medium (like an alphabet/language or a computer or blackboard or a brain made out of neurons).

    This claim is of course controversial and some will say it is obviously wrong -- "don't tell me there's no concrete wall over there -- try driving your car into it and see what happens". But I resist this "naive realism", because I think it confuses and misleads the kind of really accurate and sophisticated epistemology and process/object description language that I'd like to see emerge with near-continuous micro-fine accuracy. Let's keep "reality" and "symbolic descriptions of reality" in two entirely separate compartments with highly accurate mappings between them (i.e., let's make our models isomorphic to experience in the most fine-grained way possible).

    In essence: "reality is a continuum and concepts are discrete". "Reality is an unknowable limit towards which concepts converge"

    But remember: "there are no straight lines in nature". Straight lines are artificial/synthetic creations of the human mind. There are no straight lines in a wetland.

    Human beings build houses out of 4x8 plywood and sheetrock and 2x4s -- all apparently cut to straight lines -- but even these dimensionally crisp objects are only approximately clean-cut. Even synthetic structures are dimensionally messy.
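
    To make point 1) concrete, here is a minimal sketch (my own illustration, in Python, using an arbitrary curve) of what it looks like to approximate a continuous form with straight lines while keeping the error tolerance explicit:

        import math

        # Approximate a continuous curve by straight-line segments and report
        # the worst-case gap between the curve and the approximation.
        def max_error(f, a, b, n_segments, samples_per_seg=100):
            worst = 0.0
            h = (b - a) / n_segments
            for i in range(n_segments):
                x0, x1 = a + i * h, a + (i + 1) * h
                y0, y1 = f(x0), f(x1)
                for j in range(samples_per_seg + 1):
                    x = x0 + (x1 - x0) * j / samples_per_seg
                    linear = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
                    worst = max(worst, abs(f(x) - linear))
            return worst

        for n in (4, 8, 16, 32):
            print(n, max_error(math.sin, 0.0, math.pi, n))

    Each doubling of the segment count shrinks the worst-case error by roughly a factor of four. The straight lines never become the curve, but the mismatch is measured, not ignored.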

    2) Science is building and testing models.
    "Science" is the process of building models of reality -- and then testing those models to confirm they can be trusted. As models grow and are tested, their accuracy tends to converge to tighter degrees of accuracy.

    3) Distinguish continuous reality and abstract/symbolic model
    Once these principles are established and built into our methodology, it then becomes possible to develop crisp mathematical models, which we can strive to perfect.

    4) With this distinction clarified, we can then strive to perfect the accuracy and full detail of our model
    Now we CAN use straight lines, and digital models, and matrix structures, and we are going to have some measure of the error tolerance between our matrix dimensions and the actual unknowable fractal boundaries of reality. Just don't confuse these models with the wetland itself. You're drawing an approximation -- ideally with known error tolerances.
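
    Here is a second minimal sketch of that idea (again my own toy illustration): a region with a curved boundary -- a disk standing in for the wetland -- rasterized onto a square matrix of grid cells, with the discrepancy between the matrix model and the true area reported at each resolution:

        import math

        # Count grid cells whose centers fall inside a disk of the given radius.
        def grid_area(radius, cell):
            n = int(radius / cell) + 1
            count = 0
            for i in range(-n, n + 1):
                for j in range(-n, n + 1):
                    if (i * cell) ** 2 + (j * cell) ** 2 <= radius ** 2:
                        count += 1
            return count * cell * cell

        true_area = math.pi  # unit disk
        for cell in (0.5, 0.25, 0.1, 0.05):
            approx = grid_area(1.0, cell)
            print(f"cell={cell}: area~{approx:.4f}, error={abs(approx - true_area):.4f}")

    The matrix model never equals the continuous region, but its error tolerance is measurable and shrinks as the grid is refined -- which is exactly the discipline I'm arguing for.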

    5) What is identity? What is similarity?
    So, given these postulates -- we can now consider a couple of fundamental definitions on which our entire model-construction world can be safely built. What is the concept of "identity"? In what sense can we say that "two things are the same" -- especially when the entire notion of a "thing" is seen as a conceptual construction and abstraction, and not "a real object" (because there are no boundaries in reality)?

    What is "identity"? Does A = B? What does that mean? A is clearly "not the same thing as" B.

    So -- a couple of lead questions:

    What is identity? What is similarity? What is difference? Can we develop a single algebraic principle or method to answer these questions in general terms and across all cases?
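
    To show what an answer might even look like, here is a minimal sketch (my own, with invented features and tolerances) of "identity" and "similarity" as tolerance-relative predicates over feature vectors -- so that A = B never means absolute identity, only "indistinguishable within epsilon, on these dimensions":

        # Chebyshev distance over the features both descriptions share.
        def distance(a, b):
            return max(abs(a[k] - b[k]) for k in a)

        # "Identity" here means: indistinguishable at tolerance eps.
        def same(a, b, eps):
            return distance(a, b) <= eps

        apple1 = {"mass_g": 150.0, "redness": 0.80}
        apple2 = {"mass_g": 152.0, "redness": 0.78}
        print(same(apple1, apple2, eps=5.0))   # True: "the same" at a coarse tolerance
        print(same(apple1, apple2, eps=0.5))   # False: distinguishable at a fine one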

    Here's a quote, from Linnaeus, generally recognized as the founder of taxonomy:

    "All the real knowledge which we possess depends on
    methods by which we distinguish the similar from the
    dissimilar. The greater the number of natural distinctions
    this method comprehends the clearer becomes our idea of
    things. The more numerous the objects which employ our
    attention the more difficult it becomes to form such a
    method, and the more necessary."

    -- Carolus Linnaeus, Genera Plantarum, 1737

    OBJECTIVE
    I'd say that what we want to do is build an all-encompassing mathematical space based on this principle from Linnaeus -- one that accommodates the tension between continuous variation and finite-state/discrete categories defined by boundary values specified to some known number of decimal places. If we can clarify and generalize this principle, we have universalized the principles of category formation -- and we can begin to help converge the world towards smooth continuously-variable integration....
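
    As a minimal sketch of that tension (boundaries and decimal places invented for illustration): the same continuous measurement lands in different discrete categories depending on where the boundary values are set and how finely they are stated:

        # Map a continuous value into a discrete category via boundary values.
        def categorize(value, boundaries, labels):
            for b, label in zip(boundaries, labels):
                if value < b:
                    return label
            return labels[-1]

        leaf_length_cm = 4.95
        print(categorize(leaf_length_cm, [5.0], ["short", "long"]))  # "short"
        print(categorize(leaf_length_cm, [4.9], ["short", "long"]))  # "long"

    Move the boundary by a millimeter and the category flips. Any universalized principle of category formation has to own that sensitivity explicitly.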

  • 2.
    edited October 2015

    EXPLORATIONS

    Ok, having posted this much, and beginning to explore this space, and starting to get the hang of the local html, I bumped around a few categories and I found some postings that are right up my alley -- particularly on hierarchy in biology, posted by Cameron Smith (http://www.azimuthproject.org/azimuth/show/Cameron+Smith), who describes himself as a theoretical biologist.

    [Image: book cover]

    http://www.azimuthproject.org/azimuth/show/Blog+-+hierarchical+organization+and+biological+evolution+(part+1)

    Now -- this article by Cameron Smith is the kind of thing I have looked at very closely. In fact, years ago I bought one of the seminal books on this theme that he cites in this article -- Hierarchy: Perspectives for Ecological Complexity, by T.F.H. Allen and T.B. Starr: http://www.amazon.com/Hierarchy-Perspectives-Ecological-Complexity-Allen/dp/0226014312

    Cameron Smith goes on to cite a number of other articles and books that influenced me -- primary among them being The Sciences of the Artificial, by Herbert Simon, which was a revelation when it was published in 1969. Cameron also cites Simon's famous parable of the two watchmakers, both of whom built sophisticated watches of about 1,000 parts. One of the watchmakers had a system for compiling stable sub-units ("modules") of about 10 parts each, so that when he was interrupted (UPS at the door) and his current work-piece completely fell apart, he only lost the integrity of one 10-part module. The other watchmaker, equally a master but lacking an instinct for modular integration, lost his entire 1,000-part construction and had to start all over again from the beginning. You know who ended up being more successful.
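
    The arithmetic behind the parable is worth seeing. Here is a minimal sketch (the interruption probability is hypothetical; the formula is the standard expected waiting time for a run of consecutive successes in Bernoulli trials, applied once per module):

        # Expected part-additions to finish the watch when each addition is
        # interrupted with probability p, and an interruption scraps only the
        # current unfinished module.
        def expected_additions(total_parts, module_size, p):
            per_module = ((1 - p) ** -module_size - 1) / p
            return (total_parts // module_size) * per_module

        p = 0.01  # assumed: 1% chance of interruption per part added
        print(f"modular (10-part modules):  {expected_additions(1000, 10, p):,.0f}")
        print(f"monolithic (single module): {expected_additions(1000, 1000, p):,.0f}")

    With these numbers the modular watchmaker needs roughly a thousand part-additions; the monolithic one, over two million -- which is the whole point of stable intermediate forms.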

    [Image: watch parts]

    I was a psychology major who was interested in deep intuition and holistic symbolism, but I was not interested in "unscientific" approaches to psychology. I was following Edmund Husserl, who sounded the call that "philosophy must be made scientific", and I could see why. Philosophy is critically important to human welfare -- but because it is grounded in "mere opinion", its pronouncements are endlessly controversial and ambiguous. Not good enough for a humanity in crisis. Gotta tighten this stuff up, big time!

    So, for me, the challenge was to invent some new scientific models and descriptions that could accommodate the mysteries of deep intuition and "holism", while not compromising the hard edges of "real science". I kept studying the content of psychology and the subject of "mind" -- but all the books I was reading were engineering and math.

    Another very promising text cited by Cameron Smith is Conceptual Mathematics: A First Introduction to Categories

    http://www.shelfari.com/books/1400080/Conceptual-Mathematics-A-First-Introduction-to-Categories?widgetId=172511

    "The idea of a "category"--a sort of mathematical universe--has brought about a remarkable unification and simplification of mathematics. Written by two of the best-known names in categorical logic, Conceptual Mathematics is the first book to apply categories to the most elementary mathematics. It thus serves two purposes: first, to provide a key to mathematics for the general reader or beginning student; and second, to furnish an easy introduction to categories for computer scientists, logicians, physicists, and linguists who want to gain some familiarity with the categorical method without initially committing themselves to extended study."

    Or, maybe stronger, here is the description of a more recent edition on Amazon:

    http://www.amazon.com/Conceptual-Mathematics-First-Introduction-Categories/dp/052171916X

    "In the last 60 years, the use of the notion of category has led to a remarkable unification and simplification of mathematics. Conceptual Mathematics, Second Edition, introduces the concept of 'category' for the learning, development, and use of mathematics, to both beginning students and general readers, and to practicing mathematical scientists. The treatment does not presuppose knowledge of specific fields, but rather develops, from basic definitions, such elementary categories as discrete dynamical systems and directed graphs; the fundamental ideas are then illuminated by examples in these categories."

    Let's talk about "remarkable unification and simplification of mathematics".

    This might fit in perfectly with my simple-minded obsession with basic constructionist clarity and "disambiguation" in the representation of fundamental mathematical objects like A and B.

  • 3.
    This ontology is geared for earth sciences: https://sweet.jpl.nasa.gov/ We applied it here: http://entroplet.com/ref/foundation/D-knowledge_based_enviromental_modeling.pdf
  • 4.
    edited October 2015

    FRACTALS AND THE LINEAR MATRIX

    Now, as regards the "hierarchical perspective" and (Nobel Prize winner) Herbert Simon -- I want to talk a little bit about how/why this kind of "traditional" hierarchy theory tends to violate the basic principles I introduced in the first comment in this thread, on Fundamentals of Good Method.

    I love hierarchy theory and Herbert Simon, and both were absolutely seminal in my life. But as the questing mind continues to drive forward looking for better analytic models, one of the big things we are learning is that analytic models built on straight-line matrix presumptions don't fit reality. Maybe this is a key to Simon's "near decomposability" idea. We need a decomposition/analytic model that can "perfectly" describe organic systems, to a near-continuous degree of accuracy.

    Neat little box categories are very convenient for science -- but reality is continuously variable in every discernible dimension, right on down to some atomic level of granularity. This discovery was basic to the emergence of fractals. "There are no straight lines in nature".
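
    A minimal sketch of why measuring such a form defeats the straight line (using the Koch curve, the textbook case -- at each refinement level every segment is replaced by four segments one-third as long):

        # Measure the Koch curve with ever finer rulers: the measured length
        # keeps growing as (4/3)^k instead of settling down.
        for k in range(8):
            segments = 4 ** k
            ruler = (1 / 3) ** k
            print(f"ruler={ruler:.5f}  measured length={segments * ruler:.3f}")

    A smooth curve converges to a definite length as the ruler shrinks; a fractal boundary does not. That is Mandelbrot's coastline observation, and it's what a "linear matrix" model of an organic system quietly papers over.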

    Concepts are inventions of the human mind used to construct a model

    "Concepts are inventions of the human mind used to construct a model of the world. They package reality into discrete units for further processing, they support powerful mechanisms for doing logic, and they are indispensable for precise, extended chains of reasoning. But concepts and percepts cannot form a perfect model of the world -- they are abstractions that select features that are important for one purpose, but they ignore details and complexities that may be just as important for some other purpose. Leech (1974) noted that "bony structured" concepts form an imperfect match to a fuzzy world. People make black and white distinctions when the world consists of a continuum of shadings.

    For many aspects of the world, a discrete set of concepts is adequate: plants and animals are grouped into species that usually do not interbreed; most substances can quickly be classified as solid, liquid, or gas; the dividing line between a person's body and the rest of the world is fairly sharp. Yet such distinctions break down when pushed to extremes. Many species do interbreed, and the distinctions between variety, subspecies, and species are often arbitrary. Tar, glass, quicksand, and substances under high heat or pressure violate common distinctions between the states of matter. Even the border between the body and the rest of the world is not clear: Are non-living appendages such as hair and fingernails part of the body? If so, what is the status of fingernail polish, hair dye, and makeup? What about fillings in the teeth or metal reinforcements embedded in a bone? Even the borderline between life and death is vague, to the embarrassment of doctors, lawyers, politicians, and clergymen.

    These examples show that concepts are ad hoc: they are defined for specific purposes; they may be generalized beyond their original purposes, but they soon come into conflict with other concepts defined for other purposes. This point is not merely a philosophical puzzle; it is a major problem in designing data-bases and natural language processors. Section 6.3, for example, cited the case of an oil company that could not merge its geological database with its accounting database because the two systems used different definitions of oil well. A database system for keeping track of computer production would have a similar problem: the distinctions between minicomputer and mainframe, between microcomputer and minicomputer, between computer and pocket calculator, are all vague. Attempts to draw a firm boundary have become obsolete as big machines become more compact and small machines adopt features from big ones.

    If an oil company can't give a precise definition of an oil well, a computer firm can't define computer, and doctors can't define death, can anything be defined precisely? The answer is that the only things which can be represented accurately in concepts are man-made structures that once originated as concepts in some person's mind. The rules of chess, for example, are unambiguous and can be programmed on a digital computer. But a chess piece carved out of wood cannot be described completely because it is partly the product of discrete concepts in the mind of the carver and partly the result of continuous processes in growing the wood and applying the chisel to it. The crucial problem is that the world is a continuum and concepts are discrete. For any specific purpose, a discrete model can form a workable approximation to a continuum, but it is always an approximation that must leave out features that may be essential for other purposes.

    Since the world is a continuum and concepts are discrete, a network of concepts can never be a perfect model of the world. At best, it can only be a workable approximation."

    ~ John Sowa, Conceptual Structures, Information Processing in Mind and Machine, Addison-Wesley System Programming Series, 1984

    http://originresearch.com/sd/sd4.cfm

    I ran into the Sowa book in 1984, and at that time, it was the best review of semantic fundamentals I had ever seen. It was very influential on me.

    In recent years, I've been a participant in the "Ontolog" listserv discussion group, mostly led by John Sowa and generally composed of senior engineers and mathematicians from major institutions who grew up as the computer and A.I. world was being invented, so they trace their lineage back to places like Xerox PARC and writers like Marvin Minsky.

    Their direct ambitions these days tend to involve machine translation of natural language, and their general perspective tends to be very empirical and bottom-up, so I'm not entirely in resonance with everything that happens there (I'm very aware of the famous tension in Artificial Intelligence between "the neats" and "the scruffies" -- https://en.wikipedia.org/wiki/Neats_vs._scruffies ), but it's highly educational, and many of these guys are "semantic ontology professionals" with long experience in real-world project development. My ideas on "incommensurate fundamentals" have been largely shaped by discussions there.

    THE DREAM OF NUMERIC TAXONOMY

    Just to quickly tack in an additional theme -- this issue of "continuous reality/digital categories" also has potent implications for the dream of "numeric taxonomy", a subject that fascinated me, and which initially looked like the analytic answer I was looking for. Is there any reason we can't numerically specify the qualifying dimensions/boundary values of taxonomic elements like "genus" and "species"?

    Well, it turns out that this is either very difficult to do in any general way, or simply impossible -- in essence, because the idea is inherently a contradiction in terms. Why? Because any taxonomic structure is defined for a particular purpose (as Sowa notes above), under the controlling specifications of particular context-specific guiding presumptions and expectations and intentions. If a taxonomy becomes widely accepted as an industry standard, it's because it's been negotiated as an acceptable compromise for some working group. It's been "socialized".

    This does NOT mean that we can't numerically specify the boundary value specifications for specific cases. "What is the difference between a dog and a wolf?" In a particular context, for a particular reason, we can answer that question. But we can't write a single mathematical generalization that is true for all cases and purposes. It took a while for the biological/taxonomic/scientific/philosophic community to figure that out -- and this theme was right at the core of the evolutionary discovery process of the philosopher Ludwig Wittgenstein. Taxonomies are always ad hoc. Scary, huh?
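
    A minimal sketch of that situation (the trait and thresholds are invented for illustration): two working groups, two purposes, two boundary values for the same continuous trait -- and the same specimen lands in different taxa:

        specimen = {"skull_length_mm": 228.0}

        # Each working group's taxonomy draws its own numerically precise boundary.
        def classify(animal, wolf_threshold_mm):
            return "wolf" if animal["skull_length_mm"] >= wolf_threshold_mm else "dog"

        print(classify(specimen, wolf_threshold_mm=225.0))  # "wolf" -- one group's purpose
        print(classify(specimen, wolf_threshold_mm=235.0))  # "dog"  -- another group's purpose

    Each boundary is perfectly precise within its context; no single boundary is correct for all contexts.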

  • 5.
    edited October 2015

    Re: https://forum.azimuthproject.org/discussion/comment/14943/#Comment_14943

    Dear Paul - WebHubTel - thanks for the links.

    I'm looking at the JPL graphs, in particular https://sweet.jpl.nasa.gov/graph?domain=Process

    [Image: SWEET ontology graph for the "Process" domain]

    And I've scanned the longer PDF on environmental modeling, at

    http://entroplet.com/ref/foundation/D-knowledge_based_enviromental_modeling.pdf

    Abstract: This paper describes a semantic web architecture based on patterns and logical archetypal building-blocks well suited for comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for terrestrial and amphibious vehicles. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through the SWEET ontology and a domain specific language) and levels of abstraction via integrated reasoning modules.

    I'd say the issues and intentions in both of these links are directly relevant to the points I was making above about process description.


    I'll look more carefully at both examples. Regarding the PDF, I'm trying to understand what is meant by the phrase "Dynamic Context Semantic Web Server", supposing that in some sense, the server or system is adaptive to particular context-specific conditions. If that's true, I'd like to understand how it works.

    Regarding the JPL graphs -- at a first glance, those look like an observation-based bundle of concepts presumed to be relevant in some way to a general theme ("domain"), and gathered together on the basis of somebody's experience and opinion, or maybe some extended testing by working groups, etc.

    Any of these individual topics is then subject to further drill-down to higher levels of specificity.

    From the point of view of method -- I might want to see this same "list" of elements organized in a strict linear matrix or hierarchy. The bottom-up approach is empirical and gathers "elements that are observed in a particular domain" -- but does not impose an ordering over those elements. I'd say that's a natural tension, and neither a "strength" nor a "weakness", but simply a "property" of this kind of categorization -- a property with various implications and consequences. Also -- I want to understand what is meant by the "centering" of this graph -- since centering seems to me to be a very significant mathematical process in a context of collaboration or optimal cross-correlation/balancing/compromise (and PS -- it might also be highly significant for the fundamental concept of "azimuth").

    In very broad social-transition terms, I made a 7-level taxonomic system a couple of years ago called "Pattern of the Whole", based on some popular ways to parse the general space of human experience, exploring the tension between a rigid top-down taxonomic categorization -- like the Dewey Decimal System for libraries -- versus the free-form "tag" (or attribute) approach that breaks or can override the rigid boundaries of classification.

    http://networknation.net/pattern.cfm

    Here's a drill-down search on this framework: http://goo.gl/n6sndJ

    [Image: "Pattern of the Whole" drill-down screenshot]

  • 6.

    A context is the enclosing environment for the model. It's not much deeper in meaning than that. So if you are interested in building a wind turbine, you could invoke a context model of wind-speed PDFs (probability density functions) to evaluate the statistical efficiency of the turbine over time.

    http://en.wikipedia.org/wiki/Context_model

    I call it a dynamic context server because it serves up models on the fly, constructing them from algorithms instead of from static files.
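
    As a minimal sketch of the idea (the registry, names, and parameters here are illustrative, not the actual server; the Weibull distribution is a standard model for wind speed):

        import math

        # Build a wind-speed PDF algorithmically rather than loading a static file.
        def weibull_pdf(k, lam):
            return lambda v: (k / lam) * (v / lam) ** (k - 1) * math.exp(-((v / lam) ** k))

        # "Context server": each context maps to a model factory, invoked on the fly.
        CONTEXT_FACTORIES = {
            "wind_speed": lambda k=2.0, lam=8.0: weibull_pdf(k, lam),
        }

        def serve_model(context, **params):
            return CONTEXT_FACTORIES[context](**params)

        pdf = serve_model("wind_speed", k=2.0, lam=8.0)
        # Riemann-sum estimate of the mean wind speed under this context model:
        mean_v = sum(v * pdf(v) * 0.1 for v in (i * 0.1 for i in range(1, 400)))
        print(f"mean wind speed ~ {mean_v:.2f} m/s")  # ~7.09 = lam * Gamma(1 + 1/k)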

    "Regarding the JPL graphs -- at a first glance, those look like an observation-based bundle of concepts presumed to be relevant in some way to a general theme ("domain"), and gathered together on the basis of somebody's experience and opinion, or maybe some extended testing by working groups, etc."

    You are right. The SWEET ontology was the brainchild of one person, Robert Raskin of NASA JPL, as an outgrowth of earth-sciences working groups he was involved with. I was going to work with him on a project, but he sadly passed away the week after the startup meeting. One of his JPL colleagues and I carried on with the work, applying the SWEET ontology as best we could, but we will never know what kind of interesting direction we could have taken it.

  • 7.
    edited October 2015

    Re: https://forum.azimuthproject.org/discussion/comment/14945/#Comment_14945

    FRACTALS AND THE LINEAR MATRIX

    I want to keep looking at this issue of "organic mathematics" -- and the idea that "there are no straight lines in nature".

    And on that theme, a quick Google search finds a book that exploded that thought into my brain: The Beauty of Fractals, which I discovered in 1986 -- I rode my bike 20 miles to scare up the $30 I needed to buy that amazing book immediately. This lead quote is a bit wild, but it's interesting and challenging. The quote's author is a famous European architect with a strong drive towards organic design.

    https://books.google.com/books/about/The_Beauty_of_Fractals.html?id=aIzsCAAAQBAJ&source=kp_cover&hl=en

    "In 1953 I realized that the straight line leads to the downfall of mankind. But the straight line has become an absolute tyranny. The straight line is something cowardly drawn with a rule, without thought or feeling; it is the line which does not exist in nature. And that line is the rotten foundation of our doomed civilization. Even if there are places where it is recognized that this line is rapidly leading to perdition, its course continues to be plotted . . . Any design undertaken with the straight line will be stillborn. Today we are witnessing the triumph of rationalist knowhow and yet, at the same time, we find ourselves confronted with emptiness. An esthetic void, desert of uniformity, criminal sterility, loss of creative power. Even creativity is prefabricated. We have become impotent. We are no longer able to create. That is our real illiteracy."

    ~ Friedensreich Hundertwasser

    From the book:
    "Fractals are all around us, in the shape of a mountain range or in the windings of a coast line. Like cloud formations and flickering fires some fractals under go never-ending changes while others, like trees or our own vascular systems, retain the structure they acquired in their development. To non-scientists it may seem odd that such familiar things have recently become the focus of intense research. But familiarity is not enough to ensure that scientists have the tools for an adequate understanding."
  • 8.
    edited October 2015

    "HILBERT'S PROBLEMS" FOR 21st CENTURY MATHEMATICS AND SYSTEM SCIENCE

    The world is undergoing an amazing period of transition. If there is any one single way to describe this change -- any one dimension along which it can be characterized -- it is a shift in the organization of civilization from a network of loosely interconnected, semi-independent nations to a one-world integral planetary civilization. This change is obvious in a number of big, clear-cut global issues like climate change and economic interdependence, and it is becoming increasingly evident in any number of other issues.

    This change presents tremendous challenges for scientists working to develop systematic models supporting national and international governance and collaborative approaches to inter-cultural and inter-demographic issues. This period of transition is dangerous, in many ways. A failure or inability to respond to the complexities of interdependence could have devastating consequences in economics and climate. But modern system science may not be fully ready to accommodate the demands of managing this emerging new interdependent global world.

    In this note, I want to begin describing a series of technical issues that I believe are critically important. If there are solutions out there, I'd like to know about them.

    I'll begin simply with some notes in a draft format, and given some time, do what I can to grow and refine this list. Many of these themes are overlapping, and might be facets or implications of one inclusive "integral theorem" that combines mereology (part/whole relationships) with many other facets of mathematics, including taxonomy, semantic ontology and decision science. The "relation of the whole to the part" is related to -- and might be "isomorphic with" -- the relation of the infinite with the infinitesimal across "all possible" levels of scale. We need to define a model in these basic dimensions that is applicable to the "homeostatic balancing" of local environments in the context of the (global) whole.

    • The taxonomy of issues
      What is an "issue"? In an absolute definition, we might say that "an issue is any concern or division-point where two or more people, or two or more groups, or two or more nations disagree." In a world with thousands or millions of interdependent issues crashing into one another in undefined or badly defined ways, what hope is there to resolve them? Presuming the emergence of some idealized form of democracy and collaborative governance, how can we use regional GIS-based parsing to locate and resolve high-pressure issues and social concerns in an interdependent global matrix that ranges across all levels of jurisdictional scale?
      • We must locate issues in an algebraic computer space. How do we define the boundaries and semantic structure of an issue, so we can clearly discern "what is this issue and what is not this issue?"
      • Interdependent simultaneity -- "everything is happening at once" -- so we must process, regulate, or balance issues with respect to this simultaneity
      • Proportional/balanced or "optimal" resolution -- how are issues weighted? Can weighting or motivation in a local context be defined "with respect to the whole", and does this approach define an absolute criterion?

    • The linearization of qualitative dimensionality
      Science has established its beachhead and frontier in the indisputable realm of linear/quantitative dimensionality. But a high proportion of human thought and experience is defined in qualitative dimensionality. Our politics and our culture and our private lives are conducted and described in an inherently ambiguous semantics that "currently prevailing science" tends to dismiss or regard as fundamentally unresolvable. It's understandable why this is true, of course -- but it's a problem that should be attacked and solved.
      • Emerging new models
      • Clarifying the algebra of abstraction
      • Computer-based representation of conceptual space in terms of a universal primitive. All concepts are "constructed from distinctions", and abstractions are nested distinctions. All these distinctions can be defined in dimensions. This recursive cascade can define any qualitative abstraction.

    • Whole systems
      The world -- indeed, the "universe" -- actually provides, and simply is, the context and framework for all human experience and interaction. But prevailing cultural assumptions remain myopic, local and short-sighted. People don't see the whole, don't see their lives as positioned within the whole, and don't take responsibility for the whole.
      • Ethics defined within and by wholeness
      • Justice defined by balance within wholeness
      • Mereology - the relation of parts and wholes - the concept of "holon" -- "a part that is also a whole" (Arthur Koestler)

    • Integral foundations of mathematics
      How can we map the continuity of reality into discrete/digital finite-state space without distortion? Is there a single container or framework that can map or contain the full dimensionality of conceptual space?

    • Industry standards in the foundations of logic
      Can we all somehow learn how to "think the same way" about absolute fundamentals? Can there be some kind of fundamental standards -- maybe something like Mathematica -- where fundamental concepts and constructions become the common ground of analysts anywhere? How many different ways are there to define the unit circle or a row vector? Can we follow a computer-science-based "constructivist" approach to definitions and build our systems out of the same kind of alphabet blocks? Computer science has established many broad industry standards. Does it make sense to extend some of these definitions to fundamentals of logic? Is that even possible? Can every "abstract" definition of an algebraic object be instantiated in terms of "bits" in a digital machine space?

    • The algebraic representation and integration of law
      Law in general seems to involve boundary values. Stay within the boundaries, you are legal. Go outside the boundaries, you are not legal.

    • High-dimensional (infinite dimensional) alliances - infinite dimensional issue-negotiation space
      The future of the world depends on super-sophisticated negotiation and alliance development in highly nuanced ways. General principles should not be applied to local instances in inaccurate ways. We must fluidly map the integration of the whole to the local point without imposing broad general categories on fine-grained actual/real/local contexts.
      • Network-based decision science - global/local balancing
        In a collective decision context that is increasingly global, and will become "absolutely global", how can we make informed/enlightened collective decisions -- on global trade agreements, for example -- such that global policies can simultaneously affect "the whole" and at the same time remain balanced at the local point?

    • Cascaded absolute/relative coordinate frames
      "The whole" is a network of cascaded coordinate frames holding all relativity in a single context linked through a single common "highest" origin or common zero-point. Every subset or sub-region of the whole has its own "relatively local" coordinate frame with its own origin. The whole defines "the absolute" and is the context of "the absolute one", and every other regionalized frame is relative within its own subset context, operating on the basis of "its own local center-point", but not establishing its balance "with respect to the whole". These coordinate frames are linearly cascaded across descending levels of scale like fractals



    Comment Source:<h1>"HILBERT'S PROBLEMS" FOR 21st CENTURY MATHEMATICS AND SYSTEM SCIENCE</h1> The world is undergoing an amazing period of transition. If there is any one single way to describe this change -- any one dimension along which it can be characterized -- it is a shift in the organization of civilization from a network of loosely interconnected semi-independent nations to a one-world integral planetary civilization. This change is obviously the case in a number of big clear-cut global issues like climate change and economic interdependence, and becoming increasingly evident in any number of other issues. This change presents tremendous challenges for scientists working to develop systematic models supporting national and international governance and collaborative approaches to inter-cultural and inter-demographic issues. This period of transition is dangerous, in many ways. A failure or inability to respond to the complexities of interdependence could have devastating consequences in economics and climate. But modern system science may not be fully ready to accommodate the demands of managing this emerging new interdependent global world. In this note, I want to begin describing a series of technical issues that I believe are critically important. If there are solutions out there, I'd like to know about them. I'll begin simply with some notes in a draft format, and given some time, do what I can to grow and refine this list. Many of these themes are overlapping, and might be facets or implications of one inclusive "integral theorem" that combines mereology (part/whole relationships) with many other facets of mathematics, including taxonomy, semantic ontology and decision science. The "relation of the whole to the part" is related to -- and might be "isomorphic with" -- the relation of the infinite with the infinitesimal across "all possible" levels of scale. We need to define a model in these basic dimensions that is applicable to the "homeostatic balancing" to local environments in the context of the (global) whole. <ul> <li> <b>The taxonomy of issues</b> <br> What is an "issue"? In an absolute definition, we might say that "an issue is any concern or division-point where two or more people, or two or more groups, or two or more nations disagree." In a world with thousands or millions of interdependent issues crashing into one another in undefined or badly defined ways, what hope is there to resolve them? Presuming the emergence of some idealized form of democracy and collaborative governance, how can we use regional GIS-based parsing to locate and resolve high-pressure issues and social concerns in an interdependent global matrix that ranges across all levels of jurisdictional scale? <ul> <li> We must locate issues in an algebraic computer space. How do we define the boundaries and semantic structure of an issue, so we can clearly discern "what is this issue and what is not this issue?" <li> Interdependent simultaneity -- "everything is happening at once" -- and we must process or regulate or balance it with respect to this simultaneity <li> Proportional/balanced or "optimal" resolution -- how are issues weighted? Can weighting or motivation in a local context be defined "with respect to the whole", and does this approach define an absolute criteria? </ul> <br> <li> <b>The linearization of qualitative dimensionality</b> <br> Science has established its beachhead and frontier in the indisputable realm of linear/quantitative dimensionality. 
But a high proportion of human thought and experience is defined in qualitative dimensionality. Our politics and our culture and our private lives are conducted and described in an inherently ambiguous semantics that "currently prevailing science" tends to dismiss or regard as fundamentally unresolvable. It's understandable why this is true, of course -- but it's a problem that should be attacked and solved. <ul> <li> Emerging new models <li> Clarifying the algebra of abstraction <li> Computer-based representation of conceptual space in terms of a universal primitive. All concepts are "constructed from distinctions", and abstractions are nested distinctions. All these distinctions can be defined in dimensions. This recursive cascade can define any qualitative abstraction. </ul> <br> <li> <b>Whole systems</b> <br /> The world -- indeed, the "universe" -- in fact and actually provide and simply are the context and framework for all human experience and interaction. But prevailing cultural assumptions remain myopic, local and short-sighted. People don't see the whole, don't see their lives as positioned within the whole, and don't take responsibility for the whole. <ul> <li> Ethics defined within and by wholeness <li> Justice defined by balance within wholeness <li> Mereology - the relation of parts and wholes - the concept of "holon" -- "a part that is also a whole" (Arthur Koestler) </ul> <br> <li> <b>Integral foundations of mathematics</b> <br /> How can we map the continuity of reality into discrete/digital finite-state space without distortion. Is there a single container or framework that can map or contain the full dimensionality of conceptual space? <br><br> <li> <b>Industry standards in the foundations of logic</b> <br /> Can we all somehow learn how to "think the same way" about absolute fundamentals? Can there be some kind of fundamental standards -- maybe something like Mathematica -- where fundamental concepts and constructions become the common ground of analysts anywhere? How many different ways are there to define the unit circle or a row vector? Can we follow a computer-science-based "constructivist" approach to definitions and build our systems out of the same kind of alphabet blocks? Computer science has established many broad industry standards. Does it make sense to extend some of these definitions to fundamentals of logic? Is that even possible? Can every "abstract" definition of an algebraic object be instantiated in terms of "bits" in a digital machine space? <br><br> <li> <b>The algebraic representation and integration of law</b> <br /> Law in general seems to involve boundaries values. Stay within the boundaries, you are legal. Go outside the boundaries, you are not legal. <br><br> <li> <b>High-dimensional (infinite dimensional) alliances - infinite dimensional issue-negotiation space</b> <br /> The future of the world depends on super-sophisticated negotiation and alliance development in highly nuanced ways. General principles should not be applied to local instances in inaccurate ways. 
We must fluidly map the integration of the whole to the local point without imposing broad general categories on fine-grained actual/real/local contexts <ul> <li> Network-based decision science - global/local balancing <br /> In a collective decision context that is increasingly global, and will become "absolutely global", how can we make informed/enlightened collective decisions -- on global trade agreements, for example -- such that global policies can simultaneously affect "the whole" and at the same time remain balanced at the local point? </ul> <br> <li> <b>Cascaded absolute/relative coordinate frames</b> <br> "The whole" is a network of cascaded coordinate frames holding all relativity in a single context linked through a single common "highest" origin or common zero-point. Every subset or sub-region of the whole has its own "relatively local" coordinate frame with its own origin. The whole defines "the absolute" and is the context of "the absolute one", and every other regionalized frame is relative within its own subset context, operating on the basis of "its own local center-point", but not establishing its balance "with respect to the whole". These coordinate frames are linearly cascaded across descending levels of scale like fractals </ul> <br> <br>
  • 9.
    edited October 2015

    Re: https://forum.azimuthproject.org/discussion/comment/14947/#Comment_14947

    Thanks for the reply on "context", Paul. Your definition and specification is highly relevant to the broader "general systems" approach that is emerging for me. I think the big challenge today is to embed "all local or relative contexts" into one single integral "whole systems" context, that defines the dimensionality and borders/boundaries that "interconnect all local contexts".

    Is this a "fantastically complicated" idea? Or "impossibly complex?" Maybe yes. If it is feasible, it will demand some kind of algebraic generalization -- so that a very specific context definition, such as you suggest, not only "fits into" a larger (and possibly "absolute") more inclusive context. To make something like this work -- what would be needed? Maybe some general standards on creating taxonomies like the system created at JPL that you worked with. Today, it seems that ontologies of this type depend on negotiation among some industry group in a position to establish standards for a specific domain. But it's true today that we are headed into "one global domain" -- and for that to make sense, we need to generalize the basics of ontology and taxonomy in ways that define a universal industry standard for collaborating agencies and projects.

    I don't think it would make sense or be workable to attempt defining each of these terms in some rigid way that is supposedly "applicable to all contexts". So how do we generalize the taxonomic issue so that definitions "travel well" across the borders of immediate/local context?

    I wonder if each of the terms that is presented in this image could simply be "drilled down" to any number of higher degrees of specification? If so, we'd have to preserve the adaptability of each definition for its application in a specific local context.

    [image: http://sharedpurpose.net/groupgraphics/jplgraph2.png]

  • 10.
    edited October 2015

    THE ALGEBRA OF ABSTRACTION (1)

    "Twenty questions"

    I want to explore a possible algebraic generalization of "abstraction", by proposing that the entire structure of abstraction is characterized by a single linear dimension ranging across a spectrum of levels that begins with something concrete and specific and ascends across levels to increasing generality and abstraction -- heading towards or converging into "the highest level" or "the one" at the top of an ascending cascade.

    My sense is that this entire symbolic structure can be constructed with one primitive element which we might describe as "distinction" -- where a distinction is analogous to a "Dedekind Cut" in some dimension. The argument is recursive and cascading, so that the general structure takes the form of "a distinction on a distinction on a distinction on a distinction..." ranging across descending levels of abstraction. Another way to say this is "a cut on a cut on a cut on a cut on a cut...", where each of these individual elements is like a "genus" (a taxon, a "row") in a taxonomy, and the cuts in the genus are like species. "Cows" are a cut (species) on the genus "mammal".
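    Here is a hedged sketch of the "cut on a cut" idea in Python -- the names are invented for this post, and the structure is deliberately minimal: a single primitive, applied recursively, yields the whole taxonomy:

    ```python
    # Sketch: one primitive, "Distinction", applied recursively. Each
    # distinction cuts one region of its parent, so the whole taxonomy is
    # literally "a cut on a cut on a cut...".

    class Distinction:
        def __init__(self, label, parent=None):
            self.label = label
            self.parent = parent       # the region this cut subdivides
            self.children = []         # finer cuts made within this region
            if parent is not None:
                parent.children.append(self)

        def lineage(self):
            """Walk back up the cascade of cuts to the top level."""
            node, path = self, []
            while node is not None:
                path.append(node.label)
                node = node.parent
            return " > ".join(reversed(path))

    animal = Distinction("animal")
    mammal = Distinction("mammal", parent=animal)
    cow = Distinction("cow", parent=mammal)

    print(cow.lineage())   # animal > mammal > cow
    ```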

    The Prime Dimension

    The argument is raised that the entire structure of abstraction and the process of generalization (and hence logic itself, as well as categorization/classification) can be understood as linearly ordered across one primary spectrum or dimension (range of values).

    In simple/minimal terms, this dimension connects a "physical object or concrete instance" to some basic/minimal symbolic representation of this object, which is defined in symbolic terms represented in a medium (such as a computer or a piece of paper).

    These general ideas are consistent with common usage and intuition. The Wikipedia article on abstraction reviews the basics and begins to outline the characteristics of the prime dimension.

    https://en.wikipedia.org/wiki/Abstraction

    Abstraction in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. "An abstraction" is the product of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category.

    Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, eliminating the other characteristics of that particular ball.

    Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. It is the opposite of specification, which is the analysis or breaking-down of a general idea or abstraction into concrete facts.

    Abstraction can be illustrated with Francis Bacon's Novum Organum (1620), a book of modern scientific philosophy written in the late Elizabethan era of England to encourage modern thinkers to collect specific facts before making any generalizations. Bacon used and promoted induction as an abstraction tool, and it countered the ancient deductive-thinking approach that had dominated the intellectual world since the times of Greek philosophers like Thales, Anaximander, and Aristotle.

    We can begin to illustrate the properties of this prime dimension ("level of abstraction") by compiling a series of polarized opposites taken from the Wikipedia article.

    1. Abstraction involves induction, or the synthesis of particular ideas into one general theory. It is the opposite of specification or deduction, which is the analysis or "breaking down" of a general theory or abstraction into concrete facts.
    2. Abstraction can be seen as a "compression" process, mapping multiple pieces of constituent data to a single piece of abstract data, based on similarities in the constituent data.
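    Point 2 lends itself to a toy computation. In this hedged sketch (Python, illustrative only), each constituent is a bag of attribute/value pairs, and the abstraction is simply what all of them share -- the Wikipedia soccer-ball example, compressed to "ball":

    ```python
    # Sketch: abstraction as "compression" -- many constituent records map
    # to one abstract record that keeps only their shared attributes.

    def abstract(instances):
        """Return the attribute/value pairs common to every instance."""
        shared = set(instances[0].items())
        for inst in instances[1:]:
            shared &= set(inst.items())
        return dict(shared)

    balls = [
        {"shape": "sphere", "bounces": True, "material": "leather", "use": "soccer"},
        {"shape": "sphere", "bounces": True, "material": "rubber",  "use": "tennis"},
        {"shape": "sphere", "bounces": True, "material": "plastic", "use": "toy"},
    ]

    print(abstract(balls))   # {'shape': 'sphere', 'bounces': True} (order may vary)
    ```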

    These diagrams begin to show the directed properties of this prime dimension and some of its attributes, emphasizing similarity and difference.

    [image: http://sharedpurpose.net/groupgraphics/dia04.png]

    [image: http://sharedpurpose.net/groupgraphics/dia06b.png]

  • 11.

    Ontologies and knowledge-based systems require a lot of work to maintain. And they take a lot of discipline to apply. One may think that the process can be automated, but the issue is that the knowledge can't really be completely bootstrapped. It's always the corner cases of knowledge classification that require all the work. One wrong classification defeats the purpose. Think of all the Wikipedia disambiguation pages that exist. In a perfect ontological world, you would never reach one of those, because the reasoner would know what you meant from the context. That's another kind of context, this time referring to surrounding knowledge.

    So, apropos of the dynamic context server, the meaning of the term "context" is overloaded via (1) environmental context models and (2) semantic context information that is supplied with each of the models.

    To understand how that gets applied, consider how people routinely tag social media posts. That serves to classify the information. With the semantic web server, the information is tagged with SWEET triples in RDF format that match the environmental situation under study and thus act as a disambiguator.

    Some parts of Wikipedia do this (Google "dbpedia" and "semantic wiki"), but the practice is not completely widespread or adopted.

    Here is an example of the maintenance involved. I just tried testing my semantic Wikipedia interface and discovered that a query feature is broken. It turns out that the dbpedia folks removed a namespace qualifier called "dbpprop" and replaced it with the shorter "dbp", which classifies property terms.

    http://dbpedia.org/sparql?nsdecl

    This was the query, which returned temperature ranges for a geographic location:

    http://entroplet.com/context_temperature/temperature?site=http://dbpedia.org/resource/Acklington
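    For anyone curious what sits behind a query like that, here is a rough sketch using the Python SPARQLWrapper library against the public DBpedia endpoint. Rather than guessing a specific property name, it just lists whatever properties the resource carries under the http://dbpedia.org/property/ namespace (the one abbreviated "dbp", formerly "dbpprop") -- exactly the hard-coded naming that the renaming silently broke:

    ```python
    # Sketch: ask DBpedia for properties of one resource in the
    # http://dbpedia.org/property/ namespace -- the namespace whose prefix
    # was renamed from "dbpprop" to "dbp". Requires the SPARQLWrapper package.

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
    endpoint.setQuery("""
        SELECT ?p ?o WHERE {
            <http://dbpedia.org/resource/Acklington> ?p ?o .
            FILTER(STRSTARTS(STR(?p), "http://dbpedia.org/property/"))
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["p"]["value"], "->", row["o"]["value"])
    ```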

    Now compare that against a free-form query such as you would use with Google search. That never needs maintenance because you get what you get. OTOH, ontological queries require some sort of standards that need to be maintained.

    Bottom line: this stuff is not for the faint of heart.

  • 12.
    edited October 2015

    Re: https://forum.azimuthproject.org/discussion/comment/14953/#Comment_14953

    Thanks, and yes. Kinda hairy stuff. A lot of work, maybe impossible, maybe a Don Quixote quest against windmills. Whenever I start pushing this stuff, I am reminded that Georg Cantor was occasionally hospitalized for his troubles...

    I did find what looks like a very good review of RDF at https://github.com/JoshData/rdfabout/blob/gh-pages/intro-to-rdf.md#

    RDF is a method for expressing knowledge in a decentralized world and is the foundation of the Semantic Web, in which computer applications make use of distributed, structured information spread throughout the Web. Just to get it out of the way, RDF isn't strictly an XML format, it's not just about metadata, it has little to do with RSS, and it's not as complicated as you think.

    The Big Picture

    RDF is a general method to decompose any type of knowledge into small pieces, with some rules about the semantics, or meaning, of those pieces. The point is to have a method so simple that it can express any fact, and yet so structured that computer applications can do useful things with it.
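    To ground that description, here is a tiny hedged example using the Python rdflib library -- the example.org namespace and the property names are invented for illustration. Knowledge about one place is decomposed into three small triples, then reassembled by a query over the graph:

    ```python
    # Sketch: RDF decomposes knowledge into (subject, predicate, object)
    # triples. The namespace and property names below are made up.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()

    g.add((EX.Acklington, EX.isA, EX.Village))
    g.add((EX.Acklington, EX.locatedIn, EX.Northumberland))
    g.add((EX.Acklington, EX.meanJulyHighC, Literal(19.5)))

    # Reassembly: ask the graph for everything it knows about one subject.
    for s, p, o in g.triples((EX.Acklington, None, None)):
        print(s, p, o)
    ```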

    This morning, I am punching in a few notes on the general process of abstraction, which is highly related to this explanation of RDF (decompose any type of knowledge into small pieces, and the inverse, assemble generalities through induction from small pieces), though perhaps expressed in more general terms.

    I'll look at your specific case, to help me better understand RDF. Yes, you are right about maintaining standards. My question is -- is it possible to generalize the creation of data structures such that "they are always made the same way, out of the same algebraic element(s)" -- and if that wildly ambitious notion makes sense, how would that help us create an absolutely fluent "local" or context-specific ontology in general or universal terms?

    PS, I like the author of this github article -- he's a powerhouse revolutionary programmer building very successful political projects, and he calls himself a "civic hacker". His personal web space is https://razor.occams.info/

    My idea on generalizing data structure is Occam's razor pushed to the limit: everything built from one element -- with isomorphic mapping from symbolic abstraction to machine instantiation. "Shortest distance between two points" -- superfast....

  • 13.
    edited October 2015

    CATEGORIES, SETS, AND DIMENSION-BASED CONSTRUCTIONS (1)

    As I explore this Azimuth space, I see repeated references to "category theory" -- which, when I look closely, I don't understand very well, if at all. So I've looked up category theory as it seems to be understood in modern algebra, and have been reviewing this Wikipedia page, which appears to be discussing the correct meaning of "category":

    https://en.wikipedia.org/wiki/Category_theory

    Category theory formalizes mathematical structure and its concepts in terms of a collection of objects and of arrows (also called morphisms). A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. Category theory can be used to formalize concepts of other high-level abstractions such as sets, rings, and groups.

    Several terms used in category theory, including the term "morphism", are used differently from their uses in the rest of mathematics. In category theory, a "morphism" obeys a set of conditions specific to category theory itself. Thus, care must be taken to understand the context in which statements are made.

    The presumption here is that a meaningful/useful analogy can be made between the structures and conclusions of category theory and effective/meaningful "real world solutions to real world problems." Given the general intention of this Azimuth space, I have no doubt this is true -- but I do have to note the comment the Wikipedia author felt drawn to include, regarding "general abstract nonsense", which links here: https://en.wikipedia.org/wiki/Abstract_nonsense

    Category theory has several faces known not just to specialists, but to other mathematicians. A term dating from the 1940s, "general abstract nonsense", refers to its high level of abstraction, compared to more classical branches of mathematics. Homological algebra is category theory in its aspect of organising and suggesting manipulations in abstract algebra.

    But the Wikipedia comment suggests that the term was coined by the founders of category theory themselves, including Saunders Mac Lane. This is very abstract stuff, but it's coherent.

    What's happening for me, as I draw together some fundamentals of my own dimension-based constructive method, is that I want to ask whether the fundamental elements of category theory can be defined in terms of lower-level elements out of which they can be constructed -- and if so, how, and whether it is useful to do so.

    I ask this question all the time. I am looking to drive complexity to its lowest root, and find some way to build "everything" from one common rock-bottom-simple element, probably the notion of "distinction", which adopts a few composite guises as it cascades upward -- like "dimension"...

    BUILDING HIGH-LEVEL ABSTRACT OBJECTS OUT OF LOWER-LEVEL ELEMENTS

    So, I have been looking at very similar questions regarding the objects of "set theory" and the common understanding that all mathematics is/can be grounded in set theory. What are these objects called "sets"? "What are they made out of?"

    I think my instincts classify me as a kind of computer-science constructivist -- maybe (?) somewhat akin to Leopold Kronecker, who insisted on "constructivist definitions" of any kind of mathematical object before he would concede its reality. I'm nervous about abstractions -- maybe because I am so full of them -- and I sense that the lack of hard grounding for abstract definition is a primary cause of human confusion and suffering. If we want to fix this place ("Planet Earth"), we gotta clear up our foundational definitions. Any mud we leave there is going to drive us nuts later when we are trying to mate incommensurate models and assumptions that are unfortunately riddled with little confusions and uncertainties that have morphed into big ones.

    So I like the idea of enforcing a constructivist approach to all abstract definitions. Show me how you are going to build this thing. And make it really hard-core, so no abstract definitions with a pencil and paper. Put it in a computer, and define the object in terms of hard-core system-state elements -- with clear-cut digital boundaries between states, so there's no fuzz and indeterminism built into the logic in its fundamentals. This logic gate is in this state (list matrix with attributes), then undergoes this state transition to this defined state. Build all the fundamentals in this way, and map the entire alphabet of constructive elements (keystrokes) back to the computation in this way.

    This is akin to the "levels of language" on which computer programs and "apps" are built. I am a ColdFusion programmer (the only language I know anything about), but I know that its high-level functions -- extremely easy to use, with very few keystrokes -- are compiled subroutines written in lower-level languages. There is a "mapping" between these high-level functions I write when I do a "query" and the actual machine instructions that process the request and return a result.

    UNIVERSAL PRIMITIVE

    So, right now -- I am just asking the question. I want to build the fundamental elements of "category theory" AND "set theory" -- and maybe some other stuff -- from a fundamental common alphabet that shares common primitives.

    I am reminded of a quite clarifying and brilliant little online class I took recently in web site programming, written, as I understand it, in the style adopted at Google. This presentation was EXTREMELY linear -- in every conceivable regard that I could detect -- and started off early by emphasizing that "everything on the internet is a box". Oh yes, even circles are boxes.

    So, yes, they show the programmer how to make circles out of boxes. Pretty basic -- but an interesting theme to consider. What I think I want to ask is -- can we ("who is WE, white man?") create a common set of primitives -- maybe a very simple/basic programming language -- that constructs all these basic elements -- set theory, category theory, network theory -- out of a simple class of common elements. Maybe there are some fundamental objects -- that look like "boxes" -- little places which we label "A" where we can stick things like "the value of A". My guess is -- if we want to push this all the way, we might have to get into font definitions/specifications. I seem to remember Douglas Hofstadter went through a major "font phase" as he was working through "Metamagical Themas", his follow-up to Godel, Escher, Bach, published as a series of Scientific American columns. A font is defined as a series of point values in a square rows/columns matrix -- either on or off. The entire world of categorical distinctions is built up from that on/off grid of font definitions. So let's start with the nitty/gritty, and build a zillion cascading distinctions from a common alphabet of on/off font definitions.
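    The font idea is easy to make literal. A hedged sketch (Python, names invented): a glyph as an on/off grid, with higher-level categorical distinctions -- "this is an A, not a B" -- built from nothing but those two-state cells:

    ```python
    # Sketch: a "font definition" as a grid of on/off distinctions. Every
    # higher-level categorical distinction is built from these two-state cells.

    GLYPH_A = [
        ".XX.",
        "X..X",
        "XXXX",
        "X..X",
        "X..X",
    ]

    def to_bits(glyph):
        """Map the drawing to pure 0/1 distinctions."""
        return [[1 if cell == "X" else 0 for cell in row] for row in glyph]

    def same_glyph(a, b):
        """A categorical judgment defined entirely by the bit grid."""
        return to_bits(a) == to_bits(b)

    print(to_bits(GLYPH_A)[0])            # [0, 1, 1, 0]
    print(same_glyph(GLYPH_A, GLYPH_A))   # True
    ```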

    DIMENSIONS/DISTINCTIONS?

    Assuming this might be motivated, is it even possible? The category theory article seems to suggest that the basics are pretty clear, and there are not too many objects. Can they all be "built out of dimensions"? I'd say -- and maybe this is totally obvious or trivial -- the answer is yes.

    All those on/off foundational distinctions can be defined as dimensions with a two-state distinction. They are all a "range of values" that extends from 0 to 1, with a transition from one state to the other that is either "instantaneous" or highly studied by physicists -- whatever the concept means in detail, a working programmer simply assumes that it's a two-state logic and very close to instantaneous.

    https://en.wikipedia.org/wiki/Abstraction_(computer_science)

    [image: http://sharedpurpose.net/groupgraphics/levelsofabstraction.png]

  • 14.
    edited October 2015

    CATEGORIES, SETS, AND DIMENSION-BASED CONSTRUCTIONS (2)

    Hmm -- bouncing around on the Wikipedia pages for category theory, I find a too-cute article that just seems to ring my bell. It's illustrating a bunch of points I just tried to make in my crude amateurish way -- and the authors use the example of CATS -- which for some reason I put into my diagram above.

    http://katmat.math.uni-bremen.de/acc/acc.pdf

    "the cat is on the mat???" (that famous diagram is on the Wikipedia page on abstraction - below)

    THE GENERAL THEORY OF STRUCTURES

    That's the subject here, guys.

    "Sciences have a natural tendency toward diversification and specialization. In particular, contemporary mathematics consists of many different branches and is intimately related to various other fields. Each of these branches and fields is growing rapidly and is itself diversifying. Fortunately, however, there is a considerable amount of common ground — similar ideas, concepts, and constructions. These provide a basis for a general theory of structures.

    "The purpose of this book is to present the fundamental concepts and results of such a theory, expressed in the language of category theory — hence, as a particular branch of mathematics itself."

    Can we construct ALL this stuff from an absolute minimalist feature-set composed of one element -- a cut and some medium it's cutting....

    And then give it absolute minimalist definition in universal hardware standards?

    https://en.wikipedia.org/wiki/Abstraction

    [image: http://sharedpurpose.net/groupgraphics/catonamatplushofstadter.png]

  • 15.
    edited October 2015

    CATEGORIES, SETS, AND DIMENSION-BASED CONSTRUCTIONS (3)

    Let's drive abstract definitions straight up from the hardware layer -- the vertical direction in this diagram -- and establish conventions as we do so, if that makes sense and is not limiting, corrupting or misleading. This means that all algebraic objects are/should be defined directly in terms of machine definitions.

    Let's presume no level of uncertainty anywhere, with no opening for ambiguous interpretation. Build anything you want, as free as you want, but build from the common rock-bottom platform and universal convention.

    https://en.wikipedia.org/wiki/Abstraction_layer

    [image: http://sharedpurpose.net/groupgraphics/abstractionlayer.png]

  • 16.

    This thesis was written in 2013, and I see that the deprecated dbpprop namespace prefix shows up in "Usage-dependent maintenance of structured Web data sets": http://www.diss.fu-berlin.de/diss/servlets/MCRFileNodeServlet/FUDISS_derivate_000000014794/luczak-roesch-data-set-maintenance-publication-online.pdf

  • 17.

> This thesis was written in 2013, and I see that the deprecated dbpprop namespace prefix shows up in "Usage-dependent maintenance of structured Web data sets"

I feel a little uneasy about how "the" eigenvector W of the pairwise comparison matrix (p. 124) was computed -- I guess the eigenvector belonging to the Perron–Frobenius eigenvalue (https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem) is meant. That is, I am not sure whether I understand the procedure as explained there, but as I said, the first impression I got of this alleged Perron–Frobenius method is that it is rather quirky.
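For reference, here is a minimal sketch (Python with NumPy; the matrix below is an invented reciprocal pairwise comparison matrix, not the one from the thesis) of the standard power-iteration route to the Perron–Frobenius eigenvector, which is what I would have expected on p. 124:

```python
import numpy as np

# An invented reciprocal pairwise comparison matrix (a_ji = 1 / a_ij),
# standing in for the one discussed in the thesis.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

def perron_vector(M, tol=1e-12, max_iter=1000):
    """Power iteration: for a positive matrix this converges to the
    eigenvector of the largest (Perron-Frobenius) eigenvalue."""
    w = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(max_iter):
        w_next = M @ w
        w_next /= w_next.sum()       # normalize so the weights sum to 1
        if np.abs(w_next - w).max() < tol:
            break
        w = w_next
    return w_next

w = perron_vector(A)
print("weights:", w)
print("lambda_max estimate:", (A @ w / w).mean())
```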

  • 18.

I tend to tune out anyone who uses the terms eigenfunctions, eigenvalues, or eigenvectors indiscriminately. The implicit assumption is that these name some fundamental characteristic of the system under study.

Consider the case of QBO and the underlying 2.33-year fundamental period (https://forum.azimuthproject.org/discussion/comment/14959/#Comment_14959). As I think is becoming obvious, this is not so much an eigenvalue of the system as a forcing function from an external driving system (lunar orbital forcing, in this case).

Climate science and earth science research papers appear to be overloaded with these kinds of misinterpreted models. IMO, they're missing the actual physics in favor of trying to impress by waving the Eigen-word about.

    If an engineer found a 60-Hz signal in an electrical circuit he was measuring and then claimed it was an eigenvalue, he would be laughed off :)

    ... as long as this thread is going off-topic, I couldn't resist.
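Since we're off-topic anyway, here is a minimal sketch of the 60-Hz point (Python with NumPy; all parameters are made up): a damped oscillator with a 25 Hz natural frequency, driven at 60 Hz. The steady-state spectrum peaks at the forcing frequency, not at the system's eigenfrequency -- which is exactly why calling the mains hum an "eigenvalue" would get you laughed at.

```python
import numpy as np

# Damped oscillator x'' + 2*zeta*w0*x' + w0^2*x = sin(2*pi*f_drive*t).
# All parameters invented for illustration.
f0, f_drive, zeta = 25.0, 60.0, 0.05   # natural freq, drive freq (Hz), damping
w0 = 2 * np.pi * f0
dt, n = 1e-4, 200_000                  # time step (s), number of steps

x, v = 0.0, 0.0
xs = np.empty(n)
for i in range(n):
    a = np.sin(2 * np.pi * f_drive * i * dt) - 2 * zeta * w0 * v - w0**2 * x
    v += a * dt                        # semi-implicit Euler step
    x += v * dt
    xs[i] = x

steady = xs[n // 2:]                   # discard the transient
spectrum = np.abs(np.fft.rfft(steady))
freqs = np.fft.rfftfreq(steady.size, dt)
print("response peaks at %.1f Hz" % freqs[spectrum.argmax()])  # ~60.0, not 25
```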

  • 19.
    edited October 2015

Bottom-Up / Top-Down

[image: https://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Computer_abstraction_layers.svg/200px-Computer_abstraction_layers.svg.png]

    This continues to be an emerging thesis, experimental in many regards, prone to weakness at many points, and demanding a high level of technical expertise in many particulars. All of this needs to be precisely clarified in robust detail. But let's just postulate for a moment that from "the bottom-up perspective" it might be possible and make sense to define all algebraic fundamentals and "dimensions" (elements, aspects, facets, parts) of fundamentals in terms of a mechanistic machine-based definition "at the lowest level of the abstraction cascade" as per the Wikipedia diagram. Right at that first-level interface, between "the machine itself" and the interpretation of the machine state by the firmware, "abstract" distinctions begin to define the properties of the layers above this interface -- and the meaningful elements of language begin to be defined. "On/off", "Yes/no", a shade of grey, a shade of color, a data cell with known boundary values in x and y and a defined content...

Based on these fundamentals, this proposal suggests that "all" fundamental algebraic objects can receive a common and "indisputable" definition -- or a definition that could be negotiated as an ideal compromise and "industry standard". Those objects might include abstract notions like "set" or "category" or "group" or "ring" or "vector", and more basic notions like "distinction" or "value" or "difference" -- as well as other terms such as "identity", or fundamental operations such as "addition". What are the basic constructive "objects" from which all symbolic abstractions are constructed? This process probably has to involve looking at the basic "layers of language" and noting the first-level distinction -- probably "the difference between a 0 and a 1" -- and asking the question "what does that mean?"


    STIPULATION

    Right at this point, we have to introduce the notion of stipulation. Meaning is not inherent in the mechanical system. It is assigned to the mechanical system by human intention.

    (insert something very careful about this first-level mapping -- show specific instances)

    Though the interpretation of meaning by other people or by machines often depends on correlation and large-scale statistics ("what do most people mean by this term?"), the actual usage of any word or term in a specific context or act of communication is intended and purposive.
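As one specific instance of the kind the parenthetical note above calls for, here is a minimal sketch (Python; the byte value is arbitrary) showing that the same machine state means nothing until a convention is stipulated -- each line reads the identical bits under a different human-assigned interpretation:

```python
import struct

# One byte of machine state: 0b01000001. By itself it means nothing;
# every "meaning" below is stipulated by a convention we bring to it.
raw = bytes([0b01000001])

print(int.from_bytes(raw, "big"))                  # 65  (unsigned-integer convention)
print(struct.unpack("b", raw)[0])                  # 65  (signed-integer convention)
print(raw.decode("ascii"))                         # 'A' (ASCII text convention)
print([bool(raw[0] >> i & 1) for i in range(8)])   # eight yes/no flags
```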

Interpreting intended meaning becomes increasingly difficult as levels of abstraction increase, because abstract terms carry an "implicitly nested cascade" of alternative possible interpretations (alternative particular meanings), which become more complex and potentially confusing as the level of abstraction increases. When a speaker uses a term like "beautiful", they might have a very specific intended meaning that they could describe in detail. "What do I mean by 'beauty'? Glad you asked -- I can describe it exactly, with numeric precision." But unless that "drill-down dialogue" between the speaker and the listener is conducted in detail, in ways that define the intended meaning precisely, the listener has to guess the intended meaning -- to "interpolate" it based on knowledge of the speaker and the context. This interpolation (interpretation) is a very error-prone process, and human communications in general are often badly damaged or ruined by this problem ("we fail to understand one another").

    If we want to validate or reinforce the value of abstractions in any exact or precise context, we have to define this interpolation process with high precision -- not an easy thing to do, and often seen as impossible. This study of abstraction is an exploration of possible solutions to this critical problem.


    LEVELS OF MEASUREMENT: TYPES OF VARIABLES / DIMENSIONS

In the social sciences and in statistics, it is common to see a range of variable types ("dimension types") defined. This text excerpt describes the four basic classes (types) of variables defined by Stanley Smith Stevens, a psychology professor at Harvard (https://en.wikipedia.org/wiki/Stanley_Smith_Stevens). It is interesting to note that among Stevens's prime disciples was psychologist George A. Miller, author of the famous "Magical Number Seven" article in cognitive psychology and a founder/director of WordNet (https://en.wikipedia.org/wiki/WordNet).

[image: http://sharedpurpose.net/groupgraphics/typesofvariables.png]

    http://www.graphpad.com/support/faqid/1089/

    This scheme is commonly seen as controversial, and I have noticed over several years that the Wikipedia article on this method has been re-edited several times, tending to strip out elements that might be significant but which are seen as questionable by some scholars.

    Here's an interesting critique/review: http://www4.uwsp.edu/geo/faculty/gmartin/geog476/Lecture/BeySt.htm

This is a tricky area, and it does depend on "stipulative" (humanly intended) values and meaning -- rather than, for example, "observed" meaning. But this scheme has the great virtue of simple commensurateness with a basic dimension-based approach to levels of abstraction and the construction of a universal semantic model of concepts based on dimensions. Keep the focus on the concept of abstraction and the relationship of "quantitative" and "qualitative" variables -- with the idea that stipulation can provide precise dimensional meaning to the "implicit cascade of intended but not explicit meaning" inherent in any abstraction.
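As a rough illustration of how this scheme could be carried into a formal setting, here is a minimal sketch (Python; the permitted-operation lists are my own simplification of Stevens's levels, not a standard library): each level of measurement admits strictly more meaningful operations than the one below it.

```python
# A toy rendering of Stevens's four levels of measurement.
# The permitted-operation lists are a simplification for illustration.
LEVELS = {
    "nominal":  {"ops": ["=", "!="],                               "example": "blood type"},
    "ordinal":  {"ops": ["=", "!=", "<", ">"],                     "example": "pain scale"},
    "interval": {"ops": ["=", "!=", "<", ">", "+", "-"],           "example": "Celsius temperature"},
    "ratio":    {"ops": ["=", "!=", "<", ">", "+", "-", "*", "/"], "example": "mass in kg"},
}

def allowed(level, op):
    """Is this operation meaningful for a variable at this level?"""
    return op in LEVELS[level]["ops"]

print(allowed("ordinal", "<"))    # True: ranks can be ordered
print(allowed("interval", "/"))   # False: 20 C is not "twice as hot" as 10 C
```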


    ABSTRACTION AS THE RE-NAMING OF BOUNDARY VALUES

    The "levels" defined by Stanley Smith Stevens can be defined as different kinds of labels for a bounded range of values in some dimension, that becomes "increasingly quantitative" as the level of abstraction decreases.

    This very simple diagram shows the assignment of stipulative and abstract ("qualitative") values to a temperature range that can be assigned a precise numeric value. This same process, of renaming boundary values for reasons of convenience, is basic to the process of normal "psychological economy". We "don't have time" to describe absolutely everything we discuss in terms of precise dimensional measurements -- so we use shortcuts -- like "hot" -- to describe what we could more precisely describe as a bounded quantitative range or a precise number.
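A minimal sketch of that renaming step (Python; the cut-points are invented for illustration): a precise numeric temperature is relabeled with a stipulated qualitative value by checking which bounded range it falls into.

```python
# Stipulated boundary values (degrees Celsius), invented for illustration.
# Each qualitative label is shorthand for a bounded quantitative range.
BANDS = [(-273.15, "freezing"), (0.0, "cold"), (15.0, "mild"),
         (25.0, "warm"), (35.0, "hot")]

def qualitative(temp_c):
    """Rename a precise measurement with its stipulated abstract label."""
    label = BANDS[0][1]
    for lower_bound, name in BANDS:
        if temp_c >= lower_bound:
            label = name
    return label

print(qualitative(38.2))   # 'hot' -- the abstraction
print(38.2)                # the precise value the abstraction abbreviates
```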

    This is what is meant in the Wikipedia discussion of abstraction, where it is said

    Abstraction uses a strategy of simplification, wherein formerly concrete details are left ambiguous, vague, or undefined; thus effective communication about things in the abstract requires an intuitive or common experience between the communicator and the communication recipient. This is true for all verbal/abstract communication.
    https://en.wikipedia.org/wiki/Abstraction

[image: http://sharedpurpose.net/groupgraphics/dia10.png]

In formal settings, or in a context where very precise specifications are required -- such as in contracting -- very broad abstractions are negotiated down to high degrees of precision, in particular by lawyers.

    If the basic job at the highest level of abstraction is defined as "Build an aircraft carrier for 5 billion dollars", there will be a process of drill-down specification that goes into every imaginable detail with exacting precision -- perhaps starting with the question "What is an aircraft carrier?" ("What do we agree is meant by that term?")

  • 20.
    edited October 2015

    AN ABSOLUTE INTEGRATING FRAMEWORK

    Following this general line of definition, we can begin to unfold a comprehensive "epistemological ontology of the whole" organized around the prime dimension of abstraction.

[image: http://sharedpurpose.net/groupgraphics/dia11.png]

    This "prime dimension" defines the range of a series of polar opposites that characterize the fundamentals of logic and "conceptual space". Because definitions are always fluent and context-specific, these are not rigid or static definitions, and are meant to illustrate the basic dimensionality of this integral model in an intuitive way. In many cases, these terms are interchangeable. And there are other important terms that can also be defined this way.

    Bringing all these elements together into one integrated framework seems consistent with natural intuition and the way we would likely think about any of these things individually. This approach suggests that all these factors can be brought together at the same time. Intuitively, looking at all of this at the same time, one might say -- either there is something very wrong with this, or it just might turn out to be astonishingly powerful. All these things -- all these concepts and terms -- at the same time? Is this just analytic mud, or can we track this all down to its atomic specifics and actually build all these structures and interpretations in one common linear language?

    This diagram defines many basic concepts from epistemology and logic, all defined within a single framework, and all defined as polar opposites on the primary and linear dimension "levels of abstraction".

    "Holon" is a term from Arthur Koestler that has been picked up by some modern analysts including the philosopher ken Wilber, and it refers to "a part that is also a whole" -- just as we might understand that a car part like a carburetor is both "a part" (part of a whole car) and "a whole" (a whole carburetor).

    Is there a single linear algebraic form that can reliably define all these elements in a single framework organized in such a simple and basic -- even "primitive" -- way?

    The diagram seems intuitively appealing. Some kind of constructivist algebraic proof is probably required to fully open the power of this model. Can all these terms and definitions be "constructed from dimensions"? Can this definition chain start "at the foundations of computer science" and extend to a generalization of epistemological ontology?

    This model does appear to be a kind of "mashup" of taxonomy, mereology, ontology, epistemology, logic, and hierarchy theory -- and probably a few other topics we could think of. And in this format, it includes a reference to "left brain" and "right brain" cognitive tendencies. But it might be possible to integrate all those seemingly disparate areas around a single integrated model of abstraction/generalization, and this diagram begins to be a map of how that could be done.

    And perhaps amazingly, though the "high level" elements of this framework are highly "multi-dimensional" and composite/synthetic, the entire structure is "100% linear" in every aspect and facet of its construction. Every "part" of the construction is itself linear.

[image: http://sharedpurpose.net/groupgraphics/dia12a.png]
  • 21.
    edited October 2015

    Richard Feynman on the Hierarchy of Ideas

From Richard P. Feynman, The Character of Physical Law, quoted in Paul Davies, God and the New Physics, p. 224:

    "We have a way of discussing the world . . . at various hierarchies, or levels. Now I do not mean to be very precise, dividing the world into definite levels, but I will indicate, by describing a set of ideas, what I mean by hierarchies of ideas.

    "For example, at one end we have the fundamental laws of physics. Then we invent other terms for concepts which are approximate, which have, we believe, their ultimate explanation in terms of the fundamental laws. For instance, "heat". Heat is supposed to be jiggling, and the word for a hot thing is just the word for a mass of atoms which are jiggling. But for a while, if we are talking about heat, we sometimes forget about the atoms jiggling -- just as when we talk about the glacier we do not always think of the hexagonal ice and the snowflakes which originally fell. Another example of the same thing is a salt crystal. Looked at fundamentally it is a lot of protons, neutrons, and electrons; but we have this concept "salt crystal", which carries a whole pattern already of fundamental interactions. An idea like pressure is the same.

    "Now if we go higher up from this, in another level we have properties of substances -- like "refractive index", how light is bent when it goes through something; or "surface tension", the fact that water tends to pull itself together, both of which are described by numbers. I remind you that we have got to go through several laws down to find out that it is the pull of the atoms, and so on. But we still say "surface tension", and do not always worry, when discussing surface tension, about the inner workings.

    "On, up in the hierarchy. With the water we have waves, and we have a thing like a storm, the word "storm" which represents an enormous mass of phenomena, or a "sun spot", or "star", which is an accumulation of things. And it is not worth while always to think of it way back. In fact we cannot, because the higher up we go the more steps we have in between, each one of which is a little weak. We have not thought them all through yet.

    "As we go up in this hierarchy of complexity, we get to things like muscle twitch, or nerve impulse, which is an enormously complicated thing in the physical world, involving an organization of matter in a very elaborate complexity. Then come things like "frog".

    "And then we go on, and we come to words and concepts like "man" and "history", or "political expediency", and so forth, a series of concepts which we use to understand things at an ever higher level.

    "And going on, we come to things like evil, and beauty, and hope...

    "Which end is nearer to God, if I may use a religious metaphor. Beauty and hope, or the fundamental laws? I think that the right way, of course, is to say that what we have to look at is the whole structural interconnection of the thing, and that all the sciences, and not just the sciences but all the efforts of intellectual kinds, are an endeavor to see the connections of the hierarchies, to connect beauty to history, to connect history to man's psychology, man's psychology to the working of the brain, the brain to the neural impulse, the neural impulse to the chemistry, and so forth, up and down, both ways. And today we cannot, and it is no use making believe that we can, draw carefully a line all the way from one end of this thing to the other, because we have only just begun to see that this is a relative hierarchy.

    "And I do not think that either end is closer to God."

There is no generally accepted scale by which academic disciplines or sciences can be strictly classified. But taken loosely, maybe this partition -- "looking at the whole structural interconnection of the thing" -- begins to illustrate why there seems to be a natural grouping or spectrum of disciplines. Can they all be interconnected through a single integrated/composite data structure? Can we define the holism of the "right brain" and deep intuition in exactly the terms that we define the elements of the "left brain" and reductionist analysis, and construct a 100% linear bridge (one prime linear dimension, like the trunk of a tree, with all nested sub-elements also defined through 100% linear construction, as something like "branching sub-trees") across all these levels of scale -- where the integrating linear scale is simply "level of abstraction"? That would be quite astonishing, and an amazing reformation.


    SPECTRUM OF SCIENTIFIC AND ACADEMIC SECTORS
[image: http://sharedpurpose.net/groupgraphics/dia14.png]


    LEVELS OF ONTOLOGY
[image: http://sharedpurpose.net/groupgraphics/dia13.png]


    SCIENCE / HUMANITIES SCHISM
[image: http://sharedpurpose.net/groupgraphics/dia15a.png]

  • 22.
    edited October 2015

    COMMENTS ON FEYNMAN

    THE HIERARCHY OF IDEAS

    "We have a way of discussing the world . . . at various hierarchies, or levels. Now I do not mean to be very precise, dividing the world into definite levels, but I will indicate, by describing a set of ideas, what I mean by hierarchies of ideas."

Feynman gets the big picture. He sees the full range of human ideas, ranging from the most demanding and fine-grained science to the broadest and most abstract kind of philosophy. In one brief sentence, he outlines a theology. He sees that this vast interconnected arrangement of ideas can be understood as a hierarchy. He affirms that the idea of hierarchy or "levels" makes sense, even if it does not make sense to define these levels in rigid ways. He sees that this hierarchy has "two ends" -- essentially affirming the "linear" or spectral quality of this entire vast framework.

    ABSTRACTION

    "For example, at one end we have the fundamental laws of physics. Then we invent other terms for concepts which are approximate, which have, we believe, their ultimate explanation in terms of the fundamental laws. For instance, "heat". Heat is supposed to be jiggling, and the word for a hot thing is just the word for a mass of atoms which are jiggling. But for a while, if we are talking about heat, we sometimes forget about the atoms jiggling -- just as when we talk about the glacier we do not always think of the hexagonal ice and the snowflakes which originally fell."

Feynman is talking about abstraction -- and just as these diagrams indicate, it makes sense to position physics "at one end" of this spectrum. It might take a fairly complex discussion to outline why we would put physics at "the bottom" -- at what we want to call "the empirical ground" -- but generally, what we are saying is this: we are building our abstract model of the world -- of reality -- out of elements that we initially understand in terms of the rock-bottom elements of physics -- and in terms of which all other definitions and concepts -- like "heat" -- can be constructed. Seen this way, "heat" is an abstraction.

And Feynman acknowledges the basic facts of psychological economy. "We sometimes forget about the atoms jiggling -- just as when we talk about the glacier we do not always think of the hexagonal ice and the snowflakes which originally fell." It's too complicated to acknowledge all these details -- so we create a synthetic abstraction called "heat", which gives us one linear expression for a complex atomic phenomenon. We want to speak briefly and "at a higher level" -- not in a formally graduated series of levels, but as a general jump up the ladder of abstraction, which we feel we can take with confidence, because we are certain of the definition and its grounding in empirical specifics.

    And of course, in fact, if we DID need to formally acknowledge this jump up the ladder of abstraction, we could do so, because the definitions are well grounded and can be exactly specified, if that is helpful.

    "HOLONS" AND COMPOSITE/SYNTHETIC CONCEPTS

    "Another example of the same thing is a salt crystal. Looked at fundamentally it is a lot of protons, neutrons, and electrons; but we have this concept "salt crystal", which carries a whole pattern already of fundamental interactions. An idea like pressure is the same."

    This is the same idea. "Salt crystal" is a composite/aggregated abstraction, a name for a general class of highly complex objects. "Pressure" -- like "heat" -- is also a composite/aggregate concept. We might think of it as fundamental -- and in many ways it is. But in fact, tracking the definition to its hard ground, it's a composite abstraction. It's "at a higher level" than the fundamental elements from which the definition is constructed. These concepts are "holons" -- simple integral composite names for complex phenomena that we have compiled into a single aggregated general concept.

    If we really want to understand the ontology of intellectual structure in any general way, this is an essential point. All ideas are built in this way, across an ascending scale of abstraction.

Given this apparently universal character, why not build an unbroken spectral model, constructed from universal primitive/constructive elements? Do we know how to do this? Maybe we're getting closer...

    IMPLICIT NESTING - AND WHY WE GET SO CONFUSED AND ANGRY

    "Now if we go higher up from this, in another level we have properties of substances -- like "refractive index", how light is bent when it goes through something; or "surface tension", the fact that water tends to pull itself together, both of which are described by numbers. I remind you that we have got to go through several laws down to find out that it is the pull of the atoms, and so on. But we still say "surface tension", and do not always worry, when discussing surface tension, about the inner workings."

This is the essence of abstraction. We do not always worry about the inner workings. But note the issue that arises when we assign a label to a higher-level abstraction, and "do not worry about the inner workings." As long as we are on the fairly safe ground of well-defined and quantitative subjects, we can be pretty sure that the abstract labels we are using refer to well-defined, non-controversial, non-ambiguous things.

As Feynman says, "we have got to go through several laws down to find out that it is the pull of the atoms" -- but in physics, we feel safe. The definition chain is solid, widely accepted, proven over a couple hundred years of testing and experience.

But we're starting to float on higher and higher levels of abstraction, with increasingly complex and potentially vulnerable grounding. For reasons of psychological economy, we're using "shortcuts". Are those shortcuts -- their "inner workings" -- their "implicit interpretation" -- well defined and solid? In the case of physics, generally yes. In the case of the humanities and philosophy or metaphysics -- much less so, if at all.

    And why? Because the definition chain inherent in "the inner workings" -- the "implicit" cascade of definition to the empirical ground -- nested beneath the abstract term and implicit in our confident use of the term -- is missing, broken, or ambiguous/confused.

    This basic fact of epistemology and conceptual structure (semantics) is a big deal in the real world. If we are failing to understand each other on this planet, in very large part it is because we have huge fragmentation in our grounding definition chains. "Science and religion" are at each other's throats for just this reason. We are staggering around in confused mythology because we have not figured this out -- even though a fairly simple and intuitive analysis like this -- not very sophisticated -- starts to make it obvious.

    And politicians -- in today's world, the misinterpretation of abstraction is a high political skill. It's called "spin". Don't understand the intended meaning -- with its supposed implicit meaning that the speaker simply did not have time to fully specify -- and instead, see a twisted meaning and build your attack on that basis...
  • 23.
    edited October 2015

    COMMENTS ON FEYNMAN (2)

    "STEPS IN BETWEEN, EACH ONE OF WHICH IS A LITTLE WEAK"

    "On, up in the hierarchy. With the water we have waves, and we have a thing like a storm, the word "storm" which represents an enormous mass of phenomena, or a "sun spot", or "star", which is an accumulation of things. And it is not worth while always to think of it way back. In fact we cannot, because the higher up we go the more steps we have in between, each one of which is a little weak. We have not thought them all through yet."

    And here's where our definition chain starts to fall apart. Suppose we want to think about huge abstractions and "whole systems" and large composite/aggregate things or broadly inclusive properties of reality. Here's where the guesswork of religion starts to enter the picture. The far-seeing human mind knows that life is embedded in the big picture, and that its laws or principles govern or influence what we can do. But can we conceptualize these principles with the same acuity that physics brings to its study of sunspots and ice crystals?

    "As we go up in this hierarchy of complexity, we get to things like muscle twitch, or nerve impulse, which is an enormously complicated thing in the physical world, involving an organization of matter in a very elaborate complexity. Then come things like "frog".

    "And then we go on, and we come to words and concepts like "man" and "history", or "political expediency", and so forth, a series of concepts which we use to understand things at an ever higher level.

    "And going on, we come to things like evil, and beauty, and hope..."

All of this in one hierarchy, one framework. This is a grand vision of the unity of knowledge -- perhaps intuitive and "non-scientific" -- but Feynman explains the foundations in scientific terms. Where do the fundamental definitions come from? How are composite, more abstract definitions created? What is "political expediency"? What is "evil", or "beauty"? If these things are simply what people say they are (stipulated definitions), how do they fit into the hierarchy? Can they be specified with precision by the speaker in the particular context of usage? It's true that these high-level and "qualitative" abstractions are not grounded in the same way as scientific definitions -- but they are a fact of human experience -- and in an immediate context, when understood as intended stipulations ("words mean what the speaker wants them to mean"), any word meaning can be specified to a very high degree.

    THE WHOLE STRUCTURAL INTERCONNECTION OF THE THING

    "Which end is nearer to God, if I may use a religious metaphor. Beauty and hope, or the fundamental laws? I think that the right way, of course, is to say that what we have to look at is the whole structural interconnection of the thing, and that all the sciences, and not just the sciences but all the efforts of intellectual kinds, are an endeavor to see the connections of the hierarchies, to connect beauty to history, to connect history to man's psychology, man's psychology to the working of the brain, the brain to the neural impulse, the neural impulse to the chemistry, and so forth, up and down, both ways.

    Religion, the unity of the sciences, the unity of science and religion -- map it all through neurology and perception, or a theory of concepts and data structures -- but see how it's connected, and map the connections. Where they are weak, make them strong. Where they are ambiguous, make them clear. Where they are necessarily intentional and stipulative and context-specific, see that and recognize it and don't be dismayed or doctrinally shocked. Just understand it. These elements ARE part of human understanding -- and whether they are "metaphor" or not, they are inherent in human experience.

    "And today we cannot, and it is no use making believe that we can, draw carefully a line all the way from one end of this thing to the other, because we have only just begun to see that this is a relative hierarchy."

Yes, it's interesting -- and challenging -- that Feynman makes this point -- that we "cannot draw a line from one end to the other" -- since this "line" is indeed what is being suggested here, with some qualifications. It might have been helpful if he had told us more about his notion of a "relative hierarchy". What does that mean?

It's an attractive and suggestive notion. Definitions are "relative to one another"? Each of these weakly defined "levels" is somehow its own domain, with no hard links of connection to related sub-domains -- the way "heat" is related to "atoms jiggling" through a cascade of definitions across levels of abstraction?

    He says himself that "we have not thought through all these connections" and they are all a little bit weak. And we are only beginning to understand it.

    We do need a sophisticated semantics to explain this framework -- maybe a semantics based on new ideas that are not yet fully demonstrated or proven.

    And "relativity"? Maybe there will emerge some potent new mathematical framework that can contain all of these levels, with some clear-cut approach to their internal dimensional structure, in ways that do link across all these levels from end to end. Is there an analogy something like "relative is to absolute like part is to whole?" Maybe part/whole mereology is a key to cascaded relativity within the all-inclusive framework of the whole. "The conceptualiztion of reality is a fractally-cascaded holon?"

    THE UNIVERSE IN A SINGLE ATOM

    "And I do not think that either end is closer to God."

    This seems to be a theology of immanence in a single sentence. Not the highest level, not the lowest level, maybe the term "God" is a metaphor, nothing is lost. Whatever it is, call it what you like, the energy or meaning is equally distributed and present over the entire framework -- the infinite, the whole, the absolute, the parts, the relative, the infinitesimal -- all imbued with, all interconnected by -- what? Maybe like the Dalai Lama's idea of "The Universe in a Single Atom" -- a universal ordering principle grounded in a beginningless ontology that replicates across all levels of scale? That would explain a lot.
    http://www.amazon.com/The-Universe-Single-Atom-Spirituality/dp/0767920813
    https://en.wikipedia.org/wiki/Immanence
  • 24.
    edited October 2015

    THE FULL DIMENSIONALITY OF CONCEPTUAL SPACE

    Clearly, this is an intuitive model. Every facet of this framework requires hard definition to be legitimated as a mathematical expression. Maybe we'll get an encompassing definition of the entire framework that unfolds the specifics as determinate implications of the whole. That might be stunning. And if that's not possible or doesn't happen, there are a lot of moving parts to confirm or construct individually.

    But still, it's a big sketch, with a lot of implicit dimensionality making a very broadly inclusive claim. And it combines elements from a lot of sources that, for now, we are somewhat obliged to see as separate disciplines. Taxonomy, ontology, mereology, hierarchy, abstraction? All kind of the same thing, from different angles, for different purposes? "Holons" and mereology? Maybe "holons" are just a simplistic pop-market smear of a complex technical study?

    Or could it be possible that the complex technical studies tend to lose sight of the forest for the trees? That logic and the entire framework of human thought DOES operate within knowable boundaries? And the entire structure is far simpler -- perhaps vastly so -- than complex technical analysis has thus far been able to conceive? "Everything comes from nothing" -- by some innate ontological template that perhaps we can infer in some of its critical details?

    More or less, yes, it seems to be true: induction goes up the hierarchy, from specific concrete elements of "facts" to broadly inclusive universal conclusions, and deduction goes down the hierarchy, just the reverse. Is it really that simple? And ideas like "abduction" -- "induction plus intuitive guesswork to help validate the conclusion" -- might just be confusing violations of Occam's razor, tacked onto the framework because the originating analyst didn't have a very clear picture to begin with?

    But is it always the same hierarchy? The same scale, the same series of levels?

    Why is the answer "yes"? Because it's always "abstraction" -- endlessly shape-shifting, always context-specific, continuously variable and as fluent as water -- and we can define abstraction with nth-level precision -- to any number of decimal places.

    https://en.wikipedia.org/wiki/Abstraction
    https://en.wikipedia.org/wiki/Mereology
    https://en.wikipedia.org/wiki/Taxonomy_(biology)
    https://en.wikipedia.org/wiki/Hierarchy
    https://en.wikipedia.org/wiki/Tree_structure
    https://en.wikipedia.org/wiki/Tree_(set_theory)

    **

    What we are gunning for here is an integral algebraic construction mapped directly into the common and widely accepted foundations of mathematics -- particularly the continuum and the "real number line" as approached by Richard Dedekind with his concept of the "cut", probably defined within the "unit interval", the bounded range from 0 to 1.

    A bunch of pieces come together here -- somehow -- and here's a crude initial list.

    1. The mapping of the number system -- the decimal system being the most convenient example -- into the continuum, through the process of "cuts" (a minimal sketch in code follows this list).

    2. A recognition or adaptation of the "levels of measurement" theory from Stanley Smith Stevens and others, particularly found in the social sciences, which acknowledges a wide range of "types of variables" that can be arranged in a hierarchical format, just as is described here.

    3. The construction of all classification, in the form of taxonomy, mapped into this process.

    4. A general explanation of the basic principles of mereology in the context of "holons" -- "wholes that are also parts".

    5. A systematic formal construction of this framework, probably defined in terms of a "universal primitive element" -- maybe a "cut", maybe a "distinction", maybe a "range of values" (which might be a series of distinctions made in a distinction, somewhat like a particular species within a genus), maybe a "synthetic dimension", which is an attempt to combine all these definitions into a single concept.

    6. Ideally -- a way to "close this space" -- to seal it on itself -- such that the total notion of "distinction" might be seen as an "edge" -- such that the composite integral space becomes "the container" -- it "contains itself" -- in ways which are probably relevant to Russell's paradox. Does this ambition have something to do with a Möbius strip? There are strong suspicions that the answer is yes. Maybe this entire structure takes the form Douglas Hofstadter described as a "strange loop". A Möbius strip has "one edge" and "one side" -- and appearances to the contrary are deceiving. This is kind of amazing. If "reality itself embodies no distinctions" -- as mystics and great philosophers have suggested -- then maybe this illusion of "sidedness" creates the phenomenon of "duality" -- and from there, all the potential for "opposites".

    7. There's a guiding instinct that says -- this space can be closed, and its vastly infinite complexity of internal differentiations can be described by a single general theory that partitions the entire space. Maybe it's a "one-edged space" closed on itself to contain itself, and all logic arises within it.

    8. As an intriguing add-on -- maybe the part/whole relationship on which mereology and holon theory are based can be interpreted in terms of cascaded coordinate frames, where each "part" defines a "relative" context -- a context with its own relative center-point, maybe a "center of gravity" -- such that each local context can act "independently of the whole" and yet is held within the whole. All these levels could then be strung together through a single shared highest-level universal center-point -- creating an infinite cascade of relative coordinate frames held within a single all-inclusive absolute frame, all connected to and through one another through a common center.

      If these semi-independent relativistic contexts could be understood as collective intelligence decision contexts -- as contexts for dialogue and deliberation and collective decision-making -- we might be discussing a cascaded system for some new kind of optimal "global electronic democracy" that finds an ideal balance between global respect for the whole and local freedom and independence.
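
    Here is the sketch promised in item 1 -- a minimal, hedged illustration of my own (a toy construction, not a settled formalism): each successive decimal digit acts as a "cut" that narrows a bounded range within the unit interval, leaving residual uncertainty below the least significant digit.

        # A minimal sketch (an illustration, not a formal construction):
        # each decimal digit is a "cut" narrowing a bounded range in [0, 1].
        from fractions import Fraction

        def bounds(digits):
            """Given decimal digits after the point (e.g. [3, 1] for 0.31...),
            return the (lower, upper) bounds the digits pin down so far."""
            lo = Fraction(0)
            scale = Fraction(1, 10)
            for d in digits:
                lo += d * scale
                scale /= 10
            return lo, lo + 10 * scale  # residual range: the remaining uncertainty

        lo, hi = bounds([3])
        print(lo, hi)    # 3/10 2/5   -- i.e. 0.3 <= x < 0.4
        lo, hi = bounds([3, 1])
        print(lo, hi)    # 31/100 8/25 -- i.e. 0.31 <= x < 0.32, a cut on a cut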

    image: http://sharedpurpose.net/groupgraphics/dia12e.png
  • 25.
    edited October 2015

    UNIVERSAL PRIMITIVE

    One way or the other, every possible data structure or logical object defined within a computer is fundamentally defined in terms of bits. All data structures, all symbolic representation, all symbols, all algebraic objects, all words -- and sentences, paragraphs, books and libraries -- can be reproduced, 100% and in micro-detail, by building composite definitions assembled from the 2-state logic of the bit.

    In this exploration, we want to consider a chain of increasingly abstract interpretations of this fundamental 2-state distinction, out of which we want to model the complete dimensionality of conceptual space.
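
    As a concrete and familiar illustration of that claim -- this is just standard character encoding, not part of the construction itself -- any word decomposes losslessly into bits:

        # A word decomposed into the 2-state primitive (8 ASCII bits per character).
        word = "frog"
        bits = "".join(format(ord(ch), "08b") for ch in word)
        print(bits)  # 01100110011100100110111101100111

        # And reassembled: nothing is lost going down to bits and back up.
        decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
        assert decoded == word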

    Nested distinctions
    Our objective is to construct a hierarchy of abstraction capable of modeling or describing any possible complex of nested distinctions, at any desired level of detail.

    We want to build an ascending recursive cascade of composite definitions grounded in the 2-state logic of the bit, beginning with the continuum defined as the unit interval (the continuous "real number" numeric range from 0 to 1) and a "cut" in the continuum.

    The general form of any taxonomy or multi-level system of classification is "a distinction made on a distinction made on a distinction" -- or in more basic terms, "a cut on a cut on a cut...". Any grouping at any level of a taxonomy is a "taxon" (plural "taxa"). In a taxonomy, a "genus" is a taxon, and a "species" is a cut on a taxon: a bounded range within that taxon, with a lowest and highest value in some numeric dimension characterizing and distinguishing that taxon.
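
    A hedged sketch of that idea in code -- the names and numeric ranges below are purely illustrative assumptions, not a real taxonomy:

        # "A cut on a cut": each taxon is a bounded range within its parent's
        # range, starting from the unit interval as the undifferentiated whole.
        from dataclasses import dataclass, field

        @dataclass
        class Taxon:
            name: str
            lo: float    # lower bound in the parent's characterizing dimension
            hi: float    # upper bound in the parent's characterizing dimension
            children: list = field(default_factory=list)

            def cut(self, name, lo, hi):
                """Make a distinction on this distinction: a bounded sub-range."""
                assert self.lo <= lo < hi <= self.hi, "species must fit inside genus"
                child = Taxon(name, lo, hi)
                self.children.append(child)
                return child

        animal = Taxon("animal", 0.0, 1.0)             # the whole unit interval
        frog = animal.cut("frog", 0.2, 0.3)            # a cut
        tree_frog = frog.cut("tree frog", 0.25, 0.27)  # a cut on a cut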

    Basic unit
    A bit is the basic unit of information in computing and digital communications. A bit can have only one of two values, and may therefore be physically implemented with a two-state device. These values are most commonly represented as either a 0 or 1. The term bit is a portmanteau of binary digit.

    The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention...

    In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.

    Storage
    A bit can be stored by [any] digital device or physical system that exists in either of two possible distinct states.

    These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.

    https://en.wikipedia.org/wiki/Bit

    image: http://sharedpurpose.net/groupgraphics/dia16.png

    image: http://sharedpurpose.net/groupgraphics/binarytao1.png

    Comments/themes suggested by these images -- issues to explore

    • Definition of continuum defined as "one unit" within/across the unit interval
    • Boundaries/edges of unit interval defined as cuts/distinctions
    • Mapping to 2-state logic. How does this ascending cascade of nested distinctions/levels map to a 2-state ontology? "Is present/is not present"? Or is there an "opposite" dimension somehow defined, with a centered origin or 0-point? Two directions -- positive and negative? (One possible mapping is sketched in code after this list.)
    • Or are we talking about a block or cell that either contains or does not contain a fundamental 2-state unit? How are these elements blocks or cells, when we are defining them essentially in one dimension? Or are we saying that these elements ARE defined in more than one dimension? That would be consistent with the notion that "an abstract cut has width (because it is defined as a multiple of some unit)"
    • How does all of this relate to "figure/ground" issues?
    • Continuum is "unknowable" because to "know" is to conceptualize and bound, and reality and the continuum are unbounded. The intervals of the continuum (the range from 0 to 1 in the unit interval) are either "pure certainty" or "pure uncertainty" (this might relate to the bit-based definition of "information").
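
    One possible mapping -- an assumption for exploration, not a settled answer to the bullet above: read each bit as a cut that keeps either the lower half (0) or the upper half (1) of the current bounded range, so n bits leave a residual range (residual uncertainty) of width 1/2^n.

        # Nested 2-state distinctions as successive halvings of the unit interval.
        def locate(bits):
            lo, hi = 0.0, 1.0              # the whole interval: pure uncertainty
            for b in bits:
                mid = (lo + hi) / 2        # the cut
                lo, hi = (lo, mid) if b == 0 else (mid, hi)
            return lo, hi                  # residual range = residual uncertainty

        print(locate([1, 0, 1]))  # (0.625, 0.75): three cuts, uncertainty 1/8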

    A Suite of Tools for creating Simple Discrete Dynamic Systems
    On figure/ground - very suggestive links: http://www.psych.utah.edu/stat/dynamic_systems/Content/examples/E42_Manual/E42_Manual.html

    General Notes

    • continuum -- lowest level -- unbroken span -- defined in unit interval -- unbounded across the span from 0 to 1. continuity IS uncertainty -- it's undefined, unknown and unknowable, not demarcated, not differentiated, and identifiable only as a bounded range "with a value between the two bounds" -- and this notion is recursive at every additional decimal place
    • all we need to know is: it's unbounded between the cuts at 0 and 1
    • cut
    • distinction
    • dimension
    • class
    • value
    • order / sequence / "linear"

    Higher level of abstraction -- seeing all these elements as "dimensions" -- dimensions of a model/abstraction

    • characteristic
    • attribute
    • feature
    • property
    • facet
    • aspect

    Is there a clear-cut distinction between "hardware" (the actual state of a machine) and "software" (the interpretation of that machine state, assigning "meaning" to it)?

    This border/boundary is a mysterious zone. Is it an "interpretive holon" -- i.e., seen from one point of view it is hardware, and from another, software? Looking down the hierarchy (from more to less abstract) it appears to be hardware -- and looking up the hierarchy it appears to be, and is interpreted as, software?

    A chain of constructive definitions defining composite aspects like the construction of a higher-level language from lower-level primitives

    so all these terms which we might use at "higher levels" are composite elements "assembled from cuts" -- and indeed, something in the structure (identify what) is isomorphic across levels -- such that it becomes reasonable to say that

    • a cut and a distinction are the same thing -- and we build everything from this element (but this immediately raises the "relativistic" (?) issue: what is the medium in which the cut is made? This is the primary ontological mystery (?) -- a "figure/ground" issue. The cut "is made in something" -- and that something is the lower-level ground (?) of composite cuts)
    • a cut/distinction IS a dimension
    • a dimension IS an ordered class
    • a taxon is an ordered class

    Are all of these elements "values in a record" -- like a value in a database row?

    We want to be able to build ordered classes -- at increasingly abstract levels -- bounded like a matrix row, with a known/specifiable order among the elements, with the order defined in one of the dimensions that characterize the elements of the row, and all the distinctions within the row defined as cuts

    a cut IS a dimension

    a dimension is a cut

    the fact that the row has "elements" implies that the elements themselves are constructed objects (with internal composite structure) -- constructed from bits through the same linear/recursive assembly process that characterizes absolutely everything else

    we build complex structures and then assign names to them (words)
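
    A toy sketch of the "values in a record" question above -- the field names and ranges are hypothetical, chosen only to illustrate an ordered class of dimensions whose values are cuts within bounded ranges:

        # An "ordered class" as a row: each field is a dimension (a bounded
        # range), and a record is one value/cut within each dimension.
        schema = [("height_m", 0.0, 3.0), ("mass_kg", 0.0, 500.0), ("age_yr", 0.0, 120.0)]

        def make_record(values):
            assert len(values) == len(schema)
            for v, (name, lo, hi) in zip(values, schema):
                assert lo <= v <= hi, f"{name} is outside its bounded range"
            return dict(zip((name for name, _, _ in schema), values))

        row = make_record([1.75, 70.0, 42.0])  # each value is a cut in its dimension
        print(row)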

  • 26.
    edited October 2015

    BIBLIOGRAPHY AND COMPLETE TEXTS


    GENERAL SCIENCE AND ENGINEERING BIBLIOGRAPHY

    http://originresearch.com/sd/biblio.cfm


    COGNITIVE SCIENCE

    Object Oriented Design with Applications
    Grady Booch - top-level programming theory, great review of hierarchy and classes, excellent/charming diagrams - complete book in Word.docx and PDF
    http://originresearch.com/docs/booch/index.cfm

    Categories and Concepts
    Smith and Medin - classic review and top-level survey text on basics of classification and categories
    http://originresearch.com/docs/SmithAndMedin.docx

    Women, Fire and Dangerous Things
    George Lakoff, family resemblance, prototypes, complete text (614 pages)
    http://originresearch.com/docs/WomenFireAndDangerousThings.pdf

    HOLISTIC PHILOSOPHY

    No Boundary
    Ken Wilber, duality and wholeness, distinctions, opposites, boundaries and intuition, mysticism, reality as undifferentiated (complete text)
    http://originresearch.com/docs/KenWilberNoBoundary.docx

    GIT HUB

    Programming Language Theory
    Comprehensive bibliography
    https://github.com/steshaw/plt

    INTUITIVE / HUMANISTIC / COGNITIVE / SOCIAL PSYCHOLOGY

    http://malloy.socialpsychology.org/

    http://www.psych.utah.edu/stat/dynamic_systems/Content/examples/E42_Manual/References.html

    1. Bateson, G. (2000). Steps to an ecology of mind. Chicago: University of Chicago Press. Originally published 1972.
    2. Bateson, G. (2002). Mind and nature: A necessary unity. Cresskill, N.J.: Hampton Press. Originally published by Bantam, 1979.
    3. Bateson, G., & Bateson, M. C. (1987). Angels fear: Towards an epistemology of the sacred. New York: Macmillan.
    4. Bostic-St. Clair, C. & Grinder, J. (2001). Whispering in the wind. Scotts Valley, CA: J & C Enterprises.
    5. DeLozier, J. & Grinder, J. (1987). Turtles all the way down. Grinder, DeLozier Associates. Bonny Doon, CA.
    6. Hoffman, D. D. (1998). Visual intelligence: How we create what we see. New York: W. W. Norton.
    7. Hofstadter, D. R. (1985). Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.
    8. Holland, J. H. (1998). Emergence: From chaos to order. Reading, MA: Addison-Wesley Publishing.
    9. Kauffman, S. A. (1993). The origins of order: Self-organization and selection in evolution. Oxford: Oxford University Press.
    10. Kauffman, S. A. (1995). At home in the universe: The search for the laws of self-organization and complexity. Oxford: Oxford University Press.
    11. Kauffman, S. A. (2000). Investigations. Oxford: Oxford University Press.
    12. Keller, E. F. (2002). Making sense of life. Cambridge, MA: Harvard University Press.
    13. Malloy, T. E. (1987). Curtain of Dawn. Unpublished manuscript.
    14. Malloy, T. E. (2001). Difference to Inference: Teaching logical and statistical reasoning through online interactivity. Behavior Research Methods Instruments & Computers, 33, 270-273.
    15. Malloy, T. E. & Jensen, G. C. (2001). Utah Virtual Lab: JAVA interactivity for teaching science and statistics online. Behavior Research Methods Instruments & Computers, 33, 282-286.
    16. Malloy, T. E., Bostic St Clair, C. & Grinder, J. (2005). Steps to an ecology of emergence. Cybernetics & Human Knowing, 12, 102-119.
    17. Malloy, T. E., Jensen, G. C., & Song, T. (2005). Mapping knowledge to Boolean dynamic systems in Bateson's epistemology. Nonlinear Dynamics, Psychology, and Life Sciences, 9, 37-60.
    18. Margulis, L. (1998). Symbiotic planet. New York: Basic Books.
    19. Margulis, L. & Fester, R. (Eds.) (1991?). Symbiosis as a Source of Evolutionary Innovation, Speciation and Morphogenesis. Boston: MIT Press.
    20. Marr, D. (1982). Vision. New York: Freeman & Co.
    21. McCulloch, W. S. (1965). The embodiment of mind. Cambridge, MA: The MIT Press.
    22. Palmer, S. E. (1999). Vision science: Photons to phenomenology. Cambridge, MA: The MIT Press.
    23. Varela, F. J., Thompson, E., & Rosch, E. (1993). The embodied mind. Cambridge, MA: MIT Press.
    24. Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media, Inc.
  • 27.
    edited October 2015

    image: http://alanclements.org/wpimages/wpad06096f_06.png

    The computer as an abstraction

    http://alanclements.org/1computerhierarchy.html

    At the core of Figure 1 are the atoms from which the computer is made. These were fabricated in the heart of a star a long time ago, and then refined and converted into the semiconductor used to fabricate a chip. The next layer out is the device layer (labeled transistors) that is concerned with the electronic switches that make up a digital circuit. The chip and circuit designer is interested in this layer.

    Above the transistor layer is the gates layer. Gates are the basic building blocks of any digital system and are the subject of Chapter 2. The next layer is the microarchitecture layer. This layer uses gates to implement the computer itself. This is the layer that we are concerned with in chapters 7 and 8. Operations at the microarchitecture level include actions such as moving data into a register or adding numbers in an arithmetic and logic unit.

    The microarchitecture implements the ISA or instruction set architecture in the next layer. It is this layer that defines the computer in terms of its instruction set and operational characteristics. This layer determines the type or family of a chip; for example, it distinguishes between an Intel IA32 processor and an ARM or MIPS processor. It is entirely possible for an IA32 chip and an ARM chip to have identical inner layers from atoms to microarchitecture, but different ISAs. The ISA layer is also described as the programmer’s view of the chip. Chapter 3 introduces the microarchitecture layer.

    The barrier between the ISA and assembly language layers is depicted as a bold line. Everything within the bold line is part of a chip’s hardware and cannot be changed by the programmer or user. However, some chips do now include programmable logic which makes it possible to alter the gate and microarchitecture layers.

    The first layer outside the barrier is the assembly language layer. This layer is the human representation of the instruction set architecture; for example, the binary sequence 0000101110100111 may be the machine code that is interpreted by the microarchitecture as ADD r0,r1,r2 (add two registers to a third).

    In many ways you could argue that the ISA and assembly language layers are the same; the only difference being that one is meaningful to humans because it is written in a textual notation. However, that is not the whole story. An assembler can make life simpler for the programmer by providing facilities such as macros and conditional assembly. The assembly language can often provide shortcuts by letting the programmer write instructions that don’t actually exist, and then translating them into real instructions. We look at this in Chapter 3 when we introduce the ARM.

    Above the assembly language layer is the operating system layer. This layer is different to all the other layers because it controls the system and its resources. It is not necessary to have an operating system in a dedicated computer that performs a single function. Moreover, you could argue that the operating system should be above the high-level language layer or even the application layer.

    The penultimate layer is the high-level language layer that interprets operations in a form such as IF x < 3 THEN y = y + 4. The high-level language is machine-independent, because a compiler is used to translate a program in high-level language into the appropriate assembly language (or machine code) for a target processor. That is, programmers can write high-level language code without worrying about the target processor.

    Finally, the outermost layer is the applications layer. This is the only layer of interest to the user. A programmer writes a program in a high-level language to carry out some application. In this example, the function is Ng1-f3 Qg5xg3+ which represents a pair of movements in chess. At this level the user sees no difference between a machine that is a dedicated chess player and one that is a computer running a program.

    Abstraction

    Like any other discipline, computer science has its own special vocabulary that distinguishes its followers from other mortals. One such key word in computer science is abstraction.

    The dictionary definition of abstraction defines it as “a general term or idea” or “the act of considering something as a general characteristic free from actual instances”. In computer science abstraction is concerned with separating meaning from implementation. In everyday life we could say that the word “go” was an abstraction because it hides the specific instance (go by walking, cycling, riding, flying).

    Abstraction is an important concept because it separates what we do from how we do it. This is an important concept because it helps us to build complex systems by decomposing them into subtasks or activities.
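
    The quoted "go" example translates directly into code. Here is a minimal sketch of my own (the class names are illustrative, not from Clements's text): callers depend on what "go" means, never on how it is carried out.

        # Abstraction separates meaning ("go") from implementation (walk, fly...).
        from abc import ABC, abstractmethod

        class Transport(ABC):
            @abstractmethod
            def go(self, destination: str) -> str: ...

        class Walking(Transport):
            def go(self, destination): return f"walking to {destination}"

        class Flying(Transport):
            def go(self, destination): return f"flying to {destination}"

        def travel(t: Transport, destination: str):
            # The caller sees only the abstraction, not the specific instance.
            print(t.go(destination))

        travel(Walking(), "the library")
        travel(Flying(), "Stockholm")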

    image: http://coronet.iicm.tugraz.at/sa/s5/img/abstraction_hardware2procedural.gif

    image: http://image.slidesharecdn.com/introductiontocomputerarchitecuure-130720111445-phpapp02/95/introduction-to-computer-architecuure-4-638.jpg

    image: http://image.slidesharecdn.com/apidays2014-thestateofwebapilanguages-141207170824-conversion-gate02/95/apidays-paris-2014-the-state-of-web-api-languages-5-638.jpg

    image: http://blog.malwarebytes.org/wp-content/uploads/2012/09/FlowDiagram2.png

  • 28.
    edited October 2015

    OBJECTIVE

    This series of comments began with some general propositions and principles, and has continued to explore the principle of abstraction as a general integrating principle or "primary dimension" for any and all conceptual structure and abstract symbolic representation. I am currently reviewing a variety of overlapping conversations and technical perspectives that seem relevant and illuminating, doing what I can to follow a guiding instinct that there is some basic or primal or "simple" and non-trivial mathematical or logical way that these elements can be combined. It seems clear that there is an extreme cacophony of methods, approaches, definition schematics and conventions that govern these conversations -- and there are no workable principles or conventions for "inter-operability" -- though I run into groups and projects that recognize this concern and are attempting to do something about it. Somehow, however, it seems to be true that there is something "very difficult" about this work. It's either seen as impossible or a fool's errand -- despite a growing recognition that this cacophony is a serious and perhaps dangerous problem.

    Years ago, hammering away on these themes, I did become convinced that the principle or concept of "dimensionality" is capable of serving as a "universal primitive constructive element" -- from which, in terms of which -- like some universal piece in a child's "erector set" or "tinkertoys" -- all other abstract data structures could be defined.

    I had built a composite "epistemological dictionary", composed of a glossary of terms generally taken from logic and epistemology and basic mathematics and engineering, and going over and over this system of definitions using an outline program on a personal computer, I became convinced that all terms in this general language could be defined in terms of one constructive element. Today, as I take a little time to revisit this issue, I am continuing to see this general pattern of "abstraction" across levels as being a primary integrator of high complexity and great micro-fine fluency. Given the basic constraint of computer representation in a format that can be precisely understood, I am supposing that "absolutely anything" that can be represented in "concepts" can be defined through a cascade of abstractions. Can the form of that cascade be generalized, and defined in one constructive primitive element?

    Abstraction is a process of categorization and logical generalization that builds "more abstract" categories on the basis of similarities among recognizably dissimilar elements, through a process that has been described as "measurement omission". Differences drop out as levels of abstraction increase.

    That process depends on a clear-cut method for defining similarity, and the guiding instinct here is that similarity or identity between elements can most usefully and succinctly be defined in terms of a dimensional comparison. So, to define or model "objects" in a common language, we need to be able to compare objects: are they "identical", and if so how is that defined? Or are they "similar" (and if they are similar, how are they also "different")?
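
    A toy sketch of "measurement omission" as dimensional comparison -- the objects and dimensions below are invented purely for illustration: the more abstract category keeps the dimensions on which the objects agree, and the differing measurements drop out.

        # Abstraction by "measurement omission": keep shared dimensions, drop
        # the measurements that differ between recognizably dissimilar objects.
        chair = {"has_legs": True, "height_m": 0.9, "color": "red"}
        table = {"has_legs": True, "height_m": 0.75, "color": "oak"}

        def abstract(a, b):
            shared = {k: v for k, v in a.items() if k in b and b[k] == v}
            omitted = sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
            return shared, omitted

        concept, dropped = abstract(chair, table)
        print(concept)  # {'has_legs': True} -- the more abstract category
        print(dropped)  # ['color', 'height_m'] -- the measurements omitted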

    So the large-scale objective is: building on the continuum as the absolute ground of the process, define the dimensionality of measurement to a known and testable number of decimal places, with a lowest-level uncertainty in the least significant digit -- and, in a cascade where every element is defined as a "distinction/dimension", construct "absolutely any" abstraction.

    If this process can be defined in a format that somehow "contains itself", and the entire framework can be shown to be a determinate implication of a one-dimensional linear construction project extended across levels of abstraction -- with "names" (words/labels) and "explicit decomposition cascades" that assign the meaning of those names -- that would seem to be the overarching objective of this project.

    Currently, I am exploring the micro-structure of this framework -- which can quickly become explosively complex and overwhelming. Yet, the micro-structure of this idea cannot be blurred, or it becomes meaningless. Perhaps the "solution", if solution be possible, is most likely to emerge as a kind of holistic gestalt or abstract unit -- perhaps a kind of "vision". The opportunity to compile all these elements here has been helpful. Perhaps more of this compilation is essential -- or perhaps it makes more sense to simply stand back and let "the unconscious idea processor" combine these elements in holistic ways. We'll see how that goes, and for now, many thanks for the bandwidth and the interesting context.

  • 29.
    edited October 2015

    PROGRAMMING LANGUAGES, INFORMATION STRUCTURES AND MACHINE ORGANIZATION

    COMMENTARY ON A CONCEPT OF PLATO

    by Peter Wegner

    "In performing a computation, we do not handle objects of the real world, but merely representations of objects.

    "We are like people who live in a cave and perceive objects only by the shadows which they cast on the walls of the cave. We use the information obtained from studying the form of these shadows to make inferences about the real world.

    "However, we are not merely passive observers of shadows cast by real objects. We modify reality and observe the new pattern of shadows cast by the new configuration of objects.

    "We go even further, forgetting altogether about the real objects that created the shadows, treating the pattern of shadows as physical objects, and studying how patterns of shadows can be transformed and manipulated.

    "Information structures are representations of real objects just like the shadows on the walls of a cave. The programmer studies how information structures can be transformed and manipulated and in so doing learns something about objects represented by the information structures.

    "However the real computer scientist falls in love with information structures and studies their properties not only for what they tell him about the real world but because he finds them beautiful."

    ~ Peter Wegner, Programming Languages, Information Structures, and Machine Organization, McGraw-Hill Computer Science Series, 1968
  • 30.
    edited October 2015

    ARTHUR KOESTLER ON HOLONS

    This is a potent statement on hierarchical structure by the author who coined the term "holon", and who was influenced and guided by the emerging systems theory of his day.

    Arthur Koestler on Holons

    Some general properties of self-regulating open hierarchic order (SOHO)

    http://www.panarchy.org/koestler/holon.1969.html



    Note

    The idea of the "holon" was introduced by Arthur Koestler in The Ghost in the Machine (1967) and was presented again at the Alpbach Symposium (1968) in a paper titled: Beyond Atomism and Holism - the concept of the holon.

    The "holon" represents a very interesting way to overcome the dichotomy between parts and wholes and to account for both the self-assertive and the integrative tendencies of an organism.

    The following text is the Appendix to Koestler's contribution to the Alpbach Symposium, whose proceedings were published in 1969 as a book edited by Arthur Koestler and J. R. Smythies with the title Beyond Reductionism.


     1. The holon

    1.1 The organism in its structural aspect is not an aggregation of elementary parts, and in its functional aspects not a chain of elementary units of behaviour.

    1.2 The organism is to be regarded as a multi-levelled hierarchy of semi-autonomous sub-wholes, branching into sub-wholes of a lower order, and so on. Sub-wholes on any level of the hierarchy are referred to as holons.

    1.3 Parts and wholes in an absolute sense do not exist in the domains of life. The concept of the holon is intended to reconcile the atomistic and holistic approaches.

    1.4 Biological holons are self-regulating open systems which display both the autonomous properties of wholes and the dependent properties of parts. This dichotomy is present on every level of every type of hierarchic organization, and is referred to as the "Janus phenomenon".

    1.5 More generally, the term "holon" may be applied to any stable biological or social sub-whole which displays rule-governed behaviour and/or structural Gestalt-constancy. Thus organelles and homologous organs are evolutionary holons; morphogenetic fields are ontogenetic holons; the ethologist's "fixed action-patterns" and the sub-routines of acquired skills are behavioural holons; phonemes, morphemes, words, phrases are linguistic holons; individuals, families, tribes, nations are social holons.

     2. Dissectibility

    2.1 Hierarchies are "dissectible" into their constituent branches, on which the holons form the nodes; the branching lines represent the channels of communication and control.

    2.2 The number of levels which a hierarchy comprises is a measure of its "depth", and the number of holons on any given level is called its "span" (Herbert Simon).

     3. Rules and strategies

    3.1 Functional holons are governed by fixed sets of rules and display more or less flexible strategies.

    3.2 The rules - referred to as the system's canon - determine its invariant properties, its structural configuration and/or functional pattern.

    3.3 While the canon defines the permissible steps in the holon's activity, the strategic selection of the actual step among permissible choices is guided by the contingencies of the environment.

    3.4 The canon determines the rules of the game, strategy decides the course of the game.

    3.5 The evolutionary process plays variations on a limited number of canonical themes. The constraints imposed by the evolutionary canon are illustrated by the phenomena of homology, homeoplasy, parallelism, convergence and the loi du balancement (Geoffroy de St. Hilaire).

    3.6 In ontogeny, the holons at successive levels represent successive stages in the development of tissues. At each step in the process of differentiation, the genetic canon imposes further constraints on the holon's developmental potentials, but it retains sufficient flexibility to follow one or another alternative developmental pathway, within the range of its competence, guided by the contingencies of the environment.

    3.7 Structurally, the mature organism is a hierarchy of parts within parts. Its "dissectibility" and the relative autonomy of its constituent holons are demonstrated by transplant surgery.

    3.8 Functionally, the behaviour of organisms is governed by "rules of the game" which account for its coherence, stability and specific pattern.

    3.9 Skills, whether inborn or acquired, are functional hierarchies, with sub-skills as holons, governed by sub-rules. 

    4. Integration and self-assertion

    4.1 Every holon has the dual tendency to preserve and assert its individuality as a quasi-autonomous whole; and to function as an integrated part of an (existing or evolving) larger whole. This polarity between the Self-Assertive (S-A) and Integrative (INT) tendencies is inherent in the concept of hierarchic order; and a universal characteristic of life.

    The S-A tendencies are the dynamic expression of the holon's wholeness, the INT tendencies of its partness.

    4.2 An analogous polarity is found in the interplay of cohesive and separative forces in stable inorganic systems, from atoms to galaxies.

    4.3 The most general manifestation of the INT tendencies is the reversal of the Second Law of Thermodynamics in open systems feeding on negative entropy (Erwin Schrödinger), and the evolutionary trend towards "spontaneously developing states of greater heterogeneity and complexity" (C. J. Herrick).

    4.4 Its specific manifestations on different levels range from the symbiosis of organelles and colonial animals, through the cohesive forces in herds and flocks, to the integrative bonds in insect states and Primate societies. The complementary manifestations of the S-A tendencies are competition, individualism, and the separative forces of tribalism, nationalism, etc.

    4.5 In ontogeny, the polarity is reflected in the docility and determination of growing tissues.

    4.6 In adult behaviour, the self-assertive tendency of functional holons is reflected in the stubbornness of instinct rituals (fixed action-patterns), of acquired habits (handwriting, spoken accent), and in the stereotyped routines of thought; the integrative tendency is reflected in flexible adaptations, improvisations, and creative acts which initiate new forms of behaviour.

    4.7 Under conditions of stress, the S-A tendency is manifested in the aggressive-defensive, adrenergic type of emotions, the INT tendency in the self-transcending (participatory, identificatory) type of emotions.

    4.8 In social behaviour, the canon of a social holon represents not only constraints imposed on its actions, but also embodies maxims of conduct, moral imperatives and systems of value.
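
    To put the structural vocabulary of sections 1 and 2 in computational terms -- holons as nodes, with "depth" and "span" as tree measures -- here is a minimal Prolog sketch. The facts and predicate names are hypothetical, not Koestler's:

        % A holon hierarchy as parent/child facts (toy data).
        holon_child(organism, organ).
        holon_child(organism, nervous_system).
        holon_child(organ,    tissue).
        holon_child(tissue,   cell).

        % depth(+H, -D): number of levels from H down to its deepest leaf.
        depth(H, 1) :- \+ holon_child(H, _).
        depth(H, D) :-
            findall(DC, (holon_child(H, C), depth(C, DC)), Ds),
            Ds \= [],
            max_list(Ds, M),
            D is M + 1.

        % span(+H, -S): number of holons directly below H.
        span(H, S) :- findall(C, holon_child(H, C), Cs), length(Cs, S).

        % ?- depth(organism, D), span(organism, S).
        % D = 4, S = 2.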

  • 31.

    Nowadays I typically program in Prolog. Though this was never the intention at the time, the language might as well have been made for the semantic web: its declarative use of functors ties in elegantly to triple-store knowledge elements.

        ?- functor(Subject, Predicate, Object).

    where functor can be a triple-store RDF identifier.

    If any of Subject, Predicate, or Object is bound, the query returns matches for the remaining unbound terms. This is concisely powerful for semantic searches and the like; it's like SPARQL or SQL come to life.
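
    As a self-contained sketch of that query pattern -- with the hypothetical predicate triple/3 standing in for an RDF identifier, and toy facts -- consider:

        % RDF-style triples as plain Prolog facts.
        triple(socrates, instance_of, human).
        triple(plato,    instance_of, human).
        triple(human,    subclass_of, mortal).

        % Leave any argument unbound and Prolog enumerates the matches,
        % much like a SPARQL basic graph pattern:
        % ?- triple(S, instance_of, human).
        % S = socrates ;
        % S = plato.

    (For real RDF data, SWI-Prolog's semweb library provides an rdf/3 predicate along these lines.)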

    But alas, most programmers are hackers and use whatever scripting language is popular.
