In other words, an object is a black box. So I'm thinking of what it would mean to rethink the basics of category theory in terms of black boxes.

*In science, computing, and engineering, a black box is a device, system or object which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is "opaque" (black). Almost anything might be referred to as a black box: a transistor, an algorithm, or the human brain.* (Wikipedia)

Apparently, there is quite a bit of math in systems theory related to black boxes, but the Wikipedia page does not reference category theory.

I think the black box concept is helpful in trying to study equivalences, equalities, identities, etc. It does away with the usual baggage of unspecified collections that I think discredits set theory and category theory. Instead of claiming that there is a set/class/category/collection of "sets" which includes {black cow, brown cow, white cow} and {mama pig, baby pig} and so on, every set would be a black box. If you needed to talk about its elements or subsets or components or features, then those would be black boxes, too.

Then I think it's much easier to focus on all the relevant questions. For example, in what senses are two black boxes equivalent or not?

- On the left side of this forum we see the grouping "Categories", which subsumes various hyperlinked items such as "All Categories", "Applied Category Theory Course", "Applied Category Theory Exercise", etc.
- In English dictionaries, we see examples like "the various *categories* of research" (Oxford), "taxpayers fall into one of several *categories*" (Merriam-Webster), "I wouldn't put this book in the same *category* as the author's first novel" (Wiktionary), etc.
- In other disciplines (less everyday), such as Aristotle's categories in philosophy, grammatical categories in linguistics, etc.

I wondered if such non-mathematical uses of "category" could be given some category-theoretic modeling. Considering we already have a category **Cat** of all categories, might there also be a category of all uses of the word "category"? Or is it simply part of **Cat**?

I'm guessing such a modeling, if possible, wouldn't involve too much technical complication - perhaps just *sets* - but it'd be interesting to have a CT perspective on the issue. :-)

Just to answer your question,

I'm not sure what question you are answering.

I think you are answering "Couldn't schemas be types, like how Servant has routes as types?"

The simply typed lambda calculus (STLC) forms a cartesian closed category, where the objects are types and the morphisms are terms. Similarly, the category of categories forms a cartesian closed category, where the objects are categories and the morphisms are functors. Similarly, AQL schemas form a cartesian closed category, where the objects are schemas (finitely presented categories) and the morphisms are schema mappings (morphisms of finitely presented categories). That is why it is natural to translate STLC types <-> categories and STLC terms <-> functors.
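As a tiny, concrete illustration of the types-as-objects, terms-as-morphisms view, here is a sketch in Haskell using the standard `Control.Category` class from `base`; the wrapper type `Fn` is my own name, introduced just to write the instance explicitly:

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- Objects are Haskell types; a morphism from a to b is a term of type a -> b.
newtype Fn a b = Fn { runFn :: a -> b }

instance Category Fn where
  id = Fn (\x -> x)                 -- identity morphism
  Fn g . Fn f = Fn (\x -> g (f x))  -- composition of morphisms

-- The cartesian closed structure shows up as pairs and function types:
-- products via (a, b), exponentials via (a -> b), with curry/uncurry
-- giving the hom-set isomorphism.
```

This is of course the degenerate "one big category" picture; the interesting part of the analogy above is that schemas play the role that individual types play here.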

I am sorry, are you sure schemas really are the same as finitely presented categories? I thought they looked like this (taken from your website):

```
schema Schools = literal : Ty {
  entities
    Person School Dept
  foreign_keys
    instituteOf : Person -> School
    deptOf : Person -> Dept
    biggestDept : School -> Dept
  attributes
    lastName : Person -> String
    schoolName : School -> String
    deptName : Dept -> String
}
```

Under the above analogy, here is a concrete example of a finitely presented category that I'm not sure how to represent as a Haskell type/type family/etc., but that I do know how to represent as a Coq type: the presentation with two objects, A and B, two generating arrows f : A -> B and g : B -> A, and the equation f.g = id.

I feel like this is a tractable problem.

*How do we represent a two object finitely presented category in Haskell?*
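Not an authoritative answer, but here is one minimal value-level sketch of that exact example in Haskell. I'm reading "f.g = id" diagrammatically as f ; g = id_A (an assumption on my part), under which every morphism reduces to one of five normal forms, so the quotient category can be enumerated and composition defined by applying the relation as a rewrite rule. All names below are mine:

```haskell
-- The finitely presented category with objects A, B, generators
-- f : A -> B and g : B -> A, and the relation f ; g = id_A.
-- Modulo the relation there are exactly five morphisms:
data Obj = A | B deriving (Eq, Show)
data Mor = IdA | IdB | F | G | GF  -- GF is the idempotent g ; f : B -> B
  deriving (Eq, Show)

src, tgt :: Mor -> Obj
src m = case m of { IdA -> A; IdB -> B; F -> A; G -> B; GF -> B }
tgt m = case m of { IdA -> A; IdB -> B; F -> B; G -> A; GF -> B }

-- Diagrammatic composition (first p, then q), partial because the
-- endpoints must match; the presentation's equation appears as the
-- right-hand sides below.
compose :: Mor -> Mor -> Maybe Mor
compose p q | tgt p /= src q = Nothing
compose IdA q  = Just q
compose IdB q  = Just q
compose p IdA  = Just p
compose p IdB  = Just p
compose F G    = Just IdA  -- the imposed relation f ; g = id_A
compose F GF   = Just F    -- f ; (g ; f) = (f ; g) ; f = f
compose G F    = Just GF
compose GF G   = Just G    -- (g ; f) ; g = g ; (f ; g) = g
compose GF GF  = Just GF   -- g ; f is idempotent
compose _ _    = Nothing
```

The awkward part, as you say, is making the type checker (rather than `Maybe`) enforce composability and the equation; that is where Coq's dependent types help and plain Haskell types struggle.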

Please check out Conal Elliott's paper *Compiling to Categories* (2017).

You appear to want a *Constrained Category* which Conal defines in §6.

It looks like you are heavily invested in Java, so I would understand if you weren't really interested in trying Haskell.

https://arxiv.org/abs/1809.00738

Despite being familiar with lens/optics usage in Haskell, I'm finding this paper very difficult, although this course has certainly helped. Profunctors feature heavily of course, as do ends/coends, which I have so far found impenetrable.

Cheers, Eldad

If you want to learn more, I recommend this:

- Bartosz Milewski, *Category Theory for Programmers*.

I will create one such thread per chapter. Later on, the answers can be composed by topic if need be. Let me know what you think, and especially where you struggled yourself! I am also open to changes in terminology. (Is "No pain, no gain" suitable? Are "struggle" and "resolution" as used below the correct terms?) My proposed format:

**Struggle:** < A struggle in the form of a question or a description >

**Resolution:** < Short informal solution, optionally with some formalism >

Some feedback I'd particularly welcome:

- **Errors.** I made various mistakes while putting these together. I think everything's correct now, but please let me know of any false statements.
- **Use of images.** I tried to size the images and the fonts so that it would look roughly the same as a post on the forums. Since images are totally inflexible, some problems may arise. Is the font size large enough to read clearly? Are the images unpleasant to use, e.g. on mobile? Is being locked into the white background I chose really irritating?
- **Narrowness.** All my intuition on this material so far comes from finite sets. If the things I say are incomplete, misleading, or incorrect in the wider world of orders, I appreciate being corrected. Similarly, much of the stuff I wrote may only be true for posets, not preorders, which again may mean that my focus was in the wrong places.

It's a hassle to update the images, but I do want to fix any problems if possible.

https://johncarlosbaez.wordpress.com/2014/10/03/network-theory-part-30/

https://arxiv.org/abs/1504.05625

Your thoughts?

Best

P.S.:

http://mathworld.wolfram.com/Category.html

https://www.encyclopediaofmath.org/index.php/Category

Some background: as Robert Figura described to us in his wonderful post, History of Databases, software developers/engineers/architects regularly distinguish between SQL and NoSQL databases. An SQL database (like MySQL, PostgreSQL, Oracle, SQL Server, etc.) is a relational database management system expressing its querying interface via the Structured Query Language (SQL), a well-defined standard. A NoSQL database (like MongoDB, DynamoDB, Cassandra, Redis) does not (necessarily) use SQL to work with data; rather than saying there's no well-defined standard, you might say there's a multitude of "standards" available for these database engines. Redis, for example, has a very well-defined Redis Serialization Protocol (RESP) for working with data -- it's just not christened with an ISO or ANSI specification for use among multiple database vendors, although other vendors could adopt it if they wished. Suffice it to say, the SQL ecosystem consists of many vendors implementing the same (or close to the same) standardized language, while the NoSQL ecosystem consists of many vendors implementing many languages for data storage and querying.

Here's my question: given a schema for an SQL-based RDBMS, what's a natural transformation to a schema for a document-based store? My instinct as a programmer would be to identify some kind of interchange format for records SQL<->NoSQL. One easy such format may be JavaScript Object Notation (JSON). A recent favorite of mine, protocol buffers, fits the need. Dare I mention the extensible markup language (XML)? **But do any of these languages, used as an interchange format from SQL to NoSQL, have a precise mathematical expression in the language of category theory?**
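Just to make the interchange instinct concrete (this doesn't answer the category-theoretic part): a minimal, hypothetical sketch in Haskell of flattening one SQL-style row into a JSON-ish document. Plain strings only, no real JSON library; `Row` and `rowToJson` are my own names, not from any standard:

```haskell
import Data.List (intercalate)

-- One row of a relational table: (column name, value) pairs.
type Row = [(String, String)]

-- Render a row as a flat JSON object -- the simplest possible
-- SQL -> NoSQL interchange shape. Real interchange would also
-- have to handle column types, NULLs, and foreign keys
-- (nesting vs. references), which is where it gets interesting.
rowToJson :: Row -> String
rowToJson row = "{" ++ intercalate ", " fields ++ "}"
  where fields = [show k ++ ": " ++ show v | (k, v) <- row]
```

The foreign-key question is exactly where the schema-as-category view should have something to say, since a foreign key is an arrow between entities.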

First a little bit on the history of relational databases. Edgar F. Codd coined the term "relational model" in his 1970 paper *"A Relational Model of Data for Large Shared Data Banks"* to capture what people were doing at the time, with a neat formalism to describe tables of data:

http://www.morganslibrary.net/files/codd-1970.pdf

By the way, here is a nice obituary about Codd, likely written by one of his colleagues, now part of IBM's "history of progress" thread:

https://www-03.ibm.com/ibm/history/exhibits/builders/builders_codd.html

Ooh, and also notice the other famous names in the "The Builders details" menu on the right hand side of that page! But I digress...

The references from Codd's 1970 paper mention some of the most prolific or important structured data storage systems of the era ("popular" might be a strangely unfitting term here). One notable example is Charles Bachman's "Integrated Data Store" (IDS), which he designed in the early 1960s and which earned Bachman his Turing Award in 1973:

http://amturing.acm.org/award_winners/bachman_1896680.cfm

Soon thereafter, Codd, together with Raymond F. Boyce and Donald D. Chamberlin, began working on the "Structured English Query Language" or "SEQUEL", the predecessor of SQL. The first working implementation came in 1979 from what would later be known as Oracle, and IBM strove to put what came out of its System R prototype to work everywhere. SQL became standardized in 1986.

But why did they need to invent SQL in the first place, what was the drive behind it?

First came data structures with O(log n) complexity for lookup and insertion (e.g. AVL or red-black trees). Then people came up with variants which aimed to maximize density, or to minimize the number of disk accesses or seeks (e.g. radix trees, B-trees). Now that there was efficient indexing, people could start building humongous databases; you merely had to combine querying the indices in a sensible way...

To achieve this, the data operator (a person) would write a query plan, for example in the CODASYL Data Manipulation Language, which told the database management system how to combine the various indices, hopefully in an efficient way. This was rather tedious, error-prone, and sometimes very hard to debug or extend. See here for a quick impression:

https://en.wikipedia.org/wiki/Navigational_Database

Or read this nice long interview with Donald D. Chamberlin (or just peek at page 8):

https://conservancy.umn.edu/handle/11299/107215

If you're as ancient as I am, you might remember dBase for CP/M, Apple II, or DOS. Or maybe FoxBase+. Or Clipper, which had query plans with a programming language around them.

However, the solution to this mess was to keep the technology and merely put a better language on top. SQL is a compiled language that creates query plans from rather concise descriptions. Modern databases like MySQL or PostgreSQL will still let you look at the generated query plans!

But nowadays we want even more from a database management system. I won't get into distributed, geographical (geometric), or graph databases; I mean transactional qualities: we want to make sure that we never lose any data! The relevant keywords here are *atomicity*, *consistency*, *isolation*, and *durability* (ACID).

https://en.wikipedia.org/wiki/ACID

Getting this right is immensely difficult, especially in light of the complexity of filesystem drivers (heck, scheduling and other properties of the operating system), as well as the underlying hardware. All of the NoSQL systems might still be too young to excel at this (sorry for the pun), and many of the smaller SQL implementations might also come with compromises.

By the way, this whole problem area is close to the issue of distributed databases: how do you build a perfectly reliable one? Spoiler: you can't; see the CAP theorem:

https://en.wikipedia.org/wiki/Cap_theorem

This is what I have for you today. One could easily fill books with this stuff, and I'm certain some authors did. I would love to hear about good books, or more on the history of modern RDBMS!

Some more links:

https://en.wikipedia.org/wiki/SQL

https://en.wikipedia.org/wiki/Relational_database_management_system

https://en.wikipedia.org/wiki/Integrated_Data_Store

https://en.wikipedia.org/wiki/Edgar_F._Codd

https://en.wikipedia.org/wiki/Charles_Bachman

Bachman is also known for Bachman diagrams:

https://en.wikipedia.org/wiki/Bachman_diagram

https://en.wikipedia.org/wiki/CODASYL

Also, with the other discussion group on storytelling, I'm curious how the linguistic units inside a story relate to, or even build up, a story.

Plus, linguistics is just really neat.

What would be some good starting reading material?

make a 'Categories for the Working Storyteller' thread

That's a splendid idea, Keith! For the last few months I have pondered how to transform a set of reaction networks into fairy tales. I'm quite excited about it, so here, let me go ahead and open that thread.

There are already some comments about this in another thread (which I only just noticed), so let me copy this link to a paper Keith dug up:

'Generative Story Worlds as Linear Logic Programs' https://www.cs.cmu.edu/~cmartens/int7.pdf

The title certainly sounds interesting!

Enjoy!

Yesterday I was recalling the delta-epsilon definition of a limit L of a function f(x) at a point x=c:

For every ε>0, there exists a δ>0 such that, for all x ∈ D, if 0 < |x-c| < δ, then |f(x)-L| < ε.

What struck me is that we can think of δ as a function δ(ε), in which case it becomes apparent that δ(ε) is constructed in the opposite direction from f(x). For simplicity, let us consider the case where c=0 and L=0. Let X be the preorder of real numbers on (0,1], and Y likewise. Define functors F:X→Y and δ:Y→X. Then F and δ are similar to adjoint functors because:

If |x|<δ(ε), then |f(x)| < ε.

If we had: |x|<δ(ε) iff |f(x)| < ε

then I think F and δ would be adjoint functors because, in a preorder, there is at most a single morphism ≤, so then we could say:

$$ \textrm{hom}_X(\delta (\varepsilon),x) \cong \textrm{hom}_Y(\varepsilon,F(x)) $$

So I suppose δ(ε) would be the adjoint of F if it were the optimal δ. I'm curious what that would mean. But it also seems that, typically in analysis, a suboptimal δ(ε) works just fine. I wonder why this example isn't discussed more often, especially given that category theory seems distant to many analysts. Thank you for correcting any misunderstandings I may have.
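For one concrete instance of the "optimal δ" (my own worked example, an assumption rather than anything standard): take f(x) = x² with c = 0 and L = 0 as above, so F(x) = x² on (0,1]. The largest δ that works is δ(ε) = √ε, and for that choice the implication becomes an equivalence:

$$ |x| < \sqrt{\varepsilon} \iff |x|^2 < \varepsilon \qquad \text{for } x, \varepsilon \in (0,1], $$

which is exactly the "iff" needed for the hom-set bijection, so this optimal δ really does give an adjunction (a Galois connection between the two preorders).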

In writing these proofs, I found it quite convenient to have given names to the 1-hop, 2-hop etc. formulas in my earlier posts.

References to earlier posts: 1-hop constraint, 2-hop inequality, 3-hop equivalence, 4-hop fixed point.

Links to John's lectures: lecture 6

In the previous post, I went through these topics from a generic point of view. This post uses **Cost** as a concrete example.

In the next post, I revisit these topics using **Cost** as a concrete example.

This post has little to do with the Chapter 2 material. I wanted to verify that it's always possible to display posets in linear order, for use when drawing product tables.

Links to John's lectures: lecture 27

Links to John's lectures: lecture 16
