Introduction: Robert Law

I'm a neuroscience postdoc specializing in dynamical systems on networks and in biophysical models of the magnetic fields generated by brain activity. I think a useful, quantitative understanding of the brain will require a combination of abstract and concrete approaches, with insights to be integrated all the way from semiring theory to the structure of ion channels.

I'm familiar with David Spivak's work, had the pleasure of bothering him as I was finishing up my dissertation, and think this latest book is far and away the most important introduction to the topic I've seen thus far. Starting from (pre)orders is, I think, an optimal approach, and jumping almost immediately into Galois connections, which may be nice models for communication in general, brings things to some depth right away.
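
To make that concrete, here is a minimal sketch of the defining condition of a monotone Galois connection, using the standard inclusion/floor adjunction between the posets \((\mathbb{Z}, \le)\) and \((\mathbb{R}, \le)\). This is just my own illustration of the general notion, not anything specific from the book, and the function names are made up:

```python
import itertools
import math

# A minimal sketch: the inclusion incl : Z -> R and floor : R -> Z form a
# Galois connection between the posets (Z, <=) and (R, <=), meaning
#     incl(n) <= x   if and only if   n <= floor(x).
# We spot-check the adjunction condition on a finite sample grid.

def incl(n: int) -> float:
    return float(n)

def floor(x: float) -> int:
    return math.floor(x)

ns = range(-5, 6)
xs = [k / 4 for k in range(-20, 21)]   # quarter-integer sample of the reals
assert all((incl(n) <= x) == (n <= floor(x))
           for n, x in itertools.product(ns, xs))
print("adjunction condition holds on the whole sample grid")
```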

Beyond that, I have no idea what's going on. And although I have pressing time commitments at the moment (namely, finishing a manuscript and, hopefully, finding a job), I'm going to try to follow along as best I can. I'll likely be late to the party for most if not all topics, so hopefully some of you revisit old posts once in a while!

Comments

  • 1.

    Here is an abstract approach using category theory: http://www.ellerman.org/on-adjoint-and-brain-functors/

  • 2.

    Nice! I think this is very close to what I had in mind.

    The remaining pieces of the puzzle involve mapping the high-level descriptions, e.g. "internal speech", to brain dynamics. Given studies, particularly from Wolf Singer's lab, suggesting that synchrony may resolve the so-called "binding problem" for perceptual objects, my favorite description of brain dynamics involves the complete lattice of phase-space subspaces (synchrony or "polydiagonal" subspaces) induced by network symmetry. The dual lattice of subspace complements is also worth exploring. I cover this in my 2014 dissertation, although I'm somewhat embarrassed about some terminological mistakes there.
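
    For anyone curious what these partitions look like computationally, here is a rough sketch of my own (not from the book or the thread): iterated color refinement to the coarsest equitable partition of a directed graph. Under the usual assumptions (one cell type, one edge type), this is the coarsest balanced coloring, and its polydiagonal is a synchrony subspace:

    ```python
    from collections import Counter

    # Iterated refinement to the coarsest equitable partition of a directed
    # graph.  For a network with one cell type and one edge type, this is the
    # coarsest balanced coloring; its polydiagonal {x : x_u = x_v whenever
    # u, v share a color} is a synchrony subspace.
    # in_nbrs[v] lists the tails u of the edges u -> v.

    def coarsest_equitable_partition(in_nbrs):
        nodes = sorted(in_nbrs)
        color = {v: 0 for v in nodes}          # start from the trivial coloring
        while True:
            # signature: current color plus multiset of in-neighbor colors
            sig = {v: (color[v],
                       tuple(sorted(Counter(color[u] for u in in_nbrs[v]).items())))
                   for v in nodes}
            relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new_color = {v: relabel[sig[v]] for v in nodes}
            if new_color == color:             # stable, hence equitable
                return color
            color = new_color

    # Cell 0 receives two inputs; cells 1 and 2 are interchangeable; cell 3 is not.
    in_nbrs = {0: [1, 2], 1: [0], 2: [0], 3: [1]}
    print(coarsest_equitable_partition(in_nbrs))
    # cells 1 and 2 end up with the same color; 0 and 3 each get their own
    ```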

  • 3.

    Having just read your introduction, Professor Ellerman, I thought it worth mentioning that the synchrony lattice describes the equitable partition logic for a given directed graph (if I understand correctly what you mean by logic). It's a constrained version of the partition logic on a set.

    I'd actually spent some time working on a definition of information for the brain's dynamical states from this perspective, but in terms of the number of lattice elements rather than the content of each partition. Might there be a natural way to combine the two?

    Simply summing the logical information from each partition seems too easy; here is the kind of naive sum I mean. The coarsening maps are fibrations of graphs, so perhaps the combined notion should be coherent with that structure.
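
    A toy illustration of that naive proposal (my own framing, using "dit" in the sense of Ellerman's partition logic, where the distinctions dit(p) of a partition p of a set U are the ordered pairs of elements lying in different blocks):

    ```python
    from itertools import product

    # Naive proposal: add up the normalized distinction counts |dit(p)| / |U|^2
    # over every partition p in the lattice.

    def dit(blocks, universe):
        block_of = {x: i for i, b in enumerate(blocks) for x in b}
        return {(x, y) for x, y in product(universe, universe)
                if block_of[x] != block_of[y]}

    U = [0, 1, 2, 3]
    lattice = [                      # a small chain of coarsenings, say
        [[0], [1], [2], [3]],        # discrete partition: everything distinguished
        [[0], [1, 2], [3]],          # an intermediate partition
        [[0, 1, 2, 3]],              # trivial partition: full synchrony
    ]
    for p in lattice:
        print(p, len(dit(p, U)) / len(U) ** 2)
    print("naive sum:", sum(len(dit(p, U)) / len(U) ** 2 for p in lattice))
    ```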

    By the way, I think we're academically related. I studied under Frank Guenther at BU, and he was Steve Grossberg's graduate student. Steve was also Rota's student, as I recall. Do you two know each other?

  • 4.

    Can you send me the material mentioned on directed graphs (david@ellerman.org)? I got my math PhD at BU, but under Rohit Parikh in '71, before Grossberg or Guenther came to BU. I later worked with Rota, who was my real mentor in mathematics, and we wrote a joint paper in the '80s. His untimely death in '99 got me working on some of his unfinished work on partitions, and the work on partition logic and logical information theory came out of that.

  • 5.

    Welcome, Robert! You may enjoy my friend Kathryn Hess' talk at Applied Category Theory 2018:

    • Kathryn Hess, Towards a categorical approach to neuroscience, https://youtu.be/1TyNInHXLcE

    She works on neuroscience and algebraic topology. She mentions some attempts to apply category theory to neuroscience: some unsuccessful, and none very successful. This talk might be more optimistic:

    • Kathryn Hess, Topology meets neuroscience, https://youtu.be/vD27zKxoio0

    I think there's a lot of room for applying sophisticated mathematics to neuroscience, but I've resolved to focus on much simpler things, at least for the next decade. I don't feel I understand how a leaf works, much less a brain. Luckily, some of the new math we need to develop to understand leaves is bound to be useful for understanding brains.

  • 6.

    David, apologies for not getting back to you sooner; I'm going to send a few things in short order. Please just give me a week or so to compile sources and write a précis.

    Also, am I correct that the logical entropy is \(1 - \sum_i r_i^2\), where \(r_i\) is the measure of each individual part?
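
    Concretely, here is the quick numeric check I did, using the counting measure on a toy partition (my own sketch; if I've understood the definition, the formula should equal the fraction of ordered pairs that land in different blocks):

    ```python
    from itertools import product

    # With the counting measure, take r_B = |B| / |U| for each block B of a
    # partition of U.  Then 1 - sum(r_B^2) should equal the normalized
    # distinction count: the fraction of ordered pairs (x, y) in different blocks.

    def logical_entropy(blocks):
        n = sum(len(b) for b in blocks)
        return 1 - sum((len(b) / n) ** 2 for b in blocks)

    blocks = [[0], [1, 2], [3]]
    block_of = {x: i for i, b in enumerate(blocks) for x in b}
    U = sorted(block_of)
    dit_fraction = sum(block_of[x] != block_of[y]
                       for x, y in product(U, U)) / len(U) ** 2
    print(logical_entropy(blocks), dit_fraction)   # both print 0.625
    ```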
