
I learned some truly amazing things at the workshop on Biological and Bio-inspired Information Theory.

The first big thing was Naftali Tishby's program to develop a principled approach to biology and intelligence using a combination of

- partially observed Markov decision processes,
- Bayesian networks,
- rate-distortion theory (a branch of information theory), and
- Bellman's equation (from control theory).

For some details, read my blog article. Very briefly, the idea is that

organisms store and process information about their past to make decisions to achieve goals in the future, and optimizing this process must take into account the price for information storage and processing.

This idea is sort of obvious... and oversimplified: reality is more complex. But the important part is that Tishby and his colleagues have the mathematical tools to study this idea quantitatively by writing software and proving theorems! It's not just chat, it's math.
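To give a feel for how the tradeoff can be made quantitative, here is the information bottleneck functional from Tishby's earlier work with Pereira and Bialek (a standard formulation, offered as an illustration rather than a summary of the workshop talk). One seeks a compressed representation $T$ of the past $X$ that stays informative about the future $Y$:

```latex
\min_{p(t \mid x)} \; \Big[\, I(X;T) \;-\; \beta \, I(T;Y) \,\Big]
```

Here $I(\cdot\,;\cdot)$ is mutual information, and the multiplier $\beta$ sets the price of storing and processing information relative to its predictive value.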

I think there's a huge amount left to be done here - and that's fine: this is the kind of ambitious synthesis I want to be involved in! If "green mathematics" ever succeeds, it will have to include ideas like this (and much more).

So, I'm going to steer my research in this direction. It's not hard, because I'm already working on control theory, Markov processes and information theory, trying to fit them together in a unified whole.

That's the *first* big thing.

## Comments

The *second* big thing is that [Susanne Still](http://www2.hawaii.edu/~sstill/) and a grad student of hers have written software that uses similar information-theoretic ideas to make predictions of time series given past data and to estimate *the optimal model given constraints on how much information the model can use*. Even better, my student Blake Pollard is working with her to apply this software to El Niño data! He will start with a simple demonstration just to help her write a paper on this subject. But we may expand this to a larger project: to study El Niño and other climate phenomena using ideas from information theory!
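To make the flavor of these information-theoretic ideas concrete, here is a minimal sketch of the classic iterative information bottleneck algorithm of Tishby, Pereira and Bialek. This is not Susanne Still's actual software; every name and parameter below is illustrative. It compresses a variable $X$ into clusters $T$ while preserving as much information as possible about a target $Y$:

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Iterative information bottleneck (Tishby-Pereira-Bialek style sketch).

    p_xy: joint distribution over (x, y), shape (n_x, n_y), summing to 1.
    Returns p(t|x): soft assignments of x-values to n_clusters clusters,
    trading compression I(X;T) against prediction I(T;Y) via beta.
    """
    rng = np.random.default_rng(seed)
    eps = 1e-12
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                    # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]         # conditional p(y|x)

    # random soft initialization of p(t|x), rows normalized
    p_t_given_x = rng.random((n_x, n_clusters))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                           # cluster marginal p(t)
        # p(y|t) = sum_x p(y|x) p(t|x) p(x) / p(t)
        p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t /= (p_t[:, None] + eps)
        # KL( p(y|x) || p(y|t) ) for every (x, t) pair
        kl = np.einsum('xy,xty->xt',
                       p_y_given_x,
                       np.log((p_y_given_x[:, None, :] + eps)
                              / (p_y_given_t[None, :, :] + eps)))
        # self-consistent update: p(t|x) proportional to p(t) exp(-beta * KL)
        p_t_given_x = p_t[None, :] * np.exp(-beta * kl)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x
```

Large `beta` favors keeping information about `y` (near-deterministic clusters); small `beta` favors compression. Her recursive time-series version is more sophisticated, but the self-consistent update above is the basic engine.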

This is very nice because again it's compatible with a lot of things I already want to do, and things the Azimuth Code Project is starting to do... and it begins to *connect* some of these ideas. Here is some reading material suggested by Susanne:

- Naftali Tishby, Fernando C. Pereira and William Bialek, [The information bottleneck method](http://www.cnbc.cmu.edu/cns/papers/Tishby-NC-1999.pdf).

- Susanne Still, James P. Crutchfield and Christopher J. Ellison, [Optimal causal inference: estimating stored information and approximating causal architecture](http://arxiv.org/abs/0708.1580).

- Susanne Still and William Bialek, [How many clusters? An information-theoretic perspective](http://www2.hawaii.edu/~sstill/HowManyClusters.pdf).

- Susanne Still, [Information theoretic approach to interactive learning](http://www2.hawaii.edu/~sstill/Still_IL2009_EPL.pdf).

- Susanne Still, [Information bottleneck approach to predictive inference](http://www.mdpi.com/1099-4300/16/2/968). (Part of a [special issue of *Entropy* on the information bottleneck method](http://www.mdpi.com/journal/entropy/special_issues/bottleneck-method).)

- Susanne Still, David A. Sivak, Anthony J. Bell and Gavin E. Crooks, [The thermodynamics of prediction](http://arxiv.org/abs/1203.3271).

If you think you're getting bombarded with too many references, don't worry! I will read a bunch of this stuff and explain it on the blog. You just need to read the blog articles. (I know, that's already hard enough!)

Anyway, I'm very excited.


Hi John

Is the software Blake Pollard is using the "Sir Isaac" system that Nemenman talks about in [Automated adaptive inference of coarse-grained dynamical models in systems biology](http://arxiv.org/abs/1404.6283) and the associated [presentation](http://www.nemenmanlab.org/~ilya/images/9/92/DynamicDays_2014.pdf)?

Or does Susanne Still have a different system?

Is either system publicly available? I have not been able to find any further information on either Sir Isaac or any system developed by Susanne Still.

cheers Daniel


I found [Sir Isaac](https://github.com/EmoryUniversityTheoreticalBiophysics/SirIsaac). Is Susanne Still using it or something else?


I'll ask Blake Pollard what system Susanne Still is using. I don't know its name. I think it's something new. She's written a theoretical paper on it, but she wants to include an example of how it works.


She doesn't use any particular software package. The code that implements her method is written in C++, and I got in touch with her to get it. She calls the method 'optimal causal inference'. It uses a recursive information bottleneck algorithm; the information bottleneck approach was introduced by Naftali Tishby.


Thanks John


... and thanks Blake.
