Alexander the Great conquered large portions of Europe and Asia. At the time, being conquered was a disaster, but centuries later it provided languages with a unified basis. So negative entropy has a component of duration. This can be seen as moving to a global optimum.

I am interested in the levels of the Ackermann function for modeling non-local optima.
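For anyone who hasn't seen it, the function's definition is short even though its growth is explosive. Here is a minimal Python sketch (purely illustrative, not tied to any particular optimization model):

```python
def ackermann(m, n):
    """The two-argument Ackermann function (naive recursion; illustrative only).
    Each level m grows qualitatively faster than the one below it:
    m=1 is roughly addition, m=2 multiplication, m=3 exponentiation, ..."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print([ackermann(1, n) for n in range(4)])  # [2, 3, 4, 5]
print([ackermann(2, n) for n in range(4)])  # [3, 5, 7, 9]
print([ackermann(3, n) for n in range(4)])  # [5, 13, 29, 61]
```

Levels 0 through 3 correspond (up to small offsets) to successor, addition, multiplication and exponentiation; level 4 is already far beyond what naive recursion can compute for small inputs.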

Do folks have other examples demonstrating negative entropy and its principles?

https://phys.org/news/2021-08-definition-life-implications-cybernetic.html

This is a post I've been putting off for a long time until I was sure I was ready. I am the "lead developer" of a thing called compositional game theory (CGT). It's an approach to game theory based on category theory, but we are now at the point where you don't need to know that anymore: it's an approach to game theory that has certain specific benefits over the traditional approach.

I would like to start a conversation about "using my powers for good". I am hoping particularly that it is possible to model microeconomic aspects of climate science. This seems to be a very small field and I'm not really hopeful that anyone on Azimuth will have the right background, but it's worth a shot. The kind of thing I'm imagining (possibly completely wrongly) is to create models that will suggest when a technically-feasible solution is not socially feasible. Social dilemmas and tragedies of the commons are at the heart of the climate crisis, and modelling instances of them is in scope.

I have a software tool (https://github.com/jules-hedges/open-games-hs) that is designed to be an assistant for game-theoretic modelling. This I can't emphasise enough: A human with expertise in game-theoretic modelling is the most important thing, CGT is merely an assistant. (Right now the tool also probably can't be used without me being in the loop, but that's not an inherent thing.)

To give an idea of what sort of things CGT can do, my two current ongoing research collaborations are: (1) a social science project modelling examples of institutional governance, and (2) a cryptoeconomics project modelling an attack against a protocol using bribes. On a technical level the best fit is for Bayesian games, which are finite-horizon, have common-knowledge priors and private knowledge, with agents who do Bayesian updating.

A lot of the (believed) practical benefits of CGT come from the fact that the model is code (in a high level language designed specifically for expressing games) and thus the model can be structured according to existing wisdom for structuring code. Really stress-testing this claim is an ongoing research project. My tool does equilibrium-checking for all games (the technical term is "model checker"), and we've had some success doing other things by looping an equilibrium check over a parameter space. It makes no attempt to be an equilibrium solver, that is left for the human.
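To make the "looping an equilibrium check over a parameter space" pattern concrete: this is not the open-games-hs tool itself (which is written in Haskell), just a hypothetical Python sketch of the same idea for an ordinary parametrized 2x2 game. The game and parameter range are made up for illustration:

```python
import itertools

def is_pure_nash(payoffs, strategies, profile):
    """Check whether a pure strategy profile is a Nash equilibrium of a
    two-player game given as a dict {(s1, s2): (u1, u2)}."""
    s1, s2 = profile
    u1, u2 = payoffs[(s1, s2)]
    # Neither player may have a profitable unilateral deviation.
    no_dev_1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies[0])
    no_dev_2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies[1])
    return no_dev_1 and no_dev_2

def prisoners_dilemma(temptation):
    """Hypothetical parametrized game: prisoner's-dilemma payoffs where
    the temptation payoff is the parameter being swept."""
    return {("C", "C"): (3, 3), ("C", "D"): (0, temptation),
            ("D", "C"): (temptation, 0), ("D", "D"): (1, 1)}

strategies = (["C", "D"], ["C", "D"])

# Loop an equilibrium check over a parameter space:
for t in [2, 3, 4, 5]:
    game = prisoners_dilemma(t)
    equilibria = [p for p in itertools.product(*strategies)
                  if is_pure_nash(game, strategies, p)]
    print(t, equilibria)
```

The point of the pattern is that the checker stays dumb and fast, and the sweep tells you where in parameter space the equilibrium structure changes (here, where cooperation stops being an equilibrium).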

This is not me trying to push my pet project (I do that elsewhere) but me trying to find a niche where I can do some genuine good, even if small. If you are a microeconomist (or a social scientist who uses applied game theory) and share the goals of Azimuth, I would like to hear from you, even if it's just for some discussion.

This is an independent community-based multiweb wiki for applied category theory, hosted and endorsed by the Azimuth Project.

Note the "blog" web there. That could be a good place to develop tutorial literature, for things like the Petri net study group at the forum.

Autonomous versus non-autonomous equations. Lotka-Volterra belongs to the former category: it is a time-invariant system. Is that a stretch for describing real systems, where populations depend on the environment? What good will that behavioral description do when the prey species is susceptible to, e.g., drought cycles?

So it then becomes a forced system, and the focus on an attractor orbit goes out the window. I actually get annoyed by how much people cling to the notion that internal eigenvalues have to be the solution to everything. In many practical cases it's the forced response, not the natural response, that governs the evolution. For this predator-prey system, it appears that ENSO climate cycles, and not the internal L-V dynamics, drive the cycles. Moreover, even ENSO isn't an internally natural response system, as it is evidently forced by external tidal cycles. Perhaps that's why the scientists are all mystified by this, as they may be deeply attached to the mathematical idealism of eigenvalue-based solutions. But then even this is odd, because climate change and AGW are widely agreed to be a forced response, driven by adding CO2 to the atmosphere. So I can't generalize either.
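To illustrate the autonomous/forced distinction, here is a hedged sketch: the classic Lotka-Volterra system with an optional periodic modulation of the prey growth rate standing in for an external cycle like drought or ENSO. All parameter values are made up for illustration, not fitted to any real population:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a, b, c, d, f_amp, f_period):
    """Lotka-Volterra with an optional periodic forcing of the prey
    growth rate. Parameters are illustrative, not fitted data."""
    x, y = z
    # Non-autonomous ingredient: a time-varying growth rate a(t).
    # With f_amp = 0 this reduces to the classic autonomous system.
    a_t = a * (1.0 + f_amp * np.sin(2.0 * np.pi * t / f_period))
    return [a_t * x - b * x * y, -c * y + d * x * y]

params = (1.0, 0.1, 1.5, 0.075)

# Autonomous case (f_amp = 0): the familiar closed orbits.
sol_free = solve_ivp(lotka_volterra, (0.0, 100.0), [10.0, 5.0],
                     args=params + (0.0, 12.0), rtol=1e-8)

# Forced case: the long-run behavior is governed by the external
# cycle, not by the internal eigenvalues alone.
sol_forced = solve_ivp(lotka_volterra, (0.0, 100.0), [10.0, 5.0],
                       args=params + (0.5, 12.0), rtol=1e-8)

print(sol_free.success, sol_forced.success, sol_free.y.shape[0])
```

Plotting the two trajectories side by side is the quickest way to see the attractor-orbit picture break down once forcing is switched on.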

I can discuss this aspect of natural vs forced response all day.

I received some very different answers. I am curious if anybody here might post a better answer before I choose one. I think the question I posed is helpful for learning about categories. Certainly, it is helpful for me.

I have been grappling with the realization that in category theory, a category like Grp, the category of groups, may have many objects that are the trivial group. So then I wonder, what are the limits, if any, on what such objects may be? One way to think about that is to ask, as I did, how do we know whether two objects in a category are the same or not.

The conclusion that I am drawing is that set theory and category theory are different ways of thinking. In set theory, we can appeal to the axiom of extensionality, which says that singletons {a} and {b} are the same iff a=b. But in the category Set, there is no concept of element. There are simply objects (sets) and arrows (set functions). There are no "singletons" as such, but we speak instead of the terminal objects. When are two objects the same? When they have the same identity morphism. Thus, in category theory, the identity morphism resolves the question of equality in the same way that, in set theory, the axiom of extensionality resolves it for sets.

Can I have a set Z that consists of a horse, cow and pig? In set theory, I cannot, because sets are built up from the empty set. We can't even have bijections with the set Z because it is not a mathematical object. Whereas in category theory, I believe that we can have such a set Z because the horse, cow and pig are irrelevant. What matters is that we have an object Z and we have the required arrows to all of the other objects.

There can be many set theories, which may or may not lead to the same category Set. But the category Set doesn't have to be based on any set theory. It simply has objects and arrows which are related as we would expect sets to be related. So it is about formalizing and studying our expectations, not about any particular implementation. A set in the category Set doesn't actually have to have an element. That's not what makes it a set. What makes it a set is that it fulfills the expectations for a set as regards the composition of arrows. Which, in the case of a terminal object, means that there is a unique arrow to it from every set, and that from it to every set S there is a set of arrows whose cardinality matches that of S. And then those sets of arrows are also objects in Set, and so on.
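In symbols, with 1 denoting a terminal object of Set, those two expectations read:

```latex
% A terminal object receives exactly one arrow from every object:
\forall S \in \mathrm{Ob}(\mathbf{Set}): \quad |\mathrm{Hom}(S, 1)| = 1
% and the arrows out of it play the role of elements:
\forall S \in \mathrm{Ob}(\mathbf{Set}): \quad |\mathrm{Hom}(1, S)| = |S|
```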

From my point of view, there are practically no limits on what might possibly be a set, and so in that sense, the category Set is not completely defined. But that seems by design.

So I'm inclined to select the answer that says to look at the identity morphism. But I may be in error. So I thought I would check here. Perhaps this is interesting. I had thought about related quandaries in my post here, Question: Internal structure vs. External relationships (the terminal objects of Cat and Set).

(But: is it different, for proving something more substantial than an exercise, like the Yoneda lemma? Or is that proof just basically a more complex, but still inevitable, computation?)

In any case, the general trend towards abstraction -- which is paramount in category theory -- definitely does involve concentrating complexity in the definitions. It looks like once the definitions are made at their most appropriate level of abstraction, some results which would otherwise look hard to show follow almost automatically. If you can do the work of understanding what abstraction X means, and then of showing that object Y is an X, then all the things that you know about X become automatic results that apply to Y.

Disclaimer: I haven't gotten very far with category theory -- that's why I'm here, working through it with the team -- so these general comments are more of a gut assessment than an experienced judgement.

In the JSON mentioned there, some properties have a composite type. In the original JS file, some properties are even of function type.

Updated: the slides are here.

Everyone new is encouraged to start a thread in the category Chat introducing themselves and their interests. The subject would be "Introduction: YourName".

Multi-liner: I started out by getting a degree in Biophysics from U.C. Berkeley in 1982, imagining I would be a "mathematical molecular biophysicist" (stop laughing). I then went on to write the software and compute the structure of the first protein to be determined in its native solution state via NMR and "distance geometry" while a postdoc in the laboratory of Prof. Kurt Wüthrich at the ETH-Zürich. This helped him to get a piece of the 2002 Nobel Prize in Chemistry, and me to live on NIH grants for the analysis of biomolecular NMR data for the next 15 years or so, ultimately with a non-tenured appointment at the Harvard Medical School. The software I developed at that time served as the engine underlying several molecular modeling packages that were distributed by Accelrys Inc. in the 1990s.

When that funding petered out, I moved on to designing and analyzing the data from NMR experiments which demonstrated the principles of quantum information processing, also via solution-state NMR spectroscopy. These demonstrations were based on the concept of a pseudopure state, which I developed with Prof. David Cory in the Dept. of Nuclear Science and Engineering at MIT, and included the first physical implementation of a quantum error correcting code, along with many other techniques that are applicable to the development of quantum computers regardless of the underlying technology. They also demonstrated, again for the first time, the utility of the Lindblad and Kraus superoperator formalisms in the study of NMR relaxation processes, something that few NMR spectroscopists are aware of to this day.

What enabled me to work effectively across such diverse disciplines was my knowledge of geometric (aka Clifford) algebras, together with their broad applicability to the physical sciences and engineering. These algebras, largely developed and popularized in the latter half of the 20th century by the theoretical physicist David Hestenes and his colleagues at Arizona State Univ., are most succinctly described as generalizations of 3D vector algebra to metric vector spaces of all dimensions and signatures. My most significant contribution to this field was the realization in the early 1990s that Prof. Hestenes's study of the conformal group via geometric algebra showed that these algebras can also be viewed as the covariant (group) algebras associated with coordinate rings of algebraic invariants. I had studied these in the course of my earlier work on distance geometry and through my informal associations with Prof. Gian-Carlo Rota and his students from the MIT mathematics dept., together with Prof. Bernd Sturmfels before he became a professor of mathematics at U.C. Berkeley.

Upon turning 50 without a permanent academic position, and thinking the world might yet respond sensibly to the now-near-term threats of climate change, I subsequently went to the MIT Sloan School of Management to learn how to launch a clean-tech venture. There I came up with the idea of using the adsorption of compressed air in zeolite minerals to store energy in a very safe, clean and possibly even cheap fashion, and spent much of the next decade trying to get someone with money interested in the approach, to no avail. My current interests are in kernel methods in machine learning (and beyond), dynamical systems models of cognition, and category-theoretic approaches to the two (all of which can be pursued with little or no money down).

- Keith Conrad, Tensor Products

Whereas in the world of vector spaces tensors have clearly visualizable representations, things become more subtle when we generalize to modules over a ring.

He writes:

There isn’t a simple picture of a tensor (even an elementary tensor) analogous to how a vector is an arrow. Some physical manifestations of tensors are in the previous answer, but they won’t help you understand tensor products of modules. Nobody is comfortable with tensor products at first. Two quotes by Cathy O’Neil and Johan de Jong nicely capture the phenomenon of learning about them:

O’Neil: After a few months, though, I realized something. I hadn’t gotten any better at understanding tensor products, but I was getting used to not understanding them. It was pretty amazing. I no longer felt anguished when tensor products came up; I was instead almost amused by their cunning ways.

de Jong: It is the things you can prove that tell you how to think about tensor products. In other words, you let elementary lemmas and examples shape your intuition of the mathematical object in question. There’s nothing else, no magical intuition will magically appear to help you “understand” it.

This is discouraging. Can we do better than this?

There is the construction of the tensor product as the quotient of an enormous (free) module by an enormous sub-module, but it doesn't register with my intuition very well.

Regarding this, Conrad says:

From now on forget the explicit construction of M ⊗R N as the quotient of an enormous free module FR(M × N). It will confuse you more than it’s worth to try to think about M ⊗R N in terms of its construction.

He says instead to use the universal mapping property to understand the tensor product. But I don't like the idea of abandoning the definition of something in order to understand it.
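For reference, the universal mapping property in question says that for R-modules M and N, the tensor product is the universal target for bilinear maps out of M × N:

```latex
% For every R-module P and every R-bilinear map B : M \times N \to P,
% there is a unique R-linear map L factoring B through
% the canonical map (m, n) \mapsto m \otimes n:
\forall\, B : M \times N \to P \ \text{bilinear}, \quad
\exists!\, L : M \otimes_R N \to P \ \text{linear such that} \quad
L(m \otimes n) = B(m, n) \ \ \text{for all } m \in M,\ n \in N.
```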

Is this a case where it only makes sense to understand things through their morphisms? I hope not, because I like objects as well as arrows :)

News item #1: California, April 1, 2015

"The governor of California has ordered unprecedented and mandatory water restrictions in the state as officials conducted a regular measurement of the Sierra Nevada snowpack and found “no snow whatsoever” amid the state’s ongoing drought.

“This was the first time in 75 years of early-April measurements at the Phillips snow course that no snow was found there,” the California Department of Water Resources said in a statement on Wednesday at the conclusion of a survey attended by the Governor Jerry Brown. It said readings from Wednesday put the state’s level of water content at just 5% of the historical average for the date."

More to follow.

Here is a classic reference book:

- Python for Data Analysis, Wes McKinney, O'Reilly Media, 2013.

Here is a recommended primer from the Pandas website:

Here are the main components of the scientific python ecosystem. I am paraphrasing/quoting from McKinney:

NumPy. Short for numerical python, NumPy is the foundational package for scientific computing in Python. It provides a fast and efficient multi-dimensional array object; functions for performing element-wise computations with arrays or mathematical operations between arrays; tools for reading and writing array-based data sets to disk; linear algebra operations, Fourier transform, and random number generation; tools for integrating other languages with Python.

pandas. Pandas provides rich data structures and functions designed to make working with structured data fast, easy and expressive. The primary object in pandas is the DataFrame, a two-dimensional tabular, column-oriented structure with both row and column labels. Pandas combines the high performance array-computing features of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases.

And, I may add: it is seamlessly integrated with the developed high-level language Python, which contains mechanisms for abstraction, functional programming, object-orientation; extensive platform support libraries for systems programming, web services interfaces, etc., etc.

For users of the R statistical computing language, the DataFrame name will be familiar, as it was named after the similar R data.frame object. They are not the same however, as the functionality provided by the R data frame is essentially a strict subset of that provided by the pandas DataFrame.

matplotlib. The most popular Python library for producing plots and other 2D visualizations. It is maintained by a large team of developers, and is well-suited for creating publication-quality plots.

IPython. IPython is the component in the toolset that ties everything together; it provides a robust and productive environment for interactive and exploratory computing.

SciPy. SciPy is a collection of packages addressing a number of different standard problem domains in scientific computing. It includes: scipy.integrate, with numerical integration routines and differential equation solvers; scipy.linalg, with linear algebra and matrix decomposition algorithms; scipy.optimize, with function optimizers and root finding algorithms; scipy.signal, with signal processing tools; scipy.sparse, for sparse matrices and sparse linear system solvers; scipy.stats, with standard continuous and discrete probability distributions, statistical tests, and descriptive statistics; scipy.weave, a tool for using inline C++ code to accelerate array computations.

Together NumPy and SciPy form a reasonably complete computational replacement for much of MATLAB along with some of its add-on toolboxes.

And, I may add: it is free!

**Python data types**

The Python language contains a whole range of standard types, including primitive value types (int, float, etc), lists, tuples, dictionaries (i.e. finite mappings), functions and objects. For tutorials and reference information, see:

**ndarray (NumPy)**

The Python package NumPy has an n-dimensional array type, ndarray. All the elements in an ndarray must be of the same type. This is an efficient representation, which gets packed into a contiguous array in memory. This makes it a good format for interfacing with libraries that are external to Python. NumPy provides operators that apply element-wise operations to entire arrays (vectorization). So, even though the Python interpreter has performance deficits in comparison with statically typed, compiled languages, by making use of vectorized operators on large data sets, the critical inner loops are performed in the compiled NumPy library rather than in the Python interpreter.
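A small illustration of the vectorization point (array size and values chosen arbitrarily):

```python
import numpy as np

n = 1_000_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# The element-wise work happens inside compiled NumPy code:
c = a * b + 1.0

# The interpreted equivalent, orders of magnitude slower, would be:
# c = [x * y + 1.0 for x, y in zip(a, b)]

print(c.dtype, c.shape, c[:3])  # float64 (1000000,) [1. 2. 5.]
```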

**Series and DataFrame (Pandas)**

These two data types (classes in the Pandas module) are built on top of the ndarray data type. They are enrichments of, respectively, the mathematical types Sequence and Relation. A Series is a sequence of values with associated labels, and a DataFrame is a two-dimensional, column-oriented structure with row and column labels.

**Index (Pandas)**

An Index is an object that provides the sequences of labels that are used in the Series and DataFrame objects. An Index may contain multiple levels of hierarchy within it.

This thread will consist of an exposition of the algebra of Series and DataFrames, along with examples of their use.
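As a preview, here is a minimal example of both types (the data is made up):

```python
import pandas as pd

# A Series: a sequence of values with an associated Index of labels.
s = pd.Series([0.25, 0.50, 0.25], index=["low", "mid", "high"])

# A DataFrame: a two-dimensional, column-oriented structure with both
# row and column labels.
df = pd.DataFrame({"temp": [14.2, 15.1, 13.8],
                   "rain": [81, 64, 95]},
                  index=["2013", "2014", "2015"])

# Selecting a column yields a Series; operations align on the labels.
print(round(df["temp"].mean(), 2))    # 14.37
print(df["rain"] / df["rain"].sum())  # fractions, indexed by year
```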

I've used an ANN (Artificial Neural Network) program I created to play around with solar cycles, tidal effects and sea current indexes. I discovered the connection between tidal and solar forcing and ENSO a while back, which I have presented to others at a seminar. At that time I used this graph.

I was preparing to write a paper on this, and to update the data to the end of 2014, when last month I tried a new approach. The result I then got was extraordinary. Look at this!

Wow!

The training period is from 1979 until 2005, the testing period from 2005 until 2012. The rest is forecast. The data is based on the MEI ENSO index. The only inputs I use are tidal gravitational anomalies, Kp, Ap and variations in solar wind parameters. Most so-called skeptics refer to the Svensmark effect when it comes to the connection between global temperature variations and the Sun. Critics from the other camp often point out that this effect is too weak. I think they are right on that. How much ENSO variations contribute directly to variations in the global mean temperature anomaly is up to others to figure out, but I think it should be somewhere between 30 and 80% of the effect, making any input from human-caused temperature impact much smaller than is assumed by the IPCC.

I can show with empirical evidence that one of the main causes of recent temperature changes can be attributed directly to electromagnetically solar-driven ENSO variations. If one takes into account this ENSO effect, and quality questions related to the main surface temperature records from NASA, NOAA and the CRU Hadley Centre, then there is not much space left for any AGW influence on recent temperature changes, irrespective of whether such an effect exists or not. The variation in global temperature generated by ENSO is caused by variations in the Sun's electromagnetic activity, which is then blurred somewhat by the tidal gravitational effect. When, for example, the Ap index is weak, there is a tendency for fewer and weaker El Niños, which leads to cooling. The opposite happens when the Ap index is higher.

I realize that what I have is an atomic bomb set to explode in the face of climate scientists and ENSO researchers, and of course ultimately this is going to be the beginning of the end of the CAGW hysteria. I would also point out that the mechanism I found is relatively simple and should therefore be easy for others to confirm and replicate. And, I hope, also by you.

Here is my forecast from now up to 2020.

As you can see, according to this forecast the current ENSO value should peak with an El Niño at the end of the current NH summer. It is going to drop after that and continue to drop into a deep La Niña in the NH winter of 2016/17. Time will tell if I'm right.

Here is the detail of the test period up to 2012 and forecast up to 2015.

As you can see there are some anomalies at the end of the period which I attribute to the recent period with Modoki El Niño. The MEI index from NOAA is not well defined for that.

I haven't released the details of the mechanism by which the Sun and the Moon drive ENSO yet, but I am going to do this publicly. Because of the magnitude of what I have found, though, I need to step back, do some brainstorming, and think through how to proceed.

Should I just send a description to important blogs and websites? Or should I first continue to investigate with my ANN using NINO1+2, NINO3+4, QBO, SOI, LOD, the trade wind index and so on? Maybe I should use crowdsourcing to finance my future work? Should I try to publish a scientific paper, and where? Tips on anyone I can cooperate with? Of course my claim to have solved the driving mechanism of ENSO may seem rather extreme, and it is OK to be skeptical. I mean, I would be if I were you. So let it be hypothetical: what would you do if you had discovered what the drivers of ENSO are, and you had the data and mechanism to back it up?

My name is Per Strandberg and I have an M.Sc. degree in physics and electronics. I became interested in the climate question a while back, and because I have experience with ANNs and climate data is freely available, I started to play around with this data. Never in my wildest dreams did I think that I would solve the ENSO mystery when so many others have failed.

I'm new to this place, coming in from a very integral or "holistic" perspective, and very motivated to explore ways that "scientists, mathematicians and engineers" can work together to "save the planet" (and, along the way, of course, human civilization). After looking around a bit, I thought I'd take a crack at posting a new discussion.

Of course it's true that there are major scientific issues associated with maintaining a healthy planet -- and many of you have no doubt heard the phrase "Planetary Boundaries" (if not, there are good essays on the Great Transition Initiative, such as http://www.greattransition.org/publication/bounding-the-planetary-future-why-we-need-a-great-transition -- "Bounding the Planetary Future", by Johan Rockstrom, Professor of Environmental Science at Stockholm University) -- but for me, what might be even more of a human emergency is the inability of the scientific community to fully persuade the political communities of the world that action is needed promptly. Have you seen video clips of the floods in South Carolina, USA? We've got to learn to live within our limits -- within our boundaries -- and if we can't do it, we're going to reap the whirlwind.

[PS -- here's a brand-new article from Rockstrom: http://www.socialeurope.eu/2015/10/leaving-our-children-nothing/ ]

If we want to hang on to this planet, we human beings have to find ways to work together -- effectively, directly, correctly, with substantial influence and impact. But the reality is -- human beings these days at grassroots levels are tending to bicker or fight with each other about just about everything. The human community isn't just "divided". We're atomized, around almost every possible dimension of difference (there are a number of influential books on this theme, like The Big Sort, by Bill Bishop), and our collective failure at the large-scale task of collective governance puts us all into hot water with the mythical "boiling frog" (https://en.wikipedia.org/wiki/Boiling_frog). We see this problem all over the world -- and we absolutely see it in the gridlock and paralysis of the US Congress, on just about any issue more serious than naming a post office. If you've been watching the US news, this is the number-one topic right this minute: the paralysis of our congress.

I'm a network builder with a background in algebraic semantics, and I want to work on building models of shared understanding that fully embrace "diversity", and support vital disagreement or discussion on critical issues -- but hold the entire conversation together in "co-creative" and respectful/constructive ways, that lead to creative solutions. As regards "apples and oranges" arguments -- I've heard it said recently that a major reason for crazy health-care costs in the USA isn't simply the avarice of health-care providers or pharmaceutical companies -- but also emerges in large part from the sheer fragmentation and internal disconnects of the health-care delivery system. We're living in a world of mis-matched taxonomies. It doesn't work.

**DIMENSIONALITY**

Many years ago, I started working on generalizations of epistemology and category theory, in terms mostly defined by dimensionality. Today, I'm feeling a burst of enthusiasm for this field, thinking that some cocreative work by passionate analysts might provide what I believe could become an amazing "breakthrough" theory in general cognitive and semantic theory -- with big implications for database processing, cognitive science, any kind of taxonomy or any process that involves classification. There might be serious implications for hard sciences. There might be serious implications for collaboration in a diverse culture. Can we diffuse the problems of "Babel" with a new integral vision?

Obviously, we're living in a highly networked world -- where building smooth mappings between cultures and systems -- and branches of science -- looks to me like an increasingly essential process. We gotta get "people" AND "computers" talking to each other with less confusion. In this context, it looks to me like a theorem with significance comparable to Gödel's Proof is out there, ripe for the picking. There are currently no widely accepted "industry standards" for ontological fundamentals -- and the so-called "foundations of mathematics" -- but there absolutely should be. The right theorem might sweep away centuries of cobwebs.

I'm wanting to post a few ambitious ideas on the fundamentals of scientific method and the language of process description. It looks to me like we are living in an era of high convergence -- a convergence across a very wide spectrum of interconnected elements -- and I'd like to see that idea tested and grown under a sharp and constructive and motivated scientific critique.

Whether this possibility goes anywhere here might depend on what kind of response it gets. There's a lot to talk about, and some critically-important scientific and technical issues in play. And there's an opportunity to do something great. But nobody can do this stuff alone. Creativity takes cross-fertilization. So let's see what happens when I post this. I might go get a theme or two from an interesting current discussion started on GooglePlus by John Baez, on the theme "A Moebius strip in the space of concepts" -- at https://plus.google.com/117663015413546257905/posts/jkqH5e48w6L

This Moebius thread gets into two areas I find fascinating: the dimensionality of conceptual structure -- and maybe (???) how something like a topological deformation of this space along the lines of Moebius might "fully integrate" the dimensionality of conceptual form -- "closing the space", or something like that. Personally, I think it's possible -- and could produce an amazing and very significant theorem. I'd love to talk about it here.

Thanks!

Bruce Schuman, Santa Barbara CA

"It's not my fault that your species decided to abandon currency-based economics in favor of some philosophy of self-enhancement."

It never really gets any better than that at explaining how it would actually work; Star Trek canon simply avoids the details, always hand-waving whenever it comes up. Well, if you read this https://medium.com/@borgauf/star-trek-economy-direct-logistics-77d44746b4a9 you may begin to get an idea. It's not a scholarly paper, rather it's done in the usual chatty Medium style, but you'll still get the main points.

Basically, we have what it takes to go "currency-free" now. My problem, of course, is to get anyone to take this seriously. If there is any aspect of life more prejudicially ingrained in people's minds, it's economics. But with our highly-networked present-day world, we have all it takes to leave off currency-based accounting and simply use the basic logistics data of direct supply and demand, that is, the actual to-and-fro of stuff. Give it a read and let me know if I've come to the right place. I hope so.

It contains the following tabs:

Discussions - brings you back to the list of recent discussions (just in case you were somewhere else).

Wiki

Blog

Guide - contains elements of a user reference guide. Really this will consist of technical tips, for points which are not obvious (e.g. the quirks of the search query language). This is a link to a page on the wiki.

Join - words to encourage people to join, and instructions for how to do so. This is a link to a wiki page.

Sign in / sign out -- this tab says "Sign in" when you are signed out, and vice versa.

I wanted to use words other than Help, because Help functions are generally not that helpful, so I am almost conditioned not to waste time with them. Also Guide and Join are very separate functions.

I'm going to start separate discussions for the contents of each of the two wiki pages mentioned above.

]]>thanks Daniel

]]>To what extent can this be truly modeled as a random variable, in the technical sense of probability theory? For that we need to have a sample space S consisting of "experimental outcomes," a sigma-algebra of events (subsets) on S, and a probability measure on S; a random variable then has to be a measurable function on S.

So what's the probability space underlying our variable T2000? Would S consist of all "conceivable" histories of the world, and T2000 the function which picks off the temperature at that point in space and time? But this would be a purely fictional construction -- who's to say what's in S and what's not -- and even more artificial would be the assignment of a probability measure to the events in S.

Yet without an underlying probability space, there's no way that we could speak of, say, the variance of T2000.
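To make the formal machinery concrete, here is a toy example where every ingredient can be written down explicitly: a finite sample space (two fair coin tosses), the power set as the sigma-algebra, a uniform measure, and a random variable whose variance follows directly from the measure. This is only an illustration of the definitions, of course, not a claim about how one could construct such a space for T2000.

```python
from itertools import product

# Toy probability space: two fair coin tosses.
# Sample space S: all experimental outcomes.
S = list(product("HT", repeat=2))          # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# Probability measure on S (uniform); the sigma-algebra is the power set of S.
P = {omega: 0.25 for omega in S}

# A random variable is a measurable function on S; here X = number of heads.
def X(omega):
    return omega.count("H")

# Expectation and variance follow directly from the measure.
EX  = sum(X(w) * P[w] for w in S)          # E[X]   = 1.0
EX2 = sum(X(w) ** 2 * P[w] for w in S)     # E[X^2] = 1.5
var = EX2 - EX ** 2                        # Var(X) = 0.5

print(EX, var)
```

The point of the worry above is exactly that for T2000 none of these three ingredients (S, the sigma-algebra, P) can be written down non-fictionally.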

]]>To Save the Planet, Don’t Plant Trees

The article was written by an assistant professor of atmospheric chemistry at Yale.

In the article the author warns of so-called V.O.C.s (something I hadn't heard of before):

Worse, trees emit reactive volatile gases that contribute to air pollution and are hazardous to human health. These emissions are crucial to trees — to protect themselves from environmental stresses like sweltering heat and bug infestations. In summer, the eastern United States is the world’s major hot spot for volatile organic compounds (V.O.C.s) from trees.

and moreover they write:

Climate scientists have calculated the effect of increasing forest cover on surface temperature. Their conclusion is that planting trees in the tropics would lead to cooling, but in colder regions, it would cause warming.

If I understood the article correctly, then more or less both facts taken together (the carbon cycle and its possible misunderstandings is also mentioned) lead to the recommendation: "Don’t Plant Trees." There are no references given for these claims, though.

This is so even though, as the author writes:

Planting trees and avoiding deforestation do offer unambiguous benefits to biodiversity and many forms of life. But relying on forestry to slow or reverse global warming is another matter entirely.

If you look at that pretty photo "traumawald" by Christian Miersch, who recently commented here on Azimuth, then it seems indeed to be an important question whether science is able to determine the right measures to address climate change.

For me, however, this article brought up a question which I've been tossing around for quite a while: the role of certain thermodynamic quantities, like entropy and chemical energy, in global warming.

A dark surface absorbs a lot of infrared; that, I figure, is what lies behind the assertion that planting trees in colder regions would lead to warming. That is, the net albedo change from converting grasslands and other soils into forest seems to differ between climates (though I am not sure whether I have understood all the reasoning behind this). But another question is: what happens to the absorbed radiation? Black-body radiation is only a first approximation, and it might be worthwhile to think about effects like conversion into chemical energy. For instance, in this example of upconversion, the upconverted light of a dark-looking leaf would contribute differently to the overall radiation, and in particular to the infrared balance, which plays an important role in the greenhouse effect.

In that context I also ask myself how big the cooling effects are of human efforts in killing biodiversity and building rigid structures like streets and houses. Exaggeratedly speaking: if the earth were covered with concrete, this could be seen as lowering the earth's overall entropy, and if so, some of the sun's energy would have had to go into that entropy lowering rather than into heat. I hesitated to ask this question, because I have always had some unease with certain thermodynamic laws (visible e.g. here), and I am not sure how much of this under-understanding is due to missing literature or to forgetting what I once learned.
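As a rough sanity check on the albedo part of this question (and only that part), here is the standard zero-dimensional energy-balance estimate of equilibrium surface temperature for two different albedos. It ignores the greenhouse effect, entropy, and chemistry entirely, so the absolute temperatures come out too low; the albedo values for "forest" and "grassland" are ballpark assumptions, not measurements. Only the difference between the two cases is the point.

```python
# Zero-dimensional energy balance: (1 - albedo) * S / 4 = sigma * T^4,
# so the equilibrium temperature is T = ((1 - albedo) * S / (4 * sigma))**0.25.
# No atmosphere is modeled, so absolute values are lower than real surface
# temperatures; only the albedo-driven *difference* matters here.
SOLAR = 1361.0        # solar constant, W/m^2
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)

def t_eq(albedo):
    return ((1 - albedo) * SOLAR / (4 * SIGMA)) ** 0.25

# Rough (assumed) albedo values: dark forest vs. lighter grassland.
for name, a in [("forest (albedo ~0.10)", 0.10), ("grassland (albedo ~0.25)", 0.25)]:
    print(f"{name}: T_eq = {t_eq(a):.1f} K")
```

In this crude model the darker surface comes out roughly 12 K warmer, which at least shows the sign and order of magnitude of the bare albedo effect; everything else in the question (chemical energy, upconversion, entropy) is beyond it.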

]]>The most baffling aspect of our physical world, to me, is the handedness imposed on most living beings! There is no explanation for it at all, and how it came about is an even larger mystery.

Handedness also appears in Maxwell's equations (via the cross product), which is even more exciting to note.

Odder still:

Let V be a finite-dimensional vector product algebra over the reals. Then its dimension d (i.e. d = Sum_i <e_i, e_i> for any orthonormal basis e_i of V) satisfies d(d-1)(d-3)(d-7) = 0.

Vector product algebra dimensions

So over the reals, the spaces supporting such vector products are limited to dimensions 0, 1, 3 and 7.
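The defining identities of a vector product (bilinearity, orthogonality <a x b, a> = <a x b, b> = 0, and the norm identity |a x b|^2 = |a|^2 |b|^2 - <a, b>^2) can be spot-checked numerically for the familiar d = 3 case. This is only a check on random samples, not a proof; the content of the theorem is that no bilinear map satisfying these identities exists in any other dimension except 0, 1 and 7.

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # The standard 3-dimensional vector product.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

random.seed(0)
for _ in range(1000):
    a = [random.uniform(-1, 1) for _ in range(3)]
    b = [random.uniform(-1, 1) for _ in range(3)]
    c = cross(a, b)
    # Identity 1: a x b is orthogonal to both factors.
    assert abs(dot(c, a)) < 1e-12 and abs(dot(c, b)) < 1e-12
    # Identity 2: |a x b|^2 = |a|^2 |b|^2 - <a, b>^2.
    assert abs(dot(c, c) - (dot(a, a) * dot(b, b) - dot(a, b) ** 2)) < 1e-12
print("3-dimensional vector product identities hold on random samples")
```

(The same kind of check would pass for the 7-dimensional product built from the octonions, and provably cannot be made to pass in, say, dimension 4 or 5.)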

]]>I'm starting this thread as a place for us to bat around ideas about this. Perhaps this will help John in his thinking about the subject.

Sorry I don't have a lot of concrete material to contribute here, because I was busy with other things and lost track of the flow of what was going on.

Here are some possible points:

Definitions are more complex than they need to be; Graham showed that the same results can be achieved with simpler definitions.

A better goal is to predict a continuous index of El Nino.

Questions about the statistical significance of their results. Someone posted about this, but I forgot who or where.

Is a network of correlation strengths really a meaningful entity that can form the basis of empirical predictions? Are there other cases where such correlation networks have led to predictive results? What are the underlying physical bases?

They put themselves out on a limb by predicting El Nino in 2014, but now it is appearing less and less likely...

Perhaps you guys who have been more active in the El Nino project can fill in some of the ideas here, or contradict them, or add some other points.

If 2014 turns out to be a non-El-Nino year, and this raises doubts about the Ludescher et al. approach, then part of your talk could be cast as a post-mortem, a return to the drawing board, and a drumming up of more speculative ideas for taking climate network theory in new directions. The trouble is that your talk occurs too soon to tell.

But now I read that the probability of El Nino this winter has been reduced to 58%. That means there is a 42% chance that a strong prediction made by the Ludescher et al. theory is incorrect. Doesn't that in itself cast some doubt on their "theory"?
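On the question of whether a network of correlation strengths is a meaningful entity, here is a minimal toy sketch of the basic object being debated. This is not the actual Ludescher et al. construction (which uses time-delayed cross-correlations between grid points inside and outside the El Nino basin); it just builds a graph by thresholding pairwise Pearson correlations between site time series, and all the data is synthetic, with three of five sites driven by a common signal.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
# Synthetic "temperature anomaly" series at 5 sites: sites 0-2 share a
# common driver (a stand-in for a basin-wide signal); sites 3-4 are pure noise.
T = 200
driver = [random.gauss(0, 1) for _ in range(T)]
series = []
for i in range(5):
    if i < 3:
        series.append([d + 0.5 * random.gauss(0, 1) for d in driver])
    else:
        series.append([random.gauss(0, 1) for _ in range(T)])

# Build the correlation network: an edge wherever |correlation| > threshold.
threshold = 0.5
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(pearson(series[i], series[j])) > threshold]
print(edges)   # the driven sites 0, 1, 2 should cluster together
```

The toy at least shows that thresholded correlations can recover a real shared driver; whether such networks carry genuine predictive power for El Nino, as opposed to describing structure after the fact, is exactly the open question raised above.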

]]>