
# Blog - the mathematical origin of irreversibility

Matteo Smerlak of the Max Planck Institute for Gravitational Physics has written a very nice article following up on the theme of 'evolution and thermodynamics', which I'd started developing in my more recent 'information geometry' posts.

He wrote it in LaTeX as a little paper. I dumped it onto the wiki, and am gradually fixing the formatting so that it's suitable for the blog.

1.
edited September 2012

Thanks for the article.

In a symmetric mutation scheme (where the mutation rate from $a$ to $b$ equals the mutation rate from $b$ to $a$), the ratio between the $a$↦$b$ and $b$↦$a$ transition rates is completely determined by the fitnesses $f_{a,b}$ of $a$ and $b$, according to

I am a former mathematical physicist :) so I am not sure anymore, but I thought that a condition for a Markov process was that the probabilities were independent, which probably gives some conditions on the transition rates.

So if the fitnesses in an evolution model are related to the transition rates as stated in the article, then this seems to mean that the model incorporates some (simplifying) condition on the fitnesses. Is this right?

2.
edited September 2012

A Markov process of the sort we're discussing here can be specified by a finite set of **states** $a, b, \dots$ and arbitrarily chosen nonnegative **transition rates** $\gamma_{a b}$ for any ordered pair of states $a, b$ with $a \ne b$. These numbers $\gamma_{a b}$ are the matrix entries of an [infinitesimal stochastic](http://math.ucr.edu/home/baez/networks/networks_20.html) matrix.
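As a concrete sketch (with made-up rates): the off-diagonal entries of $H$ are the chosen rates $\gamma_{a b}$, and each diagonal entry is then fixed by requiring the columns to sum to zero.

```python
import numpy as np

# Hypothetical transition rates: gamma[a, b] is the rate of jumping b -> a,
# for a 3-state system. The values are arbitrary illustrations.
gamma = np.array([
    [0.0, 2.0, 1.0],
    [0.5, 0.0, 3.0],
    [1.5, 1.0, 0.0],
])

# Infinitesimal stochastic matrix: H[a, b] = gamma[a, b] for a != b,
# with diagonal entries chosen so that every column sums to zero.
H = gamma.copy()
np.fill_diagonal(H, -gamma.sum(axis=0))

assert np.allclose(H.sum(axis=0), 0.0)
```

The master equation is then $\dot{\pi} = H \pi$, and the zero column sums are exactly what keeps total probability conserved.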

But here he's talking about a symmetric setup where we also assume $\gamma_{a b} = \gamma_{b a}$.

I believe the fitnesses are then determined by the transition rates $\gamma_{a b}$ according to that formula you didn't quite quote. I don't know if I like the term 'fitness' for something that depends on a _pair_ of states. Maybe 'fitness ratio' or something would be better.

I'm still working to understand this article as I format it...

3.
edited October 2012

Let me jump in and see if I can clarify my point here. I took the words "fitness" and "symmetric mutation scheme" from the evolutionary dynamics literature. Fitness is indeed a function of a single state (the fitness of a genotype, really), and "symmetric mutation scheme" means "reversible Markov process" or "detailed balance". It does not imply $\gamma_{ab}=\gamma_{ba}$, but precisely that, if they are not equal, it is only because the states they connect are not equally viable. As for the notation, I wrote (confusingly, I agree) $f_{a,b}$ to mean "$f_a$ and $f_b$"!
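One concrete way to realize this (a sketch only: the particular split of the ratio between the two rates is my choice, and all numbers are made up) is to pick fitnesses $f_a$ and rates with $\gamma_{ab}/\gamma_{ba} = (f_a/f_b)^\nu$, and check that detailed balance holds with respect to $\pi_a \propto f_a^\nu$:

```python
import numpy as np

# A toy "symmetric mutation scheme": hypothetical fitnesses f_a, mutation
# rate mu, and transition rates gamma[a, b] (rate of b -> a) chosen so
# that gamma_ab / gamma_ba = (f_a / f_b)**nu.
f = np.array([1.0, 2.0, 4.0])
nu, mu = 1.0, 0.1

n = len(f)
gamma = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            gamma[a, b] = mu * (f[a] / f[b]) ** (nu / 2)

# Detailed balance: gamma_ab * pi_b == gamma_ba * pi_a
# for the fitness-weighted distribution pi_a ~ f_a**nu.
pi = f**nu / np.sum(f**nu)
for a in range(n):
    for b in range(n):
        if a != b:
            assert np.isclose(gamma[a, b] * pi[b], gamma[b, a] * pi[a])
```

The rates between unequally fit states are indeed unequal here, yet the process is reversible with respect to the fitness-weighted stationary distribution.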

4.

John, there's a typo in the post that I don't manage to fix. Just where I define the "self-information", after the formula $i(t)=-\ln\pi_a(t)$, the initial condition $\pi_a(0)$ is flying where it shouldn't be. I don't know what's going on here!

5.

[Hello Matteo!](http://forum.azimuthproject.org/discussion/1072/hello-azimuth/#Item_0) Thanks for your explanations!

I think it could be helpful to be a bit more detailed about what you mean by "Markovian". For instance, John thinks of the transition rates as a matrix whose columns have to sum to zero (by the way, it takes some time to find this definition in the text), because the time evolution can be described infinitesimally; but would you eventually also implement different time evolutions?

Depending on the respective definitions one may then get corresponding conditions on the fitnesses, which may even have different names in the evolutionary dynamics literature?

6.

Thanks nad, I've made the definition more explicit. I'm not sure what you mean by different time evolutions, though. Are you thinking of Markov chains (discrete time steps) instead of Markov processes (continuous time)? That wouldn't change much, really. As for other names in the evolutionary dynamics literature, I can't tell.

7.
edited October 2012

Very interesting article! Though I'm a (to-be) statistical physicist myself, I'm not very knowledgeable about non-equilibrium methods and learned quite a bit from the read. A short comment: perhaps when showing that the integral fluctuation theorem leads to a lower bound on the entropy, [Jensen's inequality](http://en.wikipedia.org/wiki/Jensen's_inequality) should be mentioned, as the convexity of the exponential alone doesn't seem to say much.
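For what it's worth, the integral fluctuation theorem $\langle e^{-\Sigma} \rangle = 1$ (with $\Sigma$ read as the log-ratio of forward to time-reversed path probabilities) can be checked by brute-force enumeration on a small discrete-time chain. The numbers below are arbitrary and chosen not to satisfy detailed balance:

```python
import numpy as np
from itertools import product

# Column-stochastic transition matrix: P[b, a] = prob of a -> b per step.
P = np.array([[0.9, 0.4],
              [0.1, 0.6]])
pi0 = np.array([0.7, 0.3])  # initial distribution
N = 3                       # number of steps

piN = np.linalg.matrix_power(P, N) @ pi0  # distribution at time N

ift = 0.0         # accumulates <exp(-Sigma)> over all trajectories
mean_sigma = 0.0  # accumulates <Sigma>
for traj in product(range(2), repeat=N + 1):
    p_fwd = pi0[traj[0]]
    for t in range(N):
        p_fwd *= P[traj[t + 1], traj[t]]
    # probability of the time-reversed trajectory, started from piN
    p_rev = piN[traj[-1]]
    for t in range(N, 0, -1):
        p_rev *= P[traj[t - 1], traj[t]]
    # Sigma = ln(p_fwd / p_rev), so exp(-Sigma) = p_rev / p_fwd
    ift += p_fwd * (p_rev / p_fwd)
    mean_sigma += p_fwd * np.log(p_fwd / p_rev)

assert np.isclose(ift, 1.0)  # integral fluctuation theorem
assert mean_sigma >= 0.0     # second-law-type bound, via Jensen
```

The identity holds exactly (it is a sum of reverse-path probabilities), and Jensen's inequality then turns it into the nonnegativity of $\langle \Sigma \rangle$.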

As I understood it, the relation $\langle \Phi \rangle \geq -\Delta S$ should serve as a theoretical principle for symmetric mutation schemes, indicating that the fitness flux is almost always positive in them. Is the only necessary hypothesis in this case to have $\frac{\gamma_{ab}}{\gamma_{ba}} = \left( \frac{f_a}{f_b} \right)^\nu$? Do you know if any similar relations may be obtained for more complicated cases, in which this hypothesis is not satisfied?

It would seem to me that the hypothesis is a little restrictive, but then again I'm not a biologist. It would be interesting to try to study the relation $\langle \Phi \rangle (\Delta S)$ for more general cases.

Thanks!

8.
edited October 2012

I’ve made the def more explicit. I’m not sure what you mean by different time evolutions though.

I just wanted some more confirmation that what you are talking about is what I think you are talking about, since, as I said, one can think of all sorts of time processes. So a link to Wikipedia or to the network theory posts, as you have meanwhile provided, is very helpful. Thanks.

I haven't yet understood what you mean by self-information. In your definition:

$i(t) := - \ln \pi_a (t)$ (where $\pi_a (t)$ is the probability that the system is in state $a$ at time $t$, given some prescribed initial distribution $\pi_a (0)$)

the "$a$" has disappeared? Do you mean $i_a (t)$? And does the average then go over all $i_a$? Since in particular I have no library account, I skip over sentences like "See (Seifert2005) for details." And actually I could imagine that looking at (Seifert2005) could be rather counterproductive; that is, I find highly technical articles often rather disgusting.

Hence it's great that I can here talk to the author of an article which gives an outline.

9.
edited October 2012

I haven’t yet understood what you mean by self-information.

You're right about the missing $a$. The "self-information" of $a$ is the information you'd get if you found out that the system is in state $a$, with no prior knowledge. If you average this over $a$, you get Shannon entropy, by definition. I've added comments on this in the post.
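In code, with a made-up distribution (assuming the natural logarithm, as in the post):

```python
import numpy as np

pi = np.array([0.5, 0.25, 0.25])  # an arbitrary distribution over states

i = -np.log(pi)     # self-information of each state a
S = np.sum(pi * i)  # averaging over a gives the Shannon entropy

# For this distribution the entropy is 1.5 bits, i.e. 1.5 * ln 2 in nats.
assert np.isclose(S, 1.5 * np.log(2))
```

The rarer the state, the larger its self-information; the average weights each surprise by how often it actually occurs.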

Jensen’s inequality should be mentioned

Will do. But don't you agree that it's the very definition of convexity that the mean of the function is at least the function of the mean?
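A quick numerical sanity check of that direction for the convex exponential (the sample is arbitrary; applied to $\Sigma$, this is exactly how $\langle e^{-\Sigma}\rangle = 1$ forces $\langle \Sigma \rangle \geq 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)  # any sample will do; pure illustration

# Jensen for the convex exp: the mean of the function dominates
# the function of the mean.
assert np.exp(np.mean(x)) <= np.mean(np.exp(x))
```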

Do you know if any similar relations may be obtained for more complicated cases, in which this hypothesis is not satisfied?

Yes, in Mustonen and Lässig's paper where the "fitness flux theorem" is derived, they don't actually assume that. I just thought the point was easier to grasp with this "detailed balance" assumption, but I'll mention in the text that it's not really needed.

Thanks to both of you! I wish I had such feedback when I write an actual paper! In fact, I think the entire physics literature would be much better off with more of such forums...

10.
edited October 2012

Hi, Matteo! As promised, I copied your post from the wiki to the blog and posted it today (Monday):

* Matteo Smerlak, [The mathematical origins of irreversibility](http://johncarlosbaez.wordpress.com/2012/10/08/the-mathematical-origin-of-irreversibility/), Azimuth Blog.

I found this to be a really fascinating article, which takes certain directions I'm interested in and goes much further in those directions than I have. I will have a bunch of scientific comments, which I'll make on the blog. But here's a comment of a different kind:

I gave a talk on "diversity, entropy and thermodynamics" at a workshop on [the mathematics of biodiversity](http://johncarlosbaez.wordpress.com/2012/07/03/the-mathematics-of-biodiversity-part-5/) this summer, which was supposed to summarize things I've written on the blog about how ecosystems "learn" through natural selection, and how this is connected to information theory and thermodynamics. Your post is really about the same questions... but it's full of ideas that were new to me, though in many cases (it seems) present in the literature. Someday I'm hoping and expecting to write an article on this topic for a conference proceedings paper: the conference organizers are, or at least were, looking for a place to publish a proceedings. I think it would be great to take the ideas you're describing here and add them to that paper. Of course you'd be a coauthor... though I hope you wouldn't write too much more; I'll have plenty of work just taking what you wrote here, combining it with my ideas, and explaining it in slightly simpler terms so that biologists have a better chance of understanding it!

Would you be interested in doing this? The advantage of a conference proceedings is that one can freely mix exposition of "known" stuff with new research.

11.

John, I'm more than happy to do that. I'll learn a lot in the process! If I can help in any way, sure, I'm in! And please don't worry about the work: if there's stuff to do, whatever it is (from typing to reading to thinking), let me know frankly, I'll do my best.

Just a word about novelty. Indeed this is known stuff, except for two very minor points which are only implicit in the literature:

• the idea that this is not just a theorem about thermodynamics; it's universal, like the central limit theorem: it impacts virtually every science that deals with Markov processes. The theorem is being used more and more often, in more and more different fields, but I don't know that anybody has stated this clearly.

• giving a name to the $\Sigma$ variable itself: people call it "entropy production", but that's sort of silly, because a theorem that says "the variation of entropy $\Delta S$ is larger than the mean entropy production $\Sigma$" is no theorem at all; but if you understand that $\Sigma$ is "skewness", a measure of how natural or unnatural a trajectory is, then the same inequality is telling you something important

12.
edited October 2012

Matteo wrote:

John, I’m more than happy to do that.

Great! For starters, do I have your permission to add your post to my 'information geometry' collection [here](http://math.ucr.edu/home/baez/information/)? Of course it will still have your name on it.

It will take me a while to do this, since this collection currently goes up to Part 8 but the blog already goes up to Part 13.

It will take me even longer to do something more substantial. But I will do it... and I'll let you know when I do. There are really few things more fascinating to me right now than the intersection of information theory, evolutionary biology, game theory, Markov process theory and biodiversity studies.

('Information geometry' isn't the best word for this series of posts - they started out being about that, but then the subject started growing...)

13.

do I have your permission to add your post to my 'information geometry' collection

Sure!

A friend has spotted a couple of typos in the post. Can I still fix them?

14.
edited October 2012

If you email me and tell me what they are, I can fix them. Jacob Biamonte already caught a few, and I fixed those.

15.
edited October 2012

Thanks to both of you! I wish I had such feedback when I write an actual paper! In fact, I think the entire physics literature would be much better off with more of such forums…

Usually I stop asking after something like two questions (unless I know that it is OK), because questions are often seen as harassment, but now that you have encouraged feedback I dare to ask further. I am digging rather slowly through the text, and I still have problems with the definition

$i_a(t) := - \ln \pi_a (t)$, where $\pi_a (t)$ is the probability that the system is in state $a$ at time $t$, given some prescribed initial distribution $\pi_a (0)$

especially together with your new definition of

$\Delta i$ as $\Delta i=i_{a_N}(T)-i_{a_0}(0)$

$- \ln$ looks like minus the logarithm, so how is this to be understood if $\pi_a (t)=0$?

16.
edited October 2012

$- \ln$ looks like minus the logarithm, so how is this to be understood if $\pi_a (t)=0$ ?

That you'd be infinitely surprised if you found that $\omega_t=a$! This divergence may seem troublesome at first sight, but as far as the second law inequality is concerned (after you average over the states), it's fine, because $0\times \ln 0=0$. I think the point is: if your system is such that some states never get visited at all, you should just remove them from the definition of the process.
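Numerically the convention $0 \times \ln 0 = 0$ needs explicit handling, e.g. (made-up numbers):

```python
import numpy as np

pi = np.array([0.7, 0.3, 0.0])  # the third state is never visited

# Naively, -pi * log(pi) produces 0 * (-inf) = nan for the zero entry.
# The convention 0 * ln 0 = 0 keeps the entropy finite:
safe = np.where(pi > 0, pi, 1.0)  # replace zeros before taking the log
S = -np.sum(pi * np.log(safe))    # zero-probability states contribute 0

assert np.isfinite(S)
```

The never-visited state contributes nothing to the entropy, which matches the suggestion to simply drop it from the process.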

17.
edited October 2012

I think I misunderstood you on what is meant by the fluctuation theorem:

$\mathrm{Prob}[\Delta i-\Sigma=-A]=e^{-A}\mathrm{Prob}[\Delta i-\Sigma=A]$

I interpreted it as: the probability that at a time $T$ the quantity $\Delta i-\Sigma$ has the value $-A$ is $e^{-A}$ times the probability that this quantity is $+A$ (with $A \geq 0$ a finite number).

However, this interpretation doesn't really seem to work if $\Delta i$ is infinite. Or do you include $A= -\infty$, and can you then show that the probabilities for this are zero? (While in addition assuming that $e^{\infty}\cdot 0$ converges to 0, which looks troublesome.)

I should add that the above interpretation looks strange also for other reasons.

18.

I interpreted it as: the probability that at a time $T$ the quantity $\Delta i-\Sigma$ has the value $-A$ is $e^{-A}$ times the probability that this quantity is $+A$ (with $A \geq 0$ a finite number)

That's right.

however, this interpretation doesn't really seem to work if $\Delta i$ is infinite.

$\Delta i$ is never infinite: if a given trajectory $\omega$ starts in state $a_0$ and ends in state $a_N$, then clearly $\pi_{a_0}(0)$ and $\pi_{a_N}(T)$ must be non-zero, hence $\Delta i(\omega)$ is finite!

I should add that the above interpretation looks strange also for other reasons.

Do tell us!

19.
edited October 2012

Where is it excluded that $\pi_{a_0} (0) = 0$? Do you demand that $\pi_{a_0} (0) \neq 0$?

20.

Gee, I wish you folks were having this discussion on the blog, where approximately 20-50 times more people would read it! I'm learning things, and so would lots of other people.

The idea of the "Blog" section here on the Forum is to develop blog articles. Once they're published, the idea is to talk about them on the blog.

I can copy these comments onto the blog...

21.

I wish you folks were having this discussion on the blog, where approximately 20-50 times more people would read it! I’m learning things, and so would lots of other people.

I guess Matteo and I, or at least I :), have enjoyed the little "privacy" of this thread. In particular, one can easily get distracted if other people interrupt a discussion. Moreover, as pointed out above, writing LaTeX on the blog is a bit too cumbersome.

22.

The LaTeX on the blog was also an issue for me, because I don't use it enough to remember it well. One way out is to take a programming-style approach and write InnerProduct(a,b) instead of $\langle a,b \rangle$. Or you can learn it by example: if you select any section of text from the browser display and paste it into a text editor, it shows the source with the LaTeX. I just learned this, and it has given me some motivation to relearn at least the basics.

23.
edited October 2012

Or you can learn it by example: if you select any section of text from the browser display

Well, of course one could try out the LaTeX here and then paste it over to the blog, as John had suggested.

24.
edited October 2012

The LaTeX on the blog was also an issue for me...

I'm always happy to fix people's LaTeX and/or explain how to use LaTeX on the blog. But as you note, you can just cut-and-paste any equation from the blog and stick it inside

`$latex ... $`

and it will work on the blog. If you stick it inside

`$ ... $`

or

`$$ ... $$`

it should work here, modulo certain subtleties that aren't worth worrying about until they happen.

Still, I want to enable the ability to preview comments on the blog. Apparently this will cost money, though not much... but when I switch to a paid blog I'd also like to have that blog hosted on a server I have more control over... and this gets complicated enough that I keep putting it off. So this issue is one on which I continue to procrastinate.
