
# Short-term memory for El Niño

When working with SVR and NN forecasts of the El Niño anomaly index, I noticed that 36 months of training samples was optimal: longer durations would not improve the accuracy and often lowered it, and shorter durations did the same.
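For concreteness, here is a minimal sketch of that kind of experiment, using scikit-learn's SVR on a synthetic monthly series. The series, lag count, window lengths, and SVR settings are all toy assumptions for illustration, not the actual setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy monthly "anomaly" series: two quasi-periodic components plus noise
# (purely illustrative -- not real ENSO data).
t = np.arange(600)
series = (np.sin(2 * np.pi * t / 28) + 0.5 * np.sin(2 * np.pi * t / 51)
          + 0.2 * rng.standard_normal(t.size))

def one_step_errors(series, train_months, n_lags=6, n_tests=100):
    """Train an SVR on only the last `train_months` points before each
    test month and average the one-step-ahead absolute error."""
    errs = []
    for end in range(len(series) - n_tests, len(series)):
        start = end - train_months
        # Lagged windows as features, next value as target.
        X = np.array([series[i:i + n_lags]
                      for i in range(start, end - n_lags)])
        y = series[start + n_lags:end]
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        pred = model.predict(series[end - n_lags:end].reshape(1, -1))[0]
        errs.append(abs(pred - series[end]))
    return float(np.mean(errs))

for w in (12, 36, 120):
    print(w, round(one_step_errors(series, w), 3))
```

Comparing the error across window lengths on such a controlled series is one way to see whether the "optimal training window" effect is in the data or in the method.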

I have seen this kind of short-term memory in stock market signals as well.

But my main wonder is: if there is such a memory, where is the structure that maintains it? For example, materials with shape memory, actual digital memory systems, and human neuronal memory all have a structure that processes and maintains that memory. But where is the atmospheric memory system, and what is it composed of?

Dara

PS. Paul would this qualify as crackpot science?

1.

Dara, my own working model is that external forcing is acting as a running boundary condition which guides the solution.

The shortest-scale forcing that I see is the 28-month average QBO period. The characteristic response period is 51 months.

These are oscillatory forcings and responses, so unless you have lots of coefficients in the expansion of the extrapolated function, the prediction will likely diverge past a few years.

The trend is from memory-less Markov behavior toward more deterministic quasi-periodic behavior.

What I would recommend is to create a trial function that you can express analytically and see if your method can track it.

A symbolic regression machine learning tool such as Eureqa is very difficult to fake out with trial solutions because it relentlessly tries out all possible sinusoidal combinations until it finds the one you picked.
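Here is a minimal sketch of the trial-function idea: build an analytic quasi-periodic signal from the two periods mentioned above, fit a sinusoidal basis on the first half, and see whether the extrapolation tracks it. The amplitudes, phases, and the "wrong" comparison periods are arbitrary choices for illustration:

```python
import numpy as np

# Trial function with the two periods mentioned above (28 and 51 months);
# amplitudes and phases are arbitrary illustrative choices.
t = np.arange(240)
trial = np.sin(2 * np.pi * t / 28) + 0.7 * np.sin(2 * np.pi * t / 51 + 0.5)

def fit_and_extrapolate(periods, t_fit, y_fit, t_all):
    """Least-squares fit of a constant plus sin/cos at the given periods,
    evaluated over t_all (i.e., extrapolated past the fit interval)."""
    def basis(ts):
        cols = [np.ones_like(ts, dtype=float)]
        for p in periods:
            cols += [np.sin(2 * np.pi * ts / p), np.cos(2 * np.pi * ts / p)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(basis(t_fit), y_fit, rcond=None)
    return basis(t_all) @ coef

# Fit on the first 120 months, then compare on the held-out second half.
pred_right = fit_and_extrapolate([28, 51], t[:120], trial[:120], t)
pred_wrong = fit_and_extrapolate([24, 60], t[:120], trial[:120], t)

err_right = np.max(np.abs(pred_right[120:] - trial[120:]))
err_wrong = np.max(np.abs(pred_wrong[120:] - trial[120:]))
print(err_right, err_wrong)
```

With the correct periods the extrapolation stays on the trial function; with wrong periods it diverges past the fitted interval, which is the divergence-after-a-few-years behavior described above.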

2.

> The shortest scale forcing that I see is the 28 month average QBO period.

I see this short-term memory in the US stock market for similar forecasts I issued here. And there is no periodicity in stock prices, though!

I will give you an example: a waiter memorizes the locations, orders, and complaints of customers for the past hour or two... he cannot remember longer, maybe a specific situation here or there, but the detailed memory has a short duration.

Given this memory of 2 hours, the waiter could forecast what orders the customers might place and what the income would be, and these forecasts are not that bad in accuracy!

If you gave the waiter the data on the customers for the past 2 years, it would not improve his forecasts; it might even misguide him and cause him to misjudge the amounts.

I would then tell you that the neuronal circuits in his skull contain the memory, but what stores the short-term memory in El Niño forecasts?

For the stock market, the memory is stored in the public releases of information and the **swarming** data of the high-frequency trading of machines and men, so I know where that memory comes from and what it is made of.

Dara

3.

I think what I am saying is that the short-term forecasts are good in the sense of a *dead-reckoning* algorithm. They can see where the trend is headed, but they can't see the bend in the curve that lies too far ahead. In other words, the model for that is non-existent. And that is where the 2-to-4-year time frame is important, because that gives the forcing and characteristic period of the physical mechanisms. Beyond this point, the data will inflect. These pseudo-cyclic periods may or may not be extracted by the machine learning.

Perhaps when the learning interval is too long, it simply averages out the cycles, leaving nothing to dead-reckon with. At least with a short interval, it can see where the data is going over the short term.
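The averaging-out idea can be sketched with an even simpler stand-in for the learner: a straight-line fit extrapolated one step ahead on a pure cycle. The period and window lengths are toy values. A short window tracks the local slope, while a window spanning whole cycles fits a slope near zero and loses the trend:

```python
import numpy as np

# Toy cyclic signal with the 51-month characteristic period mentioned above.
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 51)

def linear_one_step_error(window):
    """Dead-reckon one month ahead with a straight-line fit over the last
    `window` months; return the mean absolute error over 100 test months."""
    errs = []
    for end in range(200, 300):
        ts = t[end - window:end]
        slope, intercept = np.polyfit(ts, signal[end - window:end], 1)
        errs.append(abs(slope * t[end] + intercept - signal[end]))
    return float(np.mean(errs))

# Short window vs. a window spanning two full cycles (2 * 51 months).
print(linear_one_step_error(6), linear_one_step_error(102))
```

The long-window fit averages the cycle to roughly zero, so its one-step error approaches the signal's own amplitude, while the short window stays close.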
