
# SVR Forecast: Darwin Delta Anomalies

Hello John, and hello to everyone.

SVR forecast for the Darwin delta anomalies.

I could post the images, data, and forecasts later on.

This is a hack for now; I am sure there are some errors and much room for improvement.

As you read this write-up, you will notice how different the backtesting lingo is from that of the climatologists' forecast papers you may have read.

300 months of Darwin data were used for backtesting, and statistics were then applied to understand the forecasts postmortem, as opposed to only the "last 2 years". No moving averages; all stats are postmortem. A 16-core CPU server runs for 2 hours to produce each backtesting batch with no errors, so on a regular desktop each batch might take about 2 days!
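To make the setup concrete, here is a minimal, hedged sketch of a walk-forward SVR backtest in Python with scikit-learn. The series is synthetic (the real runs used 300 months of Darwin delta anomalies), and the kernel, C, epsilon, and the 12-month lag window are illustrative assumptions, not the settings used in the actual batches:

```python
# Walk-forward backtest sketch: fit a fresh SVR at each step, predict
# one month ahead, and compute all statistics postmortem on the
# collected predictions -- no moving averages anywhere.
# NOTE: the series here is synthetic; the real runs used 300 months
# of Darwin delta anomalies.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
months = 300
series = np.sin(np.arange(months) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(months)

LAGS = 12  # assumed feature window: 12 lagged months per sample
X = np.array([series[i:i + LAGS] for i in range(months - LAGS)])
y = series[LAGS:]

start = len(y) - 100  # backtest over the final 100 months
preds = []
for t in range(start, len(y)):
    model = SVR(kernel="rbf", C=10.0, epsilon=0.05)  # assumed hyperparameters
    model.fit(X[:t], y[:t])            # train only on data before month t
    preds.append(model.predict(X[t:t + 1])[0])

preds = np.array(preds)
rmse = float(np.sqrt(np.mean((preds - y[start:]) ** 2)))  # postmortem stat
print(f"postmortem RMSE over {len(preds)} backtest months: {rmse:.3f}")
```

Refitting the model at every step is what makes this expensive; the per-batch server times quoted above are consistent with that kind of loop.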

All the papers you are reading FUDGE their input data for the forecast; you will see a FUDGE section in the write-up, and this practice needs to be disclosed and discussed technically and honestly.

John, I could not help but add Remark 1 on p. 5... to sound theoretical-physicist-ish.

There are no conclusions; I could be full of bologna, or others' comments could be off. The idea is:

ONE. To produce REPRODUCIBLE computations whose inferences can be checked for veracity by third parties. This is a cornerstone of all scientific investigation, which I find more and more lacking, especially in some of the papers on forecasts for El Niño.

TWO. To be 100% computer-science driven and free of preconceived notions about climate.

I am burning the midnight oil; I will review all the computations over this weekend and report back in case of error.

Dara

1.

I hope this serves as a template for the kind of computing we need to deliver in order to get meaningful, applicable results. I try to avoid the endless rehashing of thoughts about how or why, and instead actually use these forecast models, if they work well enough to build on.

Just a start

2.

I tried larger training sets, up to 100 months, and it seems 12 months is the right size, which in my code is actually 12+1.

This matches something I have come across in my Machine Learning studies, i.e. short-term memory. So 12 months is the short-term memory for the Darwin delta anomalies. This number of course depends on the Machine Learning algorithm, just as any memory system depends on the technology used for storage.
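A hedged sketch of this window-size experiment: score a one-step-ahead SVR backtest for several candidate memory lengths w, where each sample is w lagged months of input plus 1 target month (the "12+1" pattern). The data here are synthetic stand-ins for the Darwin delta anomalies, and the hyperparameters are illustrative assumptions, so the winning w need not be 12 on this toy series:

```python
# Compare candidate short-term-memory lengths by one-step-ahead
# walk-forward backtest error. Synthetic data; illustrative SVR settings.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
series = np.sin(np.arange(300) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(300)

def backtest_rmse(series, w, n_test=60):
    """Walk-forward RMSE using w lagged months as input, 1 month as target."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    errs = []
    for t in range(len(y) - n_test, len(y)):
        model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
        model.fit(X[:t], y[:t])  # train only on months before t
        errs.append(model.predict(X[t:t + 1])[0] - y[t])
    return float(np.sqrt(np.mean(np.square(errs))))

scores = {w: backtest_rmse(series, w) for w in (6, 12, 24, 48)}
best = min(scores, key=scores.get)
print("RMSE by window size:", scores, "| best:", best)
```

The same loop, run on the real anomaly series, is one way a "12 is the size" conclusion could be checked by third parties.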
