Something I remembered from our earlier Scikit experience: many of their APIs expect arguments that are standardized, roughly N(0,1) per feature, so one needs to normalize the parameters, e.g. into the range [-1,+1]. I also suspect that some of their code does this internally, which makes the results a disaster to interpret. That was one key reason we decided to code our own algorithms.
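As a sketch of the two normalizations mentioned above (z-score standardization and min-max scaling into [-1,+1]), here is a minimal numpy version; the function names are mine, not from any library, though scikit-learn's StandardScaler and MinMaxScaler do the same thing:

```python
import numpy as np

def standardize(X):
    # z-score per column: zero mean, unit variance, the kind of
    # input many scikit-style estimators implicitly assume
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (X - mu) / sigma

def minmax_pm1(X):
    # rescale each column linearly into the range [-1, +1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - lo) / (hi - lo) - 1.0

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Xs = standardize(X)
Xm = minmax_pm1(X)
```

Doing this explicitly, instead of letting the library do it silently, keeps the fitted coefficients in units you can actually interpret.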
For example, with neural networks our algorithm works best on normalized inputs, but normalizing changes the learning completely, so I added a scalar the programmer can use to scale the input and output for fine tuning; the same applies to SVR.
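The scale-factor idea above could look something like the following wrapper; this is my own sketch, not the actual implementation, and the parameter names (in_scale, out_scale) are made up for illustration:

```python
import numpy as np

class ScaledRegressor:
    """Wrap any regressor with programmer-tunable input/output scale
    factors, so scaling can be adjusted without touching the model.
    Hypothetical parameter names: in_scale, out_scale."""

    def __init__(self, model, in_scale=1.0, out_scale=1.0):
        self.model = model
        self.in_scale = in_scale
        self.out_scale = out_scale

    def fit(self, X, y):
        # train on the scaled problem the model actually sees
        self.model.fit(X * self.in_scale, y * self.out_scale)
        return self

    def predict(self, X):
        # undo the output scaling so predictions are in original units
        return self.model.predict(X * self.in_scale) / self.out_scale

class LinReg:
    """Toy least-squares model standing in for the NN or SVR."""

    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, X):
        return X @ self.w

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
scaled = ScaledRegressor(LinReg(), in_scale=0.1, out_scale=10.0).fit(X, y)
pred = scaled.predict(X)
```

For an exactly linear model the scale factors cancel out, so this wrapper only changes behavior for models (like an NN or SVR) whose training is sensitive to the magnitude of the inputs, which is exactly where the tuning knob helps.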
I also noted that some of the plotting routines in stats and scientific packages smooth the output without notifying the programmer, which is another disaster to deal with.
Just a note to be cautious about interpreting the results and not to form conclusions too quickly.