Hi John, I can see where you're coming from. I guess two key points
are (1) I'm not really the best guy to try running pre-canned ML code
on problems and (2) I suffer from having been an academic working on
ML problems for a few years. As such, one of the natural questions I
always have is "if this computerized fitting procedure doesn't produce
an accurate predictor, what can I do in that case?" (It's no great
fun when a canned software package mysteriously produces mediocre results.) I understand regression and sparsity well enough to at least look at the results, explain why the fit failed, and maybe even see how to tweak things and have another go.

My experience with a lot of other models isn't that great: if I were to run a two-level neural network on the problem and got back a poor model that hadn't convincingly detected any features in the data, I wouldn't know what to do about that (beyond trying a completely different model!). So, as well as me being generally a bit flaky, I think what I'm looking at is probably my most likely avenue for making a helpful contribution.
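To make concrete the kind of post-mortem I mean, here's a minimal sketch (assuming scikit-learn is available; the data and the alpha value are entirely made up for illustration) of fitting a sparse regression and then looking at which coefficients survived, which is the sort of inspection that lets you explain a failure:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 20 candidate features, only the first 3 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_coef = np.zeros(20)
true_coef[:3] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.normal(size=100)

# Fit a lasso; alpha=0.1 is an arbitrary choice you'd normally sweep.
model = Lasso(alpha=0.1).fit(X, y)

# Inspect the sparsity pattern: which features did the fit keep?
selected = np.flatnonzero(model.coef_)
print("nonzero coefficients at indices:", selected)
# If the wrong features come back, you can look at correlations
# among the columns of X, or sweep alpha, to diagnose why.
```

That last step is the point: when this kind of model does badly, the coefficient vector itself tells you something about what went wrong, in a way a poorly-trained neural network's weights generally don't.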

Maybe some other Azimuthans are more familiar with the properties
of other ML toolkits and could help the project by running other
models?