Saturday Rainbow Pick 6

Started by sekrah, March 25, 2016, 05:06:36 PM


mjellish

Thanks, Rocky. Appreciate your thoughts.

covelj70

Frank,

I hear you on this point, and you're right, but by that logic Sky by Sky had a terrible trip as well, and if that one had gotten a clean trip and gotten up over Harmonize, the payout would have been much larger.

Hope you are well.

covelj70

Sorry, I meant Shake Down Baby, not Sky by Sky. Too many three-word horses in that race...

TGJB

Both had lots of trouble, and boy, would it have paid more with either. I suspect for both of us.
TGJB

FrankD.

Welcome back, Jim,

It's now officially Triple Crown season. What took you so long? You usually get Derby fever around New Year's Eve!

All is well here, I hope you and yours are doing great.

belmont3

Math,

Great stuff.
Have you done any work with the Kullback-Leibler Divergence?
I read something on this a while back and, as I recall, it goes something like this (as applied to horse racing or other sports betting; there's a quick numerical sketch just below):

A bettor earns what he would if he won every race,
minus a degree of uncertainty (Frank's horse gets left at the gate, etc.),
minus the difference between the race probabilities as determined by the bettor and the 'true' probabilities.
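
If I'm remembering the source right, that's the decomposition in Cover & Thomas's chapter on gambling: betting your own estimated probabilities at uniform fair odds of m-for-1 in an m-horse race, the log-growth of the bankroll per race is log2(m) minus the entropy of the race minus the Kullback-Leibler divergence between the true probabilities and your line. A quick numerical check in Python, with invented numbers:

import numpy as np

p = np.array([0.50, 0.25, 0.15, 0.10])   # "true" win probabilities (made up)
b = np.array([0.40, 0.30, 0.20, 0.10])   # the bettor's estimated line (made up)
m = len(p)                               # field size; fair odds are m-for-1

growth = np.sum(p * np.log2(b * m))      # expected log2-growth per race
H = -np.sum(p * np.log2(p))              # entropy: the uncertainty of the race
D = np.sum(p * np.log2(p / b))           # Kullback-Leibler divergence D(p||b)

print(growth)                            # ~0.224
print(np.log2(m) - H - D)                # same value: the three-term decomposition

The closer the line b gets to the true p, the smaller D gets and the faster the bankroll grows; a perfect line leaves only log2(m) - H(p).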



If I remember this right, Kullback (as applied to horse race handicapping) relates to the "probability" calculation that bettors make (some intuitively and some methodically) prior to wagering.
Most on this site use TG performance figs along with other considerations such as TG pattern, trainer stats, surface, distance, medication, equipment changes, etc., in making their calculation, which is then expressed in their wager.
(Some, like mjellish, are able to eloquently explain their probability calculations in clear, concise handicapping narratives.)
This "bettor" probability calculation is then juxtaposed against the "true" probabilities.
The divergence can be large or small. (Lately mine have been supersized!) LOL

I may not be 100% accurate on this topic and am interested in your feedback.

In my mind, this also relates to the algo bettors as they seek to reduce the measure of uncertainty in their wagers. But just how do they go about determining the "true" probability?

Bob

Mathcapper

Bob – have to say I don't know much about the Kullback-Leibler Divergence, other than that it's a measure of "relative entropy," what you lose when you try to approximate a true distribution with a theoretical one.
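
In code terms, that "loss" is the gap between the cost of acting on your approximation and the unavoidable uncertainty of the thing itself. A toy illustration with invented numbers:

import numpy as np

p = np.array([0.50, 0.25, 0.15, 0.10])   # the "true" distribution (made up)
q = np.array([0.25, 0.25, 0.25, 0.25])   # a crude uniform approximation of it

cross = -np.sum(p * np.log2(q))          # average cost of acting as if q were true
H = -np.sum(p * np.log2(p))              # the unavoidable cost under the truth
print(cross - H)                         # ~0.26 bits: the divergence D(p||q)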

I think your reference to its application in horse race handicapping is apropos. Good handicappers, whether they do it intuitively in their heads or by using sophisticated computer models, create some type of "fair odds" line, be it rudimentary or otherwise, that they use to approximate the "true" win probabilities and hence determine value.
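
As a minimal, made-up illustration of that (hypothetical probabilities and board odds, nobody's actual line):

# A handicapper's estimated win probabilities for a hypothetical 4-horse
# race, and the board (tote) odds-to-1 at post time. All numbers invented.
my_probs   = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}
board_odds = {"A": 1.5,  "B": 2.5,  "C": 3.0,  "D": 8.0}

for horse, p in my_probs.items():
    fair = (1.0 - p) / p                     # probability -> fair odds-to-1
    implied = 1.0 / (board_odds[horse] + 1)  # board odds -> implied probability
    if board_odds[horse] > fair:
        print(f"{horse}: overlay -- my {p:.0%} vs. the board's implied {implied:.1%}")

The board's implied probabilities here sum to more than 100% because of takeout, which is why the line has to be genuinely better than the crowd's, not just different from it.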

The computer guys take this to the extreme with their sophisticated multinomial logit and probit models, but they're still using a theoretical model to try to estimate the "true" probability distributions. They do it empirically by comparing the probabilities computed by their theoretical models to the actual observed probability rates over time. If and when a computer team can get the probabilities computed by their models to coincide with the observed rates with a high enough degree of accuracy, and their predictions are consistent across the set of observations they've deemed as overlays, then it's game, set and match.

The problem is that most horseplayers and wannabe programmers have not or cannot compute probability lines on overlays that are proven to consistently perform as predicted over time. They may think that the even money shots they've identified that go off at 3-1 on the board win at a 50% rate, but when they go and look at all such cases over time, they find that those horses are actually winning at closer to a 25% rate.
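
Checking that is mechanical once the plays are logged. A toy version, with a fabricated wager log and made-up column names:

import pandas as pd

# A fabricated log of past wagers: the model's predicted win probability
# for each bet, and whether the horse actually won.
log = pd.DataFrame({
    "pred_prob": [0.52, 0.48, 0.50, 0.51, 0.49, 0.47, 0.53, 0.50],
    "won":       [0,    1,    0,    0,    1,    0,    0,    0],
})

# Bucket the predictions and compare predicted vs. observed win rates.
log["bucket"] = pd.cut(log["pred_prob"], bins=[0.0, 0.4, 0.6, 1.0])
print(log.groupby("bucket", observed=True).agg(
    predicted=("pred_prob", "mean"),   # ~0.50 for this bucket
    observed=("won", "mean"),          # 0.25: half the predicted rate
    plays=("won", "size"),
))

Eight fabricated rows make the point; in practice the comparison only means something over hundreds or thousands of logged plays.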

This is the crux of the problem. The most successful computer teams (and some professional handicappers, perhaps on this board) have solved it. But it takes a lot of time, a lot of comparing of theoretical predictions vs. actual observations, to get to the point where there's enough certainty that the "Kullback-Leibler divergence" is small enough that a consistent profit can be generated over time.

It's probably no coincidence that Alan Woods, the co-developer of the renowned Benter/Woods model, went by the handle "Entropy" on the Pace Advantage board.

Be happy to discuss it more over a couple of beers in the backyard at Saratoga some afternoon. ;-)

Rocky

belmont3

Thanks Rocky.

You are certainly:

\"The Most Interesting Horseplayer in the Backyard\"

Here\'s a story on the late Alan Woods: (sure there re many others)



http://www.cigaraficionado.com/webfeatures/show/id/Gambling-The-Hundred-and-Fifty-Million-Dollar-Man_8366

Beers are on me :)

covelj70

Frank,

Thanks, appreciate that. Everyone's great on our end.

\"Every time I try to get out....they pull me back in....\"

Got a new role at work that really cuts into my play time while I'm working, but some things are more important than work, and the Triple Crown is one of those things...

Boscar Obarra

Haven't seen this mentioned, but as a measure of how 'hard' it was to hit, only 1 out of every 364,000 tickets purchased was a winner. Byk makes it sound like all you have to do is get a few friends together at a picnic table to score ;-)

Maybe. Depends on who the friends are.