
I have an academic background in psychology and the one lesson that my undergraduate course taught me was that human beings are absolutely useless at making decisions. We Homo sapiens simply do not have the faintest idea how to process information in an objective and systematic way. This makes us useless when it comes to working out the true odds of something happening. A good example of this comes from surveys of lottery players. They all believe that the odds of winning the jackpot are less than a thousand to one against, and are amazed when told that the true statistical odds are about 14 million to 1!

Human beings are particularly bad when we are confronted with a range of information upon which to make a decision. In these circumstances we tend to place too much emphasis on one piece of evidence and not enough on others. In some instances we actually ignore key pieces of information because we do not understand them, or because they do not fit with our preconceived theories about how the world works.

We are also heavily influenced by the source of the information, believing that a piece of information must be true because we like the look of the person saying it or the sound of their voice. Indeed, research shows that people are more likely to believe someone speaking in a Yorkshire accent than anyone else, regardless of whether they are telling the truth or not. Apparently Yorkshire folk are seen as more trustworthy (which will please my father-in-law), but on the grounds that they are less intelligent and therefore less cunning (I haven’t mentioned this to him!).

Two psychologists called Paul Meehl and William Grove produced the most comprehensive research on the decision-making abilities of human beings that I have found. They set about assessing the decision-making ability of professional experts working in fields as diverse as criminal justice, clinical psychology, and education, by comparing the accuracy of their decisions with those made by statistical models.

The term ‘statistical model’ implies a high level of sophistication, and one assumes that such models require a great deal of computation and an honours degree in mathematics to comprehend. However, the models reviewed by Meehl and Grove were remarkably simple systems. In many instances they required only two or three inputs. In the case of one model, which tried to predict whether or not an offender would reoffend on release from prison, one only had to input the age of the offender and their number of previous convictions. Points were awarded for offenders of a certain age, and additional points were awarded for the number of previous convictions. A simple rule was then applied: if the offender scored more than a given number of points, they were judged more likely to reoffend than an offender with fewer points (I’ve purchased betting systems that work in the same way).
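To make this concrete, here is a minimal sketch, in Python, of how such a two-input points model might work. The point values and the cut-off are entirely my own illustrative guesses; the actual figures used in the study are not given here.

```python
# A minimal sketch of the kind of two-input points model described above.
# The point values and the threshold are illustrative guesses only.

def reoffending_points(age, previous_convictions):
    """Score an offender: younger offenders and those with more previous
    convictions accumulate more points."""
    points = 0
    if age < 25:                      # hypothetical age band
        points += 2
    elif age < 35:                    # hypothetical age band
        points += 1
    points += min(previous_convictions, 5)   # cap the contribution (arbitrary)
    return points

def likely_to_reoffend(age, previous_convictions, threshold=4):
    """The simple rule: more points than the threshold means the offender is
    judged more likely to reoffend than one with fewer points."""
    return reoffending_points(age, previous_convictions) > threshold

# Example: a 22-year-old with three previous convictions scores 2 + 3 = 5 points.
print(likely_to_reoffend(22, 3))      # True under these illustrative figures
```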

Meehl and Grove reported on a study of 3,000 criminal offenders given parole, which compared the predictions of the statistical model with the expert opinion of three highly experienced prison psychiatrists. The latter were paid to decide which offenders were a safe bet not to re-offend and which were not. The results were unambiguous. The model, despite its simplicity, proved to be much more accurate in its predictions than the psychiatrists! In fact you would have been better off tossing a coin to decide which offenders should have received parole than relying on the expert opinion of the shrinks. It would certainly have been more cost effective!

In a further study Meehl reported the results of an experiment that compared a two-variable model with the expert opinion of college tutors in predicting academic grades for a large group of undergraduates. In this instance the college tutors thought that they had a great advantage over the model: they not only had access to the two pieces of information used by the model (both known from previous research to be predictors of college academic grades), but also to a good deal of additional information that one would usually consider relevant. This supplementary information included data on students’ IQ, previous academic record, and a written report from each student on their academic and vocational interests. In addition the college tutors had the opportunity to interview the students prior to making their grade predictions. However, despite being in receipt of all this additional information, the college tutors backed more losers than winners. Their grade predictions were greatly inferior to those of the statistical model!

This finding is interesting because it shows how difficult we humans find it to process large amounts of information and to weight the importance of each item of information accurately. The college tutors simply had too much information at their disposal and got distracted by irrelevant data, or failed to appreciate the full importance of key information.

Meehl and Grove, after reviewing the results of these two studies and a further 134 studies across a range of professional fields, concluded that even crude statistical models (or what I call systems) are superior to expert opinion in making probabilistic judgements. In their view, when making an odds call about whether or not an offender will re-offend, or about the likelihood of a student attaining a certain grade, you would be better off relying on a well-researched statistical model or system than on the judgement of human experts.

This is a lesson not lost on the insurance or financial sectors. The insurance industry has for years been using what it calls actuarial models to judge whether someone is a good insurance risk. These actuarial models are basically betting systems for working out whether or not you are a good insurance risk. You may not know it, but these systems are used all the time. For instance, every time I apply for my car insurance to be renewed I am asked questions about my past form. I’m asked my age, my driving experience, my address, whether or not I store my car in a garage, and whether or not I have had an accident or had a car stolen. The answers I give to these questions all feed into some sort of statistical model that computes the probability of me making a claim on my insurance policy, and that probability determines the premium I will pay. Should the model conclude that I am a safe bet then my premiums will be low, but if I am deemed odds-on to make a claim then my premiums will be high, or the company may refuse to insure me because I’m judged to be a bad risk.
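For illustration only, here is a rough Python sketch of the kind of actuarial scoring I have in mind. The factors, weights and premium loading are invented; a real insurer’s model is proprietary and far more detailed.

```python
# An illustrative sketch of the kind of actuarial scoring described above.
# The factors, weights and premium loading are invented for illustration only.

def claim_risk_score(age, years_driving, garaged, previous_claims):
    """Combine a few renewal-form answers into a single risk score."""
    score = 0.0
    if age < 25:
        score += 3.0                          # young drivers assumed higher risk
    score += max(0, 5 - years_driving) * 0.5  # little experience adds risk
    if not garaged:
        score += 1.0                          # car kept on the street
    score += previous_claims * 2.0            # past form counts most of all
    return score

def annual_premium(base_premium, score):
    """Map the risk score to a premium; refuse cover if the score is too high."""
    if score > 8.0:
        return None                           # judged a bad risk
    return round(base_premium * (1.0 + 0.15 * score), 2)

print(annual_premium(400.0, claim_risk_score(45, 20, True, 0)))    # low premium: 400.0
print(annual_premium(400.0, claim_risk_score(22, 2, False, 1)))    # higher premium: 850.0
```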

Similarly credit card companies ask for a whole range of information upon which to base a credit assessment. The type of information that they use to make their prediction is not determined at random. A huge amount of investment is made into researching which items of data best predict whether or not I’m likely to default on a loan.
 
You may well wonder what on earth this has got to do with using betting systems to pick winners. My answer is that if human experts can make the wrong odds call in fields as diverse as education and psychiatry, then they can also be badly wrong when it comes to betting on the horses. This is fortunate for bookmakers (that more rational member of the human species) and explains why, for every pound wagered on the favourite, the bookie returns less than 90p to the punter. This salutary statistic demonstrates the challenge facing punters in their bid to make the racing game pay.

In my view most punters lose because they do not appreciate a horse’s true odds of winning. In other words, they are prepared to back an even-money favourite when in fact its actual chance of winning is considerably worse than 50-50. This comes back to the point made by Meehl and Grove: we humans are not good at working out probabilities because we do not systematically review all the evidence available, and we weight the information that we do process inappropriately. This definitely applies to betting on the horses. You only need to open the form pages of the Racing Post or the racing pages of any national newspaper to be confronted with masses of information and opinion on which horse will win a race. The sheer volume of information and opinion makes it difficult to work out which pieces are worth considering and which are not.
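To put some illustrative numbers on the even-money example, here is the arithmetic in a few lines of Python. The 40 per cent ‘true’ chance is an assumption, chosen purely to show how backing a falsely priced favourite loses money in the long run.

```python
# Worked arithmetic for the even-money example (figures are assumptions).
# Backing at even money (decimal odds of 2.0) only breaks even in the long
# run if the horse's true chance of winning is at least 50 per cent.

stake = 1.0
decimal_odds = 2.0          # "even money"
true_win_probability = 0.4  # suppose the real chance is only 40 per cent

expected_return = true_win_probability * decimal_odds * stake
print(expected_return)      # 0.8, roughly 80p back for every pound staked
```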

All of this data and opinion is not of equal value. In a statistical sense some pieces of information are closely related to a horse’s true odds of winning while others are totally unrelated or of minimal relevance. This raises the question of which is which. In my firm opinion the answer can only come from a systematic statistical analysis of all the key variables in the form book, and from using the results of that analysis to generate objective betting systems.
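As a sketch of the sort of analysis I mean, the snippet below fits a simple statistical model (a logistic regression, one common choice) to a handful of made-up form-book variables and past results. The variable names and the figures are hypothetical; the point is the method of letting the data decide the weights, not the numbers themselves.

```python
# A sketch of the analysis being argued for: fit a simple statistical model to
# past results to learn how a few form-book variables relate to winning, then
# use the fitted weights as the basis for objective system rules.  The variable
# names and the toy data are hypothetical; only the method matters here.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past runner: [days since last run, last finishing position,
# weight carried in lb]; y = 1 if it won, 0 otherwise.
X = np.array([
    [14, 1, 130],
    [60, 5, 126],
    [21, 2, 132],
    [90, 8, 124],
    [10, 1, 128],
    [45, 6, 131],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The fitted coefficients show which variables matter and in which direction:
# the raw material for writing an objective selection rule.
print(dict(zip(["days_since_run", "last_position", "weight_lb"], model.coef_[0])))

# Score today's runner and only bet if the estimated chance clears a cut-off.
todays_runner = np.array([[12, 1, 129]])
print(model.predict_proba(todays_runner)[0][1])   # estimated win probability
```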

This cold, calculating approach can be the only path to success because it takes the human element out as far as possible! This is why insurance and credit card companies invest heavily in developing the equivalent of betting systems to work out our chances of crashing our car or defaulting on a loan. In my view this shouldn’t be lost on punters. If multi-billion pound industries trust systems more than subjective human opinion when making decisions, then punters should take notice. The successful punter will take heed and will spend his or her time developing and using systems rather than trying to work out the winners for themselves.

If you are a systems purist like me, the ideal scenario would be one in which you collected all the data needed to apply a particular system, worked out which horse qualified under the system, and then placed a bet whose size had been carefully calculated. With this approach you are not distracted by the fact that your mate down the pub knows a guy, who knows another guy, who thinks such and such a horse is a good thing in the 4.30.

In the systematic approach you know that your bet has been arrived at by a careful statistical analysis of all the key data, and that the odds are in your favour. The level of your stake has been carefully calculated. It has not been determined by personal psychological factors, such as the fact that you are having your tenth consecutive losing bet and your confidence is shot to bits, or (and this is much worse) that you are on a winning streak and feel that you can do no wrong. The systematic approach safeguards you against all these emotions. It protects you from making that one-off bet that was out of proportion to every other bet you ever made, or from only staking two quid on that 33 to 1 winner you had the other day when your normal stake is a score. This is why betting systems are the only way to make your racing pay. They protect us from ourselves.
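By way of illustration, here is a minimal Python sketch of rule-based staking. The 2 per cent figure is an assumption rather than a recommendation; the point is that the stake follows a rule agreed in advance, not the mood of the moment.

```python
# A minimal sketch of rule-based staking.  The 2 per cent figure is an
# illustrative assumption; the point is that the stake follows a fixed rule
# rather than the mood of the moment.

def stake(betting_bank, fraction=0.02):
    """Stake a fixed fraction of the current betting bank on every qualifier."""
    return round(betting_bank * fraction, 2)

print(stake(1000.0))   # 20.0, the same rule whether you are up or down
print(stake(850.0))    # 17.0, the stake shrinks with the bank, never on a whim
```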

Not everyone believes in betting systems like I do. I know that Nick Mordin, in his book ‘Winning Without Thinking’, said that he doesn’t use betting systems in a rigid manner but instead prefers to use them as an aid when making a selection. I should say that I disagree. I’m a systems purist and prefer to stick strictly to the rules of a system, regardless of any other considerations. I’m not into making subjective judgements. Boring? Not at all. Excitement comes from winning.