AFLM Tips

Displayed on this page are the predictions and results of my AFLM tipping model so far in season 2024. Tips are released on a round-by-round basis and may be updated several times during the week. An FAQ is available at the bottom of the page.



FAQ

How does it work?

There are three elements to each prediction in the table: tip, the team the model is tipping to win; margin, the number of points the model is tipping that team to win by; and chance, the percentage likelihood of the tipped team winning by any margin. The table also includes methods of evaluating all three elements of the tip; these are discussed in detail in the other FAQs.
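As a rough illustration, a single prediction can be thought of as a record with those three fields. The team and numbers below are invented for the example, not real tips:

    # A hypothetical prediction, purely for illustration - not an actual tip.
    prediction = {
        "tip": "Geelong",   # team the model expects to win
        "margin": 12.5,     # predicted winning margin, in points
        "chance": 0.68,     # probability of the tipped team winning by any margin
    }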

The model is still in development and as such I am not currently releasing the details of how it makes predictions, but I may decide to do so in the future. You are welcome to use the information below as you see fit, but please note that you do so at your own risk. I do not recommend or encourage the use of this information for betting.

What is accuracy?

Accuracy, quite simply, records whether the tip was right or wrong. If the team being tipped to win the game does win, you will see a green tick. For any other result you will see a red cross. Until the result is known, you will see a black question mark.

Most users of a tipping model will be primarily concerned with how often it correctly picks the winning team. Accuracy provides a direct answer to this question.
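In computational terms, accuracy over a set of completed games is just the fraction of tips that turned out to be correct. A minimal sketch, with invented teams and results:

    # Accuracy = correct tips / completed games (data invented for illustration).
    tipped_winners = ["Carlton", "Sydney", "Brisbane", "Essendon"]
    actual_winners = ["Carlton", "Collingwood", "Brisbane", "Essendon"]

    correct = sum(t == a for t, a in zip(tipped_winners, actual_winners))
    accuracy = correct / len(actual_winners)
    print(f"{correct}/{len(actual_winners)} correct ({accuracy:.0%})")   # 3/4 correct (75%)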

However, past accuracy is not always the best predictor of future accuracy. This is why other methods of evaluation are included.

What is MAE?

The acronym MAE stands for Mean Absolute Error.

The error of a tip is the amount by which the real-world margin differs from the predicted margin. For example, if the model tips the home team to win by 20 points, and they instead win by 35, the error for that tip will be 15. If they win by 5, it will be -15.

The absolute error is the same number, but is always a positive value. In the scenario above, whether the home team wins by 35 or 5, the absolute error of the tip will be 15.

Therefore, the mean absolute error is the average number of points by which the margin prediction was wrong. The lower the MAE, the better the model.
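In code, the calculation looks something like the sketch below. Margins are from the tipped team's perspective and the numbers are made up:

    # Mean absolute error over a set of completed games (invented numbers).
    predicted_margins = [20, 8, -3, 15]
    actual_margins = [35, 5, 10, 1]

    errors = [actual - predicted
              for predicted, actual in zip(predicted_margins, actual_margins)]
    mae = sum(abs(e) for e in errors) / len(errors)
    print(errors)   # [15, -3, 13, -14]
    print(mae)      # 11.25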

In comparison to accuracy, MAE is a superior predictor of future accuracy. Why is this?

Imagine a match with tips provided by two models. Model A predicts the home team will win by 2 points. Model B predicts the away team will win by 30 points. The match is played, and the away team wins by 4 points.

Model A better predicted the outcome of the match - it was only 6 points off, while Model B missed by 26. But Model B tipped the right team and Model A the wrong one. MAE would reflect the difference in Model A's favour, whereas accuracy would report Model B as superior, despite its prediction being much further from the actual result.
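Putting numbers on that scenario (positive margins favour the home team):

    # Model A tips the home team by 2; Model B tips the away team by 30.
    # The away team wins by 4, i.e. a home-team margin of -4.
    actual_margin = -4
    model_a, model_b = 2, -30

    print(abs(actual_margin - model_a))   # 6  - Model A's absolute error
    print(abs(actual_margin - model_b))   # 26 - Model B's absolute error
    # On accuracy alone, Model B scores the correct tip and Model A does not,
    # even though Model A's margin was far closer to the real result.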

This is an outcome that, understandably, can feel counter-intuitive. If you’re interested in reading more, I highly recommend Tony Corke’s article on the topic on his website, Matter of Stats.

What are bits?

The bits measurement evaluates ‘chance’, the probability of the tipped team winning the match by any margin.

If the tip is correct, the model is awarded a number of bits, depending on how probable it declared the tip to be. The more confident the model is in its tip, the more bits it receives if the tip is correct.

However, the same is true in reverse. If the tip is wrong, the model loses bits, and the penalty grows with how confident the model was in the faulty prediction. A draw also sees the model lose bits, albeit not as many.

As such, bits provides something of a compromise between accuracy and MAE. It’s a score that goes up or down depending on the real-world outcome of the match, but with the acknowledgement that the confidence attached to a prediction is just as important as the prediction itself.

Bits are used to score the Monash University Probabilistic Footy Tipping Competition. You can find more information on how they are calculated on their website.
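For reference, my understanding of the Monash scoring is roughly the following, where p is the probability assigned to the tipped team. Treat this as a sketch and check the Monash site for the authoritative definition:

    import math

    def bits(p, result):
        # Approximate Monash-style bits score for a tip with probability p.
        # result is "win" (tipped team won), "loss" or "draw".
        # This is my reading of the published formula - verify before relying on it.
        if result == "win":
            return 1 + math.log2(p)
        if result == "loss":
            return 1 + math.log2(1 - p)
        return 1 + 0.5 * math.log2(p * (1 - p))   # draw

    print(round(bits(0.80, "win"), 3))    # 0.678  - confident and correct
    print(round(bits(0.80, "loss"), 3))   # -1.322 - confident and wrong
    print(round(bits(0.80, "draw"), 3))   # -0.322 - a draw costs less than a miss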

What are expected tips?

The model’s expected tips total is the sum of the tipped probabilities for every completed game so far. As the name suggests, it is intended to provide an estimate of how many correct tips the model would expect to have achieved, based on the probability it has assigned to each prediction.

For example, if the model tips that a team has a 67% chance of victory, this would be equivalent to 0.67 expected tips. Therefore in a group of three tips all with a 67% chance, the model would expect to get 2 out of 3 tips correct (or 2.01, to be precise).

If the number of correct tips is significantly higher than the expected tips, this suggests - assuming the model is calculating tip probabilities well - that it is currently overperforming, and has probably gotten a bit lucky with the results of some games. Likewise, if correct tips are significantly lower, it has probably been a bit unlucky.
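A sketch of the calculation, with probabilities and results invented for illustration:

    # Expected tips vs. actual correct tips (invented numbers).
    chances = [0.67, 0.67, 0.67, 0.90]      # probability assigned to each tipped team
    results = [True, False, True, True]     # whether each tip was correct

    expected_tips = sum(chances)
    correct_tips = sum(results)
    print(round(expected_tips, 2), correct_tips)   # 2.91 3
    # Correct tips slightly above expected: a touch lucky,
    # assuming the probabilities are well calibrated.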

 

Last updated: 11:20pm, 16 April 2024 (AEDT).


Feedback, corrections, coding tips, questions and suggestions are always welcome.

Email | Twitter