Rick introduced a concept earlier in the blog that he called QPI. The idea was to create a measure in which we could determine how successful or unsuccessful we were in our bracket selection. Here's our proposal for QPI:
- For QPI, a lower score is better
- For each team picked correctly and at the exact seed given in the official bracket, you get 0 points
- For each team picked correctly but at an incorrect seed, you get the difference in seeds (for example, you pick #8 Iowa but the actual bracket has #6 Iowa; you get 2 points)
- For each team picked incorrectly, you get the difference between your seed and a hypothetical 17 seed (for example, you pick #12 Nebraska but Nebraska doesn't make the final field; you get 17 - 12 = 5 points)
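The rules above can be sketched as a small scoring function. This is a minimal illustration of mine, not code from the post; the team names, seeds, and the `qpi` function are hypothetical examples.

```python
WORST_SEED = 17  # the hypothetical 17 seed used to penalize missed teams

def qpi(picked, actual):
    """Compute QPI (lower is better): 0 per exact pick, the seed
    difference per misseeded pick, and (17 - picked seed) per team
    that doesn't make the actual field."""
    score = 0
    for team, seed in picked.items():
        if team in actual:
            score += abs(seed - actual[team])  # 0 when the seed is exact
        else:
            score += WORST_SEED - seed         # team missed the field
    return score

# Worked example from the rules: #8 Iowa vs. actual #6 Iowa -> 2 points;
# #12 Nebraska misses the field -> 17 - 12 = 5 points; Duke exact -> 0.
picks = {"Iowa": 8, "Nebraska": 12, "Duke": 1}
field = {"Iowa": 6, "Duke": 1, "Kansas": 2}
print(qpi(picks, field))  # -> 7
```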
To test the feasibility of this formula, I constructed the following table using 2008 bracket data provided by The Bracket Projects Blog. That website uses two other scoring formulas. The first, called the Parrish Rubric, uses the formula: (# of teams picked correctly in the field) + (# of teams seeded exactly correctly) + (# of teams seeded within one of their actual seed). The second, the Paymon Rubric, uses the formula: (3 × each at-large team picked correctly) + (2 × each team seeded exactly correctly) + (1 × each team seeded within one).
| Bracket | Teams Picked Correctly | Seeded Exactly | Seeded Within One | Parrish Score | Paymon Score | QPI |
| --- | --- | --- | --- | --- | --- | --- |
| MMAS1 | 64 | 41 | 59 | 164 | 333 | 33 |
| B1012 | 64 | 40 | 60 | 164 | 332 | 35 |
| F&B3 | 65 | 38 | 57 | 160 | 328 | 38 |
| Palm4 | 65 | 31 | 57 | 153 | 314 | 46 |
| CBS5 | 64 | 33 | 55 | 152 | 313 | 48 |
| ESPN6 | 65 | 29 | 54 | 148 | 307 | 49 |
| JW7 | 62 | 30 | 45 | 137 | 291 | 74 |
(1 = March Madness All Season, 2 = Bracketology 101, 3 = Hoop Group's Facts and Bracks, 4 = Jerry Palm, 5 = CBS Sports, 6 = ESPN's Bracketology, 7 = cbs2.com)
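As a sanity check, the two published rubrics can be reproduced from the table's count columns. This is a sketch of mine, assuming the "Teams Picked Correctly" column is the count the Paymon Rubric multiplies by 3 (the rubric technically counts at-large teams, but this assumption reproduces the table's scores); the function names are hypothetical.

```python
def parrish(picked, exact, within_one):
    # Parrish Rubric: one point per correct pick, per exactly correct
    # seed, and per seed within one of the actual seed.
    return picked + exact + within_one

def paymon(picked, exact, within_one):
    # Paymon Rubric: 3 points per correct pick, 2 per exactly correct
    # seed, 1 per seed within one.
    return 3 * picked + 2 * exact + within_one

# MMAS row from the table: 64 picked, 41 exact, 59 within one.
print(parrish(64, 41, 59))  # -> 164
print(paymon(64, 41, 59))   # -> 333
```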
It seems to me that our QPI is a feasible measure of bracket success. Brackets 1 and 2 are nearly identical under the other two rubrics; their QPI scores, however, show a difference. QPI also seems to reward more accurate seeding, as shown by the score gap between brackets 5 and 6 relative to the other rubrics. While our numbers are much smaller than the scores the other rubrics produce, our ranges seem more significant. If the QPI score were multiplied by 5 and 10 to approximate the scale of the Parrish and Paymon scores respectively, the figures would look like this:
| Bracket | Parrish Score | Paymon Score | QPI | QPI × 5 | QPI × 10 |
| --- | --- | --- | --- | --- | --- |
| MMAS1 | 164 | 333 | 33 | 165 | 330 |
| B1012 | 164 | 332 | 35 | 175 | 350 |
| F&B3 | 160 | 328 | 38 | 190 | 380 |
| Palm4 | 153 | 314 | 46 | 230 | 460 |
| CBS5 | 152 | 313 | 48 | 240 | 480 |
| ESPN6 | 148 | 307 | 49 | 245 | 490 |
| JW7 | 137 | 291 | 74 | 370 | 740 |
While this table is in no way representative of how QPI relates exactly to the other rubrics, it does show the degree of separation between brackets that QPI can provide. A better statistical look at the differences between the three scoring systems:
| | Parrish Score | Paymon Score | QPI |
| --- | --- | --- | --- |
| Range | 27 | 42 | 41 |
| Mean | 154 | 316 | 46 |
| Standard Deviation | 9 | 14 | 13 |
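These summary statistics can be recomputed from the score columns above. A quick sketch of mine using Python's standard library (the table's standard deviations match the population formula; its Paymon mean of 316 appears truncated from roughly 316.9):

```python
import statistics

# Score columns from the table above, top row (MMAS) to bottom (JW).
parrish = [164, 164, 160, 153, 152, 148, 137]
paymon = [333, 332, 328, 314, 313, 307, 291]
qpi = [33, 35, 38, 46, 48, 49, 74]

for name, scores in (("Parrish", parrish), ("Paymon", paymon), ("QPI", qpi)):
    print(name,
          "range:", max(scores) - min(scores),
          "mean:", round(statistics.mean(scores), 1),
          "pstdev:", round(statistics.pstdev(scores), 1))
```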
It's been 3 years since my Stats course, so I'll leave it to Rick to make better sense of the data I've collected. Without doing the proper legwork, I would guess that our system is at least comparable to, if not better than, the other methods for determining the quality of a bracket.