Explanation of The PiRate College Football Ratings
Welcome to the PiRate College Football Ratings. Today, we are a group of five relatives in Big Ten and SEC country. We started as one 3rd-grade sports fanatic and math nerd in 1969.
Our ratings use five different methods. We start with the final rating from the previous season and then, using five separate computer formulas, update it based on lettermen and starters lost and returning, with points added or taken away for the talent level lost or returning, coaching changes, and several other intangibles, each carrying a graded weight. Adding or subtracting points for these and many other factors gives us a preseason rating for each of the 120 current FBS teams. The ratings are then placed on a sliding scale so that the mean rating is 100.0. When you see State U rated at 113.5, you know they are 13.5 points better than the average team. If Tech is rated at 89.9, then they are 10.1 points weaker than an average team.
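For readers who want to see the arithmetic, here is a rough sketch of the sliding-scale step and of how a rating difference reads as an expected margin. The raw preseason numbers below are invented for illustration; this is not our actual formula.

```python
# Illustration of the sliding scale: slide every team's raw preseason number
# so that the league average sits at exactly 100.0. The raw values are made up.

raw_ratings = {
    "State U": 27.0,
    "Tech": 3.4,
    "Team C": 11.0,
    "Team D": 12.6,
}

mean_raw = sum(raw_ratings.values()) / len(raw_ratings)
ratings = {team: round(100.0 + raw - mean_raw, 1) for team, raw in raw_ratings.items()}

# A rating difference reads directly as an expected margin on a neutral field.
def expected_margin(team_a, team_b):
    return round(ratings[team_a] - ratings[team_b], 1)

print(ratings)                             # State U 113.5, Tech 89.9, ...
print(expected_margin("State U", "Tech"))  # 113.5 - 89.9 = 23.6
```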
During the season, we update our ratings a little differently than most other rating services. We do not look at the final score and adjust solely on the difference between our predicted pointspread and the actual result. We closely examine how each game played out. If a team that was supposed to win by 38 points won by “just” 23, we look at how they won by 23. What if they led 31-0 at the half and played their scrubs in the second half, while the opponent continued with their first team?
What if a team was supposed to win by a touchdown and lost by a touchdown instead, while outgaining their opponent by 100 yards and losing on a fluke play? Should we drastically reduce a team’s rating because a punter fell down trying to field a snap, allowing the opposing defense to score a touchdown? This is not going to happen again, and it does not need to be included in the updated ratings.
What if a team that was supposed to lose by one point trailed by four points with two minutes to play, and they drove 85 yards only to lose when a fourth down pass was dropped in the end zone? The team could have easily kicked a field goal to lose by the one point they were supposed to lose by.
What if a game is played in the remnants of a hurricane, a blinding snowstorm, or in 100-degree heat? These conditions would affect both pointspreads and totals.
While many computerized systems base their ratings entirely on a set formula applied to the final scores of games, we try to come up with a “corrected final score” based on the final statistics and the printed play-by-play sheets from the box scores.
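The correction itself is done by hand from the box score and play-by-play, but the flavor of it can be shown with a toy calculation. The yards-per-point rate, the blending weights, and the way fluke scores are stripped back out are all invented for illustration and are not our actual method.

```python
# Toy "corrected margin" calculation (purely illustrative, not the actual method).
# Idea: start from the real margin, remove scoring judged to be a fluke,
# and sanity-check the result against what the yardage battle suggests.

def corrected_margin(actual_margin, fluke_points_for, fluke_points_against,
                     yards_for, yards_against, yards_per_point=15.0):
    # Remove scoring that came off one-time flukes (e.g., the punter falling
    # down on a snap), since that is not expected to repeat.
    margin = actual_margin - fluke_points_for + fluke_points_against

    # What the statistical battle implies, at a rough yards-per-point rate.
    stat_margin = (yards_for - yards_against) / yards_per_point

    # Blend the two views; the weights here are arbitrary for the sketch.
    return round(0.6 * margin + 0.4 * stat_margin, 1)

# The example from above: lost by 7 on a fluke defensive touchdown while
# outgaining the opponent by 100 yards.
print(corrected_margin(actual_margin=-7, fluke_points_for=0,
                       fluke_points_against=7, yards_for=400,
                       yards_against=300))   # comes out slightly positive
```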
We also adjust a rating if a team loses a key player. Most other ratings stay the same even when a superstar goes down with an injury. While an injury to a second-team flanker may not lower a rating by more than 0.1 point, an injury to a quarterback who is a potential first-round NFL draft pick could affect a team by anywhere from 7 to 17 points, depending on the replacement player's ability.
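As a sketch only: an injury deduction like that could be parameterized by position and by the drop-off from starter to backup. The 0.1-point and 7-to-17-point figures come from the paragraph above; the position weights and the 0-100 talent grades are invented.

```python
# Illustrative injury adjustment. The ranges come from the paragraph above;
# the position weights and the interpolation are assumptions for this sketch.

POSITION_MAX_HIT = {
    "QB": 17.0,   # a star quarterback can be worth up to ~17 points
    "RB": 5.0,    # other positions assumed smaller; these values are invented
    "WR": 3.0,
    "OL": 2.5,
}

def injury_deduction(position, starter_grade, backup_grade, is_starter=True):
    """Points to subtract from a team's rating when a player goes down.

    starter_grade / backup_grade: 0-100 talent grades (hypothetical scale).
    A second-teamer going down costs a token 0.1; a starter costs a share of
    the position's maximum hit, scaled by how far the backup drops off.
    """
    if not is_starter:
        return 0.1
    dropoff = max(0.0, (starter_grade - backup_grade) / 100.0)
    return round(POSITION_MAX_HIT.get(position, 2.0) * dropoff, 1)

# A first-round-caliber QB (grade 95) replaced by a 40-grade backup:
print(injury_deduction("QB", 95, 40))   # roughly 9 points with these made-up grades
```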
To sum it up, our updating formulas are computerized, but we allow our review of the games to alter each update, and more than half of the games do get altered.
Our ratings are meant to provide a base for predicting the outcomes of the next week of games. We do not rank the teams based on what they have done in the past. We use our ratings as a base when we make our picks for our customers. Thus, a team with one or two losses could wind up being the top-ranked team, while an undefeated team might be rated below them.
For example, this happened in 1976. Pittsburgh was undefeated and ranked number one, but our ratings showed Texas A&M to be the best team in the nation on January 2, 1977. In 1984, we had Florida rated number one at 9-1-1 and Brigham Young rated number seven at 12-0.
How have our ratings performed over the last 40+ years? There have been good years and bad years, ranging from a high of 58% of games won against the spread to a low of 43% (all games versus the spread). We believe that no predictive formula, played straight up, will ever consistently beat the Las Vegas spread. We do believe that we can isolate games and select from the list three to nine games each week that we hope to win. If you have been following our selections through the last decade, you no doubt know that we have become sweetheart teaser specialists. In the past five years, our 10-point and 13-point teaser plays have beaten the spread at a 60%+ rate. Take away one 52% season, and it would have been 65%+. We will begin issuing 14-point and even 20-point sweetheart teaser selections this year.
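For anyone new to teasers: a sweetheart teaser moves the pointspread in the bettor's favor by the teaser amount on every leg, and every leg must then cover for the ticket to cash. Here is the basic arithmetic (the helper names are ours for this sketch, and book-specific payout and push rules vary and are not modeled):

```python
# Minimal teaser arithmetic: how a 10- or 13-point "sweetheart" tease moves
# each line. Book-specific payout and push rules are not modeled here.

def teased_line(spread, tease_points):
    """Shift a pointspread in the bettor's favor by the teaser amount.

    spread is from the bettor's side: -14.0 means laying 14, +3.0 means getting 3.
    """
    return spread + tease_points

def leg_covers(spread, tease_points, actual_margin):
    """Did this leg cover its teased number? actual_margin is the bettor's
    team's margin of victory (negative if they lost)."""
    return actual_margin + teased_line(spread, tease_points) > 0

# A 10-point teaser on a 14-point favorite only has to win by more than 4.
print(teased_line(-14.0, 10))                  # -4.0
print(leg_covers(-14.0, 10, actual_margin=7))  # True: won by 7, needed more than 4
# Every leg of the teaser must cover for the ticket to cash.
```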
We no longer provide a paid subscription service. Starting this year (2011), our picks will be free to all here at this site.
Many people ask us to list our past seasons' ratings. We would love to do that, and maybe one day we will have the time to go back and do so. We have paper copies of the ratings from 1971 to 1998. From 1999 to the current time, we have them stored on computers. Here is our list of the top teams from 1971 to 2010. We started in 1969, but we do not have the first two seasons of ratings saved. From memory, we believe Penn State won in 1969 and Notre Dame won in 1970.
1971 Nebraska
1972 Southern Cal
1973 Ohio State
1974 Oklahoma
1975 Alabama
1976 Texas A&M
1977 Alabama
1978 Southern Cal
1979 Alabama
1980 Pittsburgh
1981 Clemson
1982 Nebraska
1983 Nebraska
1984 Florida
1985 Oklahoma
1986 Penn State
1987 Miami
1988 Notre Dame
1989 Miami
1990 Georgia Tech
1991 Washington
1992 Alabama
1993 Florida State
1994 Penn State
1995 Nebraska
1996 Ohio State
1997 Nebraska
1998 Ohio State
1999 Florida State
2000 Oklahoma
2001 Miami
2002 Southern Cal
2003 Southern Cal
2004 Auburn
2005 Texas
2006 Southern Cal
2007 Kansas
2008 Florida
2009 Alabama
2010 TCU
If you see a big difference between your prediction and the spread, do you assume that you’ve got an edge or do you always check to see if you’ve missed some information (a late-breaking report of an injury or suspension, etc.)?
Comment by Russell Carden — September 23, 2012 @ 3:27 am
That is an interesting question. The following data applies only to the NFL and not college. Two of the four ratings (the Regular PiRate and the PiRate Vintage) actually have set formulae to calculate the loss of starters for each game. The other two ratings do not.

That said, we have found through the years that the percentage against the spread has remained steady regardless of the difference between the actual line and the ratings. For instance, over the last five years, the PiRate Regular Rating has picked the spread winner about 55% of the time. When the PiRate spread is 3 or more points different from the Vegas line, it has been about 56%. When the PiRate spread differed by 3 to 7 points from the Vegas line, the percentage has been around 55%, and when the spread differed by 8 or more points from the Vegas line, it has beaten the spread 57% of the time. These numbers are within the margin of error, and no conclusion can be drawn that a larger difference from the Vegas line is any more successful.

It is our belief that our ratings are best used as a starting point, but the most successful plays have come when the PiRate, Mean, and Biased Ratings agree on playing the underdog. This has hit around 57.5% through the years. The Vintage Rating just returned this season, so we have no real data for it yet.

Our best advice is not to use our ratings as a be-all and end-all in predicting games against the spread. If you desire, use them to confirm your other data. We are free for a reason. Our real guru, who used to know enough to be banned from several sports books, no longer actively contributes to this service and only advises. Thanks for your support. Good luck!
Comment by piratings — September 25, 2012 @ 10:16 am
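For readers who track their own results, the bucketing described in the reply above (against-the-spread record split by how far the PiRate line strays from the Vegas line) is easy to reproduce. A small sketch, with all game data invented:

```python
# Sketch of the bucketing described in the reply: against-the-spread record
# grouped by how far a rating-based line strays from the Vegas line.
# The games below are invented placeholders.

from collections import defaultdict

games = [
    # each game: (pirate_line, vegas_line, covered), lines from the favorite's side
    (-7.5, -3.0, True),
    (-2.0, -3.0, False),
    (-13.0, -4.5, True),
    (-6.0, -6.5, True),
]

def bucket(diff):
    if diff < 3:
        return "under 3"
    if diff <= 7:
        return "3 to 7"
    return "8 or more"

record = defaultdict(lambda: [0, 0])   # bucket -> [covers, total]
for pirate_line, vegas_line, covered in games:
    b = bucket(abs(pirate_line - vegas_line))
    record[b][0] += int(covered)
    record[b][1] += 1

for b, (covers, total) in record.items():
    print(f"{b}: {covers}/{total} = {covers / total:.0%}")
```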