The process for my annual preview series is pretty straightforward: set some early projections in February (check), embark on a conference-by-conference preview series (check), then update the projections in August to account for roster movement. With the preview series officially in the books, it's time for step three.
SP+ is my opponent- and tempo-adjusted look at the most sustainable and predictive aspects of college football. It is intended to be predictive and forward-facing; it is a power ranking, not a résumé ranking that gives credit for big wins or brave scheduling. You can find the final 2020 rankings here. SP+ projections, meanwhile, consist primarily of three pieces:
1. Returning production. As I wrote last week, I have updated rosters as much as possible to account for transfers, graduation and the announced return of many 2020 seniors. The combination of last year's SP+ ratings and adjustments based on returning production generally makes up more than two-thirds of the projections formula.
Since February, however, I've made one noteworthy, and hopefully temporary, change to how returning production is weighted. The more I thought about it, the less I was able to reconcile valuing returning production equally for teams that played a pretty full schedule last season and teams that, like Ohio or Arizona State, played only three or four games. So I set up a sliding scale: If a team played 10-plus games, returning production adjustments count as they would under normal circumstances, but if it played fewer than that, those effects were diminished, and the projection factors below were weighted a bit more heavily.
2. Recent recruiting. Returning production aims to tell us what kind of talent and experience a team is returning. Recruiting rankings inform us of the caliber of the team's potential replacements in the lineup. They generally make up about one-quarter of the projections formula. This piece is determined not only by the most recent recruiting class but also, in diminishing fashion, by the past three classes.
3. Recent history. The previous year's ratings are a huge piece of the puzzle, but using a sliver of information from previous seasons (two to four years ago) gives us a good measure of overall program health. It stands to reason that a team that has played well for one year is less likely to duplicate that effort than a team that has been good for years on end (and vice versa), right? This is a minor piece of the puzzle, but the projections are better with it than without.
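To make the structure of those three pieces concrete, here is a minimal, hypothetical sketch of how they might fit together. The roughly two-thirds and one-quarter shares come from the description above; every other number, function name and functional form (the linear sliding scale, the specific diminishing class weights, how the lost returning-production weight is redistributed) is an assumption for illustration, not the actual SP+ formula.

```python
def returning_production_weight(games_played, min_games=10):
    """Sliding scale from piece 1: full effect at 10-plus games played,
    diminished below that (linear shape is an assumption)."""
    return min(1.0, games_played / min_games)

def recruiting_score(class_ratings, weights=(0.4, 0.3, 0.2, 0.1)):
    """Piece 2: blend recent recruiting classes, most recent first,
    with diminishing weights (these weights are assumed)."""
    used = weights[:len(class_ratings)]
    return sum(r * w for r, w in zip(class_ratings, used)) / sum(used)

def project_rating(returning_adjusted, recruiting, history, games_played,
                   w_return=0.67, w_recruit=0.25):
    """Combine the three pieces. Returning production carries roughly
    two-thirds of the weight, recruiting about a quarter, and recent
    history the small remainder. For short 2020 seasons, weight shifts
    away from returning production toward the other factors."""
    scale = returning_production_weight(games_played)
    w_ret = w_return * scale
    # Redistribute the lost weight to the other two pieces, pro rata
    # (the 75/25 split here is an arbitrary illustrative choice).
    w_rec = w_recruit + (w_return - w_ret) * 0.75
    w_his = 1.0 - w_ret - w_rec
    return w_ret * returning_adjusted + w_rec * recruiting + w_his * history

# A team with a full 2020 schedule leans heavily on last year's rating;
# the same inputs with a four-game season lean more on recruiting/history.
full_season = project_rating(20.0, 10.0, 10.0, games_played=12)
short_season = project_rating(20.0, 10.0, 10.0, games_played=4)
print(full_season, short_season)
```

The point of the sketch is just the weight-shifting behavior: with identical inputs, the short-season projection sits closer to the recruiting and program-history signals than the full-season projection does.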
Most teams' ratings are similar to the February projections, but a few ended up changing for any number of reasons. It might have been for teams that, as mentioned above, played a tiny number of games. It might have been for teams that suffered a key long-term injury or lost noteworthy players to transfer. Or perhaps I just couldn't find complete information on which "super seniors" were and weren't returning for a given team before we published the initial projections.