I know, I know.
Perhaps the only thing I enjoy less than the Ratings Percentage Index (OK, there are a lot of things, but I'm exaggerating for effect, so just go with it) is arguing about the Ratings Percentage Index. It is an entirely fruitless exercise. Detractors, who think the RPI is a crude formula beneath the greatest sporting event on planet Earth, say one thing. Supporters of the RPI -- though rare, they do exist! -- respond in kind. These two sides almost never meet in anything resembling a rhetorical middle. More often than not, everyone just ends up arguing past one another. Everyone becomes 30 percent grumpier and vows to turn off Twitter, but otherwise nothing much gets accomplished.
Given the frustrations inherent here, now feels like a good time to clarify one big thing about the RPI debate. But first, let's catch up.
In the time since yours truly last ranted about the RPI, the NCAA and assembled media finished their mock selection committee in Indianapolis. NCAA associate director of men's basketball David Worlock reached out to clarify the NCAA's use of the RPI as merely one more tool in the proverbial toolbox. A day later, our buddy Matt Norlander posted his impressions of the process, noting, as anyone who has seen the NCAA's team comparison and bracketing software will attest, the heavy emphasis still placed on the RPI:
But that's not the point, because, as if you're being hypnotized into a train of reason and deduction, the RPI is placed right in front of the committee members’ faces from the start of the process, and I sincerely doubt they deviate from the materials and data given to them by the NCAA and its computer sorting/ranking/bracketing/filtering system (which is a slick, impressive computer program). This year, the NCAA has made public for the first time its Nitty Gritty (yes, that’s a capital N and G) sheets. These sheets rank teams by RPI. Immediately, you’re sorting teams in accordance with a flawed system. Within the Nitty Gritty you’ll see nine of the 16 columned categories are RPI-dictated.
In other words: The committee may not look at teams' basic RPI numbers side-by-side, but they do organize all of their information in much the same fashion as you'll see in one of the ESPN.com nitty gritty sheets here -- almost every bit of information is grouped and sorted according to RPI. It's everywhere. (Norlander also hosted an enlightening podcast with ESPN Insider John Gasaway and New York Times guru Nate Silver, and much of the discussion centered on the essential confusion surrounding the RPI and the NCAA's use thereof. It's a worthwhile listen.)
Then, on Monday, Sports Illustrated's Seth Davis effusively took up the pro-RPI mantle. Per the usual, Davis made a host of valid points.
It's important to note, for example, that the RPI isn't meant to project future success but quantify past accomplishment, that it isn't supposed to pick the nation's most efficient teams, that it doesn't include scoring margin for understandable reasons (the NCAA doesn't want coaches running up the score*). (*This is a really easy fix, by the way. ESPN's own Basketball Power Index provides diminishing returns for blowouts; a 40-point win isn't worth much more than a 20-point victory. See? Problem solved.)
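To make the diminishing-returns idea concrete, here's a minimal sketch. BPI's actual curve isn't public, so the cap and scale values below are hypothetical; the point is only the shape: full credit for the first chunk of margin, logarithmic credit after that.

```python
import math

def adjusted_margin(margin, cap=20.0, scale=10.0):
    """Diminishing returns on margin of victory.

    Full credit up to `cap` points; beyond that, extra points
    count only logarithmically, so blowouts add little. The cap
    and scale here are illustrative, not BPI's actual numbers.
    """
    m = abs(margin)
    if m <= cap:
        return m
    return cap + scale * math.log1p((m - cap) / scale)
```

Under this curve a 20-point win earns 20.0 adjusted points while a 40-point win earns only about 31, so running up the score buys almost nothing -- which removes the incentive the NCAA worries about without throwing margin away entirely.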
Most important for the pro-RPI argument -- and it's hard to believe the phrase "pro-RPI argument" is being used in 2012, but here we are -- is the following paragraph:
The RPI was never meant to be a hard-and-fast listing of how good teams are, though it essentially accomplishes that. Rather, its primary purpose is to serve as an organizing tool that allows the committee to compare teams with different schedules. We all know that all 25-4 records are not alike. When the committee looks at results -- i.e. the "team sheets," which are being made public this year for the first time -- the games are arranged so people can see how a team did against teams ranked in the top 25 of the RPI, the top 50, the top 100, etc.
This is what the NCAA says about the RPI, too: That it is not an end-all metric, but merely an "organizing tool." This is the crux of the pro-RPI argument. And it badly misses the point.
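For context, the standard RPI weighting is easy to state. Here's a sketch of the commonly published formula (the NCAA has also adjusted home/road win weighting over the years, which this ignores):

```python
def rpi(wp, owp, oowp):
    """Basic RPI: 25% a team's own winning percentage, 50% its
    opponents' winning percentage (excluding games against the
    team itself), 25% its opponents' opponents' winning percentage.
    Omits the NCAA's home/road weighting adjustments.
    """
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp
```

Note that three-quarters of the number comes from other teams' records; a team directly controls only a quarter of its own rating, which is much of why critics call the metric crude.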
Critics of the RPI are not -- or at least should not be -- under any illusions about what the RPI does, or what it is supposed to do. We know the RPI is a tool for organization. The point is: We think it's a bad tool for organization. And overused to boot.
Making this argument does not mean you want to replace the RPI with Ken Pomeroy's efficiency data, or the Basketball Power Index, or Massey ratings, or Sagarin's stuff, or any one system specifically. Making this argument merely requires that you can see the RPI for its immense flaws. Making this argument does not mean thinking the RPI is always, or even mostly, "wrong." Making this argument merely requires you to believe we can do better. It does not require you to think the selection committee does a bad job selecting and seeding the tournament every year. It merely requires the belief that the committee should be using the best, most informative, richest data sets available.
It means that if the NCAA is going to pick one model by which to organize its materials for the committee, one metric by which every win and loss and conference and nonconference schedule is ultimately defined, it should be one that depicts reality as closely as possible.
To its supporters, the RPI does a good enough job. But since when is that the goal?
I'm not deluded enough to think I have an obvious answer. Nate Silver's suggestion to the committee -- that it tweak the RPI to include weights for other rankings systems, margin of victory, diminishing returns, injuries and the like -- seems sound enough to me. There are lots of hungry young sports statisticians and analytics guys out there; hire one, lock him in front of a few spreadsheets, tell him you want the "RPI on steroids," and see what comes out.
I don't know; maybe it isn't that simple. The NCAA has really smart, dedicated people working on the tournament. It does a fantastic job on said tournament each and every year, and no matter how good the bracket is we'll always have in-or-out debates and questions about geography and seeding. That's part of the fun. Those who criticize the RPI aren't criticizing the tournament, or the NCAA itself. We're merely criticizing the frustrating stubbornness that has kept the RPI in place, with minimal changes or improvements, for so long. Maybe there's no obvious answer to the problem right now. But we should be working on one. And so should the NCAA.
More than anything else, that's what critics of the RPI are talking about. It's not about a comparison of one metric to another. It's not a wholesale criticism of the tournament. It is, plain and simple, a critique of institutional complacency. The selection process works well, but couldn't it work better? And if so, why not try?
Does that make sense? I hope so, because I hope to never speak of the RPI again. A guy can dream.