The RPI is dead, apparently, so long live NET (the new NCAA Evaluation Tool).
If nothing else, credit the NCAA for a clever acronym. Forgive me for withholding any additional credit for the time being.
Had the RPI long outlived its usefulness? Had schools and conferences learned how to "game" the system? Was the tiresome refrain -- "it's only a sorting tool" -- failing to recognize the RPI's outsized influence within the NCAA men's basketball selection committee?
Yes, yes and yes.
Cutting to the chase, then, we are left with two unanswered questions as the RPI is laid to rest:
• Is NET a better metric for team selection and seeding?
• Will the selection committee utilize it correctly?
To answer the first question, even without a complete breakdown of the new formula, NET almost can't help but improve upon the RPI. The only thing worse than RPI's emphasis on "who you played" over "how you played" was the committee's seeming inability to evaluate outliers. This is how we'd get inexplicable outcomes such as Wichita State (30-4) being a No. 10 seed in 2017.
Hopefully NET turns out to be as advertised: a model that optimizes both performance results (e.g., "most deserving" teams) and predictive data (an objective version of the so-called "eye test") to rank a widely disparate Division I field more accurately. Presumably the formula will be detailed to the 32 Division I conferences -- for scheduling and other evaluation purposes -- and its rankings made public to fans throughout the season.
In a perfect world, such a significant change would have been announced long before schools completed their scheduling for a new season, but NCAA and "perfect world" are rarely used in the same sentence. What matters more is some kind of commitment from the committee that it is going to consistently apply its new tool for a foreseeable period of time.
Which leads to the second and, unfortunately, inevitable question: Will the committee use NET correctly? Recent history raises more than a few doubts.
Last year, the committee introduced changes to its team sheets, including the quadrant system and the implementation of several non-RPI metrics. It was a major improvement on paper. In practice, however, the committee was as inconsistent as ever.
Everyone remembers the great Oklahoma debate. The Trae Young-led Sooners were 3-12 over the second half of the season. What they proved beyond all doubt was an ability to lose, admittedly to good teams, with alarming consistency. Their 18-13 record on Selection Sunday was buoyed by six Quad 4 wins, meaning they were effectively 12-13 against "real" opponents.
Yet Oklahoma was solidly in the field because of its six Quad 1 wins, all but one of which came in the first half of the season. Ironically, of all the metrics included on last year's team sheets, the Sooners' worst ranking (No. 49) came in the RPI column.
Meanwhile, it is generally assumed that Loyola Chicago (28-5 on Selection Sunday, having won 17 of 18) would not -- given recent committee rationale -- have made the NCAA field had it stumbled in the Missouri Valley Conference tournament. With the benefit of short-term hindsight, we know the Ramblers (a No. 11 seed) reached the Final Four and Oklahoma (a No. 10) lost in the first round.
In the short term, it is generally unfair to evaluate a Selection Sunday decision based on results over the next three weekends. However, when long-term data consistently demonstrates that the Loyolas of the world -- despite fewer bids and inferior seeding -- outperform middling majors such as Oklahoma, we should pay attention and adjust accordingly.
Does NET do that? Would the committee even notice? Or will it continue its incremental, obvious and, in my eyes, damaging shift toward undistinguished teams in power conferences?
Only time will tell. In the meantime, it's apparently nothing but NET.