How should the committee watch games?

A few days ago, following his press-row run-in with a member of the NCAA selection committee at the BYU-San Diego State game, Ken Pomeroy wrote a thought-provoking blog post.

See, Pomeroy sat next to this unnamed committee member on press row last week, where he observed said committee member dutifully filling out a uniform note sheet that provided space to record the member's thoughts on each team's style of play. Categories included "post-play," "guard-play," and the like. That prompted Ken to worry -- perhaps rightfully -- that the committee was adding style of play to an already difficult, confusing NCAA tournament selection process. From his post:

[...] I’m not comfortable with the committee monitoring the kinds of offenses teams run. It’s great in terms of having an intelligent discussion about basketball, but whether a team runs the DDM or the flex shouldn’t have any bearing on the selection process, and it adds clutter to an already difficult task. [...] In the end, it probably doesn’t matter either way. From all accounts, the selection process is chaotic, and RPI data is burned into the computer screens of committee members. I’m sure there’s some time for qualitative discussion, but I hope it doesn’t revolve around a committee member saying “When I saw Team X play, they had really good post-play, and I think that’s important to win basketball games. Therefore, Team X should get extra consideration.”

The committee’s charge is to select the 37 best at-large teams. It should be based on the play on the court, not on things like how much depth a team has or whether they have an effective press. At selection time, we’ll again hear about how the committee is seeing more games. However, I won’t get a warm and fuzzy feeling because I’m not sure it makes a difference in terms of the quality of the bracket that’s produced.

Ken provides a few reasons why seeing more games could actually damage the quality of the bracket. It's the same principle that leads us to use statistical information like his adjusted efficiency data in the first place: Our eyes can deceive us. Or, at the very least, they can create small-sample biases, in which we value a team we saw over a team we didn't simply because we saw it play. That's not the most sound way to analyze basketball; the best analysts use numbers to back up, or challenge, what they see between the whistles.

There are a few arguments here. One of them is about whether committee members should watch games at all. The other is what information they should be drawing from those games. And there is, as always, an argument to be had about which numbers the committee uses at the end of the season. RPI is the go-to for committee members, as yours truly saw during last year's mock selection committee, even though Ken's data is a much better gauge of actual team quality.

As for whether committee members should be watching games? Well, yes. It's hard to argue they shouldn't. One of Ken's points is that if your goal is merely to find the best at-large teams in the nation, you don't really need to know why Team X beat Team Y; all you need to know is that it did. As Ken says, "there's your data point for that game." But it's nice to think that committee members are on hand (or watching on TV) to see whether a game was, for example, decided by a questionable call. Or if one team was the victim of a late, game-changing injury. Or if a crowd was particularly rowdy. Or not. All of that stuff could be useful information in a close bubble argument in the committee room during the stretch run to Selection Sunday.

But it's easy to agree with Ken that committee members might do well to avoid the X's and O's stuff. It's hard to see why you'd want to add that extra element to an already difficult, convoluted, and overloaded appraisal process.

Then again, you can argue -- as Dan Hanner of Yet Another Basketball Blog does quite well here -- that the risk of bias is well worth the committee's attempts to be "engaged, aware, and thoughtful." From Hanner:

Ken Pomeroy may find his formula to be the best way to answer these dilemmas, but I think he would agree this is not a one-dimensional question. People can differ in the weights they put on different factors. Ken’s rightful crusade is to try to remove the RPI from team data sheets, because the RPI is very weakly correlated with anything meaningful. And his crusade to eliminate non-essential variables like “style-of-offense” from the discussion is important. But I would never discourage the committee from following college basketball and collecting more information, even if watching games induces the possibility of “subset bias”.

If there's anything worth remembering here, it's that the selection committee members are a) humans, b) intelligent humans, c) intelligent humans that do an awfully good job of seeding the tournament, and d) intelligent humans that, by the nature of their task, have to vote on any and all decisions they make with the other nine intelligent humans in the committee room in March. In that scenario, the chances of a style-of-play argument -- which, as Ken says, probably doesn't belong in the discussion -- arising from an individual bias and severely affecting the tournament seem awfully slim.

In fact, I'd encourage committee members to keep seeing games. See them with perspective, committee members. But see them all the same.

In the meantime, we can work on a much larger issue afflicting the committee: the hegemony of RPI. But that's an argument for another day.