From Nick Piecoro's (previously linked) interview with my ex-employer:
How much confidence do you have in the Fielding Bible data?
I’m confident that 90 percent of it is right and that some percentage of it is dead wrong. The problem is that you don’t know which 10 percent is dead wrong. I am confident that the great majority is on target and accurate. Like Upton, for example. We all know that when he came up he made so many mistakes in right field, which undermined his natural athletic gifts. Now that he’s been in the league a while the statistical measurements are that his natural gifts are working for the team at a greater rate than his mistakes are working against the team. OK, that’s interesting. But frankly I would have more confidence in your evaluation watching him than I would in the statistical summary.
Have you seen any studies that show predictive measures from the fielding data?
That’s the big issue. You can’t get predictive measures at this time that are equivalent to what we have for batters because we don’t have the history with it. We have batters’ stats going back 130 years and have a really strong sense of what’s an anomaly and what’s not.
I'm going to say flat-out I believe the fielding data's extensive enough to get predictive measures. FanGraphs shows eight seasons of Ultimate Zone Rating. Now, you might not love that particular method, but the point is that if the data's available to figure UZR, then by golly it's there for whatever method you might decide to invent. Now (as Bill points out), we might not have excellent, accessible data for the first half of Junior Griffey's career ... but so what? That negatively impacts our ability to evaluate Griffey's first decade in the majors, but it's got absolutely nothing to do with our ability to predict how many runs Jacoby Ellsbury will save in 2010.
But this talk about "predictive measures" suggests something that probably isn't new, but I can't recall seeing anything along these lines before ... If modern fielding metrics "work" (and I think they do), there would seem to be an obvious test: team projections that include (for example) individual UZR projections should be more predictive than team projections that don't.
Further, couldn't we use the same method to compare the various metrics? We've got UZR and BP's Fielding Runs and the Fielding Bible and David Pinto's Probabilistic Model of Range. I'm not saying this would be easy, but couldn't we check their accuracy by 1) converting them into projections, 2) combining them with baseline pitching and hitting projections to arrive at team projections, and 3) comparing the team projections with the standings?
Again, I know this wouldn't be easy. Separating pitching from fielding (and vice versa) is a bear. But so many bright people are working so hard to figure this stuff out, and I'd love to just choose one and go with it for a few years.
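For what it's worth, the test I'm proposing is simple to express. Here's a minimal sketch in Python, with every number invented purely for illustration: in practice you'd convert each fielding metric into runs, fold those into baseline hitting and pitching projections, convert to wins, and then see which version lands closer to the actual standings.

```python
import math

def rmse(projected, actual):
    """Root-mean-square error between projected and actual win totals."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(projected, actual)) / len(actual)
    )

# Invented example: actual wins for five teams, plus two projections --
# one built from hitting + pitching alone, and one that also folds in
# a fielding metric (UZR, Fielding Bible, PMR, whatever you like).
actual_wins           = [95, 88, 81, 74, 67]
proj_without_fielding = [90, 85, 84, 78, 70]
proj_with_fielding    = [93, 87, 82, 76, 69]

print(rmse(proj_without_fielding, actual_wins))
print(rmse(proj_with_fielding, actual_wins))
# If the fielding metric carries real signal, the second error
# should be consistently smaller across several seasons.
```

Run this across a few years of real projections and standings, metric by metric, and you'd have at least a crude leaderboard for which fielding system earns its keep.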
(For a different sort of take on Bill James, there's this.)