In Case You’re Interested

Phil Birnbaum isn’t too happy with me. Here is an excerpt from his critique:

But to J.C. Bradbury, that research might as well not exist.

As he writes, all that research was done only by the “analytical baseball community” – not by reputable Ph.D.s in economics. It has not undergone “formal peer review.” Further, “it has not been tested with sufficient statistical rigor” and “has undergone very little formal scrutiny.” It does not use “proper econometric techniques.”

And so Dr. Bradbury sets out to correct this. How? Not by reviewing the existing research, and validating it academically. Not by finding those studies which have “insufficient statistical rigor” and analyzing them statistically. Not by summarizing what’s already out there and criticizing it.

No, Dr. Bradbury’s paper ignores it. Completely. He mentions none of it in his article, not even in the bibliography. Instead, Dr. Bradbury explains his own study as if it’s the first and only test of the DIPS hypothesis.

This happens all the time in academic studies involving baseball. Years of sabermetric advances, as valid as anything in the journals, are dismissed out of hand because of a kind of academic credentialism, an assumption that only formal academic treatment entitles a body of knowledge to be considered, and the presumption that only the kinds of methods that econometricians use are worthy of acknowledgement.

What kind of person would I be if I didn’t acknowledge my critics? I’m not sure why Phil is so angry with me, and I don’t think he chose the best forum to address his concerns. I guess I should apologize for knowing some econometrics, since it’s only useful for out-credentialing other scholars. If you read the paper, I think it’s very clear that I’m not claiming to have invented DIPS, just trying to replicate it for an academic audience, then to test how well the labor market values DIPS. I’m not sure that it warrants a detour through a few studies on the Internet. I think it’s a pretty pro-sabermetric paper.

The paper he’s discussing is an earlier version of a paper of mine that is forthcoming in Journal of Sports Economics. I don’t own the copyright to the newest version, so I can’t post it, but I’ll tell you that the cluster of years where BABIP seems to matter disappears when I make some slight changes to the model. Also, the results hold for runs allowed as well as ERA. These modifications were the result of some excellent suggestions by an anonymous referee. If you’re reading, thanks!
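
To give a rough sense of the kind of test involved, here is a minimal sketch in Python, not the actual specification from the paper: it asks whether a pitcher’s previous-season BABIP adds anything to the defense-independent components in predicting this season’s ERA (or runs allowed). The data file and column names below are hypothetical placeholders.

    # Minimal sketch, not the paper's actual model. The CSV and column names
    # (pitcher_id, year, era, k9, bb9, hr9, babip) are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("pitcher_seasons.csv")          # one row per pitcher-season
    df = df.sort_values(["pitcher_id", "year"])

    # Lag each rate by one season so year t is predicted from year t-1.
    for col in ["k9", "bb9", "hr9", "babip"]:
        df[col + "_prev"] = df.groupby("pitcher_id")[col].shift(1)

    # ERA on lagged strikeout, walk, and home-run rates plus lagged BABIP,
    # with year effects. Under a strong DIPS view, babip_prev should add little.
    # Swapping era for runs allowed per nine innings is the same idea.
    model = smf.ols(
        "era ~ k9_prev + bb9_prev + hr9_prev + babip_prev + C(year)",
        data=df.dropna(subset=["k9_prev", "bb9_prev", "hr9_prev", "babip_prev"]),
    ).fit()
    print(model.summary())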

Sometimes I wonder, if everyone is just going to assume I’m a professor ass-hat, maybe I ought to just act like the real deal. And now I’m off to devour my sons.

Addendum: More comments at BTF.

6 Responses to “In Case You’re Interested”

  1. A.West says:

    I read a draft of your paper and his comments, and I don’t understand what this guy is so upset about. Actually, I can. I think he’s upset that discussions and calculations without statistical hypothesis testing and proper model specification are not deemed “formal”. Tough luck. Your paper isn’t about the history of DIPS; it’s about using a statistically validated model of DIPS to test a further hypothesis. There was no possible place to put a discussion of online exploration of DIPS ideas, other than possibly a footnote thanking A, B, C, D, etc. for floating various hypotheses related to DIPS. However, that would be a total sideline and digression.

    The key thing is that you did your thing with the data directly. It was not a “meta-analysis” of DIPS studies (study of studies). You did not need to, I think, because you have the full population of relevant data to work with. You did not need to combine the power of other people’s samples to strengthen your case.

    I don’t write academic papers, but I do read them. And while a lot of people in my field (finance) fling around untested hypotheses, I do try to get some data, open up R, and test such stuff formally.

  2. tangotiger says:

    I only have two issues with the presentation of the paper.

    The first is: “However, this finding, though widely accepted, has not been tested with sufficient statistical rigor”.

    I don’t know how widely accepted it is, and I’m not sure that “sufficient statistical rigor” is a necessary requirement.

    The “widely accepted” is a line thrown in, without basis. I wouldn’t claim it’s widely accepted by the mainstream, nor by the analysts. Maybe it is, but unless you substantiate it, it shouldn’t be mentioned.

    Likely the most rigorous of these studies is the Allen/Hsu paper called Solving DIPS on my site. Even if it doesn’t meet whatever thresholds are required to make J.C.’s statement true, it is easily the most compelling case for the anti-strong-DIPS position out there. That it doesn’t meet those thresholds doesn’t mean it gets to be dismissed or rejected. The Allen/Hsu paper is a prime example of smart guys who love baseball spending tons of their time on work that didn’t require formal training. Had the analysis been put through sufficient statistical rigor, it would likely have yielded the same conclusions.

    The second: to the extent that a bibliography is created, other DIPS-related research should have been well represented.

    The unfortunate part of the Birnbaum criticism is that he wanted to take exception to academia, and focused his thoughts on J.C.’s paper in particular. Other than the line I quoted, which many may read as being dismissive of otherwise quality work, the rest of the paper stands well on its own.

  3. tangotiger says:

    As for the Hakes/Sauer paper, there was no reason to look at OBP and SLG. Looking at events discretely, like walks per PA, hits per AB, and XBH per AB, or some such, would have been far more informative than using two such highly-correlated measures as OBP and SLG. At their core, they both share the same variable (batting average).

    As well, fielding also plays a large role, as Mike Cameron, Mark Kotsay, and Torii Hunter are prime examples of guys who got a fielding-bump to their salaries.

    Seeing how much the coefficients jumped from year to year was more an indication that the study was missing something fundamental than evidence of a huge shift in the market’s response.

  4. tangotiger says:

    For some reason, my other post was lost. Maybe it’s too long. Here it is in pieces:

    I only have two issues with the presentation of the paper.

    The first is: “However, this finding, though widely accepted, has not been tested with sufficient statistical rigor”.

    I don’t know how widely accepted it is, and I’m not sure that “sufficient statistical rigor” is a necessary requirement.

    The “widely accepted” is a line thrown in, without basis. I wouldn’t claim it’s widely accepted by the mainstream, nor by the analysts. Maybe it is, but unless you substantiate it, it shouldn’t be mentioned.

  5. tangotiger says:

    The second: to the extent that a bibliography is created, other DIPS-related research should have been well represented.

    The unfortunate part of the Birnbaum criticism is that he wanted to take exception to academia, and focused his thoughts on J.C.’s paper in particular. Other than the line I quoted, which many may read as being dismissive of otherwise quality work, the rest of the paper stands well on its own.

  6. JC says:

    I don’t think there is much that is useful in the Solving DIPS discussion. I agree with Walt Davis’s characterization in the BTF thread.

    I have sufficiently addressed the critique that I “under-cited” the sabermetric literature. I’m not going to comment on it any further. I didn’t do anything even borderline inappropriate. It’s nitpicking. I am shutting down comments on this subject. If you want to write or talk about what an awful person I am, you are free to do so in another forum.
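
A small illustration of tangotiger’s point above: OBP and SLG share the same hits component, which is why he suggests looking at discrete event rates (walks per PA, singles per AB, extra-base hits per AB) instead. The sketch below is only illustrative, not the Hakes/Sauer specification; the data file and the column names are hypothetical placeholders.

    # Illustrative only; not the Hakes/Sauer model. All columns are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    bat = pd.read_csv("batter_seasons.csv")       # one row per batter-season

    # OBP and SLG both contain hits, so expect a high correlation between them.
    print(bat[["obp", "slg"]].corr())

    # Salary on the two composite measures ...
    m1 = smf.ols("log_salary ~ obp + slg", data=bat).fit()
    # ... versus salary on discrete event rates, as suggested in the comments.
    m2 = smf.ols("log_salary ~ bb_per_pa + x1b_per_ab + xbh_per_ab", data=bat).fit()
    print(m1.params)
    print(m2.params)

    # Year-to-year stability check: refit by season and compare coefficients.
    by_year = {
        yr: smf.ols("log_salary ~ obp + slg", data=grp).fit().params
        for yr, grp in bat.groupby("year")
    }
    print(pd.DataFrame(by_year).T)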