Monday, December 26, 2005

Bias Is In The Eye Of The Beholder

Well, it looks like there's been plenty of back-and-forth over the conceptual assumptions behind the UCLA media bias study, which was released in an earlier form a couple years ago. What follows is my attempt to separate the wheat from the chaff. First on the chopping block, the Columbia Journalism Review:
Nevertheless, their methodology still falls short of the ideal bias-detecting machine. To date, their method involves hiring a bunch of college students to comb through some (but not all) of the archives for some (but not all) American news outlets and then counting up some (but not all) references to some (but not all) think tanks and then comparing some (but not all) of these references to the amount of times certain members of the U.S. Congress refer to some (but not all) think tanks. Suffice to say, it's a bulky bit of bias-detection and quite primitive. But with a few tweaks, this new quantitative approach to media criticism will undoubtedly soon replace all the old tools of the industry -- from analogy and analysis, to insight and wit.

The writer's chief quibble appears to be one of comprehensiveness, with the implication that the study failed only through the insufficient diligence of its data collectors. While more data certainly would have helped, that's hardly Groseclose and Milyo's biggest problem, as subsequent entries will make clear. Also, contra the last bit, I don't think they ever claimed that their work should be considered the be-all and end-all of media bias research. At best, it offers one measure of bias that should be evaluated against the rest of the media research corpus. But the author was clearly engaging in a spot of gratuitous hyperbole, so perhaps I'll let that one go.
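For the curious, here's a rough sketch of how that counting scheme works in practice. To be clear, this is my own toy simplification, not Groseclose and Milyo's actual procedure (they fit a statistical model rather than taking simple averages), and every name and number in it is made up:

# Toy sketch of the citation-scoring idea behind the Groseclose-Milyo
# study. NOT their actual method; a deliberately simplified illustration.
# All legislators, think tanks, and numbers below are hypothetical.

from collections import defaultdict

# Hypothetical ADA-style scores (0 = most conservative, 100 = most
# liberal) and the think tanks each legislator cited in floor speeches.
legislator_scores = {"Rep. A": 85, "Rep. B": 90, "Rep. C": 15, "Rep. D": 20}
legislator_citations = {
    "Rep. A": ["Brookings", "Urban Institute"],
    "Rep. B": ["Brookings"],
    "Rep. C": ["Heritage"],
    "Rep. D": ["Heritage", "Brookings"],
}

# Step 1: score each think tank by the average score of the
# legislators who cite it.
tank_totals = defaultdict(list)
for member, tanks in legislator_citations.items():
    for tank in tanks:
        tank_totals[tank].append(legislator_scores[member])
tank_scores = {t: sum(v) / len(v) for t, v in tank_totals.items()}

# Step 2: score a media outlet by the citation-weighted average of the
# scores of the think tanks it mentions.
outlet_citations = {"Outlet X": {"Brookings": 30, "Heritage": 10}}
for outlet, counts in outlet_citations.items():
    total = sum(counts.values())
    score = sum(tank_scores[t] * n for t, n in counts.items()) / total
    print(f"{outlet}: estimated score {score:.1f}")

Running this spits out an estimated score of about 53 for the hypothetical outlet, i.e., slightly left of the 50-point midpoint. The real study's machinery is fancier, but the underlying logic is the same: who you cite is taken as a proxy for where you stand.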

Next up, the Wall Street Journal:

The Wall Street Journal's news coverage is relentlessly neutral. Of that, we are confident.

Oh, that settles that then. G&M obviously should have just polled news organizations on whether or not they consider their own coverage biased; that certainly would have been cheaper and easier. The Journal continues:

First, its measure of media bias consists entirely of counting the number of mentions of, or quotes from, various think tanks that the researchers determine to be "liberal" or "conservative." By this logic, a mention of Al Qaeda in a story suggests the newspaper endorses its views, which is obviously not the case. And if a think tank is explicitly labeled "liberal" or "conservative" within a story to provide context to readers, that example doesn't count at all. The researchers simply threw out such mentions.

Oh, come on. Al-Qaeda isn't a think tank, and anyone foolish enough to believe that mentioning the group's name constitutes an endorsement of its views has no place on the faculty of any accredited university. The conceit of imputing a think tank's overall ideological bent to the media outlets and politicians that cite it poses significant difficulties, but propping up Al-Qaeda as a strawman muddles the issue needlessly. I suspect that the Journal recognizes this and is intentionally mischaracterizing the scholars' methodology for its own benefit. Not only does such a false portrayal reflect poorly on the paper's intellectual integrity, it's also completely unnecessary—both the study's concept and execution are susceptible to plenty of honest criticism. We continue:

Second, the universe of think tanks and policy groups in the study hardly covers the universe of institutions with which Wall Street Journal reporters come into contact.

This is the comprehensiveness argument again, but with a slight twist: the Journal correctly points out that think tanks aren't the only organizations referenced by media outlets, and that focusing exclusively on them may exclude other important sources of bias (granting the authors' basic thesis for the sake of argument).
Third, the reader of this report has to travel all the way to Table III on page 57 to discover that the researchers' "study" of the content of The Wall Street Journal covers exactly FOUR MONTHS in 2002, while the period examined for CBS News covers more than 12 years, and National Public Radio's content is examined for more than 11 years. This huge analytical flaw results in an assessment based on comparative citings during vastly differing time periods, when the relative newsworthiness of various institutions could vary widely. Thus, Time magazine is "studied" for about two years, while U.S. News and World Report is examined for eight years. Indeed, the periods of time covered for the Journal, the Washington Post and the Washington Times are so brief as to suggest that they were simply thrown into the mix as an afterthought. Yet the researchers provide those findings the same weight as all the others, without bothering to explain that in any meaningful way to the study's readers.

This is an excellent point that I haven't seen underscored in any other response to the study. One of the key pieces of evidence needed to properly substantiate an allegation of bias is a long history of systematic favor given to one side over the other. Four months is clearly not sufficient to support any conclusions about bias, which raises the question of why the analyses of the three newspapers mentioned above were included as anything more than a footnote.
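To put a rough number on that intuition, here's a quick back-of-the-envelope simulation (again, entirely my own, with made-up parameters). Even setting aside the Journal's point about shifting newsworthiness, plain sampling noise makes a four-month estimate far less stable than a twelve-year one:

# Simulating the unequal-window problem. Assumption (mine, not the
# study's): an outlet's citations go to one side with some fixed true
# probability, and we estimate that probability from a window of
# observed citations. Shorter windows give noisier estimates.

import random

random.seed(42)
TRUE_RATE = 0.6  # hypothetical true share of citations going to one side

def estimated_share(months, cites_per_month=50):
    """Simulate one window of citations and return the observed share."""
    n = months * cites_per_month
    hits = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return hits / n

for window in (4, 144):  # four months vs. twelve years
    estimates = [estimated_share(window) for _ in range(1000)]
    spread = max(estimates) - min(estimates)
    print(f"{window:>3}-month window: estimates range over {spread:.3f}")

On my run, the four-month estimates sprawled over roughly six times the spread of the twelve-year ones, which is just what the square root of the sample-size ratio predicts. Treating the two as equally reliable is exactly the apples-to-oranges problem the Journal is complaining about.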

I'll consider one more critic's response in my next post; I'm predicting boatloads of fun in a similar vein.