It has gone on for so long, and been debunked so many times, that it would seem unnecessary to issue further comment. But in the latest edition of Harper’s magazine, it happens again. This time, the culprit is novelist Martin Amis. In a feature article on Donald Trump, Amis says that, according to the independent fact-checking website PolitiFact, Trump lies more than 90% of the time.

In citing Trump’s extraordinary falsity-to-truth ratio, Amis is anything but alone. Scores of media outlets have used PolitiFact’s numbers to damn Trump. The Washington Post has cited the “amazing fact” of Trump’s lie rate, with bar charts showing the comparative frequency of his falsehoods. The Week counted only those things deemed completely “True,” and thus concluded that “only 1 percent of the statements Donald Trump makes are true.” Similar claims have been repeated in U.S. News, Reason, and The New York Times.

But all of these numbers are bunk. They’re meaningless. They don’t tell us that lies constitute a certain percentage of Trump’s speech. In fact, they barely tell us anything at all.

The problem here is that PolitiFact isn’t actually evaluating what it purports to be evaluating. The assertion is that a certain percentage of Trump’s “statements” are true. But which statements are we talking about? When Trump speaks publicly, he can go on for more than an hour. During that time, hundreds of sentences will pour forth from his mouth. Each one of these is a statement. But PolitiFact is not evaluating all of them. Instead, it’s selecting particular statements to evaluate. Thus when someone says that “X percent of Trump’s statements are true,” what they actually mean is that X percent of the statements PolitiFact chose to evaluate are true.

That distinction makes an important difference. PolitiFact does not take a random sample of the sentences in Trump’s speeches. If it did, it would include things like “My father was in real estate” and “I’ve run a business for many years.” (Plus things like “You’re a beautiful crowd!”) Instead, it picks out certain statements that it thinks ought to be evaluated. But, by their nature, these are going to be the most contentious and controversial statements in the speech.

Because of that, speaking of PolitiFact ratings as percentages makes little sense. Percentages out of what? Out of the statements PolitiFact decided to examine. But think about the implications of that. I could give a speech that is 99% uncontroversial truisms, plus one incredibly controversial lie. If PolitiFact checks my lie and nothing else, then by the prevailing standard I have a record of 100% falsehood, even though 99% of what I said was true.

This is not to say that Donald Trump doesn’t tell an astonishing number of lies. Clearly, he does. It’s merely to say that quantifying this number as a specific percentage makes little sense. You can conclude that Trump issues an extraordinary number of unbelievable whoppers. But you can’t make a ratio without acknowledging an enormous selection bias that is excluding large numbers of uncontroversial statements. (You might think that this would likely affect Trump and Clinton equally, thus making comparisons possible even in the absence of percentages. But even that’s not true. If both candidates told equal numbers of lies, but Trump’s lies were more noticeable, they would likely get picked for scrutiny more often.)
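
Both distortions are easy to demonstrate with a toy simulation. The sketch below is mine, not PolitiFact’s: the speech lengths, lie rates, and checking probabilities are invented purely for illustration. It shows how reporting falsehoods as a share of the statements a checker chose to examine can turn a 99%-truthful speaker into a “100% liar,” and can make one of two equally dishonest speakers look far worse simply because his lies are more conspicuous.

```python
# Toy simulation of the selection problem described above. All numbers and
# checking probabilities are invented for illustration; nothing here models
# PolitiFact's actual process.
import random

random.seed(0)

def simulate_speech(n_statements, lie_rate):
    """Return a list of booleans: True means the statement is accurate."""
    return [random.random() > lie_rate for _ in range(n_statements)]

def reported_lie_rate(speech, p_check_if_true, p_check_if_false):
    """Model a checker who is likelier to examine dubious-sounding claims,
    then report falsehoods as a share of the statements it chose to check."""
    checked = [
        is_true
        for is_true in speech
        if random.random() < (p_check_if_true if is_true else p_check_if_false)
    ]
    if not checked:
        return None
    return sum(1 for is_true in checked if not is_true) / len(checked)

# Speaker A: 99% of statements are true, but the checker examines only the
# lies and nothing else, so the reported "lie rate" is 100%.
speech_a = simulate_speech(1000, lie_rate=0.01)
print(reported_lie_rate(speech_a, p_check_if_true=0.0, p_check_if_false=1.0))

# Speakers B and C lie equally often (10% of the time), but C's lies are more
# conspicuous and get checked more often, so C's reported rate looks far worse
# even though the underlying behavior is identical.
speech_b = simulate_speech(1000, lie_rate=0.10)
speech_c = simulate_speech(1000, lie_rate=0.10)
print(reported_lie_rate(speech_b, p_check_if_true=0.05, p_check_if_false=0.3))
print(reported_lie_rate(speech_c, p_check_if_true=0.05, p_check_if_false=0.9))
```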

In fact, we can easily see the selection problem in action. Consider the following quotation from Donald Trump:

The question most young people ask me is about the rising cost of education, terrible student debt and total lack of jobs. Youth unemployment is through the roof, and millions more are underemployed. It’s a total disaster!

PolitiFact decided to fact-check Trump’s claim that “youth unemployment is through the roof.” It rated the statement “false,” because youth unemployment is now close to its pre-recession rate. But PolitiFact ignored the second half of Trump’s statement, that “millions more are underemployed.” Now, this is just as much a factual assertion as Trump’s claim about unemployment. If we were trying to calculate a percentage of Trump’s assertions that are true or false, we would have to assess and include it. But PolitiFact didn’t.


As it turns out, while Trump may have been wrong about youth unemployment, he was correct about underemployment. There are millions of underemployed youths, working fewer hours than they need. As Derek Jacobson explained last year in The Atlantic, looking solely at youth unemployment actually gives a deceptively rosy picture of the economic situation facing young people. They may have jobs, but their incomes are low and their prospects for advancement are minimal. By setting aside Trump’s other remarks, about student debt and education costs, as well as underemployment, PolitiFact simultaneously portrayed him as more of a liar than he was and gave a misleading impression of young people’s economic fortunes.

Thus if there’s no clear logic to PolitiFact’s selections, the percentages are totally uninterpretable. This becomes even more absurd when PolitiFact tries to precisely rank candidates by their propensities for truth. Last year, The New York Times published an op-ed by PolitiFact’s editor, Angie Holan, which contained detailed comparisons of the lie rates of various political figures. Yet Holan didn’t even briefly address the all-important question of how PolitiFact selects which statements to evaluate. Without understanding that, there’s no way we can assess the organization’s claim that Carly Fiorina lies more than Marco Rubio, who in turn lies more than Lindsey Graham. It is somewhat incredible that the editors of The New York Times didn’t even ask their writer to explain what all the numbers in her article were actually supposed to mean.

To the extent that PolitiFact has publicly commented on its selection procedure, it has only further undermined its claim to numerical precision. On its website, the only information offered about selection is a series of highly imprecise and qualitative questions, such as whether the statement is “newsworthy.” Two of the listed questions even directly indicate that PolitiFact’s chosen statements are disproportionately likely to be lies: “Is the statement leaving a particular impression that may be misleading?” and “Would a typical person hear or read the statement and wonder: Is that true?” Thus PolitiFact specifically looks for “misleading” statements, meaning that if “percentage of checked statements” is treated as equivalent to “percentage of speech,” every politician will look like more of a liar than they actually are. (With better selection criteria, you could produce something more meaningful. For example, you could check all statements in which candidates cited a statistic, and then say that X percent of statistics in Trump’s speeches were true, versus Y percent in Clinton’s. But that’s decidedly not what’s being done here.)

PolitiFact’s “truth propensity” measurements become even more dubious when we inquire into the underlying question of how “truth” and “falsity” are determined to begin with. PolitiFact believes that the veracity of all evaluated statements can be measured on a scale, ranging from completely “True” to a “Pants on Fire” lie. But many of the selected quotes are murky and open to interpretation. In fact, the very idea of a truth “meter” lends a faux precision to the website’s often questionable parsings of political language.

Consider another Trump claim: that Hillary Clinton “wants to essentially abolish the Second Amendment.” For PolitiFact, this was false. As it explained, it “found no evidence that Clinton has ever said she wants to repeal or abolish the Second Amendment. She has called for stronger regulations, but continuously affirms her support for the right to bear arms.” But Trump’s claim can’t easily be rated “true” or “false.” Evaluating it requires adopting a particular interpretation of what the Second Amendment actually means, a question that is highly contested and difficult to resolve. For conservatives, nearly all restrictions on gun ownership “essentially abolish” the Second Amendment, since they believe the Second Amendment protects the right to unrestricted gun ownership. For liberals, the Second Amendment is narrower in its scope, and permits a number of regulatory measures. Recently, the Supreme Court has tended toward the conservative interpretation, having struck down gun controls in the D.C. v. Heller case. And since the Supreme Court is supposed to have the last word as to what the Constitution means, a conservative may well have a good case for saying that calls for regulation effectively constitute a call to repeal the Second Amendment.

There are no easy answers in constitutional law; debates over the interpretation of various clauses and punctuation marks can go on for decades, sometimes centuries. But while PolitiFact ostensibly recognizes this (since it recounts the constitutional arguments in evaluating the claim), ultimately it feels compelled to color-code the statement using its Truth Machine.

It’s also the case that by focusing on the type of claims that can (supposedly) fit neatly into a truth measurement index, the “PolitiFact mentality” misses many of the most important ways in which deception operates. It’s notable that on PolitiFact’s scorecard of political truthfulness, Bill Clinton ranks as the most truthful of all the assessed politicians. Bill Clinton is, in fact, one of the most willfully deceptive and dishonest politicians of all time. Yet PolitiFact is doubtless correct that Clinton’s words are rarely factually inaccurate. That’s because Bill Clinton has always chosen his language very carefully, making statements that are accurate in a narrow, technical sense, yet totally misleading in their effect on the listener. (Plenty of examples can be found in my book Superpredator: Bill Clinton’s Use and Abuse of Black America.)


So PolitiFact’s effort to quantify truth is both futile and unscientific in multiple ways. Yet it has been totally embraced by the media. Just last week, Nicholas Kristof cited PolitiFact’s truth percentages to support an argument that while Donald Trump tells lies, Hillary Clinton tells mere fibs. Now, Kristof may be right about that (though as a general rule, when you find yourself defensively insisting on the importance of the difference between a lie and a fib, you’re probably a pretty egregious purveyor of both). But in sprinkling his column with vacuous numbers, he adds an unwarranted veneer of social-scientific support to his claim. Kristof says that “Trump has nine times the share of flat-out lies as Clinton.” This means nothing.

Yet it happens over and over. Kristof cited the same fanciful numbers for a similar article in April defending Clinton’s record of dishonesty. (Most New York Times opinion writers only have about six columns in them, which are recycled indefinitely over multi-decade careers.) And it is done without any thought by people at the very top of the journalistic profession. After all, Nicholas Kristof is a Rhodes Scholar. He has won a Pulitzer Prize. He should know that he can’t possibly put a precise number on something like this. Yet he does so, unhesitatingly and without qualification. (It’s a good reason not to trust people who have won Rhodes Scholarships and Pulitzer Prizes.)

Of course, the news media constantly makes use of misleading statistics. Darrell Huff’s delightful little book How to Lie With Statistics, published in 1954, reveals a number of manipulative tendencies that are still just as prevalent in the press more than half a century later. It’s understandable why they persist. Faced with the fact that we would love to quantify the world, but have very limited means of doing so, we make a load of totally unwarranted assumptions and produce an impressive-looking chart. Writers think they need lots of numbers to make an argument stick, and so they come up with some by any means necessary. Explanation isn’t justification, though. It may be perfectly easy to understand why the media feels compelled to dispense empty percentages, but that doesn’t make PolitiFact’s numbers any more meaningful or defensible.

Unfortunately, PolitiFact is unlikely to admit the problem. Its editor, Angie Holan, displays an irritating tendency common among journalists: believing that if you are angering those on “both sides,” this is evidence that you are objective and neutral. (The position is mistaken because it fails to consider an alternate possibility: that the reason you are disliked by both left and right is that you are universally known to be worthless.) “Partisan audiences,” Holan has said, “will savage fact-checks that contradict their views.” Dismissing criticism this way is worrisome, because it means that fact-checkers may view substantive objections to their methods as the product of mere “partisanship,” and treat all arguments that they are wrong as further evidence that they are right.

Yet the spreading of misinformation by major news outlets should be objectionable. It should stop. And if PolitiFact or the Washington Post’s Pinocchio-thing did have any interest in facts, they would entirely eliminate their use of percentages, and discourage the press from using their work this way. Every day they keep spitting out meaningless numbers, they undermine their credibility as scientifically minded evaluators of truth. That’s a fact.