At some point in the history of letters, fact-checking went from a foundational part of journalism to a specialization practiced by few to a buzzwordy media trend, alternately praised and dismissed depending on which politician was getting called out. During the 2012 election, a race notable for its high levels of deceit, fact-checking was especially prevalent, and the Washington Post, along with PolitiFact and FactCheck.org, was a leader in the field.

Now the Post has debuted a real-time fact-checking program called Truth Teller. It transcribes political speeches using voice-to-text technology and automatically cross-checks the speaker’s claims against databases of facts, half-truths, and lies. In one example, House Majority Whip Kevin McCarthy repeats the well-worn claim that taxing the rich will result in the loss of 700,000 jobs. As he says this, the word “False” materializes in big red letters along with a link to a blog post where the Post’s resident fact-checker Glenn Kessler debunks the claim.
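The Post hasn't published Truth Teller's internals, but the pipeline described above (voice-to-text transcription upstream, then a lookup of each claim against a store of claims human fact-checkers have already rated) can be sketched in a few lines. Everything below, from the claim store to the matching threshold and function names, is a hypothetical illustration of the idea, not the Post's actual code:

```python
# Illustrative sketch only: the claim "database" and matching strategy here
# are assumptions, standing in for whatever Truth Teller actually uses.
import difflib

# Hypothetical store of previously checked claims and their ratings,
# e.g. links to Fact Checker blog posts.
CHECKED_CLAIMS = {
    "taxing the rich will cost 700,000 jobs":
        ("False", "https://example.com/fact-check/700000-jobs"),
    "the economy has produced four and a half million private sector jobs":
        ("True, but cherry-picked window", "https://example.com/fact-check/29-months"),
}

def check_claim(sentence, cutoff=0.6):
    """Return the rating for the closest known claim, or None if nothing matches."""
    matches = difflib.get_close_matches(sentence.lower(), CHECKED_CLAIMS.keys(),
                                        n=1, cutoff=cutoff)
    if not matches:
        return None
    rating, source = CHECKED_CLAIMS[matches[0]]
    return {"claim": matches[0], "rating": rating, "source": source}

if __name__ == "__main__":
    # A line from a live transcript, as the speaker says it.
    print(check_claim("Taxing the rich will cost 700,000 jobs."))
```

In practice, the hard part is everything this sketch skips: segmenting a live transcript into checkable claims, and matching paraphrases that don't closely resemble the wording already on file.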

To be clear, the program is still a work in progress, but the Post’s executive producer for digital news, Cory Haik, told Poynter’s Craig Silverman, “The goal is to get closer to … real-time than what we have now. It’s about robots helping us to do better journalism — but still with journalists.”

Using the terms “robots” and “journalism” in the same sentence doesn’t always inspire good vibes among a tribe that’s seen more than its share of cutbacks and layoffs over the past decade. But robots like the ones developed by Narrative Science have already managed to write “convincingly human” recaps of sporting events and earnings reports, and those same robots are working on turning “unstructured data” (like tweets) into readable (and, most importantly, trustworthy) narratives. So why not have robot fact-checkers? What’s more robot-friendly than a simple binary construct like true/false?

The biggest advantage of robot fact-checkers, and one the Post has capitalized on with Truth Teller, is the ability to work in real time. An algorithm can comb through Congressional Budget Office studies or public records much faster than a lowly human journalist. That said, some of the most skilled orators work in the gray areas, where figures may be literally true but misleading in context. Take Bill Clinton’s speech at the Democratic National Convention, a masterpiece of rhetoric and restrained emotion, but also a perfect exercise in how to fool a robot fact-checker. Clinton didn’t lie when he said, “In the past 29 months, our economy has produced about four and a half million private sector jobs.” But the 29-month window was carefully chosen to reflect positively on President Obama’s leadership. Had it been stretched out by a few months, the economic growth under Obama wouldn’t look so impressive. A robot might not catch that. A person, like FactCheck.org’s Robert Farley, did.
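To make that gray area concrete, here is a toy calculation with made-up monthly figures (not real Bureau of Labor Statistics data) showing how a literally true cumulative number can hinge on where the window starts, the kind of framing a naive true/false checker would wave through:

```python
# Hypothetical monthly private-sector job changes, in thousands. The figures
# are invented for illustration: heavy losses early, steady gains afterward.
monthly_job_changes = [-700, -400, -150] + [160] * 29

def cumulative_growth(months_back):
    """Sum job changes over the most recent `months_back` months."""
    return sum(monthly_job_changes[-months_back:])

for window in (29, 32):
    print(f"last {window} months: {cumulative_growth(window):+,} thousand jobs")

# A checker that only verifies the 29-month figure sees a true statement;
# widening the window by three months tells a much less flattering story.
```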

Having robots do the fact-checking is also a good way to sidestep the most common criticism of fact-checkers, which is that they’re biased. Kessler himself heard these accusations throughout the 2012 campaign. But robots need input to function, and if the data sets they pull from lean one way or the other, then your algorithmic fact-checker could be no better than a second-rate cable news pundit. A robot that culls data from the non-partisan Congressional Budget Office would have a much different take on tax hikes than a robot that relies on data from Grover Norquist’s conservative Americans for Tax Reform.

As for Truth Teller, it pulls data in large part from posts written on the Post’s Fact Checker blog, so there’s still a very strong human element to the automation. For now, at least, the program seems to hit a sweet spot between human reporting and algorithmic data collection. But for a program to be truly real-time and comprehensive, it would need to look beyond what’s already been fact-checked by humans, and that’s where the pitfalls of biased or partial data sets could come into play.

It makes me wonder if fully automated fact-checking is better suited to breaking news events like the Newtown shooting and Hurricane Sandy, where truth and falsehood, while muddled, are certainly less so than in a politician’s speech. In the wake of the London riots, the Guardian posted a visualization of how rumors spread and were then quickly debunked on social media, and the results suggested that Twitter might really be a “truth machine,” as some have claimed. If algorithms could harness this data in real time for the sake of fact-checking, could they help journalists avoid potentially devastating reporting errors during breaking news events?

Journalism, whether we’re talking about fact-checking, earnings report summaries, or investigative pieces, will never be completely taken over by automatons, and the Post certainly isn’t trying to do that with Truth Teller. Until robots are as intricate as the replicants in Blade Runner, humans will still be needed to determine which information is crucial and which is forgettable and, most importantly, to talk to people who wouldn’t otherwise share their stories with the world. That said, people’s trust in data is perhaps at an all-time high, particularly in the wake of Nate Silver’s forecasting triumph in the 2012 election. And now that bloviating TV pundits have come to dominate so many of the places we used to look to for journalistic authority, algorithmic authority looks more appealing than ever.

[Illustration by Hallie Bateman]