
It’s the third week of March, and that means America’s love affair with brackets and balls has re-emerged, as it does every year, to sap our productivity and our sanity. If Warren Buffett’s one-billion-dollar March Madness challenge is any indication, this might be the biggest year for amateur bracketology yet. Some base their predictions on which team is higher seeded (those boring brackets always tend to win, don’t they?), others rely on experts, and then there are those poor souls who pick winners based on mascot or team color.

Of course, America has developed another love affair over the past few years that intersects neatly with this annual obsession: big data. Google searches for “big data” have risen sharply since 2011 and are now at an all-time high. Algorithms that predict future results have been especially popular lately, thanks to Nate Silver’s perfect 50-for-50 prediction of the 2012 presidential election. And it’s no accident that Silver chose today, March Madness Monday, to relaunch his FiveThirtyEight site, which promises to offer loads of data-informed reporting on sports and everything else. Even sites you might not expect, like the Huffington Post, are going long on algorithmic March Madness predictions. As Hamish McKenzie wrote, in the fight between our data and our gut, data is winning.

Does that mean you should tune out the old-school ESPN analysts, forget about “intangibles” like toughness or maturity, and let a robot fill out your bracket for you?

To find out, I talked to a couple of guys whose love of college hoops is rivaled only by their love of quantitative analysis. Ed Feng has a Ph.D. from Stanford in statistical mechanics and built an algorithm called The Power Rank, designed to rank teams and predict the outcomes of games. He is an advisor for the “March Machine Learning Madness” competition run by the predictive analytics company Kaggle, which will award $15,000 to the person behind the algorithm that correctly predicts the most winners. Scott Turner has a Ph.D. from UCLA in artificial intelligence and is running his own machine learning contest at his blog, Net Prophet. The contest is in its fifth year, but this is the first year Turner has run it.

So how well do our robot overlords perform? In last year’s machine competition, in which Turner participated, the winning entrant beat the two human brackets included in the contest: one belonged to the father of one of the organizers, Danny Tarlow, and the other was President Obama’s bracket. Obama’s bracket had a lot of “chalk,” meaning it had few upsets and hewed closely to the seedings picked by the selection committee — only one of Obama’s “Sweet Sixteen” teams was seeded lower than four. And two years ago, the winning machine bracket beat ESPN experts Jay Bilas and Dick Vitale, along with a bracket that picked the higher seed in every matchup. In a sense, that kind of bracket is the most “human,” because it places emphasis on whatever teams the very human selection committee thinks will win. Score one for the robots.
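
That “pick the higher seed” baseline is about as simple as an algorithm gets. Here’s a minimal sketch in Python; the Team type and the sample matchups are illustrative assumptions of mine, not anything taken from the contest:

```python
from collections import namedtuple

Team = namedtuple("Team", ["name", "seed"])

def pick_chalk(matchups):
    """Baseline "chalk" bracket: in every matchup, advance whichever
    team the selection committee seeded higher (lower seed number)."""
    return [min(pair, key=lambda t: t.seed) for pair in matchups]

# Hypothetical first-round matchups, for illustration only.
round_one = [(Team("Florida", 1), Team("Albany", 16)),
             (Team("Dayton", 11), Team("Ohio State", 6))]
print([winner.name for winner in pick_chalk(round_one)])
# -> ['Florida', 'Ohio State']
```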

Does that mean you should copy one of the machine-made brackets for this year’s tournament? Maybe, but picking the best algorithm or methodology is about as random as trusting a smart human. “The win was of course very lucky,” writes last year’s winner, Ryan Boesch. “Basketball games are random in nature, so to find which model is actually the best would require many years of tournaments. One tournament is not statistically significant.”
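
Boesch’s point is easy to demonstrate with a quick simulation. In the sketch below, the accuracy numbers (72 percent versus 68 percent) are invented for illustration; the takeaway is just that a genuinely better model loses a single 63-game tournament surprisingly often:

```python
import random

def simulate_tournament(accuracy, n_games=63):
    """Count how many of n_games a model calls correctly,
    given its true per-game accuracy."""
    return sum(random.random() < accuracy for _ in range(n_games))

random.seed(42)

# Hypothetical models: one truly calls 72% of games, the other 68%.
TRIALS = 10_000
worse_wins = sum(
    simulate_tournament(0.68) >= simulate_tournament(0.72)
    for _ in range(TRIALS)
)
print(f"Weaker model ties or beats stronger model in "
      f"{worse_wins / TRIALS:.0%} of single tournaments")
# Expect roughly a third of tournaments: one bracket tells you
# very little about which model is actually better.
```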

Based on the outsized hype around “big data,” you’d be forgiven for thinking algorithms have some secret line on how the world works beyond what humans can know. We saw this last week when Google’s vaunted flu tracker was found to be monumentally flawed. “The first problem with algorithms is that they don’t ask the right questions,” says Feng. “If you’re trying to, say, predict the outcome of a college basketball game, you have to ask: what’s important?”

For a computer to know what factors are most significant in producing a certain outcome, it needs a ton of past data. Even though there are thousands of Division I basketball games played each year, it would take millions of years of game data for a machine to make consistently accurate predictions, Tarlow told me last year. It’s the same reason that, despite all of our technological advances, we’re still not very good at predicting giant earthquakes: the analysis relies on big quakes from the past, but those only happen once every hundred years or more.
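
A back-of-the-envelope calculation shows the scale of the problem. The sketch below uses a standard normal approximation for comparing two binomial proportions; the 72 and 70 percent accuracy figures are hypothetical:

```python
import math

def games_needed(p1, p2, z=1.96):
    """Approximate number of games required to distinguish two models
    whose true per-game accuracies are p1 and p2, using a normal
    approximation for the difference of two binomial proportions."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    delta = abs(p1 - p2)
    return math.ceil((z ** 2) * variance / delta ** 2)

print(games_needed(0.72, 0.70))
# Roughly 3,950 games (more than 60 tournaments' worth of 63-game
# brackets) just to tell apart two models two points apart.
```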

Other factors related to basketball success are simply difficult to quantify, though these are related more to strategy than to the “intangibles” like toughness that ESPN analysts go on and on about in an attempt to justify their expertise. Feng looks to his favorite team, the Michigan Wolverines, as an example: “Michigan is a particularly good passing team and they also shoot three-pointers really well, which is particularly hard to play zone (a type of defense) against. That’s kind of an element that is difficult for an algorithm to capture.”

Then there’s the most obvious limitation: Basketball games are full of unpredictability and randomness — that’s the whole reason they’re fun to watch.

“Sports is a really interesting area for predictions because outcomes are influenced by both systematic and random forces,” Turner says. “We can certainly measure how good a team is at shooting the basketball, and we can understand to some extent how that influences the game outcome. But how well a team shoots the ball in any particular game is partially random — even good shooting teams have off nights, and vice versa. So you have this very intriguing challenge of digging through game data trying to separate the systematic from the random. Can you find some systematic clue hidden under all the random noise that no one has found before? Can you exploit that to predict outcomes better than anyone else?”
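
Turner’s “off nights” are easy to reproduce in a toy model. In the sketch below, a team’s underlying three-point skill is held perfectly constant (38 percent, a figure picked purely for illustration), so every game-to-game swing comes from sampling noise alone:

```python
import random
import statistics

random.seed(7)

# Hypothetical team: "true" three-point skill of 38%, attempting
# 20 threes per game over a 30-game season.
TRUE_SKILL, ATTEMPTS, GAMES = 0.38, 20, 30

season = [
    sum(random.random() < TRUE_SKILL for _ in range(ATTEMPTS)) / ATTEMPTS
    for _ in range(GAMES)
]

print(f"true skill:     {TRUE_SKILL:.0%}")
print(f"season average: {statistics.mean(season):.1%}")
print(f"best night:     {max(season):.0%}, worst night: {min(season):.0%}")
# The skill never changes, yet single-game shooting swings wildly:
# the "off nights" here are pure randomness, not anything systematic.
```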

In what ways are computers better at predicting basketball wins than humans? They don’t have the human biases, like favoritism and contrarianism, that dominate our think-piece, thought-leader pundit culture. “Experts” are incentivized to make bold predictions to further the notion that they know something we non-analysts don’t. How else can they justify their big paychecks and fill airtime on 24-hour sports channels? If any of those bold predictions come true, they can brag about it. If not, they can just chalk it up to the unpredictability of sports, which is what people love about the tournament anyway.

While March Madness algorithms are fun exercises that teach us about the advantages and limitations of machine predictions, ultimately they are only slightly less random than well-reasoned human predictions. And while watching and studying a lot of basketball may make you sound smart, anyone who regularly enters March Madness pools knows the winner is just as likely to be some dude in payroll who’s never watched a game of basketball in his life as it is the die-hard Duke fan who obsesses over every game. If you really want to win, maybe you should follow these Indiana researchers’ advice and pick as few upsets as possible. But where’s the fun in that?

[Image via Thinkstock]