Basketball and big data: Are robots the secret to winning your March Madness pool?

By David Holmes , written on March 17, 2013

From The News Desk

When Nate Silver went 50-for-50 in predicting the state-by-state outcome of the 2012 presidential election, Hamish McKenzie called it "a win for data over gut." It showed that no matter how loudly pundits like Karl Rove and Joe Scarborough talk, they are no longer immune to the tyranny of algorithms and arithmetic.

And that was all over a silly little election! Today we're on the eve of what's arguably an even bigger predictive event in America: March Madness.

According to an MSN survey, 60% of Americans will fill out a March Madness bracket this year, compared to the 57.5% of eligible voters who voted in the 2012 presidential election. What that says about civic duty is for another essay. Instead, we ask: Is this the year we tune out ESPN analysts like Andy Katz and Jay Bilas in favor of the data mavens? Can March Madness be, in the parlance of our times, "hacked"? Who's the Nate Silver of college basketball?

Some would argue that the Nate Silver of college basketball is, well, Nate Silver. He fills out a bracket each year using a methodology combining human- and computer-based predictive models. But Silver's recent record when it comes to sports has been pretty dismal. At the outset of the NFL playoffs, he predicted that the Seahawks and the Patriots would play in the Super Bowl (wrong on both counts). Once the Super Bowl arrived, he predicted the 49ers would defeat the Ravens (wrong again).

Maybe there's another way: For the purest form of data-driven basketball analysis, look no further than the annual Machine March Madness contest. Run for the past four years by Danny Tarlow, a postdoctoral researcher at Microsoft Research Cambridge*, and Lee-Ming Zen, a software developer for Amazon, the contest requires entries to be compiled by a machine with no consideration for human judgment.

So how do the machine-driven models compare to the experts? Two years ago, the winning bracket was more accurate than both Nate Silver's bracket and a bracket filled out by always picking the higher seed. (The higher seed analysis is in some ways the purest human-driven metric: Seeds are determined by a back-room selection committee the weekend before the tournament commences). And last year, the winning entrant beat high-profile ESPN experts Jay Bilas and Dick Vitale, along with the higher seed baseline.
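The higher-seed baseline is simple enough to sketch in a few lines of Python. This is a hypothetical illustration, not code from the contest: it advances the team with the better (numerically lower) seed through one 16-team region, round by round.

```python
# Illustrative sketch (not contest code): the "always pick the higher seed"
# baseline for a single 16-team NCAA region.

def higher_seed_bracket(seeds):
    """Return every round of results, advancing the lower seed number."""
    rounds = [seeds]
    while len(seeds) > 1:
        # Adjacent entries play each other; the better (lower) seed wins.
        seeds = [min(a, b) for a, b in zip(seeds[::2], seeds[1::2])]
        rounds.append(seeds)
    return rounds

# Standard NCAA first-round matchup order for one region.
region = [1, 16, 8, 9, 5, 12, 4, 13, 6, 11, 3, 14, 7, 10, 2, 15]
for r in higher_seed_bracket(region):
    print(r)
```

By construction, the 1-seed wins every region and the baseline bracket contains zero upsets — which is exactly why beating it is a meaningful test for a predictive model.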

Some might say the decision to rely solely on machines is unnecessarily rigid. Humans know stuff too! But by constraining the competition to what machines can do, the pool becomes a fascinating experiment in determining the strengths and limitations of data. "It keeps the focus on what you do with the data rather than turning to the experts," Tarlow says. "Computers are in some sense impartial to the biases." Of course, algorithms have their own set of biases, but they're different from the ones we see trumpeted by pundits on ESPN.

For example, all year pundits have been saying the Big Ten is the best conference in college basketball. And while the strength of a team's opponents is an important factor in determining its tournament success, it can often cloud a human's judgment when picking head-to-head matchup winners, Tarlow says.

Another danger of following the experts is that they love to make bold predictions. After all, bold predictions and contrarianism draw eyeballs. Chalk doesn't. And if Dick Vitale says some 12-seeded team is headed to the Final Four and it actually happens, people will say he's a genius, right? And if not? "Hey, it's a crazy tournament. Who can really predict these things anyway?"

Well, sometimes data can. As political pundits begin to lose their stranglehold on public opinion in the post-Nate Silver era, we're also seeing sports punditry crack under the pressure of numbers. One of the starkest examples of this came a week and a half ago, when ESPN's so-called expert Skip Bayless – whom the Classical's Bryan Joiner calls "as unloved and unlovable and ineffective and just awful as the Cross Bronx Expressway, and just as permanent" – was taken down on air by Seattle Seahawks cornerback Richard Sherman using little more than simple arithmetic.

That said, there are still a lot of limitations to what data can do, particularly in March Madness. Even as big data continued to break ground in 2012 – remember when robots learned to identify cats in YouTube videos? – the art of algorithmically predicting basketball games hasn't quite kept pace. How come? Not enough data, Tarlow says.

"If we had a million years (of games) for data, that would open the door to the next generation of machine learning." Unfortunately, there are only around a few thousand Division 1-A college basketball games played each year. And while that may sound like a lot, the more data a machine consumes the more data it needs to learn something new. As the Atlantic's Alexis Madrigal writes, "Sure, throwing more data at an algorithm makes it better, just like people know more words as they read more books. But as you go, the amount of data you need to make the algorithm better gets larger. To extend the metaphor: you have to read many more words to learn a new one."

It's an important lesson for anyone who's bullish on big data. And it's the same reason predictive models for things like giant earthquakes, which have a much greater impact on human life than, say, the winner of Duke-Kentucky, are still inadequate despite the fact that we're supposedly living in a brave new world of data analysis. A lot of the data seismologists need to predict the next "big one" comes from previous big earthquakes. Unfortunately, those only occur every hundred years or more. We're awash in data, but it's never enough.

If you've made it this far, you're probably thinking, "Hey, I thought that headline was about how I could win my March Madness pool!" Fine, if you insist: Among robots, experts, and your friends' opinions, your best bet is probably to copy one of the machine-based brackets from this year's Machine March Madness pool. Or you could do what researchers at Indiana University suggest and pick the higher seed every time. What about the Vegas oddsmakers? They're highly incentivized to pick the right teams. But last year, for example, you would have fared slightly better by picking no upsets in the first round than by copying Vegas' picks. For my part, I can't bear to root against my hometown favorite, so I always pick my alma mater to win the whole thing no matter how unlikely it is. (Did I mention I've never won one of these things? Go Buckeyes!)

But if there's one thing the experts agree on, it's this: Don't trust the experts.


[Top image courtesy j9sk9s on Flickr]

  • This article originally stated that Tarlow was a post-doctoral student at Cambridge University.