Believe it or not, there was a time when all of the world’s information wasn’t just a tap away. Living in those dark ages meant that people had to retain information on their own or – gasp! – consult a book. Thankfully, between Google and countless other services, that’s no longer the case.

One of those services is Shazam. Launched in 2002, Shazam “listens” to a snippet of audio and matches it against its database to quickly return any relevant information. Now, when you hear a particularly wonderful song in your local hipster cafe, you can immediately learn the name of the song, the band playing it, and where you might be able to purchase it.

The service, whose more than 250 million users “tag” over 10 million songs, commercials, and TV shows each month, recently partnered with MLB to bring its audio-identification goodness to every post-season game, including the World Series. Users will be able to push one button, wait a few seconds, and then get detailed stats about the game and its players.

How? Well, the company’s servers “watch” the game at the same time as users, processing information and making it searchable in its database at about the same rate users are able to capture an audio snippet.
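
Shazam has never published the details of that pipeline, but the broad strokes of the underlying technique are described in a 2003 paper by Shazam co-founder Avery Wang: reduce the audio to compact, noise-tolerant “fingerprints” and file them in an index the moment they air. Here’s a deliberately simplified sketch of the indexing half in Python; the window sizes, the two-peak hash, and the ingest function are all invented for illustration:

```python
import numpy as np

# Inverted index mapping a fingerprint hash to (source, window number) pairs.
index = {}

def fingerprint(samples, win=4096, hop=2048):
    """Reduce raw audio to (hash, window) landmarks. In each window, hash the
    two loudest frequency bins: strong spectral peaks tend to survive crowd
    noise and tinny speakers, which is what makes the lookup robust."""
    landmarks = []
    for n, start in enumerate(range(0, len(samples) - win, hop)):
        spectrum = np.abs(np.fft.rfft(samples[start:start + win] * np.hanning(win)))
        a, b = np.argsort(spectrum)[-2:]  # indices of the two strongest bins
        landmarks.append((hash((int(a), int(b))), n))
    return landmarks

def ingest(source, samples):
    """Index a chunk of a live broadcast as soon as it airs."""
    for h, n in fingerprint(samples):
        index.setdefault(h, []).append((source, n))
```

Because each chunk is indexed the moment it’s fingerprinted, a snippet that a fan records a few seconds later already has something to match against.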

Shazam has performed this wizardry before, first with the Super Bowl, then the Grammys, and then with “American Idol.” Between these events, its ability to identify just about any recently aired TV show in the United States, and a database of over 20 million songs, Shazam has transitioned from parlor trick to information juggernaut over the last decade.

Unfortunately, as with any magic trick, Shazam’s act has gotten a bit stale. People don’t want to see the same act over and over again – they expect constant iteration and aren’t willing to settle for last week’s trick. Though the company has made some serious advancements in how it processes and stores information, Shazam’s main act is starting to become old hat. Which is a shame, because the service has the potential to be the system for bridging the gap between the physical and digital worlds.

In a way, that’s what Shazam already does. It takes ambient sound and compares what it “hears” to what it has already broken down and identified on its servers. The service takes data that would otherwise be impossible (or impossibly frustrating) to reach and makes it available as quickly and conveniently as possible. It would be interesting to see the service couple this simplicity with other data sets, expanding beyond the identification of content to other, more practical uses, like restaurant reviews or other location-based information.
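
The matching half of that trick is worth sketching too. When you tag a snippet, each of its fingerprint hashes is looked up and the hits get to vote: a genuine match produces a pile of hits that all agree on a single time offset between the broadcast and your recording, while chance collisions scatter. A toy version, with made-up match data:

```python
from collections import Counter

def best_alignment(matches):
    """Each match is (source, database window, snippet window). A real match
    yields many hits that agree on one offset between the two clocks;
    chance hash collisions don't line up and get outvoted."""
    votes = Counter((src, db_n - snip_n) for src, db_n, snip_n in matches)
    (source, offset), count = votes.most_common(1)[0]
    return source, offset, count

# Three hits agreeing on offset 42 identify "game_7" despite one stray hit.
print(best_alignment([("game_7", 50, 8), ("game_7", 51, 9),
                      ("game_7", 60, 18), ("ad_break", 7, 3)]))
```

That offset-voting step is what keeps the lookup reliable even when most individual hashes are noise.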

I asked David Jones, Shazam’s executive vice president of marketing, if this might be possible using high-frequency audio. What if Shazam’s app could pick up on sounds that I wouldn’t be able to hear, identify them with its existing technology, and return relevant information?

Say, for example, that a restaurant were to use a unique series of high-frequency sounds to serve as an identifier. Shazam might then be able to offer up reviews of the restaurant, its menu, etc. Instead of having to scan a QR code or fumble with an augmented reality app that uses a smartphone’s camera, Shazam could “magically” offer a way to get this information with the push of a button.
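
Nothing like this exists in Shazam’s app today, but it’s easy to imagine how such a beacon might work: encode a short ID as a sequence of tones just above the range of human hearing, then decode it on the phone with simple frequency analysis. A toy sketch, in which every frequency, slot length, and function name is invented for the example:

```python
import numpy as np

RATE = 44100
TONES = [18000, 18500, 19000, 19500]  # near-ultrasonic "alphabet": one tone per base-4 digit

def encode(digits, slot=0.1):
    """Hypothetical beacon: play one high-frequency tone per digit of an ID."""
    t = np.arange(int(RATE * slot)) / RATE
    return np.concatenate([np.sin(2 * np.pi * TONES[d] * t) for d in digits])

def decode(signal, slot=0.1):
    """Recover the digits: find the dominant frequency in each time slot and
    snap it to the nearest tone in the alphabet."""
    n = int(RATE * slot)
    digits = []
    for start in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n]))
        peak_hz = np.argmax(spectrum) * RATE / n
        digits.append(min(range(len(TONES)), key=lambda i: abs(TONES[i] - peak_hz)))
    return digits

print(decode(encode([2, 0, 3, 1])))  # -> [2, 0, 3, 1]
```

In practice the speaker, the room, and, as it turns out below, the phone’s own microphone all conspire against a scheme this naive, but the decoding itself is cheap.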

Jones says that the company does play with audio in ways the human ear can’t detect, but to the benefit of brands, not consumers. Shazam previously worked with Old Navy, which runs television, radio, Web, and in-store advertisements, to pitch-shift (read: subtly alter) the audio in each version of an ad just enough that Shazam could “hear” the difference but the human ear couldn’t. Old Navy could then learn how many of its customers “tagged” an advertisement from a television spot rather than, say, a video they watched on YouTube.
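
Shazam hasn’t spelled out exactly how that campaign worked, but the principle is simple to demonstrate: if each channel’s cut of the ad is shifted by a slightly different, inaudible amount, the ratio between a known frequency in the master recording and the same frequency in the tagged clip gives away which cut the user heard. A toy illustration with invented shift values:

```python
import numpy as np

RATE = 44100
SHIFTS = {"tv": 1.000, "radio": 1.005, "youtube": 0.995}  # invented per-channel shifts

def dominant_hz(samples):
    """Frequency of the strongest spectral peak in the clip."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.argmax(spectrum) * RATE / len(samples)

def which_channel(master, tagged):
    """The ratio between a spectral landmark in the master ad and the same
    landmark in the tagged clip tells you which shifted cut the user heard."""
    ratio = dominant_hz(tagged) / dominant_hz(master)
    return min(SHIFTS, key=lambda ch: abs(SHIFTS[ch] - ratio))

# Simulate: the master carries a 1000 Hz landmark; the radio cut runs 0.5% sharp.
t = np.arange(RATE) / RATE
master = np.sin(2 * np.pi * 1000 * t)
radio_cut = np.sin(2 * np.pi * 1005 * t)
print(which_channel(master, radio_cut))  # -> "radio"
```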

Unfortunately, the use case I had described was a bit trickier. Though Jones says that there’s “absolutely a lot of opportunity in that area to be explored and executed against,” he says that it would be “extremely difficult” to bake that functionality into Shazam because of the wide variety of microphones in modern smartphones. Because not all microphones are created equal, a Motorola-built phone might not be able to “hear” the same sounds as an iPhone, limiting the range of high-frequency audio that Shazam could rely on.
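
Jones’s objection is easy to illustrate. If one phone’s microphone captures frequencies up to 20kHz while another’s gives out around 16kHz, an 18.5kHz beacon is perfectly “audible” to the first and simply doesn’t exist for the second. A simulation with an idealized brick-wall microphone (real mics roll off gradually, but the spread between devices is real):

```python
import numpy as np

RATE = 44100
t = np.arange(RATE // 10) / RATE
beacon = np.sin(2 * np.pi * 18500 * t)  # the kind of tone a beacon might use

def mic(signal, cutoff_hz):
    """Idealized microphone: anything above its cutoff simply vanishes."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / RATE)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, len(signal))

for name, cutoff in [("phone A (20 kHz mic)", 20000), ("phone B (16 kHz mic)", 16000)]:
    print(f"{name}: captured level {np.abs(mic(beacon, cutoff)).max():.2f}")
    # -> phone A ~1.00, phone B ~0.00
```

Any beacon scheme would have to live in the narrow band that effectively every handset can hear, which shrinks the usable “alphabet” considerably.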

There went my hopes and dreams. Though Jones says that Shazam is looking to expand the types of things it can identify, it seems we aren’t anywhere near the augmented-reality future I had hoped for. Sure, I could turn to QR codes but, as I mentioned above, those are cumbersome and frustrating to use, which cuts into the main reason someone would want to use them in the first place: convenience.

For now, Shazam is limited to evolving its basic functionality. Though there isn’t necessarily anything wrong with that (Google Search has served the same function since its inception), it does tend to become less exciting as time goes on. It’s a bit like watching a juggler continue to add more objects into the mix; eventually, it goes from impressive to “meh.”

[Image courtesy Ky_olsen]