Even in a “lost year for tech” there have been great gains, as John Gruber at Daring Fireball points out. For one, smartphones have become commodities, and more than 1.5 billion people worldwide tote full-on miniature computers in their pockets.
The fact that one out of every five people on the planet has access to real-time information and can communicate with virtually anyone, anywhere, at any time suggests we are entering a period of great change. What’s more, it’s likely the pace of that change will only quicken.
But as technology marches on, it’s important to remember that the ability to stop is as important as the ability to speed up. It wasn’t until George Westinghouse invented the air brake in 1868 that trains could run faster and pull hundreds of cars. Before then, train wrecks were common even though trains moved slowly across the land. We’ll need a cultural analog to handle this ever-quickening pace.
As we head into a new year, John Markoff of the New York Times reports that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.”
Further, he writes:
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
Couple this with advances in quantum computing, and the possibilities are dazzling or frightening, depending on your mood and worldview. But buckle up, because when it comes to the evolution of technology, we’re just beginning.
Look at this (somewhat outdated though influential) chart compiled by Hans Moravec, a Carnegie Mellon computer science professor. It took until 1970 for computers to match the brain power of a bacterium. In the late ’80s our computers achieved mental parity with a worm, and in the mid-’90s they had the processing power of a guppy. By 2010 computers reached the capability of a mouse. The chart stops there. (Like I said, it’s somewhat out of date.)
The chart also illustrates how the pace of computing power and robot intelligence has been speeding up, tracing a hockey-stick-like progression. Progress in 1965 was slower than in 1985, which in turn was significantly slower than in 1995. At this rate our computers will match the processing power of a monkey in about five years and a human by 2030 or before.
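That extrapolation is easy to reproduce. Here is a minimal sketch in Python — the capability figures are hypothetical, order-of-magnitude numbers in the spirit of Moravec’s chart, and the two-year doubling time is an assumed Moore’s-law-style rate, not data from the chart itself:

```python
from math import log2

# Hypothetical, order-of-magnitude capability estimates in MIPS,
# loosely in the spirit of Moravec's chart (illustrative assumptions only).
capability_mips = {
    "bacterium": 1e-2,
    "worm": 1e0,
    "guppy": 1e3,
    "mouse": 1e5,
    "monkey": 1e7,
    "human": 1e8,
}

def years_until(target_mips, current_mips, doubling_years=2.0):
    """Years to grow from current to target power, doubling every
    `doubling_years` years (exponential, Moore's-law-style growth)."""
    return doubling_years * log2(target_mips / current_mips)

# Starting from mouse-level power, reached around 2010 on the chart:
for animal in ("monkey", "human"):
    year = 2010 + years_until(capability_mips[animal], capability_mips["mouse"])
    print(f"{animal}-level computing: ~{round(year)}")
```

Under these assumed numbers, human-level processing power arrives around 2030, which is roughly where the article’s projection lands.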
What will this mean?
Three-dimensional holograms in virtual reality? Robot sex (the porn industry is constantly innovating)? Drones that build other drones and transform our delivery systems? Each of these — and many more — may or may not be worth exploring. But this time I want to look at media taken to an extreme and forged into advanced simulations encompassing… everything. This was a topic I explored in my book, Play at Work: How Games Inspire Breakthrough Thinking.
We’ve all heard that ‘Life is a game.’ But what if we are all, right now, actually living in one, designed by someone living deep into the future?
It’s the kind of far-out idea debated in college dorms, often with the ritual passing of a bong, and constructed of equal parts The Matrix and Star Trek’s Holodeck. According to the theory, which an academic from Oxford and a scientist from NASA have put forth separately, there’s an almost mathematical certainty that we’re toiling inside an intricate simulation created by someone existing anywhere from 30 years to 5 million years or more into the future. In essence, we’re just some future being’s hobby, his or her version of SimCity or a massively multiplayer online role-playing game such as World of Warcraft. I suppose you could say we’re living in sim.
Nick Bostrom, a philosopher at Oxford University and director of its Future of Humanity Institute, dubbed it the “Simulation Argument.” But he’s not the only notable thinker who buys into this. So does Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory. Neither dons a tinfoil hat, wanders around city parks, or spouts sci-fi-worthy conspiracy theories. Their views, they say, have been shaped by math, science, and human history.
Bostrom believes that processing power mixed with more sophisticated software will reach the point where it becomes possible for computers to create consciousness. Meanwhile, Rich Terrile predicts that within the next 10 to 30 years artificial consciousness will be embedded in machines. Even now, the fastest supercomputers crunch data at twice the speed of the human brain. Plot that out on the exponential computer processing power curve and within the decade, he told Vice, we’ll be able “to compute an entire human lifetime of 80 years — including every thought ever conceived during that lifetime — in the span of a month.”
His Vice interview is fascinating and worth your time. In it he performed a back-of-the-napkin calculation involving the PlayStation, which Sony releases every six to seven years. Thirty years from now a Sony PlayStation should be able to compute 10,000 human lifetimes simultaneously and in real time, or about a human lifetime per hour. Between PlayStation 1, PS2 and PS3 there are about 100 million devices in the world. If each held 10,000 humans, more people would reside in Sony PlayStations than maintain a corporeal existence on earth.
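The napkin math is easy to check. A quick sketch using only the figures quoted above — the world-population number is my own rough 2013-era assumption, not from the interview:

```python
# Figures from Terrile's back-of-the-napkin calculation, as quoted above.
consoles = 100_000_000          # PS1 + PS2 + PS3 units worldwide
lifetimes_per_console = 10_000  # simultaneous simulated lifetimes, ~30 years out

simulated_people = consoles * lifetimes_per_console
world_population = 7_000_000_000  # rough 2013-era figure (my assumption)

print(f"Simulated people: {simulated_people:,}")   # one trillion
print(f"Corporeal people: {world_population:,}")
print("Simulated outnumber corporeal roughly",
      simulated_people // world_population, "to 1")
```

A trillion simulated lifetimes against seven billion corporeal ones: the simulated would outnumber us by more than a hundred to one.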
Terrile finds inspiration in the idea that we may soon have the technological wherewithal to create our own synthesized universes. That would mean that we, who live in a simulated world, have created a simulated world, whose denizens wouldn’t know they’re the product of our collective computing imagination. Now, what if our master designers also lived inside a simulation? Same for those who designed their simulation. Potentially you could have levels and levels of sims, perhaps millions of them.
In that case, Terrile speculates, if there is a creator for our world, it is we, or at least an offshoot of us hailing from the distant future. In a sense, “we are both God and servants of God,” he says.
But we might not even know we were living in a simulation.
“If the simulators don’t want us to find out, we probably never will,” Bostrom wrote in a 2003 paper titled “The Simulation Argument: Why the Probability that You Are Living in a Matrix is Quite High.”
What reasons might an advanced being have for conjuring these complex, albeit imperfect, simulated worlds? Bostrom doesn’t know.
But if processing power and computer hardware keep improving at the rate they have been, it’s possible this far-out scenario won’t be all that far out, after all.