
It’s not quite as good as an Elvis sighting, but I saw a Buddha of Bamiyan last week, the tall one, whose left leg was already missing before the Taliban blew up the rest of him in 2001. I stood at his sandstone feet, then levitated 60 yards in the air to hover near the long Buddha ears and thick gray lips.

No one bothered to scan the two 6th-century monuments carved into an Afghan cliff face while they were still just mud and rock, but they left a digital legacy, mostly snapshots scattered online. Today, software can take those photographs, stitch them into a 3D model and take you on a flyby. It’s a more perfect experience than experience itself, a counter-historical present that makes you forget the Buddhas were destroyed.
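The stitching described above is known as photogrammetry, and its geometric core is triangulation: if the same point appears in two photographs taken from known camera positions, you can recover where it sits in 3D. Here is a minimal sketch in Python with NumPy, using the standard linear (DLT) method; the camera matrices and the test point are invented for illustration, and a real pipeline (matching features across hundreds of snapshots, estimating the cameras themselves) is far more involved.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates in two images with known 3x4 camera matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point X minimizes |A @ X|: take the singular
    # vector for the smallest singular value, then dehomogenize.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: same intrinsics, second camera shifted 1 unit along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))  # recovers the original point
```

With noiseless, synthetic measurements the recovery is exact; real structure-from-motion software repeats this over millions of matched points and follows it with nonlinear refinement.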

When machines enhance your experience with an overlay, it’s called augmented reality; when they replace it entirely, it’s called virtual reality. Both techniques have something in common — they derive from reality itself, and they do it through a sort of 3D scanning called reality capture.

Reality capture and its algorithmic heart, reality computing, are two technologies on the verge of ubiquity, and they’re the magic behind a lot of things you’ve already heard of. The last 18 Academy Award winners for best visual effects have used a piece of reality-computing software called Autodesk Maya, the Sanskrit word for “illusion.” So does the video game Halo 4, which has sold more copies in America than any other Microsoft Studios title. Both of them feed off of reality scans, which is what sets them apart.

While we’ve had virtual realities for a while, they’re usually a little too cute. Minecraft and Second Life are two worlds where players painstakingly created a built environment from scratch — but they’ve never rivaled the physical world in detail, or they’ve failed miserably when that was the goal.

What makes reality computing different from old VR, and from run-of-the-mill 3D modeling, is its scale, speed and resolution. Sensors have begun to ingest the world — entire cities at a time — and to reconstruct a doppelganger, which is just as rich as the real, much more malleable, and unbowed by the laws of physics or the tyranny of the past.

Living across multiple realities, the Buddha of Bamiyan exists and it does not exist. You met your wife-to-be on a random subway platform and did not. You live in Alaska, or Zanzibar, depending on the time of day. With visions of what might have been, this technology doesn’t just capture reality — it captures you.

Take the Oculus Rift, a virtual-reality headset that monopolizes your sight and hearing with immersive CGI environments. Oculus shipped about 40,000 goggles to developers last year, and two of them landed with some coders I know.

The “demo realities” — dulcet Tuscan villas, interplanetary space, Arctic floes and first-person shooters — aren’t perfect, but they’re just real enough to make you wonder why anyone would ever buy real estate in Italy again.

After work in the evening, I’ve seen grown men cry from the power of Oculus visions, heard them laugh silly, lunatic laughs after a few minutes in VR. Floating through meteor showers to a Chopin nocturne is unforgettable, even if you never took your feet off the ground. The potential to expand human experience is immense, frightening and probably illegal in some states.

It also has side effects.

Just a little time in immersive, high-fidelity virtual worlds produces a nagging sensation during the rest of your waking hours. The world you thought you knew has multiplied to become massively parallel, and your brain retains a distinct impression of them all. Sight and sound tell you they’re equally real, and preferable substitutes for the earth you’re about to return to.

This world begins to blur with the rest. Its only distinction is to play across a smaller screen, your eyes. Very little distinguishes it from the texture, motion and complexity of the virtual, now that virtual rips from real. In fact, the sensation you have with goggles on — that you’re somehow nested in a larger reality you can’t see — returns at odd moments in real life, when you’re driving the freeway at night, say, or standing in line for coffee. You start to distrust your self-contradicting senses.

“Cyber-reality and physical reality are converging” — that’s what Jordan Miller tells me. He sounds like an avatar in a Neal Stephenson novel, but he’s talking with a straight face. Miller co-founded RealityCap, which makes an iPhone app called 3DK.

Alongside Autodesk, Microsoft and Adobe, the startup is part of a wave of innovation in reality capture and computing. 3DK is a spatial sensor that measures the world through the phone’s camera and embeds those measurements on screen. It superimposes a layer of machine precision over human vision. Intruding on the real, it gives the illusion of superpowers.

In his 1992 novel Snow Crash, Stephenson coined the term Metaverse: a fusion of virtually enhanced physical reality and physically persistent virtual space. It’s the virtual and the real interwoven. And twenty years later, it’s upon us.

Most people don’t know it, but current smartphones have all the hardware they need to rip a version of reality — just like you would rip an MP3 off a disc — and cloud computing is powerful enough to handle those virtual models, which can eat up hundreds of gigabytes of memory. Those two developments are propelling reality capture toward a Brownie moment, when it will be in every hand. Translation: your life can be the stuff games are made of.

It wasn’t always clear that reality computing would break through to consumers. The inflection point for the technology, the moment everyone paused to take notice, probably came in 2010, when Microsoft released Kinect, the motion-sensing device for the Xbox. To anybody who was watching, Kinect’s instant fan base demonstrated that the rapidly evolving technology had mass-market appeal, at mass-market prices.

Using active infrared sensors, Kinect reconstructs 3D images of humans in motion. Basically, every gamer has a virtual doppelganger reproducing her movements within the machine, a double inside the looking glass.
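Kinect’s own pipeline is proprietary, but the first step of any depth sensor is the same: back-project each pixel of the depth image into 3D space using the pinhole camera model, turning a flat grid of distances into a point cloud. A minimal sketch, with made-up intrinsics (fx, fy are focal lengths and cx, cy the optical center, not Kinect’s real calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud using
    the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)

# Toy 4x4 depth image: a flat wall 2 meters from the sensor.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=5.0, fy=5.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3): one 3D point per pixel
```

Feed those points into a surface-reconstruction or skeleton-tracking stage, frame after frame, and you have the doppelganger in the machine.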

The first version of Kinect relied on depth sensors and range-camera technology invented by an Israeli startup called PrimeSense, which Apple happened to buy last year. With that deal, the colossus of Cupertino obtained the IP and savoir-faire to make iPhones into full-fledged motion-capture devices. Everywhere becomes gamespace.

At the risk of looking like a fool, I’m going to make some predictions. They’re about reality computing and alternative worlds. Serious people, who otherwise consider themselves adult, will go to therapy because of this technology. These simulations are way beyond cute. Only literature has the same plasticity and detail.

Users will feel the itch to log in, and wake up from a virtual world eight hours later if they’re lucky. Like the woman in Inception, they’ll make Limbo their reality. And as with video games, some will die of dehydration and others will let their children go hungry. Some will receive interventions; others will play till the money runs out.

These new virtual realities are addictions waiting to happen. Life, unemployment, the inevitable dreariness of daily existence all surrender to an online world where you can take risks, accomplish something, be beautiful and belong. I know, because I can feel the pull already.