Reading the Web is a messy experience. There’s plenty of great content (you’ve probably noticed), but it’s dispersed so widely and across so many different reading experiences that my meagre brain quickly gets fatigued. As I jump from the New York Times to the Atlantic to Slate, my eyes are ducking and weaving, adjusting to new font sizes and types, as well as to line spacing and leading, avoiding the clutter of ads and sidebars, noting suggestions for what to read next, toolbars, nav bars, bio boxes, Google Ads, and subscription pleas. (Alas, not every site enjoys design as lovely as PandoDaily’s.)

That’s why I love read-later apps such as Instapaper, Read It Later, and Readability (and of course, Flipboard, Pulse, Flud, and Zite are doing cool things, too). Thanks to these guys, I can switch to “read mode” and enjoy a calm zen experience of plain text on a white background. No ads. No blinking banners. No attention stress.

So far, however, these reading experiences have been pretty limited. For the most part, “read modes” have been restricted to headlines and body text, with perhaps the occasional image. While that works great for news stories, magazine articles, and blog posts, anything that slightly resembles rich media has had to sit in the corner. If an article happened to have video in it, for instance, the video wouldn’t display in read mode.

Readability just took a big step towards fixing that. Today, they launched Iris, the next generation of their “content normalization engine” (what sort of world do we live in when that’s considered a legitimate descriptor?), which will bring videos into the read-mode experience, further circumventing the need to ever endure an ESPN banner ad.

Iris also brings with it the ability to display Wikipedia pages in read mode, simplifying the plagiarism experience for all budding journalists. Story subheadings, too, will be displayed more consistently and frequently in the navigation screens of apps such as Longform and Instapaper.

In his blog post announcing the Iris launch, Readability CTO Chris Dary said Iris is inspired by IBM’s Jeopardy-winning computer Watson:

Iris’ first order of business is to figure out what type of content source is at hand. It analyzes a page, determines the likely context based on a number of factors, and extracts what a human would expect as meaningful information from that source. Each context is fully malleable, and can be modified and improved upon individually.

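Readability hasn’t published Iris’ internals, but the quoted description suggests a two-step process: classify the source, then run a context-specific extractor over it. Here is a minimal sketch of that idea in Python; the function names, contexts, and heuristics are my own illustrative assumptions, not Iris’ actual code.

```python
# Illustrative sketch of a content-normalization dispatcher.
# Nothing here comes from Readability's codebase; it only mirrors the
# "detect context, then extract" flow described in Dary's post.
from urllib.parse import urlparse


def detect_context(url: str, html: str) -> str:
    """Guess the content context from the URL (a stand-in for richer page analysis)."""
    host = urlparse(url).netloc.lower()
    if "vimeo.com" in host or "youtube.com" in host:
        return "video"
    if "wikipedia.org" in host:
        return "encyclopedia"
    return "article"


def extract(url: str, html: str) -> dict:
    """Return what a reader would consider the meaningful content of the page."""
    context = detect_context(url, html)
    if context == "video":
        # A video context keeps the player embed plus its title.
        return {"type": "video", "embed": "<iframe ...></iframe>", "title": ""}
    if context == "encyclopedia":
        # A Wikipedia-style context keeps the article body and drops the chrome.
        return {"type": "encyclopedia", "body": ""}
    # Default article context: headline and body text.
    return {"type": "article", "title": "", "body": ""}
```
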
Iris is already getting down to work. Try, for instance, watching the Vimeo version of Caine’s Arcade (warning for those who haven’t seen it: happy tears ahead), using the Read Now function on Readability. So far, Iris only supports Vimeo and YouTube, but more services will be added soon enough.

At first sniff, some content owners might worry that their videos are being watched outside of their original ad-heavy environments, but most publishers probably won’t see this as much of a threat. As Dary explained by email, “Videos are actually (happily) one of the most shareable things on the web. We’re just using the embeds that the services themselves expose for others to use to share, so I suspect content owners are happy to have those videos in whatever context they can.”

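Dary’s point about embeds is easy to make concrete: Vimeo and YouTube both publish oEmbed endpoints precisely so that third parties can drop their players into other pages. Here’s a rough sketch of how a reader app might use them; the endpoint URLs are the services’ public oEmbed APIs, while the function and dictionary names are mine, for illustration only.

```python
# Sketch of fetching provider-supplied embed markup via public oEmbed endpoints.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# The oEmbed endpoints both services document publicly for sharing/embedding.
OEMBED_ENDPOINTS = {
    "vimeo.com": "https://vimeo.com/api/oembed.json",
    "youtube.com": "https://www.youtube.com/oembed",
}


def fetch_embed_html(video_url: str) -> str:
    """Return the provider-supplied embed markup (an <iframe>) for a video URL."""
    for domain, endpoint in OEMBED_ENDPOINTS.items():
        if domain in video_url:
            query = urlencode({"url": video_url, "format": "json"})
            with urlopen(f"{endpoint}?{query}") as response:
                return json.load(response)["html"]
    raise ValueError("No known oEmbed provider for this URL")


# Usage: fetch_embed_html("https://vimeo.com/<video_id>")
```
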
By the same token, many reading apps ease publishers’ nervousness about having their content experienced in a foreign space by first sending readers to the original website. Of course, that might not be enough for certain curmudgeonly tycoons who still believe Google News steals their content.

All going well, the day we can read the Web without having to deal with websites might not be that far off. Dary says it’ll be a while before we can read the whole Web via a parsed experience, but, because of improving standards and the proliferation of APIs, “we’re getting closer by the day.”

Perhaps you should save this one for later.