With every innovation comes the potential for its abuse. In the age of digital advertising, advertisers’ power to peer into the private lives of consumers is an increasingly hot topic. But provocative headlines often ignore nuances that define an emerging reality.

On the one hand, some industry watchers appear eager to portray advertisers as cartoonish boogeymen, warning their audiences about ominous-sounding but undefined concepts like “tracking” and “targeting,” thus leveraging the twin destructive powers of sound bites and 21st-century short attention spans. This perpetuates a false narrative of global advertiser malevolence.

On the other hand, advertisers continue to struggle to better engage with interested audiences, turning to a growing list of technologies with evolving and murky privacy implications. Some early ad tech ventures overreached (see the demise of Ringleader Digital), exacerbating consumer fears about the bewildering maze of technology under the hood of the Internet. To consumers this can be confusing and daunting, resulting in the stultification of mobile as an effective advertising medium, as well as advertiser brand damage and reduced sales.

A few years ago, the World Wide Web Consortium (W3C), egged on by the Federal Trade Commission (FTC), tried to address these issues by forming the “Do Not Track (DNT) Working Group.” The goal was to establish a universal Do Not Track standard in an effort to forestall one-size-fits-all regulation related to the “tracking” of individual Internet users. But after a series of stalled initiatives, the path forward for the W3C and the DNT self-regulation framework remains unclear.

In the absence of uniform self-regulation, calls for regulation increase. In 2011, Senator John Rockefeller (D-WV) sponsored Senate Bill 418 (the “Do Not Track Online Act of 2011”), which would require the FTC “to prescribe regulations regarding the collection and use of personal information obtained by tracking the online activity of an individual . . .” At the state level, privacy laws and regulations are being proposed with greater frequency.

Even President Obama, riding a post-Snowden wave of privacy sensitivity, is getting in on the action, recently tapping former Clinton chief of staff John Podesta to lead a review of the way that big data will “affect the way we live and work” and how “public and private sectors can spur innovation and maximize the opportunities and free flow of this information while minimizing the risks to privacy.” The results of Podesta’s working group will be documented in a report to the President which will “anticipat[e] future technological trends and fram[e] the key questions that the collection, availability, and use of ‘big data’ raise – both for our government, and the nation as a whole.”

But without current uniform standards, what’s an advertiser actually supposed to do?

For advertisers in the short term, the most effective privacy best practice is to foster transparency and user empowerment. Individuals want to make informed choices about how they are “behaviorally tracked,” which means advertisers’ data collection practices should be clearly disclosed and opt-out processes made simple. And, where there are legal red lines — for example, the Children’s Online Privacy Protection Act (COPPA) prohibitions against obtaining data from children under age 13 — establish compliance procedures now.

Taking a longer-term view, perhaps the purest form of self-regulation is innovation: the creation of a new technical and methodological paradigm to correct flaws that currently exist. The FTC’s 2012 “Privacy Report” states that privacy protections should be factored into the architecture of advertising technologies — advertisers should adopt a “Privacy by Design ethos.” Sen. Rockefeller’s bill also contains a similar clue, listing among the “Factors” to be considered by the FTC in promulgating standards and rules “ . . . [w]hether and how information can be collected and used on an anonymous basis . . .”

Simply put, de-linking data points from their individual human sources is a conceptual premise conducive to addressing industry-wide privacy challenges.  As if on cue, a few ad tech companies are developing offerings that transition the industry from traditional “behavioral targeting” methodologies to data-driven predictive advertising.

By way of comparison, “behavioral” advertising involves capturing data about a user’s web behaviors and then, based on that individualized behavioral data, inferring which types of ads will cause the user to engage. Conversely, predictive digital advertising involves ingesting data from large pools of individual digital ad impressions and then, through data aggregation and modeling, identifying anonymous characteristics (e.g., phone type, age, gender, location) likely to yield the highest engagement with a given ad type. Appropriate ads are then directed to the optimal non-personally identifiable digital ad impressions.
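The aggregation-and-modeling step described above can be sketched in a few lines of Python. This is a hypothetical illustration only — the trait names, ad categories, and sample data are invented for the example and do not reflect any vendor’s actual system. The key point is that the model keys on anonymous trait segments, never on a user identity:

```python
# Hypothetical sketch of predictive (aggregate) ad matching.
# Impressions carry only non-personally-identifiable traits; no user IDs exist.
from collections import defaultdict

# Each record: (anonymous traits, ad category, engaged?)
impressions = [
    ({"device": "iPhone", "region": "East", "gender": "M"}, "auto", True),
    ({"device": "iPhone", "region": "East", "gender": "M"}, "auto", True),
    ({"device": "Android", "region": "West", "gender": "F"}, "auto", False),
    ({"device": "Android", "region": "West", "gender": "F"}, "fashion", True),
]

def engagement_rates(impressions, ad_category):
    """Aggregate engagement rate per anonymous trait segment for one ad type."""
    totals, hits = defaultdict(int), defaultdict(int)
    for traits, category, engaged in impressions:
        if category != ad_category:
            continue
        segment = tuple(sorted(traits.items()))  # segment key, not a person
        totals[segment] += 1
        hits[segment] += engaged
    return {seg: hits[seg] / totals[seg] for seg in totals}

def best_segment(impressions, ad_category):
    """Return the trait segment statistically most likely to engage."""
    rates = engagement_rates(impressions, ad_category)
    return max(rates, key=rates.get) if rates else None
```

Morty’s individual click survives only as one anonymous row in `impressions`; the output is a trend about a segment (“iPhone users, East, male”), not a dossier on Morty.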

Unlike behavioral targeting, no specific user is pursued or “targeted” for delivery of an ad. For example, Morty from Bernardsville, NJ isn’t affected by the fact that his prior engagement with an automobile ad provided one thousandth of one percent of the data fuel for a predictive model teaching that German automobile ads are most likely to engage males on the East Coast using an iPhone 5, especially on rainy days before 10 pm. The use of predictive modeling, which by its nature presupposes statistically significant groupings, cleanses any temporary (and mostly theoretical) invasion of privacy.

If, upon my entering a bookstore, an employee furtively trailed me with a notepad, jotting down my apparent interests so that the store could later send me “useful” ads, it would represent the first step down a path of my never again leaving the house. But if the bookstore simply gathered statistics about middle-aged males in NY who purchased biographies, it would seem more like an unintrusive exercise in capitalism.

Behavioral targeting relies upon assumptions derived from the Internet behaviors of individuals, which can be proxies for private human thought. The notion that an advertiser may be lurking, armed with insights about an individual’s Web browsing behaviors, can be creepy, although the level of creepiness perhaps depends on whether the user has a “reasonable expectation of privacy” in a given undertaking, a concept central to well-settled privacy law.

Twitter, for example, recently announced its cross-device retargeting capabilities, leveraging users’ desktop cookies to target their mobile devices as well, through account-based device correlation. Tweets are public communications, so targeting based on such communications may be low on the creepy meter.

But the issue is more complicated for Google and Facebook, whose users may have a reasonable expectation of privacy in their communications and Web activity.  Predictive advertising is quite different.  Like census data, it is statistical and anonymous, its fruits being trends, tendencies and propensities of entire subsets of the population.

Until laws are passed or advertisers adopt this or another corrective paradigm, their self-regulatory initiatives must focus on being transparent and empowering users. It’s possible for advertisers to make advertisements useful to users without compromising basic privacy expectations.

Indeed, not all intelligence-driven advertising involves the hot pursuit of specific individuals across the Web. The predictive advertisers already know this.