Around a million years ago, I led the user experience and technology area of a project that attempted to quantify audience sentiment and acceptance for digital marketing and advertising practices. The project was funded by Intel, run by AOL, and used sample campaigns from Hilton, Nissan, FedEx, and others through a partnership with IPG and Omnicom. The Nielsen organization led the research, which paid people to participate. Various living room sets were built inside CompUSA stores, and the test subjects were paid between $10 and $50 depending on their life status (a doctor might get paid more than a student).
Several ad models were sandwiched into typical web video viewing sessions that included long and short pre-roll ads (before the video such as YouTube), post-roll ads (after the video), commercial breaks (like Hulu) as well as other more abstract models.
Months and months of physical research and painstakingly compiled metrics, normalized against demographics, psychographics, and geographics, yielded a clear and indisputable result: people hate ads. Wha?
Ok, now we know. Now we know quantifiably, not just intuitively. The interesting part was a more nuanced area of the exercise that looked at targeting, specifically providing information to users that was of interest, but in the form of soft advertising. What we learned was the basis for Google's billions of dollars of valuation thanks to their AdSense technology. If users are looking for something, and you can return a reasonable answer or resource that addresses their desires, the experience becomes informational, appropriate and well received—flipping the signal-to-noise ratio or ham-to-spam ratio into a workable relationship with the consumer.
So here's my suggestion. As a consumer, I'd be more than willing to give advertisers a lot of information about who I am and what interests me if they promise to transform their spam into ham, or even bacon.
Here goes: I don't have kids, so no more diaper ads! I do have a dog, so send me some dog-treat coupons, ok? I'm a techie; I drive a Prius, I'm turned on by LED light bulbs, and I'm an Apple user. I don't eat ice cream, I do drink mineral water, and I'm fond of Thai. Vampires or zombies? Definitely vampires. And, *squirm,* I'm a Brony - TMI?
Hey, all you wary consumers, give it up! Be willing to volunteer some information. You don't have to admit to smoking cigarettes or being a boozer (your grocery loyalty card data has already been sold to your health insurance company), but you might stand to get some relief without ever seeing that HEAD-ON commercial again!
What we need is a Netflix prize, but for marketing standards.
Instead of invasive violations of our basic privacy, how about we create an opt-in large-scale predictive modeling algorithm based on a voluntary and updatable set of data?
The Netflix Prize was an open competition that began on October 2, 2006, to award $1,000,000 for the best collaborative filtering algorithm to predict user ratings for films based on previous ratings. This kind of rating system could serve as the basis for recognizing patterns in consumer supplied data about products and services.
The Netflix challenge ran over a three-year period in which 40,000 teams from 186 countries made submissions. On September 21, 2009, the grand prize was given to a seven-man multinational group named BellKor's Pragmatic Chaos, a team of Bob Bell, Martin Chabbert, Michael Jahrer, Yehuda Koren, Martin Piotte, Andreas Töscher, and Chris Volinsky, that bested Netflix's own algorithm for predicting ratings by 10.06 percent. The winners represented a collaboration of multiple teams that mashed up a variety of mathematical models to produce the winning algorithm, rather than relying on a single approach. In other words, an algorithm to wrangle other algorithms.
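That "algorithm to wrangle other algorithms" idea is, at its core, a weighted blend of many predictors. Here's a minimal sketch of the blending concept in Python, with invented toy models and weights (not BellKor's actual method, which combined hundreds of far more sophisticated predictors):

```python
# Toy sketch of blending several rating predictors into one ensemble,
# in the spirit of the Netflix Prize winners (hypothetical models).

def global_mean_model(user, movie):
    # Baseline: predict the overall average rating for everything.
    return 3.6

def user_bias_model(user, movie):
    # Pretend some users rate a bit above or below average.
    biases = {"alice": 0.4, "bob": -0.2}
    return 3.6 + biases.get(user, 0.0)

def blend(models, weights, user, movie):
    """Weighted average of each model's predicted rating."""
    total = sum(w * m(user, movie) for m, w in zip(models, weights))
    return total / sum(weights)

prediction = blend(
    [global_mean_model, user_bias_model],
    [0.3, 0.7],
    "alice", "The Matrix",
)
```

The winning teams tuned the weights themselves with yet another model trained on held-out data, which is what made it an ensemble of ensembles.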
As a consumer, I want to be in control of the messaging and monetizing that clutters up my life. I want more ways to participate, less noise, and information that is meaningful to me.
The Internet news-sphere was burned up recently by the Buycott app.
This is a wonderful, empowering tool that lets consumers choose how to support or punish companies based on whether those companies align with their political or ideological beliefs.
Buycott describes the experience as "When you use Buycott to scan a product, it will look up the product, determine what brand it belongs to, and figure out what company owns that brand (and who owns that company, ad infinitum). It will then cross-check the product owners against the companies and brands included in the campaigns you've joined, in order to tell you if the scanned product conflicts with one of your campaign commitments."
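That "ad infinitum" ownership lookup can be pictured as walking a brand's ownership chain to its root and cross-checking every company along the way against the campaigns a user has joined. A toy sketch with invented brands and data (not Buycott's actual implementation):

```python
# Toy sketch of Buycott-style ownership resolution (invented data):
# follow brand -> parent links "ad infinitum", then cross-check the
# whole chain against the user's joined campaigns.

PARENT = {                      # brand or company -> owning company
    "TastyCrisps": "SnackCo",
    "SnackCo": "MegaFoods",
}

def ownership_chain(brand):
    """Return the brand plus every company above it in the chain."""
    chain = [brand]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def conflicts(brand, boycotted):
    """Companies in the chain that appear in joined campaigns."""
    return [c for c in ownership_chain(brand) if c in boycotted]

# Scanning "TastyCrisps" while boycotting its ultimate parent:
hits = conflicts("TastyCrisps", {"MegaFoods"})
```

The hard part in practice isn't the walk; it's keeping the ownership graph accurate as companies merge, spin off, and rebrand.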
Imagine this kind of technology tied to your Amazon account, your browser profiles, or the other unrelenting sources targeting your mind and wallet.
The way to gain power and control over our lives as marketing targets is to take a proactive and participatory role in the conversations and implementations.
Let's build a freely distributable "Like" and "Unlike" button technology that we as consumers can click for any product, service, or company we encounter, adding or blocking those resources in our "acceptable opt-in" marketing profiles. This profile would be the basis of an opt-in, large-scale predictive modeling algorithm associated with our consumer profile data. Noise canceled. Everyone's happy, and the bacon has moved closer.
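A minimal sketch of what such an opt-in profile might look like as a data structure (hypothetical design, with invented topics; a real system would feed this into the predictive model):

```python
# Minimal sketch of an opt-in Like/Unlike marketing profile
# (hypothetical data model, not an existing API).

class ConsumerProfile:
    def __init__(self):
        self.liked = set()
        self.unliked = set()

    def like(self, topic):
        # Liking a topic clears any earlier block on it.
        self.unliked.discard(topic)
        self.liked.add(topic)

    def unlike(self, topic):
        self.liked.discard(topic)
        self.unliked.add(topic)

    def wants(self, ad_topic):
        """Only explicitly opted-in topics get through; all else is noise."""
        return ad_topic in self.liked

me = ConsumerProfile()
me.like("dog treats")
me.unlike("diapers")
allowed = [t for t in ["diapers", "dog treats", "ice cream"] if me.wants(t)]
```

Note the default: anything never liked is filtered out, which is the noise-canceling stance the proposal takes.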
All I can say is, you've got to give to receive, so "give it up people!" There's ham at the end of this maze.