This week I presented my C2 training course, Web Design & Content that Delivers ROI, at Proven Direct. One technique I discussed was the uncanny ability of a type of online graphic to attract attention (as measured by eye scan heatmaps) and move people to action. A lot of ads have used this technique, either intentionally or accidentally.
The technique: Have a person in your ad look directly at the user.
One example I gave was about a coffee station at a university with an “honor system” money collection jar. When the pricing sheet on the wall included the eyes of a person looking back out at the coffee drinkers, the money collected in the jar more than doubled, compared to weeks when the photo used was of a field of flowers. The photo could include any human, as long as the gaze was straight out.
What’s more, apparently the gaze does not necessarily have to be convincingly human — just human-like. The graphic you see to the right depicts an application of this fascinating technique, described in New Scientist magazine. Here’s the account, as I described it to my class:
The researchers split the group into two. Half made their choices undisturbed at a computer screen, while the others were faced with a photo of Kismet — ostensibly not part of the experiment.
The players who gazed at the cute robot gave 30 per cent more to the pot than the others. (Investigators Terry) Burnham and (Brian) Hare believe that at some subconscious level they were aware of being watched. Being seen to be generous might mean an increased chance of receiving gifts in future or less chance of punishment …
Burnham believes that even though the parts of our brain that carry out decision-making know that the robot image is just that, Kismet’s eyes trigger something more deep-seated. We can manipulate altruistic behaviour with a pair of fake eyeballs because ancient parts of our brain fail to recognise them as fake, he says.
Keep this in mind whenever you are designing an ad or interface that you don’t want overlooked.
If you’re in the Milwaukee or Madison areas, please be sure to attend my second course, presented by C2: Web Analytics That Clients Love. It will be held in Madison on April 27, and Milwaukee on May 11. Either of these presentations is just $69, but the Milwaukee course continues its $59 Early-Bird Pricing for another 12 days.
Robert Scoble just posted this YouTube video of a demonstration of Microsoft’s Surface multi-touch tabletop monitor. Shot at last week’s Gnomedex, this video serves as a sneak preview of what this technology can do in a social setting.
Last month I posted about the new Bokode barcode. One application I described to friends was its use on trade show floors. The barcodes would be worn on presenter name tags, and reveal much about the wearers to any conference attendee wielding a smart-phone camera (the link to the Bokode post is below).
The MS Surface offers a different solution to the same challenge. It’s one of making the most of a networking opportunity. The tabletop displays the conference’s social graph, which can be manipulated and organized by anyone who steps forward and plops down their name tag.
Making the most of conferences
National conferences demand efficiency from their attendees. The cost in time and money is considerable, so many of us look at them as a competition to beat our personal best: How many relevant contacts can we make? How many friendships and business ties can we deepen? It’s all an effort to be efficient, and not fly home feeling we’ve overspent on a rare chance to make valuable face-to-face contacts.
This strong networking benefit is what’s convinced me that Microsoft is on to something. I suspect the main challenge with their tables will be the over-crowding that takes place around them. Conferences providing this technology will be hard-pressed to have enough tables to go around.
Every year the TED conference introduces new and provocative ideas, many of which soon become commonplace. Two years ago, Jeff Han’s demonstration of multi-touch screens presaged the Microsoft Surface, and the first mass-produced multi-touch cell phone: the iPhone. These multi-touch screens are many things, but unencumbered is not an adjective that comes to mind.
Even the iPhone requires you to hold a cell phone, which is a barrier for a lot of real-world applications. MIT Media Lab’s Pattie Maes explained the challenge at the latest TED conference. She said that, for instance, “If you are in the toilet tissue aisle of your supermarket, you don’t take out your cell phone, open a browser and go to a web site when you want to know which is the most ecologically sound toilet tissue to buy.” She and Pranav Mistry, also of MIT’s lab, have devised a potential solution to accessing this type of rich information in the real world. They call this sort of computer interface their Sixth Sense. Here is the video of the computer demo.
The demonstration had the audience on their feet, cheering.
Here are three things I love about this concept, as crude as it currently is:
It’s cheap, light and small
It can very quickly become cheaper, lighter and smaller
With video recognition, the need for colored finger-markers will be unnecessary (so will logging in, since it will recognize its owner’s unique fingertips from anyone else’s)
Wearable computers have been talked about for decades, but this is the first user interface that is starting to make sense to me.
When Jeff Han’s concept of multi-touch computer interfaces was presented two years ago, my blog post was effusive about the possibilities. Someday we might be able to work standing up — more prone to both creativity and collaboration (please excuse the obscure pun). The biggest barrier to this future was that darned wall-sized screen. With the Sixth Sense device, any white wall becomes a screen — and an inviting whiteboard for one or more knowledge workers to play in.
Do you agree that this crazy contraption has a lot of possibilities?
A recent post on Experience Matters offered a great example of how you can design a user experience around an existing need. Their example is YouTube:
Consumer Need: The ability to share themselves with potentially millions of others through sight and sound.
Augment Behavior to include Brand: In the case of YouTube … perhaps the best use of this network was not for a brand to spread its own content, but help consumers share their own. After all, the initial consumer need identified above was the desire for consumers to share themselves with the masses. Wouldn’t it make more sense to empower them in continuing this behavior rather than competing against them? If successful, this takes the process full circle and makes the brand-infused behavior become part of the original consumer need.
Why is this behavior occurring?: YouTube made video distribution easier (on a mass scale) than ever before. It didn’t require hosting a server or website, or being isolated to sending your large files across flaky channels. From a content consumer perspective, YouTube and sites like it offer the depth and variety that professional producers simply cannot match. The quality (for now) of the content is obviously not comparable but consumers are willing to look past it because the content is original, very controllable, and often more personal.
Consumer Behavior: Millions of people are uploading their thoughts, talents, and parodies onto a video sharing network. Even more millions of people are watching those videos (the majority of which are user generated, not professional).
They call this reverse engineering. But really, it’s simply finding a need and filling it in a unique and viral way. Major online successes are the clearest examples to describe this process because their imprint is so deep and the applications are so new and different from the status quo. Here’s another example:
The Birth of Amazon
Jeff Bezos set out to make an e-commerce site. Period. He reverse-engineered from a fundamental desire (to buy something in your underwear, so to speak), not from a desire to buy books. It turns out that books simply met the right criteria for ease of warehousing and shipping. Also, books were searchable in a vast and accurate — but, before Amazon, difficult to access — database.
The success of both of these examples is obvious. Len Kendall of Experience Matters defines success this way: “When a brand can improve or change a consumer’s behavior so it still satisfies their initial needs.”
He goes on to say that a really big success is when a brand can radically change consumer behavior in a way that makes it virtually inseparable from the initial need.
The Killer App For Your Brand
What is the fundamental need that your audience wants to fill? How can you satisfy that need with your brand and some unique technology?
As an experiment, I tried this reverse engineering exercise with a brand that seems the very antithesis of high-tech. Next week I’ll reveal the brand and the solution.
Web usability expert Jakob Nielsen is admittedly “bullish” on the mobile web. But his recent post bemoans how lacking most web sites are when viewed on mobile devices. For most sites, his prescription isn’t a web design overhaul. Instead, he recommends creating a separate version specifically for the lowest common denominator mobile browser.
To be more precise, he recommends that only if your site is frequented by cell phones and smart phones should you make this investment. “Not all sites need mobile versions,” says Nielsen. “According to a diary study we conducted with users in 6 countries, people use their phones for a fairly narrow range of activities.
“So, because many mainstream websites won’t see a lot of mobile users, they should just adapt their basic design to avoid the worst pitfalls for those few mobile users they’ll get.”
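Nielsen’s advice to serve a separate, stripped-down mobile version is usually implemented with some form of server-side device detection. Here is a minimal sketch of one common approach, user-agent sniffing; the token list and the m.example.com host are hypothetical examples, and production sites would use a maintained device-detection library instead:

```python
# A crude sketch of routing phones to a separate mobile version via
# user-agent sniffing. The token list and hostnames are hypothetical.

MOBILE_TOKENS = ("iphone", "android", "blackberry", "windows ce", "opera mini")

def is_mobile(user_agent):
    """True if the user-agent string looks like a phone browser."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in MOBILE_TOKENS)

def url_to_serve(user_agent, path="/"):
    """Send phones to a slimmed-down mobile host; serve the full site otherwise."""
    if is_mobile(user_agent):
        return "http://m.example.com" + path
    return "http://www.example.com" + path

print(url_to_serve("Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_2 like Mac OS X)"))
print(url_to_serve("Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"))
```

The design choice matches Nielsen’s point: rather than overhauling the main site, the desktop design stays untouched and only detected phones are routed to the lowest-common-denominator version.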
Narrow Range of Activities
So, you may wonder: What is this “narrow range” of activities? The following list is a good summary. These happen to be the “behaviors users engage in when using mobile devices,” as described in the upcoming Usability Week 2009 Conference(s), presented by the Nielsen Norman Group and taught by Raluca Budiu in full-day tutorials.
The course description lists these activities. If your site has users doing any of these activities, seriously consider designing or upgrading a mobile version for your site:
Navigation to websites on mobile devices
Browsing for news, entertainment, sports
Finding specific information (weather, movie times, etc.)
Transactions (such as online banking and other financial operations)
Using maps and location information
Integrating e-mail and contact information with browsing and fact-finding
Content management (ringtones, photos, etc.)
Monitoring and communication
Finding information about a product
Comparing online and in-store costs
Video, music, and games
Accessing, choosing, and downloading content
Accessing nutrition and health information
Far from a narrow range, that seems like a lot of functionality! In Nielsen’s Utopia, we’d all be doing most of our work and online recreation from our phones. It seems more like science fiction than a glimpse of things to come.
So why exactly is Nielsen so bullish on mobile? Here’s an excerpt of his reasoning in today’s post:
The iPhone is certainly not perfect, and competitors could easily make better mobile devices. By “easily” I don’t mean over a weekend. I simply mean that it’s possible to do it given a strong focus on user experience and user-centered design [UCD]; iPhone leaves a lot of ground for improvement. So far, however, iPhone competitors have been disappointing because they haven’t been created with UCD.
He goes on to write that, whereas mobile browsers may improve over time, it is the user experience designed into mobile web sites that will lead the way in the short-term. He explains, “There is immense potential for advances in mobile usability as more website, intranet, and enterprise software designers build mobile versions and revamp their current designs for usability.
“The mainstream Web’s state in 1998 actually provides a hopeful precedent: just a year later, in 1999, interest in Web usability began to explode as Internet managers realized how chasing ‘cool’ rather than usable design yielded poor business results.”
Nielsen concludes by stating that he hopes history repeats itself. As we marketing technologists struggle to deliver more value with every customer contact (in today’s economy more than ever!), I see this as likely to happen.
Heatmaps to observe eye movements of online shoppers have been around for a while. They’re quite helpful. But in a perfect world marketers would get direct consumer intelligence. They’d see maps of consumers’ “emotional flow,” displayed dynamically as shopping decisions are taking place.
Brace yourself. We’re getting our wish.
Armed with fMRI imagery, emotional heatmaps (my term) are being charted and analyzed. They’re yielding fascinating insights into why we choose the purchases we do.
Take the recent work of William Hedgcock and Akshay R. Rao (in this PDF report). Hedgcock is assistant professor of marketing at the University of Iowa’s College of Business. Rao is director of the Institute for Research in Marketing at the Department of Marketing & Logistics Management at the University of Minnesota. This duo has recently published findings on why some shopping decisions are so difficult to make — and how adding a “decoy” option can get consumers “unstuck” and back in the buying mood.
Overall, they are using functional magnetic resonance imaging — or fMRI — to “offer an assessment of whether and how neuroscientific techniques might be employed in the study of consumer choice in particular and consumer behavior in general.” Yeah, right. Here’s the English translation …
Relieving Aristotle’s Anxiety
This is what they did:
Subjects were hooked up to fMRI machines and presented a choice between two purchases. The choice was so close in desirability that a mental stalemate occurred. The consumer chose neither. (As the researchers noted, Aristotle first discussed this tendency toward stalemate by describing a person who was equally thirsty and hungry, and equidistant from food and drink. In this famous thought experiment, Aristotle’s subject remained in place until he died.)
A third choice — one less desirable than the first two — was presented in the mix. This was their decoy choice.
fMRI readings showed that the mental discomfort generated by the stalemate went away. Once this anxiety level was lowered, a selection between the two “dominant” options usually followed.
Their conclusion suggested that the addition of an item, simply to hasten a decision, not only makes sense when you tally purchases, but is also validated by watching real-time fMRI heatmaps.
For e-marketers, a greater takeaway is this: The day is on its way when we can validate our assumptions about major types of “shopping cart conflicts,” and find automated ways to avoid or resolve them.
A newly-launched iPhone application allows Google searches through voice alone. This brings us closer to when non-computing types can work and play in a Web 2.0 world. Imagine: If this future comes to pass, productivity increases in many industries would be huge.
More significant to us marketers, large swaths of the workforce will no longer consider the computing world to be hostile — or at the very least, impenetrable. As I speculated two years ago, many workers simply will not make portable computing a habit until it is easy enough to do through speech alone.
You might consider this Part II of a two-part post. Last week I reported on Powerset, Microsoft’s acquisition in semantic search. Now, here is an exciting stride in the voice-recognition half of the hands-free computing equation.
Both Yahoo and Microsoft already offer voice services for cellphones. The Microsoft Tellme service returns information in specific categories like directions, maps and movies. Yahoo’s oneSearch with Voice is more flexible but does not appear to be as accurate as Google’s offering. The Google system is far from perfect, and it can return queries that appear as gibberish. Google executives declined to estimate how often the service gets it right, but they said they believed it was easily accurate enough to be useful to people who wanted to avoid tapping out their queries on the iPhone’s touch-screen keyboard.
The service can be used to get restaurant recommendations and driving directions, look up contacts in the iPhone’s address book or just settle arguments in bars. The query “What is the best pizza restaurant in Noe Valley?” returns a list of three restaurants in that San Francisco neighborhood, each with starred reviews from Google users and links to click for phone numbers and directions.
The emphasis above is mine. Here’s a demo of the new Google app for the iPhone:
This is going to get very interesting, very fast.
As Raj Reddy, an artificial intelligence researcher at Carnegie Mellon University, reported in the NY Times piece: “Whatever [Google] introduces now, it will greatly increase in accuracy in three or six months.”
The semantic search problem, when solved, will help computers understand what people are saying based on their wording and a phrase’s context. On the other hand, voice recognition requires something at least as daunting: penetrating regional accents. The most visible flaw in this first full week of the iPhone app’s release is that it is baffled by British accents.
Joshua Porter recently shared an excellent presentation on leveraging cognitive biases in the design of social networking sites. I’ve followed Joshua’s blog for years, and his thoughts on this subject reinforce my loyalty. You should know that I’ll vehemently defend the time I spend reading his stuff, even if you try to persuade me that I could be reading other, similar blogs. This die-hard loyalty is itself an example of something called a cognitive bias. It shows the Ownership Effect, one of the biases — or heuristics — that secretly influence our decisions and actions.
Heuristics are mental shortcuts. Unknowingly, our brain processes information through filters. These filters add more weight to some facts and less to others. Should real facts be scarce, heuristics can sometimes fill in the blanks.
Are cognitive biases good or bad?
The virtue of gut-based judgments is a subject of heated debate. Some think rational decisions are the best. They point to addiction and racial prejudice as two consequences of unchecked cognitive biases. Others, most famously Malcolm Gladwell, feel that heuristics have more value than they get credit for. His book Blink is full of examples of the gut overruling the brain and proving itself more accurate.
Regardless of your take on the phenomenon, as marketers we can only benefit by getting to know cognitive biases, especially in good web interface design. Lucky us: In the process we can learn a little more of how our own minds make judgments.
But be forewarned. It’s not pretty.
A bias that is particularly interesting is Loss Aversion. If I approached a random sample of people and offered them money based on the outcome of a coin toss, the wagers that they accept and refuse are far from rational. If I said I’d pay $1.25 if they win the coin toss, and the cost of the bet is a dollar, a person’s rational brain would do the math and say “Go for it.” In a fair bet, the win far outweighs the loss.
However, most people pass on this offer. They refuse a wager with a clearly positive expected value: on average, a gain of 12.5 cents for every dollar risked.
In fact, when the stakes are changed and other amounts are tested, it turns out that the majority of participants will hold onto their dollars until the reward increases to $2.00!
This is not rational. Even when haggling is eliminated (defined as holding out until you get the best possible offer), the majority of people walk away from making a likely profit.
Why do they do this? Because the value of not losing is twice as great as the value of winning. In a 50-50 wager, the value of losing $1.00 is equal to that of gaining $2.00. An absurd 100% ROI is the usual tipping point.
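The arithmetic above can be sketched as a tiny model. This is illustrative only: the 2x weighting of losses follows the post’s claim that “the value of not losing is twice as great as the value of winning,” and real loss-aversion coefficients vary from person to person.

```python
# Minimal sketch of loss aversion in a 50/50 coin-toss wager.
# The 2.0 weighting on losses is an assumption taken from the post;
# empirical estimates of the coefficient vary.

LOSS_AVERSION = 2.0  # losses feel about twice as large as equal gains

def subjective_value(payoff):
    """Felt value of a payoff: losses are weighted more heavily than gains."""
    return payoff if payoff >= 0 else LOSS_AVERSION * payoff

def accepts_bet(win_amount, stake=1.00, p_win=0.5):
    """Would a loss-averse bettor take a coin toss: win `win_amount` or lose `stake`?"""
    felt_ev = p_win * subjective_value(win_amount) + (1 - p_win) * subjective_value(-stake)
    return felt_ev >= 0  # indifference at zero marks the tipping point

print(accepts_bet(1.25))  # False: +$1.25 feels smaller than a doubled -$1.00
print(accepts_bet(2.00))  # True: the $2.00 tipping point the post describes
```

Run with different `win_amount` values and the model reproduces the behavior above: the bet is refused at every reward below $2.00, despite the positive expected value.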
Could loss aversion be inherited from our cave-dwelling days?
Where does this reflex to avoid loss come from? Work by researcher Keith Chen with capuchin monkeys suggests that loss aversion may be innate in humans — and indeed, in other more primitive primates. In his research (purchase of white paper required — otherwise, here’s my blog entry on the topic), he sees exactly the same loss aversion in monkeys that are taught to use tokens in exchange for food as others have observed in humans playing identical games. And I mean exactly, as in: If you just looked at the numbers, you couldn’t tell human response from monkey response.
It really makes you think.
Joshua Porter gave the example of using loss aversion to get people to register on a site, with offers of “Never lose another password!” or “Don’t miss out on opportunities to save!” If the potential loss is presented as twice as great as the cost of preventing it, you will likely generate a simple conversion.
Paradoxically, the fear of loss is also seen in gambling. As soon as you sit down at the poker table, your more primal, reptile brain wants to ensure others do not get your chips. Even if you have a weak hand, you might try to bluff your way to victory. Even in the face of almost certain loss. To casinos, the cognitive bias of loss aversion is definitely something good.
Swoopo goads bidders in a chase for merchandise
I’ve observed this phenomenon in a truly scary “auction site.” Swoopo.com offers merchandise that you can bid on. But these aren’t true eBay-like auctions, because your bid is lost whether you win the prize or not. Everything from software to flat-screen TVs is presented with starting prices in the teens. Then participants throw their cash at the merchandise, hoping to be the last person to place their money down before the timer ends the contest.
The site’s designers have cleverly avoided gambling laws through a technicality. Their site is a “game of skill.” It is arguable that it takes skill to be the last to bid, and thus take home a prize.
But this contest is more like a carnival game of topple-the-milk-bottles. Every bid bumps up the cost of the item, but also adds seconds to the countdown clock. Yes, it’s a game of skill. But as with those who pit their wits against a carnival game, far more walk away penniless than victorious. And since there are no sweepstakes laws requiring full disclosure, you enter the game unsure of your odds of winning — even with practice.
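To see why the house nearly always wins, consider a toy model of the mechanic described above. All numbers here (bid fee, price increment, starting price) are hypothetical, chosen purely for illustration; Swoopo’s actual fee schedule may differ.

```python
# Toy model of a penny-auction: every bid pays a nonrefundable fee
# and bumps the final sale price. All dollar figures are hypothetical.

def site_revenue(num_bids, bid_fee=0.60, price_increment=0.15, start_price=12.00):
    """Total revenue to the auction site for one item."""
    final_price = start_price + num_bids * price_increment
    return num_bids * bid_fee + final_price

# A flat-screen TV that attracts 1,500 bids brings in
# 1,500 * $0.60 in fees plus a ($12.00 + 1,500 * $0.15) winning price,
# while every bidder but one walks away with nothing except spent fees.
print(f"${site_revenue(1500):,.2f}")  # prints "$1,137.00"
```

Under these made-up numbers the site collects over a thousand dollars on an item listed “in the teens,” which is exactly the asymmetry that the sunk bid fees, and loss aversion, conceal from the bidders.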
So what force brings people back to bid again and again? And what causes bidders to pursue an item so vigorously in the face of disappointing odds?
One reason certainly has to be the thrill of the chase — the determination not to let others get the item you feel rightly entitled to claim. After all, you’ve already committed real dollars to trying to win the item! This is classic loss aversion.
Go to the site and see for yourself. Watch the bidding, which has a level of supposed transparency. You can see your opponents bid against you in real time. To watch is both thrilling and deeply chilling.
Spending time on Swoopo.com is like watching the id in pitched battle with the superego. On this site, due to excellent interface design, the id is the only sure winner.
We in the web design business often talk about what users see above the fold. The assumption is that people may not be compelled enough to browse down. But there are certain situations where the most suitable “browse” direction is sideways and not down. TheHorizontalWay.com is a collection of sites that turn our preconceptions on their ears. Of particular note among the collection is Interview Magazine, which uses the orientation for both novelty and to avert long load times.
The only downside of this approach that I can think of — aside from being slightly disorienting — is the mobile edition of a site would be difficult to maintain, since mobile pages are more traditionally vertical.
Can you think of other potential applications of a horizontal design?
As anyone who is reading the headlines will agree, this has been a harrowing week. Here’s something to put a smile on your face, first passed along to me by David Berkowitz. Watch this YouTube ad for a Wii game all the way through. You’ll agree its creators really did think outside the box.
Like David, I had to run it several times. I laughed in amazement each time.
As I post this, I’m still on vacation in the Faroe Islands, where I’ve attended the wedding of a dear friend’s daughter. It was a traditional ceremony, blending ancient and new traditions. For instance, ancient Faroese and Danish songs were sung during the wedding reception, which also featured PowerPoint slideshows of photos and Quicktime videos depicting the bachelor and bachelorette parties. Digital cameras were everywhere, of course.
I’ve thought for many months how digital technology has changed the way we experience the world. We like to think that we craft our tools to serve us, but the limitations of these tools cannot help but change us as well, in the same way that our human eyes see a different spectrum of light than, say, the puffins I photographed the other day on the steep Faroese cliffs.
One example of this profound change, electricity, is quite obvious. The other, digital photography, is more subtle.
Electric Light: The Other Midnight Sun
Faroese weddings go on for two solid days. The first day, which included what Americans would call the reception, had three distinct meals (the formal dinner, the serving of cakes, and an early-morning soup course). The first meal was only just ending at 11 PM, which didn’t seem so late, since the sun was only just behind the horizon. What’s more, being so close to the Arctic Circle, the sun didn’t stay away for long. As it began to reemerge, at 4 AM, we were still dancing to a band that played exclusively American — and British Invasion — rock songs.
I was told that the wedding dancing of a few hundred years ago would have included a traditional Faroese dance that takes at least an hour to complete (danced, as it is, to a song with 300+ verses). Back then oil lamplight would have illuminated the steps. This certainly would have dampened some of the more boisterous aspects of the event!
So much about us has changed because of technology’s “electric sun.”
In Maury Klein’s The Power Makers: Steam, Electricity, and the Men Who Invented Modern America, I recently read of the pivotal day in September of 1882, when Thomas Edison, the man known as the “Wizard of Menlo Park,” illuminated the first 400 electric lights installed in New York City.
What struck me about his description is the muted reaction of New York Times reporters. Keep in mind that daily news reporting is driven by extremely tight press deadlines. Yet before the electric light, there was much that could be forgiven. A reporter could more easily file stories developed over weeks — and in the process, get more sleep.
Edison’s “lighting of New York” included 27 electric lamps in the Times editorial rooms. And so, you may wonder, what was the account of this sudden conquest over darkness from the reporters of “The Grey Lady?” Well, the column on Page 8 (yes, 8!) of the next day’s paper said it was, “In every way satisfactory.”
Klein made the obvious point that the paper, “never fully grasped its significance.” Only hindsight could show these reporters that their careers were to be changed forever. And also their family life. The electric light would extend both wedding festivities and work responsibilities — allowing for a day that need never fade into darkness.
Life In A Digital Viewfinder
In my travels these two weeks I’ve visited some extraordinary families (and I have one more to meet, in Belgium, before returning to the States). On the walls of homes in Milan, Berlin, Copenhagen — and now Torshavn, Faroe Islands — I’ve admired photos of relatives that sometimes go back to the very first silver plate photographs of the mid-1800s. These photos are sometimes right next to the latest generation’s photos. Having observed at the same time some very ancient European traditions, attitudes and mannerisms, I have to again posit that the medium has changed us as surely as we have changed the medium.
It was two years ago, when I saw this pose depicted in a still from a movie (illustrated below), that I first realized that the portability and disposability of digital camera technology actually created a new type of romantic embrace.
Compare the stock-still (and emotion-free) poses of couples and families in the tintypes of antiquity with this commonplace example of PDA (public display of affection), and you have to wonder if our cameras own us as much as we do them.
Traditional values — superseding romantic love with love of family, and narcissism with selflessness — may have been made quaint as much by our evolving tools as our evolving beliefs.
Am I onto something? Or have I simply been eating too much wedding day halibut salad and whale blubber?
Marketing Technology Musings and Tips by Jeff Larche