What monkeys can teach us about offers and pricing

Earlier this month, research that I had come across a year ago, in The Economist, received additional attention in Seed. This study of the economic behavior of capuchin monkeys suggests that the human response to various pricing strategies has been in our DNA for a very long time.

When these monkeys were trained to use special shiny disks as money (which could be exchanged for pieces of their favorite fruit), they tended to behave with this cash in exactly the same ways as us humans. In fact, looking only at the data, you would be hard-pressed to differentiate a human consumer from one of these monkeys.

The research sheds light on behavior that marketers have puzzled over, and exploited, for generations. These include:

Why are “premium” test offers so much more likely to out-pull non-premium packages in direct response, even when the price of the offer covers the cost of the premium?
Answer: We all love getting a free “bonus” with our purchase.
Why are gambling games with some of the worst odds, such as lottery tickets and slot machines, also among the most popular?
Answer: They give the player small rewards more frequently, and keep our losses incrementally small.
Why are bonds more popular than stocks, in spite of stocks historically performing better over the long haul?
Answer: We are loss-averse, and would rather guard what we have than take short term risks for long term gains.

What do I mean by loss-averse? Human experiments in game theory have repeatedly shown that, given two sets of trades — one where (for instance) we lose half of our stake every third trade, and another where our stake doubles every third trade — we tend to choose the second set.

Even when the payoffs are altered significantly so that the first set of trades comes out ahead over the long run, we still favor the occasional free prize over the occasional loss. It’s simply human nature. Now we know the same rules apply to capuchin monkeys. Go figure.
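The loss-aversion setup above is easier to see with concrete numbers. Here is a minimal Python sketch with purely illustrative payoffs (the dollar amounts are my own assumptions, not figures from the research), showing a schedule where the occasional-loss rule actually earns more in the long run than the occasional-prize rule:

```python
def total_payoff(per_trade_win, every_third, n_trades=30):
    """Sum the payoffs over n_trades, where every third trade
    pays `every_third` instead of the usual `per_trade_win`."""
    total = 0
    for i in range(1, n_trades + 1):
        total += every_third if i % 3 == 0 else per_trade_win
    return total

# Rule 1: win $90 per trade, but lose $50 (half of a $100 stake) every third trade.
# Rule 2: win just $5 per trade, but pocket a $100 "free prize" every third trade.
rule_1 = total_payoff(per_trade_win=90, every_third=-50)  # 20*90 - 10*50 = 1300
rule_2 = total_payoff(per_trade_win=5, every_third=100)   # 20*5 + 10*100 = 1100

print(rule_1, rule_2)  # 1300 1100
```

Even though rule 1 ends up $200 ahead over thirty trades, loss aversion predicts that many of us (and, apparently, many capuchins) would still choose rule 2, because it never shows us a loss.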

Parenthetically, there is one other way that these monkeys seem to be behaving a lot like humans. Last year I read an account of this study in The New York Times, which reported that the researchers witnessed what was “probably the first observed exchange of money for sex in the history of monkeykind.” Keith Chen, the Yale economist behind the study, said that he noticed the exchange out of the corner of his eye. Although he wanted to remain skeptical and assume the trade was coincidental, he conceded that “The monkey who was paid for sex immediately traded the token in for a grape.”

When is an email click-through not a click-through? Think “unsubscribe”

When is an e-mail click-through not a click-through? When they’re telling you to kiss off!

It’s hard to believe it’s been nearly a year since I had lunch with my friend and long-time career doppelganger, Melinda Krueger, and she told me about her latest email metrics discovery: a way to account for the click-throughs that people register from your emails when they are, in fact, clicking through to unsubscribe.

She described it, and it made perfect sense. In many cases Melinda’s formula would take otherwise meaningless data and make it tell us something. Specifically, it measures the power of a specific offer or message to cause a segment of your email audience to decide that enough is enough.

She was thinking of calling it the DI, the Disaffection Index. Personally, I thought something a little more dramatic was in order for a metric that could enter the email lexicon. Because it measures a subscriber’s very last click-through with you, I suggested the LCI — the Last Click Index.

She thought otherwise, and DI it remained. Do read this article, and the other articles and advice that Melinda provides as MediaPost’s “Email Diva.”
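Melinda’s exact formula isn’t spelled out in this post, but a metric like the DI could plausibly be computed as the share of an email’s click-throughs that landed on the unsubscribe link. Here is a hypothetical sketch (the definition below is my assumption for illustration, not her published formula):

```python
def disaffection_index(unsubscribe_clicks, total_clicks):
    """Hypothetical DI: the fraction of all click-throughs on an
    email that were clicks on the unsubscribe link. Higher means
    the offer drove more readers to decide enough is enough."""
    if total_clicks == 0:
        return 0.0
    return unsubscribe_clicks / total_clicks

# Two hypothetical campaigns: the second offer clearly wore out its welcome.
print(disaffection_index(12, 480))  # 0.025
print(disaffection_index(95, 500))  # 0.19
```

Tracked this way, an otherwise healthy-looking click-through rate can be unmasked as a segment of your list telling you to kiss off.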

Crunching the numbers can expose myths

A recent article in the New York Times Magazine’s Freakonomics column, and one of my favorite books of the year, both remind me that a careful examination of data can dispel long-held myths. Neither is directly related to a particular marketing challenge. But they both inspire me to continue to goad my clients into thinking beyond the obvious. We can seize a strong competitive advantage by assuming nothing and testing our premises whenever possible.

The Freakonomics article is ostensibly about soccer and an odd correlation between player excellence and the month a player was born. In analyzing data on the birth months of some of Europe’s best soccer players, researchers found that far more of them than you would expect were born in the first three months of the year. When they looked deeper, they realized a logical explanation: children born in those months were exposed to more months of coaching in their schools — more repetition, more chances to excel.

It suggests that the Two P’s of practice and passion — as opposed to simply having “raw talent” — matter far more in achieving excellence than is commonly believed. Thus the title of the Freakonomics article: “A Star Is Made.”

If you know me, you know I care little about sports. But after reading Moneyball by Michael Lewis, I was so inspired I bought three copies*. One to keep, one to pass around to co-workers, and a third to give to my father as a gift. The message of the Freakonomics article was that stars are made, not born. Similarly, the message of this book, about the unlikely, data-driven success strategy of the Oakland Athletics baseball team, is: “A winning baseball team is made, not bought.”

Read the book, and marvel at how Billy Beane, the general manager, refused to accept the groupthink of baseball scouts and the status quo. He wouldn’t listen when they told him how to identify promising players for his under-financed, under-performing team.

It’s a great read, and another reminder that looking at the data instead of listening to the way things have always been done can pay huge dividends.

*Thank you Bret Stasiak, my boss from my BVK/respond360 days, for letting me know about this wonderful book!

Internal search data is free, quantitative usability testing, if you use it

Even if I’ve never met you or visited your web site, I can diagnose with a fair amount of certainty what many users say about it. Whether you realize it or not, they don’t particularly enjoy visiting your site.

That’s because most people use web sites only out of necessity. And your web site really has only one responsibility to these people: To give them the information they value. Period.

Ideally this trade of “effort for information” should be short and sweet. No visitors to your site want to feel like they’re on a scavenger hunt. But that’s exactly what it often feels like, and it pisses them off. Thus, your site’s low conversion rates and high abandon rates. How did I know about those? They’re about as predictable as inhaling and exhaling.

So how do you take some of the frustration out of using your web site? Simple. Fix your site’s confusing navigation and its improperly labeled and organized content.

And I suggest you start with the single easiest and best source for learning what’s missing on your site: Namely, data from your internal search.

Think about it. If you have an internal search engine operating right now, the people who find your site the most frustrating are often typing out their frustration in that little text box. The sound of user dissatisfaction (dissatisfaction with your navigation, dissatisfaction with your content) is right there … loud and unequivocal. But it’s got to be captured and measured or this gold mine of information is lost.

Okay, here’s a shameless plug: I and my team at ec-connection build this system into many of our clients’ web specifications. By tabulating the search phrases that users type in, we get to see what’s frustrating them, or at the very least, what they want to see on the site that they’re not finding. With this valuable, free quantitative research, we can fix our clients’ navigation and content problems. And watch the searches, and the user pain they suggest, fall off.
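The tabulation step itself is simple. Here is a minimal sketch, assuming you can export raw query strings from your site-search log (the log entries below are invented for illustration):

```python
from collections import Counter

def top_search_pain_points(queries, n=10):
    """Tabulate internal-search phrases to surface what visitors
    can't find through the navigation. `queries` is an iterable
    of raw search strings pulled from the site-search log."""
    normalized = (q.strip().lower() for q in queries if q.strip())
    return Counter(normalized).most_common(n)

# Hypothetical sample of logged queries
log = ["Store Hours", "store hours", "returns policy",
       "store hours ", "contact phone number", "returns policy"]

print(top_search_pain_points(log, n=3))
# [('store hours', 3), ('returns policy', 2), ('contact phone number', 1)]
```

Even this crude normalize-and-count pass will show, week after week, which content your navigation is failing to surface.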

Many customers who made the same types of phone calls as you also bombed The World Trade Center

I’m not ordinarily a defender of Bush Administration actions concerning its response to The World Trade Center attacks, but the database analysis proponent in me feels something should be clarified in the minds of most Americans. According to a recent NEWSWEEK poll, “53 percent of Americans think the NSA’s surveillance program ‘goes too far in invading people’s privacy.’” This of course is the taking of cell phone and other telephone records and mining them for clues to possible terrorists.

The outcry, I think, is in part because when we think of phone surveillance we think of wire-tapping (or, in the case of cell phones, wireless-tapping). However, if I understand this situation correctly, the NSA used this vast database of phone call numbers (both of originators and recipients), along with call dates, times and lengths, to look for suspicious patterns that were similar to those found in known terrorists’ phone behaviors.

I know, I know. If you analyze for this type of activity, you can also find patterns in the activities of your political enemies. Imagine the blackmail potential! It could shut down Washington! (Hmmm … could the blackmail have already begun?)

But let’s assume for the moment that we could somehow shine some light on the activity, thus preventing such abuses. Is this data mining an invasion of privacy? I suspect it’s closer to the surveillance we’re all accustomed to — and appreciative of — in our quiet suburban neighborhoods.

Probable cause is the term used to justify a police officer pulling over a citizen for questioning. I would equate this database research to looking for probable cause. So how is the research done? It uses the same technique that marketers use to predict whether a consumer will like this product versus that one.

For instance, you buy a CD on Amazon, and the web site immediately says, “Other of our customers who bought that CD also purchased these.” Then it lists three or four other, often surprisingly unrelated, artists, along with their latest CDs. If you have a big enough music collection, and predictable enough tastes, you’re surprised that you already love the work of one or two of those other artists. Amazing!

Amazon, and other large marketers using this profiling, let you know in advance that they looked into their database and found those correlations (through the statement, “Other customers of ours …”). What they don’t tell you is that usually, those data relationships are — on their own — too obscure or unrelated to be recognized in any way other than by using a sophisticated statistical regression analysis.
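Amazon’s production system is far more sophisticated than anything shown here (the published technique is item-to-item collaborative filtering), but the core “customers who bought that also bought these” idea can be sketched as a simple co-occurrence count. The album titles and purchase histories below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(purchase_histories):
    """Count, for each pair of items, how many customers bought both.
    A toy stand-in for 'customers who bought X also bought Y'."""
    counts = defaultdict(int)
    for items in purchase_histories:
        # sorted() gives each pair a canonical order, so (A, B) == (B, A)
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical purchase histories, one set per customer
histories = [
    {"Kind of Blue", "A Love Supreme"},
    {"Kind of Blue", "A Love Supreme", "Harvest"},
    {"Kind of Blue", "Harvest"},
]
counts = cooccurrence(histories)
print(counts[("A Love Supreme", "Kind of Blue")])  # 2 customers bought both
```

Real recommenders go on to normalize these raw counts (otherwise best-sellers co-occur with everything), which is where the statistical machinery the post mentions comes in.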

The same goes for this NSA action. I think a lot of Americans are concerned because they imagine an all-seeing computer examining every single phone call they make or receive. I also suspect they are angry because they now have yet another privacy vulnerability to worry about, along with identity theft, spyware, etc.

But I suspect the profiling done by the NSA is more along the lines of the Amazon example. The predictive model takes into consideration thousands of weak correlations — possible coincidences that are significant only because, added together, they match the behavior of known terrorists. (I would say convicted terrorists, but good ole Mr. Moussaoui is about it, and that’s an awfully small sample to model against! Known domestic terrorists would include the guys who died in their planes on 9/11, who made plenty of phone calls before they did.)

So, if that is the case, is this intrusive? That depends.

Is a police officer driving down your quiet residential street invading your neighborhood’s privacy while looking for probable cause to investigate a possible crime? This officer may not stop if one suspicious fact is noted about someone in your neighborhood. Maybe even two or three aren’t sufficient for probable cause. Each on its own may be too subtle — too similar to the behavior of those not breaking the law. But if enough suspicious facts are concentrated around the behavior of, let’s say, that guy parked outside your door, then the officer will conclude the correlation is too great. The behavior and evidence surrounding that guy show too many similarities to those of convicted criminals. His behavior, taken as a whole, is too close to that of a burglar, let’s say.

The brain of that cop isn’t going to retain much information the next day, or even the next hour, about the non-suspicious behaviors he observed. In a similar way, I don’t think the NSA’s computers will be able to do much else but identify the behavior patterns they are programmed to sniff out.
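The “many weak signals, one threshold” idea the officer analogy describes can be sketched as a simple additive score. Every signal name, weight, and threshold below is hypothetical, purely for illustration of the mechanism:

```python
def suspicion_score(observed_signals, weights):
    """Sum the weights of many weak signals. No single signal is
    decisive; only the combined score can cross the threshold."""
    return sum(weights.get(s, 0.0) for s in observed_signals)

# Hypothetical signals and weights -- not from any real system.
weights = {"parked_for_hours": 0.3, "no_local_ties": 0.2,
           "repeated_short_calls": 0.25, "cash_only_phone": 0.35}
THRESHOLD = 0.8

quiet_neighbor = {"parked_for_hours"}
pattern_match = {"parked_for_hours", "repeated_short_calls", "cash_only_phone"}

print(suspicion_score(quiet_neighbor, weights) >= THRESHOLD)  # False
print(suspicion_score(pattern_match, weights) >= THRESHOLD)   # True
```

One innocuous behavior scores well below the threshold and is effectively forgotten; only a concentration of matching signals flags anyone, which is the argument the post is making about the NSA’s pattern matching.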

Which brings me back to my original observation. How in the world did I become a defender of Bush? The answer is the NSA, under his watch, found a non-intrusive way to comb this country for possible criminal activity. I only pray that there will now be enough judicial (and judicious) oversight to ensure that the profiling being done is for real enemies of the state, and not enemies of the administration and its incumbents.