Studying a Twitter ecosystem one user at a time

If you’ve been following my (roughly) monthly posts on Jason Falls’ blog you know that I’ve taken this tack: On his blog I cover the key concepts of a particular web analytics approach, then provide additional support for that idea here.

A recent example is from two months ago. I posted about the use of Brownie Charts as a way to report Content Interest Index. I posted a parallel piece here on another use of the technique (Using Brownie Charts to Measure Bounce Rates). You could say this blog has become my laboratory: Results of preliminary experiments are described here, while the “real” story is broken on Jason’s blog. Tomorrow will be a little different.

Tomorrow, on Jason’s blog, I’ll be posting on someone else’s innovation. It is a review of an extraordinary book: Hashtag Analytics. I’m a huge fan of its author, Kevin Hillstrom, and over the years I’ve spent way too many hours creating Excel-driven models in order to replicate and fully understand his findings.

I’ll be doing that again, this time in support of Kevin’s approach to monitoring Twitter communities. Check back at this tag (hashtag-analytics) to read updates on my “lab work.” I’ll be reporting over the next several weeks.

When A Hashtag Community Member Is “Removed”

You may want to check Kevin’s blog as well — especially later this week, when Kevin reports on the future vitality of the hashtag community #measure. He posted about it last week. Now he plans to theoretically whack an active member. Here’s an excerpt from his post, where he invites readers to suggest whom to “remove”:

In every e-commerce company, somebody is responsible for forecasting sales for the next twelve months, by day. So it makes logical sense that any community manager would want to know what the future of his/her community is, right? This is something you don’t find in any of the popular Twitter-based analytics tools. This is my focus. This is what I love doing, it’s completely actionable, and it’s an area of analysis not being explored!

Next week [the week starting January 24 — that’s today!], we’ll do something neat — we’ll remove one important user from the community, and we’ll see if the absence of the individual harms or helps the future trajectory of the community. If you are an active participant in the #measure community, please send me a user_id that you’d like to see removed in the forecast … I’ll run an example for the individual who gets the most votes.

And in two weeks, we’ll compare the #measure community to the #analytics community … competing communities doing similar work … which community is forecast to have a stronger future?

It’s a fun stunt / modeling experiment with real-world implications. It should serve as a proof of sorts of the predictive power of his Hashtag Digital Profiles and the statistical work behind them. More relevant to online community managers, it should illustrate how important it is to show your participants “love,” lest they never return.

What To Expect Here

I will be applying my own Hashtag Analytics to a different online group — one that has the advantage of weekly meetings. It’s a fairly new group, so the rules may not fully apply (Does an acorn sprout follow the same natural laws of growth as a full-grown tree?). To ensure I don’t jinx my test or influence the community — in a far more direct way than Heisenberg was referring to — its identity will remain unknown until I’ve gathered and analyzed a critical mass of data.

Do stop back.

January 25, 2011 Update:

Here are two related links I didn’t have yesterday. The first is Kevin’s post where he removes that member from the #measure hashtag community. The second is my review of his book today on Jason Falls’ blog:

  1. Hashtag Analytics: Removing a Member of the Community
  2. Lessons from the Twitter Lover Guru

How to resolve those infuriating analytics discrepancies

Yesterday I was conducting a “Web Analytics Forensics” session with a new client. They posed a common question: the monthly click reports they received from their ad-buy vendors were off by 10 percent, sometimes more, compared to their own Google Analytics metrics. The differences veered all over the road: some months the vendor reports seemed to overstate traffic, other months the clicks seemed understated. When I responded, I was reminded of the reassurances a friend of mine gives. He’s a pediatrician.

My friend Paul once told me that the bane of pediatricians everywhere is the late-night call from parents worried about their child’s fever. He doesn’t mind being awakened (well, not much), but he has trouble fully reassuring parents of this fact: Fevers are normal, even healthy. If a child didn’t run a fever every so often, that’s when he’d be alarmed!

Don’t take Paul’s word for it. Here is a post in the New York Times from a couple of days ago on this very subject.

Client concerns about analytics discrepancies are my own profession’s “fever fears.” They can be a distraction from deeper problems. (The doctor in the post mentioned a mom whose child had a fever and abdominal pains. He said his primary concern was the abdominal symptom, but Mom kept steering things back to the fever!)

So here is the answer I promised in the headline: for media-buy discrepancies, don’t bother trying to resolve them!

Unless you think you are being overcharged for the traffic you’re buying from vendors, in the form of ad clicks or affiliate links, rest assured. I’d only be worried if their numbers were identical to yours. No two systems measure web traffic in precisely the same way. They are all uniquely flawed.
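To make the “healthy fever” idea concrete, here is a minimal Python sketch of a triage check you might run on vendor reports. All of the monthly figures are invented, and the 25-percent alarm threshold is an arbitrary assumption; only an enormous gap (possible overbilling) or a perfect match (too good to be true) deserves a second look.

```python
# Hypothetical monthly figures: vendor-reported ad clicks vs. the
# sessions recorded in your own analytics. All numbers are invented.
vendor_clicks = {"Jan": 12400, "Feb": 9800, "Mar": 11050}
analytics_sessions = {"Jan": 11100, "Feb": 10300, "Mar": 10020}

def discrepancy_pct(vendor, own):
    """Signed percentage difference, relative to your own count."""
    return (vendor - own) / own * 100

for month in vendor_clicks:
    pct = discrepancy_pct(vendor_clicks[month], analytics_sessions[month])
    if abs(pct) > 25:          # assumed threshold: possible overbilling
        status = "investigate"
    elif pct == 0:             # identical numbers are the real red flag
        status = "suspiciously exact"
    else:
        status = "normal fever"
    print(f"{month}: {pct:+.1f}% ({status})")
```

The point of the sketch is the triage logic, not the threshold: a modest, wandering gap is the expected state of affairs.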

So instead, focus on what you’re doing with this traffic after it arrives. Are visitors finding what they came for? Are they returning in healthy numbers? And most importantly …

Are they converting?

Usually when I’m called in to consult, the answer to all of these questions appears to be no. Focus your attention, and your boss’s, on these issues. They are the only path to online marketing success.
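Those three health checks can be sketched as simple ratios over visit data. Every field name and number below is hypothetical; the single-page visit is used only as a rough proxy for “didn’t find what they came for.”

```python
# Invented per-visit records; real analytics exports will differ.
visits = [
    {"visitor": "a", "pages": 1, "returning": False, "converted": False},
    {"visitor": "b", "pages": 5, "returning": True,  "converted": True},
    {"visitor": "c", "pages": 3, "returning": False, "converted": False},
    {"visitor": "d", "pages": 4, "returning": True,  "converted": True},
]

total = len(visits)
bounce_rate = sum(v["pages"] == 1 for v in visits) / total    # found nothing?
return_rate = sum(v["returning"] for v in visits) / total     # coming back?
conversion_rate = sum(v["converted"] for v in visits) / total # converting?

print(f"bounce {bounce_rate:.0%}, return {return_rate:.0%}, "
      f"conversion {conversion_rate:.0%}")
```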

Finding B2B leads from your web logs

Occasionally I share links to blog posts of others in my industry. Some things are too good to keep to myself. Here’s a perfect example, from LunaMetrics:

We have a customer who considers the SEO we do for her to be one of her “sales channels” and we get ranked along with her other channels. She sends us reports when a lead comes in and when a lead is closed. The other day, I saw that she closed one that was worth not quite half a million dollars. (!! that was my reaction, too.) So I wrote her and said, how awesome. To which she replied,

“Started with google analytics. Saw that they spent some time on the site… sicked Jane on a cold calling mission… after a bunch of calls she found the engineer at the company who was interested in the product. I flew out.. presented… sold and they put out a public bid. Our company is the low bidder and need to send a sample next week for review then release of contract.”

So in case you are wondering, she was talking about the [Google Analytics] Network Locations report, which she mines daily for sales leads.

It’s hard to believe all of this was accomplished by reviewing a typically overlooked report in a totally free analytics package. Read the rest of their post, and check out this earlier post on how to exploit the fact that many large organizations will self-identify in this report instead of resolving to their ISP’s name.
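The filtering idea (keep network names that look like companies, drop those that resolve to a consumer ISP) can be sketched in a few lines. The keyword list, report rows, visit counts, and company names below are all invented for illustration; a real workflow would start from an export of the network report.

```python
# Keywords that suggest a consumer ISP rather than a company network.
# This list is an illustrative assumption, not a complete one.
ISP_KEYWORDS = {"comcast", "verizon", "at&t", "charter", "cox",
                "centurylink", "internet", "broadband", "telecom"}

# Invented (network name, visit count) rows from a network report.
report_rows = [
    ("comcast cable communications", 42),
    ("acme widget corporation", 7),    # hypothetical company name
    ("verizon business", 15),
    ("globex engineering inc", 3),     # hypothetical company name
]

def looks_like_company(network_name):
    """True if none of the ISP keywords appear in the network name."""
    name = network_name.lower()
    return not any(kw in name for kw in ISP_KEYWORDS)

leads = [(name, visits) for name, visits in report_rows
         if looks_like_company(name)]
for name, visits in leads:
    print(f"possible lead: {name} ({visits} visits)")
```

In practice the keyword list would grow with each review pass, and the surviving rows become the daily cold-call candidates the LunaMetrics client described.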