Reading Virtual Minds Volume II: Experience and Expectation Now Available on Amazon

First, we appreciate everyone’s patience while we got this volume out.
And now, from Holly Buchanan‘s Foreword to the book…

After inhaling Reading Virtual Minds Volume I, I was like an antsy 3-year-old waiting for Reading Virtual Minds Volume II. It did not disappoint.
I love the way Joseph Carrabis thinks. He has a rare ability to pair broad, rich theory with actionable specifics. Unlike many technical writers, he has a unique voice that is both approachable and humorous. It makes for an enjoyable read.
But what’s the main reason you should read Reading Virtual Minds Volume II: Experience and Expectation? Because where most companies and designers fail is on the expectation front.

Humans are designed as expectation engines.

This is, perhaps, the most important sentence in this book. One of the main points Joseph makes in this volume is this: understand your audience’s whys and you’ll design near-perfect whats.
Design failures come from getting the whys wrong. That can lead to failures on the experience side, but also on the expectation side. And that can be the bigger problem.

Expectation is a top-down process: higher-level information informs lower-level processing. Experience is a bottom-up process: sensory information goes up into higher-level processing for evaluation. Humans are designed as expectation engines. Top-down connections outnumber bottom-up connections by about 10:1.

Why is this so important?

In language, more than anywhere else, we see or hear what we expect, not necessarily what is said or written. Across all cultures and languages, neurophysiologists and psychologists estimate that as much as 85% of what we experience is what we expect to experience, not necessarily what is real or ‘environmentally available’.


When people expect A and get B, they go through a few moments of fugue. External reality is not syncing up with internal reality, and the mind and brain will, if allowed, burn themselves out trying to make the two mesh.

Get your consumer/visitor/user experience AND expectation right, get their why right, and you’ll be exponentially more successful.

Here are just a few of the goodies you’ll find in this book:

  • Privacy vs. value exchange and when to ask for what information. Joseph has some actionable specifics on this that will surprise you.
  • Why we design for false attractors rather than the real problem.
  • The importance of understanding convincer strategies. Convincer strategies are the internal processes people go through in order to convince themselves they should or should not do something.
  • Companies spend a lot of time trying to convince consumers to trust them. But what may be even more important is understanding how to let consumers know you trust them. This book has ideas on how to show your customers/users/visitors, “I believe in you”.
  • How often our own experiences influence our designs. Unless you’re able to throw all your experience out and let the user’s experience in, get out of the usability and design business.
  • How to allow your visitors easy Anonymous-Expressive Identity and make them yours forever.
  • Regarding new material, designs, and interfaces: the importance of making sure your suggestions provide a clear path to the past (thus being risk averse while providing marketable innovation).

As always, Reading Virtual Minds provides specific actionable ideas. But it will also make you think and approach your work in a new way. And I think that’s the best reason to treat yourself to this book and the inner workings of NextStage and Joseph Carrabis.

(and we never argue with Holly Buchanan…)


Reading Virtual Minds Volume I: Science and History, 4th edition

It’s with great pleasure and a little pride that we announce Reading Virtual Minds Volume I: Science and History, 4th EDITION.

That “4th EDITION” part is important. We know lots of people are waiting for Reading Virtual Minds Volume II: Experience and Expectation and it’s next in the queue.

But until then…

Reading Virtual Minds Volume I: Science and History, 4th EDITION is about 100 pages longer than the previous editions and about US$10 cheaper. Why? Because Reading Virtual Minds Volume II: Experience and Expectation is next in the queue.

Some Notes About This Book

I’m actually writing Reading Virtual Minds Volume II: Experience and Expectation right now. In the process of doing that, we realized we needed to add an index to this book. We also wanted to make a full-color ebook version available to NextStage Members (it’s a download on the Member welcome page; and if you’re not already a member, what are you waiting for?).

In the process of making a full color version, we realized we’d misplaced some of the original slides and, of course, the charting software had changed since we originally published this volume (same information, different charting system). Also Susan and Jennifer “The Editress” Day wanted the images standardized as much as possible.

We included an Appendix B – Proofs (starting on page 187) for the curious and updated Appendix C – Further Readings (starting on page 236). We migrated a blog used for reference purposes, so there may be more or fewer reference sources, and we modified some sections with more recent information.

So this edition has a few more pages and a few different pages. It may have an extra quote or two floating around.

You also need to know that Reading Virtual Minds Volume I: Science and History is a “Let’s explore the possibilities” book, not a “How to do it” book. As such, it deals with how NextStage did it (not to mention things that happened along the way). It does not explain how you can do it. This book’s purpose is to open a new territory to you and give you some basic tools for exploration.

There are no magic bullets, quick fixes, simple demonstrations, et cetera, that will turn you into jedis, gurus, kings, queens, samurai, rock stars, mavens, heroes, thought leaders, so on and so forth.

How to Do It starts with Volume II: Experience and Expectation and continues through future volumes in this series. We’ve included a Volume II: Experience and Expectation preview with a How to Do It example on page 302 so you can take a peek if that’s your interest.

That noted, I’m quite sure you won’t get the full benefit of future volumes without reading this one: without it, you won’t understand the territory you’re exploring in those future volumes.

That’s Reading Virtual Minds Volume I: Science and History, 4th EDITION. It’s so good and so good for you! Buy a copy or two today!


NextStage Evolution Research Brief – Image v Text Use in Menu Systems

Basis: A one-year study of twelve (12) international websites (none in Asia); M/F 63/37; ages 17-75; either in college or college-educated; middle to upper income class in all countries studied

Objective: To determine if people were more decisive in their navigation when an image or text was used as a primary navigation motif (menu).

Method: Four separate functions were evaluated

  1. Presentation Format Preference (a simple A/B test)
  2. Sensory to Δt Mapping (time-to-target study)
  3. Teleology (how long did they remain active after acting)
  4. Time Normalization (determines what brain functions are active during navigation)

Results: Key take-aways for this research include

  • Visual (graphic- or image-based) menus cause a 40.5% increase in immediate clickthrough; site activity is sustained an additional 32%, site penetration increases by an additional 2.48 pages, and capture/closure/conversion increases 36%.
  • Although not tested with Asian audiences, it is doubtful this technique will work in ideographic language cultures.
  • The graphics/images used must be clear, distinct, and obvious iconographic metaphors for the items/concepts they open or link to. Example: images of a WalMart storefront, a price tag with the words “Best Price”, and people shopping drove visitors into buying behaviors more effectively than a simple shopping cart (too familiar as a “What have I already selected?” image) or the simple words “Store” and “Shop”.
  • Existing sites with text-based menu systems need to use both systems (at the obvious loss of screen real estate) to train existing visitors on the new iconography until image-based menu items are used more often than text-based menu items.

NextStage Evolution Research Brief – EU Audiences Adapt to and Integrate Site Redesigns Faster than US, GB and Oz Audiences

Basis: This publication concludes a two year study of visitor adaptation to and adoption of new technologies and site redesigns on similar product or purpose sites in the US, EU, GB and Australia. No Asian, South American or African sites were part of this study.

Objective: To determine if neuro-cognitive information biases exist in certain cultures and if so, is there benefit or detriment to those biases?

Method: Twenty sites (monthly visitor populations between 10k and 35k) were monitored in the USA, Italy, France, Germany, Great Britain and Australia. The sites included social platforms, ecommerce, news-aggregator, travel-destination and research-posting sites. Activity levels were monitored before, during and after design changes were instituted, as well as before, during and after new technologies (podcasts, vcasts, YouTube feeds, social tools) were placed on the sites.

In addition to activity levels a study was made of viral propagation vectors to determine if changes to the site promoted new influencers or demoted existing influencers.

Results:
  • Announced changes to the sites increased adoption and adaptation rates among all visitors (in some cases by as much as 65%)
    • Announced changes most greatly benefitted US, GB and Australian audiences with adaptation and adoption rates increasing 12.5% on average.
  • Site previews increased adoption and adaptation rates among all visitors
    • 77% of EU based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 35% of US based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 32.5% of Australian based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 27.5% of GB based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
  • EU audiences demonstrated the highest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 92.5% and 85% respectively.
  • Australian audiences demonstrated the lowest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 30% and 7.5% respectively.

Key take-aways for this research include

  • Travel destination sites should provide a good deal of lead up time to site changes.
    • This lead up time should include previews and announcements.
    • This is especially true for US audiences.
  • Sites introducing social tools should select, train and promote influencers from within the existing visitor community before the social tools are made public.
  • The introduction of social tools to news-aggregator sites recognizably slowed the adaptation and adoption rates of EU audiences.
  • US based audiences were most likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the least likely to abandon sites due to those changes.
  • Australian audiences were the least likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the most likely to abandon a site due to those changes.
  • EU based audiences were the most likely to visit several sites all serving the same purpose.
  • EU based audiences were the most likely to give a site “time to settle” during redesign and new technology implementation before returning to it on a regular basis.

The Unfulfilled Promise of Online Analytics, Part 3 – Determining the Human Cost

Knowledge will forever govern ignorance, and a people who mean to be their own governors, must arm themselves with the power knowledge gives. A popular government without popular information or the means of acquiring it, is but a prologue to a farce or a tragedy or perhaps both. – James Madison

There was never supposed to be a Part 3 to this arc (Ben Robison was correct in that). Part 1 established the challenge (and I note here that the extent of the response and the voices responding indicate that the defined challenge does exist and is recognized to exist) and Part 2 proposed some solution paths. That was supposed to be the end of it. I had fulfilled my promise to myself1 and nothing more (from my point of view) was required.

But many people contacted me asking for a Part 3. There were probably as many people asking for a Part 3 as I normally get total blog traffic. Obviously people felt or intuited that something was missing, that something I was unaware of had been left out.

But I never intended there to be a Part 3. What to cover? What would be its thematic center?

It was during one of these conversations that I remembered some of the First Principles (be prepared. “First Principles” will be echoed quite a bit in this post) in semiotics.2

According to semiotics, you must ask yourself three questions in a specific order to fully understand any situation3:

  1. What happened?
  2. What do I think happened?
  3. What happened to me?

More verbosely:

  1. Remove all emotionality, all belief, all you and detail what happened (think of quis, quid, quando, ubi, cur, quomodo – the six evidentiary questions applied to life).
  2. What do your personal beliefs, education, training, cultural origins, etc., add to what actually and unbiasedly happened?
  3. Finally, how did you respond — willingly or unwillingly, knowingly or unknowingly, with all of your history and experience — to what happened?

The power of this semioticism is that it forms an equation that is the basis of logical calculus, the calculus of consciousness4, modality engineering5 and a bunch of other fields. I use a simplified form of it in many of my presentations, A + B = C.6

Talking with one first reader, I realized that Part 1 was “What happened?” (the presentation of the research) and Part 2 was “What do I think happened?” (my interpretation of the research). What was left for part 37 was “What happened to me?”

And if you know anything about me, you know I intend to have fun finding out!

All Manner of People Tell Me All Manner of Things

The above is a line from Oliver's Travels (highly recommended viewing), something said by the Mr. Baxter character. Mr. Baxter is himself a mystery and — although his true nature is hinted at several times — it is not revealed until the last episode. There we are told about The Legend of Hakon and Magnus. In short, Mr. Baxter could be a good guy, a bad guy, or the individual directing the good or bad guy's actions. His role entirely depends on what side you are on yourself, a true Rashomon scenario. I found myself in something similar to Mr. Baxter's situation, as how people responded to my research, its publication and myself also depended greatly on what side people were on when they contacted me.

I was both dumbfounded and honored by the conversations Parts 1 and 2 generated. The number of people who picked up on or continued the thread on their own blogs (here, and alphabetically: Christopher Berry (and a note that Chris continues the conversation in A Response (The Unfulfilled Promise of Analytics 3)), Alec Cochrane, Stephane Hamel, Kevin Hillstrom, Daniel Markus, Jim Sterne, Shelby Thayer; and if I've forgotten someone, my apologies), twittered it onward, skyped and called me was… I could say unprecedented. Remind me to tell you about a psychology convention in the early 1990s (nothing to do with NextStage, just me being me, stating what is now recognized as common knowledge well before others decided it was common. Talk about unprecedented results. I had to be escorted out under guard. For those of you who know Dr. Geertz, his comment upon learning this was "I'm not surprised you'd have to be escorted out by guards. You have that subtle way about you…"8).

But to note the joy means to recognize the sorrow (as was done in Reading Virtual Minds Vol. 1: Science and History Chapter VI, “The Long Road Home”). While the majority of people honored me and a good number of people appreciated that I had done some useful research and donated something worth pondering, there were a few (just a few, honestly) who damned me.

The damning per se I don't mind. It's part of the territory. It was the manner and the persons involved that truly surprised me.

I was accused of possibly destroying a marriage (Susanism: If you think this is about you, it's not. We know a lot more people than just you), maligning certain individuals (usually by people who maligned other individuals during the research. I guess I wasn't maligning the correct individuals in their view), not demonstrating the proper respect to industry notables (same parenthetical comment as previous and you guessed it, another NextStage Principle), that I better post an apology to these same industry notables (two people wrote apologies in my name and strongly suggested that I publish them), …


Who gave me such power and authority to make or break people's lives? Certainly I didn't give it to myself, nor did I ask others to give it to me. And if anybody did give it to me without my knowing I gladly give it back. As I've said and written many times, I do research. When new data makes itself available and as required, I update my research. But until such new data comes in, the research stands.

What I really want to know is if, when the results of research are discomforting, the industry's standard and usual procedure is

  • to change either the research or the results so that people feel warm and fuzzy — hence have no impetus to act (according to one person at yesterday's NH WAW, "Don't measure what you can't change". An interesting statement that I disagree with: doing so means throwing out meteorology, astronomy, … Much of what has been measured historically without any ability to change it allowed us to create the technologies that would produce change in previously unchangeable systems.)
  • or let the discomfiting research stand — so that the challenge can be recognized and either acted upon or ignored.

It seems "change either the research or results" is the standard (or at least done when required), because while few asked that I rewrite research or results so that certain individuals appeared more favorably, the ones who did ask sure were some high-ranking industry folks.

Heaven forbid these folks who wanted different results publish them, or do complementary research that either validated or invalidated my results.

Wait a second. What am I thinking? Obviously it would be impossible for them to do research that validates mine.9

Of course, publishing research would also mean publishing their methodologies, models, analytic methods, … and the reasons that ain't gonna happen will be covered later in this post.

And if that is the standard and usual procedure — at least among those in the high ranks — then

  • congratulations to all the companies hiring high ranking consultants to make them feel good rather than solve real problems and
  • be prepared for those coming up through the ranks to learn this lesson when it is taught to them.

For the record, not much upsets me (ask Susan for a more honest opinion of that). The sheer stupidity of arguments that resort to emotionalism, or that are nothing more than attempts to protect personalities and positions, though… Them, they do offend me (can't wait to learn how our Sentiment Analysis tool reports this). And more about stupidity later in this post (let me know if you recognize Joseph's "I'm mad as hell and I'm not going to take it anymore" persona).

When the Stories Meet the Numbers (Statistics, Probability and Logic)

I originally surveyed about sixty people for Part 1. That number grew to about one hundred in Part 2 due to responses to Part 1. Currently I've had conversations (I'm counting phone calls, Skype chats and calls, email exchanges and face-to-face discussions at meetings I've attended as “conversations”) with a few hundred people about those posts.

I noticed something interesting (to me) about the conversations I was having. Lots of people made statements about statistics, probability and logic but were using these terms and their kin in ways that were unfamiliar to me. Especially when I started asking people what their confidence levels were regarding their reporting results.

I'll offer that search analysts (I'm including SEO and SEM in "search analysts") seem to have things much easier than web analysts do. "We were getting ten visits a day, changed our search terms/buy/imaging/engines/… and now we're getting twenty visits per day." Granted, that's a simplification, but it's the heart of search analytics — improving first the volume and second the quality of traffic to a site. Assuming {conversions::traffic-count} has standard variance, search analytics either produces or it doesn't, and it's obvious either way.
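The "it produces or it doesn't" point can be sketched numerically. This is a minimal illustration, not anything from the post: the visit and conversion counts are invented, and the two-proportion z-test is just one conventional way to check whether a change in {conversions::traffic-count} is bigger than ordinary variance.

```python
from math import sqrt

def conversion_lift(conv_a, visits_a, conv_b, visits_b):
    """Compare two {conversions::traffic-count} samples with a
    two-proportion z-test. |z| > 1.96 suggests the rate change is
    unlikely to be ordinary variance (95% confidence)."""
    rate_a = conv_a / visits_a
    rate_b = conv_b / visits_b
    # Pooled rate under the null hypothesis "nothing really changed"
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return rate_a, rate_b, (rate_b - rate_a) / se

# Invented numbers: a month at ~10 visits/day, then a month at ~20/day
rate_a, rate_b, z = conversion_lift(12, 300, 30, 600)
```

With these made-up numbers the traffic doubles but z lands well under 1.96, so the volume improved while the conversion rate stayed within ordinary variance: obvious either way, as the paragraph says.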

Web analytics, though… “The Official WAA Definition of Web Analytics” is

Web Analytics is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimizing Web usage.

The analytics organization I see most often cited, SEMPO, doesn't even attempt to define ("SEMPO is not a standards body…") or police ("…or a policing organization.") itself. It does offer search courses, but the goals of the SEMPO courses and the WAA-recognized courses are greatly different (an opinion, that, based on reading their syllabi as someone who has taught a variety of courses in a variety of disciplines at various educational levels in various educational settings).

There are twenty-one words in the official WAA definition and a philologist will tell you that at least ten require further definition.

Definitions that require definitions worry me. Semiotics and communication theory dictate that the first communication must be instructions on how to build a receiver. Therefore any stated definition that requires further definition is not providing instructions on how to be understood (no receiver can be built because there is no common signal, sign or symbol upon which to construct a receiver. If you've ever read my attempts at French, you know exactly what I mean10).

One of the statements made during the research for this arc was "[online] Analysts need to share the error margins, not the final analysis, of their tools." It expressed a sentiment shared, if not directly stated, by a majority of respondents, and it truly surprised me. It takes as a working model that any final analysis is going to be flawed regardless of the tools used; therefore, standardize on the error margins of the tools rather than on their outputs.

So…decisions should be made based on the least amount of error in a calculation, not on what is being calculated (does the math we're using make sense in this situation?), the inputs (basic fact checking; can we validate and verify the inputs?), or the outcome (does the result seem reasonable considering the inputs we gave it and the math we used)?

A kind of “That calculation says we're going to be screwed 100% but the error margin is only 3% while that other calculation says we're only going to be screwed 22% but the error margin is 10%.

Let's go with the first calculation. Lots less chances of getting it wrong there!”, ain't it?

More seriously, this is a fairly sophisticated mathematical view. Similar tools have similar mathematical signatures when used in similar ways. When a tool has an output of y with fixed input x in one run and y+n with that same fixed input x in another run but a consistent error margin in both runs, standardizing on the error margin e is a fairly good idea. It indicates there's more going on in the noise than you might think.11
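A toy sketch of that idea, with everything invented for illustration (the run values, the reported margin, and the half-margin rule of thumb are my assumptions, not the respondents'):

```python
from statistics import mean, stdev

# Invented repeated runs of one tool with the same fixed input x:
# the output drifts (y, y+n, ...) while the tool reports the same
# error margin on every run.
runs = [22.0, 25.5, 21.0, 26.5, 23.0]  # outputs, identical input
reported_margin = 3.0                   # margin the tool claims per run

spread = stdev(runs)  # how much the outputs actually wander

# Rule of thumb (my assumption): if run-to-run spread approaches the
# reported margin, the "noise" has structure worth investigating, and
# standardizing on the margin rather than on any single point
# estimate is the safer call.
noise_worth_investigating = spread >= reported_margin / 2
```

Here the outputs wander by roughly 2.3 around their mean against a claimed margin of 3.0, which is exactly the "more going on in the noise than you might think" situation.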

Of course, this means you better start investigating that noise darn quick.

My understanding of “statistics, probability and logic” was often at odds with what people were saying when they used those words. The differences were so profound (in some cases) that I asked follow up questions to determine where my misunderstandings were placed.

Serendipity doing its usual job in my life, over this fall-winter cycle I took on the task of relearning statistics12, partly so I could understand how online analysts were using statistics-based terms. As noted above, the differences between what I understood and how terms were being used and applied were so great that I questioned my understanding of the field and its applications.

And as to whither I wander, I offer a philologic-linguistic evidentiary trail for all who will follow. For those who just want to get where I'm going, click here.

Web Analytics is Hard

Of course it is. Anything that has no standards, no baselines, no consistent and accurate methods for comparison is going to be hard, because all milestones, targets and such will have to be arbitrarily set and will have no real meaning in an ongoing, "a = b" kind of way. Therefore Person A's results are actually just as valid as Person B's results, because both are really only opinion, and the HiPPOs rule the riverbank…

…until a common standard can be decided upon.

Web Analytics is Easy

Of course it is. Anything that applies principled logic, consistent definitions, repeatable methodologies that provide consistent results, … is going to be easy.

Online Analytics Is Whatever Someone Needs It to Be

Ah…of course it is.

And this is the truest statement of the three, for several reasons. Consider the statement "(something) is Hard".

It doesn't matter what that "(something)" is; it can be driving a car, riding a bike, watching TV, playing the oboe, composing poetry, doing online analytics, … . What that "(something)" is is immaterial, because the human psyche, when colloquial AmerEnglish is used, assigns greater cognitive resources to understanding "Hard" than it assigns to "Web Analytics". This resource allocation has nothing to do with whether "Web Analytics" is easier to understand than "Hard"; it has to do with what are called Preparation Sets13. The non-conscious essentially goes into overdrive determining how hard "Hard" is. It immediately throws out things like "iron", "stone" and "rock" because the sensory systems don't match (iron, stone and rock involve touch-based sensory systems; transitive expressions such as "(something) is hard" don't) and starts evaluating the most difficult {C,B/e,M}14 tasks in memory, most recent to most distant past, to determine if the individual using the term "Hard" is qualified to use the term as a surrogate for the person being told "(something) is Hard" (i.e., our non-conscious starts asking "Do they mean what I think they mean when they say 'Hard'?", "Do they know what 'Hard' is?", "What do they think 'Hard' means, anyway?", "Do they mean what I mean when I say 'Hard'?" and so on).15

What I will offer is what I've offered before: any discipline that defines success "on the fly" isn't a discipline at all (at least not as I understand "discipline"). Lacking evidentiary trails, definitions and numeric discipline, comparisons of outputs and outcomes degenerate to "I like this one better", regardless of reporting frame.

Teach Your Children Well

Where statements like “(something) is Hard” and “(something) is Easy” really make themselves known is when teaching occurs.

Let me give you an example. You have a fear of (pick something; let's go with spiders, because I love them and most people don't (only click on this link if you love spiders)). Phobias are learned behaviors. This means someone taught you to be afraid of spiders. It's doubtful someone set out some kind of educational curriculum with the goal of teaching you to fear spiders (barring Manchurian Candidate scenarios). It's much more likely that when you were a child, someone demonstrated their fear of spiders to you, probably either repeatedly or very dynamically, so you learned either osmotically or via imprinting. Children demonstrate their parents' behaviors in hysteresis patterns. This means that if you measured a parent's level of arachnophobia and assigned it a value of 10, chances are the child would demonstrate their arachnophobia at a level of 100 or so in a few years' time. Children who learn their parents' fears and anxieties do so without understanding any logical basis for those fears, only the demonstration of them. When there is no logic to temper the emotional content, hysteria results.

However, if a parent demonstrates a fear response and the ability to control it, to explain to the child that fear response's origin, etc., most often the child learns caution and not fear (not to mention that the parent usually learns to control their fear). The difference can be thought of as the difference between teaching a child to “Be careful” versus hysterically screaming “EEEEK!”

What's so fascinating about this is that it's also how we pass on our core, personality and identity beliefs whether we mean to or not (I cover this in detail in Reading Virtual Minds Volume I: Science and History). We can be teaching physics, soccer, piano, bread-baking, … It doesn't matter because all these activities will be vectors for our core, identity and personal beliefs and behaviors. If we are joyful people then we will teach others to be joyful and the vector for that lesson will be physics, soccer, piano, bread-baking, … And if we are miserable people? Then we will teach others to be miserable and to be so especially when they do physics, play soccer, the piano, bake bread, …

Thus if any teaching/training occurs intentionally or otherwise, the individual doing the training/teaching is going to de facto teach their internal philosophies and beliefs — both business and personal — as well as their methods and practices to their students. This can't be helped. It's how humans function. If the philosophy and belief is that things are hard, then that philosophy and belief will be taught de facto to the students. Likewise for the philosophy and belief that something is easy. There will be no choice.16

The point is we protect others from what we fear. Humans are born with precious few fears hard-wired into us (heights and loud noises are the two most cited. Heights because we're no longer well adapted to an arboreal existence and loud noises because predators tend to make them when they attack).

So the statement “(something) is hard” either means we fear “(something)” or we wish to protect others from having the difficulties we have when we do “(something)”, and if difficulties existed then the non-conscious mind is going to place a fear response around whatever “(something)” is to make sure we don't put ourselves into unnecessary difficulties yet again.

The statement “(something) is easy” generates the polarity of the above and I, dear reader, I am the neuro- and philo-linguist's nightmare because my training is simply that “(something) is”. My training is that both whatever exists and whatever state it exists in are mind of the observer17 dependent. Thus things simply are and our perceptions, experience and decisions make them hard, soft, easy, whatever, to us individually.

It's always all about you, isn't it?

More colloquially, whatever your perceptions of the world are, it's all you and precious little of anything else (a favorite quote along these lines is “What if life is fair and we get exactly what we deserve?” Ouch!).

The Trail Leads Here

There are lots of errors I can understand. A lack of knowledge, of mathematical rigor, of logic training, of problem solving skills, … These and a host of others I can appreciate. Especially in those junior to any given discipline.

But unprovable math, a lack of basic fact checking, outputs that have no meaning based on what's come before and (let's not forget) emotionalism? This really blew me away. Math can be taught, junior people who don't fact check can be trained, making sure units match can be taught and comes with experience, … but emotionalism?

I'll accept any of the above in junior players with the caveat that the first to go has got to be emotionalism.

But senior people failing any of these before offering something for publication? Then defending this lack of rigor with an emotional outburst? And when it happens more than once?

Talk about abandoning First Principles!

We don't need no stinking badges

First Principles? We don't need no stinking First Principles!

Challenge logic, challenge research, challenge findings, sure. Challenge a person if they challenge you, sometimes, maybe. I'll tolerate a lot, folks (ask Susan for confirmation), but I have a real challenge with such as these: arguing emotionally and telling me it's logic, arguments based on no facts at all… I'll accept, entertain and work with ignorance, arrogance, discomfiture, anxiety, joy, love, appreciation, anger, … quite a wide swath of human response.

But arguments such as these are, in my opinion, stupid.

There, I typed it.

Yet because such arguments were presented as such, I must recognize that in some camps doing web analytics means to heck with fact-checking, logic, … That it's acceptable to ignore truth and common practice and to base outcomes on what one needs them to be. I mean, when someone with title and prestige does it, the overt statement is that others should, will or already do it as well. Certainly people in the same company should or will do it. Whatever's lacking in the master's portfolio won't be found in the student's (in most cases).

Want to know why I stopped attending conferences? See the above.

Joseph, the Abominable Outsider


Stephane Hamel applauded me (I think) when he referenced me as an industry “outsider” in his A nod to Joseph Carrabis: The unfulfilled promise of online analytics. Others used the term to applesauce me. (I was flattered by both, actually.)

I had been wondering if it was worth my writing a little bit on elementary logic, probability theory, problem solving or some such. A previous draft of this post contained an explanation of elementary statistics and problem solving as it might be applied to online analytics. Now I really had to question such an effort. If the notables don't know how to apply these things…

Where the stories meet the numbers, there Understanding dwells

The power of logic, knowing problem solving methods, basic statistics, probability and so on is that they provide basic disciplines that prevent, or at least inhibit, mistakes such as those listed above. You have the tools and training to basically “…draw an XY axes on the paper, chart those numbers and the picture that results points you in the direction you need to go.” You can be emotional about your research and your findings, but you can't defend your research emotionally. The research and findings are either valid or they ain't.18

As for drawing XY axes, charting numbers and getting some direction… what can you do with such evidentiary information? Lots of things. Determine the relationships between the numbers and you can exploit their meanings.
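One way to sketch that “chart the numbers and let the picture point you” discipline: fit a least-squares line to the charted pairs and read the direction off the slope. Everything below — the data, the variable names — is invented for illustration, not taken from any real client.

```python
# A least-squares fit, sketched in plain Python: chart the (x, y) pairs,
# fit the line y = m*x + b, and let the slope point the direction.
# The visit/conversion numbers below are hypothetical.

def least_squares(xs, ys):
    """Return (slope, intercept) of the best-fit line y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

visits = [100, 200, 300, 400, 500]   # hypothetical daily visits (x)
conversions = [2, 5, 7, 9, 12]       # hypothetical daily conversions (y)

slope, intercept = least_squares(visits, conversions)
print("direction:", "up" if slope > 0 else "down")
print("conversions per extra visit:", round(slope, 3))  # 0.024
```

Nothing fancier than that y = mx + b thing; the point is the discipline of looking at the picture before arguing about it.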

But if the basics are beyond the industry greats

  • then explaining the differences between cross-sectional studies and longitudinal studies (cross-sectional studies involve measuring a single (x,y) pair, meaning x is fixed for all y; longitudinal studies involve countably infinite (x,y) pairs. Longitudinal studies are greatly more expensive than their cross-sectional cousins, which is why cross-sectional regression models are often used when longitudinal regression models are needed) won't do much good19,
  • nor will explaining the need for creating a “standard” site for calibration purposes,
  • nor can models be standardized until methods themselves are analyzed and an accuracy “weighting” is determined (allowing all models to be compared to a “gold standard”, meaning comparing my results to your results actually has analytic meaning),
  • explaining the meaning of and how to “normalize” samples is out (doing so allows you to see where the normals fall on your standard curve. You put your normals in the middle to lower part of the curve because a) this is where population densities are greatest and b) no naturally occurring line is going to be straight, so you shoot for placing your normals on the straightest part of the curve to get some kind of linearity (that y = mx + b thing). Every naturally occurring phenomenon follows mathematical rules that produce curves. In the “Figuring out where your normals are on your curve” chart, between the two blue lines is where standards occur: below the bottom blue is “below standard”, above the top blue is “out of standard”, and between the bottom blue and green line is the normal range. You calibrate your methods against the gold-standard normals and anything above is where the money lies)20.
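The calibration and weighting ideas above can be sketched minimally: measure each tool against a “gold standard” site whose true count is known, derive an accuracy weight from that, and use the weight to rescale each tool's readings onto one comparable scale. The tool names and every count below are hypothetical.

```python
# A sketch of calibrating tools against a "gold standard": the true count for
# a calibration site is known, each tool's reading on that site yields an
# accuracy weight, and the weight rescales that tool's other numbers onto a
# single comparable scale. All names and counts are invented.

GOLD_STANDARD_VISITS = 1000   # known true count for the calibration site

tool_readings = {             # what each tool reported for the same site
    "tool_a": 2000,           # over-counts by 2x
    "tool_b": 800,            # under-counts
    "tool_c": 1000,           # already calibrated
}

# Accuracy weight: truth divided by reading (1.0 = perfectly calibrated)
weights = {name: GOLD_STANDARD_VISITS / reading
           for name, reading in tool_readings.items()}

def normalize(tool, raw_count):
    """Rescale a tool's raw count onto the gold-standard scale."""
    return raw_count * weights[tool]

# Different tools' readings of the same (hypothetical) site now agree:
print(normalize("tool_a", 3000))  # 1500.0
print(normalize("tool_b", 1200))  # 1500.0
```

Only with something like this in place does “comparing my results to your results” carry analytic meaning.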

It takes more effort to reorder a partially ordered system than it does to create order in an unordered system (bonds, even when incorrect, have existing binding energy).

I completely understand why so many of NextStage's clients couldn't document the accuracy of the online analytics tools they were using at the time they contacted us for help. This lack of documentation was something I was very uncomfortable with. If there's no proven methodology for demonstrating a number's validity then you've essentially moved away from the gold standard and declared that the value of your dollar is based entirely on what others value it at (pretty much determined by your political-military-industrial capabilities or, in this case, by those guarding the riverbank). Your numbers only have meaning so far as others are willing to accept them as valid, and if lots of money is being paid for an opinion, that opinion is going to be gold regardless of whether it's based on invalid assumptions or documentable facts.

The online analytics field is partially ordered — it's been around long enough for a hierarchy to appear — so only those willing to expend the energy are going to attempt fixing it for the sake of getting it fixed rather than changing it to suit their own objectives.

And this is where

The detritus encounters the many winged whirling object

NSE was seeing so many erroneous tool results (my favorite example was the company that was getting 10k visitors/day and only 3 conversions/month; their online analyst swore by the numbers) that it led us to come up with a reliable y = x ± 2db that we could prove, repeat and document. It relied solely on First Principles. This led to our in-house analytics tools, which is why we're analytics-tool agnostic. We really don't care what tools clients use. If we don't believe the numbers, we'll use our own tools to determine them, because we know and can validate how our tools work. As a result we now often use our tools to validate the accuracy of other tools.

I have no dog in this fight (neither the “Web Analytics is…” fight nor the fight over whether a promise existed and has gone unfulfilled, because I'm a recognized industry outsider) and won't be dragged into it (I mean, would you really want me involved?). My agenda is making sure that those coming to NextStage for help either bring some mathematical rigor with them or allow NextStage to invoke it. There is little that can be done when a tool lacks internal consistency (given a consistent input, it generates different outputs).
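That internal-consistency test can be sketched in a few lines: run the tool several times on one fixed input and flag any variation in the output. The measure() function below is a hypothetical stand-in, not any real analytics product.

```python
# A sketch of an internal-consistency check: identical input should yield
# identical output, run after run. measure() is a hypothetical stand-in for
# an analytics tool; it counts page-view lines in a fixed log.

def measure(log_lines):
    """Hypothetical tool: count lines that look like page views."""
    return sum(1 for line in log_lines if "GET /page" in line)

def is_internally_consistent(tool, fixed_input, runs=10):
    """True if the tool returns one and only one answer for the same input."""
    results = {tool(fixed_input) for _ in range(runs)}
    return len(results) == 1

sample_log = ["GET /page/1", "GET /img/a.png", "GET /page/2"]
print(measure(sample_log))                            # 2
print(is_internally_consistent(measure, sample_log))  # True
```

A tool that fails this check fails before any question of accuracy even arises.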

It really is that simple, folks. This is First Principles and they always work. Don't believe me? Ask Ockham. First Principles have to work. As long as the sun rises in the east and sets in the west, as long as there are stars up in the sky, as long as the recognized laws of reality are valid, …

And because mathematics is a universal language, the stars are in the sky, etc., etc., these rules have to apply to online analytics and the tools used therein.

Unless you're happy with high variability in results sets given a known and highly defined set of inputs.

Which is fine, if that's what your values are based on.

And I doubt it is, so be prepared for companies to use HiPPOs only for political purposes (“Our methods are valid because they were installed/given to us/updated/validated/… by the HiPPO du jour”), not for accuracy purposes.

How fast are you going?

I mean, people make a living out of these things, right? When someone talks about a regression curve and that a decision was made because the probabilities were such and so, does it matter if they know what they're talking about?

Or is being able to use a tool the same as understanding what the tool is doing?

And I know there are online analysts out there who take high variability and weave it into gold. Good for them (truly!). They have a skill I lack. But they're performing art, not science, and as someone who walks in both worlds I will share my opinion that science is lots easier than art. Science has rules. Art is governed by what the buying public is willing to spend and on whom.


That offered, HiPPOs du jour should be prepared for highly defined and validatable game-changing methods and technologies to un-du jour them, because such methods and technologies will do exactly that, given time, regardless of where they originate and how they emerge. In this, like stars shining in the sky, there is no option, no way out. The laws of evolutionary dynamics apply in everything from rainstorm puddles on the pavement to galactic clustering (I can demonstrate their validity in the online analytics world very quickly and easily; start with the first online analytics implementation at UoH in the early 1990s and follow the progression to today. Simple, clean and neat. I love it when things work. Don't you? Gives me confidence in what I think, do and say).

My suggestion (note the italics) is that the online community create an unbiased, product agnostic experimental group. All empirical sciences that I know of have experimental disciplines within them (physics has “experimental physics”, immunology has “experimental immunology”, …). NextStage is not part of this community so again, we have no dog in this fight. Let me offer NextStage as an example, though — we do regularly publish our experimental methods and their results in our own papers and in business-science journals and in scientific conference papers. This allows others to determine for themselves if our methods are valid and worthy. Granted, NextStage comes from a scientific paradigm and perhaps taking on some of science's disciplines would benefit the industry as a whole, or at least bring more confidence and comfort to those within it.

But what about the Third Semiotic Question?

Answering “What happened to me?” follows the trail of asking trusted others (my thanks to Susan, Charles, Barb, Mike, Warner, Lewis, Todd, Little-T and the Girls, M, Gladys and Dolph) many questions to bridge holes in my understandings.

All the ills referenced in parts 1 and 2 demonstrated themselves to the full — people who didn't like what I wrote triangulated. They contacted others who they thought were socially closer to me or “might have an in”, but heaven forbid they contact me directly. Others focused their frustration at me because (probably in their minds) I was something concrete and tangible, something they could point at, instead of something they felt powerless against: the industry as a whole. Still others because they consider me an industry leader (I'm not. I'm an outsider, remember? I can't lead an industry I'm not a part of. Or should Moses start telling Buddhists how to behave?). And (I'm told) I became the subject of klatch-talk on at least two continents (obviously, I need to start charging more for my time).

All of these things add up to determining the human cost of the unfulfilled promise of online analytics. As I quoted before, Coca-Cola Interactive Marketing Group Manager Tom Goodie said “Metrics are ridiculously political.” He was correct and not by half. The cost is high. It is highest amongst

  • those unsure of the validity of their methods, their measurements and their meanings who want to be accepted and acknowledged as doing valuable work yet are unable to concisely and consistently document what they're doing to the satisfaction of executives signing their checks
  • and those who are cashing those checks to buy new clothes.

Do I think the online analytics industry will change because of my research and its publication?

See this tool? I must know what I'm doing because I use this tool.

Did you read what I wrote about accountability in The Unfulfilled Promise of Online Analytics, Part 1? People are being paid without being accountable for what they're being paid to do. The sheer human inertia put forth to not change that model has got to be staggering, don't you think?

And I doubt anything I could do would bring such a change about. My work may contribute, it may be a drop in the bucket helping that bucket to fill and that's all.

The industry itself will change regardless (surprise!). As a WAWB colleague recently wrote, “For a field that's changing rapidly, based on rapidly changing technologies, I personally feel that holding any expectations for the future is a set up for disappointment. The expectation of change is the only realistic expectation I can hold today,” and I agree. Things will change. They always do. To promise anything else is to lie first to one's self, then to others.

Final Thoughts

This is the end of the Unfulfilled Promise arc for me, folks. Please feel free to continue it on your own and give me a nod if you wish.

(my thanks to readers of Questions for my Readers who suggested this footnoting format over my usual <faux html> methods and to participants in the First NH WAW who, knowing nothing about this post, covered much the same topics during our lunch conversation)

1 – A constant promise to myself regarding my work — perform honest research, report results accurately and unbiasedly and (when possible) determine workable solutions to any challenges that presented themselves in either research or results.


2 – For those who don't know, much of ET is based on anthrolingualsemiotics — how humans communicate via signs. “Signs” means things like “No Parking”, true, and also means language, movement, symbols, art, music, … . According to Thomas Carlyle, it is through such things “that man consciously or unconsciously lives, works and has his being.” You can find more about semiotics in the following bibliography:

Aho, Alfred V. 2004 27 Feb Software and the Future of Programming Languages, Science V 303, I 5662, DOI: 10.1126/science.1096169

Balter, Michael 2004 27 Feb Search for the Indo-Europeans, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1323

Balter, Michael 2004 27 Feb Why Anatolia?, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1324

Benson, J.; Greaves, W.; O'Donnell, M.; Taglialatela, J. 2002 Evidence for Symbolic Language Processing in a Bonobo (Pan paniscus), Journal of Consciousness Studies V 9, I 12

Bhattacharjee, Yudhijit 2004 27 Feb From Heofonum to Heavens, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1326

Carrabis, Joseph 2006 Chapter 4, “Anecdotes of Learning”, Reading Virtual Minds Volume I: Science and History, V 1, Northern Lights Publishing, Scotsburn, NS, 978-0-9841403-0-5

Carrabis, Joseph 2006 Reading Virtual Minds Volume I: Science and History, V 1, Northern Lights Publishing, Scotsburn, NS

Chandler, Daniel 2007 Semiotics: The Basics, Routledge, 978-0415363754

Crain, Stephen; Thornton, Rosalind 1998 Investigations in Universal Grammar, MIT Press, 0-262-03250-3

Fitch, W. Tecumseh; Hauser, Marc D. 2004 16 Jan Computational Constraints on Syntactic Processing in a Nonhuman Primate, Science V 303, I 5656

Gergely, Gyorgy; Bekkering, Harold; Kiraly, Ildiko 2002 14 Feb Rational imitation in preverbal infants, Nature V 415, I 6873

Graddol, David 2004 27 Feb The Future of Language, Science V 303, I 5662, DOI: 10.1126/science.1096546

Holden, Constance 2004 27 Feb The Origin of Speech, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1316

Montgomery, Scott 2004 27 Feb Of Towers, Walls, and Fields: Perspectives on Language in Science, Science V 303, I 5662, DOI: 10.1126/science.1095204

Pennisi, Elizabeth 2004 27 Feb The First Language?, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1319

Pennisi, Elizabeth 2004 27 Feb Speaking in Tongues, Science V 303, I 5662, DOI: 10.1126/science.303.5662.1321


3 – There is (in my opinion) no greater demonstration of this principle than in The Book of the Wounded Healers, a long forgotten book that I hope will become available again sometime soon.


4 – Aleksander, Igor; Dunmall, Barry 2003 Axioms and Tests for the Presence of Minimal Consciousness in Agents I: Preamble, Journal of Consciousness Studies V 10, I 4-5


5 – Carrabis, Joseph 2004, 2006, 2009 A Primer on Modality Engineering, 18 Pages, Northern Lights Publishing, Scotsburn, NS

Carrabis, Joseph 2009 18 Aug I'm the Intersection of Four Statements, BizMediaScience

Carrabis, Joseph 2009 8 Sep Addendum to “I'm the Intersection of Four Statements”, BizMediaScience

Nabel, Gary J. 2009 2 Oct The Coordinates of Truth, Science V 326, I 5949


6 – The simplest things often have the most power. The semioticist's A + B = C demonstrates itself with three questions to form equations of meaning such as:

(what happened) + (what do I think happened) = (what happened to me)

(what happened to me) – (what do I think happened) = (what happened)

(what happened to me) – (what happened) = (what do I think happened)

Know any two and the last reveals itself to you.

But only if you're willing.


7 – Note to Jacques Warren: Un et un est troi. Ha!


8 – Note to Ben Robison: Nope, ET wouldn't detect the sarcasm. The string was too short. We're working on it.


9 – Note to Ben Robison: Still working on that sarcasm thing. We have what we think is a good go at it in the NS Sentiment Analysis tool we'll be making public either this week or next (still waiting for the interface and may decide to go without it just to learn what happens).


10 – As Jacques Warren, Stephane Hamel and Rene can tell you, my best French is laughable. My attempt at “My gosh, what a beautiful day” usually comes out as “Joli jour heureux je”. (C'est rire, n'est-ce pas?)


11 – Carrabis, Joseph 2007 10 Jan Standards and Noisy Data, Part 1, BizMediaScience

Carrabis, Joseph 2007 11 Jan Standards and Noisy Data, Part 2, BizMediaScience

Carrabis, Joseph 2007 12 Jan Standards and Noisy Data, Part 3, BizMediaScience

Carrabis, Joseph 2007 14 Jan Standards and Noisy Data, Part 4, BizMediaScience

Carrabis, Joseph 2007 27 Jan Standards and Noisy Data, Part 5, BizMediaScience

Carrabis, Joseph 2007 28 Jan Standards and Noisy Data, Part 6, BizMediaScience

Carrabis, Joseph 2007 28 Jan Standards and Noisy Data, Part 7, BizMediaScience

Carrabis, Joseph 2007 28 Jan Standards and Noisy Data, Part 8, BizMediaScience

Carrabis, Joseph 2007 28 Jan Where Noisy Data Meets Standards (The Noisy Data arc, Part 9), BizMediaScience

Carrabis, Joseph 2007 28 Jan Standards and Noisy Data, Part 10, BizMediaScience

Carrabis, Joseph 2007 29 Jan Standards and Noisy Data, Part 11, BizMediaScience

Carrabis, Joseph 2007 29 Jan For Angie and Matt, and The Noisy Data Finale, BizMediaScience


12 – Periodic relearnings are part of my training and makeup. I put myself through periodic re-educations because I question my knowledge, not because I question someone else's. My goal is to find the flaws in my understanding, not to pronounce someone else's in error. Periodic re-educations keep subject matter knowledge fresh within me, bring new understandings to old educations, increase wisdom, all sorts of good things. Admittedly, this has enabled me to recognize flaws in other people's reasonings. Two examples that the online community may be familiar with are Eric Peterson's engagement equation (flawed definitions and mathematical logic) and Stephane Hamel's WAMM (frame confusion).

To respond to some comments made on the (now dead) TheFutureOf blog, I had to study other people's work. One such work was Eric Peterson's engagement equation. Other people had contacted me with questions about its validity (for the record, I had no intention of looking at Eric's engagement equation until he mentioned it in response to something I'd written. Once he mentioned it, my belief was he'd “placed it in the game”, so to speak, hence opened it up to inspection).

In any case, the result of my own and others' questioning was that I studied how that equation was derived (was the mathematical logic viable and consistent, were the variables defined and used consistently, …) and found it flawed. Eric asked if it would be possible for us to simply work together on the equation to remove some ambiguities and make it more generally applicable, thereby removing any questions of mathematical validity and provide business value.

The public response to my reworking of Eric's original equation both confused and concerned me. My reworking was nothing more than turning it into a multiple regression model with the b0 and e terms set to 0 and all bn assumed to be 1 (they could be changed as needs dictated). This allowed people using the reworking to determine by simple variance which models/methods weren't valid in their business setting and ignore them. I kept thinking people would laugh at how simplistic my reworking was, and the response was quite the opposite. It was at this point that my concerns about basic mathematical knowledge among online analysts flared.
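That reworking can be sketched in a few lines, with invented component metrics: engagement as a multiple regression y = b0 + b1x1 + … + bnxn + e, the b0 and e terms set to 0, and every bn defaulting to 1 unless needs dictate otherwise. The component names and values here are hypothetical, not Eric's actual variables.

```python
# A sketch of the reworking described above: engagement as a multiple
# regression y = b0 + b1*x1 + ... + bn*xn + e, with b0 and e set to 0 and
# every bn defaulting to 1 (changeable as needs dictate). Component metric
# names and values are invented for illustration.

def engagement(components, betas=None):
    """Sum the component metrics, each scaled by its beta (default 1.0)."""
    if betas is None:
        betas = {name: 1.0 for name in components}
    return sum(betas[name] * value for name, value in components.items())

visitor = {"recency": 0.75, "frequency": 0.5, "duration": 0.25}

print(engagement(visitor))  # 1.5, every beta left at 1

# Re-weight as needs dictate, e.g. ignore duration and double recency:
print(engagement(visitor,
                 {"recency": 2.0, "frequency": 1.0, "duration": 0.0}))  # 2.0
```

Comparing the variance of each candidate weighting against observed outcomes is then what lets a team discard the models/methods that aren't valid in their business setting.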

I read through Stephane Hamel's WAMM paper (also because others entered it into a discussion) and recognized that by adding some consistent variable definitions that tool would have a great deal of power across disciplines. I asked Stephane if he'd mind my tinkering and so the story goes.

The challenge with Eric Peterson's engagement equation and Stephane Hamel's WAMM is (in my current understanding) that there is no “standard”, itself a theme I'll return to in this post. As an example, my current work with WAWB involves applying some standard modeling techniques so a “normal” can be determined. This would allow Company A to measure itself against a normal rather than comparing itself to bunches of other companies (that might not be good exemplars, given differing business and market conditions) and determine upon which vector Company A should place its efforts to ensure cost-efficient gains along all WAMM vectors. The first aspect (my opinion) would be organizational. Without people accepting recognized truth there is no truth (again, my opinion).

And each time I take on such a task I require myself to relearn the necessary disciplines so I can be confident that my understandings are as close to the original author's as possible.

My method for learning and re-learning anything is to go back to First Principles (as mentioned earlier in this post). Some people may have heard or seen me talk about learning theory and how it can be applied everywhere. That's a lot of what First Principles are about. Start with the most basic elements you can, understand them as completely as possible, build upon that. One thing this provides me is the ability and confidence to discuss my ideas openly, the freedom to ask questions honestly and truthfully, and to understand and accept conflicting views easily and graciously. Put another way, the more you know, the wider your field of acceptance and understanding, and the more fluid and dynamic you become in your ability to respond to others.

So I started relearning statistics by going back to First Principles, studying Gauss, Galton, Fisher and Wright, giving myself the time to understand how the discipline evolved, how the concepts of regression, regression to the mean, ANOVA, ANCOVA, trait analysis, path analysis, structural equations modeling, causal analysis, least squares analysis, …, came about, how they're applied to different sciences (agriculture, eugenics, medicine, …), how bias, efficiency, optimality, sufficiency, ancillarity, robustness, … came about and how they are solved.

I also learned that the advent of fast, inexpensive computing power tended to focus people's attention on problems that could be solved via fast, inexpensive computing rather than on problems that needed to be solved. This was (to me) a point of intersection with the Unfulfilled Promise posts; “gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

So I shifted my focus a bit. I decided to use online analytics as the groundwork for teaching myself statistics.


13 – Somebody remind me to publish The Augmented Man. It covers Preparation Sets, EEGSLs and all that stuff in detail.

And it's another darn good read. Phphttt!


14 – Carrabis, Joseph 2006 Chapter 2, “What The Reading Virtual Minds Series Is About”, Reading Virtual Minds Volume I: Science and History, Northern Lights Publishing, Scotsburn, NS, 978-0-9841403-0-5

Carrabis, Joseph 2006 Chapter 4 section 2, “The Investors Heard the Music”, Reading Virtual Minds Volume I: Science and History, V 1, Northern Lights Publishing, Scotsburn, NS, 978-0-9841403-0-5

Carrabis, Joseph 2006 10 Nov Mapping Personae to Outcomes

Carrabis, Joseph 2007 23 Mar Websites: You've Only Got 3 Seconds, iMediaConnections

Carrabis, Joseph 2007 11 May Make Sure Your Site Sells Lemonade, iMediaConnections

Carrabis, Joseph 2007 29 Nov Adding sound to your brand website, iMediaConnections

Carrabis, Joseph 2008/9 28 Jan/1 Jul From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence, The Analytics Ecology

Carrabis, Joseph 2008 26 Jun Responding to Christopher Berry's “A Vexing Problem, Part 4” Post, Part 3, BizMediaScience

Carrabis, Joseph 2008 2 Jul Responding to Christopher Berry's “A Vexing Problem, Part 4” Post, Part 2, BizMediaScience

Carrabis, Joseph 2008/9 11 Jul/3 Jul From TheFutureOf (10 Jul 08): Back into the fray, The Analytics Ecology

Carrabis, Joseph 2008/9 18 Jul/7 Jul From TheFutureOf (16 Jul 08): Responses to Geertz, Papadakis and others, 5 Feb 08, The Analytics Ecology

Carrabis, Joseph 2008/9 18 Jul/7 Jul From TheFutureOf (16 Jul 08): Responses to Papadakis 7 Feb 08, The Analytics Ecology

Carrabis, Joseph 2008/9 29 Aug/9 Jul From TheFutureOf (28 Aug 08): Response to Jim Novo's 12 Jul 08 9:40am comment, The Analytics Ecology

Carrabis, Joseph 2008 1 Oct Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 1), BizMediaScience

Carrabis, Joseph 2008 6 Oct Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 2), BizMediaScience

Carrabis, Joseph 2008 30 Oct Me, Politics, Adam Zand's Really Big Shoe, How Obama's and McCain's sites have changed when we weren't looking, BizMediaScience

Carrabis, Joseph 2008 31 Oct Governor Palin's (and everybody else's) Popularity, BizMediaScience

Carrabis, Joseph 2008/9 10 Nov/15 Jul From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on What are we measuring when we measure engagement?, The Analytics Ecology

Carrabis, Joseph 2009 A Demonstration of Professional Test-Taker Bias in Web-Based Panels and Applications, 20 Pages, NextStage Evolution, Scotsburn, NS

Carrabis, Joseph 2009 Machine Detection of and Response to User Non-Conscious Thought Processes to Increase Usability, Experience and Satisfaction – Case Studies and Examples, Towards a Science of Consciousness: Hong Kong 2009, University of Arizona, Center for Consciousness Studies, Tucson, AZ

Carrabis, Joseph 2009 5 Jun Sentiment Analysis, Anyone? (Part 1), BizMediaScience

Carrabis, Joseph 2009 12 Jun Canoeing with Stephane (Sentiment Analysis, Anyone? (Part 2)), BizMediaScience

Carrabis, Joseph 2007 30 Mar Technology and Buying Patterns, BizMediaScience

Carrabis, Joseph 2007 9 Apr Notes from UML's Strategic Management Class – Saroeung, 3 Seconds Applies to Video, too, BizMediaScience

Carrabis, Joseph 2007 16 May KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 1, BizMediaScience

Carrabis, Joseph 2007 16 May KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 2, BizMediaScience

Carrabis, Joseph 2007 16 May KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 3, BizMediaScience

Carrabis, Joseph 2007 16 May KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 4, BizMediaScience

Carrabis, Joseph 2007 Oct The Importance of Viral Marketing: Podcast and Text

Carrabis, Joseph 2007 9 Oct Is Social Media a Woman Thing?

Carrabis, Joseph; Bratton, Susan; Evans, Dave 2008 9 Jun Guest Blogger Joseph Carrabis Answers Dave Evans, CEO of Digital Voodoo's Question About Male Executives Wielding Social Media Influence on Par with Female Executives, PersonalLifeMedia

Carrabis, Joseph; Carrabis, Susan 2009 Designing Information for Automatic Memorization (Branding), 35 Pages, NextStage Evolution, Scotsburn, NS

Carrabis, Joseph 2009 Frequency of Blog Posts is Best Determined by Audience Size and Psychological Distance from the Author, 25 Pages, NextStage Evolution, Scotsburn, NS

Daw, Nathaniel D.; Dayan, Peter 2004 18 Jun Matchmaking, Science V 304, I 5678

Draaisma, Douwe 2001 8 Nov The tracks of thought, Nature V 414, I 6860

Ferster, David 2004 12 Mar Blocking Plasticity in the Visual Cortex, Science V 303, I 5664

Pashler, Harold; McDaniel, Mark; Rohrer, Doug; Bjork, Robert 2008 Learning Styles: Concepts and Evidence, Psychological Science in the Public Interest V 9, I 3, ISSN 1539-6053

Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael 2004 12 Mar Intersubject Synchronization of Cortical Activity During Natural Vision, Science V 303, I 5664

Kozlowski, Steve W.J.; Ilgen, Daniel R. 2006 Dec Enhancing the Effectiveness of Work Groups and Teams, Psychological Science in the Public Interest V 7, I 3

Matsumoto, Kenji; Suzuki, Wataru; Tanaka, Keiji 2003 11 Jul Neuronal Correlates of Goal-Based Motor Selection in the Prefrontal Cortex, Science V 301, I 5630

Ohbayashi, Machiko; Ohki, Kenichi; Miyashita, Yasushi 2003 11 Aug Conversion of Working Memory to Motor Sequence in the Monkey Premotor Cortex, Science V 301, I 5630

Otamendi, Rene Dechamps; Carrabis, Joseph; Carrabis, Susan 2009 Predicting Age & Gender Online, 8 Pages, NextStage Analytics, Brussels, Belgium

Otamendi, Rene Dechamps 2009 22 Oct NextStage Announcements at eMetrics Marketing Optimization Summit Washington DC, NextStage Analytics

Otamendi, Rene Dechamps 2009 24 Nov NextStage Rich Personae™ classification, NextStage Analytics

Paterson, S. J.; Brown, J. H.; Gsödl, M. K.; Johnson, M. H.; Karmiloff-Smith, A. 1999 17 Dec Cognitive Modularity and Genetic Disorders, Science V 286, I 5448

Pessoa, Luiz 2004 12 Mar Seeing the World in the Same Way, Science V 303, I 5664

Richmond, Barry J.; Liu, Zheng; Shidara, Munetaka 2003 11 Jul Predicting Future Rewards, Science V 301, I 5630

Sugrue, Leo P.; Corrado, Greg S.; Newsome, William T. 2004 18 Jun Matching Behavior and the Representation of Value in the Parietal Cortex, Science V 304, I 5678

Tang, Tony Z.; DeRubeis, Robert J.; Hollon, Steven D.; Amsterdam, Jay; Shelton, Richard; Schalet, Benjamin 2009 1 Dec Personality Change During Depression Treatment: A Placebo-Controlled Trial, Arch Gen Psychiatry V 66, I 12


15 – And before I get another flurry of emails claiming I'm attacking one person or another: no, I'm not. An almost identical process occurs when someone says "(something) is Easy". I describe the "(something) is Hard" version because it's easier for people to understand. That's one of the wonders of AmerEnglish and American cultural training: it is easier to accept that something can be hard than to accept that something could be easy.

Human neural topography. Gotta love it.


16 – This understanding of what happens during teachings and trainings is why all NextStage trainings are done the way they are (see Eight Rules for Good Trainings (Rules 1-3) and Eight Rules for Good Trainings (Rules 4-8)) and could be why our trainings get the responses they do (see Comments from Previous Participants and Students).


17 – Bloom, Paul (2001). Precis of How Children Learn the Meanings of Words. Behavioral and Brain Sciences, V. 24.

Burnett, Stephanie; Blakemore, Sarah-Jayne (2009, 6 Mar). Functional connectivity during a social emotion task in adolescents and in adults. European Journal of Neuroscience, V. 29, I. 6. DOI: 10.1111/j.1460-9568.2009.06674.x.

Frith, Chris D.; Frith, Uta (1999, 26 Nov). Interacting Minds–A Biological Basis. Science, V. 286, I. 5445.

Gallagher, Shaun (2001). The Practice of Mind (Theory, Simulation or Primary Interaction). Journal of Consciousness Studies, V. 8, I. 5-7.

Senju, Atsushi; Southgate, Victoria; White, Sarah; Frith, Uta (2009, 14 Aug). Mindblind Eyes: An Absence of Spontaneous Theory of Mind in Asperger Syndrome. Science, V. 325, I. 5942.

Tooby, J.; Cosmides, L. (1995). Foreword to S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind. MIT Press, Cambridge, Mass.

Zimmer, Carl (2003, 16 May). How the Mind Reads Other Minds. Science, V. 300, I. 5622.


18 – I'll use myself as an example. I've often become emotional when talking about research and results. But (But!) regardless of my emotionalism, the work stands or doesn't. I can clarify, elucidate, explain, divulge, describe, … and in the end, the work stands or it doesn't.


19 – If your model is a linear variation (regression analyses of this kind are linear in their coefficients) then you have something like y = mx + b, y = b0 + b1x + e, …, and every one-unit change in x produces an m-unit (or b1-unit) change in y. Using the above equations as examples we get the textbook definition of the regression coefficient (either m or b1 above): the effect that a one-unit change in x has on y.
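That coefficient interpretation is easy to verify in a few lines of code. The sketch below fits y = b0 + b1x by ordinary least squares on invented data (the numbers are made up purely for illustration) and confirms that the fitted b1 is exactly the predicted change in y for a one-unit change in x:

```python
# Fit y = b0 + b1*x by ordinary least squares and confirm the textbook
# reading of the regression coefficient.
import random

def fit_line(xs, ys):
    """Return (b0, b1) minimizing squared error of y = b0 + b1*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # b1 = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b1 = cov / var
    b0 = mean_y - b1 * mean_x
    return b0, b1

random.seed(0)
xs = [float(i) for i in range(100)]
# True relationship: y = 2 + 3x plus noise (the "e" term above).
ys = [2 + 3 * x + random.gauss(0, 1) for x in xs]

b0, b1 = fit_line(xs, ys)
# The model's predicted change in y for one more unit of x is exactly b1.
delta = (b0 + b1 * 11) - (b0 + b1 * 10)
print(b1, delta)  # both should be close to the true slope of 3
```

The `fit_line` helper is a hypothetical name for this sketch; any statistics package's linear regression gives the same b0 and b1.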


20 – I have experience working with large data sets. Some of you might know I worked for NASA in my younger years. I was responsible for downloading and analyzing satellite data. The downloads came every fifteen minutes and reported atmospheric phenomena the world over. My job was to catch the incongruous data and discard it. I got to a point where I could look at the hexadecimal data stream and determine weather conditions anywhere in the world before the data was sent on for analysis.

Amazing that I got dates back then, isn't it?



The Unfulfilled Promise of Online Analytics, Part 2

Perfection is achieved,
not when there is nothing more to add,
but when there is nothing left to take away.
– Antoine de Saint-Exupéry, Wind, Sand and Stars

Readers can find the previous entry in this arc at The Unfulfilled Promise of Online Analytics, Part 1.

First, I want to thank all the people who read, commented, twittered, emailed, skyped and phoned me with their thoughts on Part 1.

My special thanks to the people with reputations and company names who commented on Part 1. Avinash Kaushik and Jim Novo, I thank and congratulate you for stepping up and responding (I asked others if I could include them in this list; they never responded). Whether you intended to or not, whether you recognize it or not, you demonstrated a willingness to lead and a willingness to get involved. Please, let's keep the discussion going.

Also my thanks to those who took up the gauntlet by propagating the discussion via their own blogs. Here Chris Berry (and I also note that Chris' The Schism in Analytics, A response to Carrabis, Part II post presages some of what I'll post here) and Kevin Hillstrom come to mind. My apologies to others I may not have encountered yet.

Second, I was taken aback by the amount of activity that post generated; I was completely unprepared for the responses. It never occurred to me there was a nerve to be struck. Only one person interviewed responded purely in the positive, and the lack of positive response led me to think this information was self-evident.

Well…there was one of the problems. It was self-evident. Like the alcoholic brother-in-law elephant in the living room, it took someone new to the family to point and say, "My god, is that guy drunk or what!"

And like the family that's been working very hard making sure nobody acknowledges the elephant, the enablers came forward — okay, they emailed, skyped and phoned forward. One industry leader commented, saw my response and asked that their comment be removed. I did so with great regret, because there can be no leadership without discussion, no unification of voices until all voices are heard.

Please note that some quotes appearing in this entry may be from different sources than in part 1 and (as always) are anonymous unless a) express permission for their use is given or b) the quote is in the public domain (Einstein, Saint-Exupery, etc).

Okay, enough preamble. Enjoy!

"The whole industry needs a fresh approach. This situation isn't going to improve itself."

There was a sense of exhaustion among respondents regarding the industry. It took two forms and I would be hard pressed to determine which form took precedence.

One form I could liken to the exhaustion a spouse feels when their partner continually promises that tomorrow will be better, that they'll stop drinking/drugging/gambling/overeating/abusing or otherwise acting out.

It wasn't always the case. Once upon a time (a phrase actually used by more than one respondent) there was a belief that if things were implemented correctly, if a new tool could be developed, if management would understand what was being done, if, if, if… things could and would be better. Promises were made that were never kept and were then comfortably forgotten.

The second form I could liken to the neglected child who starts acting out simply to get attention. Look at me, look at me! But mom&dad always have something else to focus their attention on: the new product launch, opening new markets, having to answer to the Board, and (probably the worst) the other children (marketing, finance, logistics, …).

“When you know the implementation is correct you have to wonder if the specifications are wrong.”

Several respondents showed an impressive level of self-awareness. Many of them have moved on, either out of the industry completely or into more fulfilling positions within. All recognized that any industry that succumbs to promise and hype will ultimately end in disappointment.

"First we're told to nail things down, then given a block of unobtainium to nail them into, then told to do it now!"

The disappointment took two primary forms (clear schisms abounded in this research; clear schisms are usually indicative of deep-level challenges to unification in social groups) and the division was along personality types. Respondents who were more analytic than business focused were disappointed because "…a fraction of implementations achieve business goals. A tiny fraction of those actually work."

Respondents who were more business than analytics focused were disappointed because the industry didn't help them achieve their career goals.

For many in both camps moving on was a recognition of their own personal growth and maturation; for most it was frustration based, a running away from pain rather than a movement toward pleasure. This latter again demonstrates a victim mentality, a sense of being caught in the middle between warring parents.

“When the tools don't agree management's solution is to get a new tool.”

"Deciding on tools is more politics than smarts. Management doesn't ask us, they just go with the best promises."

Respondents demonstrated frustration with clients/organizations and vendors that refuse to demonstrate leadership. This was such a strong theme that I address it at length below. Sometimes a lack of leadership is the result of internal politics ("…and that (competition, keeping knowledge to themselves, backstabbing) is starting to happen (we see the schism (right word?) between Eric's 'hard' position and Avinash's 'easy' (and others)…").

Leadership vacuums also develop when power surges back and forth between those given authority positions by others. Family dynamics recognizes this when parents switch roles without clearly letting children know who's taking the lead (think James Dean's “You're tearing me apart” in Rebel Without a Cause). This frustration was exacerbated when respondents began to recognize that no tool was truly new, only the interfaces and report formats changed.

There was a sense among respondents that vendors and clients/organizations were switching roles back and forth, neither owning leadership for long, and again, the respondents were caught in the middle.

“Management pays attention to what they paid for, not what you tell them.”

Some respondents are looking at the horizon and reporting a new (to them) phenomenon; as vendors merge, move and restructure there's an increasing lack of definition around “what can we do with this?” This is disturbing in lots of ways.

"...everybody's agreeing with their own ideas and nobody else's."

Analysts will begin to socially and economically bifurcate (there will be no "middle class"). Those at the bottom of the scale will enter the industry in a typical "just out of school" job then move elsewhere unless they're politically adept. The politically adept will join the top runners, either associating themselves with whatever exemplars exist or becoming exemplars themselves. But the social setting thus created allows for a multitude of exemplars, meaning there are many paths to the stars, meaning one must choose wisely, meaning most will fail; thus the culture bifurcates again and fewer will stay long enough to reach the stars. "You have to pick who you listen to. I get tired figuring out who to follow each day."

Respondents admitted to lacking (what I recognize as) research skills. I questioned several people about their decision methods — had they considered this or that about what they did or are planning to do — and universally they were grateful to me for helping them clarify issues. Those that had appreciable research skills were hampered by internal politics (“Until my boss is ready nothing gets done.”)

Most respondents confused outputs with outcomes (as noted in Part 1) because tools are presented and trained at two levels (this is my conclusion based on discussions; I'm happy to be corrected). There's the tool core that only a few learn to use and there's the tool interface that everyone has access to.

Everyone can test and modify their plans based on the interface outputs, but what happens at the core level — how the interface outputs are arrived at — is the great unknown, hence can't be defended in management discussions: "…I can't explain where it came from so I'm ignored." Management's (quite reasonable, to me) response follows Arthur C. Clarke's "Mankind never completely abandons any of its ancient tools"; they go with what they know, especially when analysts themselves don't demonstrate confidence in their findings. "I can only shrug so many times before they stop listening, period."

Management is left to make decisions based on experience, and now we see the previously mentioned bifurcation creeping into business decisions. Those with the most experience, the most tacit knowledge, win. As John Erskine wrote, "Opinion is that exercise of the human will that allows us to make a decision without information", and management — asking for more accountability — is demanding to understand the basis for the information given.

“Did you ever get the urge when someone calls up or sends e-mails asking, 'How's that data coming?' to say, 'Well, we're about two hours behind where we would be if I didn't have to keep stopping to answer your goofy-?ss phone calls and e-mails.' This is called project management, I guess.”

"Some tools are rejected even when they make successful predictions."

"Ignore them" as a strategy for responding to business requests works two ways. Management repeatedly asking difficult-to-solve questions results in their being ignored by analysts until the final results are in. By that time both question and answer are irrelevant to a tactical business decision and once again the "promise" is lost. In-house analysts can suggest new tools and must deal with their suggestions gaining little traction. "Management works in small networks that look at the same thing. They're worse than g?dd?mn children. You have to whack them on the side of the head to get their attention."

Management's reluctance to take on different tools and methodologies is understandable. Such decisions increase risk and no business wants risk.

"To change the form of a tool is to lose its power. What is a mystery can only be experienced for the first time once."

As online analytics matures it must evolve to survive. I asked for clarification of the statement on the right and was told that yes, there are times when old paradigms need to be tossed aside, and knowing when to do so is a recognizable management skill that can only be exercised by extreme high-level management, by insanely confident upstarts and lastly by (you guessed it) trusted leaders/guides. The speaker had recently returned to the US from a study of successful EU-based startups. When and how paradigms should be shifted and abandoned is a hot topic among 30ish EU entrepreneurs.

“We're suppose to be solving problems. But I can't figure out what problems we're suppose to solve.”

"Random metric names and symbols is not an equation."

(The quote on the right is from Anna O'Brien's Random Acts of Data blog.)

Business and Science are orthogonal, not parallel. Any science-based endeavor works to overcome obstacles; if not directly, then by providing insight into how and which obstacles can be overcome. Business-based endeavors work to generate profit. Science involves empirical investigation; investigation takes time, and only certain businesses can afford time, because unless the science is working at overcoming a business obstacle it's a cost, not a profit.

So if you can't afford the time involved in research and are being paid to solve business problems, your options are limited. Most respondents relied on literature (usually read at home during "family time" or while traveling), conferences, private conversations and blogs. Literature is only produced by people wanting to sell something (this includes yours truly). It may be a book, a conference ticket, a tool, consulting, a metaphysic, …, and even when what they offer is free (as with most blogs) consumers pay with their attention, engagement and time (yes, I know. Especially with my posts).

"...I don't believe in WA anymore, I haven't seen any of my clients change because of it and all the presentations that I've seen are always similar..."

Conferences and similar venues are biased by geography, time and cost (again, even if free you're paying somehow. Whoever is picking up the bar tab and providing the munchies is going to be boasting about how many attended).

Private conversations provide limited access and that leaves blogs. The largest audiences will be (most often) offline in the form of books and online in the form of blogs.

Behold, and without most people realizing it's happening, exemplars form. The exemplar du jour provides the understanding du jour, hence a path to what problems can be solved du jour. Who will survive?

Historical precedent indicates that exemplars who embrace and encourage new models will thrive. More than thrive, they will continue as positive exemplars. Exemplars not embracing or at least acknowledging new models will quickly become negative exemplars, and the "negativity" will be demonstrated socially, first in small group settings, then spilling over into large group settings once a threshold is reached (and once that threshold is reached, watch out!). The latter won't happen overnight and it will definitely happen (my opinion) because all societies follow specific evolutionary and ecological principles (evolutionary biology, Red Queen, Court Jester, evolutionary dynamics, niche construction and adaptive radiation rules (along with others) all apply). The online analytics world is no different.

Some people contacted me about Stephane Hamel's Web Analytics Maturity Model. I knew nothing about it, contacted Stephane, asked to read his full paper (not the shortened version available at), did so, talked with him about it, told him my conclusions and take on it, and got his permission to share those conclusions and takes here. I also asked Stephane if I could apply his model to some of my work with the goal of creating something with objective metricization that would be predictive in nature, and he agreed (if you treat Stephane's axes as clades and consider each node a specific situation, then cladistic analysis tools via Situational Calculus look very promising (asleep yet?)).

A case in point is Stephane Hamel and his Web Analytics Maturity Model (WAMM). Stephane will emerge as an exemplar for several reasons and WAMM is only one of them.

"KISS should be part of the overall philosophy."

WAMM is (my opinion) an excellent first step toward solving some of the issues recognized in Part 1 because it does something psycholinguists know must be done before any problem can be solved: it gives the problem a name. Organizations can place themselves or be placed on a scale of 0-5, Impaired to Addicted (Stephane, did you know that only 1-4 would be considered psychologically healthy?). WAMM helps the online analytics world because it creates a codification, an assessment tool for where an organization is in its online efforts.

I asked Stephane if he thought his tool was a solution to what I identified in Part 1. He agreed with me that it wasn't. Its purpose (my interpretation, with which Stephane agreed) is that it creates a 2D array, creates buckets therein and then explains what goes in each bucket.
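The array-and-buckets idea can be sketched in a few lines. This is only an illustration of the general shape of such an assessment: the axis names, level names between "Impaired" and "Addicted", and the weakest-axis rule are all my invented placeholders, not Stephane's actual WAMM criteria:

```python
# Sketch of a maturity-model assessment: score an organization 0-5 on
# each axis, then bucket it. Axis names and the intermediate level names
# are hypothetical; only the 0-5 Impaired-to-Addicted scale comes from
# the post. The "weakest axis wins" rule is a design choice for this
# sketch, not WAMM's actual scoring method.
LEVELS = ["Impaired", "Initiated", "Operational", "Integrated",
          "Competitor", "Addicted"]

def maturity(scores: dict) -> str:
    """Bucket an organization by its lowest-scoring axis."""
    for axis, score in scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{axis} score must be 0-5, got {score}")
    return LEVELS[min(scores.values())]

org = {"management": 3, "resources": 2, "methodology": 4, "tools": 5}
print(maturity(org))  # weakest axis is resources=2, so LEVELS[2]
```

The value of any such codification is less the formula than the shared vocabulary it creates, which is exactly Stephane's point below.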

I asked Stephane if he believed WAMM provided a metricizable solution with universally agreed-upon objective measures (I told Stephane that I wasn't grasping how WAMM becomes an "x + y = z" type of tool and asked if I'd missed something). Stephane replied "…no, you haven't missed anything, because it is NOT a x+y=z magical/universal formula, that's not the goal. The utmost goal is to enable change, facilitate discussion, and it's not 'black magic'. A formula would imply there is some kind of recipe to success. Just like we can admire Amazon or Google success and could in theory replicate everything they do, you simply can't replicate the brains working there – thus, I think there is a limit to applying a formula (or 'brain power' is a huge randomized value in the formula)."

WAMM and any similar models would be considered observational tools (I explain "observational" tools further down in this post). Most observational tools (I would write "all" but don't have enough data to be convinced) trace their origins (and this is a fascinating study) to surveying: people could walk the land and agree "here is a rise, there is a glen", but it wasn't until surveying tools (the plumb&line, levels, rods&poles, tapes, compass, theodolite, …) came along that territories literally became maps (orienteers can appreciate this easily) that told you "You are here" and gave very precise definitions of where "here" was.

The only problem with observational tools is that the map is not the territory. Yes, large enough maps can help you figure out how to get from "here" to "there", and how far you can travel (how much your business can successfully change) depends on the size of your map, your confidence in your guide/leader, … . Lots of change means maps have to be very large (i.e., very large data fields/sets) and updated regularly (to ensure where you're walking is still where you want to walk). The adage "Here there be dragons" places challenges in a fixed, historical location; it doesn't account for population and migrational dynamics (market movements, audience changes).

Or you need lots of confidence in your leaders.

“…any science first start as art until it's understood and mature enough, no?”

A conclusion of this research is that online analytics is still more art than science, more practitioner than professional (at least in the client/organization's mind). This was demonstrated as a core belief in responses: the ratio of respondents using "practitioner" to "professional" was 6:1. This language use truly shocked me. Even among non-AmerEnglish speakers the psycholinguistics of "practitioner" and "professional" makes itself known. "Practitioner" is to "professional" as "seeking" is to "doing", "deed" to "task", "questing" to "working", …

"The disconnect between what practitioners do and what businesses need is an embarrassment. There's a widening gulf between [online analytics] and business requirements."

Online analytics makes use of mathematics (statistics, anyway) and although some people use formulae, the results are often not repeatable except in incredibly large frames, hence any surgical work is highly questionable. As the USAF Ammo Troop manual states, "Cluster bombing from B-52s is very, very accurate. The bombs are guaranteed always to hit the ground."

A challenge for online analysts may be recognizing that the current state is more art than science and promoting both it and themselves accordingly. They are doing themselves and those they answer to a disservice if they believe and promote that they're doing "science" while the error rates between methods are recognized (probably non-consciously) as "art" by clients. Current models and methods allow for high degrees of flexibility (read "unaccountable error sources").

"Modern medical science has no cure for your condition. Fortunately for you, I'm a quack."

A good metaphor is modern medicine. Without a diagnosis there can be no prognosis. You can attempt a cure, but without a prognosis you have no idea whether the patient is getting better. Most people think a prognosis is what they hear on TV and in the movies: "Doctor, will he live?" "The prognosis is good." Umm…no. A prognosis is a description of the normal course of something, a prediction based on lots of empirical data seasoned with knowledge of the individual's general health. A prognosis of "most people turn blue then die" coupled with observations of "the skin is becoming a healthy pink and the individual is running a marathon" means the cure has worked and the prognosis has failed.

Right now the state of online analytics is like the doctor telling the patient “We know you're ill but we don't know what you have.” The patient asks “Is there a cure?” and the doctor responds, “We don't know that either. Until we know what you have we don't know how to treat you…but we're willing to spend lots of money figuring it out.”

This philosophy works for the individual and not for the whole (witness the recent public outcry over the published mammogram studies; no clearer demonstration of the difficulty of communicating science to non-scientists has occurred in recent years).

But once the disease is named? Then we have essentially put a box around whatever it is. We know its size, its shape and its limits.

There can be no standardization, no normalization of procedure or protocol, when the patient can shop for opinions until they find the one they want.

The challenge current models and methods face is that they serve the hospitals (vendors), not the doctors (practitioners) nor the patients (clients/organizations). It doesn't matter if all the doctors agree on a single diagnosis, what matters is whether or not there is a single prognosis that will heal the client. In that sense, WA is still much more an art than it is a science, and while we may all attend Hogwarts, our individual levels of wizardry may leave much to be desired.

"...but give us a second and we'll run the data again."

If you wish to claim the tools of mathematics then you must be willing to subject yourself to mathematical rigor. Currently there can be no version of Karl Popper's falsifiability when the same tool produces different results each time it's used (forget about different tools producing different results; when the same tool produces different results you're standing at the scientific "Abandon Hope All Ye Who Enter Here" gate).
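The minimum bar implied here is cheap to check: run the same analysis repeatedly on identical input and see whether it agrees with itself. A small sketch (the tool functions, data and tolerance are invented for illustration):

```python
# Sketch: a falsifiability-style sanity check. If a "tool" (analysis
# function) run repeatedly on identical input disagrees with itself,
# its outputs can't serve as evidence for anything.
import random

def stable_metric(visits):
    """Stand-in for a deterministic tool."""
    return sum(visits)

def unstable_metric(visits):
    """Stand-in for a tool with an uncontrolled source of variation."""
    return sum(visits) * random.uniform(0.95, 1.05)  # hidden 5% jitter

def is_reproducible(tool, data, runs=5, tolerance=0.0):
    """True if all runs of tool(data) agree within tolerance."""
    results = [tool(data) for _ in range(runs)]
    return max(results) - min(results) <= tolerance

visits = [120, 95, 143, 88, 201]
print(is_reproducible(stable_metric, visits))    # True
print(is_reproducible(unstable_metric, visits))  # almost certainly False
```

In practice a nonzero tolerance (sampling noise, data-collection lag) is fine, as long as it is stated up front; it's the unstated, unbounded variation that kills the science claim.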

“…gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

All the online tools currently available are "observational" (anthropologists, behavioral ethologists, etiologists, …, rely heavily on such tools). "Observation" is the current online tool sets' origin (going back to the first online analytics implementation at UoH in the early 1990s) and not much has changed. The challenge with observational tools is that they only become predictive when amazingly large numbers are involved. And even then you can only predict generalized mass movement, neither small group nor individual behavior (for either of those you need what PsyOps calls ITATs — Individualizing Target Acquisition Technologies), with the mass' size determining the upper limit of a prediction's accuracy.
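The scaling claim is just the law of large numbers at work, and a short simulation makes it concrete. The population and conversion rate below are invented for the example; the point is that the aggregate estimate tightens as the mass grows while any single individual remains a weighted coin flip:

```python
# Sketch: observational data predicts aggregate behavior increasingly
# well as group size grows, but individual behavior stays a coin flip.
import random
import statistics

random.seed(42)
TRUE_RATE = 0.3  # fraction of the (invented) population that "converts"

def observed_rate(n):
    """Observe n individuals; return the measured conversion rate."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Average error of the aggregate prediction shrinks as the mass grows
# (roughly as 1/sqrt(n))...
for n in (10, 1000, 100000):
    errors = [abs(observed_rate(n) - TRUE_RATE) for _ in range(50)]
    print(n, round(statistics.mean(errors), 4))

# ...but for a single individual the best aggregate-based guess
# ("won't convert") is right only 70% of the time, no matter how
# much observational data backs it.
```

Nothing in the larger sample improves the individual-level guess; that's the gap the ITAT-style tools mentioned above are meant to close.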

At this point we start circling back to Part 1's discussion of "accountability" and why the suggestion of it gets more nervous laughter than serious nods. Respondents' language indicates more of a desire to keep WA an art than a science; there is less accountability when things are an art form. But "metrics as an art" is in direct conflict with client goals, and unless a great majority of practitioners wish their industry to mature there is no cure for its current malaise.

"The promise has been unfulfilled since 2003. We were talking about more effective marketing, improved customer retention and all that stuff back then."

One solution is giving the industry time to mature. Right now there is conflict between the art and science paradigms, between Aristophanes' "Let each man exercise the art he knows" and Lee Mockenstrum's "Information is a measure of the reduction of uncertainty."

Time as a solution has been demonstrated historically, most obviously in our medical metaphor. Village wisdomkeepers gave way to doctors and then to university degrees in medicine because the buying public (economic pressure) demanded consistency of care and cures. Eventually things will circle back, again due to economic pressure: enough clients will seek alternatives not provided by institutional medicine and go back to practitioners of alternative medicine, at which point the cycle will begin again. People have been openly seeking alternative cures for catastrophic illnesses since the 1960s. Eventually money began escaping institutional medicine's purview and insurers were forced to pay. The end result was that institutional medicine and insurers started recognizing and accepting alternative medical technologies…provided some certification took place, usually through a university program.

It will be interesting to see how WAMM economizes the online analytics ecology: will practitioners decide institutions lower in the WAMM matrix are too expensive to deal with? If so, such institutions — which require experienced practitioners to survive — will only be able to afford low-quality, low-experience practitioners to help them. This can be likened to a naval gunnery axiom, "The farther one is from a target, either the larger the shell or the better the targeting mechanism": companies will opt for larger shells (poorly defined efforts) rather than better targeting mechanisms (experienced practitioners).

“A dominant strand for [online analytics] the past ten fifteen years has been incorporating web information with executive decisions.”

So far no single solution to the concerns raised in this research is apparent (to me). Instead a solution matrix of several components seems most likely to succeed (WAMM is a type of solution matrix: you can excel along any axis, and to be successful you need to excel evenly along all axes). So far three matrix elements — time, a lack of leadership and realism — have been identified. Time to mature is culture dependent, so the online community as a whole must do the work.

"Not enough gets said about the importance of abandoning crap."

(I believe the quote on the right originated with Ira Glass.)

Realism — in the sense of being realistic about what should be expected and what can be accomplished — deals with social mores and leads into the "lack of leadership" concern. There can be no "realism" until the social frame accepts "realism" as a standard, until hype and promise are dismissed, and this isn't likely to happen until leaders/exemplars emerge who make it so.

“Yes, I see your point. Please remove my post from your blog”

Progress in any discipline depends on public debate and the criticism of ideas. That recognized, it is unfortunate that the current modes of online analytics public debate and criticism are limited to conferences, private conversations and (as witnessed here) online posts. Conferences (by their nature) only allow for stentorian and HiPPOish debate; private conversations only allow for senatorial flow. In both cases the community at large doesn't take part.

Blogs and related online venues are an interesting situation. They provide a means for voices to be raised from the crowd. Social mechanics research NextStage has been doing (we're working on a whitepaper) documents how leaders emerge (become senatorial, sometimes stentorian and in some cases HiPPOtic), how they fade, how to create and destroy them (for marketing purposes), (probably most importantly) how a given audience will perceive and respond to a given leader and what an individual can do regarding their own leadership status.

"The WAA is very US focused."

I bring this into the discussion because several people commented publicly (both in Part 1 comments and elsewhere) and privately (emails and skypes) that the industry (more true of web than search) suffers from a lack of leadership.

People who enjoy the mantle of leadership yet refuse to lead are not leaders. Recognized names had an opportunity both to join and to take leadership in the discussion (I mention some who did at the top of this post). Yet the majority either failed to respond, chose to ignore the discussion or — as indicated by the quote opening this section — simply backed away when the discussion was engaged. No explanation, no attempt at writing something else. Considering the traffic, twits and follow-up posts on other blogs (for something I posted, anyway), this was an opportunity for people to step forward, especially when lots of other people were writing that there was a leadership vacuum.

Leaders/Influencers take different forms (as documented in the previously mentioned social mechanics paper). Two forms are Guide and Responder. Guides are those who are in front. They may know the way (hence are “experts”) and may not. Experts may or may not be trusted depending on how well they can demonstrate their expertise safely to their followers (you learn to trust your guide quickly if you've ever gone walking on Scottish bogs: they demonstrate their knowledge by saying “Don't step there”, you step there and go in over your head, at which point they pull you out and say “I said, 'Don't step there'.” A clear, clean, quick demonstration of expertise).

Guides who don't know the way rely heavily on the trust of those following them and can be likened to “chain of command” situations; they are followed because they are trusted and have the moral authority to be followed.

The Guide role is definitely riskier. It's also the more respected one because Guides lead by “being in front of the pack, stepping carefully, being able to read the trail signs hence guiding them safely”. The Responder doesn't lead by being in front. Instead they assume a position “closer to the end, perpetually working at catching up, but always telling the pack where to go, where to look and what to do”. The major problem for Responders is that people don't have much respect for that latter role. They may respect the individual, but most people will quickly recognize the role being played, and the lack of respect will filter back to the individual.

This plays greatly into any industry's maturation cycle. New school will replace old school and unless our forebears' wisdom is truly sage — evergreen rather than time-and-place dependent — the emerging schools will seek their own influencers, leaders and guides. This is already being demonstrated in the fractionalizing of the conference market.

One industry leader offered three points in a comment, saw my response and asked that I remove their comment before it went live. I'm going to address two points (the third was narrative and doesn't apply) because I believe the points should be part of the discussion and more so due to their origin.

First, Web Analytics is not a specific activity.

People need to look beyond the first conclusions that come to mind.

I responded that nothing I'd researched thus far led me to think of 'Web Analytics' as an 'always do this – always get that' type of activity and offered that while different people use 'Web Analytics' for different purposes, the malaise is quite pervasive. Whether 'Web Analytics' includes a host of different activities is irrelevant to the discussion. The analysts' dissatisfaction with their role in the larger business frame, their dissatisfaction with the tools they are asked or choose to use, their dissatisfaction with their 'poor country cousin' position in the chain-of-command, …, are what need to be addressed.

Second, the individual wrote that there was no “right way” to do web analytics.

I both agreed and disagreed with this and explained that there are lots of ways to dig a hole. In the end, the question is 'Did you dig the hole?' More specifically, if one is asked to excavate a foundation hole, dig a grave, plow a field, dig a well, plant tomatoes, …, all involve digging holes, yet each requires different tools (time dependency for completion becomes an issue, I know; you can excavate a foundation hole with a hand trowel. I wouldn't want to, but you could). Stating that 'There is a right way to do it' is a faulty assumption demonstrates a belief that standardization will never apply and that, therefore, chaos is the rule.

Chaos being the rule is usually indicative of crossing a cultural boundary (such as a western educated individual having to survive in the Bush. None of the socio-cognitive rules apply until the western individual learns the rules of the Bush culture) or crazy-making behavior (from family and group dynamics theory). Culture of any kind is basically a war against chaos and what cultures do is create rules for proper conduct and tool use within their norms.

One could conjecture that the cross-cultural boundary is the analytics-management boundary. So long as management controls that boundary a) there will be no “one way” to do analytics (the patients will self-diagnose and -prescribe) and b) analytics will never be granted a seat at the grown-ups' table.

The numbers need a context.

So there had better be a 'right way to do it', at least as far as delivering results and being understood are concerned, because without that the industry — more accurately, the practitioners — are lost.

“I could tell them 'It is not possible to send in the Armadillos for this particular effort but communication will continue without interruption' and they'd nod and agree.”

Two needs surfaced quickly:

  • recognize what's achievable when (so people aren't set up to fail) and
  • learn how to promote faster adoption of an agenda (without going to Lysistratic extremes, of course. Everybody wants to keep their job).

Accepting increased accountability addresses some issues but not all. Concepts from several sources (some distilled and not in quotes, some stated more elegantly than I could and in quotes) revealed the following additional matrix components:

1) “[online] Analysts need to share the error margins, not the final analysis, of their tools”
2) stop or at least recognize and honestly report measurement inflation
3) “Trainings need to focus on a proficiency threshold”
4) “…provide a strong evidence of benefit”
5) understand what [a tool] is really reporting
6) “It's better to come at [online analytics] from a business background than the other way around…” (“…but who wants the cut in pay?”)
7) “We should standardize reports because the vendors won't”
8) initiate regular, recognized adaptive testing for higher level practitioners
9) include communication and risk assessment training (some time we're at a conference, ask me about the bat&ball question. It's an amazingly simple way to discover one's risk assessment abilities)
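Item 1's call to share error margins can be illustrated with a short sketch. This is a generic illustration under my own assumptions, not any vendor's method: it computes a normal-approximation 95% interval for a measured conversion rate, so an analyst can report “3.0% ± 0.5%” instead of a bare point estimate. The function name and figures are hypothetical.

```python
import math

def conversion_ci(conversions, visits, z=1.96):
    """Return (rate, margin) for a measured conversion rate.

    Uses the normal approximation to the binomial distribution;
    z=1.96 corresponds to a ~95% confidence interval.
    """
    rate = conversions / visits
    margin = z * math.sqrt(rate * (1 - rate) / visits)
    return rate, margin

# Report the margin alongside the point estimate:
rate, margin = conversion_ci(120, 4000)
print(f"Conversion rate: {rate:.1%} +/- {margin:.1%}")
```

The point of the exercise is item 1 itself: the client sees how much (or how little) the number can be trusted before acting on it.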

We must work to get uncertainty off the table.

“The problem is uncertainty…”

That's a long component list and most readers will justifiably back away or become overwhelmed and disheartened. Fortunately there are historically proven, overlapping strategies for dealing with the above items collectively rather than individually.

  • Analysts live with uncertainty, clients fear it, so “…get uncertainty off the table” when presenting reports (this was termed “stop hedging your bets” by some respondents). This single point addresses items 1, 2, 4, 5, 8 and 9 above (hopefully you begin to appreciate that working diligently on any one component suggested here will accrue benefits in several directions, so to speak).
  • Identify the real problem so you can respond to their (management's) problem. This point addresses items 1, 2, 3, 4, 5, 6, 7 and 9.
  • Speak their (management's) language. Items 4, 5, 6, 7 and 9.
  • Learn to communicate the same message many ways without violating the core message (we've isolated eight vectors addressing this and the previous item: urgency, certainty, integrity, language facility, positioning, hope, outcome emphasis (Rene, I'm seeing another tool. Are you?)). Items 3, 4, 5, 6, 7, 8 and 9 are handled here.
  • Be drastic. Rethink and redo from the bottom up if you have to. This point deals with items 1, 2, 4, 5, 8 and 9.
  • Focus on opportunities, not difficulties. This point deals with items 4, 5, 6 and 9.

Any one of the above will cover several matrix components right out of the gate. The benefit to any of the above stratagems is that implementing any one will cause other stratagems to root over time as well, and thus the shift

  • in what the numbers are about,
  • how they are demonstrated,
  • how to derive actionable meaning from them and
  • how accountability is framed

mentioned at the end of part 1 can be easily (well, at least more easily) achieved.


I wrote a little about how this study was done in part 1. We contacted some people via email and performed various analyses on their responses; others via phone, ditto; others via skype, ditto; and some in face-to-face conversation. All electronic information exchanges were retained and analyzed using a variety of analog and digital tools. Face-to-face conversations were performed with at least one other observer present to check for personal biasing in the resulting analysis.

Like any research, others will need to add their voices and thoughts to the work presented here. I make no claims to its completeness, only that it's as complete as current time and resources allow.



The Unfulfilled Promise of Online Analytics, Part 1

Man is the symbol using animal,
Inventor of the negative, separated from his natural condition
By instruments of his own making
Goaded by the spirit of Hierarchy,
With knowledge of his own mortality
And rotten with perfection.
– Kenneth Burke

I've had a theory for a while that thermodynamic principles can be used to predict user attitudes and behaviors in finite populations. A population threshold has to be reached before accuracy can be achieved. One prediction of the theory is that once that threshold has been reached, the largest segment of the user population will be unsatisfied users for any given product or service. You don't need to sample the entire population or even the threshold. Another fallout of the theory is that you can create an exemplar group, study that, and make extremely accurate predictions about the entire population (including segments not represented by the exemplar). I've been studying population dynamics for different industries for a while and had an opportunity to study the online analytics community; some results of that study are shared here.

I initiated the study by sending the following (or at least a similar) request to people in the online analytics community.

would you be willing to write me your thoughts on “the unfulfilled promise of web analytics/search”? I'm preparing a column/blog post. Your response will be kept confidential (I'm keeping everybody's responses confidential).


My request was intentionally open-ended (surprise!). I wanted to know their responses, not what might be predicated on any guidance on my part.

One respondent wrote, “In a survey, this question would be tagged with a 'leader bias'.” I pointed out that as this particular respondent opened their response with “The question was 'why is it that web analytics isn't delivering on its promise'?” they demonstrated that they were quite willing to follow any leader bias that may have existed. The fact that they rephrased my original request demonstrates that the bias — hence prejudices, acceptances and beliefs — existed long before my request was made.

The question isn't whether or not leader bias exists; the question is “where were respondents willing to be led?” This is typical of my (and NextStage's) use of the Chinese General Solicitation. Knowing what someone responds isn't as actionable as knowing how they respond (you're shocked I'd offer that, yes?).

I also didn't ask for responses from people who had previously demonstrated (via other writings, etc) they had a “company line” or brand to protect.

The title of this post was originally “The Unfulfilled Promise of Web Analytics” and came from a conversation I had in early June '09. I was talking with some folks, one of whom was an SVP of web analytics and marketing for an international marketing company. Unprompted, this individual shared their disillusion with the web analytics field and provided the title phrase.

In all I received some 60 responses, ranging from people “just doing their job” to others with national and in some cases international reputations to protect. The responses came from everywhere except South America, Africa and East Asia (I hope to cast a better net in the future). I kept the original spellings and grammar of respondents when I quote them (AmerEnglish is not the native language of several of them) because doing so keeps their intent clear over my own.

I've studied the “largest user group will be 'unsatisfied' users” phenomenon across industries and (so far) it holds true. No doubt I'll write a formal research paper (and include an extensive bibliography) about this phenomenon in my copious free time someday.

In the meantime, allow me to share the results specific to the online analytics industry with you.

See this tool? I must know what I'm doing because I use this tool.

The quote above was made to me during a discussion. It was offered jokingly and I accept it as such. I also know a little about how the mind works and where such statements — even as jokes — come from.

The person making that comment went on to tell me about a recent conference they attended. At some point a bunch of attendees got into a cab to go out to dinner. One of them offered that companies wanted more and more accountability in their analytics.

The universal response was “What? Accountability? It's time to get into another business.”

Such responses are understandable and they can only be made by people at or near the top of their industry. Nobody wants to work and everybody wants to play. The more fun (play) they can have in their job the more they'll enjoy it. Being accountable isn't fun, though.

And such responses must be put next to “When Web Analytics came around, my first thought was 'cool, plenty of data'. Little did I know data would replace the actual business reflection that spun all of this.”

I recognized true schisms in the responses. I'll mention one here because it relates to the first quote above (I'll get to the other schisms further on). It deals with people experiencing non-conscious incongruities between their identity-core and their identity-personality (more colloquially, the Impostor Syndrome: feeling they're frauds. Personality, Identity and Core make up an individual's psychological self-concept; different disciplines have different terms for these elements). I would have thought such sentiment would be prevalent at the lower end of the disciplinary spectrum, and it wasn't. More than one well-recognized individual shared that they feel like the emperor without any clothes when challenged about their conclusions. They often want to respond as indicated above: “See this tool? I must know what I'm doing because I use this tool.”

The schism here was more psychological and psycho-social than analytical. Did these individuals have confidence in their analysis? Most often, yes. Did they have confidence their analysis would be accepted/have meaning/provide value?


Sensing or believing that one's work is not honored or respected is damning. Such attitudes are psychological death to the vulnerable and emotionally uncomfortable to the strong.

“…the question of accuracy did not shake off easily. To be totally honest, I kept this to myself.”

Online Analytics is a numerical discipline. That's its whole point; here are numbers that prove something. It is not a psycho-social discipline or, as one respondent wrote, “Maybe new fields need to emerge — web psycho-analytics perhaps?”

Such fields may already exist and, if not, may emerge. What is true about them — if they're primarily left to the current online analytics paradigm — is that they will require large numbers to demonstrate accuracy. The accurate metricizing of any social system (the internet as an information exchange is such a system) requires threshold numbers for accuracy to be demonstrated when traditional methods are used. For example, a data space of 50,000 people within 2 days is reasonable for traditional analytics methods to prevail (use of conditional change models can shrink these numbers considerably). Typical numerical methods involving smaller populations require either longer timeframes or smaller environments to demonstrate reliable, repeatable business value.
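The specific threshold figures above are the author's; the general statistical point behind them can be sketched in a few lines. A minimal illustration, under the standard assumption of independent observations: the worst-case margin of error on a measured proportion shrinks only with the square root of the sample size, which is why small traffic volumes yield wide, hard-to-act-on estimates.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p=0.5) ~95% margin of error for a proportion
    estimated from n independent observations."""
    return z * math.sqrt(p * (1 - p) / n)

# Margins shrink with sqrt(n): a 100x larger sample
# only yields a 10x tighter estimate.
for n in (500, 5_000, 50_000):
    print(f"n={n:>6}: +/- {margin_of_error(n):.1%}")
```

Under these assumptions, 500 observations leave a margin near ±4.4% while 50,000 tighten it to roughly ±0.4%.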

“Web psycho-analytics” requires different numerical methods and mathematical paradigms from traditional analytics to demonstrate reliable, repeatable business value.

The above is especially true when individualization — the ability to recognize a visitor as neuro-socio-psychologically unique from all other visitors — is to occur.

However, until such methods are widely adopted clients and consultants are left with

  • conflicting numbers from different tools (“…your analytics will not match the vendor's numbers. If you add two or three analytics systems, the numbers will not match each other. This creates situations where it is impossible to reconcile any data sets.”),
  • conflicting numbers from the same tool (“Even using the same tool depending of how it is set-up it can lead to very different numbers.”),
  • tools that are difficult to use (“And those vendors said it was really, really easy! Pfff! Liars!”),
  • conflicting vendor definitions (“Vendors have different standards, meaning that what one vendor considers a visit is not the same as another vendor, thus making comparisons is often misleading.”) and
  • unachievable expectations (“Web Analytics is often sold as the thing that will improve your website results by 100-200%, well that's not true.”)
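The irreconcilable-numbers problem in the first bullets can be illustrated with a toy sketch. Everything here is hypothetical (timestamps, timeout values) and is not any vendor's actual algorithm: the same four raw hits become three “visits” under a 30-minute session timeout and two under a 60-minute one, so two correctly functioning tools can disagree without either being wrong by its own definition.

```python
from datetime import datetime, timedelta

def count_visits(timestamps, timeout_minutes):
    """Count 'visits' in one visitor's clickstream: a new visit
    starts whenever the gap since the previous hit exceeds the
    session timeout. Vendors that choose different timeouts will
    report different visit counts from identical raw hits."""
    visits = 0
    last = None
    gap = timedelta(minutes=timeout_minutes)
    for t in sorted(timestamps):
        if last is None or t - last > gap:
            visits += 1
        last = t
    return visits

hits = [datetime(2009, 7, 1, 9, 0),
        datetime(2009, 7, 1, 9, 20),
        datetime(2009, 7, 1, 9, 55),
        datetime(2009, 7, 1, 11, 0)]

print(count_visits(hits, 30))  # 3 visits with a 30-minute timeout
print(count_visits(hits, 60))  # 2 visits with a 60-minute timeout
```

The session timeout is only one of many definitional knobs (filtering, cookie handling, bot exclusion), which is why simply lining two tools' reports up side by side is misleading.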

“…talk to other people about what you were trying to accomplish and beg for them to play along.”

Jim Sterne asked me what I'd learned about web analysts a while back at an eM SF; I was onstage at the time. My statements have (I believe) proved Cassandric. I offered that there was discontent bordering on malcontent and little to no job satisfaction, and advised the eM staff to start shifting their conference focus from pure WA to cross-disciplinary offerings. I've also openly stated that I left the WAA because it had all the hallmarks of a society in decline (think Rome, Persia, the USSR, …, all overthrown by invaders from without or within).

This comment was made a few years back and I have no knowledge of the WAA as it exists now.

Where does this discontent among online analysts come from?

One place is unaccepted accuracy (as mentioned above). And if the accuracy is accepted, it isn't acted upon. But there are a lot of hindrances to accuracy that are beyond the analyst's control.

Tagging seems to be a major issue in this area. Tagging was originally considered a solution to the accuracy problem. But the world works in balance — especially when unnatural processes are assembled together. Tagging solved accuracy issues but required more sophistication in the collection and analysis of the data. This sophistication required the involvement of other organizational players, some of whom couldn't or wouldn't play along. The end result is that tagging — a relatively simple concept and method — still does not have an industry standard.
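One way incomplete tag deployment undermines the accuracy tagging was meant to deliver can be sketched in a few lines. Everything here is hypothetical (page names, counts): a page-side tag only fires on pages where it was actually deployed, so tag-based totals silently drift away from log-based totals when other organizational players don't play along.

```python
# Hypothetical illustration: pages the tagging effort missed
# vanish from tag-based counts while still appearing in server
# logs, so the two "truths" can never reconcile.

server_log = ["/home", "/products", "/checkout", "/home",
              "/landing-page", "/checkout", "/landing-page"]

tagged_pages = {"/home", "/products", "/checkout"}  # "/landing-page" was never tagged

log_pageviews = len(server_log)
tag_pageviews = sum(1 for page in server_log if page in tagged_pages)

print(f"Log-based count: {log_pageviews}")  # 7
print(f"Tag-based count: {tag_pageviews}")  # 5
```

The gap isn't a software bug in either system; it's an organizational one, which is why a tool swap doesn't fix it.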

The tagging problem (I believe) would go away if clients — not vendors and not consultants — were invited to find common ground in what they're looking for (more on this later). At present clients have no fixed, pervasive idea of what advantage online analytics provides. It doesn't reduce costs; reducing costs is done by rethinking processes. Instead of rethinking their internal processes companies “…have become very lazy.” When the website isn't producing what they believe it should produce, the solution is to get another tool.

But there is no magic bullet. Companies who go from one tool to the next are like psychotherapy patients who stay in therapy with no desire to get well. One respondent confided “So long as I show that I'm doing something I'm not responsible if nothing useful gets done.”

The unfulfilled promise of web analytics, at the root, is because of people.

Business politics cannot be ignored when considering the unfulfilled promise of online analytics. “Where one person has all the authority and all the ability to change a site as they see fit: optimization actually really works. A new headline here, a picture of somebody looking into the camera there. Demand increases, everybody's happy. But, for most corporations, this is not the case” was a sentiment stated often, if not as eloquently, by many respondents.

“If only I had this report, as shown into the vendors slick presentations…”

“All we get in the tools are simple averages and little in terms of correlation. In my opinion this leads me as an analyst to have a high degree of scepticism in the underlying data and hence difficult to really delve into hard core analysis of the data.”

One of the problems that came through the responses was that online analysts often demonstrate a victim mentality. This was greatly the case in web analytics and often in search. When asked directly, “If you know these problems exist in your industry why don't you take steps to solve them?” responses tended to manifest feelings and verbiage of powerlessness. The phrases “We can't do anything to solve this.”, “It's out of our hands.”, “It's beyond our control.” and “We don't have access to those people.” were repeated on both sides of the Atlantic. One respondent offered “…is it in the realm of a little web analyst within a large multinational to actually do that?”

Consulting online analysts are caught between the vendors and the clients they serve. If not a victim mentality, this “serving two masters” creates a psychology that's very close. It also ties back to Why hasn't Marketing caught on as a “Science”? and Matching Marketing and IT Mythologies about analysts and marketers finding common ground.

Consensus points abounded on these research elements. Vendors were viewed

  • as only being interested in selling licenses,
  • as promising more than could be delivered (“the space has been high jacked by vendors who promise a mountain of diamonds without much effort. This is not true.”),
  • as not offering proper or worthwhile optimization tools and methods (“…the important thing is the optimization that is done afterwards.”) or
  • as offering wolves in sheep's clothing — tools that actually produced simplistic results, could not do deep analysis and therefore produced skepticism about the underlying data.

“Analytics and the web are suppose to be transparent and easy to track. However, once you start marketing you find out that is not really the case.”

It is likely that as more and more accountability is demanded from different organizational groups, measurement efforts will merge and (perhaps) result in easier corporate buy-in. What may not go down well is that these efforts are more likely to come from marketing than from analytics. Multi-channel marketing will need to learn from online analytics if it is to have value to any business.

“There are simply not enough employees in the companies focusing on adopting web analytics in the organization.”

A challenge that falls out of the above section and quote is one truer for web analysts than for their search-based compatriots in any given organization. Web analysts have fewer champions at the top of the corporate ladders than do marketers and search (which is often not considered an analytics discipline even though the science of search has been documented elsewhere). Marketers have traditionally been closer to the top of the corporate recognition ladder than analysts could ever be. This goes back to the opening statements about work versus play; marketers play, analysts work. This is demonstrated in language if not in physical reality, and one needs to recognize that perception is reality.

Online analytics grew out of (and could very well still be mired in) IT departments. Worse, any kind of analytics smells of accountants (hence accountability), and everybody knows the accountants only come in when the business has failed or is recognizably close to it. One respondent wrote that they knew their company was in trouble when the bank sent accountants in (evidently the waves of layoffs and learning they were US$20M in the hole weren't warnings enough).

Online analytics is a discipline of numbers. Whenever there's a discipline of numbers it means there's an evidentiary trail for decisions. Consider the political and psycho-economic meaning of this for a moment.

If I have the option of taking advice from someone who goes with their gut then I really can't be held accountable because there are no numbers, therefore from any evidentiary standpoint I'm pretty safe. Should things go sour it's a political issue because there was no real evidence that we should have gone pro or con, we went with our guts, flipped a coin and took what came.

Even better, it was (point finger in some general direction) their gut feeling, we went with it, it flopped, it was their gut not mine, they're out and I'm still good.

But if I go with hard numbers and my decision is in error? Now it's psycho-economic and I'm the idiot and fool because I didn't understand what I was doing. Both I and the group that helped me make the decision are forfeit.

So which is politically safer to place higher on the corporate ladder, to listen to and feel good about? But even at the top of the corporate ladder guts and numbers are in conflict; the average corporate lifespan of a CMO is about two years, often less.

“The tools promise a lot, and can live up to most of it.”

Most psychotherapists would look at the responses and recognize a love-hate relationship in the making if not already extant.

However, the love-hate relationship doesn't take the form most psychotherapists are familiar with. Most love-hate relationships exist between an individual and some one thing external to that individual (another person, another thing). The love-hate continuum usually takes the form of “I can't live with (it) and I can't live without (it)”.

“…many larger companies buy (what used to be) expensive, full-featured web analytics packages, only to use the tip of the iceberg: the core metrics that should be obvious.”

This isn't the case with analysts. Most of those surveyed liked what they do and believe they add value for their efforts. They love what they do, just not who they do it for or how it is done (“It's not so much the unrealized promise of web analytics, as organizational politics leading to weak and vaguely defined goals in larger organizations.”). This creates a triangulism, and triangulisms are always psychologically deadly.

An example of psychological triangulism is the parent who loves their partner and recognizes their partner has an unhealthy relationship with their common child. Parent 1 is caught between protecting the child from the partner and protecting the partner from the eventual wrath of the child (think Oedipus and Electra). Their loyalties are constantly divided (as mentioned above and especially if no psychological reward manifests itself). The psychological challenge escalates until Parent 1 finds themselves developing their own animosity towards the child. They mistakenly believe if the child were not present the parent-partner relationship would be better.

The end result is that both the child and the parent-partner relationship suffer. Here it is the analyst-client relationship and the online analytics industry that suffer.

This tension is manifesting in the industry in the same way it manifests in the therapist's office — fingerpointing. Consider the following responses, some obviously from consultants, others from vendors, and those where the lines blur greatly:

  • “What's sure is that when it comes to Web Analytics and vendors, there's just one number that counts: quarterly sales. When it comes to clients, well they don't really know which number they're looking for but know it's damn hard & expensive to get it. Those in between, the expensive consultants, they're just trying to make a living and fight for peace on earth and accountable decision making.”
  • “Log file data and Web analytics are both sources of information. They are tools, like a hammer. A hammer in the hands of an unskilled, ignorant but self-righteous and overly confident carpenter? That is a scary thought. Well, it is equally as scary to me about Web analytics and log file data. There are plenty of unskilled, ignorant, self-righteous, and overly confident search engine optimization (SEO) professionals, Web analysts, and other marketing people. Even many search engine software engineers are not competent carpenters or architects, but they honestly believe they are. And we are buying what they have to sell.”
  • “I honestly don't think there are unfulfilled promises of web analytics. The companies are doing great and the software is progressing all the time. I love analytics!”
  • “Why is it so hard for people in the web to take actions and optimize based on what the tool reports? One of the reasons of this is that often they don't have a clue of what can be changed or they have an idea which is incorrect.”
  • “If web analytics is under-delivering in any way, it is largely because of most organizations inability to address web analytics at the strategic level rather than a tactical tool to optimize the online marketing channel.”
  • “I think the promise is fulfilled for some and not for others! The difference is the level of sophistication of the user. For some companies, even if they deploy it properly there is more volume and nuance to the data than they can properly grok.”

Is more training the answer? And if so, who do we train?

I'll restate here what I wrote in Learning to Use New Tools; the use of any tool is going to require training across the usage spectrum. The use of new tools definitely so. This training can be self-training and the user should be prepared for scraped knuckles, smashed thumbs and lots of cursing. Self-training is great when the user has lots of time and patience. Otherwise, take a class or let the experts (“consultants”) in.

Do remember Buckminster Fuller's definition — An expert is someone who can spit over a boxcar. I often tell people that the front of my shirt is soaked from my failed efforts.

More training is the answer only if the training results in well-reasoned and understandable business actions. Tools and trainings are worthless without knowing what one wants to build (“It reminds me of the development of web sites themselves ten years ago – everybody had to have one, still not being absolutely sure what to use them for. Of course the free tools have done their part in this evolution.”).

“The unfulfilled promise of web analytics and search is measuring outcomes instead of outputs.”

Our culture (western, not analytics) has been “objective and evidence driven” for about 400 years. There has been the unstated Field of Dreams-like belief that “If you have the numbers, the truth will come”.

I believe most of the analysts surveyed would consider this a desirable yet inaccurate depiction of the real world. Their tools produce “…beautiful charts that don't tell me what to do to make things different – not better, just different. For that I have to go somewhere else.” None of the analysts surveyed wrote or talked about growth curves, forward discounting, debts, rates of depreciation, technological obsolescence, energy consumption (the company that can correctly respond to market needs faster wins because a) it responded correctly and b) it required less energy to do so).

This greatly surprised me. For all the analysts “in the room”, none talked of analysis. Several responses demonstrated a level of contempt regarding available tools (vendor agnostic) so it's possible analysis per se isn't a subject of high regard in its own community.

The above presents a discomfiting scenario. It demonstrates a severe disconnect between “what should be” and “what is”, something in keeping with C.P. Snow's two cultures yet far more pervasive (in this industry) therefore far more damaging.

If this paper focuses more on psychologies than on analytics it's because the responses dictated it so. As one respondent put it, you “need a team of people who know what all this is about to digest it for the more common mortals.”

“…analysis is a story based upon data put into context.”

The quote starting this section was very telling but not unique. No respondents believed the numbers alone proved anything, not even when presented as part of a strategy. And few respondents seemed as comfortable in boardrooms as in spreadsheets.

Yet the need for online analytics to be part of a larger picture, a grander story, was everywhere. Analysts uniformly perceive themselves as

  • not part of a unified business reporting structure,
  • not contributing to the big picture, and
  • lacking the political power or psycho-social maturity (within the organization) to sit at the grownups' table.

“And then there's this vague notion from Mr. Kaushik…”

Let me emphasize that I did not choose the exemplars noted in this paper. Respondents demonstrated exemplar recognition in conversation and written material.

Any pervasive duality will present itself in exemplars (not to be confused with my previous mention of exemplars as part of this research). Here the exemplars (or probably more accurately, “doyens”) are Avinash Kaushik and Eric Peterson, with Avinash Kaushik leading the pack in references by almost three to one.

Equally interesting was that the anti-Kaushik camp's complaint wasn't necessarily against Avinash Kaushik himself; it was against his “You, too, can do this” mantra (perceived if not actual). Yet another schism appears: those who need (for whatever reason) analytics to be hard and those who need it to be easy.

Eric Peterson is well known for his “Web Analytics is hard” statement (an interesting Reading Virtual Minds Vol. 1: Science and History tie-in: the majority of respondents wrote “web analytics” or “search”. Very few capitalized online analytic disciplines. Most people capitalize their own discipline; doing so demonstrates a non-conscious recognition of the value of what they do).

This belief begs the question of whether or not something can be “hard” (meaning “difficult”) if it is properly understood. Educational Psychology, Cognitive Psychology, Sports Medicine, Kinesiology and related disciplines all demonstrate that anything done improperly is hard. Many people give up on mathematics due to poor teachers, poor curriculum, lack of discipline, … To them, math is hard. Aikido is dangerous without proper instructors present.

But is something in and of itself difficult? Only if there's a social or political reason for it to be so. Perhaps the priests wish to keep the mysteries of the divine for themselves. This provides them the opportunity to select who'll enter their ranks, who'll excel, and to whom the teachings will be “difficult”. Only one respondent offered a centering attitude (“I'd rather be of the school of thought that web analytics can be easier… if given time and approached in the right way.”).

Here again politics more than psycho-economics rears its head. “I will protect my (place in the) industry by making it difficult for others to succeed in that industry”, hence controlling the industry itself. The problem with this ethos is that eventually a large enough (née “threshold”) group will arise that takes the industry in some other direction completely.

There are psychologic ramifications to both “hard” and “easy” statements. “Hard” statements set up the majority of participants to fail, or if not to fail then to prepare for failure rather than success. Likewise, the “easy” statement can cause false expectations of success to develop. What is obvious from the responses is that Avinash Kaushik owns the “actionable outcomes” space and the neuro- and psycho-linguistic Towards space when it comes to online analytics as a discipline (his was the only work directly quoted in the responses: “Actionable insights and metrics are the uber-goal simply because they drive strategic differentiation and a sustainable competitive advantage.”), and Eric Peterson owns the neuro- and psycho-linguistic AwayFrom space.

AwayFrom and Towards are used in their neuro- and psycho-linguistic sense here to describe how people, hence the industry, are thinking, not necessarily how the industry is moving. See Chris Bjorklund's interview with viral marketing expert Joseph Carrabis, founder of NextStage Evolution (Part 4a), and Using Sound and Music on Websites for more on these concepts.

The exemplar messaging is polarizing an industry already divided by a great many other factors. I can say playing guitar is easy and I know I'm never going to be a Segovia or Kottke. Likewise, I recognize I could play better if I practiced more. This “centering of duality” needs to take place in the online analytics world if it is to survive, yet most respondents demonstrated extremum statements (statements with language demonstrating polarity behavior and belief) rather than centering statements (statements with language demonstrating unifying or centering behavior and belief) in their responses.

All things require some degree of practice before facility in their use is obvious. There's also the intersection of lack of correct practice and lack of understanding. This can be mixed into The Impostor Syndrome mentioned earlier (see Reading Virtual Minds Vol. 1: Science and History or I'm the Intersection of Four Statements for more on The Impostor Syndrome). Anything can be difficult if the practitioner doesn't really understand what they're doing, is acting by rote but from neither repetitive action nor repetitive practice of the correct action.

Disciplines may be represented by exemplars, and responses to the exemplars are sometimes not responses to the discipline. Respondents tended to present AwayFrom behaviors regarding Avinash Kaushik and Towards behaviors regarding Eric Peterson in their responses (noting, as offered earlier, that Avinash Kaushik is more in their consciousness than is Eric Peterson, and with basic normalization applied).

These presentations are understandable. Correct or not, the perception is that Avinash Kaushik wants to move the industry away from a “numbers are evidence” basis (one respondent offered “And then there's this vague notion from Mr. Kaushik: give more insights, knowing more about what's going on within your visitors minds & hearts so that you can better service them. Sure, cool, sounds great. Still scratching my head. With surveys you say? Asking them a question? Just 4 questions? Ok so when I get the answers, is this representative? Should it influence my copywriting, my product offering, my pricing scheme?”).

The concept of “knowing more about what's going on within your visitors hearts and minds” is one I and NextStage strongly encourage.

You're shocked, I know. Simply shocked.

I also encourage evidentiary — hence numbers based — decision making practices.

A curiosity of this research is that no exemplars arose on the search side of online analytics. Search respondents noted Avinash Kaushik and none of their own. This could be due to the different lifespans of search and web analytics, the different mentalities and ego structures that arise in these two disciplines or simply that no one in search demonstrates a strong enough personality for a cult-of-personality to develop around them.

“How you measure success depends on how you define success”

There are many ways to interpret the above and all of them point to a lack of standardization. I remember conversations where the definition of success was moving away from online sales to “I got their name” or “they downloaded a paper”. These conversations always intrigued me because they were examples of defining success in terms of the visitor's action, not the desired outcome of the site owner.

This is another example of non-standard definitions plaguing an industry and no one stepping up to lead the way (equally interesting, no respondents mentioned any professional organizations in their communications. This indicates online analytics professional organizations are not serving their membership enough to warrant conscious recognition). Online analytics is quite capable of comparing the numbers between “sales” and “newsletter signups” and the comparison truly is one of apples and oranges; business development versus transactional business, strategic vision versus “I went to the bank today” tactics.

And what if the success definition the consultant is comfortable with, knows how to demonstrate and can defend is one the business client no longer accepts?

“The consensus among industry leaders is that web analytics will be a different entity in five years.”

The full quote continues, “Its ultimate purpose is to facilitate action in support of any initiative on the web, so it also is much like plastic.” Clients are asking for more … something … from their vendors. One respondent stated “Procter&Gamble is moving from 'eyeballs' to 'engagement' but leaving 'engagement' for others to define.”

This is the intersection of Jim Sterne's “how you measure success” mantra with the “gut vs numbers” statements above. The only sure winner of letting others define your success is that politics will prevail.

A recent Forrester paper indicates a move towards free analytics over for-pay analytics. The report is interesting and perhaps more interesting when viewed outside the online analytics silo.

I point out in Reading Virtual Minds Volume 1: Science and History that growth numbers can seem impressive until you recognize population dynamics, population ecology and evolutionary rescue at work. I used these and similar concepts in From TheFutureOf (13 Mar 09): The Analytics Ecology and From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally to indicate that populations would shift, go near death then bounce back dependent entirely on the existence of (again) threshold populations (I hope readers appreciate how important the threshold population concept is in any socio-environmental dynamic).
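The threshold-population idea can be made concrete with a toy model. The sketch below is entirely illustrative and not from the original analysis — the function name and parameter values are my own — but it uses the standard Allee-effect equation from population ecology, in which a population below a threshold A collapses while one above it rebounds toward carrying capacity K, the same shift-near-death-then-bounce-back pattern described above:

```python
# Minimal Allee-effect sketch: dN/dt = r*N*(N/A - 1)*(1 - N/K).
# Below threshold A the growth term is negative (collapse); above A
# it is positive and the population recovers toward capacity K.

def simulate(n0, r=0.5, a=20.0, k=100.0, dt=0.1, steps=2000):
    """Euler-integrate the Allee-effect model from N(0) = n0."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (n / a - 1.0) * (1.0 - n / k)
        n = max(n, 0.0)  # a population can't go negative
    return n

below = simulate(15.0)  # starts below the threshold: dies out
above = simulate(25.0)  # starts above the threshold: rebounds toward K
```

Starting just below versus just above the threshold produces the two fates the ecology literature predicts — extinction or recovery — which is why the existence of a threshold population matters so much in any socio-environmental dynamic.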

Conclusions for Part 1

In the end, it seems the online analytics world is setting itself up to fail. It's as if an architect were to create a negative space then attempt to fill it. Analytics, be it search or web, doesn't matter; all business — B2C, B2B, B2whatever and whatever platform you're using — is going to come down to personal relationships: establishing them, maintaining them, personal interaction and commitment (readers who've heard or seen my “10 Must Messages” presentation will recognize those communications here).

Nothing communicated by any respondents indicated that analytics is in and of itself a worthless discipline, only that it is a misunderstood, hence misguided, discipline in the online world. Yes, all forms of analytics will get you to the door (and in some cases may even open the door), yet in the final conclusion it will be the establishment and demonstration of trust that powers commerce, not numbers. Or at least not numbers alone. This indicates shifts

  • in what the numbers are about,
  • in how they are demonstrated,
  • in how to derive actionable meaning from them, and
  • in how accountability is framed

are in the offing.

Problems are (in my experience) pretty easy to discover. Solutions, though…

(more on possible solutions in next month's post)

Have you read Reading Virtual Minds Volume I: Science and History? It's a whoppin' good read.
