Reading Virtual Minds Volume II: Experience and Expectation Now Available on Amazon


First, we appreciate everyone’s patience while we got this volume out.
And now, from Holly Buchanan’s Foreword to the book…

After inhaling Reading Virtual Minds Volume I, I was like an antsy 3-year-old waiting for Reading Virtual Minds Volume II. It did not disappoint.
I love the way Joseph Carrabis thinks. He has a unique ability to share broad, rich theory with actionable specifics. Unlike many technical writers, he writes in a voice that is both approachable and humorous. It makes for an enjoyable read.
But what’s the main reason you should read Reading Virtual Minds Volume II: Experience and Expectation? Because where most companies and designers fail is on the expectation front.

Humans are designed as expectation engines.

This is, perhaps, the most important sentence in this book. One of the main points Joseph makes in this volume is this: understand your audiences’ whys and you’ll design near-perfect whats.
Design failures come from getting the whys wrong. That can lead to failures on the experience side, but also on the expectation side. And that can be the bigger problem.

Expectation is a top-down process. Higher-level information informs lower-level processing. Experience is a bottom-up process. Sensory information goes into higher-level processing for evaluation. Humans are designed as expectation engines. Top-down connections outnumber bottom-up connections by about 10:1.

Why is this so important?

In language, more than anywhere else, we see or hear what we expect to see or hear, not necessarily what is said or written. Across all cultures and languages, neurophysiologists and psychologists estimate that what we experience is as much as 85% what we expect to experience, not necessarily what is real or ‘environmentally available’.

And

When people expect A and get B they go through a few moments of fugue. External reality is not syncing up with internal reality, and the mind and brain will, if allowed, burn themselves out making the two mesh.

Get your consumer/visitor/user experience AND expectation right, get their why right, and you’ll be exponentially more successful.

Here are just a few of the goodies you’ll find in this book:

  • Privacy vs. value exchange and when to ask for what information. Joseph has some actionable specifics on this that will surprise you.
  • Why we design for false attractors rather than the real problem.
  • The importance of understanding convincer strategies. Convincer strategies are the internal processes people go through in order to convince themselves they should or should not do something.
  • Companies spend a lot of time trying to convince consumers to trust them. But what may be even more important is understanding how to let consumers know you trust them. This book has ideas on how to show your customers/users/visitors, “I believe in you”.
  • How often our own experience influences our designs. Unless you’re able to throw all your experience out and let the user’s experience in, get out of the usability and design business.
  • How to allow your visitors easy Anonymous-Expressive Identity and make them yours forever.
  • Regarding new material, design, and interfaces: the importance of making sure your suggestions provide a clear path to the past (thus being risk-averse while providing marketable innovation).

As always, Reading Virtual Minds provides specific actionable ideas. But it will also make you think and approach your work in a new way. And I think that’s the best reason to treat yourself to this book and the inner workings of NextStage and Joseph Carrabis.


(and we never argue with Holly Buchanan…)



Reading Virtual Minds Volume I: Science and History, 4th edition

It’s with great pleasure and a little pride that we announce Reading Virtual Minds Volume I: Science and History, 4th EDITION.

That “4th EDITION” part is important. We know lots of people are waiting for Reading Virtual Minds Volume II: Experience and Expectation and it’s next in the queue.

But until then…

Reading Virtual Minds Volume I: Science and History, 4th EDITION is about 100 pages longer than the previous editions and about US$10 cheaper. Why? Because Reading Virtual Minds Volume II: Experience and Expectation is next in the queue.

Some Notes About This Book

I’m actually writing Reading Virtual Minds Volume II: Experience and Expectation right now. In the process of doing that, we realized we needed to add an index to this book. We also wanted to make a full color ebook version available to NextStage Members (it’s a download on the Member welcome page. And if you’re not already a member, what are you waiting for?)

In the process of making a full color version, we realized we’d misplaced some of the original slides and, of course, the charting software had changed since we originally published this volume (same information, different charting system). Also Susan and Jennifer “The Editress” Day wanted the images standardized as much as possible.

We included an Appendix B – Proofs (starting on page 187) for the curious and updated Appendix C – Further Readings (starting on page 236). We migrated a blog used for reference purposes, so there may be more or fewer reference sources, and modified some sections with more recent information.

So this edition has a few more pages and a few different pages. It may have an extra quote or two floating around.

You also need to know that Reading Virtual Minds Volume I: Science and History is a “Let’s explore the possibilities” book, not a “How to do it” book. As such, it deals with how NextStage did it (not to mention things that happened along the way). It does not explain how you can do it. This book’s purpose is to open a new territory to you and give you some basic tools for exploration.

There are no magic bullets, quick fixes, simple demonstrations, et cetera, that will turn you into jedis, gurus, kings, queens, samurai, rock stars, mavens, heroes, thought leaders, so on and so forth.

How to Do It starts with Volume II: Experience and Expectation and continues through future volumes in this series. We’ve included a Volume II: Experience and Expectation preview with a How to Do It example on page 302 so you can take a peek if that’s your interest.

That noted, I’m quite sure that you won’t get the full benefit of future volumes without reading this one because unless you’ve read this one you won’t understand the territory you’re exploring in those future volumes.

That’s Reading Virtual Minds Volume I: Science and History, 4th EDITION. It’s so good and so good for you! Buy a copy or two today!



NextStage Evolution Research Brief – The Basics for Forming Strong, Lasting Social Networks


Basis: This publication documents an ongoing (ten years to date) study of social network lifecycles and what is required for any given social network to thrive.

Background: The number of extant social networks increases along well-defined rules that are dependent on the number of social media channels and the technology required to access any given social network. This translates to a change in the past ten years from a few social media channels with a diversity of internal networks to a diversity of social networks each with their own social media channel.

Whenever there's a proliferation of similar organisms, the laws of evolution kick in with unmatchable ferocity. A few social media channels with a diversity of internal networks demonstrated a user preference for the interface (usability) above the information (content value). A diversity of social networks, each with their own social media channel, demonstrates cladistic growth that in turn is subject to evolutionary methods.

This is demonstrated in both online and offline worlds in how social networks form, grow, die and evolve into new social networks. Note that for the purposes of this study social network “stability” is defined as a creation-evolution cycle, meaning the social network thrives (a YouTube video that receives 1MM hits in two days then fades into oblivion does not constitute a thriving social network). “Healthy” networks are those that grow while maintaining focus and direction. “Vital” information is information required to keep a conversation going.

Objective: To determine if any specific requirements exist for the health of social networks regardless of social media channels (what is required for healthy fish regardless of the pond they're in?).

Method: This research is an outgrowth of NextStage's previous and ongoing social network studies, and is built on the mid-1980s to 1990s cultural anthropology studies performed on such social networks as CompuServe, AOL, GEnie and the like.

Five hundred differentiable areas of interest were identified across automotive, destination, entertainment, food, motorcycle, science and travel meta-networks. Similarities of subject matter (content, focus), contributor (voice, style, tone, knowledge-base, experiential-base, post/comment frequency), structure (interface, posting requirements/mechanics, alerting mechanism) and visitor (income level, education level, geographic location, life experience, age, gender) were isolated and routinely measured to determine social network mechanics.

Results: The greatest factors contributing to the longevity of a social network regardless of social medium are

  1. Three “golden ratios”
    • The ratio of contributors to entire network population must be between 1:100 and 1:30. Social networks with contributor to population ratios in this realm demonstrate a reasonable dialogue is taking place. Fewer indicates unguided conversations, greater indicates a dearth of vital information.
    • The ratio of influencers to entire network population must be greater than 1:3,000. Influencers are required to inject source-recognized vital information to generate discussions among network participants.
    • The ratio of influencers to contributors should be within a few points of 1:100. Greater and there aren't enough “Watsons” to support the “Holmses”, fewer and there are too many “Watsons” (see Another Ommaric Intersection – Holmses&Watsons).
  2. The regular injection of vital information
    • Vital information must be “forward thinking” information. It must recognize a community challenge and offer direction for its solution. It does not need to solve the challenge, only demonstrate a possible solution path. Consensus solutions indicate there's nothing left to talk about and are death to social networks until a new challenge is identified.
    • Injection to general conversation ratios should be within a few points of 1:55. Fewer and the conversation collapses, greater and the conversation becomes confusing.
    • Networks without regular injections of vital information first stagnate and eventually collapse.
      • The collapse speed is related to the size of the network. Larger networks collapse more quickly (relative to their size) than smaller networks due to higher social bonding factors usually present in smaller social networks.
    • Too many or ill-timed vital information injections cause confusion in the general population. This confusion translates to
      • a decrease in the general population.
      • an increase in the level of conversation among the “literati”. Note that this is a demonstration of a stable, evolving network.
  3. The information gradient (dispersion vector) should be directly proportional to the size of the network.
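The three “golden ratios” above can be expressed as a simple health check. This is a minimal sketch, not NextStage's method: the reading of “greater than 1:3,000” as “at least one influencer per 3,000 members” and the ±20% tolerance standing in for “within a few points” are both assumptions.

```python
# A minimal sketch of the three "golden ratios" described above.
# Assumptions (not from the brief): "greater than 1:3,000" is read as
# "at least one influencer per 3,000 members", and "within a few points
# of 1:100" is modeled as the band 0.008..0.012.

def ratio_health(population, contributors, influencers):
    """Return pass/fail checks for the brief's three golden ratios."""
    checks = {}
    # 1. Contributors : population must fall between 1:100 and 1:30.
    c_ratio = contributors / population
    checks["contributor_ratio"] = (1 / 100) <= c_ratio <= (1 / 30)
    # 2. Influencers : population at least 1:3,000 (assumed reading).
    i_ratio = influencers / population
    checks["influencer_ratio"] = i_ratio >= 1 / 3000
    # 3. Influencers : contributors within a few points of 1:100
    #    (tolerance band here is an illustrative assumption).
    ic_ratio = influencers / contributors
    checks["influencer_to_contributor"] = 0.008 <= ic_ratio <= 0.012
    return checks

# A hypothetical network: enough contributors, too few influencers.
print(ratio_health(population=300_000, contributors=5_000, influencers=50))
```

Running the sketch on the hypothetical numbers flags the influencer-to-population ratio as the weak point, which is exactly the kind of diagnosis the brief's takeaways call for.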

Key TakeAways: Brands (and others) wishing to maintain stable, healthy and growing social networks should focus their efforts on maintaining the necessary mix of

  • influencers, contributors and visitors to ensure necessary conversation ratios
  • general comment to vital information posts/comments to ensure necessary social growth incentive ratios



NextStage Evolution Research Brief – Image v Text Use in Menu Systems


Basis: A one-year study of twelve (12) international websites (none in Asia), M/F 63/37, 17-75yo, either in college or college educated, middle to upper income class in all countries studied

Objective: To determine if people were more decisive in their navigation when an image or text was used as a primary navigation motif (menu).

Method: Four separate functions were evaluated

  1. Presentation Format Preference (a simple A/B test)
  2. Sensory to Δt Mapping (time-to-target study)
  3. Teleology (how long did they remain active after acting)
  4. Time Normalization (determines what brain functions are active during navigation)

Results: Key take-aways for this research include

  • Visual (graphic or image)-based menus cause a 40.5% increase in immediate clickthrough; site activity is sustained an additional 32%, site penetration deepens by an additional 2.48 pages, and the result is a 36% increase in capture/closure/conversion.
  • Although not tested with Asian audiences, it is doubtful this technique will work with ideographic language cultures
  • The graphics/images used must be clear, distinct and be obvious iconographic metaphors for the items/concepts they open/link to. Example: Images of a WalMart storefront, a price tag with the words “Best Price” and people shopping resulted in greater activity than a simple shopping cart (too familiar as a “What have I already selected?” image) and the simple words “Store” and “Shop” to drive visitors into buying behaviors.
  • Existing sites with text-based menu systems need to use both systems (at the obvious loss of screen real-estate) to train existing visitors on the new iconography until image-based menu items are used more often than text-based menu items.

NextStage Evolution Research Brief – EU Audiences Adapt to and Integrate Site Redesigns Faster than US, GB and Oz Audiences

Basis: This publication concludes a two-year study of visitor adaptation to and adoption of new technologies and site redesigns on similar product or purpose sites in the US, EU, GB and Australia. No Asian, South American or African sites were part of this study.

Objective: To determine if neuro-cognitive information biases exist in certain cultures and if so, is there benefit or detriment to those biases?

Method: Twenty sites (monthly visitor populations between 10-35k) were monitored in the USA, Italy, France, Germany, Great Britain and Australia. The sites included social platforms, ecommerce, news-aggregator, travel-destination and research postings. Activity levels were monitored before, during and after design changes were instituted, as well as before, during and after new technologies (podcasts, vcasts, YouTube feeds, social tools) were placed on the sites.

In addition to activity levels a study was made of viral propagation vectors to determine if changes to the site promoted new influencers or demoted existing influencers.

Results:

  • Announced changes to the sites increased adoption and adaptation rates among all visitors (in some cases by as much as 65%)
    • Announced changes most greatly benefitted US, GB and Australian audiences with adaptation and adoption rates increasing 12.5% on average.
  • Site previews increased adoption and adaptation rates among all visitors
    • 77% of EU based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 35% of US based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 32.5% of Australian based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 27.5% of GB based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
  • EU audiences demonstrated the highest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 92.5% and 85% respectively.
  • Australian audiences demonstrated the lowest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 30% and 7.5% respectively.

Key take-aways for this research include

  • Travel destination sites should provide a good deal of lead up time to site changes.
    • This lead up time should include previews and announcements.
    • This is especially true for US audiences.
  • Sites introducing social tools should select, train and promote influencers from within the existing visitor community before the social tools are made public.
  • The introduction of social tools to news-aggregator sites recognizably slowed the adaptation and adoption rates of EU audiences.
  • US based audiences were most likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the least likely to abandon sites due to those changes.
  • Australian audiences were the least likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the most likely to abandon a site due to those changes.
  • EU based audiences were the most likely to visit several sites all serving the same purpose.
  • EU based audiences were the most likely to give a site “time to settle” during redesign and new technology implementation before returning to it on a regular basis.

A Note About Research Methods (with implications for any kind of analytics)

NextStage will be posting some of its research here (as noted in NextStage Evolution Research Brief – The Importance of Brand as it Relates to Product v Feature Diversity and MarketShare). We normally apply our research methodology — one familiar to anyone doing psych, social, anthro or language research — to any engagement.

One thing we're repeatedly told is that our problem solving methods are unique and very different from what everyone else does so we decided to offer our methodology's high level form here.

The methodology is simple, adaptable and expandable. What is offered here is a core that can be used in any discipline with little modification.

  1. Background Study

    When presented with a challenge or a question to be answered or investigated, learn as much as possible about everything that's been done before, regardless of how seemingly irrelevant to the task at hand. Be thorough, be detailed. Find out what's failed and why. Find out what got close to solving the problem or answering the question and why it didn't go all the way.

    Learn this, study the background (even if it's obvious. And if it's that obvious to you, have someone not familiar with this particular paradigm do the study. You're missing something if you think the background is obvious), study the personalities, the models, the methods, the politics, everything.

    If you're not willing or don't have the time, don't take on the research, project or task.

  2. Necessary Data

    This one, we'll admit, causes people the most concern. Many people attempt to solve problems with either available data or easily obtainable data.

    Stop. Go no further until you honestly answer this question:

    What data — existing or not, obtainable or not — best solves the problem, answers the question or furthers the research?

    Talking with researchers and analysts world wide, the above question is the greatest stumbling block. People defer to what data is currently on hand, previously obtained data made public by other researchers, data that current methods make easily obtainable or collectable, etc.

    However, “ease of collection” or “prevalence of availability” should not be equated with “solves the problem”, “answers the question” or “furthers the research”. Agreed, it would be great if the exact data that would do all three was there for the taking and yes, solution vendors make wonderful cases for their data collection methods.

    The first challenge to solving any problem, answering any question, etc., is to determine what kind and how much data is necessary to provide a solution or answer. Find out how to measure what you really need to measure to solve what you really need to solve and you're 90% of the way to answering the question, solving the problem or furthering the research (there are two corollaries to this and they go into the third step in this research model).

    I've seen research come to a halt until the investigators could determine what they really needed to measure to answer what they really needed to answer. Take your time at this stage. I've heard “We have money to do it over but not enough to do it right the first time” and, while I know several businesses accept that concept and while I agree with Jeff Bezos' “Anything worth doing is worth doing poorly”, I don't believe these two statements are congruous at all.

  3. Equals Must Be Equals

    The number of times projects fail, results are erroneous or research flounders due to people forgetting the simple rule that “1=1” is staggering. Using an online analytics term, “clicks” here must mean the same thing and be measured the same way as “clicks” there.

    The first corollary is

    Make sure you're measuring what really needs to be measured.

    The second is

    Units must be the same — and have the same meaning — on both sides of the equation.

    The “1=1” requirement most often fails because people mix Categorical, Rank, and Metrical analysis techniques, measures and methods as if one were identical to the others, when they are different.

    And if you're not sure of the differences, I'm sorry, you should not be doing research. NextStage is often called in to help businesses make sense of some research they performed or contracted with another group, and more often than not the solution comes from clearing up categorical, rank and metrical overlaps. Categorical, Rank and Metric are basic measurement concepts. I explain them briefly in The Social Conversion Differences Between Facebook, LinkedIn and Twitter – Providence eMarketing Con 13 Nov 2011. Learn them and learn them well.

    Compounding the problem, the market is contributing to poor research models and measurement methodologies. The number of solution providers promoting self-serving “equations” as solutions to industry problems that

    1. change critical term and KPI definitions midstream,
    2. measure irrelevant (at worst) or very loosely (at best) correlated elements and profess a one-to-one correspondence between measurement and claim,
    3. include data having no direct relevance to the problem yet are included because they're easily obtained, and
    4. make up their own KPIs and claim relevance

    is mind-numbing.

    Companies can find vendors whose definitions make the companies' failures look good and aren't we all a little tired of naked emperors?
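The “1=1” rule above can be made concrete in a few lines. This is an illustrative sketch only, not NextStage's tooling: a toy `Measure` type (a name invented here) that refuses arithmetic whenever the operands' measurement scales don't match, or when the scale isn't metric to begin with.

```python
# A toy illustration of the "1=1" rule: whether an operation is meaningful
# depends on the measurement scale. The scale names follow the post
# (categorical, rank, metric); the Measure class itself is hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Measure:
    value: float
    scale: str  # "categorical", "rank", or "metric"

    def __add__(self, other):
        # Summing is only meaningful on a metric scale, and only when
        # both operands use the same scale -- "clicks" here must mean
        # the same thing as "clicks" there.
        if self.scale != other.scale:
            raise TypeError(f"scale mismatch: {self.scale} vs {other.scale}")
        if self.scale != "metric":
            raise TypeError(f"'+' is undefined on a {self.scale} scale")
        return Measure(self.value + other.value, "metric")

clicks_a = Measure(120, "metric")
clicks_b = Measure(80, "metric")
print((clicks_a + clicks_b).value)   # metric + metric: fine

rank_position = Measure(1, "rank")
try:
    _ = rank_position + clicks_a     # rank + metric: refused
except TypeError as e:
    print(e)
```

The design choice here mirrors the post's complaint: rather than letting categorical, rank and metric values silently blend in an “equation”, the type system makes the mixing an explicit error.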

Research Quotes

People who know me or NextStage know we love quotes. Here are some our researchers keep on their walls for easy reference:

  • We are continually faced by great opportunities brilliantly disguised as insoluble problems. – Lee Iacocca
  • Simple solutions to complex problems are often wrong. – Jeanne Ryer
  • The cause of a problem is the system that produced it. – Tom Bigda-Peyton
  • Judgement consists not only of applying evidence and rationality to decisions, but also the ability to recognize when they are insufficient for the problem at hand. – Tom Davenport
  • Should you encounter a problem along your way, change your direction, not your destination.
  • For every complex problem there is an answer that is clear, simple, and wrong. – H L Mencken

NextStage Evolution Research Brief – The Importance of Brand as it Relates to Product v Feature Diversity and MarketShare

NextStage routinely makes its research available to Members. Research that's been published in the Members area for more than a year will be moved here, to The Analytics Ecology, as time and tide allow.

Basis:

This publication reports on an ongoing (sixteen years to date) study of market fluctuations due to decreases in product/service versus feature diversity resulting in increased marketshare.

Background:

Markets arise under two conditions:

  • Products or services are developed that meet a specific need within a given population
  • Existing products or services are redefined or modified to meet the expectations within a given population

Emerging markets move from need-based to expectation-based in direct relation to the spread of product/service information within the given population. Note that this replaces the “adopter” model with a social contagion model — markets increase proportionally to the information level within a market. Early adopters are individuals who require minimal social information about a product/service, late adopters are those who require maximal social information in order to become market members.

Markets establish themselves when multiple vendors recognize possible revenue sources and expend resources to first enter then maintain marketshare. Traditionally, market establishment followed an organic dispersion model due to minimal channels (information transmission vectors). The past sixteen years have seen an explosion of channels.

The traditional model dictates that the vendor able to saturate a market's chosen channels will claim more marketshare. However, channels are proliferating, with the end result that vendors must create their own channels to ensure controlled information dispersion.

The social contagion model dictates that an uncontrollable information exchange be met with decreased marketshare and decreased product/service diversity while proliferating features and brands to meet consumers at different social contagion levels within the market population.

Objective:

To determine whether branding concepts, product/service diversity or feature diversity is more effective at establishing marketshare in socially engaged markets.

Method:

Eleven markets (agri, auto, construction, home electronics, personal apparel, personal communications, pharma, real estate, recreation, sports, travel) were observed from Jan 1995 to Jan 2011. Analysis was done on vendors in those markets, messaging, market reach, marketshare, channeling, brand imaging and management shifts.

Results:

  • Brand allure continues to play a role in marketshare
  • However brand allure is rapidly giving way to feature diversity (the brand that supports the largest feature set wins)
  • Feature diversity is becoming the new standard for opening markets and increasing marketshare, especially when features are tailored to a given market
  • Feature diversity benefits are increasingly communicated socially rather than through “traditional” channels
  • Product/service diversity benefits are decreasingly communicated socially although they maintain their place in “traditional” channels

Key TakeAways:

  • Brands able to demonstrate the greatest feature diversity within a market will maintain the greatest share of that market moving forward
  • Emerging markets will best be captured/maintained by products/services that are app enhanceable rather than those coming with a diversity of built-in features
  • There will be an increasing move to “app platform” devices as feature diversity moves from “what x can do out of the box” to “tailoring x to do what you want”
  • This app platform move will be the vector of future market segmentation

Defining “Definition” and People as “Programmable Entities”

I've been studying The Calculus of Intentions (it's where semiotics and mathematics intersect) with some remarkably learned people over the past few months. A core question of the study is “How do we create a working definition that can serve as a baseline of knowledge while allowing us to create new knowledge?”

I believe this question is ignored in many disciplines today, especially in those disciplines where business mixes with science (see Why hasn't Marketing caught on as a “Science”?). I've worked in pure research (work that had no obvious ROI) and applied research (“Solve this problem because we can productize the solution”). The former must create working definitions that are expandable; the latter works to create definitions that are brandable. Very different. The two can come into conflict.

A Valid Definition Must Be So General as to Encompass All Variants

In Reading Virtual Minds Volume II: Theory and Online Applications (still writing it, folks), I define “Usable” as

Something is usable when the individual using that thing achieves a goal both known and recognized prior to the usage event.

and “Usability” as

Usability is a measure of an individual's conscious and non-conscious recognition of the pleasure derived from achieving their goal.

Creating definitions that are as general as possible is crucial to Reading Virtual Minds Volume II: Theory and Online Applications because I provide non-NextStage examples1 of how to do what I'm describing and I want readers to know ahead of time what they can expect as outcomes.

Readers will notice that “Usable” is objective and digital (you either did or did not achieve a known and recognized goal), “Usability” is subjective and analog (did you get a lot or a little pleasure? What do you mean by “a lot” and “a little” and is it the same as what I mean?) and I go into the reasons for this in the book.2

Readers will also (I hope) notice that the two definitions above exist in that borderland where pure becomes applied research. The goal is to create something general enough to be wholly true and restrictive enough to be uniquely identifiable as true.3
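The objective/digital versus subjective/analog distinction can be sketched directly. This is my own toy rendering of the two definitions above, not anything from the book: the function names and the 0.0-1.0 pleasure range are illustrative assumptions.

```python
# A small sketch of the two definitions above: "usable" is binary and
# objective; "usability" is continuous and subjective. The pleasure
# scores and their 0.0-1.0 range are illustrative assumptions.

def usable(goal_known_before_use: bool, goal_achieved: bool) -> bool:
    """Usable: a goal known and recognized prior to use was achieved."""
    return goal_known_before_use and goal_achieved

def usability(pleasure_scores: list[float]) -> float:
    """Usability: an analog measure of recognized pleasure (mean, here)."""
    if not pleasure_scores:
        return 0.0
    return sum(pleasure_scores) / len(pleasure_scores)

# Digital: either the known goal was achieved or it wasn't.
print(usable(goal_known_before_use=True, goal_achieved=True))
print(usable(goal_known_before_use=True, goal_achieved=False))

# Analog: "a lot" versus "a little" pleasure is a matter of degree,
# and reasonable people can disagree on where the cutoff lies.
print(usability([0.9, 0.7, 0.8]))
```

Note how the binary function needs no interpretation, while the analog one immediately raises the book's question: whose scale, and whose cutoff for “a lot”?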

Pure “Definitions” versus Applied “Definitions”

What else is required? Pure research is usually interested in creating a definition for what hasn't been in experience before; applied research, not so much. Hence any definition used in business, etc., should encompass all previous similar experiences and definitely should not negate any previous similar experiences.

The classic business blunder example of this is “New Coke”. There was no question in consumers' consciousness that New Coke wasn't “the real thing”, and the debranding halo went from New Coke to Coca-Cola to Bill Cosby himself. The moral can be found easily in the Calculus of Intentions; instead of “Trust me, this is the real thing” (when “the real thing” was the existing definition of the old Coca-Cola formula), using “Trust me, this isn't for everybody, so give it a taste. This could be the real thing for you”, with Mr. Cosby's finger first pointing at the Coca-Cola can then at the audience, would have both captured the existing Coca-Cola audience and integrated it into the new formulation.

Integrate existing experience into the new definition and — from a marketing standpoint — you bring the existing audience with you (this is the heart of redesign and rebranding, also covered in Reading Virtual Minds V2). An example of not integrating existing experience into a new definition was something I heard in a radio spot earlier today (21 Jul 10). Some company in the Boston area is publishing a report, “The 25 Most Powerful Businesses in Massachusetts”. The ad then referenced their website with “Learn what makes a brand powerful at …”.

Essentially they use an existing term, “powerful”, then redefine it into something they lay claim to. It doesn't matter if their definition of “powerful” is accurate or meaningful to anything else we may apply that term to because they're also telling us what the term means when they use it.4

Organization and Structure

Next comes a definition's ability to organize a body of knowledge into a clear, irrefutable structure. Such things are called “elegant solutions” in mathematics, meaning the definition demonstrates simple, easily repeatable solutions. This tends to be where pure and applied research — especially when the application is intended for branding — diverge greatly. Pure research works to create foundations, applied research builds on those foundations in the hope that nothing else will be built. Boston's Hancock Tower, New York's Empire State Building and Chicago's Sears Tower (I think it has a different name now) are all buildings (foundational definition) and each has a separate name (applied definition).

The fact that what was Chicago's Sears Tower for many years is now known as The Willis Tower is a demonstration of an applied definition's mutability and temporality. People of a certain age will always reference that building as “The Sears Tower” and, if asked about “The Willis Tower”, will have to pause and perform the definition translation before answering with any confidence. Another example is The Boston Garden. I have no idea how many name changes it has gone through, and to most people within a 100-mile radius of Boston who are over 35 years old, it will always be “The Boston Garden” (if for no other reason than that “The TD BankNorth Garden” does not lend itself to alliteration and syllabification. It officially went from the full “The TD BankNorth Garden” to “The TD Garden” over a year's time, I think, perhaps longer. Such is the strength of pure and applied definitions as brands). Very often, when placial applied definitions change, society imposes a foundational definition to replace all applied definitions.

Again using Boston area examples, Foxboro Stadium is Foxboro Stadium, not Gillette Stadium (readers specializing in search engines know such examples by heart). The Tweeter Center is the Comcast Center and was Great Woods. Most people have to guess where it's located (Mansfield, MA). An example of pure and applied definitions going hand in glove is Gilford, NH's “The Meadowbrook U.S. Cellular Pavilion” (once Meadowbrook Farm. “Meadowbrook” has always been part of the venue's name, so anybody and everybody knows about “The Meadowbrook”).

It is rare that a pure definition will change. Applied definitions are generational (as indicated above).

Recognize what is and what isn't defined

Lastly, both pure and applied definitions need to clearly demonstrate what is not included in the definition. Binary definitions are great for this: “0” is not “1”, (business) “male” is not (business) “female”. The definition of “Usable” provided at the start of this post is both binary and objective, good on both counts. Things like “Usability” and “New Coke”, being subjective, must always include the author's intent as part of the definition. I enjoy math puzzles, so their usability to me is quite high; lots of people I know find no enjoyment in them, so my intent must be included in my definition of math-puzzle usability.

And it is the recognition of my intent, the pleasure I feel5, that brings us back to The Calculus of Intentions and creating definitions.

People as Programmable Entities

It is possible to determine usability for different personality types, meaning one can plot how much pleasure a group of people will derive from a given object/device/tool, meaning it's possible to determine what features said object/device/tool must have for ultimate usability, what features to change (and how) when introducing that object/device/tool into a new market, …

The same can be done for utility.

I was asked recently, “What sort of prison have you constructed, where the communications of people make such sense to you that their actions are programmably obvious…?”

I responded with “The foil here is probably an element of Cassandranism; if things are that obvious you'll know who can be communicated with and who not.”

Such research is, I think, a ship and not a prison, although the two are only different based on definition and intent.


1 – A non-NextStage example is one where NextStage's Evolution TechnologyTM (“ET”) isn't required to achieve the result. The result may have been proven with ET, but ET isn't required to achieve it.

back

2 – “Usability” as defined is not “utility”, the measure of relative satisfaction. I may be incredibly satisfied by something yet derive absolutely no pleasure from it, hence never want to use/do it again; for example, being extremely satisfied that I survived a plane flight through a hurricane. However, I'll never do it again, therefore the usability is zero.

Utility is a subjective, analog measure, and it provides no cycle for improvement. “Usable” provides a binary measure of improvement — it wasn't usable before and now it is. “Usability” provides an improvement cycle — if usability is low (there is little to no pleasure in something's use) we can go through iterations wherein changes to some object/device/tool increase usability (each change allows greater pleasure in its use).

As a further example of the difference between usability and utility, note that usability is sensory in nature (another reason it's analog), utility is psychological in nature. We are prewired for usability (pleasure/pain), we have to learn utility.

back

3 – ET and humans move from “wholly true” to “uniquely identifiable as true” (the phenotype-genotype continuum) regularly and both do so via identity-relational models. For example, there exists a “business” definition of gender that is binary and has nothing to do with psychological, neurological, endocrinological, biological, …, science. By its definition, I am male and that is wholly true because it is a binary definition. Either I am or I am not.

When we say “You remind me of …” we're dealing with “uniquely identifiable as true” and our conscious and non-conscious thoughts are using identity-relational models. We're basically comparing our memory of person A with our immediate awareness of person B who's standing in front of us. Are A and B a one-to-one match? Then they are uniquely identifiable and we say “Oh, you're …”. When the match isn't one-to-one we say things like “You remind me of …”, “You're a lot like …” or “I knew someone (just) like you …”

The slide from “I recognize you're a male” to “You remind me of …” to “You're …” is the slide from wholly true to uniquely identifiable as true and uses identity-relational models (how many unique elements are required to uniquely identify this as “not that”? See Chapter 5 Section 3, “The Toddness Factor” in Reading Virtual Minds Volume I: Science and History for a description of this).

back

4 – Shades of “Pornography is what I'm pointing at when I say it.” I pretty much believe redefining something to suit your needs is obscene and pornographic. In this case, by going to the company's website we learn “This national ranking is the first of its kind, … and provides a new benchmark for marketers”. Excellent! There's no real validity to their “metric” other than self-promotion and the desire to become a standard. Wonderful! Truly! Therefore the basis of the metric is the audience's acceptance of the company's statements as valid.

But wait… I knew an emperor like that…

And truth in advertising here; I have at times advised clients to do something similar. The dissimilarity is that the clients so advised could back up their definitions and claims with long, well documented evidentiary trails.

back

5 – I also derive utility from them, a sense of self-satisfaction at being able to solve them.

back



The Unfulfilled Promise of Online Analytics, Part 3 – Determining the Human Cost

Knowledge will forever govern ignorance, and a people who mean to be their own governors, must arm themselves with the power which knowledge gives. A popular government without popular information or the means of acquiring it, is but a prologue to a farce or a tragedy; or, perhaps both. – James Madison

There was never supposed to be a Part 3 to this arc (Ben Robison was correct in that). Part 1 established the challenge (and I note here that the extent of the response, and the voices responding, indicates that the defined challenge does exist and is recognized to exist) and Part 2 proposed some solution paths. That was supposed to be the end of it. I had fulfilled my promise to myself1 and nothing more (from my point of view) was required.

But many people contacted me asking for a Part 3. There were probably as many people asking for a Part 3 as I normally get in total blog traffic. Obviously people felt or intuited that something was missing, that something I was unaware of had been left out.

But I never intended there to be a Part 3. What to cover? What would be its thematic center?

It was during one of these conversations that I remembered some of the First Principles (be prepared. “First Principles” will be echoed quite a bit in this post) in semiotics.2

According to semiotics, you must ask yourself three questions in a specific order to fully understand any situation3:

  1. What happened?
  2. What do I think happened?
  3. What happened to me?

More verbosely:

  1. Remove all emotionality, all belief, all you and detail what happened (think of quis, quid, quando, ubi, cur, quomodo – the six evidentiary questions applied to life).
  2. What do your personal beliefs, education, training, cultural origins, etc., add to what actually and unbiasedly happened?
  3. Finally, how did you respond — willingly or unwillingly, knowingly or unknowingly, with all of your history and experience — to what happened?

The power of this semioticism is that it forms an equation that is the basis of logical calculus, the calculus of consciousness4, modality engineering5 and a bunch of other fields. I use a simplified form of it in many of my presentations, A + B = C.6

Talking with one first reader, I realized that Part 1 was “What happened?” (the presentation of the research) and Part 2 was “What do I think happened?” (my interpretation of the research). What was left for part 37 was “What happened to me?”

And if you know anything about me, you know I intend to have fun finding out!

All Manner of People Tell Me All Manner of Things

Oliver's Travels

The above is a line from Oliver's Travels (highly recommended viewing), something said by the Mr. Baxter character. Mr. Baxter is himself a mystery and, although his true nature is hinted at several times, it is not revealed until the last episode. There we are told about The Legend of Hakon and Magnus. In short, Mr. Baxter could be a good guy, a bad guy or the individual directing the good or bad guy's actions. His role entirely depends on what side you are on yourself, a true Rashomon scenario. I found myself in something similar to Mr. Baxter's situation, because how people responded to my research, its publication and myself also depended greatly on what side people were on when they contacted me.

I was both dumbfounded and honored by the conversations Parts 1 and 2 generated. The number of people who picked up on or continued the thread on their own blogs (here (and alphabetically) Christopher Berry (and a note that Chris continues the conversation in A Response (The Unfulfilled Promise of Analytics 3) ), Alec Cochrane, Stephane Hamel, Kevin Hillstrom, Daniel Markus, Jim Sterne, Shelby Thayer and if I've forgotten someone, my apologies), twittered it onward, skyped and called me was…I could say unprecedented and remind me to tell you about a psychology convention in the early 1990s (nothing to do with NextStage, just me being me, stating what is now recognized as common knowledge yet way before others decided it was common. Talk about unprecedented results. I had to be escorted out under guard. For those of you who know Dr. Geertz, his comment upon learning this was “I'm not surprised you'd have to be escorted out by guards. You have that subtle way about you…”8).

But to note the joy means to recognize the sorrow (as was done in Reading Virtual Minds Vol. 1: Science and History Chapter VI, “The Long Road Home”). While the majority of people honored me and a good number of people appreciated that I had done some useful research and donated something worth pondering, there were a few (just a few, honestly) who damned me.

The damning per se I don't mind. It's part of the territory. It was the manner and the persons involved that truly surprised me.

I was accused of possibly destroying a marriage (Susanism: If you think this is about you, it's not. We know a lot more people than just you), maligning certain individuals (usually by people who maligned other individuals during the research. I guess I wasn't maligning the correct individuals in their view), not demonstrating the proper respect to industry notables (same parenthetical comment as previous and you guessed it, another NextStage Principle), that I better post an apology to these same industry notables (two people wrote apologies in my name and strongly suggested that I publish them), …

Whoa!

Who gave me such power and authority to make or break people's lives? Certainly I didn't give it to myself, nor did I ask others to give it to me. And if anybody did give it to me without my knowing I gladly give it back. As I've said and written many times, I do research. When new data makes itself available and as required, I update my research. But until such new data comes in, the research stands.

What I really want to know is if, when the results of research are discomforting, the industry's standard and usual procedure is

  • to change either the research or the results so that people feel warm and fuzzy, hence have no impetus to act (according to one person at yesterday's NH WAW, “Don't measure what you can't change”. An interesting statement that I disagree with. Following it would mean throwing out meteorology, astronomy, …; much of what was historically measured without any ability to change it allowed us to create the technologies that later produced change in previously unchangeable systems)
  • or let the discomfiting research stand, so that the challenge can be recognized and action either taken or the challenge knowingly ignored.

It seems that “change either the research or results” is the standard (or at least done when required), because while few asked that I rewrite the research or the results so that certain individuals appeared more favorably, the ones who did ask sure were some high-ranking industry folks.

Heaven forbid these folks wanting different results publish some, or do complementary research that either validated or invalidated my results.

Wait a second. What am I thinking? Obviously it would be impossible for them to do research that validates mine.9

Of course, publishing research would also mean publishing their methodologies, models, analytic methods, … and the reasons that ain't gonna happen will be covered later in this post.

And if that is the standard and usual procedure — at least among those in the high ranks — then

  • congratulations to all the companies hiring high ranking consultants to make them feel good rather than solve real problems and
  • be prepared for those coming up through the ranks to learn this lesson when it is taught them.

I'm mad as hell and I'm not going to take it anymore!

For the record, not much upsets me (ask Susan for a more honest opinion of that). The sheer stupidity of arguments that resort to emotionalism, or that are nothing more than attempts to protect personalities and positions, though… those do offend me (can't wait to learn how our Sentiment Analysis tool reports this). And more about stupidity later in this post (let me know if you recognize Joseph's “I'm mad as hell and I'm not going to take it anymore” persona).

When the Stories Meet the Numbers (Statistics, Probability and Logic)

I originally surveyed about sixty people for Part 1. That number grew to about one hundred in Part 2 due to responses to Part 1. Currently I've had conversations (I'm counting phone calls, Skype chats and calls, email exchanges and face-to-face discussions at meetings I've attended as “conversations”) with a few hundred people about those posts.

I noticed something interesting (to me) about the conversations I was having. Lots of people made statements about statistics, probability and logic but were using these terms and their kin in ways that were unfamiliar to me. Especially when I started asking people what their confidence levels were regarding their reporting results.

I'll offer that search analysts (I'm including SEO and SEM in “search analysts”) seem to have things much easier than web analysts do. “We were getting ten visits a day, changed our search terms/buy/imaging/engines/… and now we're getting twenty visits per day.” Granted, that's a simplification, but it's the heart of search analytics: improving first the volume and second the quality of traffic to a site. Assuming {conversions::traffic-count} has standard variance, search analytics produces or it doesn't, and it's obvious either way.
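That “ten visits a day to twenty” comparison can even be given a quick statistical sanity check. A minimal sketch (my illustration, not anything from NextStage's toolset; the Poisson assumption and all the numbers are mine):

```python
import math

def visit_lift_significant(before_daily, after_daily, days=30):
    """Did average daily visits really change? A rough two-sided z-test,
    assuming daily visit counts are roughly Poisson (my assumption)."""
    n_before = before_daily * days
    n_after = after_daily * days
    # For Poisson counts the variance equals the mean, so the standard
    # error of the difference of the two totals is sqrt(n_before + n_after).
    se = math.sqrt(n_before + n_after)
    z = (n_after - n_before) / se
    return z, abs(z) > 1.96  # ~95% two-sided threshold

z, significant = visit_lift_significant(10, 20)
print(round(z, 2), significant)  # prints 10.0 True
```

With 10 visits/day doubling to 20 over a month the lift is unmistakable; a 10-to-11 change over the same month would not clear the threshold, which is the “it produces or it doesn't” point made above.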

Web analytics, though… “The Official WAA Definition of Web Analytics” is

Web Analytics is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimizing Web usage.

The analytics organization I see most often cited, SEMPO, doesn't even attempt to define (“SEMPO is not a standards body…”) or police (“…or a policing organization.”) itself. It does offer search courses, but the goals of the SEMPO courses and the WAA-recognized courses are greatly different (an opinion, that, based on reading their syllabi as someone who has taught a variety of courses in a variety of disciplines at various educational levels in various educational settings).

There are twenty-one words in the official WAA definition and a philologist will tell you that at least ten require further definition.

Definitions that require definitions worry me. Semiotics and communication theory dictate that the first communication must be instructions on how to build a receiver. Therefore any stated definition that requires further definition is not providing instructions on how to be understood (no receiver can be built because there is no common signal, sign or symbol upon which to construct a receiver. If you've ever read my attempts at French, you know exactly what I mean10).

One of the statements made during the research for this arc was “[online] Analysts need to share the error margins, not the final analysis, of their tools.” It expressed a sentiment shared if not directly stated by a majority of respondents and it truly surprised me. It states as a working model that any final analysis is going to be flawed regardless of tools used therefore standardize on the error margins of the tools rather than the outputs of the tools.

So…decisions should be made based on the least amount of error in a calculation, not what is being calculated (does the math we're using make sense in this situation?), the inputs (basic fact checking; can we validate and verify the inputs?) or the outcome (does the result seem reasonable considering the inputs we gave it and the math we used?)?

A kind of “That calculation says we're going to be screwed 100% but the error margin is only 3% while that other calculation says we're only going to be screwed 22% but the error margin is 10%.

Let's go with the first calculation. Lots less chances of getting it wrong there!”, ain't it?

More seriously, this is a fairly sophisticated mathematical view. Similar tools have similar mathematical signatures when used in similar ways. When a tool has an output of y with fixed input x in one run and y+n with that same fixed input x in another run but a consistent error margin in both runs, standardizing on the error margin e is a fairly good idea. It indicates there's more going on in the noise than you might think.11
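The y versus y+n observation above reduces to a one-line check. A hedged sketch of the reasoning only (the function name and the numbers are my invention, not any particular tool's API):

```python
def drift_exceeds_margin(outputs, error_margin):
    """Repeated outputs from ONE tool on the SAME fixed input.
    If the run-to-run drift exceeds the tool's own stated error margin,
    there's more going on in the noise than the margin accounts for."""
    drift = max(outputs) - min(outputs)
    return drift > error_margin

# Output y = 100.0 on one run, y + n = 104.0 on the next, stated margin 3.0:
print(drift_exceeds_margin([100.0, 104.0], error_margin=3.0))  # prints True
```

When this returns True, standardizing on the (consistent) error margin rather than the (drifting) outputs is the sensible move, and investigating the noise becomes the priority.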

Of course, this means you better start investigating that noise darn quick.

My understanding of “statistics, probability and logic” was often at odds with what people were saying when they used those words. The differences were so profound (in some cases) that I asked follow up questions to determine where my misunderstandings were placed.

Serendipity doing its usual job in my life, over this fall-winter cycle I took on the task of relearning statistics12, partly so I could understand how online analysts were using statistics-based terms. As noted above, the differences between what I understood and how terms were being used and applied were so great that I questioned my understanding of the field and its applications.

And whither I wander, I offer a philologic-linguistic evidentiary trail for all who will follow. For those who just want to get where I'm going, click here.

Web Analytics is Hard

Of course it is. Anything that has no standards, no baselines, no consistent and accurate methods for comparison is going to be hard, because all milestones, targets and such will have to be arbitrarily set, will have no real meaning in an ongoing, “a = b” kind of way, and therefore Person A's results are actually just as valid as Person B's results, because both are really only opinion and the HiPPOs rule the riverbank…

…until a common standard can be decided upon.

Web Analytics is easy

Of course it is. Anything that applies principled logic, consistent definitions, repeatable methodologies that provide consistent results, … is going to be.

Online Analytics Is Whatever Someone Needs It to Be

Ah…of course it is.

And this is the truest statement of the three for several reasons. Consider the statement “(something) is Hard“.

It doesn't matter what that “(something)” is, it can be driving a car, riding a bike, watching TV, playing the oboe, composing poetry, doing online analytics, … . What that “(something)” is is immaterial because the human psyche, when colloquial AmerEnglish is used, assigns greater cognitive resources to understanding “Hard” than it assigns to “Web Analytics”, and this resource allocation has nothing to do with whether or not “Web Analytics” is easier to understand than “Hard”; it has to do with what are called Preparation Sets13. The non-conscious essentially goes into overdrive determining how hard “Hard” is. It immediately throws out things like “iron”, “stone” and “rock” because the sensory systems don't match (iron, stone and rock involve touch-based sensory systems, transitive expressions such as “(something) is hard” don't) and starts evaluating the most difficult {C,B/e,M}14 tasks in memory — most recent to most distant past — to determine if the individual using the term “Hard” is qualified to use the term as a surrogate for the person being told “(something) is Hard” (i.e., our non-conscious starts asking “Do they mean what I think they mean when they say 'Hard'?”, “Do they know what 'Hard' is?”, “What do they think 'Hard' means, anyway?”, “Do they mean what I mean when I say 'Hard'?” and so on).15

What I will offer is what I've offered before; any discipline that defines success “on the fly” isn't a discipline at all (at least it's not a discipline as I understand “discipline”). Lacking evidentiary trails, definitions and numeric discipline, comparisons of outputs and outcomes degenerate to “I like this one better” regardless of reporting frame.

Teach Your Children Well

Where statements like “(something) is Hard” and “(something) is Easy” really make themselves known is when teaching occurs.

Let me give you an example. You have a fear of (pick something. Let's go with spiders because I love them and most people don't (only click on this link if you love spiders)). Phobias are learned behaviors. This means someone taught you to be afraid of spiders. It's doubtful someone set out some kind of educational curriculum with the goal of teaching you to fear spiders (barring Manchurian Candidate scenarios). It's much more likely that when you were a child, someone demonstrated their fear of spiders to you, probably either repeatedly or very dynamically, so you learned either osmotically or via imprinting. Children demonstrate their parents' behaviors in hysteresis patterns. This means that if you measured a parent's level of arachnophobia and assigned it a value of 10, chances are the child would demonstrate their arachnophobia at a level of 100 or so in a few years' time. Children who learn their parents' fears and anxieties do so without understanding any logical basis for those fears, only the demonstration of them. When there is no logic to temper the emotional content, hysteria results.

However, if a parent demonstrates a fear response and the ability to control it, to explain to the child that fear response's origin, etc., most often the child learns caution and not fear (not to mention that the parent usually learns to control their fear). The difference can be thought of as the difference between teaching a child to “Be careful” versus hysterically screaming “EEEEK!”

What's so fascinating about this is that it's also how we pass on our core, personality and identity beliefs whether we mean to or not (I cover this in detail in Reading Virtual Minds Volume I: Science and History). We can be teaching physics, soccer, piano, bread-baking, … It doesn't matter because all these activities will be vectors for our core, identity and personal beliefs and behaviors. If we are joyful people then we will teach others to be joyful and the vector for that lesson will be physics, soccer, piano, bread-baking, … And if we are miserable people? Then we will teach others to be miserable and to be so especially when they do physics, play soccer, the piano, bake bread, …

Thus if any teaching/training occurs intentionally or otherwise, the individual doing the training/teaching is going to de facto teach their internal philosophies and beliefs — both business and personal — as well as their methods and practices to their students. This can't be helped. It's how humans function. If the philosophy and belief is that things are hard, then that philosophy and belief will be taught de facto to the students. Likewise for the philosophy and belief that something is easy. There will be no choice.16

The point is we protect others from what we fear. Humans are born with precious few fears hard-wired into us (heights and loud noises are the two most cited. Heights because we're no longer well adapted to an arboreal existence and loud noises because predators tend to make them when they attack).

So the statement “(something) is hard” either means we fear “(something)” or we wish to protect others from having the difficulties we have when we do “(something)”, and if difficulties existed then the non-conscious mind is going to place a fear response around whatever “(something)” is to make sure we don't put ourselves into unnecessary difficulties yet again.

The statement “(something) is easy” generates the polarity of the above and I, dear reader, I am the neuro- and philo-linguist's nightmare because my training is simply that “(something) is”. My training is that both whatever exists and whatever state it exists in are mind of the observer17 dependent. Thus things simply are and our perceptions, experience and decisions make them hard, soft, easy, whatever, to us individually.

It's always all about you, isn't it?

More colloquially, whatever your perceptions of the world are, it's all you and precious little of anything else (a favorite quote along these lines is “What if life is fair and we get exactly what we deserve?” Ouch!).

The Trail Leads Here

There are lots of errors I can understand. A lack of knowledge, of mathematical rigor, of logic training, of problem solving skills, … These and a host of others I can appreciate. Especially in those junior to any given discipline.

But unprovable math, a lack of basic fact checking, outputs that have no meaning based on what's come before and (let's not forget) emotionalism? This really blew me away. Math can be taught, junior people who don't fact check can be trained, making sure units match can be taught and comes with experience, … but emotionalism?

I'll accept any of the above in junior players with the caveat that the first to go has got to be emotionalism.

But senior people failing any of these before offering something for publication? Then defending this lack of rigor with an emotional outburst? And when it happens more than once?

Talk about abandoning First Principles!

We don't need no stinking badges

First Principles? We don't need no stinking First Principles!

Challenge logic, challenge research, challenge findings, sure. Challenge a person if they challenge you, sometimes maybe. I'll tolerate a lot, folks (ask Susan for confirmation), but I have a real challenge with such as these: arguing emotionally and telling me it's logic, arguments based on no facts at all… I'll accept, entertain and work with ignorance, arrogance, discomfiture, anxiety, joy, love, appreciation, anger, … quite a wide thrall of human response.

But arguments such as these are, in my opinion, stupid.

There, I typed it.

Yet because such arguments were presented, I must recognize that in some camps doing web analytics means to heck with fact-checking, logic, …; that it's acceptable to ignore truth and common practice and to base outcomes on what one needs them to be. I mean, when someone with title and prestige does it, the overt statement is that others should, will or do do it as well. Definitely people in the same company should or will do it. Whatever's lacking in the master's portfolio won't be found in the student's (in most cases).

Want to know why I stopped attending conferences? See the above.

Joseph, the Abominable Outsider


Stephane Hamel applauded me (I think) when he referenced me as an industry “outsider” in his A nod to Joseph Carrabis: The unfulfilled promise of online analytics. Others used the term to applesauce me. (I was flattered by both, actually.)

I had been wondering if it was worth my writing a little bit on elementary logic, probability theory, problem solving or some such. A previous draft of this post contained an explanation of elementary statistics and problem solving as it might be applied to online analytics. Now I really had to question such an effort. If the notables don't know how to apply these things…

Where the stories meet the numbers, there Understanding dwells

The power of logic, of knowing problem-solving methods, basic statistics, probability and so on is that they provide basic disciplines that prevent or at least inhibit mistakes such as those listed above. You have the tools and training to basically “…draw an XY axes on the paper, chart those numbers and the picture that results points you in the direction you need to go.” You can be emotional about your research and your findings, but you can't defend your research emotionally. The research and findings are either valid or they ain't.18

As for drawing XY axes, charting numbers and getting some direction… what can you do with such evidentiary information? Lots of things. Determine the relationships between the numbers and you can exploit their meanings.
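As a sketch of “determine the relationships between the numbers”, here is perhaps the simplest such relationship, the Pearson correlation of the charted points (the numbers are mine, purely illustrative, not data from the research):

```python
import math

def correlation(points):
    """Pearson correlation of (x, y) pairs: one cheap, disciplined way to
    quantify the direction a charted picture is pointing. Illustrative only."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in points))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in points))
    return cov / (sx * sy)

# Four charted (x, y) points; r near +1 says "the direction is up".
r = correlation([(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)])
print(round(r, 3))  # prints 0.998
```

An r this close to +1 is the numeric version of the picture “pointing you in the direction you need to go”; the discipline is in computing it rather than eyeballing it.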

But if the basics are beyond the industry greats

  • then explaining the differences between cross-sectional studies and longitudinal studies (cross-sectional studies involve measuring a single (x,y) pair, meaning x is fixed for all y; longitudinal studies involve countably infinite (x,y) pairs. Longitudinal studies are greatly more expensive than their cross-sectional cousins, which is why cross-sectional regression models are often used when longitudinal regression models are needed) won't do much good19,
  • nor will explaining the need for creating a “standard” site for calibration purposes,
  • models can only be standardized once methods themselves are analyzed and an accuracy “weighting” is determined (allowing all models to be compared to a “gold standard”, meaning comparing my results to your results actually has analytic meaning),
  • explaining the meaning of and how to “normalize” samples (figuring out where your normals are on your curve) is out (doing so allows you to see where the normals fall on your standard curve. You put your normals in the middle-to-lower part of the curve because a) this is where population densities are greatest and b) no naturally occurring line is going to be straight, so you shoot for placing your normals on the straightest part of the curve to get some kind of linearity (that y = mx + b thing). Every naturally occurring phenomenon follows mathematical rules that produce curves. Between the two blue lines is where standards occur. Below the bottom blue is “below standard”, above the top blue is “out of standard”. Between the bottom blue and the green line is the normal range. You calibrate your methods against the gold-standard normals, and anything above is where the money lies),
  • 20
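The calibration idea above can be sketched in a few lines. This is a minimal illustration, not NextStage's method: the `calibrate` function, the 0.8/1.2 band edges (standing in for the blue lines on the curve) and the sample numbers are all hypothetical.

```python
def calibrate(readings, gold_standard_normals, below=0.8, above=1.2):
    # Express each reading relative to the mean of the gold-standard
    # normals, then bucket it against the standard range. The band
    # edges (0.8, 1.2) are illustrative stand-ins for the curve's
    # "blue lines"; real bands come from your calibration work.
    baseline = sum(gold_standard_normals) / len(gold_standard_normals)
    results = []
    for r in readings:
        ratio = r / baseline
        if ratio < below:
            label = "below standard"
        elif ratio > above:
            label = "out of standard"
        else:
            label = "normal"
        results.append((round(ratio, 2), label))
    return results
```

Because every reading is expressed as a ratio against the same gold-standard baseline, "my results" and "your results" become directly comparable, which is the whole point of calibration.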

It takes more effort to reorder a partially ordered system than it does to create order in an unordered system (bonds, even when incorrect, have existing binding energy).

I completely understand why so many of NextStage's clients couldn't document the accuracy of the online analytics tools they were using when they contacted us for help. That lack of documentation made me very uncomfortable. If there's no proven methodology for demonstrating a number's validity, then you've essentially moved away from the gold standard and declared that the value of your dollar is based entirely on what others value it at (pretty much determined by your political-military-industrial capabilities, or in this case, those guarding the riverbank). Your numbers only have meaning so far as others are willing to accept them as valid, and if lots of money is being paid for an opinion, that opinion is going to be gold regardless of whether it's based on invalid assumptions or documentable facts.

The online analytics field is partially ordered — it's been around long enough for a hierarchy to appear — so only those willing to expend the energy are going to attempt fixing it for the sake of getting it fixed rather than changing it to suit their own objectives.

And this is where

The detritus encounters the many winged whirling object

NSE was seeing so many erroneous tool results (my favorite example was the company that was getting 10k visitors/day and only 3 conversions/month; their online analyst swore by the numbers) that it led us to come up with a reliable y = x ± 2db relationship that we could prove, repeat and document. It relied solely on First Principles. This led to our in-house analytics tools, which is why we're analytics-tool agnostic. We really don't care what tools clients use. If we don't believe the numbers we'll use our own tools to determine them, because we know and can validate how our tools work. As a result we now often use our tools to validate the accuracy of other tools.

I have no dog in this fight (either the "Web Analytics is…" fight or the fight over whether a promise existed and has gone unfulfilled, because I'm a recognized industry outsider) and won't be dragged into it (I mean, would you really want me involved?). My agenda is making sure that those coming to NextStage for help either bring some mathematical rigor with them or allow NextStage to invoke it. There is little that can be done when a tool lacks internal consistency (given a consistent input, it generates different outputs).
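Internal consistency, as defined here, is easy to test for: feed a tool the same input several times and see whether the outputs agree. A toy sketch (the `tool` callable stands in for any analytics tool; nothing here is from NextStage's actual validation suite):

```python
def is_internally_consistent(tool, fixed_input, runs=5):
    # A tool that lacks internal consistency returns different outputs
    # for the same input. Run it several times and compare: a consistent
    # tool yields exactly one distinct output.
    outputs = {tool(fixed_input) for _ in range(runs)}
    return len(outputs) == 1
```

A deterministic tool passes; a tool whose result drifts between calls for the same input fails, and no amount of interpretation rescues its numbers.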

It really is that simple, folks. This is First Principles and they always work. Don't believe me? Ask Ockham. First Principles have to work. As long as the sun rises in the east and sets in the west, as long as there are stars up in the sky, as long as the recognized laws of reality are valid, …

And because mathematics is a universal language, the stars are in the sky, etc., etc., these rules have to apply to online analytics and the tools used therein.

Unless you're happy with high variability in results sets given a known and highly defined set of inputs.

Which is fine, if that's what your values are based on.

And I doubt it is, so be prepared for companies to use HiPPOs only for political purposes (“Our methods are valid because they were installed/given to us/updated/validated/… by HiPPO du jour“), not for accuracy purposes.

How fast are you going?

I mean, people make a living out of these things, right? When someone talks about a regression curve and that a decision was made because the probabilities were such and so, does it matter if they know what they're talking about?

Or is being able to use a tool the same as understanding what the tool is doing?

And I know there are online analysts out there who take high variability and weave it into gold. Good for them (truly!). They have a skill I lack. But they're performing art, not science, and as someone who walks in both worlds I'll share my opinion that science is a lot easier than art. Science has rules. Art is governed by what the buying public is willing to spend and on whom.

Ahem.

That offered, HiPPOs du jour should be prepared for highly defined and validatable game-changing methods and technologies to un-du jour them, because such methods and technologies will, given time, regardless of where they originate and how they emerge. In this, like stars shining in the sky, there is no option, no way out. The laws of evolutionary dynamics apply to everything from rainstorm puddles on the pavement to galactic clustering (I can demonstrate their validity in the online analytics world very quickly and easily; start with the first online analytics implementation at UoH in the early 1990s and follow the progression to today. Simple, clean and neat. I love it when things work. Don't you? It gives me confidence in what I think, do and say).

My suggestion (note the italics) is that the online community create an unbiased, product-agnostic experimental group. All empirical sciences that I know of have experimental disciplines within them (physics has "experimental physics", immunology has "experimental immunology", …). NextStage is not part of this community so, again, we have no dog in this fight. Let me offer NextStage as an example, though: we regularly publish our experimental methods and their results in our own papers, in business-science journals and in scientific conference papers. This allows others to determine for themselves whether our methods are valid and worthy. Granted, NextStage comes from a scientific paradigm, and perhaps taking on some of science's disciplines would benefit the industry as a whole, or at least bring more confidence and comfort to those within it.

But what about the Third Semiotic Question?

Answering “What happened to me?” follows the trail of asking trusted others (my thanks to Susan, Charles, Barb, Mike, Warner, Lewis, Todd, Little-T and the Girls, M, Gladys and Dolph) many questions to bridge holes in my understandings.

All the ills referenced in parts 1 and 2 demonstrated themselves to the full: people who didn't like what I wrote triangulated. They contacted others who they thought were socially closer to me or "might have an in", but heaven forbid they contact me directly. Others focused their frustration on me because (probably in their minds) I was something concrete and tangible, something they could point at, instead of something they felt powerless against: the industry as a whole. Still others did so because they consider me an industry leader (I'm not. I'm an outsider, remember? I can't lead an industry I'm not a part of. Or will Moses start telling Buddhists how to behave?). And (I'm told) I became the subject of klatch-talk on at least two continents (obviously, I need to start charging more for my time).

All of these things add up to determining the human cost of the unfulfilled promise of online analytics. As I quoted before, Coca-Cola Interactive Marketing Group Manager Tom Goodie said “Metrics are ridiculously political.” He was correct and not by half. The cost is high. It is highest amongst

  • those unsure of the validity of their methods, their measurements and their meanings who want to be accepted and acknowledged as doing valuable work yet are unable to concisely and consistently document what they're doing to the satisfaction of executives signing their checks
  • and those who are cashing those checks to buy new clothes.

Do I think the online analytics industry will change because of my research and its publication?

See this tool? I must know what I'm doing because I use this tool.

Did you read what I wrote about accountability in The Unfulfilled Promise of Online Analytics, Part 1? People are being paid without being accountable for what they're being paid to do. The sheer human inertia put forth to not change that model has got to be staggering, don't you think?

And I doubt anything I could do would bring such a change about. My work may contribute, it may be a drop in the bucket helping that bucket to fill and that's all.

The industry itself will change regardless (surprise!). As a WAWB colleague recently wrote, "For a field that's changing rapidly, based on rapidly changing technologies, I personally feel that holding any expectations for the future is a set up for disappointment. The expectation of change is the only realistic expectation I can hold today." I agree. Things will change. They always do. To promise anything else is to lie first to one's self, then to others.

Final Thoughts

This is the end of the Unfulfilled Promise arc for me, folks. Please feel free to continue it on your own and give me a nod if you wish.


(my thanks to readers of Questions for my Readers who suggested this footnoting format over my usual <faux html> methods and to participants in the First NH WAW who, knowing nothing about this post, covered much the same topics during our lunch conversation)

1 – A constant promise to myself regarding my work: perform honest research, report results accurately and without bias and (when possible) determine workable solutions to any challenges that present themselves in either research or results.

back

2 – For those who don't know, much of ET is based on anthrolingualsemiotics — how humans communicate via signs. “Signs” means things like “No Parking”, true, and also means language, movement, symbols, art, music, … . According to Thomas Carlyle, it is through such things “that man consciously or unconsciously lives, works and has his being.” You can find more about semiotics in the following bibliography:

Aho, Alfred V. (2004, 27 Feb). Software and the Future of Programming Languages. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.1096169

Balter, Michael (2004, 27 Feb). Search for the Indo-Europeans. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1323

Balter, Michael (2004, 27 Feb). Why Anatolia? Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1324

Benson, J.; Greaves, W.; O'Donnell, M.; Taglialatela, J. (2002). Evidence for Symbolic Language Processing in a Bonobo (Pan paniscus). Journal of Consciousness Studies, Vol. 9, Issue 12. http://www.ingentaconnect.com/content/imp/jcs/2002/00000009/00000012/1321

Bhattacharjee, Yudhijit (2004, 27 Feb). From Heofonum to Heavens. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1326

Carrabis, Joseph (2006). Chapter 4, "Anecdotes of Learning", Reading Virtual Minds Volume I: Science and History, Vol. 1. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006). Reading Virtual Minds Volume I: Science and History, Vol. 1. Northern Lights Publishing, Scotsburn, NS

Chandler, Daniel (2007). Semiotics: The Basics. Routledge. ISBN 978-0415363754

Crain, Stephen; Thornton, Rosalind (1998). Investigations in Universal Grammar. MIT Press. ISBN 0-262-03250-3

Fitch, W. Tecumseh; Hauser, Marc D. (2004, 16 Jan). Computational Constraints on Syntactic Processing in a Nonhuman Primate. Science, Vol. 303, Issue 5656

Gergely, Gyorgy; Bekkering, Harold; Kiraly, Ildiko (2002, 14 Feb). Rational imitation in preverbal infants. Nature, Vol. 415, Issue 6873. DOI: 10.1038/415755a

Graddol, David (2004, 27 Feb). The Future of Language. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.1096546

Holden, Constance (2004, 27 Feb). The Origin of Speech. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1316

Montgomery, Scott (2004, 27 Feb). Of Towers, Walls, and Fields: Perspectives on Language in Science. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.1095204

Pennisi, Elizabeth (2004, 27 Feb). The First Language? Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1319

Pennisi, Elizabeth (2004, 27 Feb). Speaking in Tongues. Science, Vol. 303, Issue 5662. DOI: 10.1126/science.303.5662.1321

back

3 – There is (in my opinion) no greater demonstration of this principle than in The Book of the Wounded Healers, a long forgotten book that I hope will become available again sometime soon.

back

4 – Aleksander, Igor; Dunmall, Barry (2003). Axioms and Tests for the Presence of Minimal Consciousness in Agents I: Preamble. Journal of Consciousness Studies, Vol. 10, Issue 4-5

back

5 – Carrabis, Joseph (2004, 2006, 2009). A Primer on Modality Engineering, 18 pages. Northern Lights Publishing, Scotsburn, NS

Carrabis, Joseph (2009, 18 Aug). I'm the Intersection of Four Statements. BizMediaScience

Carrabis, Joseph (2009, 8 Sep). Addendum to "I'm the Intersection of Four Statements". BizMediaScience

Nabel, Gary J. (2009, 2 Oct). The Coordinates of Truth. Science, Vol. 326, Issue 5949

back

6 – The simplest things often have the most power. The semioticist's A + B = C demonstrates itself with three questions to form equations of meaning such as:

(what happened) + (what do I think happened) = (what happened to me)

(what happened to me) – (what do I think happened) = (what happened)

(what happened to me) – (what happened) = (what do I think happened)

Know any two and the last reveals itself to you.

But only if you're willing.
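Treating the three questions as terms in one equation, the "know any two" claim is just solving for the missing term. A toy sketch with the terms reduced to numbers (the function name and the numeric framing are mine, purely for illustration):

```python
def third_term(what_happened=None, what_i_think_happened=None,
               what_happened_to_me=None):
    # The semiotic equation: (what happened) + (what do I think happened)
    # = (what happened to me). Pass any two terms; the third is derived.
    if what_happened_to_me is None:
        return what_happened + what_i_think_happened
    if what_happened is None:
        return what_happened_to_me - what_i_think_happened
    return what_happened_to_me - what_happened
```

The same rearrangements the post lists (addition for the sum, subtraction for either addend) are all this function does.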

back

7 – Note to Jacques Warren: Un et un est troi ("one and one is three"). Ha!

back

8 – Note to Ben Robison: Nope, ET wouldn't detect the sarcasm. The string was too short. We're working on it.

back

9 – Note to Ben Robison: Still working on that sarcasm thing. We have what we think is a good go at it in the NS Sentiment Analysis tool we'll be making public either this week or next (still waiting for the interface and may decide to go without it just to learn what happens).

back

10 – As Jacques Warren, Stephane Hamel and Rene can tell you, my best French is laughable. My attempt at "My gosh, what a beautiful day" usually comes out as "Joli jour heureux je" (roughly, "pretty day happy I"). (C'est rire, n'est-ce pas? "It's to laugh, isn't it?")

back

11 – Carrabis, Joseph (2007, 10 Jan). Standards and Noisy Data, Part 1. BizMediaScience

Carrabis, Joseph (2007, 11 Jan). Standards and Noisy Data, Part 2. BizMediaScience

Carrabis, Joseph (2007, 12 Jan). Standards and Noisy Data, Part 3. BizMediaScience

Carrabis, Joseph (2007, 14 Jan). Standards and Noisy Data, Part 4. BizMediaScience

Carrabis, Joseph (2007, 27 Jan). Standards and Noisy Data, Part 5. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 6. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 7. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 8. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Where Noisy Data Meets Standards (The Noisy Data arc, Part 9). BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 10. BizMediaScience

Carrabis, Joseph (2007, 29 Jan). Standards and Noisy Data, Part 11. BizMediaScience

Carrabis, Joseph (2007, 29 Jan). For Angie and Matt, and The Noisy Data Finale. BizMediaScience

back

12 – Periodic relearnings are part of my training and makeup. I put myself through periodic re-educations because I question my knowledge, not because I question someone else's. My goal is to find the flaws in my understanding, not to pronounce someone else's in error. Periodic re-educations keep subject matter knowledge fresh within me, bring new understandings to old educations, increase wisdom, all sorts of good things. Admittedly, this has enabled me to recognize flaws in other people's reasoning. Two examples the online community may be familiar with are Eric Peterson's engagement equation (flawed definitions and mathematical logic) and Stephane Hamel's WAMM (frame confusion).

To respond to some comments made on the (now dead) TheFutureOf blog, I had to study other people's work. One such work was Eric Peterson's engagement equation. Other people had contacted me with some questions about its validity (for the record, I had no intention of looking at Eric's engagement equation until he mentioned it in response to something I'd written. Once he mentioned it, my belief was that he'd "placed it in the game," so to speak, and so opened it up to inspection).

In any case, the result of my own and others' questioning was that I studied how that equation was derived (was the mathematical logic viable and consistent, were the variables defined and used consistently, …) and found it flawed. Eric asked if it would be possible for us to simply work together on the equation to remove some ambiguities and make it more generally applicable, thereby removing any questions of mathematical validity and provide business value.

The public response to my reworking of Eric's original equation both confused and concerned me. My reworking was nothing more than turning it into a multiple regression model with the b0 and e terms set to 0 and all bn set to 1 (they could be changed as needs dictated). This allowed people using the reworking to determine by simple variance which models/methods weren't valid in their business setting and to ignore them. I kept thinking people would laugh at how simplistic my reworking was; the response was quite the opposite. It was at this point that my concerns about basic mathematical knowledge among online analysts flared.
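For concreteness, here is what such a degenerate multiple regression looks like: y = b0 + b1x1 + … + bnxn + e with b0 = 0, e = 0 and every bn defaulting to 1. The function name and sample metrics are hypothetical; this is a sketch of the structure described, not Eric's published equation.

```python
def reworked_engagement(metrics, coefficients=None):
    # Multiple regression form y = b0 + b1*x1 + ... + bn*xn + e with
    # b0 = 0 and e = 0. Every bn defaults to 1 but can be changed as
    # needs dictate, which is the whole "reworking".
    if coefficients is None:
        coefficients = [1.0] * len(metrics)
    return sum(b * x for b, x in zip(coefficients, metrics))
```

With all coefficients at 1 the model is just a sum of the component metrics; adjusting the coefficients (or zeroing them out) is how a practitioner would drop components that show too much variance in their setting.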

I read through Stephane Hamel's WAMM paper (also because others entered it into a discussion) and recognized that by adding some consistent variable definitions that tool would have a great deal of power across disciplines. I asked Stephane if he'd mind my tinkering and so the story goes.

The challenge with Eric Peterson's engagement equation and Stephane Hamel's WAMM is (in my current understanding) that there is no "standard", itself a theme I'll return to in this post. As an example, my current work with WAWB involves applying some standard modeling techniques so a "normal" can be determined. This would allow Company A to measure itself against a normal rather than comparing itself to bunches of other companies (which might not be good exemplars given differing business and market conditions), and to determine upon which vector Company A should place its efforts to ensure cost-efficient gains along all WAMM vectors. The first aspect (my opinion) would be organizational. Without people accepting recognized truth there is no truth (again, my opinion).

And each time I take on such a task I require myself to relearn the necessary disciplines so I can be confident that my understandings are as close to the original author's as possible.

My method for learning and re-learning anything is to go back to First Principles (as mentioned earlier in this post). Some people may have heard or seen me talk about learning theory and how it can be applied everywhere. That's a lot of what First Principles are about. Start with the most basic elements you can, understand them as completely as possible, build upon that. One thing this provides me is the ability and confidence to discuss my ideas openly, the freedom to ask questions honestly and truthfully, and to understand and accept conflicting views easily and graciously. Put another way, the more you know, the wider your field of acceptance and understanding, and the more fluid and dynamic you become in your ability to respond to others.

So I started relearning statistics by going back to First Principles, studying Gauss, Galton, Fisher and Wright, giving myself the time to understand how the discipline evolved, how the concepts of regression, regression to the mean, ANOVA, ANCOVA, trait analysis, path analysis, structural equations modeling, causal analysis, least squares analysis, …, came about, how they're applied to different sciences (agriculture, eugenics, medicine, …), how bias, efficiency, optimality, sufficiency, ancillarity, robustness, … came about and how they are solved.

I also learned that the advent of fast, inexpensive computing power tended to focus people's attention on problems that could be solved via fast, inexpensive computing rather than on problems that needed to be solved. This was (to me) a point of intersection with the Unfulfilled Promise posts; "gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it."

So I shifted my focus a bit. I decided to use online analytics as the groundwork for teaching myself statistics.

back

13 – Somebody remind me to publish The Augmented Man. It covers Preparation Sets, EEGSLs and all that stuff in detail.

And it's another darn good read. Phphttt!

back

14 – Carrabis, Joseph (2006). Chapter 2, "What The Reading Virtual Minds Series Is About", Reading Virtual Minds Volume I: Science and History. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006). Chapter 4, section 2, "The Investors Heard the Music", Reading Virtual Minds Volume I: Science and History, Vol. 1. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006, 10 Nov). Mapping Personae to Outcomes

Carrabis, Joseph (2007, 23 Mar). Websites: You've Only Got 3 Seconds. iMediaConnections

Carrabis, Joseph (2007, 30 Mar). Technology and Buying Patterns. BizMediaScience

Carrabis, Joseph (2007, 9 Apr). Notes from UML's Strategic Management Class – Saroeung, 3 Seconds Applies to Video, too. BizMediaScience

Carrabis, Joseph (2007, 11 May). Make Sure Your Site Sells Lemonade. iMediaConnections

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 1. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 2. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 3. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 4. BizMediaScience

Carrabis, Joseph (2007, Oct). The Importance of Viral Marketing: Podcast and Text. AllBusiness.com

Carrabis, Joseph (2007, 9 Oct). Is Social Media a Woman Thing? AllBusiness.com

Carrabis, Joseph (2007, 29 Nov). Adding sound to your brand website. iMediaConnections

Carrabis, Joseph (2008/9, 28 Jan/1 Jul). From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence. The Analytics Ecology

Carrabis, Joseph (2008, 26 Jun). Responding to Christopher Berry's "A Vexing Problem, Part 4" Post, Part 3. BizMediaScience

Carrabis, Joseph (2008, 2 Jul). Responding to Christopher Berry's "A Vexing Problem, Part 4" Post, Part 2. BizMediaScience

Carrabis, Joseph (2008/9, 11 Jul/3 Jul). From TheFutureOf (10 Jul 08): Back into the fray. The Analytics Ecology

Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Geertz, Papadakis and others, 5 Feb 08. The Analytics Ecology

Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Papadakis 7 Feb 08. The Analytics Ecology

Carrabis, Joseph (2008/9, 29 Aug/9 Jul). From TheFutureOf (28 Aug 08): Response to Jim Novo's 12 Jul 08 9:40am comment. The Analytics Ecology

Carrabis, Joseph (2008, 1 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 1). BizMediaScience

Carrabis, Joseph (2008, 6 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 2). BizMediaScience

Carrabis, Joseph (2008, 30 Oct). Me, Politics, Adam Zand's Really Big Shoe, How Obama's and McCain's sites have changed when we weren't looking. BizMediaScience

Carrabis, Joseph (2008, 31 Oct). Governor Palin's (and everybody else's) Popularity. BizMediaScience

Carrabis, Joseph (2008/9, 10 Nov/15 Jul). From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on "What are we measuring when we measure engagement?". The Analytics Ecology

Carrabis, Joseph; Bratton, Susan; Evans, Dave (2008, 9 Jun). Guest Blogger Joseph Carrabis Answers Dave Evans, CEO of Digital Voodoo's Question About Male Executives Wielding Social Media Influence on Par with Female Executives. PersonalLifeMedia

Carrabis, Joseph (2009). A Demonstration of Professional Test-Taker Bias in Web-Based Panels and Applications, 20 pages. NextStage Evolution, Scotsburn, NS

Carrabis, Joseph (2009). Frequency of Blog Posts is Best Determined by Audience Size and Psychological Distance from the Author, 25 pages. NextStage Evolution, Scotsburn, NS

Carrabis, Joseph (2009). Machine Detection of and Response to User Non-Conscious Thought Processes to Increase Usability, Experience and Satisfaction – Case Studies and Examples. Towards a Science of Consciousness: Hong Kong 2009. University of Arizona, Center for Consciousness Studies, Tucson, AZ

Carrabis, Joseph (2009, 5 Jun). Sentiment Analysis, Anyone? (Part 1). BizMediaScience

Carrabis, Joseph (2009, 12 Jun). Canoeing with Stephane (Sentiment Analysis, Anyone? (Part 2)). BizMediaScience

Carrabis, Joseph; Carrabis, Susan (2009). Designing Information for Automatic Memorization (Branding), 35 pages. NextStage Evolution, Scotsburn, NS

Daw, Nathaniel D.; Dayan, Peter (2004, 18 Jun). Matchmaking. Science, Vol. 304, Issue 5678

Draaisma, Douwe (2001, 8 Nov). The tracks of thought. Nature, Vol. 414, Issue 6860. DOI: 10.1038/35102645

Ferster, David (2004, 12 Mar). Blocking Plasticity in the Visual Cortex. Science, Vol. 303, Issue 5664

Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael (2004, 12 Mar). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, Vol. 303, Issue 5664

Kozlowski, Steve W. J.; Ilgen, Daniel R. (2006, Dec). Enhancing the Effectiveness of Work Groups and Teams. Psychological Science in the Public Interest, Vol. 7, Issue 3. DOI: 10.1111/j.1529-1006.2006.00030.x

Matsumoto, Kenji; Suzuki, Wataru; Tanaka, Keiji (2003, 11 Jul). Neuronal Correlates of Goal-Based Motor Selection in the Prefrontal Cortex. Science, Vol. 301, Issue 5630

Ohbayashi, Machiko; Ohki, Kenichi; Miyashita, Yasushi (2003, 11 Aug). Conversion of Working Memory to Motor Sequence in the Monkey Premotor Cortex. Science, Vol. 301, Issue 5630

Otamendi, Rene Dechamps; Carrabis, Joseph; Carrabis, Susan (2009). Predicting Age & Gender Online, 8 pages. NextStage Analytics, Brussels, Belgium

Otamendi, Rene Dechamps (2009, 22 Oct). NextStage Announcements at eMetrics Marketing Optimization Summit Washington DC. NextStage Analytics

Otamendi, Rene Dechamps (2009, 24 Nov). NextStage Rich Personae™ classification. NextStage Analytics

Pashler, Harold; McDaniel, Mark; Rohrer, Doug; Bjork, Robert (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, Vol. 9, Issue 3. ISSN 1539-6053

Paterson, S. J.; Brown, J. H.; Gsödl, M. K.; Johnson, M. H.; Karmiloff-Smith, A. (1999, 17 Dec). Cognitive Modularity and Genetic Disorders. Science, Vol. 286, Issue 5448

Pessoa, Luiz (2004, 12 Mar). Seeing the World in the Same Way. Science, Vol. 303, Issue 5664

Richmond, Barry J.; Liu, Zheng; Shidara, Munetaka (2003, 11 Jul). Predicting Future Rewards. Science, Vol. 301, Issue 5630

Sugrue, Leo P.; Corrado, Greg S.; Newsome, William T. (2004, 18 Jun). Matching Behavior and the Representation of Value in the Parietal Cortex. Science, Vol. 304, Issue 5678

Tang, Tony Z.; DeRubeis, Robert J.; Hollon, Steven D.; Amsterdam, Jay; Shelton, Richard; Schalet, Benjamin (2009, 1 Dec). Personality Change During Depression Treatment: A Placebo-Controlled Trial. Archives of General Psychiatry, Vol. 66, Issue 12

back

15 – And before I get another flurry of emails that I'm attacking one person or another, no, I'm not. An almost identical process occurs when someone says “(something) is Easy”. I describe the “(something) is Hard” version because it's easier for people to understand. One of the wonders of AmerEnglish and American cultural training, that — it is easier to accept that something can be hard and harder to accept that something could be easy.

Human neural topography. Gotta love it.

back

16 – This understanding of what happens during teachings and trainings is why all NextStage trainings are done the way they are (see Eight Rules for Good Trainings (Rules 1-3) and Eight Rules for Good Trainings (Rules 4-8)) and could be why our trainings get the responses they do (see Comments from Previous Participants and Students).

back

17 – Bloom, Paul (2001). Precis of How Children Learn the Meanings of Words. Behavioral and Brain Sciences, Vol. 24

Burnett, Stephanie; Blakemore, Sarah-Jayne (2009, 6 Mar). Functional connectivity during a social emotion task in adolescents and in adults. European Journal of Neuroscience, Vol. 29, Issue 6. DOI: 10.1111/j.1460-9568.2009.06674.x

Frith, Chris D.; Frith, Uta (1999, 26 Nov). Interacting Minds – A Biological Basis. Science, Vol. 286, Issue 5445

Gallagher, Shaun (2001). The Practice of Mind (Theory, Simulation or Primary Interaction). Journal of Consciousness Studies, Vol. 8, Issue 5-7

Senju, Atsushi; Southgate, Victoria; White, Sarah; Frith, Uta (2009, 14 Aug). Mindblind Eyes: An Absence of Spontaneous Theory of Mind in Asperger Syndrome. Science, Vol. 325, Issue 5942

Tooby, J.; Cosmides, L. (1995). Foreword to S. Baron-Cohen, MindBlindness: An Essay on Autism and Theory of Mind. MIT Press, Cambridge, Mass.

Zimmer, Carl (2003, 16 May). How the Mind Reads Other Minds. Science, Vol. 300, Issue 5622

back

18 – I'll use myself as an example. I've often become emotional when talking about research and results. But (But!) regardless of my emotionalism, the work stands or doesn't. I can clarify, elucidate, explain, divulge, describe, … and in the end, the work stands or it doesn't.

back

19 – If your model is a linear variation (classical regression models are linear in their parameters) then you have something like y = mx + b, y = b0 + b1x + e, …, and every one-unit change in x produces an m-unit (or b1-unit) change in y. That is the textbook definition of the regression coefficient (either m or b1 in the above): the effect that a one-unit change in x has on y.
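A quick way to see the regression coefficient in action is to fit a line to data generated from a known y = mx + b and confirm the fit recovers m; a one-unit step in x then moves the fitted y by m units. A minimal ordinary-least-squares sketch (illustrative only; a real analysis would use an established statistics library):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = m*x + b with a single predictor:
    # m = Sxy / Sxx, b = mean(y) - m * mean(x).
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# On data generated from y = 2x + 1 the fit recovers m = 2, b = 1,
# so each one-unit change in x changes the fitted y by 2 units.
m, b = fit_line([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
```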

back

20 – I have experience working with large data sets. Some of you might know I worked for NASA in my younger years. I was responsible for downloading and analyzing satellite data. The downloads came every fifteen minutes and reported atmospheric phenomena the world over. My job was to catch the incongruous data and discard it. I got to the point where I could look at the hexadecimal data stream and determine weather conditions anywhere in the world before the data got sent on for analysis.

Amazing that I got dates back then, isn't it?
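The "catch the incongruous data and discard it" step can be sketched as a simple outlier screen. This is purely illustrative; the actual NASA screening criteria aren't described here, and the standard-deviation tolerance is my assumption.

```python
def flag_incongruous(readings, tolerance=3.0):
    # Flag readings more than `tolerance` population standard deviations
    # from the batch mean. Real telemetry screening would use
    # domain-specific criteria, not a generic sigma cutoff.
    n = len(readings)
    mean = sum(readings) / n
    sd = (sum((r - mean) ** 2 for r in readings) / n) ** 0.5
    if sd == 0:
        return []
    return [r for r in readings if abs(r - mean) > tolerance * sd]
```

Flagged readings would be held back from downstream analysis rather than silently passed along, which is the whole point of screening before reporting.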

back


The Unfulfilled Promise of Online Analytics, Part 2

Perfection is achieved,
not when there is nothing more to add,
but when there is nothing left to take away.
– Antoine de Saint-Exupery, Wind, Sand and Stars

<CAVEAT LECTOR>
Readers can find the previous entry in this arc at The Unfulfilled Promise of Online Analytics, Part 1.

First, I want to thank all the people who read, commented, twittered, emailed, skyped and phoned me with their thoughts on Part 1.

My special thanks to the people with reputations and company names who commented on Part 1. Avinash Kaushik and Jim Novo, I thank and congratulate you for stepping up and responding (I asked others if I could include them in this list; they never responded). Whether you intended to or not, whether you recognize it or not, you demonstrated a willingness to lead and a willingness to get involved. Please, let's keep the discussion going.

Also my thanks to those who took up the gauntlet by propagating the discussion via their own blogs. Here Chris Berry (and I also note that Chris' The Schism in Analytics, A response to Carrabis, Part II post presages some of what I'll post here) and Kevin Hillstrom come to mind. My apologies to others I may not have encountered yet.

Second, I was taken aback by the amount of activity that post generated. I was completely unprepared for the responses. It never occurred to me there was a nerve to be struck; only one person interviewed had responded purely in the positive, and that lack of positive response led me to think this information was self-evident.

Well…there was one of the problems. It was self-evident. Like the alcoholic brother-in-law elephant in the living room, it took someone new to the family to point and say, “My god is that guy drunk or what!”

And like the family who's been working very hard making sure nobody acknowledges the elephant, the enablers came forward — okay, they emailed, skyped and phoned forward. One industry leader commented, saw my response and asked that their comment be removed. I did so with great regret because there can be no leadership without discussion, no unification of voices until all voices are heard.

Please note that some quotes appearing in this entry may be from different sources than in part 1 and (as always) are anonymous unless a) express permission for their use is given or b) the quote is in the public domain (Einstein, Saint-Exupery, etc).

Okay, enough preamble. Enjoy!
</CAVEAT LECTOR>

“The whole industry needs a fresh approach. This situation isn't going to improve itself.”

There was a sense of exhaustion among respondents regarding the industry. It took two forms and I would be hard pressed to determine which form took precedence.

One form I could liken to the exhaustion a spouse feels when their partner continually promises that tomorrow will be better, that they'll stop drinking/drugging/gambling/overeating/abusing or otherwise acting out.

It wasn't always the case. Once upon a time (that phrase was actually used by more than one respondent) there was a belief that if things were implemented correctly, if a new tool could be developed, if management would understand what was being done, if if if… Things could and would be better. Promises were made that were never kept and were then comfortably forgotten.

The second form I could liken to the neglected child who starts acting out simply to get attention. Look at me, Look at me! But mom&dad always have something else to focus their attention on. There's the new product launch, opening new markets, having to answer to the Board, (and probably the worst) the other children (marketing, finance, logistics, …), …

“When you know the implementation is correct you have to wonder if the specifications are wrong.”

Several respondents showed an impressive level of self-awareness. Many of them have moved on, either out of the industry completely or into more fulfilling positions within. All recognized that any industry that succumbs to promise and hype will ultimately end in disappointment.

“First we're told to nail things down, then given a block of unobtainium to nail them in, then told to do it now!”

The disappointment took two primary forms (clear schisms abounded in this research; clear schisms are usually indicative of deep-level challenges to unification in social groups) and the division was along personality types. Respondents who were more analytic than business focused were disappointed because “…a fraction of implementations achieve business goals. A tiny fraction of those actually work.”

Respondents who were more business than analytics focused were disappointed because the industry didn't help them achieve their career goals.

For many in both camps moving on was a recognition of their own personal growth and maturation; for most it was frustration based, a running away from pain rather than a movement toward pleasure. This latter again demonstrates a victim mentality, a child caught in the middle between warring parents.

“When the tools don't agree management's solution is to get a new tool.”

“Deciding on tools is more politics than smarts. Management doesn't ask us, they just go with the best promises.”

Respondents demonstrated frustration with clients/organizations and vendors that refuse to demonstrate leadership. This was such a strong theme that I address it at length below. Sometimes a lack of leadership is the result of internal politics (“…and that (competition, keeping knowledge to themselves, backstabbing) is starting to happen (we see the schism (right word?) between Eric's 'hard' position and Avinash's 'easy' (and others)…”).

Leadership vacuums also develop when power surges back and forth between those given authority positions by others. Family dynamics recognizes this when parents switch roles without clearly letting children know who's taking the lead (think James Dean's “You're tearing me apart” in Rebel Without a Cause). This frustration was exacerbated when respondents began to recognize that no tool was truly new, only the interfaces and report formats changed.

There was a sense among respondents that vendors and clients/organizations were switching roles back and forth, neither owning leadership for long, and again, the respondents were caught in the middle.

“Management pays attention to what they paid for, not what you tell them.”

Some respondents are looking at the horizon and reporting a new (to them) phenomenon; as vendors merge, move and restructure there's an increasing lack of definition around “what can we do with this?” This is disturbing in lots of ways.

“…everybody's agreeing with their own ideas and nobody else's.”

Analysts will begin to socially and economically bifurcate (there will be no “middle class”). Those at the bottom of the scale will get into the industry as a typical “just out of school” job then move elsewhere unless they're politically adept. The political adepts will join the top runners, either associating themselves with whatever exemplars exist or by becoming exemplars themselves. But the social setting thus created allows for a multitude of exemplars, meaning there are many paths to the stars, meaning one must choose wisely, meaning most will fail; thus the culture bifurcates again and fewer will stay long enough to reach the stars. “You have to pick who you listen to. I get tired figuring out who to follow each day.”

Respondents admitted to lacking (what I recognize as) research skills. I questioned several people about their decision methods — had they considered this or that about what they did or are planning to do — and universally they were grateful to me for helping them clarify issues. Those who had appreciable research skills were hampered by internal politics (“Until my boss is ready nothing gets done.”).

Most respondents confused outputs with outcomes (as noted in Part 1) because tools are presented and taught at two levels (this is my conclusion based on discussions; I'm happy to be corrected). There's the tool core that only a few learn to use and there's the tool interface that everyone has access to.

Everyone can test and modify their plans based on the interface outputs, but what happens at the core level — how the interface outputs are arrived at — is the great unknown, hence can't be defended in management discussions: “…I can't explain where it came from so I'm ignored.” Management's (quite reasonable, to me) response follows Arthur C. Clarke's “Mankind never completely abandons any of its ancient tools”; they go with what they know, especially when analysts themselves don't demonstrate confidence in their findings. “I can only shrug so many times before they stop listening, period.”

Management is left to make decisions based on experience, and now we see the previously mentioned bifurcation creeping into business decisions. Those with the most experience, the most tacit knowledge, win. As John Erskine wrote, “Opinion is that exercise of the human will that allows us to make a decision without information”, and management — asking for more accountability — is demanding to understand the basis for the information given.

“Did you ever get the urge when someone calls up or sends e-mails asking, 'How's that data coming?' to say, 'Well, we're about two hours behind where we would be if I didn't have to keep stopping to answer your goofy-?ss phone calls and e-mails.' This is called project management, I guess.”

“Some tools are rejected even when they make successful predictions.”

“Ignore them” as a strategy for responding to business requests works two ways. Management repeatedly asking difficult-to-solve questions results in their being ignored by analysts until the final results are in. By that time both question and answer are irrelevant to a tactical business decision and once again the “promise” is lost. In-house analysts can suggest new tools and must deal with their suggestions gaining little traction. “Management works in small networks that look at the same thing. They're worse than g?dd?mn children. You have to whack them on the side of the head to get their attention.”

Management's reluctance to take on different tools and methodologies is understandable. Such decisions increase risk and no business wants risk.

“To change the form of a tool is to lose its power. What is a mystery can only be experienced for the first time once.”

“…as online analytics matures it must evolve to survive.”

I asked for clarification of this statement and was told that yes, there are times when old paradigms need to be tossed aside, and knowing when is a recognizable management skill that can only be exercised by extreme high-level management, by insanely confident upstarts and lastly by (you guessed it) trusted leaders/guides. The speaker had recently returned to the US from a study of successful EU-based startups. When and how paradigms should be shifted and abandoned is a hot topic among 30ish EU entrepreneurs.

“We're supposed to be solving problems. But I can't figure out what problems we're supposed to solve.”

“Random metric names and symbols is not an equation.”

(the quote above is from Anna O'Brien's Random Acts of Data blog)

Business and Science are orthogonal, not parallel. Any science-based endeavor works to overcome obstacles. If not directly, then to provide insight into how and what obstacles can be overcome. Business-based endeavors work to generate profit. Science involves empirical investigation. Investigation takes time and only certain businesses can afford time because unless the science is working at overcoming a business obstacle, it's a cost, not a profit.

So if you can't afford the time involved in research and are being paid to solve business problems your options are limited. Most respondents relied on literature (usually read at home during “family time” or while traveling), conferences, private conversations and blogs. Literature is only produced by people wanting to sell something (this includes yours truly). It may be a book, a conference ticket, a tool, consulting, a metaphysic, …, and even when what they offer is free (such as most blogs) consumers pay with their attention, engagement and time (yes, I know. Especially with my posts).

“…I don't believe in WA anymore, I haven't seen any of my clients change because of it and all the presentations that I've seen are always similar…”

Conferences and similar venues are biased by geographies, time and cost (again, even if free you're paying somehow. Whoever is picking up the bar tab and providing the munchies is going to be boasting about how many attended).

Private conversations provide limited access and that leaves blogs. The largest audiences will be (most often) offline in the form of books and online in the form of blogs.

Behold, and without most people realizing it's happening, exemplars form. The exemplar du jour provides the understanding du jour, hence a path to what problems can be solved du jour. Who will survive?

Historical precedent indicates that exemplars who embrace and encourage new models will thrive. More than thrive, they will continue as positive exemplars. Exemplars not embracing or at least acknowledging new models will quickly become negative exemplars and the “negativity” will be demonstrated socially first in small group settings then spill over into large group settings once a threshold is reached (and once that threshold is reached, watch out!). The latter won't happen “overnight” and it will definitely happen (my opinion) because all societies follow specific evolutionary and ecological principles (evolutionary biology, Red Queen, Court Jester, evolutionary dynamics, niche construction and adaptive radiation rules (along with others) all apply). The online analytics world is no different.

<TRUTH IN ADVERTISING DEPT>
Some people contacted me about Stephane Hamel's Web Analytics Maturity Model. I knew nothing about it, contacted Stephane, asked to read his full paper (not the shortened version available at http://immeria.net/wamm), did so, talked with him about it, told him my conclusions and take on it and got his permission to share those conclusions and takes here. I also asked Stephane if I could apply his model to some of my work with the goal of creating something with objective metricization that would be predictive in nature and he agreed (if you treat Stephane's axes as clades and consider each node as a specific situation then cladistic analysis tools via Situational Calculus looks very promising (asleep yet?)).
</TRUTH IN ADVERTISING DEPT>

A case in point is Stephane Hamel and his Web Analytics Maturity Model (WAMM). Stephane will emerge as an exemplar for several reasons and WAMM is only one of them.

“KISS should be part of the overall philosophy.”

WAMM is (my opinion) an excellent first step toward solving some of the issues recognized in Part 1 because it does something psycholinguists know must be done before any problem can be solved: it gives the problem a name. Organizations can place themselves or be placed on a scale of 0-5, Impaired to Addicted (Stephane, did you know that only 1-4 would be considered psychologically healthy?). WAMM helps the online analytics world because it creates a codification, an assessment tool for where an organization is in their online efforts.

I asked Stephane if he thought his tool was a solution to what I identified in Part 1. He agreed with me that it wasn't. Its purpose (my interpretation, Stephane agreed) is to create a 2D array, create buckets therein and then explain what goes in each bucket.

I asked Stephane if he believed WAMM provided a metricizable solution with universally agreed to objective measures (I told Stephane that I wasn't grasping how WAMM becomes an “x + y = z” type of tool and asked if I'd missed something). Stephane replied “…no, you haven't missed anything, because it is NOT a x+y=z magical/universal formula, that's not the goal. The utmost goal is to enable change, facilitate discussion, and it's not 'black magic'. A formula would imply there is some kind of recipe to success. Just like we can admire Amazon or Google success and could in theory replicate everything they do, you simply can't replicate the brains working there – thus, I think there is a limit to applying a formula (or 'brain power' is a huge randomized value in the formula).”
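The “2D array with buckets” description can be sketched minimally. To be clear about what is invented here: the 0-5 scale and the Impaired/Addicted endpoints come from the post, but the axis names, the intermediate level labels, and the take-the-minimum rule are illustrative assumptions, not Stephane Hamel's actual WAMM.

```python
# Hypothetical sketch of a WAMM-style maturity assessment.
# Only the 0-5 scale and the "Impaired"/"Addicted" endpoints come from
# the post; everything else (axes, intermediate labels, scoring rule)
# is invented for illustration.

MATURITY_LEVELS = {
    0: "Impaired", 1: "Initial", 2: "Aware",
    3: "Practiced", 4: "Institutionalized", 5: "Addicted",
}

def assess(scores: dict) -> tuple:
    """Place an organization on the 0-5 scale.

    Uses the minimum axis score as the overall level, on the theory
    (stated later in this post) that to be successful you must excel
    evenly along all axes, so the weakest axis governs.
    """
    for axis, s in scores.items():
        if not 0 <= s <= 5:
            raise ValueError(f"{axis}: score {s} outside 0-5")
    level = min(scores.values())
    return level, MATURITY_LEVELS[level]

# Hypothetical organization: strong tools, weak process.
org = {"management": 3, "tools": 4, "process": 2, "people": 3}
level, label = assess(org)  # → (2, 'Aware')
```

The min-score rule is one defensible reading of “excel evenly along all axes”; an average would hide exactly the lopsidedness the model is meant to expose.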

WAMM and any similar models would be considered observational tools (I explain “observational” tools further down in this post). Most observational tools (I would write “all” but don't have enough data to be convinced) trace their origins (and this is a fascinating study) to surveying: people could walk the land and agree “here is a rise, there is a glen”, but it wasn't until surveying tools (the plumb&line, levels, rods&poles, tapes, compass, theodolite, …) came along that territories literally became maps (orienteers can appreciate this easily) that told you “You are here” and gave very precise definitions of where “here” was.

The only problem with observational tools is that the map is not the territory. Yes, large enough maps can help you figure out how to get from “here” to “there”, and how far you can travel (how much your business can successfully change) depends on the size of your map, your confidence in your guide/leader, … . Lots of change means maps have to be very large (i.e., very large data fields/sets) and updated regularly (to ensure where you're walking is still where you want to walk). The adage “Here there be dragons” places challenges in a fixed, historical location; it doesn't account for population and migrational dynamics (market movements, audience changes).

Or you need lots of confidence in your leaders.

“…any science first start as art until it's understood and mature enough, no?”

A conclusion of this research is that online analytics is still more art than science, more practitioner than professional (at least in the client/organization's mind). This was demonstrated as a core belief: respondents used “practitioner” over “professional” at a ratio of 6:1. This language use truly shocked me. Even among non-AmerEnglish speakers the psycholinguistics of practitioner and professional makes itself known. “Practitioner” is to “professional” as “seeking” is to “doing”, “deed” to “task”, “questing” to “working”, …

“The disconnect between what practitioners do and what businesses need is an embarrassment. There's a widening gulf between [online analytics] and business requirements.”

Online analytics makes use of mathematics (statistics, anyway) and although some people use formulae the results are often not repeatable except in incredibly large frames, hence any surgical work is highly questionable. As the USAF Ammo Troop manual states, “Cluster bombing from B-52s is very, very accurate. The bombs are guaranteed always to hit the ground.”

A challenge for online analysts may be recognizing the current state as more art than science and promoting both it and themselves accordingly. They are doing themselves and those they answer to a disservice if they believe and promote that they're doing “science” while the error rates between methods are recognized (probably non-consciously) as “art” by clients. Current models and methods allow for high degrees of flexibility (read “unaccountable error sources”).

“Modern medical science has no cure for your condition. Fortunately for you, I'm a quack.”

A good metaphor is modern medicine. Without a diagnosis there can be no prognosis. You can attempt a cure but without a prognosis you have no idea if the patient is getting better or not. Most people think a prognosis is what they hear on TV and in the movies. “Doctor, will he live?” “The prognosis is good.” Umm…no. A prognosis is a description of the normal course of something, a prediction based on lots of empirical data seasoned with knowledge of the individual's general health. A prognosis of “most people turn blue then die” coupled with observations of “the skin is becoming a healthy pink and the individual is running a marathon” means the cure has worked and that the prognosis has failed.

Right now the state of online analytics is like the doctor telling the patient “We know you're ill but we don't know what you have.” The patient asks “Is there a cure?” and the doctor responds, “We don't know that either. Until we know what you have we don't know how to treat you…but we're willing to spend lots of money figuring it out.”

This philosophy serves the individual but not the whole (as witnessed by the public outcry over the recently published mammogram studies; no better demonstration of the difficulty of communicating science to non-scientists has occurred in recent years).

But once the disease is named? Then we have essentially put a box around whatever it is. We know its size, its shape and its limits.

There can be no standardization, no normalization of procedure or protocol, when the patient can shop for opinions until they find the one they want.

The challenge current models and methods face is that they serve the hospitals (vendors), not the doctors (practitioners) nor the patients (clients/organizations). It doesn't matter if all the doctors agree on a single diagnosis, what matters is whether or not there is a single prognosis that will heal the client. In that sense, WA is still much more an art than it is a science, and while we may all attend Hogwarts, our individual levels of wizardry may leave much to be desired.

“…but give us a second and we'll run the data again.”

If you wish to claim the tools of mathematics then you must be willing to subject yourself to mathematical rigor. Currently there can be no version of Karl Popper's falsifiability when the same tool produces different results each time it's used (forget about different tools producing different results; when the same tool produces different results you're standing at the scientific “Abandon Hope All Ye Who Enter Here” gate).

“…gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

All the online tools currently available are “observational” (anthropologists, behavioral ethologists, etiologists, …, rely heavily on such tools). “Observation” is the current online tool sets' origin (going back to the first online analytics implementation at UoH in the early 1990s) and not much has changed. The challenge to observational tools is that they can only become predictive tools when amazingly large numbers are involved. And even then you can only predict generalized mass movement, neither small group nor individual behavior (for either you need what PsyOps calls ITATs — Individualizing Target Acquisition Technologies), with the mass' size determining the upper limit of a prediction's accuracy.
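The claim that observational tools only become predictive at very large numbers is, at bottom, the law of large numbers: an aggregate rate stabilizes as roughly 1/sqrt(n) while any individual observation stays unpredictable, so the mass's size really does set the upper limit on a prediction's accuracy. A minimal sketch (the 3% conversion rate is an invented figure):

```python
# Sketch of why observational data only predicts at scale: the error
# of an aggregate rate shrinks roughly as 1/sqrt(n), so mass size
# bounds prediction accuracy. The 3% "true" rate is invented.
import random

random.seed(42)  # reproducible illustration

def observed_rate(true_rate: float, n: int) -> float:
    """Simulate n independent visitors; return the observed conversion rate."""
    hits = sum(random.random() < true_rate for _ in range(n))
    return hits / n

TRUE_RATE = 0.03
for n in (100, 10_000, 1_000_000):
    err = abs(observed_rate(TRUE_RATE, n) - TRUE_RATE)
    print(f"n={n:>9,}  |observed - true| = {err:.4f}")
# The aggregate becomes predictable as n grows; any single visitor's
# behavior remains a coin flip, which is why individual-level
# prediction needs different (ITAT-style) tooling entirely.
```

Note the asymmetry the post describes: nothing in this simulation lets you say what visitor #17 will do, only what the mass will do.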

At this point we start circling back to Part 1's discussions about “accountability” and why the suggestion of it gets more nervous laughter than serious nods. Respondents' language indicates there is currently more desire to keep WA an art than a science. There is less accountability when things are an art form. But “metrics as an art” is in direct conflict with client goals. And unless a great majority of practitioners wish their industry to mature, there is no cure for its current malaise.

“The promise has been unfulfilled since 2003. We were talking about more effective marketing, improved customer retention and all that stuff back then.”

One solution to this is giving the industry time to mature. Right now there is conflict between the art and science paradigms, between Aristophanes' “Let each man exercise the art he knows” and Lee Mockenstrum's “Information is a measure of the reduction of uncertainty.”

Time as a solution has been demonstrated historically, most obviously in our medical metaphor. Village wisdomkeepers gave way to doctors then to university degrees in medicine because the buying public (economic pressure) demanded consistency of care/cures. Eventually things will circle back, again due to economic pressure. Enough clients will seek alternatives not provided by institutional medicine and go back to practitioners of alternative medicine, at which point the cycle will begin again. People have been openly seeking alternative cures to catastrophic illnesses since the 1960s. Eventually money began escaping institutional medicine's purview and insurers were being forced to pay. The end result was that institutional medicine and insurers started recognizing and accepting alternative medical technologies…provided some certification took place, usually through some university program.

It will be interesting to see how WAMM economizes the online analytics ecology: will practitioners decide institutions lower in the WAMM matrix are too expensive to deal with? This means such institutions — which require experienced practitioners to survive — will only be able to afford low quality/low experienced practitioners to help them. This can be likened to a naval gunnery axiom, “The farther one is from a target, either the larger the shell or the better the targeting mechanism” and companies will opt for larger shells (poorly defined efforts) rather than better targeting mechanisms (experienced practitioners).

“A dominant strand for [online analytics] the past ten to fifteen years has been incorporating web information with executive decisions.”

So far no single solution to concerns raised in this research is apparent (to me). Instead a solution matrix of several components seems most likely to succeed (WAMM is a type of solution matrix; you can excel along any axis and to be successful you need to excel evenly along all axes). So far three matrix elements — time, a lack of leadership and realism — have been identified. Time to mature is culture dependent so the online community as a whole must do the work.

“Not enough gets said about the importance of abandoning crap.”

(I believe the quote above originated with Ira Glass)

Realism — being realistic about what should be expected and what can be accomplished — deals with social mores and leads into the “lack of leadership” concern. There can be no “realism” until the social frame accepts “realism” as a standard, until hype and promise are dismissed, and this isn't likely to happen until leaders/exemplars emerge who make it so.

“Yes, I see your point. Please remove my post from your blog”

Progress in any discipline depends on public debate and the criticism of ideas. That recognized, it is unfortunate that the current modes of online analytics public debate and criticism are limited to conferences, private conversations and (as witnessed here) online posts. Conferences (by their nature) only allow for stentorian and HiPPOish debate. Private conversations only allow for senatorial flow. In both cases the community at large doesn't take part.

Blogs and related online venues are an interesting situation. They provide a means for voices to be raised from the crowd. Social mechanics research NextStage has been doing (we're working on a whitepaper) documents how leaders emerge (become senatorial, sometimes stentorian and in some cases HiPPOtic), how they fade, how to create and destroy them (for marketing purposes), (probably most importantly) how a given audience will perceive and respond to a given leader and what an individual can do regarding their own leadership status.

“The WAA is very US focussed.”

I bring this into the discussion because several people commented publicly (both in Part 1 comments and elsewhere) and privately (emails and skypes) that the industry (more true of web than search) suffers from a lack of leadership.

People who enjoy the mantle of leadership yet refuse to lead are not leaders. Recognized names had an opportunity to both join and take leadership in the discussion (I mention some who did at the top of this post). Yet the majority of others either failed to respond, chose to ignore the discussion or — as indicated by the quote opening this section — simply backed away when the discussion was engaged. No explanation, no attempt at writing something else. Considering the traffic, twits, follow-up posts on other blogs (for something I posted, anyway), this was an opportunity for people to step forward. Especially when lots of other people were writing that there was a leadership vacuum.

Leaders/Influencers take different forms (as documented in the previously mentioned social mechanics paper). Two forms are Guide and Responder. Guides are those who are in front. They may know the way (hence are “experts”) and may not. Experts may or may not be trusted depending on how well they can demonstrate their expertise safely to their followers (you learn to trust your guide quickly if you've ever gone walking on Scottish bogs. They demonstrate their knowledge by saying “Don't step there”, you step there and go in over your head at which point they pull you out and say “I said, 'Don't step there'.” A clear, clean, quick demonstration of expertise).

Guides who don't know the way rely heavily on the trust of those following them and can be likened to “chain of command” situations; they are followed because they are trusted and have the moral authority to be followed.

The Guide role is definitely riskier. It's also the more respected one because Guides lead by “being in front of the pack, stepping carefully, being able to read the trail signs hence guiding them safely”. The Responder doesn't lead by being in front. Instead they assume a position “closer to the end, perpetually working at catching up, but always telling the pack where to go, where to look and what to do”. The major problem for Responders is that people don't have much respect for that latter role. They may respect the individual, but most people will quickly recognize the role being played and the lack of respect will filter backward to the individual.

This plays greatly into any industry's maturation cycle. New school will replace old school and unless our forebears' wisdom is truly sage — evergreen rather than time&place dependent — the emerging schools will seek their own influencers, leaders and guides. This is already being demonstrated in the fractionalizing of the conference market.

One industry leader offered three points in a comment, saw my response and asked that I remove their comment before it went live. I'm going to address two points (the third was narrative and doesn't apply) because I believe the points should be part of the discussion and more so due to their origin.

First, Web Analytics is not a specific activity.

“People need to look beyond the first conclusions that come to mind.”

I responded that nothing I'd researched thus far led me to think of 'Web Analytics' as an 'always do this – always get that' type of activity, and offered that while different people use 'Web Analytics' for different purposes, the malaise is quite pervasive. Whether or not 'Web Analytics' includes a host of different activities is irrelevant to the discussion. The analysts' dissatisfaction with their role in the larger business frame, their dissatisfaction with the tools they are asked or choose to use, their dissatisfaction with their 'poor country cousin' position in the chain-of-command, …, are what need to be addressed.

Second, the individual wrote that there was no “right way” to do web analytics.

I both agreed and disagreed with this and explained that there are lots of ways to dig a hole. In the end, the question is 'Did you dig the hole?' More specifically, if one is asked to excavate a foundation hole, dig a grave, plow a field, dig a well, plant tomatoes, …, all involve digging holes, yet each requires different tools (time dependency for completion becomes an issue, I know. You could excavate a foundation hole with a hand trowel. I wouldn't want to, and you could). Stating that 'there is a right way to do it' is a faulty assumption itself demonstrates a belief that standardization will never apply and that chaos is therefore the rule.

Chaos being the rule is usually indicative of crossing a cultural boundary (such as a western educated individual having to survive in the Bush. None of the socio-cognitive rules apply until the western individual learns the rules of the Bush culture) or crazy-making behavior (from family and group dynamics theory). Culture of any kind is basically a war against chaos and what cultures do is create rules for proper conduct and tool use within their norms.

One could conjecture that the cross-cultural boundary is the analytics-management boundary. So long as management controls that boundary a) there will be no “one-way” to do analytics (the patients will self-diagnose and -prescribe) and b) analytics will never be granted a seat at the grown-ups' table.

“The numbers need a context.”

So there had better be a 'right way to do it', at least as far as delivering results and being understood are concerned, because without that the industry — more accurately, the practitioners — are lost.

“I could tell them 'It is not possible to send in the Armadillos for this particular effort but communication will continue without interruption' and they'd nod and agree.”

Two needs surfaced quickly:

  • recognize what's achievable when (so people aren't set up to fail) and
  • learn how to promote faster adoption of an agenda (without going to Lysistratic extremes, of course. Everybody wants to keep their job).

Accepting increased accountability addresses some issues and not all. Concepts from several sources (some distilled and not in quotes, some stated more elegantly than I could and in quotes) revealed the following additional matrix components:

1) “[online] Analysts need to share the error margins, not the final analysis, of their tools”
2) stop or at least recognize and honestly report measurement inflation
3) “Trainings need to focus on a proficiency threshold”
4) “…provide a strong evidence of benefit”
5) understand what [a tool] is really reporting
6) “It's better to come at [online analytics] from a business background than the other way around…” (“…but who wants the cut in pay?”)
7) “We should standardize reports because the vendors won't”
8) initiate regular, recognized adaptive testing for higher level practitioners
9) include communication and risk assessment training (sometime we're at a conference, ask me about the bat&ball question. It's an amazingly simple way to discover one's risk assessment abilities)

We must work to get uncertainty off the table.

“The problem is uncertainty…”

That's a long component list, and most readers will justifiably back away or become overwhelmed and disheartened. Fortunately there are historically proven, overlapping strategies for dealing with the above items collectively rather than individually.

  • Analysts live with uncertainty; clients fear it. So “…get uncertainty off the table” when presenting reports (this was termed “stop hedging your bets” by some respondents). This single point addresses items 1, 2, 4, 5, 8 and 9 above. Hopefully you begin to appreciate that working diligently on any one component suggested here will accrue benefits in several directions (so to speak).
  • Identify the real problem so you can respond to their (management's) problem. This point addresses items 1, 2, 3, 4, 5, 6, 7 and 9.
  • Speak their (management's) language. Items 4, 5, 6, 7 and 9.
  • Learn to communicate the same message many ways without violating the core message (we've isolated eight vectors addressing this and the previous item: urgency, certainty, integrity, language facility, positioning, hope, outcome emphasis (Rene, I'm seeing another tool. Are you?)). Items 3, 4, 5, 6, 7, 8 and 9 are handled here.
  • Be drastic. Rethink and redo from the bottom up if you have to. This point deals with items 1, 2, 4, 5, 8 and 9.
  • Focus on opportunities, not difficulties. This point deals with items 4, 5, 6 and 9.

Any one of the above will cover several matrix components right out of the gate. The benefit of any of the above stratagems is that implementing any one will cause the others to take root over time as well, and thus the shift

  • in what the numbers are about,
  • how they are demonstrated,
  • how to derive actionable meaning from them and
  • how accountability is framed

mentioned at the end of part 1 can be easily (well, at least more easily) achieved.

<ABOUT THIS RESEARCH>

I wrote a little about how this study was done in part 1. We contacted some people via email and performed various analyses on their responses; others via phone, ditto; others via Skype, ditto; and some in face-to-face conversation. All electronic information exchanges were retained and analyzed using a variety of analog and digital tools. Face-to-face conversations were conducted with at least one other observer present to check for personal biasing in the resulting analysis.

Like any research, others will need to add their voices and thoughts to the work presented here. I make no claims to its completeness, only that it's as complete as current time and resources allow.

</ABOUT THIS RESEARCH>
