Why Isn't Marketing a Science, Part II

Note 1: This post is dedicated to Christopher Berry, Calum MacKenzie and David Morf, three gentlemen who taught me new ways to solve old problems.

Note 2: This research began in June 2009 based on conversations spawned by Why hasn't Marketing caught on as a “Science”?. A full paper will be available sometime in 2011.

Mass marketing is a uniquely US invention. It was an invention of necessity: up until the 1970s the USA was a remarkably monolithic culture. People immigrated to the US to be American, and that definition was both supplied and propagated worldwide via TV, radio, print and movies (the only media and channels available at the time). The definition of being “American” was itself a product of US-based marketing and was intended to foster a consumer culture (it was very successful).

The US’s late 19th and early 20th century economic power and geographic isolation (equivalent to information isolation or “islanding” at that time) reinforced that “American” stereotype (materially rich while psycho-emotionally independent of others). Indeed, pop cultural icons were not deemed to have “made it” unless they conquered the American market. The end result was that everyone strove to market this ideal and marketing didn’t need to be a science — the market was so big that any effort was destined to be successful with only minor modifications to the definition of “success”.

Meanwhile, immigrants continued to celebrate their ethno-cultural uniqueness, usually in private, behind closed doors, in ghettoes, with festivals that eventually became tourist attractions, etc., but rarely was ethnicity celebrated on main street.

Enter Consumer Choice

The late 1970s introduced a perfect storm of socio-technical events that worked to destroy the monolithic US market: the oil crisis, the rise of cable television and the emergence of cellphones (both forms of information technology), and the influx of populations who wanted to maintain their ethno-cultural heritage at all costs. Where immigrant status was once considered a curse, it was now considered a blessing. Uniqueness was transferring from the population as a whole to the individuals in that population.

A self-cannibalizing cycle emerged. Marketers started to market to the influx of self-actualized immigrants arriving in the US because their numbers provided previously unknown economic power. These self-actualized immigrants re-established ethno-cultural identity among existing peer minorities who had been taught to be quiet about their unique heritages, and a generation emerged who gave up their Americanized names for either assumed or given ethno-cultural names.

By creating and marketing to self-actualized immigrants and existing minorities, marketers and businesses acknowledged and validated both their ethnicity and economic power. By acknowledging and validating ethnicity and economic power, self-actualized immigrants and existing minorities were increasingly able to maintain their (in some cases emerging) pride in their ethno-cultural identities. Increasing pride in ethno-cultural identity allowed ethno-cultural exemplars to be demonstrated on main street. It was no longer a symbol of being “right off the boat” to walk downtown in ethnic dress and regalia.

The end result of this acknowledgement of ethnicity was an acknowledgement of diversity and individuality — again, uniqueness. This shift demonstrated itself in the market by giving all consumers more choices. Grocery stores that once provided shelf space only to “American” brands and palates are increasingly giving over real estate to ethnicity-specific foods and products.

The other place consumer real estate demonstrates ethno-cultural diversity is in information resources. This first appeared with cable television systems. Originally solely in American English (the US produced and distributed the majority of information for this technology), cable television systems have become increasingly culture-specific.

The mid-1990s brought the advance of cheap information distribution via the web and widespread cellphone technology. Now consumer choice is “complete”. The web has moved from an English-only medium to a truly culturally diverse medium, and if you want “the complete story” you can get the news from separate US, British, French, Japanese, Indian, South African, Russian, … sources with the click of a mouse. Macbeth is rewritten for Japanese and Nigerian audiences and Tales of the Monkey King is available with an American Western twist. The increase in web-enabled mobile devices means a diverse information pool is available 24x7x365, and consumers demonstrate their information-source preferences more clearly than ever before.

Markets as Scarce Resources

When presented with scarce but necessary resources, technologies will emerge to exploit those resources as economically as possible.

European, African and Arabic businesses have been well aware of this for decades, centuries and millennia, in that order. Tailoring marketing messages and campaigns to a total possible market of seven million consumers is the standard there, and getting a response of 700,000 is considered a success. Traditional American marketing would see a seven-million response as a failure.

Geomarketing or “local” marketing — emerging in the US as yet another innovation and next big thing — has been the standard elsewhere in the world for quite a long time.

Redefining Information Hegemonies

One of the greatest challenges to marketing becoming a true science (at least in the US) is the existing information hegemonies — megalithic broadcast companies that own multiple information outlets across multiple media channels.

These hegemonies purchase successful culturally specific outlets and channels to increase the hegemonies' reach and economic power. Unfortunately, they then apply marketing methods based on media and information consumption models developed in the first half of the 20th century (when first radio and then TV were in every home).

These models will not thrive long into the 21st century. The cost of information production and distribution — once so prohibitive that information itself was a scarce resource — is now, like Macbeth and Tales of the Monkey King, within anyone’s reach. Information sources from anywhere in the world are a click away and available to anyone with an internet, wifi, etc., connection. YouTube, Facebook, Flickr and related sites turn everyone into their own marketing company. A few nods from PayPal and basement efforts become internationally financed sensations “overnight”.

Technology and information-distribution infrastructures were already defined by cultural constraints elsewhere in the world, so the American-style hegemonies didn’t exist even though “American” cultural standards did. It was possible for other cultures to leapfrog the US in applying concepts of cultural anthropology, linguistics, ethnic studies and other sciences to exploit “small” markets because their infrastructure required it.1

And nowhere was the concept of “small marketing” becoming more obvious than in the online and mobile world, due to the advent of social “small world” models (a concept borrowed from biopharmaceutical clinical-trials methodologies, mathematics, social anthropology and a few other sciences).

Changing Models

The increasing global awareness of cultural identity and diversity, the rise in ethnic pride and awareness, the acceptance of minorities and their requests for equal recognition, etc., are the psychological results of the information-accessibility explosion.

The increase in media outlets, methods and channels is destroying the old American mass-marketing concepts and forcing marketers to use more and more scientific approaches and methods. Marketing is moving from a “…cast your bread upon the waters” mentality to a “choose your bread and waters carefully, and determine ahead of time how far to cast…” paradigm, the latter being something the rest of the world has been doing for a very, very long time.

Conclusion

Marketing will become a science. The rise and fall of disciplines (“neuromarketing” is the latest of these) attempting to explain marketing from a “scientific” paradigm is an example of evolutionary forces in the market looking for an answer to the “how do I best exploit this environment” question and not yet finding it.

Much as evolutionary forces caused biologies to answer “how do I best exploit this environment” with big muscles, big teeth and big brains, so will several new disciplines take hold, but only for as long as there are marketing companies willing to pay the prices they demand.2 I offer an E3 — economic-ecologic-environmental — model because co-evolution, ecologic diversity and resource economics will always apply.

As with our world, so with marketing as a science. Humans fall into a wide variety of ecological niches that range from the obvious (age, gender, language, … essentially hangers-on from the mass-marketing model) to the increasingly subtle (decision styles3, intender status4, psychological needs5), and these niches are multiplying extremely rapidly.

The language of commerce has changed throughout history, and it can be thought of mathematically as a function of “which cultural-language groups had the best information-distribution technologies” × “the largest audience ready and willing to accept the commerce message”. Any change in the information environment creates new opportunities in that environment. Only those willing to create the technologies necessary to explain those changes and new opportunities will thrive there.
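As a rough formalization of that sentence (my sketch only, not the author’s formula), the dominant commerce language at time t would be whichever cultural-language group maximizes the product of distribution capability and receptive audience:

    % A sketch; D_g(t) and A_g(t) are labels introduced here, not notation from the post.
    \[
      \mathrm{CommerceLanguage}(t) \;\approx\; \arg\max_{g \,\in\, \mathrm{groups}} \Big[ D_g(t) \times A_g(t) \Big]
    \]
    % D_g(t): reach/quality of group g's information-distribution technology at time t
    % A_g(t): size of the audience ready and willing to accept group g's commerce message at time t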

Marketing will have to become a science — first borrowing from existing disciplines then giving back to them — just to keep up.


1 – A favorite anecdote that demonstrates this is Atlantic Canada making digital phone and internet technology available to everyone who wanted it decades before the US. Atlantic Canada had never made the investment in copper wires and telephone poles as the standard communications technology. Because copper-based communication technology didn’t exist there, installing a digital optical and wireless infrastructure was both possible and economical.

2 – I recently learned of a major brand that’s jumped on the Neuromarketing bandwagon in a big way. Realizing that a single person placed in an fMRI and flashed brand images doesn’t demonstrate “group” behavior, they’ve purchased ten fMRI machines — probably a US$10M investment in “cheap” machines alone, not counting training, staff, users, housing, maintenance, … — and placed them all in one room. This way they can get ten people into them at once and flash the exact same image to find out how the “group” responds to the brand!

Ah…yeah…

While the efficacy of fMRI, CT and related technologies for marketing purposes is still greatly in doubt (see Brain imaging skewed: “Nearly half of the neuroimaging studies published in prestige journals in 2008 contain unintentionally biased data that could distort their scientific conclusions, according to scientists at the National Institute of Mental Health in Bethesda, Maryland.” among others), the methodology itself is a legacy American paradigm — the only real answers are big, expensive answers…regardless of whether they’re answering the correct question.

3 – Bekoff, Marc (2002, 19 Sep). Animal reflections. Nature, Vol. 419, Issue 6904. DOI: http://dx.doi.org/10.1038/419255a
Carrabis, Joseph (2006). Chapter 1, “What this Book is About,” Reading Virtual Minds Volume I: Science and History. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5
Carrabis, Joseph (2006). Chapter 2, “History,” Reading Virtual Minds Volume I: Science and History. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5
Carrabis, Joseph (2006). Chapter 4, “Anecdotes of Learning,” Reading Virtual Minds Volume I: Science and History, Vol. I. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5
Carrabis, Joseph (2006, 10 Nov). Mapping Personae to Outcomes.
Carrabis, Joseph (2007, 11 May). Make Sure Your Site Sells Lemonade. iMediaConnection
Carrabis, Joseph (2007, 16 May). KBar’s Findings: Political Correctness in the Guise of a Sandwich, Part 2. BizMediaScience
Carrabis, Joseph (2007, 16 May). KBar’s Findings: Political Correctness in the Guise of a Sandwich, Part 3. BizMediaScience
Carrabis, Joseph (2007, 16 May). KBar’s Findings: Political Correctness in the Guise of a Sandwich, Part 4. BizMediaScience
Carrabis, Joseph (2007, 16 May). KBar’s Findings: Political Correctness in the Guise of a Sandwich, Part 1. BizMediaScience
Carrabis, Joseph (2007, 23 Mar). Websites: You’ve Only Got 3 Seconds. iMediaConnection
Carrabis, Joseph (2007, 29 Nov). Adding sound to your brand website. iMediaConnection
Carrabis, Joseph (2007, 30 Mar). Technology and Buying Patterns. BizMediaScience
Carrabis, Joseph (2007, 9 Apr). Notes from UML’s Strategic Management Class – Saroeung, 3 Seconds Applies to Video, too. BizMediaScience
Carrabis, Joseph (2007, 9 Oct). Is Social Media a Woman Thing? AllBusiness.com
Carrabis, Joseph (2007, Oct). The Importance of Viral Marketing: Podcast and Text. AllBusiness.com
Carrabis, Joseph (2008, 1 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 1). BizMediaScience
Carrabis, Joseph (2008, 2 Jul). Responding to Christopher Berry’s “A Vexing Problem, Part 4” Post, Part 2. BizMediaScience
Carrabis, Joseph (2008, 24 Oct). How OJ Simpson Won Barack Obama the 2008 Presidential Election. BizMediaScience
Carrabis, Joseph (2008, 26 Jun). Responding to Christopher Berry’s “A Vexing Problem, Part 4” Post, Part 3. BizMediaScience
Carrabis, Joseph (2008, 30 Oct). Me, Politics, Adam Zand’s Really Big Shoe, How Obama’s and McCain’s sites have changed when we weren’t looking. BizMediaScience
Carrabis, Joseph (2008, 31 Oct). Governor Palin’s (and everybody else’s) Popularity. BizMediaScience
Carrabis, Joseph (2008, 6 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 2). BizMediaScience
Carrabis, Joseph (2008/9, 10 Nov/15 Jul). From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on What are we measuring when we measure engagement? The Analytics Ecology
Carrabis, Joseph (2008/9, 11 Jul/3 Jul). From TheFutureOf (10 Jul 08): Back into the fray. The Analytics Ecology
Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Papadakis 7 Feb 08. The Analytics Ecology
Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Geertz, Papadakis and others, 5 Feb 08. The Analytics Ecology
Carrabis, Joseph (2008/9, 28 Jan/1 Jul). From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence. The Analytics Ecology
Carrabis, Joseph (2008/9, 29 Aug/9 Jul). From TheFutureOf (28 Aug 08): Response to Jim Novo’s 12 Jul 08 9:40am comment. The Analytics Ecology
Carrabis, Joseph (2009). Machine Detection of and Response to User Non-Conscious Thought Processes to Increase Usability, Experience and Satisfaction – Case Studies and Examples. Vol. 3, The 2nd International Multi-Conference on Engineering and Technological Innovation, International Institute of Informatics and Systemics, Orlando, FL
Carrabis, Joseph (2009). A Demonstration of Professional Test-Taker Bias in Web-Based Panels and Applications, 20 pages. NextStage Evolution, Scotsburn, NS
Carrabis, Joseph (2009). Frequency of Blog Posts is Best Determined by Audience Size and Psychological Distance from the Author, 25 pages. NextStage Evolution, Scotsburn, NS
Carrabis, Joseph (2009, 12 Jun). Canoeing with Stephane (Sentiment Analysis, Anyone? (Part 2)). BizMediaScience
Carrabis, Joseph (2009, 5 Jun). Sentiment Analysis, Anyone? (Part 1). BizMediaScience
Carrabis, Joseph; Bratton, Susan; Evans, Dave (2008, 9 Jun). Guest Blogger Joseph Carrabis Answers Dave Evans, CEO of Digital Voodoo’s Question About Male Executives Wielding Social Media Influence on Par with Female Executives. PersonalLifeMedia
Carrabis, Joseph; Carrabis, Susan (2009). Designing Information for Automatic Memorization (Branding), 35 pages. NextStage Evolution, Scotsburn, NS
Carrabis, Joseph; Carrabis, Susan (2009). Machine Detection of Website Visitor Age and Gender via Analysis of Psychomotor Behavioral Cues, 34 pages. Northern Lights Press, Scotsburn, NS
Cesarini, David; Dawes, Christopher T.; Johannesson, Magnus; Lichtenstein, Paul; Wallace, Björn (2009). Experimental Game Theory and Behavior Genetics. Annals of the New York Academy of Sciences, Vol. 1167: Values, Empathy, and Fairness across Social Barriers. DOI: http://dx.doi.org/10.1111/j.1749-6632.2009.04505.x
Daw, Nathaniel D.; Dayan, Peter (2004, 18 Jun). Matchmaking. Science, Vol. 304, Issue 5678
Draaisma, Douwe (2001, 8 Nov). The tracks of thought. Nature, Vol. 414, Issue 6860. DOI: http://dx.doi.org/10.1038/35102645
Kross, Ethan (2009). When the Self Becomes Other. Annals of the New York Academy of Sciences, Vol. 1167: Values, Empathy, and Fairness across Social Barriers. DOI: http://dx.doi.org/10.1111/j.1749-6632.2009.04545.x
Ferster, David (2004, 12 Mar). Blocking Plasticity in the Visual Cortex. Science, Vol. 303, Issue 5664
Pashler, Harold; McDaniel, Mark; Rohrer, Doug; Bjork, Robert (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, Vol. 9, Issue 3. ISSN 1539-6053
Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael (2004, 12 Mar). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, Vol. 303, Issue 5664
Kozlowski, Steve W.J.; Ilgen, Daniel R. (2006, Dec). Enhancing the Effectiveness of Work Groups and Teams. Psychological Science in the Public Interest, Vol. 7, Issue 3. DOI: http://dx.doi.org/10.1111/j.1529-1006.2006.00030.x
Nir, Lilach; Knafo, Ariel (2009). Reason within Passion. Annals of the New York Academy of Sciences, Vol. 1167: Values, Empathy, and Fairness across Social Barriers. DOI: http://dx.doi.org/10.1111/j.1749-6632.2009.04600.x
Matsumoto, Kenji; Suzuki, Wataru; Tanaka, Keiji (2003, 11 Jul). Neuronal Correlates of Goal-Based Motor Selection in the Prefrontal Cortex. Science, Vol. 301, Issue 5630
Ohbayashi, Machiko; Ohki, Kenichi; Miyashita, Yasushi (2003, 11 Aug). Conversion of Working Memory to Motor Sequence in the Monkey Premotor Cortex. Science, Vol. 301, Issue 5630
Otamendi, Rene Dechamps (2009, 22 Oct). NextStage Announcements at eMetrics Marketing Optimization Summit Washington DC. NextStage Analytics
Otamendi, Rene Dechamps (2009, 24 Nov). NextStage Rich Personae™ classification. NextStage Analytics
Otamendi, Rene Dechamps; Carrabis, Joseph; Carrabis, Susan (2009). Predicting Age & Gender Online, 8 pages. NextStage Analytics, Brussels, Belgium
Paterson, S. J.; Brown, J. H.; Gsödl, M. K.; Johnson, M. H.; Karmiloff-Smith, A. (1999, 17 Dec). Cognitive Modularity and Genetic Disorders. Science, Vol. 286, Issue 5448
Pessoa, Luiz (2004, 12 Mar). Seeing the World in the Same Way. Science, Vol. 303, Issue 5664
Prut, Yifat; Fetz, Eberhard E. (1999, 7 Oct). Primate spinal interneurons show pre-movement instructed delay activity. Nature, Vol. 401, Issue 6753
Richmond, Barry J.; Liu, Zheng; Shidara, Munetaka (2003, 11 Jul). Predicting Future Rewards. Science, Vol. 301, Issue 5630
Sugrue, Leo P.; Corrado, Greg S.; Newsome, William T. (2004, 18 Jun). Matching Behavior and the Representation of Value in the Parietal Cortex. Science, Vol. 304, Issue 5678
Tang, Tony Z.; DeRubeis, Robert J.; Hollon, Steven D.; Amsterdam, Jay; Shelton, Richard; Schalet, Benjamin (2009, 1 Dec). Personality Change During Depression Treatment: A Placebo-Controlled Trial. Arch Gen Psychiatry, Vol. 66, Issue 12
Singer, Tania; Steinbeis, Nikolaus (2009). Differential Roles of Fairness- and Compassion-Based Motivations for Cooperation, Defection, and Punishment. Annals of the New York Academy of Sciences, Vol. 1167: Values, Empathy, and Fairness across Social Barriers. DOI: http://dx.doi.org/10.1111/j.1749-6632.2009.04733.x


4 – Carrabis, Joseph (2007, 10 Aug). Priming the Conversion Pump with Color. AllBusiness.com
Carrabis, Joseph (2008/9, 29 Aug/9 Jul). From TheFutureOf (28 Aug 08): Response to Jim Novo’s 12 Jul 08 9:40am comment. The Analytics Ecology
Carrabis, Joseph (2009, 18 Aug). I’m the Intersection of Four Statements. BizMediaScience
Carrabis, Joseph (2005, 20 May). Usability Studies 101: The First Sale… iMediaConnection
Carrabis, Joseph; Carrabis, Susan (2009). Machine Detection of Website Visitor Age and Gender via Analysis of Psychomotor Behavioral Cues, 34 pages. Northern Lights Press, Scotsburn, NS
Education, Employment, and Everything: The Triple Layers of a Woman’s Life, 182 pages. IWC (International Women’s Conference) 2007, University of Southern Queensland, Toowoomba, Queensland, Australia

5 – Fowler, James H.; Schreiber, Darren (2008, 7 Nov). Biology, Politics, and the Emerging Science of Human Nature. Science, Vol. 322, Issue 5903
Kotzeva, Tatyana (2001, Feb). Private Fantasies, Public Policies: Watching Latin American Telenovelas in Bulgaria. Journal of Mundane Behavior, Vol. 2, Issue 1
Liu, Sandra S.; Melara, Robert; Arangarasan, Raj; Lee, Kyung Jae (2005). Development of Consumer Study in Retailing with Visual Technology (a demand-side research agenda). Journal of Shopping Center Research, Vol. 12
Patterson, Paul G.; Prasongsukarn, Kriengsin (2001). The Association Between Consumer Demographic Characteristics and Service Loyalty. Massey University, Auckland, NZ
Style, John (2002, Feb). “Stop keeping count”: how Vernon escapes a mundane sex-life, in Martin Amis’ Let Me Count the Times. Journal of Mundane Behavior, Vol. 3, Issue 1

Defining “Definition” and People as “Programmable Entities”

I've been studying The Calculus of Intentions (it's where semiotics and mathematics intersect) with some remarkably learned people over the past few months. A core question of the study is “How do we create a working definition that can serve as a baseline of knowledge while allowing us to create new knowledge?”

I believe this question is ignored in many disciplines today, especially in those disciplines where business mixes with science (see Why hasn’t Marketing caught on as a “Science”?). I’ve worked in pure research (work that had no obvious ROI) and applied research (“Solve this problem because we can productize the solution”). The former must create working definitions that are expandable; the latter works to create definitions that are brandable. Very different. The two can come into conflict.

A Valid Definition Must Be So General as to Encompass All Variants

In Reading Virtual Minds Volume II: Theory and Online Applications (still writing it, folks), I define “Usable” as

Something is usable when the individual using that thing achieves a goal both known and recognized prior to the usage event.

and “Usability” as

Usability is a measure of an individual's conscious and non-conscious recognition of the pleasure derived from achieving their goal.

Creating definitions that are as general as possible is crucial to Reading Virtual Minds Volume II: Theory and Online Applications because I provide non-NextStage examples1 of how to do what I’m describing and I want readers to know ahead of time what they can expect as outcomes.

Readers will notice that “Usable” is objective and digital (you either did or did not achieve a known and recognized goal), while “Usability” is subjective and analog (did you get a lot or a little pleasure? What do you mean by “a lot” and “a little”, and is it the same as what I mean?). I go into the reasons for this in the book.2
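To make the digital/analog distinction concrete, here is a minimal sketch in Python of how the two definitions could be modeled. The names (UsageEvent, reported_pleasure and so on) are my own illustrative choices, not anything from the book:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UsageEvent:
        goal_known_beforehand: bool   # goal known and recognized prior to the usage event
        goal_achieved: bool           # did the individual achieve that goal?
        reported_pleasure: float      # 0.0-1.0 proxy for conscious/non-conscious pleasure

    def is_usable(event: UsageEvent) -> bool:
        # "Usable" as defined above: objective and digital (yes or no).
        return event.goal_known_beforehand and event.goal_achieved

    def usability(event: UsageEvent) -> Optional[float]:
        # "Usability" as defined above: subjective and analog (how much pleasure),
        # and only meaningful when the thing was usable at all.
        return event.reported_pleasure if is_usable(event) else None

The point of the sketch is only the shape of the two measures: one returns a yes/no, the other a graded value that depends entirely on the individual reporting it.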

Readers will also (I hope) notice that the two definitions above exist in that borderland where pure becomes applied research. The goal is to create something general enough to be wholly true and restrictive enough to be uniquely identifiable as true.3

Pure “Definitions” versus Applied “Definitions”

What else is required? Pure research is usually interested in creating a definition for what hasn’t been in experience before; applied research, not so much. Hence any definition used in business, etc., should encompass all previous similar experiences and definitely should not negate any previous similar experiences.

[Image: Bill Cosby and New Coke]
The classic business-blunder example of this is “New Coke”. There was no question in consumers’ consciousness that New Coke wasn’t “the real thing”, and the debranding halo went from New Coke to Coca-Cola to Bill Cosby himself. The moral can be found easily in the Calculus of Intentions; instead of “Trust me, this is the real thing” (when “the real thing” was the existing definition of the old Coca-Cola formula), using “Trust me, this isn’t for everybody, so give it a taste. This could be the real thing for you”, with Mr. Cosby’s finger first pointing at the Coca-Cola can then at the audience, would have both captured the existing Coca-Cola audience and integrated it into the new formulation.

Integrate existing experience into the new definition and — from a marketing standpoint — you bring the existing audience with you (this is the heart of redesign and rebranding, also covered in Reading Virtual Minds V2). An example of not integrating existing experience into a new definition was something I heard in a radio spot earlier today (21 Jul 10). Some company in the Boston area is publishing a report, “The 25 Most Powerful Businesses in Massachusetts”. The ad then referenced their website with “Learn what makes a brand powerful at …”.

Essentially they use an existing term, “powerful”, then redefine it into something they lay claim to. It doesn't matter if their definition of “powerful” is accurate or meaningful to anything else we may apply that term to because they're also telling us what the term means when they use it.4

Organization and Structure

Next comes a definition's ability to organize a body of knowledge into a clear, irrefutable structure. Such things are called “elegant solutions” in mathematics, meaning the definition demonstrates simple, easily repeatable solutions. This tends to be where pure and applied research — especially when the application is intended for branding — diverge greatly. Pure research works to create foundations, applied research builds on those foundations in the hope that nothing else will be built. Boston's Hancock Tower, New York's Empire State Building and Chicago's Sears Tower (I think it has a different name now) are all buildings (foundational definition) and each has a separate name (applied definition).

The fact that what was Chicago’s Sears Tower for many years is now known as The Willis Tower is a demonstration of an applied definition’s mutability and temporality. People of a certain age will always reference that building as “The Sears Tower” and, if asked about “The Willis Tower”, will have to pause and perform the definition translation before answering with any confidence. Another example is The Boston Garden. I have no idea how many name changes it has gone through, and to most people within a 100-mile radius of Boston who are over 35 years old it will always be “The Boston Garden” (if for no other reason than “The TD BankNorth Garden” does not lend itself to alliteration and syllabification. It officially went from the full “The TD BankNorth Garden” to “The TD Garden” over a year’s time, I think, perhaps longer. Such is the strength of pure and applied definitions as brands). Very often, when placial applied definitions change, society imposes a foundational definition to replace all applied definitions.

Again using Boston area examples, Foxboro Stadium is Foxboro Stadium, not Gillette Stadium (readers specializing in search engines know such examples by heart). The Tweeter Center is the Comcast Center and was Great Woods; most people have to guess where it’s located (Mansfield, MA). An example of pure and applied definitions going hand in glove is Gilford, NH’s “The Meadowbrook U.S. Cellular Pavilion” (once Meadowbrook Farm. “Meadowbrook” has always been part of the venue’s name, so anybody and everybody knows about “The Meadowbrook”).

It is rare that a pure definition will change. Applied definitions are generational (as indicated above).

Recognize What Is and What Isn’t Defined

Lastly, both pure and applied definitions need to clearly demonstrate what is not included in the definition. Binary definitions are great for this. “0” is not “1”, (business) “male” is not (business) “female”. The definition of “Usable” provided at the start of this post is both binary and objective, good on both counts. Things like “Usability” and “New Coke”, being subjective, must always include the author’s intent as part of the definition. I enjoy math puzzles, so their usability to me is quite high; lots of people I know find no enjoyment in them, so my intent must be included in my definition of math-puzzle usability.

And it is the recognition of my intent, the pleasure I feel5, that brings us back to The Calculus of Intentions and creating definitions.

People as Programmable Entities

It is possible to determine usability for different personality types, meaning one can plot how much pleasure a group of people will derive from a given object/device/tool, meaning it’s possible to determine what features said object/device/tool must have for maximum usability, what features to change and how when introducing that object/device/tool into a new market, …

The same can be done for utility.

I was asked recently, “What sort of prison have you constructed, where the communications of people make such sense to you that their actions are programmably obvious…?”

I responded with “The foil here is probably an element of Cassandranism; if things are that obvious you'll know who can be communicated with and who not.”

Such research is, I think, a ship and not a prison, although the two are only different based on definition and intent.


1 – A non-NextStage example is one where NextStage’s Evolution Technology™ (“ET”) isn’t required to achieve the result. The result may have been proven with ET, but ET isn’t required to achieve it.


2 – “Usability” as defined is not “utility”, the measure of relative satisfaction. I may be incredibly satisfied by something yet derive absolutely no pleasure from it, hence never want to use/do it again, such as being extremely satisfied that I survived a plane flight through a hurricane. I’ll never do it again, therefore the usability is zero.

Utility is a subjective and analog measure, and it provides no cycle for improvement. “Usable” provides a binary measure of improvement — it wasn’t usable before and now it is. “Usability” provides an improvement cycle — if usability is low (there is little to no pleasure in something’s use) we can go through iterations wherein changes to some object/device/tool increase usability (each change allows greater pleasure in its use).

As a further example of the difference between usability and utility, note that usability is sensory in nature (another reason it’s analog) while utility is psychological in nature. We are prewired for usability (pleasure/pain); we have to learn utility.


3 – ET and humans move from “wholly true” to “uniquely identifiable as true” (the phenotype-genotype continuum) regularly and both do so via identity-relational models. For example, there exists a “business” definition of gender that is binary and has nothing to do with psychological, neurological, endocrinological, biological, …, science. By its definition, I am male and that is wholly true because it is a binary definition. Either I am or I am not.

When we say “You remind me of …” we're dealing with “uniquely identifiable as true” and our conscious and non-conscious thoughts are using identity-relational models. We're basically comparing our memory of person A with our immediate awareness of person B who's standing in front of us. Are A and B a one-to-one match? Then they are uniquely identifiable and we say “Oh, you're …”. When the match isn't one-to-one we say things like “You remind me of …”, “You're a lot like …” or “I knew someone (just) like you …”

The slide from “I recognize you're a male” to “You remind me of …” to “You're …” is the slide from wholly true to uniquely identifiable as true and uses identity-relational models (how many unique elements are required to uniquely identify this as “not that”? See Chapter 5 Section 3, “The Toddness Factor” in Reading Virtual Minds Volume I: Science and History for a description of this).


4 – Shades of “Pornography is what I'm pointing at when I say it.” I pretty much believe redefining something to suit your needs is obscene and pornographic. In this case, by going to the company's website we learn “This national ranking is the first of its kind, … and provides a new benchmark for marketers”. Excellent! There's no real validity to their “metric” other than self-promotion and the desire to become a standard. Wonderful! Truly! Therefore the basis of the metric is the audience's acceptance of the company's statements as valid.

But wait… I knew an emperor like that…

And truth in advertising here; I have at times advised clients to do something similar. The dissimilarity is that the clients so advised could back up their definitions and claims with long, well documented evidentiary trails.


5 – I also derive utility from them, a sense of self-satisfaction at being able to solve them.




Antagonistic Product Evolution

There is an interesting model in evolution theory with lots of evidence to back it up. It deals with the fact that (in most cases) things evolve faster in competitive environments and things evolve fastest in antagonistic environments.

[Image: They’re not going to get along well at all at all at all]
An antagonistic environment occurs when there’s active and intense competition for resources: for example, two top predators (commonly called “apex” predators, meaning nobody messes with them) vying for supremacy in the same food chain. At some point the two apex predators will stop preying on prey and start preying on each other. They’ll have to, because two apex predators will quickly deplete all prey species and the only food resource left will be each other; hence somebody’s messing with somebody, hence apex-envy ensues, and there can be only one “king of the mountain” in evolutionary terms.

So these predators will rapidly evolve (think “arms escalation”) until one gains the top spot. Sometimes (rarely) cooperation is the result, and when it does occur it usually takes the form of one predator species becoming alpha and the other becoming beta. An example of this is scavenger species that help top predators cull herds of the weak, wait while the top predators dine, then go in for the scraps. Humans and dogs are examples of this in the modern world (i.e., the past 12-15k years or so). Wolves and humans vied for top predator status; wolves evolved into dogs (and right quickly, too. Wolves have been around as “wolves” for close to a million years) because humans left enough from their kills to warrant the evolutionary change.

That’s another thing to be aware of in these evolutionary, escalatory exchanges: the species that chooses the less dominant path tends to thrive because the more dominant species needs it in order to ensure apex status. The fact that humans domesticated wolves into dogs, then created so many varieties of dogs to do such a variety of jobs, is a demonstration of this. The downside to such relationships is that the beta species will become prey to the alpha in hard times.

Antagonistic environments are usually unstable, meaning “something's got to give”. Unstable environments are the bane of ecologies because ecologies survive best when things are in balance. Balance occurs in competition — sometimes A wins, sometimes B wins — but never in antagonism — somebody's got to become “top dog”, to reach the apex, so to speak.

Business Environments

Mature business environments are competitive, rarely antagonistic. Antagonism is a hallmark of early-stage evolutionary systems. There’s extreme competition for resources, and predators haven’t evolved to match the opportunities of select prey because (in early evolutionary systems) everything is prey and everything is predator. Keystone species — the species that support ecologies at a fundamental and necessary level — haven’t evolved yet.

Even in mature markets something will occur: some tipping or tripping point triggers environmental/ecological change and the mature market spawns a new, highly immature market in which antagonism ensues.

Think “land grabs”. Think “speculative markets”. Think “junk bonds”. Think “mortgage crisis”. Think about much of what has happened in the past ten years.

[Image: They’re not going to get along well at all at all at all, either]
Think of any maturing industry and you’ll first see antagonism once a new market opportunity is recognized, eventually followed by competition once the market is stabilized (again, balance). I’ve even heard of market competitors forming “co-opetitions”, co-operative competitive ventures. I’ve seen the contracts involved in such things and graciously shy away. As a friend once told me, “The purpose of those contracts is to decide where and when the mutual f?cking will begin.”

I consider such market realities to be necessary evils. They are fascinating to watch “over there” and tend to be not much fun “right here”. Environments and ecologies evolve and there’s nothing anybody can do to stop them. Any human intervention means new balances will occur. Bailouts and market reforms are (I believe) sorry examples of this. Concepts of market tending and stabilization grew out of long-proven agrarian husbandry concepts that are (typically) misapplied.

The challenge in moving the concept from barnyard to main street is that the barnyard is a fixed and intentionally static environment with a highly monitored ecology. You can grow bigger beef, taller wheat, tastier corn or fleeter salmon because nothing is allowed to deviate from well established norms.

Main street is neither fixed nor static. It is a free market (or at least claims to be), hence environments and ecologies can be highly monitored and that’s about it. Some fox or coyote sneaks through the fence, some crows or chickadees recognize your owls can’t be everywhere, and ba-boom, your whole world changes. Whatever monitoring you have must obey the barnyard rules, but the predators are free to do whatever they want. Predators may not even be recognized by your monitoring systems (think “Bernie Madoff”).

I bring all this up because NextStage is about to get (what I consider) a good review from Gartner in their “Cool Vendor” report. The lines I especially like are

  • Analysis of content before publishing ensures that it will appeal to the desired audience.
  • Analysis of the users to a web site will help align content to the audience and their expectations.
  • These products provide empirical data about user profiles and reactions — which is difficult to obtain other ways — while protecting anonymity.

It's going to be interesting (think “Chinese Curse”), these next few months. I already know what new products NextStage will be releasing this year. I have no idea how they or the markets we're entering will evolve although I do know both will.

The very introduction of new species (regardless of that species’ survival fitness at the time of introduction) into an existing ecology changes it forever (think “invasive species”, think “zebra mussels”, think “kudzu”). All that’s required is that the invasive species be more opportunistic with the available resources than species already established in that environment. The odds are in favor of the invasive species. They’re coming in prepared to evolve. Existing species have evolved to a stasis condition with the existing environment (balanced ecology). The introduction of the new species necessarily disrupts the ecology, changes the environment, new ecological niches are demonstrated, …

A good research project or two, methinks: releasing NextStage’s chimeras (a nod to things hybrid, born of four parents, like NextStage’s Evolution Technology) from the barnyard onto the main streets. Don’t you think?

The Unfulfilled Promise of Online Analytics, Part 3 – Determining the Human Cost

Knowledge will forever govern ignorance, and a people who mean to be their own governors, must arm themselves with the power knowledge gives. A popular government without popular information or the means of acquiring it, is but a prologue to a farce or a tragedy or perhaps both. – James Madison

There was never supposed to be a Part 3 to this arc (Ben Robison was correct in that). Part 1 established the challenge (and I note here that the extent of the response and the voices responding indicates that the defined challenge does exist and is recognized to exist) and Part 2 proposed some solution paths. That was supposed to be the end of it. I had fulfilled my promise to myself1 and nothing more (from my point of view) was required.

But many people contacted me asking for a Part 3. There were probably as many people asking for a Part 3 as I normally get total blog traffic. Obviously people felt or intuited that something was missing, that something I was unaware of had been left out.

But I never intended there to be a Part 3. What to cover? What would be its thematic center?

It was during one of these conversations that I remembered some of the First Principles (be prepared. “First Principles” will be echoed quite a bit in this post) in semiotics.2

According to semiotics, you must ask yourself three questions in a specific order to fully understand any situation3:

  1. What happened?
  2. What do I think happened?
  3. What happened to me?

More verbosely:

  1. Remove all emotionality, all belief, all you and detail what happened (think of quis, quid, quando, ubi, cur, quomodo – the six evidentiary questions applied to life).
  2. What do your personal beliefs, education, training, cultural origins, etc., add to what actually and unbiasedly happened?
  3. Finally, how did you respond — willingly or unwillingly, knowingly or unknowingly, with all of your history and experience — to what happened.

The power of this semioticism is that it forms an equation that is the basis of logical calculus, the calculus of consciousness4, modality engineering5 and a bunch of other fields. I use a simplified form of it in many of my presentations, A + B = C.6

Talking with one first reader, I realized that Part 1 was “What happened?” (the presentation of the research) and Part 2 was “What do I think happened?” (my interpretation of the research). What was left for part 37 was “What happened to me?”

And if you know anything about me, you know I intend to have fun finding out!

All Manner of People Tell Me All Manner of Things

[Image: Oliver’s Travels]
The above is a line from Oliver’s Travels (highly recommended viewing), something said by the Mr. Baxter character. Mr. Baxter is himself a mystery and — although his true nature is hinted at several times — it is not revealed until the last episode. There we are told about The Legend of Hakon and Magnus. In short, Mr. Baxter could be a good guy, a bad guy or the individual directing the good or bad guy’s actions. His role entirely depends on what side you are on yourself, a true Rashomon scenario. I found myself in something similar to Mr. Baxter’s situation, in that how people responded to my research, its publication and me also depended greatly on what side people were on when they contacted me.

I was both dumbfounded and honored by the conversations Parts 1 and 2 generated. The number of people who picked up on or continued the thread on their own blogs (here, and alphabetically: Christopher Berry (and a note that Chris continues the conversation in A Response (The Unfulfilled Promise of Analytics 3)), Alec Cochrane, Stephane Hamel, Kevin Hillstrom, Daniel Markus, Jim Sterne, Shelby Thayer, and if I’ve forgotten someone, my apologies), twittered it onward, skyped and called me was…I could say unprecedented, and remind me to tell you about a psychology convention in the early 1990s (nothing to do with NextStage, just me being me, stating what is now recognized as common knowledge yet way before others decided it was common. Talk about unprecedented results. I had to be escorted out under guard. For those of you who know Dr. Geertz, his comment upon learning this was “I’m not surprised you’d have to be escorted out by guards. You have that subtle way about you…”8).

But to note the joy means to recognize the sorrow (as was done in Reading Virtual Minds Vol. 1: Science and History Chapter VI, “The Long Road Home”). While the majority of people honored me and a good number of people appreciated that I had done some useful research and donated something worth pondering, there were a few (just a few, honestly) who damned me.

The damning per se I don't mind. It's part of the territory. It was the manner and the persons involved that truly surprised me.

I was accused of possibly destroying a marriage (Susanism: If you think this is about you, it’s not. We know a lot more people than just you), of maligning certain individuals (usually by people who maligned other individuals during the research. I guess I wasn’t maligning the correct individuals in their view), of not demonstrating the proper respect to industry notables (same parenthetical comment as previous and, you guessed it, another NextStage Principle), and told that I had better post an apology to these same industry notables (two people wrote apologies in my name and strongly suggested that I publish them), …

Whoa!

Who gave me such power and authority to make or break people's lives? Certainly I didn't give it to myself, nor did I ask others to give it to me. And if anybody did give it to me without my knowing I gladly give it back. As I've said and written many times, I do research. When new data makes itself available and as required, I update my research. But until such new data comes in, the research stands.

What I really want to know is if, when the results of research are discomforting, the industry's standard and usual procedure is

  • to change either the research or the results so that people feel warm and fuzzy — hence have no impetus to act (according to one person at yesterday’s NH WAW, “Don’t measure what you can’t change”. An interesting statement that I disagree with. Doing so means throwing out meteorology, astronomy, … Much of what has historically been measured without any ability to change it allowed us to create the technologies that would produce change in previously unchangeable systems)
  • or let the discomfiting research stand — so that the challenge can be recognized and action either taken or the challenge ignored.

It seems that “change either the research or results” is the standard (or at least done when required) because, while few asked that I rewrite research or results so that certain individuals appeared more favorably, the ones who did ask sure were some high-ranking industry folks.

Heaven forbid these folks wanting different results publish them, or do complementary research that either validated or invalidated my results.

Wait a second. What am I thinking? Obviously it would be impossible for them to do research that validates mine.9

Of course, publishing research would also mean publishing their methodologies, models, analytic methods, … and the reasons that ain't gonna happen will be covered later in this post.

And if that is the standard and usual procedure — at least among those in the high ranks — then

  • congratulations to all the companies hiring high ranking consultants to make them feel good rather than solve real problems and
  • be prepared for those coming up through the ranks to learn this lesson when it is taught them.

[Image: I’m mad as hell and I’m not going to take it anymore!]
For the record, not much upsets me (ask Susan for a more honest opinion of that). The sheer stupidity of arguments that resort to emotionalism or are nothing more than attempts to protect personalities and positions, though… Those do offend me (can’t wait to learn how our Sentiment Analysis tool reports this). And more about stupidity later in this post (let me know if you recognize Joseph’s “I’m mad as hell and I’m not going to take it anymore” persona).

When the Stories Meet the Numbers (Statistics, Probability and Logic)

I originally surveyed about sixty people for Part 1. That number grew to about one hundred in Part 2 due to responses to Part 1. Currently I've had conversations (I'm counting phone calls, Skype chats and calls, email exchanges and face-to-face discussions at meetings I've attended as “conversations”) with a few hundred people about those posts.

I noticed something interesting (to me) about the conversations I was having. Lots of people made statements about statistics, probability and logic but were using these terms and their kin in ways that were unfamiliar to me. Especially when I started asking people what their confidence levels were regarding their reporting results.

I’ll offer that search analysts (I’m including SEO and SEM in “search analysts”) seem to have things much easier than web analysts do. “We were getting ten visits a day, changed our search terms/buy/imaging/engines/… and now we’re getting twenty visits per day.” Granted, that’s a simplification, but it’s the heart of search analytics — improving first the volume and second the quality of traffic to a site. Assuming {conversions::traffic-count} has standard variance, search analytics either produces or it doesn’t, and it’s obvious either way.
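A minimal sketch in Python of the kind of before/after check that paragraph describes. The numbers and the two-proportion z-test are my own illustration of “it’s obvious either way”, not a method proposed in the post:

    from math import sqrt

    def before_after_check(conv_before, visits_before, conv_after, visits_after):
        # Compare conversion rates before and after a search-marketing change.
        # Returns both rates and a two-proportion z-score for the difference.
        p1 = conv_before / visits_before
        p2 = conv_after / visits_after
        pooled = (conv_before + conv_after) / (visits_before + visits_after)
        se = sqrt(pooled * (1 - pooled) * (1 / visits_before + 1 / visits_after))
        z = (p2 - p1) / se if se else 0.0
        return p1, p2, z

    # e.g. ten visits a day for a month versus twenty visits a day for a month
    print(before_after_check(conv_before=9, visits_before=300,
                             conv_after=24, visits_after=600))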

Web analytics, though… “The Official WAA Definition of Web Analytics” is

Web Analytics is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimizing Web usage.

The analytics organization I see most often cited, SEMPO, doesn’t even attempt to define (“SEMPO is not a standards body…”) or police (“…or a policing organization.”) itself. It does offer search courses, but the goals of the SEMPO courses and the WAA-recognized courses are greatly different (an opinion, that, based on reading their syllabi as someone who has taught a variety of courses in a variety of disciplines at various educational levels in various educational settings).

There are twenty-one words in the official WAA definition and a philologist will tell you that at least ten require further definition.

Definitions that require definitions worry me. Semiotics and communication theory dictate that the first communication must be instructions on how to build a receiver. Therefore any stated definition that requires further definition is not providing instructions on how to be understood (no receiver can be built because there is no common signal, sign or symbol upon which to construct a receiver. If you've ever read my attempts at French, you know exactly what I mean10).

One of the statements made during the research for this arc was “[online] Analysts need to share the error margins, not the final analysis, of their tools.” It expressed a sentiment shared, if not directly stated, by a majority of respondents and it truly surprised me. It states as a working model that any final analysis is going to be flawed regardless of the tools used, therefore standardize on the error margins of the tools rather than on their outputs.

So…decisions should be made based on the least amount of error in a calculation, not what is being calculated (does the math we're using make sense in this situation?), the inputs (basic fact checking; can we validate and verify the inputs?) or the outcome (does the result seem reasonable considering the inputs we gave it and the math we used?)?

A kind of “That calculation says we’re going to be screwed 100% but the error margin is only 3%, while that other calculation says we’re only going to be screwed 22% but the error margin is 10%.

Let’s go with the first calculation. Lots less chance of getting it wrong there!”, ain’t it?

More seriously, this is a fairly sophisticated mathematical view. Similar tools have similar mathematical signatures when used in similar ways. When a tool has an output of y with fixed input x in one run and y+n with that same fixed input x in another run but a consistent error margin in both runs, standardizing on the error margin e is a fairly good idea. It indicates there's more going on in the noise than you might think.11
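A small sketch (hypothetical numbers, my own function name) of the cue described above: if repeated runs on the same fixed input spread wider than the tool’s stated error margin, the “noise” deserves investigation.

    def exceeds_stated_error(outputs, stated_error_margin):
        # Flag when repeated runs on the same fixed input vary more than the
        # tool's stated error margin admits -- the cue for digging into the noise.
        spread = max(outputs) - min(outputs)
        return spread > stated_error_margin, spread

    # Two runs with the same fixed input x: output y, then y + n
    flag, spread = exceeds_stated_error(outputs=[0.42, 0.47], stated_error_margin=0.03)
    print(flag, spread)  # True, spread of roughly 0.05 -> investigate the noise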

Of course, this means you better start investigating that noise darn quick.

My understanding of “statistics, probability and logic” was often at odds with what people were saying when they used those words. The differences were so profound (in some cases) that I asked follow up questions to determine where my misunderstandings were placed.

Serendipity doing its usual job in my life, over this fall-winter cycle I took on the task of relearning statistics12, partly so I could understand how online analysts were using statistics-based terms. As noted above, the differences between what I understood and how terms were being used and applied were so great that I questioned my understanding of the field and its applications.

And to show whither I wandered, I offer a philologic-linguistic evidentiary trail for all who will follow. For those who just want to get where I’m going, click here.

Web Analytics is Hard

Of course it is. Anything that has no standards, no baselines, no consistent and accurate methods for comparison is going to be hard because all milestones, targets and such will have to be arbitrarily set, will have no real meaning in an ongoing, “a = b” kind of way, and therefore Person A’s results are actually just as valid as Person B’s results because both are really only opinion and the HiPPOs rule the riverbank…

…until a common standard can be decided upon.

Web Analytics is Easy

Of course it is. Anything that applies principled logic, consistent definitions, repeatable methodologies that provide consistent results, … is going to be.

Online Analytics Is Whatever Someone Needs It to Be

Ah…of course it is.

And this is the truest statement of the three for several reasons. Consider the statement “(something) is Hard“.

It doesn't matter what that “(something)” is, it can be driving a car, riding a bike, watching TV, playing the oboe, composing poetry, doing online analytics, … . What that “(something)” is is immaterial because the human psyche, when colloquial AmerEnglish is used, assigns greater cognitive resources to understanding “Hard” than it assigns to “Web Analytics”, and this resource allocation has nothing to do with whether or not “Web Analytics” is easier to understand than “Hard”, it has to do with what are called Preparation Sets13. The non-conscious essentially goes into overdrive determining how hard “Hard” is. It immediately throws out things like “iron”, “stone” and “rock” because the sensory systems don't match (iron, stone and rock involve touch-based sensory systems, transitive expressions such as “(something) is hard” don't) and starts evaluating the most difficult {C,B/e,M}14 tasks in memory — most recent to most distant past — to determine if the individual using the term “Hard” is qualified to use the term as a surrogate for the person being told “(something) is Hard” (ie, our non-conscious starts asking “Do they mean what I think they mean when they say 'Hard'?”, “Do they know what 'Hard' is?”, “What do they think 'Hard' means, anyway?”, “Do they mean what I mean when I say 'Hard'?” and so on).15

What I will offer is what I’ve offered before; any discipline that defines success “on the fly” isn’t a discipline at all (at least it’s not a discipline as I understand “discipline”). Lacking evidentiary trails, definitions and numeric discipline, comparisons of outputs and outcomes degenerate to “I like this one better” regardless of reporting frame.

Teach Your Children Well

Where statements like “(something) is Hard” and “(something) is Easy” really make themselves known is when teaching occurs.

Let me give you an example. You have a fear of (pick something. Let’s go with spiders because I love them and most people don’t (only click on this link if you love spiders)). Phobias are learned behaviors. This means someone taught you to be afraid of spiders. It’s doubtful someone set out some kind of educational curriculum with the goal of teaching you to fear spiders (barring Manchurian Candidate scenarios). It’s much more likely that when you were a child, someone demonstrated their fear of spiders to you, probably either repeatedly or very dynamically, so you learned either osmotically or via imprinting. Children demonstrate their parents’ behaviors in hysteresis patterns. This means that if you measured a parent’s level of arachnophobia and assigned it a value of 10, chances are the child would demonstrate their arachnophobia at a level of 100 or so in a few years’ time. Children who learn their parents’ fears and anxieties do so without understanding any logical basis for those fears, only the demonstration of them. When there is no logic to temper the emotional content, hysteria results.

However, if a parent demonstrates a fear response and the ability to control it, to explain to the child that fear response's origin, etc., most often the child learns caution and not fear (not to mention that the parent usually learns to control their fear). The difference can be thought of as the difference between teaching a child to “Be careful” versus hysterically screaming “EEEEK!”

What's so fascinating about this is that it's also how we pass on our core, personality and identity beliefs whether we mean to or not (I cover this in detail in Reading Virtual Minds Volume I: Science and History). We can be teaching physics, soccer, piano, bread-baking, … It doesn't matter because all these activities will be vectors for our core, identity and personal beliefs and behaviors. If we are joyful people then we will teach others to be joyful and the vector for that lesson will be physics, soccer, piano, bread-baking, … And if we are miserable people? Then we will teach others to be miserable and to be so especially when they do physics, play soccer, the piano, bake bread, …

Thus if any teaching/training occurs intentionally or otherwise, the individual doing the training/teaching is going to de facto teach their internal philosophies and beliefs — both business and personal — as well as their methods and practices to their students. This can't be helped. It's how humans function. If the philosophy and belief is that things are hard, then that philosophy and belief will be taught de facto to the students. Likewise for the philosophy and belief that something is easy. There will be no choice.16

The point is we protect others from what we fear. Humans are born with precious few fears hard-wired into us (heights and loud noises are the two most cited. Heights because we're no longer well adapted to an arboreal existence and loud noises because predators tend to make them when they attack).

So the statement “(something) is hard” either means we fear “(something)” or we wish to protect others from having the difficulties we have when we do “(something)”, and if difficulties existed then the non-conscious mind is going to place a fear response around whatever “(something)” is to make sure we don't put ourselves into unnecessary difficulties yet again.

The statement “(something) is easy” generates the polar opposite of the above, and I, dear reader, I am the neuro- and philo-linguist's nightmare because my training is simply that “(something) is”. My training is that both whatever exists and whatever state it exists in are mind of the observer17 dependent. Thus things simply are, and our perceptions, experience and decisions make them hard, soft, easy, whatever, to us individually.

It's always all about you, isn't it?

More colloquially, whatever your perceptions of the world are, it's all you and precious little of anything else (a favorite quote along these lines is “What if life is fair and we get exactly what we deserve?” Ouch!).

The Trail Leads Here

There are lots of errors I can understand. A lack of knowledge, of mathematical rigor, of logic training, of problem solving skills, … These and a host of others I can appreciate. Especially in those junior to any given discipline.

But unprovable math, a lack of basic fact checking, outputs that have no meaning based on what's come before and (let's not forget) emotionalism? This really blew me away. Math can be taught, junior people who don't fact check can be trained, making sure units match can be taught and comes with experience, … but emotionalism?

I'll accept any of the above in junior players with the caveat that the first to go has got to be emotionalism.

But senior people failing any of these before offering something for publication? Then defending this lack of rigor with an emotional outburst? And when it happens more than once?

Talk about abandoning First Principles!

We don't need no stinking badges

First Principles? We don't need no stinking First Principles!

Challenge logic, challenge research, challenge findings, sure. Challenge a person if they challenge you, sometimes maybe. I'll tolerate a lot, folks (ask Susan for confirmation), and I have a real challenge with such as these: arguing emotionally and telling me it's logic, arguments based on no facts at all… I'll accept, entertain and work with ignorance, arrogance, discomfiture, anxiety, joy, love, appreciation, anger, … quite a wide swath of human response.

But arguments such as these are, in my opinion, stupid.

There, I typed it.

Yet because such arguments were presented, I must recognize that in some camps doing web analytics means to heck with fact-checking, logic, … That it's acceptable to ignore truth and common practice and to base outcomes on what one needs them to be. I mean, when someone with title and prestige does it, the overt statement is that others should, will or do do it as well. Definitely people in the same company should or will do it. Whatever's lacking in the master's portfolio won't be found in the student's (in most cases).

Want to know why I stopped attending conferences? See the above.

Joseph, the Abominable Outsider

Stephane Hamel applauded me (I think) when he referenced me as an industry “outsider” in his A nod to Joseph Carrabis: The unfulfilled promise of online analytics. Others used the term to applesauce me. (I was flattered by both, actually.)

I had been wondering if it was worth my writing a little bit on elementary logic, probability theory, problem solving or some such. A previous draft of this post contained an explanation of elementary statistics and problem solving as it might be applied to online analytics. Now I really had to question such an effort. If the notables don't know how to apply these things…

Where the stories meet the numbers, there Understanding dwells

The power of logic, knowing problem solving methods, basic statistics, probability and so on is that they provide basic disciplines that prevent or at least inhibit mistakes such as those listed above. You have the tools and training to basically “…draw an XY axis on the paper, chart those numbers and the picture that results points you in the direction you need to go.” You can be emotional about your research and your findings but you can't defend your research emotionally. The research and findings are either valid or they ain't.18

As for drawing an XY axis, charting numbers and getting some direction…what can you do with such evidentiary information? There are lots of things you can do. Determine the relationships between the numbers and you can exploit their meanings.
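
To make that concrete, here is a minimal sketch (the numbers are invented purely for illustration): chart the paired observations, fit a line, and let the correlation tell you whether the relationship is worth exploiting.

```python
# A minimal sketch: chart paired numbers, fit a line, check the correlation.
# The spend/conversions figures below are invented purely for illustration.
import numpy as np

spend       = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # x values
conversions = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])  # y values

r = np.corrcoef(spend, conversions)[0, 1]   # strength of the linear relationship
m, b = np.polyfit(spend, conversions, 1)    # slope and intercept of y = mx + b

print(f"correlation r = {r:.3f}")
print(f"fitted line: y = {m:.2f}x + {b:.2f}")
```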

But if the basics are beyond the industry greats

  • then explaining the differences between cross-sectional studies and longitudinal studies (cross-sectional studies measure everything at a single point in time, so x is fixed for all the y values; longitudinal studies follow the same subjects across time, yielding many (x,y) pairs. Longitudinal studies are greatly more expensive than their cross-sectional cousins, which is why cross-sectional regression models are often used when longitudinal regression models are what's needed) won't do much good19,
  • nor will explaining the need for creating a “standard” site for calibration purposes,
  • nor will explaining that models can only be standardized once the methods themselves are analyzed and an accuracy “weighting” is determined (allowing all models to be compared to a “gold standard”, so that comparing my results to your results actually has analytic meaning),
  • nor will explaining the meaning of and how to “normalize” samples (a rough sketch of the calibration idea follows this list). Normalizing lets you see where the normals fall on your standard curve (the chart here was captioned “Figuring out where your normals are on your curve”). You put your normals in the middle to lower part of the curve because a) this is where population densities are greatest and b) no naturally occurring line is going to be straight, so you shoot for placing your normals on the straightest part of the curve to get some kind of linearity (that y = mx + b thing). Every naturally occurring phenomenon follows mathematical rules that produce curves. In the chart, between the two blue lines is where standards occur. Below the bottom blue line is “below standard”, above the top blue line is “out of standard”. Between the bottom blue line and the green line is the normal range. You calibrate your methods against the gold-standard normals, and anything above is where the money lies,
  • …20
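
To make the calibration and normalization points above a little more concrete, here is a rough sketch, with invented numbers and no claim to being anyone's actual procedure: fit each method's readings against a set of gold-standard values and use the fit to see how well each method tracks the standard.

```python
# A rough sketch of calibrating methods against a "gold standard" (all numbers
# invented): fit each method's readings against known reference values and use
# the fit, plus the residual error, as the accuracy "weighting" for comparison.
import numpy as np

gold     = np.array([10, 20, 30, 40, 50, 60], dtype=float)   # known reference values
method_a = np.array([11, 19, 31, 42, 49, 63], dtype=float)   # tool A's readings
method_b = np.array([14, 26, 35, 52, 58, 77], dtype=float)   # tool B's readings

def calibration(readings, reference):
    """Fit readings = m*reference + b; return slope, intercept and RMS error."""
    m, b = np.polyfit(reference, readings, 1)
    rms = np.sqrt(np.mean((readings - (m * reference + b)) ** 2))
    return m, b, rms

for name, readings in (("A", method_a), ("B", method_b)):
    m, b, rms = calibration(readings, gold)
    print(f"method {name}: slope={m:.2f} intercept={b:.2f} rms error={rms:.2f}")

# A slope near 1 and an intercept near 0 mean the method tracks the standard;
# the rms error tells you how much weight its numbers deserve in comparisons.
```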

It takes more effort to reorder a partially ordered system than it does to create order in an unordered system (bonds, even when incorrect, have existing binding energy).

I completely understand why so many of NextStage's clients couldn't document the accuracy of the online analytics tools they were using at the time they contacted us for help. This lack of documentation was something I was very uncomfortable with. If there's no proven methodology for demonstrating a number's validity then you've essentially moved away from the gold standard and declared that the value of your dollar is based entirely on what others are going to value it at (pretty much determined by your political-military-industrial capabilities, or in this case, by those guarding the riverbank). Your numbers only have meaning so far as others are willing to accept them as valid, and if lots of money is being paid for an opinion, that opinion is going to be gold regardless of whether it's based on invalid assumptions or documentable facts.

The online analytics field is partially ordered — it's been around long enough for a hierarchy to appear — so only those willing to expend the energy are going to attempt fixing it for the sake of getting it fixed rather than changing it to suit their own objectives.

And this is where

The detritus encounters the many winged whirling object

NSE was seeing so many erroneous tool results (my favorite example was the company that was getting 10k visitors/day and only 3 conversions/month. Their online analyst swore by the numbers) that it led us to come up with a reliable y = x (within 2db) relationship that we could prove, repeat and document. It relied solely on First Principles. This led to our in-house analytics tools, which is why we're analytics-tool agnostic. We really don't care what tools clients use. If we don't believe the numbers we'll use our own tools to determine them because we know and can validate how our tools work. As a result we now often use our tools to validate the accuracy of other tools.
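
In the spirit of that 10k-visitors-and-3-conversions example, here is a small hypothetical sanity check (the plausibility thresholds are my own illustrative assumptions, not NextStage's validation method): compute the conversion rate the reported figures imply and flag pairs that are mutually implausible.

```python
# A hypothetical sanity check in the spirit of the example above: compute the
# conversion rate the reported figures imply and flag mutually implausible pairs.
# The plausibility band below is an assumption for illustration only.
def implied_conversion_rate(visitors_per_day: float, conversions_per_month: float,
                            days_per_month: int = 30) -> float:
    return conversions_per_month / (visitors_per_day * days_per_month)

rate = implied_conversion_rate(visitors_per_day=10_000, conversions_per_month=3)
print(f"implied conversion rate: {rate:.6%}")   # 0.001000%, one sale per ~100,000 visits

if not (0.0005 <= rate <= 0.20):
    print("flag: traffic and conversion figures are mutually implausible; re-measure")
```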

I have no dog in this fight (either the “Web Analytics is…” fight or the fight over whether a promise existed and has gone unfulfilled, because I'm a recognized industry outsider) and won't be dragged into it (I mean, would you really want me involved?). My agenda is making sure that those coming to NextStage for help either bring with them some mathematical rigor or allow NextStage to invoke it. There is little that can be done when a tool lacks internal consistency (given a consistent input it generates different outputs).
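
And here, as a toy illustration only, is what a basic internal-consistency check looks like, assuming you can re-run a tool on the exact same captured input (the stub below stands in for whatever tool you actually use):

```python
# A toy internal-consistency check: run the same tool on the exact same captured
# input several times and confirm the outputs agree. `stub_tool` stands in for
# whatever real tool and replayable input you have.
def check_consistency(measure, fixed_input, runs=5, tolerance=0.0):
    """Return (is_consistent, results) for repeated runs on identical input."""
    results = [measure(fixed_input) for _ in range(runs)]
    return (max(results) - min(results)) <= tolerance, results

stub_tool = lambda session_log: 42.0   # a deterministic stand-in is trivially consistent
ok, results = check_consistency(stub_tool, fixed_input="captured_session.log")
print("internally consistent:", ok, results)
```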

It really is that simple, folks. These are First Principles and they always work. Don't believe me? Ask Ockham. First Principles have to work. As long as the sun rises in the east and sets in the west, as long as there are stars up in the sky, as long as the recognized laws of reality are valid, …

And because mathematics is a universal language, the stars are in the sky, etc., etc., these rules have to apply to online analytics and the tools used therein.

Unless you're happy with high variability in results sets given a known and highly defined set of inputs.

Which is fine, if that's what your values are based on.

And I doubt it is, so be prepared for companies to use HiPPOs only for political purposes (“Our methods are valid because they were installed/given to us/updated/validated/… by HiPPO du jour”), not for accuracy purposes.

How fast are you going?

I mean, people make a living out of these things, right? When someone talks about a regression curve and that a decision was made because the probabilities were such and so, does it matter if they know what they're talking about?

Or is being able to use a tool the same as understanding what the tool is doing?

And I know there are online analysts out there who take high variability and weave it into gold. Good for them (truly!). They have a skill I lack. And they're performing art, not science, and as someone who walks in both worlds I will share my opinion that science is lots easier than art. Science has rules. Art is governed by what the buying public is willing to spend and on whom.

Ahem.

That offered, HiPPOs du jour should be prepared for highly defined and validatable game-changing methods and technologies to un-du jour them, because such methods and technologies will do exactly that given time, regardless of where they originate and how they emerge. In this, like stars shining in the sky, there is no option, no way out. The laws of evolutionary dynamics apply in everything from rainstorm puddles on the pavement to galactic clustering (I can demonstrate their validity in the online analytics world very quickly and easily; start with the first online analytics implementation at UoH in the early 1990s and follow the progression to today. Simple, clean and neat. I love it when things work. Don't you? Gives me confidence in what I think, do and say).

My suggestion (note the italics) is that the online community create an unbiased, product agnostic experimental group. All empirical sciences that I know of have experimental disciplines within them (physics has “experimental physics”, immunology has “experimental immunology”, …). NextStage is not part of this community so again, we have no dog in this fight. Let me offer NextStage as an example, though — we do regularly publish our experimental methods and their results in our own papers and in business-science journals and in scientific conference papers. This allows others to determine for themselves if our methods are valid and worthy. Granted, NextStage comes from a scientific paradigm and perhaps taking on some of science's disciplines would benefit the industry as a whole, or at least bring more confidence and comfort to those within it.

But what about the Third Semiotic Question?

Answering “What happened to me?” follows the trail of asking trusted others (my thanks to Susan, Charles, Barb, Mike, Warner, Lewis, Todd, Little-T and the Girls, M, Gladys and Dolph) many questions to bridge the gaps in my understanding.

All the ills referenced in parts 1 and 2 demonstrated themselves in full. People who didn't like what I wrote triangulated: they contacted others whom they thought were socially closer to me or “might have an in”, but heaven forbid they contact me directly. Others focused their frustration at me because (probably in their minds) I was something concrete and tangible, something they could point at, instead of something they felt powerless against: the industry as a whole. Still others did so because they consider me an industry leader (I'm not. I'm an outsider, remember? I can't lead an industry I'm not a part of. Should Moses start telling Buddhists how to behave?). And (I'm told) I became the subject of klatch-talk on at least two continents (obviously, I need to start charging more for my time).

All of these things add up to determining the human cost of the unfulfilled promise of online analytics. As I quoted before, Coca-Cola Interactive Marketing Group Manager Tom Goodie said “Metrics are ridiculously political.” He was correct and not by half. The cost is high. It is highest amongst

  • those unsure of the validity of their methods, their measurements and their meanings who want to be accepted and acknowledged as doing valuable work yet are unable to concisely and consistently document what they're doing to the satisfaction of executives signing their checks
  • and those who are cashing those checks to buy new clothes.

Do I think the online analytics industry will change because of my research and its publication?

See this tool? I must know what I'm doing because I use this tool.

Did you read what I wrote about accountability in The Unfulfilled Promise of Online Analytics, Part 1? People are being paid without being accountable for what they're being paid to do. The sheer human inertia put forth to not change that model has got to be staggering, don't you think?

And I doubt anything I could do would bring such a change about. My work may contribute, it may be a drop in the bucket helping that bucket to fill and that's all.

The industry itself will change regardless (surprise!). As a WAWB colleague recently wrote, “For a field that's changing rapidly, based on rapidly changing technologies, I personally feel that holding any expectations for the future is a set up for disappointment. The expectation of change is the only realistic expectation I can hold today.” and I agree. Things will change. They always do. To promise anything else is to lie first to one's self then to others.

Final Thoughts

This is the end of the Unfulfilled Promise arc for me, folks. Please feel free to continue it on your own and give me a nod if you wish.


(my thanks to readers of Questions for my Readers who suggested this footnoting format over my usual <faux html> methods and to participants in the First NH WAW who, knowing nothing about this post, covered much the same topics during our lunch conversation)

1 – A constant promise to myself regarding my work: perform honest research, report results accurately and without bias, and (when possible) determine workable solutions to any challenges that present themselves in either research or results.

back

2 – For those who don't know, much of ET is based on anthrolingualsemiotics — how humans communicate via signs. “Signs” means things like “No Parking”, true, and also means language, movement, symbols, art, music, … . According to Thomas Carlyle, it is through such things “that man consciously or unconsciously lives, works and has his being.” You can find more about semiotics in the following bibliography:

Aho, Alfred V. (2004, 27 Feb). Software and the Future of Programming Languages. Science, V 303, I 5662. DOI: 10.1126/science.1096169

Balter, Michael (2004, 27 Feb). Search for the Indo-Europeans. Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1323

Balter, Michael (2004, 27 Feb). Why Anatolia? Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1324

Benson, J.; Greaves, W.; O'Donnell, M.; Taglialatela, J. (2002). Evidence for Symbolic Language Processing in a Bonobo (Pan paniscus). Journal of Consciousness Studies, V 9, I 12. http://www.ingentaconnect.com/content/imp/jcs/2002/00000009/00000012/1321

Bhattacharjee, Yudhijit (2004, 27 Feb). From Heofonum to Heavens. Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1326

Carrabis, Joseph (2006). Chapter 4, “Anecdotes of Learning”, Reading Virtual Minds Volume I: Science and History, V 1. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006). Reading Virtual Minds Volume I: Science and History, V 1. Northern Lights Publishing, Scotsburn, NS

Chandler, Daniel (2007). Semiotics: The Basics. Routledge. ISBN 978-0415363754

Crain, Stephen; Thornton, Rosalind (1998). Investigations in Universal Grammar. MIT Press. ISBN 0-262-03250-3

Fitch, W. Tecumseh; Hauser, Marc D. (2004, 16 Jan). Computational Constraints on Syntactic Processing in a Nonhuman Primate. Science, V 303, I 5656

Gergely, Gyorgy; Bekkering, Harold; Kiraly, Ildiko (2002, 14 Feb). Rational imitation in preverbal infants. Nature, V 415, I 6873. DOI: http://dx.doi.org/10.1038/415755a

Graddol, David (2004, 27 Feb). The Future of Language. Science, V 303, I 5662. DOI: 10.1126/science.1096546

Holden, Constance (2004, 27 Feb). The Origin of Speech. Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1316

Montgomery, Scott (2004, 27 Feb). Of Towers, Walls, and Fields: Perspectives on Language in Science. Science, V 303, I 5662. DOI: 10.1126/science.1095204

Pennisi, Elizabeth (2004, 27 Feb). The First Language? Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1319

Pennisi, Elizabeth (2004, 27 Feb). Speaking in Tongues. Science, V 303, I 5662. DOI: 10.1126/science.303.5662.1321

back

3 – There is (in my opinion) no greater demonstration of this principle than in The Book of the Wounded Healers, a long forgotten book that I hope will become available again sometime soon.

back

4 – Aleksander, Igor; Dunmall, Barry (2003). Axioms and Tests for the Presence of Minimal Consciousness in Agents I: Preamble. Journal of Consciousness Studies, V 10, I 4-5

back

5 – Carrabis, Joseph (2004, 2006, 2009). A Primer on Modality Engineering, 18 pages. Northern Lights Publishing, Scotsburn, NS

Carrabis, Joseph (2009, 18 Aug). I'm the Intersection of Four Statements. BizMediaScience

Carrabis, Joseph (2009, 8 Sep). Addendum to “I'm the Intersection of Four Statements”. BizMediaScience

Nabel, Gary J. (2009, 2 Oct). The Coordinates of Truth. Science, V 326, I 5949

back

6 – The simplest things often have the most power. The semioticist's A + B = C demonstrates itself with three questions to form equations of meaning such as:

(what happened) + (what do I think happened) = (what happened to me)

(what happened to me) – (what do I think happened) = (what happened)

(what happened to me) – (what happened) = (what do I think happened)

Know any two and the last reveals itself to you.

But only if you're willing.

back

7 – Note to Jacques Warren: Un et un est troi (“one and one is three”). Ha!

back

8 – Note to Ben Robison: Nope, ET wouldn't detect the sarcasm. The string was too short. We're working on it.

back

9 – Note to Ben Robison: Still working on that sarcasm thing. We have what we think is a good go at it in the NS Sentiment Analysis tool we'll be making public either this week or next (still waiting for the interface and may decide to go without it just to learn what happens).

back

10 – As Jacques Warren, Stephane Hamel and Rene can tell you, my best French is laughable. My attempt at “My gosh, what a beautiful day” usually comes out as “Joli jour heureux je”. (C'est rire, n'est-ce pas?)

back

11 – Carrabis, Joseph (2007, 10 Jan). Standards and Noisy Data, Part 1. BizMediaScience

Carrabis, Joseph (2007, 11 Jan). Standards and Noisy Data, Part 2. BizMediaScience

Carrabis, Joseph (2007, 12 Jan). Standards and Noisy Data, Part 3. BizMediaScience

Carrabis, Joseph (2007, 14 Jan). Standards and Noisy Data, Part 4. BizMediaScience

Carrabis, Joseph (2007, 27 Jan). Standards and Noisy Data, Part 5. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 10. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 6. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 7. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Standards and Noisy Data, Part 8. BizMediaScience

Carrabis, Joseph (2007, 28 Jan). Where Noisy Data Meets Standards (The Noisy Data arc, Part 9). BizMediaScience

Carrabis, Joseph (2007, 29 Jan). For Angie and Matt, and The Noisy Data Finale. BizMediaScience

Carrabis, Joseph (2007, 29 Jan). Standards and Noisy Data, Part 11. BizMediaScience

back

12 – Periodic relearnings are part of my training and makeup. I put myself through periodic re-educations because I question my knowledge, not because I question someone else's. My goal is to find the flaws in my understanding, not to pronounce someone else's in error. Periodic re-educations keep subject matter knowledge fresh within me, bring new understandings to old educations, increase wisdom, all sorts of good things. Admittedly, this has enabled me to recognize flaws in other people's reasonings. Two examples that the online community may be familiar with are Eric Peterson's engagement equation (flawed definitions and mathematical logic) and Stephane Hamel's WAMM (frame confusion).

To respond to some comments made on the (now dead) TheFutureOf blog, I had to study other people's work. One such work was Eric Peterson's engagement equation. Other people had contacted me about his equation with some questions about its validity (for the record, I had no intention of looking at Eric's engagement equation until he mentioned it in response to something I'd written. Once he mentioned it, my belief was he'd “placed it in the game”, so to speak, hence opened it up to inspection).

In any case, the result of my own and others' questioning was that I studied how that equation was derived (was the mathematical logic viable and consistent, were the variables defined and used consistently, …) and found it flawed. Eric asked if it would be possible for us to simply work together on the equation to remove some ambiguities and make it more generally applicable, thereby removing any questions of mathematical validity and providing business value.

The public response to my reworking of Eric's original equation both confused and concerned me. My reworking was nothing more than turning it into a multiple regression model with the b0 and e terms set to 0 and all bn assumed to be 1 (they could be changed as needs dictated). This allowed people using the reworking to determine by simple variance which models/methods weren't valid in their business setting and ignore them. I kept thinking people would laugh at how simplistic my reworking was, but the response was quite the opposite. It was at this point my concerns about basic mathematical knowledge among online analysts flared.
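
For readers who want to see the shape of such a reworking, here is a minimal sketch; the component names and numbers are placeholders of my own, not Eric's actual variables.

```python
# A minimal sketch of such a reworking: a score treated as a multiple regression
# y = b0 + b1*x1 + ... + bn*xn + e with b0 = e = 0 and every bn starting at 1.
# Component names and values are placeholders, not Eric's actual variables.
import numpy as np

components = {                     # per-visit metrics, scaled to comparable ranges
    "click_depth": np.array([0.2, 0.5, 0.9, 0.4]),
    "duration":    np.array([0.1, 0.6, 0.8, 0.3]),
    "recency":     np.array([0.7, 0.7, 0.7, 0.7]),   # no variance, so no signal
}
weights = {name: 1.0 for name in components}          # all bn = 1; b0 = e = 0

score = sum(weights[name] * values for name, values in components.items())
print("scores per visit:", np.round(score, 2))

for name, values in components.items():
    print(f"{name}: variance = {values.var():.3f}")   # near-zero variance: drop or re-weight
```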

I read through Stephane Hamel's WAMM paper (also because others entered it into a discussion) and recognized that by adding some consistent variable definitions that tool would have a great deal of power across disciplines. I asked Stephane if he'd mind my tinkering and so the story goes.

The challenge with Eric Peterson's engagement equation and Stephane Hamel's WAMM is (in my current understanding) that there is no “standard”, itself a theme I'll return to in this post. As an example, my current work with WAWB involves applying some standard modeling techniques so a “normal” can be determined. This would allow Company A to measure itself against a normal rather than comparing itself to bunches of other companies (that might not be good exemplars based on differing business and market conditions) and determine upon which vector Company A should place its efforts to ensure cost-efficient gains along all WAMM vectors. The first aspect (my opinion) would be organizational. Without people accepting recognized truth there is no truth (again, my opinion).

And each time I take on such a task I require myself to relearn the necessary disciplines so I can be confident that my understandings are as close to the original author's as possible.

My method for learning and re-learning anything is to go back to First Principles (as mentioned earlier in this post). Some people may have heard or seen me talk about learning theory and how it can be applied everywhere. That's a lot of what First Principles are about. Start with the most basic elements you can, understand them as completely as possible, build upon that. One thing this provides me is the ability and confidence to discuss my ideas openly, the freedom to ask questions honestly and truthfully, and to understand and accept conflicting views easily and graciously. Put another way, the more you know, the wider your field of acceptance and understanding, and the more fluid and dynamic you become in your ability to respond to others.

So I started relearning statistics by going back to First Principles, studying Gauss, Galton, Fisher and Wright, giving myself the time to understand how the discipline evolved, how the concepts of regression, regression to the mean, ANOVA, ANCOVA, trait analysis, path analysis, structural equations modeling, causal analysis, least squares analysis, …, came about, how they're applied to different sciences (agriculture, eugenics, medicine, …), and how questions of bias, efficiency, optimality, sufficiency, ancillarity, robustness, … came about and how they are handled.

I also learned that the advent of fast, inexpensive computing power tended to focus people's attention on problems that could be solved via fast, inexpensive computing rather than on problems that needed to be solved. This was (to me) a point of intersection with the Unfulfilled Promise posts: “gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

So I shifted my focus a bit. I decided to use online analytics as the groundwork for teaching myself statistics.

back

13 – Somebody remind me to publish The Augmented Man. It covers Preparation Sets, EEGSLs and all that stuff in detail.

And it's another darn good read. Phphttt!

back

14 – Carrabis, Joseph (2006). Chapter 2, “What The Reading Virtual Minds Series Is About”, Reading Virtual Minds Volume I: Science and History. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006). Chapter 4, section 2, “The Investors Heard the Music”, Reading Virtual Minds Volume I: Science and History, V 1. Northern Lights Publishing, Scotsburn, NS. ISBN 978-0-9841403-0-5

Carrabis, Joseph (2006, 10 Nov). Mapping Personae to Outcomes

Carrabis, Joseph (2007, 23 Mar). Websites: You've Only Got 3 Seconds. iMediaConnections

Carrabis, Joseph (2007, 11 May). Make Sure Your Site Sells Lemonade. iMediaConnections

Carrabis, Joseph (2007, 29 Nov). Adding sound to your brand website. iMediaConnections

Carrabis, Joseph (2008/9, 28 Jan/1 Jul). From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence. The Analytics Ecology

Carrabis, Joseph (2008, 26 Jun). Responding to Christopher Berry's “A Vexing Problem, Part 4” Post, Part 3. BizMediaScience

Carrabis, Joseph (2008, 2 Jul). Responding to Christopher Berry's “A Vexing Problem, Part 4” Post, Part 2. BizMediaScience

Carrabis, Joseph (2008/9, 11 Jul/3 Jul). From TheFutureOf (10 Jul 08): Back into the fray. The Analytics Ecology

Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Geertz, Papadakis and others, 5 Feb 08. The Analytics Ecology

Carrabis, Joseph (2008/9, 18 Jul/7 Jul). From TheFutureOf (16 Jul 08): Responses to Papadakis 7 Feb 08. The Analytics Ecology

Carrabis, Joseph (2008/9, 29 Aug/9 Jul). From TheFutureOf (28 Aug 08): Response to Jim Novos 12 Jul 08 9:40am comment. The Analytics Ecology

Carrabis, Joseph (2008, 1 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 1). BizMediaScience

Carrabis, Joseph (2008, 6 Oct). Do McCain, Biden, Palin and Obama Think the Way We Do? (Part 2). BizMediaScience

Carrabis, Joseph (2008, 30 Oct). Me, Politics, Adam Zand's Really Big Shoe, How Obama's and McCain's sites have changed when we weren't looking. BizMediaScience

Carrabis, Joseph (2008, 31 Oct). Governor Palin's (and everybody else's) Popularity. BizMediaScience

Carrabis, Joseph (2008/9, 10 Nov/15 Jul). From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on What are we measuring when we measure engagement? The Analytics Ecology

Carrabis, Joseph (2009). A Demonstration of Professional Test-Taker Bias in Web-Based Panels and Applications, 20 pages. NextStage Evolution, Scotsburn, NS

Carrabis, Joseph (2009). Machine Detection of and Response to User Non-Conscious Thought Processes to Increase Usability, Experience and Satisfaction – Case Studies and Examples. Towards a Science of Consciousness: Hong Kong 2009, University of Arizona, Center for Consciousness Studies, Tucson, AZ

Carrabis, Joseph (2009, 5 Jun). Sentiment Analysis, Anyone? (Part 1). BizMediaScience

Carrabis, Joseph (2009, 12 Jun). Canoeing with Stephane (Sentiment Analysis, Anyone? (Part 2)). BizMediaScience

Carrabis, Joseph (2007, 30 Mar). Technology and Buying Patterns. BizMediaScience

Carrabis, Joseph (2007, 9 Apr). Notes from UML's Strategic Management Class – Saroeung, 3 Seconds Applies to Video, too. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 1. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 2. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 3. BizMediaScience

Carrabis, Joseph (2007, 16 May). KBar's Findings: Political Correctness in the Guise of a Sandwich, Part 4. BizMediaScience

Carrabis, Joseph (2007, Oct). The Importance of Viral Marketing: Podcast and Text. AllBusiness.com

Carrabis, Joseph (2007, 9 Oct). Is Social Media a Woman Thing? AllBusiness.com

Carrabis, Joseph; Bratton, Susan; Evans, Dave (2008, 9 Jun). Guest Blogger Joseph Carrabis Answers Dave Evans, CEO of Digital Voodoos Question About Male Executives Weilding Social Media Influence on Par with Female Executives. PersonalLifeMedia

Carrabis, Joseph; Carrabis, Susan (2009). Designing Information for Automatic Memorization (Branding), 35 pages. NextStage Evolution, Scotsburn, NS

Carrabis, Joseph (2009). Frequency of Blog Posts is Best Determined by Audience Size and Psychological Distance from the Author, 25 pages. NextStage Evolution, Scotsburn, NS

Daw, Nathaniel D.; Dayan, Peter (2004, 18 Jun). Matchmaking. Science, V 304, I 5678

Draaisma, Douwe (2001, 8 Nov). The tracks of thought. Nature, V 414, I 6860. DOI: http://dx.doi.org/10.1038/35102645

Ferster, David (2004, 12 Mar). Blocking Plasticity in the Visual Cortex. Science, V 303, I 5664

Pashler, Harold; McDaniel, Mark; Rohrer, Doug; Bjork, Robert (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, V 9, I 3. ISSN 1539-6053

Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael (2004, 12 Mar). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, V 303, I 5664

Kozlowski, Steve W.J.; Ilgen, Daniel R. (2006, Dec). Enhancing the Effectiveness of Work Groups and Teams. Psychological Science in the Public Interest, V 7, I 3. DOI: http://dx.doi.org/10.1111/j.1529-1006.2006.00030.x

Matsumoto, Kenji; Suzuki, Wataru; Tanaka, Keiji (2003, 11 Jul). Neuronal Correlates of Goal-Based Motor Selection in the Prefrontal Cortex. Science, V 301, I 5630

Ohbayashi, Machiko; Ohki, Kenichi; Miyashita, Yasushi (2003, 11 Aug). Conversion of Working Memory to Motor Sequence in the Monkey Premotor Cortex. Science, V 301, I 5630

Otamendi, Rene Dechamps; Carrabis, Joseph; Carrabis, Susan (2009). Predicting Age & Gender Online, 8 pages. NextStage Analytics, Brussels, Belgium

Otamendi, Rene Dechamps (2009, 22 Oct). NextStage Announcements at eMetrics Marketing Optimization Summit Washington DC. NextStage Analytics

Otamendi, Rene Dechamps (2009, 24 Nov). NextStage Rich PersonaeTM classification. NextStage Analytics

Paterson, S. J.; Brown, J. H.; Gsödl, M. K.; Johnson, M. H.; Karmiloff-Smith, A. (1999, 17 Dec). Cognitive Modularity and Genetic Disorders. Science, V 286, I 5448

Pessoa, Luiz (2004, 12 Mar). Seeing the World in the Same Way. Science, V 303, I 5664

Richmond, Barry J.; Liu, Zheng; Shidara, Munetaka (2003, 11 Jul). Predicting Future Rewards. Science, V 301, I 5630

Sugrue, Leo P.; Corrado, Greg S.; Newsome, William T. (2004, 18 Jun). Matching Behavior and the Representation of Value in the Parietal Cortex. Science, V 304, I 5678

Tang, Tony Z.; DeRubeis, Robert J.; Hollon, Steven D.; Amsterdam, Jay; Shelton, Richard; Schalet, Benjamin (2009, 1 Dec). Personality Change During Depression Treatment: A Placebo-Controlled Trial. Arch Gen Psychiatry, V 66, I 12

back

15 – And before I get another flurry of emails that I'm attacking one person or another, no, I'm not. An almost identical process occurs when someone says “(something) is Easy”. I describe the “(something) is Hard” version because it's easier for people to understand. One of the wonders of AmerEnglish and American cultural training, that — it is easier to accept that something can be hard and harder to accept that something could be easy.

Human neural topography. Gotta love it.

back

16 – This understanding of what happens during teachings and trainings is why all NextStage trainings are done the way they are (see Eight Rules for Good Trainings (Rules 1-3) and Eight Rules for Good Trainings (Rules 4-8)) and could be why our trainings get the responses they do (see Comments from Previous Participants and Students).

back

17 – Bloom, Paul (2001). Precis of How Children Learn the Meanings of Words. Behavioral and Brain Sciences, V 24

Burnett, Stephanie; Blakemore, Sarah-Jayne (2009, 6 Mar). Functional connectivity during a social emotion task in adolescents and in adults. European Journal of Neuroscience, V 29, I 6. DOI: 10.1111/j.1460-9568.2009.06674.x

Frith, Chris D.; Frith, Uta (1999, 26 Nov). Interacting Minds–A Biological Basis. Science, V 286, I 5445

Gallagher, Shaun (2001). The Practice of Mind (Theory, Simulation or Primary Interaction). Journal of Consciousness Studies, V 8, I 5-7

Senju, Atsushi; Southgate, Victoria; White, Sarah; Frith, Uta (2009, 14 Aug). Mindblind Eyes: An Absence of Spontaneous Theory of Mind in Asperger Syndrome. Science, V 325, I 5942

Tooby, J.; Cosmides, L. (1995). Foreword to S. Baron-Cohen, MindBlindness: An Essay on Autism and Theory of Mind. MIT Press, Cambridge, Mass.

Zimmer, Carl (2003, 16 May). How the Mind Reads Other Minds. Science, V 300, I 5622

back

18 – I'll use myself as an example. I've often become emotional when talking about research and results. But (But!) regardless of my emotionalism, the work stands or doesn't. I can clarify, elucidate, explain, divulge, describe, … and in the end, the work stands or it doesn't.

back

19 – If your model is a linear variation (standard regression models are linear in the parameters) then you have something like y = mx + b or y = b0 + b1x + e, and every one-unit change in x produces an m (or b1) unit change in y. Using the above equations as examples, we get the textbook definition of the regression coefficient (either m or b1 in the above): the effect that a one-unit change in x has on y.
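
A quick numeric check, with invented values m = 3 and b = 2:

```python
# A quick numeric check with invented values m = 3, b = 2: moving x by one unit
# moves y by the coefficient m, not by one unit.
m, b = 3.0, 2.0
y = lambda x: m * x + b
print(y(5.0) - y(4.0))   # prints 3.0, i.e. the regression coefficient
```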

back

20 – I have experience working with large data sets. Some of you might know I worked for NASA in my younger years. I was responsible for downloading and analyzing satellite data. The downloads came every fifteen minutes and reported atmospheric phenomena the world over. My job was to catch the incongruous data and discard it. I got to a point where I could look at this hexadecimal data stream and determine weather conditions anywhere in the world before it got sent on for analysis.

Amazing that I got dates back then, isn't it?

back


The Unfulfilled Promise of Online Analytics, Part 2

Perfection is achieved,
not when there is nothing more to add,
but when there is nothing left to take away.
– Antoine de Saint-Exupery, Wind, Sand and Stars

<CAVEAT LECTOR>
Readers can find the previous entry in this arc at The Unfulfilled Promise of Online Analytics, Part 1.

First, I want to thank all the people who read, commented, twittered, emailed, skyped and phoned me with their thoughts on Part 1.

My special thanks to the people with reputations and company names who commented in Part 1. Avinash Kaushik and Jim Novo, I thank and congratulate you for stepping up and responding (I queried others if I could include them in this list, they never responded). Whether you intended to or not, whether you recognize it or not, you demonstrated a willingness to lead and a willingness to get involved. Please let's keep the discussion going.

Also my thanks to those who took up the gauntlet by propagating the discussion via their own blogs. Here Chris Berry (and I also note that Chris' The Schism in Analytics, A response to Carrabis, Part II post presages some of what I'll post here) and Kevin Hillstrom come to mind. My apologies to others I may not have encountered yet.

Second, I was taken aback by the amount of activity this post generated. I was completely unprepared for the responses. It never occurred to me there was a nerve to be struck; only one person interviewed responded purely in the positive. The lack of positive response caused me to think this information was self-evident.

Well…there was one of the problems. It was self-evident. Like the alcoholic brother-in-law elephant in the living room, it took someone new to the family to point and say, “My god is that guy drunk or what!”

And like the family who's been working very hard making sure nobody acknowledges the elephant, the enablers came forward — okay, they emailed, skyped and phoned forward. One industry leader commented, saw my response and asked that their comment be removed. I did so with great regret because there can be no leadership without discussion, no unification of voices until all voices are heard.

Please note that some quotes appearing in this entry may be from different sources than in part 1 and (as always) are anonymous unless a) express permission for their use is given or b) the quote is in the public domain (Einstein, Saint-Exupery, etc).

Okay, enough preamble. Enjoy!
</CAVEAT LECTOR>

The whole industry needs a fresh approach. This situation isn't going to improve itself.

There was a sense of exhaustion among respondents regarding the industry. It took two forms and I would be hard pressed to determine which form took precedence.

One form I could liken to the exhaustion a spouse feels when their partner continually promises that tomorrow will be better, that they'll stop drinking/drugging/gambling/overeating/abusing or otherwise acting out.

It wasn't always the case. Once upon a time (that phrase was actually used by more than one respondent) there was a belief that if things were implemented correctly, if a new tool could be developed, if management would understand what was being done, if if if… Things could and would be better. Promises were made that were never kept and were then comfortably forgotten.

The second form I could liken to the neglected child who starts acting out simply to get attention. Look at me, Look at me! But mom&dad always have something else to focus their attention on. There's the new product launch, opening new markets, having to answer to the Board, (and probably the worst) the other children (marketing, finance, logistics, …), …

“When you know the implementation is correct you have to wonder if the specifications are wrong.”

Several respondents showed an impressive level of self-awareness. Many of them have moved on, either out of the industry completely or into more fulfilling positions within. All recognized that any industry that succumbs to promise and hype will ultimately end in disappointment.

First we're told to nail things down then given a block of unobtainium to nail them in then told to do it now!

The disappointment took two primary forms (clear schisms abounded in this research. Clear schisms are usually indicative of deep level challenges to unification in social groups) and the division was along personality types. Respondents who were more analytic than business focused were disappointed because “…a fraction of implementation achieve business goals. A tiny faction of those actually work.”

Respondents who were more business than analytics focused were disappointed because the industry didn't help them achieve their career goals.

For many in both camps, moving on was a recognition of their own personal growth and maturation; for most it was frustration-based, a running away from pain rather than a movement towards pleasure. This latter again demonstrates a victim mentality, a sense of being caught in the middle between warring parents.

“When the tools don't agree management's solution is to get a new tool.”

Deciding on tools is more politics than smarts. Management doesn't ask us, they just go with the best promises.

Respondents demonstrated frustration with clients/organizations and vendors that refuse to demonstrate leadership. This was such a strong theme that I address it at length below. Sometimes a lack of leadership is the result of internal politics (“…and that's (competition, keeping knowledge to themselves, backstabing) is starting to happen (we see the schism (right word?) between Eric's 'hard' position and Avinash 'easy' (and others)…”).

Leadership vacuums also develop when power surges back and forth between those given authority positions by others. Family dynamics recognizes this when parents switch roles without clearly letting children know who's taking the lead (think James Dean's “You're tearing me apart” in Rebel Without a Cause). This frustration was exacerbated when respondents began to recognize that no tool was truly new, only the interfaces and report formats changed.

There was a sense among respondents that vendors and clients/organizations were switching roles back and forth, neither owning leadership for long, and again, the respondents were caught in the middle.

“Management pays attention to what they paid for, not what you tell them.”

Some respondents are looking at the horizon and reporting a new (to them) phenomenon: as vendors merge, move and restructure there's an increasing lack of definition around “what can we do with this?” This is disturbing in lots of ways.

...everybody's agreeing with their own ideas and nobody else's.

Analysts will begin to socially and economically bifurcate (there will be no “middle class”). Those at the bottom of the scale will get into the industry as a typical “just out of school” job then move elsewhere unless they're politically adept. The political adepts will join the top runners, either associating themselves with whatever exemplars exist or by becoming exemplars themselves. But the social setting thus created allows for a multitude of exemplars, meaning there are many paths to the stars, meaning one must choose wisely, meaning most will fail and thus the culture bifurcates again and fewer will stay long enough to reach the stars. “You have to pick who you listen to. I get tired figuring out who to follow each day.”

Respondents admitted to lacking (what I recognize as) research skills. I questioned several people about their decision methods — had they considered this or that about what they did or are planning to do — and universally they were grateful to me for helping them clarify issues. Those that had appreciable research skills were hampered by internal politics (“Until my boss is ready nothing gets done.”)

Most respondents confused outputs with outcomes (as noted in part 1) because tools are presented and trained at two levels (this is my conclusion based on discussions. I'm happy to be corrected). There's the tool core that only a few learn to use and there's the tool interface that everyone has access to.

Everyone can test and modify their plans based on the interface outputs, but what happens at the core level (how the interface outputs are arrived at) is the great unknown, hence can't be defended in management discussions: “…I can't explain where it came from so I'm ignored.” Management's (quite reasonable, to me) response follows Arthur C. Clarke's “Mankind never completely abandons any of its ancient tools”: they go with what they know, especially when analysts themselves don't demonstrate confidence in their findings. “I can only shrug so many times before they stop listening, period.” Management is left to make decisions based on experience, and now we see the previously mentioned bifurcation creeping into business decisions. Those with the most experience, the most tacit knowledge, win. As John Erskine wrote, “Opinion is that exercise of the human will that allows us to make a decision without information”, and management, asking for more accountability, is demanding to understand the basis for the information given.

“Did you ever get the urge when someone calls up or sends e-mails asking, 'How's that data coming?' to say, 'Well, we're about two hours behind where we would be if I didn't have to keep stopping to answer your goofy-?ss phone calls and e-mails.' This is called project management, I guess.”

Some tools are rejected even when they make successful predictions.

“Ignore them” as a strategy for responding to business requests works both ways. When management repeatedly asks difficult-to-solve questions, analysts ignore them until the final results are in. By that time both question and answer are irrelevant to a tactical business decision and once again the “promise” is lost. In-house analysts can suggest new tools and must deal with their suggestions gaining little traction. “Management works in small networks that look at the same thing. They're worse than g?dd?mn children. You have to whack them on the side of the head to get their attention.”

Management's reluctance to take on different tools and methodologies is understandable. Such decisions increase risk and no business wants risk.

“To change the form of a tool is to lose it's power. What is a mystery can only be experienced for the first time once.”

...as online analytics matures it must evolve to survive.

I asked for clarification of the statement above and was told that yes, there are times when old paradigms need to be tossed aside, and that knowing when is a recognizable management skill that can only be exercised by extreme high-level management, by insanely confident upstarts and lastly by (you guessed it) trusted leaders/guides. The speaker had recently returned to the US from a study of successful EU-based startups. When and how paradigms should be shifted and abandoned is a hot topic among 30ish EU entrepreneurs.

“We're suppose to be solving problems. But I can't figure out what problems we're suppose to solve.”

Random metric names and symbols is not an equation.

(the quote above is from Anna O'Brien's Random Acts of Data blog)

Business and Science are orthogonal, not parallel. Any science-based endeavor works to overcome obstacles; if not directly, then by providing insight into which obstacles can be overcome and how. Business-based endeavors work to generate profit. Science involves empirical investigation. Investigation takes time, and only certain businesses can afford time because unless the science is working at overcoming a business obstacle, it's a cost, not a profit.

So if you can't afford the time involved in research and are being paid to solve business problems your options are limited. Most respondents relied on literature (usually read at home during “family time” or while traveling), conferences, private conversations and blogs. Literature is only produced by people wanting to sell something (this includes yours truly). It may be a book, a conference ticket, a tool, consulting, a metaphysic, …, and even when what they offer is free (such as most blogs) consumers pay with their attention, engagement and time (yes, I know. Especially with my posts).

...I don't believe in WA anymore, I haven't seen any of my clients change because of it and all the presentations that I've seen are always similar...

Conferences and similar venues are biased by geographies, time and cost (again, even if free you're paying somehow. Whoever is picking up the bar tab and providing the munchies is going to be boasting about how many attended).

Private conversations provide limited access and that leaves blogs. The largest audiences will be (most often) offline in the form of books and online in the form of blogs.

Behold, and without most people realizing it's happening, exemplars form. The exemplar du jour provides the understanding du jour, hence a path to what problems can be solved du jour. Who will survive?

Historical precedent indicates that exemplars who embrace and encourage new models will thrive. More than thrive, they will continue as positive exemplars. Exemplars not embracing or at least acknowledging new models will quickly become negative exemplars, and the “negativity” will be demonstrated socially, first in small group settings, then spilling over into large group settings once a threshold is reached (and once that threshold is reached, watch out!). The latter won't happen overnight and it will definitely happen (my opinion) because all societies follow specific evolutionary and ecologic principles (evolutionary biology, Red Queen, Court Jester, evolutionary dynamics, niche construction and adaptive radiation rules (along with others) all apply). The online analytics world is no different.

<TRUTH IN ADVERTISING DEPT>
Some people contacted me about Stephane Hamel's Web Analytics Maturity Model. I knew nothing about it, contacted Stephane, asked to read his full paper (not the shortened version available at http://immeria.net/wamm), did so, talked with him about it, told him my conclusions and take on it and got his permission to share those conclusions and takes here. I also asked Stephane if I could apply his model to some of my work with the goal of creating something with objective metricization that would be predictive in nature and he agreed (if you treat Stephane's axes as clades and consider each node as a specific situation then cladistic analysis tools via Situational Calculus look very promising (asleep yet?)).
</TRUTH IN ADVERTISING DEPT>

A case in point is Stephane Hamel and his Web Analytics Maturity Model (WAMM). Stephane will emerge as an exemplar for several reasons and WAMM is only one of them.

KISS should be part of the overall philosophy.

WAMM is (my opinion) an excellent first step to solving some of the issues recognized in part 1 because it does something psycholinguists know must be done before any problem can be solved; it gives the problem a name. Organizations can place themselves or be placed on a scale of 0-5, Impaired to Addicted (Stephane, did you know that only 1-4 would be considered psychologically healthy?). WAMM helps the online analytics world because it creates a codification, an assessment tool for where an organization is in their online efforts.

I asked Stephane if he thought his tool was a solution to what I identified in part 1. He agreed with me that it wasn't. Its purpose (my interpretation, Stephane agreed) is to create a 2D array, create buckets therein and then explain what goes in each bucket.
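
To show the shape of that idea (and only the shape), here is a toy sketch; the dimension names and bucket descriptions are my own placeholders, not Stephane's actual model.

```python
# A toy sketch of the "2D array with buckets" idea. The dimension names and
# bucket texts are my own placeholders, not Stephane's model; levels run 0-5,
# Impaired to Addicted, as described above.
DIMENSIONS = ["management", "objectives", "scope", "resources", "methodology", "tools"]

buckets = {                      # only two buckets filled in, purely for illustration
    ("tools", 0): "no measurement in place",
    ("tools", 3): "tool deployed, calibrated and trusted for key decisions",
}

def place(organization_scores):
    """Print where an organization sits on each dimension of the matrix."""
    for dim in DIMENSIONS:
        level = min(max(organization_scores.get(dim, 0), 0), 5)
        note = buckets.get((dim, level), "(bucket description goes here)")
        print(f"{dim:12s} level {level}: {note}")

place({"tools": 3, "management": 1})
```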

I asked Stephane if he believed WAMM provided a metricizable solution with universally agreed to objective measures (I told Stephane that I wasn't grasping how WAMM becomes an “x + y = z” type of tool and asked if I'd missed something). Stephane replied “…no, you haven't missed anything, because it is NOT a x+y=z magical/universal formula, that's not the goal. The utmost goal is to enable change, facilitate discussion, and it's not 'black magic'. A formula would imply there is some kind of recipe to success. Just like we can admire Amazon or Google success and could in theory replicate everything they do, you simply can't replicate the brains working there – thus, I think there is a limit to applying a formula (or 'brain power' is a huge randomized value in the formula).”

WAMM and any similar models would be considered observational tools (I explain “observational” tools further down in this post). Most observational tools (I would write “all” but don't have enough data to be convinced) trace their origins (and this is a fascinating study) to surveying; people could walk the land and agree “here is a rise, there is a glen” but it wasn't until surveying tools (the plumb&line, levels, rods&poles, tapes, compass, theodolite, …) came along that territories literally became maps (orienteers can appreciate this easily) that told you “You are here” and gave very precise definitions of where “here” was.

The only problem with observational tools is that the map is not the territory. Yes, large enough maps can help you figure out how to get from “here” to “there”, and how far you can travel (how much your business can successfully change) depends on the size of your map, your confidence in your guide/leader, … . Lots of change means maps have to be very large (ie, very large data fields/sets) and updated regularly (to ensure where you're walking is still where you want to walk). The adage “Here there be dragons” places challenges in a fixed, historical location; it doesn't account for population and migrational dynamics (market movements, audience changes).

Or you need lots of confidence in your leaders.

“…any science first start as art until it's understood and mature enough, no?”

A conclusion of this research is that online analytics is still more art than science, more practitioner than professional (at least in the client/organization's mind). This was demonstrated as a core belief in responses: the ratio of respondents using “practitioner” to those using “professional” was 6:1. This language use truly shocked me. Even among non-AmerEnglish speakers the psycholinguistics of practitioner and professional makes itself known. “Practitioner” is to “professional” as “seeking” is to “doing”, “deed” to “task”, “questing” to “working”, …

“The disconnect between what practitioners do and what businesses need is an embarrassment. There's a widening gulf between [online analytics] and business requirements.”

Online analytics makes use of mathematics (statistics, anyway) and although some people use formulae the results are often not repeatable except in incredibly large frames, hence any surgical work is highly questionable. As the USAF Ammo Troop manual states, “Cluster bombing from B-52s is very, very accurate. The bombs are guaranteed always to hit the ground.”

A challenge for online analysts may be recognizing the current state for what it is, more art than science, and promoting both it and themselves accordingly. They do themselves and those they answer to a disservice if they believe and promote that they're doing “science” while the error rates between methods are recognized (probably non-consciously) as “art” by clients. Current models and methods allow for high degrees of flexibility (read “unaccountable error sources”).

“Modern medical science has no cure for your condition. Fortunately for you, I'm a quack.”

A good metaphor is modern medicine. Without a diagnosis there can be no prognosis. You can attempt a cure but without a prognosis you have no idea if the patient is getting better or not. Most people think a prognosis is what they hear on TV and in the movies. “Doctor, will he live?” “The prognosis is good.” Umm…no. A prognosis is a description of the normal course of something, a prediction based on lots of empirical data seasoned with knowledge of the individual's general health. A prognosis of “most people turn blue then die” coupled with observations of “the skin is becoming a healthy pink and the individual is running a marathon” means the cure has worked and that the prognosis has failed.

Right now the state of online analytics is like the doctor telling the patient “We know you're ill but we don't know what you have.” The patient asks “Is there a cure?” and the doctor responds, “We don't know that either. Until we know what you have we don't know how to treat you…but we're willing to spend lots of money figuring it out.”

This philosophy works for the individual but not for the whole (as recently witnessed by the public outcry over the recently published mammogram studies; few better demonstrations of the difficulty of communicating science to non-scientists have occurred in recent years).

But once the disease is named? Then we have essentially put a box around whatever it is. We know its size, its shape and its limits.

There can be no standardization, no normalization of procedure or protocol, when the patient can shop for opinions until they find the one they want.

The challenge current models and methods face is that they serve the hospitals (vendors), not the doctors (practitioners) nor the patients (clients/organizations). It doesn't matter if all the doctors agree on a single diagnosis, what matters is whether or not there is a single prognosis that will heal the client. In that sense, WA is still much more an art than it is a science, and while we may all attend Hogwarts, our individual levels of wizardry may leave much to be desired.

“…but give us a second and we'll run the data again.”

If you wish to claim the tools of mathematics then you must be willing to subject yourself to mathematical rigor. Currently there can be no version of Karl Popper's falsifiability when the same tool produces different results each time it's used (forget about different tools producing different results. When the same tool produces different results you're standing at the scientific “Abandon Hope All Ye Who Enter Here” gate).
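To make the repeatability complaint concrete, here is a minimal sketch (my illustration, with made-up metric names and numbers, not any vendor's feature): take two exports of the same report from the same tool and check whether each metric agrees within a stated tolerance.

```python
# Two runs of the same report from the same tool; names and numbers are hypothetical.
run_1 = {"visits": 104_250, "unique_visitors": 61_300, "conversions": 1_180}
run_2 = {"visits": 98_975, "unique_visitors": 63_420, "conversions": 1_204}

TOLERANCE = 0.02  # accept at most a 2% relative disagreement

def relative_gap(a: float, b: float) -> float:
    """Relative difference between two readings of the same metric."""
    return abs(a - b) / ((a + b) / 2)

for metric in run_1:
    gap = relative_gap(run_1[metric], run_2[metric])
    verdict = "repeatable" if gap <= TOLERANCE else "NOT repeatable"
    print(f"{metric:16s} gap = {gap:6.2%} -> {verdict}")
```

If any metric fails a check this simple, falsifiability is off the table before the analysis even begins.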

“…gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

All the online tools currently available are “observational” (anthropologists, behavioral ethologists, etiologists, …, rely heavily on such tools). “Observation” is the current online tool sets' origin (going back to the first online analytics implementation at UoH in the early 1990s) and not much has changed. The challenge to observational tools is that they can only become predictive tools when amazingly large numbers are involved. And even then you can only predict generalized mass movement, neither small group nor individual behavior (for either you need what PsyOps calls ITATs — Individualizing Target Acquisition Technologies), with the mass' size determining the upper limit of a prediction's accuracy.
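A toy simulation of that last point (my illustration; the 30% action rate and the population sizes are assumptions): as the observed mass grows, the estimate of the aggregate rate tightens, while the best guess about any single individual never improves beyond what the base rate allows.

```python
import random

random.seed(42)
TRUE_RATE = 0.30  # fraction of the population that performs some action

for n in (100, 10_000, 1_000_000):
    population = [random.random() < TRUE_RATE for _ in range(n)]
    observed_rate = sum(population) / n
    # With only the mass rate in hand, the best individual-level guess is the
    # majority class ("won't act"), which is right about 70% of the time.
    best_individual_accuracy = max(TRUE_RATE, 1 - TRUE_RATE)
    print(f"n={n:>9,}  error in mass estimate: {abs(observed_rate - TRUE_RATE):.4f}  "
          f"best individual guess is right {best_individual_accuracy:.0%} of the time")
```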

At this point we start circling back to part 1's discussions about “accountability” and why the suggestion of it gets more nervous laughter than serious nods. Respondents' language indicates more of a desire to keep WA an art than to make it a science. There is less accountability when things are an art form. But “metrics as an art” is in direct conflict with client goals. And unless a great majority of practitioners wish their industry to mature there is no cure for its current malaise.

“The promise has been unfulfilled since 2003. We were talking about more effective marketing, improved customer retention and all that stuff back then.”

One solution to this is giving the industry time to mature. Right now there is conflict between the art and science paradigms, between Aristophanes' “Let each man exercise the art he knows” and Lee Mockenstrum's “Information is a measure of the reduction of uncertainty.”

Time as a solution has been demonstrated historically, most obviously in our medical metaphor. Village wisdomkeepers gave way to doctors then to university degrees in medicine because the buying public (economic pressure) demanded consistency of care/cures. Eventually things will circle back, again due to economic pressure: enough clients will seek alternatives not provided by institutional medicine and go back to practitioners of alternative medicine, at which point the cycle will begin again. Indeed, people have been openly seeking alternative cures to catastrophic illnesses since the 1960s. Eventually money began escaping institutional medicine's purview and insurers were being forced to pay. The end result was that institutional medicine and insurers started recognizing and accepting alternative medical technologies…provided some certification took place, usually through some university program.

It will be interesting to see how WAMM affects the economics of the online analytics ecology: will practitioners decide institutions lower in the WAMM matrix are too expensive to deal with? If so, such institutions — which require experienced practitioners to survive — will only be able to afford low-quality, low-experience practitioners to help them. This can be likened to a naval gunnery axiom, “The farther one is from a target, either the larger the shell or the better the targeting mechanism”, and companies will opt for larger shells (poorly defined efforts) rather than better targeting mechanisms (experienced practitioners).

“A dominant strand for [online analytics] the past ten fifteen years has been incorporating web information with executive decisions.”

So far no single solution to the concerns raised in this research is apparent (to me). Instead a solution matrix of several components seems most likely to succeed (WAMM is a type of solution matrix; you can excel along any axis but to be successful you need to excel evenly along all axes). So far three matrix elements — time, a lack of leadership and realism — have been identified. Time to mature is culture dependent so the online community as a whole must do the work.
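As a sketch of the “excel evenly along all axes” point (the scores below are hypothetical and the scoring rule is my reading of the idea, not WAMM itself): put each matrix element on a 0-5 scale and let the weakest axis and the spread between axes do the talking.

```python
# Hypothetical scores for the three matrix elements named so far, on a 0-5 scale.
scores = {"time_to_mature": 3, "leadership": 1, "realism": 4}

weakest_axis = min(scores, key=scores.get)
spread = max(scores.values()) - scores[weakest_axis]

print(f"weakest axis: {weakest_axis} ({scores[weakest_axis]}/5)")
print(f"spread across axes: {spread}")
if spread > 1:
    print("Uneven: excelling on one axis doesn't make up for lagging on another.")
```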

“Not enough gets said about the importance of abandoning crap.”

(I believe that quote originated with Ira Glass.)

Realism (being realistic about what should be expected and what can be accomplished) seems obvious, yet it deals with social mores and leads into the “lack of leadership” concern. There can be no “realism” until the social frame accepts “realism” as a standard, until hype and promise are dismissed, and this isn't likely to happen until leaders/exemplars emerge who make it so.

“Yes, I see your point. Please remove my post from your blog”

Progress in any discipline depends on public debate and the criticism of ideas. That recognized, it is unfortunate that the current modes of online analytics public debate and criticism are limited to conferences, private conversations and (as witnessed here) online posts. Conferences (by their nature) only allow for stentorian and HiPPOish debate. Private conversations only allow for senatorial flow. In both cases the community at large doesn't take part.

Blogs and related online venues are an interesting situation. They provide a means for voices to be raised from the crowd. Social mechanics research NextStage has been doing (we're working on a whitepaper) documents how leaders emerge (become senatorial, sometimes stentorian and in some cases HiPPOtic), how they fade, how to create and destroy them (for marketing purposes), (probably most importantly) how a given audience will perceive and respond to a given leader and what an individual can do regarding their own leadership status.

“The WAA is very US focussed.”

I bring this into the discussion because several people commented publicly (both in Part 1 comments and elsewhere) and privately (emails and skypes) that the industry (more true of web than search) suffers from a lack of leadership.

People who enjoy the mantle of leadership yet refuse to lead are not leaders. Recognized names had an opportunity to both join and take leadership in the discussion (I mention some who did at the top of this post). Yet the majority of others either failed to respond, chose to ignore the discussion or — as indicated by the quote opening this section — simply backed away when the discussion was engaged. No explanation, no attempt at writing something else. Considering the traffic, twits, follow-up posts on other blogs (for something I posted, anyway), this was an opportunity for people to step forward. Especially when lots of other people were writing that there was a leadership vacuum.

Leaders/Influencers take different forms (as documented in the previously mentioned social mechanics paper). Two forms are Guide and Responder. Guides are those who are in front. They may know the way (hence are “experts”) and may not. Experts may or may not be trusted depending on how well they can demonstrate their expertise safely to their followers (you learn to trust your guide quickly if you've ever gone walking on Scottish bogs. They demonstrate their knowledge by saying “Don't step there”, you step there and go in over your head at which point they pull you out and say “I said, 'Don't step there'.” A clear, clean, quick demonstration of expertise).

Guides who don't know the way rely heavily on the trust of those following them and can be likened to “chain of command” situations; they are followed because they are trusted and have the moral authority to be followed.

The Guide role is definitely riskier. It's also the more respected one because Guides lead by “being in front of the pack, stepping carefully, being able to read the trail signs hence guiding them safely”. The Responder doesn't lead by being in front. Instead they assume a position “closer to the end, perpetually working at catching up, but always telling the pack where to go, where to look and what to do”. The major problem for Responders is that people don't have much respect for that latter role. They may respect the individual, but most people quickly recognize the role being played, and the lack of respect for the role filters back to the individual.

This plays greatly into any industry's maturation cycle. New school will replace old school and unless our forebears' wisdom is truly sage — evergreen rather than time&place dependent — the emerging schools will seek their own influencers, leaders and guides. This is already being demonstrated in the fractionalizing of the conference market.

One industry leader offered three points in a comment, saw my response and asked that I remove their comment before it went live. I'm going to address two points (the third was narrative and doesn't apply) because I believe the points should be part of the discussion and more so due to their origin.

First, Web Analytics is not a specific activity.

“People need to look beyond the first conclusions that come to mind.”

I responded that nothing I'd researched thus far led me to think of 'Web Analytics' as an 'always do this – always get that' type of activity and offered that while different people use 'Web Analytics' for different purposes, the malaise is quite pervasive. Whether or not 'Web Analytics' includes a host of different activities is irrelevant to the discussion. The analysts' dissatisfaction with their role in the larger business frame, their dissatisfaction with the tools they are asked or choose to use, their dissatisfaction with their 'poor country cousin' position in the chain-of-command, …, are what need to be addressed.

Second, the individual wrote that there was no “right way” to do web analytics.

I both agreed and disagreed with this and explained that there are lots of ways to dig a hole. In the end, the question is 'Did you dig the hole?' More specifically, if one is asked to excavate a foundation hole, dig a grave, plow a field, dig a well, plant tomatoes, …, all involve digging holes, yet each requires different tools (time dependency for completion becomes an issue, I know. You can excavate a foundation hole with a hand trowel. I wouldn't want to, but you could). Stating that 'There is a right way to do it' is a faulty assumption demonstrates a belief that standardization will never apply and that chaos is therefore the rule.

Chaos being the rule is usually indicative of crossing a cultural boundary (such as a western educated individual having to survive in the Bush. None of the socio-cognitive rules apply until the western individual learns the rules of the Bush culture) or crazy-making behavior (from family and group dynamics theory). Culture of any kind is basically a war against chaos and what cultures do is create rules for proper conduct and tool use within their norms.

One could conjecture that the cross-cultural boundary is the analytics-management boundary. So long as management controls that boundary a) there will be no “one-way” to do analytics (the patients will self-diagnose and -prescribe) and b) analytics will never be granted a seat at the grown-ups' table.

“The numbers need a context.”

So there had better be a 'right way to do it', at least as far as delivering results and being understood are concerned, because without that the industry — more accurately, the practitioners — are lost.

“I could tell them 'It is not possible to send in the Armadillos for this particular effort but communication will continue without interruption' and they'd nod and agree.”

Two needs surfaced quickly:

  • recognize what's achievable when (so people aren't set up to fail) and
  • learn how to promote faster adoption of an agenda (without going to Lysistratic extremes, of course. Everybody wants to keep their job).

Accepting increased accountability addresses some issues but not all. Concepts from several sources (some distilled and not in quotes, some stated more elegantly than I could and in quotes) revealed the following additional matrix components:

1) “[online] Analysts need to share the error margins, not the final analysis, of their tools” (a sketch of what this might look like follows this list)
2) stop or at least recognize and honestly report measurement inflation
3) “Trainings need to focus on a proficiency threshold”
4) “…provide a strong evidence of benefit”
5) understand what [a tool] is really reporting
6) “It's better to come at [online analytics] from a business background than the other way around…” (“…but who wants the cut in pay?”)
7) “We should standardize reports because the vendors won't”
8) initiate regular, recognized adaptive testing for higher level practitioners
9) include communication and risk assessment training (sometime when we're at a conference, ask me about the bat&ball question. It's an amazingly simple way to discover one's risk assessment abilities)
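Item 1 lends itself to a concrete, if simplified, illustration. The sketch below attaches the standard normal-approximation interval for a proportion to made-up visit and conversion counts; it is not any vendor's reporting feature, only what “share the error margin” could look like in practice.

```python
import math

# Hypothetical counts; the interval is the plain normal approximation for a proportion.
visits = 48_000
conversions = 1_152

p = conversions / visits
margin = 1.96 * math.sqrt(p * (1 - p) / visits)  # 95% confidence

print(f"conversion rate: {p:.2%} +/- {margin:.2%} (95% confidence)")
```

A report that reads “2.40% +/- 0.14%” invites a very different conversation than a bare “2.4%”, and the habit matters more than the particular formula.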

“We must work to get uncertainty off the table.”

“The problem is uncertainty…”

That's a long component list and most readers will justifiably back away or become overwhelmed and disheartened. Fortunately there are historically proven, overlapping strategies for dealing with the above items collectively rather than individually (a small tally of how they overlap follows the list below).

  • Analysts live with uncertainty, clients fear it, so “…get uncertainty off the table” when presenting reports (this was termed “stop hedging your bets” by some respondents). This single point addresses items 1, 2, 4, 5, 8 and 9 above (hopefully you begin to appreciate that working diligently on any one component suggested here will accrue benefits in several directions, so to speak).
  • Identify the real problem so you can respond to their (management's) problem. This point addresses items 1, 2, 3, 4, 5, 6, 7 and 9.
  • Speak their (management's) language. Items 4, 5, 6, 7 and 9.
  • Learn to communicate the same message many ways without violating the core message (we've isolated eight vectors addressing this and the previous item: urgency, certainty, integrity, language facility, positioning, hope, outcome emphasis (Rene, I'm seeing another tool. Are you?)) Items 3, 4, 5, 6, 7, 8 and 9 are handled here.
  • Be drastic. Rethink and redo from the bottom up if you have to. This point deals with items 1, 2, 4, 5, 8 and 9.
  • Focus on opportunities, not difficulties. This point deals with items 4, 5, 6 and 9.
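The overlap claimed above is easy to tally. The mapping below is copied straight from the list (the stratagem labels are my shorthand); the code only counts coverage.

```python
# Stratagem-to-item coverage, copied from the list above.
stratagems = {
    "get uncertainty off the table": {1, 2, 4, 5, 8, 9},
    "identify the real problem":     {1, 2, 3, 4, 5, 6, 7, 9},
    "speak management's language":   {4, 5, 6, 7, 9},
    "one message, many ways":        {3, 4, 5, 6, 7, 8, 9},
    "be drastic":                    {1, 2, 4, 5, 8, 9},
    "focus on opportunities":        {4, 5, 6, 9},
}

all_items = set(range(1, 10))
for name, covered in stratagems.items():
    print(f"{name:32s} covers {len(covered)} of 9 items")

union = set().union(*stratagems.values())
print("all nine items covered by the six stratagems together:", union == all_items)
```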

Any one of the above will cover several matrix components right out of the gate. The benefit to any of the above stratagems is that implementing any one will cause other stratagems to root over time as well, and thus the shift

  • in what the numbers are about,
  • how they are demonstrated,
  • how to derive actionable meaning from them and
  • how accountability is framed

mentioned at the end of part 1 can be easily (? well, at least more easily) achieved.

<ABOUT THIS RESEARCH>

I wrote a little about how this study was done in part 1. We contacted some people via email and performed various analyses on their responses; others via phone, ditto; others via skype, ditto; and some in face-to-face conversation. All electronic information exchanges were retained and analyzed using a variety of analog and digital tools. Face-to-face conversations were performed with at least one other observer present to check for personal biasing in the resulting analysis.

Like any research, others will need to add their voices and thoughts to the work presented here. I make no claims to its completeness, only that it's as complete as current time and resources allow.

</ABOUT THIS RESEARCH>


The Unfulfilled Promise of Online Analytics, Part 1

Man is the symbol using animal,
Inventor of the negative, separated from his natural condition
By instruments of his own making
Goaded by the spirit of Hierarchy,
With knowledge of his own mortality
And rotten with perfection.
– Kenneth Burke

<CAVEAT LECTOR>
I've had a theory for a while that thermodynamic principles could be used to predict user attitudes and behaviors in finite populations. A population threshold has to be reached before accuracy can be achieved. One prediction of the theory is that once that population threshold has been reached, the largest segment of that user population will be unsatisfied users for any given product or service. You don't need to sample the entire population or even the threshold. Another fallout from the theory is that you can create an exemplar group, study that, and make extremely accurate predictions about the entire population (including segments of the population not represented by the exemplar). I've been studying population dynamics for different industries for a while and had an opportunity to study the online analytics community; some results of that are shared here.

I initiated the study by sending the following (or at least a similar) request to people in the online analytics community.

Howdy,
would you be willing to write me your thoughts on “the unfulfilled promise of web analytics/search”? I'm preparing a column/blog post. Your response will be kept confidential (I'm keeping everybody's responses confidential).

Thanks,
Joseph

My request was intentionally open ended (surprise!). I wanted to know their responses, not what might be predicated by any guidance on my part.

<EDITORIAL COMMENT>
One respondent wrote, “In a survey, this question would be tagged with a 'leader bias'.” I pointed out that as this particular respondent opened their response with “The question was 'why is it that web analytics isn't delivering on its promise'?” they demonstrated that they were quite willing to follow any leader bias that may have existed. The fact that they rephrased my original request is a demonstration that the bias — hence prejudices, acceptances and beliefs — existed long before my request was made.

The question isn't whether or not leader bias exists, the question is “where were respondents willing to be led?” and is typical of my (and NextStage's) use of the Chinese General Solicitation. Knowing what someone responds isn't as actionable as knowing how they respond (you're shocked I'd offer that, yes?).
</EDITORIAL COMMENT>

I also didn't ask for responses from people who had previously demonstrated (via other writings, etc) they had a “company line” or brand to protect.

The title of this post was originally “The Unfulfilled Promise of Web Analytics” and came from a conversation I had in early June '09. I was talking with some folks, one of whom was an SVP of web analytics and marketing for an international marketing company. Unprompted, this individual shared their disillusion with the web analytics field and provided the title phrase.

In all I received some 60 responses, some from people “just doing their job” to others with national and in some cases international reputations to protect. The responses came from everywhere except South America, Africa and East Asia (I hope to cast a better net in the future). I kept the original spellings and grammar of respondents when I quote them (AmerEnglish is not the native language of several of them) because doing so keeps their intent clear over my own.

I've studied the “largest user group will be 'unsatisfied' users” phenomenon across industries and (so far) it holds true. No doubt I'll write a formal research paper (and include an extensive bibliography) about this phenomenon in my copious free time someday.

In the meantime, allow me to share the results specific to the online analytics industry with you.
</CAVEAT LECTOR>

“See this tool? I must know what I'm doing because I use this tool.”

The quote above was made to me during a discussion. It was offered jokingly and I accept it as such. I also know a little about how the mind works and where such statements — even as jokes — come from.

The person making that comment went on to tell me about a recent conference they attended. At some point a bunch of attendees got into a cab to go out to dinner. One of them offered that companies wanted more and more accountability in their analytics.

The universal response was “What? Accountability? It's time to get into another business.”

Such responses are understandable and they can only be made by people at or near the top of their industry. Nobody wants to work and everybody wants to play. The more fun (play) they can have in their job the more they'll enjoy it. Being accountable isn't fun, though.

And such responses must be put next to “When Web Analytics came around, my first thought was 'cool, plenty of data'. Little did I know data would replace the actual business reflection that spun all of this.”

I recognized true schisms in the responses. I'll mention one here because it relates to the first quote above (I'll get to the other schisms further on). It deals with people experiencing non-conscious incongruities between their identity-core and their identity-personality (more colloquially, The Impostor Syndrome, feeling they're frauds. Personality, Identity and Core make up an individual's psychological self-concept. Different disciplines have different terms for these elements). I would have thought that such sentiment would be prevalent at the lower end of the disciplinary spectrum and it wasn't. More than one well recognized individual shared that they feel like the emperor without any clothes when challenged about their conclusions. They often want to respond as is indicated above; “See this tool? I must know what I'm doing because I use this tool.”

The schism here was more psychological and psycho-social than analytical. Did these individuals have confidence in their analysis? Most often, yes. Did they have confidence their analysis would be accepted/have meaning/provide value?

No.

Sensing or believing that one's work is not honored or respected is damning. Such attitudes are psychological death to the vulnerable and emotionally uncomfortable to the strong.

“…the question of accuracy did not shake off easily. To be totally honest, I kept this to myself.”

Online Analytics is a numerical discipline. That's its whole point; here are numbers that prove something. It is not a psycho-social discipline or, as one respondent wrote, “Maybe new fields need to emerge — web psycho-analytics perhaps?”

Such fields may already exist and, if they don't, may yet emerge. What is true about them — if they're primarily left to the current online analytics paradigm — is that they will require large numbers to demonstrate accuracy. The accurate metricizing of any social system (the internet as an information-exchange is such a system) requires threshold numbers for accuracy to be demonstrated when traditional methods are used. For example, a data space of 50,000 people within 2 days is reasonable for traditional analytics methods to prevail (use of conditional change models can shrink these numbers considerably). Typical numerical methods involving smaller populations require either longer timeframes or smaller environments to demonstrate reliable, repeatable business value.
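To give a feel for the scale involved, here is a back-of-the-envelope sketch using the standard sample-size formula for a proportion. The 3% baseline rate and the target margins are assumptions, and this is ordinary statistics, not the conditional change models mentioned above.

```python
import math

def required_n(p: float, margin: float, z: float = 1.96) -> int:
    """Visitors needed so the 95% margin of error on rate p is `margin`."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

baseline_rate = 0.03  # assumed conversion rate
for margin in (0.01, 0.005, 0.001):
    print(f"±{margin:.1%} margin needs ~{required_n(baseline_rate, margin):,} visitors")
```

Halving the margin quadruples the required audience, which is why traditional methods only start to prevail at data spaces in the tens of thousands.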

“Web psycho-analytics” requires different numerical methods and mathematical paradigms from traditional analytics to demonstrate reliable, repeatable business value.

<EDITORIAL COMMENT>
The above is especially true when individualization — the ability to recognize a visitor as neuro-socio-psychologically unique from all other visitors — is to occur.
</EDITORIAL COMMENT>

However, until such methods are widely adopted clients and consultants are left with

  • conflicting numbers from different tools (“…your analytics will not match the vendor's numbers. If you add two or three analytics systems, the numbers will not match each other. This creates situations where it is impossible to reconcile any data sets.”),
  • conflicting numbers from the same tool (“Even using the same tool depending of how it is set-up it can lead to very different numbers.”; a sketch after this list shows how a single setting can do this),
  • tools that are difficult to use (“And those vendors said it was really, really easy! Pfff! Liars!”),
  • conflicting vendor definitions (“Vendors have different standards, meaning that what one vendor considers a visit is not the same as another vendor, thus making comparisons is often misleading.”) and
  • unachievable expectations (“Web Analytics is often sold as the thing that will improve your website results by 100-200%, well that's not true.”)
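As promised in the list above, here is a toy illustration of how a single configuration choice changes the headline number: the same raw hit stream, sessionized under two different inactivity timeouts, yields two different “visit” counts. The timestamps are invented and the timeouts are arbitrary choices, not any vendor's defaults.

```python
# One visitor's hit timestamps, in minutes since the first hit (invented data).
hits = [0, 5, 40, 95, 100, 180, 230, 236]

def count_visits(timestamps, timeout):
    """Count visits by starting a new one whenever the gap exceeds `timeout` minutes."""
    visits = 1
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > timeout:
            visits += 1
    return visits

for timeout in (30, 60):
    print(f"{timeout}-minute timeout: {count_visits(hits, timeout)} visits")
```

Nothing in the data changed; only the definition did, which is why cross-tool and cross-setup comparisons so easily mislead.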

“…talk to other people about what you were trying to accomplish and beg for them to play along.”

Jim Sterne asked me a while back at an eM SF what I'd learned about web analysts. I was onstage at the time. My statements have (I believe) proved cassandric. I offered that there was discontent bordering on malcontent and little to no job satisfaction, and I advised the eM staff to start shifting their conference focus from pure WA to cross-disciplinary offerings. I've also openly stated that I left the WAA because it had all the hallmarks of a society in decline (think Rome, Persia, the USSR, …, all overthrown by invaders from without or within).

<EDITORIAL COMMENT>
This comment was made a few years back and I have no knowledge of the WAA as it exists now.
</EDITORIAL COMMENT>

Where does this discontent among online analysts come from?

One place is unaccepted accuracy (as mentioned above). And if the accuracy is accepted, it isn't acted upon. But there are a lot of hindrances to accuracy that are beyond the analyst's control.

Tagging seems to be a major issue in this area. Tagging was originally considered a solution to the accuracy problem. But the world works in balance — especially when unnatural processes are assembled together. Tagging solved accuracy issues but required more sophistication in the collection and analysis of the data. This sophistication required the involvement of other organizational players, some of whom couldn't or wouldn't play along. The end result is that tagging — a relatively simple concept and method — still does not have an industry standard.

The tagging problem (I believe) would go away if clients — not vendors and not consultants — were invited to find common ground in what they're looking for (more on this later). At present clients have no fixed, pervasive idea of what advantage online analytics provides. It doesn't reduce costs, reducing costs is done by rethinking processes. Instead of rethinking their internal processes companies “…have become very lazy.” When the website isn't producing what they believe it should produce the solution is to get another tool.

But there is no magic bullet. Companies who go from one tool to the next are like psychotherapy patients who stay in therapy with no desire to get well. One respondent confided “So long as I show that I'm doing something I'm not responsible if nothing useful gets done.”

“The unfulfilled promise of web analytics, at the root, is because of people.”

Business politics cannot be ignored when considering the unfulfilled promise of online analytics. “Where one person has all the authority and all the ability to change a site as they see fit: optimization actually really works. A new headline here, a picture of somebody looking into the camera there. Demand increases, everybody's happy. But, for most corporations, this is not the case” was a sentiment stated often if not as eloquently by many respondents.

“If only I had this report, as shown into the vendors slick presentations…”

“All we get in the tools are simple averages and little in terms of correlation. In my opinion this leads me as an analyst to have a high degree of scepticism in the underlying data and hence difficult to really delve into hard core analysis of the data.”

One of the problems that came through the responses was that online analysts often demonstrate a victim mentality. This was greatly the case in web analytics and often in search. When asked directly, “If you know these problems exist in your industry why don't you take steps to solve them?” responses tended to manifest feelings and verbiage of powerlessness. The phrases “We can't do anything to solve this.”, “It's out of our hands.”, “It's beyond our control.” and “We don't have access to those people.” were repeated on both sides of the Atlantic. One respondent offered “…is it in the realm of a little web analyst within a large multinational to actually do that?”

Consulting online analysts are caught between the vendors and the clients they serve. If not a victim mentality, this “serving two masters” creates a psychology that's very close. It also ties back to Why hasn't Marketing caught on as a “Science”? and Matching Marketing and IT Mythologies about analysts and marketers finding common ground.

Consensus points abounded on these research elements. Vendors were viewed

  • as only being interested in selling licenses,
  • as promising more than could be delivered (“the space has been high jacked by vendors who promise a mountain of diamonds without much effort. This is not true.”),
  • as not offering proper or worthwhile optimization tools and methods (“…the important thing is the optimization that is done afterwards.”) or
  • as offering wolves in sheep's clothing — tools that actually produced simplistic results, could not do deep analysis and therefore produced skepticism about the underlying data.

“Analytics and the web are suppose to be transparent and easy to track. However, once you start marketing you find out that is not really the case.”

It is likely that as more and more accountability is demanded from different organizational groups measurement efforts will merge and (perhaps) result in easier corporate buy-in. What may not go down well is that these efforts are more likely to come from marketing than from analytics. Multi-channel marketing will need to learn from online analytics if it is to have value to any business.

“There are simply not enough employees in the companies focusing on adopting web analytics in the organization.”

A challenge that falls out of the above section and the above quote is truer for web analysts than their search-based compatriots in any given organization. Web analysts have fewer champions at the top of the corporate ladders than do marketers and search (which is often not considered an analytics discipline even though the science of search has been documented elsewhere). Marketers have traditionally been closer to the top of the corporate recognition ladder than analysts could ever be. This goes back to the opening statements about work versus play; marketers play, analysts work. This is demonstrated in language if not in physical reality, and one needs to recognize that perception is reality.

Online analytics grew out of (and could very well still be mired in) IT departments. Worse, any kind of analytics smells of accountants (hence accountability), and everybody knows the accountants only come in when the business has failed or is recognizably close to failing. One respondent wrote that they knew their company was in trouble when the bank sent accountants in (evidently the waves of layoffs and learning they were US$20M in the hole weren't warnings enough).

Online analytics is a discipline of numbers. Whenever there's a discipline of numbers it means there's an evidentiary trail for decisions. Consider the political and psycho-economic meaning of this for a moment.

If I have the option of taking advice from someone who goes with their gut then I really can't be held accountable because there are no numbers, therefore from any evidentiary standpoint I'm pretty safe. Should things go sour it's a political issue because there was no real evidence that we should have gone pro or con, we went with our guts, flipped a coin and took what came.

Even better, it was (point finger in some general direction) their gut feeling, we went with it, it flopped, it was their gut not mine, they're out and I'm still good.

But if I go with hard numbers and my decision is in error? Now it's psycho-economic and I'm the idiot and fool because I didn't understand what I was doing. Both I and the group that helped me make the decision are forfeit.

So which is politically safer to place higher on the corporate ladder, to listen to and feel good about? But even at the top of the corporate ladder guts and numbers are in conflict; the average CMO corporate lifespan is about two years, often less.

“The tools promise a lot, and can live up to most of it.”

Most psychotherapists would look at the responses and recognize a love-hate relationship in the making if not already extant.

However, the love-hate relationship doesn't take the form most psychotherapists are familiar with. Most love-hate relationships exist between an individual and some one thing external to that individual (another person, another thing). The love-hate continuum usually takes the form of “I can't live with (it) and I can't live without (it)”.

“…many larger companies buy (what used to be) expensive, full-featured web analytics packages, only to use the tip of the iceberg: the core metrics that should be obvious.”

This isn't the case with analysts. Most of those surveyed liked what they do and believe they add value for their efforts. They love what they do, just not who they do it for or how it is done (“It's not so much the unrealized promise of web analytics, as organizational politics leading to weak and vaguely defined goals in larger organizations.”). This creates a triangulism, and triangulisms are always psychologically deadly.

An example of psychological triangulism is the parent who loves their partner and recognizes their partner has an unhealthy relationship with their common child. Parent 1 is caught between protecting the child from the partner and protecting the partner from the eventual wrath of the child (think Oedipus and Electra). Their loyalties are constantly divided (as mentioned above and especially if no psychological reward manifests itself). The psychological challenge escalates until Parent 1 finds themselves developing their own animosity towards the child. They mistakenly believe if the child were not present the parent-partner relationship would be better.

The end result is that both the child and the parent-partner relationship suffer. Here both the analyst-client relationship and the online analytics industry are suffering.

This tension is manifesting in the industry in the same way it manifests in the therapist's office — fingerpointing. Consider the following responses, some obviously from consultants, others from vendors, and those where the lines blur greatly:

  • “What's sure is that when it comes to Web Analytics and vendors, there's just one number that counts: quarterly sales. When it comes to clients, well they don't really know which number they're looking for but know it's damn hard & expensive to get it. Those in between, the expensive consultants, they're just trying to make a living and fight for peace on earth and accountable decision making.”
  • “Log file data and Web analytics are both sources of information. They are tools, like a hammer. A hammer in the hands of an unskilled, ignorant but self-righteous and overly confident carpenter? That is a scary thought. Well, it is equally as scary to me about Web analytics and log file data. There are plenty of unskilled, ignorant, self-righteous, and overly confident search engine optimization (SEO) professionals, Web analysts, and other marketing people. Even many search engine software engineers are not competent carpenters or architects, but they honestly believe they are. And we are buying what they have to sell.”
  • “I honestly don't think there are unfulfilled promises of web analytics. The companies are doing great and the software is progressing all the time. I love analytics!”
  • “Why is it so hard for people in the web to take actions and optimize based on what the tool reports? One of the reasons of this is that often they don't have a clue of what can be changed or they have an idea which is incorrect.”
  • “If web analytics is under-delivering in any way, it is largely because of most organizations inability to address web analytics at the strategic level rather than a tactical tool to optimize the online marketing channel.”
  • “I think the promise is fulfilled for some and not for others! The difference is the level of sophistication of the user. For some companies, even if they deploy it properly there is more volume and nuance to the data than they can properly grok.”

Is more training the answer? And if so, who do we train?

<EDITORIAL COMMENT>
I'll restate here what I wrote in Learning to Use New Tools; the use of any tool is going to require training across the usage spectrum. The use of new tools definitely so. This training can be self-training and the user should be prepared for scraped knuckles, smashed thumbs and lots of cursing. Self-training is great when the user has lots of time and patience. Otherwise, take a class or let the experts (“consultants”) in.

Do remember Buckminster Fuller's definition — An expert is someone who can spit over a boxcar. I often tell people that the front of my shirt is soaked based on my failed efforts.
</EDITORIAL COMMENT>

More training is the answer only if the training results in well-reasoned and understandable business actions. Tools and trainings are worthless without knowing what one wants to build (“It reminds me of the development of web sites themselves ten years ago – everybody had to have one, still not being absolutely sure what to use them for. Of course the free tools have done their part in this evolution.”).

“The unfulfilled promise of web analytics and search is measuring outcomes instead of outputs.”

Our culture (western, not analytics) has been “objective and evidence driven” for about 400 years. There has been the unstated Field of Dreams-like belief that “If you have the numbers, the truth will come”.

I believe most of the analysts surveyed would consider this a desirable yet inaccurate depiction of the real world. Their tools produce “…beautiful charts that don't tell me what to do to make things different – not better, just different. For that I have to go somewhere else.” None of the analysts surveyed wrote or talked about growth curves, forward discounting, debts, rates of depreciation, technological obsolescence, energy consumption (the company that can correctly respond to market needs faster wins because a) it responded correctly and b) it required less energy to do so).

This greatly surprised me. For all the analysts “in the room”, none talked of analysis. Several responses demonstrated a level of contempt regarding available tools (vendor agnostic) so it's possible analysis per se isn't a subject of high regard in its own community.

The above presents a discomfiting scenario. It demonstrates a severe disconnect between “what should be” and “what is”, something in keeping with C.P. Snow's two cultures yet far more pervasive (in this industry) therefore far more damaging.

If this paper focuses more on psychologies than on analytics it's because the responses dictated it so.

“…you need a team of people who know what all this is about to digest it for the more common mortals.”

“…analysis is a story based upon data put into context.”

The quote starting this section was very telling but not unique. No respondents believed the numbers alone proved anything, not even when presented as part of a strategy. And few respondents seemed to be as comfortable in boardrooms as in spreadsheets.

Yet the need for online analytics to be part of a larger picture, a grander story, was everywhere. Analysts uniformly perceive themselves as

  • not part of a unified business reporting structure,
  • not contributing to the big picture, and
  • lacking the political power or psycho-social maturity (within the organization) to sit at the grownups' table.

“And then there's this vague notion from Mr. Kaushik…”

<EDITORIAL COMMENT>
Let me emphasize that I did not choose the exemplars noted in this paper. Respondents demonstrated exemplar recognition in conversation and written material.
</EDITORIAL COMMENT>

Any pervasive duality will present itself in exemplars (not to be confused with my previous mention of exemplars as part of this research). Here the exemplars (or probably more accurately, “doyas”) are Avinash Kaushik and Eric Peterson with Avinash Kaushik leading the pack in references by almost three to one.

Equally interesting was that the anti-Kaushik camp's complaint wasn't necessarily against Avinash Kaushik himself, it was against his “You, too, can do this” mantra (perceived if not actual). Yet another schism appears; those who need (for whatever reason) analytics to be hard and those who need it to be easy.

Eric Peterson is well known for his “Web Analytics is hard” statement (interesting Reading Virtual Minds Vol. 1: Science and History tie-in; the majority of respondents wrote web analytics or search in lowercase. Very few capitalized online analytic disciplines. Most people capitalize their own discipline. It demonstrates a non-conscious recognition of the value of what they do).

This belief raises the question of whether or not something can be “hard” (meaning “difficult”) if it is properly understood. Educational Psychology, Cognitive Psychology, Sports Medicine, Kinesiology and related disciplines all demonstrate that anything done improperly is hard. Many people give up on mathematics due to poor teachers, poor curriculum, lack of discipline, … To them, math is hard. Aikido is dangerous without proper instructors present.

But is something in and of itself difficult? Only if there's a social or political reason for it to be so. Perhaps the priests wish to keep the mysteries of the divine for themselves. This provides them the opportunity to select who'll enter their ranks, who'll excel, and to whom the teachings will be “difficult”. Only one respondent offered a centralizing attitude (“I'd rather be of the school of thought that web analytics can be easier… if given time and approached in the right way.”).

Here again politics more than psycho-economics rears its head. “I will protect my (place in the) industry by making it difficult for others to succeed in that industry”, hence controlling the industry itself. The problem with this ethos is that eventually a large enough (read “threshold”) group will emerge that takes the industry in some other direction completely.

There are psychologic ramifications to both “hard” and “easy” statements. “Hard” statements set up the majority of participants to fail, or if not to fail then to prepare for failure rather than success. Likewise, the “easy” statement can cause false expectations of success to develop. What is obvious from the responses is that Avinash Kaushik owns the “actionable outcomes” space and neuro- and psycho-linguistic Towards space when it comes to online analytics as a discipline (his was the only work directly quoted in the responses; “Actionable insights and metrics are the uber-goal simply because they drive strategic differentiation and a sustainable competitive advantage.”) and Eric Peterson owns the neuro- and psycho-linguistic AwayFrom space when it comes to online analytics as a discipline.

<EDITORIAL COMMENT>
AwayFrom and Towards are used in their neuro- and psycho-linguistic sense here to describe how people hence the industry is thinking, not necessarily how the industry is moving. See AllBusiness.com's Chris Bjorklund interviews viral marketing expert Joseph Carrabis, founder of NextStage Evolution, Part 4a) and Using Sound and Music on Websites for more on these concepts.
</EDITORIAL COMMENT>

The exemplar messaging is polarizing an industry already divided by a great many other factors. I can say playing guitar is easy and I know I'm never going to be a Segovia or Kottke. Likewise, I recognize I could play better if I practiced more. This “centering of duality” needs to take place in the online analytics world if it is to survive, yet most respondents demonstrated extremum statements (statements with language demonstrating polarity behavior and belief) rather than centering statements (statements with language demonstrating unifying or centering behavior and belief) in their responses.

All things require some degree of practice before facility in their use is obvious. There's also the intersection of lack of correct practice and lack of understanding. This can be mixed into The Impostor Syndrome mentioned earlier (see Reading Virtual Minds Vol. 1: Science and History or I'm the Intersection of Four Statements for more on The Impostor Syndrome). Anything can be difficult if the practitioner doesn't really understand what they're doing, is acting by rote but from neither repetitive action nor repetitive practice of the correct action.

Disciplines may be represented by exemplars and responses to the exemplars are sometimes not the responses to the discipline. Respondents tended to present AwayFrom behaviors regarding Avinash Kaushik and Towards behaviors regarding Eric Peterson in their responses (noting, as offered earlier, that Avinash Kaushik is more in their consciousness than is Eric Peterson, and with basic normalization applied).

These presentations are understandable. Correct or not, the perception is that Avinash Kaushik wants to move the industry away from a “numbers are evidence” basis (one respondent offered “And then there's this vague notion from Mr.Kaushik: give more insights, knowing more about what's going on within your visitors minds & hearts so that you can better service them. Sure, cool, sounds great. Still scratching my head. With surveys you say? Asking them a question? Just 4 questions? Ok so when I get the answers, is this representative? Should it influence my copywriting, my product offering, my pricing scheme?”).

<EDITORIAL COMMENT>
The concept of “knowing more about what's going on within your visitors hearts and minds” is one I and NextStage strongly encourage.

You're shocked, I know. Simply shocked.

I also encourage evidentiary — hence numbers based — decision making practices.
</EDITORIAL COMMENT>

A curiosity of this research is that no exemplars arose on the search side of online analytics. Search respondents noted Avinash Kaushik and none of their own. This could be due to the different lifespans of search and web analytics, the different mentalities and ego structures that arise in these two disciplines or simply that no one in search demonstrates a strong enough personality for a cult-of-personality to develop around them.

“How you measure success depends on how you define success”

There are many ways to interpret the above and all of them point to a lack of standardization. I remember conversations where the definition of success was moving away from online sales to “I got their name” or “they downloaded a paper”. These conversations always intrigued me because they were examples of defining success in terms of the visitor's action, not the desired outcome of the site owner.

This is another example of non-standard definitions plaguing an industry and no one stepping up to lead the way (equally interesting, no respondents mentioned any professional organizations in their communications. This indicates online analytics professional organizations are not serving their membership enough to warrant conscious recognition). Online analytics is quite capable of comparing the numbers between “sales” and “newsletter signups” and the comparison truly is one of apples and oranges; business development versus transactional business, strategic vision versus “I went to the bank today” tactics.

And what if the success definition the consultant is comfortable with, knows how to demonstrate and can defend is one the business client no longer accepts?

“The consensus among industry leaders is that web analytics will be a different entity in five years.”

“Its ultimate purpose is to facilitate action in support of any initiative on the web, so it also is much like plastic.”

Clients are asking for more … something … from their vendors. One respondent stated “Procter&Gamble is moving from 'eyeballs' to 'engagement' but leaving 'engagement' for others to define.”

This is the intersection of Jim Sterne's “how you measure success” mantra with the “gut vs numbers” statements above. The only sure outcome of letting others define your success is that politics will prevail.

A recent Forrester paper indicates a move towards free analytics over for-pay analytics. The report is interesting and perhaps more interesting when viewed outside the online analytics silo.

I point out in Reading Virtual Minds Volume 1: Science and History that growth numbers can seem impressive until you recognize population dynamics, population ecology and evolutionary rescue at work. I used these and similar concepts in From TheFutureOf (13 Mar 09): The Analytics Ecology and From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally to indicate that populations would shift, go near death then bounce back dependent entirely on the existence of (again) threshold populations (I hope readers appreciate how important the threshold population concept is in any socio-environmental dynamic).

Conclusions for Part 1

In the end, it seems the online analytics world is setting itself up to fail. It's as if an architect were to create a negative space then attempt to fill it. Analytics doesn't matter, be it search or web; all business — B2C, B2B, B2whatever and whatever platform you're using — is going to come down to personal relationships: establishing them, maintaining them, personal interaction and commitment (readers who've heard or seen my “10 Must Messages” presentation will recognize those communications here).

Nothing communicated by any respondents indicated that analytics is in and of itself a worthless discipline, only that it is a misunderstood hence misguided discipline in the online world. Yes, all forms of analytics will get you to the door (and in some cases may even open the door) and in the final conclusion it will be the establishment and demonstration of trust that powers commerce, not numbers. Or at least not numbers alone. This indicates a shift

  • in what the numbers are about,
  • how they are demonstrated,
  • how to derive actionable meaning from them and
  • how accountability is framed

is in the offing.

Problems are (in my experience) pretty easy to discover. Solutions, though…

(more on possible solutions in next month's post)

Have you read Reading Virtual Minds Volume I: Science and History? It's a whoppin' good read.



Learning to Use New Tools

Researchers engaged in acts of discovery sometimes have to confront the truly strange and make sense of it. – Henry Gee

<CAVEAT EMPTOR>
This post is near 8,000 words long (I've been working on it for about five months).
</CAVEAT EMPTOR>

I've been following some of the internet chatter re NextStage's Evolution Technology (ET). I'm indebted to the likes of Jacques Warren, Christopher Berry, Michael Notte, all those folks twittering their hearts out and others that I've not encountered yet who've added their voice to the conversation about how ET works and such.

To that end, I'd like my first official The Analytics Ecology post to be about how humans learn to use new tools. It doesn't matter who makes the tools or what the tools do. What I offer is true for tools in general and tools in specific. I doubt everyone will be comfortable with what I write here, especially when I extend the discussion to learning how to use NextStage tools. I hope that readers recognize I write from my understanding and I'm perfectly happy to have that understanding change when new information is presented. I'll also be making use of Buckminster Fuller's “In order to change an existing paradigm you do not struggle to try and change the problematic model. You create a new model and make the old one obsolete.” either literally or figuratively as I go along.

Tool Use Philosophy

Humans first learn to use specific tools to perform certain functions, then they learn to use tool forms to perform those functions. This is why a crescent wrench sometimes gets used as a hammer. Hammers are usually the first tools humans learn to use because they meet one of the first needs we encountered on the evolutionary trail; they put strength over distance into our hands (anybody remember my “The history of technology is the study of placing the most power in the most hands economically” SNCR speech? This is where “technology economically in lots of hands” begins).

We could crack hard shells or each other's heads with a hammer. Very useful, indeed. The reason hammers magnify our strength over distance has to do with things like mass, torque, force and kinetic energy. Understanding how hammers worked came several million years after we started using them. Fortunately, understanding how tools worked wasn't important to our ability to get something done. Modern examples are cellphones. Few people understand how cellphones work (take some time to study up on it. It's fascinating) and not knowing how they work doesn't stop people from making calls.

While the concepts of mass, torque, force and kinetic energy took a long time to develop, understanding the hammer form — that a hammer was really just a weight at the end of a longish handle — took moments (truly, once used, it took moments to figure the “weight at the end of a longish handle” part).

That's what a hammer is, that's its form; a weight at the end of a longish handle.

A crescent wrench also has that basic “weight at the end of a longish handle” form. Hammering is not a crescent wrench's best use but that “weight at the end of a longish handle” form is shared by hammers and crescent wrenches, so when you don't have a hammer you can just as easily crescent wrench that nail into place. It'll work just fine.

Form and Function

<GENDER BIASING NOTIFICATION>
What follows has some fascinating implications about gender because people who use kitchen knives as screwdrivers, etc., are adept at recognizing function from form. This adeptness often splits along gender lines.
</GENDER BIASING NOTIFICATION>

What we're discussing here are the twin concepts of form and function as in “Does form follow function or does function follow form?”

Function following form is why crescent wrenches sometimes serve as hammers but hammers will never serve as crescent wrenches — they don't have the necessary form. However, the reason all hammers have the same basic shape is because form does follow function.

An excellent example of form following function is cutting tools. Everything from a knapped piece of flint to Luke Skywalker's lightsaber has the same form because there are only so many ways a cut can be made with a tool.

Humans are very good at function following form activities because our brains are constantly making comparisons between things. Function following form is why most people can see a Chevy, a Ford or a Maybach and know it's a car.

Form following function isn't something modern humans are particularly good at. Form following function is why few people can be cleaning nettles from their dog's fur and come up with Velcro™. The funny thing is that our brains can be equally adept in either effort and were for much of our early life.

The reason most people are good at one and not the other as they grow older is that modern educational systems societalize rather than educate; their job is to create good citizens and good citizens follow their leaders. Educational systems don't get kudos for teaching students independent thought, they get kudos for keeping kids off the streets and out of jails. Aboriginal societies love form following function thinking because they are (usually) constantly improvising solutions in their environment.

<ANECDOTE>
I once taught high school math when I was in my early twenties. I was hired to teach the remedial math classes. My mandate was “If you can get them to add and subtract two numbers together without screwing up, that'll be fine.”

These kids were society's rejects. They'd pretty much been told they were all stupid, not qualified for any kind of happy life, would probably drop out of school before they'd graduate, and to smile when in police lineups.

After a week of crawling through the textbook I decided they couldn't be as hopeless as they and I had been led to believe.

So I stopped using the preferred textbook and gave them radically different assignments: things like handing them the first part of a sentence and asking for twenty different endings, language problems, logic problems, things like that.

At first there was no interest, then there was some, then there was a lot.

The real breakthrough came when one kid nervously handed in his twenty different endings homework and was walking slowly out of the room at the end of the day. I started reading what he'd written and darn near wet myself laughing. He came back hurriedly. “You think those're funny?”

“Oh, god, Kevin. I can't catch my breath I'm laughing so hard.”

And word spread (as we now say) virally. Kevin had gotten Mr. C to laugh; the race was on. Kids not in my classes started coming up to me to ask if they could solve some problems.

#1) I was giving them some self-worth
#2) I was teaching them to recognize how to solve problems, not just addition and subtraction but when to use either and how to know which would serve them better when.

At this point, in these remedial classes, I started introducing logical calculus problems.

And the students did wonderfully on them. Students who would be lucky if they could add and subtract.

One of my students' fathers was a plumber, and the student shared that he often helped his dad during the summers. He wanted to be a plumber, too, and wasn't sure if he could make it into trade school because he was such a poor student.

So I drew a house on the blackboard, told him where the sinks were, where the bathrooms were and asked him to plumb the house for me, explaining each joint along the way.

And he did. More to the point, he demonstrated a working knowledge of hydrodynamics that most grad engineering students didn't have.

Then I asked him to fill in the pressure values and necessary pipe dimensions along the paths he was laying out.

Then I showed him the equations that created the values he was coming up with intuitively. Then I drew another house with other plumbing requirements and asked him to use the equations to figure out how to plumb the house.

He was hesitant at first and I asked the class to help him.

And they did, and he did, and they plumbed the house.

Using college sophomore engineering calculus. High school sophomores and freshmen who were told they'd never amount to anything because they were…remedial.

By the way, I was fired from that teaching position because I was neither using the preferred text nor following the designated curriculum.

There was definitely something remedial going on at that high school and it wasn't with the students, me thinks.
</ANECDOTE>

Being good at form following function requires people to understand the core “what” of what is being done, and this is where understanding things like mass, torque, force and kinetic energy becomes necessary. Form following function requires people to strip away everything that isn't the one thing that is necessary and determine how to do that one thing better, faster, cheaper, smarter.

For example, one would never (at least I wouldn't) use a pneumatic hammer as a traditional hammer. I mean you could, but you'd have to hold it in just the right way because that hammer “weight at the end of a longish handle” thing isn't too obvious this time out.

Likewise, I don't know of too many people who would use a nail gun as a traditional hammer.

(I helped a friend put in a deck using a nail gun. I tell you, I ain't going back to using a traditional hammer for such things. I wouldn't use a nail gun to hang a picture and that involves knowing which tool to use when.)

But are you aware that the same core principle and the same core, simple, immutable goal is what's being achieved by the nail gun, the pneumatic hammer and a traditional hammer?

The core goal is to drive the nail.

A traditional hammer does that by swinging that weight through an arc. That weight, the swinging and the arc are what's doing the work. Physics calls them mass and torque and the hammer uses them to apply force via kinetic energy to the nail.

The core principle that achieves the core “driving the nail” goal is applying lots of force in the form of kinetic energy.

A pneumatic hammer does this with air pressure building up in a cylinder. The air pressure increases until some threshold is reached, at which point a weight is shot through the cylinder with great force (kinetic energy again) and the power of the increasing air pressure in the cylinder is used to drive the nail via explosive decompression. That's why you hear two loud bangs when a pneumatic hammer works: the first bang is the explosive decompression, the second bang is whatever the weight is hitting.

Nail guns use explosive charges to drive the weight that drives the nail. Same principle of expanding pressure in a closed cylinder driving a weight.

And the nail gun, the pneumatic hammer and the traditional hammer all use that same simple, core principle — applying lots of force in the form of kinetic energy — to get the job done — driving the nail.

But it takes someone understanding the core principle — the transfer of kinetic energy

  • from the hammer swung through an arc,
  • the explosive power of air pressure or
  • the discharge of an explosive charge,

to a target — that allows for different kinds of hammers to be created that make driving nails easier, better, faster, cheaper, smarter.
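For the number-curious, here's a back-of-the-envelope Python sketch of that shared principle. Every value in it is an assumption I picked for illustration (head mass, swing speed, line pressure, piston size), not a specification from any tool maker; the point is only that each design stores and releases energy toward the same end.

<SKETCH>
import math

# Traditional hammer: kinetic energy = 1/2 * m * v^2
head_mass = 0.6          # kg, a mid-weight hammer head (assumed)
head_speed = 7.0         # m/s at impact (assumed typical swing)
hammer_energy = 0.5 * head_mass * head_speed ** 2

# Pneumatic hammer / nail gun: energy ~ pressure * piston area * stroke
pressure = 8e5           # Pa, roughly 8 bar behind the piston (assumed)
piston_diameter = 0.03   # m (assumed)
stroke = 0.08            # m (assumed)
piston_area = math.pi * (piston_diameter / 2) ** 2
driver_energy = pressure * piston_area * stroke

print(f"hammer swing:           ~{hammer_energy:.0f} J per blow")
print(f"pressure-driven piston: ~{driver_energy:.0f} J per shot")
</SKETCH>

Different forms, same core goal: a few tens of joules delivered to the nail.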

Remember Marketing as a Science?

This brings us back to Why hasn't Marketing caught on as a “Science”?

One of science's goals is to create form from function, to create tools that describe actions, to apply mathematics to what's happening so that the rest of us can understand function from form. Chances are (and research with primates indicates) that very few of our ancestors “discovered” hammers but once the tribe saw one individual hammering away everybody was doing it. The same thing is true for knapped flint, crescent wrenches and light sabers.

I believe the same will be true for marketing as a science. Right now most people (at least the ones I interact with) don't know how to apply the required (important point, that, required) sciences to marketing. My belief is that most people in marketing haven't yet been able to grasp the core goals and core concepts necessary to turn marketing into a science.

And I'm willing to be proven incorrect in that.

Problem Solving Philosophy

I suggested at an SF eM conference that people would be using the types of tools NextStage produces in the near future, that such tools would become de rigueur. I also said quite clearly that it didn't really matter if people used NextStage tools or not, it was simply the case that such tools as NextStage produces would be required sooner rather than later.

I hope people remember my saying that. I was sitting on the edge of the stage at my last presentation of the conference when I did.

I also realize that using such tools is going to require people to perform paradigm shifts and that the earliest adopters (not counting the clients NextStage has had since 2001) are going to be those who can see nettles and think “Velcro!”, ie, form from function thinkers.

<ANECDOTE>
My training in form from function (abstract/symbolic) thinking and turning those musings into working tools began with Bill Dykstra when I was about seven years old. If you asked Bill what he did he would tell you he was a handyman and he was. I guess. Kind of. What he did was create tools for the company where he and my dad worked.

And my god, the tools he could make.

I remember Bill, my dad, the company mechanic and I were in the big maintenance garage on a Saturday morning. I picked up a brush used to scrape rust from trailer brakes, held it upside down and said, “Look, dad. A ray gun.” The mechanic said, “I think you've been watching too many Buck Rogers shows”, my dad laughed but Bill…Bill looked at the way I was holding the brush and asked, “You're right. What kind of ray gun is it?”

Bill asked my dad if he could borrow me every once in a while and, when we worked together, he would show me machines and ask me what they did. As I got older the questions turned into “What could they do?” and that led to “This is what we need done. How would you do it?”

He would often ask me “What is really happening here?” It was an invitation to wait, to think, to symbolize what was really needed versus what was being asked for. Bill had a phenomenal skill, the ability to see problems and eliminate everything to reveal the core problem in its purest form, to abstract that pure form from all the noise that blinded others to the solutions inherent in them. Once abstracted, he could mentally synthesize the elements (do we need a hammer or a wrench, a stone or a light saber?) needed to solve the pure problem. Once synthesized, he could create solutions in reality.

And they worked.

<@jdaysyism>
The form versus function concept also deals with Maslowian and Eliadeian tools, that is 1st and 2nd order tool use respectively.

Tools that are designed to do one thing well — a hammer — are Maslowian. They are first order tools. Interestingly enough, all Simple Tools are second order tools because they can do any number of things well and are usually recognizable parts of first order tools. I remember reading about a new simple tool in (I think it was) Popular Science when I was a kid. I can remember the design and especially remember wondering how it qualified as a simple tool. Too many moving parts, I thought.

I'm told that analytic types find my writing frustrating because I don't quickly get to the point; the “A=B”ness of it isn't obvious to them. One of the things demonstrated by this is an “If I can't touch it, it isn't real” metaphysic (a {C,B/e,M} kind of thing). More to the point, this metaphysic demonstrates a desire to use a tool rather than a desire to understand the problem sufficiently to determine if the immediate tool is the best for the task or if another form can be used that will solve the core problem easier, better, faster, cheaper, smarter. No offense to any analysts, but what results from the former type of thinking is 1st order (Maslowian) tool use.

Anyway…

A difference between first and second order tool users is demonstrated by Ronald Cohen's “If you look at something closely that is thought to be well understood, you often find something new and exciting.” (demonstrating 2nd order thinking) and Carrabis' (my) Corollary, “If you look at something you've never encountered before, you often attempt to understand it with irrelevant ideas and fool yourself into thinking you understand it.” (demonstrating 1st order, Maslowian, if all you have is a hammer then everything must be a nail thinking and why I started this post with the Henry Gee quote)

(more about the differences between Maslowian and Eliadeian concepts will show up further in this post, so keep a'reading…)
</@jdaysyism>

Bill taught me to move from real-world problems to abstract to symbolic to synthesis and back to real-world solutions. He taught me that tools weren't solutions in themselves, they were ways to create solutions. Like Huntington, Bill had the ability to occupy and exploit the space between researchers and end-users. And like Huntington, Bill's ideas carried more influence than most people of his time could imagine.

The natural abstraction from tool use to tool creation involves those skills Bill Dykstra taught me, the ability to transcend from “what is it doing?” to “what needs to be done?” to move fluidly along the form-function axis.
</ANECDOTE>

Let's start applying this cultural paradigm shift, this tool use philosophy, to solve some problems hypothetically. Perhaps coming up with some hypothetical solutions will help us discover what kinds of tools we need to make those abstractions into reality.

Problem: Visitors aren't converting

<And deep thanks to Stephane Hamel for his contribution here>
Let's start with some traditional solution paths. Based on our training, we'd investigate the following:

Web Analyst

1) Conversion goal?
2) Price Point?
3) Incentives?
4) Perceived Value?
5) Risks?
6) Workflow/Process? (for interferences)
7) Campaign?
8) Traffic qualifications?

Marketer

1) What are the visitor characteristics?
2) What's the campaign?
3) Is the traffic qualified?

In either case, a Define, Measure, Analyze, Improve, Control (Six Sigma's DMAIC) methodology would be used to determine solutions.

The traditional approach at this point wants to determine things like price point, incentives and so on. Fair enough. Do you answer from the business' or user's perspective?

Business perspective: Look at the market, the competition, do focus groups, costs & profitability to determine the price at which we should sell.

User perspective: Price is the user's own perception of gained value and added benefit versus risk and cost (tangible or not). This price point is specific to each individual…

Synthesis: If there were a way to determine the price point based on each user's willingness to pay a specific amount, we could optimize profitability and satisfy users at the same time… as long as they don't share the price they paid (a bit like airplane tickets, where each seat is priced based on so many factors!)

The challenge lies in how easy it is for everyone to share the info about the price they paid for something. In the early 1900s every price was the result of a 1 on 1 discussion. The goal of the web is to return to that 1-1 discussion.

Possible solution: If the user's perceived value is truly high, the price point might not be such a huge factor.

To sum up, there were initially “one-on-one” negotiation skills (last century), then large-scale pricing based on guts and “business management” best practices; now analytics is playing a bigger role.

Could the next step be going back to one-on-one based on behaviour and predictive analytics?

Good question, that.
</And deep thanks to Stephane Hamel for his contribution here>
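As a purely hypothetical illustration of that one-to-one pricing idea, here is a minimal Python sketch. The willingness-to-pay numbers, the margin floor and the discount factor are all invented for this post; this is not NextStage's method, Stephane's method or anyone's production pricing engine, just the shape such a calculation could take.

<SKETCH>
def personal_price(willingness_to_pay: float, cost: float, list_price: float) -> float:
    """Quote a per-visitor price bounded by cost below and list price above."""
    floor = cost * 1.10               # never sell below a 10% margin (assumed policy)
    quote = 0.9 * willingness_to_pay  # leave the visitor some perceived surplus (assumed)
    return round(min(max(quote, floor), list_price), 2)

# hypothetical visitors with behaviourally estimated willingness to pay
visitors = {"visitor_a": 42.00, "visitor_b": 18.50, "visitor_c": 95.00}
for name, wtp in visitors.items():
    print(name, personal_price(wtp, cost=20.00, list_price=60.00))
</SKETCH>

The hard part, as noted above, isn't the arithmetic; it's estimating willingness to pay per visitor and living with the fact that visitors can compare notes.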

The above is a wonderful demonstration of problem solving within an existing paradigm. Remember “An end-user tool should be extremely easy (ie, psychonomically intuitive or “requires no training” based on a given cultural paradigm) to use” from our End User Tool Laws?

Also, the solution path outlined above pretty much follows what most people can recognize as a logical process. The math involved is (in my opinion) elementary. There's nothing in the above that really requires more than a standard bachelor's/baccalaureate degree to understand and work through.

Further, the solution path above should be or would be intuitively obvious to most people regardless of what their bachelor's/baccalaureate degree was in. The roots of the process actually go back to the Renaissance, to when judicial astrology was turning into observational astronomy. If you know the history of statistics — and I believe traditional WA uses statistical methods a good deal — you know that the e in the basic

y = b0 + b1x1 + e

(the regression form of the basic two-sample t-test) comes from the error term, the margin of error that was originally such a pressing question in observational astronomy.
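If the notation feels abstract, here is a minimal numpy-only Python sketch of that same equation, with made-up data: code the two samples as a 0/1 indicator, fit y = b0 + b1x1 + e by least squares, and b1 falls out as the difference between the two group means, e being the leftover error.

<SKETCH>
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=50)   # e.g. order values under design A (made up)
group_b = rng.normal(11.5, 2.0, size=50)   # e.g. order values under design B (made up)

y = np.concatenate([group_a, group_b])
x1 = np.concatenate([np.zeros(50), np.ones(50)])   # 0 = group A, 1 = group B
X = np.column_stack([np.ones_like(x1), x1])        # intercept plus indicator

(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - (b0 + b1 * x1)                             # the error term

print(f"b1 = {b1:.3f}")
print(f"mean(B) - mean(A) = {group_b.mean() - group_a.mean():.3f}")  # same number
print(f"spread of e = {e.std(ddof=2):.3f}")
</SKETCH>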

But the question here is “What happens when one has to use new tools that aren't based on their current cultural paradigm?” (this is the “cultural shift” part from the End User Tool Laws)

For example, nail guns and pneumatic hammers can only be used by people who've flushed toilets.

Flushed toilets? Yes, because flush toilets (usually) require plumbing, plumbing requires a knowledge of hydrodynamics (by that name or as “water pressure”), hydrodynamics requires a knowledge of PV=nRT (by that equation or as “when you put your thumb over the end of the hose the water squirts out faster”) and that, that PV=nRT thing, is what makes both nail guns and pneumatic hammers work as they do. The “P” in PV=nRT is “Pressure”, the “V” is “Volume”, and what it basically means is that, at a given temperature, their product stays constant, so squeezing the Volume down drives the Pressure up. If remedial math high school students can understand this, so can you.

So when the nail gun fires or the pneumatic hammer hammers, the sudden increase of Pressure in the small cylinder (the Volume) must cause an explosive release of force.
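Here's that relationship as a tiny Python sketch, with illustrative numbers only (the amount of gas, the temperature and the cylinder volumes are all assumptions, not measurements from any actual tool):

<SKETCH>
R = 8.314        # J/(mol*K), the gas constant
n = 0.01         # mol of air in the cylinder (assumed)
T = 293.0        # K, room temperature (assumed)

def pressure(volume_m3: float) -> float:
    """P = nRT / V: same gas, same temperature, smaller volume means higher pressure."""
    return n * R * T / volume_m3

v_open = 1.0e-4        # m^3, cylinder before the stroke (assumed)
v_compressed = 1.0e-5  # m^3, one tenth the volume

print(f"open:       {pressure(v_open):,.0f} Pa")
print(f"compressed: {pressure(v_compressed):,.0f} Pa  (ten times the pressure)")
</SKETCH>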

Bring a nail gun or pneumatic hammer to someone who's never experienced any kind of (relatively) modern technology and you have to be prepared to do lots of training to make sure they can use these new tools without hurting themselves or others.

So what about people who feel they need to understand before they can use the NextStage tool set? (and I think that's a fair request, by the way).

Do you want to know enough to make a cellphone call or do you want to know enough to make a cell phone?

I created the original version of the flash below back in Jun '02 for an academic presentation. The number of sciences involved in NS' ET hasn't grown since then. And I'll ask you to forgive the next question; How many of these sciences and fields of study are part of your present cultural paradigm, something you have enough knowledge of to be able to hold a conversation in?

There are currently elements from 120 sub-disciplines in Evolution Technology. We used 120 sub-disciplines because we borrowed a lesson from modern astronomy; use many lenses (radio, gamma, optic, infrared, ultraviolet, xray, …) to look at something because by so doing you'll have a better understanding of what's really out there. And for much the same reasons that the much more advanced Mayan calendar was never adopted by the Spanish and hence greater Europe, the cultural differences that created ET and WA will need effort and energy to bridge.

And I believe it would be both foolish and naive of me to think anyone who wants to use the NextStage tool set to make a call would want to spend twenty or more years studying these various fields.

Therefore it's imperative that all new tools be extremely simple to use and provide immediately useful information to whomever wants to use them, and then that the useful information they provide lead the users (so inclined) to think of how to use the tool differently (ie, moving users from function from form to form from function).

Philosophy Change #1

NextStage tools don't really concern themselves with clickthroughs, bouncerates, such and so on. NextStage tools are much more concerned with why visitors clickthrough or not, why visitors bounce or not.

For example, this chart (our Purchase-ExchangeStop Report) utilizes concepts from social- and cognitive-psychology and personal mythologies studies to make determinations of which of eight factors — Imagination, Usage, Workability, Experience, Using, Need, Pleasure, Pain — are most important when someone is on a site and making a purchase decision. The red dot indicates how important each factor is (the higher the more important), the yellow indicates how well the site is answering the visitor's concern and the blue indicates how much of the visitor's own neural processes are involved in making the decision.

What we learn from this chart is that Experience, Using and Need are what's most on the visitor's mind when they're making their decision.

  • Experience – have they used this product/service or something similar before?
  • Using – are they using this product/service or something similar right now?
  • Need – do they recognize a problem this product/service addresses?

What's interesting is that while those elements are most in their mind, they don't weigh heavily in the actual decision process. The strongest decision factors are Imagination, Usage and Workability.

  • Imagination – can they imagine themselves using this product/service?
  • Usage – can they understand how this product/service is used to solve a known problem?
  • Workability – can they figure out how to make this product/service work in their current situation?

Why does this difference between visitors' internal states exist? Because (looking at the yellow bars on the chart) the site is emphasizing the latter elements even though the visitors are internally emphasizing the former elements.

So the end result from this chart is that if the site is redesigned to emphasize Experience, Using and Need — where the visitors are already putting the bulk of their neural effort — we'd make more sales.
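To make the reading of such a chart concrete, here is a hypothetical Python sketch. The scores are invented and the arithmetic is mine, not the Purchase-ExchangeStop calculation itself; it only shows the kind of gap-ranking a reader would do by eye: compare how much visitors care about each factor with how much the site answers it, then redesign toward the biggest gaps.

<SKETCH>
visitor_importance = {  # 0..1, higher = more on the visitor's mind (invented scores)
    "Imagination": 0.35, "Usage": 0.30, "Workability": 0.40, "Experience": 0.85,
    "Using": 0.80, "Need": 0.75, "Pleasure": 0.25, "Pain": 0.20,
}
site_emphasis = {       # 0..1, higher = the site answers this factor well (invented scores)
    "Imagination": 0.80, "Usage": 0.75, "Workability": 0.70, "Experience": 0.30,
    "Using": 0.25, "Need": 0.35, "Pleasure": 0.40, "Pain": 0.30,
}

gaps = {f: visitor_importance[f] - site_emphasis[f] for f in visitor_importance}
for factor, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    note = "redesign toward this" if gap > 0.2 else ""
    print(f"{factor:<12} gap {gap:+.2f}  {note}")
</SKETCH>

With these invented numbers Experience, Using and Need float to the top, which is exactly the “emphasize where visitors already put their neural effort” conclusion described above.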

Now there is a tool a NextStageologist would see and intuitively understand based on their cultural paradigm.

But not everybody is a NextStageologist nor do they want to be nor should they be.

So for the rest of the world we created The NextStage Gauge. Is your site in the red? Then you're in trouble. Yellow? Then you're okay and could use some work. Green? Don't do anything. Under the chart the NextStage Gauge lists one to three suggestions for improving your site. Simple as that. No need to know the sciences NextStage is based on, no need to get under the hood. Just drive the nail, make the call and flush the toilet.

<ASIDE>

The NextStage Gauge has been around and publicly available for a while. I mentioned it in For Angie and Matt, and The Noisy Data Finale on 29 Jan '07 and probably earlier.

</ASIDE>

Reading through people's comments about NextStage's tools, I see form confusion. People seem to be asking “Which end is the weight and which end is the handle?” or “How do I make a cut with this?” These questions arise because most people are good at function following form; they see a chart, recognize a visual representation of values and say “A hammer! I know how to use hammers. I've seen lots of hammers and I know how to crack skulls and nuts.” The fact that most people are better at function following form than form following function is why we take things like our Purchase-ExchangeStop report and turn it into The NextStage Gauge.

I mention all this because I believe that understanding NextStage's tools beyond a simple “dial a number, make a call” use requires users to put aside their current concepts of tools. Currently what I'm witnessing is the Maslowian “If all you have is a hammer, everything looks like a nail” and people who've read NextStage's Principles might remember that tools can be Maslowian or Eliadeian.

People interested in getting under the hood will need a different philosophical perspective to understand the information these reports are providing.

Now, for those who want to know hydrodynamics, why cell phones are called cellphones and not little, tiny phones and understand kinetic theory…

For the most part humans think before they act. This “thinking” can be conscious or non-conscious. Most often it's a mix of both. Information comes in, the non-conscious does lots of work, makes a decision about what actions to perform then alerts the conscious mind of its decision. Sometimes the conscious and non-conscious minds disagree. Usually the conscious mind rationalizes things until it agrees with the non-conscious and whatever the non-conscious told the conscious to do is what the human ends up doing.

Sometimes the conscious can discuss things with the non-conscious and the non-conscious will cede. Most people don't have the training to do this on a regular basis.

Sometimes the conscious will bully the non-conscious into shutting up. Do this often enough and the human begins to demonstrate psychotic behaviors.

So let's go with “the non-conscious usually tells the conscious what to do”, and also throw in that the non-conscious sends its instructions to the conscious long before (in neurosynaptic time) the conscious mind instructs the human to act. Technically, these non-conscious instructions are known as preparation sets. Everybody has them, everybody does them. People with lots of training (think Zen Masters and the like) know how to shut down their conscious minds/unify the conscious and non-conscious elements/stand on the bridge between the two worlds and not everybody has the time or patience to go through that kind of training.

Pity.

Anyway, in a sense NextStage tools eavesdrop on that conversation people don't realize they're having with themselves. NextStage tools focus on why people do things, not what they did, because knowing why empowers one to predict with great accuracy what people will do in the future.

So while I'm going to agree (more or less) with the solution path described earlier, I'm just going to note that it walks neither the marketing bridge nor the science bridge.

Why is that bridge important? Because it's at the heart (or mind, whatever) of both 1-1 marketing and cultural paradigm shifts. One-to-one marketing is relationship marketing and I'm probably using the term differently than others because I'm persnickety (and why marketing isn't considered one of the social sciences I'll never know). My training is that the first relationship you must market is between yourself and the person you're with.

The great thing is, a relationship exists whether you recognize it or not. The Christian New Testament has a passage “Whenever two or more are gathered together…” and whatever else is implied, what is recognized by social anthropologists is that both a social contract and relationship exists in that passage. How small can these relationships be? Have you ever talked to yourself, out loud or otherwise? Have you ever had a discussion with someone who wasn't there, perhaps telling off a co-worker while you're alone in your car driving home?

Then you know the magic number of persons required for a relationship to exist is two. Even when the relationship is with yourself, the magic number is two. There's you and — you guessed it — your non-conscious self.

By the way, those conversations where you tell off your co-worker after the fact? Those are minor examples of those psychotic episodes I mentioned earlier. It's when your non-conscious and conscious minds are working at reconciling each other. This is probably why, when I'm upset or bothered, I don't hold it in or keep it back. This is also probably why I so rarely get bothered or upset.

(Susan may tell you otherwise, of course)

And this first recognizable relationship, the one between you and yourself before you can have one with anybody else, including all those website visitors, target market or whatever, must be recognized and understood before cultural paradigm shifts can occur. Unless you're willing to sit down and ask yourself, “Why am I doing things this way again?” and “Is there a better way to do this than is immediately obvious to me?” — two very difficult questions for most Maslowian thinkers — then the game is pretty much lost before it begins.

<PLUG>

These concepts are covered in agonizing detail in Reading Virtual Minds Volume I: Science and History

</PLUG>

More to the point of my posts, unless you're willing to go through them and see where they take you, you're probably not a good candidate for deeper explorations of the NextStage tools other than the level to which you've explored cellphone technology. You can still be a client and still use them (much as you know how to use cellphones, hammers and flush toilets) and we're happy to have you as such, but using them to create other tools? Sorry, ain't gonna happen. And why should it? That's not what you do for a living. That's NextStage Evolution's job. We put in the towers so you can make the calls.

Achieving Your Goals versus Understanding the Principle

So the first philosophical change (and bringing this back to the web example above) is an important one to recognize — the visitor achieved some goal perfectly, cleanly and neatly. They explored, they entered into a relationship with you via the website, they had a conversation (either with themselves or someone physically close to them while they were navigating or with you as they drove home in the car). They performed some combination of conscious and non-conscious activities that caused the observed end result. Whatever their goal was when they arrived on the site, they achieved some goal when they left the site. Lots of times the goal achieved when they leave the site is a goal you — designer, owner, analyst — gave them.

Really, honestly and for true. No kidding, that.

And I'll bet dollars to donuts they achieved that exit goal exactly as you instructed them to via your design so on and et cetera.

That includes all those non-conversions, abandoned carts, whatever you want to call them.

Really, honestly and for true. No kidding, that, either.

Exactly as you instructed them to

Analyze the messaging of a site both as a whole and as individual pages (NextStage prefers the terms “presentations” and “experiences” because we concern ourselves with the user's experience of the information presented) and you can quickly learn if your home/landing page is getting the correct message across.

This one piece of information — that your site/page might be transmitting a less than optimal message — is one of those philosophical, cultural paradigm shift, change things.

<ASIDE>
Let me provide you with a more concrete example of this (and we already know from the above that Flash ain't my thing, right?).

Below are two flashes: one was created internally to provide a designer with a template to work from, the other is what the designer turned it into. We provided the designer with instructions along with the template, and the instructions were traditionally simple: See this? Copy it into your version of Flash. (We routinely provide rudimentary flashes for designers to professionalize. You can see one example that turned into a Signature system in Canada and Asia at BrainScienceConsulting.)

First:

Second (and this one has sound in it so you may want to adjust your volume now):

I will be among the first to state that the designer's version has higher production values (they used better tools, they knew how to use Flash, …). However, the production values are secondary to whether or not people respond in a desired fashion. The 100 or so people I showed both flashes to (without telling them who created either) consistently preferred the template, not the designer's finished product.

Why? Because the template affected them in a recognizable way and they demonstrated that it was affecting them.

How did they demonstrate that the template was affecting them and the professional designer's wasn't? By repeatedly playing the template, not the professionally designed one, and by the language they used to describe the template versus the professional version (such as “I don't know, I just liked it more”, “Something about it works and the other doesn't”, “It moved me”, …).

Quite a revelation, that. A/B testing with not a lot of effort. Show them two flashes side by side and just watch what happens (a lot like what our tracking tool does). The one they repeatedly play is the one that's causing the conscious and non-conscious responses that are being signaled by the repeated play and verbal behaviors.
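A minimal Python sketch of that watch-what-happens test, with invented tallies (this is not how our tracking tool is implemented, only the logic a reader could apply with a logfile and an afternoon):

<SKETCH>
from math import sqrt

replays = {"template": 137, "designer_version": 54}   # hypothetical replay counts
total = sum(replays.values())
p_template = replays["template"] / total

# crude check that the preference isn't a coin flip (normal approximation to a binomial)
se = sqrt(0.5 * 0.5 / total)
z = (p_template - 0.5) / se
print(f"template share of replays: {p_template:.0%}, z = {z:.1f}")
# a z score well above 2 suggests the preference is real, not noise
</SKETCH>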

This is a demonstration of design based on design and design based on the sciences behind NextStage. As one of our first clients told his designer, “I personally think this web site looks like sh?t, and believe it or not, I don't care about that, nor do I care about any design theory. What I do care about is that everybody who visits this site thinks it looks like sh?t, too. The information I have tells me that it needs to have less of … and more of …. Either you make that happen, or I'll find someone who can. This isn't a showcase for your alleged talent, it's a business tool.”

It truly doesn't matter how professionally designed your site is, if it's not meeting the conscious and non-conscious needs of your audience, give it up and walk away. You're spending money for nothing.

Again, I'm not questioning the difference in production values, I question whether or not the higher production values without science behind them necessarily get the job done.
</ASIDE>

Going to the Core

Earlier we defined Problem: Visitors aren't converting and now perhaps we can offer that visitors are doing exactly what they're being instructed to do, therefore the problem state is actually a success state; if someone is being instructed not to convert and they don't convert then the instruction is successful, correct?

It may not be what you want — for that matter, it may not be what the visitors want — and it's still successful.

This indicates we need a new understanding of “success”.

<Deep thanks to Susan for the aides provided herein>
A horse may respond to an aide inappropriately (you may signal a canter and it goes into a trot) and you don't shoot the horse. It successfully responded to your aide, simply not as you wanted it to.

Anybody familiar with equestrian training knows the first thing you do is make sure you're giving the horse the correct aide. You are? And the horse is still responding inappropriately?

You provide the aide more obviously, a little more oomph with your legs and back, for example. Did the horse respond appropriately this time?

Still no. Are you sure the horse knows the proper response to the aide? This is the crucial question and in equestrian training, it's actually called “questions and answers”. You're asking the horse a question and it's giving you an answer. Most horses, if they don't know the right answer, will start giving you every answer they know and hope they get the right one.

Now you have to stop everything you're doing because the horse is starting to go nuts providing answers. You — if you're a good equestrian — need to calm the horse down and teach it the right answer to that aide.

At this point good equestrians may also check for physical reasons causing the horse to answer inappropriately. Everything okay physically? And the horse still not responding appropriately?

Then it's time to train the horse how to respond to the aide. Or remind the horse if it does know the correct response and is just refusing to give it (remind me to tell you about my ride on The WidowMaker sometime). Much like a dog who knows how to “sit” and doesn't, you need to put your hand on its rump, clearly and firmly say “Sit” while pushing down and the dog learns or relearns the command.

With horses and visitors to websites, you give the aide in such a way that they must respond appropriately.

And did you notice how seamlessly I integrated websites and visitors into this discussion? What you learn from working with horses can be applied to marketing material 1-1 and loses nothing in the translation.

<@jdaysyism>
Understanding the principles of equestrian training and applying them to marketing is also an example of why I emphasize understanding the theory in order to create applications and is a hallmark of 2nd order, Eliadeian thinking. The ability to take knowledge, training, experience, etc., from one area and apply it to a completely different area is an example of 2nd order tool use.
</@jdaysyism>

Here's the rest of the equestrian training, the part most inexperienced riders don't like: nine times out of ten, when the horse answers inappropriately, it's the rider's fault. The rider isn't asking the question correctly.

The same is true with website visitors who don't convert. It's the website's fault, not the visitors'. Visitors are responding correctly; the site's simply not giving them the correct aide.

And as with horses, so with website visitors; you can't fight a 1600# animal. You're going to lose. You can't fight the 97.6% of your website visitors who don't convert. You're going to lose.

And always end on a positive note. With horses as with visitors; the last thing you do when you're training a horse is end with something it knows how to do, thus providing a “success” based reward (at the end of the training), a success even if nothing else was successful. On a website, if you're going to pop anything up when they're closing down their browser or otherwise ending their session, let it be “Thanks for coming to our site. We hope to see you again soon”. No questionnaires, no forms, nothing else. You've let them know their time is valuable to you and placed a marker in their memory that will probably bring them back.
</Deep thanks to Susan for the aides provided herein>

Knowing what tool to use when, Knowing what core problem you're solving

I wouldn't use a nail gun to hang a picture on a wall and I won't use a traditional hammer to put in a deck. Similarly, NextStage tools are not good at traditional WA functions although we make use of traditional WA results in some of our calculations.

<ASIDE>
The fact that different traditional WA tools come up with different values for the same function has pretty much led us to create our own WA tools (we don't offer them to others) so that we'd always know how the values we're using in our calculations are coming about and can have high confidence in the accuracy therein (Note: not that the values are accurate, only that the values are accurate within the paradigm that generates them. This is true of all tools even though it is mentioned rarely by tool users and manufacturers).

And here we come to an interesting cultural datum; NextStage could not gain recognition until traditional WA and similar tools had run their course and entered decline. The reason is simple enough and well established in the philosophy of science; Challenging orthodoxy is difficult because most practitioners are educated and work within current paradigms and have little career incentive to examine unconventional ideas. The decline of WA and similar tools is forcing practitioners to examine unconventional tools, hence the flourish of interest in what we've been doing since 2001.

That decline is something I've been mentioning to people for years. Sorry, folks. Worldwide research I've recently done querying internationally recognized WA consultants regarding “the unfulfilled promise of web analytics” strongly indicates this decline is the case. The results of that research will show up in a future Analytics Ecology post.
</ASIDE>

So we're back to Problem: Visitors aren't converting. Stephane Hamel does an excellent job of detailing a traditional tool solution to the problem and I have high confidence that Stephane's method will produce useful results.

Let me take you through a very brief (I promise) alternative derivation of why visitors aren't converting:

A certain largish company had a mini-site that consisted of four pages; 1) Landing, 2) Funneling, 3) Completion Event (closure, transaction, the visitor gives you something you want) and 4) Thankyou. Traffic on 1) Landing and 2) Funneling was good and fairly even and died after page 3) Completion Event (note that the client didn't see anything odd about this).

What's happening?

My first thought was to determine if the same visitor was sitting at the computer through the entire browsing session. Drop-offs such as the one shown here often occur because different if not conflicting {C,B/e,M}s are interacting with the same information. One of the reports we developed early on was a measure of how many different visitors were using the same computer. It originated early in the days of NextStage, back when it was quite common for there to be a single workstation that was used by several different people. A client wanted to know how many different people were browsing their site because such information was a good indication of how much revenue would result from contacting the group browsing.

That report showed that the number of real humans using the computer during these browsing sessions was 1:1 human:computer on the 1) Landing and 2) Funneling pages of the sessions, was almost double on 3) Completion Event (what NextStage calls the “FailurePage” in some of our other reports because that's the page that is actually failing to complete the transaction) and went pretty much back to 1:1 on the Thankyou page. Looking at the {C,B/e,M}s that showed up on the 3) Completion Event page and weren't present before and after, I noted that the majority of them demonstrated female neurologies while the 1) Landing, 2) Funneling and 4) Thankyou pages were dominated by male neurologies.

So far, and knowing nothing about the content of the pages, it's obvious that males ask someone else to look at the website on the 3) Completion Event page and that these others demonstrate negative biasing female neurologies.

The suggestions were to add some positive biasing female design factors to the 2) Funneling through 4) Thankyou pages. Completions increased. And do remember, the client was happy with things as they were (8%). Getting them to just under 22% only involved a few design changes so the cost was minimal.

Numbers and numbers and numbers

Look carefully and you'll see that the number of “visitors” goes down on the 2) Funneling through 4) Thankyou pages once ET's suggestions are taken into account. This decrease is due to the influence of the positive-biasing female design factors driving away the predominantly male neurologies. Note, however, that actual completions increase on the Thankyou page as the other, predominantly female {C,B/e,M}s brought into the browsing session are positively reinforced (ie, “Sure, dear-partner-o'-mine, let's get that”). By the way, biologic gender isn't a factor here; neurologic gender (are they thinking male or female thoughts?) is.
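For readers who want the shape of that analysis in code, here is a hypothetical Python recreation. The session counts are invented; the point is the test itself: the page where the humans-to-computers ratio jumps well past 1:1 is the likely FailurePage, the place where extra people are being pulled into the decision.

<SKETCH>
page_stats = {   # page -> (distinct computers, estimated distinct humans); all invented
    "1) Landing":          (1000, 1020),
    "2) Funneling":        (940,   965),
    "3) Completion Event": (900,  1710),
    "4) Thankyou":         (80,     86),
}

for page, (computers, humans) in page_stats.items():
    ratio = humans / computers
    note = "  <-- extra people pulled into the decision" if ratio > 1.5 else ""
    print(f"{page:<20} humans per computer: {ratio:.2f}{note}")
</SKETCH>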

Knowing what tool to use when, Knowing what core problem you're solving (Part 2)

I offer that adding another tool to the traditional toolbox, something like either The NextStage Gauge or the Purchase-ExchangeStop Report, would provide something directly more actionable. This touches on a neuro- and psycho-linguistic principle: if what you're doing isn't working, try something else. That is, if your traditional methods have maxed out their ROI potential, perhaps some new tools (ours or others) are in order.

Or you can be like a horse and try everything else in the hopes of getting something right.

But then you'd better hope your client is a good equestrian, which means they'll check to make sure there's nothing physically wrong with you — that your saddle isn't pinching your withers (just sounds painful, doesn't it?), that your bit and everything else is fitted properly — before they say “…or I'll find someone who can. This isn't a showcase for your alleged talent, it's a business tool.”

Suggested Readings

Putting technology in its place

The possibility of impossible cultures

Setting standards

A tool, not a tyrant

Why I Am Not a Property Dualist

The economics of impatience



From TheFutureOf (13 Mar 09): The Analytics Ecology

I'm going to build a bit on what I wrote in From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally.

I'm very curious to know where the whole concept of analytics is going to go given three factors: the current economy, the emerging Web X.0 technologies and the increasing requirement that any analytics be multichannel. One thing I didn't address in my response listed above was the concept of keystone species. A keystone species is critical to any ecological system because it directly affects the outcome of the entire ecosystem and helps shape that system. Without it everything collapses. It's not necessarily the foundation of an ecosystem, merely close to it.

So what are the keystone species in the current analytics ecology? More importantly, is that species going to go away in the present economy? If so, will the ecology collapse or will something else come in to take its place? And if some other species takes the current keystone species' place, what will the resulting ecology and keystone species be like?

This line of questioning isn't arbitrary. I've been asked to co-author a whitepaper about NextStage's Evolution Technology (ET) because it “…is a technology powerful enough to fund a new industry and this is worth a WP that will still be relevant in years time. … the idea is to write a White Paper that will explore the possibilities of the technology and open the door to the future.” (these aren't my words, I'm quoting)

The request to co-author such a whitepaper is, to me, similar to Rene's “What if all we had was Omniture and Google Analytics?” It's really asking if ET could be a keystone species in an emerging ecology.

Interesting question, that. Biologic systems are inherently unstable – they have to be because instability creates conditions that determine what will survive. Biologic stabilities only occur in discrete “phase spaces” clustered around some “point attractors” or within some well-defined “limit cycle”.

<My thanks to Stephane Hamel for helping me to clarify the following>

Web analytics' phase space is the domain of (whatever) in which what we call “wa” exists. An attractor is a goal or a determinant, something we believe web analytics is providing us (note that it might not be doing so, only that we believe it is. Misbeliefs are primary reasons business ecologies collapse. Anybody checked their stock portfolio recently?). Limit cycle is the lifetime of a system. Web analytics' limit cycle was defined, and its demise (probably) foretold though unrecognized, when the WAA decided to define some standards. The moment you point at something and call it “A” you can't point at something different and call it “A” as well. The most you can hope for is to make a comparison.

The challenge, of course, is that the definitions were based on a declining ecology. Technologies change the phase space hence the limit cycle comes to an end. Do I think the decline will happen tomorrow or the next day? Heavens no. It'll continue for a while, I'm sure. But this blog is about “the future of…”, after all.

<Thanks, Stephane>

Market ecologies and systems share those traits with the exception that market ecologies are usually artificially maintained.

And any systems ecologist knows that's a dangerous and potentially erroneous statement, that last, because the artificial maintenance becomes part of the ecological system; thus the current players and what they do to maintain their positions, market share, etc., are all part of the ecology hence calculable.

Right now I'm guessing Google Analytics is the keystone species in the current ecology (this is where I again emphasize that I know nothing about web analytics). If Google Analytics went away an incredible vacuum would be created. But the vacuum wouldn't simply be market share, it would also be market space.

The removal of Google Analytics means the range they filled also collapses. That would be followed by an explosion of biodiversity as everybody and their kin attempts to alleviate that vacuum and fill that range. This explosion also means evolution would go into overdrive, testing out new lifeforms, creating them and mutating existing lifeforms because the last successful lifeform ruined the ecosystem when it left. This means the entire ecosystem moves left (metaphorically speaking). The explosion of biodiversity causes the remaining lifeforms to shift their positions in the food-web, kind of like the fact that the Sahara was once a great lake. What survives is what best adapts.

Phase space, point attractor and limit cycle can be fairly well defined for the present analytics ecology. I've been very public in my thinking that web analytics (as I currently recognize it) is based on false attractors and that eventually these would become obvious. Someone far wiser than I wrote “…continuing to perceive the world through glasses that distort relations and priorities, actions are misguided, interpretations are obliged to maintain unwitting fictions, and emotions are inappropriately deployed. (Chickens obliged to wear prismatic lenses always peck to one side of the seed they are aiming for. When grain is plentiful, they nevertheless hit food often enough to survive, and may even, if I may be anthropomorphic, remain unaware that anything is wrong.)”

It's that last piece, “When grain is plentiful, …” that brought much of this together for me. The grain is no longer plentiful in the current ecology (the economy is shrinking). Further, the range (in a food-web sense) is shifting due to emerging Web X.0 technologies. Multichannel requirements take the place of changing phase spaces. The general taxa of Rene's original question and the comments to it demonstrate this, me thinks. What's left is the limit cycle and that's being defined by an increasingly attention-challenged culture.

Please take my following statement as intended, an encouragement to explore and traverse deeper; web analytics has always impressed me as only looking at what it can easily see. If there's something it wants that it can't easily see it creates something and puts it in the place of what it wants. The flurry over engagement was (to me) just such a time. I think it's wonderful that that word is being used and I'm happy for people in the analytics community who are charging for it and making money at it. I also recognize one could just as easily have called it “fred” or “tulip”, assigned the desired meaning to it and gone on ahead. I know “fred” and “tulip” don't have the cachet of “engagement” but what the heck. (My preference has always been for uniquely valued metrics from which others can be built (as I've written elsewhere, I'm a second order tool maker).)

Nor am I suggesting I or NextStage has the answers. I wouldn't be asking these questions if I had the answers. People who know me know I bore rapidly once a problem is solved and quickly move onto the next challenge.

Stephane Hamel agreed with me. “I think you are right,” he wrote. “The current state of wa, as well as the attractors are bound to change. The current cycle is about to end because of the economy, because the web is part of a larger whole that includes many different channels (lots of them offline), because of data integration, and ultimately, because of the changing consumer behavior.”

I'm very curious to know what people think. Truly.

<Also many thanks to Aurelie Pols for reading a draft and commenting. I hope she comments here, as well.>

From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally

As usual, I'll respond to this question through some very different lenses. Questions like “What if all we had was Omniture and Google Analytics?” are (to me) basically questions of systems ecology, adaptive and evolutionary biology, environmental modeling, things like that. My covert suggestion is that rational actors don't exist (duh!) and that the rules of adaptive and evolutionary biology are far better at determining how markets will behave than traditional methods.

So when asked “What if all we had was Omniture and Google Analytics?” I wonder what kind of environment would be necessary for such to be, what kind of evolutionary path and ecologies had to come into and go out of existence in order for such a system to thrive.

For people with an interest, I've put together a bibliography that I used in putting together my response. You can find it at Partial Bibliography for Rene Dechamps' What if all we had was Omniture and Google Analytics?. My responses weren't done on the fly (at least that's not how I did it. It took me better than a month of reading and researching. But oh, what fun it was!).

Organisms don't evolve in absentia. The number of factors involved in any species becoming dominant (especially as dominant as hypothesized here) is just on this side of countability. One needs to investigate things as diverse as

  • studying the present environment to understand the future environment
  • scaling issues – what is required of the organism(s) under study to support that future environment?
  • are there historical analogues?
  • can the “future” organism support the energy costs required to exist in that future environment? Or do bio- and thermo-dynamics stop the scenario from happening? Then what will the organism do to achieve the scenario while working within bio- and thermo-dynamic norms?
  • can a future environment support the organism(s) under study?
  • studying the evolutionary record – how do changes in this landscape occur now and will this change methodology continue? For how long? Why will it continue/not continue? What would take its place?

Other questions, such as how this environment is organized, are also critically important. These multi- and inter-disciplinary approaches are (I believe) of greater and greater necessity as system complexity increases.

We can simplify the problem somewhat based on the conversation that has taken place thus far.

Rene (Wednesday, February 12th, 2008 at 3:27 am): What if all we had at our disposal was Google Analytics as a “basic” free tool and Omniture, the “enterprise” platform, serving the high-end of the market?

Then these two are predators. There are always fewer predators than there are prey. If there are only these two tools for all users, these tools/companies serve the roles of predators in the system with all users being their prey.

Rene (Wednesday, February 12th, 2008 at 3:27 am): How would this landscape affect consultants and practitioners? Would it be a good thing?

I think the only path that would allow them to survive in the ecology defined would be as scavengers, living on whatever the top predators didn't consume. From Rene's previous statement there are only two predators in the environment, therefore consultants and practitioners are neither predators nor offering solutions to users. Working on the side of users means they are prey and will eventually be consumed by the top predators (note that in this analogy consultants and practitioners would probably be hired by, work for, or work in conjunction with the top predators, which makes them scavengers by definition). The only other prey-based roles would be parasite or symbiont. Either case means consultants and practitioners are manipulating the evolution of prey species (clients) to better benefit themselves.

So consultants and practitioners evolve over time. Would it be a good thing? Depends on how you define “good”. This scenario would be unsustainable in any environment or ecology. The question then becomes “How long would such a scenario exist in any given environment or ecology?” That question is very easy to answer — it would last as long as the environment and ecological systems could maintain balance. Hence it becomes mandatory for these organisms to create some kind of balance if they are to survive.

There's an upper limit on how much prey an environment can sustain. The mega-predators' prey must prey on something themselves; that would be us, consumers (of web content in whatever form and for whatever purpose). The clients are therefore in competition for us, the environment can only sustain a countably finite number of us, so the number of clients is also countably and recognizably finite, and so the ultimate size of GA and Omniture is finite.
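
To make that finite-prey argument concrete, here is a minimal sketch (in Python, my choice) of a carrying-capacity food chain. The three-level consumers/clients/vendors chain and every number in it are my own illustrative assumptions, not anything claimed in this discussion:

    # Minimal sketch: a three-level "food chain" where each level is capped by
    # the level below it. Consumers (of web content) grow logistically toward a
    # fixed carrying capacity, clients are limited by available consumers, and
    # the vendor "mega-predators" are limited by available clients.
    # All parameter values are illustrative assumptions only.

    def simulate(steps=200, dt=0.1, consumer_capacity=1_000_000.0):
        consumers, clients, vendors = 100_000.0, 1_000.0, 2.0
        for _ in range(steps):
            d_consumers = 0.5 * consumers * (1 - consumers / consumer_capacity)
            d_clients = 0.3 * clients * (1 - clients / (consumers / 100))
            d_vendors = 0.2 * vendors * (1 - vendors / (clients / 500))
            consumers += d_consumers * dt
            clients += d_clients * dt
            vendors += d_vendors * dt
        return consumers, clients, vendors

    if __name__ == "__main__":
        # Each level flattens out because the level below it is finite, which
        # is the point: the ultimate size of the mega-predators is bounded.
        print(simulate())

Run it and every population plateaus; the predators' ceiling is set entirely by how many clients the finite consumer base can support.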

This is one way balance occurs, but said balance eventually fails due to a hysteresis loop occurring in the ecology (think of the acorn-mouse-deertick-wolf/coyote-deer cycle). So at this point the symbionts and parasites make themselves known in the diorama and push the cascade in one direction or another. They will push the cascade along the gradient of least resistance (that which benefits them the most).

Unlike oil companies, automobile manufacturers, investment houses, mortgage brokerages, etc., most parasites and definitely all symbionts know enough not to kill their hosts (although viruses and other parasites may do this at the last stage). Parasites direct the host organism to become a better host, sometimes killing the host when the parasite moves on. Symbionts engage in a bio-molecular pas de deux with their hosts that benefits both.

What parasites can do that symbionts can't is cycle through host species. Some worms, for example, have life cycles that take them from insect to fish to mammal and back, each host species contributing some necessary environmental elements for the growth and development of the parasite.

Therefore consultants and practitioners that move seamlessly between client and GA or Omniture are more likely playing the role of parasite. The term "parasite" may have negative connotations in everyday usage but not in biology and ecology (and definitely not in parasitology). Theories developed over the past 10-20 years indicate that parasitic activity has been a primary evolutionary force through geologic time.

Even so, Google can't sustain itself in any ecology without diversifying itself via either subspeciation or co-evolution. Omniture is a mega-predator going after mega-fauna in this model. Google is also defined as a mega-predator but going after micro-fauna, not a good place to be, really, as there will always be smaller and smaller fauna available to those willing to invest the energy into harvesting them.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Or would it be the end of analytics as we know it today?

Probably so, as any such specialization means some amazing things have happened to the environment and ecology. For two species to become the only species providing an ecological function to the environment, those two species have to be phenomenally unspecialized (I'll let actual users of these tools determine that). One specializes in mega-fauna (enterprise), the other in what's left. The enterprise predator has to become the more specialized hunter because the number of prey is smaller. Think of this as the polar bear first searching for, then sitting by, the ice hole for hours on end waiting for the one seal to surface. Expenditure of resources followed by high conservation of resources followed by maximal expenditure of resources followed by a long rest period to replenish resources.

The other tool is more like a blue whale sieving krill when it surfaces. It doesn't need to do much beyond what it's doing already — swimming, surfacing, some herding but only as a function of swimming and surfacing. But it does need to be aware that it has to keep moving because it will deplete the krill if it stays in one place for too long. All it needs to do is keep swimming and surfacing; eventually it'll run into more krill. Therefore the GA predator is feeding because the act of feeding is necessary, yes, but primarily because feeding drives some other, also evolutionarily desired activity. GA doesn't care about analytics for analytics' sake, it cares about analytics because analytics helps it achieve some other goal.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Training would be easier for consultants such as ourselves as we would have fewer tools to support and understand.

This is the scavenger model. Consultants, etc., who survive based on what the predators leave, ignore or excrete are scavengers. This means the consultants thriving in Omniture's wake are biologically (ie, business plans and goals) quite different from those thriving in Google's wake.

This also indicates a possible ecological niche in which a scavenger species could evolve into a highly specific predator — there will be prey that are too small for Omniture yet too large for Google (as we've defined them here). Omniture and Google will specialize in very different ways because their prey are very different (again, as we've defined them here) and basic co-evolution principles indicate that the prey species will also specialize to the predator. The fallout from this is that as each mega-predator and prey species co-evolves, larger and larger ecological gaps appear in the food-web due to mutation, etc. Eventually these gaps become large enough that mid-level predators evolve to exploit the vulnerabilities in that new niche.

The short, WA way of saying the above is “Clients will make demands that neither Omniture nor Google can easily meet, or new clients will appear that have demands beyond the scope of both Omniture and Google, and others will develop tools to address those demands.”

At some point this niche will either become large enough that the mega-predators start to feed on it (by accident or by intent), or the mid-level predators will become large enough that they begin invading the mega-predators' territory. When this occurs, and depending on its duration, evolution goes into overdrive and there's a relatively brief explosion of new organisms (think Cambrian Explosion) until ecological balance is once again achieved. Then it's lather-rinse-repeat all over again.

Rene (Wednesday, February 12th, 2008 at 3:27 am): As for many industries, a duopoly generally leads to a lack of innovation.

I doubt a lack of innovation could occur. At some point these two predators would start exhausting their food supplies and would start encroaching on each other's territories and prey species, or the prey species would start evolving better defenses to the predators (ie “requires solutions neither GA nor Omniture can provide or address”). Either one would force evolutionary changes all around.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Certainly if there is collusion at hand and as GA's pricing model is different from Omniture's one, they might have a shared interest in locking the market between their solutions. After all, competition is good. Just take a look at how vendors have been competing these past years to release more powerful tools and better functionalities to address the complexity of Web Analytics

This is an example of evolutionary principles. Other predators come into the environment, assuming and establishing ecological niches. Pretty much what I wrote above.

Rene (Wednesday, February 12th, 2008 at 3:27 am): If Omniture would be the only enterprise solution, prices would remain high while I strongly believe that WA tools will more and more becoming a commodity, putting downward pressure on prices. Don't forget that a tool is just that: a tool and that you need people and processes in order to use them correctly, which are the most important factors in a WA project. We have customers doing great things with Google Analytics and I've seen very poor uses of expensive WA tools. Look also at Office suites, currently you could say that you have two main options: Microsoft and Star Office; Microsoft still sells their software at a very high price and they make margins of over 70%! If there was a real competition I bet that prices would be lower;

Again, a demonstration of niches coming into existence. Rene's statement that "…you need people and processes in order to use them correctly…" is an example of scavengers evolving into mid-level predators.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Having just Omniture and Google Analytics wouldn't/couldn't suit every need. Not all websites are alike and we see it already today that a single tool doesn't fit all. Take for example Coremetrics that focuses on retailers and seems to be doing a great job regarding this vertical. Look also at Unica that allows big corporations to integrate easily WA to Campaign management.

Another example of what I wrote above.

Rene (Wednesday, February 12th, 2008 at 3:27 am): My opinion regarding this question is that it wouldn't be good for the industry if we ended up with just 2 products (I've taken Omniture and Google Analytics as they are the two most important tools nowadays, but it could apply to any other). As I mentioned tools are just part of the equation, an essential but not an important part.

Not within my ability to determine good and bad, sorry. I can only identify environmental, ecological and evolutionary principles at work and predict outcomes based on them. 'Lo, that I were a web analyst…

Rene (Wednesday, February 12th, 2008 at 3:27 am): How would you see yourself in this scenario?

That is an interesting question. Hmm… As NextStage doesn't offer the same products/services as WA does, we're not in the same ecology; not predator, prey or scavenger. Some aspects of NSE being in this environment, and with a nod to our operating principles, indicate we serve the function of symbiont (a discussion of personal philosophies and metaphysics would quickly confirm this, me thinks). However, NSE could be deemed a mid-level predator by mega-predators at some point and would probably be consumed by them, as the market has already indicated there's an audience for NSE products and services and that audience is growing (ie new prey or a new niche is evolving and invading the ecosystem). Since NSE got started other mid-level predators have come into the environment and, while they are definitely currently more profitable than NSE, we do have that one great advantage all others lack — we (as a company) are specifically designed for phenomenally rapid evolution, kind of blending the best evolutionary benefits of viral, bacterial and herd species, co-opting different disciplines to respond to client requests (this response is an example of such), and our technology is both a core and base technology (meaning NSE can rapidly adapt itself to whatever environment it finds itself in. Kind of like a fish leaping out of the water, sprouting wings and feathers and learning to fly before it dives back in again. Also, NSE can utilize resources from other, even alien, environments in order to survive in a given ecosystem until environmental variables change enough for NSE to thrive there).

That's a NextStageish way of offering "Remember all those ELE (extinction-level event) things that happen periodically? We're the species that survives them because we can adapt and breed faster than most."

Readers familiar with r/K Selection Theory will recognize NextStage as opportunistic tending towards equilibrium over time in a given market because once we invade an environment we can rapidly generate offspring highly adapted to that environment.
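
For readers who haven't bumped into r/K Selection Theory, both strategies fall out of the logistic growth equation dN/dt = rN(1 - N/K). A quick sketch, using purely illustrative parameters of my own choosing, shows why the opportunist (high r, low K) wins right after a disturbance while the equilibrium species (low r, high K) wins once the niche fills in:

    # Minimal sketch of the r/K contrast using simple logistic growth.
    # The r-strategist grows fast toward a low ceiling; the K-strategist grows
    # slowly toward a high ceiling. Both start tiny, as if just after an ELE.
    # Parameter values are illustrative assumptions, not measurements.

    def logistic_step(n, r, K, dt=0.1):
        return n + r * n * (1 - n / K) * dt

    r_strategist = k_strategist = 1.0
    for step in range(1, 401):
        r_strategist = logistic_step(r_strategist, r=1.2, K=500.0)    # opportunist
        k_strategist = logistic_step(k_strategist, r=0.2, K=5_000.0)  # equilibrium
        if step in (50, 150, 400):
            print(f"t={step:3d}  r-strategist={r_strategist:7.1f}  "
                  f"K-strategist={k_strategist:7.1f}")

Early on the r-strategist dwarfs the K-strategist; by the end of the run the situation has reversed. That contrast is the r/K distinction the paragraph above leans on.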

Rene (Wednesday, February 12th, 2008 at 3:27 am): Do you want to see a two vendor market, kind of like Windows versus Apple, or do you like the diversity of options we have before us today?

This is another example of what I've written above. Other vendors (scavengers, mid-level predators, etc.) exist, just not in large or obvious enough numbers to be recognized as such, nor to pose a threat to the existing mega-predators' food supply.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): The Pareto Principle, based on Pareto's analysis in 1906 that 80% of Italy's income went to 20% of the population, suggests that 80% of the revenue in the field will go to 20% of the industry.

Dr. Geertz, Pareto's Principle also has applications in evolutionary analysis, often appearing in environmental economics, food-web ratios and relationships, etc. The application is that 80% of an environment's resources go to 20% of the species in that environment. The numbers aren't exact, and this is a reasonable definition of the mega-predator models I discuss above.
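
As a quick worked illustration of that resource-concentration claim, here is a short sketch assuming the resources follow a Pareto (power-law) distribution. The top-fraction formula and the shape value of roughly 1.16 (the value that yields the classic 80/20 split) are standard Pareto facts, not numbers taken from this discussion:

    # Minimal sketch: for a Pareto (power-law) distribution with shape alpha,
    # the share of total resources held by the richest fraction p of species is
    # p ** ((alpha - 1) / alpha). A shape of about 1.16 gives the 80/20 split.
    alpha = 1.16
    for p in (0.20, 0.10, 0.01):
        share = p ** ((alpha - 1) / alpha)
        print(f"Top {p:.0%} of species hold about {share:.0%} of the resources")

Nudge the shape parameter and 80/20 becomes 75/20 or 85/20, which is exactly the "numbers aren't exact" caveat above.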

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): That 20% may come from a small set of businesses (e.g., automobile manufacturers) or a large set (residential construction), …

This is a definition of mega-fauna characteristics.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): How does economics decide how many companies occupy the 20% plateau? It comes down to how scalable and transportable the leading businesses are and how large are the barriers to market.

A definition of the environment/ecology.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): The investment involved starting up an auto manufacturing business is astronomical, operating as a barrier to market that holds down the number of players.

I'd need to think on this one a bit. (minutes tick away). Okay. Could a totally new species appear that is a direct challenger to an established mega-predator? No. Could an existing species evolve, by exploiting an ecological niche, to a point where it became a challenge to an existing mega-predator? Yes. Slight modification to what you're suggesting, me thinks.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): Web analytics, to the extent it is a personalized service, …

That's an interesting thought, personalized service. One of the earlier implementations of NextStage's ET was to “personalize” web pages as they loaded into people's browsers so that the presentation most closely matched the individual visitor's psycho-emotive and -cognitive abilities. Basically it was a plug-in type of thing. Lots of fun, that.

Palani Balasundaram (Thursday, February 13th, 2008 at 7:20 am): Since the question is what if we had just two players, Omniture and GA, i would like to approach this discussion from the websites point of view.
It would make the work simpler for websites to adapt to web analytics. For organizations, to understand analytics they may try out with the free Google Analytics and once the belief is established they may go up the value chain and opt for Omniture. I do agree that migration would be lot more easier and there would be some sort of standardization.
I also share the view that the competition is one factor that helps in breeding better products and it also helps in pushing the prices down.
What would happen to the world of “Mobile Analytics”?

Nicely stated and nicely done. This is an example of co-evolution of predator and prey species, the K aspect of the r/K Selection Theory, until an equilibrium is established in the ecosystem.

The references to Mobile Analytics and competition are examples of mutation, genetic drift, and the r aspect of r/K Selection Theory.

The challenge to the above comes from what I described above; prey can only evolve out of an ecology until the predator takes notice. If there are only two predators in the environment then prey will evolve to not be prey, predators will evolve to exploit the new adaptations, …

Eric Peterson (Thursday, February 13th, 2008 at 3:32 pm): …vendors around the world are popping up with innovation in measuring widgets, social networks, video, mobile, RIAs, engagement, etc.

Eric, forgive me if I'm mistaken, but don't you have a background in bio- or environmental sciences? What I'm describing is probably very obvious and simplistic to you, so my apologies. What you offer here is an example of what I wrote above about scavengers and other species evolving into mid-level predators, etc., to exploit gaps and niches, yes?

Eric Peterson (Thursday, February 13th, 2008 at 3:32 pm): And again, it's not like any one vendor is clearly leading the way into (say this with an ominous voice) “THE FUTURE OF WEB ANALYTICS”

Hey, I'm doing my best here…

Judah (Friday, February 14th, 2008 at 12:01 am): If the duopoly didn't provide capabilities for a certain type of measurement I needed to guide decision-making, then I wouldn't be able to make data-driven decisions until one of the duopolists decided to accommodate my need. The reason I have guided some businesses away from certain vendors (and some towards) is because they were deficient in features or capabilities that the business believed they needed to “win.” Competition catalyzes (no pun intended) innovation via differentiation. Competition leads to the genesis (pun intended :) of features and capabilities that answer market demand. In other words, I think the market should drive the product, not the product driving the market, which is what I fear in the scenario of duopoly (or worse yet monopoly).

Well stated, Judah. The former is an example of gaps in the food-web appearing, the latter an example of co-evolution. Depending on how often the latter occurs it could also be step-wise evolution wherein the prey rapidly evolves into a non-prey species due to some cataclysmic change in the environment. The predator species then either evolves or (because it is over-specialized) becomes extinct because it (being a K) can't evolve fast enough.

Ian Thomas (Friday, February 14th, 2008 at 3:00 pm): See, in the future, there will be more places you can do web analytics, not fewer. I made a prediction some years ago which I still stand by, which is that eventually the 'stand-alone' web analytics tools that we currently know and love will be absorbed into (or absorb, in some cases) adjacent technologies and tools, until there's no such thing as a “web analytics vendor”.

I think this is one of those ELE events I mentioned earlier.

Ian Thomas (Friday, February 14th, 2008 at 3:00 pm): …what if in the future there were no web analytics vendors, but web analytics was everywhere? What would the consultant community do then? Discuss.

Excellent question, me thinks. What is being described is a period change, kind of like the Ediacaran to Cambrian, with a large-scale extinction of the previous period's biota followed by an explosion of new life forms (ie, the Cambrian Explosion as mentioned earlier). What also gets laid down is a paleontological record of how the surviving species became survivors. That would be an interesting study.

Denise Eisner (Saturday, February 15th, 2008 at 5:19 pm): If the WA space were to be dominated by the likes of GA and others that depend on cookies to track users, sites that prohibit cookies from a privacy standpoint would be out of luck. Websites that adhere to the Government of Canada's standards for example are cautioned against implementation of persistent cookies due to stringent privacy laws. This has all but stopped the use of GA for federal government web sites here in Canada.

Did I ever mention that NSE's ET doesn't use persistent cookies?

Rene (Sunday, February 16th, 2008 at 4:27 am): …I don't see large corporations using several 'little' tools in order to get answers. Large companies need an integrated tool that will allow them to deploy on a global scale…

Co-evolution at work.

Rene (Sunday, February 16th, 2008 at 4:27 am): @Joseph James, while I know the pareto principle since I was I kid I don't see the relationship with the 10/90 or the 10/20/70 rule. I don't think that this industry can be 'measured' in terms of revenue as we have major players that have changed the rules.

My line of thought causes a rephrasing of your terms: "revenue" == "environmental resources", "changed the rules" == "modified the ecology". These redefinitions allow environmental economics to apply.

Daniel Shields (Sunday, February 16th, 2008 at 11:48 pm): I am not exactly sold on the idea that Google is 'competing' with Omniture. As I see it, Google has provided a means to the compete by measurement in the market for web entities who cannot afford the pricetag of a commercial solution. In that regard, I think that they are complementary solutions.

See above my explanation of two mega-predators and how their prey differs.

Daniel Shields (Sunday, February 16th, 2008 at 11:48 pm): If speculation pans out to anything, Google has its eyes on cellular bandwidth.

A predator evolving to exploit an evolved prey species.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): In three years there will be no Web Analytics vendor, but Web Analytics will be everywhere – I completely agree that Web Analytics will be everywhere in next few years. This is already happening, as you mentioned and provide several examples. However, I disagree that there will be no Web Analytics Vendor. Microsoft, Google, Oracle, Atlas, Doubleclick etc. will (or already do) provide web analytics as an add on to their products…

Excellent description of a transitory ecology.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): …but there will still be a need for one web analytics product you can rely on to make strategic decisions.

Another fascinating concept wherein the model I use can either be of great benefit or deemed invalid. I asked another NextStageologist for some help on this one. The model I'm using here depends on what “strategic” defines in a time-sense and specifically in a time-sense in this particular ecology. IE, what is deep-time in this environment? The model I'm using can predict with excellent accuracy (look at the historic record (god, it almost hurts to write that)) what the future ecology will be and how the environment will change to support that ecology. A strategy that is based on what you can predict (with great accuracy) will allow any organism to thrive. A strategy that is based on what is currently available won't allow an organism to survive.

It's borderline amusing that while any evolved species carries in its DNA a genetic record of every change it's been through and every environment it's been exposed to (this is how biologies perform trend-analytics), and while several companies use trend-analytics as part of their offerings, most companies seem incapable of applying that tool to their environment's deep-time to determine where they should be and what they should be doing in their future.

You've come very close to recognizing something I believe has been missing in the other comments — that the environment is one of the players in any ecology. No ecology can sustain itself (no balance can be achieved) that the environment is not willing/able to support. Consider the current world economic situation and my point (and the use of this model) is demonstrated in full.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): Can you imagine having 15 different web analytics solutions that all give you different numbers?

You mean like a roomful of economists?

Or how about my favorite joke regarding consultants: A consultant is someone who asks to borrow your watch when you ask them what's the correct time, then tells you the time according to your watch, presents you with a bill and a list of suggestions on how to make your watch more accurate.

From TheFutureOf (11 Nov 08): Responding to Steve Jackson's 16 Sept 08 6:52am comment

(sorry, I don't have a copy of Steve's comment)

Pretty much all your questions are answered on our FAQs page, I believe. What isn't answered there has probably been answered in my presentations. What hasn't been answered in my presentations is probably best answered in a live conversation as blogversations do not easily allow for course corrections and new learnings to take place. And as most people know, I'm remarkably slow in these things.

What behavioral targeting does is simply target ads and offers based on your behavior.

Please define “behavior”. It gets used a lot in these discussions and I'm still sure I don't understand how it's being used.

…behavioral network…

ditto.

…you will receive ads based on preferences you have pre-identified or clicks you have made.

Challenges with the above start with “pre-identified” and work their way down. Most of those preference selector tableaus reveal much more about their authors than they ever could about the people filling them in. Ditto “clicks”, tritto “have made”, …

In your scenario after the auditory stimulation they might be *thinking* about food and your offers are presumably designed around this potential situation.

I believe what I wrote was "they received auditory stimulation while browsing the blog during a pause in their browsing, after that auditory stimulation they started thinking about food". The neural circuits that trigger for food are easily recognizable and differentiable from other biologic needs. There's no "might be" involved. The number of psycho-physiologic changes that occur when people are thinking about food is … well, a lot.

In the behavioral targeting world you wouldnt know that unless the behavior indicated it. (IE they went to a search engine and typed burger – in which case you could serve an advert)

We might be getting to the crux of things with the above statement. Even if there's nothing on a given page that deals with food, even if the reasons an individual came to a site have nothing to do with food, knowing that the individual is now being influenced by hunger allows for a much more precise response range in the content provided. This is touched on in From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on "What are we measuring when we measure 'engagement'".

I know nothing about Future Now's offering. I know who they are, of course, and not much more.

We matched our Rich Personae (see InFocus Reports and Personae Mapping Tool) because a client asked for it.

So is your method to design websites based on this kind of principle?

Have you met Rene, our new CEO? This sounds like a question he can answer better than I. I can put you in touch, if you'd like. (wink wink, nudge, nudge)

Passion about your subject is required…

A friend of mine says he never knows where my ego is as I don't respond to much. I always laugh at that. I'm very passionate about kite flying, music, my family, …, not so much about much else. I'm passionate in my research but am always surprised when others express interest. I suppose I've also got it in my head (oh, my god, the pun that's in the making here) that when I respond passionately to something it's due to certain parts of my thalamic and amygdalic clusters responding to environmental signals blah blah blah.

So when I recognize I'm getting passionate (and hopefully before others do) I can begin examining why which leads to a deeper understanding of myself and (more often than not) those interacting with me. This increased understanding often leads to mutual understandings, which often leads to …

One camp didnt believe Engagement was a valid metric while the other camp did.

Again, a surprise to me based on a different metaphysic. If someone doesn't believe Engagement is a valid metric then don't use it. It's an odd thing, I guess. My training is such that if someone told me they were going to design a craft to get to the moon and power it by tying geese to it…well, I'd probably help them because I'd want to learn a) did they know something I didn't, b) what caused them to have this belief, c) … Then at some point if they ran into difficulties that caused a violation in their metaphysic I could offer “Have you tried eagles? They fly higher, you know…”


JUST KIDDING!


And I do remember that I often have to tell people “I can do or I can teach and we only have two minutes to get this done. Tell me which one you want.” And it still comes down to, “if you don't accept it, don't do it.” It's the inability to move from “this is wrong for me therefore it is wrong for you” that shakes me. I don't use smartphones and I certainly don't stop others from doing so, nor do I tell them they're fools for using them. “Does it make your life easier? Then good for you!” It's right up there with forcing a new student to use a concert reed on their oboe. Most often all that happens is you kill their desire to learn the oboe. I'd prefer to help someone learn their options and let them make their own decision than force my belief system on them.

The valid arguments on both sides of the fence added fuel to the fire.

Remind me to tell you about taking a master class with Neil Simon about presenting valid arguments on both sides of the fence.

AND!!!! I think I'm caught up on all comments and posts. Obviously it's time for me to start some new research, yes?