From TheFutureOf (Published Version, 10 Mar 08): Analyzing Eric's Calculation or “Back into the fray, picking up at Eric's 'The purported complexity of my calculation…'”

I decided to continue the conversation started on Starting the discussion: Attention, Engagement, Authority, Influence, … in a new post for several reasons. Primarily, I started analyzing Eric's Engagement calculation and came up with some suggestions regarding utility, extendability, extensibility, things like that.

Eric's already posted that I'll be working with him to make the formula applicable to a wider variety of interfaces and more generally useful. I also know that I can always use help and have repeatedly and publicly stated that I don't know web analytics.

So, first steps? A semantically exact statement of what we're hoping to measure. I suggest this step because it's much easier to know if your variables will result in the desired solution if you are exact in what the solution looks like and what you have to put into that solution.

Think of it this way: you want to make some chicken soup and you use your grandmother's recipe. I want to make some chicken soup and I use my grandmother's recipe. But your grandmother is Irish and mine is Italian. I'll bet we'd use different spices, different vegetables, different noodles (if indeed we both used noodles).

But I'd bet we both use chicken stock as a base. And is your chicken stock from the leftovers of a roast chicken? What spices did you use there? Or is your stock from bouillon?

So the first step is to decide what we all mean by “chicken soup”. One of my mentors was a genius of an author who used to write “speculative fiction”. I would ask, “What is speculative fiction?” and he'd reply “It's what I'm pointing at when I say it.” This is a great anecdote and an indefensible statement (except in cultural anthropology). If one person “owns” the definition of “speculative fiction”, “chicken soup” or “engagement” then that definition is only valid so long as there exists a market for that definition.

However, a definition that says something like “Basic Chicken Soup”, that is something I can start with to make “Italian Chicken Soup” and allows my Irish friend to extend it to “Irish Chicken Soup”? Now that's a good definition.

I snuck the concept of “extendable” into the above. “Extendable” means the definition accommodates special cases (Italian, Irish, etc). Think of a recipe for Italian Chicken Soup that begins “Step 1: Make the Basic Chicken Soup. Step 2: Now add garlic, oregano, …” That “Step 2” part means that the original definition isn't limited, that it can be extended to incorporate specific features to make it unique.

The concept of “extensible” means two things. First, you can substitute one thing for another if they share some basic properties. For example, you can substitute a glass of wine for a glass of water in the recipe because they're both liquids. You can't substitute a lamb chop for a glass of water, though. Mathematically, this means that if we want to include “clickthroughs” we can use whatever product A calls clickthroughs, whatever product B calls clickthroughs, etc., so long as they all meet some definition of “clickthroughs” (I'll let the WAA worry about things like that).

Second, “extensible” means new spices, new vegetables, new types of noodles, etc., can be used to make the chicken soup better. This means that you can add a new spice to your recipe, not replace one with another to make your soup taste better. Extensible (in this sense) means you're doing what you already do to make your style chicken soup and now you've discovered something more you can add to it to make even more “your style”. You're not watering it down or adding more vegetables to make the soup go further. That's scalability and the equation should be scalable without needing to define it as such.

The sum of these two concepts of “extensible” translates to “the equation is valid across all interfaces including those we haven't thought of yet.”

Wow. Do I love a challenge, huh?

Previously Unpublished: An Analysis of Eric Peterson's “Engagement” Calculation

This post contains the analysis of Eric Peterson's “Engagement” calculation that I mentioned in From TheFutureOf: Analyzing Eric's Calculation or “Back into the fray, picking up at Eric's 'The purported complexity of my calculation…'” (Note to readers — this was previously unpublished). I took the text for my comments, analysis and quotes from How to measure visitor engagement, redux.

As I noted in From TheFutureOf: Analyzing Eric's Calculation or “Back into the fray, picking up at Eric's 'The purported complexity of my calculation…'” and further down in this post, this is an analysis. I went through this exercise because, as suggested in the original premise of TheFutureOf (and continued here in The Analytics Ecology), there is more power in different disciplines coming together than in any single discipline attempting to answer what it doesn't have the tools to question.

So again, to state what I wrote in my caveat to From TheFutureOf: Analyzing Eric's Calculation or “Back into the fray, picking up at Eric's 'The purported complexity of my calculation…'”: Would anyone care to join me in helping Eric reformulate this pseudo-logic so that it does something closer to what is intended?

Let me rebuild the calculation (above) from the ground up using the definitions supplied in How to measure visitor engagement, redux. I'm going to do my best to mathematize the definitions in order to disambiguate meanings. I'll use “==” for “is defined as”, “|” for “is formularized as” and place what I believe are necessary modifications to the original statements in italics in what follows.

Ci == “the percent of sessions having more than 'n' page views divided by all sessions” | = (sessions with more than n page views)/(all sessions with countable page views)

First, the above change serves the purpose of borrowing a lesson from physics, engineering, basically any discipline where metrics have meaning: make sure the units always match and that the final units are relevant to the original proposition. I don't think the modification adversely affects the definition and it does add a little rigor, something that will be required as I progress through the explanation of the calculation's logic.

Second, I am confused by the definition, “the percent of sessions having more than 'n' page views divided by all sessions”. I think what is meant is either the percent of sessions having more than n page views or the number of sessions having more than n page views divided by all sessions (these are mathematically equivalent statements). A “percent” takes the form of “x/y”. My reading of the definition or expression fails because I read it as “(x/?)/y” where “?” is a missing element of the calculation. I considered the material in the preface to the formula, “…to calculate 'percent of sessions having more than 5 page views' you need to examine all of the visitor's sessions during the time frame under examination and determine which had more than five page views.”, and if I understand that model correctly then I've written the formulation as I think it was intended. If I'm mistaken, this modification greatly affects the calculation because the original calculation becomes unusable otherwise.
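To check that my formulation says what I think it says, here's a minimal sketch in Python. Everything in it is illustrative: the function name, the representation of sessions as a list of page-view counts, and the sample data are my assumptions, not anything from Eric's implementation.

```python
def click_depth_index(page_views_per_session, n=5):
    """Ci as I've formulated it: (sessions with more than n page views)
    divided by (all sessions with countable page views)."""
    countable = [pv for pv in page_views_per_session if pv > 0]
    if not countable:
        return 0.0
    deep = sum(1 for pv in countable if pv > n)
    return deep / len(countable)

# Five sessions, two of which have more than five page views
print(click_depth_index([2, 7, 3, 12, 1]))  # 0.4
```

Note that the single division yields a plain ratio, “x/y”, which is exactly why the “(x/?)/y” reading of the original wording fails.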

Ri == “the percent of sessions having more than 'n' page views that occurred in the past 'n' weeks divided by all sessions. The Recency Index captures recent sessions that were also deep enough to be measured in the Click-Depth Index.”

This one gives me some real challenges. There's a similar challenge to what I mentioned above regarding “percent”. The use of “captures” implies that Ri is a subset of Ci. This means that a simple “Ci + Ri” has the potential of yielding over 100% (as I believe negatives are disallowed by the definitions).

The use of “n” for both page views and weeks concerns me. The introductory material states that the calculation isn't absolute for all sites. That I can accept and I would need much more explanation of why the index is identical for page views and weeks. Even allowing one index to be “n” and another to be “m” presents some challenges that I'll address later.

The “…that occurred in the past 'n' weeks…” in itself poses a challenge. Time — as implied by this statement — is a funny variable. The use here indicates that a temporal variable isn't being introduced as a subset of something else, such as “all sessions that lasted more than x minutes” as is done with Di below. Stating “…sessions having more than 'n' page views…” clearly (to me, anyway) indicates that “page views” are being introduced to define a subset of “sessions”. This isn't the case with “…that occurred in the past 'n' weeks…” because determining the correct meaning greatly affects “…divided by all sessions”. The prefatory material includes “…examine all of the visitor's sessions during the time frame under examination…” and leads me to conclude that the same time frame is applied to numerator and denominator. This introduces challenges of its own. Now the value of 'n' must be carefully determined based on data not yet demonstrated in the model; what's the average length of time it takes for “Recency” to occur? It's possible with the current model to determine that someone was extremely “engaged” over a one hour period. But if they are never heard from again, web-wise? Or someone could be extremely “engaged” over a one year period and still not fall under the predicate of “truly qualified opportunities”. The fact that a time frame is specified for this variable and not others indicates that no time frame exists for other variables unless so noted.

In any case, my understanding of what is written causes me to guess (where m may equal n)

| = (sessions with more than n page views in some m week interval)/(all sessions with countable page views in the identical m week interval)

If I have understood the calculation correctly thus far (big “if”, I know) then it's already violating the conservation of units concept I identified above. Ci has no time element therefore it can't be added to or summed with Ri because Ri does have a time element. For those less mathematically inclined, the problem here is that the sum of numbers with dissimilar units provides less information than the individual values. If I have a Weather Index that sums the likelihood of rain (1-10 scale with 10 being a flood) and expected temperature (1-12 scale with 12 allowing egg frying on the sidewalk) and I told you tomorrow's Weather Index is 7, how would you dress for the day? Much more informative is simply saying the precipitation scale is a 2 and the expected temperature is a 10. This concern grows exponentially as we traverse the various measures.
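The Weather Index point takes only a few lines to demonstrate (the scales and the sample numbers are invented for illustration): once dissimilar units are summed, very different conditions collapse into the same value and can't be recovered.

```python
def weather_index(rain, temp):
    # rain on a 1-10 scale, temp on a 1-12 scale; summing them
    # discards which component contributed what
    return rain + temp

stormy_cold = weather_index(rain=5, temp=2)
dry_mild = weather_index(rain=1, temp=6)
print(stormy_cold, dry_mild)  # 7 7: indistinguishable, though the days are not
```

Reporting the two components separately costs nothing and preserves all the information; the sum preserves almost none of it.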

Di == “the percent of sessions longer than 'n' minutes divided by all sessions.”

The use of “percent” continues to be a challenge. Ditto “all sessions”. Tritto unit conservation as Di has no temporal referents. Quartto the repeated use of 'n' as an index.

| = (sessions lasting longer than n minutes)/(all sessions)

Li == “1 if the visitor has come to the site more than 'n' times during the time-frame under examination (and otherwise scored '0').”

That 'n' is going to be very carefully chosen, yes? Also much the same concerns and challenges as noted previously.

| = {1,0} || {n >= N, n < N}

Bi == “the percent of sessions that either begin directly (i.e, have no referring URL) or are initiated by an external search for a 'branded' term divided by all sessions (see additional explanation below)”

Same challenges as noted previously.

| = (sessions that are branded)/(all sessions)

Fi == “the percent of sessions where the visitor gave direct feedback via a Voice of Customer technology like ForeSee Results or OpinionLab divided by all sessions (see additional explanation below)”

I'll leave the logic errors presented by VoC for some links in my bibliography. I will offer that this is the variable proposed thus far that comes closest to either the dictionary or NextStage definitions of “engagement” (also in the bibliography) as it is a direct representation of the visitor's psyche. I don't believe it's a particularly useful representation because VoC as referenced here fails on several social science grounds, and that's just my opinion. Also noting the same concerns as before.

| = (sessions with direct feedback)/(all sessions)

Ii == “the percent of sessions where the visitor completed one of any specific, tracked events divided by all sessions (see additional explanation below)”

This is something I recognize as a Visitor Action Metric. Otherwise same challenges as noted before.

| = (sessions with selected events)/(all sessions)

Si == “scored as '1' if the visitor is a known content subscriber (i.e., subscribed to my blog) during the time-frame under examination (and otherwise scored '0')”

Same challenges as before and with a twist. We're adding a logic referent disguised as a temporal referent. “If during the three weeks we monitored, person A became a subscriber” == 1. But it doesn't matter when during the three week period person A became a subscriber. Nor is there a recognition of unsubscription. Nor is there any consideration of person A's interaction with the subscription vehicle. I just finished a lot of work on understanding how and why people subscribe and unsubscribe to things so I really have a challenge with this being a metric as laid out here.

| = {1,0} || {S,~S}

The use of

[image: Eric's sum over all visitors]

confuses me. Is something calculated that applies to all visitors to a site or is the result unique to each visitor? The extra level of abstraction adds to the challenges of forming the calculation correctly if the former, not if the latter. In the expression “Ci + Ri + Di + Li + Bi + Fi + Ii + Si” and using 'i' as the index, is the meaning that the index “i” for F is the same as the index “i” for C, etc.? In other words, there exists a set “C1 + R1 + D1 + L1 + B1 + F1 + I1 + S1” and a set “C2 + R2 + D2 + L2 + B2 + F2 + I2 + S2” and a set “C3 + R3 + D3 + L3 + B3 + F3 + I3 + S3” and so on for each visitor in the calculation?

This means for every “C” there is a matching “R”, “D”, … for each visitor. Having 20 Cs and 10 Ds, for example, violates the calculation. Unless there's a lot of fast and loose play to make things fit. I also think that some of the explanatory comments in How to measure visitor engagement, redux require a rewrite of the calculation.
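A sketch of how I'm reading the indexing (the visitor records are invented for illustration): each visitor carries one matched set of all eight component indices, and a lopsided collection of components simply doesn't fit the structure.

```python
# Each visitor carries a complete, matched set of the eight indices.
visitors = {
    "visitor_1": {"C": 0.4, "R": 0.2, "D": 0.5, "L": 1, "B": 0.1, "F": 0.0, "I": 0.3, "S": 0},
    "visitor_2": {"C": 0.6, "R": 0.6, "D": 0.2, "L": 0, "B": 0.0, "F": 0.1, "I": 0.1, "S": 1},
}

REQUIRED = {"C", "R", "D", "L", "B", "F", "I", "S"}
for name, indices in visitors.items():
    # 20 Cs against 10 Ds, say, would fail this check
    assert set(indices) == REQUIRED, f"{name} is missing components"
```

That assertion is the “matching” constraint stated above, expressed as data structure rather than prose.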

“You take the value of each of the component indices, sum them, and then divide by '8' (the total number of indices in my model) to get a very clean value between '0' and '1' that is easily converted into a percentage.” Okay. I'm not going to argue semantics and I'll do my best to incorporate this terminology into my understanding.

So: we take the separate ratios (because I don't think the Cs, Rs, Ds, etc., are percentages yet), add them together (there is no summing yet as indicated by the calculation), then divide by the number of ratios we've gathered (8). This returns an average ratio that could be between 0 and 1, though it doesn't have to be, as noted above. If L and S are both 1 there's a real possibility of 25% becoming a false attractor (final calculated value). I would suggest removing these variables or calculating their values differently to provide more meaning to the calculation.

In any case, this average ratio is converted into a percentage.
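Putting the averaging step into a few lines makes the false-attractor concern concrete (the component values are invented; the division by 8 comes from the explanation quoted above):

```python
def engagement(C, R, D, L, B, F, I, S):
    """Sum the eight component indices and divide by 8,
    per the explanation quoted above."""
    return (C + R + D + L + B + F + I + S) / 8

# A visitor who subscribed (S=1) and returned often enough (L=1) but
# whose six ratio indices are all zero still scores 0.25
print(engagement(0, 0, 0, 1, 0, 0, 0, 1))  # 0.25
```

The two binary components contribute a full 1/8 each the moment they trip, which is what pulls otherwise-inactive visitors toward the 25% attractor.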

I think there are flaws in either the explanation or the construction of the calculation. One way or another conservation of units doesn't occur and that pretty much invalidates the calculation from an arithmetic perspective.

The brand index B is also interesting and I agree, complex. We've found that people will find client sites in ingenious ways and one of the reports we offer clients is “Search Term by Level of Interest”. We've found that some of the most qualified visitors find client sites via search terms that appear out of the blue to site owners yet make extreme sense when one considers the cultural concepts the searcher is applying.

The feedback index F is also interesting. The statement “…anyone willing to provide direct feedback is engaged.” is troublesome. Ignoring the use of “engaged”, I will offer that anyone willing to provide direct feedback is willing to provide direct feedback, nothing more until their reasons for providing direct feedback are known. We've already demonstrated and published in detail that (the majority of) people who participate in such methodologies are not motivated by the site (they are not highly qualified and may not even be qualified visitors).

At this point I think evaluating the arithmetic is a lost cause. There are too many logic flaws (my opinion) to make the exercise worthwhile. Instead I'll focus on how well what is offered matches what is intended.

What I'm willing to suggest is that the calculation offered represents some pseudo-logic. I'm not ready to accept that the pseudo-logic represents “…the degree and depth of visitor interaction on the site against a clearly defined set of goals.”

Does the pseudo-logic represent “…new ways to examine and evaluate visitor interaction.”? Of course it does. Is it relevant, does it do what it claims to do and does it demonstrate both reproducible and functionally operable results (from Starting the discussion: Attention, Engagement, Authority, Influence, …, “Can you repeatedly measure what you mean by them so that there's a reasonable surety that what you're measuring is what you mean by the terms you've used?”)?

That's the real question for me. I've created lots of mathematical models that are danged impressive and completely useless (economically) when I create them. An example is the math I created for our first patent. Nobody gave a rat's patootie about it when I developed it because few people understood what the mathematics was doing. Now companies are calling asking for applications based on the math.

So the fact that I believe the calculation offered doesn't do what is claimed is only relevant now and probably has more to do with my ignorance than anything else.

Is the pseudo-logic “…time-consuming…”? I suppose, if you're as anal as I am about understanding things before I respond to, use or accept them. And in their present form? Definitely time-consuming, due to the flaws in the calculation itself as documented above. But calculations (flawed or otherwise) that are time-consuming can be made much less so by a variety of mathematical tools, so this isn't a real concern to me unless the desire is to perpetually do everything by hand. Two of NextStage's developers are remarkably adept at taking the math in my head and turning it into working code. Thank goodness! My inability to create legible blog posts is an indication of my lack of programming skill.

Does this pseudo-logic “…identify truly qualified opportunities.”? I would need to know what “truly qualified” and “identify” means. The calculation provided is arithmetically unstable, hence any values derived from it are questionable, hence it fails the test suggested in the quote a few paragraphs above and several others that would normally be applied. “Does a value of x here mean the same thing as the value x applied there?” that type of thing. Until repeatable and actionable the pseudo-logic is an interesting exercise and nothing more (my opinion again).

Regarding “…this model fully supports both quantitative and qualitative data,” … I'll agree that there are elements in the pseudo-logic that have quantitative origins and others that have qualitative origins, not that the model supports them. Making arithmetic use of something — including it in a calculation without logical reasons for doing so — is not a basis of support.

Lastly “…there is no single calculation of engagement useful for all sites,”… I'll agree that the pseudo-logic presented is not useful for all sites because it is flawed. Let that slide for a second and do some simple lexical substitution using the definition of “engagement” supplied in How to measure visitor engagement, redux, “…there is no single calculation of [the degree and depth of visitor interaction on the site against a clearly defined set of goals] useful for all sites…”.

I'm thinking I don't understand what is meant because (to me) of course there are such calculations available. A simple example is the NextStage Gauge. Interestingly, this tool came about when several university PR and marketing offices asked for something that would allow them to get away from traditional web analytics because, as you point out, “Web analytics is hard…”. They wanted something that would look at all visitors and determine what needed to be done to achieve specific business goals.

Regarding the section of How to measure visitor engagement, redux entitled “How Does This All Work in Practice?”… Let me first offer that the economic value of a metric is directly proportional to

1) the information value of what the metric reports on

2) as that information value is defined by some group with an interest in it and

3) that same group's ability to change environmental factors so that

4) the metric changes report value (not information value) in direct and obvious response to that same group's intentional changes in environmental factors.

Thus the proposition (and using the same lexical substitution as above) “…someone coming from a Google search for web analytics demystified who looks at 10 pages over the course of 7 minutes, downloads a white paper and then returns to my site the next day will have a higher visitor [degree and depth of visitor interaction on the site against a clearly defined set of goals] value than someone coming from a blog post who looks at 2 pages and leaves 2 minutes later, never to return.” is obviously true for items 1 and 2 above (and with the caveats and concerns already documented above) and I have no evidence regarding items 3 and 4.

Does it have anything to do with the dictionary definition of engagement? Not that I can determine.

So, if the final offering is that some pseudo-logic has been created and the term “engagement” has been used as a symbol for that pseudo-logic then great and good. But the value of this semanticism will only go as far as a business case can be made according to the four items above, me thinks. And it's not “engagement” by any definition other than the one supplied in How to measure visitor engagement, redux.

A couple of final thoughts. One of my mentors, a brilliant mathematician, often told me “If you're going to make mistakes, make them at the beginning. They're much easier to find and fix that way.” I think this is the case here. The logic and arithmetic mistakes are fairly simple to fix although the fixes might cause some radical rethinking of what the metric is really reporting.

Second, I didn't go through this simply because I enjoy understanding such things (I do. Ask my wife about my reading math texts to relax). I went through this exercise because, as suggested in the original premise of TheFutureOf, there is more power in different disciplines coming together than in any single discipline attempting to answer what it doesn't have the tools to question.

So again, to state what I wrote at the beginning: Would anyone care to join me in helping Eric reformulate this pseudo-logic so that it does something closer to what is intended? I'm game for it. Remember, though, it might take me a while to respond.

Links for this post:

Previously Unpublished: Analyzing Eric's Calculation or “Back into the fray, picking up at Eric's 'The purported complexity of my calculation…'”

I decided to continue the conversation started on Starting the discussion: Attention, Engagement, Authority, Influence, … in a new post for several reasons. People wanting to know those reasons can find them over on Problem with Blogs for Holmes…for this Holmes, anyway. I'll be posting some “required readings” to what I write here over on BizMediaScience in an attempt to make my posts shorter here.

Caveat: Some first readers of this post were concerned that people unfamiliar with my methods may interpret what I write here as an attack on Eric or his calculation. This is not my intent nor is it the case. My last statement (or close to) in this post is “Would anyone care to join me in helping Eric reformulate this pseudo-logic so that it does something closer to what is intended?” and that offer is sincere. Companies, scientific societies, journals, etc., regularly hire me to evaluate their material prior to publication and I went at this analysis with that same mindset. One first reader wanted to know if anyone had asked me to do this analysis and no, not directly. Eric responded to one of my comments with “The purported complexity of my calculation…” and thus entered his calculation into the discussion. To understand his points I wanted to understand his calculation. Simple as that.

I left off on Starting the discussion: … with “Regarding your equation…hmm…

Give me a bit of time to read through your statements before I offer anything on “The purported complexity of my calculation accounts for that.”

Starting from there…

The first thing I'll note is that this is a definition of “engagement”, an excellent note as it narrows the analysis greatly because now all that's left is to determine

1) if the definition has merit and

2) if the calculation correctly mathematizes the definition.

Completely different problems to solve are

“How many definitions are out there?”

“Are any of these other definitions better at demonstrating ROI or some such?”

“Do any of these other definitions more closely model a reality people care about?”

That list can be interestingly long.

And as always, a NextStageish question, has anybody looked at the calculation itself and determined if it makes mathematical and arithmetic sense? I did see that Frank Flaubert, Gary Angel, Nick Arnett and others had commented on what kind of data is collected and from where but I didn't see anyone actually looking at the calculation itself, its actual functionality, to determine if it would calculate something that stood up to simple analysis. I'm sure someone has and sorry to cover the same territory. Let this be an example of why I should have spent more time studying the situation before responding.

I kept in mind key phrases such as “…new ways to examine and evaluate visitor interaction.” , “…time-consuming…”, “…identify truly qualified opportunities.”, “…this model fully supports both quantitative and qualitative data,” and “…there is no single calculation of engagement useful for all sites,” during the analysis.

The analysis itself can be found on An Analysis of Eric Peterson's “Engagement” Calculation. It's about 8 pages long, went through 3 edits and was reviewed by two other researchers. What follows here is the result.

There is a section of How to measure visitor engagement, redux entitled “How Does This All Work in Practice?” Let me first offer that the economic value of a metric is directly proportional to

1) the information value of what the metric reports on

2) as that information value is defined by some group with an interest in it and

3) that same group's ability to change environmental factors so that

4) the metric changes report value (not information value) in direct and obvious response to that same group's intentional changes in environmental factors.

Thus the proposition (and using the same lexical substitution as noted in An Analysis of Eric Peterson's “Engagement” Calculation (lexical substitution explained)) “…someone coming from a Google search for web analytics demystified who looks at 10 pages over the course of 7 minutes, downloads a white paper and then returns to my site the next day will have a higher visitor [degree and depth of visitor interaction on the site against a clearly defined set of goals] value than someone coming from a blog post who looks at 2 pages and leaves 2 minutes later, never to return.” is obviously true for items 1 and 2 above (and with the caveats and concerns already documented on An Analysis of Eric Peterson's “Engagement” Calculation) and I have no evidence regarding items 3 and 4.

Does it have anything to do with the dictionary definition of engagement? Not that I can determine.

So, if the final offering is that some pseudo-logic has been created and the term “engagement” has been used as a symbol for that pseudo-logic then great and good. But the value of this semanticism will only go as far as a business case can be made according to the four items above, me thinks. And it's not “engagement” by any definition other than the one supplied in How to measure visitor engagement, redux.

Final notes: I went through this exercise because, as suggested in the original premise of TheFutureOf, there is more power in different disciplines coming together than in any single discipline attempting to answer what it doesn't have the tools to question.

So again, to state what I wrote at the beginning: Would anyone care to join me in helping Eric reformulate this pseudo-logic so that it does something closer to what is intended? I'm game for it. Remember, though, it might take me a while to respond.

From TheFutureOf (27 Feb 08): Back into the fray, picking up at Eric's “I guess the problem I have with that…”

<Ramble>
Last night a follower of this blog gave me a call. My thanks! I'm flattered and honored. Please also feel free to post your comments directly into this thread. Brad Berens once told me that people feel more comfortable calling me than posting to my blog because it's easier to discuss single points with me than attempting to interject themselves into a multi-subject thread where all the threads come together at the end. Perhaps so and I definitely enjoy learning people's thoughts during phone calls. I also hope people will post their comments here and call me. That way everyone can benefit from our exchanges.

I heard an interesting statement during a conversation this morning. “Businesses should resist the urge to make decisions on the basis of incomplete metrics acting as surrogates for complete understanding.” It came from the CTO of a client, a midwestern university. The comment came from reading Multi-Channel Analytics and More on “Multi-Channel Analytics”. I directed them to this blog and suggested they take part in the discussion. Time will tell.

And some curiosity questions that people can feel free to call me about because they're (probably) off-topic:
1) What is the finest temporal granularity that web analytics traditionally uses? I'm interested in knowing if web analytics can provide literally one second's worth of data. That level of information would be helpful in some research I'm considering.
2) Does anyone have or know of research and data on business-consumer networks? I'm looking for examples of relatedness that demonstrate mutualism (both parties take part in an activity that will benefit both parties equally). As above, for some research I'm considering.
</Ramble>

Eric, I agree to a certain degree with your paragraph “I guess the problem I have with that, if I'm hearing you correctly, is that some people have an uncanny ability to nod at exactly the right point in a conversation without paying any attention at all. You could, for example, nod to your wife while thinking about, oh, playing the guitar. To your wife you might appear to be paying attention, but the little bubble above your head would show you picking-and-grinning… (as it were)”

First, I didn't know there were cameras in my house. Second, let me go on record as stating that I always pay attention to Susan when she's talking (remind me to tell you about the first time I saw her deal with an obstreperous horse).

Third, as I read your formulation I detect flaws in the underlying understanding and logic. “Is it possible for someone to nod at exactly the right point in a conversation without paying any attention at all to the conversation itself?”

Of course. There have been times, while having a conversation with someone with the radio or TV on, when the actor or newsreader makes a statement that falls perfectly into the conversation. Do I have the attention of the newsreader or actor? LOL! I have an ego, to be sure, and I hope it doesn't extend that far.

Is it possible for someone in my presence (and note I didn't specify physical presence) to nod at exactly the right point in a conversation without paying any attention at all to the conversation itself?

Yes. Let me consider “nod” and again (begging indulgence) I'll quote from Reading Virtual Minds:
Let me give you an example of the difference between everyday intuition and predictive intelligence and persuasive analytics…
Have you ever given someone a gift and seen the “Oh, God. Not another one” look on their face? Before they can say a “Thank you” or a “How sweet” or even “Oh, God. I've already got five of these” you're making apologies and offering to return it for something else they'd like.
Here's the question that starts people down the road of Profiling or Selling or whatever; How did you know the individual wasn't satisfied with the gift?
Smart money says “By the look on their face” and lets it stop at that.
Wise money says “The look on their face is the kind of look I have on my face when I'm disappointed.”
And the people who earn a living at profiling or selling know that it's not just the look on the face, it's the sudden suspension of breath, the slight flushing of the cheeks, the momentary and hardly noticeable sagging of the arms, the defocusing of the eyes, the tightening of the eyelids, the shifting of the eyes right or left but not up or down, the momentary flaring of the nostrils, the tightening of the shoulders, the tensing of the stomach and abdomen, the flexing of the thigh muscles, the flexion of the neck muscles, the slight lifting of the Adam's apple, …
Because — and here's why some people make money at this — all of those things together and a whole bunch of things I left out are what indicate that some (but not all, and here's why you have to be careful when you practice this stuff for a living) people are less than happy with the gift and just how unhappy they are, as opposed to reacting to a sudden cramp, or burping up something that doesn't taste as good as it did going down, or so many other things.
Because sometimes you'll say you're sorry and offer to exchange the gift for something else and the individual will pull the gift closer, vigorously shake their head, no, and their eyes will go wide with protest and they say, “Oh, no. I love this. I'm sorry, I was just listening to my son talking on the phone. He forgot he has to go do some errands for me before he can meet his friends.”
And then you're a little confused and ask, “You're sure?”
And they reply, “Oh, please. I just LOVE this!”
What most people wouldn't have noticed in the above scenario is that the individual tilted their head slightly and their eyes shifted to the right and held for a moment before they slightly shook their head, no. Those few movements, combined with all the others, are the tip-off that their attention, their focus, wasn't on what was in front of them (the gift). Instead, their focus, specifically their sense of hearing (or “audition”), was somewhere else.
Where else?
In the direction their eyes shifted and held momentarily.
And how displeased were they by what they heard?
That's determined by how long they visually defocused before they started to shake their head and how vigorously they shook their head.
And how far away was this auditory cue upon which they focused their auditory attention?
That's determined by how much they canted their head (lifted and aimed their ear, so to speak) in order to hear the conversation in conjunction with how much they closed down (squinted) their eyes during the visual defocusing.
Why did intuition fail us in the above? Because our attention wasn't focused where the other person's attention was focused. We see something and think it applies to what we're focused on, and we know what a certain expression means when it's on our face, so it must mean the same thing when it's on their face, hence their expression applies to whatever we're focused on.
What we've essentially done is attempted to read their mind by interpreting their expression through what's going on in our own head. And you'll be shocked (shocked, I tell you!) to know the sciences we're talking about have their own term for what's going on in your head and why it's different from what's going on in my head. Most people are aware of what's going on around them and what situation (visiting friends, the office, a classroom setting, etc) they're in, and they know what kind of social filters apply. Hence, these sciences and a few others use the terms “Situational Awareness” and “Social Filtering” to describe and often approximate what's going on inside someone's head based on what's going on around them.


So while I accept that an individual might “nod” at just the right moment, I also offer that it's easy to determine if that “nod” has meaning for the information you (or a website, some marketing material, etc) are presenting or for something else in the individual's environment (and this is why I didn't specify physical presence in the above. See Anecdotes of Learning for examples of what I'm describing). I guess this means web analytics (as I understand it) is missing the ability to provide situational awareness and social filtering — two very important elements if web analytics is going to be useful in the mobile web world (what is that, exactly? Web 2.9? Web 3.0? Web X?).

Regarding your equation…hmm…

Give me a bit of time to read through your statements before I offer anything on “The purported complexity of my calculation accounts for that.”

More to follow.

From TheFutureOf (27 Feb 08): Now onto Eric's “If I'm translating you correct, …” comment.

Well…I'm not sure I'd agree that my nod is your click. Actually I'm quite sure I wouldn't agree. Consider the following example.

You're at an eMetrics Summit. You're talking with some folks mid-afternoon about what you'll all do for dinner after the last session. Everybody is nodding, agreeing with the plans. But things change, other people come and go, and when you finally count heads at the restaurant you discover some of the nodders aren't there and people who were never involved in the discussion are.

Your click is counting the people who show up at the restaurant. My nod is everybody talking, some showing up and others not. I chose to explore those nods because the degree, angle, direction, inflection, …, of the nods tells me long before people show up at the restaurant who will and won't show up. This is — I think — one of the fundamental differences between what we measure and analyze and what I understand of web analytics. One of the fallouts from understanding the nods (if you will) is that you can determine if someone is paying attention as I define it and thus engaged as I define it.
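The click/nod distinction can be sketched in code. This is a purely illustrative sketch, not NextStage's actual method: the cue names (degree, angle, direction, inflection), the weights, and the threshold are all invented here.

```python
def click_count(attendees):
    """The 'click': count who actually showed up at the restaurant,
    after the fact."""
    return len(attendees)


def nod_score(cues, weights=None):
    """The 'nod': weight observed cues into a single attendance-likelihood
    score, long before dinner. Cue names and weights are invented."""
    weights = weights or {"degree": 0.4, "angle": 0.2,
                          "direction": 0.2, "inflection": 0.2}
    return sum(weights.get(name, 0.0) * value for name, value in cues.items())


def predicted_attendees(conversation, threshold=0.5):
    """Predict, before anyone leaves the lobby, who will show up.
    `conversation` maps each person to their observed cues (0.0 to 1.0)."""
    return [person for person, cues in conversation.items()
            if nod_score(cues) >= threshold]
```

The point of the sketch: `click_count` can only report an outcome, while `nod_score` tries to predict it from the quality of the nods, which is why the two will disagree whenever nodders drop out or non-nodders appear.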

Going back to the conversation about dinner. Consider the person who's nodding while looking around the hotel lobby. Do you have their attention? Some and not all, and exactly how much depends on the individual's cognitive, behavioral/effective and motivational {C, B/e, M} matrix (Headlines That Attract Attention, Adding sound to your brand website (I touched on this one at the DC eMetrics '07 Summit), Intelligent Website Design: Expand Your Market (Page 2 of 5), AllBusiness.com's Chris Bjorklund interviews viral marketing expert Joseph Carrabis, founder of NextStage Evolution, Part 5, KBar's Findings: Political Correctness in the Guise of a Sandwich, Finale, Notes from UML's Strategic Management Class – Saroeung, 3 Seconds Applies to Video, too, Technology and Buying Patterns) and any environmentally available information that more accurately targets and activates their {C, B/e, M} matrix. More to the point, are they engaged by you or with you?

Now that's an interesting question. Only to the point that whatever else they're devoting their {C, B/e, M} resources to isn't activated by environmentally available information. This is a polite way of saying “No, they're not”.

Next up, Eric's “I guess the problem I have with that…”. Now I must prepare dinner…

From TheFutureOf (18-27 Feb 08): (still responding to Jim Novo's 31 Jan 08 10:59am entry…now on to “…what happens after Trust, what happens when 'Engagement' ends?”)

<Ramble>
Two things from readings I did to better understand these comments:
1) Sometimes I wonder if the semantic similarities are Batesian or Müllerian in nature.
2) What if a real use of the REAN model demonstrates that what we want to measure can't be measured the way we want to measure it? There's a need to determine ahead of time how much credence each lens of the REAN model gets, and by its definition (my reading of it, anyway) each lens requires equal weight.

A comment from someone following these posts:
I've been a little busy (so haven't been paying lots of attention) and it seems only Mr. Novo is still following this thread. A reader of this thread talked with me on the phone, asking if this discussion had run its course, as it seemed (to the reader, and I agree) that nothing has been resolved, not even a “we agree to disagree”. I'm concerned that this discussion and the sprouting of others is doing more to create barriers between the specialties represented than offer ways to bring them together.

One thought:
Can evidence inform the debate? A friend told me that I have to start showing graphs and charts of the results I talk about when I do presentations so that people will believe me. This was an interesting request, I thought. Graphs and charts aren't “believable” to me, they merely represent the outcomes of experimental systems I accept as valid. This is one of the challenges I have with web analytics making claims to engagement, etc. Great charts and nice formulae and they don't come from anything that measures what I recognize as engagement, etc.

As noted above, I get busy and may not be devoting the time to this blog that some would prefer. My apologies. But my busyness often produces some fascinating results. The other discussions in this blog have merit, I'm sure. But if the function is to perfect the visitor experience (that is the goal, isn't it? I mean, all these analytics are worthless unless they create happy, satisfied visitors, yes? Just my thought, anyway) then there has to be a way to blend something such as the following:
I've been going through journals looking for information that backs up some research NextStage did that deals with page load times. This became of interest when I was a WAA member, as there seemed to be some discussions around page load times being a possible concern. NextStage has always measured page load times (and remember, what NextStage calls “page load time” may not be what others define as “page load time”). Anyway, one of the things we learned was that (given some standard page load time T), some visitors prefer slower page loads (t > T) and some faster page loads (t < T). This preference was directly tied to personality and learning style. Fascinating stuff, directly measurable (not just by NextStage; these types of things are regularly measured and evaluated in disciplines such as economic rationality (even though humans don't always act rationally…and there is where NextStage comes in)) and I think something worth focusing on as part of the discussion, should any be interested (and willing to put up with my slow response rates. Guess that means you'd appreciate slower load times if you correspond with me…).
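The t > T / t < T finding above can be expressed as a simple segmentation. A hedged sketch, assuming we already have, per visitor, the load time t at which that visitor's engagement measure peaked; the data shapes and names are invented for illustration:

```python
def segment_by_load_preference(observed, baseline):
    """Split visitors by whether their engagement peaked at slower-than-
    baseline loads (t > T), faster (t < T), or right at baseline.

    observed: visitor -> load time t (seconds) at peak engagement
    baseline: the standard page load time T
    """
    segments = {"prefers_slower": [], "prefers_faster": [], "at_baseline": []}
    for visitor, t in observed.items():
        if t > baseline:
            segments["prefers_slower"].append(visitor)
        elif t < baseline:
            segments["prefers_faster"].append(visitor)
        else:
            segments["at_baseline"].append(visitor)
    return segments
```

In practice you would then look for what the members of each segment share (personality, learning style), which is the correlation the research described above actually reports.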

Let me provide an example; I'm currently developing a tool for a client that will allow the user to determine if a blogger believes themselves or not. We're already measuring whether blog readers believe the blogger or not (again, this goes back to a question posed to me by SNCR Sr. Research Fellow John Cass). In both cases, though, the actual discussion — the content of the blog — is pretty irrelevant. What matters much more is the blog reader's perception of the author's reliability and credibility and the author's self-perception of their own reliability and credibility. This isn't something that I believe can be measured by clicks and such alone. That's my guess and I'd be thrilled to be corrected.

Okay, enough rambling. Now to get back to responding to Jim Novo…
</Ramble>

I knew there was a reason it took me so long to get back to this…the question of what happens when “engagement” ends. This was actually something that fell out of our email newsletter research and will be published this Friday, 29 Feb 08 (I think), in IMediaConnection; how people demonstrate an “irrelevancy” response to a newsletter (or much of anything else in the online world, it seems).

The scenario you describe re a friend and losing contact is something we call “Determining if the door is open or closed”. It has a strong history of research and applications in social work, is strongly tied to culture, belief, education, training, …, and pretty much can be summed up with “How does someone define friendship?”

(Interesting side note: most people fail at sales because they fail to recognize the difference between “relationship” and “friendship”, ie, that “friendship” is one type of relationship. Their boundaries are unclear, hence they have challenges in sales relationships. I just finished working on a project with a LeadGen/DemandGen firm and one of the ways we boosted productivity was to train individuals about creating boundaries between types of relationships.)

My belief, based on my experience, is that the concept of “Is the door open or closed?” exists in sales (note that I'm not a salesperson, never have been, don't wish to be. My experience is observational only).

You ask “What are the stages/attributes of a dissolving friendship from the literature?”

Excellent question! Thanks for asking! The answer is wrapped up in that culture, belief, education, training, etc., stuff I mentioned above. I'll use myself as an example; one of NextStage's early execs often commented to me that he couldn't understand why so many people agreed to work with and for me based solely on a handshake. I was remarkably ignorant of business at the time (still am, pretty much) and replied (very honestly and ignorantly) that perhaps because I spent my childhood milking cows I had a good grip, hence a good handshake.
What it really comes down to is concepts of “friendship” and “trust”. I know how to demonstrate “friendship” and “trust” to people and people echo that back. One way of doing so is by agreeing to work with and for me based on a “handshake”.

An important element of this needs to be mentioned; People who trust and are friendly with themselves will demonstrate trust and friendship to others. We have another term for this when we do trainings, presentations and talk with clients, “If I am a thief then you must steal”. People who don't demonstrate trust and friendship are people who will not work in your or their own best interest. A good friend and early adopter/trainer of NextStage's technology is a native Australian. He spent several years here in the US then moved back to Australia. I was one of the first people he told about his decision to move back. His reason is one I've heard from several people from other countries who deal with US businesses. He said, “In the US, the whole purpose of a contract is to decide how and when the mutual f???ing will begin.” A painful statement, yes, and one definitely worth studying.

Note that none of the above deals with “trust” per se because the “trust” doesn't change value, it is what is “trusted” that does. A lost friendship isn't a diminution of trust so much as a trust that the relationship has changed.

Let me ask a question re your “Pretty soon, you're not friends anymore.” What does one do if the friend unmet for several years is seen in the mall? Or knocks on your door? I am curious because the ability to “stay in touch” is very recent historically, only since the advent of high-speed personal communications. Forgive the vanity of quoting from Reading Virtual Minds:
Business often demands that we “stay in touch” or “be in touch” 365x24x7. At the same time, they implicitly demand that we stay in touch or be in touch by not touching or interacting with a human at all. How often are you invited to go to a company's website while you're waiting for a human to respond to your support call? How many of you have direct deposit and do most if not all of your banking on line? How many of you would be upset if ATMs vanished and you — Gasp! — had to wait for a human teller to handle your transactions? How many of you spend your day listening to your personal music player regardless of whether you're in an office surrounded by co-workers or one of many people walking a crowded street? And how many of you would be upset if someone told you you couldn't listen, that it was distracting you from something they deemed more important, like work, or that truck rushing down upon you which you can't hear because your earphones cancel out all other noise?
The truth is we as a society in this “modern” world are allowed the solace of others less and less even though the increased pressure this same society and modern world places upon us demands it more and more.
And be aware of how language was used in that last paragraph to make you feel something you might not have otherwise; modern is in quotes, separating it from the rest of the sentence, allowed, solace, increased pressure and places upon are all metaphors of physical contact and distance. The message hidden yet strongly suggested in the above? “Oh, these devices, we use them at our peril! Be Aware! Watch the Skies!” and all that.

“Staying in touch” was unthought of in 1900 except for the very rich. Go back to 1800 and “staying in touch” meant (maybe) receiving a hand-written letter when the ships docked, maybe once or twice a year. Prior to that the norm was that people who left the village were gone forever — ahh, but not forgotten and still friends.

I remember my paternal grandfather's return to the village of his birth in Italy after an absence of some 60 years. People were hugging and kissing him, opening their homes to him, bringing out their best wines. He was still their friend. Just as he was still “trusted” to do harm to those who had harmed him before he emigrated to the US 60 years earlier (Grandpa often quoted Archilochus).

(Anybody wonder what, in my mind, qualifies as a <Ramble></Ramble> and what doesn't?)

I'm also not sure that the concepts of “trust” and “friendship” can be used in this discussion because an unequal friendship can also involve concepts of “betrayal”. Betrayal happens in business, yes, and I'll ask that we limit this to B2B and B2C situations. Businesses may want consumers to think the business is a “friend” and that strategy has (I think) shifted to concepts of branding and identity-branding (where individuals are concerned) because consumers are experiencing new voices thanks to the internet and new voices equate to new power.

Consumers may feel “betrayed” by a business, that their “friendship” has been devalued in the relationship and that they can “trust” the business to cause them pain or harm. But consider the response — Comcast and other businesses radically change their business methods and practices because consumers have voices they never had before. A business would gladly suffer a class action suit rather than a blog visited and commented on by several tens of thousands of visitors a day or week. The former can be negotiated. The latter can destroy.

Applying this to web measurement. Yes, it is possible to directly measure if people find marketing material trustworthy or not, hence worthy of “friendship” in the sense that we as a species tend to accept a higher degree of familiarity with those we trust to cause us pleasure than those we trust to cause us pain. Similarly, it is possible to directly measure that such people are engaged, even how long after their web (in this case) session ends they will remain “engaged” (using the definition I'm most familiar with). Note that I also recognize that engagement crosses several channels, not just the web. People act as a response to (usually) several touches. I think (and am not sure) that web analytics is aware of this.
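The claim above, that one can measure how long after a session ends someone remains “engaged”, can be illustrated with a toy persistence model. This is my assumption, not NextStage's actual method: the exponential form and the half-life parameter are invented for illustration.

```python
import math


def residual_engagement(e0, half_life_hours, hours_after):
    """Assumed exponential decay of engagement after a session ends.
    e0 is the engagement level at session close (half-life is a guess)."""
    return e0 * 0.5 ** (hours_after / half_life_hours)


def hours_engaged_above(e0, floor, half_life_hours):
    """How long after the session engagement stays above `floor`,
    under the same assumed decay."""
    return half_life_hours * math.log2(e0 / floor)
```

Under these assumptions, a visitor leaving fully engaged (e0 = 1.0) with a 24-hour half-life would stay above a 0.25 “still engaged” floor for two full days; the real question, of course, is whether any cross-channel touch resets or accelerates that decay.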

<Ramble>
Yes, I may go off and fall silent for a bit. Isn't it fun when I come back, though?
</Ramble>

And my next entry will respond to Eric's “If I'm translating you correct, …” comment.

Thanks for your patience. – Joseph

From TheFutureOf (18-19 Feb 08): still responding to Jim Novo's 31 Jan 08 10:59am entry…now on to “…the concept of Trust is a bit problematic, …”

I disagree with the statement that “…the concept of Trust is a bit problematic…”, both as something measurable online and as a concept. We demonstrated years ago that a measurement of the non-conscious message “You can trust us to help you” was directly connected to success in the market place.

I think (and am not sure) that what you're describing is “trust in the marketspace”, something recognized in neuroeconomics and economics in general (R. S. Burt, “Bandwidth and Echo: Trust, Information, and Gossip in Social Networks”, in Networks and Markets: Contributions from Economics and Sociology, edited by Alessandra Casella and James E. Rauch, New York: Russell Sage Foundation, 2001, p. 30; also “Social Decision-Making: Insights from Game Theory and Neuroscience”. Although not directly related, an equally good piece is Watts and Dodds, “Influentials, Networks, and Public Opinion Formation”, Journal of Consumer Research, Dec 2007, v34; also Watts' “The Collective Dynamics of Belief”) and closely tied to what's called “fair exchange”.

[Image: Pain to Pleasure Trust Slider]
I think (and am not sure) that another facet of what you're describing is something that occurs in language and not in the mind. The mind — without extensive training — can't conceive of negatives, therefore what (most) humans do is think of a positive and then negate it. In other words, it's not that someone “doesn't trust” another person or organization, it's that they trust them to do something unpleasurable. The degree to which person A “distrusts” entity B is really a measure of how much person A trusts entity B to do hurt or harm to person A.

Taken from this perspective, it's not that you trust or don't trust some company, it's your confidence in your ability to manage the pain or pleasure you believe you'll receive as part of your exchange with them.
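One way to make that pain-to-pleasure slider concrete: rather than modeling trust as present or absent, put it on a single signed scale. A hedged sketch; the input ranges, threshold values, and labels are all invented for illustration:

```python
def trust_valence(expected_pleasure, expected_pain):
    """Collapse 'trust' and 'distrust' onto one signed slider in [-1, 1]:
    +1 means fully trusted to deliver pleasure, -1 means fully trusted
    to deliver pain or harm. Inputs are assumed to lie in [0, 1]."""
    assert 0.0 <= expected_pleasure <= 1.0 and 0.0 <= expected_pain <= 1.0
    return expected_pleasure - expected_pain


def describe(valence):
    """Map the slider back to everyday language (cutoffs are arbitrary)."""
    if valence > 0.25:
        return "trusted to please"
    if valence < -0.25:
        return "trusted to harm"  # what everyday language calls "distrust"
    return "uncertain exchange"
```

The design point: “distrust” never appears as its own quantity; it is simply the negative end of one trust scale, which matches the claim that people trust a disliked company, just to hurt them.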

<Ramble>
(no doubt you're thinking “you're signaling a ramble? You mean, the rest of these aren't rambles?”)
“manage pleasure” is almost an oxymoron, don't you think? Reminds me of Commander Cody and the Lost Planet Airmen's “'Too much fun'? That's news to me. 'Too much fun'? There must be a whole lot of things that I've never done. I've never had too much fun.”
</Ramble>

This may seem like semantics to you and I believe it isn't. The brain-mind is an amazing place. It's well worth exploring and I hope that is my value-add in this discussion.

(more to follow…picking up with “…what happens after Trust, what happens when 'Engagement' ends?”)

From TheFutureOf (8-9 Feb 08): Responding to Eric (080208)

(Picking up where I left off, or close to it)
I'm not sure, and I believe your thought that the border between what is measured and what is between our ears can't be crossed perhaps exists due to a false assumption that what's between our ears can't be measured.
Indeed, yes, it can. I can go into details and, believe me, you wouldn't want me to.
Also, there's the reasonable surety thing. Unless a metric provides reasonable surety of repeatedly measuring cause and effect, I'm not sure I'd be comfortable calling what is measured a metric, at least not one I'd be comfortable having clients put money on.
Proxies for the truth: this is (I hope) a standard of understanding in any scientific measurement. Measurements don't provide truth, they provide pointers to the truth. My favorite example of this is the deep-mine neutrino experiments, where they fill a huge tank with a chlorine-rich fluid and wait. Should (should!) a stray neutrino collide with a chlorine atom, it transmutes into an argon atom that can later be counted (other detector designs line water tanks with photodetectors to catch the faint flash of an interaction). These are very rare occurrences and one of the only ways we have of probing some incredible cosmological and quantum mechanical questions.
But consider what cosmologists and quantum theorists claim from such highly rare occurrences! And this, I ask, is supposed to be truth?
Does a high bounce rate mean your page sucks? Depends on the completely measurable squishy stuff between the visitor's ears and what's on the page.
Does having a high conversion rate mean you have a well-designed path? See the above. In both cases, it is one data point among several and I always work to exhaust all the possibilities and all the data before making a statement or a suggestion.
Does a reasonable surety assume a level that simply doesn't exist in the online world? Depends what you're measuring and how it's being measured, methinks. The purpose of this blog is to cause discussion; my desire is for a meeting of minds, a glorious accident, and I'm not going to say I have the answer, nor the truth.
However, I do believe that, knowing we're working with proxies for the truth, we should either find the closest proxies or learn how to combine proxies from different disciplines so that the new generation or new class of tools does indeed bring us closer to that truth we know exists (shades of Faith in things unseen).
Am I criticizing your engagement framework? Not at all. I recognize that it is from a paradigm with which I'm unfamiliar, therefore I default to one of my favorite responses when someone asks me what I think of another researcher's methodologies: “They do it differently than I was taught to do it.”
I also believe working to find similarities and overlaps in our paradigms is a manifestly worthwhile exercise (more work, yeeha!).
Kind of like consulting a neuroscientist, Shannonist, semiotician or semanticist when all you really need is a page view count. Ohh. Ouch. I think.
Walking and Running. But never in traffic. Especially on websites. (oh, Joseph makes a very bad pun).

(more to follow…I'm catching up, folks. And let's face it, anybody who's emailed me knows I'm remarkably slow on my best days)

From TheFutureOf (5 Feb 08): More thoughts on Eric's 29 Jan 08 comment

Boy, I go away for a little vacation and people start posting.

Bear with me and I'll respond in the order things appeared, going back to Eric's 29 Jan 08 comments. I responded earlier and, going back over the communications since then, perhaps I should clarify things a bit.

One of my favorite anecdotes is about the anthropologist and the microbiologist having lunch. The microbiologist checks her watch and exclaims, “My goodness! I have to go kill a culture.” The anthropologist has a heart attack.

This anecdote is about two different disciplines using the same word to mean two very different things and the confusion that ensues.

I share this anecdote because I'm not sure folks realize that NextStage has been monitoring and reporting on visitors' levels of attention, engagement, trust, etc., etc., since we started in 2001. And yes, I do mean attention, engagement, etc., as written in Attention, Engagement and Trust: The Internet Trinity and Websites, Defining Attention on Websites & Blogs, in the 7 day blog arc starting with NextStage Evolution's Evolution Technology, Web Analytics, Behavioral Analytics and Marketing Analytics Reports for the BizMediaScience Blog, 7 day Cycle, Part 1: Are Visitors Getting Good Value?, and in several dozen papers written since I started studying these things back in 1991 (1987 if you want to stretch things a bit).

So when I write “Engagement is the demonstration of Attention via psychomotor activity that serves to focus the individual's attention” and “Can you repeatedly measure what you mean by them so that there's a reasonable surety that what you're measuring is what you mean by the terms you've used?” I recognize that the intersection of those two statements is “Can you repeatedly measure engagement as the demonstration of psychomotor activity that focuses the individual's attention so that there's a reasonable surety that what is measured is what is meant?” I answer yes, we've been doing it for years.

However, is what NextStage measures and reports on as “engagement” what Eric means by “engagement”? Probably not and the differences are what I wish to understand (I'm getting closer thanks to a conversation I had over the weekend with Stephane Hamel. Thanks, Stephane). I can look at the reports we generate from sites we're monitoring and tell the owner “Visitors on this page were interested in this but this is what got their attention. They did/didn't click on it for these reasons. This is what you need to do to make them click/stop them from clicking on it. They were completely engaged with this information on this page and completely disengaged with this other information on this other page. Here's what you need to do to get them to act/respond. …”

(As an aside, another point of confusion for me is the increasing interest in “Time-On-Site” and “Time-On-Page”. These are metrics we've been strongly monitoring since 1999, when I started testing my theories on real eCommerce sites. Causing people to spend less time or more time has distinct benefits depending on the site owner's goals. This, also, is heavily documented in a variety of publications.)

Next up: Does some level of precision exist in the online world? I think it does and I'll leave that for my next post.

From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence, …

Okay. Something controversial to start.

The only problem is that to me what I'm offering isn't controversial. It deals with measures and measuring.

Measuring what?

Well, when you put some Flash object on a page, what can you measure? I'm not a web analyst, so to me the answers are obvious; measure the psychomotive and psychobehavioral cues that visitors are demonstrating. These and other elements are what make up the Cognitive, Behavioral/effective, Motivational matrix, or “{C,B/e,M}”. The {C,B/e,M} tells you things like age, gender, buying styles, best branding strategies, impact ratios, touch factors, education level, income level, …

I understand that not everybody finds these things fascinating. Anthropologists, behavioral and cognitive psychologists, psycholinguists, sociologists, behavioral ethologists, …, those kinds of people go nuts over this kind of stuff.

Some of the stuff listed above has to do with things like attention, engagement, authority, influence, …

This is where it gets a little…umm…interesting. I see words like the above used a lot in web and web-based “behavioral” analytics. This is a mystery to me. Much in the same way that an anthropologist and a microbiologist use the term “culture” to mean two very different things, I think web analysts and web-based behavioral analysts use the terms attention, engagement, authority, influence, … to mean two sets of very different things. I've often commented and written that behavioral tracking as defined by the industry doesn't track human behaviors at all. Not as I understand them, anyway.

Okay, so what do I mean by these things? To recycle content from Attention, Engagement and Trust: The Internet Trinity and Websites:


Attention is a behavior that demonstrates specific neural activity is taking place.

Engagement is the demonstration of Attention via psychomotor activity that serves to focus an individual's Attention.

I'll add in Trust because NextStage is often asked to determine if people find marketing material — websites, brochures, etc. — trustworthy:
Trust is what the consumer — well informed or not — gives the site (or whatever is asking for the consumer's Attention) when their Engagement is rewarded with useful, relevant and meaningful information.


I can go into authority (something fellow SNCR member John Cass caused me to explore and which I'll be publishing about soon) and influence. I know how to measure what I mean by these things. But the definitions I use don't come from the web world, even though what I mean by them can be measured through any number of commonly used web-enabled devices.

And while I'm not sure, I don't think my definitions are those used in web analytics and web-based behavioral analytics. What I can offer is that my definitions — and this is my opinion here — are more closely aligned to what is generally understood in the literature (in the disciplines I mentioned above) than what is meant by web analysts and web-based behavioral analysts. I'm not equating “close alignment with literature” with “more valid”, merely offering that different paradigms can offer more understanding than any single paradigm alone.

But right now I think I've gone on enough. I came here to learn. I'd really much rather hear what others think, understand what they measure and what value they assign to it.

So for me the real questions are:

  1. What do you mean by the terms used here?
  2. Can you repeatedly measure what you mean by them so that there's a reasonable surety that what you're measuring is what you mean by the terms you've used?
  3. Can you make these measurements through a commonly used web-enabled device?
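Question 2 above is, in measurement terms, a question of reliability. One conventional way to put a number on that “reasonable surety” is test-retest reliability: measure the same subjects twice under the same conditions and correlate the two runs. The sketch below is my own minimal illustration with made-up scores, not anything from the post's methodology; it simply shows the kind of check question 2 is asking for.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length measurement runs.

    A value near 1.0 means the two runs rank and space the subjects
    almost identically, i.e. the measure is repeatable.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical "engagement" scores for four visitors, measured twice:
run1 = [3.1, 4.0, 2.5, 5.2]
run2 = [3.0, 4.2, 2.4, 5.1]
print(pearson(run1, run2))  # roughly 0.99 — the measure repeats well
```

A high correlation doesn't, of course, prove you're measuring what your term names (that's validity, not reliability), which is exactly the distinction the three questions are circling.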

Links for this post: