Reading Virtual Minds Volume II: Experience and Expectation Now Available on Amazon

First, we appreciate everyone’s patience while we got this volume out.
And now, from Holly Buchanan's Foreword to the book…

After inhaling Reading Virtual Minds Volume I, I was like an antsy 3-year-old waiting for Reading Virtual Minds Volume II. It did not disappoint.
I love the way Joseph Carrabis thinks. He has a unique ability to share broad rich theory with actionable specifics. Unlike many technical writers, he has a unique voice that is both approachable and humorous. It makes for an enjoyable read.
But what’s the main reason why you should read Reading Virtual Minds Volume II: Experience and Expectation? Because where most companies and designers fail is on the expectation front.

Humans are designed as expectation engines.

This is, perhaps, the most important sentence in this book. One of the main points Joseph makes in this volume is this: understand your audiences’ whys and you’ll design near-perfect whats.
Design failures come from getting the whys wrong. That can lead to failures on the experience side, but also on the expectation side. And that can be the bigger problem.

Expectation is a top-down process. Higher-level information informs lower-level processing. Experience is a bottom-up process. Sensory information goes into higher-level processing for evaluation. Humans are designed as expectation engines. Top-down connections outnumber bottom-up connections by about 10:1.

Why is this so important?

In language, more than anywhere else, we see or hear what we expect to hear, not necessarily what is said or written. Across all cultures and languages, neurophysiologists and psychologists estimate that what we experience is as much as 85% what we expect to experience, not necessarily what is real or ‘environmentally available’.


When people expect A and get B they go through a few moments of fugue. External reality is not synching up with internal reality and the mind and brain will, if allowed, burn themselves out making the two mesh.

Get your consumer/visitor/user experience AND expectation right, get their why right, and you’ll be exponentially more successful.

Here are just a few of the goodies you’ll find in this book:

  • Privacy vs. value exchange and when to ask for what information. Joseph has some actionable specifics on this that will surprise you.
  • Why we design for false attractors rather than the real problem.
  • The importance of understanding convincer strategies. Convincer strategies are the internal processes people go through in order to convince themselves they should or should not do something.
  • Companies spend a lot of time trying to convince consumers to trust them. But what may be even more important is understanding how to let consumers know you trust them. This book has ideas on how to show your customers/users/visitors, “I believe in you”.
  • How often our own experiences influence our designs. Unless you’re able to throw all your experience out and let the user’s experience in, get out of the usability and design business.
  • How to allow your visitors easy Anonymous-Expressive Identity and make them yours forever.
  • The importance, when introducing new material, designs or interfaces, of making sure your suggestions provide a clear path to the past (thus being risk averse while providing marketable innovation).

As always, Reading Virtual Minds provides specific actionable ideas. But it will also make you think and approach your work in a new way. And I think that’s the best reason to treat yourself to this book and the inner workings of NextStage and Joseph Carrabis.

(and we never argue with Holly Buchanan…)


Reading Virtual Minds Volume I: Science and History, 4th edition

It’s with great pleasure and a little pride that we announce Reading Virtual Minds Volume I: Science and History, 4th EDITION.

That “4th EDITION” part is important. We know lots of people are waiting for Reading Virtual Minds Volume II: Experience and Expectation and it’s next in the queue.

But until then…

Reading Virtual Minds Volume I: Science and History, 4th EDITION is about 100 pages longer than the previous editions and about US$10 cheaper. Why? Because Reading Virtual Minds Volume II: Experience and Expectation is next in the queue.

Some Notes About This Book

I’m actually writing Reading Virtual Minds Volume II: Experience and Expectation right now. In the process of doing that, we realized we needed to add an index to this book. We also wanted to make a full color ebook version available to NextStage Members (it’s a download on the Member welcome page; and if you’re not already a member, what are you waiting for?).

In the process of making a full color version, we realized we’d misplaced some of the original slides and, of course, the charting software had changed since we originally published this volume (same information, different charting system). Also Susan and Jennifer “The Editress” Day wanted the images standardized as much as possible.

We included an Appendix B – Proofs (starting on page 187) for the curious and updated Appendix C – Further Readings (starting on page 236). We migrated a blog used for reference purposes, so there may be more or fewer reference sources, and modified some sections with more recent information.

So this edition has a few more pages and a few different pages. It may have an extra quote or two floating around.

You also need to know that Reading Virtual Minds Volume I: Science and History is a “Let’s explore the possibilities” book, not a “How to do it” book. As such, it deals with how NextStage did it (not to mention things that happened along the way). It does not explain how you can do it. This book’s purpose is to open a new territory to you and give you some basic tools for exploration.

There are no magic bullets, quick fixes, simple demonstrations, et cetera, that will turn you into jedis, gurus, kings, queens, samurai, rock stars, mavens, heroes, thought leaders, so on and so forth.

How to Do It starts with Volume II: Experience and Expectation and continues through future volumes in this series. We’ve included a Volume II: Experience and Expectation preview with a How to Do It example on page 302 so you can take a peek if that’s your interest.

That noted, I’m quite sure that you won’t get the full benefit of future volumes without reading this one because unless you’ve read this one you won’t understand the territory you’re exploring in those future volumes.

That’s Reading Virtual Minds Volume I: Science and History, 4th EDITION. It’s so good and so good for you! Buy a copy or two today!


NextStage Evolution Research Brief – The Relationship Between Product Release/Announcement/Introduction and Social Network Response (Control Issues)

Basis: This publication documents an ongoing (eight years to date) study of how increasing social contagion (information dispersal) is affecting vendors’ abilities to control market response during product/service release, announcement and introduction.

Background: Businesses were able to control product/service messaging throughout most of the history of commerce primarily due to consumer isolation. Consumer social networks were relatively small compared to the reach of all but local businesses. The introduction and proliferation of social media and widespread social networks have given consumers reach equal to if not greater than businesses, resulting in an inability to control product/service messaging. The proliferation of “social” is causing businesses to become increasingly reactive tactically while working to become proactive strategically.

Objective: To determine if Brands should control socially propagated messages or become another voice in the conversation.

Method: Fifty brands were studied, ten brands with marketing going back to 1900, ten starting in 1925 and so on to 2000. Historical financial records were analyzed to determine relative success rates of messaging strategies through time and compared to public discourse (newspaper Letters to the Editors, OpEd pieces, editorial, various interview formats, social commentary, personal files and indicia of employees at all levels of business, etc.) of those same strategies to determine “social success rates”.

Results:
  • The only successful reactive strategies are those that function like Community Response Grids
    • The response must be to a problem recognized by the brand community
    • The response must be rapid and measured
    • The response must recognize individual and corporate responsibility for the problem if the problem is on the business’ part
    • The response must not recognize individual or group responsibility for the problem if the problem is on the community’s part unless the individual or group self-identifies as an authority in the market space
    • The response must be cooperative in nature
  • Businesses must recognize that there will always be more consumers with voices than there are business-endorsed voices going forward
  • Businesses need to be forthright but not transparent
  • The best strategy is tactical in nature
    • Businesses need to become another voice unless they own the social channel
    • Businesses will be recognized as authoritative partners by first recognizing other voices as having authority (being knowledgeable) regarding their brand

Key TakeAways:
The majority of consumers

  • are more interested in rapid, forthright response than transparency
  • evaluate transparency as a need to monitor businesses for errors, and so internalize it as a non-actionable time requirement in all but a few cases
  • recognize social media as another marketing vector and are becoming socially immune
  • grant more credibility to messages in existing networks over introduced networks
  • will use branded networks as launching pads for their own social efforts


Predictive Opinion Mining

NextStage is looking for funded researchers to collaborate on a study into predicting the opinions of groups.

We are currently developing a set of tools that (we're guessing) will fall into the “opinion mining” or “opinion forecasting” or some such camps. Although we're not sure where it will land we are confident we can do it. Think of it as drilling a few exploratory mines in current public opinion resources (blogs, tweets, timelines, etc) and being able to determine where and when you'll strike future opinion gold.

We make use of elements of an existing tool, NSPE – NextStage Predictive Echo (scans web server logs and previous web pages to determine how visitors were thinking, determines how much of your audience was getting your message historically, then makes suggestions for your next design efforts), some concepts from psycho-acoustics, neurocognitive resource distribution and pieces of the de Broglie-Bohm Theory.

As usual, we blend.

Long Story Short

We've figured out how to isolate the so-called “wisdom of the crowd” on just about any topic in public consciousness/awareness. We then perform various mathematical techniques to extract waning opinion before it's publicly noticeable, waxing opinion ditto, extremist thinking, influencers on the rise/wane, do opinion averaging (influence/common wisdom through time), …

Think of it as walking through Times Square and being awash in all the noise of people's chatter, traffic, hucksters, sirens, ringing cellphones, walking tours and tour buses, and being able to isolate any single voice or collection of voices and listen to it closely, literally finding and hearing the voice in a crowd, except the voice is an opinion, and you get the idea.

We're looking for funded researchers with large amounts of validated online data (past online conversations/discussions about a then future event, the event happened, conversation/discussion continued). Selected researchers/organizations will have first/proprietary access to the completed tool for a mutually agreed to period of time.

Please contact NextStage if you're interested. Thanks.


NextStage Evolution Research Brief – The Basics for Forming Strong, Lasting Social Networks

Basis: This publication documents an ongoing (ten years to date) study of social network lifecycles and what is required for any given social network to thrive.

Background: The number of extant social networks increases along well-defined rules that are dependent on the number of social media channels and the technology required to access any given social network. This translates to a change in the past ten years from a few social media channels with a diversity of internal networks to a diversity of social networks each with their own social media channel.

Whenever there's a proliferation of similar organisms the laws of evolution kick in with an unmatchable ferocity. A few social media channels with a diversity of internal networks demonstrated a user preference for the interface (usability) above the information (content value). A diversity of social networks each with their own social media channel demonstrates cladic growth that in turn is subject to evolutionary methods.

This is demonstrated in both online and offline worlds in how social networks form, grow, die and evolve into new social networks. Note that for the purposes of this study social network “stability” is defined as a creation-evolution cycle, meaning the social network thrives (a YouTube video that receives 1MM hits in two days then fades into oblivion does not constitute a thriving social network). “Healthy” networks are those that grow while maintaining focus and direction. “Vital” information is information required to keep a conversation going.

Objective: To determine if any specific requirements exist for the health of social networks regardless of social media channels (what is required for healthy fish regardless of the pond they're in?).

Method: This research is an outgrowth of NextStage's previous and ongoing social network studies, and is built on the mid-1980s to 1990s cultural anthropology studies performed on such social networks as CompuServe, AOL, GEnie and the like.

Five hundred differentiable areas of interest were identified across automotive, destination, entertainment, food, motorcycle, science and travel meta-networks. Similarities of subject matter (content, focus), contributor (voice, style, tone, knowledge-base, experiential-base, post/comment frequency), structure (interface, posting requirements/mechanics, alerting mechanism) and visitor (income level, education level, geographic location, life experience, age, gender) were isolated and routinely measured to determine social network mechanics.

Results: The greatest factors contributing to the longevity of a social network regardless of social medium are

  1. Three “golden ratios”
    • The ratio of contributors to entire network population must be between 1:100 and 1:30. Social networks with contributor to population ratios in this realm demonstrate a reasonable dialogue is taking place. Fewer indicates unguided conversations, greater indicates a dearth of vital information.
    • The ratio of influencers to entire network population must be greater than 1:3,000. Influencers are required to inject source-recognized vital information to generate discussions among network participants.
    • The ratio of influencers to contributors should be within a few points of 1:100. Greater and there aren't enough “Watsons” to support the “Holmses”, fewer and there are too many “Watsons” (see Another Ommaric Intersection – Holmses&Watsons).
  2. The regular injection of vital information
    • Vital information must be “forward thinking” information. It must recognize a community challenge and offer direction for its solution. It does not need to solve the challenge, only demonstrate a possible solution path. Consensus solutions indicate there's nothing left to talk about and are death to social networks until a new challenge is identified.
    • Injection to general conversation ratios should be within a few points of 1:55. Fewer and the conversation collapses, greater and the conversation becomes confusing.
    • Networks without regular injections of vital information first stagnate and eventually collapse.
      • The collapse speed is related to the size of the network. Larger networks collapse more quickly (relative to their size) than smaller networks due to higher social bonding factors usually present in smaller social networks.
    • Too many or ill-timed vital information injections cause confusion in the general population. This confusion translates to
      • a decrease in the general population.
      • an increase in the level of conversation among the “literati”. Note that this is a demonstration of a stable, evolving network.
  3. The information gradient (dispersion vector) should be directly proportional to the size of the network.

Key TakeAways: Brands (and others) wishing to maintain stable, healthy and growing social networks should focus their efforts on maintaining the necessary mix of

  • influencers, contributors and visitors to ensure necessary conversation ratios
  • general comment to vital information posts/comments to ensure necessary social growth incentive ratios
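The three golden ratios above lend themselves to a quick automated check. The sketch below encodes the brief's thresholds; the function name, the warning strings and the numeric tolerance on the "within a few points of 1:100" rule are our own illustrative choices, not NextStage's:

```python
# Sketch: flag a social network whose ratios fall outside the brief's
# "golden ratios". Thresholds come from the brief; names and the
# "few points" tolerance (0.005) are illustrative assumptions.

def golden_ratio_warnings(population, contributors, influencers):
    """Return warnings for ratios outside the ranges given in the brief."""
    warnings = []
    contributor_ratio = contributors / population
    if contributor_ratio < 1 / 100:      # fewer contributors than 1:100
        warnings.append("contributor ratio below 1:100 (unguided conversations)")
    elif contributor_ratio > 1 / 30:     # more contributors than 1:30
        warnings.append("contributor ratio above 1:30 (dearth of vital information)")
    if influencers / population <= 1 / 3000:
        warnings.append("influencer ratio at or below 1:3000")
    # "Within a few points of 1:100" -- the 0.005 tolerance is assumed.
    if abs(influencers / contributors - 1 / 100) > 0.005:
        warnings.append("influencer-to-contributor ratio far from 1:100")
    return warnings
```

A network of 10,000 members with 300 contributors and 4 influencers passes all three checks; drop to 2 influencers and the 1:3,000 rule trips.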


NextStage Evolution Research Brief – Image v Text Use in Menu Systems

Basis: A one-year study of twelve (12) international websites (none in Asia), M/F 63/37, 17-75 years old, either in college or college educated, middle to upper income class in all countries studied

Objective: To determine if people were more decisive in their navigation when an image or text was used as a primary navigation motif (menu).

Method: Four separate functions were evaluated

  1. Presentation Format Preference (a simple A/B test)
  2. Sensory to Δt Mapping (time-to-target study)
  3. Teleology (how long did they remain active after acting)
  4. Time Normalization (determines what brain functions are active during navigation)

Results: Key take-aways for this research include

  • Visual (graphic or image)-based menus cause a 40.5% increase in immediate clickthrough; site activity is sustained an additional 32%, site penetration deepens by an additional 2.48 pages, and capture/closure/conversion increases 36%.
  • Although not tested with Asian audiences, it is doubtful this technique will work with ideographic language cultures
  • The graphics/images used must be clear, distinct and obvious iconographic metaphors for the items/concepts they open/link to. Example: images of a WalMart storefront, a price tag with the words “Best Price” and people shopping drove visitors into buying behaviors more effectively than a simple shopping cart (too familiar as a “What have I already selected?” image) or the simple words “Store” and “Shop”.
  • Existing sites with text-based menu systems need to use both systems (at the obvious loss of screen real-estate) to train existing visitors on the new iconography until image-based menu items are used more often than text-based menu items.

NextStage Evolution Research Brief – EU Audiences Adapt to and Integrate Site Redesigns Faster than US, GB and Oz Audiences

Basis: This publication concludes a two-year study of visitor adaptation to and adoption of new technologies and site redesigns on similar product or purpose sites in the US, EU, GB and Australia. No Asian, South American or African sites were part of this study.

Objective: To determine if neuro-cognitive information biases exist in certain cultures and if so, is there benefit or detriment to those biases?

Method: Twenty sites (monthly visitor populations between 10-35k) were monitored in the USA, Italy, France, Germany, Great Britain and Australia. The sites included social platforms, ecommerce, news-aggregator, travel-destination and research postings. Activity levels were monitored before, during and after design changes were instituted, as well as before, during and after new technologies (podcasts, vcasts, YouTube feeds, social tools) were placed on the sites.

In addition to activity levels a study was made of viral propagation vectors to determine if changes to the site promoted new influencers or demoted existing influencers.

Results:
  • Announced changes to the sites increased adoption and adaptation rates among all visitors (in some cases by as much as 65%)
    • Announced changes most greatly benefitted US, GB and Australian audiences with adaptation and adoption rates increasing 12.5% on average.
  • Site previews increased adoption and adaptation rates among all visitors
    • 77% of EU based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 35% of US based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 32.5% of Australian based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
    • 27.5% of GB based visitors who chose to preview site changes became influencers regardless of previous social standing on site.
  • EU audiences demonstrated the highest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 92.5% and 85% respectively.
  • Australian audiences demonstrated the lowest rates of adaptation to and adoption of new technologies and site redesigns in all categories at 30% and 7.5% respectively.

Key take-aways for this research include

  • Travel destination sites should provide a good deal of lead up time to site changes.
    • This lead up time should include previews and announcements.
    • This is especially true for US audiences.
  • Sites introducing social tools should select, train and promote influencers from within the existing visitor community before the social tools are made public.
  • The introduction of social tools to news-aggregator sites recognizably slowed the adaptation and adoption rates of EU audiences.
  • US based audiences were most likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the least likely to abandon sites due to those changes.
  • Australian audiences were the least likely to contact site admins, web admins, managers, etc., criticizing site redesigns and new technology implementations although they were the most likely to abandon a site due to those changes.
  • EU based audiences were the most likely to visit several sites all serving the same purpose.
  • EU based audiences were the most likely to give a site “time to settle” during redesign and new technology implementation before returning to it on a regular basis.

A Note About Research Methods (with implications for any kind of analytics)

NextStage will be posting some of its research here (as noted in NextStage Evolution Research Brief – The Importance of Brand as it Relates to Product v Feature Diversity and MarketShare). We normally apply our research methodology — one familiar to anyone doing psych, social, anthro or language research — to any engagement.

One thing we're repeatedly told is that our problem solving methods are unique and very different from what everyone else does so we decided to offer our methodology's high level form here.

The methodology is simple, adaptable and expandable. What is offered here is a core that can be used in any discipline with little modification.

  1. Background Study
  2. When presented with a challenge or a question to be answered or investigated, learn as much as possible about everything that's been done before, regardless of how seemingly irrelevant to the task at hand. Be thorough, be detailed. Find out what's failed and why. Find out what got close to solving the problem or answering the question and why it didn't go all the way.

    Learn this, study the background (even if it's obvious. And if it's that obvious to you, have someone not familiar with this particular paradigm do the study. You're missing something if you think the background is obvious), study the personalities, the models, the methods, the politics, everything.

    If you're not willing or don't have the time, don't take on the research, project or task.

  3. Necessary Data
  4. This one, we'll admit, causes people the most concern. Many people attempt to solve problems with either available data or easily obtainable data.

    Stop. Go no further until you honestly answer this question:

    What data — existing or not, obtainable or not — best solves the problem, answers the question or furthers the research?

    Talking with researchers and analysts worldwide, we find the above question is the greatest stumbling block. People defer to what data is currently on hand, previously obtained data made public by other researchers, data that current methods make easily obtainable or collectable, etc.

    However, “ease of collection” or “prevalence of availability” should not be equated with “solves the problem”, “answers the question” or “furthers the research”. Agreed, it would be great if the exact data that would do all three was there for the taking and yes, solution vendors make wonderful cases for their data collection methods.

    The first challenge to solving any problem, answering any question, etc., is to determine what kind and how much data is necessary to provide a solution or answer. Find out how to measure what you really need to measure to solve what you really need to solve and you're 90% of the way to answering the question, solving the problem or furthering the research (there are two corollaries to this and they go into the third step in this research model).

    I've seen research come to a halt until the investigators could determine what they really needed to measure to answer what they really needed to answer. Take your time at this stage. I've heard "We have money to do it over but not enough to do it right the first time" and, while I know several businesses accept that concept, and while I agree with Jeff Bezos' "Anything worth doing is worth doing poorly", I don't believe these two statements are congruous at all.

  5. Equals Must Be Equals
  6. The number of times projects fail, results are erroneous or research flounders due to people forgetting the simple rule that “1=1” is staggering. Using an online analytics term, “clicks” here must mean the same thing and be measured the same way as “clicks” there.

    The first corollary is

    Make sure you're measuring what really needs to be measured.

    The second is

    Units must be the same — and have the same meaning — on both sides of the equation.

    The “1=1” requirement most often fails because people mix categorical, rank and metrical analysis techniques, measures and methods as if one were identical to the others, when they are different.

    And if you're not sure of the differences, I'm sorry, you should not be doing research. NextStage is often called in to help businesses make sense of some research they performed or contracted with another group, and more often than not the solution comes from clearing up categorical, rank and metrical overlaps. Categorical, Rank and Metric are basic measurement concepts. I explain them briefly in The Social Conversion Differences Between Facebook, LinkedIn and Twitter – Providence eMarketing Con 13 Nov 2011. Learn them and learn them well.

    To that end, the market is contributing to poor research models and measurement methodologies. The number of solution providers promoting self-serving “equations” as solving industry problems that

    1. change critical term and KPI definitions midstream,
    2. measure irrelevant (at worst) or very loosely (at best) correlated elements and profess a one-to-one correspondence between measurement and claim,
    3. include data having no direct relevance to the problem yet are included because they're easily obtained, and
    4. make up their own KPIs and claim relevance

    is mind-numbing.

    Companies can find vendors whose definitions make the companies' failures look good and aren't we all a little tired of naked emperors?
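The "1=1" rule can be enforced mechanically. The sketch below (our own illustration, not a NextStage tool) tags each measurement with its scale type and refuses to combine measurements whose names or scales differ:

```python
# Sketch: enforce "1=1" by refusing to combine measurements whose
# names or scale types (categorical, rank, metric) differ.
# The Measure class and combine() are illustrative, not an existing API.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str    # e.g. "clicks"
    scale: str   # "categorical", "rank" or "metric"
    value: float

def combine(a, b):
    """Add two measures only when '1=1' holds: same name, same scale."""
    if a.name != b.name or a.scale != b.scale:
        raise ValueError(
            f"cannot combine {a.name} ({a.scale}) with {b.name} ({b.scale})"
        )
    return Measure(a.name, a.scale, a.value + b.value)
```

Two metric-scaled "clicks" measures add cleanly; a metric "clicks" plus a rank-scaled "clicks" raises an error instead of silently producing a meaningless number.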

Research Quotes

People who know me or NextStage know we love quotes. Here are some our researchers keep on their walls for easy reference:

  • We are continually faced by great opportunities brilliantly disguised as insoluble problems. – Lee Iacocca
  • Simple solutions to complex problems are often wrong. – Jeanne Ryer
  • The cause of a problem is the system that produced it. – Tom Bigda-Peyton
  • Judgement consists not only of applying evidence and rationality to decisions, but also the ability to recognize when they are insufficient for the problem at hand. – Tom Davenport
  • Should you encounter a problem along your way, change your direction, not your destination.
  • For every complex problem there is an answer that is clear, simple, and wrong. – H L Mencken

NextStage Evolution Research Brief – The Importance of Brand as it Relates to Product v Feature Diversity and MarketShare

NextStage routinely makes its research available to Members. Research that's been published in the Members area for more than a year will be moved here, to The Analytics Ecology, as time and tide allow.


Basis: This publication reports on an ongoing (sixteen years to date) study of market fluctuations due to decreases in product/service versus feature diversity resulting in increased marketshare.


Background: Markets arise under two conditions:

  • Products or services are developed that meet a specific need within a given population
  • Existing products or services are redefined or modified to meet the expectations within a given population

Emerging markets move from need-based to expectation-based in direct relation to the spread of product/service information within the given population. Note that this replaces the “adopter” model with a social contagion model — markets increase proportionally to the information level within a market. Early adopters are individuals who require minimal social information about a product/service; late adopters are those who require maximal social information in order to become market members.
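The social contagion view of market growth can be sketched as a simple simulation in which each step's new adopters are proportional to both the informed and not-yet-informed shares of the population. The brief specifies no equation, so this logistic-style update, and all the names in it, are only one plausible reading of the model, not NextStage's:

```python
# Sketch of a social contagion market model: adoption grows in proportion
# to the information level already present in the population. This
# logistic-style update is an assumed reading of the brief, not its model.

def contagion_adoption(population, seed_adopters, rate, steps):
    """Return adopter counts over time under a simple contagion update."""
    adopters = float(seed_adopters)
    history = [adopters]
    for _ in range(steps):
        informed_share = adopters / population         # information level
        adopters += rate * informed_share * (population - adopters)
        history.append(adopters)
    return history
```

On this reading, early adopters are simply the first points on the curve, joining while informed_share is still small; late adopters join only once it is large.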

Markets establish themselves when multiple vendors recognize possible revenue sources and expend resources to first enter then maintain marketshare. Traditionally, market establishment followed an organic dispersal model due to minimal channels (information transmission vectors). The past sixteen years have seen an explosion of channels.

The traditional model dictates that the vendor able to saturate a market's chosen channels will claim more marketshare. However, channels are proliferating with the end result that vendors must create their own channels to insure controlled information dispersion.

The social contagion model dictates that an uncontrollable information exchange be met with decreased marketshare and decreased product/service diversity while proliferating features and brands to meet consumers at different social contagion levels within the market population.


Objective: To determine whether branding concepts, product/service diversity or feature diversity is more adept at establishing marketshare in socially engaged markets.


Method: Eleven markets (agri, auto, construction, home electronics, personal apparel, personal communications, pharma, real estate, recreation, sports, travel) were observed from Jan 1995 to Jan 2011. Analysis was done on vendors in those markets, messaging, market reach, marketshare, channeling, brand imaging and management shifts.

Results:
  • Brand allure continues to play a role in marketshare
  • However brand allure is rapidly giving way to feature diversity (the brand that supports the largest feature set wins)
  • Feature diversity is becoming the new standard for opening markets and increasing marketshare, especially when features are tailored to a given market
  • Feature diversity benefits are increasingly communicated socially rather than through “traditional” channels
  • Product/service diversity benefits are decreasingly communicated socially although they maintain their place in “traditional” channels

Key TakeAways:

  • Brands able to demonstrate the greatest feature diversity within a market will maintain the greatest share of that market moving forward
  • Emerging markets will best be captured/maintained by products/services that are app-enhanceable rather than those coming with a diversity of built-in features
  • There will be an increasing move to “app platform” devices as feature diversity moves from “what x can do out of the box” to “tailoring x to do what you want”
  • This app platform move will be the vector of future market segmentation

Joseph Carrabis named Senior Research Fellow at the University of Southern California (USC)'s Annenberg Center for the Digital Future, Thursday, 28 Apr 2011

I am honored to be recognized by Annenberg's Center for the Digital Future.

Annenberg's Center for the Digital Future will have direct access to NextStage's research and findings for use with their greater, international audience. Annenberg's recent findings closely parallel NextStage's research findings through the years.

Learn about the Annenberg Center's history and work

My thanks to Jeff Cole, PhD, Center Director, and Brad Berens, PhD, for making this happen.