Archive | Social Media Measurement

Three Fundamentals of Great Social Media Measurement

20 Feb

If you want to evaluate the robustness and effectiveness of your approach to social media measurement, ask yourself these three fundamental questions:

  • Does the approach measure the ‘right’ things in order to show the business impact of the programs and initiatives? 
  • Will stakeholders of the report receive the data and actionable insights required to make strategic decisions?
  • Are the data and insights presented in a clear and concise manner that tells a story and makes it easy to understand and act upon?

Measuring the ‘Right’ Things

Social media metrics are derived from three primary sources:

  • Metrics tied to the objectives of the social program itself
  • Channel-specific metrics native to the social platforms (e.g. Likes, Followers)
  • Metrics tied to business outcomes

Ideally, a robust social media measurement program will have a rich metrics set that contains metrics from all three areas.

Metrics tied to program objectives allow for direct measurement of program success. Fundamentally, measurement is about assessing performance against objectives. It is surprising how often social program objectives are slanted toward channel-specific metrics (e.g. Likes or Followers) and not the specific outcomes desired for the program – what you hope to accomplish by implementing it. Relying too heavily on channel metrics also limits you to what you can measure rather than what you should measure.

Business outcome metrics are used to connect the dots between social media programs and the business results they are designed to drive. Social programs that cannot answer, or at least address, the management question, “How is this impacting my business?”, are more susceptible to resource allocation scrutiny (#pleasecutmybudget). Stated another way, if management asks how we’re doing in social media and we reply, “great, post virality is up 6.1% this month”, we make it difficult for that individual to understand how social media/business initiatives are helping move the business forward.

Getting to Data and Insights that Inform Strategic Decisions

Expectations for social media measurement and analysis have risen. In addition to sound analysis and reporting of performance against key metrics and KPIs, understanding audience dynamics and developing actionable insights are rapidly becoming de rigueur. Insight may be defined as the synthesis and interpretation of data to provide actionable information and knowledge that informs strategic decisions. Too many social media measurement programs take a social-centric rather than a business-centric approach to insights. They often feature insights and recommendations that are tactical in nature – the best time of day to tweet, how many times to tweet, or what type of content seems to be most successful. Ideally, insights and recommendations in social measurement reports would operate one level above this, informing strategic decisions about how social programs and conversations are impacting, or could impact, the business. Doing so requires an understanding of the business function (e.g. marketing, customer service) impacted by the social program and an ability to ask the right questions prior to starting a social media analysis.

For example, let’s say Company X plans to introduce a new video game. A social listening program has been implemented to analyze the early consumer reaction to the game. Based on the listening analysis, changes to the packaging, marketing or even the product itself are possible. If you are in charge of the marketing campaign for the game, what are the types of social media insights you need to make decisions about the game and the marketing campaign?

  • What is the level of buzz about the game?  What is the overall sentiment? How does this compare to previous game launches?
  • What are people talking about in social media – availability, cost, specific features of the game, packaging, marketing campaign?
  • What features of the game do consumers seem to like most?  Least? Specifically, what do they like or dislike?
  • What are the most influential gaming enthusiasts saying about the product?
  • Who are the promoters and detractors? What is the ratio of promoters to detractors? How does this compare to promoters and detractors from previous game launches?
  • How much social media conversation contains recommendations or expresses purchase intent?  How does this compare to previous launches?

Answering these types of questions yields actionable insights that provide context and can inform strategic marketing decisions.
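As a rough sketch of how a few of these launch comparisons might be quantified from listening data, here is a minimal example. The mention counts and promoter/detractor tallies are entirely hypothetical, as is the shape of the data; a real listening platform would supply these figures:

```python
# Hypothetical mention data exported from a social listening platform.
launches = {
    "previous_launch": {"mentions": 12500, "promoters": 900, "detractors": 600},
    "new_launch":      {"mentions": 18200, "promoters": 1500, "detractors": 500},
}

def launch_summary(name, data, baseline=None):
    """Summarize buzz and the promoter-to-detractor balance for one launch,
    optionally comparing buzz against a previous launch."""
    ratio = data["promoters"] / data["detractors"]
    summary = {"launch": name, "promoter_detractor_ratio": round(ratio, 2)}
    if baseline:
        summary["buzz_change_pct"] = round(
            100 * (data["mentions"] - baseline["mentions"]) / baseline["mentions"], 1
        )
    return summary

print(launch_summary("new_launch", launches["new_launch"], launches["previous_launch"]))
```

The point of the sketch is the comparison itself: a 3:1 promoter ratio or a buzz change figure only becomes an insight when set against previous launches.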

Presenting Results

Dashboards have gotten a bit of a bad rap – not because dashboards are not useful, but because some have used them as THE measurement report rather than just one aspect of a good report. I’m a dashboard proponent for a few reasons:

  • Deciding which metrics to feature on a dashboard is a good strategic exercise, requiring you to focus on the most important and relevant metrics for the intended audience.
  • Online, dynamic dashboards are an effective user interface that can serve as a launching-off point for drilling into data to understand the underlying story.
  • Good dashboards present a snapshot of overall performance that is easily absorbed and understood.

A dashboard-driven social media measurement report is versatile and effective in many situations. A typical report might consist of one or more dashboards and then a deeper dive on each of the key metrics featured on the dashboards, along with audience insights, strategic insights and recommendations. This format provides a quick snapshot (dashboard) of results, ideal for those stakeholders interested only in topline data, and provides sufficient depth to satisfy those more interested in the underlying drivers of the metrics.

Social media measurement programs that are built around metrics tied to business outcomes and show how programs are performing against objectives are important. Reports that deliver clear insights that inform strategic decisions are important. And delivering those reports in a compelling format that enhances usability and effectiveness is important. How do your programs stack up?

Bringing Some Clarity to Social Media Influence

10 Dec

The emphasis on influencer marketing in social media has reached a fever pitch in 2011, and with it has come a flood of tools and opinions on how to navigate the influence waters. This is interesting in that one of the most powerful aspects of social media marketing is the ability to establish connections and relationships directly with prospects and customers without having to go through an intermediary to communicate. But we’ll leave that to the social strategists to reconcile and justify. Influencer marketing is hardly a new strategy. Through the years, much work in traditional public relations has utilized influencer targeting (e.g. market analysts, financial analysts, KOLs, other customers) to help amplify and endorse a brand or a company’s products and services. So why is there so much discussion and confusion about influence in social media? Let’s explore.

Influence Basics

A definition I like for influence is: effecting change in another person’s attitudes, opinions, beliefs and/or behavior. I believe the most overlooked word in this definition is change. Without change, influence has not truly occurred. One challenge here is that influence can happen without any resulting short-term observable action. Influence takes hold primarily between the ears, not necessarily with hand on mouse or wallet. This creates fundamental challenges when trying to measure the degree to which influence has occurred.

Another challenge we face is that influence is contextual, not absolute. People who influence others do so primarily in areas where they have specific expertise or authority. It is common to be influential in one area but have little or no influence in others. One of the main issues with current influence tools is that they do a relatively poor job of establishing contextual relevance.

The distinction between creating influence within a target audience and determining who or what has influence over that audience tends to get muddled. To clarify, determining who has the potential to influence the target audience (the influencers) is a targeting question. Whether we have created influence (changed attitudes, opinions, beliefs and/or behavior) is a measurement question.

Influence is purposeful. In real life or digital life, when we set out to change the opinions, attitudes, beliefs or behavior of another person or group, we do so with a downstream motivation – for them to take a specific action. The list of possible actions is lengthy – buy a product, visit a website, tell a friend, vote, wave a sign and donate, to name a few. Of course, not all desired actions are equal in terms of the amount of influence required for change. An opinion might be easier to change than an attitude. An attitude is easier to change than a belief. Behavioral change can range from relatively easy to nearly impossible depending on the specific behavior. In marketing, the ultimate behavior or action we try to influence is purchase behavior. It is important to think through the specific actions you hope the target will take as a result of being influenced. This is also the sweet spot for influence measurement.

While creating an action/behavior change is the ultimate reason for influencing someone, it is helpful to think of the process of influence as two stages – opinion, attitude or belief change – and then, because of this change, whether an action occurred or a behavior was changed. Stated another way, the opinion change is an intermediate or micro outcome and the desired action is a final or macro outcome. Depending on the type of purchase decision there may be a time lag between the micro and macro outcomes that makes it difficult to connect the dots. In his book The Business of Influence, Philip Sheldrake presents a concept called the “Maturity of Influence Approach”. Basically it melds two important concepts to use when thinking about influence measurement – focus on the influence, not the influencer (Philip refers to this as “influence-centric”), and start at the macro outcome/action and trace the path of influence back to the source(s) of influence. One simple example of this in a B2B context would be to ask the prospect at the time they are ready to make a purchase, “what sources of information or opinion were most valuable to you in making your decision to buy our product?” A similar question or two can be asked using a pop-up survey in an ecommerce situation.
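The trace-back from the macro outcome can be sketched as a simple tally of purchase-time survey answers. All of the responses and source names below are invented for illustration:

```python
from collections import Counter

# Hypothetical answers to: "What sources of information or opinion were most
# valuable to you in making your decision to buy our product?"
survey_responses = [
    ["analyst report", "colleague recommendation"],
    ["vendor webinar", "colleague recommendation"],
    ["colleague recommendation"],
    ["analyst report", "trade press review"],
]

# Tally how often each source is credited at the point of purchase, tracing
# influence back from the macro outcome (the sale) to its likely sources.
source_counts = Counter(src for response in survey_responses for src in response)

for source, count in source_counts.most_common():
    print(f"{source}: {count}")
```

Even this crude tally is influence-centric in Sheldrake's sense: it starts from people who actually converted and works backward, rather than starting from a list of presumed influencers.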

Influence and Engagement Confusion

A primary source of influence confusion is failing to distinguish between a simple act of engagement and the process of being influenced. If someone in my Twitter stream sends out a tweet and I retweet it, have they influenced me to retweet or have I simply engaged with that individual’s content? Many who have written about social media influence have suggested that in RTing the tweet, I have been influenced to do so. I do not believe that is the case. I have engaged with the content, but have there been any true changes in my attitudes, opinions, beliefs or behavior? Again, the operative word here is change. Does the act of RTing constitute a behavioral change? Probably not. Engagement – yes, influence – no.

Engagement is a necessary pre-condition to influence. (This social media measurement model addresses the distinction.) Without engagement you don’t have the opportunity to influence. Influence, however, only occurs if that engagement leads to a change in attitudes, opinions, beliefs or behavior.

Influence, Popularity and Celebrity Confusion

There also seems to be some confusion about the difference between influence, popularity and celebrity. Although related, and in some cases overlapping, they are three distinct concepts. In my opinion, at least some of the confusion stems from Klout and other influencer tools that seem to measure popularity but call it influence. So what is the difference?

Popularity is the state of being popular – widely admired, accepted or sought after.

Celebrity is the state of being famous – renown and fame.

If popularity is about being well-liked and celebrity is about being well-known, influence is more about being well-respected, with associations like knowledge, persuasion and trust. Some of the confusion lies in the fact that some celebrities do have influence over the types of behaviors that make the cash register ring. Oprah comes to mind. Other celebrities, while very popular, don’t really have the ability to create meaningful influence. They can get content re-tweeted (WINNING!) but do they have any influence over the types of actions brands really value?

Keeping Online Influence in Perspective  

As we discuss the intricacies of digital influence we should also keep in mind that the majority of influence occurs in the analog world. I’ve seen estimates ranging from 70–90% of influence occurring via offline WOM. It’s personal. It’s about real family and friends and not Twitter friends. Influence is about a relatively small number of people (Dunbar’s Number suggests humans have a finite cognitive capacity for around 150 social relationships), and not mass influence. The fact that most influence happens offline presents another significant measurement challenge.

In summary, I’ll leave you with a few sound bites on social media influence:

  • Influence is about change
  • Engagement precedes influence, but does not guarantee it
  • One can be popular but not influential
  • Measure the influence not the influencer
  • Don’t forget offline when measuring online influence.

Thanks for reading.  See it a different way?

Measurement 2020 and Other Fantasies

23 Sep

At the 3rd European Summit on Measurement held in Lisbon in June 2011, standardization, education, ROI and measurement ubiquity emerged as the key themes in response to a call to set the Measurement Agenda 2020.  Delegates to the conference voted on 12 priorities they thought were most important to focus on in the period leading up to 2020.  The top four vote-getters became the Measurement Agenda 2020:

  1. How to measure the return on investment of public relations (89%)
  2. Create and adopt global standards for social media measurement (83%)
  3. Measurement of PR campaigns and programs needs to become an intrinsic part of the PR toolkit (73%)
  4. Institute a client education program such that clients insist on measurement of outputs, outcomes and business results from PR programs (61%)

For a very nice overview of the Lisbon session and the Barcelona Principles that came before, read this post from Dr. David Rockland of Ketchum, who chaired the Barcelona and Lisbon sessions.  David pretty much said it all on these sessions, so I’ll just add a couple of comments and share a few thoughts on what I believe the future of Measurement 2020 could be.

The rallying cry coming out of Barcelona has been focused and loud – death to AVEs!  Will there be a similar thematic coming out of Lisbon and what might it be?  My money is on standardization, borne out of cross-industry cooperation.  As David points out in his post, and in the words of AMEC Chairman Mike Daniels, “The Summit identified some significant challenges for the PR profession to address by 2020.  However, what we also accomplished in Lisbon beyond setting the priorities was to harness the commitment and energy of the industry to agree what we need to do together.”  The current cooperation and collaboration between industry groups – AMEC, Institute for Public Relations, PRSA and the Council of PR Firms is unprecedented in my time in this industry and is focused on tangible outcomes.  Cross-organization committees are already at work developing standard metrics for social media measurement for example.  The spirit of cooperation is uplifting.  While the outward thematic appears to be standardization, cooperation is the enabling force.  

I was also struck by the symmetry of the call to end AVEs in Barcelona and the call to codify ways to measure ROI in Lisbon.  One follows the other.  In my opinion the primary reason AVEs exist is because PR practitioners feel pressure to prove the value of what they do, and quite often they are asked to describe the impact in financial terms.  AVEs are perceived as a path of least resistance way to express financial value.  Except, as we all know, AVEs don’t really have anything to do with the impact public relations creates.  They are a misguided proxy for financial value.  Hence the need for research-based methods to determine true return on investment.

All of the priorities coming out of Lisbon are excellent goals for the industry.  And like David Rockland, I believe they will be achieved, and be achieved before 2020.  Here are three other items on my wish list for Measurement 2020:

Word of Mouth/Word of Mouse Integration: For those of us focused on social media and other digital technologies, we can’t allow our digital lens to color what is fundamentally an analog world.  Research studies suggest the majority of word of mouth happens in real life.  From an influence perspective, I don’t think too many would argue that word of mouth from a trusted friend or family member is more powerful than word of mouse from someone you follow on Twitter.  Digital cross-platform research is difficult enough, but when one huge platform is ‘real life’, we have significant challenges in measurement.  WOMMA and others have made early attempts to define measurement approaches for offline WOM, but much work remains.  We need ways to assess its impact and then we need to think about ways to attribute value to that impact.  Mobile is a wild card here as it becomes the preferred platform for online activity.  The need to triangulate online, mobile and ‘real life’ measurement presents significant challenges today, and may still present them in 2020.

Cookie Wars: We all know the measurement versus privacy showdown is coming, right?  The first shots have already been fired.  The collection of source-level personal data, enabled by cookies, is crucial to measurement and insights but has the potential for misuse or unintended disclosure.  Some sophisticated consumers have had their fill of cookies.  Although the broader issue might be framed as social sharing versus privacy control, how it plays out will have a direct impact on digital analytics and measurement.

Integrated Measurement across the Paid Earned Shared Owned (PESO) Spectrum: Measurement has increasingly become integrated.  It began with integrated traditional (Earned) and social media (Shared) measurement and then progressed rapidly to Earned, Owned and Shared, which is where most integrated measurement programs are today.  Many leading-edge integrated programs today also include advertising or Paid media.  By 2020, integrated measurement across the PESO spectrum will most likely be the norm and not the exception.  A key enabling element here in my view is some base level of agreement on how each area should be measured and standard metrics for each.  It will take significant cooperation between industry groups, vendors, agencies and major customers/clients for cross-discipline standardization to move forward effectively.  We are at the beginning of this movement in 2011.  By 2020, it will be fascinating to look back and see how all this plays out.

When looking ahead to 2020, I am reminded of a measurement discussion pulled together by PRWeek a couple of years ago.  Many of the Measurati attended.  In response to a question of where measurement will be in five years, David Rockland replied (paraphrasing here), ‘Who knows?  Five years ago who would have guessed we would all be focused on how to measure social media?’  So, there is a certain fantasy element to discussing 2020 challenges in measurement.  What are your measurement fantasies?

Selecting the Right Social Media Listening Platform is a Process Not an Event

17 Sep

It is not difficult to find a social media listening platform – there are over 100 to choose from.  What is difficult is finding the right tool.  It takes a keen understanding of scope and requirements.  It takes an evaluation and selection process that will surface the best platform to fully meet your requirements.  And it takes a well thought-out process for deploying the platform across the organization in an effective and efficient manner.  There are many questions to be asked and answers to be given.  Asking the right questions at the right time is crucial.

It is helpful to think of the overall listening platform selection process in three phases:

  1. Plan – Define requirements, stakeholders, scope
  2. Select – Create a platform evaluation process tailored to your unique requirements
  3. Deploy – Roll out the selected platform across the organization, with training, workflow and other important issues addressed.

To read the rest of this post and to download the free eBook, Social Media Listening Platforms: How to select and deploy the right social media listening tools for your company, please click here.

AVEs Don’t Describe the Value of Media Coverage, They Sensationalize It.

26 Jun

On Saturday, Wall Street Journal columnist Carl Bialik, The Numbers Guy, addressed the subject of advertising value equivalency (AVE).  This is perhaps the first example of a mainstream media publication shining a light on the controversial practice of AVEs.  (You can read the story here.)

The primary reason advertising value equivalents exist is that they are perceived to be a way to attribute value to programs that would otherwise be difficult to value directly.  They are a path-of-least-resistance approach to return on investment calculations, but not a valid one.  Let’s take a deeper dive into the three specific examples in the WSJ story, ask the tough questions and discuss more valid ways to think about value attribution and ROI.

American Airlines  

You can enjoy both questionable valuation techniques and hyperbole in this article.  American Airlines stands to “make boatloads of cash” and “the airline company could gain as much as $95.9 million of exposure”.  Oh really?  Let’s take a closer look.

The most incredible part of this financial calculation is the financial calculation itself.  The calculation is apparently based on sign placement within the arena and presumably the ‘impressions’ the brand will receive when people attending the venue see the signage and when TV cameras catch the signs when showing the scoreboard or during the action.  This is a very passive form of advertising that should have as its objective either creating top of mind awareness or perhaps creating more brand affinity.  Rather than using an advertising equivalency model that has no validity, a true measurement of the value created by naming rights would ask a series of questions designed to determine the actual, tangible (or even intangible) impact on the business:

  • Revenue: Can incremental revenue generation in the form of higher passenger miles be directly attributed to the exposure created by the naming rights?  Is it possible that incremental revenue would actually be realized on a game by game basis, or would any positive impact be realized over a longer time horizon?  Have new customers been created as a direct result of the exposure generated by the naming rights?
  • Brand: Can the increased exposure lead to people perceiving the brand differently and can the difference translate into higher transactional revenues generated or increased brand loyalty?

So where exactly are the ‘boatloads of cash’ American Airlines has made?  Are they hitting the income statement in the form of incremental revenue or enhanced brand loyalty (repeat business)?  Are they residing on the balance sheet in terms of brand goodwill?  Given that American’s parent company AMR lost $11.5B in the first decade of the 21st century, that its last profitable year was 2007, and that it is projected to lose money in 2011 and 2012, the airline could use the cash.  Perhaps it could use it to fund a ‘bags fly free’ program or to enhance its AAdvantage program to create more brand loyalty.  I strongly suspect American’s shareholders would prefer a do-over on the investment in naming rights to the ‘boatloads of cash’ they are now enjoying from it.

Couple Won’t Cash In on Kiss

Fifteen minutes of fame is rarely worth $10 million.  In this case, the celebrity agent is suggesting the news value of the coverage generated by the kiss is somehow equivalent to advertising value and assigns what appears to be an arbitrary and ridiculously high value to it.  (He later admits he just made the number up.)  Just how was the couple going to monetize their 15 minutes of fame?  Yes, they turned down a few talk show opportunities and perhaps the National Enquirer would have thrown a few dollars their way for an exclusive, but the assertion that any major brand would have paid them to endorse their product is wildly speculative.  I would guess that if you did a survey after the event, a small number of people would remember seeing the coverage, and a very small percentage of the people who did see it would have recalled Scott Jones’ name.  So perhaps Mr. Jones walked away from tens of thousands of potential dollars in the short-term, but nowhere near the sensationalized estimate of $10 million.  Fifteen minutes of fame might be worth ten thousand dollars, but certainly not $10 million.

Obama Enjoys a Guinness

So Guinness is a winner and received $20 million worth of “free publicity”?  What was the outcome of the publicity?  Again, in order to determine the value of the “free publicity” (this term is despised in the PR industry by the way), Guinness would have to be able to measure incremental revenues directly attributable to the publicity generated.  Did sales of Guinness increase as a result?  Were new customers created?  Did existing customers feel compelled to drink even more?  What was the value of the incremental sales?  These are much more difficult questions to answer but are the correct ones to ask in order to measure the publicity.  The value lies not in the mythical worth of the coverage as measured by flawed advertising equivalency, but in the outcome – what happened as a result of the publicity.  The assertion that President Obama’s image was softened and will help keep him in the public’s favor is highly dubious thinking.  Perhaps it helps him in Boston, but in the grand scheme of things, this is a Presidential image non-event.

Beginning last summer in Barcelona, the public relations industry has come together to publicly state that advertising value equivalency is not a valid measure of public relations.  The so-called Barcelona Principles explicitly reject AVEs and also call for a focus on measuring outcomes and not (just) outputs.  While it will take some time for the PR industry to totally leave AVEs behind, there is a lot of momentum right now to make this happen sooner rather than later.  No serious measurement effort can use advertising value equivalency to attribute value and be credible.

Social Media Listening Platforms – Plan, Select, Deploy (Part Three – Deploy)

17 Jun

In Part Two of this series on social media listening platforms we offered a process for selecting a social media listening platform vendor.  Now it’s time to deploy the tool across your organization effectively and with minimal disruption.  And put the tool to work.

Configuration – We talked about value-added services in the first post in this series.  One of the services offered by many listening platform vendors is configuration.  You’ll have to decide if you want to have the vendor perform system configuration or do it yourself.  In some cases you have no choice – you submit keywords, topics and themes to the vendor and the system is programmed for you.  In other cases some basic configuration must be done by the platform vendor but the bulk of the configuration can be a DIY project.

Keywords and Topics – In part one of this series, we discussed the need to think through the keywords required to bring all relevant content into your platform.  The keywords might be company name, product/brand names, competitors, issues, segment names, executives and spokespersons, and key messages.  During deployment you will need to build a taxonomy around many of the keywords that represent concepts rather than singular ideas or names.  For example, if you have a message that centers on being an innovative company, you will have to decide what expressions in addition to the keyword ‘innovative’ may be classified as innovation – leading-edge, technology leader, R&D leadership, breakthrough products, etc.  You will also have to decide which words and terms to exclude from your analysis.  Both of these processes are iterative – make a change, check content relevancy, adjust, repeat.
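The concept-expansion and exclusion logic described above can be sketched as a small classifier.  The ‘innovation’ expressions are the ones from the text; the exclusion terms are hypothetical, and a real listening platform would implement this with its own query syntax rather than substring matching:

```python
# Hypothetical taxonomy: the 'innovation' concept expands to several expressions.
taxonomy = {
    "innovation": [
        "innovative", "leading-edge", "technology leader",
        "R&D leadership", "breakthrough products",
    ],
}

# Hypothetical exclusion terms to filter out irrelevant content.
exclusions = ["job posting", "for sale"]

def classify(text, taxonomy, exclusions):
    """Return the list of concepts a piece of content matches,
    unless an exclusion term disqualifies it entirely."""
    lowered = text.lower()
    if any(term in lowered for term in exclusions):
        return []
    return [
        concept
        for concept, expressions in taxonomy.items()
        if any(expr.lower() in lowered for expr in expressions)
    ]

print(classify("Their breakthrough products make them a technology leader.",
               taxonomy, exclusions))  # → ['innovation']
```

Iterating on the taxonomy then means adjusting the expression and exclusion lists, re-checking content relevancy, and repeating.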

Integration – There are a few different types of integration you may want to tackle during platform configuration and deployment.  Each of the possible forms of integration will take a little time to accomplish and may require some back and forth between you and the platform vendor and/or vendor to vendor.  I am a big fan of web analytics and social media integration.  With many listening platforms this is relatively straightforward to accomplish.  You may also want to integrate third-party data sources like Factiva, LexisNexis, VMS or Critical Mention.  Assuming the listening platform vendor you selected supports this type of integration, it also is relatively straightforward.  To address latency issues, make sure you specify load times for the content.

Reports and Workflow – Previously, we addressed many of the basic questions around reports and reporting.  In the deployment phase it’s time to make it real.  Design specific templates for each report you need.  Create a mock-up and share with your stakeholders to make sure everyone is on board with the look, feel and utility of the report.   You will want to test the various delivery mechanisms to be employed including all email clients and mobile platforms you believe may be used.  Generally speaking, assume a significant percentage of the audience may look at the report on a mobile device, making this an especially important dynamic to test.  Once you have the report format established, define your workflow process – who pulls data and when, who creates visuals and by when, who compiles and edits the report and by when, and who is responsible for distribution and against what schedule.

Training – The first decision to make with training is whether you want to tackle it yourself or rely on the listening platform vendor to perform the training.  Some vendors have very strong training programs and others not so much.  Some vendors charge for training and some do the bulk of it for free.  You most likely will want to take a train-the-trainer hybrid approach – have a core team of one to three people trained by the platform vendor, and then charge this team with training others within your company or organization.  With respect to timing, make sure to begin training only after everyone has a log-in to the system so they can actually use the system during the training.  I usually refer to this as training with live ammo.  If you don’t do this you’ll find the half-life of training is pretty short – folks forget most of what they have learned very rapidly.  I also find a tell-show-do teaching methodology works very well (my friends at Radian6 approach training this way).  Show some slides that cover the basics, show a video or canned demo that brings it to life, and then have everyone do some hands-on exercises using the platform.  Remember you will need to address initial training needs as well as ongoing needs as new users are brought onto the platform.

Event-specific and Programmatic Planning – Related to keyword analysis and taxonomy build-out, it may be wise to create keyword groups for programs you know you will be asked to listen to and measure, and for any potential events, like a crisis, that you can anticipate or imagine.  With respect to programmatic listening and measurement, generally a combination of the right keywords and date-ranging will allow you to pull in program-specific content.  If programs are known at the time of configuration and deployment, get ahead of the curve and set up the keyword groups or source filters you may need.

If your company, brand or organization has a social listening program, you are remiss if you don't include specific keywords that may serve as an early-detection system for a potential crisis.  For example, depending on the type of organization and industry, it may be advisable to set up a keyword search like this: Company Name AND (fire OR explosion OR shooting OR recall OR kidnapping OR crash).  Note the parentheses – without the grouping, a left-to-right reading would match any mention of "explosion" whether or not your company is named.
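
The crisis filter described above can be sketched in a few lines.  This is a minimal illustration of the Boolean logic, not any vendor's query syntax; the brand name and sample posts are hypothetical.

```python
# Sketch of the crisis-detection query: Brand AND (fire OR explosion OR ...).
# The grouping matters: a crisis term alone, without the brand, should not match.
CRISIS_TERMS = {"fire", "explosion", "shooting", "recall", "kidnapping", "crash"}
BRAND = "acme"  # hypothetical company name

def is_potential_crisis(post: str) -> bool:
    """True if the post mentions the brand AND at least one crisis term."""
    words = set(post.lower().split())
    return BRAND in words and not words.isdisjoint(CRISIS_TERMS)

posts = [
    "Acme announces record quarterly earnings",        # brand, no crisis term
    "Reports of a fire at the Acme plant this morning",  # brand AND crisis term
    "Huge explosion in the movie trailer",             # crisis term, no brand
]
flagged = [p for p in posts if is_potential_crisis(p)]
# Only the second post is flagged.
```

A real listening platform would of course add stemming, phrase matching and proximity operators, but the AND-of-ORs structure is the part that is easy to get wrong.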

In today’s real-time world, in my opinion, it is no longer optional to have social media listening capabilities.  As a result of this three-part series on listening platforms, I hope you are better equipped to plan, select and deploy your platform effectively.

Thanks for reading.

Social Media Listening Platforms – Plan, Select, Deploy (Part Two – Select)

2 Jun

In Part One, we discussed a range of topics designed to help you plan and define the scope and requirements for selecting and deploying a social media listening platform across your company or organization.  In Part Two, we will use the knowledge and perspective we gained in planning to orchestrate a thorough and effective platform selection process.

Here is a scalable selection process that will help you surface and select the social media listening platform that best meets your unique situation and requirements.

1. Define the individuals who will be involved in the selection process – Inclusion is a powerful card to play here.  It brings different perspectives together, greatly improves the chances for success when it is time to authorize purchase of a platform and get it deployed properly, and increases the likelihood of acceptance and use of the platform across the organization.  Include representatives from the major stakeholder groups identified during the planning process.  You might include someone from your IT department.  You might also include the individuals who must authorize the purchase.  A group of up to ten is most workable; beyond that, you will most likely experience diminishing returns on each incremental person added to the process.

2. Develop a list of selection criteria organized by major category – Based on the planning process we undertook in Part One, develop a list of categories that are most important to learn more about.  Here are ten categories you might consider including:

  • Content Sources/Types & Aggregation Strategy – What types of social content are brought into the system?  How is the content aggregated (e.g. RSS, crawling, third-party aggregators)?  How often is each type of content aggregated?  
  • Data and Search Considerations – How long is content archived, and is back data available?  What data cleansing strategies are in place to address spam, splogs and duplicate content?  Is full Boolean logic available for constructing searches?
  • Metrics and Analytics – What specific metrics are ‘standard’ in the system?  Is automated sentiment analysis offered at the brand or post level?  What audience-level data is available?
  • Data Presentation  – What dashboard features and functionality are offered?  Can dashboards be customized by user or group?  Are drill-down capabilities available for all analytics on the dashboard?    
  • Engagement and Workflow Functionality – Does the platform offer the ability to engage directly with content owners?  Can ‘owned’ content be managed on-platform?  What workflow management and reporting capabilities are offered?
  • Integration – What additional types of data may be integrated in the system – traditional media, web analytics, email, call center, CRM, etc?
  • Reporting Capability – Does the platform have a report function?  Can reports be customized?  Automated?
  • Geographic Scope – What countries and languages are addressed by the system?  Are two-byte languages supported?
  • Cost Structure – What is the cost basis – seat charge, subscription, content volume and/or number of searches?  How does pricing vary with increases in the cost basis?
  • Value-added Services – Does the listening platform vendor offer system configuration services?  Do they perform analysis and reporting?

Within each major category, list the specific criteria most relevant and important to your requirements.  For example, within the Data and Search Considerations category, you might list ten specific criteria that you want to assess for each vendor:

  • How often is Twitter data refreshed?  Can refresh timing be specified?
  • How often is new content from other sources crawled/brought into the system?
  • How long can each content type be archived?
  • Is back data available?  How far back and at what cost?
  • What data cleansing strategies are in place?
  • Can data be easily exported in CSV/Excel format and is bulk data extraction supported?
  • Can users build and customize topics and searches?
  • What types of Boolean operators are supported?
  • Is proximity search supported?
  • Do users have the ability to date-range data for analysis?

3. Develop a scorecard to use in evaluating the potential listening platform vendors/partners – Using the major categories and specific criteria you have defined, develop an overall scorecard to be used in the evaluation process.  Think about creating a weighting system at the category level to help prioritize the importance of each category.  Assign a number of points to each criterion within a given category.  A scorecard might contain ten categories, each containing ten criteria.  Begin by assigning a one-point value to each criterion (100 points total) and then apply weighting at the category level.
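
The point-and-weight scheme above is simple arithmetic, sketched below.  The category names, scores and weights are purely illustrative, not a recommendation.

```python
# Sketch of a weighted scorecard: one point per criterion met,
# category subtotals scaled by a category-level weight.
vendor_scores = {
    "Metrics and Analytics": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # 1 = criterion met
    "Data and Search":       [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],
}
category_weights = {
    "Metrics and Analytics": 1.5,  # weight reflects relative importance
    "Data and Search":       1.0,
}

def weighted_total(scores, weights):
    """Sum each category's points, scaled by its weight."""
    return sum(sum(points) * weights[cat] for cat, points in scores.items())

total = weighted_total(vendor_scores, category_weights)
# 8 points x 1.5 + 8 points x 1.0 = 20.0
```

Computing one total per vendor this way makes the consensus-scoring discussion in step 7 concrete: disagreements surface at the criterion level, not as vague overall impressions.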

4. Develop the initial vendor consideration set – List all the social media platform vendors you wish to consider.  Pick ones you are familiar with and have positive experiences with as a starting point.  Talk to colleagues within, and experts outside, the organization to gain their perspective on the platforms that should be considered.  Read blog posts and reviews of the platforms to gain additional outside perspective.  Visit vendor websites and watch demo videos.  Pull it all together and gain consensus amongst your team on the platforms that will be considered.

5. Do some homework and narrow the list to a manageable number (perhaps five to ten) – If your initial vendor consideration set is too large (more than ten vendors), do some additional homework and narrow your list to a more manageable number.

6. Develop and distribute an RFI based on evaluation criteria – Using the categories and criteria you developed, create a request for information (RFI), asking the listening platform vendors the questions that are most critical to meeting your requirements.  Specify the format (e.g. PowerPoint, Word) you would like responses to take.  Give the vendors about two weeks to respond.

7. Evaluate and score vendor responses – Once the RFI documents are received, each should be reviewed carefully and scored according to the criteria and weighting decided previously.  Depending on the number of vendors being evaluated and the ease of getting the entire evaluation team together, there may be merit in blocking out an afternoon to gather as a group, read through the responses, and decide how each will be scored.  This is a bit of a 'pulling off the band-aid' approach that will save time and allow for spirited discussion and consensus scoring.  If this is impractical for whatever reason in your company or organization, assign one or more RFIs to individuals who will then develop the scorecards.  The scorecards may then be reviewed together in a meeting or conference call, and consensus reached on scoring.  Obviously the potential issue with multiple people independently creating scorecards is consistency.  You want the evaluation to be as fair and consistent as possible given whatever constraints you are working under.

8. Develop a short list of vendors – If your number of vendors under consideration is over five, use the scorecards to reduce the list to three to five platforms that will undergo further evaluation.  These are your finalists.  You should always promptly notify vendors not moving forward in the process, and offer to provide feedback via phone or email on why they were not selected to move forward.  This professionalism will be much appreciated by the vendors, and represents a good learning opportunity for all involved if done well.

9. Deploy test scenarios – At this point we have narrowed the list of contenders and are ready to proceed with some specific tests designed to illuminate the real-world capabilities of the platforms.  Here are three possible test scenarios.  You can use all three for a very rigorous evaluation, or just one or two if that fits your needs better.

  • Test scenario 1: Give each vendor a defined list of search terms (brands, competitors, issues) and the languages/countries you want to evaluate.  You should use search terms that are directly relevant to your company or organization.  Explain what type of analysis you would like performed and ask each to address insight generation.  Give the platform vendors one week to prepare an analysis.  If practical, ask each vendor to present the results in person.  Alternatively, use a web conference to review the results.
  • Test scenario 2: This is a real-time exercise designed to assess vendor data volume by country/language and the signal-to-noise ratio of relevant content.  Get on a web conference with each social media listening platform vendor.  Give them a new list of three search terms and ask that they go into their platform, configure the system for the three search terms and then pull in relevant content for the past 30 days.  Once that is accomplished, ask them to export the data as a CSV or Excel file and email you the results while everyone is still on the line.  A more detailed off-line review of the results should then be undertaken, including translation of languages, to assess relevancy of the results.
  • Test scenario 3: This has been referred to by a colleague as the Dr. Evil test.  In conjunction with test scenario two, it may be interesting to 'plant' known content that matches the search terms on different Twitter channels, Facebook pages and forums in each country that is of interest to you.  When you receive your data export, examine it to determine if the known content was found.

10. Pick a winner – At this point you have the RFIs, scorecards and test results.  You are ready to make your decision.  Convene the evaluation team, discuss the results and make a decision.  With luck, a clear winner will have emerged from the process.  Contact the winner and negotiate terms of a contract.  Don’t notify the non-winners until after a contract is in place, just in case you need to move to your second choice for whatever reason.

In Part Three, we will discuss how to maximize your potential for success when actually deploying the social media listening platform across your organization.

Social Media Listening Platforms – Plan, Select, Deploy (Part One – Plan)

19 May

It is not difficult to find a social media listening platform/tool – there are over 100 to choose from.  What is difficult is to find the right tool.  It takes a keen understanding of your scope and requirements.  It takes an evaluation and selection process that will surface the best platform to fully meet your requirements.  And it takes a well thought-out process for deploying the platform across the organization in an effective and efficient manner.   There are many questions to be asked and answers to be given.  Asking the right questions at the right time is crucial.

It is helpful to think of the overall process in three phases:

Plan – Define requirements, stakeholders, scope

Select – Create a platform evaluation process tailored to your unique requirements

Deploy – Roll out the selected platform across the organization, with training, workflow and other important issues addressed.

This three-part series will tackle each phase one at a time.  First up – Plan.

In many ways, the planning phase is the most important.  Overlook an important detail here and you may or may not be able to overcome it later.  Here are ten topic areas to discuss within your organization to make sure you are setting yourself up for success.

  1. Stakeholders – What are the primary stakeholder groups within my company or organization?  Possible stakeholder groups might include marketing, corporate communications and customer service/care at the macro level.  Depending on the size of your organization, various regions, divisions, groups or product lines may also be distinct stakeholder groups.  Once you have identified the primary stakeholders, set up time to meet with each group.  Understand how they currently use social listening tools and what, from their perspective, are ‘must have’ capabilities versus ‘nice to have’ capabilities in a social listening platform.  Ask each stakeholder group the applicable questions from the list below.
  2. Geographic Scope – What languages and countries are stakeholders interested in including in the platform?  Try to understand the relative priority of each country and language.  Also be sure to consider future requirements.  For example, if Chinese is not a priority today but will be within two years, you may want to only consider listening platforms that support two-byte languages.  Also probe to assess whether social media content will need to be translated into other languages.  This may be primarily an internal workflow or outsourcing issue, but might also be a platform consideration.
  3. Value-added Services – It is very important to develop a point of view on how monitoring, analysis and reporting will be done within your organization.  Will each stakeholder group be responsible for doing this themselves or will a centralized analytics and insights group be responsible?  In addition to the self-serve approach, you could consider outsourcing this work to your social listening platform vendor or to one of your agencies – PR, digital or advertising.  In my experience, it is easy for a company or organization to underestimate both the skill and time commitment necessary to make the self-serve approach effective.
  4. Content/Data Types – Social media listening platform vendors generally include content from the primary social media properties – Facebook, Twitter, Blogs, Forums, YouTube and MySpace (being generous here).  Flickr is also included in many.  Currently on vendor roadmaps are properties like LinkedIn and perhaps customer review sites.  Make sure the content types the platform supports meet your stakeholder requirements.  It is also very important to understand how the social content is being aggregated and how frequently (see Reporting for more on latency issues).  The fundamental ways in which content is aggregated in social listening platforms are crawling the web, RSS feeds and third-party content aggregators (e.g. Boardreader for Forums).  Many platform vendors employ a hybrid approach.
  5. Metrics and Analytics – Most social listening platforms either have a set group of analytics that deliver specific metrics or they offer configurable analytic ‘widgets’ that may be used to create metrics like share of conversation or volume and tone trend.  Some platforms offer a combination of these two approaches.  Based on your needs and measurement strategy/approach, define the analytics and metrics you would ideally like to see (e.g. volume, sentiment, messages, share-of-conversation, association with key topics).  In the vendor selection phase, this list will be useful to compare and contrast vendors.
  6. Keywords and Topics – During the planning phase, it is wise to develop a list of the major keywords and topics you believe will be necessary for the listening platform.  These keywords might include the company name, key competitors, industry issues, market segment names, brand names, product names, key spokespersons, executives and competitor and industry spokespersons.  Social media listening platforms have varying degrees of sophistication with respect to their search capability.  Some have full Boolean logic, others offer very simple AND/OR logic.  The importance of this difference depends to some degree on your company/brand name as well as the sophistication of the people who will be configuring and maintaining your system.  If, for example, your company name is a common word (e.g. Apple, Visa), you will need stronger logic capabilities that include proximity search.
  7. Integration – Integration of varying data types – search, web, social, advertising, customer opinion and others – is the present and future of online measurement.  It is therefore important to understand what capabilities, if any, the social listening platform vendor has to integrate with other data types/streams.  Do they offer the ability to connect with web analytics packages via API, for example?  Web/social integration is becoming increasingly common.  If you need to integrate traditional media with social, it is a nice feature if the social listening platform allows integration with third-party content aggregators like Factiva, Lexis Nexis, VMS or Critical Mention.
  8. Reporting – During the planning phase it is helpful to think through a series of questions about reports and reporting.  What types of reports are necessary?  Who will be responsible for their creation?  How often will reports be issued?  Does the system need the capability to automatically generate and deliver reports?  What about automated alerts?  There is quite a wide range of report capabilities represented by the various vendors in the listening space.  One potentially critical area to explore during the vendor evaluation phase relates to report frequency and perhaps to report type (think crisis): how often is new content brought into the system?  Content latency issues may cause real problems during a fast-moving crisis.  Generally, content latency differs by media type – best for Twitter and worst (perhaps) for forums, some of which restrict crawling to no more than once per day.  Within Twitter, the type of relationship the vendor has with Twitter should also be explored.  Not all Firehose arrangements are the same.  While most social media listening platforms claim to be 'real time', it is interesting to ask the vendors to define what they mean by 'real time'.  The answers may surprise you.
  9. Access – Discuss who needs access to the listening platform and what they want to see and be able to do once they are in the system.  Do your different stakeholder groups (Divisions, product lines, brands, corporate, marketing, etc.) want or need a customized view of the data perhaps presented on a separate dashboard within the system?  It is also a good idea to have a perspective on who your power users will be versus the casual users.  This distinction applies not only to system access, but also in areas like training.
  10. Engagement – Some social media listening platforms support engagement with content owners directly from the platform, others do not.  Some engagement capabilities are elegant, others are rudimentary.  Make sure to explore the engagement needs of your stakeholders and understand how important this capability is to them in the short and long-term.  If engagement capabilities are important, you will also want to explore if the system allows users to tag content, assign content, manage assignments and track workflow.

In Part Two, we’ll examine a rigorous process for social media listening platform vendor evaluation and selection.

AVEs are a Disease – Here’s a Little Vaccine

16 Apr

One of the truly insidious aspects of public relations measurement is the use of advertising value equivalency (AVEs) or media value to assign financial value to public relations outputs.  It is a highly flawed, path-of-least-resistance attempt to calculate return on investment (ROI) for public relations.   To make matters worse, the practice has clearly moved into social media measurement as well.  For example, research studies that attempt to monetize the value of a Facebook Fan/Liker by attributing a CPM value from the advertising world.  Online media impact rankings also utilize equivalent paid advertising costs to assign monetary value to online news and social media.  AVE is like a disease that has infected and spread throughout the public relations industry.

In June of 2010, the PR industry came together in Barcelona to draft the Barcelona Principles, a set of seven principles of good measurement intended to provide guideposts for the industry.  The principle that has generated the most conversation is this one:

Advertising Value Equivalency (AVE) is Not the Value of Public Relations

While many of the Measurati have been preaching against AVEs for years, there now appears to be a critical mass of outrage that may kill the practice in the coming years.  Here are four compelling reasons why I believe we must make this happen – the sooner the better.

1. AVEs Do Not Measure Outcomes

AVEs equate an article with the appearance cost of an advertisement.  They do not speak at all to the results or impact that the article may have on a reader.  Advertisers do not judge the success of advertising on how much the insertions cost.  Imagine an advertising manager being asked by his or her boss, "How are we doing in advertising this year?", and replying, "Great!  We have spent $500,000 so far!"  The true value of public relations or social media is not the appearance cost, but what happened as a result of the PR or social media effort – the impact it has on brand, reputation and marketing.  You will note the Barcelona Principles also call for a focus on measuring outcomes and not (just) outputs.  What happened as a result of media coverage is inherently more interesting and valuable than how much coverage was obtained.

2. AVEs Reduce Public Relations to Media Relations

You are, or become, what you measure.  AVEs do not address the impact or value of several important aspects of public relations including strategic counsel, crisis communications, grassroots efforts, viral campaigns or public affairs.  In other words, AVEs reduce PR to just the media dimension by only assigning a value in this area.  If only AVEs are used to assess PR value, the results will understate the totality of value delivered by PR.  AVEs also cannot measure the value of keeping a client with potentially negative news out of the media, yet that may be the primary objective of the PR practitioner.

3. AVEs Fly in the Face of Integrated Measurement

Good marketing, branding and reputation campaigns have always been integrated to varying degrees.  The digitization of our lives has accelerated integration.

Advertising and PR actually work together synergistically, yet AVEs treat them as cost alternatives.  Studies have shown that ads run in a climate of positive publicity actually receive lift from the PR.  Conversely, ads run in an environment of negative publicity will likely not be successful and/or may be perceived negatively by consumers.  We have seen that exposure to brand advertising increases conversion rates in social channels.  Integrated campaigns and programs require integrated measurement.  AVEs don't play well in this world.  They are analog and segregated in a digital and integrated world.

4. AVEs Provide No Diagnostic Value

Too much measurement energy is focused on score-keeping and not diagnostics.  This is one reason why single-number metrics like the Klout score and others have great appeal to many.  However, measurement is fundamentally about assessing performance against objectives with sufficient detail and granularity to determine what is working and what is not.  AVEs fail miserably in this regard.  AVE results can actually be misleading and result in false positives.  AVEs may be trending up while important metrics like message communication, share of favorable positioning and share of voice are falling.  Unfortunately, AVEs provide neither a valid single-number score nor any diagnostic value.

Some have said the Barcelona Principles are the 'end of AVEs'.  I would agree directionally with that statement, with one minor amendment: Barcelona was the 'beginning of the end of AVEs'.  Awareness of the practice and recognition of its flaws are at an all-time high in our industry.  More education and evangelism are required.  Understanding concepts like impact, tangible value, intangible value and (true) return on investment helps foster a much more sophisticated conversation about the total value delivered by public relations and social media.  AVEs are a disease; education and knowledge are the vaccine.  AVEs won't die easily.  The momentum generated by the Barcelona event has provided focus and intent.  It is up to all of us to make AVEs a thing of the past.

Social Media Measurement 2011: Five Things to Forget and Five Things to Learn

30 Dec

It has been said that social media came of age in 2010.  Not so for social media measurement.  But the mainstreaming of social media marketing brings with it a heightened call for accountability.  The need to prove the value of social media initiatives has never been greater.  So, perhaps 2011 will be the year that social media measurement matures and comes of age.

As we look to the next year, here are five things to forget and five things to learn about social media measurement in 2011.

Things to Forget in 2011

1. Impressions

The public relations industry has historically measured and reported success through the lens of quantity, not quality.  The most common PR metric today is Impressions.  While it is a somewhat dubious metric for traditional media, it really loses meaning in social media, where engagement, not eyeballs, is what we seek.  Impressions also (greatly) overstate the actual relevant audience.  Impressions merely represent an opportunity to see; they do not attempt to estimate the (small) percentage of the potential audience that actually saw your content.

For Twitter, many folks use the sum of all first generation followers as ‘impressions’ for a particular tweet.  The obvious problem here is that the probability that any one follower sees any one tweet is quite small.  I don’t have good data on this (please share if you do), but an educated guess might put the percentage at less than 5%.  Similarly for Facebook, use of impressions as a metric is also problematic.  Facebook impressions do not indicate unique reach and you don’t have any idea who, if anyone, actually viewed the content.

Number of Impressions is a flawed, unwashed masses metric for social media measurement.  Any time you are tempted to use the word ‘impressions’ in social media, think about ‘potential reach’ or ‘opportunities to see’ instead.  Or better yet, concentrate on Engagement and Influence.
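
The gap between follower-sum 'impressions' and plausible reach can be shown with back-of-envelope arithmetic.  The 5% view probability below is the educated guess from the text, not measured data, and the follower counts are invented for illustration.

```python
# Back-of-envelope sketch of why follower-sum "impressions" overstate reach.
# VIEW_PROBABILITY is an assumption (the <5% educated guess above), not data.
follower_counts = [12_000, 3_500, 800]  # followers of each account that tweeted
VIEW_PROBABILITY = 0.05                 # assumed chance a follower sees any one tweet

impressions = sum(follower_counts)                # the vanity number: 16300
estimated_reach = impressions * VIEW_PROBABILITY  # roughly 815 people

# The "impressions" figure is about 20x the plausible viewing audience.
```

Even this adjusted number is only 'opportunities to see'; it says nothing about who engaged, which is why the text steers you toward Engagement and Influence instead.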

____________________________________________________________

2. Vanity Metrics – Fans and Followers

Most social media measurement efforts place far too much emphasis on Fans/Likers and Followers.  For Twitter, the number of Followers is seen as a key metric, thought by many to relate to potential influence.  For Facebook, it is the number of Fans/Likers many companies/brands attempt to maximize.  While these may be the vanity metrics of choice, they fall far short of being adequate for rigorous measurement.  The largest disconnect, of course, is that these numbers really don't describe potential audience size very well, and they have nothing to do with interactions/engagement.

For Twitter, there is a growing amount of evidence (read the Million Follower Fallacy paper) that number of Followers really has little to do with Influence.  Number of Followers may be an indication of popularity but not influence.  Influence speaks more to one's ability to start conversations and spread ideas.  For Facebook, number of Fans bears little resemblance to average daily audience size and tells you nothing about engagement of the community.  All Fans are not created equal.  Some are engaged, some never return.  Some are your best customers, others are there only to trash you.

Number of Fans and Followers are metrics you probably should include in your overall metrics set, but should be de-emphasized and not be a primary area of focus.

________________________________________________________

3. Standardization

Measurement standardization is always an interesting topic to debate.  On one side you have the folks who believe standards are absolutely necessary for measurement to proliferate, and on the other side you have the snowflake measurement disciples who believe each program is unique and therefore requires unique objectives/metrics.  I fall somewhere between the two extremes.

In June 2010 IPR, AMEC, PRSA, ICCO and The Global Alliance got together in Barcelona for a conference intended to create an atmosphere for measurement consistency/standardization around a codified set of principles of good measurement.  The Barcelona Principles as they have come to be called are basic statements of good measurement practice – focus on outcomes not outputs, don’t use AVEs, etc.  Absolutely nothing to disagree with in the Principles.  However, the heavy lifting of standardization comes at the metrics-level.  Subcommittees have been formed that are taking the Principles all the way down to the metrics level.  I have reviewed the work of the social media committee and believe there is a lot of good work being done.

But in 2011, I expect a lot of debate but not a lot of progress in creating social media measurement standardization.   One to watch is the Klout score for online influencers which is being integrated as metadata in social media listening and engagement platforms.  There are issues with the Klout score (read this post), and I question the type of ‘influence’ it is measuring – there is a big difference between motivating someone to action (e.g. retweeting your content) and motivating someone to purchase which is ultimately the type of influence many companies and brands are most interested in effecting.

__________________________________________________________

4. Ad or Media Equivalency

One of the truly insidious aspects of public relations measurement is the use of advertising or media equivalency (AVEs – advertising value equivalency) to assign financial value to public relations outputs.  It is a highly flawed, path of least resistance attempt to calculate return on investment (ROI) for public relations.  There are many reasons why using ad equivalency as a proxy for PR value is not advisable.

To make matters worse, the practice has clearly moved into social media measurement as well.  For example, research studies that monetize the value of a Facebook Fan/Liker by attributing an arbitrary $5 CPM value from the advertising world.  Online media impact rankings also utilize equivalent paid advertising value to assign monetary value to online news and social media.  The true value of social media is not how much an equivalent ad would have cost but in the impact it has on brand, reputation and marketing.

__________________________________________________________

5. Return on Engagement/Influence/etc.

Not a day goes by without someone declaring a new and improved metric for the acronym ROI, or stating that ROI does not apply in social networks.  A recent Google search for “Return on Engagement” returned 192,000 results.  “Return on Influence” returned 68,300.

Most of the folks who use these terms either don't understand ROI or don't know how to obtain the data necessary to calculate it.  Many confuse the notion of impact with ROI (addressed in Things to Learn).  Engagement creates impact for a brand or organization, but may or may not generate ROI in the short-term.  Creating influence – affecting someone's attitudes, opinions and/or actions – creates impact but may or may not create ROI in the short-term.  It is often better to think about measuring impact first and then deciding whether or not you have the means and data necessary to attribute financial value.

__________________________________________________________

Things to Learn in 2011

1. Measurable Objectives

There are many issues and challenges in the field of social media measurement.  The easiest one to fix is for everybody to learn how to write measurable objectives.  Most objectives today are either not measurable as written or are strategies masquerading as objectives.  (For example, any sentence starting with an action buzzword like ‘leverage’ is a strategy.)

‘Increase awareness of product X’ is not a measurable objective.  In order to be measurable, objectives must contain two essential elements:

  • Must indicate change in metric of interest – from X to Y
  • Must indicate a timeframe for the desired change – weeks, months, quarter, year, specific dates tied to a campaign (pre/post)

Therefore, properly stated, measurable objectives should look more like these:

  • Increase awareness of product X from 23% to 50% by year-end 2011
  • Increase RTs per 1000 Followers from 0.5% in Q1’11 to 10% by the end of Q2’11.

__________________________________________________________

2. Impact versus ROI

ROI is one of the most overused and misused terms in social media measurement.  Many people say ‘ROI’ when they really just mean results or impact.  ROI is a financial metric – the percentage of dollars returned for a given investment/cost.  The dollars may be revenue generated, dollars saved or spending avoided.  ROI is transactional.

ROI is a form of impact, but not all impact takes the form of ROI.  Impact is created when people become aware of us, engage with our content or brand ambassadors, are influenced by engagement with content or other people, or take some action like recommending to a friend, writing a review or buying a product.  Impact ultimately creates value for an organization, but the value creation occurs over time, not at a point in time.  Value creation is process-oriented.  It has both tangible and intangible elements.

Your investments in social media or public relations remain an investment, creating additional value if done correctly, until such time as they can be linked to a business outcome transaction that results in ROI.
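Expressed as a formula, ROI is simply the net return divided by the cost.  A minimal sketch in Python, using hypothetical figures (a $2,000 program tied to $5,000 in incremental revenue):

```python
def roi_percent(dollars_returned: float, cost: float) -> float:
    """ROI as a percentage: dollars returned, net of cost, divided by cost."""
    return (dollars_returned - cost) / cost * 100

# Hypothetical example: $5,000 returned on a $2,000 program.
print(roi_percent(5000, 2000))  # → 150.0
```

Note that the formula only works once the dollars returned can actually be tied to the program – which is exactly the linkage most social initiatives cannot yet make.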

Most social media initiatives today do not (or should not) have ROI as a primary objective.  Most social programs are designed to create impact, not ROI, in the short-term.  There is also the notion that many social media initiatives are in an investment phase, not a return phase of maturity.

__________________________________________________________

3. Hypothetical ROI Models

One important step in determining how a social media initiative creates ROI for an organization is to create a hypothetical model that articulates the cascading logic steps in the process, as well as the data needed and assumptions used.  The model is most useful in the planning stages of a program.  It helps address the proverbial question, “If I approve this budget, what is a reasonable expectation for the results we will achieve?”  Let’s take a look at a simple Twitter example:

Program: Five promoted tweets are sent with a special offer to purchase a product on an e-commerce site.

Hypothetical ROI Model:

  • (Data) Total potential unduplicated reach of the five tweets is 1,000,000 people
  • (Assume) 10% of the potential audience will actually see the tweet = 100,000 people
  • (Assume) 20% of the individuals who see the tweet find it relevant to them = 20,000 people
  • (Assume) 10% of those finding it relevant will visit the site = 2,000 people
  • (Assume) 10% of those visiting the site will convert and buy the product = 200 people
  • (Data) Incremental profit margin on each sale is $50
  • (Data) Total cost of the social media initiative is $2,400

ROI Calculation: revenue = 200 × $50 = $10,000; net return = $10,000 – $2,400 = $7,600; ROI = $7,600 ÷ $2,400 = 3.17, or 317%
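The funnel arithmetic above is easy to sketch in code, which also makes it easy to rerun with different assumptions (the variable names here are mine; the figures are the ones from the model):

```python
# Hypothetical ROI model from the Twitter example above.
# "Data" values are givens; "Assume" values are negotiable assumptions.
potential_reach = 1_000_000   # (Data) unduplicated reach of the five tweets
see_rate = 0.10               # (Assume) share who actually see a tweet
relevance_rate = 0.20         # (Assume) share who find it relevant
visit_rate = 0.10             # (Assume) share of those who visit the site
conversion_rate = 0.10        # (Assume) share of visitors who buy

margin_per_sale = 50          # (Data) incremental profit per sale, $
program_cost = 2_400          # (Data) total cost of the initiative, $

buyers = potential_reach * see_rate * relevance_rate * visit_rate * conversion_rate
profit = buyers * margin_per_sale
roi_pct = (profit - program_cost) / program_cost * 100

print(f"{buyers:.0f} buyers, ${profit:,.0f} profit, {roi_pct:.0f}% ROI")
# → 200 buyers, $10,000 profit, 317% ROI
```

Changing any single assumption and rerunning instantly shows how sensitive the projected ROI is to that assumption – useful ammunition for the negotiation described below.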

Our model suggests this program will be successful and generate substantial ROI.  If, when reviewing the model with someone who must approve the program, they buy into it conceptually but challenge the assumptions, that is a positive step: negotiate different assumptions and rerun the numbers.  Hypothetical models also help you think through the data requirements your research approach must address in order to actually measure the ROI of the program after implementation.

__________________________________________________________

4. Integrated Digital Measurement

The definition of public relations is fluid, and rapidly evolving to encompass a much broader and more integrated view of communications and how we connect, engage and build relationships with consumers and other stakeholders.  Digitization in all its forms has driven and accelerated this important change.  Communicators should now take a more content and consumer-centric view of the world, orchestrating all the consumer touch points available in our increasingly digital world.  At Fleishman Hillard, we capture this expanded scope and integration in a model we refer to as PESO – Paid/Earned/Shared/Owned.  Here is how we define the elements of our model:

Paid – refers to all forms of paid content that exists on third-party channels or venues.  This includes banner or display advertisements, pay-per-click programs, sponsorships and advertorials.

Earned – includes traditional media outreach as well as blogger relations/outreach where we attempt to influence and encourage third-party content providers to write about our clients and their products and services.

Shared – refers to social networks and technologies controlled by consumers, along with online and offline word of mouth (WOM).

Owned – includes all websites and web properties controlled by a company or brand including company or product websites, micro-sites, blogs, Facebook pages and Twitter channels.

The social media measurement Holy Grail in many ways is to be able to track behavior of individuals across platforms, online and offline, tethered and mobile, understanding how online behavior impacts offline behavior and vice-versa.  We also seek to understand how the PESO elements work together synergistically.  For example, how exposure to online advertising impacts conversions within social channels.  To address this, your measurement strategy should be to take a holistic, integrated approach using a variety of methodologies, tools and data.

_________________________________________________________

5. Attribution

If you are not already familiar with value attribution models, prepare to hear much more about them in 2011.  Value attribution models attempt to assign a financial value to specific campaigns and/or channels (e.g. advertising, search, direct, social) that are part of a larger marketing effort.  So rather than giving all the conversion credit to the last click in a chain or even the first click, the model attributes portions of the overall value across the relevant campaigns and/or channels.

A simple model might look at the following metrics for each channel:

  • Frequency – the number of exposures to a specific marketing channel or campaign
  • Duration – time on site for exposures referring to the conversion site
  • Recency – credit for exposures ranging from first click to last click, with last click typically receiving more credit.

Value attribution models require human analysis and expertise.  This factor is often cited in studies as the reason more companies do not pursue attribution modeling.
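To make the idea concrete, here is a minimal sketch of a recency-weighted attribution split.  The linear weighting scheme (the i-th touch gets weight i, so the last click earns the most credit) is purely illustrative, not an industry standard; real models blend frequency, duration and recency as described above:

```python
def attribute_value(conversion_value: float, touchpoints: list[str]) -> dict[str, float]:
    """Split one conversion's value across the channels in its click path.

    Recency weighting: the i-th touchpoint (1-based) gets weight i, so the
    last click receives the most credit.  Repeated channels accumulate credit.
    """
    weights = range(1, len(touchpoints) + 1)
    total = sum(weights)
    credit: dict[str, float] = {}
    for touch, w in zip(touchpoints, weights):
        credit[touch] = credit.get(touch, 0.0) + conversion_value * w / total
    return credit

# A $120 conversion whose path was display ad -> search -> social (last click):
print(attribute_value(120, ["display", "search", "social"]))
# → {'display': 20.0, 'search': 40.0, 'social': 60.0}
```

However the weights are chosen, a sound attribution model must conserve value – the credit assigned across channels should sum to the conversion's total value.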

_________________________________________________________

Here’s wishing you and yours an exciting and prosperous 2011!
