Selecting the Right Social Media Listening Platform is a Process Not an Event

17 Sep

It is not difficult to find a social media listening platform – there are over 100 to choose from.  What is difficult is to find the right tool.  It takes a keen understanding of scope and requirements.  It takes an evaluation and selection process that will surface the best platform to fully meet your  requirements.  And it takes a well thought-out process for deploying the platform across the organization in an effective and efficient manner.   There are many questions to be asked and answers to be given.  Asking the right questions at the right time is crucial.

It is helpful to think of the overall listening platform selection process in three phases:

  1. Plan – Define requirements, stakeholders, scope
  2. Select – Create a platform evaluation process tailored to your unique requirements
  3. Deploy – Roll out the selected platform across the organization, with training, workflow and other important issues addressed.
To read the rest of this post and to download the free eBook, Social Media Listening Platforms: How to select and deploy the right social media listening tools for your company, please click here.

AVEs Don’t Describe the Value of Media Coverage, They Sensationalize It.

26 Jun

On Saturday, Wall Street Journal columnist Carl Bialik, The Numbers Guy, addressed the subject of advertising value equivalency (AVE).  This is perhaps the first example of a mainstream media publication shining a light on the controversial practice of AVEs.  (You can read the story here.)

The primary reason advertising value equivalents exist is that they are perceived to be a way to attribute value to programs that would otherwise be difficult to value directly.  They are a path-of-least-resistance approach to return on investment calculations, but not a valid one.  Let’s take a deeper dive into the three specific examples in the WSJ story, ask the tough questions and discuss more valid ways to think about value attribution and ROI.

American Airlines  

You can enjoy both questionable valuation techniques and hyperbole in this article.  American Airlines stands to “make boatloads of cash” and “the airline company could gain as much as $95.9 million of exposure”.  Oh really?  Let’s take a closer look.

The most incredible part of this financial calculation is the financial calculation itself.  The calculation is apparently based on sign placement within the arena and presumably the ‘impressions’ the brand will receive when people attending the venue see the signage and when TV cameras catch the signs when showing the scoreboard or during the action.  This is a very passive form of advertising that should have as its objective either creating top of mind awareness or perhaps creating more brand affinity.  Rather than using an advertising equivalency model that has no validity, a true measurement of the value created by naming rights would ask a series of questions designed to determine the actual, tangible (or even intangible) impact on the business:

  • Revenue: Can incremental revenue generation in the form of higher passenger miles be directly attributed to the exposure created by the naming rights?  Is it possible that incremental revenue would actually be realized on a game by game basis, or would any positive impact be realized over a longer time horizon?  Have new customers been created as a direct result of the exposure generated by the naming rights?
  • Brand: Can the increased exposure lead to people perceiving the brand differently and can the difference translate into higher transactional revenues generated or increased brand loyalty?

So where exactly are the ‘boatloads of cash’ American Airlines made?  Are they hitting the income statement in the form of incremental revenue or enhanced brand loyalty (repeat business)?  Are they residing on the balance sheet in terms of brand goodwill?  Given that American’s parent company AMR lost $11.5 billion in the first decade of the 21st century, that its last profitable year was 2007 and that it is projected to lose money in 2011 and 2012, the company could use the cash.  Perhaps it could use it to fund a ‘bags fly free’ program or to enhance its AAdvantage program to create more brand loyalty.  I strongly suspect American’s shareholders would prefer a do-over on the investment made in naming rights to the ‘boatloads of cash’ they are now enjoying from it.

Couple Won’t Cash In on Kiss

Fifteen minutes of fame is rarely worth $10 million.  In this case, the celebrity agent is suggesting the news value of the coverage generated by the kiss is somehow equivalent to advertising value, and assigns what appears to be an arbitrary and ridiculously high value to it.  (He later admits he just made the number up.)  Just how was the couple going to monetize their 15 minutes of fame?  Yes, they turned down a few talk show opportunities, and perhaps the National Enquirer would have thrown a few dollars their way for an exclusive, but the assertion that any major brand would have paid them to endorse its product is wildly speculative.  I would guess that if you did a survey after the event, a small number of people would remember seeing the coverage, and a very small percentage of those who did would have recalled Scott Jones’ name.  So perhaps Mr. Jones walked away from tens of thousands of potential dollars in the short-term, but nowhere near the sensationalized estimate of $10 million.  Fifteen minutes of fame might be worth $10,000, but certainly not $10 million.

Obama Enjoys a Guinness

So Guinness is a winner and received $20 million worth of “free publicity”?  What was the outcome of the publicity?  Again, in order to determine the value of the “free publicity” (a term despised in the PR industry, by the way), Guinness would have to be able to measure incremental revenues directly attributable to the publicity generated.  Did sales of Guinness increase as a result?  Were new customers created?  Did existing customers feel compelled to drink even more?  What was the value of the incremental sales?  These are much more difficult questions to answer, but they are the correct ones to ask in order to measure the publicity – not by focusing on the mythical value of the coverage as measured by flawed advertising equivalency, but by measuring the outcome, or what happened as a result of the publicity.  The assertion that President Obama’s image was softened and will help keep him in the public’s favor is highly dubious thinking.  Perhaps it helps him in Boston, but in the grand scheme of things, this is a Presidential image non-event.

Beginning last summer in Barcelona, the public relations industry has come together to publicly state that advertising value equivalency is not a valid measure of public relations.  The so-called Barcelona Principles explicitly reject AVEs and also call for a focus on measuring outcomes and not (just) outputs.  While it will take some time for the PR industry to leave AVEs behind entirely, there is a lot of momentum right now to make this happen sooner rather than later.  No serious measurement effort can use advertising value equivalency to attribute value and remain credible.

Social Media Listening Platforms – Plan, Select, Deploy (Part Three – Deploy)

17 Jun

In Part Two of this series on social media listening platforms we offered a process for selecting a social media listening platform vendor.  Now it’s time to deploy the tool across your organization effectively and with minimal disruption.  And put the tool to work.

Configuration – We talked about value-added services in the first post in this series.  One of the services offered by many listening platform vendors is configuration.  You’ll have to decide if you want to have the vendor perform system configuration or do it yourself.  In some cases you have no choice – you submit keywords, topics and themes to the vendor and the system is programmed for you.  In other cases some basic configuration must be done by the platform vendor but the bulk of the configuration can be a DIY project.

Keywords and Topics – In part one of this series, we discussed the need to think through the keywords required to bring all relevant content into your platform.  The keywords might be company name, product/brand names, competitors, issues, segment names, executives and spokespersons and key messages.  During deployment you will need to build taxonomy around many of the keywords that represent concepts rather than singular ideas or names.  For example, if you have a message that centers on being an innovative company, you will have to decide what expressions in addition to the keyword ‘innovative’ may be classified as innovation –  leading-edge, technology leader, R&D leadership, breakthrough products, etc.  You will also have to decide words and terms to exclude from your analysis.  Both of these processes are iterative – make a change, check content relevancy, adjust, repeat.
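The iterative build-out of a theme around a keyword can be sketched in a few lines of code.  This is a simplified illustration only – real listening platforms implement this internally, and the term lists and sample posts below are hypothetical:

```python
# Hypothetical theme classification with include and exclude term lists.
INNOVATION_TERMS = ["innovative", "leading-edge", "technology leader",
                    "r&d leadership", "breakthrough"]
# Exclusions trim false positives, e.g. job postings that say "innovative".
EXCLUDE_TERMS = ["job opening", "hiring", "career"]

def matches_theme(post: str) -> bool:
    """Classify a post into the 'innovation' theme."""
    text = post.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return any(term in text for term in INNOVATION_TERMS)

posts = [
    "Acme ships another breakthrough product this quarter",
    "Job opening: seeking innovative engineers at Acme",
    "Acme stock dips after earnings call",
]
relevant = [p for p in posts if matches_theme(p)]
# Spot-check relevancy, adjust the term lists, and repeat.
```

The loop at the end mirrors the iterative process described above: make a change to the lists, check which content comes through as relevant, adjust, repeat.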

Integration – There are a few different types of integration you may want to tackle during platform configuration and deployment.  Each of the possible forms of integration will take a little time to accomplish and may require some back and forth between you and the platform vendor and/or vendor to vendor.  I am a big fan of web analytics and social media integration.  With many listening platforms this is relatively straightforward to accomplish.  You may also want to integrate third-party data sources like Factiva, LexisNexis, VMS or Critical Mention.  Assuming the listening platform vendor you selected supports this type of integration, it also is relatively straightforward.  To address latency issues, make sure you specify load times for the content.

Reports and Workflow – Previously, we addressed many of the basic questions around reports and reporting.  In the deployment phase it’s time to make it real.  Design specific templates for each report you need.  Create a mock-up and share with your stakeholders to make sure everyone is on board with the look, feel and utility of the report.   You will want to test the various delivery mechanisms to be employed including all email clients and mobile platforms you believe may be used.  Generally speaking, assume a significant percentage of the audience may look at the report on a mobile device, making this an especially important dynamic to test.  Once you have the report format established, define your workflow process – who pulls data and when, who creates visuals and by when, who compiles and edits the report and by when, and who is responsible for distribution and against what schedule.

Training – The first decision to make with training is whether you want to tackle it yourself or rely on the listening platform vendor to perform the training.  Some vendors have very strong training programs and others not so much.  Some vendors charge for training and some do the bulk of it for free.  You most likely will want to take a train-the-trainer hybrid approach – have a core team of one to three people trained by the platform vendor, and then charge this team with training within your company or organization.  With respect to timing, make sure to begin training only after everyone has a login to the system so they can actually use the system during the training.  I usually refer to this as training with live ammo.  If you don’t do this you’ll find the half-life of training is pretty short – folks forget most of what they have learned very rapidly.  I also find a tell-show-do teaching methodology works very well (my friends at Radian6 approach training this way).  Show some slides that cover the basics, show a video or canned demo that brings it to life and then have everyone do some hands-on exercises using the platform.  Remember you will need to address initial training needs as well as ongoing needs as new users are brought onto the platform.

Event-specific and Programmatic Planning – Related to keyword analysis and taxonomy build-out, it may be wise to create keyword groups for programs you know you will be asked to listen to and measure, and for any potential events, like a crisis, that you can anticipate or imagine.  With respect to programmatic listening and measurement, generally a combination of the right keywords and date-ranging will allow you to pull in program-specific content.  If programs are known at the time of configuration and deployment, get ahead of the curve and set up the keyword groups or source filters you may need.

If your company, brand or organization has a social listening program, you would be remiss not to include specific keywords that may serve as an early-detection system for a potential crisis.  For example, depending on the type of organization and industry, it may be advisable to set up a keyword search like this: Company Name AND fire OR explosion OR shooting OR recall OR kidnapping OR crash.
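One caution on that example query: in most Boolean search engines, AND binds tighter than OR, so the ungrouped form would match any post containing “explosion” alone, with no company mention.  The intended logic is company name AND (any crisis term).  A minimal sketch, with a made-up company name:

```python
# Early-warning check, equivalent to:
#   company AND (fire OR explosion OR shooting OR recall OR kidnapping OR crash)
CRISIS_TERMS = ["fire", "explosion", "shooting", "recall", "kidnapping", "crash"]

def is_potential_crisis(post: str, company: str = "acme") -> bool:
    """True when a post mentions the company together with a crisis term."""
    text = post.lower()
    return company in text and any(term in text for term in CRISIS_TERMS)

is_potential_crisis("Acme issues a recall of model X")  # True
is_potential_crisis("Warehouse fire downtown")          # False: no company mention
```

When entering the query into a platform, parentheses around the OR group accomplish the same thing.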

In today’s real-time world, in my opinion, it is no longer optional to have social media listening capabilities.  As a result of this three-part series on listening platforms, I hope you are better equipped to plan, select and deploy your platform effectively.

Thanks for reading.

Social Media Listening Platforms – Plan, Select, Deploy (Part Two – Select)

2 Jun

In Part One, we discussed a range of topics designed to help you plan and define the scope and requirements for selecting and deploying a social media listening platform across your company or organization.  In Part Two, we will use the knowledge and perspective we gained in planning to orchestrate a thorough and effective platform selection process.

Here is a scalable selection process that will help you surface and select the social media listening platform that best meets your unique situation and requirements.

1. Define the individuals who will be involved in the selection process – Inclusion is a powerful card to play here.  Inclusion brings different perspectives together.  Inclusion greatly improves chances for success when it is time to authorize purchase of a platform and get it deployed properly across the organization or company.  Inclusion will increase the likelihood of acceptance and use of the platform across the organization.  Include representatives from the major stakeholder groups identified during the planning process.  You might include someone from your IT department.  You might also include the individuals who must authorize the purchase.  A group of up to ten is most workable.  After ten or so, I believe you will most likely experience diminishing returns on the incremental people added to the process.

2. Develop a list of selection criteria organized by major category – Based on the planning process we undertook in Part One, develop a list of categories that are most important to learn more about.  Here are ten categories you might consider including:

  • Content Sources/Types & Aggregation Strategy – What types of social content are brought into the system?  How is the content aggregated (e.g. RSS, crawling, third-party aggregators)?  How often is each type of content aggregated?  
  • Data and Search Considerations – How long is content archived, and is back data available?  What data cleansing strategies are in place to address spam, splogs and duplicate content?  Is full Boolean logic available for constructing searches?
  • Metrics and Analytics – What specific metrics are ‘standard’ in the system?  Is automated sentiment analysis offered at the brand or post level?  What audience-level data is available?
  • Data Presentation  – What dashboard features and functionality are offered?  Can dashboards be customized by user or group?  Are drill-down capabilities available for all analytics on the dashboard?    
  • Engagement and Workflow Functionality – Does the platform offer the ability to engage directly with content owners?  Can ‘owned’ content be managed on-platform?  What workflow management and reporting capabilities are offered?
  • Integration – What additional types of data may be integrated in the system – traditional media, web analytics, email, call center, CRM, etc?
  • Reporting Capability – Does the platform have a report function?  Can reports be customized?  Automated?
  • Geographic Scope – What countries and languages are addressed by the system?  Are two-byte languages supported?
  • Cost Structure – What is the cost basis – seat charge, subscription, content volume and/or number of searches?  How does pricing vary with increases in the cost basis?
  • Value-added Services – Does the listening platform vendor offer system configuration services?  Do they perform analysis and reporting?

Within each major category, list the specific criteria most relevant and important to your requirements.  For example, within the Data and Search Considerations category, you might list ten specific criteria that you want to assess for each vendor:

  • How often is Twitter data refreshed?  Can refresh timing be specified?
  • How often is new content from other sources crawled/brought into the system?
  • How long can each content type be archived?
  • Is back data available?  How far back and at what cost?
  • What data cleansing strategies are in place?
  • Can data be easily exported in CSV/Excel format and is bulk data extraction supported?
  • Can users build and customize topics and searches?
  • What types of Boolean operators are supported?
  • Is proximity search supported?
  • Do users have the ability to date-range data for analysis?

3. Develop a scorecard to use in evaluating the potential listening platform vendors/partners – Using the major categories and specific criteria you have defined, develop an overall scorecard to be used in the evaluation process.  Think about creating a weighting system at the category level to help prioritize the importance of each category.  Assign a number of points to each criterion within a given category.  A scorecard might contain ten categories each containing ten criteria.  Begin by assigning a one-point value to each criterion (100 points total) and then apply weighting at the category level.
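The category-weighted scoring described above might be computed like this.  The categories, weights and per-criterion scores below are invented for illustration:

```python
# Hypothetical weighted scorecard: each category has a weight, and each
# criterion within a category earns 0 or 1 point.
weights = {"Data and Search": 0.20, "Metrics and Analytics": 0.15,
           "Cost Structure": 0.10}
vendor_scores = {
    "Data and Search":       [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # 8 of 10
    "Metrics and Analytics": [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],  # 8 of 10
    "Cost Structure":        [1, 1, 1, 0, 0, 1, 1, 1, 0, 1],  # 7 of 10
}

def weighted_total(scores, weights):
    """Sum each category's points, scaled by that category's weight."""
    return sum(weights[cat] * sum(points) for cat, points in scores.items())

total = weighted_total(vendor_scores, weights)
```

Computing the same total for each vendor on the same scorecard makes the RFI comparison in the following steps straightforward.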

4. Develop the initial vendor consideration set – List all the social media platform vendors you wish to consider.  Pick ones you are familiar with and have positive experiences with as a starting point.  Talk to colleagues within, and experts outside, the organization to gain their perspective on the platforms that should be considered.  Read blog posts and reviews of the platforms to gain additional outside perspective.  Visit vendor websites and watch demo videos.  Pull it all together and gain consensus amongst your team on the platforms that will be considered.

5. Do some homework and narrow the list to a manageable number (perhaps five to ten) – If your initial vendor consideration set is too large (if it has more than ten vendors, it is too large), do some additional homework and narrow your list to a more manageable number.

6. Develop and distribute an RFI based on evaluation criteria – Using the categories and criteria you developed, create a request for information (RFI), asking the listening platform vendors the questions that are most critical to meeting your requirements.  Specify the format (e.g. PowerPoint, Word) you would like responses to take.  Give the vendors about two weeks to respond.

7. Evaluate and score vendor responses – Once the RFI documents are received, each should be reviewed carefully and scored according to the criteria and weighting decided previously.  Depending on the number of vendors being evaluated and the ease of getting the entire evaluation team together, there may be merit in blocking out an afternoon to gather as a group, read through the responses, and decide how each will be scored.  This is a bit of a ‘pulling off the band-aid’ approach that will save time and allow for spirited discussion and consensus scoring.  If this is impractical for whatever reason in your company or organization, assign one or more RFIs to individuals who will then develop the scorecards.  The scorecards may then be reviewed together in a meeting or conference call, and consensus reached on scoring.  Obviously the potential issue with multiple people independently creating scorecards is consistency.  You want the evaluation to be as fair and consistent as possible given whatever constraints you are working under.

8. Develop a short list of vendors – If your number of vendors under consideration is over five, use the scorecards to reduce the list to three to five platforms that will undergo further evaluation.  These are your finalists.  You should always promptly notify vendors not moving forward in the process, and offer to provide feedback via phone or email on why they were not selected to move forward.  This professionalism will be much appreciated by the vendors, and represents a good learning opportunity for all involved if done well.

9. Deploy test scenarios – At this point we have narrowed the list of contenders and are ready to proceed with some specific tests designed to illuminate the real-world capabilities of the platforms.  Here are three possible test scenarios.  You can use all three for a very rigorous evaluation, or just one or two if that fits your needs better.

  • Test scenario 1: Give each vendor a defined list of search terms (brands, competitors, issues) and the languages/countries you want to evaluate.  You should use search terms that are directly relevant to your company or organization.  Explain what type of analysis you would like performed and ask each to address insight generation.  Give each vendor one week to prepare the analysis.  If practical, you could ask each vendor to give a presentation of the results in person.  Alternatively, use a web conference to review the results.
  • Test scenario 2: This is a real-time exercise designed to assess vendor data volume by country/language and the signal-to-noise ratio of relevant content.  Get on a web conference with each social media listening platform vendor.  Give them a new list of three search terms and ask that they go into their platform, configure the system for the three search terms and then pull in relevant content for the past 30 days.  Once that is accomplished, ask them to export the data as a CSV or Excel file and email you the results while everyone is still on the line.  A more detailed off-line review of the results should then be undertaken, including translation of languages, to assess the relevancy of the results.
  • Test scenario 3: This has been referred to by a colleague as the Dr. Evil test…In conjunction with test scenario two, it may be interesting to ‘plant’ known content that matches the search terms on different Twitter channels, Facebook pages and Forums in each country that is of interest to you.  When you receive your data export, examine to determine if the known content was found.

10. Pick a winner – At this point you have the RFIs, scorecards and test results.  You are ready to make your decision.  Convene the evaluation team, discuss the results and make a decision.  With luck, a clear winner will have emerged from the process.  Contact the winner and negotiate terms of a contract.  Don’t notify the non-winners until after a contract is in place, just in case you need to move to your second choice for whatever reason.
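As an aside on test scenario 2 above, the off-line signal-to-noise review of a vendor’s CSV export can be sketched in a few lines.  The column names, labels and sample rows here are hypothetical; in practice an analyst labels each post’s relevance by hand before the ratio is computed:

```python
import csv
import io

# A hypothetical excerpt of a vendor's export, after an analyst has
# labeled each post as relevant ("yes") or not ("no").
export = io.StringIO(
    "post,relevant\n"
    "Acme recalls model X,yes\n"
    "Unrelated spam post,no\n"
    "Review of Acme service,yes\n"
    "Duplicate spam post,no\n"
)
rows = list(csv.DictReader(export))
relevant = sum(1 for r in rows if r["relevant"] == "yes")
signal_to_noise = relevant / len(rows)  # share of on-topic content
```

Comparing this ratio across vendors, for the same search terms and time window, gives a concrete basis for the scorecard.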

In Part Three, we will discuss how to maximize your potential for success when actually deploying the social media listening platform across your organization.

Social Media Listening Platforms – Plan, Select, Deploy (Part One – Plan)

19 May

It is not difficult to find a social media listening platform/tool – there are over 100 to choose from.  What is difficult is to find the right tool.  It takes a keen understanding of your scope and requirements.  It takes an evaluation and selection process that will surface the best platform to fully meet your requirements.  And it takes a well thought-out process for deploying the platform across the organization in an effective and efficient manner.   There are many questions to be asked and answers to be given.  Asking the right questions at the right time is crucial.

It is helpful to think of the overall process in three phases:

Plan – Define requirements, stakeholders, scope

Select – Create a platform evaluation process tailored to your unique requirements

Deploy – Roll out the selected platform across the organization, with training, workflow and other important issues addressed.

This three-part series will tackle each phase one at a time.  First up – Plan.

In many ways, the planning phase is the most important.  Overlook an important detail here and you may or may not be able to overcome it later.  Here are ten topic areas to discuss within your organization to make sure you are setting yourself up for success.

  1. Stakeholders – What are the primary stakeholder groups within my company or organization?  Possible stakeholder groups might include marketing, corporate communications and customer service/care at the macro level.  Depending on the size of your organization, various regions, divisions, groups or product lines may also be distinct stakeholder groups.  Once you have identified the primary stakeholders, set up time to meet with each group.  Understand how they currently use social listening tools and what, from their perspective, are ‘must have’ capabilities versus ‘nice to have’ capabilities in a social listening platform.  Ask each stakeholder group the applicable questions from the list below.
  2. Geographic Scope – What languages and countries are stakeholders interested in including in the platform?  Try to understand the relative priority of each country and language.  Also be sure to comprehend future requirements.  For example, if Chinese is not a priority today but will be within two years, you may want to only consider listening platforms that support two-byte languages.  Also probe to assess if social media content will need to be translated into other languages.  This may be primarily an internal workflow issue or outsourcing issue, but might also be a platform consideration.
  3. Value-added Services – It is very important to develop a point of view on how monitoring, analysis and reporting will be done within your organization.  Will each stakeholder group be responsible for doing this themselves or will a centralized analytics and insights group be responsible?  In addition to the self-serve approach, you could consider outsourcing this work to your social listening platform vendor or to one of your agencies – PR, digital or advertising.  In my experience, it is easy for a company or organization to underestimate both the skill and time commitment necessary to make the self-serve approach effective.
  4. Content/Data Types – Social media listening platform vendors generally include content from the primary social media properties – Facebook, Twitter, Blogs, Forums, YouTube and MySpace (being generous here).  Flickr is also included in many.  Currently on vendor roadmaps are properties like LinkedIn and perhaps customer review sites.  Make sure the content types the platform supports meet your stakeholder requirements.  It is also very important to understand how the social content is being aggregated and how frequently (see Reporting for more on latency issues).  The fundamental ways in which content is aggregated in social listening platforms are crawling the web, RSS feeds and third-party content aggregators (e.g. Boardreader for Forums).  Many platform vendors employ a hybrid approach.
  5. Metrics and Analytics – Most social listening platforms either have a set group of analytics that deliver specific metrics or they offer configurable analytic ‘widgets’ that may be used to create metrics like share of conversation or volume and tone trend.  Some platforms offer a combination of these two approaches.  Based on your needs and measurement strategy/approach, define the analytics and metrics you would ideally like to see (e.g. volume, sentiment, messages, share-of-conversation, association with key topics).  In the vendor selection phase, this list will be useful to compare and contrast vendors.
  6. Keywords and Topics – During the planning phase, it is wise to develop a list of the major keywords and topics you believe will be necessary for the listening platform.  These keywords might include the company name, key competitors, industry issues, market segment names, brand names, product names, key spokespersons, executives and competitor and industry spokespersons.  Social media listening platforms have varying degrees of sophistication with respect to their search capability.  Some have full Boolean logic, others offer very simple AND/OR logic.  The importance of this difference depends to some degree on your company/brand name as well as the sophistication of the people who will be configuring and maintaining your system.  If, for example, your company name is a common word (e.g. Apple, Visa), you will need stronger logic capabilities that include proximity search.
  7. Integration – Integration of varying data types – search, web, social, advertising, customer opinion and others – is the present and future of online measurement.  It is therefore important to understand what capabilities, if any, the social listening platform vendor has to integrate with other data types/streams.  Do they offer the ability to connect with web analytics packages via API, for example?  Web/social integration is becoming increasingly common.  If you need to integrate traditional media with social, it is a nice feature if the social listening platform supports third-party content aggregators like Factiva, LexisNexis, VMS or Critical Mention.
  8. Reporting – During the planning phase it is helpful to think through a series of questions about reports and reporting.  What types of reports are necessary?  Who will be responsible for their creation?  How often will reports be issued?  Does the system need the capability to automatically generate and deliver reports?  What about automated alerts?  There is quite a wide range of report capabilities represented by the various vendors in the listening space.  One potentially critical area to explore during the vendor evaluation phase is report frequency – that is, how often new content is brought into the system.  Content latency issues may cause real problems during a fast-moving crisis.  Generally, content latency differs by media type – best for Twitter and worst (perhaps) for forums, some of which restrict crawling to no more than once per day.  Within Twitter, the type of relationship the vendor has with Twitter should also be explored.  Not all Firehose arrangements are the same.  While most social media listening platforms claim to be ‘real time’, it is interesting to ask vendors to define what they mean by ‘real time’.  The answers may surprise you.
  9. Access – Discuss who needs access to the listening platform and what they want to see and be able to do once they are in the system.  Do your different stakeholder groups (Divisions, product lines, brands, corporate, marketing, etc.) want or need a customized view of the data perhaps presented on a separate dashboard within the system?  It is also a good idea to have a perspective on who your power users will be versus the casual users.  This distinction applies not only to system access, but also in areas like training.
  10. Engagement – Some social media listening platforms support engagement with content owners directly from the platform, others do not.  Some engagement capabilities are elegant, others are rudimentary.  Make sure to explore the engagement needs of your stakeholders and understand how important this capability is to them in the short and long-term.  If engagement capabilities are important, you will also want to explore if the system allows users to tag content, assign content, manage assignments and track workflow.

In Part Two, we’ll examine a rigorous process for social media listening platform vendor evaluation and selection.

AVEs are a Disease – Here’s a Little Vaccine

16 Apr

One of the truly insidious aspects of public relations measurement is the use of advertising value equivalency (AVEs) or media value to assign financial value to public relations outputs.  It is a highly flawed, path-of-least-resistance attempt to calculate return on investment (ROI) for public relations.   To make matters worse, the practice has clearly moved into social media measurement as well.  For example, some research studies monetize the value of a Facebook Fan/Liker by attributing a CPM value borrowed from the advertising world.  Online media impact rankings also utilize equivalent paid advertising costs to assign monetary value to online news and social media.  AVE is like a disease that has infected and spread throughout the public relations industry.

In June of 2010, the PR industry came together in Barcelona to draft the Barcelona Principles, a set of seven principles of good measurement intended to provide guideposts for the industry.  The principle that has generated the most conversation is this one:

Advertising Value Equivalency (AVE) is Not the Value of Public Relations

 While many of the Measurati have been preaching against AVEs for years, there now appears to be a critical mass of outrage that may kill the practice in the coming years.  Here are four compelling reasons why I believe we must make this happen – the sooner the better.

1. AVEs Do Not Measure Outcomes

AVEs equate an article with the appearance cost of an advertisement.  They do not speak at all to the results or impact the article may have on a reader.  Advertisers do not judge the success of advertising by how much the insertions cost.  Imagine an advertising manager being asked by his or her boss, “How are we doing in advertising this year?”, and replying, “Great!  We have spent $500,000 so far!”  The true value of public relations or social media is not the appearance cost, but what happens as a result of the PR or social media effort – the impact it has on brand, reputation and marketing.  You will note the Barcelona Principles also call for a focus on measuring outcomes and not (just) outputs.  What happened as a result of media coverage is inherently more interesting and valuable than how much coverage was obtained.

2. AVEs Reduce Public Relations to Media Relations

You are, or become, what you measure.  AVEs do not address the impact or value of several important aspects of public relations including strategic counsel, crisis communications, grassroots efforts, viral campaigns or public affairs.  In other words, AVEs reduce PR to just the media dimension by only assigning a value in this area.  If only AVEs are used to assess PR value, the results will understate the totality of value delivered by PR.  AVEs also cannot measure the value of keeping a client with potentially negative news out of the media, yet that may be the primary objective of the PR practitioner.

3. AVEs Fly in the Face of Integrated Measurement

Good marketing, branding and reputation campaigns have always been integrated to varying degrees.  The digitization of our lives has accelerated integration.

Advertising and PR actually work together synergistically, yet AVEs treat them as cost alternatives.  Studies have shown that ads run in a climate of positive publicity actually receive lift from the PR.  Conversely, ads run in an environment of negative publicity will likely not be successful and/or may be perceived negatively by consumers.  We have seen that exposure to brand advertising increases conversion rates in social channels.  Integrated campaigns and programs require integrated measurement.  AVEs don’t play well in this world.  They are analog and segregated in a digital and integrated world.

4. AVEs Provide No Diagnostic Value

Too much measurement energy is focused on score-keeping and not diagnostics.  This is one reason why single-number metrics like the Klout score and others have great appeal to many.  However, measurement is fundamentally about assessing performance against objectives with sufficient detail and granularity to determine what is working and what is not.  AVEs fail miserably in this regard.  AVE results can actually be misleading and result in false positives.  AVEs may be trending up while important metrics like message communication, share of favorable positioning and share of voice are falling.  Unfortunately, AVEs provide neither a valid single-number score nor any diagnostic value.

Some have said the Barcelona Principles are the ‘end of AVEs’.  I would agree directionally with that statement with one minor addition, Barcelona was the ‘beginning of the end of AVEs’.  Awareness of the practice and recognition of its flaws are at an all-time high in our industry.  More education and evangelism are required.  Understanding concepts like impact, tangible value, intangible value and (true) return on investment help foster much more sophisticated conversation about the total value delivered by public relations and social media.  AVEs are a disease, education and knowledge are the vaccine.  AVEs won’t die easily.  The momentum generated by the Barcelona event has provided focus and intent.  It is up to all of us to make AVEs a thing of the past.

Social Media Measurement 2011: Five Things to Forget and Five Things to Learn

30 Dec

It has been said that social media came of age in 2010.  Not so for social media measurement.  But the mainstreaming of social media marketing brings with it a heightened call for accountability.  The need to prove the value of social media initiatives has never been greater.  So, perhaps 2011 will be the year that social media measurement matures and comes of age.

As we look to the next year, here are five things to forget and five things to learn about social media measurement in 2011.

Things to Forget in 2011

1. Impressions

The public relations industry has historically measured and reported success through the lens of quantity, not quality.  The most common PR metric today is Impressions.  While it is a somewhat dubious metric for traditional media, it really loses meaning in social media, where engagement, not eyeballs, is what we seek.  Impressions also (greatly) overstate actual relevant audience.   Impressions merely represent an opportunity to see; they do not attempt to estimate the (small) percentage of the potential audience that actually saw your content.

For Twitter, many folks use the sum of all first generation followers as ‘impressions’ for a particular tweet.  The obvious problem here is that the probability that any one follower sees any one tweet is quite small.  I don’t have good data on this (please share if you do), but an educated guess might put the percentage at less than 5%.  Similarly for Facebook, use of impressions as a metric is also problematic.  Facebook impressions do not indicate unique reach and you don’t have any idea who, if anyone, actually viewed the content.

Number of Impressions is a flawed, unwashed masses metric for social media measurement.  Any time you are tempted to use the word ‘impressions’ in social media, think about ‘potential reach’ or ‘opportunities to see’ instead.  Or better yet, concentrate on Engagement and Influence.
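To make the gap concrete, here is a minimal sketch in Python of discounting raw follower counts into ‘potential reach actually seen’.  The 5% see-rate is the educated guess from above, treated purely as an illustrative assumption, not measured data:

```python
# Illustrative only: the see-rate is an assumed figure, not measured data.
def estimated_actual_reach(total_followers: int, see_rate: float = 0.05) -> int:
    """Discount a raw 'impressions' count (sum of first-generation
    followers) by an assumed probability that any one follower
    actually sees a given tweet."""
    return round(total_followers * see_rate)

impressions = 250_000  # sum of first-generation followers for one tweet
print(estimated_actual_reach(impressions))  # 12500 people, not 250,000
```

Even a generous see-rate turns a headline impressions number into a far smaller likely audience, which is why ‘opportunities to see’ is the more honest label.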

____________________________________________________________

2. Vanity Metrics – Fans and Followers

Most social media measurement efforts place far too much emphasis on Fans/Likers and Followers.  For Twitter, the number of Followers is seen as a key metric, thought by many to relate to potential influence.  For Facebook it is the number of Fans/Likers many companies/brands attempt to maximize.  While these may be the vanity metrics of choice, they fall far short of being adequate for rigorous measurement.  The largest disconnect of course is these numbers really don’t describe potential audience size very well and they have nothing to do with interactions/engagement.

For Twitter, there is a growing amount of evidence (read the Million Follower Fallacy paper) that number of Followers really has little to do with Influence.  Number of Followers may be an indication of popularity but not influence.  Influence speaks more to one’s ability to start conversations and spread ideas.  For Facebook, number of Fans bears little resemblance to average daily audience size and tells you nothing about engagement of the community.  All Fans are not created equal.  Some are engaged, some never return.  Some are your best customers, others are there only to trash you.

Number of Fans and Followers are metrics you probably should include in your overall metrics set, but should be de-emphasized and not be a primary area of focus.

________________________________________________________

3. Standardization

Measurement standardization is always an interesting topic to debate.  On one side you have the folks who believe standards are absolutely necessary for measurement to proliferate, and on the other side you have the snowflake measurement disciples who believe each program is unique and therefore requires unique objectives/metrics.  I fall somewhere between the two extremes.

In June 2010 IPR, AMEC, PRSA, ICCO and The Global Alliance got together in Barcelona for a conference intended to create an atmosphere for measurement consistency/standardization around a codified set of principles of good measurement.  The Barcelona Principles as they have come to be called are basic statements of good measurement practice – focus on outcomes not outputs, don’t use AVEs, etc.  Absolutely nothing to disagree with in the Principles.  However, the heavy lifting of standardization comes at the metrics-level.  Subcommittees have been formed that are taking the Principles all the way down to the metrics level.  I have reviewed the work of the social media committee and believe there is a lot of good work being done.

But in 2011, I expect a lot of debate but not a lot of progress in creating social media measurement standardization.   One to watch is the Klout score for online influencers which is being integrated as metadata in social media listening and engagement platforms.  There are issues with the Klout score (read this post), and I question the type of ‘influence’ it is measuring – there is a big difference between motivating someone to action (e.g. retweeting your content) and motivating someone to purchase which is ultimately the type of influence many companies and brands are most interested in effecting.

__________________________________________________________

4. Ad or Media Equivalency

One of the truly insidious aspects of public relations measurement is the use of advertising or media equivalency (AVEs – advertising value equivalency) to assign financial value to public relations outputs.  It is a highly flawed, path of least resistance attempt to calculate return on investment (ROI) for public relations.  There are many reasons why using ad equivalency as a proxy for PR value is not advisable.

To make matters worse, the practice has clearly moved into social media measurement as well.  For example, research studies that monetize the value of a Facebook Fan/Liker by attributing an arbitrary $5 CPM value from the advertising world.  Online media impact rankings also utilize equivalent paid advertising value to assign monetary value to online news and social media.  The true value of social media is not how much an equivalent ad would have cost but in the impact it has on brand, reputation and marketing.

__________________________________________________________

5. Return on Engagement/Influence/etc.

Not a day goes by without someone declaring a new and improved metric for the acronym ROI, or stating that ROI does not apply in social networks.  A recent Google search for “Return on Engagement” returned 192,000 results.  “Return on Influence” returned 68,300.

Most of the folks who use these terms either don’t understand ROI or don’t know how to obtain the data necessary to calculate it.  Many confuse the notion of impact with ROI (addressed in Things to Learn).  Engagement creates impact for a brand or organization, but may or may not generate ROI in the short-term.  Creating influence – affecting someone’s attitudes, opinions and/or actions – creates impact but may or may not create ROI in the short-term.  It is often better to think about measuring impact first and then deciding whether or not you have the means and data necessary to attribute financial value.

__________________________________________________________

Things to Learn in 2011

1. Measurable Objectives

There are many issues and challenges in the field of social media measurement.  The easiest one to fix is for everybody to learn how to write measurable objectives.  Most objectives today are either not measurable as written or are strategies masquerading as objectives.  (For example, any sentence starting with an action buzzword like ‘leverage’ is a strategy.)

‘Increase awareness of product X’ is not a measurable objective.  In order to be measurable, objectives must contain two essential elements:

  • Must indicate change in metric of interest – from X to Y
  • Must indicate a timeframe for the desired change – weeks, months, quarter, year, specific dates tied to a campaign (pre/post)

Therefore, properly stated, measurable objectives should look more like these:

  • Increase awareness of product X from 23% to 50% by year-end 2011
  • Increase RTs per 1000 Followers from 0.5% in Q1’11 to 10% by the end of Q2’11.

__________________________________________________________

2. Impact versus ROI

ROI is one of the most overused and misused terms in social media measurement.  Many people say ‘ROI’ when what they really mean is results or impact.  ROI is a financial metric – the percentage of dollars returned for a given investment/cost.  The dollars may be revenue generated, dollars saved or spending avoided.  ROI is transactional.

ROI is a form of impact, but not all impact takes the form of ROI.  Impact is created when people become aware of us, engage with our content or brand ambassadors, are influenced by engagement with content or other people, or take some action like recommending to a friend, writing a review or buying a product.  Impact ultimately creates value for an organization, but the value creation occurs over time, not at a point in time.  Value creation is process-oriented.  It has both tangible and intangible elements.

Your investments in social media or public relations remain an investment, creating additional value if done correctly, until such time as they can be linked to a business outcome transaction that results in ROI.

Most social media initiatives today do not (or should not) have ROI as a primary objective.  Most social programs are designed to create impact, not ROI, in the short-term.  There is also the notion that many social media initiatives are in an investment phase, not a return phase of maturity.

__________________________________________________________

3. Hypothetical ROI Models

One important step in determining how a social media initiative creates ROI for an organization is to create a hypothetical model that articulates the cascading logic steps in the process, as well as the data needed and assumptions used.  The model is most useful in the planning stages of a program.  It helps address the proverbial question, “If I approve this budget, what is a reasonable expectation for the results we will achieve?”  Let’s take a look at a simple Twitter example:

Program: Five promoted tweets are sent with a special offer to purchase a product on an e-commerce site.

Hypothetical ROI Model:

  • (Data)                   Total potential unduplicated reach of the five tweets is 1,000,000 people
  • (Assume)            10% of the potential audience will actually see the tweet = 100,000 people
  • (Assume)            20% of the individuals who see the tweet find it relevant to them = 20,000 people
  • (Assume)            10% of those finding it relevant will visit the site = 2,000 people
  • (Assume)            10% of those visiting the site will convert and buy the product = 200 people
  • (Data)                   Incremental profit margin on each sale is $50
  • (Data)                   Total cost of the social media initiative is $2,400

ROI Calculation: Profit = 200 x $50 = $10,000; Net return = $10,000 – $2,400 = $7,600; ROI = $7,600 / $2,400 = 3.17, or 317%

Our model suggests this program will be successful and generate substantial ROI.  If in reviewing a model with someone who needs to approve a program, they conceptually buy into the model but challenge the assumptions, that is a positive step.  Negotiate different assumptions and rerun the numbers.  Hypothetical models help you think through the data requirements your research approach must address in order to actually measure the ROI of the program after implementation.
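The cascade above is easy to turn into a small script, which also makes renegotiating assumptions and rerunning the numbers trivial.  This is a sketch of the same hypothetical model; every rate marked (Assume) is a negotiable assumption, not measured data:

```python
# Hypothetical ROI model for the promoted-tweet example above.
# Rates marked (Assume) are negotiable assumptions, not measured data.
potential_reach = 1_000_000   # (Data) unduplicated reach of the five tweets
see_rate        = 0.10        # (Assume) share who actually see a tweet
relevance_rate  = 0.20        # (Assume) share of viewers who find it relevant
visit_rate      = 0.10        # (Assume) share of those who visit the site
conversion_rate = 0.10        # (Assume) share of visitors who buy
margin_per_sale = 50          # (Data) incremental profit per sale, in dollars
program_cost    = 2_400       # (Data) total cost of the initiative, in dollars

# Walk the funnel from potential reach down to buyers.
buyers = (potential_reach * see_rate * relevance_rate
          * visit_rate * conversion_rate)
profit = buyers * margin_per_sale
roi = (profit - program_cost) / program_cost

print(f"{buyers:.0f} buyers, ${profit:,.0f} profit, ROI = {roi:.0%}")
# prints: 200 buyers, $10,000 profit, ROI = 317%
```

If a stakeholder challenges the 10% see-rate or the 20% relevance rate, change one line and rerun – the negotiation the post describes becomes a two-minute exercise.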

__________________________________________________________

4. Integrated Digital Measurement

The definition of public relations is fluid, and rapidly evolving to encompass a much broader and more integrated view of communications and how we connect, engage and build relationships with consumers and other stakeholders.  Digitization in all its forms has driven and accelerated this important change.  Communicators should now take a more content and consumer-centric view of the world, orchestrating all the consumer touch points available in our increasingly digital world.  At Fleishman Hillard, we capture this expanded scope and integration in a model we refer to as PESO – Paid/Earned/Shared/Owned.  Here is how we define the elements of our model:

Paid – refers to all forms of paid content that exists on third-party channels or venues.  This includes banner or display advertisements, pay-per-click programs, sponsorships and advertorials.

Earned – includes traditional media outreach as well as blogger relations/outreach where we attempt to influence and encourage third-party content providers to write about our clients and their products and services.

Shared – refers to social networks and technologies controlled by consumers, along with online and offline word of mouth (WOM).

Owned – includes all websites and web properties controlled by a company or brand including company or product websites, micro-sites, blogs, Facebook pages and Twitter channels.

The social media measurement Holy Grail in many ways is to be able to track behavior of individuals across platforms, online and offline, tethered and mobile, understanding how online behavior impacts offline behavior and vice-versa.  We also seek to understand how the PESO elements work together synergistically.  For example, how exposure to online advertising impacts conversions within social channels.  To address this, your measurement strategy should be to take a holistic, integrated approach using a variety of methodologies, tools and data.

_________________________________________________________

5. Attribution

If you are not already familiar with value attribution models, prepare to hear much more about them in 2011.  Value attribution models attempt to assign a financial value to specific campaigns and/or channels (e.g. advertising, search, direct, social) that are part of a larger marketing effort.  So rather than giving all the conversion credit to the last click in a chain or even the first click, the model attributes portions of the overall value across the relevant campaigns and/or channels.

A simple model might look at the following metrics for each channel:

  • Frequency – the number of exposures to a specific marketing channel or campaign
  • Duration – time on site for exposures referring to the conversion site
  • Recency – credit for exposures ranging from first click to last click, with last click typically receiving more credit.

Value attribution models require human analysis and expertise.  This factor is often cited in studies as the reason more companies do not pursue attribution modeling.
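As a toy illustration of the idea (the channels, weights and scoring here are invented for this sketch, not an industry standard), a simple model might combine each channel’s frequency, duration and recency into a score and split the conversion value proportionally:

```python
# Toy attribution sketch: split one conversion's value across channels
# in proportion to a combined frequency/duration/recency score.
# Channel data and weights below are illustrative assumptions.
conversion_value = 100.0

channels = {
    # channel: (exposures, seconds on site, recency score 0-1)
    "search":  (2, 180, 1.0),   # last click before conversion
    "social":  (3, 240, 0.6),
    "display": (5,  30, 0.2),   # first click in the chain
}

def score(freq, duration, recency, weights=(0.3, 0.3, 0.4)):
    """Weighted channel score; recency gets slightly more weight,
    echoing the last-click bias noted above."""
    return weights[0] * freq + weights[1] * (duration / 60) + weights[2] * recency

scores = {ch: score(*vals) for ch, vals in channels.items()}
total = sum(scores.values())
credit = {ch: round(conversion_value * s / total, 2) for ch, s in scores.items()}
print(credit)  # fractional credit per channel, summing to ~$100
```

Note that no channel gets 100% of the credit, which is the whole point: the model spreads value across the chain rather than handing everything to the first or last click.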

_________________________________________________________

Here’s wishing you and yours an exciting and prosperous 2011!
