Archive | June, 2011

AVEs Don’t Describe the Value of Media Coverage, They Sensationalize It.

26 Jun

On Saturday, Wall Street Journal columnist Carl Bialik, The Numbers Guy, addressed the subject of advertising value equivalency (AVE).  This is perhaps the first example of a mainstream media publication shining a light on the controversial practice of AVEs.  (You can read the story here.)

The primary reason advertising value equivalents exist is that they are perceived to be a way to attribute value to programs that would otherwise be difficult to value directly.  They are a path-of-least-resistance approach to return-on-investment calculations, but not a valid one.  Let’s take a deeper dive into the three specific examples in the WSJ story, ask the tough questions and discuss more valid ways to think about value attribution and ROI.

American Airlines  

You can enjoy both questionable valuation techniques and hyperbole in this article.  American Airlines stands to “make boatloads of cash” and “the airline company could gain as much as $95.9 million of exposure”.  Oh really?  Let’s take a closer look.

The most incredible part of this financial calculation is the financial calculation itself.  The calculation is apparently based on sign placement within the arena and presumably the ‘impressions’ the brand will receive when people attending the venue see the signage and when TV cameras catch the signs while showing the scoreboard or during the action.  This is a very passive form of advertising whose objective should be either creating top-of-mind awareness or perhaps creating more brand affinity.  Rather than using an advertising equivalency model that has no validity, a true measurement of the value created by naming rights would ask a series of questions designed to determine the actual, tangible (or even intangible) impact on the business:

  • Revenue: Can incremental revenue generation in the form of higher passenger miles be directly attributed to the exposure created by the naming rights?  Is it possible that incremental revenue would actually be realized on a game by game basis, or would any positive impact be realized over a longer time horizon?  Have new customers been created as a direct result of the exposure generated by the naming rights?
  • Brand: Can the increased exposure lead to people perceiving the brand differently and can the difference translate into higher transactional revenues generated or increased brand loyalty?

So where exactly are the ‘boatloads of cash’ American Airlines made?  Are they hitting the income statement in the form of incremental revenue or enhanced brand loyalty (repeat business)?  Are they residing on the balance sheet in terms of brand goodwill?  Given that American’s parent company AMR lost $11.5 billion in the first decade of the 21st century, that its last profitable year was 2007 and that it is projected to lose money in 2011 and 2012, the company could use the cash.  Perhaps it could use the money to fund a ’bags fly free’ program or to enhance its AAdvantage program to create more brand loyalty.  I strongly suspect American’s shareholders would prefer a do-over on the investment made in naming rights to the ‘boatloads of cash’ they are now enjoying from it.

Couple Won’t Cash In on Kiss

15 minutes of fame is rarely worth $10 million.  In this case, the celebrity agent is suggesting the news value of the coverage generated by the kiss is somehow equivalent to advertising value, and he assigns what appears to be an arbitrary and ridiculously high value to it.  (He later admits he just made the number up.)  Just how was the couple going to monetize their 15 minutes of fame?  Yes, they turned down a few talk show opportunities, and perhaps the National Enquirer would have thrown a few dollars their way for an exclusive, but the assertion that any major brand would have paid them to endorse its product is wildly speculative.  I would guess that if you did a survey after the event, only a small number of people would remember seeing the coverage, and a very small percentage of those who did would recall Scott Jones’ name.  So perhaps Mr. Jones walked away from tens of thousands of potential dollars in the short term, but nowhere near the sensationalized estimate.  15 minutes of fame might be worth $10,000, but certainly not $10 million.

Obama Enjoys a Guinness

So Guinness is a winner and received $20 million worth of “free publicity”?  What was the outcome of the publicity?  Again, in order to determine the value of the “free publicity” (a term despised in the PR industry, by the way), Guinness would have to be able to measure incremental revenues directly attributable to the publicity generated.  Did sales of Guinness increase as a result?  Were new customers created?  Did existing customers feel compelled to drink even more?  What was the value of the incremental sales?  These are much more difficult questions to answer, but they are the correct ones to ask in order to measure the publicity.  The right approach is not to focus on the mythical value of the coverage as measured by flawed advertising equivalency, but to measure the outcome – what happened as a result of the publicity.  The assertion that President Obama’s image was softened and that this will help keep him in the public’s favor is highly dubious thinking.  Perhaps it helps him in Boston, but in the grand scheme of things, this is a Presidential image non-event.

Beginning last summer in Barcelona, the public relations industry has come together to publicly state that advertising value equivalency is not a valid measure of public relations.  The so-called Barcelona Principles explicitly reject AVEs and also call for a focus on measuring outcomes and not (just) outputs.  While it will take some time for the PR industry to leave AVEs behind entirely, there is a lot of momentum right now to make this happen sooner rather than later.  No serious measurement effort can use advertising value equivalency to attribute value and remain credible.

Social Media Listening Platforms – Plan, Select, Deploy (Part Three – Deploy)

17 Jun

In Part Two of this series on social media listening platforms we offered a process for selecting a social media listening platform vendor.  Now it’s time to deploy the tool across your organization effectively, with minimal disruption, and put it to work.

Configuration – We talked about value-added services in the first post in this series.  One of the services offered by many listening platform vendors is configuration.  You’ll have to decide if you want to have the vendor perform system configuration or do it yourself.  In some cases you have no choice – you submit keywords, topics and themes to the vendor and the system is programmed for you.  In other cases some basic configuration must be done by the platform vendor but the bulk of the configuration can be a DIY project.

Keywords and Topics – In part one of this series, we discussed the need to think through the keywords required to bring all relevant content into your platform.  The keywords might be company name, product/brand names, competitors, issues, segment names, executives and spokespersons and key messages.  During deployment you will need to build taxonomy around many of the keywords that represent concepts rather than singular ideas or names.  For example, if you have a message that centers on being an innovative company, you will have to decide what expressions in addition to the keyword ‘innovative’ may be classified as innovation –  leading-edge, technology leader, R&D leadership, breakthrough products, etc.  You will also have to decide words and terms to exclude from your analysis.  Both of these processes are iterative – make a change, check content relevancy, adjust, repeat.
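The iterative include/exclude loop described above can be sketched in code.  This is a minimal illustration, not any vendor’s actual API – the theme terms, exclusion terms and the classify_post helper below are all hypothetical:

```python
# Minimal sketch of theme classification using include/exclude keyword lists.
# All terms and names here are hypothetical, for illustration only.

INNOVATION_TERMS = {"innovative", "leading-edge", "technology leader",
                    "r&d leadership", "breakthrough"}
EXCLUDE_TERMS = {"job posting", "press release archive"}  # noise to filter out

def classify_post(text):
    """Return True if the post matches the 'innovation' theme and hits no exclusions."""
    lowered = text.lower()
    if any(term in lowered for term in EXCLUDE_TERMS):
        return False
    return any(term in lowered for term in INNOVATION_TERMS)

# Check relevancy on a small sample, adjust the term lists, and repeat.
posts = [
    "Acme ships a breakthrough product line",
    "Acme job posting: sales associate",
]
matches = [p for p in posts if classify_post(p)]
```

Running the sample through the classifier, reviewing what was caught and what slipped through, then adjusting both term lists is exactly the make-a-change, check, adjust, repeat cycle described above.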

Integration – There are a few different types of integration you may want to tackle during platform configuration and deployment.  Each of the possible forms of integration will take a little time to accomplish and may require some back and forth between you and the platform vendor and/or vendor to vendor.  I am a big fan of web analytics and social media integration.  With many listening platforms this is relatively straightforward to accomplish.  You may also want to integrate third-party data sources like Factiva, LexisNexis, VMS or Critical Mention.  Assuming the listening platform vendor you selected supports this type of integration, it also is relatively straightforward.  To address latency issues, make sure you specify load times for the content.

Reports and Workflow – Previously, we addressed many of the basic questions around reports and reporting.  In the deployment phase it’s time to make it real.  Design specific templates for each report you need.  Create a mock-up and share with your stakeholders to make sure everyone is on board with the look, feel and utility of the report.   You will want to test the various delivery mechanisms to be employed including all email clients and mobile platforms you believe may be used.  Generally speaking, assume a significant percentage of the audience may look at the report on a mobile device, making this an especially important dynamic to test.  Once you have the report format established, define your workflow process – who pulls data and when, who creates visuals and by when, who compiles and edits the report and by when, and who is responsible for distribution and against what schedule.

Training – The first decision to make with training is whether you want to tackle it yourself or rely on the listening platform vendor to perform the training.  Some vendors have very strong training programs and others not so much.  Some vendors charge for training and some do the bulk of it for free.  You most likely will want to take a train-the-trainer hybrid approach: have a core team of one to three people trained by the platform vendor, and then charge this team with training within your company or organization.  With respect to timing, make sure to begin training only after everyone has a log-in to the system so they can actually use the system during the training.  I usually refer to this as training with live ammo.  If you don’t do this you’ll find the half-life of training is pretty short – folks forget most of what they have learned very rapidly.  I also find a tell-show-do teaching methodology works very well (my friends at Radian6 approach training this way).  Show some slides that cover the basics, show a video or canned demo that brings it to life and then have everyone do some hands-on exercises using the platform.  Remember you will need to address initial training needs as well as ongoing needs as new users are brought onto the platform.

Event-specific and Programmatic Planning – Related to keyword analysis and taxonomy build-out, it may be wise to create keyword groups for programs you know you will be asked to listen to and measure, and for any potential events, like a crisis, that you can anticipate or imagine.  With respect to programmatic listening and measurement, generally a combination of the right keywords and date-ranging will allow you to pull in program-specific content.  If programs are known at the time of configuration and deployment, get ahead of the curve and set up the keyword groups or source filters you may need.

If your company, brand or organization has a social listening program, you are remiss if you don’t include specific keywords that can serve as an early-detection system for a potential crisis.  For example, depending on the type of organization and industry, it may be advisable to set up a keyword search like this: Company Name AND (fire OR explosion OR shooting OR recall OR kidnapping OR crash).  Note the parentheses: without the grouping, any single crisis term on its own would match, whether or not your company is mentioned at all.
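As a minimal sketch of why the grouping matters – the company name and helper function below are hypothetical – the crisis terms are combined with alternation, and a post only qualifies when the company name co-occurs with at least one of them:

```python
import re

COMPANY = "Acme Air"  # hypothetical company name
CRISIS_TERMS = ["fire", "explosion", "shooting", "recall", "kidnapping", "crash"]

def is_potential_crisis(text):
    """Company Name AND (term1 OR term2 OR ...) – both conditions must hold."""
    lowered = text.lower()
    if COMPANY.lower() not in lowered:
        return False  # without this check, "fire" alone would trigger an alert
    pattern = r"\b(" + "|".join(CRISIS_TERMS) + r")\b"
    return re.search(pattern, lowered) is not None
```

A post like “warehouse fire downtown” should not trigger an alert, while “Acme Air announces recall” should – the parenthesized grouping in the saved search is what enforces that.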

In today’s real-time world, in my opinion, it is no longer optional to have social media listening capabilities.  As a result of this three-part series on listening platforms, I hope you are better equipped to plan, select and deploy your platform effectively.

Thanks for reading.

Social Media Listening Platforms – Plan, Select, Deploy (Part Two – Select)

2 Jun

In Part One, we discussed a range of topics designed to help you plan and define the scope and requirements for selecting and deploying a social media listening platform across your company or organization.  In Part Two, we will use the knowledge and perspective we gained in planning to orchestrate a thorough and effective platform selection process.

Here is a scalable selection process that will help you surface and select the social media listening platform that best meets your unique situation and requirements.

1. Define the individuals who will be involved in the selection process – Inclusion is a powerful card to play here.  Inclusion brings different perspectives together.  Inclusion greatly improves chances for success when it is time to authorize purchase of a platform and get it deployed properly across the organization or company.  Inclusion will increase the likelihood of acceptance and use of the platform across the organization.  Include representatives from the major stakeholder groups identified during the planning process.  You might include someone from your IT department.  You might also include the individuals who must authorize the purchase.  A group of up to ten is most workable.  After ten or so, I believe you will most likely experience diminishing returns on the incremental people added to the process.

2. Develop a list of selection criteria organized by major category – Based on the planning process we undertook in Part One, develop a list of categories that are most important to learn more about.  Here are ten categories you might consider including:

  • Content Sources/Types & Aggregation Strategy – What types of social content are brought into the system?  How is the content aggregated (e.g. RSS, crawling, third-party aggregators)?  How often is each type of content aggregated?  
  • Data and Search Considerations – How long is content archived, and is back data available?  What data cleansing strategies are in place to address spam, splogs and duplicate content?  Is full Boolean logic available for constructing searches?
  • Metrics and Analytics – What specific metrics are ‘standard’ in the system?  Is automated sentiment analysis offered at the brand or post level?  What audience-level data is available?
  • Data Presentation  – What dashboard features and functionality are offered?  Can dashboards be customized by user or group?  Are drill-down capabilities available for all analytics on the dashboard?    
  • Engagement and Workflow Functionality – Does the platform offer the ability to engage directly with content owners?  Can ‘owned’ content be managed on-platform?  What workflow management and reporting capabilities are offered?
  • Integration – What additional types of data may be integrated in the system – traditional media, web analytics, email, call center, CRM, etc?
  • Reporting Capability – Does the platform have a report function?  Can reports be customized?  Automated?
  • Geographic Scope – What countries and languages are addressed by the system?  Are double-byte languages supported?
  • Cost Structure – What is the cost basis – seat charge, subscription, content volume and/or number of searches?  How does pricing vary with increases in the cost basis?
  • Value-added Services – Does the listening platform vendor offer system configuration services?  Do they perform analysis and reporting?

Within each major category, list the specific criteria most relevant and important to your requirements.  For example, within the Data and Search Considerations category, you might list ten specific criteria that you want to assess for each vendor:

  • How often is Twitter data refreshed?  Can refresh timing be specified?
  • How often is new content from other sources crawled/brought into the system?
  • How long can each content type be archived?
  • Is back data available?  How far back and at what cost?
  • What data cleansing strategies are in place?
  • Can data be easily exported in CSV/Excel format and is bulk data extraction supported?
  • Can users build and customize topics and searches?
  • What types of Boolean operators are supported?
  • Is proximity search supported?
  • Do users have the ability to date-range data for analysis?

3. Develop a scorecard to use in evaluating the potential listening platform vendors/partners – Using the major categories and specific criteria you have defined, develop an overall scorecard to be used in the evaluation process.  Think about creating a weighting system at the category level to help prioritize the importance of each category.  Assign a number of points to each criterion within a given category.  A scorecard might contain ten categories each containing ten criteria.  Begin by assigning a one-point value to each criterion (100 points total) and then apply weighting at the category level.
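The weighting arithmetic can be sketched as follows.  The category names, weights and per-criterion scores below are made up purely for illustration:

```python
# Hypothetical category weights; in a full scorecard the weights across all
# ten categories would sum to 1.0.
WEIGHTS = {"data_and_search": 0.20, "metrics": 0.15, "cost": 0.10}

def weighted_score(vendor_scores, weights):
    """vendor_scores maps category -> list of per-criterion points (e.g. 0 or 1)."""
    total = 0.0
    for category, criteria in vendor_scores.items():
        category_points = sum(criteria)            # up to 10 points per category
        total += weights.get(category, 0) * category_points
    return total

# One hypothetical vendor: 8/10 on data, 7/10 on metrics, 9/10 on cost.
vendor_a = {
    "data_and_search": [1] * 8 + [0] * 2,
    "metrics": [1] * 7 + [0] * 3,
    "cost": [1] * 9 + [0],
}
score_a = weighted_score(vendor_a, WEIGHTS)
```

Scoring every vendor with the same weights keeps the comparison consistent, and changing a category weight in one place re-ranks all vendors at once.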

4. Develop the initial vendor consideration set – List all the social media platform vendors you wish to consider.  Pick ones you are familiar with and have positive experiences with as a starting point.  Talk to colleagues within, and experts outside, the organization to gain their perspective on the platforms that should be considered.  Read blog posts and reviews of the platforms to gain additional outside perspective.  Visit vendor websites and watch demo videos.  Pull it all together and gain consensus amongst your team on the platforms that will be considered.

5. Do some homework and narrow the list to a manageable number (perhaps five to ten) – If your initial vendor consideration set is too large (if it has more than ten vendors it is too large), do some additional homework and narrow your list to a more manageable number.

6. Develop and distribute an RFI based on evaluation criteria – Using the categories and criteria you developed, create a request for information, asking the listening platform vendors the questions that are most critical to meeting your requirements.  Specify the format (e.g. PowerPoint, Word) you would like responses to take.  Give the vendors about two weeks to respond.

7. Evaluate and score vendor responses – Once the RFI documents are received, each should be reviewed carefully and scored according to the criteria and weighting decided previously.  Depending on the number of vendors being evaluated and the ease of getting the entire evaluation team together, there may be merit in blocking out an afternoon to gather as a group, read through the responses, and decide how each will be scored.  This is a bit of a ‘pulling off the band-aid’ approach that will save time and allow for spirited discussion and consensus scoring.  If this is impractical for whatever reason in your company or organization, assign one or more RFIs to individuals who will then develop the scorecards.  The scorecards may then be reviewed together in a meeting or conference call, and consensus reached on scoring.  Obviously the potential issue with multiple people independently creating scorecards is consistency.  You want the evaluation to be as fair and consistent as possible given whatever constraints you are working under.

8. Develop a short list of vendors – If your number of vendors under consideration is over five, use the scorecards to reduce the list to three to five platforms that will undergo further evaluation.  These are your finalists.  You should always promptly notify vendors not moving forward in the process, and offer to provide feedback via phone or email on why they were not selected to move forward.  This professionalism will be much appreciated by the vendors, and represents a good learning opportunity for all involved if done well.

9. Deploy test scenarios – At this point we have narrowed the list of contenders and are ready to proceed with some specific tests designed to illuminate the real-world capabilities of the platforms.  Here are three possible test scenarios.  You can use all three for a very rigorous evaluation, or just one or two if that fits your needs better.

  • Test scenario 1: Give each vendor a defined list of search terms (brands, competitors, issues) and the languages/countries you want to evaluate.  You should use search terms that are directly relevant to your company or organization.  Explain what type of analysis you would like performed and ask each vendor to address insight generation.  Give the platform vendors one week to prepare an analysis.  If practical, you could ask each vendor to present the results in person.  Alternatively, use a web conference to review the results.
  • Test scenario 2: This is a real-time exercise designed to assess vendor data volume by country/language and the signal-to-noise ratio of relevant content.  Get on a web conference with each social media listening platform vendor.  Give them a new list of three search terms and ask that they go into their platform, configure the system for the three search terms and then pull in relevant content for the past 30 days.  Once that is accomplished, ask them to export the data as a CSV or Excel file and email you the results while everyone is still on the line.  A more detailed off-line review of the results should be undertaken, including translation of languages, to assess the relevancy of the results.
  • Test scenario 3: This has been referred to by a colleague as the Dr. Evil test…In conjunction with test scenario two, it may be interesting to ‘plant’ known content that matches the search terms on different Twitter channels, Facebook pages and forums in each country that is of interest to you.  When you receive your data export, examine it to determine whether the known content was found.
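Checking an export for planted content can be automated with a short script.  This is a sketch under assumed conditions: the planted phrases, the CSV layout and the ‘content’ column name below are all hypothetical:

```python
import csv
import io

# Hypothetical planted phrases – distinctive strings you seeded on social channels.
PLANTED = ["zebra umbrella protocol", "quantum llama launch"]

def find_planted(csv_text, planted_phrases):
    """Return the set of planted phrases that appear in the export's content column."""
    found = set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        content = row.get("content", "").lower()
        for phrase in planted_phrases:
            if phrase.lower() in content:
                found.add(phrase)
    return found

# Toy stand-in for a vendor's CSV export.
export = "date,content\n2011-06-01,Testing the zebra umbrella protocol today\n"
missing = set(PLANTED) - find_planted(export, PLANTED)
```

Any phrase left in the missing set is planted content the platform failed to pick up – exactly the gap the Dr. Evil test is designed to expose.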

10. Pick a winner – At this point you have the RFIs, scorecards and test results.  You are ready to make your decision.  Convene the evaluation team, discuss the results and make a decision.  With luck, a clear winner will have emerged from the process.  Contact the winner and negotiate terms of a contract.  Don’t notify the non-winners until after a contract is in place, just in case you need to move to your second choice for whatever reason.

In Part Three, we will discuss how to maximize your potential for success when actually deploying the social media listening platform across your organization.
