Two Keys to Low Cost Measurement

23 Jul

Cost is one of the key inhibitors of public relations measurement becoming more prevalent.  It probably is THE key inhibitor, with ignorance/lack of education a close second.  Cracking the code on lowering total costs of measurement would go a long way toward making measurement the rule rather than the exception.  In order to understand how to lower costs, you first have to understand what the largest cost drivers are in most measurement programs today.  The primary cost drivers are content acquisition/aggregation and human analysis of articles.

Up to 40% of the total cost of a media content analysis program can simply be acquiring and aggregating the content to be measured.  Common content services like Factiva, Lexis Nexis, Bacons, eWatch and VMS are not cheap.  In order to cast a wide net, many PR professionals feel they need multiple services to cover (almost) every possible outlet where coverage may occur.  Content costs can quickly get out of hand.

The other major cost driver is the need to have real humans analyze coverage.  And no, automating article analysis is not a truly viable option right now.  IMHO, the accuracy of such systems is not high enough to justify the potential cost savings.  Even with many content analysis operations being off-shored to low-cost countries like India, the cost for analysis on a per-article basis ranges from $1.50 – $3.00.  If you are garnering a lot of coverage, these costs can add up in a hurry.
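
To put rough numbers on that, here is a back-of-the-envelope sketch in Python.  The per-article rates are the ones cited above; the monthly clip volume is a made-up figure for illustration only.

```python
# Rough cost estimate for human article analysis (illustrative only).
# The $1.50 - $3.00 per-article range is from the post; the volume is hypothetical.
articles_per_month = 2000                # hypothetical coverage volume
rate_low, rate_high = 1.50, 3.00         # per-article analysis cost (USD)

analysis_low = articles_per_month * rate_low
analysis_high = articles_per_month * rate_high
print(f"Monthly analysis cost: ${analysis_low:,.0f} - ${analysis_high:,.0f}")
# At this volume: $3,000 - $6,000 per month for analysis alone, before adding
# content acquisition, which can be up to another 40% of total program cost.
```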

The good news is that one can address both of these cost components by simply measuring a subset of your total coverage rather than every single article.  There are two ways to accomplish this – by taking an nth sample of your total coverage, or, the approach I prefer, by determining the relatively short list of publications/outlets that have the most influence on your targets and measuring only coverage within this smaller population.  By relatively short, think 100 total publications or fewer.  With a little work, you can probably get your list down to 50 outlets that really make a difference.  I worked with an F500 company that targeted 64 publications they felt really helped move their business.  That is 64 globally.
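
For readers who like to see it in code, here is a minimal Python sketch of the two approaches.  The article records, field names and the outlet list are purely illustrative, not taken from any particular tool or client.

```python
# Two ways to measure a subset of coverage instead of every article (illustrative).
articles = [
    {"outlet": "Wall Street Journal", "headline": "..."},
    {"outlet": "Small Town Gazette",  "headline": "..."},
    {"outlet": "Financial Times",     "headline": "..."},
    # ... the rest of the period's clips
]

# Option 1: an nth (systematic) sample of total coverage.
n = 5
nth_sample = articles[::n]          # every 5th article

# Option 2 (preferred above): analyze only the short list of outlets
# that actually influence your targets - think 50-100 publications.
influential_outlets = {"Wall Street Journal", "Financial Times"}
influential_sample = [a for a in articles if a["outlet"] in influential_outlets]

print(len(nth_sample), "sampled;", len(influential_sample), "from influential outlets")
```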

By confining your measurement program to the most important and influential outlets, you hold down both content and analysis costs.  So why don’t more people pursue this easy fix?  I believe it goes back to the industry attitude that views measurement as a score-keeping mechanism more than a diagnostic tool (read more here).  If our industry remains a prisoner of the ‘tonnage’ model of coverage, then reducing measurement costs is very difficult.

To close on a positive note, the quality of the free content sources and tools is getting better and better.  Now that it supports archiving, Google News is a viable source for acquiring content.  Google Analytics and BlogPulse from Nielsen Buzzmetrics provide some interesting blog and website metrics at an aggressive price point – free.

Thanks for reading.  -Don B

9 Responses to “Two Keys to Low Cost Measurement”

  1. Katie Paine July 24, 2008 at 9:20 am #

    Great post Don! We’ve found that about 20% of the normal reading list is sufficient to get a good understanding of what the marketplace is seeing. It’s much more important, IMHO, to include the competitors’ clips in a smaller universe of media than it is to try to capture and analyze the world.

  2. Don Bartholomew July 24, 2008 at 3:37 pm #

    Thanks for the additional insight, Katie. Could not agree more that it is crucial to do competitive clipping in the smaller universe. It never makes sense to only analyze your own coverage. With competitive clipping in a finite set of influential publications you get interesting metrics like SOV in Most Influential Media.

  3. Andrew Laing July 24, 2008 at 10:49 pm #

    Yes, couldn’t agree more. I often trot out polling as a reference point to some clients. We recently had a client come back to us several times to include a mystery item we couldn’t locate that apparently aired on a rural radio station in Saskatchewan – they only backed off when we told them that even if it did air, we estimated an audience reach of around 300 (excluding cattle).

    Not sure about Google News as a source, though. My casual experimenting with it shows differing results depending on time of day, moon phases, etc. It doesn’t seem to be so much an archive of content as an archive of a set of dynamic links that keeps changing, so it is difficult to use it to generate a consistent sample over time. Others might have a different experience, however.

    Always dead on in your insights, hope to touch base soon.

  4. Inga Starrett July 25, 2008 at 4:55 pm #

    So happy to hear additional support for lack of faith in currently available AI engines. So often you hear clients saying they rely on these systems even without understanding the limitations.

  5. Don Bartholomew July 25, 2008 at 5:10 pm #

    Andrew,
    Thanks for reading and your comment. When do we need to start referring to you as Dr. Laing? I’m unsure about Google News as a long-term solution as well. If Google is using an algorithm similar to what is used for search, then what you say makes sense. Their search algorithm is driven to some large degree by relevance, and relevance in Googleland is largely determined by links. Perhaps someone with more hands-on experience can clarify for us. Thanks again, Don B

  6. Chuck Hemann July 30, 2008 at 6:20 pm #

    Don – great post as usual. We utilize Google News only as a backstop. As your other readers have noted, the content can be spotty. Not to mention that content usually goes back only 30 days.

  7. metricsman July 30, 2008 at 8:04 pm #

    Thanks for your comment and insights, Chuck. I think we are seeing a pattern of concern around the use of Google News. -Don B

  8. Mark

    Don,

    If you’re analyzing content for the purposes of gaining strategic insight and directional guidance, a limited media sample works fine, as new insights diminish beyond a given point. However, most PR people – even those who invest in measurement – are using tonnage as a surrogate for “high performance.” In my opinion, and I’m sure it’s shared by people who are drawn to your blog, until the balance shifts from “proving value through clip volume” to “gaining insight for continual improvement,” we will continue to see this form of investment decision-making on the part of PR people. Unfortunately, most of our colleagues in PR perpetuate the myth that evaluation equates with tabulation and volume equates with high performance; what’s worse is that those executives who invest in PR accept the myth as truth.

  9. Don Bartholomew August 6, 2008 at 6:51 pm #

    Mark,
    Thanks for the insightful comment. You are exactly right that most folks equate volume with high performance – I had not thought of the situation in those words. Very well said.

    In addition to gaining insight for continuous learning and improvement, I wish the pendulum would swing more toward outcomes – what happened as a result of the coverage – rather than limiting our thinking and expectations to just coverage. Coverage should always be thought of as the strategy and not the objective of PR IMO. Thanks again, Mark. -Don B
