GAP IV Study Points Out Measurement GAP

8 May

Results of the fourth annual Public Relations Generally Accepted Practices Study (GAP IV) were released today.  The study is published by the USC Annenberg Strategic Public Relations Center (SPRC), and is intended to provide the PR profession with data on evaluation, organization, budgeting, emerging trends, use of outside agencies, perceptions of the PR function and other important topics. 

The study has many interesting findings, but let's take a look at what it says about Evaluation/Measurement:

  • Respondents spent only 4% of their total PR budgets on evaluation
  • 'Ability to Quantify Results' (12%) ranked low among the reasons a company would choose to use an outside agency ('Additional arms & legs' at 51% and 'Complements our internal capabilities' at 47% were the two highest-ranked factors)
  • 64% of all respondents (77% of F-500 respondents) report to the C-Suite in their organizations
  • The differences between the metrics used by organizations where PR reports to the C-Suite and those used where PR reports to Marketing are dramatic.  C-Suite metrics tend to be more strategic and organizationally focused, while the metrics used by those reporting to Marketing are more media- and sales-oriented
  • 'Influence on corporate reputation' has been the highest scoring metric in each of the four GAP studies, yet as the authors point out, "there are currently no consistently reliable, generally-accepted, quantifiable methods for correlating PR activities with reputation." 
  • PR is seen as contributing to the bottom-line success of the company (5.30 vs. top-ranked Finance at 5.59)

To amplify the fourth bullet above: in companies where PR reports to the C-Suite, the following metrics were significantly more likely to be used for evaluation:

  1. Crisis avoidance/mitigation
  2. Influence on corporate culture
  3. Influence on corporate reputation

In companies where PR reports to Marketing, the following metrics were significantly more likely to be used:

  1. Contribution to sales
  2. Total circulation of clips
  3. Total number of clips

To me, there are two key learnings from the study:

  • Confirming my personal experience and observations (see my earlier post), PR is increasingly reporting to the C-Suite and is seen as playing a strategic role within organizations, and
  • There is a tremendous gap between what most companies actually measure today and the new requirements to measure how PR is impacting the organization

The measurement gap refers to the fact that the measurement industry today is focused on media content analysis (outputs measurement) while organizations increasingly value PR for our contributions in moving the needle on reputation, culture or sales (outcomes).  We may have great data on number of impressions or % of articles containing key messages, but how are we proving the value of PR's contribution to the more strategic outcomes the C-Suite demands?  This is a great challenge to the measurement community – we are not necessarily measuring the right things (media clips), and lack agreed-upon metrics and approaches for the types of measurement our new environment demands.  We need to evaluate the tangible contributions of PR (e.g. sales) as well as the intangible (e.g. brand or reputation).  And we need to not only look at PR's contribution to sales, but to PR's contribution in reducing costs (e.g. regulation or litigation) and reducing risks (e.g. new product introductions, downsizing).  So many challenges – so little time.

 Thanks for reading – Don B.

Disclaimer: The author is a member of the Professional Advisory Committee for the GAP Study       

11 Responses to “GAP IV Study Points Out Measurement GAP”

  1. David Phillips May 9, 2006 at 9:10 am #

    These are interesting findings.

    Among the more significant is that marketing and so-called PR ‘marcoms’ are moving away from the centre of gravity of PR practice. It is important that they do, when the whole idea of marketing as we know it is in decline. Noticeably, marketers are bullish about their future while employers are telling us that employment prospects are not great. PR, on the other hand, is growing in significance, and especially in C-suite significance.

    How do we respond in terms of evaluation? I think there are methodologies already available that are helpful.

    Effective ‘stakeholder’ benchmarking is an approach that Jon White and I have used to good effect, and the Clarity Concept is particularly helpful for issues management. Interestingly, we have found that the use of visualisation with C-suite managers is very effective and engages them. It also turns them into an interesting focus group.

    The use of Latent Semantic Analysis (LSA) on news and blogs is interesting because it can deal with the ‘long tail’ in both media. This approach shows the rise, morphing and decline of subjects (issues, brand attributes, etc.) relevant to an organisation, and my experiments so far suggest it is a considerable intelligence tool with built-in evaluation. It has the advantage that it taps into the conversation in time frames that can be made variable (when issues are significant, the approach can go down to half-hourly updates).
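
    To make the LSA idea concrete, here is a minimal Python sketch, assuming scikit-learn as the toolkit; it only illustrates the general technique, not the specific tooling described here. TF-IDF vectors are reduced with truncated SVD (the usual LSA construction) so that news or blog items can be scored against a subject of interest, and repeating the scoring per time window shows a subject rising or declining.

        # Minimal latent-semantic-analysis sketch (illustrative only).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical corpus: news/blog items collected in one time window.
        docs = [
            "new product launch praised by reviewers",
            "recall announced after safety complaints",
            "quarterly results beat analyst expectations",
            "bloggers question the company's safety record",
        ]

        # Project the items into a low-dimensional 'concept' space (LSA).
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(docs)
        lsa = TruncatedSVD(n_components=2, random_state=0)
        X_lsa = lsa.fit_transform(X)

        # Score each item against a subject of interest (e.g. product safety).
        query = lsa.transform(tfidf.transform(["product safety"]))
        for doc, score in zip(docs, cosine_similarity(query, X_lsa)[0]):
            print(f"{score:+.2f}  {doc}")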

    This year a number of my students have been looking at blog posts to assess their potential as a research resource. The findings are mixed. We do see bloggers representing a media view as well as reflecting content that comes up in the blogosphere. Results are not even and testing is needed to ensure efficacy, but there is encouraging news on this front. Combined with LSA it is powerful. Certainly, this capability has a number of advantages over traditional surveys and polls for larger organisations and better-known products and services.

    Compared to internal data mining for changes in performance (sales visits vs. conversions, site visits vs. interactions, etc.), these methodologies do have advantages because they are less susceptible to pollution and wishful thinking, but this is a case of ‘in addition to’, not ‘instead of’.

    If the demand is there, I am sure there is capability.

  2. Derek Hodge May 9, 2006 at 9:22 am #

    I would be interested to know the basis for the report’s assertion that “from a statistical point of view the pool of respondents constitute a representative sample of the population”.

    A response rate of 496 replies from questionnaires sent to 8,500 organisations is not high.

  3. Jeffrey Hall, GAP Senior Analyst May 10, 2006 at 8:49 pm #

    In response to Don Bartholomew’s second post, given our population size (8,500 companies) and our sample size (496 companies), any statistic offered in the GAP study for the entire sample (i.e., all respondent categories) is representative of the entire population with 96.3% accuracy. This is well within the 95% confidence interval standard. For specific company types and sizes, actual population parameters may be slightly higher or slightly lower than our reported values, but they are still likely to be within the 95% confidence interval. Finally, the data demonstrate remarkable consistency between years, which lends additional support to the trends reported in GAP IV and to the overall representativeness of the sample.
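
    (As a reference point for the statistics discussed in this thread, here is a minimal Python sketch of the standard margin-of-error calculation for a sample proportion with a finite population correction, using the sample and population sizes cited above. It is only an illustration; the study's exact method and assumptions may differ.)

        import math

        # Margin of error for a sample proportion with a finite population
        # correction (FPC). Assumes simple random sampling and the most
        # conservative case p = 0.5; the GAP authors' method may differ.
        def margin_of_error(n, N, z=1.96, p=0.5):
            standard_error = math.sqrt(p * (1 - p) / n)
            fpc = math.sqrt((N - n) / (N - 1))
            return z * standard_error * fpc

        n, N = 496, 8500   # sample size and population size cited in this thread
        print(f"Margin of error at 95% confidence: +/- {margin_of_error(n, N):.1%}")
        # prints roughly +/- 4.3%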

  4. Derek Hodge May 15, 2006 at 11:30 am #

    Jeffrey, those numbers sound good, but they are only correct if you assume your sample is representative of the population as a whole. With a response rate of under 6%, you have no good grounds for making this assumption.

    I was always taught (and teach) that you should be looking for a response rate of over 70% before assuming your sample is representative, and that the results of any survey with a response rate under 50% should be treated with suspicion. 6% is very low indeed.

  5. Katie Delahaye Paine June 7, 2006 at 4:18 pm #

    I finally had a chance to read the entire survey and I have to say that I agree with Don. As much as I try to educate my clients to think about the bigger picture, I find that 80% of our time is spent talking about media analysis. My staff (I don’t have the patience) spend most of their days discussing how to get every single last clip, what the circ figure is for a particular publication, and which “bucket” the clip should go into. Don’t get me wrong, it’s great work, and I love analyzing the data, but I wish more clients would consider measuring relationships rather than just the media.

  6. Jeffrey Treem (USC PR alum) June 8, 2006 at 6:13 pm #

    I agree with Derek: the issue is not so much with the response rate itself (which I do feel is too low), but with the lack of validity in the sample.

    Who are these people? How were they selected? How are we to know they represent the industry as a whole?

    The fact that the results are consistent over the years may only serve to prove that your sample is consistently flawed.

    The results are likely very accurate, but we have to be sure we know what we are really measuring.

Trackbacks/Pingbacks

  1. PR Measurement - New PR study shines light on measurement - May 10, 2006

    […] Don Bartholomew summarizes the evaluation/measurement findings from the latest GAP IV study from the Strategic Public Relations Centre at USC Annenberg. […]

  2. Strive Notes » Measuring the value of PR… - May 10, 2006

    […] Earlier this week Don Bartholomew blogged about the Gap IV study on PR measurement.  It seems there is still a lot of counting column inches and impressions.  Increasingly, clients want to look at influence on culture and reputation.  In short, what is the true value of public relations? […]

  3. Moderne-Unternehmenskommunikation.de » USA: 4% of PR budgets go to evaluation - May 11, 2006

    […] A very good summary of the study (at least of the findings on evaluation) is available from metricsman. Related posts: >>12 reasons not to evaluate PR? >>Texts on PR Measurement and Evaluation […]

  4. intelligent measurement » PR Measurement - Generally Accepted Practices Study - June 19, 2006

    […] Thanks to metricsman blog for bringing this study to my attention, where on this post he also makes an interesting analysis on the results. […]

  5. Fifth GAP Study Delivers Industry Benchmarking Data…and Best Practices « Proving the Value of Public Relations - May 28, 2008

    […] the GAP V Study will have its critics (here’s a few from the GAP IV Study), taken in proper context it provides many useful data points that help add a little science to the […]
