On April 20, the Canadian Public Relations Society (CPRS) unveiled the media relations rating points (MRP) system, an attempt to come up with a single metric for media relations measurement. Here are a few initial impressions.
Use of PR Multiplier Hurts Credibility
The MRP uses a multiplier of 2.5 on circulation figures to arrive at Impressions. Advocates for multipliers argue one of two positions – either PR impressions are “worth more” than other (e.g. advertising) impressions due to editorial credibility, or the multiplier accounts for pass-along readership on top of base circulation. Neither position is defensible or advisable. The only credible, consistent way to report impressions is to use audited circulation or audience figures. I know of no industry association or governing body (e.g. IPR, PRSA) that supports the use of a multiplier. It is bad business for the PR industry.
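To see the scale of the distortion, here is a quick sketch with a hypothetical outlet (the circulation figure is my own example, not from the MRP documentation; only the 2.5 multiplier comes from the system itself):

```python
# Illustrative only: how a 2.5x multiplier inflates reported impressions
# relative to an audited figure. The circulation number is hypothetical.

AUDITED_CIRCULATION = 100_000  # audited circulation for one placement
PR_MULTIPLIER = 2.5            # the multiplier the MRP applies

reported_impressions = AUDITED_CIRCULATION * PR_MULTIPLIER

print(reported_impressions)    # 250000.0 – a 150% mark-up over audit
print(AUDITED_CIRCULATION)     # the only defensible number to report
```

The gap between the two printed numbers is exactly the credibility problem: the larger figure cannot be reconciled with any audited source.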
MRP is NOT ROI
Guess the CPRS Measurement Committee could not resist using the magic measurement acronym – ROI – to describe the MRP. This is spin. The MRP uses a cost per Impression metric, which is preferable to Impressions alone and does allow comparison to advertising, but it does not give ROI. ROI has become one of the most misused terms in public relations. To calculate a true ROI, you compare the dollar value of what is created (sales, perception changes, etc.) to the cost of creating it. Cost per impression is just that: a cost-oriented metric, not a value-oriented one. It does not give ROI.
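The distinction is easy to show with numbers. All figures below are hypothetical, chosen only to contrast the two calculations:

```python
# Hypothetical campaign figures, used to contrast cost per impression
# (a cost metric) with ROI (a value metric).
campaign_cost = 10_000.0   # dollars spent on the campaign
impressions = 500_000      # audited impressions generated
value_created = 15_000.0   # dollar value of outcomes (sales, etc.)

# Cost per impression only tells you what you spent per eyeball.
cost_per_impression = campaign_cost / impressions

# ROI compares the value created against the cost of creating it.
roi = (value_created - campaign_cost) / campaign_cost

print(f"Cost per impression: ${cost_per_impression:.3f}")  # $0.020
print(f"ROI: {roi:.0%}")                                   # 50%
```

Note that the first number can be computed without knowing whether the campaign created any value at all, which is precisely why it cannot stand in for ROI.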
Are All MRPs the Same?
Because one selects five qualitative factors from a longer list when building the MRP for a given campaign, five different campaigns, all reporting MRPs, could be scoring articles in five slightly different ways. Not sure how meaningful this is in practice, but it does raise questions about consistency of application. I also found it a little odd that no attempt was made to weight the factors according to their impact on readers, as determined by primary research.
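The consistency concern can be sketched as follows. The factor names and scores here are my own invention for illustration, not drawn from the MRP specification:

```python
# Sketch: the same article scored by two campaigns that picked different
# five-factor subsets from a longer list. Factors and values hypothetical.
article_factors = {
    "photo": 1, "headline_mention": 1, "spokesperson_quote": 0,
    "key_message": 1, "call_to_action": 0, "tier_one_outlet": 1,
    "exclusive": 0,
}

# Each campaign chooses its own five factors from the longer list.
campaign_a = ["photo", "headline_mention", "key_message",
              "call_to_action", "exclusive"]
campaign_b = ["photo", "spokesperson_quote", "key_message",
              "call_to_action", "exclusive"]

score_a = sum(article_factors[f] for f in campaign_a)  # 3 of 5
score_b = sum(article_factors[f] for f in campaign_b)  # 2 of 5

print(score_a, score_b)  # 3 2 – same article, different scores
```

Same article, different score, purely because of which factors were selected. That is the apples-to-oranges risk when MRPs from different campaigns are compared.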
Where’s the Proof?
For any single metric to be compelling, you want to know how it correlates with desired business outcomes. Does the metric show statistical correlation with some desired ‘downstream’ behavior – sales, likelihood to buy or recommend the stock, brand preference, or prescription volume, for example? It is hard to tell whether the MRP sponsors have undertaken any research of this sort; I found no evidence of it.
It’s always easier to criticize than create, so I applaud the efforts of the CPRS and the MRP Committee. This was a massive undertaking and they accomplished a lot to get to this point. The MRP is not a magic bullet, and may be fatally flawed. But who knows – it may take off in Canada and prove to be a valuable tool. Should be fun to watch!
– Don B