
Social Media Metrics & Measurement Continue to Evolve

9 Jun

This week, on 11-12 June, AMEC is holding its next International Summit on Measurement. Many of you will recognize AMEC as the framer of the Barcelona Principles at its annual meeting in 2010. The theme this year is upping the game: delivering relevant insights alongside traditional measurement reporting on performance against objectives and KPIs, in order to provide a richer environment and context for making strategic business decisions.


In this post, I want to shine a light on a workshop during the Summit called "Metrics that matter: Making sense of social media measurement." The session, led by Richard Bagnall, promises to look at the latest social media measurement trends, offer a look ahead at what might be right around the corner in the next 12 months, and unveil a revised and enhanced AMEC social media measurement framework guide that should make it easier to implement the frameworks in your own planning environment.

At last year’s Summit in Madrid, we presented the initial work in developing models and frameworks to support rigorous and valid social media measurement. It consisted primarily of three elements: a new model for social media measurement, a couple of alternative ways to think about populating the model with relevant metrics, and a social media measurement planning framework/template. The initial metrics work offered two alternatives: metrics organized across programmatic, channel and business dimensions, and an approach based on the Paid, Earned, Shared and Owned (PESO) integrated channel metrics, which enjoys traction in many organizations. Based on using and customizing the approach for multiple clients, here are a few things to consider as you review and think about using the new frameworks and usage guidance to be unveiled this week.

(Figure: social media measurement model populated with Paid, Earned, Shared and Owned metrics)

As campaigns become increasingly integrated across media types, it makes sense to reflect this integrated view in measurement as well. Ideally, measurement should show a similar level of integration as the campaign or program being measured. As we measure performance across the four channels, we should keep in mind that what we really want to understand is how the channel efforts amplify or build on each other.

The work presented at last year’s Summit showed example or illustrative metrics for each of the media types across the measurement model. Expect this year’s version to take a stronger view of the most relevant or best metrics to use when employing this approach.

Another key development would be the presentation of different metrics models to use with the measurement model. The intention all along was that there could be a range of choices to fit different corporate cultures and planning environments.


Measurement Planning Template

The measurement planning template is the heart of your social media measurement planning effort. It is best used by conducting a facilitated discussion with all measurement stakeholders around each of the key elements in the template. Setting aside half a day to complete the exercise is not excessive. Here is how we think about using the template in our social media measurement planning.

(Figure: social media measurement planning template)

Business or Organizational Objectives: The agreed-upon overarching organizational objective(s) the social media effort is designed to impact. These objectives may be given to the social media team or they may be the result of conversations and negotiations.

KPIs (Key Performance Indicators): One or two high-level metrics that are aligned with the business/organizational objectives and are outcome-oriented. Again, the KPIs may be given to the measurement team or the measurement team can help guide the conversation to develop them.

Program Elements: Outline the major elements of the program. The elements should reflect the scope and integration of the campaign. They could range from a simple social media program to one that includes social media, online advertising, influencer outreach, e-mail marketing and other elements.

Program Objectives: Capture or write the measurable objective associated with each major aspect of the program above. Frequently you will need to help rewrite these objectives to make them measurable. You may also need to rewrite them so that they are actually objectives (what) and not strategies (how).

Measurement Story: This is an attempt to marry the concepts of measurement and storytelling. At the end of the program, what measurement story would you like to be able to tell your key stakeholders? It should be based on accomplishing your specific KPIs through the success of the programs used, and on your ability to prove that through data and measurement. We typically develop the measurement story right after settling on the business objectives and KPIs, and use it as a guidepost to ensure we have the means and methods to tell the desired story.

Key Metrics: The metrics directly tied to program objectives. They should be aligned with the objectives as well as the higher-order KPIs. The most important KPIs and metrics are often captured on a dashboard for monitoring and/or reporting.
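To make the template concrete, here is a minimal sketch of how its fields might be captured as a simple data structure. The class, field names and sample values below are my own illustration, not part of the AMEC guidance.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical structure mirroring the planning template fields described above;
# the names and sample values are illustrative, not an AMEC specification.
@dataclass
class MeasurementPlan:
    business_objectives: List[str]       # overarching organizational objective(s)
    kpis: List[str]                      # one or two high-level, outcome-oriented metrics
    program_elements: List[str]          # major elements of the integrated program
    program_objectives: Dict[str, str]   # measurable objective for each program element
    measurement_story: str               # the story you want to tell stakeholders at the end
    key_metrics: Dict[str, List[str]]    # metrics tied to each program objective


plan = MeasurementPlan(
    business_objectives=["Grow online sales of the new product line"],
    kpis=["Revenue attributed to social programs", "Lift in purchase consideration"],
    program_elements=["Social media", "Influencer outreach", "E-mail marketing"],
    program_objectives={
        "Social media": "Drive 50,000 qualified visits to product pages this quarter",
    },
    measurement_story=(
        "Social programs measurably shifted purchase consideration and drove "
        "qualified traffic that converted to sales."
    ),
    key_metrics={
        "Social media": ["Engagement rate", "Referral traffic", "Consideration lift"],
    },
)
```

Walking through the fields in a facilitated session, one row at a time, is usually enough to surface gaps such as objectives that are not yet measurable or metrics with no owner.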


Determining the Importance of Metrics

A question sometimes comes up about the best way to determine the most important metrics – what rises to the level of inclusion on a high-level dashboard? Here are a few tips to consider:

Place each metric in its respective stage of the measurement model (Exposure/Engagement/Influence/Impact/Advocacy). Metrics that appear toward the right side of the model are generally more compelling than those addressing just Exposure or Engagement; they get to the outcomes rather than just the outputs of program activities.

Examine how closely aligned each metric is with program objectives. Metrics that directly support the objective are generally more important than those that indirectly support the objective.

Look at the degree to which the metric is explicitly part of the Measurement Story. Metrics directly aligned with the Measurement Story represent better potential dashboard metrics than those that are tangential to the story.

Metrics that provide context are generally stronger than those that don’t. For example, RTs per 1,000 Followers tells you much more than just the Number of RTs – it gives an indication of community engagement. Likewise, Engagement Rate (Number of Engagements divided by Total Reach) is a better metric than just the Number of Engagements (see the short sketch after these tips).

Remember the audience – at the end of the day, when you’re showing your stakeholders what you accomplished this year, what would they be most excited to hear? What will they want to see that ultimately demonstrates the best use of their dollars?
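As a lightweight illustration of the context point above, here is a small sketch of how the two example metrics might be computed. The function names and numbers are hypothetical, not prescribed by the framework.

```python
# A minimal sketch of the two context metrics mentioned in the tips above; the
# function names and the sample numbers are made up for illustration.

def rts_per_1000_followers(retweets: int, followers: int) -> float:
    """Retweets normalized by audience size (per 1,000 followers)."""
    return retweets / followers * 1000

def engagement_rate(engagements: int, total_reach: int) -> float:
    """Number of engagements divided by total reach."""
    return engagements / total_reach

# Raw counts can mislead: 500 RTs from a 1,000,000-follower account signals far
# less community engagement than 200 RTs from a 50,000-follower account.
print(rts_per_1000_followers(500, 1_000_000))                    # 0.5
print(rts_per_1000_followers(200, 50_000))                       # 4.0
print(engagement_rate(engagements=3_000, total_reach=120_000))   # 0.025, i.e. 2.5%
```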


After the Summit I’ll post a reaction to Professor Jim Macnamara’s unveiling of his new paradigm and model for measurement and evaluation. Jim is incredibly smart and thoughtful, so I’ll be curious to see and discuss what he proposes. Enjoy the Summit in person or through social media and contribute to the dialogue.

Thanks for reading.


I would like to acknowledge our colleagues at R/GA for their contributions to the thinking you see here.

New Framework for Social Media Measurement – Update & Debate

21 Jun

The complete presentation, including speaker notes, that I gave at the AMEC Summit in Madrid on June 7 to introduce the AMEC Social Media Valid Framework is now available for download here. Please visit and download the slides if you are interested.

Whenever you throw a new idea or concept out there, you really hope people see it. And you really, really hope they care enough about it to comment. Or better still, to challenge it and debate its merits. Simply put, critical thinking makes the concept better. Enter stage left, our provocateur, Philip Sheldrake, author of The Business of Influence. Philip has also been an active thought leader in the whole push toward social media standards through AMEC committees and the work of The Conclave. I respect his opinion very much. Philip wrote a rather long essay on his blog in response to my original post on the new Valid Framework for Social Media. Richard Bagnall, Chairman of AMEC’s Social Media Measurement committee, posted a response to some of Philip’s concerns. I would like to address a few others now.

First, one area I am not going to address in this post is whether or not any framework for social media measurement should be driven by Business Performance Management (BPM) principles and approaches. That subject is being debated very well in the comments under Philip’s original post – pretty interesting thread.

Stakeholders v. Customers v. Audiences

Philip questions whether the framework, based on the words chosen to describe it, is oriented more toward customers (i.e. social media marketing) rather than the broader concept of stakeholders. The intent was and is to make the framework broad enough to comprehend the majority of use cases for social media; marketing is just one of those use cases. To be fair, the use of ‘audience’ in the description of Impact is a direct lift from the proposed standards document. We were trying not to reinvent the wheel. That said, I agree that saying stakeholders or publics is a more accurate description than ‘audience’ in many cases.

Influence

Hard to argue about influence with the guy who wrote a book on it, so I won’t. Philip makes the point that the model offers a definition of influence (Ability to cause or contribute to a change in opinion or behavior) but does not include the qualifying statements or concepts from the WOMMA Guidebook – the potential to influence (before) and the actual, observed influence (during/after). He goes on to say he prefers emphasis on the second. Fully agree. The short definition was offered for clarity and brevity; we plan to reword it slightly to broaden it. You’ll note the illustrative metrics shown in the framework are consistent with the during/after orientation (e.g. change in purchase consideration) rather than the before orientation. If you think about it, the before piece is really about targeting (who are the influencers we should engage?) rather than measurement (did we change the opinions, attitudes or behavior of our target stakeholders?).

Value

Philip makes the point that in the model verbiage the only reference to Value was financial impact, whilst the standards definition reads, “the importance, worth or usefulness of something”. Philip also acknowledges this definition was a “last-minute tweak”; it was actually being debated as the slides were being developed. We’ll broaden the definition to include tangible and intangible value so that it is in lockstep with the standards document.

Advocacy

Philip poses two great questions regarding this phase. His first point is that a good model that includes advocacy should also be able to comprehend opposition and advocacy for competing agendas. Fully agree these are important to consider, and the model certainly does not preclude that. Comprehending these factors would be addressed by one or more metrics within the framework (e.g. Net Positive Advocacy – positive minus negative advocacy) or simply as a point of analysis rather than measurement per se.
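As a quick sketch of what such a metric could look like in practice (the function name and counts below are made up for illustration):

```python
# Illustrative only: a simple Net Positive Advocacy calculation with hypothetical counts.
def net_positive_advocacy(positive_advocacy: int, negative_advocacy: int) -> int:
    """Advocacy in favor minus advocacy against (opposition or competing agendas)."""
    return positive_advocacy - negative_advocacy

# e.g. 340 posts advocating the position vs. 120 advocating against it
print(net_positive_advocacy(340, 120))  # 220
```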

Philip’s second question is simply why Advocacy is shown after Impact. It is shown in this order to recognize the fact that most advocacy actually occurs after the impact or conversion event. For example, in a sales context, advocacy generally occurs after someone has bought and had great experiences with the product or service. To be more accurate, we might also show Advocacy after Influence and before Impact; certainly issue advocacy often happens as a direct outcome of a change in opinion or attitude. Rather than show it in two places, to keep it simple, we chose to show it in the sequence where it is most relevant. In practice you could have both pre- and post-impact advocacy metrics in your measurement plan.

Paid, Owned, Earned

Philip simply questions the strategic value of this taxonomy and says it props up organizational silos and reinforces misconceptions such as PR equals (only) Earned Media. I think the questions are valid ones. But as a practical matter, many companies and organizations do use the taxonomy; that is a key reason why it is one of the framework alternatives. And while it can reinforce silos and misperceptions, I have actually seen it have the opposite impact – it recognizes that integrated programs require integrated measurement and brings three key elements (different departments and sometimes different agencies) together under a common framework. Interestingly, in PR we now routinely develop programs that include earned, paid and owned elements. This reinforces the positive perception that public relations today is not simply equal to earned media.

Impact

Finally, Philip questions whether any framework can consider programmatic-level or channel-specific metrics in terms of Impact. I think it can and does. Impact refers to outcomes. We can have macro and micro conversions, and we can have outcomes that are specific to programs (e.g. event attendance, voter registrations, subscriptions to a content series) or to channels (e.g. downloading a whitepaper from the website), although I will agree there will be few channel-specific outcomes.

Keep in mind that the metrics shown in the frameworks are illustrative; as Richard pointed out in his remarks, they are not meant to be exhaustive, definitive or recommended. They simply illustrate the emerging standard metrics for social media.


Thanks for caring.