In Part One, we discussed a range of topics designed to help you plan and define the scope and requirements for selecting and deploying a social media listening platform across your company or organization. In Part Two, we will use the knowledge and perspective we gained in planning to orchestrate a thorough and effective platform selection process.
Here is a scalable selection process that will help you surface and select the social media listening platform that best fits your unique situation and requirements.
1. Define the individuals who will be involved in the selection process – Inclusion is a powerful card to play here. It brings different perspectives together, greatly improves your chances of success when it is time to authorize the purchase and deploy the platform properly, and increases the likelihood of acceptance and use of the platform across the organization. Include representatives from the major stakeholder groups identified during the planning process. You might include someone from your IT department, as well as the individuals who must authorize the purchase. A group of up to ten is most workable; beyond ten or so, I believe you will see diminishing returns with each additional person added to the process.
2. Develop a list of selection criteria organized by major category – Based on the planning process we undertook in Part One, develop a list of categories that are most important to learn more about. Here are ten categories you might consider including:
- Content Sources/Types & Aggregation Strategy – What types of social content are brought into the system? How is the content aggregated (e.g. RSS, crawling, third-party aggregators)? How often is each type of content aggregated?
- Data and Search Considerations – How long is content archived, and is back data available? What data cleansing strategies are in place to address spam, splogs and duplicate content? Is full Boolean logic available for constructing searches?
- Metrics and Analytics – What specific metrics are ‘standard’ in the system? Is automated sentiment analysis offered at the brand or post level? What audience-level data is available?
- Data Presentation – What dashboard features and functionality are offered? Can dashboards be customized by user or group? Are drill-down capabilities available for all analytics on the dashboard?
- Engagement and Workflow Functionality – Does the platform offer the ability to engage directly with content owners? Can ‘owned’ content be managed on-platform? What workflow management and reporting capabilities are offered?
- Integration – What additional types of data may be integrated in the system – traditional media, web analytics, email, call center, CRM, etc?
- Reporting Capability – Does the platform have a report function? Can reports be customized? Automated?
- Geographic Scope – What countries and languages are addressed by the system? Are double-byte languages (e.g. Chinese, Japanese, Korean) supported?
- Cost Structure – What is the cost basis – seat charge, subscription, content volume and/or number of searches? How does pricing vary with increases in the cost basis?
- Value-added Services – Does the listening platform vendor offer system configuration services? Do they perform analysis and reporting?
Within each major category, list the specific criteria most relevant and important to your requirements. For example, within the Data and Search Considerations category, you might list ten specific criteria that you want to assess for each vendor:
- How often is Twitter data refreshed? Can refresh timing be specified?
- How often is new content from other sources crawled/brought into the system?
- How long can each content type be archived?
- Is back data available? How far back and at what cost?
- What data cleansing strategies are in place?
- Can data be easily exported in CSV/Excel format and is bulk data extraction supported?
- Can users build and customize topics and searches?
- What types of Boolean operators are supported?
- Is proximity search supported?
- Do users have the ability to date-range data for analysis?
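Two of the criteria above – full Boolean logic and proximity search – are easiest to evaluate when you know exactly what behavior you expect. Here is an illustrative sketch of what a query like (brandx AND launch) NOT giveaway, plus a simple NEAR-style proximity check, should do. The brand terms and posts are invented, and a real platform would use its own query syntax; this just pins down the expected matching behavior.

```python
def matches(text):
    """Plain-Python stand-in for the Boolean query:
    ("brandx" AND "launch") NOT "giveaway"."""
    t = text.lower()
    return ("brandx" in t and "launch" in t) and "giveaway" not in t

def near(text, a, b, window=5):
    """True if terms a and b occur within `window` words of each other,
    similar to a NEAR/5 proximity operator."""
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if a in w]
    pos_b = [i for i, w in enumerate(words) if b in w]
    return any(abs(i - j) <= window for i in pos_a for j in pos_b)

posts = [
    "BrandX launch event tonight!",
    "BrandX launch giveaway - retweet to win",
    "Nothing to see here",
]
print([matches(p) for p in posts])  # [True, False, False]
print(near(posts[0], "brandx", "launch"))  # True
```

Running the same handful of known posts against each vendor's query syntax is a quick way to confirm their operators behave the way their documentation claims.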
3. Develop a scorecard to use in evaluating the potential listening platform vendors/partners – Using the major categories and specific criteria you have defined, develop an overall scorecard to be used in the evaluation process. Think about creating a weighting system at the category level to help prioritize the importance of each category. Assign a number of points to each criterion within a given category. A scorecard might contain ten categories, each containing ten criteria. Begin by assigning a one-point value to each criterion (100 points total) and then apply weighting at the category level.
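To make the category-level weighting concrete, here is a minimal sketch. The category names come from the list above, but the weights and the per-criterion scores are invented purely for illustration – you would substitute your own.

```python
# Hypothetical scorecard: each category holds ten per-criterion scores
# (one point each) and a category-level weight reflecting its priority.
scorecard = {
    "Metrics and Analytics":          {"weight": 1.5, "scores": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]},
    "Data and Search Considerations": {"weight": 1.2, "scores": [1, 0, 1, 1, 1, 1, 0, 1, 1, 1]},
    "Cost Structure":                 {"weight": 0.8, "scores": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]},
}

def weighted_total(card):
    """Sum each category's raw points, scaled by its category weight."""
    return sum(cat["weight"] * sum(cat["scores"]) for cat in card.values())

print(round(weighted_total(scorecard), 1))  # one weighted total per vendor
```

Computing one such weighted total per vendor gives you a single comparable number while still letting the high-priority categories dominate the outcome.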
4. Develop the initial vendor consideration set – List all the social media platform vendors you wish to consider. Pick ones you are familiar with and have positive experiences with as a starting point. Talk to colleagues within, and experts outside, the organization to gain their perspective on the platforms that should be considered. Read blog posts and reviews of the platforms to gain additional outside perspective. Visit vendor websites and watch demo videos. Pull it all together and gain consensus amongst your team on the platforms that will be considered.
5. Do some homework and narrow the list to a manageable number (perhaps five to ten) – If your initial vendor consideration set is too large (more than ten vendors), do some additional homework and narrow your list to a more manageable number.
6. Develop and distribute an RFI based on evaluation criteria – Using the categories and criteria you developed, create a request for information (RFI) asking the listening platform vendors the questions that are most critical to meeting your requirements. Specify the format (e.g. PowerPoint, Word) you would like responses to take. Give the vendors about two weeks to respond.
7. Evaluate and score vendor responses – Once the RFI documents are received, each should be reviewed carefully and scored according to the criteria and weighting decided previously. Depending on the number of vendors being evaluated and the ease of getting the entire evaluation team together, there may be merit in blocking out an afternoon to gather as a group, read through the responses, and decide how each will be scored. This is a bit of a ‘pulling off the band-aid’ approach that will save time and allow for spirited discussion and consensus scoring. If this is impractical for whatever reason in your company or organization, assign one or more RFIs to individuals who will then develop the scorecards. The scorecards may then be reviewed together in a meeting or conference call, and consensus reached on scoring. Obviously, the potential issue with multiple people independently creating scorecards is consistency. You want the evaluation to be as fair and consistent as possible given whatever constraints you are working under.
8. Develop a short list of vendors – If your number of vendors under consideration is over five, use the scorecards to reduce the list to three to five platforms that will undergo further evaluation. These are your finalists. You should always promptly notify vendors not moving forward in the process, and offer to provide feedback via phone or email on why they were not selected to move forward. This professionalism will be much appreciated by the vendors, and represents a good learning opportunity for all involved if done well.
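Once each vendor has a single weighted total on the scorecard, cutting to a short list is simply a matter of ranking. A minimal sketch, with invented vendor names and totals:

```python
# Hypothetical weighted totals from the RFI scorecards, one per vendor.
totals = {"Vendor A": 82.5, "Vendor B": 74.0, "Vendor C": 91.0,
          "Vendor D": 66.5, "Vendor E": 88.0, "Vendor F": 71.0}

SHORTLIST_SIZE = 3  # three to five finalists, per the step above

# Rank vendors by score, highest first, and keep the top few.
finalists = sorted(totals, key=totals.get, reverse=True)[:SHORTLIST_SIZE]
print(finalists)  # highest-scoring vendors first
```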
9. Deploy test scenarios – At this point we have narrowed the list of contenders and are ready to proceed with some specific tests designed to illuminate the real-world capabilities of the platforms. Here are three possible test scenarios. You can use all three for a very rigorous evaluation, or just one or two if that fits your needs better.
- Test scenario 1: Give each vendor a defined list of search terms (brands, competitors, issues) and the languages/countries you want to evaluate. You should use search terms that are directly relevant to your company or organization. Explain what type of analysis you would like performed and ask each to address insight generation. Give the vendors one week to prepare an analysis. If practical, ask each vendor to present the results in person; alternatively, use a web conference to review the results.
- Test scenario 2: This is a real-time exercise designed to assess vendor data volume by country/language and the signal-to-noise ratio of relevant content. Get on a web conference with each social media listening platform vendor. Give them a new list of three search terms and ask that they go into their platform, configure the system for the three search terms and then pull in relevant content for the past 30 days. Once that is accomplished, ask them to export the data as a CSV or Excel file and email you the results while everyone is still on the line. A more detailed off-line review of the results should be undertaken, including translation of languages, to assess relevancy of the results.
- Test scenario 3: A colleague has referred to this as the Dr. Evil test. In conjunction with test scenario two, it may be interesting to ‘plant’ known content that matches the search terms on different Twitter channels, Facebook pages and forums in each country that is of interest to you. When you receive your data export, examine it to determine whether the known content was found.
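Both the off-line relevancy review in scenario two and the planted-content check in scenario three boil down to scanning the vendor's data export. Here is a minimal sketch, assuming the export contains `url` and `content` columns – those column names, the search terms, and the planted URLs are all invented for illustration and would need to match your actual export and test plan.

```python
import csv
import io

SEARCH_TERMS = ["brandx", "widgetco"]  # hypothetical search terms
PLANTED_URLS = {                       # hypothetical planted posts
    "https://twitter.com/example/status/111",
    "https://www.facebook.com/example/posts/222",
}

def review_export(csv_text):
    """Scan a vendor CSV export: compute the share of rows that actually
    mention a search term (signal-to-noise) and count how many planted
    posts made it into the export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    relevant = sum(
        1 for r in rows
        if any(t in r.get("content", "").lower() for t in SEARCH_TERMS)
    )
    exported_urls = {r.get("url", "") for r in rows}
    return {
        "rows": len(rows),
        "signal_to_noise": relevant / len(rows) if rows else 0.0,
        "planted_found": len(PLANTED_URLS & exported_urls),
        "planted_total": len(PLANTED_URLS),
    }

sample = (
    "url,content\n"
    "https://twitter.com/example/status/111,Loving my new BrandX phone\n"
    "https://other.example/999,Totally unrelated post\n"
    "https://other.example/123,WidgetCo earnings call\n"
)
print(review_export(sample))
```

Running the same script over every vendor's export keeps the scenario-two and scenario-three comparisons consistent across finalists.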
10. Pick a winner – At this point you have the RFIs, scorecards and test results. You are ready to make your decision. Convene the evaluation team, discuss the results and make a decision. With luck, a clear winner will have emerged from the process. Contact the winner and negotiate terms of a contract. Don’t notify the non-winners until after a contract is in place, just in case you need to move to your second choice for whatever reason.
In Part Three, we will discuss how to maximize your potential for success when actually deploying the social media listening platform across your organization.