Why Use Case metrics matter over enterprise-wide metrics
Meaningful metrics are best measured at the use case level: who, what, why, where, and how is a specific group, business unit, program, or initiative performing against itself, according to the implementation's specific goals, behavioral changes, and objectives?
If those numbers can be established and met at the use case level, the organizational-level metrics will be what they will be, and the community will be healthy, vibrant, and value-driven.
When Use Cases are not enough
When the need for enterprise metrics arises early in the implementation, before use cases are mature enough to provide sufficient data, the need for comparison arises. What follows is a sampling of variables to consider in order to achieve an aggregate of factors that most closely represents your implementation's metrics:
- Are these other companies using Jive or some other platform? SharePoint? Yammer? Drupal? Home-grown?
- Did they replace Yammer, SharePoint, or some other Enterprise Social platform?
- How many employees? Is this organization-wide, departmental level, program level or an ad-hoc implementation?
- Did these others do Big Bang or phased rollout?
- How long and big was their Pilot?
- How long have they been in Production?
- Where are these companies on the adoption maturity model?
- What is their geographic profile (local, national, international)? Are employees in a single location, or 80% virtual?
- What are their user demographics: the knowledge, skills, occupations, and ages of the employees?
- Did they do a big mobile push?
- Did they do a big Office/Outlook/Sharepoint push?
- Do they use the Apps Marketplace?
- What Apps are they integrated with?
- What other system integrations? Salesforce? HRIS? home-grown?
- What's their level of customization or use of APIs?
- Did they replace their entire Intranet? Corporate Wiki? Corporate blog? or was this a parallel effort?
- Are they B2C, B2B, manufacturing, IT, agriculture, financial, or Oil/Gas? Are they Union or regulated organizations?
- Do they have a dedicated staff running their install? or is this an ad-hoc initiative?
- How knowledgeable is their implementation staff?
- Did they use professional Consultants? Or did they do the implementations on their own and have to learn the hard way?
- How big is their Community Management team? Ambassador group? Champions group?
- What is their governance for T&C, posting content, and creating Groups?
- Do they have Groups disabled, or does creating a Group require an approval process?
- Are they mostly Spaces or Groups?
- Did they have the highest level of sponsorship? or did this bubble up from below?
- What authentication methods are available? LDAP? SSO? Dual-Factor? ICAM?
- Can employees access it outside the network? on-network only? VPN? Citrix?
Until each of the variables above (and many others) can be weighted and normalized to match the specific environment, user population, objectives, and use cases of your organization, establishing enterprise-wide metrics based on any other implementation simply does not offer meaningful value.
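The weighting-and-normalizing idea above can be sketched as a simple comparability score. Everything in this sketch is a hypothetical illustration: the factor names, weights, and similarity scores are invented for demonstration, not a prescribed model.

```python
# Hypothetical illustration: scoring how comparable another implementation's
# metrics are to your own. Factors, weights, and similarity values are
# invented examples, not recommended values.

# Each factor: (weight, similarity from 0.0 = completely different
#               to 1.0 = identical to your implementation)
factors = {
    "platform": (0.15, 1.0),        # both on the same platform
    "rollout_style": (0.10, 0.5),   # they did Big Bang, you did phased
    "time_in_production": (0.15, 0.3),
    "org_size": (0.10, 0.8),
    "geography": (0.10, 0.4),       # they are single-site, you are 80% virtual
    "sponsorship": (0.15, 1.0),
    "dedicated_staff": (0.15, 0.2),
    "integrations": (0.10, 0.6),
}

total_weight = sum(w for w, _ in factors.values())
comparability = sum(w * s for w, s in factors.values()) / total_weight
print(f"Comparability score: {comparability:.2f}")  # closer to 1.0 = more meaningful benchmark
```

A low score suggests the other implementation's numbers say little about what yours "should" be; only a near-1.0 score would make the borrowed benchmark meaningful.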
What do you measure?
There's one simple metric goal: 100% of total users registered, participating, and contributing. Anything less creates either a false sense of accomplishment or a perceived failure. Since that's not realistic, what are the metrics relevant to the behavior being measured?
- What’s your use case?
- Who's your population?
- What actions and behaviors do you expect?
- What are the goals and objectives?
Measure against these variables, maximize the data points that are meaningful, and improve those that are not performing to expectation.
The organizational metrics will follow.
Examples: How to define metrics of success
An organization that has chosen a platform for certain types of communications might show low enterprise adoption metrics, yet 97% of the expected communications flow through the platform, with other communication methods used for the rest. Another organization might use the tool for specific programs where communities of practice cycle in and out. Its platform-wide metrics might show comparatively low enterprise adoption because the model is to serve a specific program need at a specific point in time; the platform therefore serves the intended purpose. Overall metrics might look low, but during each cohort, and specific to that cohort, participation is 75% and contribution is 60%.
Two scenarios, their metrics, and measuring success:
- Registered users are 75% of total, participating is 35%, and contributing is 15%: high volume, but low overall activity.
- Registered users are 35% of total, participating is 80%, and contributing is 60%: low volume, but high overall activity.
Now add context. One organization has a 20% remote staff, and the enterprise goal is for those users to interact with each other, not necessarily with the home office where 80% of the population resides. The other is focused on Product Development only: the platform is open to the rest of the organization, but has not yet been rolled out to them.
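A quick arithmetic sketch makes the two scenarios concrete. The 10,000-employee organization size is an invented figure, and participating/contributing are read as fractions of registered users, matching how the front-line scenario later in this section states its percentages.

```python
# Hypothetical arithmetic for the two scenarios above; the 10,000-employee
# organization size is an invented figure for illustration.
def funnel(total, registered_frac, participating_frac, contributing_frac):
    """Absolute counts at each stage of the adoption funnel.

    participating and contributing are read as fractions of *registered*
    users, not of the total population.
    """
    registered = round(total * registered_frac)
    return {
        "registered": registered,
        "participating": round(registered * participating_frac),
        "contributing": round(registered * contributing_frac),
    }

scenario_a = funnel(10_000, 0.75, 0.35, 0.15)  # high registration, low activity
scenario_b = funnel(10_000, 0.35, 0.80, 0.60)  # low registration, high activity

print(scenario_a)  # {'registered': 7500, 'participating': 2625, 'contributing': 1125}
print(scenario_b)  # {'registered': 3500, 'participating': 2800, 'contributing': 2100}
```

Note that the "low volume" scenario actually has more people participating and contributing in absolute terms, which is exactly why the headline registration percentage alone is misleading.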
Take another scenario:
The platform is open to the entire organization of 15,000 employees. 10,000 of them are front-line employees with less than 10% of their available time for non-frontline activities (administrative tasks, answering emails, filing reports, tending to other business); 2,500 of those workers are registered, with 75% participating and 50% contributing. The remaining 5,000 are knowledge workers, with 2,500 registered, 50% participating, and 25% contributing.
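Working the arithmetic for this scenario (a sketch using only the numbers stated above, with participating and contributing rates read as fractions of each segment's registered users) shows how the blended enterprise figure hides segment-level performance:

```python
# Segment-level arithmetic for the 15,000-employee scenario above.
segments = {
    "frontline": {"population": 10_000, "registered": 2_500,
                  "participating_rate": 0.75, "contributing_rate": 0.50},
    "knowledge": {"population": 5_000, "registered": 2_500,
                  "participating_rate": 0.50, "contributing_rate": 0.25},
}

org = {"population": 0, "registered": 0, "participating": 0, "contributing": 0}
for name, s in segments.items():
    participating = round(s["registered"] * s["participating_rate"])
    contributing = round(s["registered"] * s["contributing_rate"])
    print(f"{name}: {s['registered']}/{s['population']} registered, "
          f"{participating} participating, {contributing} contributing")
    org["population"] += s["population"]
    org["registered"] += s["registered"]
    org["participating"] += participating
    org["contributing"] += contributing

# Blended enterprise percentages look weak even though the frontline cohort
# performs strongly against its own registered base:
print(f"org-wide: {org['registered'] / org['population']:.0%} registered, "
      f"{org['participating'] / org['population']:.0%} participating, "
      f"{org['contributing'] / org['population']:.0%} contributing")
```

The organization-wide view shows roughly a third of employees registered, while the frontline segment alone shows three quarters of its registered users actively participating; the two readings support very different conclusions.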
Is that an organizational success or failure?
Now say the front-line workers are mobile-only while knowledge workers can use either mobile or desktop. What then? What if the front-line rollout was only 6 months old and the knowledge-worker rollout was 18 months old?
Is there opportunity to pick up the remaining users? Yes, but what are the use cases that have not worked and what are potential new use cases that will attract them?
"How do we know when the implementation is successful?"
Since the request for numbers typically comes early in an implementation, and from high-level stakeholders, not giving numbers is not an option.
The world’s best boss demands numbers. There are plenty of resources out there that state what total, registered, participating, contributing “should” be.
When deciding upon metrics for an early-stage implementation, be sure to include an asterisk and take advantage of the opportunity to educate. Teach the audience what is important and tie it back to the stated goals and business drivers. Take the opportunity to share and talk through the differences: "Want to know why that engagement is better? Here are some ideas...". Adoption metrics could exceed every stated goal, but if it's all chatter and not what matters, what's the business value?
Adoption is critical, but it is not the only measure of success; without business value, adoption numbers mean little.