(I would like to start by adding the following comment: this whole post, plus everything I have worked on today, has actually been made on my Windows 950XL phone using Continuum, an external screen, keyboard, and mouse. And it works great so far!!)
I had the great pleasure of presenting at this year’s IntraTeam 2016 event in Copenhagen on March 2nd. It was a very short presentation this time, just 20 minutes, with all focus on our current progress on how we think about and explore measuring key collaborating behavior in Grundfos.
Attached is my presentation, but I wanted to take some time explaining some key slides in it. This is by the way a follow-up post to Thomas’ Performance Appraisal in the Digital Enterprise Workplace, where he explains in more detail the calculation(s) that we are working on.
Slide 3: Can you spot it?
I took this example from our Yammer Network just days before the presentation. I actually just stumbled upon it, perhaps due to my trained eyes looking for collaborating behaviors, but most likely it was just pure luck. :o) In any case, the example is to my mind very good, since it contains so many direct leads to what we want more people to do – at a scale of 19,500.
Just as when I presented it, please take a minute or two and see if you can spot things yourself, before scrolling down to the text below the image where I list what we consider most interesting.
Did you spot any? Here is what we think:
1. First of all the obvious one: experience and knowledge has been shared. Here is a person who has been out learning about, in this case, our customers and end-users, and decided to take 5-10 minutes to write things down and share it with the rest of the organization.
2. The other hint of “good behavior” is that it is shared in one of our Meta-Communities. This was an end-user visit, so the right place is in our Voice of the Customer community.
3. Visualizing is of great importance. Not only does it capture the audience, it also acts as a filter, and in this case it builds a bit more around the end-user. Now we know who “Rene” is, what he looks like, where his daily work is done, etc.
4. And Stefan, our “sharer”, is also getting instant feedback that what he is sharing is of interest to his audience. It shows that what he is sharing is not only quantitative but qualitative.
5. Tags and topics are of great importance so that others can find this type of conversation or insight. And the interesting thing here is that it is not Stefan who has added these to his conversation – it is someone else. This type of “community manager” behavior from everyone in the company is needed.
6. A classic “high roller” when it comes to key collaborating behavior is what we like to call @-mentioning people into a conversation. Again, it is core community manager behavior that is in play here, making sure that people who “could be interested” are added to the conversation by just mentioning their name.
7. And finally, back to quantitative vs. qualitative, being the sharer also requires that you participate in any conversations that might happen due to your contribution. Liking, commenting, and perhaps even ending a conversation is key collaborating behavior.
Slide 9: The Equation
If we then jump further to slide 9, this is the simple equation behind all this. In general, for all of our data we look at a time span of 60 days. This is mainly to make sure that people have enough “real” data which is not greatly affected by, for instance, vacation or company holidays.
The first one – general activity – is the easy one, since it is straightforward. It is also the one we can call “quantitative”, since it only shows the activity level of someone, not what value this activity might give to the person or the network. For each of the main activities, 1 p (one point) is awarded:
• Post, 1 p
• Reply, 1 p
• Like, 1 p
• Praise, 1 p
The collective sum of these over 60 days then gives a total that we apply to a 4 Step Scale:
• Super active, 101 p and more
• Active, 21 p – 100 p
• Learner, 6 p – 20 p
• Inactive, up to 5 p
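The general activity scoring above can be sketched in a few lines of code. This is purely my own illustration of the point rules and the 4 Step Scale thresholds listed above – the event names and function structure are hypothetical, not our actual implementation.

```python
# Illustrative sketch of general activity scoring (not the real implementation).
# Each of the four main activity types is worth 1 point over the 60-day window.
ACTIVITY_POINTS = {"post": 1, "reply": 1, "like": 1, "praise": 1}

def general_activity_score(events):
    """Sum points for a user's activity events over the 60-day window."""
    return sum(ACTIVITY_POINTS.get(e, 0) for e in events)

def activity_level(score):
    """Map a point total onto the 4 Step Scale."""
    if score >= 101:
        return "Super active"
    if score >= 21:
        return "Active"
    if score >= 6:
        return "Learner"
    return "Inactive"

# Example: a fairly quiet 60 days – 2 posts, 2 replies, 2 likes, 1 praise.
events = ["post", "reply", "reply", "like", "praise", "like", "post"]
score = general_activity_score(events)
print(score, activity_level(score))  # 7 points -> "Learner"
```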
The second one – collaborator activity – is then of course the qualitative measure (or at least that is our aim for it). Here a bit more calculation is needed to make it work. The current equation is based on the following awards:
• Reply to question, 1 p
• Tag, 1 p
• @-mention (either in post or reply), 3 p
• Praise post, 5 p
The additional measures that we are exploring (among others) are:
• Outside network/geo conversation
• Post in Meta-Community
• Use of Meta-Tag (e.g. #moreofthis)
• Use of picture/video
The list can be made long! But it actually shouldn’t be. The important thing here is that we want to measure the fewest possible things, i.e. those which represent behaviors that capture the greater value of the network, and then make sure that as many people as possible do alike. It’s like trying to find the Net Promoter Score of Key Collaborating Behavior – preferably we would just like to have one “question” to be answered, but in reality we might need a couple more.
Now, just like for general activity, the totals are applied to the same 4 Step Scale:
• Super collaborator, 101 p and more
• Collaborator, 21 p – 100 p
• Learner, 6 p – 20 p
• Inactive, up to 5 p
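The collaborator scoring differs from general activity only in which events earn points and how many. A sketch, using the same illustrative style as before (event names and structure are my own, hypothetical):

```python
# Illustrative sketch of collaborator activity scoring (not the real implementation).
# Weighted points reward qualitative behaviors over raw activity.
COLLABORATOR_POINTS = {
    "reply_to_question": 1,
    "tag": 1,
    "mention": 3,       # @-mention, either in a post or a reply
    "praise_post": 5,
}

def collaborator_score(events):
    """Sum weighted points for a user's collaborator events over 60 days."""
    return sum(COLLABORATOR_POINTS.get(e, 0) for e in events)

def collaborator_level(score):
    """Same 4 Step Scale thresholds as for general activity."""
    if score >= 101:
        return "Super collaborator"
    if score >= 21:
        return "Collaborator"
    if score >= 6:
        return "Learner"
    return "Inactive"

events = ["mention", "tag", "praise_post", "reply_to_question", "mention"]
score = collaborator_score(events)  # 3 + 1 + 5 + 1 + 3 = 13 points
print(score, collaborator_level(score))  # 13 -> "Learner"
```

Note how a single praise post (5 p) outweighs five replies to questions – the weighting is what turns raw activity counts into a (hopefully) qualitative signal.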
Note: Since I have made the scales, I will ALWAYS be a Super Active and Super Collaborative employee. It’s just one of the perks of being the creator. :o)
Having these numbers then gives us the potential benefit of driving the adoption of key collaborating behavior in Grundfos. We see three levels of usage:
- For individuals
- For communities or groups of people
- For organizations
For individuals (Slide 11)
A personal dashboard using PowerBI, very similar to Microsoft’s own O365 Analytics, provides the individual user a snapshot of their current activity in the Enterprise Social Network (in our case Yammer). The example on slide 11 shows my dashboard, with my current activity and collaboration levels, where I’m most active, what tags and topics I participate in, how much of my conversations are public vs. private (we are very much for Working Out Loud!), etc. Based on the questions when I presented this, please note that only I will have access to this dashboard. And while the data shown comes from both public and private communities and conversations, we have removed all conversation content even before adding the data into Azure.
For communities or groups of people (Slide 10)
This is based on selecting specific people across teams and departments, for instance members of a community, or a group of people who have a common interest. I think this one is easiest to explain with a real example. On slide 10 you can see the Activity and Collaboration scores of 19 course members in Grundfos. The course is called “Virtual Leadership and Collaboration”, and by looking at the scoring models before and after the course (in this case 5 months after they ‘graduated’) it is possible to see the effect on their behavior. You might think it was not much, but in reality moving just 2-3 people up the ladder might actually have a great effect on the overall network, since they will influence other people with their use of key collaborating behavior. And changing people’s behavior is a loooong process. But we are in it for the long run!
And the combination of the two will for sure be a helpful instrument for the whole organization as well as for departments and teams. With the data – which by the way is combined with HR data – it is now possible for us to “drill up” and take a helicopter view of a specific department. So when we start looking at having executive power, we can use this as friendly gamification between teams and departments.
I mentioned that we have combined it with HR data. It is also important to state that there are two main reasons why we decided to try to develop this ourselves instead of using one of the available 3rd-party tools:
1. We wanted the possibility to quickly add any other types of data to the analytics. For instance, one thing we are looking at adding is Network Analytics data from some of our in-house tools, offline self-assessment data, or even similar SharePoint data.
2. This ties very much into the need for us to be exploratory, learn as we go along, and understand the underlying forces of collaboration. By doing it ourselves we force ourselves to learn this, fail fast, adapt, and iterate, iterate, iterate.
Worth mentioning here is that Microsoft is working on a lot of this in their O365 Analytics. The challenge is that we don’t know what is coming, or when. So it is a balance of course, but this is a risk we are willing to take, since we learn so much ourselves by doing this.
I ended the presentation by asking the room whether they would like to have their own personal dashboard or not. About 70% wanted one, and the remaining 30% thought it was just crazy! What do you think?