Sunday, October 26, 2008

E2.0 ratings or Peer People Reviews: should companies change their development and evaluation processes?

In a conversation launched a month ago by Andrew McAfee in this post, and continued last week through this post, there was a lot of buzz about whether or not it is a good idea to measure the E2.0 participation of knowledge workers. I tend to agree with Andrew McAfee that measuring E2.0 activities would "encourage and increase participation and contributions". And I think we are looking at a major change opportunity.

I would first like to stress that E2.0 ratings should be linked to a correctly organized peer network or community (in a post last week, I pointed out that we call these Collaborative Models at the Boostzone Institute). If ratings are allowed from any employee in the company, I am worried that it will be a long time before management takes them into account for decision making. But within a peer network, these ratings can become a powerful tool for expertise recognition. In an interesting HBR article, Creativity and the Role of the Leader, Diego Rodriguez, a partner at IDEO, points out that "contributing to an interdependent network is its own reward". I would go further, as Andrew McAfee does, and say that ratings can encourage friendly competition and self-improvement.

The change opportunity I am pointing at is how these E2.0 ratings could become the basis for network or community management.

In a very classical view of HR, how would E2.0 ratings be considered? As the measurement of an employee's performance in using E2.0 tools? Probably. The reality is slightly more complicated. E2.0 ratings should be used to start building Peer People Reviews (PPR).

In classic People Reviews, a given employee receives a triple assessment: their performance, their potential for advancement at their current employer, and their "development needs". This triple assessment is based on a hierarchical view of the organization: people are assessed by their managers, based on performance management systems that are deployed down hierarchical lines and on skill frameworks that, more often than not, are built top-down by HR teams.

I think these People Reviews are particularly well suited to developing managers within a hierarchical organization. Most other HR systems (mobility, rewards) are built on top of them. As for 360° assessments and other more "democratic" assessment systems, they are still management-designed tools and fall far short of the peer potential of E2.0 ratings.

Peer People Reviews, that is, E2.0 ratings, would to my mind differ from classic People Reviews in at least two dimensions. From a process point of view, they are continuous assessments, evolving as the work activity of network members produces additional ratings. From a content point of view, the ratings are very different from performance indicators or skill descriptions, which, in classic People Reviews, are designed in advance. In Peer People Reviews, only the rating categories can be designed up front: the content of the ratings will also evolve continuously, following the content the network actually produces. Therefore, assuming the peer network has a clear link with value creation, these ratings become a great indicator of network performance.
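To make this distinction concrete, here is a minimal sketch in Python of how such a continuous peer rating record might be modeled. All names here (PeerRating, peer_people_review, the category list) are hypothetical illustrations of the idea, not a description of any existing tool: the categories are fixed in advance, while the rated topics and scores keep accumulating from the network's activity, so a member's review is simply a view recomputed over the latest ratings.

from collections import defaultdict
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration: rating categories are designed up front,
# while the topics attached to ratings emerge from the network's own content.
CATEGORIES = ("expertise", "contribution", "helpfulness")

@dataclass
class PeerRating:
    rater: str     # the peer giving the rating
    ratee: str     # the peer being rated
    category: str  # one of CATEGORIES, fixed in advance
    score: int     # e.g. 1 to 5
    topic: str     # emerges from the actual content the network produces
    given_on: date = field(default_factory=date.today)

def peer_people_review(ratings, member):
    # A member's review is recomputed from the latest ratings,
    # so it evolves continuously as peers keep rating each other's work.
    per_category = defaultdict(list)
    for r in ratings:
        if r.ratee == member and r.category in CATEGORIES:
            per_category[r.category].append(r.score)
    return {cat: sum(scores) / len(scores) for cat, scores in per_category.items()}

ratings = [
    PeerRating("alice", "bob", "expertise", 4, "search architecture"),
    PeerRating("carol", "bob", "contribution", 5, "wiki gardening"),
    PeerRating("dave", "bob", "expertise", 5, "search architecture"),
]
print(peer_people_review(ratings, "bob"))  # {'expertise': 4.5, 'contribution': 5.0}

How such ratings get aggregated (per category, per topic, or across the whole network) is exactly the kind of framework decision that, in my view, remains management work.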

In the same HBR article mentioned above, the authors say that "one doesn't manage creativity. One manages for creativity". I would take the same approach and say that an organization does not manage community or network performance: it can only build the environment in which that performance can emerge.

Individual People Reviews should be used to drive the development of individual managers, while Peer People Reviews should become a "common way of working" within any collaborative model. Building the framework for the Peer People Review, and continuously monitoring it, is management work.
