Why peer feedback from surveys doesn’t qualify as feedback
In a previous post, I identified peer feedback from 360-degree surveys as one source of input that gives a manager information about an employee’s performance. Peer feedback via 360-degree surveys has become increasingly popular as a way of separating the better performers from the lesser ones. After all, teams should identify people who get great peer feedback, and do something about the team members who get poor peer feedback. The better the peer feedback, the better the employee, right? Well, maybe. Maybe not. Let’s talk about how peer feedback should be used, and not misused.
OK, peer feedback. As Demetri Martin would say, “This is a very important subject.”
First of all, by definition, peer feedback on surveys is, from the manager’s perspective, indirect reporting of an employee’s performance. The peer gives the feedback via some intermediary source (a survey, an email request, or, if requested, a verbal discussion), and then the manager, or perhaps even a third-party algorithm, interprets what that information means.
So it is essentially hearsay. Because it is one degree removed from direct observation of performance, peer feedback is inherently riskier to use as a way to provide feedback on an employee’s performance. Here’s why:
The best feedback – or the most artful, as I like to say – has the following qualities, amongst others:
–It is specific
–It is immediate
–It is behavior-based
–It provides an alternative behavior
Let’s see how peer feedback stands up!
Specificity: Peer feedback on surveys comes in the form of a summary of behaviors over an aggregate period of time. Sample peer feedback will say something like, “John is always on top of everything, which I enjoy,” or “John needs to stop checking messages during the team meeting.” Now here’s the rub: this looks like general feedback, but it may (you don’t know for sure) relate to a single incident. The “on top of everything” may refer to arranging a co-worker’s birthday party. The “checking messages during a meeting” may have happened during the one meeting when his daughter was undergoing surgery. Or it could be something that John always does. You don’t know. It’s general or it’s specific. You don’t know.
Or. . .have you ever seen this kind of peer feedback?
“During the September 18 team meeting, John was checking his messages when he should have been working with the team to brainstorm solutions to resolving the budget shortfall. Then, on the September 25 team meeting, John received two phone calls during the meeting, interrupting the discussion flow about what our strategy for next year should be. Then, on October 2, John. . .”
This kind of peer feedback doesn’t happen on surveys. Instead, you get summaries of behaviors that may be based on a specific incident. . . or not. It may be work-related, or not. You don’t really know.
Immediacy: Peer feedback on surveys is delayed, sometimes by over a year if it is requested on an annual basis. That means something can happen, and a year later, when the request for peer feedback finally goes out, a peer is describing an incident from a year ago.
Let’s say you were late to an important meeting. The peer feedback is, “I would like John to be on time to meetings.” Even if John was on time for every other meeting that year, the one late arrival resurfaces as a problem. The lack of immediacy makes it very murky what the actual problem is, whether it is a problem at all, whether it can be resolved, or what needs to be done about it. Not sure.
Behavior-based: Peer feedback on surveys is almost never behavior-based. It is usually a generalization or a value judgment about an employee. “Amy is always first to offer help” or “Amy is the greatest.” These are not examples of behavior-based language, but in fact the opposite. Have you ever seen peer feedback that looked something like this:
“Amy identified an email string where the issue was not being resolved, and took action by calling a meeting of the interested parties on the string. She came to a decision on the issue more quickly than would have happened if the string had kept going.”
. . .or. . .
“Amy suggested that we identify the upstream issues in the system as a way of getting to root cause. This was very effective at re-focusing the team.”
Not too often.
Provides recommendations for what to do differently: You may get this in peer feedback from a survey, but it is rare. Instead, you are more likely to get a generalization of behaviors with an implied action for what to do differently:
“John doesn’t always communicate effectively,” or “John needs to keep his schedule up to date.”
You could glean something from this (improve communication skills, or manage the schedule better), but what exactly should John do differently? Not sure. This feedback seems to be based on something. . . and something should be done differently, but the best you can do is guess what that different thing is. Hmm. . .
Which leads me to conclude: Peer feedback is not feedback at all. It is misnamed.
Instead, it should be called, “General input from peers about an employee.” Let’s try it. You receive an email asking you to submit “general input from peers about your co-workers.” That sounds more accurate than “Please give us your feedback about your peer.” It means that you are contributing to the collective knowledge about an employee, but not in a way that is specifically actionable, as the word “feedback” implies. It also takes some pressure off of peer feedback as a reliable source of information about the employee, and perhaps, just perhaps, could help avoid manager misuse of peer feedback, which I’ll discuss in my next article.
In my series of articles on “strategy sessions with employees”, I discuss how managers can better utilize indirect sources of information such as peer feedback from surveys.