By Bruce Bennett
A 360 Flaw?
In his HBR Blog post, “The Fatal Flaw with 360 Surveys,” Marcus Buckingham argues that 360 surveys are “at best, a waste of everyone’s time, and at worst actively damaging to both the individual and the organization.” Yet 77% of organizations surveyed by T&D magazine have used and continue to use 360s. Something does not add up. It turns out that the ‘fatal flaw’ is a faux flaw, found only in poorly constructed 360 surveys.
It’s clear that Marcus has had some bad experiences with 360 feedback, and those experiences color his perspective. Condemn all cars because the engine in the lemon you bought is broken, and you miss the value of every other car on the road. The ‘fatal flaw’ Marcus describes in 360 surveys is the equivalent of a broken engine in a used-car lemon.
The flaw Marcus has discovered occurs because raters score a leader subjectively. Raters, he suggests, mentally compare the leader to themselves and score accordingly, making the data worthless. But this challenge is faced by every survey, not just 360 surveys. It is not a flaw; it is a reality of collecting survey data. It becomes a flaw only when a 360 survey contains poorly written questions.
The Flaw Disappears
Write the questions well and the flaw does not exist.
A specific, observable behavior is among the first requirements for a good survey question. If the behavioral description is clear, anyone and everyone observing a leader can recognize when the behavior is performed. In statistical parlance, this is called inter-rater reliability.
Such observations are not only the basis of good 360 questions, they are the basis of all scientific research. Whether it’s a biologist counting bacteria in a petri dish or a naturalist studying tree frogs in the Amazon, the quality of their data depends on how well they have defined what they are looking for.
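For readers who want to see what inter-rater reliability looks like in numbers, here is a minimal, hypothetical sketch in Python. The rater scores are invented for illustration, and percent agreement is only a rough proxy; real survey research uses more robust statistics such as Cohen’s kappa or intraclass correlation.

```python
# Hypothetical sketch: percent agreement among raters as a rough
# proxy for inter-rater reliability. All scores below are invented.
from itertools import combinations

def percent_agreement(scores):
    """Fraction of rater pairs who gave the same score to one item."""
    pairs = list(combinations(scores, 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Frequency-scale ratings (1 = never ... 5 = always) of one clear behavior:
# "How often does this person discuss her vision with you?"
behavioral_item = [4, 4, 5, 4]

# Agreement-scale ratings of an unobservable quality:
# "Do you agree this person has a clear vision for the future?"
agreement_item = [2, 5, 3, 1]

print(percent_agreement(behavioral_item))  # 0.5 of pairs agree
print(percent_agreement(agreement_item))   # 0.0 of pairs agree
```

The point of the sketch is the comparison, not the exact numbers: raters watching the same observable behavior cluster around the same score, while raters guessing at an internal state scatter.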
In a 360 survey question, the behavioral description is combined with a scale, and both have to be good for the question to work. Choose the wrong scale and you destroy the question. Agreement scales are very risky because they try to measure what a person is thinking, and you can’t see ‘thinking’. A frequency scale, combined with a clear behavioral description, creates the conditions for valid observations. How many bacteria are in the petri dish? How many times did this leader ____?
Take the question, “Do you agree this person has a clear vision for the future?” The flaw Marcus describes is present: no answer can be independently confirmed, because no rater can see the thinking that produced it. Change the question to “How often does this person discuss her vision with you?” and you will get data that can be confirmed by anyone witnessing the discussions. The flaw disappears.
How many people have to agree before their observations are worthwhile? Researchers pull a representative sample when it’s too difficult to poll the whole population, and random samples are critical with large populations. Respondent groups in 360 feedback are small populations, and often 100% of possible respondents are invited. 100% is the entire population, not a sample, so it can’t be a skewed sample. If the question is, “How often does the leader do behavior X when working with his direct reports?” and 100% of his direct reports respond, you can’t get a better sample.
If all research had the luxury of polling 100% of its population, the Chicago Tribune would not have declared Dewey the winner on election night in 1948, and no one would have to wait up for election results today.
The Importance of the Gap
More important than sample size is the question of what the assessment is measuring.
An excellent 360 does not try to determine if a leader POSSESSES a particular set of skills, instead it measures if the leader DEMONSTRATES the skills.
An excellent 360 is not just about self-awareness; it’s about illuminating the environment in which the leader functions. A difference, or gap, in respondent group scores does not mean one respondent group is right and the other wrong. The gap may not mean the leader is lacking a particular skill. It does mean there is an important difference in the perception of reality, and that difference can inhibit the leader’s ability to lead.
Bridging the gap may not be a skill issue; it may be a motivation issue, a system issue, or a lack of information. 360 feedback is a powerful diagnostic tool that highlights issues, then tells the leader which group to talk with and what to talk about to uncover the root of the issue. The gap data helps effective leaders understand the real issue, then make relevant changes to resolve it.
Not all gaps are bad. Often others rate a leader higher than the leader rates herself. In many high performing organizations, the opposite of ‘benevolent distortion’ occurs. Leaders, driven by the sense that they can never be good enough, rate themselves lower than their actual performance.
That type of gap identifies an underutilized asset in the leader’s toolbox. Understanding how valuable others consider a specific behavior shows the leader how to build on her strengths and use those underutilized, highly valued behaviors more often.
Gaps illuminate the environment in a way that makes it easy to understand, and 360 surveys are the most effective tool for finding them.
360 Surveys Work Well
360 feedback is hard to do well. Observing human behavior is not as easy or exact as counting bacteria in a petri dish, but clear behavioral descriptions and the right scale produce valid data. 360s have been used for more than half a century, and organizations keep using them because, when done well, 360s are valuable tools that help leaders get better.
Even Marcus admits he’s “…seen some extraordinary coaches use 360 results as the jumping off point for insightful and practical feedback sessions.” Extraordinary coaches don’t use flawed data, and they know what to do with good data. 360 surveys are a powerful leadership development tool.
If you happen to buy a lemon with a broken engine, don’t let it prevent you from experiencing the thrill and performance of a well-designed automobile.