Raw 360° scores have bias baked in. A harsh manager drags down an entire team's numbers. A lenient peer inflates them. Post-evaluation calibration exists to fix this.
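The post doesn't specify an adjustment method, but one common mechanical approach is per-rater mean-centering: estimate each rater's leniency or harshness as the gap between their average score and the overall average, then subtract it. A minimal Python sketch with illustrative names, not Performs360's actual logic:

```python
from collections import defaultdict

def calibrate(scores):
    """scores: list of (rater, subject, value) tuples.
    Returns {(rater, subject): bias-adjusted value}."""
    by_rater = defaultdict(list)
    for rater, _, value in scores:
        by_rater[rater].append(value)

    overall_mean = sum(v for _, _, v in scores) / len(scores)
    # A rater's bias = how far their average sits from the overall average.
    rater_bias = {r: sum(vs) / len(vs) - overall_mean for r, vs in by_rater.items()}

    # Subtract each rater's bias so a harsh manager no longer drags
    # their team below peers scored by a lenient colleague.
    return {(r, s): v - rater_bias[r] for r, s, v in scores}

raw = [
    ("harsh_mgr", "alice", 2.5), ("harsh_mgr", "bob", 3.0),
    ("lenient_peer", "alice", 4.5), ("lenient_peer", "bob", 5.0),
]
print(calibrate(raw))  # both raters now agree: alice 3.5, bob 4.0
```

In the sample data the two raters disagree by two full points on the same people; after centering, their adjusted scores coincide, which is the whole point of calibration.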
Performs360 has calibration built in: adjust scores after feedback collection, with documented justifications, at the team or individual level.
How does your team currently handle score calibration? Spreadsheet? Gut feel? Ignore it entirely?