360-Degree Feedback

Definition

A structured feedback process in which an employee receives performance input from multiple sources — manager, peers, direct reports, and sometimes external stakeholders — rather than from their manager alone.

360-degree feedback is a multi-rater review process that gathers performance and behavioral input from everyone who works with an employee: their manager, peers at the same level, direct reports if applicable, and occasionally cross-functional partners or clients. The name captures the intent of coming full circle: removing the single-manager blind spot that limits traditional top-down reviews. Raters typically respond to structured questionnaires covering competencies like communication, collaboration, leadership, and execution. Results are usually aggregated and anonymized before being shared with the employee. Some organizations use 360 feedback for developmental purposes only; others incorporate it into formal performance ratings and compensation decisions. The choice has significant implications for how honestly raters respond: anonymity and perceived safety are essential for getting candid input rather than socially acceptable answers.

Why it matters for HR and People Ops teams

Single-rater performance reviews are structurally prone to bias — a manager can only observe what they directly witness, and their relationship with an employee shapes their perception. 360-degree feedback triangulates across multiple perspectives, making it harder for any single relationship dynamic or recency bias to dominate the picture. For People Ops, 360s serve two distinct purposes: developing individual employees through richer, more actionable feedback, and informing calibration and talent decisions with a broader evidence base. High-potential programs, succession planning, and manager effectiveness assessments all rely heavily on 360 data. The format also signals cultural values: organizations that invest in multi-rater feedback are communicating that how you work matters, not just what you deliver. HR teams should be deliberate about separating developmental 360s from evaluative ones — conflating the two tends to undermine honest participation.

How it works

  1. HR or the employee nominates raters from relevant peer groups, direct reports, and stakeholders — typically five to ten people total.
  2. HR reviews and approves the rater list to ensure appropriate coverage and prevent cherry-picking favorable reviewers.
  3. Raters receive survey invitations with structured competency questions and optional open-text response fields, typically requiring 15–30 minutes to complete.
  4. Responses are collected over a defined window (usually one to two weeks) and aggregated by software to protect anonymity.
  5. A feedback report is generated showing average scores by competency, comparison to norms, and anonymized qualitative comments.
  6. The employee reviews the report — often with a manager or HR coach — to identify key themes, strengths, and development priorities.
  7. Findings feed into a development plan, performance review summary, or calibration input, depending on whether the 360 is developmental or evaluative.
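Steps 4 and 5 above can be sketched in miniature. The following is an illustration only, not any particular platform's behavior; the response fields, competency names, and the three-rater suppression threshold are all assumptions:

```python
from collections import defaultdict
from statistics import mean

def aggregate_360(responses, min_raters=3):
    """Average scores by competency across raters. Any competency with
    fewer than min_raters responses is withheld (reported as None) so
    that individual answers cannot be inferred from the report."""
    by_competency = defaultdict(list)
    for r in responses:
        # Each response is assumed to look like:
        # {"rater": "...", "competency": "...", "score": 1-5}
        by_competency[r["competency"]].append(r["score"])

    report = {}
    for competency, scores in by_competency.items():
        if len(scores) < min_raters:
            report[competency] = None  # suppressed: sample too small
        else:
            report[competency] = round(mean(scores), 2)
    return report
```

For example, three scores on "communication" would be averaged into a single number, while a competency answered by only one rater would be suppressed rather than exposed as that rater's individual score.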

How performance management software supports 360-Degree Feedback

Performance management platforms automate the logistical complexity of running 360s at scale — managing rater nominations, survey distribution, anonymization, response tracking, and report generation. Without tooling, HR teams coordinating 360 feedback manually across hundreds of employees face serious data integrity risks and administrative burden. Software also enables consistent competency frameworks across the organization and produces standardized reports that can be fed into calibration and succession planning workflows.

  • Rater nomination and approval workflows — lets employees suggest raters with manager or HR override to ensure representative coverage
  • Anonymized response aggregation — automatically protects individual rater identities by grouping responses and suppressing small sample sizes
  • Competency-based survey templates — provides pre-built or customizable questionnaire frameworks mapped to company or role-specific competencies
  • Automated reminders and completion tracking — sends nudges to raters who have not yet responded and gives HR visibility into response rates before deadlines
  • Feedback report generation — compiles scores, peer averages, and qualitative comments into a structured report ready for manager or coaching conversations
  • Integration with performance review cycles — surfaces 360 data directly inside annual or semi-annual review forms to inform written assessments and ratings

Related terms

  • Performance Cycle — the structured calendar of review events within which 360 feedback is gathered and used to inform ratings or development plans
  • Calibration Session — a cross-manager meeting where 360 feedback results are one data point used to align ratings and reduce individual manager bias
  • Continuous Feedback — real-time, lightweight input shared between peers and managers outside of formal review cycles
  • Manager Effectiveness — a framework for measuring how well managers lead, develop, and retain their teams, often assessed through 360 methods
  • Rating Scale — the numerical or descriptive scoring system applied to competencies assessed within a 360-degree feedback questionnaire

Should 360 feedback be anonymous?

In most organizational cultures, yes. Anonymity increases the likelihood that raters provide honest, candid input rather than socially safe responses. Anonymity does not, however, mean a lack of accountability — HR should ensure rater pools are large enough that individual responses cannot be reverse-engineered. Some mature feedback cultures experiment with attributed feedback, but this requires significant psychological safety investment before it produces useful data rather than overly polished commentary.
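One way tooling can enforce this is a pre-launch check on the rater pool. A minimal sketch, assuming a hypothetical helper, illustrative relationship labels, and an assumed minimum of three raters per group (organizations set their own threshold):

```python
from collections import Counter

MIN_PER_GROUP = 3  # assumed threshold, not a standard

def anonymity_check(raters):
    """Return the relationship groups that are too small to report
    separately without risking rater identification. `raters` is a
    list of (name, relationship) pairs; labels like 'peer' and
    'direct_report' are illustrative."""
    counts = Counter(rel for _, rel in raters)
    # Manager feedback is typically attributed, so it is excluded
    # from the anonymity requirement.
    return {rel: n for rel, n in counts.items()
            if rel != "manager" and n < MIN_PER_GROUP}
```

A non-empty result tells HR to either recruit more raters in the flagged group or fold that group's responses into a larger "others" bucket before the survey launches.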

How many raters should be included in a 360 review?

Best practice is five to ten raters, typically including the direct manager, two to four peers, and where applicable, one to three direct reports. Fewer than five raters makes anonymization difficult and increases the chance that a single perspective dominates. More than ten raters creates diminishing returns on new insight while increasing survey fatigue. For senior leaders being assessed for succession or executive development, slightly larger rater pools are appropriate to capture broader organizational influence.

What is the difference between developmental and evaluative 360 feedback?

Developmental 360s are confidential, shared only with the employee, and used exclusively to support growth. Evaluative 360s contribute to formal performance ratings, promotion decisions, or compensation. Most HR practitioners recommend starting with developmental-only programs because employees and raters behave more honestly when feedback is not tied to administrative consequences. Once trust is established in the process, organizations can consider incorporating 360 data into calibration and talent decisions.

How often should 360 feedback be collected?

Annual is the most common cadence for formal 360 reviews, typically aligned with performance review cycles. Some organizations run them semi-annually for high-potential cohorts or managers. Continuous feedback platforms enable lighter-weight, ongoing peer input that serves a similar purpose without the full formality of a 360. The right frequency depends on how feedback is used: if it feeds annual calibration, annual collection is sufficient; if it drives real-time development, more frequent touchpoints help.

What should HR do when 360 feedback scores are very different from a manager's rating?

This discrepancy is valuable signal, not a problem to paper over. HR should bring the gap to the attention of the reviewing manager and explore the reasons: does the manager have limited visibility into cross-functional relationships? Are peers or direct reports experiencing something the manager is not? The gap may warrant a calibration conversation or a coaching discussion with the employee. Ignoring systematic discrepancies between manager and peer ratings erodes trust in the entire feedback process.
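As a rough illustration of how such a discrepancy might be surfaced automatically for calibration review, here is a sketch; the one-point threshold and the function name are assumptions, not an established standard:

```python
def flag_rating_gap(manager_score, peer_scores, threshold=1.0):
    """Return the manager-vs-peer gap (on the same rating scale) when
    its magnitude meets or exceeds the threshold; None means no
    calibration flag is raised."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    gap = manager_score - peer_avg
    return gap if abs(gap) >= threshold else None
```

A positive flagged gap suggests the manager rates the employee more favorably than peers do; a negative one suggests peers see strengths the manager may be missing. Either direction is a prompt for a conversation, not an automatic correction.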