How To Write Performance Reviews: A Guide For Engineering Managers
Lessons learned from writing performance reviews as a manager
The Reality of Performance Reviews
Let's face it: most managers hate performance reviews. While beneficial in theory, they often create friction and frustration. Nobody wants to hear they're "doing just fine," yet mathematically, not everyone can be an above-average performer.
A poorly written performance review can cost a talented engineer their promotion, damage their financial outcomes, erode team morale, and accelerate turnover. For software engineering managers, these evaluations aren't just administrative tasks; they're critical communication tools that directly impact careers, compensation, and team dynamics.
Engineering leaders often receive little training in writing effective reviews, yet they're expected to produce fair, actionable, and impactful assessments. This guide addresses this gap, providing practical strategies for creating performance reviews that drive growth rather than resentment.
The Challenge
In my experience, managers struggle most with the human, emotional side of reviews. Performance conversations are often difficult and uncomfortable. It’s *far* easier to rate everyone highly and deliver positive reviews than to give someone an average or critical review, and that pull toward leniency is itself a bias.
Clear expectations, clear levelling criteria, and a more data-driven approach all help to keep bias in check.
Common Mistakes Engineering Managers Make
Last-minute scrambling: Waiting until deadlines to recall a year's worth of contributions, creating recency bias and incomplete evaluations
Relying on instinct over structured criteria: Producing inconsistent and potentially biased assessments
Lack of actionable feedback: Providing assessments without clear guidance for improvement
Vague, subjective language: Using terms like "good team player" without substantiating evidence
Inconsistent standards: Applying different criteria across team members or review cycles
Overlooking impact alignment: Failing to connect individual contributions to broader organisational goals
Essential Principles for Better Performance Reviews
1. Track All Year Round To Avoid Recency Bias
For each of your direct reports, create a document that tracks their significant achievements and update it regularly. This prevents information loss over time. You can either maintain it yourself or ask your direct report to update it. When something is completed, update the doc. When you get interesting feedback, update the doc. You’ll be grateful when it comes time to review performance. The last thing you want is to hunt back through 12 months’ worth of work to remember what someone has done.
If you enjoy automation, tools like Confluence can automatically pull in completed JIRA tasks, so the document updates itself every time a project is completed.
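If Confluence automation isn't available to you, a small script can do a similar job. Below is a minimal sketch against Jira's REST search endpoint; the base URL, credentials, and JQL are hypothetical placeholders and may need adjusting for your instance.

```python
# Minimal sketch: pull an engineer's recently completed Jira issues so they
# can be pasted into the tracking doc. All identifiers below are hypothetical.
import requests

JIRA_BASE_URL = "https://your-company.atlassian.net"  # hypothetical instance
EMAIL = "manager@example.com"                         # hypothetical login
API_TOKEN = "your-api-token"                          # hypothetical token

def completed_issues(assignee: str, since: str) -> list[str]:
    """Return one-line summaries of issues resolved by `assignee` since `since`."""
    jql = f'assignee = "{assignee}" AND status = Done AND resolved >= "{since}"'
    resp = requests.get(
        f"{JIRA_BASE_URL}/rest/api/2/search",
        params={"jql": jql, "fields": "summary"},
        auth=(EMAIL, API_TOKEN),
    )
    resp.raise_for_status()
    return [f'{i["key"]}: {i["fields"]["summary"]}' for i in resp.json()["issues"]]

# Example: print last quarter's completed work, ready for the brag doc.
for line in completed_issues("engineer@example.com", "2025-01-01"):
    print(line)
```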
This is sometimes called a “brag doc.” The difference here is that you, as the manager, should also be able to access and update it. Be explicit about the doc’s purpose: it exists to support career growth and promotions, and to keep track of achievements together.
2. Use A Data-Driven Approach
Data separates exceptional reviews from mediocre ones. Compare these examples:
Weak: "A strong developer who contributes much to the team."
Strong: "Delivered 4 critical features that reduced API response times by 37%, mentored 2 junior engineers who are now independent contributors, and led our database migration project that eliminated 98% of production incidents."
The first is so generic and subjective that it can be said about almost anyone.
Effective metrics include:
Work delivered and its business impact
System performance improvements (with before/after metrics)
Code quality indicators (test coverage, defect reduction)
Technical design documents authored and approved
Hiring metrics like interviews completed
While many advise against activity metrics like PR counts because they can be gamed, they can sometimes serve as valuable signals. For example, if a mid-level developer goes months without contributing, this warrants investigation. Use these metrics as signals to explore further, not as definitive performance indicators.
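As one illustration of a signal-gathering script (not a scoring tool), here is a minimal sketch using GitHub's search API; the repository, author, and token values are hypothetical placeholders.

```python
# Minimal sketch: count merged PRs by one author as a coarse activity signal.
# The repo, author, date, and token below are hypothetical placeholders.
import requests

def merged_pr_count(repo: str, author: str, since: str, token: str) -> int:
    """Count PRs by `author` merged in `repo` since `since` (YYYY-MM-DD)."""
    query = f"repo:{repo} type:pr author:{author} is:merged merged:>={since}"
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": query},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

# A drop to zero for months is a prompt to ask why, not a verdict.
count = merged_pr_count("acme/backend", "some-engineer", "2025-01-01", "ghp_...")
print(f"Merged PRs since January: {count}")
```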
3. Apply A Consistent Template Structure
A well-structured template ensures comprehensive evaluation and reduces bias. Your template might include:
Summary: High-level assessment focused on impact
Technical Contributions: Specific projects, features, and technologies
Collaboration & Leadership: Team interactions and influence
Growth & Development: Skills gained and areas for improvement
Goals for Next Period: Clear, measurable objectives
Career Trajectory: Progress toward promotion or expertise development
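If you want the template to be reusable across your team, it can live in a shared doc or even a small script. Below is a minimal sketch in Python using the section names above; the filler strings are illustrative placeholders, not prescribed content.

```python
# Minimal sketch: a reusable review skeleton built from the sections above.
REVIEW_TEMPLATE = """\
Summary: {summary}
Technical Contributions: {technical}
Collaboration & Leadership: {collaboration}
Growth & Development: {growth}
Goals for Next Period: {goals}
Career Trajectory: {trajectory}
"""

draft = REVIEW_TEMPLATE.format(
    summary="High-level assessment focused on impact",
    technical="Specific projects, features, and technologies",
    collaboration="Team interactions and influence",
    growth="Skills gained and areas for improvement",
    goals="Clear, measurable objectives",
    trajectory="Progress toward promotion or expertise development",
)
print(draft)
```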
4. Focus On Impact Over Activities
The "So What?" test is critical. For every achievement, ask:
What problem did this solve?
Who benefited?
What would have happened without this contribution?
How does this align with team/company goals?
Weak: "Refactored authentication module and wrote comprehensive tests." (So what?)
Strong: "Refactored authentication module, reducing login failures by 90% and cutting customer support tickets by 35%, directly contributing to our quarterly goal of improving user retention."
5. Minimise Subjective Language And Weasel Words
Performance reviews lose credibility when they rely on vague, subjective terminology that can't be verified or measured. Even the most data-driven performance reviews are susceptible to bias because we're all human. Avoid weasel words: empty phrases that sound impressive but convey little concrete information:
Weak: "Really hard worker who is great at on-call"
This statement contains no specifics and may reflect bias rather than performance
Without supporting evidence, phrases like "hard worker" become meaningless platitudes
Strong: "Resolved 15 critical production incidents during on-call rotations this quarter, with an average response time of 12 minutes and resolution time of 45 minutes, exceeding team standards by 30%"
Specific metrics establish clear performance benchmarks
Facts reduce the room for subjective assessment and bias
Research shows that unconscious bias is more prevalent when evaluation criteria are ambiguous. This particularly affects groups that face stereotyping.
Common weasel words to avoid:
"Good team player"
"Hard worker"
"Great attitude"
"Talented engineer"
"Shows potential"
"Lacks enthusiasm"
Replace with objective language:
"Contributed to 5 cross-functional projects"
"Completed >35 story points in each sprint"
"Designed and implemented 3 new features"
"Mentored 3 junior engineers"
For performance reviews to be fair, clear levels and expectations need to be well-defined. Without this structure, feedback remains subjective. Document the gap between expectation and performance with concrete examples that remove ambiguity.
6. Address Technical And Soft Skills
Comprehensive reviews evaluate both technical expertise and behavioural competencies:
Technical Assessment Areas:
Code quality
System design and architecture
Technical problem-solving
Team-wide improvements
Impact outside of the team
Documentation practices
Behavioural Assessment Areas:
Communication
Collaboration and teamwork
Initiative and ownership
Adaptability and learning
Mentorship and knowledge sharing
7. Connect To Business Goals
Show how individual contributions support broader objectives:
Weak: "Built a new notification system with high test coverage."
Strong: "Built a new notification system that increased user engagement by 27%, directly supporting our company-wide initiative to improve retention metrics. The implementation included 95% test coverage, aligning with our team's quality standards."
8. Use The "So What?" Test
For every statement in your review, ask "So what?" to expose generic feedback:
Generic: "Rachel is a team player."
"So what?": "Rachel proactively takes on unassigned tasks during crunch periods, volunteers to help teammates debug complex issues, and contributed to improving our onboarding process which reduced time-to-productivity for new hires by 40%."
9. Balance Positive Feedback With Growth Areas
Effective reviews maintain roughly these proportions:
60-70% recognition of strengths and contributions
20-30% constructive feedback on growth areas
10-20% forward-looking development plans
10. Leverage AI Thoughtfully
In 2025, AI tools can help reduce bias and improve efficiency in performance reviews. You can use AI to transform the data you’ve tracked (the brag doc, metrics, and feedback) into a first draft of the review.
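As a sketch of what this can look like, the snippet below, assuming the OpenAI Python client and hypothetical sample achievements, turns tracked notes into a first draft for you to edit and fact-check.

```python
# Minimal sketch: turn brag-doc notes into a first-draft review.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the achievements list is hypothetical sample data.
from openai import OpenAI

client = OpenAI()

achievements = [
    "Delivered payments-retry feature; recovered ~$40k/quarter",  # hypothetical
    "Mentored 2 junior engineers to independent delivery",
    "Led database migration; production incidents down 98%",
]

prompt = (
    "Draft a performance review summary from these tracked achievements. "
    "Use specific, impact-focused language and avoid weasel words:\n- "
    + "\n- ".join(achievements)
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a draft to edit, never to ship as-is
```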
Remember that AI should enhance, not replace, human judgment, which is crucial to effective performance reviews.
11. Avoid Surprises
The most important principle of performance reviews is that they should never feel like a “surprise.” Regular conversations and 1:1 progress reviews keep the end-of-year review from blindsiding anyone.
Summary
Hopefully, these ideas are helpful. My biggest lesson on performance management over the years is to be organised and prepared ahead of time. Collect and review performance data regularly, rather than once a year during review season. Use a structured template or tool to organise the information into categories (code contributions, projects, leadership, design, feedback).
Thanks for reading.
Connect
If you enjoyed this article, I’d love to connect.
LinkedIn: Daily content on tech, leadership and AI
YouTube: Weekly videos where I build systems with AI
LeverageAI: My weekly newsletter covering AI news and tutorials