Sales is the relational profession: drip campaigns, nurturing leads, touchpoints, making tee times, maintaining the Rolodex. (Don’t mistake these soft-skill activities for easy work; from what I’ve observed, it’s anything but.)
Engineering is applied science: proofs and theorems, instrumentation, algorithms, bits and bytes.
And yet, there’s a certain objectivity available in assessing a sales professional’s performance that isn’t available when it comes to software engineers. The sales leader has an easier time navigating performance conversations than the engineering leader does.
- Account Development Representative: Did you meet quota on booked meetings?
- Account Executive: Did you meet quota on new deals?
- Account Manager: Did you meet quota on expansions?
A sales professional who meets their number quarter after quarter and embodies the team’s values gets to keep playing. If you miss quota too frequently or meet quota but run over others in the process (depending on the org), you’re gone.
As a manager, it’s a pretty simple rubric for assessing performance and deciding to act.
How do I decide as an engineering leader that someone isn’t performing? Being late on an estimate? Shipping a bug? Not doing enough code review? Being too pedantic in code review? Not knowing how to present their architecture for buy-in?
The most drastically deficient cases are easy (e.g., code is habitually late and broken when posted for review, or an engineer never takes the time to review others’ code).
There are also engineers who are predictable and steady, and who are easy to evaluate. They ship code on time with few bugs, perform thoughtful reviews, and deliver what was asked. However, if they’re that consistent, then as leaders we’re probably not asking these engineers to tackle the hairy, ambiguous problems. They play an important part on the team, yet the advances they make for the team and company are incremental, not revolutionary.
There’s a third case, though. What about the engineer you give the hardest problems to? They sometimes come up short. They sometimes make mistakes, ones that are obvious in retrospect. They’re also pushed harder project after project because the organization asks more of them.
How do you evaluate their performance? How do you justify this to your own leaders? What if those leaders don’t have a product development background themselves?
And of course software engineering is a team sport. So disentangling the team’s effects and dynamics from an individual’s performance makes this all the harder.
At this point in our industry, it’s impossible to objectively and quantitatively measure a single engineer’s performance. Engineering leaders are paid to bring their professional experience and judgment to bear in assessing and steering both the team’s and individuals’ performance. Management is fundamentally about applying subjective assessment to ambiguous situations. Software engineer performance is no exception.