off-by-one
Longform thoughts of @_off_by_one

A word of caution on hiring rubrics

About 5 min reading time

Engineers and businesspeople don’t always see eye to eye. One of the few things we truly have in common is an affinity for (or maybe an obsession with) the quantifiable. It’s no surprise, then, that many hiring processes are driven by rubrics. I’ve come to believe that—for all their benefits—hiring rubrics hurt organizations more than they help. This is especially true for small organizations.

Hiring is scary. You have to make a yes/no decision about a person with only squishy, intangible information to go off of. False positives are expensive and profoundly painful. False negatives are less painful but sometimes just as expensive because of the opportunity cost of losing someone truly great[1].

In almost all cases, a group of people contribute to a hiring process. There are good and bad reasons for this. On the plus side, a successful candidate will have to work with a diverse group of people who serve in various roles. Getting a wide sampling of perspectives is undoubtedly a good idea: it increases the likelihood that someone will identify a true red flag or pick up on a crucial subtlety. Another reason many organizations put hiring decisions in the hands of committees is quite poor: no one wants to take the heat if a hire goes bad. It’s too scary for any individual to take the decision head-on, so the hot potato gets passed around the table.

Sometimes committees find consensus; in many cases they don’t. This is where rubrics become important. Everyone fills out a survey about the interview, often with 1-5 as the options. Each candidate gets an aggregate score. This score can be compared to other candidates and may steer the committee towards a decision or even determine the outcome outright.

Rubrics may make the process easier, but I’ve come to believe that they lead to worse outcomes[2]. People feel better about rubric-driven processes because they mistake precision for accuracy[3]. When someone sees a number, it feels like it comes from some kind of scientific instrument. But it doesn’t. At best, it’s a numerical representation of someone’s deliberate reflection, and at worst it’s a hot take.
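To make the precision-versus-accuracy point concrete, here’s a toy sketch with made-up numbers (the scores and candidates are hypothetical, not from any real process): two candidates can produce the exact same aggregate score while the interviewers are telling completely different stories.

```python
from statistics import mean, stdev

# Hypothetical 1-5 scores from five interviewers (invented for illustration).
candidate_a = [3, 3, 3, 3, 3]  # a lukewarm "fine" from everyone
candidate_b = [5, 5, 3, 1, 1]  # two champions, two detractors

# Both candidates get the same aggregate score...
assert mean(candidate_a) == mean(candidate_b) == 3.0

# ...but the spread tells a very different story.
print(f"A: mean={mean(candidate_a):.1f}, stdev={stdev(candidate_a):.2f}")
print(f"B: mean={mean(candidate_b):.1f}, stdev={stdev(candidate_b):.2f}")
```

A single number like 3.0 looks precise, but it erases the fact that candidate B sparked strong conviction in some interviewers and strong objections in others, which is exactly the conversation the committee should be having.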

When rubrics are used at a company, someone, someday, will be silently thinking, “Well, I really thought [candidate] would do great here, but I can’t argue with the fact that [candidate] scored a 6.2 on [topic]… I guess that’s not a good enough score.” All nuance is swept under the rug, everyone’s hands are clean, and no hard conversations are had. No one has to go out on a limb.

Proponents of rubrics claim that they’re necessary for ensuring consistency and fairness across a company. Without a rubric, they ask, how can you make sure you’re rooting out unconscious bias? At a certain scale, rubrics might be helpful in ensuring organizations operate consistently (although I think it’s pretty clear that this is often an illusion). But, I’m not so sure rubrics reduce bias—conscious or unconscious[4]. In fact, I think they might make the problem worse because they let bias hide behind a number. The world is awash with examples of bias codified in rubrics or algorithms. For example, the SAT is widely understood to be discriminatory.

Proponents will ask how they can compare two candidates without a rubric. I would respond by asking them how often this is truly a problem they have. I’m sure there are some cases where companies run highly concurrent interview processes. But, in most cases, candidates aren’t on the market at the same time. They come through as a stream and have a short time window in which you can make the hire. You might have to compare a candidate to one or two others, but the rest of the application pool has expired. Do you really need numerical data to make that kind of decision?

So what do I propose instead? Well, honestly, this is hard stuff. If I had a perfect answer, I’d build a perfect company, get rich, and retire to build wind farms.

I think the best answer I can come up with is one that embraces the hard part. Make someone go out on a limb. To be hired, you need someone to have conviction. Anyone can raise a red flag.

  • Have a committee interview the candidate. Make sure you have representation across various races, genders, roles, and experience levels. Consider excluding very recent hires who might not yet know the company dynamics well.
  • Have interviewers fill out a short qualitative survey. The survey should be a thinking tool. Make sure they fill it out before talking to other interviewers. You don’t want groupthink.
  • Everyone meets and reviews the surveys (anonymously).
  • If anyone raised a true red flag[5] in the surveys, the group discusses it, but in all likelihood the candidate is rejected. The bar for dismissing a red flag is extremely high.
  • The group discusses the candidate in an open fashion.
  • Does anyone on the committee have conviction that this is a great hire[6]? If not, reject the candidate. It’s very important to create space for different styles of conviction to emerge. Not everyone communicates in the same way.
  • If you really want to use numbers, try expressing conviction as a bet... "how much would you bet that [candidate] is able to succeed in this role." That allows people to quantify confidence and risk sensitivity.

What I like about this method is that it’s sensitive to disqualifying information, but also to positive conviction from an individual. All it takes to be rejected is a red flag. To be hired you need to have someone believe you’ll be amazing.

Have I tried this method with great success? No. This is what I’m going to try next time I build a hiring process. So, take my advice with a grain of salt. My thoughts on this are fluid. And, even if I had used this method with great success, that doesn’t mean it would work for you. Hiring processes reflect culture, and every culture is different.

One final piece of advice—and this I write without any caveat—if you’re part of a hiring process that uses a rubric, and you don’t think it’s serving the company or the candidate well, you can do something about it. I’d suggest you voice conviction in spite of what the rubric says. Don’t sit quietly and let your conviction fade. That’s not to say you shouldn’t allow yourself to be influenced by your coworkers’ perspectives… you should. But don’t allow yourself to be silenced by a random number. You could say, “I know the rubric has [candidate] as a 5.3 on [topic], but are we confusing precision with accuracy? I got a really good feeling about [candidate]’s ability to take on this role. I think we might have a misread on our hands.”

I’d love to hear from you. Have you had experiences with rubrics—good or bad—that you think I’d benefit from hearing about? Let me know. Have you found a system that you believe in? I’d love to hear about that too.


  1. This is an example of Loss Aversion ↩︎

  2. I don’t have any hard evidence for this, but based on my personal, subjective experience, I believe it ↩︎

  3. This is an example of Precision Bias ↩︎

  4. It’s possible that rubrics could help if interviewers were blinded from identifying information about candidates, but this is in all likelihood impractical. Would that even be an interview? ↩︎

  5. What’s a true red flag? That’s up to you to decide, but, in my opinion, it’s discriminatory behavior, ethical concerns, incongruence with company values, etc. Red flags can sometimes be subtle, but they’re always about something serious; otherwise they wouldn’t be a red flag, they’d be a concern. ↩︎

  6. It’s important that you don’t have trigger-happy interviewers who have conviction for everyone who walks in the door. I recommend that before someone starts conducting interviews, they shadow the process for a little while. They conduct interviews like normal, but their conviction isn’t enough to make the hire, at first. ↩︎


👉   Like this post? Follow on twitter and tell me what you think!