N.B. While I would imagine a good bit of this applies outside the realm of interviewing software engineers, what follows is specifically written about software interviews. If you’re looking at other types of interviews, read on, but proceed with caution.
I’ve had positions open under me for pretty much all of my career as a technical manager. And so I’ve spent a lot of time interviewing, and given a recent focus on hiring a more diverse team, a lot of time thinking about interviewing. And what I’ve become more and more convinced of over time is that no one (really, no one) is good at interviewing. Sure, lots of us point to the teams that we work with and say, “Look at all these amazing people! We must be doing something right.” And that’s probably true, but I’m reasonably sure that the thing we’re doing well is not interviewing. Even more terrifyingly, as I’ve looked at it closely, I’ve become convinced that it is not possible to do well.
Does that sound crazy? I don’t think it is — under the very best conditions as an interviewer, you can only ever have one side of the story. If you should be so lucky over a long career of hiring to hire no one that you view as a mistake on any level, that’s fantastic (if somewhat improbable), but it also only tells you half of the story. Have you ever been talking with someone who was doing really well in the interview process, only to suddenly choke and bomb a segment of it — eliminating themselves from consideration? Do you think that was a fair representation of their skills? Have you ever interviewed someone and wondered at their evident career success versus their inability to perform the simplest tasks in an interview? Sure, some of those folks may have completely misleading resumes; but I’ll bet more than one of them simply choked under the pressure, and your assessment told you that they were no good.
And why is that? How good is your assessment? Let’s take a look at a few common tech interview go-tos.
- Whiteboard coding has taken a well-deserved beating over the past few years, but that’s only a sample of the awful things we do.
- Stupid questions. What do you expect to learn by asking someone how they would keep from being eaten if they were a cucumber in a salad? Or how they would escape from a blender if miniaturized? Do these questions tell you anything about a candidate? Is this an ice-breaker? Do you expect these questions to put a candidate at ease? Puh-lease! Best case scenario, they give an interviewer an opportunity to feel smug. That’s not going to help anyone.
- FizzBuzz and other well-known interview problems. Congratulations, you’ve just learned that your candidate can memorize some stupid code from a programming interview questions web site. Do you know if they can actually code? This category of simple question may have some value as a screening question, but I’m unconvinced it tells you anything you need to know on an on-site. And if you find yourself questioning whether someone who struggled a bit with FizzBuzz is worth hiring, because the last 3 candidates aced it in seconds, shame on you. You’ve got one candidate who just thought on their feet and got where they needed to be against 3 who read one of the 892 books that currently come up when you search for “programming interview” on Amazon.
- Have you ever been in a good panel interview (on either side of the hiring equation)? I’m pretty sure it would be possible to prove the exponential relationship between the number of people interviewing and the terribleness of the interview. Add 3 interviewers to an interview, and at least one of them will get distracted by their laptop or mobile. Four? Then you’ll have at least two with a conflicting agenda. Five? Someone’s going to start monologuing (and everyone else will check out). Your focus quickly becomes about group dynamics, rather than the candidate. Meanwhile, you’ve wasted a significant chunk of multiple peoples’ time and increased the stress on your candidate. Are you testing for endurance?
- Pair programming? I’ve talked with a lot of strong advocates for this, but this technique can be a great way to eliminate anyone who hasn’t spent a lot of time pair programming. (Remember how uncomfortable it felt the first time you paired? And I’ll bet that wasn’t with the added pressure of being in an interview.) Also, as much as I just beat up panel interviews, having a second interviewer in the room brings a valuable second perspective to the table, and there’s just not much room for a third in pair programming. If you’re not looking for someone bringing a specific language skill to the table, this can get even more complicated — you might have to switch interviewers depending on the candidate, and now you’re going to lose any reasonable chance of fair evaluation across a set of candidates.
- Looking for someone that “clicks” with the team? That’s a great idea, but it’s also almost impossible to make this an important factor without guaranteeing a monoculture. Are you looking to increase diversity on your team? (hint: you should be) Are you competing to hire in an increasingly candidate-friendly market? (hint: you are) Then you need to monitor this carefully. It’s also too prone to eliminating candidates who are too nervous to let their real personality through in an interview. (Relevant: how often have you let your real personality through in a job interview?)
- Post-interview analysis meetings. *Sigh.* Here’s an area where I’m especially ashamed of a practice I once valued. My last team would wait until everyone was available, gather around an open section of floor, kick a soccer ball (oh, hai, rest of the world, that’s a football to you) back and forth, and discuss a recent candidate. Chances are, minus the soccer ball, you’ve done something similar. There are a few problems with this approach. First, by the time we all got together for that conversation, no one’s perspective was fresh. But the biggest problem with the let’s-all-get-together-and-make-a-decision approach is that the first member of the team to voice a strong opinion will generally carry the conversation. If you’re feeling slightly positive about a candidate, but the first person to speak says, “Oh, that candidate was terrible!” are you going to argue? What if everyone was reasonably positive except that first speaker? What if someone at the table has a very high opinion of the candidate but doesn’t like conflict? How will you know?
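For context on just how little signal that FizzBuzz question carries: the whole exercise fits in a few lines. Here’s a minimal Python sketch of the classic variant (print 1 through n, substituting “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for both); the exact wording varies by interviewer.

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
# ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz',
#  '11', 'Fizz', '13', '14', 'FizzBuzz']
```

If memorizing that is what separates your “strong” candidates from your “weak” ones, the test is measuring preparation, not ability.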
Given these practices, which have all been at least considered in the various iterations of my interviewing process, it’s not really surprising at all that I’m terrible at interviewing. And if you’re a hiring manager and somehow the logic of all of the above hasn’t convinced you (and honestly, even if it has), then you really, really need to go read Interviewing.io’s article on the arbitrary nature of tech interviews. Seriously, go read it. At the very least, scroll down to the interactive chart titled “Standard Dev vs. Mean of Interviewee Performance” and click around till you understand it.
Seriously, go do that.
Okay, back? “Only 25% of candidates are consistent in their performance.” I’d also like to take a brief digression to point out the realization that got me in this mess to start with:
> The dark side of optimizing for high false negative rates, though, rears its head in the form of our current engineering hiring crisis. Do single interview instances, in their current incarnation, give enough signal? Or amidst so much demand for talent, are we turning away qualified people because we’re all looking at a large, volatile graph through a tiny keyhole?
With those horrifying, all too believable figures as context, go back and think about past interviews. I’ve definitely had the experience in the past of rejecting a candidate based on a team’s opinion, while remaining somewhat sure that the team had eliminated someone with a tremendous amount of potential. I’ve also pushed back against a team decision to reject a candidate only to make one of the best hires of my career.
After thinking through all of the above, I came to what seemed like an obvious conclusion: I am terrible at interviewing. And given that the practices I’ve followed are widely accepted interview standards, I’m pretty sure the rest of the world could use some work on this too. In fact, the only convincing evidence I’ve found that there’s a useful way to hire for actual skill in any field is the blind listening technique used to audition orchestra members. I think there’s some interesting potential in applying more techniques like blind resume (or code) reviews, and I’d be in favor of experimenting with fully blind hiring. But unfortunately, I don’t think most organizations are ready for that yet.
So where does that leave us? Are we all just in an impossible situation with no hope? After all, I started thinking about this with hopes of identifying the best hiring process in the world and then implementing it and collecting all the benefits and privileges pertaining thereto. Instead, I’ve spent months staring at various revisions of this post with my head in my hands, wondering “what now?”
Here’s my conclusion, at least for now: hiring well is simply not possible yet. At least not for me.
Hiring better? That sounds like something I can work on. I’ve definitely got some ideas for that. Working with the candidates I hire through an imperfect process to help them realize their potential? Now for that, I’m definitely in.