At Xtreme Labs, Farhan hired 1,000 people over four years.
Technical interviews are a fantastic tool … if you want to hire candidates who are great at interviewing. But if you want people who are good at the job? Not so much. That’s why we don’t rely on them, don’t plan to, and you know what? I’m proud of that fact.
You might think I’m crazy, but I can probably thank this approach for the incredibly diverse engineering, product, and design team that I’m lucky enough to manage. Here at Helpful, we don’t look like your typical tech company. We’re more varied in terms of gender, nationality, and sexual orientation, and that diversity of experience is our core strength. Diversity helps us win.
If you want to win too, you’ll knock it off with the interviews.
The technical interview is broken
Let’s revisit why we have interviews in the first place. They’re supposed to be an extremely accurate predictor of future performance. The upfront investment, which can be heavy (e.g. my own interviews at Amazon and Yahoo ran 10+ hours each), should be time well spent evaluating whether someone would be an exceptional contributor in the role.
However, the data suggests that there is very low correlation between interview scores and actual job performance.
On top of being a poor predictor for folks coming in the door (false positives), it’s also a biased process for discarding the loads of folks who would be a great fit at your company (false negatives). Tons of great people, and, disproportionately, women and minorities, can perform less well at technical interviews but be great in role.
As Aline Lerner, co-founder of Interviewing.io, a software recruiting platform, wrote in an excellent Medium article, companies that rely on arduous technical interviews are “pumping resources into finding diverse candidates — who don’t understand the game-like nature of interviewing — and dumping them into a broken, nondeterministic machine.” The people who make it through this thresher are those who excel at … drumroll please … interviewing. But not necessarily the job.
Engineers are expected to stand up and code on whiteboards, a high-pressure situation that works to the disadvantage of those who feel out of place.
Here are some incriminating stats:
- Most interviews can only explain 14% of an employee’s performance. — Wired
- Only 25% of technical interviewees are consistent in their performance. — Interviewing.io
- Strong performers mess up technical interviews 22% of the time. — Interviewing.io
Technical interviews give you a ton of false negatives. They bias your talent pool toward people who are great at interviewing which, if we now accept that they are not deterministic, means you’re wasting a lot of time and resources on people who aren’t a real fit.
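The 22% figure above compounds quickly: in a pipeline that requires passing every round, a strong performer’s odds of being filtered out grow with each additional interview. A minimal sketch of that arithmetic (the round counts and the assumption that rounds fail independently are mine for illustration, not Interviewing.io’s):

```python
# Illustrative only: if a strong performer fails any single technical
# interview 22% of the time (per Interviewing.io), and a pipeline
# requires passing every round, how often do you reject them?
# Assumes rounds fail independently, which is a simplification.

def false_negative_rate(p_fail: float, rounds: int) -> float:
    """Probability a strong performer is rejected by a
    pass-every-round pipeline of the given length."""
    p_pass_all = (1 - p_fail) ** rounds
    return 1 - p_pass_all

# Side note: "explains 14% of performance" (the Wired stat) corresponds
# to a correlation of roughly sqrt(0.14) ≈ 0.37 between interview
# scores and job performance.

for rounds in (1, 3, 5):
    rate = false_negative_rate(0.22, rounds)
    print(f"{rounds} round(s): {rate:.0%} of strong performers rejected")
```

Under these assumptions, a three-round loop rejects a strong performer more often than a coin flip — which is the sense in which the process is “nondeterministic.”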
That’s why we tossed the technical interview out the window. Instead, we select for people who are good at the job. How do we figure that out? Easy. We put them in the driver’s seat.
The key to better candidates is seeing if they can drive
Let’s say that you’re evaluating race car drivers. Would you sit them down for seven hours to ask them about their favorite car color, interests, and hobbies, and then decide based on whether you want to get a beer with them? Hell no. You’d put them in the driver’s seat and see how fast they are. If they blow you away, you have all you need.
Put them in the fucking car.
Their age, their background, their gender, their hairstyle: none of it matters. Heck, you don’t even need to see the person in the car, and in fact, it’s actually better if you don’t. The question is, can they do the job? If so, great. And that’s basically our approach. Less interviewing, more focus on outcomes.
Most interviews are a waste of time because 99.4% of the time is spent trying to confirm whatever impression the interviewer formed in the first ten seconds.
This reduces our interview biases, including the unconscious ones we’re not even aware of. Biases cause us to associate the people we’re interviewing with others we’ve known, or even people on TV, based on surface cues — a tattoo, a speech pattern, or of course, a skin color. We make a snap judgement — we get ‘a feeling’ — and we decide that we like or don’t like them without ever exploring their fit for the role. I call this phenomenon of unconscious bias in interviewing the Nickelback problem:
The Nickelback problem
Let’s say that while dating online, I come across a profile that declares the person loves Nickelback (ugh!). I’m going to swipe left, instantly. Right there, I’m making a snap judgement with massive repercussions: I’ll never get to know all the other incredible aspects about this person. They’re gone forever. If, however, that person’s profile doesn’t mention Nickelback, we date, we build a relationship, and six months down the road their love of Nickelback comes up, we’ll just laugh. It’ll be no thing. It’s a tiny blip compared to all the good things.
Point being, don’t weed people out based on irrelevant preconceived notions. Nickelback fandom is not deterministic of someone’s compatibility as a partner, and your unconscious biases don’t lead to better candidates. Just see if the interviewee can drive.
So, here’s how we interview at Helpful:
We interview people who fit the basic criteria, and for less than one hour (sometimes 15 minutes!). If the person seems capable and there aren’t major red flags, we hire them (we use an extremely coarse-grained filter). If we have five applicants, sometimes we’ll hire all five. They’re then on a 90-day probationary period to see if they work out, with check-ins at 30, 60, and 90 days. And what of the six hours we saved on each technical interview, you ask? We spend them coaching.
The best predictor of how someone will perform in a job is a work sample test.
During this time, they get to show us how fast they can drive. Instead of a bunch of anecdotal feedback from the handful of people who met them during an interview, we get incredibly detailed feedback from the whole team who is working with them for 40 hours a week. We can evaluate their impact on the organization.
Now, there is of course the danger of the ‘culture fit’ problem. Some people just won’t get along with everybody, which is why we rotate them through different teams. Not getting along with a few people is no problem. Not getting along with every person across multiple different teams is a big red flag, and that’ll come up in our check-ins.
When someone isn’t working out, it’s typically obvious to all parties.
We’re transparent to a fault and leave no room for mystery — we operate in a frictionless feedback environment. We tell people precisely how to improve. For example, let’s say that someone is working on our mobile client and it’s not going as quickly as we need it to go. We’ll let them know that they need to speed up and maybe we’ll ask that they give a presentation on a deep technical topic related to mobile. Sometimes, they’ll improve. Other times, they’ll be the ones to come back and say, “Yeah, it’s not a fit.”
In my experience, people are incredibly grateful for this transparency. There’s no mutual mystification. Rather than being inducted into the company with the sense that the evaluation is over but not really knowing where they stand, we just tell them. This helps them evaluate us as well, and until we adopted this method, I had never seen so many candidates leave amicably and then refer their smart friends to us.
Now, can I tell you scientifically why this all works, and why everyone leaves feeling happier? No. I have my hunches, but as I often tell my friends:
I don’t know if it works in theory — I only know it works in practice.
For yourself, find out what works in your practice. As the angel investor and partner at Homebrew, Hunter Walk, put it in an article titled Try Before You Buy, startups are “ditching the interview process as final arbiter of employment and instead opting for some sort of ‘try before you buy’ arrangement.” That’s everything from having potential employees work as contractors on a single project to allowing them to work with your team to develop a pitch. This approach has its downsides, but I think Hunter and I agree that they’re vastly outweighed by the upside of building powerful, diverse, and effective teams that win.
Okay, okay, I know you will have objections, and I’d love to hear them
But first, I’ve pre-responded to the most common ones.
Isn’t it cruel to let people go within 90 days? What if they quit their old job to take this one?
Great question, and no. I can’t think of a full-time position that doesn’t put people on a probationary first 90 days, even implicitly; we’re just being way more transparent about it and acting on it. While we’re evaluating the candidate, they’re evaluating us, and we’re figuring out if it’s a good fit before we commit to years together. In my experience, people are grateful for this transparency and the predictability it affords them.
Isn’t it disruptive to always have new people coming in?
No more disruptive than it is to have a team full of people who excel at interviews but are bad at their job. Nothing drags morale down more than people who suck in your environment. If you want to minimize disruption and increase productivity, get rid of those people. I think this method of hiring fast and firing faster gets us more high quality candidates in the long run.
Note: This only works if you actually do spend the saved interviewing time coaching and getting feedback from your team.
But that’s not how Google and Facebook do it. Do you think you know more than them?
As Mark Twain put it, “Whenever you find yourself on the side of the majority, it is time to pause and reflect.” People are using technical interviews because that’s how things have been done in the past. It’s a vestige of not having the data. Now we have the data, and it shows that they don’t work.
What if I’m a really good interviewer and can actually tell?
I’d love to see the data. I suspect very few leaders actually track this, and if you do and are above the mean, let’s talk. There are some approaches that are less bad than others, I will admit. For example, there’s the Topgrading method. That’s great at thoroughly checking references, and is likely better than a completely unstructured approach. It isn’t 90 days worth of data, but if you’re wed to technical interviews, you could do a lot worse than adding a Topgrading interview step.
Drop the interviews, take up the test drive
This is our method and it seems to be working out. We threw out the traditional interview because it’s broken, and because it leads us to preemptively judge people on trivial factors like clothing, height, and Nickelback. Instead, we put people in the “fucking car” and let them show us they can drive. That’s how we built our team — of whom I am immensely proud — and I think the results bear it out.
What is your approach to interviewing and how does it differ?