
Please stop the coding challenges

(blackentropy.bearblog.dev)
261 points CrazyEmi | 4 comments
fishtoaster ◴[] No.42149357[source]
I recently ran an interview process for a relatively senior eng role at a tiny startup. Because I believe different interview methods work better for different people, I offered everyone a choice:

1. Do a takehome test, targeted to take about 4 hours but with no actual time limit. This was a non-algorithmic project that was just a stripped-down version of what I'd spent the last month on in actual work.

2. Do an onsite pairing exercise in 2 hours. This would be a version of #1, but more of "see how far we get in 2 hours."

3. Submit a code sample of pre-existing work.

Based on the ire I've seen takehome tests get, I figured we'd get a good spread between all three, but amazingly, ~90-95% of candidates chose the takehome test. That matches my preference as a candidate as well.

I don't know if this generalizes beyond this company/role, but it was an interesting datapoint - I was very surprised to find that most people preferred it!

replies(7): >>42149441 #>>42149536 #>>42149571 #>>42149636 #>>42150136 #>>42150254 #>>42151318 #
dahart ◴[] No.42149571[source]
Interesting! I like the idea of choice, but as a hiring manager it makes my problem harder. How do you compare the results from different choices equitably? I find trying to compare candidates fairly to be quite difficult, even when they have the exact same interview.

Last time I did a coding interview for real, I had the choice of any programming language, and could choose between 3 different problems to solve. I liked that quite a bit, and was offered the job. Being able to choose Python, instead of, say, C++ in a time-bound interview almost feels like cheating.

replies(3): >>42149671 #>>42149832 #>>42152810 #
1. sfink ◴[] No.42149832[source]
> as a hiring manager it makes my problem harder. How do you compare the results from different choices equitably?

That makes sense, and it's the perspective that's being drilled into a lot of us. Implicit bias and all that. But in my experience, the comparison problem has never been that big of an issue in practice. I guess it depends on the hiring climate, but I'm much more familiar with spending a lot of time going through mediocre candidates and looking for someone who's "good enough". And this is when the recruiter is doing their job, and I understand why each candidate made it to the interview stage.

Sure, sometimes it's a hot position (or rather, a hot group to work for) and we get multiple good candidates, but then the decision-making process is more about "do we want A, who has deep and solid experience in this exact area; or B, who blew us away with their breadth of knowledge, flexibility, and creativity?" than something like "do we want A, who did great on the take-home test but was unimpressive in person, or B, whose solution to the take-home test was so-so but who was clearly very knowledgeable and experienced in person?"

The latter is a hypothetical case where everyone did the same stuff, just to make comparison easier, but even in that setup it's an uncommon and uninteresting choice. You're comparing test results and trying to infer Truth from them. Test results don't give much signal or predictive power in the first place, so if you're trying to be objective by relying only on normalized scores, then you're not working with much of value.

Take home tests or whiteboard tests or whatever are ok to use as a filter, but people aren't points along a one-dimensional "quality" line. Use whatever you have to in order to find people that have a decent probability of being good enough, then stop thinking about how they might fail and start thinking about what it would look like if they succeed. They'll have different strengths and advantages. Standardizing your tests isn't going to help you explore those.

replies(2): >>42149935 #>>42154710 #
2. dahart ◴[] No.42149935[source]
Highly agree with all of that, except perhaps the conclusion. I still try to standardize the interview process, but we have enough different kinds of interview phases to capture different strengths and weaknesses of candidates. I still want the interview to be fair even when people respond very differently.

You're right that it doesn't often come anywhere close to a tie. But sometimes candidates aren't vocal and don't vouch for themselves strongly yet are great coders, and sometimes people are talkative and sell themselves very well but, when it comes down to technical ability, aren't amazing. The choice of coding interview could obscure some of that, so mainly I include other parts in the interview, though I'm still interested in how to compare someone who does the take-home coding to someone who does the live pair programming, for example. I kinda want to see how candidates handle both. ;)
replies(1): >>42150402 #
3. sfink ◴[] No.42150402[source]
Oops, sorry, I think I mangled my conclusion a bit. I'm not against standardizing tests. If it doesn't get in the way of other attributes, standardization is very valuable. It's just that those results aren't enough by themselves.

> I still try to standardize the interview process, but we have enough different kinds of interview phases to capture different strengths and weaknesses of candidates.

100% agree with this approach.

4. thaumasiotes ◴[] No.42154710[source]
> Take home tests or whiteboard tests or whatever are ok to use as a filter, but people aren't points along a one-dimensional "quality" line.

They are unless you're going to hire everyone who ever applies. At some point, you're choosing between two candidates, and the only way you can make that choice is by projecting them onto a one-dimensional line. The approach you describe:

> stop thinking about how they might fail and start thinking about what it would look like if they succeed. They'll have different strengths and advantages.

is one method of doing that, but not an especially effective one.