
191 points by foxfired | 1 comment
stephenpontes No.45110797
I had almost this exact interview experience recently with a popular AI startup. The exercise was to build a search UI over a static array of dictionary terms. It was a frontend role, so I wired it up with filter and startsWith and spent more time polishing the UI and UX.
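
Roughly what I built, as a minimal sketch (the terms array and the search name here are mine for illustration, not the actual exercise code):

    // Naive approach: linear scan on every keystroke.
    // Perfectly fine for a small static array.
    const terms: string[] = ["aardvark", "abacus", "algorithm" /* ... */];

    function search(query: string): string[] {
      const q = query.toLowerCase();
      return terms.filter((t) => t.toLowerCase().startsWith(q));
    }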

The final interview question was: “Okay, how do you make this more performant?” My answer was two-tiered:

- Short term: debounce the input and cache results (see the first sketch after this list).

- Long term: use Algolia or Elasticsearch, or collaborate with a backend engineer to index the data properly (see the second sketch below).
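
For the short-term fixes, this is roughly what I had in mind, assuming the search function from the snippet above (render is a hypothetical UI callback, not a real API):

    // Memoize results per query string.
    const cache = new Map<string, string[]>();

    function cachedSearch(query: string): string[] {
      const hit = cache.get(query);
      if (hit) return hit;
      const results = search(query);
      cache.set(query, results);
      return results;
    }

    // Debounce: only fire after the user pauses typing for `ms` milliseconds.
    function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Parameters<T>) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), ms);
      };
    }

    // Hypothetical wiring: `render` would update the results list in the UI.
    declare function render(results: string[]): void;
    const onInput = debounce((q: string) => render(cachedSearch(q)), 150);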
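
And for "index the data properly", one purely client-side interpretation is to sort the terms once up front and binary-search for the prefix range, which turns each lookup from O(n) into O(log n + k) for k matches. A sketch, again with names of my own choosing:

    // Build the index once: lowercase and sort the static terms.
    const sorted = [...terms].map((t) => t.toLowerCase()).sort();

    function prefixSearch(query: string): string[] {
      const q = query.toLowerCase();
      // Binary search for the leftmost entry >= q.
      let lo = 0, hi = sorted.length;
      while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (sorted[mid] < q) lo = mid + 1;
        else hi = mid;
      }
      // In a sorted array, all entries sharing a prefix are contiguous,
      // so collect the run starting at the lower bound.
      const out: string[] = [];
      for (let i = lo; i < sorted.length && sorted[i].startsWith(q); i++) {
        out.push(sorted[i]);
      }
      return out;
    }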

I got rejected anyway (even with a referral), which drove home OP's point: I wasn't being judged on problem solving; I was auditioning for the "senior eng" title.

With interview-assistance tools and AI coding aids getting increasingly hard to detect, this gap between interview performance and delivering in the role is only going to widen. Curious how many of these "AI-assisted hires" will start hitting walls once they're outside the interview sandbox.

1. enraged_camel No.45111507
One of the worst things about the employment market in the US is that you almost never get accurate feedback about how you actually performed. The reasons are of course legal (the company doesn't want to hand a rejected candidate material for a lawsuit), but it's one of those things that works against job seekers in a major way.