Looking for a Job Is Tough

(blog.kaplich.me)
184 points | skaplich | 5 comments
thw09j9m ◴[] No.42132752[source]
This is the toughest market I've ever seen. I easily made it to on-sites at FAANG a few years ago and now I'm getting resume rejected by no-name startups (and FAANG).

The bar has also been raised significantly. I had an interview recently where I solved the algorithm question very quickly, but didn't refactor/clean up my code perfectly and was rejected.

replies(12): >>42132828 #>>42132878 #>>42132900 #>>42132935 #>>42133185 #>>42133278 #>>42138532 #>>42138559 #>>42139442 #>>42140920 #>>42143310 #>>42145184 #
joshuaturner ◴[] No.42133185[source]
I think a lot of this comes down to AI. In a recent hiring round we experienced multiple candidates using AI tooling to assist them in the technical interviews (remote only company). I expect relationship hires to become more common over the next few years as even more open-discussion focused interview rounds like architecture become lower signal.

So with that in mind, I'll see you all at re:Invent

replies(1): >>42133235 #
rsanek ◴[] No.42133235[source]
If you're giving remote interviews, your loop should assume candidates can use AI. At this point, it's like giving a take-home math test and assuming people won't use calculators.
replies(2): >>42133482 #>>42136958 #
joshuaturner ◴[] No.42133482[source]
I disagree. We pretty explicitly ask candidates to not use AI.

While it's fine when doing the job, the purpose of the interview is to gauge your ability to understand and solve problems. AI can help you with that, but understanding how to do it yourself signals that you'll be able to solve other, more complex, wider-spanning problems.

Just like with a calculator - it's important for candidates to know _why_ something works and to be able to demonstrate that, as much as it is for them to know the solution.

replies(6): >>42134372 #>>42136181 #>>42136639 #>>42137164 #>>42137819 #>>42140896 #
1. randomdata ◴[] No.42136181[source]
There is an interesting dichotomy in your interview process. You say you want someone who can solve problems, but then go on to say (perhaps unintentionally; communication is hard) that you only want someone who has already rote-memorized how to solve the particular problems you throw at them, not someone who can figure things out as the problems arise.
replies(2): >>42136620 #>>42136918 #
2. Izkata ◴[] No.42136620[source]
> but then go on to say (perhaps unintentionally; communication is hard) that you only want someone who has already rote-memorized how to solve the particular problems you throw at them

They said the opposite of that. Unless you think it's not possible to figure out problems and you can only do them by rote memorization?

replies(1): >>42141921 #
3. squeaky-clean ◴[] No.42136918[source]
> you only want someone who has already rote-memorized how to solve the particular problems you throw at them, not someone who can figure things out as the problems arise

This is literally what AI is, and why they don't want it used in the interview.

replies(1): >>42137135 #
4. randomdata ◴[] No.42137135[source]
Literally someone (or, at least, some thing) that can figure things out as problems arise? That seems quite generous. Unless you're solving a "problem" that has already been solved a million times before, it won't have a clue. These so-called AIs are predictive text generators, not thinking machines. But there is no need to solve a problem that is already solved in the first place, so...

It is really good at being a "college professor" that you can bounce ideas off of, though. It is not going to give you the solution (it fundamentally can't), but it can serve to help guide you. Stuff like "A similar problem was solved with <insert research paper>, perhaps there is an adaptation there for you to consider?"

We're long past a world where one can solve problems in a vacuum. You haven't been able to do that for thousands, if not millions, of years. All new problems are solved by standing on the shoulders of problems that were solved previously. One needs resources to understand those older problems and their solutions to pave the way to solving the present problems. So... If you can't use the tools we have for that during the interview, all you can lean on is what you were able to memorize beforehand.

But that doesn't end up measuring problem solving ability, just your ability to memorize and your foresight in memorizing the right thing.

5. randomdata ◴[] No.42141921[source]
> Unless you think it's not possible to figure out problems and you can only do them by rote memorization?

It is not possible to solve a problem from scratch. You must first invent the universe, as they say. Any solution you come up with for a new problem will build upon solutions others have made for earlier problems.

In the current age, under a real-world scenario, you are going to use AI to help discover those earlier solutions to build upon. Before AI you would have consulted a live human instead. But humans, while not what we consider artificial, are what we consider intelligent, and therefore presumably fall under the same rule, so that distinction is moot anyway.

Which means that, without access to the necessary tools during the interview, any pre-existing solution you might need to build upon needs to be memorized beforehand. If you fail to remember, or didn't build up memories of the right thing, before going into the interview, then you can't possibly solve the problem, even if you are quite capable of problem solving. Thus, it ends up being a test of memory, not a test of problem solving ability.

And for what? AI fundamentally cannot solve new problems anyway. At best, it can repeat solutions to old problems already solved, but why on earth would you be trying to solve problems already solved in the first place? That is a pointless waste of time, and a severe economic drain for the business. Being able to repeat solutions to problems already solved is not a useful employment skill.