> there is no real world use case for a middle-ground (c) where you want someone with algo implementation details rote-memorized in their brain and without the very deep understanding that would make the rote-memorization unnecessary!
I recently watched a video about how Facebook is adding KPIs for its engineers' LLM usage. As in, you will be marked down in your performance review if your code is good but you didn't use AI enough.
I think you and I agree that's obviously stupid, right? I imagined myself as an engineer at Facebook, reading that email as it comes through. I can picture two paths. In the first, I roll my eyes, find a way to auto-prompt an LLM to satisfy my KPI, and go back to working with my small group of "underrecognized folks doing the actual work that keeps the company's products functioning against all odds." In the other, I put on my happy company-stooge hat, install 25 VS Code LLM forks, write a ton of internal and external posts about how awesome AI is and how much more productive it makes me, and get almost zero actual work done while scoring highest on the AI KPIs.
On the second path, I believe I'd be better rewarded capitalistically (promotions, a cushy middle/upper-management job where I don't have to do any actual work). On the first, I believe I'd be more fulfilled.
Now consider the modern interview: the market is flooded with engineers after the AI layoffs. There's a good set of startups out there that will appreciate an excellent, pragmatic engineer with a solid portfolio, but for the majority of other gigs, I need to pass a LeetCode interview, and nothing else really matters.
If I can't get into one of the good startups, then I guess I'm going to put on my dipshit spinny helicopter hat and play the stupid clown game with all the managers so I can have money.