196 points zmccormick7 | 4 comments
throwacct No.45388562
We stopped hiring a while ago because we were adjusting to "AI". We're planning to start hiring next year, as upper management finally saw the writing on the wall: LLMs won't evolve past junior engineers, and we need to train junior engineers to become mid-level and senior engineers to keep the engine moving.

We're now using LLMs as mere tools (which is what they were meant to be from the get-go) to help us with different tasks, but not to replace us. Management understands that you need experienced, knowledgeable people who know what they're doing, because LLMs won't learn everything there is to know to manage, improve, and maintain the tech used in our products and services. That sentiment will be the same for doctors, lawyers, etc., and personally, I won't put my life in the hands of any LLMs when it comes to finances, health, or personal well-being, for that matter.

If we get AGI, or the more sci-fi scenario, ASI, then everything will change radically (I'm thinking humanity reaching ASI will be akin to the Love, Death & Robots episode "When the Yogurt Took Over"). In the meantime, the hype cycle continues...

replies(1): >>45389340 #
1. menaerus No.45389340
> That sentiment will be the same for doctors, lawyers, etc., and personally, I won't put my life in the hands of any LLMs when it comes to finances, health, or personal well-being, for that matter.

I mean, did you try it for those purposes?

I personally submitted an appeal to the court for an issue I was having, one for which I would otherwise have had to search almost indefinitely for a lawyer even interested in taking it.

I also dug into health issues from different angles using AI and was quite successful at it.

I also experimented with it on well-being topics, and it gave me pretty convincing, mind-opening suggestions.

So, all I can say is that it worked out pretty well in my case. I believe it's already transformative, in ways we wouldn't even have been able to envision a couple of years ago.

replies(2): >>45390352 #>>45390510 #
2. pessimizer No.45390352
They're tuned (and it's part of their nature) to be convincing to people who don't already know the answer. I couldn't get one to figure out how to substitute peanut butter for butter in a cookie recipe yesterday.

I ended up spending an hour on it and dumping the context twice. I asked it to evaluate its own performance and it gave itself a D-. It came up with the measurements for a decent recipe once, then promptly forgot them when asked to summarize.

Good luck trying to use them as a search engine (or a lawyer): in my experience they fabricate a third of the references on average, and if the question is difficult, they fabricate all of them. They also give bad, nearly unrelated references and ignore obvious ones. In one case, on the Mexican-American War, the hallucinations crowded out the good references; I assume it liked the sound of the things it made up more than the things that were actually available.
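
If you want to measure the fabrication rate instead of eyeballing it, it's scriptable. A minimal sketch in Python, assuming the model hands you plain-text citation titles and using Crossref's public search API; the matching heuristic and the sample title below are my own placeholders, not anything the models produce:

    import requests

    def citation_exists(title):
        # Ask Crossref's public search API for the closest bibliographic match.
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        if not items:
            return False
        # Crude heuristic: does the top hit's title roughly match the cited one?
        top = " ".join(items[0].get("title", [])).lower()
        return bool(top) and (title.lower() in top or top in title.lower())

    # Hypothetical model output -- substitute the references it actually gave you.
    cited = ["Some Plausible-Sounding Monograph on the Mexican-American War"]
    fabricated = [t for t in cited if not citation_exists(t)]
    print(f"{len(fabricated)}/{len(cited)} cited titles had no Crossref match")

Crossref only indexes scholarly works, so for books you'd want to swap in something like Open Library's search endpoint instead; the point is just that hallucinated references are cheap to catch mechanically.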

edit: I find it baffling that GPT-5 and Qwen3 often produce identical hallucinations. The convergence makes me think there's either a hard limit to how good these things can get, and it has already been reached, or that they're just directly ripping each other off.

3. asadotzler No.45390510
You are not a doctor, lawyer, etc. You are responsible only for yourself, unlike doctors and lawyers, who are responsible for others and face entirely different consequences for failures.
replies(1): >>45393583 #
4. menaerus No.45393583
AI is already being used by both lawyers and doctors, so I'm not sure what point you're trying to make. All I was trying to say is that the technology is very worthwhile, and that those who ignore it will be the ones who lose out.