
579 points by paulpauper | 2 comments
maccard No.43604081
My experience as someone who uses LLMs and (sometimes) a coding-assist plugin, but is somewhat bearish on AI, is that GPT/Claude and friends have gotten worse in the last 12 months or so, and local LLMs have gone from useless to borderline functional, but still not really usable for day-to-day work.

Personally, I think the models are “good enough” that we need to start seeing the improvements in tooling and applications that come with them now. I think MCP is a good step in the right direction, but I’m sceptical on the whole thing (and have been since the beginning, despite being a user of the tech).

replies(1): >>43612021 #
sksxihve No.43612021
The whole MCP hype really shows how much of AI is bullshit. These LLMs have consumed more API documentation than any single human ever could, and they still need software engineers to write glue layers before they can use the APIs.
replies(2): >>43612217 #>>43613458 #
maccard No.43613458
I don't think I agree, entirely.

The problem is that up until _very_ recently, it's been possible to get LLMs to generate interesting and exciting results (as a result of all the API documentation and codebases they've inhaled), but it's been very hard to make that usable. I think we need to be able to control the output format of the LLMs in a better way before we can work on what's in the output. I don't know if MCP is the actual solution to that, but it's certainly an attempt at it...
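To make the "control the output format" point concrete: one common approach (and roughly the idea behind tool/function-calling protocols like MCP) is for the glue layer to declare tools with typed parameters, then validate the model's structured output against that declaration before touching any real API. A minimal sketch, assuming a hypothetical `get_weather` tool and a toy schema format (this is not the actual MCP wire format):

```python
import json

# Hypothetical tool registry: the host declares what the model may call
# and what argument types each call requires.
TOOLS = {
    "get_weather": {
        "description": "Fetch current weather for a city",
        "params": {"city": str},
    },
}

def dispatch(raw_model_output: str) -> dict:
    """Parse a model's JSON tool call and validate it against the registry.

    Returns the validated call; raises ValueError on unknown tools or
    missing/mistyped arguments, so garbage output never reaches an API.
    """
    call = json.loads(raw_model_output)
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        raise ValueError(f"unknown tool: {call.get('tool')!r}")
    args = call.get("args", {})
    for name, expected_type in tool["params"].items():
        if not isinstance(args.get(name), expected_type):
            raise ValueError(f"bad or missing argument: {name}")
    return call

# A well-formed model response parses and validates cleanly:
validated = dispatch('{"tool": "get_weather", "args": {"city": "Berlin"}}')
```

The point of a scheme like this is that the hard part moves from "hope the model emits usable text" to "reject anything that doesn't match the declared schema", which is exactly the glue-layer work the parent comment is complaining about.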

replies(1): >>43613671 #
sksxihve No.43613671
That's reasonable, along with your comment below, but when you have the CEO of Anthropic saying last month that "AI will write all code for software engineers within a year", I would say that's pretty hard to believe given how it performs without user intervention (MCP etc...). It feels like bullshit, just like the self-driving car stuff did ~10 years ago.
replies(1): >>43615263 #
maccard No.43615263
I completely agree with you there. I think we're a generation away from these tools being usable with light supervision in the way I _want_ to use them, and I think the gap between now and that is about 10x smaller than the gap between that and autonomous agents.