
237 points jdkee | 10 comments
1. almosthere ◴[] No.45948997[source]
AI is in its "pre-React" state, if you were to compare it with FE software development of 2008-2015.
replies(2): >>45949533 #>>45950668 #
2. zby ◴[] No.45949084[source]
AI has a lot of that 'fake it till you make it' vibe from startups. And unfortunately it wins, because these hustler guys get a lot of money from VCs before their tools are vetted by developers.
3. weberer ◴[] No.45949306[source]
>TrueState unburdens analytics teams from the repetitive analysis and accelerates the delivery of high-impact solutions.

Ehh, that's pretty vague. How does it work?

>Request demo

Oh. Well how much is it?

>Request pricing

Oh never mind

replies(1): >>45949502 #
4. antonvs ◴[] No.45949502[source]
It’s like the email scams that filter people out with bad spelling and obvious red flags. If someone makes it through those hurdles they’re probably a good prospect. You weren’t really thinking of buying it, were you?
5. CuriouslyC ◴[] No.45949533[source]
I think that's being generous, we haven't even had the Rails moment with AI yet. Shit, I'm not sure we've had the jQuery moment yet. I think we're still in the Perl+CGI phase.
6. krackers ◴[] No.45949822[source]
MCP is filled with buzzwords and seems like something created solely so that you can be "sold" something. From what I gathered, it's basically four things rolled into one:

* A communication protocol, JSON-RPC-esque, except it can run over stdio or HTTP

* A discovery protocol, like Swagger, to document the "tools" an endpoint exposes and how they should be used

* A tool calling convention, the specific sequence of tokens the LLM needs to output for something to be recognized as a tool call

* A thin glue layer orchestrating all of the above: injecting the list of available tools into the LLM context, parsing LLM output to detect tool calls, invoking them with the appropriate args, and injecting results back into the LLM context
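
The four pieces above can be sketched end to end. This is a toy, not the actual MCP wire format: the tool registry, message shape, and names here are invented for illustration, standing in for a real server's discovery and call-dispatch steps.

```python
import json

# Hypothetical tool registry, standing in for what an MCP server would expose.
TOOLS = {
    "add": {
        "description": "Add two integers.",
        "fn": lambda args: args["a"] + args["b"],
    },
}

def tool_manifest():
    """Discovery step: describe the available tools for the LLM's context."""
    return [{"name": name, "description": t["description"]}
            for name, t in TOOLS.items()]

def handle_model_output(text):
    """Glue step: if the model's output parses as a tool call, run the tool
    and return a result message to inject back into the context; otherwise
    pass the text through as a plain assistant reply."""
    try:
        msg = json.loads(text)
    except ValueError:
        return text  # ordinary assistant text, no tool call detected
    if isinstance(msg, dict) and "tool" in msg:
        result = TOOLS[msg["tool"]]["fn"](msg["arguments"])
        return json.dumps({"tool_result": result})
    return text

# Simulate a model that "decided" to call the add tool:
print(handle_model_output('{"tool": "add", "arguments": {"a": 2, "b": 3}}'))
```

In a real setup the manifest would be serialized into the system prompt and the loop would feed `tool_result` back to the model for another turn; this sketch only shows the detect/dispatch/inject shape the commenter describes.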

replies(1): >>45949899 #
7. voidhorse ◴[] No.45949874[source]
Yeah, and just like the web space, there will be a plethora of different frameworks all solving the same problems in their own slightly different, uniquely crappy ways. An entire pointless industry, built around ceaselessly creating, rehashing, and debating this needlessly bloated ecosystem of competing solutions, will emerge and employ many "ai engineers".

Outside of a few notable exceptions, the software industry has become such a joke.

8. mattacular ◴[] No.45949899[source]
> * A thin glue layer orchestrating all of the above: injecting the list of available tools into the LLM context, parsing LLM output to detect tool calls and invoke them with appropriate args, and inject results back into LLM context

Yeah, LLM rules. You'd think there must be something more to it. There's not.

9. Rapzid ◴[] No.45950106[source]
Frameworks are also a way to capture a part of the ecosystem and control it. Look at Vercel.
10. mountainriver ◴[] No.45950668[source]
We won’t have a Rails or React for AI, that’s insane. As it gets smarter you’ll just talk to it lol.

All of this is just software engineers grasping to stay relevant