
263 points itzlambda | 3 comments
linsomniac ◴[] No.44609117[source]
Here's my current rule of thumb: If you have successfully built a couple projects using agentic tooling and Claude 4 or similar models: you are doing a fine job of keeping up. Otherwise, you are at least a generation behind.
replies(4): >>44609160 #>>44609216 #>>44610180 #>>44610654 #
bluefirebrand ◴[] No.44609160[source]
Behind what?

Isn't the whole promise of AI tools that they just work?

What skill am I missing out on learning, exactly, by not using them right now? Prompt Engineering?

I think I'm a reasonably good communicator in both voice and text, so what skill am I failing to train by not using LLMs right now?

replies(1): >>44609273 #
1. linsomniac ◴[] No.44609273[source]
>Isn't the whole promise of AI tools that they just work?

No, not at all. Like with pretty much any development tool, you need to get proficient with them.

>what skill am I missing out on

At this point, it seems like pretty much all of the skills related to generative AI. But the most recent ones I'll point at are: tooling, tooling, tooling, and prompting. The specific answer (to answer your "exactly") is going to depend on you and what problems you are solving. That's why one tries not to fall behind: so you can see how to use tooling in a rapidly evolving landscape, for your exact circumstances.

>I think I'm a reasonably good communicator in both voice and text, so what skill am I failing to train by not using LLMs right now?

You know how, to achieve something, you will use different words with different people? You don't talk to your spouse the same way you talk to your parents or your children or your friends or your coworkers, right? You understand that if you are familiar with someone, you speak to them differently if you want to achieve something, yes?

replies(1): >>44609338 #
2. dingnuts ◴[] No.44609338[source]
this is just ridiculous. you can get up to speed with SOTA tooling in a few hours. A system prompt is just a prompt that runs every time. Tool calls are just patterns that are fine-tuned into place so that we can parse specific types of LLM output with traditional software. Agents are just an LLM REPL with a context-specific system prompt and a limited ability to execute commands

none of this stuff is complicated, and the models themselves have been basically the same since GPT-2 was released years ago
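The "agents are just an LLM REPL" claim can be sketched in a few lines. This is a minimal, hypothetical agent step, not any particular framework's design: `call_llm` is a stub standing in for a real model call, and the `RUN:` convention is an assumption made up for illustration.

```python
import subprocess

SYSTEM_PROMPT = "You are a coding agent. To run a shell command, reply: RUN: <command>"

def call_llm(messages):
    # Stub standing in for a real model call; a real agent would send
    # `messages` to an LLM and return its text completion.
    return "RUN: echo hello"

def agent_step(history, user_input):
    """One turn of the REPL: prompt the model, execute a tool call if requested."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": user_input})
    reply = call_llm(messages)
    if reply.startswith("RUN: "):
        cmd = reply[len("RUN: "):]
        # The "limited ability to execute commands": run it and capture
        # the output as the observation for the next turn.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        observation = result.stdout
    else:
        observation = reply
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
    return observation

history = []
print(agent_step(history, "Say hello via the shell."))  # prints "hello"
```

Looping `agent_step` with the accumulated `history` is the whole REPL; everything else in a real agent is plumbing around this loop.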

replies(1): >>44609968 #
3. chasd00 ◴[] No.44609968[source]
> A system prompt is just a prompt that runs every time. Tool calls are just patterns that are fine-tuned into place so that we can parse specific types of LLM output with traditional software. Agents are just an LLM REPL with a context-specific system prompt and a limited ability to execute commands

pulling the covers back so hard and so fast is going to be shocking for some.

To make it more concrete, you can try to build something yourself. Grab a small model off of Hugging Face that you can run locally. Then put a REST API in front of it so you can make a request with curl, send in some text, and get back in the response what the LLM returned. Now, in the API, prepend some text to what came in on the request (this is your system prompt), like "you are an expert programmer, be brief and concise when answering the following". Now add a session to your API and include the past 5 requests from the same user along with the new one when passing to the LLM. Update your prepended text (the system prompt) with "consider the previous 5 requests/responses when formulating your response to the question". You can see where this is going: all of the tools and agents are some combination of the above, and/or even add more than one model.
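The prompt-assembly part of that walkthrough can be sketched without a model or a web server at all. Here `generate` is a placeholder for the local Hugging Face model behind the REST API, and the session format is an assumption made for illustration, not any library's actual API:

```python
from collections import defaultdict, deque

SYSTEM_PROMPT = (
    "You are an expert programmer, be brief and concise when answering "
    "the following. Consider the previous requests/responses when "
    "formulating your response to the question."
)

# Per-user session: keep only the 5 most recent request/response pairs.
sessions = defaultdict(lambda: deque(maxlen=5))

def generate(prompt):
    # Placeholder for the locally running model.
    return "stub response"

def handle_request(user_id, text):
    """Assemble the full prompt the model actually sees, then call it."""
    history = "\n".join(f"Q: {q}\nA: {a}" for q, a in sessions[user_id])
    prompt = f"{SYSTEM_PROMPT}\n{history}\nQ: {text}\nA:"
    response = generate(prompt)
    sessions[user_id].append((text, response))
    return response
```

Wrapping `handle_request` in a REST endpoint and swapping the `generate` stub for a real model gives you the toy described above; the `deque(maxlen=5)` is what silently drops the oldest exchange once the session grows past five turns.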

At the end of the day, everyone has an LLM at the core, predicting and outputting the next most likely string of characters that would follow from an input string of characters.
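That core loop, predict the most likely continuation, append it, repeat, can be shown with a toy character model. The bigram counts here are a stand-in for a real neural network; the greedy decoding loop is the part that carries over:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which character most often follows each character.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def complete(counts, prompt, n=5):
    """Greedy decoding: repeatedly append the most likely next character."""
    out = prompt
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out += nxt.most_common(1)[0][0]
    return out

counts = train_bigrams("abcabcabc")
print(complete(counts, "a", n=3))  # prints "abca"
```

A real LLM replaces the bigram table with a transformer over tokens and greedy choice with sampling, but the outer loop, feed the text in, take the predicted continuation, feed it back, is the same.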