Isn't the whole promise of AI tools that they just work?
What skill am I missing out on learning, exactly, by not using them right now? Prompt Engineering?
I think I'm a reasonably good communicator in both voice and text, so what skill am I failing to train by not using LLMs right now?
No, not at all. Like with pretty much any development tool, you need to get proficient with them.
>what skill am I missing out on
At this point, it seems like pretty much all of them related to generative AI. But the most recent ones I'll point at are: tooling, tooling, tooling, and prompting. The specific answer (to answer your "exactly") is going to depend on you and what problems you are solving. That's why one tries not to fall behind, so you can see how to use tooling in a rapidly evolving landscape, for your exact circumstances.
>I think I'm a reasonably good communicator in both voice and text, so what skill am I failing to train by not using LLMs right now?
You know that to achieve something you will use different words with different people? You don't talk to your spouse the same way you talk to your parents or your children or your friends or your coworkers, right? You understand that if you are familiar with someone, you speak to them differently when you want to achieve something, yes?
none of this stuff is complicated, and the models themselves have been basically the same since GPT-2 was released years ago
pulling the covers back so hard and so fast is going to be shocking for some.
To make it more concrete, you can try to build something yourself. Grab a small model off of Hugging Face that you can run locally. Then put a REST API in front of it so you can make a request with curl, send in some text, and get back in the response what the LLM returned. Now, in the API, prepend some text to what came in on the request (this is your system prompt), like "you are an expert programmer, be brief and concise when answering the following". Now add a session to your API and include the past 5 requests from the same user along with the new one when passing to the LLM. Update your prepended text (the system prompt) with "consider the past 5 requests/responses when formulating your response to the question". You can see where this is going: all of the tools and agents are some combination of the above, and/or even add more than one model.
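A rough sketch of the steps above in Python, with the model call stubbed out so the prompt-assembly and session logic is runnable on its own (the function names and the stub are mine, not from any particular library; in practice you'd swap `fake_llm` for a small local Hugging Face model and put a REST framework in front of `handle_request`):

```python
# Sketch of the flow: system prompt + last 5 request/response pairs
# + the new request, assembled into one prompt for the model.

SYSTEM_PROMPT = (
    "You are an expert programmer, be brief and concise when answering "
    "the following. Consider the past requests/responses when "
    "formulating your response to the question."
)

sessions = {}  # session_id -> list of (request, response) pairs


def fake_llm(prompt: str) -> str:
    # Stand-in for a real local model; just acknowledges the prompt.
    return f"[model saw {len(prompt)} chars of prompt]"


def handle_request(session_id: str, user_text: str) -> str:
    history = sessions.setdefault(session_id, [])
    # Prepend the system prompt, then the last 5 exchanges,
    # then the new request.
    parts = [SYSTEM_PROMPT]
    for req, resp in history[-5:]:
        parts.append(f"User: {req}\nAssistant: {resp}")
    parts.append(f"User: {user_text}")
    response = fake_llm("\n\n".join(parts))
    history.append((user_text, response))
    return response
```

Wrap `handle_request` in whatever HTTP framework you like and you have the curl-able API described above; everything past that point is variations on what gets stuffed into the prompt.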
At the end of the day, every one of these tools has an LLM at the core, predicting and outputting the next most likely string of characters that would follow from an input string of characters.