
118 points by soraminazuki | 1 comment
skhameneh No.45080882
> This is the thing about AI tools. They are by design going to honour your prompt, which often results in your AI tool agreeing with you, even if you’re wrong.

LLMs augment your input with their training data. They don't inherently agree with you if you set up the context correctly for analysis.

I've arrived at the conclusion that a top-down push without adequate upskilling creates bad experiences and subpar results. It's like adopting a new methodology without actually training anyone on it: everyone is left scrambling to figure things out, often with poor results.

I find LLMs to be a great multiplier, but that multiplier applies to whatever you put in the context. If you put in bias and/or a fragmented mess, it's far more difficult to steer the context back to correct it than it was to add it in the first place.
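For example, here's a minimal sketch of what "setting up context for analysis" can look like, assuming the OpenAI Python client; the system prompt, model name, and example claim are purely illustrative, not something from this thread:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Frame the task as neutral evaluation rather than asking the model to
    # confirm a position; the claim is presented as something to assess,
    # not as the user's own opinion to be agreed with.
    system = (
        "You are a critical reviewer. Evaluate claims on their merits, "
        "list counterarguments, and say clearly when a claim is wrong."
    )
    claim = "Rewriting our service in Rust will halve our cloud costs."  # illustrative

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Assess this claim and argue both sides: {claim}"},
        ],
    )
    print(response.choices[0].message.content)

The point is the framing: the prompt never tells the model what answer you want, so there's nothing for it to "honour" except the request for analysis.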

replies(1): >>45081693 #
1. radarsat1 No.45081693
> It's like adopting a new methodology without actually training anyone on it: everyone is left scrambling to figure things out, often with poor results.

Agree strongly. I've had to push back when the CEO didn't see the results he wanted. I had to basically remind him: look, this technology is like six months old (at the time, and with respect to actually getting good results out of Claude Code etc.). You can't expect everyone on the team to just "know" how to use it; we're literally _all_ learning this very new thing right now, not just us, but everybody in the industry. It's a little crazy to expect immediate uptake and an overnight revolutionary productivity boost from a thing we barely know how to use properly. There's going to be a learning phase here whether you like it or not.