
600 points | antirez | 1 comment
wg0 ◴[] No.44629309[source]
I don't understand.

Is the author suggesting manually pasting Redis C files into the Gemini Pro chat window on the web?

replies(1): >>44629357 #
thefourthchime ◴[] No.44629357[source]
I was mostly nodding my head until he got to this part.

> The fundamental requirement for the LLM to be used is: don’t use agents or things like editor with integrated coding agents.

So right, is he actually copying and pasting stuff into a chat window? I did this before Copilot, but with Cursor I would never think of doing that. He never mentions Cursor or Claude Code, so I wonder if he has even tried them.

replies(4): >>44629647 #>>44629781 #>>44631780 #>>44642442 #
libraryofbabel ◴[] No.44629647[source]
Right, this didn’t make much sense to me either. Who’d still recommend copy-and-paste-into-chat coding these days, with Claude Code and similar agents available? I wonder if he’s thinking of agents / IDEs like Windsurf, Copilot, Cursor, etc., where there is more complexity between you and the frontier LLM and various tricks to minimize token use. Claude Code, Gemini CLI, etc. aren’t like that: they will just read whole files into the context so the LLM can see everything, which I think achieves what he wants, but with all the additional magic of agents (edits, running tests, etc.) as well.
replies(1): >>44631808 #
Implicated ◴[] No.44631808[source]
> agents / IDEs like windsurf, copilot, cursor etc where there is more complexity between you and the frontier LLM and various tricks to minimize token use.

This is exactly why he's doing it the way he is, and why what he describes is still the most effective, albeit labor-intensive, way to work on hard/complex problems with LLMs.

Those tricks are for saving money. They don't make the LLM better at its task; they just make it so the LLM will do what you could/should be doing yourself. We're using agents because we're lazy, or don't have the time or attention to devote, or because the problems are trivial enough to be solved with these "tricks" and added complexities.

But if you're trying to solve something complex, or don't want a bunch of back and forth with the LLM, or don't want to watch it iterate and do some dumb stuff: curate that context. Actually put thought and time into what you provide the LLM, both in context and in prompt. You may find that what you get is a completely different product.
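A minimal sketch of what that hand-curation can look like in practice: concatenate a hand-picked set of source files under clear headers, append the task, and paste the result into the chat window yourself. The `build_prompt` helper and the file paths are hypothetical, not from the thread or from antirez's post.

```python
from pathlib import Path

def build_prompt(paths, task):
    """Bundle hand-picked source files (each under a labeled header)
    plus the task description into one block of text, ready to paste
    into an LLM chat window. Illustrative helper only."""
    parts = []
    for p in paths:
        text = Path(p).read_text()
        parts.append(f"=== {p} ===\n{text}")
    parts.append(f"=== TASK ===\n{task}")
    return "\n\n".join(parts)

# Example (hypothetical file list):
# prompt = build_prompt(["src/t_string.c", "src/object.c"],
#                       "Find the off-by-one in the append path.")
# print(len(prompt) // 4)  # crude token estimate: ~4 chars per token
```

The point isn't the code; it's that you, not an agent's retrieval heuristics, decide exactly which files the model sees in full.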

Or, if you're just having it build views and buttons - keep vibing.