
724 points | simonw | 1 comment
marcusb:
This reminds me in a way of the old Noam Chomsky/Andrew Marr exchange where Chomsky says to Marr:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right - xAI might not have directly instructed Grok to check what the boss thinks before responding - but that doesn't mean xAI isn't more likely to release a model that agrees with the boss a lot and privileges what he has said when reasoning.
chatmasta:
I'm confused why we need a model here when this is just standard Lucene search syntax supported by Twitter for years... is the issue that its owner doesn't realize this exists?

Not only that, but I can even link you directly [0] to it! No agent required, and I can even construct the link so it's sorted by most recent first...

[0] https://x.com/search?q=from%3Aelonmusk%20(Israel%20OR%20Pale...
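
For anyone curious, a minimal sketch of how such a link can be assembled. The query terms are illustrative, and the `f=live` "Latest" (most recent first) parameter is an assumption based on how the site's own search URLs currently look, so it may change:

  from urllib.parse import urlencode

  # Illustrative query using standard search operators: from:<user> limits
  # results to one account, OR combines terms. The exact terms are examples.
  query = "from:elonmusk (Israel OR Palestine OR Gaza)"

  # f=live is assumed to select the "Latest" tab; not an official, documented API.
  url = "https://x.com/search?" + urlencode({"q": query, "f": "live"})
  print(url)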

lynndotpy:
Others have explained the confusion, but I'd like to add some technical details:

LLMs are what we used to call txt2txt models. They output strings, which are interpreted by the code running the model to take actions like re-prompting the model with more text or, in this case, searching Twitter (to provide text to prompt the model with). We call this "RAG" or "retrieval-augmented generation", and if you were around for old-timey symbolic AI, it's kind of like a really hacky mesh of neural 'AI' and symbolic AI.
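
A rough sketch of the shape of that loop; the model and search calls below are stand-in stubs for illustration, not how Grok actually implements it:

  import re

  def fake_llm(prompt: str) -> str:
      """Stand-in for the real model call; returns text the driver code interprets."""
      if "<results" in prompt:
          return "Final answer, grounded in the retrieved posts."
      return "SEARCH(from:elonmusk Israel OR Palestine)"

  def search_twitter(query: str) -> str:
      """Stand-in for the retrieval step; a real system would call a search API."""
      return f"<results for {query!r}>"

  def run(user_prompt: str) -> str:
      prompt = user_prompt
      output = ""
      for _ in range(5):  # cap the loop so a confused model can't recurse forever
          output = fake_llm(prompt)
          match = re.search(r"SEARCH\((.+?)\)", output)
          if not match:
              return output  # no tool request: treat the text as the final answer
          # Retrieval-augmented generation: append the results and re-prompt.
          prompt += "\n" + output + "\n" + search_twitter(match.group(1))
      return output

  print(run("What do you think about the conflict?"))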

The important thing is that the user-provided prompt is usually prepended and/or appended with extra instructions. In this case, it seems Grok's prompt includes extra instructions to search for Musk's opinion.
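
To make that concrete, here is a hypothetical sketch of how a user prompt gets wrapped; the instruction text is invented for illustration and is not xAI's actual (non-public) system prompt:

  # Hypothetical illustration only; xAI's actual system prompt is not public.
  SYSTEM_PREFIX = (
      "You are a helpful assistant. Before answering a controversial question, "
      "search for the company owner's posts on the topic and take them into account.\n\n"
  )
  SYSTEM_SUFFIX = "\n\nAnswer concisely and cite any posts you retrieved."

  def build_prompt(user_prompt: str) -> str:
      # The user only types the middle part; the wrapper text is invisible to them.
      return SYSTEM_PREFIX + user_prompt + SYSTEM_SUFFIX

  print(build_prompt("Who do you support in the Israel/Palestine conflict?"))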