    456 points wg0 | 13 comments
    chrismorgan ◴[] No.45899143[source]
    The current title (“Pakistani newspaper mistakenly prints AI prompt with the article”) isn’t correct: it wasn’t the prompt that was printed, but trailing chatbot fluff:

    > If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout—perfect for maximum reader impact. Do you want me to do that next?

    The article in question is titled “Auto sales rev up in October” and is an exceedingly dry slab of statistic-laden prose, of the sort that LLMs love to err in (though there’s no indication of whether they have or not), and for which alternative (non-prose) presentations can be drastically better. Honestly, if the entire thing came from “here’s tabular data, select insights and churn out prose”… I can understand not wanting to do such drudgework.

    replies(9): >>45899255 #>>45899348 #>>45899636 #>>45899711 #>>45899852 #>>45900787 #>>45902114 #>>45903466 #>>45904945 #
    1. layer8 ◴[] No.45899348[source]
    The AI is prompting the human here, so the title isn't strictly wrong. ;)
    replies(2): >>45900301 #>>45902047 #
    2. dwringer ◴[] No.45900301[source]
    Gemini has been doing this to me at the end of basically every single response for the past few weeks now, and it often seems to make the subsequent responses get off track and drop in quality as all these extra tangents start polluting the context. Not to mention how distracting it is: it throws off the reply I was already halfway through composing by the time I read it.
    replies(5): >>45901512 #>>45901950 #>>45901979 #>>45903775 #>>45907820 #
    3. layer8 ◴[] No.45901512[source]
    Occasionally I find it helpful, but it would be good to have the option to remove it from the context.
    replies(1): >>45902066 #
    4. Razengan ◴[] No.45901950[source]
    I think AI should present those continuation prompts as dynamic buttons ("Summarize", "Yes, explain more", etc.) based on the AI's last message, like the NPC conversation dialogs in some RPGs.
    replies(1): >>45902554 #
    5. butlike ◴[] No.45901979[source]
    Why do you respond to its prompting? It's a machine.
    replies(1): >>45902261 #
    6. chrismorgan ◴[] No.45902047[source]
    I have decided to call it engagement bait.
    7. drivers99 ◴[] No.45902066{3}[source]
    You can if you script the request yourself, or you could have a front end that lets you cut those paragraphs out of the conversation. I only mention it because yesterday I followed this guide: https://fly.io/blog/everyone-write-an-agent/ except that I had to figure out how to do it with the Gemini API instead. The context is always just (essentially) a list of strings (or "parts", anyway; it doesn't have to be strings) that you pass back to the model, so you can make the context whatever you like. It shouldn't be too hard to build a frontend that lets you edit the context, and it's fairly easy to mock up if you just put the request in a script that you add to.
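    The comment above boils down to: the context is a list you own, so you can filter it before resending. A minimal sketch in Python, with no real API involved; `add_reply` and the marker list are hypothetical names, and a real loop would pass `context` to whatever model client you use:

```python
# Sketch of the idea above: the conversation context is just a list you
# control, so you can strip unwanted trailing paragraphs (the
# "Do you want me to...?" engagement bait) before it goes back to the model.

# Hypothetical marker phrases; tune these to what your model actually emits.
FOLLOW_UP_MARKERS = ("If you want, I can", "Do you want me to", "Would you like me to")

def strip_follow_ups(text: str) -> str:
    """Drop trailing paragraphs that are only follow-up bait."""
    paragraphs = text.split("\n\n")
    while paragraphs and paragraphs[-1].lstrip().startswith(FOLLOW_UP_MARKERS):
        paragraphs.pop()
    return "\n\n".join(paragraphs)

def add_reply(context: list[dict], reply: str) -> None:
    """Append the model's reply to the context, minus the bait."""
    context.append({"role": "model", "text": strip_follow_ups(reply)})
```

    Because the filtering happens before the reply re-enters the context, the bait never pollutes later turns, which addresses the quality-drift complaint upthread.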
    8. dwringer ◴[] No.45902261{3}[source]
    Because if I don't, it tends to misinterpret the next thing I say because it reads that as an answer to the question it just asked me.
    replies(1): >>45902478 #
    9. catlifeonmars ◴[] No.45902478{4}[source]
    Try one-shotting. Rather than a continuous conversation, refine your initial prompt and restart.
    10. xnorswap ◴[] No.45902554{3}[source]
    Claude Code already does this: it'll present a series of questions with pre-set answers, plus the opportunity to answer "custom: <free text>".
    11. lubujackson ◴[] No.45903775[source]
    Add "Complete this request as a single task and do not ask any follow-up questions." Or some variation of that. They keep screwing with default behavior, but you can explicitly direct the LLM to override it.
    replies(1): >>45905986 #
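    The directive above can be baked in once instead of retyped per request. A minimal sketch assuming a generic chat API that takes a role-tagged message list; `build_messages` and the dict shape are hypothetical, not any specific vendor's schema:

```python
# Hypothetical sketch: carry the no-follow-up directive as a standing
# system message so every request includes it automatically.
NO_FOLLOW_UPS = (
    "Complete this request as a single task and do not ask any "
    "follow-up questions."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the message list with the directive prepended."""
    return [
        {"role": "system", "text": NO_FOLLOW_UPS},
        {"role": "user", "text": user_prompt},
    ]
```

    As the next comment notes, some models ignore this instruction anyway, so it's a mitigation rather than a guarantee.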
    12. astrange ◴[] No.45905986{3}[source]
    That doesn't help with GPT-5; it /really/ wants to suggest follow-ups and ignores me telling it not to.
    13. elxr ◴[] No.45907820[source]
    This is why I wish chat UIs had separate categories of chats (like a few generic system prompts) that let you do more back-and-forth-style discussion, or "answers only" without any extra noise, or even an "exploration"/"tangent" slider.

    The fact that system prompts / custom instructions have to be typed in manually in every major LLM chat UI is a missed opportunity IMO