
1246 points adrianh | 1 comment
dingnuts No.44492359
[flagged]
alwa No.44493015
Does this extend to the heuristic TFA refers to, where they end up (voluntarily or not) treating what LLMs hallucinate as a kind of “normative expectation,” then using that to guide their own original work and to minimize how often they unintentionally surprise their audience? In this case it feels a little icky and demanding, because the ASCII tablature feature itself feels like an artifact of ChatGPT’s limitations. But like some of the commenters upthread, I like the idea of using it for “if you came into my project cold, how would you expect it to work?”

Having wrangled some open-source work that’s the kind of genius only its mother could love… there’s a place for idiosyncratic interface design (UI-wise and API-wise), but there’s also a whole group of people with a great sensibility for that kind of design, and that group doesn’t always overlap with the people who are great at the underlying engineering. Similarly, as academic writing tends to demonstrate, people with interesting and important ideas aren’t always people with a tremendous facility for writing to be read.

(And then there are people like me who have neither—I agree that you should roll your eyes at anything I ask an LLM to squirt out! :)

But GP’s technique, like TFA’s, sounds to me more like that of a person with something meaningful to say, who now has a patient close reader alongside them while they hone drafts. It’s not like you’d take half of your test reader’s suggestions, but some of them might be good in a way that didn’t occur to you in the moment, right?