
1245 points adrianh | 20 comments
1. dingnuts ◴[] No.44492359[source]
[flagged]
replies(4): >>44492524 #>>44492566 #>>44493015 #>>44496543 #
2. simonw ◴[] No.44492524[source]
Plenty of people have English as a second language. Having an LLM help them rewrite their writing to make it better conform to a language they are not fluent in feels entirely appropriate to me.

I don't care if they used an LLM, provided they put in their best effort to confirm that it's clearly communicating the message they intend to communicate.

replies(2): >>44492560 #>>44492564 #
3. kragen ◴[] No.44492560[source]
Yeah, my wife was just telling me how much Grammarly has helped her with improving her English.
4. avalys ◴[] No.44492566[source]
What makes you feel so entitled to tell other people what to do?
replies(1): >>44492582 #
5. kragen ◴[] No.44492582[source]
Anyone is entitled to make a request—or to ignore one.
replies(1): >>44493141 #
6. kragen ◴[] No.44492608{3}[source]
On the contrary, I've found Simon's opinions informative and valuable for many years, since I first saw the lightning talk at PyCon about what became Django, which IIRC was significantly Simon's work. I see nothing in his recent writing to suggest that this has changed. Rather, I have found his writing to be the most reliable and high-information-density information about the rapid evolution of AI.

Language only works as a form of communication when knowledge of vocabulary, grammar, etc., is shared between interlocutors, even though indeed there is no objectively correct truth there, only social convention. Foreign language learners have to acquire that knowledge, which is difficult and slow. For every "turn of phrase" you "enjoy" there are a hundred frustrating failures to communicate, which can sometimes be serious; I can think of one occasion when I told someone I was delighted when she told me her boyfriend had dumped her, and another occasion when I thought someone was accusing me of lying, both because of my limited fluency in the languages we were using, French and Spanish respectively.

7. simonw ◴[] No.44492619{3}[source]
If you think my writing is AI-generated, you need to recalibrate your AI writing detection skills; they're way off.
replies(2): >>44492687 #>>44492778 #
8. ctxc ◴[] No.44492778{4}[source]
Hijacking, but

Hey hey, you're the TIL guy! When I was designing my blog I looked at what were suggested as the best blogs, and yours was among them.

The TIL is such a great idea; it takes the pressure off of "is it really good enough to post as a blog?"

Glad to see you here :D

replies(1): >>44497088 #
9. alwa ◴[] No.44493015[source]
Does this extend to the heuristic TFA refers to? Where they end up (voluntarily or not) referring to what LLMs hallucinate as a kind of “normative expectation,” then use that to guide their own original work and to minimize the degree to which they’re unintentionally surprising their audience? In this case it feels a little icky and demanding because the ASCII tablature feature feels itself like an artifact of ChatGPT’s limitations. But like some of the commenters upthread, I like the idea of using it for “if you came into my project cold, how would you expect it to work?”

Having wrangled some open-source work that’s the kind of genius that only its mother could love… there’s a place for idiosyncratic interface design (UI-wise and API-wise), but there’s also a whole group of people who are great at that design sensibility. That category of people doesn’t always overlap with people who are great at the underlying engineering. Similarly, as academic writing tends to demonstrate, people with interesting and important ideas aren’t always people with a tremendous facility for writing to be read.

(And then there are people like me who have neither—I agree that you should roll your eyes at anything I ask an LLM to squirt out! :)

But GP’s technique, like TFA’s, sounds to me like something closer to that of a person with something meaningful to say, who now has a patient close-reader alongside them while they hone drafts. It’s not like you’d take half of your test reader’s suggestions, but some of them might be good in a way that didn’t occur to you in the moment, right?

10. soganess ◴[] No.44493141{3}[source]
There is a big difference between the above 'request' and, say, me politely asking the time of a complete stranger I walk by on the street.

Requests containing elements of hostility, shame, or injury frequently serve dual purposes: (1) the ostensible aim of eliciting an action, and (2) the underlying objective of inflicting some form of harm (here, shame) as a means of compelling compliance through emotional leverage. Even if the respondent doesn't honor the request, the secondary purpose is still achieved.

replies(1): >>44493598 #
11. kragen ◴[] No.44493598{4}[source]
These are good points, but I think they represent a somewhat narrow view of the issue. What's happening here is that we're discussing among ourselves what kinds of actions would be good or bad with respect to AI, just as we would with any other social issue, such as urban development, immigration, or marital infidelity. You could certainly argue that saying "please don't replace wetlands with shopping malls" or "please don't immigrate to the United States" has "the underlying objective of inflicting some form of harm (here shame) as a means of compelling compliance through emotional leverage."

But it isn't a given that this will be successful; the outcome of the resulting conversation may well be that shopping malls are, or a particular shopping mall is, more desirable than wetlands, in which case the ostensible respondent will be less likely to comply than they would have been without the conversation. And, in this case, it seems that the conversation is strongly tending toward favoring the use of things like Grammarly rather than opposing it.

So I don't oppose starting such conversations. I think it's better to discuss ethical questions like this openly, even though sometimes people suffer shame as a result.

replies(1): >>44496889 #
12. tomhow ◴[] No.44496543[source]
Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

We detached this subthread from https://news.ycombinator.com/item?id=44492212 and marked it off topic.

13. tomhow ◴[] No.44496553{5}[source]
Woah! You can't comment like this on Hacker News, no matter who you're replying to or what it's about.

If posts are unfit for HN or if you think someone is posting too much, flag them and email us, and there are things we can do.

It's never ok to personally attack someone like this on HN. If we want others to do better we have to hold ourselves to a high standard too.

https://news.ycombinator.com/newsguidelines.html

14. pvg ◴[] No.44496889{5}[source]
Hectoring someone to 'stop doing this' is not 'starting a conversation', it's just hectoring.
replies(1): >>44501817 #
15. tptacek ◴[] No.44497088{5}[source]
He's always here! He's here all the time! He's one of the good features of HN. :)
16. kragen ◴[] No.44501817{6}[source]
A conversation on the topic certainly did ensue; see https://news.ycombinator.com/item?id=44492524 and https://news.ycombinator.com/item?id=44493015. Perhaps you mean to say that this wasn't the intended effect? But it was at least a highly predictable effect. Perhaps it would have gone better for the flamer if they had made the request without flaming both the author in question and simonw.

To me the request in question seems to be in the same spirit as "Please don't play your music so loud at night", "Please don't look at my sister", or "Please don't throw your trash out your car window". In each of these cases, there's clearly a conflict between different people's desires, probably accompanied with underlying disagreements about relevant duties; perhaps one person believes the other has a duty to avert their gaze from the sister in question to show respect to her chastity, while their interlocutor does not subscribe to any such duty, believing he is entitled to look at whomever he pleases. Or perhaps one person believes the other has a duty to carry their trash to a trash can, while the other does not.

Given that such a conflict has arisen, how can we resolve it? We could simply refrain from trying to influence one another's behavior at all, which is the lowest-effort approach, but this clearly leads to deeply suboptimal outcomes in many cases; perhaps the cost of turning down the stereo or carrying the garbage to a trash can would be almost trivial, so doing it to accommodate others' preferences results in a net improvement in welfare. Alternatively, we could try to exclude people whose normative beliefs differ from our own from the spaces that most affect us, but it should be obvious that this also often causes harms far out of proportion to the good that results, such as ethnic cleansing.

All the other approaches to resolving the conflict that I can think of—bargaining, mediation, arbitration, collective deliberation, etc.—begin unavoidably with stating the unfulfilled desire. Or, as you put it, hectoring someone to 'stop doing this'.

replies(1): >>44502310 #
17. pvg ◴[] No.44502310{7}[source]
There's no analogy or wall of text that makes that comment unshitty and inviting of conversation. It's not a thing one should do on HN because it trashes the place. We resolve this by striving to control our own reflexive dickishness and downvoting/flagging the egregiously dickish comments, which is exactly what happened here.
replies(1): >>44503057 #
18. kragen ◴[] No.44503057{8}[source]
I agree that it's a dickish, shitty comment, and uninviting of conversation. I don't agree that the reason it's dickish is that comments of the form "Please don't do such and such" are inherently dickish. I think that such comments are uncomfortable but necessary, and tabooing discussion of such conflicts does more harm than good—in addition to the reasons above, it would ensure that only the most disagreeable commenters dare to make them.

Undoubtedly, if you devote the minute and a half required to read my "wall of text" comment above, you will be persuaded by its reasoning.

replies(1): >>44503196 #
19. pvg ◴[] No.44503196{9}[source]
I fixed the confusing bit, thanks. I'm not persuaded by the reasoning because I don't see how the reasoning is relevant - we're talking about a specific dickish comment in a specific social place with its specific norms. These are so well understood and established the comment got flagblasted by users and moderator scolded on top of that - effectively the maximum penalty/public shaming an HN comment can get. It's not a hypothetical different context in which some kind of hypothetical value eventually comes from such comments - the bad comment and the bad subthread are concretely in front of us.
replies(1): >>44503301 #
20. kragen ◴[] No.44503301{10}[source]
I think the non-hypothetical value that came from this comment in this case is that it surfaced good reasons for writers to use generative AI and showed that many people support doing so. I would have liked to see that happen in a much more civil fashion, but I don't think it could happen at all without some openly stated form of the initial objection to writers using generative AI. So I think that's the wrong aspect of the initial comment to taboo.

My concern is that the flagblasting and moderator-scolding, while certainly justified by the comment in question, will cause the collateral damage of discouraging politer versions of such comments in the future. So I think it's worthwhile to affirm that criticizing people's behavior to their face is not in fact inherently dickish, but rather a much better alternative to doing it behind their back, or to finding ways to silently exclude them, or people you suspect of being like them.