Most active commenters
  • johnnyanmac(11)
  • moomoo11(5)
  • jaredcwhite(5)
  • lcnPylGDnU4H9OF(5)
  • Buttons840(3)
  • chaps(3)
  • righthand(3)
  • pyman(3)
  • lxgr(3)

321 points distantprovince | 106 comments
1. phito ◴[] No.44617442[source]
I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
replies(10): >>44617497 #>>44617500 #>>44617658 #>>44617721 #>>44617880 #>>44617940 #>>44618006 #>>44618504 #>>44619441 #>>44622817 #
2. SoftTalker ◴[] No.44617497[source]
Even worse when they accidentally leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day, and at the bottom was this line:

> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?

replies(3): >>44617717 #>>44617794 #>>44617802 #
3. benatkin ◴[] No.44617500[source]
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
replies(1): >>44617529 #
4. echelon ◴[] No.44617529[source]
Wow. What a good giveaway.

I wonder what others there are.

I occasionally use bullet points, em dashes (Unicode, single, and double hyphens), and words like "delve". I hate to think these are the new heuristics.

I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.

Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.

[1] most recently https://news.ycombinator.com/item?id=44482876

replies(5): >>44617586 #>>44617610 #>>44617712 #>>44617768 #>>44617817 #
5. Buttons840 ◴[] No.44617586{3}[source]
Use two dashes instead of an actual em dash. ChatGPT, at least, cannot do the same--it just can't.
replies(5): >>44617630 #>>44617635 #>>44617694 #>>44617750 #>>44620853 #
6. furyofantares ◴[] No.44617610{3}[source]
Maybe I'm misunderstanding - but I don't think LLMs say cow-orkers. Or is that what you mean?
replies(2): >>44617650 #>>44617728 #
7. chaps ◴[] No.44617630{4}[source]
As a frequent user of two dashes.. I hate how people now associate it with AI.

Also, that "cow-orkers" doesn't look like AI-generated slop at all..? Just scrolling down a bit shows that most of the results are three years old or older.

replies(1): >>44617659 #
8. JoshTriplett ◴[] No.44617635{4}[source]
Conventionally, in various tools that take plain text as input, two dashes is an en-dash, and three dashes is an em-dash.
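That plain-text convention is easy to apply mechanically. Here is a minimal Python sketch of such a pass, in the spirit of tools like SmartyPants (the function name is mine, not any tool's actual API):

    def smart_dashes(text: str) -> str:
        # Plain-text convention: '---' becomes an em dash, '--' an en dash.
        # Replace the longer run first so '---' isn't consumed by the '--' rule.
        text = text.replace("---", "\u2014")  # em dash
        text = text.replace("--", "\u2013")   # en dash
        return text

    print(smart_dashes("pages 3--5 --- see appendix"))
    # Output: "pages 3–5 — see appendix"
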
9. ◴[] No.44617650{4}[source]
10. shreezus ◴[] No.44617658[source]
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.

“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”

replies(3): >>44617698 #>>44617796 #>>44618261 #
11. ◴[] No.44617659{5}[source]
12. Jtsummers ◴[] No.44617694{4}[source]
It can't use two dashes? Is that like how Data couldn't use contractions (except he did)?
replies(1): >>44621880 #
13. vouaobrasil ◴[] No.44617698[source]
Best thing you can do is quit LinkedIn. I deleted my account as soon as I first noticed AI-generated content there.
replies(1): >>44618024 #
14. lupusreal ◴[] No.44617712{3}[source]
Giveaway for what, old farts? That link contains a comment citing the Jargon File, which in turn says that the term is an old Usenet meme.
replies(1): >>44617739 #
15. righthand ◴[] No.44617717[source]
Clippy is rolling in his grave.
16. herval ◴[] No.44617721[source]
have you tried sharing that feedback with them?

one of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback - it felt to me like he wasn't even listening when he just copy-pasted clearly AI-generated responses. Thankfully he stopped doing it.

Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.

replies(2): >>44617858 #>>44644280 #
17. ffsm8 ◴[] No.44617728{4}[source]
As this error seems to go back a lot longer than LLMs have existed (17 yrs), it could be an autocorrect situation.

Might be incorrectly saved in some spell-check software, occasionally rearing its head.

replies(2): >>44617754 #>>44617775 #
18. chaps ◴[] No.44617739{4}[source]
Soon HN is going to be flooded with blogs about people trying and failing miserably to find AI signal from noisy online discussions with examples like this one.
19. skeledrew ◴[] No.44617750{4}[source]
ChatGPT: "Hold my beer..."
20. chaps ◴[] No.44617754{5}[source]
https://ask.metafilter.com/15649/coworkers-why/amp

This goes back a loooooong while.

21. scarface_74 ◴[] No.44617768{3}[source]
How is that a “giveaway”? The search turns up results from 7 years ago, before LLMs were a thing. More than likely it’s autocorrect going astray. I can’t imagine an LLM making that mistake.
22. furyofantares ◴[] No.44617775{5}[source]
Oh I see the confusion then. It's not an error, it's a joke, and a very old one at that. Like saying Micro$oft.
23. righthand ◴[] No.44617794[source]
Seriously, you should respond to the slop in the email and waste your coworker’s time too.

“No I don’t need this formatted for Outlook Dave. Thanks for asking though!”

replies(1): >>44618015 #
24. quietbritishjim ◴[] No.44617796[source]
> now the interface deliberately suggests AI-generated responses to posts

This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.

replies(1): >>44618414 #
25. ◴[] No.44617802[source]
26. Velorivox ◴[] No.44617817{3}[source]
I like to use em-dashes as well (option-shift-hyphen on my MacBook). I've seen people try to prompt LLMs not to use em-dashes, and I've been in forums where, as soon as you type an em-dash, the submit button is blocked and you're told not to use AI.

Here's my take: these forums will drive good writers away, or at least discourage them, leaving the discourse the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.
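
As an aside, the gatekeeping check such a forum would run is about as blunt as it sounds. A hypothetical sketch of the heuristic in Python (not any particular forum's actual code):

    EM_DASH = "\u2014"

    def blocks_submission(comment: str) -> bool:
        # The naive "AI detector": any literal em dash blocks the post.
        # It flags careful human typists (option-shift-hyphen) just as readily.
        return EM_DASH in comment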

27. ◴[] No.44617858[source]
28. pyman ◴[] No.44617880[source]
Didn't our parents go through the same thing when email came out?

My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."

Change is inevitable. Most people just won't like it.

A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Neither of these things will matter in the next two or three years.

replies(7): >>44617893 #>>44618028 #>>44618170 #>>44618248 #>>44618284 #>>44618500 #>>44620877 #
29. aidos ◴[] No.44617893[source]
I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.
replies(5): >>44617962 #>>44617983 #>>44618011 #>>44618251 #>>44618779 #
30. lxgr ◴[] No.44617940[source]
"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.

Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."

replies(3): >>44619024 #>>44619799 #>>44620829 #
31. unyttigfjelltol ◴[] No.44617962{3}[source]
Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.
replies(1): >>44620978 #
32. pyman ◴[] No.44617983{3}[source]
Initially, it had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
replies(4): >>44618685 #>>44618805 #>>44619444 #>>44621408 #
33. moomoo11 ◴[] No.44618006[source]
Why? AI is a tool. Are their messages incorrect or something? If not, who cares? They’re being efficient and thus more productive.

Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…

I really hope people like this, with their holier-than-thou attitudes, get filtered out. Fast.

People who don’t adapt to use new tools are some of the worst people to work around.

replies(4): >>44618563 #>>44619431 #>>44620504 #>>44620812 #
34. moomoo11 ◴[] No.44618011{3}[source]
The prompt is theirs.
replies(1): >>44618143 #
35. AlecSchueler ◴[] No.44618015{3}[source]
That wastes your own time as well though.
replies(4): >>44619327 #>>44619955 #>>44620634 #>>44623395 #
36. stevekemp ◴[] No.44618024{3}[source]
I guess that makes sense, unless you're single. LinkedIn is the new tinder.
replies(2): >>44618872 #>>44621231 #
37. threatofrain ◴[] No.44618028[source]
I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.

So have your Siri talk to my Cortana and we'll work things out.

Is this a colder world, or just old people not understanding the future?

replies(1): >>44618308 #
38. Tadpole9181 ◴[] No.44618143{4}[source]
Then just send me the prompt.
replies(1): >>44644097 #
39. kenanblair ◴[] No.44618170[source]
Same thing with photography and painting. These opinionated pieces present a false dichotomy that propagates into argument: we have a tunable dial, not a switch, and can appropriately increase or decrease our consideration, time, and focus along a spectrum.

I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.

40. j45 ◴[] No.44618248[source]
One thing: it's less about change and more about quality vs. quantity, and both have their place.
41. j45 ◴[] No.44618251{3}[source]
Some do put their own words into the LLM and just clean them up.

That way it stays much closer to how they actually write.

replies(1): >>44618908 #
42. j45 ◴[] No.44618261[source]
AI content that doesn't read as AI today will have to be the kind that still doesn't read as AI in 1-2 years.

Folks who are new to AI are just posting away with their December 2022-style output, because it's new to them.

It is best to personally understand your own style(s) of communication.

43. conartist6 ◴[] No.44618284[source]
Just be a robot. Sell your voice to the AI overlords. Sell your ears and eyes. Reality was the scam; choose the Matrix. I choose the Matrix!
44. conartist6 ◴[] No.44618308{3}[source]
It's a demonstration by absurdity that that is not the future. You're describing the collapse of all value.
45. distantprovince ◴[] No.44618414{3}[source]
This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply if you want to be polite", and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.
replies(1): >>44621605 #
46. distantprovince ◴[] No.44618500[source]
I can see the similarity, yes! Although I do feel like the distance between a handwritten letter and an email is shorter than between an email and an LLM-generated email. There's some line it crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, and you can easily save it, copy it, attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! An LLM-generated email provides no benefit to the reader, though; it just wastes their resources on yapping that no human cared to write.
47. anal_reactor ◴[] No.44618504[source]
I love it because it allows me to filter out people not worth my time and attention beyond minimal politeness and professionalism.
48. distantprovince ◴[] No.44618563[source]
> If it’s slop or they have incorrect information in the message, then my bad, stop reading here.

"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.

replies(1): >>44618757 #
49. aspenmayer ◴[] No.44618685{4}[source]
Code is either fit for a given purpose or not. Communicating with an LLM instead of directly with the desired recipient may be considered fit for purpose by the receiving party, but it’s not for the LLM user to say what the goals of the writer are, nor what they ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no and basic autocomplete. Otherwise I’m not even interacting with a human in the loop, except before they hit send, which doesn’t inspire confidence.
50. moomoo11 ◴[] No.44618757{3}[source]
That’s on them, I said what I wanted to.

Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.

replies(1): >>44620848 #
51. drweevil ◴[] No.44618779{3}[source]
That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense here is with the senseless chain mails people used to forward endlessly. They have that same quality.
52. majormajor ◴[] No.44618805{4}[source]
Similar-looking effects are not the "same" effect.

"Change always triggers backlash" does not imply "all backlash is unwarranted."

> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.

But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.

You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."

53. exographicskip ◴[] No.44618872{4}[source]
Color me intrigued
replies(1): >>44618939 #
54. ◴[] No.44618908{4}[source]
55. slumberlust ◴[] No.44618939{5}[source]
Would you like to 'swap business cards?'
56. pyman ◴[] No.44619024[source]
I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Neither of these things will matter in the next two or three years. LLMs will keep getting smarter, while our egos keep getting smaller.

People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.

replies(2): >>44620549 #>>44624745 #
57. panarchy ◴[] No.44619327{4}[source]
Just get an AI to do it for you
replies(1): >>44620102 #
58. jaredcwhite ◴[] No.44619431[source]
If it took you no time to write it, I'll spend no time reading it.

The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.

replies(3): >>44619464 #>>44620374 #>>44620516 #
59. eric_cc ◴[] No.44619441[source]
I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that’s not necessarily what you’re dealing with but it’s worth considering.
replies(1): >>44623348 #
60. jaredcwhite ◴[] No.44619444{4}[source]
Doesn't matter today? What are you even talking about? It completely matters if the code you write is yours. The only people saying otherwise have fallen prey to the cult of slop.
replies(1): >>44619570 #
61. eric_cc ◴[] No.44619464{3}[source]
How do you know the effort that went into the message? Somebody with writing challenges may have written the whole thing up and used AI assistance to get a better outcome. They may have proofread and revised the generated message. You sound very judgmental.
replies(2): >>44619527 #>>44620834 #
62. jaredcwhite ◴[] No.44619527{4}[source]
And you sound very ableist. Why should we expect people who may have a cognitive disability of some kind to cloak that with technology, rather than us giving them the grace to communicate how they like on their terms?
replies(1): >>44644131 #
63. lcnPylGDnU4H9OF ◴[] No.44619570{5}[source]
Why does it matter where the code came from if it is correct?
replies(2): >>44619619 #>>44620939 #
64. jaredcwhite ◴[] No.44619619{6}[source]
Why does it matter where the paint came from if it looks pretty?

Why does it matter where the legal claims came from if a judge accepts them?

Why does it matter where the sound waves came from if it sounds catchy?

Why does it matter?

Why does anything matter?

Sorry, I normally love debating epistemology but not here on Hacker News. :)

replies(1): >>44619752 #
65. lcnPylGDnU4H9OF ◴[] No.44619752{7}[source]
I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.

It does not seem to matter where the code or the legal argument came from. What matters is that they are coherent.

replies(1): >>44620965 #
66. csa ◴[] No.44619799[source]
“No”
67. righthand ◴[] No.44619955{4}[source]
Not much, and it points out how crappy Dave’s slop job is, especially if you do it with Reply All. We already entered the time-wasting zone when Dave copypasta’d.
68. bri3k ◴[] No.44620102{5}[source]
I always thought this was the endgame: my AI agent talking to their AI agent, with no human paying attention to the conversation at all.
69. moomoo11 ◴[] No.44620374{3}[source]
Good luck. If you're an employee, remember that you are an expense line item :P
70. sfink ◴[] No.44620504[source]
They are being efficient with their own time, yes, but it's at the expense of mine. I get less signal. We used to bemoan how hard it was to effectively communicate via text only instead of in person. Now, rather than fixing that gap, we've moved on to removing even more of the signal. We have to infer the intentions of the sender by guessing what they fed into the LLM to avoid getting tricked by what the LLM incorrectly added or accentuated.

The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.

71. sfink ◴[] No.44620516{3}[source]
Except you will spend your time reading it, because that's what is required to figure out that it's written with an LLM. The first few times, at least...
72. throwaway328 ◴[] No.44620549{3}[source]
The word for this, we learned recently, is "LLM inevitabilism". It's often argued for far more convincingly than your attempt here, too.

The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?

When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.

replies(2): >>44620721 #>>44620750 #
73. johnnyanmac ◴[] No.44620634{4}[source]
Long con. Shame the coworkers and they stop using AI, or at least are more careful editing the output. Bonus effect: you're probably not the only one annoyed by this, so this also saves other coworkers' time.
74. lxgr ◴[] No.44620721{4}[source]
None of what GP describes is a hypothetical. Present-day LLMs are excellent editors and translators, and for many people, those were the only two things missing for them to be able to present a good idea convincingly.
replies(1): >>44620766 #
75. johnnyanmac ◴[] No.44620750{4}[source]
Probably in a few years. The big Disney lawsuit may be the needle that pops the bubble.

I do agree about the push for inevitability. In small ways it's true, but it doesn't need to take over every aspect of humanity. We have calculators, but we still do basic mental math rather than reaching for a calculator for 5 + 5. It's long been established as rude to do more than glance at your phone when meeting people in person. We learned not to post Google search/wiki links as a response in forums.

Culture still shapes a lot of how we use the modern tools we have.

replies(1): >>44622170 #
76. johnnyanmac ◴[] No.44620766{5}[source]
Just because we have the tech doesn't mean we are forced to use it. We still have social cues and etiquette shaping what is and isn't appropriate.

In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons. And I thought we were past the "thesaurus era" of communication, where we padded comments with uncommon words to sound smarter.

replies(1): >>44620920 #
77. johnnyanmac ◴[] No.44620812[source]
>Are their messages incorrect or something?

consider 3 scenarios:

1. Misinformation. This is the one you mention, so I don't need to elaborate.

2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a two-way street. This is why AI-generated code in reviews is so infuriating.

3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading."

For your tool metaphor, it's like discovering superglue, then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can be, have been, and will be misused. I think it's best to try to correct that early on, before we have a lot of sticky nails.

78. xeonmc ◴[] No.44620829[source]
"Output all subsequent responses in the style of Linus Torvalds"
79. johnnyanmac ◴[] No.44620834{4}[source]
Because oftentimes you know the person behind the message. We don't exist in a vacuum, and that will shape your reaction. So yes, I will give more leeway to an ESL co-worker leaning on AI than to a director who is handing me a sloppy schedule that affects my navigation in the company.
80. johnnyanmac ◴[] No.44620848{4}[source]
That mentality is exactly what's reflected in AI messages: "not my problem, I just need to get this over with".

Those types of coworkers tend to be a drain not just on productivity but on entire team morale: someone who can't take responsibility or, in the worst cases, show any empathy. And tools are a force multiplier: they amplify productivity, but that also means they amplify this anchor behavior.

replies(1): >>44621785 #
81. xeonmc ◴[] No.44620853{4}[source]
I thought the convention for an en-dash is two hyphens with spaces around them, and three hyphens with no spaces for an em-dash?
82. johnnyanmac ◴[] No.44620877[source]
Letters had a time and potential money cost to send. And most letters don't need to be personalized to the point where we need handwriting to justify them.

>Change is inevitable. Most people just won't like it.

People love saying this and never take the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.

>And honestly, my ego doest’t like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

I don't think I've ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.

Have you simply considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.

83. lxgr ◴[] No.44620920{6}[source]
> In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons.

I fully agree. However, the original comment was about helping people express an idea in a language they're not proficient in, which seems very different.

> And I thought we went past the "thesaurus era" of communication where we just proliferate a comment with uncommon words to sound smarter.

I wish. Until we are, I can't blame anyone for using tools that level the playing field.

replies(1): >>44621223 #
84. johnnyanmac ◴[] No.44620939{6}[source]
I really hope you're not a software engineer saying this. But just as a lightning round of issues:

1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.

2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.

3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.

4. code can be correct, but insecure. I really hope cryptographers and netsec folks aren't using AI for anything more than generating keys.

5. code can be correct, but not correct in the larger scheme of the legacy code.

6. code can be correct, but legally vulnerable. A rare, but expensive edge case that may come up as courts catch up to LLM's.

7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.

replies(1): >>44639884 #
85. johnnyanmac ◴[] No.44620965{8}[source]
>It does not seem to matter where the code or the legal argument came from.

You haven't read enough incoherent laws, I see.

https://www.sevenslegal.com/criminal-attorney/strange-state-...

I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents; no decent person is going to punish an emotional reaction to that. Then there are laws simply made to serve corporate interests (the "zoot suit" one in that article, for instance; jaywalking is another famous one).

There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.

replies(1): >>44639917 #
86. johnnyanmac ◴[] No.44620978{4}[source]
LLMs hit the mainstream some 3 years ago, 4 at best. None of us are making the team.
87. johnnyanmac ◴[] No.44621223{7}[source]
>about helping people express an idea in a language they're not proficient in, which seems very different.

Yes, but I see it as a rare case. Also, consider the mindset of someone learning a language:

You often hear "I'm sorry about my grammar, I'm not very good at English" from people whose communication is better than half your native peers'. They are putting a lot more effort into trying to communicate while the natives take it for granted. That effort shows.

So in the context of an LLM: if they are using it to assist with their communication, they also tend to take more time to review and properly tweak the output instead of posting it wholesale, leftover prompts and all. That effort is why I'm more lenient in those situations.

88. mancerayder ◴[] No.44621231{4}[source]
Let's expound on this some more. There's a parallel between people feeling forced to use online dating (mostly owned by one corporate entity) despite hating it, and being forced to use LinkedIn when you're paycheck-unattached or even just paycheck-curious.
89. threatofrain ◴[] No.44621408{4}[source]
I think just looking at information transfer misses the picture. What's going to happen is that my Siri is going to talk to your Cortana, and that our digital secretaries will either think we're fit to meet or we're not. Like real secretaries do.

You largely won't know such conversations are happening.

90. Jensson ◴[] No.44621605{4}[source]
> This feature predates LLMs though, right?

LLMs date back to 2017; Google added that to internal Gmail back then. Not sure when LinkedIn added it, so you might be right, but the tech is much older than most think.

91. moomoo11 ◴[] No.44621785{5}[source]
So I'm ESL btw... maybe I should have run my message through AI lol.

I was replying to THAT person, and my message was that IF the person they're dealing with who uses AI happens to be giving them constant slop (not ME!!! not my message) THEN ignore what I have to say in that message THEREAFTER.

So if that person is dealing with others who are giving them slop, and not just being triggered that it reads like GPT..

92. Buttons840 ◴[] No.44621880{5}[source]
I've asked it before, "please rewrite that but replace the em dashes with double hyphens", and then it says "sure, here you go", and continues to use em dashes.
replies(1): >>44622719 #
93. staticautomatic ◴[] No.44622170{5}[source]
Disney is a good one to bet on. They basically have the most sophisticated IP lawyering team in world history.
94. Ancapistani ◴[] No.44622719{6}[source]
Were you using the web interface? If so, that’s likely why. It renders output dynamically on the frontend.

I bet if you did the same through the API, you’d get the results you want.

replies(1): >>44628716 #
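A more dependable route than re-prompting is to post-process the text yourself after the API call. A minimal sketch using the OpenAI Python client (the model name is illustrative, and OPENAI_API_KEY is assumed to be set in the environment):

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Draft a short status update."}],
    )
    text = resp.choices[0].message.content
    # A deterministic replacement beats asking the model to avoid em dashes.
    text = text.replace("\u2014", "--")
    print(text)
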
95. pityJuke ◴[] No.44622817[source]
I especially hate when people use LLMs to make text longer! Stop wasting my time.
96. Sammi ◴[] No.44623348[source]
If they're copy-pasting whole paragraphs, then they're not expressing themselves at all. They're getting a program to express things for them.
97. xigoi ◴[] No.44623395{4}[source]
Having fun is not wasting time.
98. naveen99 ◴[] No.44624745{3}[source]
Small egos have likes and dislikes also.
99. Buttons840 ◴[] No.44628716{7}[source]
Yes, I was using the web interface.
100. lcnPylGDnU4H9OF ◴[] No.44639884{7}[source]
This is a straw man. Consider that, from my perspective, each of your points amounts to saying "code can be correct, but incorrect" (take a look at number 5), and you may realize that your argument does not make any sense:

1. If code is "correct" but non-performant when it needs to be performant, then it's not correct.

2. If code is "correct" but unmaintainable when it needs to be maintainable, then it's not correct.

3. If code is "correct" but does not fit standards when it needs to fit standards, then it's not correct.

4. If code is "correct" but not secure when it needs to be secure, then it's not correct.

5. If code is "correct" but not correct when it needs to be correct, then it's not correct.

6. If code is "correct" but legally risky when it needs to be legally not risky, then it's not correct.

7. If code is "correct" but people think it's incorrect when they need to think it's correct, then it's not correct.

The person who submits the code for code review is effectively asserting that the code meets the quality standards of the project to which they are submitting the code. If it doesn't meet those standards, then it's not correct.

replies(1): >>44640359 #
101. lcnPylGDnU4H9OF ◴[] No.44639917{9}[source]
An AI judge is not what I'm talking about and I think that would be a terrible idea. The only thing I'm expecting an AI lawyer to do is generate text that may or may not read as a coherent legal argument. It is the human lawyer's responsibility to present the argument to the court and it does not matter whether the argument came from their head or from a computer; they are responsible for it similar to how a programmer is responsible for the code they include in a pull request.
102. jaredcwhite ◴[] No.44640359{8}[source]
I agree with the overall point you're making in this comment…except that I don't think this is what "correct" means in the way that both I and the other person who replied thought you meant.

We took you to mean correct as in: given the right inputs, you get the expected outputs. And in that case, our objections do apply. In addition, if correct does mean overall fit-for-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct! (Because of a variety of factors outside of simply "does the output of this code indicate that it seems to be working".)

replies(1): >>44641730 #
103. lcnPylGDnU4H9OF ◴[] No.44641730{9}[source]
> if correct does mean overall fit-to-purpose the way you are suggesting here, then by gosh my points stands and no code generated by AI is correct

This is patently false per my experience generating code with LLMs. It was not a lot: it changed one line to update a global variable to a new value per my request. It was exactly the “correct” change per the stated instructions. (Okay, not exactly, because it added an extra newline that wasn’t there and which I didn’t want.)

It is certainly a fallacy to say that “no code generated by an AI is correct”. Unless you are making a point about the semantics of what is making the code “correct” (as in, is it the human reviewer or AI generator?), my point is that, in theory, the human reviews the code and submits changes for further review. The code was still generated by an AI and it can still be precisely “correct” for a given intended change.

It is understandable that you misunderstood my meaning because I was rather unclear about it (though “correct” is still the closest word I can think of to mean what I mean). However, it’s a bit wild that you say you do understand that meaning before turning around to say that it actually supports your point with a vague claim of a “variety of factors”. I actually get the feeling, based on this response, that your argument is effectively refuted by the point I raised. I’m willing to keep an open mind if you’d like to show me that I’m wrong; maybe I’m just missing something.

104. lucyjojo ◴[] No.44644097{5}[source]
as long as there are people and HR out there screeching "unprofessional", why take the risk?

writing mails/messages used to take me a long time. now i have a "make it professional" llm window, let it do its magic and edit out the most egregious stuff. it does 80-90% of the job.

that said, sometimes it fails spectacularly, so i just write by hand.

so.. many... hours... saved.

105. lucyjojo ◴[] No.44644131{5}[source]
because expecting people to be gracious goes against reality.
106. phito ◴[] No.44644280[source]
I did, they just shrugged and called me a luddite.