

321 points distantprovince | 23 comments
phito ◴[] No.44617442[source]
I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
replies(10): >>44617497 #>>44617500 #>>44617658 #>>44617721 #>>44617880 #>>44617940 #>>44618006 #>>44618504 #>>44619441 #>>44622817 #
pyman ◴[] No.44617880[source]
Didn't our parents go through the same thing when email came out?

My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply: "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."

Change is inevitable. Most people just won't like it.

A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Neither of these things will matter in the next two or three years.

replies(7): >>44617893 #>>44618028 #>>44618170 #>>44618248 #>>44618284 #>>44618500 #>>44620877 #
1. aidos ◴[] No.44617893{3}[source]
I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.
replies(5): >>44617962 #>>44617983 #>>44618011 #>>44618251 #>>44618779 #
2. unyttigfjelltol ◴[] No.44617962[source]
Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.
replies(1): >>44620978 #
3. pyman ◴[] No.44617983[source]
Initially, it had the same effect on people until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
replies(4): >>44618685 #>>44618805 #>>44619444 #>>44621408 #
4. moomoo11 ◴[] No.44618011[source]
The prompt is theirs.
replies(1): >>44618143 #
5. Tadpole9181 ◴[] No.44618143[source]
Then just send me the prompt.
replies(1): >>44644097 #
6. j45 ◴[] No.44618251[source]
Some do put their words into the LLM and clean it up.

And it stays much closer to how they actually write.

replies(1): >>44618908 #
7. aspenmayer ◴[] No.44618685[source]
Code is either fit for a given purpose or not. Communicating through an LLM instead of directly with the desired recipient may be considered fit for purpose by the receiving party, but it's not for the LLM user to say what the writer's goals are, nor what those goals ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no answers and basic autocomplete. Otherwise I'm not even interacting with a human in the loop except before they hit send, which doesn't inspire confidence.
8. drweevil ◴[] No.44618779[source]
That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.
9. majormajor ◴[] No.44618805[source]
Similar-looking effects are not the "same" effect.

"Change always triggers backlash" does not imply "all backlash is unwarranted."

> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.

But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.

You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."

10. ◴[] No.44618908[source]
11. jaredcwhite ◴[] No.44619444[source]
Doesn't matter today? What are you even talking about? It completely matters if the code you write is yours. The only people saying otherwise have fallen prey to the cult of slop.
replies(1): >>44619570 #
12. lcnPylGDnU4H9OF ◴[] No.44619570{3}[source]
Why does it matter where the code came from if it is correct?
replies(2): >>44619619 #>>44620939 #
13. jaredcwhite ◴[] No.44619619{4}[source]
Why does it matter where the paint came from if it looks pretty?

Why does it matter where the legal claims came from if a judge accepts them?

Why does it matter where the sound waves came from if it sounds catchy?

Why does it matter?

Why does anything matter?

Sorry, I normally love debating epistemology but not here on Hacker News. :)

replies(1): >>44619752 #
14. lcnPylGDnU4H9OF ◴[] No.44619752{5}[source]
I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.

It does not seem to matter where the code nor the legal argument came from. What matters is that they are coherent.

replies(1): >>44620965 #
15. johnnyanmac ◴[] No.44620939{4}[source]
I really hope you're not a software engineer saying this. But just as a lightning round of issues:

1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.

2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.

3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.

4. code can be correct, but insecure. I really hope cryptographers and netsec aren't using AI for any more than generating keys.

5. code can be correct, but not correct in the larger scheme of the legacy code.

6. code can be correct, but legally vulnerable. A rare, but expensive edge case that may come up as courts catch up to LLMs.

7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
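To make point 1 concrete, here's a minimal sketch (my own illustration, not from the thread) of two functions that are both "correct" in the input/output sense, where only one is acceptable at scale:

```python
# Two functionally "correct" ways to count duplicate elements in a list.
# Both return the same answer; only one survives a large input.

def count_dupes_quadratic(items):
    # Correct, but O(n^2): rescans the prefix of the list for every element.
    return sum(1 for i, x in enumerate(items) if x in items[:i])

def count_dupes_linear(items):
    # Same result in O(n), tracking previously seen values in a set.
    seen, dupes = set(), 0
    for x in items:
        if x in seen:
            dupes += 1
        else:
            seen.add(x)
    return dupes

data = [1, 2, 2, 3, 3, 3]
assert count_dupes_quadratic(data) == count_dupes_linear(data) == 3
```

A reviewer judging only "does it return the right count?" passes both; the quadratic version is exactly the kind of "correct" code the comment describes having to fix later.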

replies(1): >>44639884 #
16. johnnyanmac ◴[] No.44620965{6}[source]
>It does not seem to matter where the code nor the legal argument came from.

You haven't read enough incoherent laws, I see.

https://www.sevenslegal.com/criminal-attorney/strange-state-...

I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction to such questions. Then there are laws simply made to serve corporate interests (the "zoot suit" ban, for instance, within that article; jaywalking is another famous one).

There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.

replies(1): >>44639917 #
17. johnnyanmac ◴[] No.44620978[source]
LLMs hit the mainstream some three years ago, four at most. None of us are making the team.
18. threatofrain ◴[] No.44621408[source]
I think just looking at information transfer misses the picture. What's going to happen is that my Siri is going to talk to your Cortana, and that our digital secretaries will either think we're fit to meet or we're not. Like real secretaries do.

You largely won't know such conversations are happening.

19. lcnPylGDnU4H9OF ◴[] No.44639884{5}[source]
This is a straw man. Consider that from my perspective, each of your points amounts to saying "code can be correct, but incorrect" (take a look at number 5) and you may realize that your argument does not make any sense:

1. If code is "correct" but non-performant when it needs to be performant, then it's not correct.

2. If code is "correct" but unmaintainable when it needs to be maintainable, then it's not correct.

3. If code is "correct" but does not fit standards when it needs to fit standards, then it's not correct.

4. If code is "correct" but not secure when it needs to be secure, then it's not correct.

5. If code is "correct" but not correct when it needs to be correct, then it's not correct.

6. If code is "correct" but legally risky when it needs to be legally not risky, then it's not correct.

7. If code is "correct" but people think it's incorrect when they need to think it's correct, then it's not correct.

The person who submits the code for code review is effectively asserting that the code meets the quality standards of the project to which they are submitting the code. If it doesn't meet those standards, then it's not correct.

replies(1): >>44640359 #
20. lcnPylGDnU4H9OF ◴[] No.44639917{7}[source]
An AI judge is not what I'm talking about and I think that would be a terrible idea. The only thing I'm expecting an AI lawyer to do is generate text that may or may not read as a coherent legal argument. It is the human lawyer's responsibility to present the argument to the court and it does not matter whether the argument came from their head or from a computer; they are responsible for it similar to how a programmer is responsible for the code they include in a pull request.
21. jaredcwhite ◴[] No.44640359{6}[source]
I agree with the overall point you're making in this comment…except that I don't think this is what "correct" means in the way that both I and the other person who replied thought you meant.

We took you to mean correct as in: given the right inputs, you get the expected outputs. And in that case, our objections do apply. In addition, if correct does mean overall fit-for-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct! (Because of a variety of factors outside of simply "does the output of this code indicate that it seems to be working")

replies(1): >>44641730 #
22. lcnPylGDnU4H9OF ◴[] No.44641730{7}[source]
> if correct does mean overall fit-to-purpose the way you are suggesting here, then by gosh my points stands and no code generated by AI is correct

This is patently false per my experience generating code with LLMs. It was not a lot; it changed one line to update a global variable to a new value per my request. It was exactly the "correct" change per the stated instructions. (Okay, not exactly, because it added an extra newline that wasn't there and which I didn't want.)

It is certainly a fallacy to say that “no code generated by an AI is correct”. Unless you are making a point about the semantics of what is making the code “correct” (as in, is it the human reviewer or AI generator?), my point is that, in theory, the human reviews the code and submits changes for further review. The code was still generated by an AI and it can still be precisely “correct” for a given intended change.

It is understandable that you misunderstood my meaning because I was rather unclear about it (though “correct” is still the closest word I can think of to mean what I mean). However, it’s a bit wild that you say you do understand that meaning before turning around to say that it actually supports your point with a vague claim of a “variety of factors”. I actually get the feeling, based on this response, that your argument is effectively refuted by the point I raised. I’m willing to keep an open mind if you’d like to show me that I’m wrong; maybe I’m just missing something.

23. lucyjojo ◴[] No.44644097{3}[source]
as long as there are people and HR out there screeching "unprofessional", why take the risk?

writing mails/messages used to take me a long time. now i have a "make it professional" llm window, let it do its magic and edit out the most egregious stuff. it does 80-90% of the job.

that said, sometimes it fails spectacularly, so i just write by hand.

so.. many... hours... saved.