> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
I wonder what others there are.
I occasionally use bullet points, em dashes (unicode, single, and double hyphens), and words like "delve". I hate to think these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
[1] most recently https://news.ycombinator.com/item?id=44482876
Also, that "cow-orkers" doesn't look like AI-generated slop at all..? Just scrolling down a bit shows that most of them are three years old or older.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
one of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback. It felt to me like he wasn't even listening, when he had clearly just copy-pasted AI responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
This goes back a loooooong while.
This feature absolutely defies belief. If I ran a social network (thank god I don't) one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.
Here's my take: these forums will drive good writers away, or at least discourage them, leaving the discourse worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years.
"Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…
I really hope people like this with holier than thou attitude get filtered out. Fast.
People who don’t adapt to use new tools are some of the worst people to work around.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world, or just old people not understanding the future?
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
Folks who are new to AI are just posting away with their December 2022 enthusiasm, because it's new to them.
It is best to personally understand your own style(s) of communication.
"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.
But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.
Why does it matter where the legal claims came from if a judge accepts them?
Why does it matter where the sound waves came from if it sounds catchy?
Why does it matter?
Why does anything matter?
Sorry, I normally love debating epistemology but not here on Hacker News. :)
It does not seem to matter where the code nor the legal argument came from. What matters is that they are coherent.
The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.
The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?
When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.
I do agree about this push for inevitability. In small ways this is true. But it doesn't need to take over every aspect of humanity. We have calculators, but we still at the very least do basic mental math and don't resort to calculators for 5 + 5. It's been long established as rude to do more than quick glances at your phone when physically meeting people. We lean against posting google search/wiki links as a response in forums.
Culture still shapes a lot of how we use the modern tools we have.
In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons. And I thought we went past the "thesaurus era" of communication, where we just pepper a comment with uncommon words to sound smarter.
consider 3 scenarios:
1. Misinformation. This is the one you mention, so I don't need to elaborate.
2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a 2-way street. This is why AI-generated code in reviews is so infuriating.
3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading."
For your tool metaphor, it's like discovering superglue, then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can, have, and will be misused. I think it's best to try and correct that early on, before we have a lot of sticky nails.
Those types of coworkers tend to be a drain not just on productivity, but on entire team morale: someone who can't take responsibility or, in the worst cases, show any sort of empathy. And tools are a force multiplier. They amplify productivity, but that also means they amplify this anchor behavior as well.
>Change is inevitable. Most people just won't like it.
people love saying this and never taking the time to consider if the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.
>And honestly, my ego doesn't like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
I don't think I've ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.
Have you simply considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.
I fully agree. However, the original comment was about helping people express an idea in a language they're not proficient in, which seems very different.
> And I thought we went past the "thesaurus era" of communication where we just proliferate a comment with uncommon words to sound smarter.
I wish. Until we are, I can't blame anyone for using tools that level the playing field.
1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.
2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.
3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.
4. code can be correct, but insecure. I really hope cryptographers and netsec folks aren't using AI for anything more than generating keys.
5. code can be correct, but not correct in the larger scheme of the legacy code.
6. code can be correct, but legally vulnerable. A rare but expensive edge case that may come up as courts catch up to LLMs.
7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
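Point 1 above is easy to demonstrate with a toy sketch (a hypothetical example, not from the thread): two functions that produce identical results for the same inputs, where only one has acceptable performance characteristics.

```python
from functools import lru_cache

# Both functions are "correct" in the narrow sense: same inputs, same outputs.

def fib_naive(n: int) -> int:
    # Exponential time: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    # Linear time: each subproblem is computed at most once.
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

# Identical results, wildly different cost profiles.
assert fib_naive(25) == fib_cached(25) == 75025
```

A reviewer who only checks that the outputs match would wave both through; the difference only shows up once `n` grows, which is exactly the kind of "correct but worthless" code the comment is describing.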
You haven't read enough incoherent laws, I see.
https://www.sevenslegal.com/criminal-attorney/strange-state-...
I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction under those circumstances. Then there are laws simply made to serve corporate interests (the "zoot suit" law in that article, for instance; jaywalking is another famous one).
There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.
Yes, but I see it as a rare case. Also, consider the mindset of someone learning a language:
You probably often hear "I'm sorry about my grammar, I'm not very good at English," and yet their communication is better than that of half your native peers. They are putting a lot more effort into trying to communicate, while the natives take it for granted. That effort shows.
So in the context of an LLM: if they are using it to assist with their communication, they also tend to take more time to review and properly tweak the output instead of posting it wholesale, at least stripping out the sloppy prompt text that was never meant to be part of the actual output. That effort is why I'm more lenient toward those situations.
You largely won't know such conversations are happening.
I was replying to THAT person, and my message was that IF the person they're dealing with who uses AI happens to be giving them constant slop (not ME!!! not my message) THEN ignore what I have to say in that message THEREAFTER.
So if that person is dealing with others who are giving them slop, and not just being triggered that it reads like GPT..
I bet if you did the same through the API, you’d get the results you want.
1. If code is "correct" but non-performant when it needs to be performant, then it's not correct.
2. If code is "correct" but unmaintainable when it needs to be maintainable, then it's not correct.
3. If code is "correct" but does not fit standards when it needs to fit standards, then it's not correct.
4. If code is "correct" but not secure when it needs to be secure, then it's not correct.
5. If code is "correct" but not correct when it needs to be correct, then it's not correct.
6. If code is "correct" but legally risky when it needs to be legally not risky, then it's not correct.
7. If code is "correct" but people think it's incorrect when they need to think it's correct, then it's not correct.
The person who submits the code for code review is effectively asserting that the code meets the quality standards of the project to which they are submitting the code. If it doesn't meet those standards, then it's not correct.
We took you to mean correct as in: given the right inputs, you get the expected outputs. And in that case, our objections do apply. In addition, if correct does mean overall fit-for-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct! (Because of a variety of factors outside of simply "does the output of this code indicate that it seems to be working".)
This is patently false per my experience generating code with LLMs. It was not a lot; it changed one line to update a global variable to a new value per my request. It was exactly the “correct” change per the stated instructions. (Okay, not exactly because it added an extra new line that wasn’t there and which I didn’t want.)
It is certainly a fallacy to say that “no code generated by an AI is correct”. Unless you are making a point about the semantics of what is making the code “correct” (as in, is it the human reviewer or AI generator?), my point is that, in theory, the human reviews the code and submits changes for further review. The code was still generated by an AI and it can still be precisely “correct” for a given intended change.
It is understandable that you misunderstood my meaning because I was rather unclear about it (though “correct” is still the closest word I can think of to mean what I mean). However, it’s a bit wild that you say you do understand that meaning before turning around to say that it actually supports your point with a vague claim of a “variety of factors”. I actually get the feeling, based on this response, that your argument is effectively refuted by the point I raised. I’m willing to keep an open mind if you’d like to show me that I’m wrong; maybe I’m just missing something.
writing mails/messages used to take me a long time. now i have a "make it professional" llm window, let it do its magic and edit out the most egregious stuff. it does 80-90% of the job.
that said, sometimes it fails spectacularly, so i just write by hand.
so.. many... hours... saved.