My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply, "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a sense, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Neither of these things will matter in two or three years.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
But as the article explains about why it's rude: the less thought you put into a message, the less chance it's communicated well. The less thought you put into the code you ship, the less chance it solves the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
Why does it matter where the legal claims came from if a judge accepts them?
Why does it matter where the sound waves came from if it sounds catchy?
Why does it matter?
Why does anything matter?
Sorry, I normally love debating epistemology but not here on Hacker News. :)
It does not seem to matter where the code or the legal argument came from. What matters is that they are coherent.
1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value. (A sketch follows this list.)
2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.
3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.
4. code can be correct, but insecure. I really hope cryptographers and netsec folks aren't using AI for anything more than generating keys. (A second sketch follows this list.)
5. code can be correct in isolation, but not correct in the larger scheme of the legacy codebase.
6. code can be correct, but legally vulnerable. A rare, but expensive, edge case that may come up as courts catch up to LLMs.
7. and lastly (though this list is certainly not exhaustive), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate while building the product. This leads back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code"; it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
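To make item 1 concrete, here's a minimal Python sketch (the function names are mine, purely for illustration): both versions produce the same correct output, but one degrades quadratically as the input grows.

```python
def dedupe_slow(items):
    """Correct: removes duplicates, preserving first-seen order.
    Non-performant: the `in result` membership test scans a list,
    so the loop is O(n^2) overall."""
    result = []
    for item in items:
        if item not in result:  # linear scan on every iteration
            result.append(item)
    return result


def dedupe_fast(items):
    """Same correct output, but O(n): membership checks against a set
    are constant time."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


# Identical results; only the scaling behaviour differs.
assert dedupe_slow([3, 1, 3, 2, 1]) == dedupe_fast([3, 1, 3, 2, 1]) == [3, 1, 2]
```

A reviewer who only checks "does it return the right answer" passes both; only one survives a ten-million-element input.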
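And for item 4, a sketch of "correct but insecure" using Python's built-in sqlite3 (the schema and function names are invented for the example): both functions return the right rows for benign input, but only one survives hostile input.

```python
import sqlite3

def find_user_unsafe(conn, username):
    """'Correct' for benign input: returns the matching rows.
    Insecure: string interpolation lets crafted input rewrite the query."""
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    """Same result for benign input, but the driver escapes the parameter."""
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Both look correct under a happy-path test...
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")
# ...but injection dumps the whole table from the unsafe version.
assert len(find_user_unsafe(conn, "x' OR '1'='1")) == 2
assert find_user_safe(conn, "x' OR '1'='1") == []
```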
You haven't read enough incoherent laws, I see.
https://www.sevenslegal.com/criminal-attorney/strange-state-...
I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction to that. Then there are laws simply made to serve corporate interests (the "zoot suit" law mentioned in that article, for instance; jaywalking is another famous one).
There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.
You largely won't know such conversations are happening.
1. If code is "correct" but non-performant when it needs to be performant, then it's not correct.
2. If code is "correct" but unmaintainable when it needs to be maintainable, then it's not correct.
3. If code is "correct" but does not fit standards when it needs to fit standards, then it's not correct.
4. If code is "correct" but not secure when it needs to be secure, then it's not correct.
5. If code is "correct" but not correct when it needs to be correct, then it's not correct.
6. If code is "correct" but legally risky when it needs to be legally not risky, then it's not correct.
7. If code is "correct" but people think it's incorrect when they need to think it's correct, then it's not correct.
The person who submits the code for code review is effectively asserting that the code meets the quality standards of the project to which they are submitting the code. If it doesn't meet those standards, then it's not correct.
We took you to mean correct as in: given the right inputs, you get the expected outputs. And in that case, our objections do apply. In addition, if correct does mean overall fit-for-purpose the way you are suggesting here, then by gosh my point stands and no code generated by AI is correct! (Because of a variety of factors outside of simply "does the output of this code indicate that it seems to be working".)
This is patently false, per my experience generating code with LLMs. It wasn't a lot of code; it changed one line to update a global variable to a new value, per my request. It was exactly the "correct" change per the stated instructions. (Okay, not exactly, because it added an extra newline that wasn't there before and which I didn't want.)
It is certainly a fallacy to say that “no code generated by an AI is correct”. Unless you are making a point about the semantics of what is making the code “correct” (as in, is it the human reviewer or AI generator?), my point is that, in theory, the human reviews the code and submits changes for further review. The code was still generated by an AI and it can still be precisely “correct” for a given intended change.
It is understandable that you misunderstood my meaning because I was rather unclear about it (though “correct” is still the closest word I can think of to mean what I mean). However, it’s a bit wild that you say you do understand that meaning before turning around to say that it actually supports your point with a vague claim of a “variety of factors”. I actually get the feeling, based on this response, that your argument is effectively refuted by the point I raised. I’m willing to keep an open mind if you’d like to show me that I’m wrong; maybe I’m just missing something.
writing emails/messages used to take me a long time. now i have a "make it professional" llm window, let it do its magic, and edit out the most egregious stuff. it does 80-90% of the job.
that said, sometimes it fails spectacularly, so i just write by hand.
so.. many... hours... saved.