321 points distantprovince | 16 comments
1. lvl155 No.44617588
    While I understand this sentiment, some people simply suck at writing nice emails or have a major communication issue. It’s also not bad to run your important emails through multiple edits via AI.
2. z3c0 No.44617614
Is it too much to ask them to learn? People can have poor communication habits and still write a thoughtful email.
3. adamtaylor_13 No.44617714
    The article clearly supports this type of usage.
4. deadbabe No.44617746
    Then they shouldn’t be in jobs or positions where good communication skills and writing nice emails are important.
5. GPerson No.44617774
    Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.
6. scarface_74 No.44617827
I work with a lot of people in Spanish-speaking countries who have English as a second language. I would much rather read their own words, grammatical errors and all, than perfect AI slop.

Hell, I would rather they just write their reply in Spanish, so they can get it out quickly without struggling to translate, and lean on my own B1-level Spanish comprehension, than read AI-generated slop.

7. yoyohello13 No.44617839
    Seriously. If you can’t spend effort to communicate properly, why should I expend effort listening?
8. lxgr No.44617876
    Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.
9. stefan_ No.44617968
Ha, did you see the outrage when people realized that sharing their deepest secrets and company information with ChatGPT was just another business record to OpenAI, totally fair game in any civil suit's discovery? You would think some evil force had just smothered every little child's pet bunny.

Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.

10. lvl155 No.44618058
    If you work in a big enough organization, they have AI sandboxes for things like this.
11. Al-Khwarizmi No.44618111
    Or are non-native speakers. LLMs can be a godsend in that case.
12. Al-Khwarizmi No.44618199
    Maybe yes, it's too much?

I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write emails in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or being too pushy, whether I should add some formality or whether it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?

And the difficulties of being non-native with a good English level are nothing compared to those of people who might have autism, etc.

13. z3c0 No.44618483
I'm a native English speaker who asks himself the same questions on most emails. You can use LLM output all you want, but if you're worried about tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some will even begin to favor pushy emails, because at least pushiness feels human.
14. sfink No.44620615
    If you're checking the outputs, and I mean really checking (and adjusting) them, then I'd say this use is fine.
15. johnnyanmac No.44621147
    >It’s also not bad to run your important emails through multiple edits via AI.

The issue is that we both know 99% of AI output is not the result of this process. AI is used to cut corners, not to cross your t's and dot your i's. It's similar to how an answer bank for a textbook is a great tool for self-correction and reinforcing what you've learned; in reality, these banks aren't sold publicly because most students would use them to cheat.

And I'm not even saying this to shame anyone, per se; high schoolers are under so much pressure, given hours of homework on top of 7+ hours of instruction, and in some regards the content is barely applicable to their long-term goals beyond keeping their GPA up. The temptation to cheat is enormous at that stage.

    ----

Not so much for 30-year-old me, wanting a refresher on calculus concepts for an interview. There also really shouldn't be any huge pressure to "cheat" your co-workers, either (though there sometimes is).

16. johnnyanmac No.44621160
I'm not surprised the layman doesn't understand how and where their data goes. It's a bit of a letdown that members of HN seemed surprised by this practice after some 20 years of tech awareness, though. Many in this community probably worked on the very databases storing such data.