645 points bradgessler | 1 comment
WillAdams ◴[] No.44009072[source]
This is only a problem if one is writing or thinking about things which have already been written about, without bringing a new or novel approach to one's writing.

An AI is _not_ going to be awarded a PhD, since by definition a PhD is earned by extending the boundaries of human knowledge:

https://matt.might.net/articles/phd-school-in-pictures/

So rather than accept that an LLM has already been trained on whatever it is you wish to write about, write something it will need to be trained on.

replies(1): >>44009169 #
1. noiv ◴[] No.44009169[source]
I agree, but frustration sets in earlier than before, because AIs very eagerly tell you that your unique thought is already part of their training data. Sometimes I wish one could put an AI on drugs and filter out the hallucinations that'll become mainstream next week.