I do say the same thing about JKR, btw, and for the same reason: the content she writes. I think you focused on the fanfic part and not the part where I'm criticizing them for saying their work is the most important thing for keeping humanity alive while charging money for it. Meanwhile, you may notice that in academia we publish papers to make them freely available, like on arXiv. If it is so important that people need to know, you make it available.
The second person, Hinton, is not as good an authority as you'd expect, though I do understand why people take him seriously. Fwiw, his Nobel was wildly controversial. Be careful: prizes often have political components. I have degrees in both CS and physics (and am an ML researcher), and both communities thought it was really weird. I'll let you guess which community found it insulting.
I want to remind you that in 2016 Hinton famously said[0]:
| Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff, but hasn't yet looked down so doesn't realize there's no ground beneath him. People should stop training radiologists now. It's just completely obvious that within 5 years that deep learning is going to be better than radiologists because it's going to get a lot more experience. It might take 10 years, but we've got plenty of radiologists already.
We're 10 years in now. Hinton has shown he's about as good at making predictions as Musk. What Hinton thought was laughably obvious simply didn't happen. He's made a number of such predictions. I'll link another short explanation from him[1] because it is so similar to something Sutskever said[2]. I can tell you with high certainty that every physicist laughs at such a claim: we've long known from experience that being able to predict data does not equate to understanding that data[3].
I care very much about alignment myself[4,5,6,7]. The reason I push back on Yud and others making claims like they do is that they are actually helping create the future they say we should be afraid of. I'm not saying they're evil or directly building evil superintelligences. Rather, they're pulling attention and funds away from the problems that need to be solved. They are guessing about things we don't need to guess about. They are confidently asserting claims we know to be false (that making accurate predictions requires accurate understanding[8]). If we can't openly and honestly speak to the limitations of our machines (mostly because we're blinded by excitement), we create the exact dangers we worry about.

I'm not calling for a pause on research; I'm calling for more research and for more people to pay attention to the subtle nature of everything. In a way I am saying "slow down," but only in that I'm saying don't ignore the small stuff. We move so fast that we keep pushing off the small stuff, but AI risk comes through the accumulation of that debt. You need to be very careful not to let that debt get out of control. You don't create safe systems by following the boom and bust hype cycles that CS is so famous for. You don't just wildly race to build a nuclear reactor and try to sell it to people while it is still a prototype.
[0] https://fastdatascience.com/ai-in-healthcare/ai-replace-radi...
[1] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...
[2] https://youtu.be/Yf1o0TQzry8?t=449
[3] https://www.youtube.com/watch?v=hV41QEKiMlM
[4] https://news.ycombinator.com/item?id=44068943
[5] https://news.ycombinator.com/item?id=44070101
[6] https://news.ycombinator.com/item?id=44017334
[7] https://news.ycombinator.com/item?id=43909711
[8] This is what ties [1,2] together. You can construct a data-generating process that is difficult or impossible to distinguish from the actual data-generating process, yet has a completely different causal structure, different confounding variables, and all that fun stuff. Any physicist will tell you that fitting the data is the easy part (it isn't easy, it's just the easier part). Interpreting and explaining the data is the hard part, and that hard part is the building of the causal relationships. It is the "understanding" Hinton and Sutskever are claiming.
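To make [8] concrete, here's a minimal sketch (my own toy numbers, nothing from Hinton or Sutskever): two opposite causal structures that generate statistically identical (x, y) data. Any predictor that fits samples from one fits samples from the other equally well, yet the two processes disagree completely about what happens when you intervene on x.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 1_000_000

  # Process A: x causes y.
  #   x ~ N(0, 1),  y = x + N(0, 1)
  xa = rng.normal(0, 1, n)
  ya = xa + rng.normal(0, 1, n)

  # Process B: y causes x.
  #   y ~ N(0, sqrt(2)),  x = y/2 + N(0, sqrt(1/2))
  yb = rng.normal(0, np.sqrt(2), n)
  xb = yb / 2 + rng.normal(0, np.sqrt(0.5), n)

  # Observationally identical: same joint Gaussian,
  # covariance matrix ~ [[1, 1], [1, 2]] in both cases.
  print(np.cov(xa, ya))
  print(np.cov(xb, yb))

  # Interventionally different: do(x = 10).
  # A: y = 10 + noise, so mean(y) jumps to ~10.
  # B: y is untouched by setting x, so mean(y) stays ~0.
  ya_do = 10 + rng.normal(0, 1, n)
  yb_do = rng.normal(0, np.sqrt(2), n)
  print(ya_do.mean(), yb_do.mean())

Both processes give the same joint distribution, so no amount of predictive accuracy on observational samples can tell you which causal arrow is real. Prediction is fitting; understanding is knowing what an intervention does.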