OpenAI is not arguing that AI is harmless; they are agreeing it's dangerous. They are using that to promote their product and hype it up as world-changing. More worrying, they're advocating for regulations, presumably the sort that would make it more difficult for competitors to enter.
I think we can talk about the potential dangers of AI. But that should include a discussion of how best to deal with them, and an awareness of how fear of AI might be manipulated by Silicon Valley.
Especially when that fear involves misrepresentation, e.g. AI being presented to the public as self-directed artificial consciousness rather than as algorithms that mimic certain reasoning capabilities.
AI right now is not powerful and not scary.
But follow the trendlines. AI is improving. Since 2010 the pace has been relentless, with milestone after milestone passed. AI right now is a precursor to something not so dumb and not so "not scary" in the future.
But then there's whatever danger actually exists regardless of the business maneuvering.
I'm not saying this is what you're doing, but I've been in numerous discussions where someone will point to this maneuvering and then conclude that virtually all danger is manufactured/nonexistent and only exists for marketing purposes.
> eg. AI being presented to the public as self directed artificial consciousness rather than algorithms that mimic certain reasoning capabilities
I think the fact that these tools can be presented in that way and some people will believe it points to some of the real dangers.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't actually take action unless authorized by a person. We're entering a time where people may soon be willing to grant that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.