361 points mseri | 2 comments
mentalgear No.46003316
This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.

Without this, I don't see how we (the general population) can maintain any control over — or even understanding of — these increasingly large and opaque LLM-based long-inference "AI" systems.

Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.

replies(5): >>46004003 #>>46005442 #>>46005823 #>>46007572 #>>46007669 #
turnsout No.46005442
I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people taking a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.
replies(1): >>46005718 #
1. lionkor No.46005718
If I give you tens of billions of dollars, like, wired to your personal bank account, do you think you could figure it out given a decade or two?
replies(1): >>46007200 #
2. turnsout No.46007200
Yes! I think that would do it. But is anyone out there committing tens of billions of dollars to traceable AI?