361 points mseri | 7 comments
1. mentalgear No.46003316
This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.
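
(A minimal sketch of what that could mean in practice - the trace format and names here are hypothetical, not from any real system:)

    from dataclasses import dataclass, field

    @dataclass
    class InferenceStep:
        claim: str                 # what the model asserted at this step
        evidence: list[str] = field(default_factory=list)  # sources it cited
        accepted: bool = True      # a human reviewer can flip this to reject the step

    @dataclass
    class InferenceTrace:
        steps: list[InferenceStep] = field(default_factory=list)

        def rejected(self) -> list[InferenceStep]:
            # Inspection: list every step a reviewer has struck out,
            # so the final answer can be recomputed without them.
            return [s for s in self.steps if not s.accepted]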

Without this, I don't see how we (the general population) can maintain any control over - or even understanding of - these ever larger and more opaque LLM-based long-inference "AI" systems.

Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.

replies(5): >>46004003 >>46005442 >>46005823 >>46007572 >>46007669
2. turnsout No.46005442
I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.
replies(1): >>46005718
3. lionkor No.46005718
If I give you tens of billions of dollars, like, wired to your personal bank account, do you think you could figure it out given a decade or two?
replies(1): >>46007200
4. moffkalast No.46005823
You've answered your own question as to why many people will want this approach gone entirely.
replies(1): >>46010877
5. turnsout No.46007200
Yes! I think that would do it. But is anyone out there committing tens of billions of dollars to traceable AI?
6. SilverElfin No.46007669
At the very least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that does audits and provides transparency reports, so that even with proprietary models there are some checks and balances.
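
As a rough sketch (field names invented for illustration, not any real standard), such a transparency report could start as little more than a content-hashed manifest of training sources:

    import hashlib, json

    # Hypothetical per-model audit manifest: one entry per training corpus,
    # hashed so a third-party auditor can verify nothing was swapped out later.
    manifest = {
        "model": "example-model-v1",
        "training_sources": [
            {
                "name": "public-web-crawl-2024",   # invented example source
                "license": "mixed",
                "sha256": hashlib.sha256(b"corpus bytes go here").hexdigest(),
            },
        ],
    }
    print(json.dumps(manifest, indent=2))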
7. Imustaskforhelp No.46010877
I always really like answers like yours, as they are clever and, in my opinion, maybe a bit true as well.

I think, though, that there are a lot of things the public can do, and raising awareness about these issues could be great as well.