
226 points | azhenley | 1 comment
ayende:
That is the wrong abstraction to think in. The problem is not _which_ tools you give the LLM; the problem is what actions it can take.

For example, in the book-a-ticket scenario - I want it to be able to check a few websites to compare prices, and I want it to be able to pay for me.

I don't want it to decide to send me on a 37-hour trip with three stops because it is $3 cheaper.

Alternatively, I want to be able to look up my benefits status, but the LLM should physically not be able to provide me with any details about the benefits status of my coworkers.

That is the _same_ tool call, but in a different scope.

For that matter, if I'm in HR - I _should_ be able to look at the benefits status of the employees I am responsible for, of course, but that creates an audit log entry, etc.

In other words, it isn't the action that matters, but the intent.

The LLM should be placed in the same box as the user it is acting on behalf of.
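
Concretely, something like this (a rough Python sketch, all names invented): the tool itself is the same, but every call runs under the caller's identity and goes through the same permission check and audit log a human request would.

    from dataclasses import dataclass, field

    @dataclass
    class User:
        id: str
        managed_employee_ids: set = field(default_factory=set)  # empty unless the user is in HR

    def get_benefits_status(requesting_user: User, employee_id: str) -> dict:
        # Same rule whether a human or an LLM is calling: you can read your
        # own record, or a record you are responsible for (which is audited).
        if employee_id == requesting_user.id:
            return _load_benefits(employee_id)
        if employee_id in requesting_user.managed_employee_ids:
            _audit_log(requesting_user.id, "read_benefits", employee_id)
            return _load_benefits(employee_id)
        raise PermissionError("out of scope for this user")

    def llm_tool_get_benefits_status(session_user: User, employee_id: str) -> dict:
        # The model only ever chooses employee_id; the identity is bound to
        # the session, so it cannot escalate beyond the user it acts for.
        return get_benefits_status(session_user, employee_id)

    def _load_benefits(employee_id: str) -> dict:
        return {"employee_id": employee_id, "plan": "standard"}  # placeholder

    def _audit_log(actor_id: str, action: str, subject_id: str) -> None:
        print(f"AUDIT: {actor_id} {action} {subject_id}")  # placeholder sink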

tomjen3 (reply):
I don't think your benefits example is much of a problem in practice; we already have the access controls set up for that (i.e. the LLM gets the same access you do).

For the other example, I think a nice compromise is to have the AI act only with your express permission. In your example, it finds flights it thinks are appropriate, sends you a notification with the list, and you then press a simple yes/no/more-information button. It would still save you a ton of money, but it would be substantially less likely to do something dangerous or damaging.
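
Roughly what that could look like (a toy Python sketch, names invented): the agent searches freely, but the step that actually spends money sits behind an explicit yes from you.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FlightOption:
        id: str
        price_usd: float
        duration_hours: float
        stops: int

    def search_flights(origin: str, dest: str) -> list:
        # In reality this would query several sites; hard-coded for illustration.
        return [
            FlightOption("A1", 420.0, 9.5, 1),
            FlightOption("B2", 417.0, 37.0, 3),  # the "$3 cheaper, 37 hours" trap
        ]

    def ask_user(options: list) -> Optional[FlightOption]:
        # Stand-in for a notification with yes/no/more-info buttons.
        for o in options:
            print(f"[{o.id}] ${o.price_usd:.0f}, {o.duration_hours}h, {o.stops} stops")
        choice = input("Book which option? (id or 'no'): ").strip()
        return next((o for o in options if o.id == choice), None)

    def book(option: FlightOption) -> None:
        # Payment / the irreversible action lives here, behind the approval gate.
        print(f"Booking {option.id} for ${option.price_usd:.0f}")

    if __name__ == "__main__":
        approved = ask_user(search_flights("CPH", "SFO"))
        if approved is not None:
            book(approved)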