
129 points ericciarla | 3 comments
madrox ◴[] No.40712650[source]
I have a saying: "any sufficiently advanced agent is indistinguishable from a DSL"

If I'm really leaning into multi-tool use for anything resembling a mutation, then I'd like to see an execution plan first. In my experience, asking an AI to code up a script that calls some functions with the same signature as tools and then executing that script actually ends up being more accurate than asking it to internalize its algorithm. Plus, I can audit it before I run it. This is effectively the same as asking it to "think step by step."
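
A minimal sketch of that pattern, with made-up tool stubs (search_flights, book_flight) and a placeholder llm_complete() standing in for whatever completion call you use: the model writes a plain script against the tool signatures, you read it, and only then does anything execute.

    # Hypothetical tool stubs; the LLM only ever sees their signatures.
    def search_flights(origin: str, dest: str, date: str) -> list:
        ...

    def book_flight(flight_id: str) -> str:
        ...

    TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

    def plan_then_execute(task: str, llm_complete) -> None:
        # Ask the model for a script that calls the tools, rather than
        # letting it drive the tools call-by-call on its own.
        prompt = (
            "Write a Python script that accomplishes the task below, calling only "
            "these functions: " + ", ".join(TOOLS) + ".\nTask: " + task
        )
        script = llm_complete(prompt)

        # The audit step: read the generated plan before anything runs.
        print(script)
        if input("Run this plan? [y/N] ").strip().lower() == "y":
            exec(script, dict(TOOLS))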

I like the idea of Command R+ but multitool feels like barking up the wrong tree. Maybe my use cases are too myopic.

replies(7): >>40713594 #>>40713743 #>>40713985 #>>40714302 #>>40717871 #>>40718481 #>>40721499 #
TZubiri ◴[] No.40713743[source]
I think you are imagining a scenario where you are using the LLM manually. Tools are designed to serve as a backend for other GPT-like products.

You don't have the capacity to "audit" stuff.

Furthermore, tool execution occurs not in the LLM but in the code that calls the LLM through the API. So whatever code executes the tool also determines the calling-sequence graph. You don't need to audit it; you are the one calling it.
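
A sketch of that point, with a made-up client.chat() response shape rather than any particular vendor's API: the model only returns proposed calls as data, and the loop below, which is the caller's own code, decides whether and in what order they run.

    import json

    def run_conversation(client, messages, tools):
        # `client.chat` is a stand-in for whatever LLM API you call;
        # `tools` maps tool names to plain Python functions.
        while True:
            response = client.chat(messages=messages, tools=list(tools))
            calls = response.get("tool_calls") or []
            if not calls:
                return response["content"]
            for call in calls:                  # the caller orders execution
                fn = tools[call["name"]]
                args = json.loads(call["arguments"])
                result = fn(**args)             # the tool runs here, not in the LLM
                messages.append({"role": "tool",
                                 "name": call["name"],
                                 "content": json.dumps(result)})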

replies(1): >>40713878 #
verdverm ◴[] No.40713878[source]
People want to audit the args, mainly because of the potential for destructive operations like DELETE FROM and rm -rf /

How do you know a malicious actor won't try to do these things? How do you protect against it?

replies(2): >>40713887 #>>40713896 #
TZubiri ◴[] No.40713887[source]
"the args"

You need to be more specific. In a system, everything but the output is an argument to something else. Even then, the system's output is an input to the user.

So yeah, depending on which argument you are talking about, you can audit it in a different way, and it has a different potential for abuse.

replies(1): >>40714060 #
verdverm ◴[] No.40714060[source]
The args to a function like SQL or TERMINAL
replies(1): >>40714218 #
TZubiri ◴[] No.40714218[source]
I personally don't connect LLMs to SQL, but to APIs.

But I'm pretty sure you would just give an SQL user to the LLM and enjoy the SQL server's built-in permissions and auditing features.
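
For example (role, table, and connection details invented for illustration, assuming Postgres with psycopg2): the agent's process connects as a dedicated read-only role, so the server's own permissions and logs apply to every query it generates.

    import psycopg2  # assuming Postgres; any server with roles works the same way

    # One-time setup a DBA runs, not the LLM:
    #   CREATE ROLE llm_agent LOGIN PASSWORD '...';
    #   GRANT SELECT ON reports, customers TO llm_agent;  -- no UPDATE/DELETE/DDL

    def run_generated_query(sql: str):
        # Connecting as the restricted role: a generated DELETE simply fails
        # with a permission error, and every statement is attributable to
        # llm_agent in the server's logs.
        conn = psycopg2.connect("dbname=app user=llm_agent password=...")
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
        finally:
            conn.close()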

replies(1): >>40714254 #
verdverm ◴[] No.40714254[source]
What if that user has write permissions and the LLM generates a bad UPDATE, e.g. forgets to put the WHERE clause in? Even for a SELECT, how do you know the right constraints were in place and you are getting the correct data?

Read-only use cases miss a whole category. All this is to get back to the point that people want to audit the LLM before running the function because of its unreliability; there is hesitance, with good reason.

replies(3): >>40714408 #>>40714689 #>>40723056 #
1. TZubiri ◴[] No.40714408{3}[source]
No, the human user doesn't have permissions; the LLM system has permissions. We create a user for the process, as we've been doing since Unix; take a look at what your HTTP server runs as. There's no deputization of permissions going on here, at least on my systems.

Even if there are user-level permissions, you then use a role-based approach (an SQL user per type of user, for example accountant, manager, etc.) and restrict its permissions accordingly. I don't think the idea of restricting permissions so that users can't fuck the database up is new.

Many organizations have DBAs whose role is to convert user queries into SQL queries; juniors usually have tighter permissions. Non-technical managers and analysts can also have access to the database.

As I said, not a new problem, SQL servers have mature permission systems.

If that is not enough, just write an API wrapper. It's what Amazon does anyway; Bezos' memo explicitly states that teams should not expose databases but rather expose APIs, under punishment of firing.
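
A sketch of that wrapper idea (endpoint and column names are invented, using Flask only as a familiar example): the LLM's tool layer can only hit narrow endpoints whose SQL is written and reviewed by humans, never the database itself.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.get("/customers/<int:customer_id>/invoices")
    def list_invoices(customer_id):
        # The only SQL in the system is this fixed, parameterized query;
        # the LLM gets to choose the customer_id and nothing else.
        rows = db_query(
            "SELECT id, total, issued_at FROM invoices WHERE customer_id = %s",
            (customer_id,),
        )
        return jsonify(rows)

    def db_query(sql, params):
        ...  # parameterized query against the database, e.g. via psycopg2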

replies(1): >>40714420 #
2. verdverm ◴[] No.40714420[source]
And even with that permission system, mistakes still happen; we haven't even been able to eliminate SQL injection in real systems, so these things can and will happen.

Adding LLMs in means we have an unaudited query producer. That is the point the OP is trying to make: they want to audit the function call before it happens, because LLMs are not even at our level yet, and we make mistakes too and use code review to reduce them.
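
One way to get that review step, sketched with placeholder tool names: every proposed call is logged, read-only tools on an allowlist go through, and anything else waits for a human to approve it.

    import json
    import logging

    log = logging.getLogger("llm_audit")
    READ_ONLY = {"get_invoice", "search_customers"}   # auto-approved tools

    def guarded_execute(call, tools):
        # `call` is the LLM's proposed invocation: {"name": ..., "arguments": "{...}"}
        args = json.loads(call["arguments"])
        log.info("LLM proposed %s(%s)", call["name"], args)

        if call["name"] not in READ_ONLY:
            print(f"LLM wants to run {call['name']} with {args}")
            if input("Approve? [y/N] ").strip().lower() != "y":
                return {"error": "rejected by reviewer"}

        return tools[call["name"]](**args)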

And again, even in a read-only system, we have removed the guardrails of a human-designed form with constraints and replaced it with an unaudited LLM that we can no longer be certain returns correct or consistent results. People are rightly cautious and hesitant, preferring a system they can use as a peer and can audit or review.

replies(1): >>40722064 #
3. TZubiri ◴[] No.40722064[source]
Again, SQL-query-generating agents are not the subject of the original article.