←back to thread

171 points abirag | 1 comments | | HN request time: 0s | source
Show context
tadfisher ◴[] No.45308140[source]
Is anyone working on the instruction/data-conflation problem? We're extremely premature in hooking up LLMs to real data sources and external functions if we can't keep them from following instructions in the data. Notion in particular shows absolutely zero warnings to end users, and encourages them to connect GitHub, GMail, Jira, etc. to the model. At this point it's basically criminal to treat this as a feature of a secure product.
replies(5): >>45308229 #>>45309698 #>>45310081 #>>45310871 #>>45315110 #
mcapodici ◴[] No.45309698[source]
The way you worded tbat is good and got me thinking.

What if instead of just lots of text fed to an LLM we have a data structure with trusted and untrusted data.

Any response on a call to a web search or MCP is considered untrusted by default (tunable if you also wrote the MCP and trust it).

The you limit tbe operations on untrusted data to pure transformations, no side effects.

E.g. run an LLM to summarize, or remove whitespace, convert to float etc. All these done in a sandbox without network access.

For example:

"Get me all public github issues on this repo, summarise and store in this DB."

Although the command reads public information untrusted and has DB access it will only process the untrusted information in a tight sandbox and so this can be done securely. I think!

replies(2): >>45311866 #>>45313574 #
sebastiennight ◴[] No.45311866[source]
You definitely do not need or want to give database access to an LLM-with-scaffolding system to execute the example you provided.

(by database access, I'm assuming you'd be planning to ask the LLM to write SQL code which this system would run)

Instead, you would ask your LLM to create an object containing the structured data about those github issues (ID, title, description, timestamp, etc) and then you would run a separate `storeGitHubIssues()` method that uses prepared statements to avoid SQL injection.

replies(1): >>45312560 #
1. mcapodici ◴[] No.45312560[source]
Yes this. What you said is what I meant.

You could also get the LLM to "vibe code" the SQL. Tbis is somewhat dangerous as the LLM might make mistakes, but the main thing I am talking about hete is how not to be "influenced" by text in data and so be susceptible to that sort of attack.