
simonw No.45341827
I went looking for how they define "agent" in the paper:

> AI agents are autonomous systems that can reason about tasks and act to achieve goals by leveraging external tools and resources [4]. Modern AI agents are typically powered by large language models (LLMs) connected to external tools or APIs. They can perform reasoning, invoke specialized models, and adapt based on feedback [5]. Agents differ from static models in that they are interactive and adaptive. Rather than returning fixed outputs, they can take multi-step actions, integrate context, and support iterative human–AI collaboration. Importantly, because agents are built on top of LLMs, users can interact with agents through human language, substantially reducing usage barriers for scientists.

So more or less an LLM running tools in a loop. I'm guessing "invoke specialized models" is achieved here by running a tool call against some other model.
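
The whole pattern fits in a few lines. A minimal sketch against the OpenAI chat-completions API (the single run_python tool and the dispatch() helper are hypothetical stand-ins; one of those tool handlers could just as easily wrap a call to another, specialized model):

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": "Reproduce Table 2 from the paper."}]

    # One illustrative tool; a real agent would register several
    tools = [{
        "type": "function",
        "function": {
            "name": "run_python",
            "description": "Execute Python code and return stdout",
            "parameters": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            },
        },
    }]

    while True:
        resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            print(msg.content)  # no more tool calls: the model is done
            break
        messages.append(msg)
        for call in msg.tool_calls:
            # dispatch() is hypothetical: route the named tool to a local function
            result = dispatch(call.function.name, call.function.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

That's the loop: the model either answers or asks for a tool, the harness runs the tool and feeds the result back, repeat.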

backflippinbozo No.45366145
Yeah, probably pretty simple compared to the methods we'd publicly discussed for months before this publication.

Here's the last time we showed our demo on HN: https://news.ycombinator.com/item?id=45132898

We'll actually be presenting on this tomorrow at 9am PST https://calendar.app.google/3soCpuHupRr96UaF8

Besides ReAct, we use AG2's two-agent pattern: a Code Writer paired with a Code Executor that runs generated code inside the DockerCommandLineCodeExecutor.
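
A rough sketch of that wiring, close to what the AG2 docs show (the model, image, prompts, and turn limit here are placeholders, not our production config):

    from autogen import ConversableAgent
    from autogen.coding import DockerCommandLineCodeExecutor

    # Generated code runs inside a container rather than on the host
    executor = DockerCommandLineCodeExecutor(
        image="python:3.11-slim",
        timeout=300,
        work_dir="workspace",
    )

    code_executor = ConversableAgent(
        "code_executor",
        llm_config=False,  # this agent only executes code, no LLM behind it
        code_execution_config={"executor": executor},
        human_input_mode="NEVER",
    )

    code_writer = ConversableAgent(
        "code_writer",
        system_message="Write Python in ```python blocks to accomplish the task.",
        llm_config={"config_list": [{"model": "gpt-4o"}]},
        code_execution_config=False,  # this agent only writes code
    )

    result = code_executor.initiate_chat(
        code_writer,
        message="Build and smoke-test a Docker image for this repo.",
        max_turns=10,
    )

The writer proposes code blocks, the executor runs them in Docker and returns the output, and the two iterate until the task finishes or the turn limit is hit.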

We also use hardware monitors and an LLM-as-a-Judge to assess task completion.
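
The judge is just another model call over the run artifacts. Something in this spirit (the prompt, model, and log-tail size are illustrative, not our exact rubric):

    import json
    from openai import OpenAI

    client = OpenAI()

    JUDGE_PROMPT = """You are judging whether an autonomous build run completed its task.
    Task: {task}
    Log tail: {log}
    Answer as JSON: {{"completed": true or false, "reason": "<one sentence>"}}"""

    def judge_completion(task: str, log: str) -> dict:
        # Ask a second model to grade the run from its logs
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": JUDGE_PROMPT.format(task=task, log=log[-4000:])}],
            response_format={"type": "json_object"},  # force parseable JSON
        )
        return json.loads(resp.choices[0].message.content)

The hardware monitors feed the same verdict: if the container exits cleanly and resource usage looks sane, the judge's pass/fail is the last gate.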

It's how we've built nearly 1K Docker images for arXiv papers over the last couple of months: https://hub.docker.com/u/remyxai

And it's how we'll support computational reproducibility by linking Docker images to arXiv paper publications: https://github.com/arXiv/arxiv-browse/pull/908