101 points | lmeierhoefer | 1 comment

Hi HN, we’re the cofounders of Augento (https://augento.ai/). We’re building DeepSeek R1-like fine-tuning as a service. You connect your agent, tell us when it’s right or wrong, and we deliver an LLM optimized for that agent. There’s a demo video at https://www.youtube.com/watch?v=j5RQaTdRrKE, and our docs are at https://docs.augento.ai/. It’s open for anyone to use at https://augento.ai.

Agents fail all the time, especially when you try to use them for something actually useful. Current approaches suck: prompting has intrinsic limits, and supervised fine-tuning requires big, explicit datasets that are hard to collect.

Two months ago, the DeepSeek R1 paper outlined a way to post-train LLMs with (almost) pure reinforcement learning. We took up their research and built a fine-tuning platform around that.

You let us intercept your agent's data flow, and we deliver a fine-tuned open-source model trained on the agent's specific task. Instead of providing big datasets of explicit fine-tuning samples, you provide a reward function that judges the model's outputs.
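
To make that concrete, a reward function is just code that looks at the model's output and returns a score. A minimal, hypothetical sketch (the signature and checks here are purely illustrative, not our actual SDK):

    # Hypothetical reward function for one agent step (illustrative only).
    def reward(prompt: str, completion: str) -> float:
        """Return a score in [0, 1]; higher means the output was more useful."""
        score = 0.0
        if completion.strip():           # the agent produced something at all
            score += 0.5
        if "TODO" not in completion:     # and didn't punt with a placeholder
            score += 0.5
        return score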

Here are examples of what this can be used for:

Coding Agent: We fine-tuned a coding agent that kept making syntax errors and mishandling semantic edge cases. By providing a reward function that evaluated its code against the compiler (a rough sketch follows these examples), the agent learned not to produce these errors. The fine-tuned model reduced critical bugs by 40% with just 20 training samples.

MCP Tool Specialization: Imagine you have a custom set of internal tools using the MCP protocol, but your agent keeps selecting the wrong tool or passing incompatible parameters. You could fine-tune with a reward function that scores tool selection and parameter matching.

Browser Agent Navigation: If you're building a browser agent that struggles with complex web UIs or specific sites, you could fine-tune it to better understand UI elements and navigation patterns. With a reward function that scores successful task completion (like "find the best price for this product" or "complete this multi-step form"), you could train an agent that better identifies clickable elements, understands form validation errors, and navigates through complex SPAs without getting stuck.

VLA Robot Control: If you're using vision-language models to control robotic arms or other hardware, you could fine-tune for your specific actuator setup. With a reward function based on high-level task completion, you could train a Vision-Language-Action (VLA) model that translates natural language commands like "move the red block behind the blue cylinder" into actuator controls for your specific hardware.
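
For the coding-agent example above, the reward function can be as simple as asking the interpreter whether the output parses. A rough, illustrative sketch (a real reward would likely also run tests or linters; the syntax-only check here is just an assumption for illustration):

    # Rough sketch: reward the model only if its output is syntactically valid Python.
    def code_reward(completion: str) -> float:
        try:
            compile(completion, "<agent_output>", "exec")  # syntax check only
        except SyntaxError:
            return 0.0   # doesn't even parse: no reward
        return 1.0       # parses; tests or linters could refine this score further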

As you can see from these examples, the current paradigm is best suited for "verifiable domains", where it is possible to give an explicit function judging the model's outputs. However, up next, we will also support an "alignment mode", where you don't have to provide a reward function but instead give high-level feedback on past failure runs of your agent. Just tag where things went wrong, and we'll handle the rest. This makes it even easier to improve your agents without needing to write formal reward functions.

Our platform is not itself open source, but it fine-tunes open-source language models. In other words, it's an alternative to OpenAI's reinforcement fine-tuning API, but with Qwen, Llama, DeepSeek, etc., and more customizability on the reward model. We charge users for the training and for their inference/interaction with the model later on ($0 monthly flat fee + training cost + inference cost).

The platform is self-serve and open to use at https://augento.ai/dashboard. We’ll give you $20 in training credits, which should be enough to connect your agent and see some observable improvement on your use case.

We’d love to hear your thoughts and feedback!

HyprMusic | No.43541099
This looks great.

I have a few questions:
1. I'm assuming from the pricing that it's "serverless" inference; what's the cold-start time like?
2. Any idea on inference costs?

Also, just to reiterate what others have said: the option of exporting weights would definitely make it more appealing (although it sounds like that's on the roadmap).

replies(1): >>43544003
Zollerboy1 | No.43544003
Thanks!

> I'm assuming by the pricing it's "serverless" inference, what's the cold-start time like?

Yeah, you could probably call it serverless inference. However, because all fine-tuned models are trained on the same base model(s), we can apply some interesting optimizations over standard "serverless" model deployment. The biggest is that we can keep the base model loaded in VRAM and only swap the trained weight deltas per request. This gives us sub-second cold-start times for inference in the average case.
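
To give a rough idea of the mechanism (illustrated here with LoRA-style adapters via Hugging Face PEFT; our actual serving stack differs in the details, and the model/adapter paths below are made up):

    # Conceptual sketch: the shared base model stays loaded in VRAM and only the
    # small per-fine-tune adapter weights are switched per request.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen2.5-7B-Instruct", device_map="auto"
    )

    # Register two fine-tunes as adapters on top of the same base weights.
    model = PeftModel.from_pretrained(base, "adapters/agent-a", adapter_name="agent-a")
    model.load_adapter("adapters/agent-b", adapter_name="agent-b")

    def handle_request(adapter_name, input_ids):
        model.set_adapter(adapter_name)  # cheap switch, no base-weight reload
        return model.generate(input_ids, max_new_tokens=256)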

> Any idea on inference costs?

Right now, we’re pricing inference at $0.5/M input tokens and $2.5/M output tokens. That’s in a similar range to, but a bit lower than, GPT-4o/Claude 3.5, which we consider the main models we’re "competing" with. Since our long-term goal is to democratize access to models/agents, we hope to drop inference prices further, enabled by some other optimizations we’re currently planning.
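
(For a rough sense of scale at those rates: a request with 10k input tokens and 2k output tokens comes to about $0.005 + $0.005 = $0.01.)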