
204 points | waleedlatif1

Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.

You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...

- Tool calling with granular control: forced, auto

- Agent memory: conversation memory with sliding window support (by last n messages or tokens); see the sketch after this list

- Trace spans: detailed logging and observability for nested workflows and tool calling

- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents

- Workflow deployment versioning with rollbacks

- MCP support, Human-in-the-loop block

- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
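
To make the memory item above concrete, here's a rough sketch of what a sliding window over conversation history can look like. This is illustrative only, not our actual implementation; the `slidingWindow` helper and the ~4-chars-per-token estimate are assumptions.

```typescript
// Illustrative sketch of sliding-window conversation memory (not Sim's actual code).
// Keeps either the last N messages or as many recent messages as fit a token budget.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Rough token estimate (~4 chars per token); a real implementation would use a tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function slidingWindow(
  history: ChatMessage[],
  opts: { maxMessages?: number; maxTokens?: number }
): ChatMessage[] {
  let window = history;

  // Trim by message count: keep only the last N messages.
  if (opts.maxMessages !== undefined) {
    window = window.slice(-opts.maxMessages);
  }

  // Trim by token budget: walk backwards, keeping the most recent messages that fit.
  if (opts.maxTokens !== undefined) {
    const kept: ChatMessage[] = [];
    let used = 0;
    for (let i = window.length - 1; i >= 0; i--) {
      const cost = estimateTokens(window[i].content);
      if (used + cost > opts.maxTokens) break;
      kept.unshift(window[i]);
      used += cost;
    }
    window = kept;
  }

  return window;
}

// Example: keep at most the last 10 messages, capped at ~2000 tokens.
// const context = slidingWindow(conversation, { maxMessages: 10, maxTokens: 2000 });
```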

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
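
As a rough illustration of that scheduling model (a sketch, not our actual engine; the `Block` shape and `run` signature are assumptions), you can think of it as memoized promises, where each node fires as soon as its upstream results resolve:

```typescript
// Minimal sketch of dependency-driven concurrent DAG execution (illustrative only;
// assumes the graph is acyclic and omits loops, error handling, and cancellation).

type Block = {
  id: string;
  deps: string[]; // upstream block ids
  run: (inputs: Record<string, unknown>) => Promise<unknown>;
};

async function executeDag(blocks: Block[]): Promise<Record<string, unknown>> {
  const byId = new Map(blocks.map((b) => [b.id, b]));
  const results: Record<string, unknown> = {};
  const started = new Map<string, Promise<unknown>>();

  const runBlock = (block: Block): Promise<unknown> => {
    // Memoize so each block runs exactly once even if several blocks depend on it.
    if (!started.has(block.id)) {
      started.set(
        block.id,
        // A block starts as soon as all of its upstream dependencies have resolved.
        Promise.all(block.deps.map((id) => runBlock(byId.get(id)!))).then(async () => {
          const inputs = Object.fromEntries(block.deps.map((id) => [id, results[id]]));
          results[block.id] = await block.run(inputs);
          return results[block.id];
        })
      );
    }
    return started.get(block.id)!;
  };

  // Kick off every block; independent branches execute concurrently.
  await Promise.all(blocks.map(runBlock));
  return results;
}
```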

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
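
For example, a pass-through-with-normalization layer can look roughly like this (a sketch under assumptions, not our code; the `NormalizedResponse` shape is made up, while the endpoints and headers shown are the real OpenAI and Anthropic APIs):

```typescript
// Sketch: the request goes to the provider API unchanged, and only the response
// shape is mapped to a common structure so downstream blocks can consume either provider.

type NormalizedResponse = {
  text: string;
  toolCalls: { name: string; arguments: unknown }[];
};

async function callOpenAI(model: string, messages: object[]): Promise<NormalizedResponse> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages }),
  });
  const data = await res.json();
  const msg = data.choices[0].message;
  return {
    text: msg.content ?? "",
    toolCalls: (msg.tool_calls ?? []).map((c: any) => ({
      name: c.function.name,
      arguments: JSON.parse(c.function.arguments),
    })),
  };
}

async function callAnthropic(model: string, messages: object[]): Promise<NormalizedResponse> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({ model, max_tokens: 1024, messages }),
  });
  const data = await res.json();
  return {
    // Anthropic returns a list of content blocks rather than a single message string.
    text: data.content.filter((b: any) => b.type === "text").map((b: any) => b.text).join(""),
    toolCalls: data.content
      .filter((b: any) => b.type === "tool_use")
      .map((b: any) => ({ name: b.name, arguments: b.input })),
  };
}
```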

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://news.ycombinator.com/item?id=43823096

[2] https://news.ycombinator.com/item?id=44052766

solarkraft:
This looks really cool for DIYing workflows, especially since you seem to have a very useful selection of tools!

Did you build your own agent engine? Why not LangGraph?

Say I was building a general agentic chat app with LangGraph in the backend (it seems to provide a lot of infrastructure for highly reliable and interactive agents, all the way up to a protocol usable by UIs, plus a decent ecosystem, which makes it very easy to extend). Could I integrate with this for DIY workflows in a high-quality fashion (high-precision updates and control)?

Is there a case for switching out LangGraph's backend with Sim (can you build agents of the same quality and complexity - I'm thinking coding agent)? Could it interact with LangGraph agents in a high-quality way so you can tap that ecosystem?

Can I use Sim workflows with my current agent, say, via MCP?

waleedlatif1 (OP):
1. we wanted full control over agent orchestration and execution since we didn't like the abstractions many of the existing frameworks had built, and didn't want dependencies in places we didn't need them. so, we built the orchestration and execution engine from scratch, which lets us do neat things like human-in-the-loop, settings that run the same block 10 times concurrently, etc.

2. this would kind of serve as a drop-in replacement for langgraph. you could build a workflow with an agent and some tools, perhaps some form of memory. then, just deploy that as an API, call it from your frontend, and consume the streamed response on your chat client without needing to maintain any infra at all.
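
as a sketch (the endpoint path, auth header, and body shape here are placeholders, not our documented API; check the docs for the real shape), consuming the stream from a deployed workflow looks roughly like:

```typescript
// Hypothetical client for a deployed workflow endpoint; URL and payload are illustrative.
async function streamWorkflow(input: string) {
  const res = await fetch("https://sim.ai/api/workflows/<workflow-id>/execute", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SIM_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ input, stream: true }),
  });

  // Consume the streamed response chunk by chunk and render it as it arrives.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}
```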

3. we have a generic code block and an api block for calling APIs we don't have native integrations for, and you can use those to plug (langgraph) agents into the Sim ecosystem.

4. we are adding the ability to deploy your workflow as an MCP server in the next week, stay tuned :) in the meantime, you can deploy the workflow as an API and have the agent call it as a tool. moreover, you can use the workflow block in sim to call other agents/workflows as well, so it's easy to encapsulate a lot of complexity in a `parent` workflow that you call, which dynamically routes and uses different tools based on the task at hand