
204 points waleedlatif1 | 2 comments

Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.

You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...

- Tool calling with granular control: forced, auto

- Agent memory: conversation memory with sliding-window support, trimmed by the last n messages or by token count (rough sketch below, after this list)

- Trace spans: detailed logging and observability for nested workflows and tool calling

- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents

- Workflow deployment versioning with rollbacks

- MCP support, Human-in-the-loop block

- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
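Roughly, the sliding-window trimming works like the following (illustrative TypeScript only, not our exact code; the Message shape, slidingWindow helper, and countTokens hook are placeholders):

```typescript
// Keep only the most recent conversation turns, bounded by message count
// and/or a token budget. Purely an illustration of the sliding-window idea.
type Message = { role: "user" | "assistant" | "system"; content: string };

function slidingWindow(
  history: Message[],
  opts: { maxMessages?: number; maxTokens?: number },
  countTokens: (m: Message) => number
): Message[] {
  // First cap by message count, if configured.
  let window = opts.maxMessages ? history.slice(-opts.maxMessages) : [...history];

  // Then cap by token budget, walking backwards from the newest message.
  if (opts.maxTokens !== undefined) {
    let total = 0;
    const kept: Message[] = [];
    for (let i = window.length - 1; i >= 0; i--) {
      total += countTokens(window[i]);
      if (total > opts.maxTokens) break; // older messages fall out of the window
      kept.unshift(window[i]);
    }
    window = kept;
  }
  return window;
}
```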

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
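To make the scheduling concrete, here's a rough TypeScript sketch of that idea (not the actual executor; the Node type and executeDag are just illustrative names):

```typescript
// Run each node as soon as all of its upstream dependencies have finished,
// with no global ordering. Independent branches execute concurrently.
type Node = {
  id: string;
  deps: string[];
  run: (upstream: Record<string, unknown>) => Promise<unknown>;
};

async function executeDag(nodes: Node[]): Promise<Record<string, unknown>> {
  const results: Record<string, unknown> = {};
  const pending = new Map(nodes.map((n) => [n.id, n]));
  const inFlight = new Map<string, Promise<void>>();

  // A node is ready once every upstream block has produced a result.
  const ready = () =>
    [...pending.values()].filter((n) => n.deps.every((d) => d in results));

  while (pending.size > 0 || inFlight.size > 0) {
    for (const node of ready()) {
      pending.delete(node.id);
      inFlight.set(
        node.id,
        node.run(results).then((out) => {
          results[node.id] = out;
          inFlight.delete(node.id);
        })
      );
    }
    if (inFlight.size === 0) {
      throw new Error("Cycle or unsatisfiable dependency in workflow");
    }
    // Wake up as soon as any running node finishes, then re-check readiness.
    await Promise.race(inFlight.values());
  }
  return results;
}
```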

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
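For example, normalizing an OpenAI chat completion and an Anthropic message into one shape looks roughly like this (the NormalizedResponse type is illustrative, not our internal one):

```typescript
// Map provider-specific completion shapes onto one common shape so that
// downstream blocks can consume any model's output the same way.
type NormalizedResponse = {
  text: string;
  toolCalls: { name: string; arguments: Record<string, unknown> }[];
  usage: { inputTokens: number; outputTokens: number };
};

function normalizeOpenAI(res: any): NormalizedResponse {
  const msg = res.choices[0].message;
  return {
    text: msg.content ?? "",
    toolCalls: (msg.tool_calls ?? []).map((c: any) => ({
      name: c.function.name,
      arguments: JSON.parse(c.function.arguments), // OpenAI returns arguments as a JSON string
    })),
    usage: {
      inputTokens: res.usage.prompt_tokens,
      outputTokens: res.usage.completion_tokens,
    },
  };
}

function normalizeAnthropic(res: any): NormalizedResponse {
  const textBlocks = res.content.filter((b: any) => b.type === "text");
  const toolBlocks = res.content.filter((b: any) => b.type === "tool_use");
  return {
    text: textBlocks.map((b: any) => b.text).join(""),
    toolCalls: toolBlocks.map((b: any) => ({ name: b.name, arguments: b.input })),
    usage: {
      inputTokens: res.usage.input_tokens,
      outputTokens: res.usage.output_tokens,
    },
  };
}
```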

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://news.ycombinator.com/item?id=43823096

[2] https://news.ycombinator.com/item?id=44052766

1. Multicomp No.46240560
Excited to try this out; I've been looking at LangFlow and similar tools for doing DAG workflows. Sure, I could prompt or try an MCP or a Claude skill for my utility workflows, but those aren't reliably followed, and where possible I want each AI agent call to be smaller, like a function.

I'm definitely going to give this a try tomorrow morning. First up will probably be something easy and personal, like going through the collection of NPC character sheets from my recent campaign, checking that every NPC has the required sections with some content in them, and flagging any that don't for my review.

2. waleedlatif1 No.46240764
sounds super cool! let me know how it goes