214 points | waleedlatif1 | 4 comments

Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.

You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...

- Tool calling with granular control: forced, auto

- Agent memory: conversation memory with sliding window support (by last n messages or tokens; sketched after this list)

- Trace spans: detailed logging and observability for nested workflows and tool calling

- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents (also sketched after this list)

- Workflow deployment versioning with rollbacks

- MCP support, Human-in-the-loop block

- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
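
To make the agent-memory bullet concrete, here is a minimal sketch of a sliding window over conversation history, trimmed by the last n messages or by an approximate token budget. This is illustrative TypeScript, not Sim's implementation; the message shape and the rough 4-characters-per-token estimate are assumptions.

    // Minimal sketch of sliding-window conversation memory (illustrative, not Sim's code).
    // Keeps the last N messages and/or as many recent messages as fit a token budget.
    type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

    // Rough token estimate (~4 characters per token); a real implementation would use a tokenizer.
    const estimateTokens = (text: string) => Math.ceil(text.length / 4);

    function slidingWindow(
      history: ChatMessage[],
      opts: { maxMessages?: number; maxTokens?: number },
    ): ChatMessage[] {
      const recent = opts.maxMessages ? history.slice(-opts.maxMessages) : [...history];
      if (opts.maxTokens) {
        // Drop the oldest messages until the window fits within the token budget.
        while (
          recent.length > 1 &&
          recent.reduce((sum, m) => sum + estimateTokens(m.content), 0) > opts.maxTokens
        ) {
          recent.shift();
        }
      }
      return recent;
    }

    // e.g. keep the last 10 messages, capped at roughly 2000 tokens:
    // const context = slidingWindow(history, { maxMessages: 10, maxTokens: 2000 });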
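
And a rough sketch of the native-RAG pipeline from the list above: chunk the document, embed each chunk, store the vectors in Postgres with pgvector, and run a similarity search at query time. The table name, column names, and embedding model here are assumptions for illustration, not Sim's actual schema.

    // Sketch of a chunk -> embed -> store -> search pipeline on pgvector (illustrative only).
    import { Pool } from "pg";
    import OpenAI from "openai";

    const pool = new Pool();      // connection settings come from PG* env vars
    const openai = new OpenAI();  // API key comes from OPENAI_API_KEY

    // Naive fixed-size chunking; real pipelines usually split on structure with overlap.
    const chunk = (text: string, size = 1000) =>
      Array.from({ length: Math.ceil(text.length / size) }, (_, i) =>
        text.slice(i * size, (i + 1) * size),
      );

    async function embed(text: string): Promise<number[]> {
      const res = await openai.embeddings.create({ model: "text-embedding-3-small", input: text });
      return res.data[0].embedding;
    }

    // Ingest: chunk the document, embed each piece, and store it alongside its vector.
    async function ingest(documentText: string) {
      for (const piece of chunk(documentText)) {
        const embedding = await embed(piece);
        await pool.query(
          "INSERT INTO document_chunks (content, embedding) VALUES ($1, $2::vector)",
          [piece, JSON.stringify(embedding)],
        );
      }
    }

    // Search: embed the query and rank chunks by cosine distance (pgvector's <=> operator).
    async function search(query: string, k = 5): Promise<string[]> {
      const q = await embed(query);
      const { rows } = await pool.query(
        "SELECT content FROM document_chunks ORDER BY embedding <=> $1::vector LIMIT $2",
        [JSON.stringify(q), k],
      );
      return rows.map((r) => r.content);
    }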

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
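
A minimal sketch of that scheduling idea (not Sim's engine): memoize one promise per node so each node runs exactly once, starts as soon as all of its upstream nodes resolve, and independent branches execute concurrently.

    // Dependency-driven concurrent execution over a DAG (illustrative sketch, not Sim's engine).
    type WorkflowNode = { id: string; deps: string[]; run: () => Promise<void> };

    async function executeDag(nodes: WorkflowNode[]): Promise<void> {
      const byId = new Map(nodes.map((n) => [n.id, n] as const));
      const started = new Map<string, Promise<void>>();

      const runNode = (id: string): Promise<void> => {
        // Memoize so a node shared by several downstream branches runs only once (fan-out/join).
        if (!started.has(id)) {
          const node = byId.get(id)!;
          // Wait for every upstream dependency, then run this node.
          started.set(id, Promise.all(node.deps.map(runNode)).then(() => node.run()));
        }
        return started.get(id)!;
      };

      await Promise.all(nodes.map((n) => runNode(n.id)));
    }

    // Example: b and c start concurrently once a finishes; d waits for both (join).
    // executeDag([
    //   { id: "a", deps: [], run: async () => {} },
    //   { id: "b", deps: ["a"], run: async () => {} },
    //   { id: "c", deps: ["a"], run: async () => {} },
    //   { id: "d", deps: ["b", "c"], run: async () => {} },
    // ]);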

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
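
For a sense of what "normalize response shapes" can look like in practice, here is an illustrative adapter: the prompt goes straight to each provider's SDK, and the provider-specific response is mapped into one shared shape for downstream blocks. The NormalizedResponse fields are hypothetical, not Sim's actual interfaces.

    // Illustrative only: send the prompt straight to each provider's SDK and map the
    // provider-specific response into one shared shape. Field names are hypothetical.
    import OpenAI from "openai";
    import Anthropic from "@anthropic-ai/sdk";

    type NormalizedResponse = {
      content: string;
      usage: { inputTokens: number; outputTokens: number };
      // tool calls, reasoning traces, etc. would be mapped the same way
    };

    async function callOpenAI(prompt: string, model: string): Promise<NormalizedResponse> {
      const res = await new OpenAI().chat.completions.create({
        model,
        messages: [{ role: "user", content: prompt }],
      });
      return {
        content: res.choices[0].message.content ?? "",
        usage: {
          inputTokens: res.usage?.prompt_tokens ?? 0,
          outputTokens: res.usage?.completion_tokens ?? 0,
        },
      };
    }

    async function callAnthropic(prompt: string, model: string): Promise<NormalizedResponse> {
      const res = await new Anthropic().messages.create({
        model,
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      });
      return {
        content: res.content.map((block) => (block.type === "text" ? block.text : "")).join(""),
        usage: { inputTokens: res.usage.input_tokens, outputTokens: res.usage.output_tokens },
      };
    }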

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://news.ycombinator.com/item?id=43823096

[2] https://news.ycombinator.com/item?id=44052766

1. solarkraft No.46238813
This looks really cool for DIYing workflows, especially since you seem to have a very useful selection of tools!

Did you build your own agent engine? Why not LangGraph?

Say I was building a general agentic chat app with LangGraph in the backend (as it seems to provide a lot of infrastructure for highly reliable and interactive agents, all the way up to a protocol usable by UIs, plus a decent ecosystem, making it very easily extensible). Could I integrate with this for DIY workflows in a high quality fashion (high-precision updates and control)?

Is there a case for switching out LangGraph's backend with Sim (can you build agents of the same quality and complexity - I'm thinking coding agent)? Could it interact with LangGraph agents in a high quality way so you can tap that ecosystem?

Can I use Sim workflows with my current agent, say, via MCP?

replies(2): >>46239236, >>46239238
2. threecheese No.46239236
Their deployment stuff has been turning me off lately; everyone is rushing to monetize - which I understand and support - but I feel like Langsmith is creeping further and further into Langchain|graph, and it makes me hesitant to invest. It's giving AWS-like gentle but firm lock-in vibes; I wonder if they have any PMs from there.

I do like the way they’ve been able to leverage Langgraph workflows to build agents - it seems like the right abstraction to me - and I also feel their middleware approach is very Django-y which I also like. Are you enjoying their stack?

replies(1): >>46242476
3. waleedlatif1 No.46239238
1. we wanted to have full control over the agent orchestration and the execution since we didn't like the abstractions that many of the existing frameworks had built, and didn't want to have dependencies in places we didn't need them. so, we built the orchestration and execution engine from scratch, allowing us to do neat things like human in the loop, settings that run the same block 10 times concurrently, etc.

2. this would kind of serve as a drop-in replacement for langgraph. you could build a workflow with an agent and some tools, perhaps some form of memory. then, just deploy that as an API, call it from your frontend, and consume the streamed response on your chat client without needing to maintain any infra at all (there's a rough sketch of the frontend side after this list)

3. we have a generic code block and an api block for calling APIs we don't have native integrations for, and you can use those to plug (langgraph) agents into the Sim ecosystem.

4. we are adding the ability to deploy your workflow as an MCP server in the next week, stay tuned :) in the meantime, you can deploy the workflow as an API and have the agent call it as a tool. moreover, you can use the workflow block in sim to call other agents/workflows as well, so it's easy to encapsulate a lot of complexity in a `parent` workflow that dynamically routes and uses different tools based on the task at hand
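
To make point 2 concrete, a rough sketch of calling a deployed workflow from a frontend and consuming the streamed response. The endpoint path, auth header, and request body here are assumptions, not Sim's documented API; the real interface is in the docs (https://docs.sim.ai).

    // Hypothetical example: the URL, header name, and body shape are assumptions, see docs.sim.ai.
    async function streamWorkflow(
      apiKey: string,
      input: string,
      onChunk: (text: string) => void,
    ) {
      const res = await fetch("https://sim.ai/api/workflows/<workflow-id>/execute", {
        method: "POST",
        headers: { "Content-Type": "application/json", "X-API-Key": apiKey },
        body: JSON.stringify({ input, stream: true }),
      });
      if (!res.ok || !res.body) throw new Error(`workflow call failed: ${res.status}`);

      // Read the streamed body incrementally and hand each decoded chunk to the chat UI.
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        onChunk(decoder.decode(value, { stream: true }));
      }
    }

    // e.g. streamWorkflow(key, "summarize today's tickets", (text) => appendToChat(text));
    // (appendToChat is a placeholder for whatever your chat client uses to render output)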

4. solarkraft No.46242476
I’m only in the research phase of my hypothetical project so far, so I’m going more off of vibes than personal experience for now.

I'm interested in LangGraph because it seems the closest to an industry standard - every use case seems to be addressed with a tutorial (both first and third party), and there's an ecosystem of already available graphs/agents. I'm aiming for both high extensibility (new use cases should be easily implementable) and high reliability. The LangGraph docs do a pretty good job of convincing me that they have the latter pretty well nailed down. Reliability seems like a hard enough problem that I'd question whether a new solution has it solved.

I want to build a (highly reliable & controllable) UI for agents more than I want to build the agents themselves, so my hope is that LangGraph has the biggest ecosystem I can plug into.

They do have some funky lock-in attempts, for instance the LangGraph CLI, which acts as a server for their agent protocol (https://github.com/langchain-ai/agent-protocol), is proprietary. However (and this is what I consider indicative of a strong ecosystem) there’s a free reimplementation named Aegra: https://www.aegra.dev/