
204 points by waleedlatif1

Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.

You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...

- Tool calling with granular control: forced, auto

- Agent memory: conversation memory with sliding window support (by last n messages or tokens; see the sketch after this list)

- Trace spans: detailed logging and observability for nested workflows and tool calling

- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents

- Workflow deployment versioning with rollbacks

- MCP support, Human-in-the-loop block

- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
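
For the memory block above, here's a minimal sketch of what sliding-window trimming by message count or token budget can look like (illustrative TypeScript; the names and the rough token estimator are assumptions, not Sim's actual implementation):

    type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

    // Hypothetical helper: rough token estimate (~4 chars per token);
    // a real implementation would use a proper tokenizer.
    const estimateTokens = (m: ChatMessage): number => Math.ceil(m.content.length / 4);

    // Keep the last `maxMessages` messages, or as many trailing messages
    // as fit within `maxTokens`, depending on which limit is configured.
    function slidingWindow(
      history: ChatMessage[],
      opts: { maxMessages?: number; maxTokens?: number }
    ): ChatMessage[] {
      let window = opts.maxMessages ? history.slice(-opts.maxMessages) : [...history];
      if (opts.maxTokens) {
        const kept: ChatMessage[] = [];
        let total = 0;
        // Walk backwards so the most recent messages are kept first.
        for (let i = window.length - 1; i >= 0; i--) {
          total += estimateTokens(window[i]);
          if (total > opts.maxTokens) break;
          kept.unshift(window[i]);
        }
        window = kept;
      }
      return window;
    }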

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
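
As a rough illustration of that scheduling model (a minimal sketch, not Sim's actual executor; the Block shape and run signature are made up for this example):

    type Block = {
      id: string;
      deps: string[]; // ids of upstream blocks
      run: (inputs: Record<string, unknown>) => Promise<unknown>;
    };

    // Execute an acyclic graph of blocks. Each block starts as soon as all of
    // its upstream dependencies have resolved; independent branches run concurrently.
    async function executeDag(blocks: Block[]): Promise<Record<string, unknown>> {
      const byId = new Map(blocks.map((b) => [b.id, b]));
      const started = new Map<string, Promise<unknown>>();

      const resultOf = (id: string): Promise<unknown> => {
        if (!started.has(id)) {
          const block = byId.get(id)!;
          started.set(
            id,
            (async () => {
              // Wait only on this block's own upstream results.
              const upstream = await Promise.all(block.deps.map((dep) => resultOf(dep)));
              const inputs = Object.fromEntries(block.deps.map((dep, i) => [dep, upstream[i]]));
              return block.run(inputs);
            })()
          );
        }
        return started.get(id)!;
      };

      // Kick off every block; memoized promises prevent duplicate runs.
      const entries = await Promise.all(
        blocks.map(async (b) => [b.id, await resultOf(b.id)] as const)
      );
      return Object.fromEntries(entries);
    }

The key property is that independent branches start in parallel, while each block still waits on exactly its own upstream results.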

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
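
To give a sense of what "normalizing response shapes" means here, a hypothetical provider-agnostic result might look like this (field names are assumptions for illustration, not Sim's actual schema):

    // A provider-agnostic result that downstream blocks can rely on,
    // with the untouched provider payload still available.
    interface NormalizedAgentResponse {
      content: string;                                   // final text output
      toolCalls: { name: string; arguments: unknown }[]; // tool invocations requested by the model
      usage?: { inputTokens: number; outputTokens: number };
      raw: unknown;                                      // raw provider response, passed through
    }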

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://news.ycombinator.com/item?id=43823096

[2] https://news.ycombinator.com/item?id=44052766

smarx007:
So here is a case I wanted to implement in n8n a few years ago; it required quite heavy JS blocks:

- I want to check some input (pick one of your 138 blocks)

- I want to extract a list of items from that input

- I want to check which items I have encountered before <- that's the key bit

- Do something for the items that have not been encountered before; bonus points for detecting updated and deleted items

- Rinse and repeat

It could be a row added to a CSV file, a new file dropped into a Nextcloud folder, a list of issues pulled from a repo, or an RSS feed (Yahoo! Pipes, what a sweet memory).

How good is the support for such a case in Sim? And did it get better in n8n?

waleedlatif1 (OP):
This is actually a perfect use case: mostly deterministic workflows that need LLMs to fill in the gaps or do the knowledge work. As you mentioned, you can add it as a row in a CSV file (Sheets), use the baked-in memory block and treat it as simple storage, store the row in Supabase, or use the knowledge base. There are a ton of ways to do this that don't require you to maintain the memory solution yourself. You can even detect updated and deleted items by keeping a version-controlled snapshot of each row in the CSV and updating it as you go.
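
For the "which items did I encounter before" step specifically, a rough sketch of that snapshot-diff idea (illustrative TypeScript; the names are made up, and in Sim the `prev` snapshot could live in the memory block, a sheet row, or Supabase):

    import { createHash } from "node:crypto";

    type Item = { id: string; payload: string };
    type Snapshot = Record<string, string>; // item id -> content hash from the previous run

    function diffAgainstSnapshot(items: Item[], prev: Snapshot) {
      const next: Snapshot = {};
      const added: Item[] = [];
      const updated: Item[] = [];

      for (const item of items) {
        const hash = createHash("sha256").update(item.payload).digest("hex");
        next[item.id] = hash;
        if (!(item.id in prev)) added.push(item);        // never seen before
        else if (prev[item.id] !== hash) updated.push(item); // seen, but content changed
      }

      // Anything present last run but missing now was deleted.
      const deleted = Object.keys(prev).filter((id) => !(id in next));

      return { added, updated, deleted, next };
    }

Persisting `next` somewhere between runs is what makes the "rinse and repeat" step incremental.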

I can't tell you whether it got better in n8n, but I can definitely say this sounds like a great candidate workflow to build in Sim :)