
82 points moh_quz | 6 comments

Hi HN, I'm Mohammed, a technical founder who loves shipping and giving back to the community. I'm open-sourcing the full-stack engine that powers my B2B product, apflow.co.

What it is: A production B2B starter with a Go backend and Next.js frontend. Both are fully Dockerized with separate containers. No Vercel. No Supabase. Deploy the whole thing on a $6 VPS, or split frontend and backend across different providers. You own the infrastructure.

The problem I was solving:

Every SaaS starter I evaluated had the same issue: they locked me into someone else's platform. Vercel for hosting. PlanetScale for the database. Serverless functions billing per invocation. Fine for prototypes, but costs become unpredictable at scale and migrating away is painful.

I wanted something I could deploy on any Linux box with docker-compose up. Something where I could host the frontend on Cloudflare Pages and the backend on a Hetzner VPS if I wanted. No vendor-specific APIs buried in my code.
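The "any Linux box" claim boils down to a compose file wiring the two containers to a database. A minimal sketch of what that could look like (service names, ports, and the pgvector image tag are illustrative, not the repo's actual configuration):

```yaml
# Hypothetical docker-compose.yml; check the repo for the real one.
services:
  backend:
    build: ./backend          # separate Dockerfile for the Go API
    ports: ["8080:8080"]
    depends_on: [db]
  frontend:
    build: ./frontend         # separate Dockerfile for Next.js
    ports: ["3000:3000"]
  db:
    image: pgvector/pgvector:pg17   # Postgres with the pgvector extension
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Because the frontend and backend are separate services, dropping the `frontend` block and pointing Cloudflare Pages at the API is the split-provider setup described above.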

Why Go for the backend:

Go gives me exactly what I need for a SaaS backend:

- Tiny footprint. The backend idles at ~50MB RAM. On a cheap VPS, that headroom lets me run more services without upgrading.
- Concurrency without complexity. Billing webhooks, file uploads, and AI calls run concurrently without callback hell.
- Compile-time type safety. Using SQLC, my SQL compiles to type-safe Go. If a query is wrong, it fails at build time, not in production.
- Predictable performance. Go's garbage collector keeps pauses short and consistent, so latency doesn't surprise you under load.

The architecture (Modular Monolith):

I didn't want microservices complexity for a small team, but I needed clean separation. I built a Modular Monolith: features like Auth, Billing, and AI are isolated Go modules with explicit interfaces, but they deploy as a single binary.

This structure also made AI coding tools (Cursor, Claude Code) dramatically more effective. Because every module has strict boundaries, the AI knows exactly where new code belongs and doesn't break other modules.

Full-stack, not just backend:

Backend: Go 1.25 + Gin + SQLC (type-safe SQL, no ORM) + PostgreSQL with pgvector
Frontend: Next.js 16 + React 19 + Tailwind + shadcn/ui
Communication: The frontend consumes a clean REST API. You can swap Next.js for any framework that speaks HTTP.
Infrastructure: Separate Dockerfiles for frontend and backend. Deploy together or apart.

What's pre-built:

The boring infrastructure is solved so you can focus on your actual product:

Auth + RBAC: Stytch B2B integration with Organizations, Teams, and Roles. Multi-tenant data isolation enforced at the query level.
Billing: Polar.sh as Merchant of Record. Handles subscriptions, invoices, and global tax/VAT. No Stripe webhook edge cases.
AI Pipeline: OpenAI RAG using pgvector. The retrieval service enforces strict context boundaries to minimize hallucinations.
OCR: Mistral integration for document extraction.
File Storage: Cloudflare R2 integration.

Each feature is a separate module. Don't need OCR? Remove it. Want Stripe instead of Polar? The billing interface is abstracted.

Real-world proof:

This isn't a template I made for GitHub stars. It's the exact code running apflow.co in production. When I added document OCR, I built it as a new module without touching Auth or Billing. The architecture held.

How to try it:

Clone the repo, read setup.md to check the prerequisites, run ./setup.sh, and you'll have a working B2B environment running locally in minutes.
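In shell form, assuming the repo URL below and that setup.md lists the prerequisites (Docker, etc.):

```shell
# Commands inferred from the description; see setup.md for specifics.
git clone https://github.com/moasq/production-saas-starter
cd production-saas-starter
cat setup.md    # check prerequisites first
./setup.sh      # bootstraps the local environment
```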

Feedback I want:

I'd appreciate feedback from Go developers on the module boundaries and cross-module interfaces. Also curious if anyone has suggestions for the Docker setup in production deployments.

GitHub: https://github.com/moasq/production-saas-starter

Live: https://apflow.co

1. adlpz ◴[] No.46324890[source]
Cool project! Will surely copy ideas from it :)

A general question for the room: where's the tipping point where you need a "proper" backend, in a different language, with all the inconveniences of possible type safety issues and impedance mismatches?

Because I feel like for 90% of small-medium projects it's just good enough with all the backend stuff within the same Next.js process as the front-end. I just do "separation of concerns"-ish with the code organization and funnel all communication with something structured and type safe like tRPC.

Feels separate enough but is still very pleasant to work with.

Am I doing it wrong?

replies(1): >>46324921 #
2. moh_quz ◴[] No.46324921[source]
You're not doing it wrong.

For most CRUD apps, Next.js + tRPC is the right call.

My tipping point was long-running tasks (OCR, AI processing that takes 30+ seconds) and wanting to scale backend compute separately from frontend serving.

If you don't have those needs, stick with what you have.

replies(3): >>46324953 #>>46324983 #>>46327765 #
3. adlpz ◴[] No.46324953[source]
Thanks for the answer! I've hit those tipping points myself in exactly the same scenarios (OCR and AI). For me, it ends up being either hacky or just decoupled (independent job runners). Makes sense to have a proper monolith backend for these.

Congrats on the launch again!

replies(1): >>46324970 #
4. moh_quz ◴[] No.46324970{3}[source]
I really appreciate your comment. Never hesitate to reach out if you have any concerns; you can find my contact info in the repo.
5. ◴[] No.46324983[source]
6. d0100 ◴[] No.46327765[source]
dbos.dev should take care of long-running tasks, but I haven't tried running many tasks on it yet, as my other, bigger projects all use Temporal