
82 points by moh_quz | 3 comments

Hi HN, I'm Mohammed, a technical founder who loves shipping and giving back to the community. I'm open-sourcing the full-stack engine that powers my B2B product, apflow.co.

What it is: A production B2B starter with a Go backend and Next.js frontend. Both are fully Dockerized with separate containers. No Vercel. No Supabase. Deploy the whole thing on a $6 VPS, or split frontend and backend across different providers. You own the infrastructure.

The problem I was solving:

Every SaaS starter I evaluated had the same issue: they locked me into someone else's platform. Vercel for hosting. PlanetScale for the database. Serverless functions billing per invocation. Fine for prototypes, but costs become unpredictable at scale and migrating away is painful.

I wanted something I could deploy on any Linux box with docker-compose up. Something where I could host the frontend on Cloudflare Pages and the backend on a Hetzner VPS if I wanted. No vendor-specific APIs buried in my code.

Why Go for the backend:

Go gives me exactly what I need for a SaaS backend:

Tiny footprint. The backend idles at ~50MB RAM. On a cheap VPS, that headroom lets me run more services without upgrading.

Concurrency without complexity. Billing webhooks, file uploads, and AI calls run concurrently without callback hell.

Compile-time type safety. Using SQLC, my SQL compiles to type-safe Go (see the sketch below). If a query is wrong, it fails at build time, not in production.

Predictable performance. No garbage collection pauses that surprise you under load.
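To make the SQLC point concrete, here is a minimal sketch of what calling a sqlc-generated query looks like. The table, query name, and package paths are hypothetical and not taken from the repo; it assumes sqlc's pgx/v5 output.

```go
// Hypothetical query.sql fed to sqlc (shown as a comment):
//
//   -- name: GetUserByID :one
//   SELECT id, email, org_id FROM users WHERE id = $1;
//
// sqlc validates that SQL against the schema and generates a typed Go method,
// so a broken query fails the build instead of a production request.
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"

	"example.com/app/internal/db" // hypothetical sqlc output package
)

func main() {
	ctx := context.Background()

	pool, err := pgxpool.New(ctx, "postgres://localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	queries := db.New(pool)

	// The generated method carries concrete parameter and return types,
	// so a type mismatch here is a compile error, not a runtime surprise.
	user, err := queries.GetUserByID(ctx, 42)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded user %v (%s)", user.ID, user.Email)
}
```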

The architecture (Modular Monolith):

I didn't want microservices complexity for a small team, but I needed clean separation. I built a Modular Monolith: features like Auth, Billing, and AI are isolated Go modules with explicit interfaces, but they deploy as a single binary.
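To make the boundary idea concrete, here is a minimal sketch of the pattern (all type and function names are hypothetical, not from the repo): one module exposes an interface, another module depends only on that interface, and main wires them into a single binary.

```go
// A minimal sketch of the modular-monolith boundary idea; names are
// hypothetical, not taken from the repo.
package main

import (
	"context"
	"fmt"
)

// The billing module exposes only an interface; other modules never
// import its internals.
type SubscriptionService interface {
	IsActive(ctx context.Context, orgID string) (bool, error)
}

// polarBilling is one concrete implementation living inside the billing module.
type polarBilling struct{}

func (polarBilling) IsActive(ctx context.Context, orgID string) (bool, error) {
	// Real code would call the payment provider; stubbed for the sketch.
	return true, nil
}

// The auth module depends on the billing *interface*, not the implementation,
// so swapping providers never touches this code.
type AuthGuard struct {
	Billing SubscriptionService
}

func (g AuthGuard) Allow(ctx context.Context, orgID string) (bool, error) {
	return g.Billing.IsActive(ctx, orgID)
}

func main() {
	// main wires the modules together; the result is still a single binary.
	guard := AuthGuard{Billing: polarBilling{}}
	ok, _ := guard.Allow(context.Background(), "org_123")
	fmt.Println("request allowed:", ok)
}
```

Swapping one provider for another then means adding a second implementation of the interface and changing one line of wiring in main.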

This structure also made AI coding tools (Cursor, Claude Code) dramatically more effective. Because every module has strict boundaries, the AI knows exactly where new code belongs and doesn't break other modules.

Full-stack, not just backend:

Backend: Go 1.25 + Gin + SQLC (type-safe SQL, no ORM) + PostgreSQL with pgvector

Frontend: Next.js 16 + React 19 + Tailwind + shadcn/ui

Communication: The frontend consumes a clean REST API (see the sketch below). You can swap Next.js for any framework that speaks HTTP.

Infrastructure: Separate Dockerfiles for frontend and backend. Deploy together or apart.
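As a sketch of that REST contract, here is roughly what a Gin endpoint looks like; the route and response shape are illustrative, not from the repo.

```go
// A minimal Gin sketch of the kind of plain REST endpoint the frontend
// consumes; the route and payload are illustrative, not from the repo.
package main

import (
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// Any client that speaks HTTP can call this; nothing here is
	// specific to Next.js.
	r.GET("/api/v1/health", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{"status": "ok"})
	})

	// The backend container listens on its own port; the frontend runs
	// in a separate container and just makes HTTP requests.
	if err := r.Run(":8080"); err != nil {
		log.Fatal(err)
	}
}
```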

What's pre-built:

The boring infrastructure is solved so you can focus on your actual product:

Auth + RBAC: Stytch B2B integration with Organizations, Teams, and Roles. Multi-tenant data isolation enforced at the query level (see the sketch below).

Billing: Polar.sh as Merchant of Record. Handles subscriptions, invoices, and global tax/VAT. No Stripe webhook edge cases.

AI Pipeline: OpenAI RAG using pgvector. The retrieval service enforces strict context boundaries to minimize hallucinations.

OCR: Mistral integration for document extraction.

File Storage: Cloudflare R2 integration.

Each feature is a separate module. Don't need OCR? Remove it. Want Stripe instead of Polar? The billing interface is abstracted.
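On query-level isolation, here is a minimal sketch of the idea; the table, columns, and function are hypothetical, not taken from the repo, and real code would go through the sqlc-generated layer rather than database/sql directly.

```go
// A minimal sketch of query-level tenant isolation: every read is scoped by
// the caller's organization ID. Table, column, and function names are
// hypothetical, not taken from the repo.
package tenancy

import (
	"context"
	"database/sql"
)

type Document struct {
	ID    string
	Title string
}

// ListDocuments offers no code path that omits org_id, so one tenant cannot
// read another tenant's rows even if a handler passes a stray document ID.
func ListDocuments(ctx context.Context, db *sql.DB, orgID string) ([]Document, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT id, title FROM documents WHERE org_id = $1`, orgID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var docs []Document
	for rows.Next() {
		var d Document
		if err := rows.Scan(&d.ID, &d.Title); err != nil {
			return nil, err
		}
		docs = append(docs, d)
	}
	return docs, rows.Err()
}
```

Keeping the tenant filter inside the data layer, rather than trusting each handler to remember it, is what makes the isolation enforceable.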

Real-world proof:

This isn't a template I made for GitHub stars. It's the exact code running apflow.co in production. When I added document OCR, I built it as a new module without touching Auth or Billing. The architecture held.

How to try it:

Clone the repo, read setup.md to check the prerequisites, run ./setup.sh, and you'll have a working B2B environment running locally in minutes.

Feedback I want:

I'd appreciate feedback from Go developers on the module boundaries and cross-module interfaces. Also curious if anyone has suggestions for the Docker setup in production deployments.

GitHub: https://github.com/moasq/production-saas-starter

Live: https://apflow.co

1. rvz No.46325128
Nice project and great idea and a reasonable selection of technologies that optimize for low cost deployment.

However, my biggest concern is the glaring lack of any comprehensive tests whatsoever. I even have to question whether this project is production ready at all.

Until that is in place, I really do not think this is "production" quality, I'm afraid.

replies(1): >>46325190 #
2. moh_quz No.46325190
Fair point. For what it's worth, I did add a script that runs tests and checks coverage. But yeah, the coverage itself could be better; working on it.

PRs welcome if anyone wants to help out

replies(1): >>46337307 #
3. twodave No.46337307
Eh, don't let other people define what is acceptable for production. Tests are nice, but for most boilerplate-type things nobody (and I mean NOBODY) writes unit or even integration tests. If you're deploying with a tolerance that requires this stuff to be automatically verified, then you're probably going to be running automated e2e UI tests anyway, and those will naturally uncover regressions and other issues with your auth backbone and other basics.

Source: self, from “shipping to production” for multiple decades