I’ve built Open Codex, a fully local, open-source alternative to OpenAI’s Codex CLI.
My initial plan was to fork their project and extend it. I even started doing that. But it turned out their code has several leaky abstractions, which made it hard to override core behavior cleanly. Shortly after, OpenAI introduced breaking changes. Maintaining my customizations on top became increasingly difficult.
So I rewrote the whole thing from scratch using Python. My version is designed to support local LLMs.
Right now it only works with phi-4-mini (GGUF) via lmstudio-community/Phi-4-mini-instruct-GGUF, but I plan to support more models. Everything is structured to be easy to extend.
At the moment I only support single-shot mode, but I intend to add an interactive chat mode, function calling, and more.
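To give a rough idea of what single-shot mode boils down to with a local GGUF model, here's a minimal sketch using llama-cpp-python. This is illustrative only, not the actual open-codex internals, and the quant filename is an assumption - pick whichever quant you actually downloaded.

from llama_cpp import Llama

# Minimal sketch, not open-codex source: pull the quantized model from
# Hugging Face and run one prompt -> one completion, no conversation state.
llm = Llama.from_pretrained(
    repo_id="lmstudio-community/Phi-4-mini-instruct-GGUF",
    filename="Phi-4-mini-instruct-Q4_K_M.gguf",  # assumed quant filename
    n_ctx=4096,
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a coding assistant. Reply with a single shell command."},
        {"role": "user", "content": "List all Python files changed in the last 24 hours."},
    ],
    max_tokens=128,
    temperature=0.2,
)
print(result["choices"][0]["message"]["content"])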
You can install it using Homebrew:
brew tap codingmoh/open-codex
brew install open-codex
It's also published on PyPI:
pip install open-codex
Source: https://github.com/codingmoh/open-codex

Was the model too big to run locally?
That's one of the reasons I went with phi-4-mini: surprisingly high quality for its size and speed. It handled multi-step reasoning, math, structured data extraction, and code generation pretty well, all on modest hardware. Quantized builds of Phi-1.5 / Phi-2 even run on a Raspberry Pi, as others have demonstrated.
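To make the structured-extraction point concrete, here's a rough sketch of asking the same model for JSON output with llama-cpp-python (illustrative only, not open-codex code; the quant filename is again an assumption, and any small instruct GGUF behaves similarly):

from llama_cpp import Llama

# Illustrative only: structured data extraction with a small quantized model.
llm = Llama.from_pretrained(
    repo_id="lmstudio-community/Phi-4-mini-instruct-GGUF",
    filename="Phi-4-mini-instruct-Q4_K_M.gguf",  # assumed quant filename
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Extract the fields name, date and amount as a JSON object."},
        {"role": "user", "content": "Invoice from Acme Corp dated 2024-03-02 for $1,250.00"},
    ],
    response_format={"type": "json_object"},  # constrain the reply to valid JSON
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])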
When trying out "phi4" locally with:
open-codex --provider ollama --full-auto --project-doc README.md --model phi4:latest
I get this error:
OpenAI rejected the request. Error details: Status: 400, Code: unknown, Type: api_error, Message: 400 registry.ollama.ai/library/phi4:latest does not support tools. Please verify your settings and try again.
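For context on where that 400 appears to come from: Ollama's OpenAI-compatible endpoint rejects a request that includes a tools parameter when the model's template has no tool/function-calling support, so the same prompt sent without tools goes through. A rough reproduction sketch (assuming Ollama on its default port; this is not open-codex code):

from openai import OpenAI

# Rough reproduction sketch, not open-codex code: talk to Ollama's
# OpenAI-compatible endpoint directly (default local port assumed).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Passing a tools parameter here would trigger the "does not support tools"
# 400 for models without tool support; omitting it lets the request succeed.
resp = client.chat.completions.create(
    model="phi4:latest",
    messages=[{"role": "user", "content": "Print the current git branch."}],
)
print(resp.choices[0].message.content)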