
106 points codingmoh | 2 comments

Hey HN,

I’ve built Open Codex, a fully local, open-source alternative to OpenAI’s Codex CLI.

My initial plan was to fork their project and extend it, and I even started doing that. But their code turned out to have several leaky abstractions that made it hard to override core behavior cleanly. Shortly after, OpenAI introduced breaking changes, and maintaining my customizations on top became increasingly difficult.

So I rewrote the whole thing from scratch in Python. My version is designed to run local LLMs.

Right now, it only works with phi-4-mini (GGUF) via lmstudio-community/Phi-4-mini-instruct-GGUF, but I plan to support more models. Everything is structured to be extendable.
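For the curious, here's a minimal sketch of what the local backend boils down to, written against llama-cpp-python (which can pull the GGUF straight from the Hugging Face repo). It's illustrative rather than the exact code from the repo, and the quant filename is an assumption:

    # Illustrative sketch, not the actual open-codex internals.
    # Assumes: pip install llama-cpp-python huggingface-hub
    from llama_cpp import Llama

    # Download the quantized GGUF from Hugging Face and load it locally.
    llm = Llama.from_pretrained(
        repo_id="lmstudio-community/Phi-4-mini-instruct-GGUF",
        filename="*Q4_K_M.gguf",  # quant choice is an assumption
        n_ctx=4096,
        verbose=False,
    )

    # Ask the model for a single shell command.
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "Reply with one shell command only."},
            {"role": "user", "content": "list all files modified today"},
        ],
        max_tokens=64,
    )
    print(out["choices"][0]["message"]["content"])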

At the moment I only support single-shot mode, but I intend to add an interactive chat mode, function calling, and more.
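To make single-shot mode concrete: the tool takes one natural-language instruction, asks the model for a shell command, shows it, and only runs it after you confirm. Roughly like this (names are illustrative, not the real code):

    # Rough sketch of the single-shot flow; illustrative only.
    import subprocess

    def single_shot(llm, instruction: str) -> None:
        out = llm.create_chat_completion(
            messages=[
                {"role": "system",
                 "content": "Translate the instruction into one POSIX shell "
                            "command. Reply with the command only."},
                {"role": "user", "content": instruction},
            ],
            max_tokens=128,
        )
        command = out["choices"][0]["message"]["content"].strip()
        print(f"$ {command}")
        if input("Run this command? [y/N] ").lower() == "y":
            subprocess.run(command, shell=True)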

You can install it using Homebrew:

   brew tap codingmoh/open-codex
   brew install open-codex

It's also published on PyPI:

   pip install open-codex

Source: https://github.com/codingmoh/open-codex
strangescript No.43755608
curious why you went with Phi as the default model; that seems a bit unusual compared to current trends
replies(2): >>43755856, >>43755896
1. jasonjmcghee No.43755896
agreed - I thought qwen2.5-coder was kind of the standard small non-reasoning line of coding models right now
replies(1): >>43755983
2. codingmoh No.43755983
I saw pretty good reasoning quality with phi-4-mini. But alright - I’ll still run some tests with qwen2.5-coder and plan to add support for it next. Would be great to compare them side by side in practical shell tasks. Thanks so much for the pointer!
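For what it's worth, a throwaway harness for that kind of side-by-side test could look like the sketch below (the qwen repo id and the quant filenames are my assumptions, and the tasks are arbitrary):

    # Throwaway comparison harness; not part of open-codex.
    from llama_cpp import Llama

    MODELS = {
        "phi-4-mini": ("lmstudio-community/Phi-4-mini-instruct-GGUF",
                       "*Q4_K_M.gguf"),
        "qwen2.5-coder": ("Qwen/Qwen2.5-Coder-3B-Instruct-GGUF",
                          "*q4_k_m.gguf"),  # repo/filenames are assumptions
    }

    TASKS = [
        "find all .py files larger than 1 MB",
        "show the 10 biggest directories under the current path",
    ]

    SYSTEM = "Reply with one POSIX shell command only, no explanation."

    for name, (repo, pattern) in MODELS.items():
        llm = Llama.from_pretrained(repo_id=repo, filename=pattern,
                                    n_ctx=4096, verbose=False)
        print(f"--- {name} ---")
        for task in TASKS:
            out = llm.create_chat_completion(
                messages=[{"role": "system", "content": SYSTEM},
                          {"role": "user", "content": task}],
                max_tokens=64,
            )
            answer = out["choices"][0]["message"]["content"].strip()
            print(f"{task} -> {answer}")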