
749 points noddybear | 3 comments

I'm Jack, and I'm excited to share a project that has channeled my Factorio addiction recently: the Factorio Learning Environment (FLE).

FLE is an open-source framework for developing and evaluating LLM agents in Factorio. It provides a controlled environment where AI models can attempt complex automation, resource management, and optimisation tasks in a grounded world with meaningful constraints.

A critical advantage of Factorio as a benchmark is its unbounded nature. Unlike many evals that are quickly saturated by newer models, Factorio's geometric complexity scaling means it won't be "solved" in the next 6 months (or possibly even years). This allows us to meaningfully compare models by the order-of-magnitude of resources they can produce - creating a benchmark with longevity.

The project began 18 months ago, after years of playing Factorio and recognising its potential as an AI research testbed. A few months ago, our team (myself, Akbir, and Mart) came together to create a benchmark that tests agent capabilities in spatial reasoning and long-term planning.

Two technical innovations drove this project forward: First, we discovered that piping Lua into the Factorio console over TCP enables running (almost) arbitrary code without directly modding the game. Second, we developed a first-class Python API that wraps these Lua programs to provide a clean, type-hinted interface for AI agents to interact with Factorio through familiar programming paradigms.
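
The TCP piece is Factorio's RCON interface. As a rough illustration only (not FLE's actual client code; the host, port and password below are placeholders), a bare-bones Source-RCON exchange that runs one Lua snippet via /silent-command looks something like this:

    # Sketch: raw Source-RCON framing against a running Factorio server.
    # Host, port and password are placeholders; real code should also loop
    # on recv() rather than assuming whole packets arrive at once.
    import socket
    import struct

    def rcon_packet(req_id: int, ptype: int, body: str) -> bytes:
        payload = struct.pack("<ii", req_id, ptype) + body.encode() + b"\x00\x00"
        return struct.pack("<i", len(payload)) + payload

    def read_packet(sock: socket.socket) -> str:
        (length,) = struct.unpack("<i", sock.recv(4))
        data = sock.recv(length)                 # sketch: assumes one recv suffices
        return data[8:-2].decode(errors="replace")

    with socket.create_connection(("localhost", 27015)) as sock:
        sock.sendall(rcon_packet(1, 3, "rcon-password"))     # 3 = SERVERDATA_AUTH
        read_packet(sock)
        lua = "/silent-command rcon.print(game.surfaces[1].count_entities_filtered{type='furnace'})"
        sock.sendall(rcon_packet(2, 2, lua))                 # 2 = SERVERDATA_EXECCOMMAND
        print(read_packet(sock))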

Agents interact with FLE through a REPL pattern (a minimal sketch of the loop is included below):

1. They observe the world (seeing the output of their last action)
2. Generate Python code to perform their next action
3. Receive detailed feedback (including exceptions and stdout)
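
Here is a minimal sketch of that loop, with the LLM call and the game namespace stubbed out (generate_program and namespace are placeholders, not FLE's real interface):

    # Sketch of the observe -> generate -> execute feedback loop.
    import io
    import traceback
    from contextlib import redirect_stdout

    def run_step(program: str, namespace: dict) -> str:
        """Execute agent-written Python, returning stdout or the traceback."""
        buffer = io.StringIO()
        try:
            with redirect_stdout(buffer):
                exec(program, namespace)
        except Exception:
            buffer.write(traceback.format_exc())
        return buffer.getvalue()

    def generate_program(observation: str) -> str:
        # placeholder for an LLM call; it just echoes the observation back
        return f"print('observed: ' + {observation!r})"

    namespace = {}                        # in FLE this would expose the game API
    observation = "You are standing on an empty map."
    for step in range(5):
        program = generate_program(observation)
        observation = run_step(program, namespace)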

We provide two main evaluation settings:

- Lab-play: 24 structured tasks with fixed resources
- Open-play: an unbounded task of building the largest possible factory on a procedurally generated map

We found that while LLMs show promising short-horizon skills, they struggle with spatial reasoning in constrained environments. They can discover basic automation strategies (like electric-powered drilling) but fail to achieve more complex automation (like electronic circuit manufacturing). Claude 3.5 Sonnet is currently the best model (by a significant margin).

The code is available at https://github.com/JackHopkins/factorio-learning-environment.

You'll need:

- Factorio (version 1.1.110)
- Docker
- Python 3.10+

The README contains detailed installation instructions and examples of how to run evaluations with different LLM agents.

We would love to hear your thoughts and see what others can do with this framework!

WJW No.43332084
Very cool and also pretty expected results tbh. Some thoughts:

Factorio is a game that requires SIGNIFICANT amounts of thinking ahead, often requiring investments into things that won't pay off until much later and which might even significantly hamper initial development. Building a main bus vs spaghetti belts is one of the obvious examples here.

Humans with a little bit of experience playing Factorio know that while building 1 item/s of some new resource is good, the game is about eventually building thousands of that item. Until the LLM learns not to be short-term minded, it will probably build itself into a corner very quickly.

It is kind of amazing that these models manage to figure out a strategy at all, considering the game is not in their training set. That said, the current research goals are not very good IMO. Building the largest possible base has the predictable result of the AI building a humongous belt loop covering much of the map. A much better target would be the "standard" goal of SPM (science per minute).

I think 99% of Factorio could be "solved" with GOFAI algorithms from the 80s and enough processing power. Set up a goal like 10k SPM, work backwards to how many of each resource you need, then recursively figure out the fastest way to set up production for each subresource using standard optimization algorithms from OR. No LLMs needed.
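
A toy version of that backward pass is easy to sketch. The recipe numbers below are illustrative rather than exact Factorio values; a real planner would read them from the game's recipe prototypes and also account for crafting speeds and modules:

    # Expand a target production rate into raw-resource rates by recursing
    # through a recipe table (illustrative numbers, not real game data).
    RECIPES = {
        "electronic-circuit": {"iron-plate": 1, "copper-cable": 3},
        "copper-cable": {"copper-plate": 0.5},
        "iron-plate": {"iron-ore": 1},
        "copper-plate": {"copper-ore": 1},
    }

    def raw_requirements(item: str, rate: float, out: dict | None = None) -> dict:
        """Recursively expand a target items/sec into raw resource rates."""
        out = {} if out is None else out
        if item not in RECIPES:                 # raw resource: stop recursing
            out[item] = out.get(item, 0.0) + rate
            return out
        for ingredient, per_unit in RECIPES[item].items():
            raw_requirements(ingredient, rate * per_unit, out)
        return out

    print(raw_requirements("electronic-circuit", 10.0))
    # -> {'iron-ore': 10.0, 'copper-ore': 15.0}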

replies(9): >>43332165 #>>43332202 #>>43332340 #>>43332409 #>>43332816 #>>43333224 #>>43333259 #>>43333347 #>>43333353 #
1. noddybear No.43332202
I definitely agree that planning is essential to perform well in Factorio - my hope is that we can create agents in FLE that can better front-load the planning part, as well as create utility functions for future use - such that as the agent progresses, it can do more and more in each program / step. For example, it could create a function called 'resolve_resource_dependencies', which would enable it to backfill missing resources in order to proceed.
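
A hypothetical sketch of what such a self-written utility might look like (the craft/mine_nearest helpers and the recipes/inventory structures are stand-ins, not FLE's real API, and a real version would update the inventory as it goes):

    # Backfill missing ingredients recursively, then craft the target item.
    def resolve_resource_dependencies(target: str, count: int, recipes: dict,
                                      inventory: dict, craft, mine_nearest):
        for ingredient, per_unit in recipes.get(target, {}).items():
            shortfall = per_unit * count - inventory.get(ingredient, 0)
            if shortfall <= 0:
                continue
            if ingredient in recipes:           # intermediate item: recurse
                resolve_resource_dependencies(ingredient, shortfall, recipes,
                                               inventory, craft, mine_nearest)
            else:                               # raw resource: go and mine it
                mine_nearest(ingredient, shortfall)
        craft(target, count)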

LLMs tend to build themselves into corners here quite often. Basically, if they break the topology (e.g. enclose their factory in pipes), they struggle to reason over it and correct it. My basic view on this is that there exists some set of functions/data-structures that they can design in FLE which will give them a better view over their factory to enable scaling (if the models take a step back to consider it).
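
As a toy illustration of that kind of helper, an agent could rasterise entity positions into a coarse occupancy grid before deciding where to route belts or pipes (the entity dict format here is invented for the example):

    # Render entity positions as an ASCII occupancy grid so enclosed regions
    # are easy to spot before placing belts or pipes.
    def occupancy_grid(entities: list[dict], width: int, height: int) -> str:
        grid = [["."] * width for _ in range(height)]
        for e in entities:                      # e.g. {"name": "pipe", "x": 2, "y": 1}
            x, y = int(e["x"]), int(e["y"])
            if 0 <= x < width and 0 <= y < height:
                grid[y][x] = e["name"][0]       # first letter as a crude glyph
        return "\n".join("".join(row) for row in grid)

    print(occupancy_grid([{"name": "pipe", "x": 2, "y": 1},
                          {"name": "boiler", "x": 3, "y": 1}], 6, 3))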

We currently do track SPM, but decided against making that our main metric, as it zeroes out in the early stages. We use 'production score' instead, which is a more generalised metric that just captures total production (multiplied by an item-price).
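
The metric itself is simple to compute; roughly in this spirit (with made-up prices purely for illustration):

    # Production-score style metric: total items produced, weighted by a
    # per-item price. The prices here are invented for the example.
    PRICES = {"iron-plate": 1.0, "copper-plate": 1.0, "electronic-circuit": 5.0}

    def production_score(production_counts: dict) -> float:
        return sum(PRICES.get(item, 1.0) * count
                   for item, count in production_counts.items())

    print(production_score({"iron-plate": 400, "electronic-circuit": 20}))   # 500.0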

There was a cool paper a few years ago using meta-heuristics to do this (https://arxiv.org/abs/2102.04871), but I reckon the combinatorial complexity makes it challenging to scale beyond trivial factories.

It's worth noting that agents in FLE can write their own libraries etc., so a dominant strategy could be for an LLM agent to implement a solver in Python to do the heavy lifting. This is quite far from current capabilities, though.

replies(1): >>43332326 #
2. WJW No.43332326
An agent writing its own library to interface with a good solver like Z3 (or even writing some basic planning algorithms itself) seems like the epitome of a "costly long-term investment that does nothing for the short term". The only thing I can see overcoming such problems is deep search trees, but AFAIK that is not how LLMs work at all.
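
For what it's worth, the solver call itself is small once the modelling is done; a tiny example of the kind of thing such a library could wrap (the throughput numbers are illustrative, not real Factorio rates):

    # Pick machine counts that hit a target circuit rate while minimising
    # total machines, using Z3's Optimize interface.
    from z3 import Ints, Optimize, sat

    circuit_asm, cable_asm = Ints("circuit_asm cable_asm")
    opt = Optimize()
    opt.add(circuit_asm >= 0, cable_asm >= 0)
    opt.add(cable_asm * 2 >= circuit_asm * 3)   # each circuit machine needs 3 cables/s
    opt.add(circuit_asm * 1 >= 10)              # target: 10 circuits/s
    opt.minimize(circuit_asm + cable_asm)
    if opt.check() == sat:
        print(opt.model())                      # e.g. circuit_asm = 10, cable_asm = 15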
replies(1): >>43332757 #
3. noddybear No.43332757
I experimented a bit with using deep search trees (MCTS) to find better Factorio trajectories, which worked somewhat well. Unfortunately, it's very computationally expensive, and probably only makes sense in a training context (i.e. gathering trajectories to train a model in a supervised setting).
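
For the curious, a heavily compressed skeleton of that kind of search looks roughly like the following; sample_programs, apply_program and score are placeholders for the expensive model calls and game rollouts, which is exactly why this only really pays off when gathering training data:

    # Minimal MCTS skeleton over agent programs: each node is a game state,
    # each edge a candidate program sampled from the model.
    import math
    import random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.value = [], 0, 0.0

    def ucb(node, c=1.4):
        if node.visits == 0:
            return float("inf")
        return (node.value / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    def mcts(root_state, sample_programs, apply_program, score, iters=100):
        root = Node(root_state)
        for _ in range(iters):
            node = root
            while node.children:                          # selection
                node = max(node.children, key=ucb)
            for prog in sample_programs(node.state):      # expansion
                node.children.append(Node(apply_program(node.state, prog), node))
            leaf = random.choice(node.children) if node.children else node
            reward = score(leaf.state)                    # "rollout" = heuristic score
            while leaf:                                   # backpropagation
                leaf.visits += 1
                leaf.value += reward
                leaf = leaf.parent
        return max(root.children, key=lambda n: n.visits)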