
749 points by noddybear | 6 comments

I'm Jack, and I'm excited to share a project that has channeled my Factorio addiction recently: the Factorio Learning Environment (FLE).

FLE is an open-source framework for developing and evaluating LLM agents in Factorio. It provides a controlled environment where AI models can attempt complex automation, resource management, and optimisation tasks in a grounded world with meaningful constraints.

A critical advantage of Factorio as a benchmark is its unbounded nature. Unlike many evals, which newer models quickly saturate, Factorio's geometric complexity scaling means it won't be "solved" in the next 6 months (or possibly even years). This allows us to meaningfully compare models by the order of magnitude of resources they can produce, creating a benchmark with longevity.

The project began 18 months ago, after years of playing Factorio convinced me of its potential as an AI research testbed. A few months ago, our team (myself, Akbir, and Mart) came together to create a benchmark that tests agent capabilities in spatial reasoning and long-term planning.

Two technical innovations drove this project forward: First, we discovered that piping Lua into the Factorio console over TCP enables running (almost) arbitrary code without directly modding the game. Second, we developed a first-class Python API that wraps these Lua programs to provide a clean, type-hinted interface for AI agents to interact with Factorio through familiar programming paradigms.
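To make the first trick concrete, here is a minimal sketch (not FLE's actual client code) of running Lua on a headless Factorio server from Python. It assumes the server exposes the Source RCON protocol over TCP and uses /silent-command so the snippet executes without echoing to chat; the host, port, and password are placeholders, and error handling is omitted:

    import socket
    import struct

    def _packet(req_id, ptype, body):
        # Source RCON framing: <id:int32><type:int32><body>\0\0,
        # all prefixed with the payload length as a little-endian int32.
        payload = struct.pack("<ii", req_id, ptype) + body.encode() + b"\x00\x00"
        return struct.pack("<i", len(payload)) + payload

    def _read_packet(sock):
        def read_exact(n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed")
                buf += chunk
            return buf
        (length,) = struct.unpack("<i", read_exact(4))
        return read_exact(length)

    def run_lua(host, port, password, lua):
        """Send one Lua snippet to the Factorio console; return its output."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall(_packet(1, 3, password))  # type 3 = auth
            _read_packet(sock)                     # auth response (unchecked here)
            sock.sendall(_packet(2, 2, "/silent-command " + lua))  # type 2 = exec
            payload = _read_packet(sock)
            return payload[8:-2].decode(errors="replace")  # drop id/type/nulls

    # e.g. run_lua("localhost", 27015, "password", "rcon.print(game.tick)")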

Agents interact with FLE through a REPL pattern (a sketch of the loop follows):
1. Observe the world (seeing the output of their last action)
2. Generate Python code to perform their next action
3. Receive detailed feedback (including exceptions and stdout)
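In code, the loop might look something like this minimal sketch; the agent and env objects and their method names are hypothetical stand-ins, not FLE's actual interface:

    def run_episode(agent, env, max_steps=64):
        observation = env.reset()  # initial world state, rendered as text
        for _ in range(max_steps):
            # 1. Observe: the agent sees the output of its last action.
            # 2. Act: it generates a Python program against the FLE API.
            program = agent.generate(observation)
            # 3. Feedback: executing the program yields stdout plus any
            #    exception traceback, which become the next observation.
            observation = env.execute(program)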

We provide two main evaluation settings (an illustrative config sketch follows):
- Lab-play: 24 structured tasks with fixed resources
- Open-play: an unbounded task of building the largest possible factory on a procedurally generated map
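Purely for illustration (these names are invented, not the repo's actual entry points), the two settings might be selected with a config along these lines:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvalConfig:
        setting: str         # "lab-play" or "open-play"
        task: Optional[str]  # a lab task name (hypothetical), or None
        model: str           # identifier of the LLM agent under test

    lab = EvalConfig("lab-play", "electronic-circuits", "claude-3-5-sonnet")
    wild = EvalConfig("open-play", None, "claude-3-5-sonnet")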

We found that while LLMs show promising short-horizon skills, they struggle with spatial reasoning in constrained environments. They can discover basic automation strategies (like electric-powered drilling) but fail to achieve more complex automation (like electronic circuit manufacturing). Claude 3.5 Sonnet is currently the best model (by a significant margin).

The code is available at https://github.com/JackHopkins/factorio-learning-environment.

You'll need:
- Factorio (version 1.1.110)
- Docker
- Python 3.10+

The README contains detailed installation instructions and examples of how to run evaluations with different LLM agents.

We would love to hear your thoughts and see what others can do with this framework!

1. alexop:
It's funny how video games are the hardest benchmark that humanity has for AI.
2. quchen:
A video game is a very well-defined problem, and usually comes with simple metrics for success – health, time, or in Factorio's case, ultimately science per minute (or per minute played, for AIs?). Real-world problems are much harder to define: they are embedded in a very complex ecosystem, and it's not clear at all what to optimize for.
3. WJW:
They're not the hardest problems we have; they're just very nice benchmark tools, because by definition they already run on a computer and you can fairly easily interface an AI with them.

There's probably also a distorting factor in that most AI research into stock market and military applications doesn't get published, so video game AI seems like a much larger share of research than it actually is.

4. lucianbr:
It is "hardest" in a context of the AI actually having a chance.

There's no problem with asking an AI for the blueprints of a working faster-than-light spaceship; it's just that we already know the AI will fail, and the way it fails provides no useful information.

5. Hammershaft:
I'd love to see a Baba Is You or Stephen's Sausage Roll LLM environment to gauge spatial reasoning. Stephen's Sausage Roll in particular could be very interesting, because its mechanics are incredibly simple but challenging.
6. throitallaway:
DeepMind went from playing Pong to protein folding in a few short years. There are much harder things for AI to do than play video games. Also see: self-driving cars.