I used to fill paragraphs all the time; it turns out it's really better to leave them as they are, because you can never find a `fill-column` value that satisfies every case: the default works for some text, for other text you'd want it wider, and so on.
I have a little helper function built on `gptel-request` that I use while reading Latin texts. It sets the system prompt so the LLM acts as a Latin-to-English translator, or, with a prefix argument, breaks down the grammatical structure and vocabulary of a sentence for me. It's very cool.
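A minimal sketch of what such a helper might look like, assuming gptel is installed and configured; the function name, buffer name, and prompt wording are all made up for illustration, not the commenter's actual code:

```elisp
(require 'gptel)

(defun my/latin-assist (beg end &optional parse)
  "Translate the Latin text in region BEG..END to English.
With prefix argument PARSE, break down grammar and vocabulary instead."
  (interactive "r\nP")
  (gptel-request
      (buffer-substring-no-properties beg end)
    :system (if parse
                "You are a Latin tutor. Break down the grammatical \
structure and vocabulary of the sentence you are given."
              "You are a Latin to English translator. Translate the \
text you are given.")
    :callback (lambda (response _info)
                ;; RESPONSE is a string on success, nil/other on failure.
                (if (stringp response)
                    (with-current-buffer (get-buffer-create "*latin*")
                      (erase-buffer)
                      (insert response)
                      (display-buffer (current-buffer)))
                  (message "gptel request failed")))))
```

The nice part of `gptel-request` is that the system prompt is per-call, so a single prefix argument can flip the helper between two personas without touching any global configuration.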
From the title I thought this was an implementation of Gwern's idea, but it's not.
Just being able to tell an LLM "rewrite all of this code using this new pattern" and then dozens of code sites are correctly updated is a huge help. It makes me consider bigger refactoring or minor features that I might normally skip because I am lazy.
So the story never ends?
(As I think about it, an LLM generation should be thought of as a many-author situation, since each generation comes in cold.)
Stories need pacing, which exists over many passages, not just at the choice level. And then the passages should all be based on a single underlying world. Both of these fall apart quickly without a guiding author.
I think this is resolvable with LLMs and appropriate prompting, but the naive approach only seems cool until you actually play out a few stories.
You'll also need to keep track of context, so you can place the current passage in the lore. I doubt it will be self-evident. Some of this can probably be shown to the user as well as "metadata" of sorts (like a location name).
You could also pre-build the lore entirely, and then as the LLM expands out choices instruct it to stay within that lore. This is probably easier, but it'll seem less exciting to see it build out those passages. You'll want to include major choices in the lore itself so it can build out some of those conditional parts of the story.
The rhythm is much harder. Pre-building the entire lore will probably make this much easier. I'd probably throw a few queries at Deep Research to get ideas and terms for story structure that I'd want to apply. There are two major issues I see in incrementally created, LLM-generated stories: never getting to the good parts, and accelerating too fast.
The first is probably worse and more common: the LLM will build up anticipation, but it doesn't know the conclusion and will keep putting off any conclusion. It reminds me of Lost (the TV show)... you can tell the writers didn't have a plan, and only made things worse as they introduced more distractions that couldn't be made into a cohesive reveal. If you already have an outline, the story should actually move forward. Another option is to have the LLM produce conclusions at the same time as it produces choices, and give it criteria for what a good conclusion is. These don't have to be full passages; they can be notes you pass along as the next passage is created.
Going too fast can happen too. If you just ask ChatGPT to write a CYOA story, it'll probably stop abruptly when it sees the opportunity to just finish up. I don't have a clear idea here, but I'd try splitting the story into acts and giving those acts clear purposes (as story structure). Then I'd aim for a certain length for each act, but let the LLM decide the exact moment one act moves to the next.
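One way the act-based pacing idea could be sketched, again with gptel; the act list, function names, and prompt wording are all hypothetical, just to show lore and act state flowing into each generation:

```elisp
(require 'gptel)

(defvar my/cyoa-acts
  '((setup      . "Introduce the protagonist and the central tension.")
    (escalation . "Raise the stakes; complications compound.")
    (climax     . "Force the decisive confrontation.")
    (resolution . "Pay off earlier setups and conclude."))
  "Acts and their narrative purposes, decided before generation begins.")

(defun my/cyoa-next-passage (lore act passages-so-far callback)
  "Generate the next passage, staying inside LORE and the current ACT.
PASSAGES-SO-FAR is how many passages this act already contains; the
prompt lets the LLM decide the exact moment to transition acts.
CALLBACK receives the response as in `gptel-request'."
  (gptel-request
      (format "We are %d passages into the current act. Write the next \
passage and 2-3 choices. Only if this act's purpose is fulfilled, end \
the passage on a transition to the next act."
              passages-so-far)
    :system (format "You are writing a choose-your-own-adventure story. \
Stay strictly within this lore:\n%s\nCurrent act and its purpose: %s"
                    lore
                    (cdr (assq act my/cyoa-acts)))
    :callback callback))
```

Keeping the lore and act purpose in the system prompt, and only the local state (passage count, transition criteria) in the user prompt, is one way to stop each cold generation from drifting out of the pre-built world.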
Don't get me wrong; I use Emacs all the time, I just can't seem to make it work for me when working with teams of people on large-ish software projects.
But maybe org mode is worth a revisit as a "managing my ADHD" tool; it's got to be better than Jira, haha.
For a big payoff you can combine Emacs, GPTel, mcp.el, and the Jira/Confluence MCP server[0] so LLMs can manage your tickets for you.
Also, Inform6 allows you far more interactivity than a CYOA game. Both are state-based systems, but a text adventure also allows timers, random events, even chat simulations...
I thought this might be related based on the title, but it's more about refactoring code.
Maybe you just don't have sufficient exposure to Elisp. Emacs Lisp is one of the best blackmagicfockery automation tools; you can do tons of interesting things with it. Just the other day I got sucked into a big, long-running Jira epic where I needed to find every single PR in my work GitHub orgs related to specific Jira tickets. So I wrote this - https://github.com/agzam/github-topics
Once you learn some Org-mode, you will see how awesome it is to be able to turn just about any kind of data into an outline format - I read HN and Reddit in Org-mode format, keep my notes, my Anki cards, my code experiments, my LLM musings - all in Org-mode.