204 points | JPLeRouzic | 1 comment

I polished a Markov chain generator and trained it on an article by Uri Alon et al. (https://pmc.ncbi.nlm.nih.gov/articles/PMC7963340/).

It generates text that seems to me at least on par with tiny LLMs, such as those demonstrated by NanoGPT. Here is an example:

  jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM10b_train UriAlon.txt 3
  Training model with order 3...
  Skip-gram detection: DISABLED (order < 5)
  Pruning is disabled
  Calculating model size for JSON export...
  Will export 29832 model entries
  Exporting vocabulary (1727 entries)...
  Vocabulary export complete.
  Exporting model entries...
    Processed 12000 contexts, written 28765 entries (96.4%)...
  JSON export complete: 29832 entries written to model.json
  Model trained and saved to model.json
  Vocabulary size: 1727
  jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM9_gen model.json
Aging cell model requires comprehensive incidence data. To obtain such a large medical database of the joints are risk factors. Therefore, the theory might be extended to describe the evolution of atherosclerosis and metabolic syndrome. For example, late‐stage type 2 diabetes is associated with collapse of beta‐cell function. This collapse has two parameters: the fraction of the senescent cells are predicted to affect disease threshold . For each individual, one simulates senescent‐cell abundance using the SR model has an approximately exponential incidence curve with a decline at old ages In this section, we simulated a wide range of age‐related incidence curves. The next sections provide examples of classes of diseases, which show improvement upon senolytic treatment tends to qualitatively support such a prediction. model different disease thresholds as values of the disease occurs when a physiological parameter ϕ increases due to the disease. Increasing susceptibility parameter s, which varies about 3‐fold between BMI below 25 (male) and 54 (female) are at least mildly age‐related and 25 (male) and 28 (female) are strongly age‐related, as defined above. Of these, we find that 66 are well described by the model as a wide range of feedback mechanisms that can provide homeostasis to a half‐life of days in young mice, but their removal rate slows down in old mice to a given type of cancer have strong risk factors should increase the removal rates of the joint that bears the most common biological process of aging that governs the onset of pathology in the records of at least 104 people, totaling 877 disease category codes (See SI section 9), increasing the range of 6–8% per year. The two‐parameter model describes well the strongly age‐related ICD9 codes: 90% of the codes show R 2 > 0.9) (Figure 4c). This agreement is similar to that of the previously proposed IMII model for cancer, major fibrotic diseases, and hundreds of other age‐related disease states obtained from 10−4 to lower cancer incidence. A better fit is achieved when allowing to exceed its threshold mechanism for classes of disease, providing putative etiologies for diseases with unknown origin, such as bone marrow and skin. Thus, the sudden collapse of the alveoli at the outer parts of the immune removal capacity of cancer. For example, NK cells remove senescent cells also to other forms of age‐related damage and decline contribute (De Bourcy et al., 2017). There may be described as a first‐passage‐time problem, asking when mutated, impair particle removal by the bronchi and increase damage to alveolar cells (Yang et al., 2019; Xu et al., 2018), and immune therapy that causes T cells to target senescent cells (Amor et al., 2020). Since these treatments are predicted to have an exponential incidence curve that slows at very old ages. Interestingly, the main effects are opposite to the case of cancer growth rate to removal rate We next consider the case of frontline tissues discussed above.
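(The actual SLM10b_train / SLM9_gen implementation isn't shown in this post; as a rough illustration of the idea only, here is a minimal order-3 word-level Markov chain generator in Python. The dict-of-counters layout is an assumption for the sketch, and the JSON export step is omitted.)

  # Minimal sketch of an order-3 word-level Markov chain generator.
  import random
  from collections import Counter, defaultdict

  def train(text, order=3):
      words = text.split()
      model = defaultdict(Counter)                # context tuple -> next-word counts
      for i in range(len(words) - order):
          context = tuple(words[i:i + order])
          model[context][words[i + order]] += 1
      return model

  def generate(model, order=3, length=120, seed=None):
      rng = random.Random(seed)
      out = list(rng.choice(list(model.keys())))  # random starting context
      for _ in range(length):
          counts = model.get(tuple(out[-order:]))
          if not counts:                          # dead end: context never seen
              break
          words, weights = zip(*counts.items())
          out.append(rng.choices(words, weights=weights)[0])
      return " ".join(out)

  model = train(open("UriAlon.txt").read(), order=3)
  print(generate(model, order=3))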
1. ssivark
The Markov property means that the distribution of the next token depends only on the current token. In a Hidden Markov Model, it is the next hidden state that depends only on the current hidden state, and the emitted tokens are a lossy representation of the underlying states.

The problem with HMMs is that the sequence model (the Markov transition matrix) accounts for much less context than even tiny LLMs. One natural way to improve this is to give the model more hidden states that represent more context -- called "clones" because these different hidden states all produce the same token while carrying different underlying contexts that might be relevant for future tokens. We are thus taking a non-Markov model (like a transformer) and re-framing its representation to be Markov. Sequence models built on this idea exist, known as Cloned HMMs (CHMMs) [1] or Clone-Structured Cognitive Graphs (CSCGs) [2]. The latter name comes from related work in neuroscience, where these graphical models were shown to map nicely onto "cognitive schemas" and to be particularly effective at discovering interpretable models of spatial structure.
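A rough sketch of what the clone-structured state space looks like (the layout below is my own illustration, not the reference implementation from [1]/[2]): each token owns a block of hidden "clone" states that all emit that token deterministically, and only the transition matrix between hidden states is learned.

  # Illustrative clone-structured state space (my own layout, not from [1]/[2]).
  import numpy as np

  vocab = ["the", "cat", "sat", "on", "mat"]
  n_clones = 3                              # hidden "clones" per token
  n_states = len(vocab) * n_clones

  # Emissions are deterministic: hidden state s always emits token s // n_clones,
  # so only the state-to-state transition matrix T has to be learned (e.g. by EM).
  T = np.random.rand(n_states, n_states)
  T /= T.sum(axis=1, keepdims=True)         # row-stochastic

  def clones_of(token):
      """All hidden states that emit this token."""
      i = vocab.index(token)
      return np.arange(i * n_clones, (i + 1) * n_clones)

  # The same surface word can be routed through different clones depending on
  # context, which is what lets a first-order model carry longer-range state.
  print(clones_of("cat"))                   # -> [3 4 5]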

I did some unpublished work a couple of years ago (while at Google DeepMind) studying how CHMMs scale to simple ~GB-sized language datasets like TinyStories [3]. As a subjective opinion: while they're not as good as small transformers, they generate text that is surprisingly good compared with naive expectations of Markov models. The challenge is that the learning algorithms we typically use for HMMs (e.g., Expectation Maximization) are somewhat hard to optimize and scale on contemporary AI hardware (GPU/TPU), whereas a transformer trained by gradient descent with lots of compute works pretty well and also scales to larger datasets and model sizes.
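To make the scaling point a bit more concrete: the E-step of EM hinges on forward (and backward) recursions that are sequential in time, unlike a transformer's training step. A self-contained toy forward pass for a generic HMM (all names and sizes below are illustrative, not from any of the cited work):

  # Toy forward pass for a generic HMM (illustrative names and sizes).
  import numpy as np

  def forward_loglik(T, E, pi, obs):
      """T: (S,S) transitions, E: (S,V) emissions, pi: (S,) prior, obs: token ids."""
      alpha = pi * E[:, obs[0]]
      loglik = np.log(alpha.sum())
      alpha /= alpha.sum()
      for o in obs[1:]:
          # Each step needs the previous alpha, so time is a sequential
          # recursion rather than one big parallelizable matmul.
          alpha = (alpha @ T) * E[:, o]
          z = alpha.sum()
          loglik += np.log(z)
          alpha /= z
      return loglik

  S, V = 8, 5
  rng = np.random.default_rng(0)
  T = rng.random((S, S)); T /= T.sum(1, keepdims=True)
  E = rng.random((S, V)); E /= E.sum(1, keepdims=True)
  pi = np.ones(S) / S
  print(forward_loglik(T, E, pi, [0, 3, 1, 4, 2]))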

I later switched to working on other things, but I still sometimes wonder whether it might be possible to cook up better learning algorithms that attack the problem of disambiguating contexts during the learning phase. The advantage of an explicit/structured graphical model like a CHMM is that it is very interpretable and allows for extremely flexible queries at inference time -- unlike transformers (or other sequence models), which are trained as "policies" for generating token streams.
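One example of the kind of flexible query I mean: with an explicit transition/emission model you can condition on both sides of a sequence, e.g., ask for the posterior over a missing token in the middle via a forward pass over the prefix and a backward pass over the suffix. A toy sketch, again with made-up parameters of the same shape as above:

  # Toy "fill in the middle" query on an HMM via forward + backward messages.
  import numpy as np

  def missing_token_posterior(T, E, pi, obs, t):
      """Posterior over the token at position t given all the others
      (obs[t] is ignored); assumes 0 < t < len(obs) - 1."""
      alpha = pi * E[:, obs[0]]                 # forward over the prefix
      alpha /= alpha.sum()
      for o in obs[1:t]:
          alpha = (alpha @ T) * E[:, o]
          alpha /= alpha.sum()
      beta = np.ones(len(pi))                   # backward over the suffix
      for o in reversed(obs[t + 1:]):
          beta = T @ (E[:, o] * beta)
      state_post = (alpha @ T) * beta           # hidden-state weights at position t
      token_post = state_post @ E               # marginalize states -> tokens
      return token_post / token_post.sum()

  S, V = 8, 5                                   # same toy setup as the sketch above
  rng = np.random.default_rng(0)
  T = rng.random((S, S)); T /= T.sum(1, keepdims=True)
  E = rng.random((S, V)); E /= E.sum(1, keepdims=True)
  pi = np.ones(S) / S
  print(missing_token_posterior(T, E, pi, [0, 3, 1, 4, 2], t=2))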

When I say that transformers don't allow flexible querying, I'm glossing over in-context learning capabilities, since we still lack a clear/complete understanding of them and of what kinds of pre-training and fine-tuning are needed to elicit them (these are frontier research questions at the moment, and they call for a more nuanced discussion than a quick HN comment).

Funnily enough, these properties of CHMMs proved very useful [4] for understanding the conceptual underpinnings of in-context learning behavior using simple Markov sequence models instead of "high-powered" transformers. Some recent work from OpenAI [5] on sparse, interpretable transformer models seems to suggest that in-context learning in transformer LLMs might work analogously, by learning schema circuits. So the fact that we can learn similar schema circuits with CHMMs makes me believe that what we have is a learning challenge, not a fundamental representational incapacity (as is sometimes loosely claimed). In the spirit of full disclosure, I worked on [4]; if you want a rapid summary of all the ideas in this post, including a quick introduction to CHMMs, I would recommend the following video presentation / slides [6].

[1]: https://arxiv.org/abs/1905.00507

[2]: https://www.nature.com/articles/s41467-021-22559-5

[3]: https://arxiv.org/abs/2305.07759

[4]: https://arxiv.org/abs/2307.01201

[5]: https://openai.com/index/understanding-neural-networks-throu...

[6]: https://slideslive.com/39010747/schemalearning-and-rebinding...