I mean, everyone is still using variational autoencoders for the latents in their flow models instead of the information bottleneck. It's because it's cheaper (in founder time) to raise 10x (or 100x) more money than to design your own algorithms and architectures around a novel idea that might work in theory but could turn out to be a dead end six months down the line. Just look at LiquidAI. Brilliant idea, but it took them ~5 years to do all the research and another year to get their first models to market... and those don't yet seem to be any better than models with similar compute requirements. I find it pretty plausible that none of the "big" LLM companies ever seriously tried SSMs, because they already have more than enough money to throw at transformers, or because they took the quickest path to a big valuation.
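
For anyone unfamiliar with the distinction I'm gesturing at: the two objectives look superficially similar (both have a beta tradeoff knob), but they constrain the latent in different ways. Roughly, the beta-VAE loss versus the classic IB Lagrangian (Tishby's form), in my own loose notation:

    L_VAE = E_{q(z|x)}[log p(x|z)] - beta * KL(q(z|x) || p(z))
    L_IB  = I(Z;Y) - beta * I(Z;X)

The VAE pins the latent to reconstruction plus a fixed prior, which is easy to train at scale; IB ties the latent to a downstream target Y and compresses away everything else about X, which is exactly the kind of bet that takes real research time to validate. That's the asymmetry I mean.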