That's kind of the bet they made, but it misses a key point.
Their fundamental idea was that by having simpler CPUs, they could ride Moore's law more quickly. And eventually they would win on performance. Not just in a few speculative edge cases, but overall. The dynamic compilation layer was needed to run existing x86 software on it.
The first iterations, of course, would be slower. So their initial market, the one needed to fund those early software generations, would be low-power use cases, because the complexity of a CISC chip made low power a weak point for Intel.
They ran into a number of problems.
The first was that the team building that dynamic compilation layer was more familiar with the demands of Linux than of Windows, with the result that translation worked better on Linux than on Windows.
The second was that "simple iterates faster" also turned out to be true for ARM chips. And the most profitable segments of that low-power market turned out to be willing to rewrite their software for the new architecture rather than rely on translation.
And the third was that Intel proved able to address its architectural shortcomings by throwing enough engineers at the problem to iterate faster.
If Transmeta had won its bet, they would have completely dominated. But they didn't.
It is worth noting that Apple pursued a somewhat similar idea with Rosetta, both in the change to Intel and later in the change to ARM64. With the crucial difference that Apple also controlled the operating system. Meaning that instead of constantly translating at runtime, they could rely on the operating system to decide what needed to be translated, when, and to invoke the result correctly. They also better understood what to optimize for.
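To make that difference concrete, here is a toy sketch in Python. All names are hypothetical stand-ins, not real Transmeta or Rosetta APIs; the point is only the structural contrast between translating on every run and letting the OS translate once (e.g. at install or first launch) and cache the result keyed by the binary's content.

```python
import hashlib

# Hypothetical sketch, not a real API: contrast per-run dynamic
# translation with an OS-managed, persistent translation cache.

translation_cache = {}  # binary hash -> translated code, kept by the OS

def translate(binary: bytes) -> str:
    # Stand-in for the expensive foreign-ISA-to-native translation step.
    return f"native({binary.decode()})"

def run_transmeta_style(binary: bytes) -> str:
    # The CPU's firmware translates as it goes, on every run;
    # it has no OS-level knowledge of install time or launch events.
    return translate(binary)

def run_rosetta_style(binary: bytes) -> str:
    # The OS knows when a foreign binary appears, translates it once,
    # and reuses the cached native code on every later launch.
    key = hashlib.sha256(binary).hexdigest()
    if key not in translation_cache:
        translation_cache[key] = translate(binary)
    return translation_cache[key]
```

In this toy model, `run_rosetta_style` pays the translation cost once per binary, while `run_transmeta_style` pays it on every execution, which is one way to see why controlling the operating system mattered.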