I'm not sure how much you know about fonts, but I have built fonts entirely from scratch, and I know enough to say this won't work the way you imagine. Please forgive me if you are indeed knowledgeable about fonts...
Let's assume that "HELLO" is remapped under your scheme. You would have a base font used to dynamically generate mangled fonts, and it surely has at least four glyphs, which I'll refer to as gH, gE, gL and gO (let's ignore advanced features and ligatures for now). Your scheme would instead map, say, the decimal digit 1 to gH, 2 to gE, 3 to gL and 4 to gO, so that the HTML contains "12334" instead of "HELLO". Now consider which attacks are possible.
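To make the scheme concrete, here is a minimal sketch of the encoding side. All the names (`make_mapping`, `encode`, the surrogate digits) are hypothetical; a real deployment would additionally rebuild the webfont's cmap so each surrogate codepoint renders the corresponding glyph.

```python
import random

# Hypothetical sketch of the remapping scheme: each real letter gets a
# surrogate codepoint, and the served webfont's cmap is rebuilt so the
# surrogate renders the real letter's glyph (gH, gE, gL, gO).
def make_mapping(letters, surrogates, seed=42):
    rng = random.Random(seed)  # per-page/per-session seed in practice
    shuffled = list(surrogates)
    rng.shuffle(shuffled)
    # letter -> surrogate codepoint that appears in the HTML
    return dict(zip(letters, shuffled))

def encode(text, mapping):
    return "".join(mapping[ch] for ch in text)

mapping = make_mapping("HELO", "1234")
html_text = encode("HELLO", mapping)  # a 5-char string like "12334"
```

Note that the repeated L still produces a repeated surrogate, which is exactly what makes this a simple substitution cipher.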
The most obvious attack, as you have considered, is to ignore the HTML and only deal with the rendered page. This is indeed costly compared to other attacks, but not prohibitively expensive either, because the base font should have been neutral enough in the first place. Neutral, regular typefaces are the ideal inputs for OCR, and this has already been exploited massively in fax documents (search keyword: JBIG2). So I don't think this ultimately poses a blocker for crawlers, even though it will indeed be very annoying.
But if the attacker knows the webfonts are generated dynamically, they can look at the font itself and derive the mapping directly. As I've mentioned, the glyphs therein would be very regular and easy to recognize, because single-glyph OCR (search keyword: MNIST) is much simpler than full-text OCR, where you first have to detect letter-like areas. The attacker renders each glyph to a small virtual canvas, runs OCR on it, and builds a mapping to undo your substitution cipher.
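The attack above can be sketched in a few lines. `ocr_single_glyph` below stands in for a real classifier (rasterize the glyph with FreeType, feed the bitmap to an MNIST-style model); here it is a stub lookup purely for illustration, and the toy `cmap` dict plays the role of the cmap table read out of the webfont.

```python
# Stub for a single-glyph classifier; a real attacker would render the
# glyph outline to a bitmap and run a trained model on it.
def ocr_single_glyph(glyph_name):
    return {"gH": "H", "gE": "E", "gL": "L", "gO": "O"}[glyph_name]

def recover_plaintext(html_text, cmap):
    # cmap: surrogate codepoint -> glyph name, extracted from the font
    inverse = {cp: ocr_single_glyph(g) for cp, g in cmap.items()}
    return "".join(inverse[ch] for ch in html_text)

cmap = {"1": "gH", "2": "gE", "3": "gL", "4": "gO"}
recover_plaintext("12334", cmap)  # -> "HELLO"
```

The key point: the cost is one OCR call per glyph, not per character of text, so the whole page decodes for the price of four classifications here.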
Since the cost of this attack is proportional to the number of glyphs, the next countermeasure would be adding more glyphs to turn it into a polyalphabetic cipher: both 3 and 5 map to gL, and the HTML contains "12354" instead. But this doesn't scale well, especially because OpenType caps a font at 65,535 glyphs. Furthermore, you have to make each glyph unique so the attacker is forced to run OCR on every one (say, 3 maps to gL and 5 maps to gL', which is only slightly different from gL); otherwise the attacker can simply cache previously seen glyphs. So the generated font would have to be much larger than the original base font! I have seen several such fonts in the wild, and almost all of them are for CJKV scripts; those fonts are harder to deploy as webfonts for exactly the same reason. Even Hangul, with only ~12,000 syllables, poses a headache for deployment.
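A sketch of the polyalphabetic variant, with a hypothetical homoglyph table. Note that decoding still collapses trivially once each glyph has been classified once, which is why every surrogate glyph must be a unique perturbed copy to defeat caching.

```python
import random

# Illustrative homoglyph sets: several surrogate codepoints per letter.
# In the real font, "3" and "5" would point at gL and a slightly
# perturbed gL', both of which still OCR as "L".
HOMOGLYPHS = {"H": ["1"], "E": ["2"], "L": ["3", "5"], "O": ["4"]}

def encode(text, rng):
    # Pick a random surrogate for each letter (polyalphabetic step).
    return "".join(rng.choice(HOMOGLYPHS[ch]) for ch in text)

def decode(cipher):
    # The attacker's inverse table collapses all surrogates per letter.
    inverse = {s: ch for ch, ss in HOMOGLYPHS.items() for s in ss}
    return "".join(inverse[c] for c in cipher)

cipher = encode("HELLO", random.Random(0))  # e.g. "12354" or "12334"
```

Doubling the alphabet here doubled the glyph count but bought essentially nothing against an attacker who OCRs the font once, which is the scaling problem in miniature.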
This attack also applies to ligatures, by the way, because OpenType ligatures are just composite glyphs plus substitution rules. So you hit the same 65,535-glyph limit [1], and it is trivial to segment two or more letters out of a composite glyph. The only countermeasure would therefore be to describe and mangle each glyph independently, and that would take even more bytes to deploy.
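The ligature case is even easier for the attacker than OCR, because the substitution rules themselves leak the segmentation. The toy dict below plays the role of the GSUB ligature table; in practice an attacker would read the real rules out of the font with something like fontTools (`font["GSUB"]`).

```python
# Toy GSUB-style ligature table: a sequence of component glyphs maps
# to one composite ligature glyph.
LIGATURES = {("gL", "gL"): "gLL"}

def undo_ligatures(glyphs):
    # Invert the ligature rules: each composite glyph expands back
    # into its component sequence, so no OCR of the composite needed.
    inverse = {lig: seq for seq, lig in LIGATURES.items()}
    out = []
    for g in glyphs:
        out.extend(inverse.get(g, (g,)))
    return out

undo_ligatures(["gH", "gE", "gLL", "gO"])
# -> ["gH", "gE", "gL", "gL", "gO"]
```

Since the font must ship these rules for the ligature to render at all, the mapping is handed to the attacker for free.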
[1] This is the main reason Hangul needlessly suffers here too. Hangul syllables can be generated by a very simple algorithm, so fewer than 1,000 glyphs are enough to make a functional Hangul font; but OpenType requires one additional glyph for each composite glyph, so every Hangul font needs many more glyphs even though all the composite glyphs are algorithmically simple.
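For reference, the "very simple algorithm" is the one Unicode itself uses: a syllable is composed from lead/vowel/tail indices as S = 0xAC00 + (L x 21 + V) x 28 + T, which is why a small set of component glyphs suffices to draw all 19 x 21 x 28 = 11,172 syllables.

```python
# Unicode Hangul syllable composition (Unicode Standard, ch. 3.12).
S_BASE, V_COUNT, T_COUNT = 0xAC00, 21, 28

def decompose(syllable):
    i = ord(syllable) - S_BASE
    return (i // (V_COUNT * T_COUNT),          # lead consonant index
            (i % (V_COUNT * T_COUNT)) // T_COUNT,  # vowel index
            i % T_COUNT)                       # tail index (0 = none)

def compose(lead, vowel, tail):
    return chr(S_BASE + (lead * V_COUNT + vowel) * T_COUNT + tail)

decompose("한")  # -> (18, 0, 4): lead ㅎ, vowel ㅏ, tail ㄴ
```

A font engine that could run this arithmetic would only need the component glyphs; OpenType instead forces one precomposed glyph per syllable.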