If you ever end up on a video that's related to drugs, there will be entire chains of bots just advertising to each other and TikTok won't find any violations when reported. But sure, I'm sure they care a whole lot about not ending up like Twitter.
They won't be betting that this stops it entirely, but it adds a layer of friction that is easy for them to change on a continuous basis. These systems are also very good places to leave honeypots: if someone is found to still be using something after a change, you can tag them as a bot or as otherwise tampering. Both of those approaches are also widely used in game anti-cheat mechanisms, and as anti-cheat shows, the lengths people will go to anyway are completely insane.
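A minimal sketch of that honeypot-tagging idea, with entirely hypothetical names ("x-legacy-sig", "client_build"): after a client update retires some artifact, any request that still carries it while claiming to be a new client is quietly flagged rather than blocked, so the bot operator learns nothing from the response.

```typescript
// Sketch only: assumes a hypothetical request shape, not any real TikTok API.

interface IncomingRequest {
  headers: Record<string, string>;
  body: Record<string, unknown>;
}

// Hypothetical stand-ins for whatever the genuine client stopped sending
// after the change.
const RETIRED_HEADER = "x-legacy-sig";
const MINIMUM_GENUINE_BUILD = 20240601;

function classifyRequest(req: IncomingRequest): "ok" | "flag-as-bot" {
  // Genuine clients past the cutoff build never send the retired header,
  // so its presence means replayed traffic or an un-updated reimplementation.
  const claimsNewBuild =
    Number(req.body["client_build"]) >= MINIMUM_GENUINE_BUILD;
  const sendsRetiredArtifact = RETIRED_HEADER in req.headers;

  if (claimsNewBuild && sendsRetiredArtifact) {
    // Tag silently instead of rejecting, so the operator gets no feedback.
    return "flag-as-bot";
  }
  return "ok";
}

// Example: a scripted client that bumped its build number but kept the old signer.
console.log(
  classifyRequest({
    headers: { "x-legacy-sig": "abc123" },
    body: { client_build: 20240715 },
  })
); // -> "flag-as-bot"
```

The point isn't the specific check; it's that each obfuscation change doubles as a tripwire, which is exactly how anti-cheat teams catch stale cheat clients.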
TikTok is a huge company; evidence of what the support department does or doesn't do has only minor bearing on the company as a whole, and basically none on the engineering department.
The thing that seems most likely to me is that they care about spam, the engineering department did this one thing, and the support department is either overworked or cares less. Or it's actually effective, which is why you only see "a lot of spam", not "literally nothing but spam".
The nominal goal of the code could well be bots at the same time the POSIWID ("the purpose of a system is what it does") purpose is about the exec impressing his superiors and the developers feeling smart and indulging their pet technical interests. Similarly, the nominal goal of the abuse reporting system would include spam, even if a POSIWID analysis would show that its true current purpose is to say they're doing something while keeping costs low.
So again, I don't think you have a lot of understanding of how large companies work. Whereas I, among other things, ran an anti-abuse engineering team at Twitter back in the day, so I'm reasonably familiar with the dynamics.
I'm guessing a lot of them use reCAPTCHA, and according to this comment, reCAPTCHA uses an obfuscated VM: