If you ever end up on a video related to drugs, there will be entire chains of bots advertising to each other, and TikTok won't find any violations when they're reported. But sure, I'm sure they care a whole lot about not ending up like Twitter.
TikTok is a huge company; evidence of what the support department does or doesn't do has only minor bearing on the company as a whole, and basically none on the engineering department.
What seems most likely to me is that they care about spam, the engineering department did this one thing, and the support department is either overworked or cares less. Or it's really efficient, which is why you see only "a lot of spam" rather than "literally nothing but spam".
The nominal goal of the code could well be fighting bots even while the POSIWID purpose ("the purpose of a system is what it does") is the exec impressing his superiors and the developers feeling smart and indulging their pet technical interests. Similarly, the nominal goal of the abuse reporting system would include spam, even if a POSIWID analysis showed that its true current purpose is to say they're doing something while keeping costs low.
So again, I don't think you have a lot of understanding of how large companies work. Whereas I, among other things, ran an anti-abuse engineering team at Twitter back in the day, so I'm reasonably familiar with the dynamics.