If you ever end up on a video related to drugs, there will be entire chains of bots just advertising to each other, and TikTok will find no violations when they're reported. But sure, I'm sure they care a whole lot about not ending up like Twitter.
The nominal goal of the code could well be fighting bots, while at the same time the POSIWID purpose is about the exec impressing his superiors and the developers feeling smart and indulging their pet technical interests. Similarly, the nominal goal of the abuse reporting system would include catching spam, even if a POSIWID analysis would show that its true current purpose is to let the company say it's doing something while keeping costs low.
So again, I don't think you have much understanding of how large companies work. I, among other things, ran an anti-abuse engineering team at Twitter back in the day, so I'm reasonably familiar with the dynamics.