And what do you have then? 300 tests that exercise whatever behavior the API implementation happens to expose. Are they useful? Probably some are, probably some are not. The ones that are not are just clutter and maintenance overhead. Plus, there will be plenty of use cases where you need to look a little deeper than the API implementation, and those are now not covered. And that kind of test, one that exercises a real business use case, is by far the most useful kind if you want to catch regressions.
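The distinction can be sketched with a toy example (every name here, `apply_discount`, `SAVE10`, is invented purely for illustration):

```python
def apply_discount(total: float, code: str) -> float:
    """Hypothetical business rule: the SAVE10 code gives 10% off."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total

# The kind of test auto-generation tends to produce: it mirrors the
# implementation's surface, passes trivially, and catches almost nothing.
def test_apply_discount_returns_float():
    assert isinstance(apply_discount(100.0, "SAVE10"), float)

# The kind of test argued for above: it encodes the actual business rule,
# so a regression in that rule makes it fail.
def test_save10_gives_ten_percent_off():
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "BOGUS") == 100.0

test_apply_discount_returns_float()
test_save10_gives_ten_percent_off()
```

Both tests bump the coverage number the same way; only the second one tells you anything when the discount logic breaks.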
So if your goal is to display some nice test coverage metrics on SonarQube or whatever and make your CTO happy, then yes, AI will help you enormously. But if your goal is to speed up the development of useful test cases, less so. You will still gain from AI, but nowhere near 90%.