I'd suggest that the balance between unit tests and integration tests is a trade-off that depends on the architecture/shape of the system under test.
Example: I agree with your assertion that I can get "90%+ coverage" of units at an integration-test layer. However, the shape of the underlying system dictates whether I'd guide my teams to follow that pattern. In my current stack, the number of faulty service boundaries means that, while an integration test will provide good coverage, the overhead of debugging the root cause of an integration failure creates a significant burden. So I recommend more unit testing, since the failing behaviors can be identified directly.
And, if I were working at a company with better underlying architecture and service boundaries, I'd be pointing them toward a higher rate of integration testing.
So, re: Kent Dodds' "we write tests for confidence and understanding": which layer we write those tests at for confidence and understanding really depends on the underlying architecture.
Unit tests often cover the same line multiple times in meaningfully different ways, since it's much easier to exhaust the corner-case inputs of a single unit in isolation than through an integration test.
Think about a line that does a regex match. You can get 100% line coverage on that line with a single happy path test, or 100% branch coverage with two tests. You probably want to test a regex with a few more cases than that. It can be straightforward from a unit test, but near impossible from an integration test.
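A minimal sketch of that point (the regex and the `is_valid_zip` helper are made up for illustration):

```python
import re

# Hypothetical validator: one line, one regex, but several behaviors hiding inside it.
US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")

def is_valid_zip(value: str) -> bool:
    return US_ZIP.match(value) is not None

# A single happy-path call gives 100% line coverage of is_valid_zip,
# and one more call covers both branches of the optional group.
assert is_valid_zip("12345")
assert is_valid_zip("12345-6789")

# The cases that actually catch regex bugs are the corner inputs,
# which are trivial to enumerate from a unit test:
assert not is_valid_zip("1234")        # too short
assert not is_valid_zip("123456")      # too long
assert not is_valid_zip("12345-678")   # malformed extension
assert not is_valid_zip("abcde")       # non-digits
assert not is_valid_zip("12345 ")      # trailing whitespace
```

Driving those same inputs through a full request path to hit that one line is the "near impossible" part.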
Also, integration tests inherently exercise a lot of code but only assert on a few high-level results, which further inflates coverage compared to unit tests.
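A contrived sketch of that effect, with made-up function names: the single integration-style assertion executes every line of the pipeline, so a coverage report counts them all as hit, while the unit-style assertions each pin down one piece.

```python
# Hypothetical pipeline: three small units composed by ingest().
def normalize(records):
    return [r.strip().lower() for r in records]

def dedupe(records):
    return list(dict.fromkeys(records))

def summarize(records):
    return {"count": len(records), "first": records[0] if records else None}

def ingest(records):
    return summarize(dedupe(normalize(records)))

# Integration-style test: many lines executed, one high-level assertion.
assert ingest([" A ", "a", "B"]) == {"count": 2, "first": "a"}

# Unit-style tests: fewer lines per test, but each assertion pins down the
# behavior of one unit, so a failure points straight at the broken piece.
assert normalize([" A "]) == ["a"]
assert dedupe(["a", "a", "b"]) == ["a", "b"]
assert summarize([]) == {"count": 0, "first": None}
```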
Unfortunately, integration testing is painful and hardly done here, because they keep inventing new bad frameworks for it, sticking more reasonable approaches behind red tape, or raising the bar for unit test coverage instead. If there were director-level visibility into integration test coverage, things would be very different.