Software engineers understandably don't like that, so Unix time handles it instead by "going backwards" and repeating the final second. That way every minute is 60 seconds long, every day is 86400 seconds long, and you're only at risk of a crazy consistency bug about once every year and a half. But most databases do it differently, using smearing, and the smearing windows vary from one system to the next (24 hours versus 20, for instance). A few rarer systems instead "stop the clock" during a leap second.
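To make the smearing arithmetic concrete, here's a minimal sketch of a linear smear. The window length, its placement before the leap second, and the leap date are illustrative choices, not a description of any particular production smear (real deployments differ in exactly these details):

```python
from datetime import datetime, timedelta, timezone

# Illustrative values: the 2016-12-31 leap second, absorbed linearly over the
# 24 hours leading up to it. Real smears vary in window length and placement.
LEAP = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
WINDOW = timedelta(hours=24)

def smear_seconds(t: datetime) -> float:
    """How much of the extra second a smeared clock has absorbed at time t,
    i.e. how far it lags a clock that ignores the leap second entirely."""
    if t <= LEAP - WINDOW:
        return 0.0
    if t >= LEAP:
        return 1.0
    return (t - (LEAP - WINDOW)) / WINDOW  # linear ramp from 0 to 1 second

# Halfway through the window the smeared clock is half a second behind.
print(smear_seconds(LEAP - timedelta(hours=12)))  # 0.5
```

The point is that during the window every reported second is slightly longer than an SI second, so two systems with different windows can disagree by up to a second even though both claim to be serving "UTC".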
That's 4 different ways to handle a leap second, but much documentation uses terms like "UTC" and "Unix time" interchangeably to describe all 4, which causes confusion. For example, "mandating UTC for the server side" almost never happens in the strict sense; what you're actually mandating is probably Unix time, or smeared UTC.
If you care about sub-second differences, you likely run your own time infrastructure (as Google does for Spanner), and your systems are already so complex that the time server is just a trivial blip.
If you are communicating across org boundaries, I've never seen a sub-second difference in absolute time matter.
It makes a lot of sense until you realize what we're doing. We're just turning UTC into a shittier version of TAI. After 2035, the two will forevermore differ by a constant offset, but UTC will keep its historical discontinuities. Why not just switch to TAI, which already exists, instead of destroying UTC to make a more-or-less redundant version of TAI?
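For concreteness: TAI has been exactly 37 seconds ahead of UTC since the start of 2017, and converting between the two scales is just a table lookup. If leap seconds stop after 2035, the table simply stops growing. A minimal sketch, with the table truncated to the last few entries (a real converter would carry the full history from the IERS bulletins):

```python
from datetime import datetime, timedelta, timezone

# Last few published TAI - UTC offsets (seconds), effective from each date.
TAI_MINUS_UTC = [
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),  # current value
]

def utc_to_tai(t: datetime) -> datetime:
    """Shift a UTC timestamp onto the TAI scale using the table above."""
    offsets = [off for start, off in TAI_MINUS_UTC if t >= start]
    if not offsets:
        raise ValueError("timestamp predates this truncated table")
    return t + timedelta(seconds=offsets[-1])

print(utc_to_tai(datetime(2024, 6, 1, tzinfo=timezone.utc)))
# 2024-06-01 00:00:37+00:00
```

Past timestamps still need the full table either way, which is exactly the complaint: freezing UTC doesn't erase the historical discontinuities, it just leaves us with a second scale that behaves like TAI from 2035 onward.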