Personally, I do not mind it if it's on-device, especially small specialised models (e.g. overview generation, audio generation, etc) with no internet access.
That was the original intent. They only recently added the "chatbot-y" kind of stuff since the infra was already there. The main uses were their translation tools and PDF alt-text generation (which I believe disabling ML will also disable, since both rely on the on-device transformer tools).