The compute accelerator story on mainstream, unpatched Linux with upstream software isn't great at the moment. You'll be waiting a while before you can do fun stuff like partitioning a model's layers across the Neural Engine and GPU, something CoreML can do today. Compute through graphics APIs exists, but it isn't really the same: it loses out on many features people practically want and are used to, and dedicated compute stacks move forward much more quickly than graphics APIs, e.g. Nvidia just shipped Heterogeneous Memory Management as stable in its open-source GPU kernel driver on x86. In practice, the Linux accelerator ecosystem is held together by Nvidia's effort, honestly.
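To make that gap concrete: with HMM, a CUDA kernel can dereference a pointer from a plain malloc() and the driver faults the pages in on demand, something graphics-API compute doesn't give you. A minimal sketch, assuming a CUDA 12.2+ toolkit, the open kernel modules with HMM enabled, and a supported GPU (on anything else the kernel would hit an illegal address):

    #include <cstdio>
    #include <cstdlib>

    // Kernel writes directly through a pointer obtained from plain malloc().
    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        // Ordinary system allocation -- no cudaMalloc/cudaMallocManaged.
        float *data = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;

        // With HMM the GPU faults these host pages in as the kernel touches them.
        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);  // expect 2.0 on an HMM-capable system
        free(data);
        return 0;
    }

Getting anything like this out of a Vulkan compute pipeline means explicit buffer allocation, staging copies, and descriptor plumbing, with no way to hand the GPU an arbitrary host pointer. That's roughly the feature gap I mean.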
We really need something like Mesa, but for compute accelerator APIs. I'm really hoping IREE smooths out parts of the software stack and can fill in some of this, but the pieces aren't all in place yet. And you'll need the GPU for a substantial amount of accelerator work regardless of Neural Engine support.
I disagree that there is nothing lacking on these machines with Asahi; I still run into small nits all the time (from 16K page sizes biting back to software missing features). But my M2 Air is 100%, no-questions-asked usable as a daily driver and on-the-go hacking machine. It's fast as hell and quiet, it has nested virtualization, and it's the only modern ARM machine on the market, and I love it for that.