Be careful what you wish for.
I've just spent a week writing neural-net code in C++, so I have direct insight into what a C++ implementation might look like.
Much as I dislike Python, and having to deal with endless runtime errors when your various inputs and outputs are mismatched, that inconvenience pales in comparison with wading through the three pages of error messages a C++ compiler generates when a single templated Eigen matrix has the wrong dimension. The "required from here" note you're actually interested in is typically the third or fourth one, buried in a massive stack of cascading errors, each of which wraps to about four lines on screen. You know what I mean. Sometimes you don't get a "required from here" at all, which is horrifying. And it's infuriating to locate and parse the template arguments of the classes involved.
Debugging Python runtime errors is kind of horrible, and rubs me the wrong way on principle. But it is sweetness and light compared to debugging C++ compile-time error messages, which are unimaginably horrible.
The project: converting a C++ Neural Amp Modeller LV2 plugin to use fixed-size matrices (Eigen::Matrix<float,N,M>) instead of dynamic matrices (Eigen::MatrixXf) to see whether doing so would improve performance. (It does. Significantly.) So it was a substantial and realistic experiment in doing ML work in C++. Not directly comparable to working in PyTorch, but directly analogous in that it involves hooking up high-level ML constructs, like Conv1D, LayerT, and WaveNet ML chunks.