There’s another difference, which is what I was referring to: in both cases your proxy has to forge the SSL certificate for the remote server, but in the transparent case it also has to intercept network traffic intended for the remote IP. That means clients can’t tell whether an error or performance problem is caused by the interception layer or by the remote server (sometimes Docker Hub really is down…), and it can take more work for the proxy administrator to locate the relevant logs.
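To make the client’s vantage point concrete, here’s a minimal sketch (Python, using registry-1.docker.io as the Docker Hub registry endpoint) of what a client sees under transparent interception. Note that nothing in the code names a proxy, so a handshake failure looks exactly like a broken remote server; the only visible tell when interception is working is the certificate issuer:

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("registry-1.docker.io", 443), timeout=10) as sock:
    try:
        # TLS handshake happens here; a transparent proxy answers in the
        # server's place with a forged certificate.
        tls = ctx.wrap_socket(sock, server_hostname="registry-1.docker.io")
    except ssl.SSLCertVerificationError as exc:
        # From this vantage point, indistinguishable from a server problem.
        print(f"handshake failed: {exc}")
    else:
        # Under interception, the issuer is the corporate CA, not a public one.
        print("issuer:", dict(part[0] for part in tls.getpeercert()["issuer"]))
        tls.close()
```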
If you explicitly configure a proxy, the CONNECT method can trigger the same SSL forgery, but because it’s explicit the client’s view is more obvious and explainable. If my browser gets an error connecting to proxy.megacorp.com, I don’t spend time confirming that the remote service is working, and if the outbound request fails I’ll get a 5xx error that says so rather than having to guess at which node dropped the connection or why. This also gives you another place to implement client authentication, which can be useful if you have user-based access control policies.
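For contrast, a sketch of the explicit case (Python again; proxy.megacorp.com:3128 is the hypothetical proxy from above, not a real endpoint). Each failure mode surfaces at a distinct point, so there’s no guessing about which hop failed:

```python
import http.client

# Explicit proxy: the client knowingly tunnels through proxy.megacorp.com.
proxy = http.client.HTTPConnection("proxy.megacorp.com", 3128, timeout=10)
proxy.set_tunnel("registry-1.docker.io", 443)

try:
    # Opens TCP to the proxy, then sends "CONNECT registry-1.docker.io:443".
    proxy.connect()
except (ConnectionError, TimeoutError) as exc:
    # The fault is between me and the proxy; the remote server isn't involved.
    print(f"can't reach the proxy itself: {exc}")
except OSError as exc:
    # The proxy is up but rejected the tunnel (http.client raises this when
    # CONNECT gets a non-2xx response, e.g. a 5xx for a dead upstream).
    print(f"proxy refused the tunnel: {exc}")
else:
    print("tunnel established")
    proxy.close()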
It’s not a revelation, but I think this is one of those areas where doing things the easy way ends up being harder once you factor in the frictional support costs: transparent proxying trades a faster rollout for years of troubleshooting.