
112 points | favoboa | 1 comment
mg | No.44432001
I wonder how the math turns out when we compare the energy use of local vs remote models from first principles.

A server needs energy to build, house, power, and maintain it. It is optimized for throughput and can be utilized close to 100% of the time. To use the server, additional energy is needed to send packets over the internet.

A local machine needs energy to build and power it. If it lives inside a person's phone or laptop, one could say housing and maintenance are free. It is optimized for a nice form factor for personal use. It is used maybe 10% of the time. No energy for internet packets is needed when using the local machine.

My initial gut feeling is that the server will come out far more energy-efficient in terms of the calculations it can perform over its lifetime relative to the energy it consumes over that lifetime. But I would love to see the actual math.
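
A back-of-the-envelope sketch of that comparison, in Python; every figure in it (embodied energy, power draw, utilization, relative speed, lifetime) is an illustrative assumption, not a measurement:

    # Rough lifetime "energy per unit of compute" comparison, local vs. server.
    # Every number below is an illustrative assumption, not a measurement.

    def joules_per_compute_unit(embodied_kwh, power_w, utilization,
                                lifetime_years, relative_speed):
        """Lifetime energy (J) divided by lifetime useful compute (arbitrary units)."""
        seconds = lifetime_years * 365 * 24 * 3600
        operating_kwh = power_w * utilization * seconds / 3.6e6   # W*s -> kWh
        total_joules = (embodied_kwh + operating_kwh) * 3.6e6     # kWh -> J
        useful_compute = relative_speed * utilization * seconds
        return total_joules / useful_compute

    # Assumed: a GPU server, ~90% utilized, ~50x the speed of a laptop.
    server = joules_per_compute_unit(embodied_kwh=5000, power_w=1500,
                                     utilization=0.9, lifetime_years=5,
                                     relative_speed=50)

    # Assumed: a laptop, ~10% utilized, speed normalized to 1.
    laptop = joules_per_compute_unit(embodied_kwh=800, power_w=40,
                                     utilization=0.1, lifetime_years=5,
                                     relative_speed=1)

    print(f"server: {server:.0f} J per compute unit")   # ~33 J with these inputs
    print(f"laptop: {laptop:.0f} J per compute unit")   # ~223 J with these inputs

Under these made-up numbers the server comes out several times more efficient per unit of compute, mostly because it amortizes its embodied energy over far more work; pick different assumptions and the gap moves.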

replies(1): >>44432132
1. danhor | No.44432132
Since the local machine exists anyway, only its marginal increase in energy usage should be counted, while the server exists solely for this use case (amortized across all its users).

The local machine is usually also highly constrained in computing power, energy (when battery-powered), and thermals, so I would expect the amount of compute spent per query to be very different. A remote user will happily choose a large(r) model, while for the local use case a highly optimized (small) model will be chosen. A sketch of the marginal-energy framing follows below.
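
The per-query figures in this sketch (power draw, batching rate, network cost) are assumptions for illustration only:

    # Marginal-energy view of a single query. The laptop exists anyway, so only
    # the extra power it draws during inference counts; the server's energy is
    # amortized over every query it serves concurrently. All numbers are
    # illustrative assumptions.

    def local_query_joules(extra_power_w=25, seconds=20):
        """Extra energy the device draws while running a small local model."""
        return extra_power_w * seconds

    def remote_query_joules(server_power_w=1500, concurrent_queries=10,
                            seconds=5, network_joules=5, embodied_share_joules=2):
        """One query's amortized share of a busy server, plus network transfer."""
        server_share = server_power_w / concurrent_queries * seconds
        return server_share + network_joules + embodied_share_joules

    print(f"local:  {local_query_joules():.0f} J per query")   # 500 J
    print(f"remote: {remote_query_joules():.0f} J per query")  # 757 J

With these made-up numbers the per-query energies land in the same rough range even though the remote model is much larger; the outcome is very sensitive to how well the server batches requests and how small the local model can be made.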