docs: document how the network TDP energy usage is calculated #102
Something seems off about that number? Okay, so working backwards: divide the current daily figure by 24 and you get 1.531 kWh, or 1,531 Wh. Divide 1,531 W by 95 and you get an average of 16 W TDP? That seems way too low? Maybe it's only counting PRs, so divide by 55 instead of 95, which would make it around 28 W TDP? That also seems suspiciously low? Am I thinking about this the right way?
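The back-of-envelope check above can be written out directly. The 95 and 55 node counts are the ones guessed in the comment, not verified against the network:

```python
# Back-of-envelope check of the arithmetic above. The node counts are
# assumptions taken from the comment, not confirmed figures.
hourly_wh = 1531   # 1.531 kWh per hour, i.e. the daily figure divided by 24
all_reps = 95      # assumed total number of reps
pr_reps = 55       # assumed number of primary representatives (PRs)

avg_tdp_all = hourly_wh / all_reps  # average TDP if all reps are counted
avg_tdp_prs = hourly_wh / pr_reps   # average TDP if only PRs are counted

print(round(avg_tdp_all, 1))  # → 16.1
print(round(avg_tdp_prs, 1))  # → 27.8
```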
It is only counting PRs, but there was a sneaky bug that was assigning 0 watt-hours to reps with missing reported watt-hours (instead of the average). Fixed here: 2c040aa. I've deployed the fix. Nice catch, sir
Awesome! Thanks for fixing that. Honestly a ~71 W average still seems a little low? But I guess maybe there's a large % of AMD representation or something? What do you think about taking this one step further? I think a reasonable critic would point out that CPU TDP is only a small % of the total energy of a computer. Maybe I scrape something like PCPartPicker and we find what the average % of a computer's total power draw the CPU is, then include that in the calculation? From a quick look, I'm guessing it will be anywhere between 10%–30%, depending on whether we include a GPU? This is an important point: PRs don't really benefit from GPUs, right? We might want to include the GPU as an upper bound, and not include the GPU as a lower bound? Thoughts?
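The bound idea above can be sketched as arithmetic. The 10%–30% CPU share and the ~71 W average are the rough guesses from this thread, not measured values:

```python
# Hypothetical whole-machine estimate from CPU TDP alone, assuming the CPU
# accounts for 10-30% of total draw (the rough range guessed in the thread).
avg_cpu_tdp_w = 71.0  # average reported CPU TDP after the bug fix (assumed)

# Lower bound: no GPU, so the CPU is a larger share of total power.
total_no_gpu_w = avg_cpu_tdp_w / 0.30
# Upper bound: GPU included, so the CPU is a smaller share of total power.
total_with_gpu_w = avg_cpu_tdp_w / 0.10

print(round(total_no_gpu_w, 1))   # → 236.7
print(round(total_with_gpu_w, 1)) # → 710.0
```

This is only a sanity-check sketch; scraping real build data (e.g. from PCPartPicker, as suggested) would replace the assumed 10%–30% range with an empirical one.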
Let me share the data I have so we can better discuss how to communicate it.
Agree with the sentiment. I initially considered this broader approach, but after a little bit of thinking I abandoned it, as I felt it was going to be too loose of an estimate. I decided a more accurate estimate for a smaller subset of the energy usage would be a good, feasible first step. In any case, I think the path to getting an estimate of the whole is continuing to find parts of the whole we can accurately estimate and working our way up to the whole. I think we should gather information about whether a node is hosted on a dedicated machine vs a shared machine, then try to gather as much information as we can about the energy-hungry components of dedicated nodes, similar to how we've done with CPU TDP. Side note: I need to set up a system that allows reps to easily and regularly send us signed messages updating this information so that we don't have to regularly poll them (#105)

csv download — https://pastebin.com/HCcXprCB
Thanks for sharing
Agreed. Was there any option when you surveyed for the responder to indicate whether their node was self-hosted? Just like, a desktop in a person's house? I'm surprised I don't see anything indicating that? Setting up the system like you mentioned in #105 would 100% be the better long-term solution.
I did not gather any information about whether or not a node is self-hosted. We could possibly infer self-hosted status from the network and provider information I automatically gather based on IP address. In any case, it should be a boolean field we track. I do need to clean up some of the fields in the
As it currently stands, the network CPU TDP is calculated by adding up the TDP of the reported CPU for each rep that responded (we have data on 44 reps), with the average TDP substituted where a reported TDP is not available. This sum of watts is then multiplied by 24 to get the total energy used per day in watt-hours, and finally divided by 1,000 to convert watt-hours to kilowatt-hours (kWh), providing a daily energy usage estimate of the network in kWh.
Thus the network TDP calculation is as follows:
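A minimal sketch of the calculation described above (the function and parameter names here are illustrative, not the project's actual schema):

```python
# Sketch of the network TDP calculation: sum reported TDPs, substitute the
# average TDP for reps without a reported value, then convert watts to
# kWh per day. Names are illustrative; they are not the project's code.
def network_daily_kwh(reported_tdps, total_reps):
    avg_tdp = sum(reported_tdps) / len(reported_tdps)
    missing = total_reps - len(reported_tdps)
    total_watts = sum(reported_tdps) + missing * avg_tdp
    # watts * 24 h/day = Wh/day; divide by 1000 for kWh/day
    return total_watts * 24 / 1000

# Toy example: 3 reported TDPs, 5 reps total, so 2 reps get the 95 W average.
print(network_daily_kwh([65, 95, 125], 5))  # → 11.4
```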
h/t not ian