Implement persistent volumes in AWS #235
There are two approaches that I believe can be taken:

With GKE, the cost of disks was returned alongside the cost of compute resources (https://github.com/grafana/cloudcost-exporter/blob/main/pkg/google/compute/pricing_map.go#L218-L246), so I decided to export the cost of PVs alongside the cost of instances. What's hard for me to know without digging into the API responses is whether there is a similar level of coupling for AWS. My hunch is there isn't, since we use the following filter for the listing of prices. Personal recommendation: check whether the pricing map from EKS (and the listing of prices) can easily be extended to pull in disk costs. If not, I'd recommend going down the route of creating a module dedicated to disks. Even though PVs are somewhat tightly coupled to k8s, we're ultimately billed for disks, so I think it would be cleaner to have a dedicated module for them.
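For illustration, here is a minimal sketch of what a standalone disk pricing query could look like, assuming the aws-sdk-go-v2 Pricing client; the productFamily/volumeApiName/regionCode filter values are placeholders, not the exporter's actual filter:

```go
// Sketch only: an illustration of querying the AWS Pricing API for EBS storage
// prices independently of the EC2 instance pricing map.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/pricing"
	"github.com/aws/aws-sdk-go-v2/service/pricing/types"
)

func main() {
	// The Pricing API is only served from a few regions, us-east-1 among them.
	cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}
	client := pricing.NewFromConfig(cfg)

	out, err := client.GetProducts(context.Background(), &pricing.GetProductsInput{
		ServiceCode: aws.String("AmazonEC2"),
		Filters: []types.Filter{
			// "Storage" covers EBS volumes; instance pricing lives under a different
			// product family, which is why disk costs may not come back coupled
			// with compute prices the way they did for GKE.
			{Type: types.FilterTypeTermMatch, Field: aws.String("productFamily"), Value: aws.String("Storage")},
			{Type: types.FilterTypeTermMatch, Field: aws.String("volumeApiName"), Value: aws.String("gp3")},
			{Type: types.FilterTypeTermMatch, Field: aws.String("regionCode"), Value: aws.String("us-east-2")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, priceJSON := range out.PriceList {
		fmt.Println(priceJSON) // each entry is a JSON document containing the per GB-month rate
	}
}
```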
Most of the functionality is implemented, with tweaks to naming and whatnot. The main remaining work is adding tests, specifically for the Collect method. @Pokom will take a look to see if we can split out the Collect method in such a way that the core logic is encapsulated in a testable method.
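A rough sketch of the kind of split discussed above; the Volume type, field names, and pricing math are hypothetical, not the collector's real shapes:

```go
// Sketch only: pulling the pricing arithmetic out of Collect so it can be unit
// tested without a Prometheus registry or channel.
package ebs

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical types for illustration.
type Volume struct {
	Name   string
	SizeGB float64
}

type Collector struct {
	volumes          []Volume
	dollarsPerGBHour float64
}

var persistentVolumeHourlyCostDesc = prometheus.NewDesc(
	"cloudcost_aws_eks_persistent_volume_dollars_per_hour",
	"Hourly cost of a persistent volume",
	[]string{"persistent_volume"},
	nil,
)

// volumeHourlyCost holds the pure pricing logic that tests can call directly.
func volumeHourlyCost(v Volume, dollarsPerGBHour float64) float64 {
	return v.SizeGB * dollarsPerGBHour
}

// Describe implements prometheus.Collector.
func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
	ch <- persistentVolumeHourlyCostDesc
}

// Collect stays a thin wrapper that only converts computed values into metrics.
func (c *Collector) Collect(ch chan<- prometheus.Metric) {
	for _, v := range c.volumes {
		ch <- prometheus.MustNewConstMetric(
			persistentVolumeHourlyCostDesc,
			prometheus.GaugeValue,
			volumeHourlyCost(v, c.dollarsPerGBHour),
			v.Name,
		)
	}
}
```

With the arithmetic isolated like this, a table-driven test can exercise volumeHourlyCost directly and Collect only needs a smoke test.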
Data is out in prod, and the next step is validating it and then creating the TCO rules.
Currently working on offloading processing of the volumes to a background goroutine for performance reasons. @paulajulve will close this out and follow up with another issue that details the performance problems and tracks the work there.
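A minimal sketch of that offloading pattern, assuming a hypothetical fetchVolumes function and illustrative names and intervals:

```go
// Sketch only: refresh the volume list in a background goroutine so Collect
// reads from memory instead of calling AWS on every scrape.
package ebs

import (
	"context"
	"sync"
	"time"
)

type Volume struct {
	Name   string
	SizeGB float64
}

type volumeCache struct {
	mu      sync.RWMutex
	volumes []Volume
}

// start launches a goroutine that periodically refreshes the cached volume list.
func (c *volumeCache) start(ctx context.Context, interval time.Duration, fetchVolumes func(context.Context) ([]Volume, error)) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if vols, err := fetchVolumes(ctx); err == nil {
				c.mu.Lock()
				c.volumes = vols
				c.mu.Unlock()
			}
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
			}
		}
	}()
}

// snapshot returns a copy of the cached volumes for use in Collect.
func (c *volumeCache) snapshot() []Volume {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return append([]Volume(nil), c.volumes...)
}
```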
Next steps:
The primary goal is to export a metric that calculates the hourly cost of each persistent volume within our EKS clusters. The exported metric should align with our existing metrics (cloudcost_aws_eks_persistent_volume_dollars_per_hour). There are really two parts:
There should be the following labels:
@paulajulve had pointed out that the cluster label may not be possible to derive from the API response. If that's the case, then we need to consider how we can join against existing kube-state-metrics series to derive the cluster name.
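For reference, a sketch of what the metric descriptor could look like; only the metric name comes from this issue, and the label set shown (persistent_volume, region, cluster_name) is an assumption:

```go
// Sketch only: an assumed label schema for the new metric.
package ebs

import "github.com/prometheus/client_golang/prometheus"

var persistentVolumeHourlyCostDesc = prometheus.NewDesc(
	"cloudcost_aws_eks_persistent_volume_dollars_per_hour",
	"Hourly cost of a persistent volume in an EKS cluster",
	// If cluster_name cannot be derived from the AWS API response, it may need to be
	// dropped here and joined in at query time against kube-state-metrics
	// (e.g. kube_persistentvolume_info) instead.
	[]string{"persistent_volume", "region", "cluster_name"},
	nil,
)
```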