ceph-csi very slow on vm #9754
The issue you posted doesn't have any relevant details, including the performance numbers, the way you set things up, etc. Ceph is a complicated subject, and setting it up properly is not trivial.
We have a Proxmox cluster with 5 nodes and a Ceph cluster on the Proxmox hosts. The Ceph cluster has a 100Gb NIC. Testing with kubestr fio against a local-path StorageClass:

```
FIO version - fio-3.36
JobName: read_iops
JobName: write_iops
JobName: read_bw
JobName: write_bw
Disk stats (read/write):
```

And with the Ceph block StorageClass (rbd.csi.ceph.com):

```
FIO version - fio-3.36
JobName: read_iops
JobName: write_iops
JobName: read_bw
JobName: write_bw
Disk stats (read/write):
```

The Talos machine has two NICs, one of which is used only to communicate with the Ceph monitors. It works, but I think it is too slow.
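For reference, results like these would typically come from an invocation along the following lines (a sketch; the StorageClass names `local-path` and `ceph-block` are assumptions, not taken from this issue):

```
# Benchmark the local-path StorageClass (name is an assumption)
kubestr fio -s local-path -z 20Gi

# Benchmark the Ceph RBD StorageClass provisioned by rbd.csi.ceph.com
kubestr fio -s ceph-block -z 20Gi
```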
Then you need to dig further to understand why: what is the bottleneck? Ceph block storage is certainly expected to be slower, as it goes over the network, does replication, etc. You can watch resource utilization to understand where the bottleneck is. We are not aware of anything missing on the Talos side, and we use Ceph a lot ourselves with Talos.
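As a starting point, here is a hedged sketch of narrowing down the bottleneck from the Ceph side using standard Ceph CLI commands (the pool `rbd` and image `bench-img` are placeholders):

```
# Overall cluster health and current client I/O rates
ceph -s

# Per-OSD commit/apply latency, to spot individual slow disks
ceph osd perf

# Raw RBD throughput measured directly from a Ceph client,
# bypassing Kubernetes and ceph-csi entirely
rbd create rbd/bench-img --size 10G
rbd bench --io-type write rbd/bench-img --io-size 4M --io-total 2G
```

If the raw `rbd bench` numbers are close to the kubestr results, the limit is in Ceph or the network rather than in Talos or ceph-csi.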
Proxmox with Ceph, and Talos running as a VM with ceph-csi, is much slower than openebs-hostpath. Are there any kernel modules missing?
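One quick way to rule out a missing module is to check that `rbd` is loaded on the Talos node while a ceph-csi volume is mapped (a sketch; the node IP `10.0.0.2` is a placeholder):

```
# List loaded kernel modules on the Talos node (10.0.0.2 is a placeholder IP)
talosctl -n 10.0.0.2 read /proc/modules | grep rbd

# Check kernel messages for rbd mapping errors
talosctl -n 10.0.0.2 dmesg | grep -i rbd
```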
Environment