I kinda want to put together a Ceph storage cluster, but I know it takes quite a bit to get good IOPS out of Ceph: good CPUs, fast enterprise drives (I want NVMe), and solid networking too. Mainly I want to see what I can get in the IOPS department; sequential throughput I'm not too worried about.
How would you guys go about this? Any good hardware choices now that prices of things have come down a good bit in the last year or so?
https://static.xtremeownage.com/blog/2023/proxmox-building-a-ceph-cluster/
With around 10 enterprise NVMes total and 10G networking, I am pretty happy with the results.
It runs all of my VMs, Kubernetes, etc., and doesn't bottleneck.
This is a great article, but it definitely shows that you shouldn't expect much.
He’s not even reaching the IOPS of a single drive in his testing :(
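If you want to run the same kind of comparison yourself, a minimal fio job for 4k random-read IOPS is a reasonable starting point. This is just a sketch: the device path, runtime, and queue depth are placeholders to adjust, and you'd run it once against the raw NVMe and once against an rbd-mapped device to see the gap.

```ini
; 4k random-read IOPS test -- run against raw NVMe, then an rbd-mapped device
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=32
numjobs=4
runtime=60
time_based=1
group_reporting=1

[iops-test]
; WARNING: reads directly from the device; point it at the disk you mean to test
filename=/dev/nvme0n1
```

Watch the `read: IOPS=` line in the output; the delta between the two runs is roughly what the Ceph stack (plus network) costs you.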
I might have to find something else lol
I did put the disclaimer front and center! Ceph really needs a ton of hardware before it starts even comparing to normal storage solutions.
But, the damn reliability is outstanding.
Hmm, I guess the biggest IOPS and latency hit will come from the storage protocol you use. I mean, with 10GbE and iSCSI or NFS, you might not feel the benefits of NVMe, especially in terms of latency. And as far as I know, there is no NVMe-oF support yet.
With any kind of advanced storage software you will never get raw drive speed out of the solution.
Raw NVMes can hit 800k IOPS. Add XFS and you may still be able to get close to that.
With mdadm you get maybe 200-300k.
ZFS shrinks it to 20-50k.
Anything over the network, be glad if you get 5-20k.
The more software is involved, the worse performance gets, especially for random IO. Sequential workloads often scale better, sometimes even linearly, but random IO is a PITA.
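The reason networked storage falls off that cliff is latency, not bandwidth: at a given queue depth, you simply can't complete more ops per second than the per-op latency allows. A quick back-of-envelope sketch (the latency figures are illustrative assumptions, not measurements):

```python
def max_iops(latency_us: int, queue_depth: int = 1) -> int:
    """Upper bound on IOPS for a given per-op latency and queue depth:
    queue_depth operations in flight, each taking latency_us microseconds."""
    return queue_depth * 1_000_000 // latency_us

# Assumed latencies for illustration only:
print(max_iops(100))      # local NVMe + kernel path, ~100 us -> 10000
print(max_iops(500))      # add network RTT + Ceph/OSD work, ~500 us -> 2000
print(max_iops(500, 32))  # a deeper queue claws some back -> 64000
```

This is why benchmarks at low queue depth punish Ceph so hard: every op pays the network and OSD round trip, and no amount of raw NVMe throughput hides that.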