Reputation: 151
I've deployed a Ceph cluster inside a Kubernetes cluster and tried to git clone a repository inside a pod, using volume mounts of both the CephFS and CephRBD types.
However, it takes a huge amount of time to write all the files to the volume.
The git repository is roughly 4 GB in size.
Is this normal behavior?
Specs:
4 Kubernetes nodes: 1 master + 3 workers, running 3 OSDs, 3 MONs, 1 metadata server, and 1 manager daemon.
The 3 nodes that Ceph uses for storage each have a second SSD drive of 100 GB.
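Since a git checkout writes thousands of small files, every file creation pays a network round trip to the cluster (and, with CephFS, to the metadata server), so per-file latency matters more than raw bandwidth here. One rough way to tell the two apart inside the pod could be to compare sequential throughput against small-file creation (the mount path /mnt/data below is just a placeholder):

```
# Sequential write throughput on the Ceph-backed mount
dd if=/dev/zero of=/mnt/data/bigfile bs=4M count=256 oflag=direct

# Small-file creation rate -- roughly what a git checkout does
time sh -c 'for i in $(seq 1 1000); do echo x > /mnt/data/small_$i; done'
```

If the first number looks healthy but the second is slow, the bottleneck is metadata/latency rather than bandwidth.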
Upvotes: 0
Views: 425
Reputation: 6113
We are operating a small Ceph cluster (4 nodes, 2 OSDs per node) as well. The nodes are used exclusively by Ceph, are connected with 10 Gbit Ethernet, and have Samsung server-grade SSDs (I would advise caution with Samsung SSDs because of this incompatibility). The server-grade SSDs in particular got us more throughput. Every component that reduces latency buys you better throughput and a better response to a high rate of small-file creation.
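If you want to put numbers on latency and throughput, Ceph ships a built-in benchmark. A minimal sketch, assuming a throwaway pool named bench that you create and delete just for the test (pool deletion also requires mon_allow_pool_delete=true):

```
ceph osd pool create bench 32
rados bench -p bench 10 write --no-cleanup   # 10 s write test; reports throughput and average latency
rados bench -p bench 10 seq                  # sequential reads of the objects written above
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```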
We started out with three nodes and two consumer SSDs per node, one OSD per SSD. That period was very burdensome: with 30 VMs using Ceph as backing storage, we had situations where Ceph was not able to keep up with the I/O.
The more Ceph nodes you have, the better; adding the fourth node made quite a difference for us. Keep the Ceph nodes exclusive to Ceph, have enough RAM, and don't let the OSDs swap (a quick check is sketched below). Use recommended hardware.
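As a quick sanity check on the RAM and swap points (a sketch; ceph config get exists on Mimic and later, and osd_memory_target defaults to 4 GiB in recent releases):

```
ceph config get osd osd_memory_target   # memory each OSD will try to use
free -h                                 # RAM and swap usage on each Ceph node
vmstat 1 5                              # si/so columns should stay at 0 (no swapping)
```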
I strongly recommend the book Mastering Ceph, 2nd Edition; it is full of valuable information.
Upvotes: 1