yspreen

Reputation: 1981

ReadWriteMany volumes on Kubernetes with terabytes of data

We want to deploy a k8s cluster which will run ~100 IO-heavy pods at the same time. They should all be able to access the same volume.
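To make the requirement concrete, this is roughly the shape of what we want (a minimal sketch; all names, the image, and the storage class are placeholders):

```yaml
# Sketch of the goal: one ReadWriteMany PVC shared by ~100 pod replicas.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                 # placeholder name
spec:
  accessModes:
    - ReadWriteMany                 # the hard requirement: many pods, one volume
  resources:
    requests:
      storage: 2Ti
  storageClassName: some-rwx-class  # placeholder; plain GCE PDs don't offer RWX
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: io-heavy-workers            # placeholder name
spec:
  replicas: 100
  selector:
    matchLabels:
      app: io-heavy-workers
  template:
    metadata:
      labels:
        app: io-heavy-workers
    spec:
      containers:
        - name: worker
          image: busybox            # stand-in for the real IO-heavy workload
          command: ["sh", "-c", "ls /data && sleep 3600"]
          volumeMounts:
            - name: shared
              mountPath: /data
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data
```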

What we tried so far:

There has to be some way to get 2 TB of data mounted in a GKE cluster with relatively high availability, right?

Filestore seems to work, but it's an order of magnitude more expensive than other solutions, and with a lot of IO operations it quickly becomes cost-prohibitive.
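For reference, Filestore is consumed as a plain NFS export, so the wiring looks roughly like this (a sketch; the server IP and share path come from the Filestore instance and are placeholders here):

```yaml
# Sketch: a Cloud Filestore instance mounted as a pre-provisioned NFS PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 2Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2    # placeholder: the Filestore instance's IP
    path: /vol1         # placeholder: the Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # bind to the pre-provisioned PV above
  volumeName: filestore-pv
  resources:
    requests:
      storage: 2Ti
```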


I contemplated posting this question on Server Fault instead, but the k8s community there is a lot smaller than Stack Overflow's.

Upvotes: 3

Views: 1042

Answers (1)

yspreen

Reputation: 1981

I think I have a definitive answer as of Jan 2020, at least for our use case:

| Solution        | Complexity | Performance | Cost           |
|-----------------|------------|-------------|----------------|
| NFS             | Low        | Low         | Low            |
| Cloud Filestore | Low        | Mediocre?   | Per Read/Write |
| CephFS          | High*      | High        | Low            |

* One extra step is needed on GKE: change the node base image to Ubuntu (see the sketch below).
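For completeness, consuming CephFS from pods can go through the in-tree cephfs volume plugin, roughly like this (a sketch; the monitor addresses, client user, and secret name all depend on your Ceph cluster):

```yaml
# Sketch: a pre-provisioned PV backed by CephFS (in-tree plugin).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 2Ti
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.0.0.10:6789   # placeholder Ceph monitor addresses
      - 10.0.0.11:6789
    user: admin          # placeholder Ceph client user
    secretRef:
      name: ceph-secret  # Secret holding the Ceph client key
    readOnly: false
```

Mounting CephFS needs the ceph kernel module on the nodes, which is presumably why the base-image change is required on GKE: it's available on the Ubuntu image but not on COS.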

I haven't benchmarked Filestore myself, so I'll just go with stringy05's response: others seem to have trouble getting really good throughput from it.

Ceph could be a lot easier if it were supported by Helm.

Upvotes: 1
