Reputation: 23
I successfully deployed Kubernetes on AWS using the "getting started on AWS EC2" guide (http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html), but the disk size of all the minions (Kubernetes hosts) is 8 GB. I would like to increase the disk size, but I haven't found a way to do it.
I can change the VM size by setting MINION_SIZE (e.g. export MINION_SIZE=m3.medium) prior to installing, but the disk size is still 8 GB.
The Kubernetes install instructions for other cloud providers mention a MINION_DISK_SIZE variable for setting the disk size. I tried that with the AWS EC2 installation, but the variable is ignored.
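For reference, this is roughly the sequence I am running (KUBERNETES_PROVIDER=aws and cluster/kube-up.sh are taken from the getting-started guide; MINION_DISK_SIZE=40 is just an example value to show the attempt, and it appears to have no effect on AWS):
export KUBERNETES_PROVIDER=aws
export MINION_SIZE=m3.medium
export MINION_DISK_SIZE=40   # seems to be ignored by the AWS scripts
cluster/kube-up.sh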
I also poked around the config files, but I didn't see anything obvious.
Any suggestions on how to set the disk size for minions when installing Kubernetes on AWS EC2?
Upvotes: 2
Views: 405
Reputation: 2351
I have faced this very issue and tried the currently accepted answer, but Kubernetes is changing quite fast, which may make that answer outdated soon as well.
As of today, I have tested the solution below, which may or may not become the definitive one in the future:
There is a PR on the Kubernetes GitHub project that implements an easy way to ignore the SSD storage: set KUBE_AWS_STORAGE=ebs before running kubernetes/cluster/kube-up.sh.
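In practice that amounts to something like the following (the variable name comes from that PR; whether your checkout honors it depends on the Kubernetes version you are using):
export KUBE_AWS_STORAGE=ebs
kubernetes/cluster/kube-up.sh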
Hope it is helpful!
Upvotes: 0
Reputation: 78
I recently stumbled upon the same issue. Have a look at BLOCK_DEVICE_MAPPINGS in kubernetes/cluster/aws/util.sh. You can modify it to something more appropriate for an EBS-only minion.
For example:
[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}]
AWS docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
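As a rough sketch of the change (assuming BLOCK_DEVICE_MAPPINGS is assigned as a plain shell variable in your copy of util.sh; the exact location and surrounding code vary between Kubernetes versions):
# kubernetes/cluster/aws/util.sh: request an 80 GB EBS root volume for minions
BLOCK_DEVICE_MAPPINGS='[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}]'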
Upvotes: 2