Athi

Reputation: 391

Redshift: many small nodes vs. fewer bigger nodes

Recently I have been facing cluster restarts in AWS Redshift that are triggered from the AWS end, outside the maintenance window and at arbitrary times. AWS support has not been able to identify the exact root cause of these reboots; the error their team captured is "out of object memory".

In the meantime, I am trying to scale up the cluster to get past this out-of-object-memory error (as a blind try). I am currently using the ds2.xlarge node type, but I am not sure which of the options below I should choose:

  1. Many smaller nodes (increase the number of ds2.xlarge nodes)
  2. Few larger nodes (switch to ds2.8xlarge, with fewer nodes but more capacity per node)

Has anyone faced a similar issue in Redshift? Any advice?
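For reference, here is a minimal sketch of how either resize could be issued with boto3, the Python AWS SDK. The cluster identifier, region, and node counts are hypothetical placeholders, not values from my actual setup:

```python
# Sketch: resizing a Redshift cluster with boto3. All identifiers and
# counts below are hypothetical placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Option 1: scale out to more ds2.xlarge nodes
redshift.resize_cluster(
    ClusterIdentifier="my-redshift-cluster",  # hypothetical identifier
    ClusterType="multi-node",
    NodeType="ds2.xlarge",
    NumberOfNodes=8,
    Classic=False,  # elastic resize, where supported
)

# Option 2 (instead of option 1): scale up to fewer, larger nodes
# redshift.resize_cluster(
#     ClusterIdentifier="my-redshift-cluster",
#     ClusterType="multi-node",
#     NodeType="ds2.8xlarge",
#     NumberOfNodes=2,
#     Classic=True,  # node-type changes may require a classic resize
# )

# Check resize progress
status = redshift.describe_resize(ClusterIdentifier="my-redshift-cluster")
print(status["Status"])
```

Elastic resize (Classic=False) is generally faster but has restrictions; switching node types may require a classic resize.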

Upvotes: 2

Views: 980

Answers (1)

Shubham Jain

Reputation: 5526

Given this configuration, for better performance in this case you should opt for the ds2.8xlarge node type.

One ds2.xlarge node has 31 GB of RAM and 2 slices to run your workload, compared with 244 GB of RAM and 16 slices on a ds2.8xlarge node.

Now even if you scale out to eight ds2.xlarge nodes, each individual node still has only 31 GB of RAM (248 GB in aggregate), whereas a single ds2.8xlarge node holds 244 GB. Since an out-of-memory condition exhausts the memory of an individual node, the larger node type gives you far more headroom per node.

So for handling the memory issue you should go with the ds2.8xlarge node type; it also brings a much larger amount of storage per node (16 TB vs. 2 TB on ds2.xlarge).
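To make the comparison concrete, here is a quick sketch that tabulates the per-node and aggregate resources of the two layouts (specs as published for the DS2 family; adjust if AWS revises them):

```python
# Compare aggregate and per-node resources of the two layouts.
# Specs per AWS docs: ds2.xlarge = 31 GiB RAM / 2 slices / 2 TB HDD,
# ds2.8xlarge = 244 GiB RAM / 16 slices / 16 TB HDD.
specs = {
    "ds2.xlarge":  {"ram_gib": 31,  "slices": 2,  "storage_tb": 2},
    "ds2.8xlarge": {"ram_gib": 244, "slices": 16, "storage_tb": 16},
}

for node_type, n in (("ds2.xlarge", 8), ("ds2.8xlarge", 1)):
    s = specs[node_type]
    print(
        f"{n} x {node_type}: "
        f"{n * s['ram_gib']} GiB total RAM ({s['ram_gib']} GiB per node), "
        f"{n * s['slices']} slices, {n * s['storage_tb']} TB storage"
    )

# Output:
# 8 x ds2.xlarge: 248 GiB total RAM (31 GiB per node), 16 slices, 16 TB storage
# 1 x ds2.8xlarge: 244 GiB total RAM (244 GiB per node), 16 slices, 16 TB storage
```

The aggregate numbers end up close; the decisive difference is the 244 GiB available on each ds2.8xlarge node versus 31 GiB on each ds2.xlarge node.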

Upvotes: 2
