Reputation: 273
I want to provide test and live environments in AWS. My environments use AWS services like Lambda, API Gateway, etc., for both testing and live usage.
What is the best way to separate test and live environments in AWS?
Is it a good idea to create an organization with a master account, use this master account for the live environment, and then create another account within the same organization for the test environment?
Upvotes: 5
Views: 890
Reputation: 389
There are different project and environment separation strategies for AWS that you can use to achieve your goal. Besides the already described "separation by IAM" approach, you can also consider separation by account. This can result in an account hierarchy similar to the figure in this AWS answer.
First, create an AWS Organization with your master account, which will operate as your project account. Then add accounts to your organization, either by inviting existing AWS accounts or by creating new ones with AWS Organizations (in the latter case, be aware of possible pitfalls). Once you have finished creating and organizing accounts, each account gives you a clean, isolated sandbox in which no resource collisions can happen.
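If you prefer the CLI to the console, a minimal sketch of these steps could look like this (the account name and email address are placeholders):
# Create the organization, with the current (master) account as its management account
aws organizations create-organization --feature-set ALL

# Create a separate member account for the test environment
aws organizations create-account --email test-env@example.com --account-name "Test Environment"

# Account creation is asynchronous; check its status
aws organizations list-create-account-status --states IN_PROGRESS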
Upvotes: 1
Reputation: 504
There are many paradigms, each with trade-offs, mostly dealing with your tolerance for administrative overhead versus the degree of isolation.
As you mention, there is the pattern of a master organization account plus a testing account. One line of thinking suggests that it's an unnecessary risk to put your production, customer-facing assets in your master billing account, because a compromise of that account compromises your whole organization. That said, this is still a very common pattern.
You can instead use your master account to house your federation and access to all of your other accounts, and nothing else. This allows you to apply service control policies to your production account to better protect it and limit the scope of a compromise. If you want to dive down the rabbit hole, you can put your identity federation work in its own account, but before you accept this level (or more) of complexity, make sure it meets a business need or mitigates a risk commensurate with the business cost.
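As an illustration (not a prescription), such a service control policy could be created and attached with the AWS Organizations CLI roughly as follows; the policy name, its content, the policy ID, and the account ID are all hypothetical:
# Create an SCP with illustrative guardrails (e.g. nobody may stop CloudTrail logging)
aws organizations create-policy \
  --name ProtectProduction \
  --type SERVICE_CONTROL_POLICY \
  --description "Guardrails for the production account" \
  --content '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":["cloudtrail:StopLogging","cloudtrail:DeleteTrail"],"Resource":"*"}]}'

# Attach it to the production member account (policy ID and account ID are placeholders)
aws organizations attach-policy --policy-id p-examplescpid --target-id 111111111111
Keep in mind that SCPs only set a maximum permission boundary; they don't grant any permissions by themselves.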
You can operate test and live environments within a single account, but this is probably not advisable due to the administrative overhead. The idea is to create a set of IAM policies/roles whose names are prefixed with TEST, and attach policies that only allow actions on other resources whose names begin with TEST (i.e. they can only create and modify resources that begin with TEST). Then repeat the process for LIVE. To separate data, you could use LIVE and TEST KMS keys, each with a key policy that only grants permissions to roles in the same environment. If you use IAM users, you can give them sts:AssumeRole permission on whichever TEST or LIVE roles they need, and they can use the switch-role feature in the console or the sts assume-role command on the CLI or API to operate in either environment (see the CLI sketch after the example policy below). An example policy for interacting with Lambda might look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListActions",
            "Effect": "Allow",
            "Action": [
                "lambda:ListFunctions",
                "lambda:ListEventSourceMappings",
                "lambda:ListLayerVersions",
                "lambda:ListLayers",
                "lambda:GetAccountSettings",
                "lambda:CreateEventSourceMapping"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TestOnly",
            "Effect": "Allow",
            "Action": "lambda:*",
            "Resource": [
                "arn:aws:lambda:*:337676836613:layer:TEST*:*",
                "arn:aws:lambda:*:337676836613:event-source-mapping:TEST*",
                "arn:aws:lambda:*:337676836613:function:TEST*",
                "arn:aws:lambda:*:337676836613:layer:TEST*"
            ]
        }
    ]
}
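To actually operate in one of the environments, an IAM user with sts:AssumeRole permission could switch into the corresponding role from the CLI, for example like this (the TEST-developer role name is illustrative):
# Assume the TEST role; the returned temporary credentials are limited to TEST-prefixed resources
aws sts assume-role \
  --role-arn arn:aws:iam::337676836613:role/TEST-developer \
  --role-session-name test-session

# Alternatively, configure a named profile in ~/.aws/config that assumes the role for you:
# [profile test]
# role_arn = arn:aws:iam::337676836613:role/TEST-developer
# source_profile = default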
A policy like the one above wouldn't prevent either environment from running list* API calls and seeing the presence and/or names of resources from the other environment, but roles from one environment couldn't perform describe* API calls to see information/metadata about resources in the other environment, and couldn't create or modify resources in the other environment. With KMS thoroughly locking down data, you wouldn't even be able to copy data between environments without an intermediary. However, this approach breaks down if you want different account-wide settings/defaults for services (like VPC trunking in ECS), because those are account-wide and would affect all of your environments.
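As a sketch of what the data separation could look like, a TEST KMS key could be created with a key policy that only grants usage to the TEST role; the role name is again illustrative, and the root statement is kept so the key remains administrable:
# Create a KMS key whose key policy only grants usage to the TEST role
aws kms create-key \
  --description "TEST environment data key" \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AccountAdmin",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::337676836613:root"},
        "Action": "kms:*",
        "Resource": "*"
      },
      {
        "Sid": "TestRoleUsage",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::337676836613:role/TEST-developer"},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "*"
      }
    ]
  }'
With a key policy like this, a LIVE role can see that the key exists but cannot decrypt TEST data with it.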
However, this would require some well-crafted policies, and unless you're very familiar with IAM, using AWS Organizations is the easier way. The advantage of the single-account approach is that it is very explicit about the separation between environments. Since AWS API calls can traverse accounts, it is possible for resources in one account to start using resources in others (assuming roles, sharing S3 buckets, sharing KMS CMKs, etc.). A multi-account strategy might overlook this, and as the environments become more connected over time, unwanted leaks between accounts become possible.
Upvotes: 6
Reputation: 41
The AWS Elastic Beanstalk service can be leveraged to achieve this.
Below is a snippet from the documentation: Blue/green deployments require that your environment runs independently of your production database, if your application uses one. For example, if your environment has an Amazon RDS DB instance attached to it, the data will not transfer over to your second environment, and will be lost if you terminate the original environment.
For more details, please refer to the link below: https://docs.aws.amazon.com/elasticbeanstalk
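For example, once the second (green) environment is up, the blue/green swap itself is a single CLI call; the environment names below are placeholders:
# Swap the CNAMEs of the live (blue) and newly deployed (green) environments
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name my-app-live \
  --destination-environment-name my-app-staging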
Hope this helps you get started.
Upvotes: 0