Reputation: 11
I am trying to set up the following configuration in AWS:
This was working in my local account, but when I tried it in my corporate one it returned the error from the title. I've granted the services the following permissions:
The only difference between my local and corporate accounts is the EC2 instance's network access: in my local account the instance has a public IPv4 address, while in my corporate one it does not, because it sits in a private subnet.
Could this networking difference be the cause? I thought SSM could communicate without any problem with EC2 instances located in a private subnet. Any insight into what the problem could be would be appreciated; thanks in advance.
I have changed the IAM roles, but nothing changed. It still says "Instance not in a valid state".
Upvotes: 0
Views: 2462
Reputation: 2907
I have managed to set up an EC2 instance and successfully connect via SSM with the following terraform sections:
# IAM role the EC2 instance will assume, with the SSM policies attached.
resource "aws_iam_role" "ssm_ec2_role1" {
  name               = "ssm_ec2_role"
  assume_role_policy = file("assume_role_policy.json")
}

resource "aws_iam_role_policy_attachment" "ssm_role_attachment" {
  role       = aws_iam_role.ssm_ec2_role1.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
}

resource "aws_iam_role_policy_attachment" "ssm_role_attachment2" {
  role       = aws_iam_role.ssm_ec2_role1.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMFullAccess"
}

# Instance profile that passes the role to the instance.
resource "aws_iam_instance_profile" "ssm_ec2_role_profile" {
  name = "ssm_ec2"
  role = aws_iam_role.ssm_ec2_role1.name
}

resource "aws_instance" "ec2_instance_master" {
  ami                         = var.ami_id
  subnet_id                   = aws_subnet.kube_public_subnet.id
  instance_type               = var.instance_type
  key_name                    = var.ami_key_pair_name
  associate_public_ip_address = true
  # Use vpc_security_group_ids (not security_groups) when passing
  # security group IDs to an instance in a VPC.
  vpc_security_group_ids = [aws_security_group.kube_security_group.id]
  iam_instance_profile   = aws_iam_instance_profile.ssm_ec2_role_profile.name

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "16"
    delete_on_termination = true
  }

  tags = {
    Name = "k8s_master_1"
  }

  user_data_base64 = base64encode(templatefile("scripts/install_k8s_master.sh", {
    access_key     = var.access_key
    secret_key     = var.secret_key
    region         = var.region
    s3_bucket_name = aws_s3_bucket.s3_kube_bucket.id
  }))

  depends_on = [
    aws_s3_bucket.s3_kube_bucket,
    random_string.s3name
  ]
}
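Note that this instance sits in a public subnet with a public IP. For an instance in a private subnet without a NAT gateway (the situation described in the question), the SSM agent also needs interface VPC endpoints to reach the Systems Manager service. A minimal sketch, assuming hypothetical `kube_vpc`, `kube_private_subnet`, and `endpoint_sg` resources exist in your configuration:

```terraform
# SSM Session Manager needs these three interface endpoints to work
# from a private subnet with no internet access.
resource "aws_vpc_endpoint" "ssm_endpoints" {
  for_each = toset(["ssm", "ssmmessages", "ec2messages"])

  vpc_id              = aws_vpc.kube_vpc.id
  service_name        = "com.amazonaws.${var.region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.kube_private_subnet.id]
  security_group_ids  = [aws_security_group.endpoint_sg.id]
  private_dns_enabled = true
}
```

The endpoint security group must allow inbound HTTPS (port 443) from the instance's subnet, or the agent still cannot register.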
Then you can run the AWS CLI SSM commands:

aws ssm start-session --target i-XXXXXXXX --region eu-XXXXX-X

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["kubectl apply -f nginx.yaml"]' \
  --targets "Key=instanceids,Values=i-XXXXXXXXXXXXXX" \
  --comment "kubectl apply -f nginx.yaml"
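Before starting a session, it can also help to confirm the instance has actually registered with SSM, since "Instance not in a valid state" usually means it has not. A hedged check, with a placeholder instance ID:

```shell
# Lists the instance as known to SSM; it should report a
# PingStatus of "Online" once the agent can reach the service.
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=i-XXXXXXXXXXXXXX" \
  --query "InstanceInformationList[].PingStatus"
```

If the output is an empty list, the agent has never registered, which points to the IAM instance profile or the network path (NAT or VPC endpoints) rather than the CLI command itself.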
I hope this helps.
Upvotes: 0