Reputation: 848
I don't understand how to git clone my private GitLab repository onto an EC2 instance when I deploy using Terraform.
I have a deploy.sh file that contains:
sudo apt-get update
sudo apt-get install -y git
git clone [email protected]:myapp/myrepo.git
I use an SSH key to pull from and push to the repository.
How do I define or register the SSH key on the new EC2 instance created by the Terraform script?
I already tried using provisioner "file", but I use an ELB with an Auto Scaling group, so I can't connect via SSH to copy files from my local machine to the new EC2 instance; I always get "port 22: Connection refused".
Upvotes: 1
Views: 2539
Reputation: 36
To expand on jstewart379's answer:
The fundamental requirement is that your EC2 instances can reach that repository in some manner.
For this to work, your GitLab instance (not the repo) will need to be public, or will at least need to allow access from the EC2 instances (for example, by modifying GitLab's security group to allow port 443 and port 22 access from the EC2 instance security group).
After that, you can authenticate via any of the methods your GitLab instance supports (generally either an SSH key or HTTP credentials).
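If GitLab is self-hosted in the same AWS account, that security group change might look roughly like the following sketch. The names aws_security_group.gitlab and aws_security_group.instances are hypothetical stand-ins for your GitLab host's and your ASG instances' security groups:

# Sketch: let the app instances reach GitLab over HTTPS and SSH.
# Both security group names below are hypothetical.
resource "aws_security_group_rule" "gitlab-https-from-instances" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.gitlab.id}"
  source_security_group_id = "${aws_security_group.instances.id}"
}

resource "aws_security_group_rule" "gitlab-ssh-from-instances" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.gitlab.id}"
  source_security_group_id = "${aws_security_group.instances.id}"
}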
For the SSH key method, you should set up a read-only deploy key (don't use your personal SSH key) within GitLab:
https://docs.gitlab.com/ee/ssh/#per-repository-deploy-keys
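Generating that key could look like this; the secrets/id_rsa path is chosen to match the S3 upload below:

# Create a dedicated, passphrase-less RSA key pair used only for deploys.
# Paste the contents of secrets/id_rsa.pub into the repo's Deploy Keys page.
ssh-keygen -t rsa -b 4096 -f secrets/id_rsa -N ""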
After this, you can install the key on the instance in a number of ways; in each case, the user data of your ASG's Launch Configuration is where it happens (a full sketch appears at the end of this answer).
My preferred method is to load the key onto the instance from an encrypted, private S3 bucket.
resource "aws_s3_bucket_object" "s3_object_deploy_key" {
key = "id_rsa"
bucket = "${aws_s3_bucket.s3_secrets.id}"
source = "secrets/id_rsa"
}
Important Note: Be sure to add that secrets directory to your .gitignore
or you're gonna have a bad time.
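Since the bucket should be encrypted and private, the bucket itself (which the snippet above assumes already exists as aws_s3_bucket.s3_secrets) might be declared along these lines; the bucket name is a placeholder:

resource "aws_s3_bucket" "s3_secrets" {
  bucket = "${var.cluster_name}-${var.env}-secrets" # placeholder naming
  acl    = "private"

  # Encrypt everything stored in the bucket at rest by default.
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}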
After uploading the key to the bucket, grant read-only access to the bucket via an IAM instance role.
That would look something like this:
resource "aws_iam_policy" "iam-policy-s3-deploy-key" {
name = "${var.cluster_name}-${var.env}-read-deploy-key"
path = "/"
description = "Allow reading from the S3 bucket"
policy = <<EOF
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucketByTags",
"s3:GetLifecycleConfiguration",
"s3:GetBucketTagging",
"s3:GetInventoryConfiguration",
"s3:GetObjectVersionTagging",
"s3:ListBucketVersions",
"s3:GetBucketLogging",
"s3:ListBucket",
"s3:GetAccelerateConfiguration",
"s3:GetBucketPolicy",
"s3:GetObjectVersionTorrent",
"s3:GetObjectAcl",
"s3:GetEncryptionConfiguration",
"s3:GetBucketRequestPayment",
"s3:GetObjectVersionAcl",
"s3:GetObjectTagging",
"s3:GetMetricsConfiguration",
"s3:GetIpConfiguration",
"s3:ListBucketMultipartUploads",
"s3:GetBucketWebsite",
"s3:GetBucketVersioning",
"s3:GetBucketAcl",
"s3:GetBucketNotification",
"s3:GetReplicationConfiguration",
"s3:ListMultipartUploadParts",
"s3:GetObject",
"s3:GetObjectTorrent",
"s3:GetBucketCORS",
"s3:GetAnalyticsConfiguration",
"s3:GetObjectVersionForReplication",
"s3:GetBucketLocation",
"s3:GetObjectVersion"
],
"Resource":[
"${data.terraform_remote_state.secret-store.s3_secrets_arn}",
"${data.terraform_remote_state.secret-store.s3_secrets_arn}/*"
]
},
{
"Effect":"Allow",
"Action":[
"s3:ListAllMyBuckets",
"s3:HeadBucket"
],
"Resource":"*"
}
]
}
EOF
}
You'd set up an instance role like this and assign it to your Launch Configuration:
data "aws_iam_policy_document" "instance-assume-role-policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role" "iam-role-instance" {
name = "${var.cluster_name}-${var.env}-instance"
path = "/system/"
assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
}
resource "aws_iam_role_policy_attachment" "iam-attach-deploy-key" {
role = "${aws_iam_role.iam-role-instance.name}"
policy_arn = "${aws_iam_policy.iam-policy-s3-deploy-key.arn}"
}
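One piece the snippet above leaves implicit: a Launch Configuration picks up the role through an instance profile, and the user data is where the key actually gets fetched and the clone happens. Here's a rough sketch, assuming an Ubuntu AMI (to match the question's apt-get commands); var.ami_id, var.aws_region, and /opt/myapp are placeholders:

resource "aws_iam_instance_profile" "iam-profile-instance" {
  name = "${var.cluster_name}-${var.env}-instance"
  role = "${aws_iam_role.iam-role-instance.name}"
}

resource "aws_launch_configuration" "launch-config" {
  image_id             = "${var.ami_id}" # placeholder Ubuntu AMI
  instance_type        = "t2.micro"
  iam_instance_profile = "${aws_iam_instance_profile.iam-profile-instance.name}"

  user_data = <<EOF
#!/bin/bash
set -e
apt-get update
apt-get install -y git awscli

# Fetch the deploy key from the private bucket and lock down its permissions.
mkdir -p /root/.ssh
aws s3 cp --region ${var.aws_region} s3://${aws_s3_bucket.s3_secrets.id}/id_rsa /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa

# Pre-trust gitlab.com's host key so the clone runs non-interactively.
ssh-keyscan gitlab.com >> /root/.ssh/known_hosts

git clone [email protected]:myapp/myrepo.git /opt/myapp
EOF

  lifecycle {
    create_before_destroy = true
  }
}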
After getting the key in place, you can do as you wish with the repository.
Hope that helps!
Upvotes: 2
Reputation: 446
When you provision your aws_autoscaling_group resource with Terraform, you can give it an aws_launch_configuration resource and fill in its user_data field with a script to run when the instance launches. That script can do anything you need, including cloning your GitLab repo and setting up your SSH keys. For SSH access to the machines themselves, you could set up a public-facing bastion instance from which you can connect to the private instances running behind your ELB. Your security group settings can make this public instance inaccessible to anyone but you.
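The security group for that public bastion might look like this sketch; the CIDR is a placeholder for your own IP:

resource "aws_security_group" "bastion" {
  name = "bastion-ssh"

  # Allow SSH in only from your own address.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.4/32"] # placeholder: your workstation's IP
  }

  # Allow the bastion to reach the private instances (and anything else) outbound.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}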
For reading a file into your Terraform configuration (for example with the file() interpolation function): https://www.terraform.io/docs/configuration/interpolation.html
Upvotes: 0