Reputation: 119
Can someone please help? I have a strange issue. I have two pipelines, each associated with a different repo. I used the first Azure DevOps pipeline to upgrade the Terraform version from 0.12 to 0.13 (no resource changes). Now I am trying to access the same state file created by pipeline one, to add and remove some resources in the same AWS account, and I get a 403 error:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: 6RH0CW38GPXZZK6Q, host id: zHYzyP9c50QWl+QoTi+7tbaClu2+T6BFrqxDm8xd+mnJOYTd6a3lzVo6tiMJ6Ni91UXvHIQcpjg=
##[error]Error: The process '/usr/bin/bash' failed with exit code 1
I have a reason to use two different pipelines. My question here: can the same state file be shared between two pipelines?
When I do a terraform init from my local laptop, it works perfectly fine.
Options I tried:
I tried removing encrypt = true from the backend.tf file (as I am not sure if it uses a different KMS key, but both pipelines are in the same AWS account).
Tried removing the .terraform directory and the .terraform.lock.hcl file (no luck).
I changed the state file key and tried running the init; it still failed, so the problem is not the state file itself. Terraform is actually unable to access the folders in the S3 bucket at all.
Original backend.tf:
terraform {
  backend "s3" {
    key    = "ad-account/terraform/backend"
    region = "eu-west-1"
  }
}
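Note that this backend block omits the bucket name, which suggests it is supplied at init time via partial configuration (terraform init -backend-config=...). As a point of reference, a fully spelled-out S3 backend would look something like the sketch below; the bucket name here is hypothetical, not taken from the question:

terraform {
  backend "s3" {
    bucket  = "my-terraform-states"          # hypothetical bucket; the real one is passed at init time
    key     = "ad-account/terraform/backend" # object key the state file is stored under
    region  = "eu-west-1"
    encrypt = true                           # server-side encryption of the state object
  }
}

If the two pipelines pass different -backend-config values (for example, different bucket names or AWS profiles), they would end up pointing at different places even with identical backend.tf files.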
Changed to:
terraform {
  backend "s3" {
    key    = "ad-account/terraform/backend1"
    region = "eu-west-1"
  }
}
This still failed, so I then changed it to:
terraform {
  backend "s3" {
    key    = "ad-account1/terraform/backend"
    region = "eu-west-1"
  }
}
The above also failed.
Both pipelines use the same AWS credentials and run against the same AWS account.
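If both pipelines really do run with the same credentials, that identity needs at least the permissions Terraform's S3 backend requires on the bucket and the state key; a missing s3:ListBucket in particular surfaces as an AccessDenied during state refresh. A minimal IAM policy sketch, using the same hypothetical bucket name as above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-states"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-terraform-states/ad-account/terraform/backend"
    }
  ]
}

Comparing aws sts get-caller-identity output from the laptop and from each pipeline agent is a quick way to confirm whether the same identity is actually in use.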
Any insight is much appreciated, thanks.
Upvotes: 0
Views: 599
Reputation: 119
Thanks for all the comments, the issue is fixed. We use different agent pools for different stages. For the first pipeline the pool was configured correctly, but in the second pipeline the acceptance environment had the test env agent pool configured, which created a mess out of it. Thanks, guys!
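For anyone hitting the same problem: the fix amounts to making sure every stage targets the intended agent pool. A hedged Azure Pipelines YAML sketch (the stage and pool names below are hypothetical, not from the actual pipelines):

stages:
  - stage: Plan
    pool:
      name: terraform-pool   # same pool the first pipeline uses
    jobs:
      - job: plan
        steps:
          - script: terraform init && terraform plan
  - stage: Acceptance
    pool:
      name: terraform-pool   # was mistakenly pointing at the test env pool
    jobs:
      - job: apply
        steps:
          - script: terraform apply -auto-approve

Because self-hosted pools can carry different AWS credentials or network access, a stage running on the wrong pool can fail with 403s against S3 even though the pipeline definitions look identical.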
Upvotes: 0