ramondea

Reputation: 317

Attach EFS Volume in ECS

When trying to mount an EFS file system in an ECS (Fargate) task, I get the following error:

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: mount.nfs4: Connection reset by peer : unsuccessful EFS utils command execution; code: 32

My Stack:

--- 
  AWSTemplateFormatVersion: "2010-09-09"
  Description: "Template Test"
  Outputs: 
    FileSystemID: 
      Description: "File system ID"
      Value: 
        Ref: FileSystem
  Parameters: 
    VolumeName: 
      Default: myEFSvolume
      Description: "The name to be used for the EFS volume"
      MinLength: "1"
      Type: String
  Resources: 
    ECSCluster: 
      Properties: 
        ClusterName: jenkins-cluster
      Type: "AWS::ECS::Cluster"
    EFSMountTarget1: 
      Properties: 
        FileSystemId: 
          Ref: FileSystem
        SecurityGroups: 
          - "sg-0082cea75ba714505"
        SubnetId: "subnet-0f0b0d3aaada62b6c"
      Type: "AWS::EFS::MountTarget"
    FileSystem: 
      Properties: 
        Encrypted: true
        FileSystemTags: 
          - Key: Name
            Value: 
              Ref: VolumeName
        PerformanceMode: generalPurpose
      Type: "AWS::EFS::FileSystem"
    JenkinsService: 
      Type: "AWS::ECS::Service"
      Properties: 
        Cluster: 
          Ref: ECSCluster
        DesiredCount: 2
        LaunchType: FARGATE
        NetworkConfiguration: 
          AwsvpcConfiguration:
            AssignPublicIp: ENABLED
            SecurityGroups: 
              - "sg-0082cea75ba714505"
            Subnets: 
              - "subnet-0f0b0d3aaada62b6c"
        PlatformVersion: "1.4.0"
        ServiceName: JenkinsService
        
        TaskDefinition: 
          Ref: JenkinsTaskDef
    JenkinsTaskDef: 
      Type: "AWS::ECS::TaskDefinition"
      Properties:
        Cpu: 2048
        Memory: 4096
        Family: efs-example-task-fargate
        NetworkMode: awsvpc
        TaskRoleArn: "arn:xxxxx/ecs"
        ExecutionRoleArn: "arn:xxxxxx:role/ecs"
        RequiresCompatibilities:
          - FARGATE 
        ContainerDefinitions: 
          - Cpu: 1024
            Memory: 2048
            PortMappings:
              - HostPort: 8080
                ContainerPort: 8080
              - HostPort: 50000
                ContainerPort: 50000
            Image: "xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/sample:latest"
            MountPoints: 
              - ContainerPath: /var/jenkins_home
                ReadOnly: false
                SourceVolume: myEfsVolume
            Name: jenkins
        Volumes:
          - Name: myEfsVolume
            EFSVolumeConfiguration: 
              FilesystemId: 
                Ref: FileSystem
              RootDirectory: /var/jenkins_home
              TransitEncryption: ENABLED
    

I am following the documentation:

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSVolumeConfiguration.html

Upvotes: 18

Views: 24870

Answers (7)

grantr

Reputation: 1060

This happened to me when the Fargate task was running in an availability zone that had no EFS mount target.

Go to EFS > File Systems > your file system > Network and make sure you launch the Fargate task in an availability zone that is listed there.
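In template terms, this means creating a mount target in every subnet/AZ the service can run in. A minimal sketch extending the question's stack (the second subnet ID is a placeholder, not from the original question):

```
    EFSMountTarget2: 
      Properties: 
        FileSystemId: 
          Ref: FileSystem
        SecurityGroups: 
          - "sg-0082cea75ba714505"
        SubnetId: "subnet-00000000000000000"  # placeholder: a subnet in the second AZ
      Type: "AWS::EFS::MountTarget"
```

With one mount target per AZ, the task can resolve and reach EFS regardless of where Fargate places it.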

Upvotes: 0

Gokce Demir

Reputation: 685

I also got this error:

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-xxxxxxxxxxx.efs.us-east-1.amazonaws.com" - check that your file system ID is correct

I fixed it by enabling DNS hostnames on the VPC. DNS hostnames are turned off by default.
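If the VPC lives in the same CloudFormation template, both DNS attributes can be turned on there. A minimal sketch (the resource name and CIDR block are placeholders):

```
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: "10.0.0.0/16"     # placeholder CIDR
      EnableDnsSupport: true       # required to resolve the EFS DNS name
      EnableDnsHostnames: true     # off by default; enabling it fixes the resolve error
```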

Upvotes: 1

marcuse

Reputation: 4009

In the case of CDK, the following PolicyStatement will handle the actions suggested above and fix the error:

fileSystem.addToResourcePolicy(
  new iam.PolicyStatement({
    actions: ['elasticfilesystem:ClientMount'],
    principals: [new iam.AnyPrincipal()],
    conditions: {
      Bool: {
        'elasticfilesystem:AccessedViaMountTarget': 'true'
      }
    }
  })
)

Upvotes: 0

Biswanath Roy

Reputation: 121

These are the things you need to do to mount an EFS file system in AWS Fargate:

  1. Allow inbound traffic on port 2049 in the EFS security group from the application layer's security group (the containers' security group in this case)
  2. Update the ECS Fargate task execution role with a policy that allows mounting and writing to the EFS file system
  3. Make sure the NACL of the EFS subnet allows outbound traffic on port 2049
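The first two steps could be sketched in CloudFormation roughly as follows (the resource names and the referenced security groups and role are placeholders, not from the original question):

```
  # Step 1: allow NFS (2049) into the EFS security group from the containers' SG
  EfsIngressFromContainers:
    Type: "AWS::EC2::SecurityGroupIngress"
    Properties:
      GroupId:
        Ref: EfsSecurityGroup            # placeholder: EFS mount target SG
      IpProtocol: tcp
      FromPort: 2049
      ToPort: 2049
      SourceSecurityGroupId:
        Ref: ContainerSecurityGroup      # placeholder: containers' SG
  # Step 2: let the role mount and write to the file system
  EfsAccessPolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: efs-mount-write
      Roles:
        - Ref: TaskExecutionRole         # placeholder: the task execution role
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "elasticfilesystem:ClientMount"
              - "elasticfilesystem:ClientWrite"
            Resource:
              Fn::GetAtt: [FileSystem, Arn]
```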

Upvotes: 9

strizzwald

Reputation: 643

If you enabled IAM authorization when associating the volume with the Task Definition, you also need to update its Task Execution Role by attaching the policies required to access EFS.
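For reference, IAM authorization is the `AuthorizationConfig` block on the volume; with it enabled, the mount is denied unless the role carries permissions such as `elasticfilesystem:ClientMount` (and `ClientWrite` for read-write access). A sketch in the question's template style:

```
        Volumes:
          - Name: myEfsVolume
            EFSVolumeConfiguration:
              FilesystemId:
                Ref: FileSystem
              TransitEncryption: ENABLED   # required when IAM authorization is enabled
              AuthorizationConfig:
                IAM: ENABLED               # this is what triggers the IAM permission check
```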

Upvotes: 2

fagiani

Reputation: 2351

It's been a while now, but I had the same issue and it was a bit confusing to figure out how to proceed. When you create your EFS volume, you choose a VPC and one Security Group for each Subnet.

You need to edit this Security Group and add an Inbound rule of type NFS (TCP port 2049) allowing access from the Security Group of the ECS cluster service you want to grant access to. To do that, select Custom in the Source field and enter the service's Security Group ID in the text box.

For more information, this article describes the whole process very well.

Upvotes: 13

AWS PS

Reputation: 4708

You need to open port 2049 inbound on the security group used by the network interface and the task definition. It is not set up automatically, even if you let it create the security group for you.
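In the question's template the same security group (`sg-0082cea75ba714505`) is attached to both the Fargate service and the EFS mount target, so the rule can be self-referencing. A minimal sketch (the resource name is a placeholder):

```
  EfsSelfIngress:
    Type: "AWS::EC2::SecurityGroupIngress"
    Properties:
      GroupId: "sg-0082cea75ba714505"               # shared SG from the question
      IpProtocol: tcp
      FromPort: 2049
      ToPort: 2049
      SourceSecurityGroupId: "sg-0082cea75ba714505" # allow members of the same SG to reach EFS
```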

Upvotes: 20
