claudiadast

Reputation: 429

S3 file not downloaded when triggering a Lambda function associated with EFS

I'm using the Serverless framework to create a Lambda function that, when triggered by an S3 upload (uploading test.vcf to s3://trigger-test/uploads/), downloads that uploaded file from S3 to EFS (specifically to the /mnt/efs/vcfs/ folder). I'm pretty new to EFS and followed AWS documentation for setting up the EFS access point, but when I deploy this application and upload a test file to trigger the Lambda function, it fails to download the file and gives this error in the CloudWatch logs:

[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/mnt/efs/vcfs/test.vcf.A0bA45dC'
Traceback (most recent call last):
  File "/var/task/handler.py", line 21, in download_files_to_efs
    result = s3.download_file('trigger-test', key, efs_loci)
  File "/var/runtime/boto3/s3/inject.py", line 170, in download_file
    return transfer.download_file(
  File "/var/runtime/boto3/s3/transfer.py", line 307, in download_file
    future.result()
  File "/var/runtime/s3transfer/futures.py", line 106, in result
    return self._coordinator.result()
  File "/var/runtime/s3transfer/futures.py", line 265, in result
    raise self._exception
  File "/var/runtime/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
  File "/var/runtime/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/var/runtime/s3transfer/download.py", line 571, in _main
    fileobj.seek(offset)
  File "/var/runtime/s3transfer/utils.py", line 367, in seek
    self._open_if_needed()
  File "/var/runtime/s3transfer/utils.py", line 350, in _open_if_needed
    self._fileobj = self._open_function(self._filename, self._mode)
  File "/var/runtime/s3transfer/utils.py", line 261, in open
    return open(filename, mode)

My hunch is that this has to do with the local mount path specified in the Lambda function versus the Root directory path in the Details portion of the EFS access point configuration. Ultimately, I want the test.vcf file I upload to S3 to be downloaded to the EFS folder: /mnt/efs/vcfs/.

Relevant files:

serverless.yml:

service: LambdaEFS-trigger-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2
  vpc:
    securityGroupIds:
      - sg-XXXXXXXX
      - sg-XXXXXXXX
      - sg-XXXXXXXX
    subnetIds:
      - subnet-XXXXXXXXXX


functions:
  cfnPipelineTrigger:
    handler: handler.download_files_to_efs
    description: Lambda to download S3 file to EFS folder.
    events:
      - s3:
          bucket: trigger-test
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .vcf
          existing: true
    fileSystemConfig:
      localMountPath: /mnt/efs
      arn: arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:access-point/fsap-XXXXXXX
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:ListBucket
        Resource:
          - arn:aws:s3:::trigger-test
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:GetObjectVersion
        Resource:
          - arn:aws:s3:::trigger-test/uploads/* 
      - Effect: Allow
        Action:
        - elasticfilesystem:ClientMount
        - elasticfilesystem:ClientWrite
        - elasticfilesystem:ClientRootAccess
        Resource:
        - arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:file-system/fs-XXXXXX

plugins:
  - serverless-iam-roles-per-function

package:
  individually: true
  exclude:
    - '**/*'
  include:
    - handler.py

handler.py:

import json
import boto3

s3 = boto3.client('s3', region_name = 'us-west-2')

def download_files_to_efs(event, context):
    """
    Locates the S3 file name (i.e. S3 object "key" value) that initiated the Lambda call, then downloads the file
    into the locally attached EFS drive at the target location.
    :param: event | S3 event record
    :return: dict
    """
    print(event)

    key = event.get('Records')[0].get('s3').get('object').get('key')  # bucket: trigger-test, key: uploads/test.vcf
    efs_loci = f"/mnt/efs/vcfs/{key.split('/')[-1]}"  # '/mnt/efs/vcfs/test.vcf'
    print("key: %s, efs_loci: %s" % (key, efs_loci))
    result = s3.download_file('trigger-test', key, efs_loci)
    if result:
        print('Download Success...')
    else:
        print('Download failed...')
    return {'status_code': 200}

EFS Access Point details:

Upvotes: 1

Views: 836

Answers (1)

Marcin

Reputation: 238687

Your local mount path is localMountPath: /mnt/efs, so in your code you should use only that path (not /mnt/efs/vcfs, which does not exist on the file system unless you create it):

efs_loci = f"/mnt/efs/{key.split('/')[-1]}" # '/mnt/efs/test.vcf'
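Alternatively, if you do want the files to land in a vcfs/ subfolder, you can create that directory at runtime before downloading; boto3's download_file does not create missing parent directories, which is exactly what raises the FileNotFoundError in your logs. A minimal sketch (the subdir name vcfs comes from your question; efs_destination is a hypothetical helper, and note that S3 event keys arrive URL-encoded):

```python
import os
import urllib.parse

MOUNT_PATH = '/mnt/efs'  # must match localMountPath in serverless.yml

def efs_destination(key, subdir='vcfs', base=MOUNT_PATH):
    """Map an S3 object key to a destination path under the EFS mount.

    S3 event keys are URL-encoded (spaces arrive as '+'), so decode them
    before using them as filesystem paths.
    """
    filename = urllib.parse.unquote_plus(key).split('/')[-1]
    target_dir = os.path.join(base, subdir)
    os.makedirs(target_dir, exist_ok=True)  # create /mnt/efs/vcfs if missing
    return os.path.join(target_dir, filename)
```

Then in the handler: s3.download_file('trigger-test', key, efs_destination(key)). Also note that download_file returns None on success and raises an exception on failure, so the if result: check in your handler will never print 'Download Success...'.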

Upvotes: 2
