user6882757

How to create a layer in a Lambda function

Steps followed:

  1. Create a virtual environment and activate it.
  2. pip install elasticsearch
  3. Zip the folder inside site-packages into site-packages.zip.
  4. Create a layer in AWS (say the name is elastic).
  5. Add the code below in lambda_handler and attach the elastic layer.

Below is the code:

import json
from elasticsearch import Elasticsearch, RequestsHttpConnection
def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

Still, I got: "errorMessage": "Unable to import module 'lambda_function': No module named 'elasticsearch'"

Upvotes: 4

Views: 2189

Answers (4)

Karthik Pillai

Reputation: 337

Python-only answer:

Preparing and uploading a Python package to an AWS Lambda layer (example: OpenCV)

Step 1: Create a Folder for Python Dependencies

mkdir python
cd python

Step 2: Install OpenCV in the Folder

pip install opencv-python -t .

Step 3: Zip the Folder

cd ..
zip -r python.zip python
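
A common failure mode (and the one in the question) is zipping the package contents directly, without the top-level python directory: Lambda only adds python/ (or python/lib/pythonX.Y/site-packages/) from a layer to the import path. As a minimal local sanity check (a sketch, not part of the original steps):

```python
import os
import tempfile
import zipfile

def layer_zip_is_valid(zip_path):
    """True if every entry in the archive lives under a top-level python/ dir."""
    with zipfile.ZipFile(zip_path) as zf:
        return all(name.startswith("python/") for name in zf.namelist())

# Demo with a throwaway archive that mimics the layout built above.
with tempfile.TemporaryDirectory() as tmp:
    demo = os.path.join(tmp, "python.zip")
    with zipfile.ZipFile(demo, "w") as zf:
        zf.writestr("python/cv2/__init__.py", "")
    print(layer_zip_is_valid(demo))  # True
```

Running unzip -l python.zip and checking that every path starts with python/ achieves the same thing.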

Step 4: Upload the Zip File to AWS Lambda Layer

  1. Log in to the AWS Management Console.
  2. Navigate to the AWS Lambda service.
  3. In the left-hand menu, click on Layers.
  4. Click on the Create layer button.
  5. Enter a name for your layer (e.g., opencv-layer).
  6. Under Upload a file from Amazon S3 or Upload a .zip file, select the python.zip file you created.
  7. Choose the appropriate runtime (e.g., Python 3.x).
  8. Click on the Create button.

Step 5: Add the Layer to Your Lambda Function

  1. Go to your Lambda functions in the AWS Management Console.
  2. Select the function to which you want to add the layer.
  3. In the Layers section, click on Add a layer.
  4. Select Custom layers.
  5. Choose the layer you created (opencv-layer) from the list.
  6. Click on Add to attach the layer to your Lambda function.
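
The console steps above can also be scripted. As a hedged sketch, here is a helper that assembles the equivalent aws lambda publish-layer-version invocation (the layer name and runtime are placeholders taken from the example); the command is printed rather than executed:

```python
def layer_cli_command(zip_path, layer_name, runtime="python3.12"):
    """Build the AWS CLI call that publishes a local zip as a layer version."""
    return [
        "aws", "lambda", "publish-layer-version",
        "--layer-name", layer_name,
        "--zip-file", f"fileb://{zip_path}",
        "--compatible-runtimes", runtime,
    ]

print(" ".join(layer_cli_command("python.zip", "opencv-layer")))
```

Attaching the published layer version can likewise be scripted with aws lambda update-function-configuration --layers <layer-version-arn>.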

Upvotes: 0

mahmoud mehdi

Reputation: 1590

For macOS users: if you don't specify the platform when installing the library with pip, you'll have issues importing the library from the Lambda function. To fix that, this is how you should create the layer properly:

  1. create an empty folder called "python" :

    mkdir python

  2. install the library using pip3 by specifying the platform, target, implementation and python version:

    pip3 install --platform manylinux2014_x86_64 --target=python --implementation cp --python-version 3.10 --only-binary=:all: --upgrade library-name

  3. Zip the python folder containing the library you just installed:

    zip -r library-name.zip python

You can then upload library-name.zip when creating the layer on AWS. This solved the issue for me.
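
The platform mismatch is usually visible in the compiled-extension filenames: wheels built natively on macOS ship .so files tagged darwin, while Lambda needs Linux (manylinux) builds. A quick sketch that flags macOS-built extensions inside the python folder (the filenames in the demo are made up):

```python
import pathlib
import tempfile

def darwin_extensions(layer_dir):
    """List compiled extensions built for macOS (these won't load on Lambda)."""
    return [p for p in pathlib.Path(layer_dir).rglob("*.so") if "darwin" in p.name]

# Demo with fake extension files mimicking both platforms.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "pkg").mkdir()
    (root / "pkg" / "_speedups.cpython-310-darwin.so").touch()
    (root / "pkg" / "_ok.cpython-310-x86_64-linux-gnu.so").touch()
    print([p.name for p in darwin_extensions(root)])  # ['_speedups.cpython-310-darwin.so']
```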

Upvotes: 0

Adiii

Reputation: 59986

One option, already mentioned by @Marcin, requires Docker to be installed on the target machine. If you want to skip Docker, you can use the script below to create and publish a layer to AWS.

All you need:

./creater_layer.sh <package_name> <layer_name>

./creater_layer.sh elasticsearch my-layer

The script, creater_layer.sh:

path="app"
package="${1}"    # package to install, e.g. elasticsearch
layername="${2}"  # name of the layer to publish
mkdir -p "${path}"
# Install into the path Lambda expects inside a layer
pip3 install "${package}" --target "${path}/python/lib/python3.8/site-packages/"
# Zip from inside the folder so python/ sits at the archive root
cd "${path}" && zip -r ../lambdalayer.zip .
aws lambda publish-layer-version --layer-name "${layername}" --description "My layer" --license-info "MIT" --zip-file "fileb://../lambdalayer.zip" --compatible-runtimes python3.8

Upvotes: 0

Marcin

Reputation: 238319

If I may, I would like to recommend an alternative technique which has never failed me. The technique uses the Docker tool described in a recent AWS blog post.

Thus for this question, I verified it using elasticsearch as follows:

  1. Create an empty folder, e.g. mylayer.

  2. Go to the folder and create a requirements.txt file with the content:

    elasticsearch

  3. Run the following docker command (adjust the Python version to your needs):

    docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"

  4. Create the layer as a zip:

    zip -r elastic.zip python > /dev/null

  5. Create a lambda layer based on elastic.zip in the AWS Console. Don't forget to specify Compatible runtimes as python3.8.

  6. Test the layer in lambda using the following lambda function:

import json

from elasticsearch import Elasticsearch, RequestsHttpConnection

def lambda_handler(event, context):
    # TODO implement
    
    print(dir(Elasticsearch))
    
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

The function executes correctly:

['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'bulk', 'clear_scroll', 'close', 'count', 'create', 'delete', 'delete_by_query', 'delete_by_query_rethrottle', 'delete_script', 'exists', 'exists_source', 'explain', 'field_caps', 'get', 'get_script', 'get_script_context', 'get_script_languages', 'get_source', 'index', 'info', 'mget', 'msearch', 'msearch_template', 'mtermvectors', 'ping', 'put_script', 'rank_eval', 'reindex', 'reindex_rethrottle', 'render_search_template', 'scripts_painless_execute', 'scroll', 'search', 'search_shards', 'search_template', 'termvectors', 'update', 'update_by_query', 'update_by_query_rethrottle']

Upvotes: 6
