lfk

Reputation: 2633

Can I test AWS Glue code locally?

After reading the Amazon docs, my understanding is that the only way to run/test a Glue script is to deploy it to a dev endpoint and debug remotely if necessary. At the same time, if the (Python) code consists of multiple files and packages, everything except the main script needs to be zipped. All this gives me the feeling that Glue is not suitable for complex ETL tasks, as development and testing are cumbersome. I would like to be able to test my Spark code locally without having to upload the code to S3 every time, and to verify the tests on a CI server without having to pay for a development Glue endpoint.

Upvotes: 46

Views: 36328

Answers (9)

Sarde

Reputation: 688

You can do this as follows:

  1. Install PySpark using

     >> pip install pyspark==2.4.3
    
  2. Download the prebuilt AWS Glue 1.0 JAR with the Python dependencies: Download_Prebuild_Glue_Jar

  3. Copy the awsglue folder and the JAR file from GitHub into your PyCharm project

  4. Copy the Python code from my Git repository

  5. Run the following on your console; make sure to enter your own path:

     >> python com/mypackage/pack/glue-spark-pycharm-example.py
    

From my own blog
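
For reference, here is a minimal sketch of the kind of script this setup lets you run locally, assuming the awsglue folder from step 3 is importable and PySpark 2.4.3 is installed; the input path is a placeholder:

    # Minimal local Glue-style script (sketch); no AWS account or dev endpoint needed.
    # Assumes the awsglue package copied in step 3 is on PYTHONPATH.
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    sc = SparkContext.getOrCreate()
    glue_context = GlueContext(sc)
    spark = glue_context.spark_session

    # Read a local file with plain Spark instead of the Glue Data Catalog.
    df = spark.read.csv("data/sample.csv", header=True, inferSchema=True)
    df.show(5)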

Upvotes: 1

selle

Reputation: 982

There is now an official Docker image from AWS, so you can run Glue locally: https://aws.amazon.com/blogs/big-data/building-an-aws-glue-etl-pipeline-locally-without-an-aws-account/

There's a nice step-by-step guide on that page as well.

Upvotes: 9

David R. Willson

Reputation: 21

I think the key here is to define what kind of testing you want to do locally. If you are doing unit testing (i.e. testing just one PySpark script, independent of the AWS services supporting that script), then sure, you can do that locally. Use a mocking module like pytest-mock, monkeypatch, or unittest.mock to mock the AWS and Spark services external to your script while you test the logic you have written in your PySpark script. For module testing, you could use a notebook environment like AWS EMR Notebooks, Zeppelin, or Jupyter. There you would be able to run your Spark code against test data sources, but you can still mock the AWS services.

For integration testing (i.e. testing your code integrated with the services it depends on, but not a production system), you could launch a test instance of your system from your CI/CD pipeline and then have compute resources (like pytest scripts or AWS Lambda) automate the workflow implemented by your script.
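
As an illustration, here is a minimal sketch of such a unit test with pytest, assuming the job's logic is factored into a plain function that takes and returns a Spark DataFrame (the module my_glue_job, the function transform_frame, and the value_upper column are hypothetical names):

    # Sketch of a local unit test for PySpark logic, independent of AWS.
    import pytest
    from pyspark.sql import SparkSession

    from my_glue_job import transform_frame  # hypothetical module under test


    @pytest.fixture(scope="session")
    def spark():
        # A small local Spark session is enough for unit tests.
        return (SparkSession.builder
                .master("local[1]")
                .appName("glue-unit-tests")
                .getOrCreate())


    def test_transform_adds_expected_column(spark):
        source = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
        result = transform_frame(source)
        assert "value_upper" in result.columns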

Upvotes: 2

Pradeep Kumar GS

Reputation: 156

If you are looking to run this in Docker, here are the links:

Docker Hub: https://hub.docker.com/r/svajiraya/glue-dev-1.0

Git repo for the Dockerfile:
https://github.com/svajiraya/aws-glue-libs/blob/glue-1.0/Dockerfile

Upvotes: 0

Brian

Reputation: 1388

Finally, as of August 28, 2019, Amazon allows you to download the binaries and

develop, compile, debug, and single-step Glue ETL scripts and complex Spark applications in Scala and Python locally.

Check out this link: https://aws.amazon.com/about-aws/whats-new/2019/08/aws-glue-releases-binaries-of-glue-etl-libraries-for-glue-jobs/

Upvotes: 12

nont

Reputation: 9519

I spoke to an AWS sales engineer and they said no, you can only test Glue code by running a Glue transform (in the cloud). He mentioned that they were testing something called Outposts to allow on-prem operations, but that it wasn't publicly available yet. So this seems like a solid "no", which is a shame, because it otherwise seems pretty nice. But without unit tests, it's a no-go for me.

Upvotes: 8

Sandeep Fatangare

Reputation: 2144

You can keep the Glue and PySpark code in separate files and unit-test the PySpark code locally. For zipping the dependency files, we wrote a shell script that zips the files, uploads them to an S3 location, and then applies a CloudFormation template to deploy the Glue job. For detecting dependencies, we created a (glue job)_dependency.txt file.
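
A minimal sketch of that split might look like the following; the file, function, database, and table names are hypothetical. The pure PySpark part has no Glue dependency and can be tested locally, while the thin wrapper only runs inside Glue:

    # transform.py -- pure PySpark logic, unit-testable locally (hypothetical names)
    from pyspark.sql import DataFrame
    from pyspark.sql import functions as F


    def add_ingest_date(df: DataFrame) -> DataFrame:
        # Plain DataFrame-to-DataFrame transformation, no Glue imports needed.
        return df.withColumn("ingest_date", F.current_date())


    # glue_job.py -- thin wrapper that only runs in the Glue environment:
    # from pyspark.context import SparkContext
    # from awsglue.context import GlueContext
    # from transform import add_ingest_date
    #
    # glue_context = GlueContext(SparkContext.getOrCreate())
    # dyf = glue_context.create_dynamic_frame.from_catalog(database="my_db", table_name="my_table")
    # result = add_ingest_date(dyf.toDF())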

Upvotes: 8

Yuva

Reputation: 3153

Adding to CedricB,

For development/testing purposes, it's not necessary to upload the code to S3. You can set up a Zeppelin notebook locally and establish an SSH connection to the dev endpoint so that you have access to the Data Catalog, crawlers, etc., as well as the S3 bucket where your data resides.

After all the testing is completed, you can bundle your code and upload it to an S3 bucket. Then create a job pointing to the ETL script in the S3 bucket, so the job can be run and scheduled as well. Once all development/testing is completed, make sure to delete the dev endpoint, as we are charged even for the idle time.


Upvotes: 1

CedricB

Reputation: 1167

Not that I know of, and if you have a lot of remote assets, it will be tricky. On Windows, I normally run a development endpoint and a local Zeppelin notebook while I am authoring my job, and I shut it down each day.

You could use the job editor > script editor to edit, save, and run the job. Not sure of the cost difference.

Upvotes: 2
