Galuoises

Reputation: 3283

Submit a Python project as a Dataproc job

I have a Python project whose folder has the following structure:

main_directory - lib - lib.py
               - run - script.py

script.py is

from pyspark.sql import SparkSession
from lib.lib import add_two

spark = SparkSession \
    .builder \
    .master('yarn') \
    .appName('script') \
    .getOrCreate()

print(add_two(1, 2))

and lib.py is

def add_two(x, y):
    return x + y

I want to launch it as a Dataproc job on GCP. I have searched online, but I have not really understood how to do it. I am trying to launch the script with

gcloud dataproc jobs submit pyspark --cluster=$CLUSTER_NAME --region=$REGION \
  run/script.py

But I receive the following error message:

from lib.lib import add_two
ModuleNotFoundError: No module named 'lib.lib'

Could you help me understand how to launch the job on Dataproc? The only way I have found so far is to remove the package prefix from the import, making this change to script.py:

 from lib import add_two

and then launch the job as

gcloud dataproc jobs submit pyspark --cluster=$CLUSTER_NAME --region=$REGION \
  --files /lib/lib.py \
  /run/script.py

However, I would like to avoid the tedious process of listing the files manually every time.

Following @Igor's suggestion to pack everything into a zip file, I have found that

zip -j --update -r libpack.zip /projectfolder/* && spark-submit --py-files libpack.zip /projectfolder/run/script.py

works. However, the -j option puts all files into the root folder of libpack.zip, so if there were files with the same name in different subfolders this would not work.

Any suggestions?

Upvotes: 7

Views: 7974

Answers (3)

Keshav Prashanth

Reputation: 341

In order for Dataproc to recognize the Python project's directory structure, you have to zip the directory from which the import starts.

For example: if the project directory structure is dir1/dir2/dir3/script.py and the import is from dir2.dir3 import script as sc, then you have to zip dir2 and pass the zip file via --py-files during spark submit.

zip -r dir2.zip dir2

--py-files dir2.zip
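
A quick way to sanity-check the archive is to list its entries (a minimal sketch, assuming the example paths above); they must keep the dir2/ prefix so the import can resolve from inside the zip:

import zipfile

# Entries should look like 'dir2/dir3/script.py', not bare file names,
# otherwise 'from dir2.dir3 import script' will not resolve on the cluster.
print(zipfile.ZipFile('dir2.zip').namelist())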

Upvotes: 2

gagan

Reputation: 355

To zip the dependencies -

cd base-path-to-python-modules
zip -qr deps.zip ./* -x script.py

Copy deps.zip to HDFS or GCS and use its URI when submitting the job, as shown below.
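
For the GCS case, a minimal upload sketch using the google-cloud-storage client (the bucket name here is a placeholder):

from google.cloud import storage

# Upload deps.zip so it can be referenced as gs://my-bucket/deps.zip
client = storage.Client()
client.bucket('my-bucket').blob('deps.zip').upload_from_filename('deps.zip')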

Submit a Python project (PySpark) using Dataproc's Python client library:

from google.cloud import dataproc_v1
from google.cloud.dataproc_v1.gapic.transports import (
    job_controller_grpc_transport)

region = <cluster region>
cluster_name = <your cluster name>
project_id = <gcp-project-id>

job_transport = (
    job_controller_grpc_transport.JobControllerGrpcTransport(
        address='{}-dataproc.googleapis.com:443'.format(region)))
dataproc_job_client = dataproc_v1.JobControllerClient(job_transport)

job_file = <gs://bucket/path/to/main.py or hdfs://file/path/to/main/job.py>

# command-line arguments for the main job file
args = ['arg1', 'arg2']

# required only if the main Python job file imports from other modules;
# each can be a .py, .zip, or .egg file
additional_python_files = ['hdfs://path/to/deps.zip', 'gs://path/to/moredeps.zip']

job_details = {
    'placement': {
        'cluster_name': cluster_name
    },
    'pyspark_job': {
        'main_python_file_uri': job_file,
        'args': args,
        'python_file_uris': additional_python_files
    }
}

res = dataproc_job_client.submit_job(project_id=project_id,
                                     region=region, 
                                     job=job_details)
job_id = res.reference.job_id

print(f'Submitted dataproc job id: {job_id}')
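
Optionally, you can wait for the submitted job to finish by polling it. A short sketch reusing the client and job_id above (the status handling follows the pre-2.0 client library used in this answer and is an assumption):

import time

# Poll until the job reaches a terminal state
while True:
    job = dataproc_job_client.get_job(project_id, region, job_id)
    state = job.status.State.Name(job.status.state)
    if state in ('DONE', 'ERROR', 'CANCELLED'):
        print(f'Job {job_id} finished with state {state}')
        break
    time.sleep(10)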

Upvotes: 5

Igor Dvorzhak

Reputation: 4465

If you want to preserve the project structure when submitting a Dataproc job, you should package your project into a .zip file and specify it in the --py-files parameter when submitting the job:

gcloud dataproc jobs submit pyspark --cluster=$CLUSTER_NAME --region=$REGION \
  --py-files libs.zip \
  run/script.py

To create the zip archive, run:

cd main_directory/
zip -r libs.zip . -x run/script.py
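
Alternatively, a minimal Python sketch (run from the parent of main_directory, matching the layout in the question) that builds the same archive with the zipfile module while preserving relative paths, which also avoids the name-collision problem of zip -j mentioned in the question:

import os
import zipfile

# Store each file under its path relative to the project root so the
# package structure inside libs.zip matches the imports in script.py.
with zipfile.ZipFile('libs.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk('main_directory'):
        for name in files:
            path = os.path.join(root, name)
            arcname = os.path.relpath(path, 'main_directory')
            if arcname == os.path.join('run', 'script.py'):
                continue  # the driver script is submitted separately
            zf.write(path, arcname)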

Refer to this blog post for more details on how to package dependencies in a zip archive for PySpark jobs.

Upvotes: 1
