Reputation: 33
I typically install packages in EMR through Spark's install_pypi_package method, which limits where I can install packages from. How can I install a package from a specific GitHub branch? Is there a way I can do this through the install_pypi_package method?
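For context, a typical call from an EMR notebook looks like this (the package name is only an illustration):
sc.install_pypi_package("celery")  # resolves from PyPI, not from arbitrary git URLs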
Upvotes: 1
Views: 236
Reputation: 20445
If you have access to the cluster creation step, you can install the package from GitHub using pip in a bootstrap action. (install_pypi_package exists for the case where the cluster is already running, since at that point a plain pip install might not resolve on all nodes.)
Installing before the cluster is running:
A simple example of a bootstrap script (e.g. a download.sh file) that installs from GitHub using pip is
#!/bin/bash
sudo pip install git+https://github.com/<your-user>/<your-repo>.git@<branch-name>
then you can reference this script as a bootstrap action when creating the cluster:
aws emr create-cluster --name "Test cluster" --bootstrap-actions Path="s3://elasticmapreduce/bootstrap-actions/download.sh"
or you can use pip3 in the bootstrap script:
sudo pip3 install git+https://github.com/<your-user>/<your-repo>.git@<branch-name>
or just clone the repository, check out the branch you need, and build it on EMR with its setup.py file:
#!/bin/bash
git clone -b <branch-name> <your-repo>.git
cd <repo-directory>
sudo python setup.py install
After the cluster is running (complex and not recommended):
If you still want to install or build a custom package when the cluster is already running, AWS has some explanation here that uses the AWS-RunShellScript document to install the package on all core nodes. In short:
(I) Install the package on the master node (by running pip install on the running cluster via a shell, or via a Jupyter notebook on top of it).
(II) Run the following script locally against EMR, passing the cluster ID and the S3 path of an install script (e.g. download.sh above, named install_libraries.sh in the code below) as arguments.
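The contents of that install script are not shown in the AWS sample; a minimal sketch, assuming pip3 and a hypothetical repository and branch, could be:
#!/bin/bash
# Hypothetical example: install one package from a specific GitHub branch.
sudo pip3 install git+https://github.com/<your-user>/<your-repo>.git@<branch-name>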
import argparse
import time

import boto3


def install_libraries_on_core_nodes(
        cluster_id, script_path, emr_client, ssm_client):
    """
    Copies and runs a shell script on the core nodes in the cluster.

    :param cluster_id: The ID of the cluster.
    :param script_path: The path to the script, typically an Amazon S3 object URL.
    :param emr_client: The Boto3 Amazon EMR client.
    :param ssm_client: The Boto3 AWS Systems Manager client.
    """
    core_nodes = emr_client.list_instances(
        ClusterId=cluster_id, InstanceGroupTypes=['CORE'])['Instances']
    core_instance_ids = [node['Ec2InstanceId'] for node in core_nodes]
    print(f"Found core instances: {core_instance_ids}.")

    commands = [
        # Copy the shell script from Amazon S3 to each node instance.
        f"aws s3 cp {script_path} /home/hadoop",
        # Run the shell script to install libraries on each node instance.
        "bash /home/hadoop/install_libraries.sh"]
    for command in commands:
        print(f"Sending '{command}' to core instances...")
        command_id = ssm_client.send_command(
            InstanceIds=core_instance_ids,
            DocumentName='AWS-RunShellScript',
            Parameters={"commands": [command]},
            TimeoutSeconds=3600)['Command']['CommandId']
        while True:
            # Verify the previous step succeeded before running the next step.
            cmd_result = ssm_client.list_commands(
                CommandId=command_id)['Commands'][0]
            if cmd_result['StatusDetails'] == 'Success':
                print("Command succeeded.")
                break
            elif cmd_result['StatusDetails'] in ['Pending', 'InProgress']:
                print(f"Command status is {cmd_result['StatusDetails']}, waiting...")
                time.sleep(10)
            else:
                print(f"Command status is {cmd_result['StatusDetails']}, quitting.")
                raise RuntimeError(
                    f"Command {command} failed to run. "
                    f"Details: {cmd_result['StatusDetails']}")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('cluster_id', help="The ID of the cluster.")
    parser.add_argument('script_path', help="The path to the script in Amazon S3.")
    args = parser.parse_args()
    emr_client = boto3.client('emr')
    ssm_client = boto3.client('ssm')
    install_libraries_on_core_nodes(
        args.cluster_id, args.script_path, emr_client, ssm_client)


if __name__ == '__main__':
    main()
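Assuming you save the script as install_libraries.py, a hypothetical invocation (the cluster ID and bucket name are placeholders) would be:
python install_libraries.py j-2AXXXXXXXXXXX s3://<your-bucket>/install_libraries.sh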
Upvotes: 1