rainu

Reputation: 763

Starting a worker node in Docker and connecting to the master running on the host OS

I was experimenting with running Spark in standalone mode. The master and a worker node are up and running on the host OS.

I am trying to start a Docker container to run as a worker node. The host OS is Ubuntu 18.04 (64 bit). The container's Dockerfile, which runs Alpine Linux, is below.

### Dockerfile for creating the Spark worker image


#set the base image to anapsix/alpine-java
#(a headless OpenJDK 8 image)
FROM anapsix/alpine-java

#install a few required dependencies in the Alpine Linux OS
#upgrade all packages of the running system
#wget is used to download the Hadoop/Spark binaries
#git is needed because some required Alpine software is hosted in git repos
#unzip is used to extract the downloaded files
#Py4J enables Python programs running in a Python interpreter
#to dynamically access Java objects in a JVM
RUN apk update --no-cache && apk upgrade --no-cache && \
    apk add --no-cache wget \
            git \
            unzip \
            python3 \
            python3-dev && \
            pip3 install --no-cache-dir --upgrade pip -U py4j && \
            cd /home && \
            wget http://www-eu.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz && \
            tar -xvf spark-2.3.1-bin-hadoop2.7.tgz && \
            rm -rf spark-2.3.1-bin-hadoop2.7.tgz && \
            rm -rf /var/cache/* && \
            rm -rf /root/.cache/*

# set some environment variables for the container

# fix the hash randomization seed to a constant integer
ENV PYTHONHASHSEED 2

ENV SPARK_HOME /home/spark-2.3.1-bin-hadoop2.7
ENV PYSPARK_PYTHON python3
ENV PATH $PATH:$SPARK_HOME/bin
WORKDIR $SPARK_HOME
ENTRYPOINT $SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker $MYMASTER

I created the image from the above Dockerfile with the command below:

docker build -t spkworker .

The image was created successfully.
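
Since the ENTRYPOINT is written in shell form, $MYMASTER is expanded by the shell when the container starts. As a quick sanity check (a sketch, assuming the base image ships /bin/sh, which Alpine does), you can override the entrypoint and print the expanded variable:

# hypothetical check: print the master URL instead of starting the worker
docker run --rm --env MYMASTER=spark://127.0.1.1:7077 --entrypoint /bin/sh spkworker -c 'echo "$MYMASTER"'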

The problem occurs while bringing up the worker node with the command below. The Dockerfile has a variable, $MYMASTER, that is supposed to pass the master URL to the worker.

The run command is below; I am passing the master node URL in the env variable.

docker run spkworker --name worker1 --env MYMASTER=spark://127.0.1.1:7077

It fails with the error message:

2018-08-05 18:00:57 INFO  Worker:2611 - Started daemon with process name: 8@44bb0d682a48
2018-08-05 18:00:57 INFO  SignalUtils:54 - Registered signal handler for TERM
2018-08-05 18:00:57 INFO  SignalUtils:54 - Registered signal handler for HUP
2018-08-05 18:00:57 INFO  SignalUtils:54 - Registered signal handler for INT
Usage: Worker [options] <master>

Master must be a URL of the form spark://hostname:port

Options:
  -c CORES, --cores CORES  Number of cores to use
  -m MEM, --memory MEM     Amount of memory to use (e.g. 1000M, 2G)
  -d DIR, --work-dir DIR   Directory to run apps in (default: SPARK_HOME/work)
  -i HOST, --ip IP         Hostname to listen on (deprecated, please use --host or -h)
  -h HOST, --host HOST     Hostname to listen on
  -p PORT, --port PORT     Port to listen on (default: random)
  --webui-port PORT        Port for web UI (default: 8081)
  --properties-file FILE   Path to a custom Spark properties file.
                           Default is conf/spark-defaults.conf.

How do I pass the master node details to start the worker node?

Upvotes: 0

Views: 1679

Answers (1)

Manuel Ortiz

Reputation: 693

The worker node and the master are on different networks. A possible solution is to tell the container (the worker node) to use its host's network:

docker run --net=host --name worker1 --env MYMASTER=spark://$HOSTNAME:7077 spkworker
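
With --net=host the worker shares the host's network stack, so spark://$HOSTNAME:7077 resolves to the master listening on the host. To confirm registration (assuming the default ports and the container name worker1), open the master web UI at http://localhost:8080 or, in a second terminal, look for the registration message in the worker log:

docker logs worker1 | grep -i 'registered with master'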

Upvotes: 3
