Reputation: 969
My aim is to dockerize my simple Angular website and test it locally. Below is the Dockerfile.
# STEP 1 build static website
FROM node:alpine as builder
RUN apk update && apk add --no-cache make git
# Create app directory
WORKDIR /app
# Install app dependencies
COPY package.json package-lock.json /app/
RUN cd /app && npm set progress=false && npm install
# Copy project files into the docker image
COPY . /app
RUN cd /app && npm run build
# STEP 2 build a small nginx image with static website
FROM nginx:alpine
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From 'builder' copy website to default nginx public folder
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I'm executing the following commands with this Dockerfile.
docker build .
docker tag a585f0653f3b abameerdeen/website:latest
docker run -p 80:8000 abameerdeen/website:latest
docker ps
This gives the following result.
Still, I'm unable to access the website at http://localhost:8000.
I tried the following solutions, but the result didn't change. I'm on an Ubuntu machine which also has nginx installed locally.
EXPOSE 80 in Dockerfile and docker run -p 80:8000 abameerdeen/website:latest
EXPOSE 80 in Dockerfile and docker run abameerdeen/website:latest
This is the status of the ports on my local machine.
Edit: Updated port numbers.
Upvotes: 2
Views: 2984
Reputation: 969
I kept changing the Dockerfile and trying different base images, and the following finally worked. I removed the whole nginx step and served the app with Node instead. But this isn't the ideal solution, because I wanted a lightweight image. If someone can suggest the ideal solution, I would accept it as the answer.
# base image
FROM node:9.6.1
# install chrome for protractor tests
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install -g @angular/cli
# add app
COPY . /usr/src/app
#expose the port
EXPOSE 5000
# start app
CMD ng serve --port 5000 --host 0.0.0.0
Then,
docker build . -t my_image_tag
docker run -p 5000:5000 my_image_tag
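Once the container is up, a quick sanity check from the host should return the Angular index page (nothing project-specific assumed here, just the mapped port):

curl http://localhost:5000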
Actually, @Saddam Pojeee's comment prompted me to bash into the container and check whether the files were really there. They weren't, so I changed the image.
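For anyone who wants to do the same check, a minimal sketch, assuming the original nginx-based image and the container ID shown by docker ps:

# <container-id> is a placeholder for the ID printed by docker ps
docker exec -it <container-id> sh
ls /usr/share/nginx/html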
Upvotes: 3
Reputation: 5399
The nginx image exposes port 80 by default, so the EXPOSE line is not strictly necessary... but you do need to map your local port 8000 to port 80 of the container.
So the correct command for running it is: docker run -p 8000:80 abameerdeen/website:latest
This maps the container's port 80 to your local port 8000.
Now you should be able to access http://localhost:8000. If you copied your files to the right location inside the container, you will see the webpage; if not, you should receive an HTTP 404 from nginx.
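One common cause of that 404: with Angular CLI 6 and later, ng build writes to dist/<project-name> rather than dist itself, so the COPY in the question leaves the files one folder deeper than nginx expects. A sketch of the adjusted line, assuming the project is named website (check the actual name in angular.json):

# assumes the Angular project name is 'website'; adjust to match angular.json
COPY --from=builder /app/dist/website /usr/share/nginx/html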
Upvotes: 0