wheresmycookie

Reputation: 763

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build a JavaScript bundle during deployment, which the other container, nginx, then tries to serve.

I want to use a shared volume to hold the JavaScript bundle after it's built.

So far, I have the following deployment file (with irrelevant pieces removed):

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}

To the best of my ability, I have followed the relevant guides.

One other thing to point out is that I'm currently trying to run this locally using minikube.

EDIT: The Dockerfile I used to build this image is:

FROM node:alpine
WORKDIR /var/app

# Install app dependencies plus the Vue CLI for the build step
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest

# Build the bundle when the container starts (not at image build time)
CMD ["npm", "run", "build"]

I realize that in general I would not need to run the build when the container starts, but my next goal is to insert pod instance information as environment variables. Since the JavaScript build bakes those values into the bundle, I can unfortunately only build once that information is available to me.
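For reference, the standard way to expose that pod information is the Downward API; a minimal sketch of what I have in mind (the variable name POD_NAME is just an example, not something in my manifest yet):

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name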

Problem

The logs from the personal-site container reveal:

-  Building for production...
 ERROR  Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
 Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'

I'm not sure why the build is trying to remove /var/app/dist, but I also have a feeling that this is irrelevant. I could be wrong?

I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".

Question

What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?

Upvotes: 0

Views: 1152

Answers (1)

Abdennour TOUMI

Reputation: 93183

The build fails because vue-cli-service removes and recreates /var/app/dist before building, and that directory is now a volume mount point, which cannot be removed (hence EBUSY). The best way is to customize your image's entrypoint as follows:

  • Once you finish building the /var/app/dist folder, copy (or move) it to another, empty path (e.g. /opt/dist):

    cp -r /var/app/dist/* /opt/dist
    

PAY ATTENTION: this step must be done in the ENTRYPOINT script, not in a RUN layer, because the volume is only mounted at run time (see the sketch after the YAML below).

  • Now use /opt/dist as the builder's mount path instead:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      ...
    spec:
      ...
      template:
        ...
        spec:
          hostNetwork: true
          containers:
          - name: personal-site
            image: wheresmycookie/personal-site:3.1
            volumeMounts:
            - name: build-volume
          mountPath: /opt/dist # <-- must match the destination path in the image's entrypoint
          - name: nginx-server
            image: nginx:1.19.0
            volumeMounts:
            - name: build-volume
              mountPath: /var/app/dist
    
          volumes:
          - name: build-volume
            emptyDir: {}
    
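For illustration, here is a minimal entrypoint sketch (the file name docker-entrypoint.sh and the paths are assumptions; adapt them to your image):

    #!/bin/sh
    set -e
    # Build at container start, when pod environment variables are available
    npm run build
    # Publish the bundle into the shared volume mounted at /opt/dist
    cp -r /var/app/dist/* /opt/dist/

And the corresponding Dockerfile changes, replacing the CMD:

    COPY docker-entrypoint.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/docker-entrypoint.sh
    ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]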

Good luck!

If it's not clear how to customize the entrypoint, share your image's current entrypoint with us and we will implement it.

Upvotes: 2
