Reputation: 8072
I've got a repository with a series of documents (MultiMarkdown files, PDFs, GSN arguments, etc.) that need to be assembled into an HTML-like document using our internal (currently proprietary) tool. The internal tool is quite complicated to use and isn't (yet) deployable.
What I tried was compiling the internal tool on the Ubuntu VM that I knew would be used for this job, and then not telling GitLab (we're using self-hosted GitLab) to use any Docker image when it assembled the documents. Alas, when the CI job was run, I saw:
Pulling docker image alpine:latest ...
And then, of course, none of the stuff I installed on the VM itself was available.
NB: The current methodology for "installing" the complicated internal tool, in addition to installing a number of packages via apt-get, etc. (which I already have examples of how to do in Docker), is to clone the tool's repository and then run npm install and rake install in the cloned directory.
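Roughly, the install sequence looks like this (the repository URL and directory name are placeholders, not the real internal tool):
git clone https://gitlab.example.com/tools/internal-assembler.git
cd internal-assembler
npm install    # JavaScript dependencies
rake install   # installs the tool itself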
Upvotes: 0
Views: 172
Reputation: 40861
This is controlled by your GitLab Runner configuration. When the runner uses the docker executor, it will always use a Docker image for the build. If you want to run a GitLab CI job without Docker, you will need to configure a GitLab runner with the "shell" executor on your VM.
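As a rough sketch, registering a shell runner on that VM looks something like this (the URL and token are placeholders, and the exact flags depend on your GitLab Runner version); the resulting entry in config.toml will have executor = "shell":
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <PROJECT_REGISTRATION_TOKEN> \
  --executor shell \
  --description "docs-vm shell runner"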
However, using image: ubuntu:focal or similar is likely enough. You usually don't have to be concerned about the fact that an executor happens to run your job inside a container. This is also beneficial: it means your build environment is reproducible and the setup process is defined in the job itself.
myjob:
  image: ubuntu:focal
  script:
    - apt update && apt install -y nodejs ruby # or whatever else
    # - npm install
    # - gem install
    # - rake install
    # etc...
Or better yet, if you can produce a Docker image with your core dependencies installed, you can just use image: my-special-image in your GitLab job to use that image as your build environment.
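A minimal sketch of such an image, assuming placeholder values for the internal tool's repository URL and the registry path:
# Dockerfile: bakes the internal tool and its dependencies into a reusable build image
FROM ubuntu:focal
RUN apt-get update && apt-get install -y git nodejs npm ruby rake \
    && rm -rf /var/lib/apt/lists/*
# Clone and install the internal tool (placeholder URL)
RUN git clone https://gitlab.example.com/tools/internal-assembler.git /opt/internal-assembler \
    && cd /opt/internal-assembler \
    && npm install \
    && rake install
Build it and push it to a registry your runner can reach (for example your GitLab container registry), then point image: at that tag in .gitlab-ci.yml:
docker build -t registry.gitlab.example.com/group/my-special-image:latest .
docker push registry.gitlab.example.com/group/my-special-image:latest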
Upvotes: 1