Reputation: 63
I have a CI/CD server that builds Docker images and pushes them to an image repo. All it really does is docker build && docker push; it doesn't run any of the workloads of those images, so it's a small-ish cloud instance.
I'd like to use it to build an image that installs a package via RUN yum install -y somepackage.rpm, but the package I'm installing does a CPU and memory check and refuses to install when fewer than X CPU cores and Y GB of memory are present, and my CI/CD server doesn't meet those thresholds. It makes sense that I'll need to meet them when I actually run this image/workload on another server, but I'd like to work around the limit when all I'm doing is building the image.
Is it possible to fake CPU cores and memory inside the build context? Could I somehow expose 8 CPUs, or whatever, inside a container when the host only has 2?
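For reference, a minimal sketch of what the build looks like (the base image and package name are placeholders for my actual ones):

FROM centos:7
COPY somepackage.rpm /tmp/
# fails here: the package's preinstall check sees the build host's 2 cores
RUN yum install -y /tmp/somepackage.rpm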
Upvotes: 0
Views: 906
Reputation: 11
It depends on how the installer does the check. Often it will look at two things: the output of free -m (or -g) and cat /proc/cpuinfo. It then greps those for a value, so you can fake the output by shadowing the commands with scripts that echo whatever numbers you need, e.g. fake free output such as:

              total        used        free      shared  buff/cache   available
Mem:       32417152    12002948    16547220       48080     3866984    19974332
Swap:      12582908           0    12582908
Paste this script into the command line (in the environment where the install runs), then run your install. It moves the real nproc and free aside and replaces them with shell scripts that print fake values (here 16 cores and 64 GB, as free -g would show):

cd /usr/bin
mv nproc nproc2
mv free free2
# fake nproc: always reports 16 cores
printf '#!/bin/sh\necho 16\n' > nproc
chmod +x nproc
# fake free: ignores any flags and always prints the same table
cat > free <<'EOF'
#!/bin/sh
echo "              total        used        free      shared  buff/cache   available"
echo "Mem:             64           0          48           0          11          39"
echo "Swap:            11           0          11"
EOF
chmod +x free
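Since the question is about docker build, one way to wire this in (a sketch; fake-sysinfo.sh is a hypothetical file holding the script above, and the .rpm path is a placeholder):

# fake-sysinfo.sh contains the swap script above
COPY fake-sysinfo.sh /tmp/
COPY somepackage.rpm /tmp/
RUN sh /tmp/fake-sysinfo.sh \
 && yum install -y /tmp/somepackage.rpm \
 && cd /usr/bin && mv nproc2 nproc && mv free2 free

Doing the swap, the install, and the restore in a single RUN layer means the fake binaries never end up in the final image.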
Upvotes: 1