chrundle

Reputation: 31

Why is my optimization solver running slower in docker?

I am very new to docker and recently wrote a dockerfile to containerize a mathematical optimization solver called SuiteOPT. However, when testing the optimization solver on a few test problems I am experiencing slower performance in docker than outside of docker. For example, one demo problem of a linear program (demoLP.py) takes ~12 seconds to solve on my machine but in docker it takes ~35 seconds. I have spent about a week looking through blogs and stackoverflow posts for solutions but no matter what changes I make the timing in docker is always ~35 seconds. Does anyone have any ideas what might be going on or could anyone point me in the right direction?

Below are links to the docker hub and PYPI page for the optimization solver:

Docker hub for SuiteOPT

PYPI page for SuiteOPT

Edit 1: Adding an additional thought due to a comment from @user3666197. While I did not expect SuiteOPT to perform as well inside the Docker container, I was mainly surprised by the ~3x slowdown on this demo problem. Perhaps the question can be restated as follows: How can I determine whether this slowdown is due purely to the fact that I am executing CPU-, RAM-, and I/O-intensive code inside a Docker container, rather than to some other issue with the configuration of my Dockerfile?

Note: The purpose of this containerization is to provide a simple way for users to get started with the optimization software in Python. While the optimization software is available on PYPI there are many non-python dependencies that could cause issues for people wishing to use the software without running into installation issues.

Upvotes: 3

Views: 1241

Answers (1)

user3666197

Reputation: 1

Q : How can I determine whether this slowdown is due purely to the fact that I am executing CPU-, RAM-, and I/O-intensive code inside a Docker container, rather than to some other issue with the configuration of my Dockerfile?

The battlefield :

[ flowchart image omitted ] ( Credits: Brendan GREGG )

Step 0 : collect data about the host-side run of the processing :


 mpstat -P ALL 1 ### 1 [s] sampled CPU counters in one terminal-session (may log to file)

 python demoLP.py  # <TheWorkloadUnderTest> expected ~ 12 [s] on bare metal system 
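To make the host-vs-container comparison concrete, the workload's wall-clock time can be compared with the CPU time it actually received: if both runs consume similar CPU time but the containerized run's wall-clock time is ~3x longer, the container is waiting (CPU caps, host contention) rather than doing more work. A minimal sketch of that measurement; the busy loop is a hypothetical stand-in for the real demoLP.py solve:

```python
import time

def profile(fn):
    """Run fn and return (wall-clock seconds, CPU seconds) for this process."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return wall, cpu

def busy():
    # Hypothetical CPU-bound stand-in for the solver call (demoLP.py).
    x = 0
    for i in range(10**6):
        x += i * i

wall, cpu = profile(busy)
print(f"wall {wall:.3f} s  cpu {cpu:.3f} s  wall/cpu {wall / cpu:.2f}")
```

Run the same measurement on the host and inside the container: a wall/cpu ratio near 1.0 in both places points at a genuinely slower compute path (e.g. differently built libraries in the image), while a ratio that grows only inside the container points at throttling or contention.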

Step 1 : collect data about the same processing but inside the Docker-container

plus review policies set in --cpus and --cpu-shares ( potentially --memory + --kernel-memory if used )
plus review effects shown in throttled_time ( ref. Pg.13 )

cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat
nr_periods 0
nr_throttled 0
throttled_time 0 <-------------------------------------------------[*] increasing?
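Rather than eyeballing the counters, cpu.stat can be parsed and sampled before and after a solver run; a growing throttled_time confirms that the cgroup CPU cap is biting. A small helper, assuming the cgroup-v1 file layout shown above (cgroup v2 exposes a cpu.stat file with slightly different keys):

```python
def parse_cpu_stat(text):
    """Parse the key/value lines of a cgroup cpu.stat file into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[key] = int(value)
    return stats

# Example with the counters shown above already incremented:
sample = "nr_periods 412\nnr_throttled 57\nthrottled_time 8123456789\n"
stats = parse_cpu_stat(sample)
print(stats["nr_throttled"], stats["throttled_time"])
```

On a live system, read `/sys/fs/cgroup/cpu,cpuacct/cpu.stat`, run the workload, read it again, and diff the two dicts; any increase in nr_throttled / throttled_time is time the container spent forcibly paused by the CPU cap.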

plus review the Docker-container's workload view-from-outside the box by :

cat /proc/<_PID_>/status | grep nonvolu ### in one terminal session
nonvoluntary_ctxt_switches: 6 <------------------------------------[*] increasing?

systemd-cgtop                           ### view <Tasks> <%CPU> <Memory> <In/s> <Out/s>
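The nonvoluntary_ctxt_switches field can likewise be scripted instead of watched by hand; the kernel increments it each time the scheduler preempts the process, so a steeply rising count during the solve means the container's CPU time is being taken away by other work on the host. A sketch that extracts the field from /proc/<pid>/status contents:

```python
def read_nonvoluntary_switches(status_text):
    """Extract nonvoluntary_ctxt_switches from /proc/<pid>/status contents."""
    for line in status_text.splitlines():
        if line.startswith("nonvoluntary_ctxt_switches:"):
            return int(line.split(":")[1])
    return None

# Example with a fragment of a status file:
sample = "voluntary_ctxt_switches:\t150\nnonvoluntary_ctxt_switches:\t6\n"
print(read_nonvoluntary_switches(sample))

# On a live Linux system, sample the current process instead:
# with open("/proc/self/status") as f:
#     print(read_nonvoluntary_switches(f.read()))
```

Sampling this once per second for the solver's PID, both on the host and inside the container, turns the "increasing?" check above into two comparable time series.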

Step 2 :

Check the observed indications against the absolute CPU-cap policy and the CPU-shares policy that were set, using the flowchart above.

Upvotes: 1
