We're developing a Rust back-end and debugging in minikube. I have a single Dockerfile which builds the entire workspace; I then deploy that one image with a different `cargo run` command for each microservice.
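Roughly, the setup looks like this (a sketch only: the base image, and the `service-a` binary name are placeholders, not our real ones; `/home/app` matches the path in the build logs below):

```dockerfile
# Sketch: one image that builds every workspace member
FROM rust:1.78
WORKDIR /home/app
COPY . .
# Fetch with the network available, then build fully offline
RUN cargo fetch --locked && cargo build --offline --frozen
```

Each Kubernetes Deployment then uses the same image with a different command, e.g. `command: ["cargo", "run", "--offline", "--bin", "service-a"]`.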
When I run `docker build` from the command line, it takes about 3 minutes to build the binaries (i.e. the `cargo build` step):

```
=> [10/10] RUN cargo fetch --locked && cargo build --offline --frozen --timings --verbose  186.3s
=> exporting to image                                                                      134.2s
```
When I run `skaffold build`, it takes about 20 minutes to build the binaries. The extra time appears to be spent in the linking phase, as the `cargo build` step prints its last compiler warning after 104 seconds:

```
#14 104.5  = note: `#[warn(dead_code)]` on by default
#14 104.5
#14 1192.5 warning: `xx` (bin "xx") generated 9 warnings (run `cargo fix --bin "xx"` to apply 7 suggestions)
#14 1223.4 Timing report saved to /home/app/target/cargo-timings/cargo-timing-20240531T162451Z.html
#14 1223.4 Finished `dev` profile [unoptimized + debuginfo] target(s) in 19m 52s
#14 DONE 1256.5s
```
Things I've done so far to try to minimise build times and differences:
In order to rule out network issues, my Dockerfile does a `cargo fetch` followed by a `cargo build --offline`.
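This is the relevant Dockerfile step, reconstructed from the build log above (the flags are exactly those in the log; anything else is an assumption):

```dockerfile
# Resolve and download dependencies up front, then build with the
# network disabled so download speed can't affect the timing comparison
RUN cargo fetch --locked && cargo build --offline --frozen --timings --verbose
```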
I have the following in my skaffold.yaml, so that Docker builds always use BuildKit in either command:

```yaml
build:
  local:
    useBuildkit: true
    concurrency: 0
```
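(If I understand the skaffold docs correctly, `concurrency: 0` lifts the cap on how many artifacts build in parallel, where the local default is 1; it shouldn't change how a single artifact's build behaves.)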
These build times basically make using `skaffold dev` impractical.
I've disabled LTO in my Cargo.toml for the `dev` profile:

```toml
[profile.dev]
lto = "off"
```
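As far as I know, `lto = "off"` is the strongest setting here: unlike `lto = false`, which still allows thin local LTO, `"off"` disables LTO entirely, so the linker should be doing the minimum possible work in both builds.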
Any ideas on the likely cause?