nikitablack

Reputation: 4663

Split compilation among different machines

This is a theory question.

Let's say I have 1000 Raspberry Pi Zero computers and I want all of them to compile a huge C++ project. I can send each machine a source file together with all the dependencies (headers) it needs, and have that machine compile only that single translation unit to produce an object file. After all object files are produced, I need to link them. I'm not sure the linking step can be split; to me it looks like a monolithic process, but let's say I have a dedicated, powerful single machine to do it.

So the question: will it work in theory? Will my project compile 1000 times faster? Is the linking process heavy? How much time does it take compared to the preprocessing/compilation steps?

Upvotes: 2

Views: 290

Answers (1)

Erik Johannessen

Reputation: 101

You can use tools such as distcc to distribute compilation across many computers. How much speedup you get will depend on your particular project (the size and complexity of the source files, the speed of the compilation machines, network latency, etc.). You could run a small-scale test and take measurements.
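With distcc, the setup boils down to listing the build hosts and routing compiler invocations through the `distcc` wrapper. A minimal configuration sketch (the host names are hypothetical placeholders for your machines):

```shell
# Hosts that run the distcc daemon; replace with your machines' names/IPs.
export DISTCC_HOSTS="localhost pi-001 pi-002 pi-003"

# Route compiler calls through distcc and run enough parallel jobs
# to keep the remote machines busy.
make -j16 CXX="distcc g++"
```

Only the compile phase is distributed; preprocessing (unless you use distcc's pump mode) and linking still happen on the local machine, which is consistent with linking being the bottleneck described below.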

As you say, linking is often a monolithic process and can be a bottleneck in a large project. An alternative is to split your project into several shared libraries. Then you don't need a big link step each time you make a small change, but starting the executable may be slower, since some linking must be performed at runtime.

If you make changes to several parts of the project, these libraries can be built and linked in parallel.

How much time is spent in compilation vs. linking depends on your project.

Upvotes: 2
