Martijn Muurman


How to decrease MSBuild times

My situation

In the C# project I am now working on, we have a fairly big solution (80+ projects). Rebuild times of 5+ minutes are becoming quite a problem when using MSBuild from Visual Studio 2008.

In an analysis I did last week it turned out that my build time was spent as follows:

  1. Copying files to the project output folders and recopying them to the projects that depend on them (CopyLocal), etc. (60%)

  2. Invoking a post-build step to decompile/compile. (20%)

  3. Doing the actual compilation, etc. (20%)

Apart from the 'normal' project bin\debug folders, the output is also copied to an external directory to set up the main 'loader' program. The main program structure is a bit like this:

\loader\bin\loader.exe

\loader\plugin\plugin1\plugin1.dll

\loader\plugin\plugin1\somedependency.dll

What I did

In an attempt to make things go a little faster I thought of the following:

  1. Copy all the files to one big bin directory and don't use CopyLocal. I don't like this because we can no longer use different versions of the same DLL files, and my bin directory is getting to be quite a mess.

  2. Use parallelism (/m) for MSBuild. This helps only very little with build times (see the command-line sketch after this list).

  3. Try to reduce dependencies between projects, which is always a good thing, of course.

  4. Invest in hardware. I found some research on solid-state drives, but this does not seem promising.
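
For reference, this is roughly what the parallel invocation from point 2 looks like on the command line. A minimal sketch; the solution name and configuration are placeholders for your own:

    rem one MSBuild node per CPU core (solution name is illustrative)
    msbuild MySolution.sln /m /p:Configuration=Debug

    rem or cap the number of parallel nodes explicitly
    msbuild MySolution.sln /m:2 /p:Configuration=Debug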

My question

I also noticed that when I make a change to a project that is at the root of my dependency tree, everything gets rebuilt. Even if the change was only in the 'private' part and the interface of the project did not change.

Does MSBuild use a timestamp of dependent projects to determine if a project needs a rebuild?

Can this be changed to a different condition? For example, the checksum of the file?

Apart from this specific question, I would certainly appreciate any suggestions to make build times faster.

Upvotes: 23

Views: 17160

Answers (4)

Slava Imeshev

Reputation: 1390

MSBuild build time is a multi-dimensional problem. The good news is that it is easy to solve:

Unlike most of the processes running on a typical machine, build processes are notorious for consuming CPU, RAM, and I/O. A general recipe for speeding up MSBuild builds is "get the best machine money can buy", in particular:

  1. CPU - at least a dual-core 3.0 GHz Intel Core 2 Duo.

  2. RAM - at least 4 GB DDR3. If this is a combined developer and build machine, a 64-bit OS and 8 GB of RAM are the better option.

  3. HDD - the fastest option is a high-end 3ware RAID-1 controller with an on-board battery and the write cache enabled. A fast SSD may be another option to consider.

  4. Network - minimum 1 Gbit/s card.

This simple hardware optimization can speed up your MSBuild builds 2-3 times.

Upvotes: 1

Precipitous

Reputation: 5343

We also have huge solutions. Building and compiling are all about I/O.

Solid-state drives are very promising. A co-worker put a solid-state drive in his laptop and found that it is now much faster than his humongous main development box. I don't have the details, but he claims it is many times faster.

We've been fiddling with solution folders to group parts of the project: this makes it easier for devs to unload projects they aren't working on.

/m rarely helps with .NET debug builds. I ran a series of tests on this a few weeks ago and found only minor differences. Findings:

  • Because my build is I/O constrained, using /m:2-4 makes debug builds slower for me.
  • Release builds are usually much faster.
  • Code analysis adds a lot of time.

Big picture: in reality, compilation is a pretty minor cost for me compared to getting source and running unit tests. On our build server, the integration tests and packaging take almost all the time. For my development box, I schedule a batch file that gets source, builds, and runs unit tests before I come to work and while I'm at lunch. Good enough.
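
The batch file itself can be very simple. A sketch, assuming Subversion and NUnit purely for illustration; the paths, solution name, and tooling are placeholders for whatever you actually use:

    rem nightly.cmd -- gets source, builds, runs unit tests (all names illustrative)
    svn update C:\src\BigSolution
    msbuild C:\src\BigSolution\BigSolution.sln /t:Rebuild /p:Configuration=Debug
    nunit-console C:\src\BigSolution\Tests\bin\Debug\Tests.dll

    rem registered once (outside the script) to run every morning:
    rem   schtasks /create /tn NightlyBuild /tr C:\scripts\nightly.cmd /sc daily /st 06:00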

On the build server, it's more complicated. We're thinking of setting up chained parallel CruiseControl.NET builds on various machines. We are using VSTS build, but it is too expensive (and time-consuming) to scale horizontally like this.

My numbers, for detail-oriented folks. Configurations from slowest to fastest, running msbuild "bigsolution.sln" /target:clean between each run:

  1. /m:4, Debug with code analysis: 1:38
  2. /m:1, Debug with code analysis: 1:30
  3. /m:4, Debug with no code analysis: 1:30
  4. /m:1, Debug with no code analysis: 1:30
  5. /m:1, Release with code analysis: 1:30
  6. /m:4, Release with code analysis: 1:05
  7. /m:4, Release with no code analysis: 0:53

Build time without rebuild or clean: ~ 4-10 seconds
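
For what it's worth, the code-analysis cost in the numbers above can also be toggled from the command line. A sketch; the solution name is a placeholder, and RunCodeAnalysis is the project property behind Visual Studio's "Enable Code Analysis" setting:

    rem release build, 4 nodes, FxCop code analysis switched off
    msbuild bigsolution.sln /m:4 /p:Configuration=Release /p:RunCodeAnalysis=false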

Upvotes: 4

rxantos

Reputation: 1849

If you have enough RAM and are using Visual C++ (not sure about C#), you can speed things up by copying the whole include and lib directories to a RAM drive.

You can also place the temporary build items on the RAM drive. Of course, you need a massive amount of RAM. A 2 GB RAM drive would be enough for the includes and libraries, but the space needed for the temporary files (*.obj, etc.) depends on the project, so it might be between 1 and 4 GB extra, or more.
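
A sketch of the setup with the free ImDisk driver; any RAM-disk tool works, and the size and Visual Studio paths below are illustrative only:

    rem create and format a 2 GB RAM disk as drive R:
    imdisk -a -t vm -s 2G -m R: -p "/fs:ntfs /q /y"

    rem copy the compiler's include and lib directories onto it
    xcopy /e /i "C:\Program Files\Microsoft Visual Studio 9.0\VC\include" R:\include
    xcopy /e /i "C:\Program Files\Microsoft Visual Studio 9.0\VC\lib" R:\lib

You would then point the project's include/library directories, and the intermediate directory for the temporary files, at R: in the project settings.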

Upvotes: 1

Ludwo

Reputation: 6173

I'm working on 500+ C# application projects. Projects are compiled in parallel and CopyLocal is set to false. Compile time is about 37 min without unit tests and code coverage, and 13 min for an incremental build without any change in the code.

If I turn off parallel compilation and set CopyLocal to true, the compile time is then 1 h 40 min.

I have different configurations for local builds, gated check-in builds, and server builds with a deploy phase (night builds).

Here are my experiences:

  1. Copying output files to one directory is not a good idea if you build your projects in parallel and don't set CopyLocal to false. My assemblies were sometimes locked when multiple projects referenced the same assembly and MSBuild tried to copy this reference to the output folder at the same time. Setting CopyLocal to false for all references was very helpful for me, and my build directory size was lowered 10x (10 times less I/O). I have a different setup for the local build and for the server build, and different setups for the gated check-in build and for the full deploy build. (See the project-file sketch after this list.)
  2. If I enable parallel builds, builds are faster, much faster. If you have a strong build server, your /m:2 build should be about 2x faster than an /m:1 build. It has nothing to do with dependencies between projects (if CopyLocal is set to false).
  3. You should reduce dependencies between the projects if you want fast incremental builds. It has no impact on a full build (with CopyLocal false). Incremental compile time depends on the changed project's location in the build tree.
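
For illustration, turning Copy Local off amounts to this fragment per reference in the project file; the assembly name and path are placeholders. In Visual Studio it is the reference's "Copy Local" property:

    <Reference Include="SomeDependency">
      <HintPath>..\..\lib\SomeDependency.dll</HintPath>
      <!-- Copy Local = false -->
      <Private>False</Private>
    </Reference>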

Yes, MSBuild uses the timestamps of dependent projects to determine whether a project needs a rebuild. It compares the timestamps of the input files (code files, referenced assemblies, temporary files, ...) with the timestamp of the output assembly. If something has changed, your project is recompiled. Try to reduce the number of dependencies between projects to minimize recompilation. If your change was only in the 'private' part of the project, the output assembly will still change, its timestamp will change, and all dependent projects will be rebuilt as well. You cannot do much about this.

Run your build twice with diagnostic verbosity without any change in your code and check for "Building target "CoreCompile" completely" like I described here. You can have something wrong in your project files that causes your projects to be recompiled every time. If you don't change anything, your build log should not contain any "Building target "CoreCompile" completely" entries.
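
A sketch of such a check; the solution and log names are placeholders, and /fl and /flp are MSBuild's file-logger switches:

    rem run this twice, changing no code in between
    msbuild MySolution.sln /fl /flp:verbosity=diagnostic;logfile=build2.log

    rem hits here on the second run mean targets are rebuilding "completely"
    findstr /c:"completely" build2.log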

Our build server is a virtual machine, not a real piece of hardware. It is not a good idea to use a VM for a build server, but it was not my decision.

If you have multiple GB of RAM, try to use part of it as an in-memory hard drive. Your build should be much faster :)

SSD drives are sensitive to a high volume of writes per day, which has an impact on the warranty.

I hope it helps someone ... ;)

Upvotes: 29
