Reputation: 347636
How do YOU reduce compile time and linking time for VC++ projects (native C++)?
Please specify if each suggestion applies to debug, release, or both.
Upvotes: 40
Views: 25167
Reputation: 416
This is an old post and I think FASTBuild deserves to be called out here.
If you are stuck with long build times on code that you can't change, parallelization and caching are the way to go. IncrediBuild gives you that, as other commenters have mentioned, but as of this writing it is a paid product once your team is larger than two developers.
FASTBuild is an open-source build parallelization and caching tool. I have had good success speeding up builds with it and would recommend it for C++ projects that take upwards of 10 minutes to build. FASTBuild does two things: it distributes compilation in parallel across cores and machines, and it caches object files so unchanged translation units are not rebuilt.
Upvotes: 0
Reputation: 179
If you have many cpp files which are linked repeatedly into various executables, compile those cpp files into a library. Then link against the library. For example (with CMake):
set(TYPICAL_SOURCES_TO_LINK
    src/a.cpp
    src/b.cpp
    src/c.cpp
    ...
    src/z.cpp)

add_executable(main_1 ${TYPICAL_SOURCES_TO_LINK}
    src/main_1.cpp)
...
add_executable(main_27 ${TYPICAL_SOURCES_TO_LINK}
    src/main_27.cpp)
becomes...
add_library(common_behavior_lib STATIC ${TYPICAL_SOURCES_TO_LINK})
add_executable(main_1 src/main_1.cpp)
target_link_libraries(main_1 common_behavior_lib)
...
add_executable(main_2 src/main_2.cpp)
target_link_libraries(main_2 common_behavior_lib)
This reduced my compile time from 7 minutes to 2.5 minutes.
Upvotes: 0
Reputation: 496
Another useful technique is blobbing. I think it is similar to what Matt Shaw described.
Simply put, you create one cpp file in which you include other cpp files. You may have two different project configurations, one ordinary and one blob. Of course, blobbing puts some constraints on your code; e.g. class names in unnamed namespaces may clash.
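A minimal sketch of what such a blob file might look like (the file names are hypothetical):

// blob_core.cpp -- a hypothetical blob translation unit; when the blob configuration
// is active, the included .cpp files are excluded from being compiled individually.
#include "a.cpp"
#include "b.cpp"
#include "c.cpp"
// Everything above now shares one translation unit, so unnamed-namespace names,
// file-scope statics and macros from the included files can collide.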
One technique to avoid recompiling the whole blob when you change a single cpp file (as David Rodríguez mentioned) is to keep a "working" blob built from recently modified files, alongside the other, ordinary blobs.
We use blobbing at work most of the time, and it reduces project build time, especially link time.
Upvotes: 1
Reputation: 69
With Visual C++, there is a method, which some refer to as Unity builds, that improves link time significantly by reducing the number of object modules.
This involves concatenating the C++ source files, usually grouped by library. It of course makes editing the code much more difficult, and you will run into name collisions unless you use namespaces carefully; in particular, it keeps you from writing "using namespace foo;" in source files.
Several teams at our company have elaborate systems to take the normal C++ files and concatenate them at compile time as a build step. The reduction in link times can be enormous.
Upvotes: 2
Reputation: 507433
It may sound obvious, but we try to use forward declarations as much as possible, even if that means writing out the long namespace names the types live in:
// Forward declaration stuff
namespace plotter { namespace logic { class Plotter; } }

// Real stuff
namespace plotter {
    namespace samples {
        class Window {
            logic::Plotter * mPlotter;
            // ...
        };
    }
}
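Only the source file that actually uses the Plotter has to include its full definition; a rough sketch, with hypothetical header paths and members:

// Window.cpp -- the only translation unit that pays for the full Plotter header.
#include "samples/Window.h"   // hypothetical header containing the class above
#include "logic/Plotter.h"    // full definition of logic::Plotter, needed only here

namespace plotter { namespace samples {
    void Window::draw() {     // assumes Window declares a draw() member
        mPlotter->render();   // assumes Plotter has a render() member
    }
} }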
It greatly reduces compile time on other compilers as well. And yes, it applies to all configurations :)
Upvotes: 48
Reputation: 11
Compile Time:
If you have IncrediBuild, compile time won't be a problem. If you don't have IncrediBuild, try the "unity build" method. It combines multiple cpp files into a single cpp file, so the overall compile time is reduced.
Link Time:
The "unity build" method also contribute to reduce the link time but not much. How ever, you can check if the "Whole global optimization" and "LTCG" are enabled, while these flags make the program fast, they DO make the link SLOW.
Try turning off the "Whole Global Optimization" and set LTCG to "Default" the link time might be reduced by 5/6.
(LTCG stands for Link Time Code Generation)
Upvotes: 0
Reputation: 28583
We use Xoreax's Incredibuild to run compilation in parallel across multiple machines.
Upvotes: 7
Reputation: 19899
The compile speed question is interesting enough that Stroustrup has it in his FAQ.
Upvotes: 7
Reputation: 18572
Also an interesting article from Ned Batchelder: http://nedbatchelder.com/blog/200401/speeding_c_links.html (about C++ on Windows).
Upvotes: 5
Reputation: 64032
These solutions apply to both debug and release, and are focused on a codebase that is already large and cumbersome.
Forward declarations are a common solution.
Distributed building, such as with Incredibuild, is a win.
Pushing code from headers down into source files can work. Small classes, constants, enums and so on often start out in a header file simply because they could be used by multiple compilation units, but in reality they are only used in one, and could be moved to the cpp file.
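A minimal before/after sketch of that kind of move (all names are hypothetical):

// world_builder.cpp -- constants and a helper struct that used to live in a shared
// header but are only used here, moved into an unnamed namespace so no other
// translation unit recompiles when they change.
namespace {
    const int kMaxRegions = 64;          // was a header-level constant
    struct RegionScratchBuffer {         // was a header-only helper struct
        char data[4096];
    };
}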
A solution I haven't read about but have used is to split large headers. If you have a handful of very large headers, take a look at them. They may contain related information, and may also depend on a lot of other headers. Take the elements that have no dependencies on other files (simple structs, constants, enums and forward declarations) and move them from the_world.h to the_world_defs.h. You may find that a lot of your source files can then include only the_world_defs.h and avoid pulling in all that overhead.
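A minimal sketch of the split (the contents shown are hypothetical):

// the_world_defs.h -- self-contained pieces with no dependencies on other headers.
#pragma once
enum Biome { Desert, Forest, Ocean };    // hypothetical enum
struct WorldCoord { int x; int y; };     // hypothetical simple struct
class World;                             // forward declaration

// the_world.h -- the heavyweight part, now needed by far fewer source files.
#pragma once
#include "the_world_defs.h"
#include "renderer.h"    // hypothetical heavy dependencies stay here
#include "physics.h"
class World { /* full definition ... */ };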
Visual Studio also has a "Show Includes" option that can give you a sense of which source files include many headers and which header files are most frequently included.
For very common includes, consider putting them in a pre-compiled header.
Upvotes: 8
Reputation: 135463
Use the Handle/Body pattern (also sometimes known as "pimpl", "adapter", "decorator", "bridge" or "wrapper"). By isolating the implementation of your classes in your .cpp files, it needs to be compiled only once. Most changes do not touch the header file, which means you can make fairly extensive changes while requiring only one file to be recompiled. This also encourages refactoring, commenting and unit testing, since compile times are shorter. Additionally, you automatically separate the concerns of interface and implementation, so the interface of your code is simplified.
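A minimal sketch of the idea (class and member names are hypothetical; copy control is omitted for brevity):

// widget.h -- clients see only this; it does not change when the implementation does.
#pragma once
class Widget {
public:
    Widget();
    ~Widget();
    void draw();
private:
    struct Impl;          // implementation details hidden behind a pointer
    Impl * pImpl;         // real code should also disable copying or use a smart pointer
};

// widget.cpp -- the only file that recompiles when the implementation changes.
#include "widget.h"
struct Widget::Impl {
    int frameCount;       // private state, invisible to clients of widget.h
};
Widget::Widget() : pImpl(new Impl) { pImpl->frameCount = 0; }
Widget::~Widget() { delete pImpl; }
void Widget::draw() { ++pImpl->frameCount; }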
Upvotes: 21
Reputation: 116764
If you have large complex headers that must be included by most of the .cpp files in your build process, and which are not changed very often, you can precompile them. In a Visual C++ project with a typical configuration, this is simply a matter of including them in stdafx.h. This feature has its detractors, but libraries that make full use of templates tend to have a lot of stuff in headers, and precompiled headers are the simplest way to speed up builds in that case.
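A minimal sketch of what typically goes into the precompiled header (the specific includes are just examples):

// stdafx.h -- precompile large, rarely-changing headers once and reuse the result.
#pragma once
#include <windows.h>    // big platform header
#include <vector>       // heavily templated standard library headers
#include <string>
#include <map>

// stdafx.cpp contains nothing but #include "stdafx.h" and is built with /Yc"stdafx.h";
// every other .cpp includes stdafx.h first and is built with /Yu"stdafx.h", so the
// parsed header state is loaded from the .pch file instead of being re-parsed.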
Upvotes: 14
Reputation:
Our development machines are all quad-core, and we use Visual Studio 2008, which supports parallel compiling. I am uncertain as to whether all editions of VS can do this.
We have a solution file with approximately 168 individual projects, and compiling this way takes about 25 minutes on our quad-core machines, compared to about 90 minutes on the single-core laptops we give to summer students. Not exactly comparable machines, but you get the idea :)
Upvotes: 4