M. Tibbits

Reputation: 8630

How can I improve the speed of my Makefile?

I am building multiple binaries in C++ & CUDA, with a couple of files in Fortran. I found this question and I'm having a similar problem. A user recently asked me to re-build a three-year-old version of the repository (from before we performed a massive migration and renaming), and I was shocked to see how quickly it built. It would be impossible, or at least incredibly time-consuming, to determine exactly which of the changes between that version and now caused the build to take so friggin' long.

However, I noticed this in a comment on an answer to the aforementioned question:

In particular, remember to use := instead of =, as := does the expansion immediately, which saves time. – Jack Kelly Mar 23 at 22:38

Are there other suggestions that I should be aware of?

Note: here are my CUDA compilation rules:

#
# CUDA Compilation Rules
#

define cuda-compile-rule
  $1: $(call generated-source,$2) \
    $(call source-dir-to-build-dir, $(subst .cu,.cubin, $2)) \
    $(call source-dir-to-build-dir, $(subst .cu,.ptx, $2))
    $(NVCC) $(CUBIN_ARCH_FLAG) $(NVCCFLAGS) $(INCFLAGS) $(DEFINES) -o $$@ -c $$<

  $(call source-dir-to-build-dir, $(subst .cu,.cubin, $2)): $(call generated-source,$2)
    $(NVCC) -cubin -Xptxas -v $(CUBIN_ARCH_FLAG) $(NVCCFLAGS) $(INCFLAGS) $(DEFINES) $(SMVERSIONFLAGS) -o $$@ $$<

  $(call source-dir-to-build-dir, $(subst .cu,.ptx, $2)): $(call generated-source,$2)
    $(NVCC) -ptx $(CUBIN_ARCH_FLAG) $(NVCCFLAGS) $(INCFLAGS) $(DEFINES) $(SMVERSIONFLAGS) -o $$@ $$<

  $(subst .o,.d,$1): $(call generated-source,$2)
    $(NVCC) $(CUBIN_ARCH_FLAG) $(NVCCFLAGS) $3 $(TARGET_ARCH) $(INCFLAGS) $(DEFINES) -M $$< | \
    $(SED) 's,\($$(notdir $$*)\.o\) *:,$$(dir $$@)\1 $$@: ,' > $$@.tmp
    $(MV) $$@.tmp $$@
endef
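
For readers unfamiliar with this pattern: a `define` like the one above only produces rule text, so it has to be instantiated with `$(eval $(call ...))`, presumably once per .cu file elsewhere in the 914-line Makefile. A hypothetical sketch of that instantiation (CUDA_SRCS and the exact argument order are my assumptions; the helper functions are the ones used above):

# Hypothetical instantiation: generates the .o/.cubin/.ptx/.d rules for each CUDA source.
CUDA_SRCS := kernels/foo.cu kernels/bar.cu

$(foreach src,$(CUDA_SRCS),\
  $(eval $(call cuda-compile-rule,$(call source-dir-to-build-dir,$(src:.cu=.o)),$(src))))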

Lastly: How can I determine whether it's the compilation time or the make time that is really slowing things down?

I didn't want to append the entire Makefile. It's 914 lines, but I'd be happy to update the question with snippets if it would help.

Update: Here is my dependency generation rule & compile rule:

#
# Dependency Generation Rules
#

define dependency-rules

  $(subst .o,.d,$1): $2
    $(CC) $(CFLAGS) $(DEFINES) $(INCFLAGS) $3 $(TARGET_ARCH) -M $$< | \
    $(SED) 's,\($$(notdir $$*)\.o\) *:,$$(dir $$@)\1 $$@: ,' > $$@.tmp
    $(MV) $$@.tmp $$@

endef

%.d: %.cpp
    $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -M $< | \
    $(SED) 's,\($(notdir $*)\.o\) *:,$(dir $@)\1 $@: ,' > $@.tmp
    $(MV) $@.tmp $@
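
For comparison only (this is not from the original Makefile): with a GCC/Clang-style compiler, the .d files can instead be produced as a side effect of compilation, which removes the separate -M pass over every source file. A minimal sketch, reusing the question's variable names where possible; SRCS is a placeholder and the recipe line must be tab-indented:

# Sketch: dependency files are written while compiling each .o, so there is
# no separate %.d rule and no extra preprocessor invocation per file.
SRCS := $(wildcard *.cpp)        # placeholder source list
OBJS := $(SRCS:.cpp=.o)
DEPS := $(OBJS:.o=.d)

%.o: %.cpp
    $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -MMD -MP -MF $(@:.o=.d) -c -o $@ $<

# Missing .d files on the first build are not an error.
-include $(DEPS)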

Update 2: Using @Beta's suggestion, I was able to separate out the dependency generation, and the Makefile processing time turned out to be roughly 14.2% of the overall compile time. So I'm going to focus on minimizing header inclusion in my C++ code first. Thanks to both of you for your suggestions!!

Upvotes: 1

Views: 8394

Answers (3)

pmod

Reputation: 10997

I really doubt that make's variable assignment style (immediate with := or recursive with =) can have a significant impact on speed in general. One specific and obvious case where it does have a serious impact is a shell command:

VAR := $(shell ...)

There could be other hidden, time-consuming operations which are not obvious. For example, in our environment the standard temporary Windows directory was on a network drive, so whenever make stored or updated files on that drive (even over a 1G LAN) it was very slow. What you need to do is debug your makefile(s). This may be helpful.
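
To make the difference concrete, here is a tiny self-contained demonstration (the variable names and the sleep are purely illustrative; the recipe lines must be tab-indented):

# With recursive '=', the $(shell ...) runs again every time the variable is expanded;
# with ':=', it runs exactly once, while the makefile is being read.
SLOW  = $(shell sleep 1; echo value)
FAST := $(shell sleep 1; echo value)

demo:
    @echo $(SLOW) $(SLOW) $(SLOW)    # pays the one-second cost three times
    @echo $(FAST) $(FAST) $(FAST)    # cost already paid while parsing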

According to the doc mentioned above, you can put debug prints in the form of $(warning Going to do bla-bla-bla) and then watch where the process stalls the longest.
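
For example (the file and section names below are placeholders; the idea is just to bracket the parts of your real makefile you suspect are slow and watch which "start:" message stays on screen the longest):

$(warning start: reading common definitions)
include common.mk
$(warning done: reading common definitions)

$(warning start: building source lists)
SRCS := $(shell find src -name '*.cpp')
$(warning done: building source lists)

$(warning start: generating per-target rules)
# ... $(eval $(call ...)) blocks ...
$(warning done: generating per-target rules)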

Upvotes: 1

Eric Melski

Reputation: 16790

ElectricMake (emake) is a drop-in replacement for gmake that makes it really, really easy to answer questions like this. emake can generate an annotated build log that includes detailed timing information about every job in the build, and then you can load that into ElectricInsight to generate, for example, the Job Time by Type report:

(screenshot: ElectricInsight "Job Time by Type" report)

If you want to give it a try, you can get an eval copy.

(disclaimer: I'm the architect and lead developer of ElectricMake and ElectricInsight!)

Upvotes: 2

Beta

Reputation: 99094

  1. It shouldn't be all that difficult to determine which changes slowed everything down. You have all the versions from the past three years (I hope), and you say the difference is dramatic. So try the version from two years ago; if it's taking too long, do a binary search. You could even automate this process: have it run overnight and give you a graph in the morning, with the build time sampled for each month of the past 36 months.
  2. If you're using GNU Make (as I hope), `make -n` will print the commands it would execute without actually executing them. That gives you all of the Make time with none of the compilation time.
  3. One of the biggest sources of wasted build time (even bigger than recursion, which you aren't using) is unnecessary rebuilding: recompiling/relinking things when you don't really need to. This can be because your makefile doesn't handle dependencies correctly, or because your C++ files `#include` headers recklessly, or something about CUDA or FORTRAN that I wouldn't know. Run Make twice in a row and see whether it does anything on the second pass. Look over the makefile for suspiciously huge prerequisite lists. Have a skillful programmer look at a few of the source files, especially the newer ones, and check for unnecessary dependencies.

Upvotes: 4
