user4076675

Reputation: 109

What is the purpose of creating object files separately and then linking them together in a Makefile?

When using the gcc compiler, it can compile and link in one step. However, it appears to be idiomatic to turn source files into object files first and then link them at the end. To me, this seems unnecessary. Not only does it clutter your directories with a bunch of object files, it also complicates the Makefile, when you could simply pass all the source files to the compiler. For example, here's what I consider to be simple:

.PHONY: all

SOURCES = $(wildcard *.cpp)

all: default
default:
    g++ $(SOURCES) -o test

Which neatly becomes:

g++ main.cpp test.cpp -o test

However, more complicated Makefiles that use pattern rules clutter the output with a separate command for each and every file. For example:

.PHONY: all

SOURCES = $(wildcard *.cpp)
OBJECTS = $(SOURCES:.cpp=.o)

%.o: %.cpp
    g++ -c -o $@ $<

all: default
default: $(OBJECTS)
    g++ -o test $^

clean:
    rm -rf *.o

Which produces:

g++ -c -o main.o main.cpp
g++ -c -o test.o test.cpp
g++ -o test main.o test.o

To me, this seems unnecessarily complicated and error-prone. So what are the reason(s) for this practice?

Upvotes: 10

Views: 3029

Answers (4)

Harsh Verma

Reputation: 923

I will explain why making object files leads to faster compilation.

Consider the following analogy:

You are learning to build a car from scratch (i.e. you have lots of metal, and rubber for the wheels). You have all the machines required to make the major parts of the car (the frame, the wheels, the engine, etc.). Assume each machine builds one particular part of the car; for example, say you have a frame-building machine. Assume it takes a significant amount of time to get a machine ready, because you have to read a very complicated manual :(

Scenario 1: You spend half a day getting all the machines ready, then one hour building the parts and soldering them together to finish the car. Then you turn off all the machines you used (which wipes all your custom settings). Later on, however, you realize that you made the wrong engine. Since you soldered all the parts of the car together, you cannot just replace the engine, so you have to make all the parts again. You spend another half day getting the machines ready (by rereading the manual) and another hour making all the parts and joining them. Painful!

Scenario 2: You get the machines ready in half a day, and you note down everything you did while readying them in a notebook. You make all the parts of the car in an hour, solder them together to finish the car, and turn off the machines. Later on, however, you realize that you made the wrong engine. You have to make all the parts again. Because you kept track of everything in your notebook, getting the machines ready now takes only 10 minutes. You again spend an hour making the parts and joining them together. This saves you a lot of time (almost half a day).

  • The complicated manual you read is the source file.
  • The notebook with your notes is the object file.
  • The final car is the binary file.

Object files are intermediate results: they let you avoid redoing all of the compilation (getting the machines ready) by recording most of the hard work you have already done, in a form suitable for producing the binary file. Object files have other purposes too! You should read about them if this excites you :D.
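The analogy can be seen in action with make itself. Below is a minimal, self-contained sketch (not from the answer): it writes two dummy source files and a Makefile into /tmp/make-demo, and uses cp and cat as stand-ins for the compiler and linker so it runs anywhere GNU make is available.

```shell
# Minimal demonstration of make's timestamp logic ("the notebook").
# cp/cat stand in for g++ so no compiler is needed.
mkdir -p /tmp/make-demo && cd /tmp/make-demo
printf 'int main() { return 0; }\n' > main.cpp
printf '// helper code\n' > test.cpp
printf 'test: main.o test.o\n\tcat main.o test.o > test\n%%.o: %%.cpp\n\tcp $< $@\n' > Makefile
make        # first run: builds main.o, test.o, then test
make        # second run: everything is up to date, nothing is redone
sleep 1 && touch test.cpp
make        # only test.o is remade and test relinked; main.o is reused
```

On the third `make`, only the "part" whose "manual" changed is rebuilt; the other object file (the notebook entry) is reused as-is.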

Upvotes: 3

Michaël Le Barbier

Reputation: 6468

Why do you want to write a Makefile rather than a simple shell script? In the example that you consider simple, you make no use of any feature of make; you could just as well write a short shell script that understands the keywords build and clean, and that's it!

So you are really asking about the point of writing Makefiles instead of shell scripts, and that is what I will address in my answer.

Also note that in the simple case where we compile and link three moderately sized files, any approach is likely to be satisfying. I will therefore consider the general case, but keep in mind that many benefits of Makefiles only matter on larger projects. Once we have learned the tool that lets us master complicated cases, we want to use it in simple cases as well.

The procedural paradigm of shell scripts is wrong for compilation-like jobs

Writing a Makefile is similar to writing a shell script with a slight change of perspective. In a shell script, we describe a procedural solution to a problem: we can start by describing the whole procedure in very abstract terms using undefined functions, then refine this description until we reach the most elementary level, where a procedure is just a plain shell command. In a Makefile, we do not introduce any abstraction; instead we focus on the files we want to produce and how we can produce them. This works well because in UNIX everything is a file: each treatment is accomplished by a program that reads its input data from input files, does some computation, and writes the results to some output files.

If we want to compute something complicated, we have to use a lot of input files, which are treated by programs whose outputs are used as inputs to other programs, and so on, until we have produced the final files containing our result. If we translate the plan for preparing our final file into a bunch of procedures in a shell script, then the current state of the processing is implicit: the plan executor knows “where it is at” because it is executing a given procedure, which implicitly guarantees that such and such computations have already been done, that is, that such and such intermediary files have already been prepared. Now, which data describes “where the plan executor is at”?

Innocuous observation: The data describing “where the plan executor is at” is precisely the set of intermediary files that have already been prepared, and this is exactly the data made explicit when we write Makefiles.

This innocuous observation is actually the conceptual difference between shell scripts and Makefiles, and it explains all the advantages of Makefiles over shell scripts for compilation jobs and similar tasks. Of course, to fully appreciate these advantages, we have to write correct Makefiles, which may be hard for beginners.

Make makes it easy to continue an interrupted task where it was at

When we describe a compilation job with a Makefile, we can easily interrupt it and resume it later. This is a consequence of the innocuous observation. A similar effect can only be achieved with considerable effort in a shell script.

Make makes it easy to work with several builds of a project

You observed that Makefiles clutter the source tree with object files. But Makefiles can be parametrised to store these object files in a dedicated directory, and advanced Makefiles even allow us to keep several directories at once, containing several builds of a project with distinct options (for instance, with distinct features enabled, or debug versions, etc.). This is also a consequence of the innocuous observation that Makefiles are articulated around the set of intermediary files.
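Concretely, the dedicated-directory variant only needs the object paths rewritten. A minimal sketch, assuming the same SOURCES variable as in the question (the build/ directory name and the order-only prerequisite are illustrative choices, not prescribed by the answer):

```make
SOURCES  := $(wildcard *.cpp)
BUILDDIR := build
OBJECTS  := $(SOURCES:%.cpp=$(BUILDDIR)/%.o)

# Objects go under build/, keeping the source tree clean.
$(BUILDDIR)/%.o: %.cpp | $(BUILDDIR)
	g++ -c -o $@ $<

$(BUILDDIR):
	mkdir -p $@

test: $(OBJECTS)
	g++ -o $@ $^
```

Pointing BUILDDIR at, say, build-debug/ with different compiler flags then gives a second, independent build of the same source tree.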

Make makes it easy to parallelise builds

We can easily build a program in parallel, since this is a standard feature of many versions of make. This is also a consequence of the innocuous observation: because “where the plan executor is at” is explicit data in a Makefile, make can reason about it. Achieving a similar effect in a shell script would require great effort.
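With GNU make, the parallelism is a single flag. A tiny self-contained sketch (written to /tmp/parallel-demo; the target names are made up): two targets with no dependency between them, which make is free to build concurrently.

```shell
# a.o and b.o do not depend on each other, so with -j2 GNU make
# may run both recipes at the same time.
mkdir -p /tmp/parallel-demo && cd /tmp/parallel-demo
printf 'all: a.o b.o\na.o:\n\ttouch a.o\nb.o:\n\ttouch b.o\n' > Makefile
make -j2
```

Because the dependency graph is explicit, make knows these two recipes are independent, which is exactly what makes parallel execution safe.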

Makefiles are easily extensible

Because of this special perspective (that is, as another consequence of the innocuous observation), Makefiles are easy to extend. For instance, if we decide that all our database I/O boilerplate code should be written by an automatic tool, we just have to state in the Makefile which files the automatic tool should use as inputs to write the boilerplate code. Nothing less, nothing more. And we can add this description pretty much wherever we like; make will pick it up anyway. Doing such an extension in a shell-script build would be harder than necessary.
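As a sketch of such an extension (the tool name generate_db_io and the file names are hypothetical, used purely for illustration): one extra rule tells make how the boilerplate is produced, and the rest of the build picks it up automatically.

```make
# Hypothetical: db_io.cpp is boilerplate generated from a schema file.
# generate_db_io is a made-up tool name, for illustration only.
db_io.cpp: db_schema.sql
	generate_db_io $< > $@

# Existing pattern rules then compile db_io.cpp like any other source,
# and it is regenerated whenever db_schema.sql changes.
```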

This ease of extension is a great incentive for Makefile code reuse.

Upvotes: 13

Mats Petersson

Reputation: 129374

Well, the argument for "compile everything every time" can be seen here:

http://xkcd.com/303/

but joking aside, it's MUCH faster to recompile one file when you have made a small change than to recompile everything every time. My Pascal compiler project is not very large, but it still takes about 35 seconds to compile.

Using make -j3 (that is, running 3 compile jobs at once; I'm currently on my spare computer, which has only a dual-core processor), the compilation takes about ten seconds less, but you can't use -j3 if you don't have multiple compile jobs to run.

Recompiling only one (of the larger) modules takes 16 seconds.

I know which I'd rather wait for: 16 seconds or 35...

Upvotes: 4

Chris

Reputation: 2763

The two big reasons at the top of a list of many reasons for me are:

  • You can compile multiple source files at the same time, decreasing build time
  • If you change one file, you only recompile that one file instead of recompiling everything

Upvotes: 7
