Reputation: 965
This question is only about Visual Studio C++ project/solution configuration and may involve subjectivity.
However, the idea behind this post is to share our approaches to configure a large Visual Studio solution.
I'm not considering tools like CMake or Premake here.
I'm a software developer for a video game company, so I will take a very simplified game engine architecture to illustrate my words:
My Visual Studio solution would probably look something like this:
Where Application is an executable and every other project is a dynamically linked library.
My approach would be to separate each project into two folders: include and src.
The inner structure would then be separated into folders following my namespaces:
The following lines assume there is only one $(Configuration) and $(Platform) available (e.g. Release-x64) and that .lib files are referenced in Linker/Input/Additional Dependencies for each project.
I would here define a Bin (output), Bin-Int (intermediate output) and Build (organized output) folder, located in $(SolutionDir):
$(SolutionDir)Bin\
$(SolutionDir)Bin-Int\
$(SolutionDir)Build\
The Bin and Bin-Int folders are playgrounds for the compiler, while the Build folder is populated by each project's post-build event:
$(SolutionDir)Build\$(ProjectName)\include\ (project includes)
$(SolutionDir)Build\$(ProjectName)\lib\ (.lib files)
$(SolutionDir)Build\$(ProjectName)\bin\ (.dll files)
This way, each $(SolutionDir)Build\$(ProjectName)\ can be shared as an independent library.
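For reference, such a post-build event could be expressed directly in the .vcxproj (or typed into Build Events/Post-Build Event). A minimal sketch, assuming each project keeps its public headers in an include\ folder; the exact folder names are illustrative:

```xml
<!-- Sketch of a post-build event populating $(SolutionDir)Build\$(ProjectName)\.
     Assumes public headers live in $(ProjectDir)include\. -->
<ItemDefinitionGroup>
  <PostBuildEvent>
    <Command>
      xcopy /E /I /Y "$(ProjectDir)include" "$(SolutionDir)Build\$(ProjectName)\include"
      xcopy /I /Y "$(TargetDir)$(TargetName).lib" "$(SolutionDir)Build\$(ProjectName)\lib\"
      xcopy /I /Y "$(TargetPath)" "$(SolutionDir)Build\$(ProjectName)\bin\"
    </Command>
    <Message>Populating $(SolutionDir)Build\$(ProjectName)\</Message>
  </PostBuildEvent>
</ItemDefinitionGroup>
```

/E copies subdirectories (including empty ones), /I treats the destination as a directory, and /Y suppresses overwrite prompts.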
Note: The following explanations may omit $(SolutionDir) from the Build folder path for readability.
If B depends on A, Build\B\include\ will contain both B and A includes. In the same way, Build\B\bin\ will contain B and A binaries, and Build\B\lib\ will contain B and A .lib files (if and only if B is willing to expose A to its users; otherwise, only B .lib files will be added to Build\B\lib\).
Projects reference each other relative to the Build\ folders. Thus, if B depends on A, B's include path will reference $(SolutionDir)Build\A\include\ (and not $(SolutionDir)A\include\), so any include used by A will be available to B without specifying it explicitly. (But this leads to the technical limitations described in sections 2.6.2., 2.6.3. and 2.6.4.)
After that, I make sure my solution has a proper Project Dependencies configuration, so that the Build Order, when building the whole solution, respects my project dependencies.
Our EngineSDK user (working on Application) will then only have to set up Application with:
$(SolutionDir)Build\Engine\include\ as an include directory
$(SolutionDir)Build\Engine\lib\ as a library directory
a copy of $(SolutionDir)Build\Engine\bin\* to $(OutDir)
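In .vcxproj terms, that user-side setup might look roughly like this (a sketch showing only the relevant elements; Engine.lib is assumed to be the import library's name):

```xml
<!-- Sketch of Application's consumption of the Engine SDK. -->
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>$(SolutionDir)Build\Engine\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
  <Link>
    <AdditionalLibraryDirectories>$(SolutionDir)Build\Engine\lib\;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <AdditionalDependencies>Engine.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
  <PostBuildEvent>
    <!-- Copy the Engine DLLs next to the executable so they are found at runtime. -->
    <Command>xcopy /Y "$(SolutionDir)Build\Engine\bin\*" "$(OutDir)"</Command>
  </PostBuildEvent>
</ItemDefinitionGroup>
```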
This is the typical Visual Studio configuration flow of many C++ libraries.
Common library folder architecture that I try to preserve:
lib\
include\
bin\
Here are some examples of libraries using this folder architecture model (note that bin\ is exclusively for dynamically linked libraries, as statically linked libraries don't ship DLLs):
The approach I described here has some drawbacks. These limitations are the reason for this post, as I want to improve:
Dealing with 10 or fewer projects is fine; however, with bigger solutions (15+ projects), it can quickly become a mess. Project configurations are very rigid, and a small change in project architecture can result in hours of project configuration and debugging.
Let's consider a simple dependency case:
C depends on B, and B depends on A.
C is an executable; B and A are libraries.
B and A post-build events update their Build\$(ProjectName)\ directories.
When changing the source code of A and compiling it, Build\A\ gets updated. However, as B was compiled earlier (before the changes to A), its Build\B\ folder contains a copy of the old A binaries and includes. Thus, executing C (which is only aware of B as a dependency) will use outdated A binaries/includes. A workaround I found for this problem is to manually trigger B's post-build event before executing C. However, forgetting to trigger an intermediate project's post-build can result in headaches during debugging (symbols not loaded, wrong behaviour...).
Another limitation of this approach is the "single header referenced multiple times" problem.
This problem can be explained by considering the project dependency image in section 2.1. Given that Graphics and Physics both include Maths headers, and that Engine includes Build\Graphics\include\ and Build\Physics\include\, typing a header name will show multiple identical results:
If B depends on A and any header changes in A (for instance, we add a new function), a Rescan File/Rescan Solution will be needed to access the new symbol from B.
Also, navigating to a file or symbol can take us to the wrong header (the copied header instead of the original one).
During my Visual Studio software developer journey, I came across the project Reference concept, but I can't figure out how it can solve the technical limitations of my current approach, nor how it can help me rethink it.
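For context, a project reference is declared in the consuming .vcxproj roughly like this (the GUID is a placeholder); with Link Library Dependencies enabled, Visual Studio links the referenced project's .lib automatically and derives the build order from the reference, without any solution-level Project Dependencies entry:

```xml
<!-- Sketch: B referencing A. The GUID is a placeholder for A's ProjectGuid. -->
<ItemGroup>
  <ProjectReference Include="..\A\A.vcxproj">
    <Project>{AAAAAAAA-0000-0000-0000-000000000000}</Project>
  </ProjectReference>
</ItemGroup>
```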
As every project configuration in my solution follows the same principle, but the content (include dirs, library dirs...) differs for each one, I'm not sure how to make good use of property sheets.
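One common pattern is to put the shared principle in a .props file, imported by every project through the Property Manager, so each .vcxproj only carries its own differences. A sketch under that assumption (file name and folder layout are illustrative):

```xml
<!-- Common.props: shared rules; $(ProjectName) makes the paths per-project. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <OutDir>$(SolutionDir)Bin\$(ProjectName)\</OutDir>
    <IntDir>$(SolutionDir)Bin-Int\$(ProjectName)\</IntDir>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <ClCompile>
      <!-- Every project exposes its own include\ folder the same way. -->
      <AdditionalIncludeDirectories>$(ProjectDir)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```

Per-project dependencies (which Build\X\include\ and Build\X\lib\ to reference) would still live in each project, but the boilerplate that is identical everywhere moves into the sheet.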
Currently I'm struggling to find good project architecture references. I would be pleased to find a Visual Studio configured solution on GitHub or any code sharing platform. (I know that CMake and Premake are preferred in most cases when sharing code; however, learning more about Visual Studio project configuration is my actual goal.)
Thanks for reading, I hope you are also interested in discussing this subject, and maybe we can share our approaches.
Upvotes: 10
Views: 3739
Reputation: 1454
I think the general approach is good. I would say, however, that it is very important to implement a facade for each of the libraries. As the solution evolves over time, you might find that it becomes necessary to carve out some libraries into a separate microservice or application. Keeping a facade will provide sufficient architectural flexibility that the function calls to the facade can remain as is, but the implementation of the facade changes from function calls of the library to API calls of the microservice/application.
Another consideration is the data layer. If this will remain a small project then don't worry, but if the intention is for this to evolve into a larger enterprise-grade system, then read on. Even if all libraries start off sharing the same database server, it is very important to be strictly disciplined that each library has its own database (ideally) or at least its own set of tables, and does not access those of the other libraries. It is very tempting to query across all databases/tables on the same server, but if you do this then you introduce tight coupling between the logical components. This will then result in increased cost and risk when decoupling them into separate microservices later.
The reason you would want to consider carving out libraries into microservices would be for increased robustness, flexibility in scaling and high availability options (i.e.: the user interface might not need to be as scalable and highly available as the core business logic components), and for differing requirements around component lifecycle and asset management. For example, if a separate developer or team takes over one component which is currently a library, it might be better to carve that out so that they can manage its deployment lifecycle independently.
PS: Another advantage you did not note, but is worth considering, is the ability to do clean dependency injection. This can be quite powerful in complex systems.
Upvotes: 0