Adrien Givry

Reputation: 965

Visual Studio C++ Multiple Project Solution Setup

0. Disclaimer

This question is only about Visual Studio C++ project/solution configuration and may involve subjectivity.

However, the idea behind this post is to share our approaches to configuring a large Visual Studio solution.

I'm not considering any tool like CMake/Premake here.

1. Problem

2. Personal approach

2.1. Context

I'm a software developer at a video game company, so I will use a very simplified game engine architecture as an illustration:

[Image: simplified game engine architecture]

2.2. File Structure

My Visual Studio solution would probably look something like this:

[Image: Visual Studio solution structure]

Where Application is an executable and every other project is a dynamically linked library.

My approach would be to separate each project into two folders: include and src.

[Image: project split into include and src folders]

And the inner structure would be separated into folders following my namespaces:

[Image: inner folder structure following the namespaces]

2.3. Project Configuration

The following lines assume there is only one $(Configuration) and $(Platform) available (e.g. Release-x64) and that the required .lib files are referenced in Linker/Input/Additional Dependencies for each project.

Here I define a Bin (output), a Bin-Int (intermediate output) and a Build (organized output) folder; let's say they are all located in $(SolutionDir).

The Bin and Bin-Int folders are playgrounds for the compiler, while the Build folder is populated by each project's post-build event:

This way, each $(SolutionDir)Build\$(ProjectName)\ can be shared as an independent library.
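As a sketch, the output directories and the post-build copy step for one of these projects could be configured like this in its .vcxproj (the folder layout follows the conventions above; the exact copy commands are illustrative):

```xml
<!-- Illustrative fragment of a library's .vcxproj -->
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <OutDir>$(SolutionDir)Bin\$(ProjectName)\</OutDir>
  <IntDir>$(SolutionDir)Bin-Int\$(ProjectName)\</IntDir>
</PropertyGroup>
<ItemDefinitionGroup>
  <PostBuildEvent>
    <Command>
      xcopy /Y /S /I "$(ProjectDir)include" "$(SolutionDir)Build\$(ProjectName)\include"
      copy /Y "$(OutDir)$(TargetName).dll" "$(SolutionDir)Build\$(ProjectName)\bin\"
      copy /Y "$(OutDir)$(TargetName).lib" "$(SolutionDir)Build\$(ProjectName)\lib\"
    </Command>
  </PostBuildEvent>
</ItemDefinitionGroup>
```

A real post-build step would also need to create the bin\ and lib\ target folders first (e.g. with mkdir) if they don't exist yet.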

Note: The following explanations may omit $(SolutionDir) from the Build folder path to simplify reading.

If B depends on A, Build\B\include\ will contain both B's and A's includes. In the same way, Build\B\bin\ will contain B's and A's binaries, and Build\B\lib\ will contain B's and A's .lib files (if and only if B is meant to expose A to its users; otherwise, only B's .lib files are added to Build\B\lib\).

Projects reference each other through the Build\ folders. Thus, if B depends on A, B's include path will reference $(SolutionDir)Build\A\include\ (and not $(SolutionDir)A\include\), so any include used by A is available to B without specifying it explicitly (but this leads to the technical limitations described in sections 2.6.2., 2.6.3. and 2.6.4.).
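For instance, if B depends on A, B's compiler and linker settings could reference the Build\ folders like this (a sketch; only the relevant entries are shown):

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <!-- A's staged headers, not the original $(SolutionDir)A\include\ -->
    <AdditionalIncludeDirectories>$(SolutionDir)Build\A\include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
  <Link>
    <AdditionalLibraryDirectories>$(SolutionDir)Build\A\lib\;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <AdditionalDependencies>A.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
</ItemDefinitionGroup>
```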

After that, I make sure that my solution has a proper Project Dependencies configuration, so that the Build Order, when building the whole solution, respects my project dependencies.

2.4. User Project Configuration

Our EngineSDK user (working on Application) will only have to set up Application as follows:

This is the typical Visual Studio configuration flow for many C++ libraries.
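Concretely, an Application project consuming such an SDK usually needs only three settings plus a DLL copy step. In this sketch, $(EngineSdkDir) is a hypothetical user macro pointing at the delivered Build\Engine\ folder:

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>$(EngineSdkDir)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
  <Link>
    <AdditionalLibraryDirectories>$(EngineSdkDir)lib\;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <AdditionalDependencies>Engine.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
  <PostBuildEvent>
    <!-- DLLs must sit next to the executable at run time -->
    <Command>xcopy /Y "$(EngineSdkDir)bin\*.dll" "$(OutDir)"</Command>
  </PostBuildEvent>
</ItemDefinitionGroup>
```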

Common library folder architecture that I try to preserve:

lib\
include\
bin\

Here are some examples of libraries using this folder architecture model (do note that bin exists only for dynamically linked libraries, as statically linked libraries don't ship DLLs):

2.5. Advantages

2.6. Technical limitations

The approach I described here has some drawbacks. These limitations are the reason for this post, as I want to improve it:

2.6.1. Tedious configuration

Dealing with 10 or fewer projects is fine; however, with bigger solutions (15+ projects), it can quickly become a mess. Project configurations are very rigid, and a small change in project architecture can result in hours of project configuration and debugging.

2.6.2. Post-build limitation

Let's consider a simple dependency chain: C depends on B, which depends on A.

When changing the source code of A and then compiling it, Build\A\ gets updated. However, since B was compiled earlier (before the changes to A), its Build\B\ folder contains a copy of the previous A binaries and includes. Thus, executing C (which is only aware of B as a dependency) will use the old A binaries/includes. A workaround I found for this problem is to manually trigger B's post-build event before executing C. However, forgetting to trigger an intermediate project's post-build can result in headaches during debugging (symbols not loaded, wrong behaviour...).

2.6.3. Multiple times single header reference

Another limitation of this approach is "multiple times single header reference".

This problem can be explained by considering the project dependency image in section 2.1. Since Graphics and Physics both include Maths headers, and Engine includes both Build\Graphics\include\ and Build\Physics\include\, typing a header name will show multiple identical results:

[Image: IntelliSense showing multiple identical header suggestions]

2.6.4. De-synchronized symbol referencing

If B depends on A and any header in A changes (for instance, we add a new function), a Rescan File/Rescan Solution will be needed to access the new symbol from B.

Also, navigating to a file or symbol can take us to the wrong header (the copied header instead of the original one).

3. Interrogations and learning perspectives

3.1. Project Reference

During my journey as a Visual Studio software developer, I came across the project Reference concept, but I can't figure out how it could solve the technical limitations of my current approach, nor how it could help me rethink it.
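For reference, a project reference is declared like this in the consuming project's .vcxproj (or via Add/Reference in the IDE); with Link Library Dependencies enabled, Visual Studio links A's output .lib automatically and infers the build order from the reference:

```xml
<ItemGroup>
  <!-- B references A; no manual Additional Dependencies entry is needed -->
  <ProjectReference Include="..\A\A.vcxproj" />
</ItemGroup>
```

Because the reference always resolves to A's freshly built output rather than a staged copy, it could at least avoid the stale-binary problem of section 2.6.2. for in-solution dependencies.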

3.2. Property sheets

As every project configuration in my solution follows the same principle, but the content (include dirs, library dirs...) differs for each one, I'm not sure how to make good use of property sheets.
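One possible split (a sketch): move everything that is identical across projects into a shared sheet, and keep only the per-project dependency list in each .vcxproj. A shared Common.props could look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <OutDir>$(SolutionDir)Bin\$(ProjectName)\</OutDir>
    <IntDir>$(SolutionDir)Bin-Int\$(ProjectName)\</IntDir>
    <!-- Hypothetical macro every project can use in its include/lib paths -->
    <BuildDir>$(SolutionDir)Build\</BuildDir>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>$(ProjectDir)include\;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```

Each project would then only add its own entries on top of the sheet, e.g. $(BuildDir)A\include\.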

3.3. Exploring GitHub

Currently I'm struggling to find good project architecture references. I would be pleased to find some Visual Studio configured solutions on GitHub or any code sharing platform. (I know that CMake and Premake are preferred in most cases when sharing code; however, learning more about Visual Studio project configuration is my actual goal.)

Thanks for reading; I hope you are also interested in discussing this subject, and maybe we can share our approaches.

Upvotes: 10

Views: 3739

Answers (1)

ralfe

Reputation: 1454

I think the general approach is good. I would say, however, that it is very important to implement a facade for each of the libraries. As the solution evolves over time, you might find that it becomes necessary to carve out some libraries into a separate microservice or application. Keeping a facade will provide sufficient architectural flexibility that the function calls to the facade can remain as is, but the implementation of the facade changes from function calls of the library to API calls of the microservice/application.

Another consideration is the data layer. If this will remain a small project then don't worry, but if the intention is for this to evolve into a larger enterprise-grade system, then read on. Even if all libraries start off sharing the same database server, it is very important to be strictly disciplined so that each library has its own database (ideally), or at least its own set of tables, and does not access those of the other libraries. It is very tempting to query across all databases/tables on the same server, but doing so introduces tight coupling between the logical components, which will result in increased cost and risk when decoupling them into separate microservices later.

The reasons you would want to consider carving out libraries into microservices are increased robustness, flexibility in scaling, high-availability options (i.e. the user interface might not need to be as scalable and highly available as the core business logic components), and differing requirements around component lifecycle and asset management. For example, if a separate developer or team takes over a component which is currently a library, it might be better to carve it out so that they can manage its deployment lifecycle independently.

PS: Another advantage you did not note, but is worth considering, is the ability to do clean dependency injection. This can be quite powerful in complex systems.

Upvotes: 0
