Reputation: 472
I have some experience writing C libraries, but I've never read any formal documents describing good practices for writing such libraries. My question pertains mainly to two topics:
The main thing I can see from my research on binary compatibility is that I can make a library binary compatible by using the pImpl idiom, but changing the structure or adding new data members can affect its binary compatibility even while using pImpl. Also, is there a way to add new methods or functions to a library without breaking binary compatibility? I assume that adding these things would change the size and layout of the library, thus breaking compatibility.
Is there a tool to check binary compatibility?
I have already read these articles. Are there any other docs I can peruse?
http://en.wikipedia.org/wiki/Opaque_pointer
http://techbase.kde.org/Policies/Binary_Compatibility_Issues_With_C++
Also, are there articles which describe memory-ownership issues in the context of designing library interfaces? What are the general conventions: who owns the memory, for how long, and who is responsible for deallocating it?
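For concreteness, here is a minimal sketch of the kind of convention I'm asking about (every name here is hypothetical):

```c
/* Hypothetical header sketch of one common ownership convention:
 * anything the library allocates is released through a matching
 * library-provided function, never through the caller's free(). */
typedef struct foo foo;                /* opaque handle, owned by the caller */

foo  *foo_create(void);                /* library allocates; caller must destroy */
void  foo_destroy(foo *f);             /* the only valid way to release a foo */

/* The returned string is owned by the library and stays valid until
 * foo_destroy(f); the caller must not free() it. */
const char *foo_name(const foo *f);
```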
Upvotes: 23
Views: 7278
Reputation: 16133
Is there a tool to check binary compatibility?
ABI Compliance Checker - a tool for checking backward binary compatibility of a shared C/C++ library (DSO).
Are there any other docs I can peruse?
See this long list of articles on binary compatibility of shared libraries.
How to design interfaces which remain backward compatible?
Using reserved/padding fields is a common technique for preserving the compatibility of C libraries, but there are many others.
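As a sketch of the reserved-fields idea (the struct and its fields are made up for illustration):

```c
/* A public struct that callers may declare and fill in directly.
 * The reserved tail keeps the struct's size and the offsets of the
 * existing members stable, so a future version of the library can
 * take over one reserved slot at a time without breaking the ABI. */
struct widget_options {
    int            width;
    int            height;
    void          *user_data;
    unsigned long  reserved[8];   /* callers must zero-initialize these */
};
```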
Also, is there a way to add new methods or functions to a library without breaking binary compatibility? I assume that adding these things would change the size and layout of the library, thus breaking compatibility.
Adding C functions doesn't break backward binary compatibility of a DSO on Linux and Mac. The same is true on Windows and Symbian, but there you should add new functions only to the end of the .DEF file. Forward compatibility is always broken by added functions, though.
Adding C++ methods breaks binary compatibility if and only if they are virtual or pure virtual, because the layout of the v-table may change. But your question seems to be about C libraries only.
Upvotes: 5
Reputation: 239071
In terms of documentation, How To Write Shared Libraries by Ulrich Drepper is a must-read.
Upvotes: 4
Reputation: 340218
A couple things to add to what R. said:
Since it appears that you're talking about C ABIs and not about C++ ABIs:
changing the structure or adding new data members can affect its binary compatibility even while using pImpl
That shouldn't be the case when using pImpl: if the external users of the object only have an opaque pointer/handle to it, and only the library deals with the internals of the structure, then by definition the only code that deals with the internals of the structure is compatible with it.
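A minimal C sketch of that pattern (the names are illustrative, not from any real library):

```c
#include <stdlib.h>

/* Public header part: callers only ever see an opaque handle. */
typedef struct thing thing;
thing *thing_create(void);
int    thing_get_value(const thing *t);
void   thing_destroy(thing *t);

/* Library-internal part: the layout is private to the library, so members
 * can be added or reordered in a later release without affecting callers,
 * who never know sizeof(struct thing). */
struct thing {
    int value;
    /* future members can be added here */
};

thing *thing_create(void)              { return calloc(1, sizeof(thing)); }
int    thing_get_value(const thing *t) { return t->value; }
void   thing_destroy(thing *t)         { free(t); }
```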
is there a way to add new methods or functions to a library without breaking binary compatibility? I assume that adding these things would change the size and layout of the library, thus breaking compatibility.
Adding new functions or changing the size or layout of the shared library doesn't break binary compatibility. Since the binding to the function address isn't made until the shared library is loaded into the process, a change to the location of the target function isn't a problem.
Upvotes: 3
Reputation: 215287
The key compatibility issues are:

- function signatures,
- the layout of struct types shared with the caller of the library, and
- #define/enum constant values in shared headers.

So the best list of guidelines I can give is:

- Never remove a function or change its signature. If an interface turns out to be wrong, add a new function with the corrected interface alongside the old one (e.g. dup versus dup2, or wait versus waitpid); a sketch of this follows after the list.
- Never add members to struct types that callers declare or access directly; better yet, keep such struct types opaque so callers only ever hold pointers to them.
- Never change or remove old #define/enum constants. Only add new constants with previously-unused values.

If you follow these guidelines, I think you're at least 95% covered.
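For illustration, here is what the first and third guidelines might look like in a hypothetical public header as it evolves (none of these names come from a real library):

```c
/* Version 2 of a hypothetical header: everything from version 1 is kept
 * exactly as it was, and only additions are made. */

/* v1 constants keep their values; LOG_DEBUG is new and takes a
 * previously-unused value. */
enum log_level { LOG_INFO = 0, LOG_WARN = 1, LOG_ERROR = 2, LOG_DEBUG = 3 };

/* v1 function, signature unchanged. */
int log_open(const char *path);

/* The v1 interface turned out to need a flags argument; rather than
 * changing log_open(), a new function is added alongside it, in the
 * spirit of dup/dup2 and wait/waitpid. */
int log_open2(const char *path, int flags);
```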
Upvotes: 21