mizubasho

Reputation: 91

C++ operator new, object versions, and the allocation sizes

I have a question about different versions of an object, their sizes, and allocation. The platform is Solaris 8 (and higher).

Let's say we have programs A, B, and C that all link to a shared library D. Some class is defined in the library D, let's call it 'classD', and assume its size is 100 bytes. Now, we want to add a few members to classD for the next version of program A, without affecting the existing binaries B or C. The new size will be, say, 120 bytes. We want program A to use the new definition of classD (120 bytes), while programs B and C continue to use the old definition of classD (100 bytes). A, B, and C all use operator "new" to create instances of classD.

The question is: when does operator "new" know the amount of memory to allocate, at compile time or at run time? One thing I am afraid of is that programs B and C expect classD to be, and allocate, 100 bytes, whereas the new shared library D requires 120 bytes for classD, and this inconsistency may cause memory corruption in programs B and C if I link them with the new library D. In other words, the area for the extra 20 bytes that the new classD requires may be allocated to some other variables by programs B and C. Is this assumption correct?
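
To make my mental model concrete, here is a sketch of what I believe the compiler emits for a new-expression (the class body is just a stand-in for the real one):

#include <new>   // ::operator new, placement new

class classD { char data[100]; };   // stand-in for the real class

classD* makeD()
{
    // "new classD" compiles to roughly these two steps; the
    // sizeof(classD) value is fixed when this translation unit is
    // compiled, not when the shared library is loaded at run time.
    void* raw = ::operator new(sizeof(classD));
    return new (raw) classD();      // placement new runs the constructor
}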

Thanks for your help.

Upvotes: 2

Views: 905

Answers (6)

xtofl

Reputation: 41519

In addition to the mentioned 'ad hoc' techniques, you can also model compatibility into your system by saying that your new classD is really a subclass of the 'old' classD. That way, your old code keeps working, but all code that needs the extended functionality has to be revised.

This design principle is clearly visible in the COM world, where interfaces in particular are never changed across versions, only extended by inheritance. In addition, classes are only constructed through the CreateInstance method, which moves the allocation problem into the library containing the class.
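
A minimal sketch of that pattern (interface and function names are illustrative, not from real COM):

// Shipped in version 1 of library D and frozen forever after.
class IClassD
{
public:
    virtual ~IClassD() {}
    virtual void OldOperation() = 0;
};

// Version 2 extends only by inheritance; IClassD itself never changes.
class IClassD2 : public IClassD
{
public:
    virtual void NewOperation() = 0;
};

// Exported by the library: the object's real size is decided inside
// the library, so clients never allocate it themselves.
IClassD* CreateInstance();

Old binaries keep programming against IClassD; only code that needs NewOperation() asks for IClassD2.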

Upvotes: 0

Loki Astari

Reputation: 264571

You are correct: the memory size is determined at compile time, and applications B and C would be in danger of serious memory corruption problems.

There is no way to handle this explicitly at the language level. You need to work with the OS to get the appropriate shared libraries to the application.

You need to version your libraries.

As there is no explicit way of doing this with the build tools, you need to do it with file names. If you look at most products, this is approximately how they work.

In the lib directory:

libD.1.00.so
libD.1.so     ->  libD.1.00.so    // Symbolic link
libD.so       ->  libD.1.so      // Symbolic link

Now at compile time you specify -lD and it links against libD.1.00.so because the linker follows the symbolic links. At run time the application knows to use this version, because that is the version it was compiled against.

So you now update lib D to version 2.0

In the lib directory:

libD.1.00.so
libD.2.00.so
libD.1.so     ->  libD.1.00.so    // Symbolic link
libD.2.so     ->  libD.2.00.so    // Symbolic link
libD.so       ->  libD.2.so       // Symbolic link

Now when you build with -lD it links against version 2. Thus if you re-build A, it will use version 2 of the library from then on, while B and C still use version 1. If you rebuild B or C, they will also pick up the new version of the library, unless you explicitly link against the old version with -lD.1.

Some linkers do not follow symbolic links very well, so there are linker options that help. With the GNU linker the name embedded in the library is set with the -soname flag (the Solaris linker spells it -h, and the macOS linker -install_name); your linker may have a slightly differently named flag.
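
For example, with gcc and the GNU linker the embedded name would be set roughly like this (file names follow the listing above):

gcc -shared -Wl,-soname,libD.1.so -o libD.1.00.so d.o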

As a runtime check, it is usually a good idea to put version information into your shared objects (a global variable, a function call, etc.). At runtime you can then retrieve the shared library's version information and check that your application is compatible; if not, it should exit with an appropriate error message.
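
A sketch of such a check, assuming a hypothetical libD_interface_version() function exported by the library:

#include <cstdio>
#include <cstdlib>

// Hypothetical function exported by libD that reports its version.
extern "C" int libD_interface_version();

void checkLibDVersion()
{
    const int expected = 2;   // the version this binary was built against
    const int actual   = libD_interface_version();
    if (actual != expected)
    {
        std::fprintf(stderr,
                     "libD version mismatch: built against %d, loaded %d\n",
                     expected, actual);
        std::exit(EXIT_FAILURE);
    }
}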

Also note: if you serialize objects of classD to a file, you now need to make sure that version information about classD is maintained. libD.2 may know how to read version 1 objects (with some explicit work), but the reverse will not be true.

Upvotes: 2

Paolo Capriotti

Reputation: 4072

Changing the size of a class is binary incompatible. That means that if you change the size of classD without recompiling the code that uses it, you get undefined behavior (most likely crashes).

A common trick to get around this limitation is to design classD so that it can be safely extended in a binary compatible way, for example by using the Pimpl idiom.
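
A sketch of a Pimpl'd classD (member names are illustrative; copying is omitted for brevity): the public object is just one pointer wide, so adding fields to the hidden Impl never changes sizeof(classD) as clients see it.

// classD.h -- what clients compile against; sizeof(classD) is frozen.
class classD
{
public:
    classD();
    ~classD();
    void doSomething();
private:
    struct Impl;     // defined only inside the shared library
    Impl* pimpl;     // clients only ever see this one pointer
};

// classD.cpp -- inside the shared library; free to grow between versions.
struct classD::Impl
{
    int oldMember;
    int memberAddedInV2;   // new fields are invisible to old clients
};

classD::classD() : pimpl(new Impl()) {}
classD::~classD() { delete pimpl; }
void classD::doSomething() { /* work through pimpl->... */ }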

In any case, if you want different programs to use different versions of your class, I think you have no choice but to release multiple versions of the shared library and have those programs link against the appropriate version.

Upvotes: 6

Shay Erlichmen

Reputation: 31928

Compile time. You should not change a shared object's size underneath its clients.

There is a simple workaround for that:

class foo
{
public:
  // make sure this is NOT inlined: declare it here but define it in
  // the library's .cpp file, so the allocation happens inside the
  // library and uses the library's idea of sizeof(foo)
  static foo* Create();
};

// in the library's source file
foo* foo::Create()
{
   return new foo();
}

// at the client
foo* f = foo::Create();
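
A natural companion to this sketch (the Destroy() helper here is illustrative): since allocation now happens inside the library, the matching deallocation should live there too, so the library's operator delete frees what its operator new allocated.

  // alongside Create(), also declared in foo and defined
  // out-of-line in the library's source file
  static void Destroy(foo* f)
  {
     delete f;
  }

// at the client
foo::Destroy(f);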

Upvotes: 3

Edouard A.

Reputation: 6128

The amount of memory to allocate is determined at compile time when doing something like

new Object();

but the amount can depend on a run-time parameter, as in

new unsigned char[variable];

I really advise you to go through some middleware to achieve what you want. C++ guarantees nothing in terms of binary interfaces.

Have you looked at protobuf?

Upvotes: 1

rlbond

Reputation: 67829

Memory allocation is figured out at compile time. Changing the size of a class in D means every binary that uses it has to be recompiled.

Consider publicly deriving from the class in question to extend it, if that applies. Or compose it in another object, as sketched below.
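
A sketch of the composition option (names are illustrative): the extra fields live in a wrapper, so classD itself, and every binary already compiled against it, stays untouched.

// classD stays exactly as shipped in library D.
class classD { /* original 100 bytes */ };

// Program A's extension wraps it rather than modifying it.
class classDExt
{
public:
    classD base;          // the unchanged original
    int    extraField1;   // new members live out here,
    int    extraField2;   // invisible to programs B and C
};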

Upvotes: 1
