When to provide an empty destructor

April 4th, 2012

If you search around on the Internet, you will find various opinions about whether it is a good idea to provide an explicit empty definition of a destructor or whether it is best to let the compiler synthesize an implementation for you. The other day I caught myself thinking about this choice for a class I’ve been working on. This made me realize that I don’t have a complete and clear picture of the tradeoffs involved. Ideally, I would like a hard and fast rule so that I don’t have to waste a few minutes thinking about this every time I create a new class. So today I decided to lay this matter to rest by analyzing all the common and special cases that I am aware of, taking into account not only performance, but also footprint and even compilation time.

There are three distinct use-cases that I would like to analyze: a class or a class template with a non-virtual destructor, a class with a virtual destructor, and a class template with a virtual destructor. But before we jump to the analysis, let’s first review some terms used by the standard when talking about synthesized destructors. At the end of the analysis I would also like to mention some special but fairly common cases as well as how C++11 helps with the situation.

If we declare our own destructor, the standard calls it a user-declared destructor. If we declare a destructor, we also have to define it at some point. If a class has no user-declared destructor, one is declared implicitly by the compiler and is called an implicitly-declared destructor. An implicitly-declared destructor is inline. An implicitly-declared destructor is called trivial if (a) it is not virtual, (b) all its base classes have trivial destructors, and (c) all its non-static data members have trivial destructors. In other words, a trivial destructor doesn’t need to execute any instructions and, as a result, doesn’t need to be called, or even exist in the program text. Note that the first condition (that the destructor not be virtual) was only added in C++11 but, practically, I believe all the implementations assumed this even for C++98 (the virtual function table contains a pointer to the virtual destructor, and one can’t point to something that doesn’t exist).

Another aspect about destructors that is important to understand is that even if the body of a destructor is empty, it doesn’t mean that this destructor won’t execute any code. The C++ compiler augments the destructor with calls to destructors for bases and non-static data members. For more information on destructor augmentation and other low-level C++ details I recommend the “Inside the C++ Object Model” book by Stanley L. Lippman.

Note also that an explicit empty inline definition of a destructor should be essentially equivalent to an implicitly-defined one. This is true from the language point of view, with a few reservations (e.g., such a class can no longer be a POD type). In practice, however, some implementations in some circumstances may choose not to inline an explicitly-defined destructor or an expression involving such a destructor, because an empty inline destructor is still “more” than a trivial destructor. This makes an implicitly-declared trivial destructor a much better option from the performance and footprint point of view. As a result, if we are providing an empty destructor at all, the only sensible choice is to define it as non-inline. Now, the question is, are there any good reasons for making an empty destructor non-inline?

Class with non-virtual destructor

Let’s start by considering a class with a non-virtual destructor. While there are a few special cases which are discussed below, generally, there are no good reasons to prefer a non-inline empty destructor to the synthesized one. If a class has a large number of data members (or bases) that all have non-trivial destructors, then, as mentioned above, the augmented destructor may contain quite a few calls. However, chances are good that a C++ compiler will not actually inline calls to such a destructor due to its complexity. In this case, object files corresponding to translation units that call such a destructor may end up containing multiple instances of the destructor. While they will be weeded out at the link stage, the need to instantiate the same destructor multiple times adds to the compilation time. However, in most cases, I believe this will be negligible.

The same reasoning applies to class templates with non-virtual destructors.

Class with virtual destructor

If a destructor is made virtual, then we also get an entry for it in the virtual function table (vtbl from now on for short). And this entry needs to be populated with a pointer to the destructor. As a result, even if the destructor is inline, there will be a non-inline instantiation of this destructor.

At first this may sound like a good reason to provide our own non-inline empty implementation. But, on closer inspection, there doesn’t seem to be any benefit in doing this. In either case there will be a non-inline version of the destructor for the vtbl. And when the compiler is able to call the destructor without involving the vtbl (i.e., when it knows that the object’s static and dynamic types are the same), then we can apply exactly the same reasoning as above.

Another thing that we may want to consider here is the instantiation of the vtbl itself. Normally, the vtbl for a class is generated when compiling a translation unit containing the first non-inline member function definition of this class. In this case we end up with a single vtbl instantiation and no resources are wasted. However, if a class only has inline functions (including our compiler-synthesized destructor), then the compiler has to fall back to a less optimal method by instantiating the vtbl in every translation unit that creates an instance of the class and then weeding out duplicates at the link stage. If this proves to be expensive (e.g., you have hundreds of translation units using this class), then you may want to define an empty non-inline destructor just to anchor the vtbl.

Note also that in C++98 it is not possible to declare a destructor virtual but let the compiler synthesize the implementation (this is possible in C++11 as we will see shortly). So here we have to define an empty destructor and the question is whether to make it inline or not. Based on the above analysis I would say make it inline for consistency with the derived classes which will have inline, compiler-synthesized destructors. That is:

class base
{
public:
  virtual ~base () {}
 
  ...
};

Class template with virtual destructor

The same analysis applies here except now we always have potentially multiple vtbl instantiations, regardless of whether our destructor is inline or not. And this gives us one less reason to provide one ourselves.

To summarize, in all three cases my recommendation is to let the compiler define an inline destructor for you. Let’s now consider a few special cases where we have to make the destructor non-inline.

Special cases

There are two such special but fairly common cases that I am aware of. If you know of others, I would appreciate it if you mentioned them in the comments.

The first case can be generally described as needing extra information to be able to correctly destroy data members of a class. The most prominent example of this case is the pimpl idiom. When implemented using a smart pointer and a hidden “impl” class, the inline destructor won’t work because it needs to “see” the “impl” class definition. Here is an example:

// object.hxx
//
#include <memory> // std::unique_ptr

class object
{
public:
  object ();
 
  // ~object () {} // error: impl is incomplete
  ~object ();
 
  ...
 
private:
  class impl;
  std::unique_ptr<impl> impl_;
};
 
// object.cxx
//
class object::impl
{
  ...
};
 
object::
object ()
  : impl_ (new impl)
{
}
 
object::
~object ()
{
  // ok: impl is complete
}

Another example of this case is Windows-specific. Here, if your object is part of a DLL interface and the DLL and executable use different runtime libraries, then you will run into trouble if your object allocates dynamic memory using the DLL runtime (e.g., in a non-inline constructor) but frees it using the executable runtime (e.g., in an inline destructor). By defining the destructor non-inline, we can make sure that the memory is allocated and freed using the same runtime.

The second case has to do with interface stability. Switching from a compiler-provided inline definition to a user-provided non-inline one changes the binary interface of a class. So if you need a binary-compatible interface, then it may make sense to define a non-inline empty destructor if there is a possibility that some functionality may have to be added to it later.

C++11 improvements

C++11 provides us with the ability to control inline-ness and virtual-ness of the compiler-defined destructor using the defaulted functions mechanism. Here is how we can declare a virtual destructor with the default implementation:

class base
{
public:
  virtual ~base () = default; // inline
 
  ...
};

To make the default implementation non-inline we have to move the definition of the destructor out of the class, for example:

// derived.hxx
//
class derived: public base
{
public:
  virtual ~derived ();
 
  ...
};
 
// derived.cxx
//
derived::~derived () = default;

Note that making a defaulted destructor virtual or non-inline also makes it non-trivial.

Checklist

To be able to quickly decide whether a class needs an empty non-inline destructor definition I condensed the above analysis into a short checklist. When designing a class interface, ask yourself the following three questions:

  1. Do you need to anchor the vtbl (doesn’t apply to class templates)?
  2. Does proper destruction of data members require additional declarations or functionality that is not available in the class interface? Does the destruction need to be done consistently with construction (e.g., using the same runtime)?
  3. Do you need to define a stable interface and chances are that later you may have to add some functionality to the destructor?

If the answers to all these questions are “No”, then let the compiler provide the default implementation of the destructor.

C++11 support in ODB

March 27th, 2012

One of the major new features in the upcoming ODB 2.0.0 release is support for C++11. In this post I would like to show what is now possible when using ODB in the C++11 mode. Towards the end I will also mention some of the interesting implementation-related issues that we encountered. This would be of interest to anyone who is working on general-purpose C++ libraries or tools that have to be compatible with multiple C++ compilers as well as support both C++98 and C++11 from the same codebase.

In case you are not familiar with ODB, it is an object-relational mapping (ORM) system for C++. It allows you to persist C++ objects to a relational database without having to deal with tables, columns, or SQL, or manually write any of the mapping code.

While the 2.0.0 release is still a few weeks out, if you would like to give the new C++11 support a try, you can use the 1.9.0.a1 pre-release.

While one could use most of the core C++11 language features with ODB even before 2.0.0, what was lacking was the integration with the new C++11 standard library components, specifically smart pointers and containers. By default, ODB still compiles in the C++98 mode, however, it is now possible to switch to the C++11 mode using the --std c++11 command line option (this is similar to GCC’s --std=c++0x). As you may remember, ODB uses GCC as a C++ compiler frontend which means ODB has arguably the best C++11 feature coverage available, especially now with the release of GCC 4.7.

Let’s start our examination of the C++11 standard library integration with smart pointers. New in C++11 are std::unique_ptr and std::shared_ptr/weak_ptr. Both of these smart pointers can now be used as object pointers:

#include <memory>
 
class employer;
 
#pragma db object pointer(std::unique_ptr)
class employee
{
  ...
 
  std::shared_ptr<employer> employer_;
};
 
#pragma db object pointer(std::shared_ptr)
class employer
{
  ...
};

ODB now also provides lazy variants for these smart pointers: odb::lazy_unique_ptr, odb::lazy_shared_ptr, and odb::lazy_weak_ptr. Here is an example:

#include <memory>
#include <vector>
 
#include <odb/lazy-ptr.hxx>
 
class employer;
 
#pragma db object pointer(std::shared_ptr)
class employee
{
  ...
 
  std::shared_ptr<employer> employer_;
};
 
#pragma db object pointer(std::shared_ptr)
class employer
{
  ...
 
  #pragma db inverse(employer_)
  std::vector<odb::lazy_weak_ptr<employee>> employees_;
};

Besides being used as object pointers, unique_ptr and shared_ptr/weak_ptr can also be used in data members. For example:

#include <memory>
#include <vector>
 
#pragma db object
class person
{
  ...
 
  #pragma db type("BLOB") null
  std::unique_ptr<std::vector<char>> public_key_;
};

It is unfortunate that boost::optional didn’t make it into C++11 as it would be ideal for handling NULL semantics (boost::optional is supported by the Boost profile). The good news is that there appear to be plans to submit a std::optional proposal for TR2.

The newly supported containers are: std::array, std::forward_list, and the unordered containers. Here is an example of using std::unordered_set:

#include <string>
#include <unordered_set>
 
#pragma db object
class person
{
  ...
 
  std::unordered_set<std::string> emails_;
};

One C++11 language feature that comes in really handy when dealing with query results is the range-based for-loop:

typedef odb::query<employee> query;
 
transaction t (db->begin ());
 
auto r (db->query<employee> (query::first == "John"));
 
for (employee& e: r)
  cout << e.first () << ' ' << e.last () << endl;
 
t.commit ();

So far we have tested C++11 support with various versions of GCC as well as VC++ 10 (we will also test with Clang before the final release). In fact, all the tests in our test suite build and run without any issues in the C++11 mode with these two compilers. ODB also comes with an example, called c++11, that shows support for some of the C++11 features discussed above.

These are the user-visible features when it comes to C++11 support and they are nice and neat. For those interested, here are some not so neat implementation details that I think other library authors will have to deal with if they decide to support C++11.

The first issue that we had to address is simultaneous support for C++98 and C++11. In our case, supporting both from the same codebase was not that difficult (though more on that shortly). We just had to add a number of #ifdef ODB_CXX11 blocks.

What we only realized later was that to make C++11 support practical we also had to support both from the same installation. To understand why, consider what happens when a library is packaged, say, for Ubuntu or Fedora. A single library is built and a single set of headers is packaged. To be at all usable, these packages cannot be C++98-only or C++11-only. They have to support both at the same time. It is probably possible to have two versions of the library and ask the user to link to the correct one depending on which C++ standard they are using. But you will inevitably run into tooling limitations (e.g., pkg-config doesn’t have the --std c++11 option). The situation with headers is even worse, unless your users are prepared to pass a specific -I option depending on which C++ standard they are using.

The conclusion that we came to is this: if you want your library to be usable once installed in both C++98 and C++11 modes in a canonical way (i.e., without having to specify extra -I options, defines, or different libraries to link), then the C++11 support has to be header-only.

This has some interesting implications. For example, initially, we used an autoconf test to detect whether we are in the C++11 mode and write the appropriate value to config.h. This had to be scrapped, and we now use a more convoluted and less robust way of detecting the C++ standard using pre-defined compiler macros such as __cplusplus and __GXX_EXPERIMENTAL_CXX0X__. The other limitation of this decision is that all “extra” C++11 functions, such as move constructors, have to be inline or templates. While these restrictions sound constraining, so far we haven’t had any serious issues maintaining C++11 support header-only. Things fitted quite naturally into this model, though that, of course, may change in the future.

The other issue that we had to deal with is the differing levels of C++11 support provided by different compiler implementations. While GCC is more or less the gold standard in this regard, VC++ 10 lacked quite a few features that we needed, specifically, deleted functions, explicit conversion operators, and default function template arguments. As a result, we had to introduce additional macros that indicate which C++11 features are available. This felt like the early C++98 days all over again. Interestingly, none of the three features mentioned above will be supported in the upcoming VC++ 11. In fact, if you look at the VC++ C++11 support table, it is quite clear that Microsoft is concentrating on the user-facing features, like the range-based for-loop. This means there will probably be some grief for some time for library writers.

Delaying function signature instantiation in C++11

March 20th, 2012

I think everyone had enough of rvalue references for now so let’s look at another interesting C++11 technique: delayed function signature instantiation. It is made possible thanks to default function template arguments.

To understand the motivation behind this technique, let’s first review the various stages of instantiation of a class template. At the first stage all we get is just the template-id. Here is an example:

template <typename T>
class foo;
 
class bar;
 
typedef foo<bar> foo_bar;

At this stage both the template and its type arguments only need to be forward-declared, and the resulting template-id can be used in places where neither the size of the class nor its members need to be known. For example, to form a pointer or a reference:

foo_bar* p = 0;     // ok
void f (foo<bar>&); // ok
foo_bar x;          // error: need size
p->f ();            // error: foo<bar>::f is unknown

In other words, this is the same as forward-declaration for non-template classes.

The last two lines in the above example wouldn’t have been errors if we had defined the foo class template. Instead, it would have triggered the second instantiation stage during which the class definition (i.e., its body) is instantiated. In particular, this includes instantiation of all data members and member function signatures. However, this stage does not involve instantiation of member function bodies. This only happens at the third stage, when we actually use (e.g., call or take a pointer to) specific functions. Here is another example that illustrates all the stages together:

template <typename T>
class foo
{
public:
  void f (T* p)
  {
    delete p_;
    p_ = p;
  }
 
  T* p_;
};
 
class bar;
 
void f (foo<bar>&); // stage 1
foo<bar> x;         // stage 2
x.f (0);            // stage 3

While the class template definition is required for the second stage and the function definition is required for the third stage, whether the type template arguments must be defined at any of these stages depends on the template implementation. For example, the foo class template above does not require the template argument to be defined during the second stage but does require it to be defined during the third stage when f()’s body is instantiated.

Probably the best known example of a class template that doesn’t require the template argument to be defined during the second stage is a smart pointer. This is because, like with raw pointers, we often need to form smart pointers to forward-declared types:

class bar;
 
bar* create_raw ();                    // ok
std::shared_ptr<bar> create_shared (); // ok

It is fairly straightforward to implement normal smart pointers like std::shared_ptr in such a way as to not require the template argument to be defined. But here is a problem that I ran into when implementing a special kind of smart pointer in ODB, called a lazy pointer. If you read some of my previous posts you probably remember what a lazy pointer is (it turned out to be a very fertile ground for discovering interesting C++11 techniques). For those new to the idea, here is a quick recap: when an object that contains lazy pointers to other objects is loaded from the database, these other objects are not loaded right away (which would be the case for normal, eager pointers such as std::shared_ptr). Instead, just the object ids are loaded and the objects themselves can be loaded later, when and if required.

A lazy pointer can be initialized with an actual pointer to a persistent object, in which case the pointer is said to be loaded. Or we can initialize it with an object id, in which case the pointer is unloaded.

When I first set out to implement a lazy pointer, I naturally added the following extra constructor to support creating unloaded pointers (in reality id_type is not defined by T but rather by odb::object_traits<T>; however this difference is not material to the discussion):

template <class T>
class lazy_shared_ptr
{
  lazy_shared_ptr (database&, const typename T::id_type&);
 
  ...
};

Do you see the problem? Remember that during the second stage function signatures get instantiated. And in order to instantiate the signature of the above constructor, the template argument must be defined, since we are looking for id_type inside this type. As a result, lazy_shared_ptr can no longer be used with forward-declared classes.

As it turns out, we can delay function signature instantiation until the third stage (i.e., when the function is actually used) by making the function itself a template. Here is how we can fix the above constructor so that we can continue using lazy_shared_ptr with forward-declared types. This method works even in C++98:

template <class T>
class lazy_shared_ptr
{
  ...
 
  template <typename ID>
  lazy_shared_ptr (database&, const ID&);
};

As a side note, some of you who read my previous posts about rvalue references were wondering why I used the constructor template here. Well, now you know.

The above C++98-compatible implementation has a number of drawbacks. The biggest is that we cannot use this technique for function return types. In ODB, lazy pointers also allow querying the object id of a stored object. In the C++98 mode, to keep the implementation usable on forward-declared types, I had to resort to this ugly interface:

template <class T>
class lazy_shared_ptr
{
  ...
 
  template <typename T1>
  typename T1::id_type object_id () const;
};
 
lazy_shared_ptr<object> lp = ...
cerr << lp.object_id<object> () << endl;

That is, the user has to explicitly specify the object type when calling object_id().

The second problem has to do with the looseness of the resulting interface. Now we can pass any value as the id when initializing lazy_shared_ptr. While an incompatible type will still get caught, it will only be caught inside the implementation, with the resulting diagnostics pointing to the wrong place and saying the wrong thing (we have to provide our own correct “diagnostics” in a comment):

template <class T>
class lazy_shared_ptr
{
  ...
 
  template <typename ID>
  lazy_shared_ptr (database&, const ID& id)
  {
    // Compiler error pointing here? Perhaps the id
    // argument is wrong?
    //
    const typename T::id_type& real_id (id);
    ...
  }
};

Support for default function template arguments in C++11 allows us to resolve both of these problems. Let’s start with the return type:

template <class T>
class lazy_shared_ptr
{
  ...
 
  template <typename T1 = T>
  typename T1::id_type object_id () const;
};
 
lazy_shared_ptr<object> lp = ...
cerr << lp.object_id () << endl;

The solution to the second problem is equally simple:

template <class T>
class lazy_shared_ptr
{
  ...
 
  template <typename T1 = T>
  lazy_shared_ptr (database&, const typename T1::id_type&);
};

The idea here is to inhibit template argument deduction in order to force the default type to always be used. This is similar to the trick used in std::forward().