C++11 range-based for loop

May 16th, 2012

On the surface, the new range-based for loop may seem like a simple feature, perhaps the simplest of all the core language changes in C++11. However, as with most higher-level abstractions, there are quite a few nuances once we start digging a little deeper. So in this post I am going to do just that, with the intent of getting a better understanding of this feature as well as the contexts in which it can and cannot be used.

The range-based for loop has the following form:

for ( declaration : expression ) statement

According to the standard, this is equivalent to the following plain for loop:

1   {
2     auto&& __range = expression;
3     for (auto __begin = begin-expression,
4               __end = end-expression;
5          __begin != __end;
6          ++__begin)
7     {
8       declaration = *__begin;
9       statement
10    }
11  }

Note that when the standard says equivalent, it means that the resulting logic is equivalent and not that this is the actual translation; in particular, the variable names (e.g., __range, __begin, etc.) are for exposition only and cannot be referred to by the application.

Ok, the equivalent plain for loop version looks quite a bit more complicated compared to the range-based one. Let’s start our examination with the __range initialization (line 2). We use automatic type deduction to determine the type of the range variable based on the initializing expression. Note also that the resulting variable is declared as an auto&& reference which, thanks to reference collapsing, binds to lvalues and rvalues alike. This allows us to iterate over temporaries without making any copies and without imposing additional const restrictions. To see where this becomes important, consider this example:

std::vector<int> f ();
 
for (int& x: f ())
  x = 0;

What can we have for the expression? Well, it can be a standard container, an array, a braced initializer list (in which case __range will be an std::initializer_list), or anything else that supports the concept of iteration by providing suitable begin() and end() functions. Here are a few examples:

int primes[] = {2, 3, 5, 7, 11, 13};
 
for (int x: primes)
  ...;
 
for (int x: {1, 2, 3, 5, 7, 11})
  ...;
 
template <typename T>
struct istream_range
{
  typedef std::istream_iterator<T> iterator_type;
 
  istream_range (std::istream& is): is_ (is) {}
 
  iterator_type begin () const
  {
    return iterator_type (is_);
  }
 
  iterator_type end () const
  {
    return iterator_type ();
  }
 
private:
  std::istream& is_;
};
 
for (int x: istream_range<int> (cin))
  ...;

The begin-expression and end-expression (lines 3 and 4) are determined as follows:

  • If expression is an array, then begin-expression and end-expression are __range and __range + __bound, respectively, where __bound is the array bound.
  • If expression is of a class type that declares begin() and end() member functions, then begin-expression and end-expression are __range.begin() and __range.end(), respectively.
  • Otherwise, begin-expression and end-expression are begin(__range) and end(__range), respectively, where the begin() and end() functions are looked up using argument-dependent lookup (ADL) which, for this purpose, also includes the std namespace.

With arrays taken care of by the first rule, the second rule makes sure that all the standard containers as well as all the user-defined ones that follow the standard sequence interface will work with range-based for out of the box. For example, in ODB (an ORM for C++), we have the container-like result class template, which allows iteration over the query result. Because it has the standard sequence interface with a forward iterator, we didn’t have to do anything extra to make it work with range-based for.

The last rule (the fallback to the free-standing begin() and end() functions) allows us to non-invasively adapt an existing container to the range-based for loop interface.
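As a sketch of how this could look (the legacy::int_buffer class below is made up for the sake of the example), imagine a third-party container that exposes its elements only via data() and size() member functions. Free-standing begin() and end() functions placed into the container’s namespace are then found by ADL:

#include <cstddef>
#include <vector>
 
namespace legacy
{
  // A hypothetical third-party container without begin()/end() members.
  //
  class int_buffer
  {
  public:
    int_buffer (std::size_t n): data_ (n) {}
 
    int* data () {return data_.data ();}
    const int* data () const {return data_.data ();}
    std::size_t size () const {return data_.size ();}
 
  private:
    std::vector<int> data_;
  };
 
  // Free functions in the same namespace so that ADL can find them.
  //
  inline int* begin (int_buffer& b) {return b.data ();}
  inline int* end (int_buffer& b) {return b.data () + b.size ();}
 
  inline const int* begin (const int_buffer& b) {return b.data ();}
  inline const int* end (const int_buffer& b) {return b.data () + b.size ();}
}
 
legacy::int_buffer buf (10);
 
for (int& x: buf) // Uses legacy::begin() and legacy::end().
  x = 0;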

You may be wondering why the standard explicitly added the std namespace to ADL in the last rule. That’s a good question, since the implementations provided in std simply call the corresponding member functions (which, had they existed, would have already satisfied the second rule). My guess is that it allows for a single place where a custom container can be adapted to the standard interface by specializing std::begin() and std::end().

The last interesting bit is the declaration (line 8). If we specified the type explicitly, then things are pretty straightforward. However, we can also let the compiler deduce the type for us, for example:

std::vector<int> v = {1, 2, 3, 5, 7, 11};
for (auto x: v)
  ...;

When automatic type deduction is used, the type of the loop variable is deduced from the *__begin expression according to the usual auto rules. For the standard containers this means that plain auto gives us a copy of the element while auto& gives us a reference to the element, which becomes a reference to const when the container itself is const. For example:

std::vector<int> v = {1, 2, 3, 5, 7, 11};
const std::vector<int> cv = {1, 2, 3, 5, 7, 11};
 
for (auto x: v) // x is int
  ...;
 
for (auto x: cv) // x is int
  ...;
 
for (auto& x: v) // x is int&
  ...;
 
for (auto& x: cv) // x is const int&
  ...;

Another thing to note is the caching of the end iterator, which makes the range-based for as efficient as a loop we could have written ourselves. There is, however, no provision for handling cases where the container is modified during iteration, unless iterator stability is guaranteed.
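For instance, in the following sketch (my own illustration, not something from the standard), growing the vector from within the loop can invalidate the cached __begin and __end iterators, resulting in undefined behavior:

std::vector<int> v = {1, 2, 3};
 
for (int x: v)
{
  if (x == 1)
    v.push_back (4); // May reallocate; the cached iterators now dangle.
}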

While the range-based for loop only supports forward iteration, it is easy to add support for reverse iteration with a simple adapter. In fact, it is strange that something like this is not part of the standard library:

template <typename T>
struct reverse_range
{
private:
  T& x_;
 
public:
  reverse_range (T& x): x_ (x) {}
 
  auto begin () const -> decltype (this->x_.rbegin ())
  {
    return x_.rbegin ();
  }
 
  auto end () const -> decltype (this->x_.rend ())
  {
    return x_.rend ();
  }
};
 
template <typename T>
reverse_range<T> reverse_iterate (T& x)
{
  return reverse_range<T> (x);
}
 
std::vector<int> v = {1, 2, 3, 5, 7, 11};
 
for (auto x: reverse_iterate (v))
  ...;

GCC can now be built with a C++ compiler

May 8th, 2012

You have probably heard about the decision to allow the use of C++ in GCC itself. But it is one thing to say this and quite another to actually make a large code base like GCC even compile with a C++ compiler instead of a C one. Well, GCC 4.7 got one step closer to this goal and can now be compiled with either a C or a C++ compiler. Starting with 4.8, the plan is to build GCC in C++ mode by default. Here is the C++ Build Status page for GCC 4.8 on various targets.

To enable the C++ mode in GCC 4.7, we use the --enable-build-with-cxx GCC configure option. As one would expect, different distributions made different decisions about how to build GCC 4.7. For example, Debian and Ubuntu use C++ while Arch Linux uses C. These differences are not visible to a typical GCC user, which is why neither the GCC 4.7 release notes nor the distributions mention any of this. In fact, I didn’t know about the new C++ build mode until ODB, which is implemented as a GCC plugin, mysteriously failed to load with GCC 4.7. This “war story” is actually quite interesting, so I am going to tell it below. At the end I will also discuss some implications of this change for GCC plugin development.

But first, a quick recap on the GCC plugin architecture: a GCC plugin is a shared object (.so) that is dynamically loaded using the dlopen()/dlsym() API. As you may already know, with such dynamically-loaded shared objects, symbol exporting can work both ways: the executable can use symbols from the shared object and the shared object can use symbols from the executable, provided the executable was built with the -rdynamic option in order to export its symbols. This back-exporting (from executable to shared object) is quite common in GCC plugins since, to do anything useful, a plugin will most likely need to call some GCC functions.
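To make this more concrete, here is a minimal sketch of that two-way symbol flow (my own illustration rather than actual GCC code; the host_function() and plugin_entry() names are made up for this example):

// host.cxx: g++ -rdynamic host.cxx -ldl -o host
//
#include <dlfcn.h>
#include <iostream>
 
extern "C" void host_function () // Exported to the plugin thanks to -rdynamic.
{
  std::cout << "called back into the host" << std::endl;
}
 
int main ()
{
  void* h (dlopen ("./plugin.so", RTLD_NOW));
 
  if (h == 0)
  {
    std::cerr << dlerror () << std::endl;
    return 1;
  }
 
  typedef void (*entry) ();
  entry e (reinterpret_cast<entry> (dlsym (h, "plugin_entry")));
  e (); // The plugin, in turn, calls host_function().
  dlclose (h);
}
 
// plugin.cxx: g++ -fPIC -shared plugin.cxx -o plugin.so
//
extern "C" void host_function (); // Resolved against the host at load time.
 
extern "C" void plugin_entry ()
{
  host_function ();
}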

Ok, so I built ODB with GCC 4.7 and tried to run it for the first time. The error I got looked like this:

 
cc1plus: error: cannot load plugin odb.so
odb.so: undefined symbol: instantiate_decl
 

Since the same code worked fine with GCC 4.5 and 4.6, my first thought was that in GCC 4.7 instantiate_decl() was removed, renamed, or made static. So I downloaded GCC source code and looked for instantiate_decl(). Nope, the function was there, the signature was unchanged, and it was still extern.

My next guess was that building GCC itself with the -rdynamic option was somehow botched in 4.7. So I grabbed Debian build logs (this is all happening on a Debian box with Debian-packaged GCC 4.7.0) and examined the configure output. Nope, -rdynamic was passed as before.

This was getting weirder and weirder. Running out of ideas, I decided to examine the list of symbols that are in fact exported by cc1plus (this is the actual C++ compiler; g++ is just a compiler driver). Note that these are not the normal symbols which we see when we run nm (and which can be stripped). These symbols come from the dynamic symbol table and we need to use the -D|--dynamic nm option to see them:

 
$ nm -D /usr/lib/gcc/x86_64-linux-gnu/4.7.0/cc1plus | 
grep instantiate_decl
0000000000529c50 T _Z16instantiate_declP9tree_nodeib
 

Wait a second. This looks a lot like a mangled C++ name. Sure enough:

 
$ nm -D -C /usr/lib/gcc/x86_64-linux-gnu/4.7.0/cc1plus | 
grep instantiate_decl
0000000000529c50 T instantiate_decl(tree_node*, int, bool)
 

I then ran nm without grep and saw that all the text symbols were mangled. Then it hit me: GCC is now built with a C++ compiler!

Seeing that the ODB plugin is written in C++, you may be wondering why it still referenced instantiate_decl() as a C function. Prior to 4.7, the GCC headers that a plugin had to include weren’t C++-aware. As a result, I had to wrap them in an extern "C" block. Because GCC 4.7 can be built in either C or C++ mode, that extern "C" block is only necessary in the former case. Luckily, the config.h GCC plugin header defines the ENABLE_BUILD_WITH_CXX macro, which we can use to decide how we should include the rest of the GCC headers:

 
#include <config.h>
 
#ifndef ENABLE_BUILD_WITH_CXX
extern "C"
{
#endif
 
...
 
#ifndef ENABLE_BUILD_WITH_CXX
} // extern "C"
#endif
 

There is also an interesting implication of this switch to the C++ mode for GCC plugin writers. In order to work with a GCC 4.7 that was built in C++ mode, a plugin has to be compiled with a C++ compiler, even if it is written in C. Once the GCC developers actually start using C++ in the GCC source code, it won’t be possible to write a plugin in C anymore.

ODB 2.0.0 released

May 2nd, 2012

ODB 2.0.0 was released today.

In case you are not familiar with ODB, it is an object-relational mapping (ORM) system for C++. It allows you to persist C++ objects to a relational database without having to deal with tables, columns, or SQL, and without manually writing any of the mapping code. ODB natively supports SQLite, PostgreSQL, MySQL, Oracle, and Microsoft SQL Server.

This release packs a number of major new features, including support for C++11, polymorphism, and composite object ids, as well as a few backwards-incompatible changes (thus the major version bump). We have also added GCC 4.7 and Clang 3.0 to the list of compilers that we use for testing each release. Specifically, the ODB compiler has been updated to be compatible with the GCC 4.7 series plugin API. There is also an interesting addition (a free proprietary license) to the licensing terms. As usual, below I am going to examine these and other notable new features in more detail. For the complete list of changes, see the official ODB 2.0.0 announcement.

C++11 support

This is a big feature, so I wrote a separate post about C++11 support in ODB a couple of weeks ago. It describes in detail what is now possible when using ODB in the C++11 mode. Briefly, this release adds integration with the new C++11 standard library components, specifically smart pointers and containers. We can now use std::unique_ptr and std::shared_ptr as object pointers (their lazy versions are also provided). On the containers front, support was added for std::array, std::forward_list, and the unordered containers.
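As a quick illustration (a sketch of my own rather than an excerpt from that post; the person class and its members are made up), a persistent class can now use std::shared_ptr as its object pointer and the new containers for its data members:

#pragma db object pointer(std::shared_ptr)
class person
{
  ...
 
  #pragma db id auto
  unsigned long id_;
 
  std::string first_;
  std::string last_;
 
  std::unordered_set<std::string> emails_; // One of the new C++11 containers.
};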

One C++11 language feature that really stands out when dealing with query results is the range-based for-loop. Compare the C++98 way:

 
typedef odb::query<employee> query;
typedef odb::result<employee> result;
 
result r (db.query<employee> (query::first == "John"));
 
for (result::iterator i (r.begin ()); i != r.end (); ++i)
  cout << i->first () << ' ' << i->last () << endl;
 

To the C++11 way:

 
typedef odb::query<employee> query;
 
auto r (db.query<employee> (query::first == "John"));
 
for (employee& e: r)
  cout << e.first () << ' ' << e.last () << endl;
 

If you are interested in more information on C++11 support, do read that post; it has much more detail and many more code samples.

Polymorphism support

Another big feature in this release is support for polymorphism. Now we can declare a persistent class hierarchy as polymorphic and then persist, load, update, erase, and query objects of derived classes using their base class interfaces. Consider this hierarchy as an example:

#pragma db object polymorphic pointer(std::shared_ptr)
class person
{
  ...
 
  virtual void print () = 0;
 
  std::string first_;
  std::string last_;
};
 
#pragma db object
class employee: public person
{
  ...
 
  virtual void print ()
  {
    cout << (temporary_ ? "temporary" : "permanent")
         << " employee " << first_ << ' ' << last_;
  }
 
  bool temporary_;
};
 
#pragma db object
class contractor: public person
{
  ...
 
  virtual void print ()
  {
    cout << "contractor " << first_ << ' ' << last_
         << ' ' << email_;
  }
 
  std::string email_;
};

Now we can work with the employee and contractor objects polymorphically using their person base class:

unsigned long id1, id2;
 
// Persist.
//
{
  shared_ptr<person> p1 (new employee ("John", "Doe", true));
  shared_ptr<person> p2 (new contractor ("Jane", "Doe", "j@d.eu"));
 
  transaction t (db.begin ());
  id1 = db.persist (p1); // Stores employee.
  id2 = db.persist (p2); // Stores contractor.
  t.commit ();
}
 
// Load.
//
{
  shared_ptr<person> p;
 
  transaction t (db.begin ());
  p = db.load<person> (id1); // Loads employee.
  p = db.load<person> (id2); // Loads contractor.
  t.commit ();
}
 
// Update.
//
{
  shared_ptr<person> p;
  shared_ptr<employee> e;
 
  transaction t (db.begin ());
 
  e = db.load<employee> (id1);
  e->temporary (false);
  p = e;
  db.update (p); // Updates employee.
 
  t.commit ();
}
 
// Erase.
//
{
  shared_ptr<person> p;
 
  transaction t (db.begin ());
  p = db.load<person> (id1); // Loads employee.
  db.erase (p);              // Erases employee.
  db.erase<person> (id2);    // Erases contractor.
  t.commit ();
}

Polymorphic behavior is also implemented in queries, for example:

 
typedef odb::query<person> query;
 
transaction t (db.begin ());
 
auto r (db.query<person> (query::last == "Doe"));
 
for (person& p: r) // Can be employee or contractor.
  p.print ();
 
t.commit ();
 

The above query will select person objects whose last name is Doe, that is, any employee or contractor with this last name. While the result set is defined in terms of the person interface, the actual objects (i.e., their dynamic types) that it contains are employee or contractor. Given the above persist() calls, here is what this code fragment will print:

permanent employee John Doe
contractor Jane Doe j@d.eu

There are several alternative ways to map a polymorphic hierarchy to a relational database model. ODB implements the so-called table-per-difference mapping, where each derived class is mapped to a separate table that contains only the columns corresponding to the data members added by this derived class. Roughly speaking, for the hierarchy above this means a person table with the object id and name columns, an employee table with just the object id and the temporary flag, and a contractor table with the object id and the email address. This approach is believed to strike the best balance between flexibility, performance, and space efficiency. In the future we will consider supporting other mappings (e.g., table-per-hierarchy), depending on user demand.

For more detailed information on polymorphism support, refer to Chapter 8, “Inheritance” in the ODB Manual. There is also the inheritance/polymorphism example in the odb-examples package.

Composite object ids

ODB now supports composite object ids (translated to composite primary keys in the relational database). For example:

#pragma db value
class name
{
  ...
 
  std::string first_;
  std::string last_;
};
 
#pragma db object
class person
{
  ...
 
  #pragma db id
  name name_;
};

For more information on this feature, refer to Section 7.2.1, “Composite Object Ids” in the ODB manual as well as the composite example in the odb-examples package.

Optional session support

The most important backwards-incompatible change in this release is making session support optional (the other one has to do with database operation callbacks; see the official announcement for details). As you may remember, a session is a persistent object cache, which is often useful for minimizing the number of database operations and which can be required in order to load some bidirectional object relationships.

With ODB we try to follow the “you don’t pay for things you don’t use” principle. So support for features that are not needed by all applications (e.g., query support) is not included in the generated code by default. This is particularly important for mobile/embedded applications that need to minimize code size as well as memory and CPU usage. Session support was an exception to this rule, and we’ve decided to fix that in this release.

Now there are several ways to enable or disable session support for persistent classes. It can be done on a per-object basis or at the namespace level using the new session pragma. It can also be enabled by default for all objects using the --generate-session ODB compiler option. Thus, to get the old behavior, where all objects were session-enabled, simply add --generate-session to your ODB compiler command line. For more information, refer to Chapter 10, “Session” in the ODB manual.
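As a rough sketch of what this looks like in practice (the person class, db, and id are assumed from the earlier examples; see the manual for the exact pragma syntax), once session support is enabled for an object, loading it twice within the same session returns the same cached instance:

#pragma db object session pointer(std::shared_ptr)
class person
{
  ...
};
 
session s;
transaction t (db.begin ());
 
shared_ptr<person> p1 (db.load<person> (id));
shared_ptr<person> p2 (db.load<person> (id)); // Same instance as p1, from the cache.
 
t.commit ();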

Free proprietary license

To conclude, I would also like to mention a change to the ODB licensing terms. In addition to all the licensing options we currently have (open source and commercial proprietary licenses), we now offer a free proprietary license for small object models. This license allows you to use ODB in a proprietary (closed-source) application free of charge and without any of the GPL restrictions, provided that the amount of the generated database support code does not exceed 10,000 lines. The ODB compiler now includes the --show-sloc command line option, which can be used to show the amount of code being generated.

How much is 10,000 lines? While it depends on the optional features used (e.g., query support, views, containers, etc.), as a rough guide, 10,000 lines of code are sufficient to handle an object model with 10-20 persistent classes, each with half a dozen data members.

For more information on the free proprietary license, including a Q&A section, refer to the ODB Licensing page.