[odb-users] Closing SQLite Database/Performance Gains

Boris Kolpackov boris at codesynthesis.com
Thu Jul 26 09:43:30 EDT 2012

Hi Eric,

Sum, Eric B <eric.b.sum at lmco.com> writes:

> I am persisting and manipulating objects in a SQLite database. I have 
> 2 questions:
> 1. When does the database connection actually close? (In other words, when
>    is the equivalent of the sqlite3_close() function called?) Is there an odb
>    function that I need to call to close the database? Does the connection
>    close when a transaction is committed? Does it close when the scope of
>    the auto_ptr to the database object ends?

The short answer is that the database will definitely be closed when the
odb::sqlite::database object is destroyed.

The long answer is that when the database is actually closed depends on
the connection factory being used. The default factory is
connection_pool_factory. In this case the database is not closed until
the sqlite::database object is destroyed, since the connections that
have been created are generally kept around so that they can be reused.
The same goes for single_connection_factory, which creates a single
connection and holds it until the sqlite::database object is destroyed.

The new_connection_factory, on the other hand, creates a new connection
every time one is requested and closes it when the connection object is
released. This means that the database is closed every time you commit
or roll back a transaction.

For more information on connection factories and how to select one,
refer to Section 14.3, "SQLite Connection and Connection Factory"
in the ODB manual.
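
For illustration, here is a minimal sketch of how a non-default factory
can be passed when creating the database object. The file name "test.db"
is an assumption, and the exact constructor signature (argument order,
auto_ptr vs. unique_ptr) varies between ODB versions, so check the manual
section above against the version you are using:

  #include <memory>  // std::auto_ptr (ODB 2.x-era interface)

  #include <odb/sqlite/database.hxx>
  #include <odb/sqlite/connection-factory.hxx>

  int
  main ()
  {
    using namespace odb::sqlite;

    // With new_connection_factory each connection is opened on demand
    // and closed as soon as it is released (e.g., at the end of a
    // transaction). With the default connection_pool_factory, connections
    // are instead cached until the database object itself is destroyed.
    //
    std::auto_ptr<connection_factory> f (new new_connection_factory);

    database db ("test.db",
                 SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                 true,  // foreign_keys
                 "",    // vfs
                 f);

    // ... run transactions; with this factory the underlying sqlite3
    // handle is closed after each commit or rollback.
  }
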

> 2. I am trying to gain performance/speed by using an in-memory database
>    rather than an on-disk one.  However, I have run some tests with odb
>    through many transactions, and I do not notice a significant gain in
>    speed. Is there a reason for this?  I have read that in-memory databases
>    can be quite beneficial in terms of performance.

Hm, that's strange. I just ran our 'threads' test (common/threads in the
odb-tests package) and I see a clear performance advantage of an in-memory
database over an on-disk one.

I had to modify the test to run single-threaded (an in-memory SQLite 
database cannot be shared among multiple threads, unlike an on-disk 
one). So I changed these test parameters in driver.cxx from:

const unsigned long thread_count = 32;
const unsigned long iteration_count = 50;
const unsigned long sub_iteration_count = 20;

to:

const unsigned long thread_count = 1;
const unsigned long iteration_count = 500;
const unsigned long sub_iteration_count = 60;

I then measured the execution time for these four cases:

Database on a slow USB 2.0 disk:  45sec, 40% CPU utilization
Database on a SAS 7K rpm disk:    25sec, 90% CPU utilization
Database on a SAS 15K rpm disk:   23sec, 92% CPU utilization
Database in memory:               17sec, 100% CPU utilization

One thing that could explain your results is your database being on an
SSD. As you can see, faster drives approach in-memory performance, and
an SSD could probably get even closer.
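
For reference, here is a sketch of how such an in-memory database can be
opened: you pass SQLite's special ":memory:" name instead of a file name.
Since an in-memory database is private to a single connection,
single_connection_factory is the natural choice (again, the exact
constructor signature may differ between ODB versions):

  #include <memory>

  #include <odb/sqlite/database.hxx>
  #include <odb/sqlite/connection-factory.hxx>

  int
  main ()
  {
    using namespace odb::sqlite;

    // An in-memory SQLite database lives only as long as the single
    // connection that created it, so use single_connection_factory to
    // make every transaction reuse that one connection.
    //
    std::auto_ptr<connection_factory> f (new single_connection_factory);

    database db (":memory:",
                 SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                 true,  // foreign_keys
                 "",    // vfs
                 f);

    // All data disappears when db (and its connection) is destroyed.
  }
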
