[odb-users] SQLite Concurrency

Mike Ruffing mike.ruffing at ipconfigure.com
Fri Jan 27 12:14:27 EST 2017


Boris,

Thank you for the detailed response.  I think I have a better understanding
of what's going on.

After further testing I can confirm that the blocking was NOT at the begin
transaction but rather at the result iteration, which is consistent with
your description of "cache-level" locking via a mutex.  Also, since my
long-running query returned only a single row, there was no interleaving
with the short-running queries.
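
Roughly, each thread in the test runs something along these lines, with a
sleep and a print on every row as you suggested.  This is only a sketch
based on the "hello" example's person class; the query criterion, timings,
pool size, and database name are placeholders rather than the actual
long/short queries I used:

#include <chrono>
#include <functional> // std::ref
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

#include <sqlite3.h>

#include <odb/transaction.hxx>
#include <odb/sqlite/database.hxx>
#include <odb/sqlite/connection-factory.hxx>

#include "person.hxx"
#include "person-odb.hxx"

// Run one query in its own transaction on a pooled connection, printing
// and sleeping on every row so that interleaving (or blocking) is visible.
//
static void
run_query (odb::sqlite::database& db, const char* tag)
{
  odb::transaction t (db.begin ()); // Read-only; no IMMEDIATE/EXCLUSIVE.

  typedef odb::query<person> query;
  odb::result<person> r (db.query<person> (query::age > 20));

  for (const person& p: r)
  {
    std::cout << tag << ": " << p.first () << std::endl;
    std::this_thread::sleep_for (std::chrono::milliseconds (100));
  }

  t.commit ();
}

int
main ()
{
  // One connection per thread via the pool; the shared cache is still
  // enabled here, which is where the interleaving/blocking shows up.
  std::unique_ptr<odb::sqlite::connection_factory> f (
    new odb::sqlite::connection_pool_factory (5));

  odb::sqlite::database db (
    "hello.db", SQLITE_OPEN_READWRITE, true, "", std::move (f));

  std::vector<std::thread> workers;
  workers.emplace_back (run_query, std::ref (db), "long");
  for (int i (0); i != 4; ++i)
    workers.emplace_back (run_query, std::ref (db), "short");

  for (std::thread& w: workers)
    w.join ();
}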

In a separate test I turned off the shared cache via SQLITE_OPEN_PRIVATECACHE.
With the shared cache disabled, all the short-running queries executed and
finished in parallel with the long-running query.
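
The change amounts to OR-ing SQLITE_OPEN_PRIVATECACHE into the flags passed
to the odb::sqlite::database constructor.  A sketch of what that looks like
(the open_db() helper, file name, and pool size are just illustrative; this
assumes the constructor overload that forwards the flags to
sqlite3_open_v2() and takes a connection factory as the last argument):

#include <memory>
#include <string>

#include <sqlite3.h>

#include <odb/sqlite/database.hxx>
#include <odb/sqlite/connection-factory.hxx>

// Open the database so that every pooled connection gets its own page
// cache (SQLITE_OPEN_PRIVATECACHE), trading memory footprint for
// concurrency.
//
static std::unique_ptr<odb::sqlite::database>
open_db (const std::string& path)
{
  std::unique_ptr<odb::sqlite::connection_factory> f (
    new odb::sqlite::connection_pool_factory (5));

  return std::unique_ptr<odb::sqlite::database> (
    new odb::sqlite::database (
      path,
      SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_PRIVATECACHE,
      true,  // foreign_keys
      "",    // vfs (default)
      std::move (f)));
}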

For our use case (which favors concurrency over memory footprint), I think
disabling the shared cache is the right approach.  Are there any known
problems/gotchas with disabling the shared cache?

Thanks,
Mike Ruffing

On Thu, Jan 26, 2017 at 10:10 AM Boris Kolpackov <boris at codesynthesis.com>
wrote:

> Hi Mike,
>
> Mike Ruffing <mike.ruffing at ipconfigure.com> writes:
>
> > Is it possible to query (select statement) a SQLite database using
> > multiple threads (unique connection per thread via
> > connection_pool_factory), without one blocking the other?
>
> SQLite 3 has pretty poor concurrency support with a fairly convoluted
> (and often surprising) locking architecture. To start with, there are
> two levels of locking (at least in the way ODB uses SQLite):
>
> To support multiple connections in multi-threaded applications ODB uses
> SQLite's shared cache[1] with the unlock notification functionality[2].
> This forms the first level of locking (let's call it the "cache level"):
> the database pages are shared by all the threads and access is serialized
> via a mutex (note: not an RW mutex)[3]. In practical terms, this means
> that every step of executing a query (i.e., every sqlite3_step() call)
> starts by acquiring this mutex and ends by releasing it. Which, in turn,
> means that while multiple threads can execute a SELECT statement at the
> same time, at best (see the second level of locking) it will be
> interleaving, not parallel (e.g., the first thread gets a row, then the
> second gets a row, then the first one again, and so on).
>
> Then there is the second level of locking which is on the database and
> its tables (let's call it the "database level"). In a nutshell (see this
> post[4] for details), there can only be one transaction that is writing
> to the database but several that are reading.
>
>
> > Modifying the "hello" odb example, I created one long-running query
> > (> 3 seconds) and four short-running queries (< 1 second).  I hosted
> > each "db->query" in its own thread, executing the long-running query
> > first.  I can see all the threads begin execution, but the four
> > short-running queries block at the begin transaction, waiting for the
> > long query transaction to complete.
>
> If all four transactions are read-only then none of them should block
> in BEGIN (provided you are not using IMMEDIATE/EXCLUSIVE). Can you add
> a delay (sleep) and a print statement on each iteration over the query
> in all four transactions? You should then see overlapping execution of
> the queries.
>
> [1] http://www.sqlite.org/sharedcache.html
> [2] https://www.sqlite.org/unlock_notify.html
> [3] https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg72045.html
> [4] https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg59903.html
>
> Boris
>

