Seven Surprising Findings About DB2
i’ve just completed ibm db2 for linux, unix and windows (luw) coverage here on use the index, luke as preparation for an upcoming training i’m giving. this blog post describes the major differences i’ve found compared to the other databases i’m covering (oracle, sql server, postgresql and mysql).
free & easy
well, let’s face it: it’s ibm software. it has a pretty long history. you would probably not expect it to be easy to install and configure, but in fact it is. at least db2 luw express-c 10.5 is (luw is for linux, unix and windows; express-c is the free community edition). that might be another surprise: there is a free community edition. it’s not open source, but it’s free as in free beer.
no easy explain
the first problem i stumbled upon is that db2 has no easy way to display an execution plan. no kidding. here is what ibm says about it:
explain a statement by prefixing it with
explain plan for
this stores the execution plan in a set of tables in the database (you’ll need to create these tables first). this is pretty much like in oracle.
display a stored explain plan using db2exfmt
this is a command line tool, not something you can call from an sql prompt. to run this tool you’ll need shell access to a db2 installation (e.g., on the server). that means you cannot use it over a regular database connection.
there is another command line tool (db2expln) that combines the two steps from above. apart from the fact that this procedure is not exactly convenient, the output you get is ascii art:
access plan:
-----------
        total cost:     60528.3
        query degree:   1

                rows
               return
               (   1)
                cost
                 i/o
                  |
               49534.9
               ^hsjoin
               (   2)
               60528.3
                68095
           /-----+------\
       49534.9          10000
       tbscan          tbscan
       (   3)          (   4)
       59833.6         687.72
        67325            770
          |               |
     1.00933e+06        10000
   table: db2inst1  table: db2inst1
        sales          employees
          q2              q1
please note that this is just an excerpt—the full output of db2exfmt has 400 lines. quite a lot of information that you’ll hardly ever need. even the information that you need all the time (the operations) is presented in a pretty unreadable way (imho). i’m particularly thankful that all the numbers you see above are not labeled—that’s really the icing that renders this “tool” totally useless for the occasional user.
however, according to the ibm documentation there is another way to display an execution plan: “
write your own queries against the explain tables.
” and that’s exactly what i did: i wrote a view called last_explained that does exactly what its name suggests: it shows the execution plan of the last statement that was explained (in a non-useless formatting):
explain plan
------------------------------------------------------------
 id | operation           |                       rows |  cost
  1 | return              |                            | 60528
  2 |  hsjoin             |             49535 of 10000 | 60528
  3 |   tbscan sales      | 49535 of 1009326 (  4.91%) | 59833
  4 |   tbscan employees  |   10000 of 10000 (100.00%) |   687

predicate information
  2 - join (q2.subsidiary_id = decimal(q1.subsidiary_id, 10, 0))
      join (q2.employee_id = decimal(q1.employee_id, 10, 0))
  3 - sarg ((current date - 6 months) < q2.sale_date)

explain plan by markus winand - no warranty
http://use-the-index-luke.com/s/last_explained
i’m pretty sure many db2 users will say that this presentation of the execution plan is confusing. and that’s ok. if you are used to the way ibm presents execution plans, just stick to what you are used to. however, i’m working with all kinds of databases and they all have a way to display the execution plan similar to the one shown above—for me this format is much more useful. further, i’ve made a useful selection of data to display: the row count estimates and the predicate information.
emulating partial indexes is possible
partial indexes are indexes not containing all table rows. they are useful in three cases:
to preserve space when the index is only useful for a very small fraction of the rows. example: queue tables.
to establish a specific row order in the presence of constant non-equality predicates. example:
where x in (1, 5, 9) order by y. an index like the following can be used to avoid a sort operation:
create index … on … (y) where x in (1, 5, 9)
to implement unique constraints on a subset of rows (e.g. only those
where active = 'y').
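as a point of comparison, databases with a real where clause on indexes make the first use case straightforward. here is a minimal sketch using python’s bundled sqlite3 module (sqlite supports partial indexes natively since 3.8.0); the table and index names are just illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table messages (
        id        integer primary key,
        receiver  integer,
        processed char(1),
        message   text
    );
    -- a real partial index: only unprocessed rows are indexed
    create index messages_todo
        on messages (receiver)
     where processed = 'n';
""")
con.executemany(
    "insert into messages (receiver, processed, message) values (?, ?, ?)",
    [(1, 'n', 'todo'), (1, 'y', 'done'), (2, 'n', 'todo too')])

# the query repeats the index's where condition, so the optimizer
# can prove the partial index covers all candidate rows
plan = con.execute(
    "explain query plan"
    " select message from messages"
    "  where processed = 'n' and receiver = ?", (1,)
).fetchall()
print(plan)
```

the plan should show a search on messages_todo rather than a full table scan, because the query’s where clause implies the index’s where clause.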
however, db2 doesn’t support a where clause for indexes like the one shown above. but db2 has many oracle-compatibility features, one of them is
exclude null keys
: “specifies that an index entry is not created when all parts of the index key contain the null value.” this is actually the hard-wired behaviour in the oracle database and it is commonly exploited to
emulate partial indexes in the oracle database.
generally speaking, emulating partial indexes works by mapping all parts of the key (all indexed columns) to null for rows that should not end up in the index. as an example, let’s emulate this partial index in the oracle database (db2 is next):
create index messages_todo on messages (receiver) where processed = 'n'
the solution presented in
sql performance explained
uses a function to map the processed rows to null; otherwise the receiver value is passed through:
create or replace function pi_processed(processed char, receiver number)
return number
deterministic
as
begin
   if processed in ('n') then
      return receiver;
   else
      return null;
   end if;
end;
/
it’s a deterministic function and can thus be used in an oracle function-based index. this won’t work with db2, because db2 doesn’t allow user-defined functions in index definitions. however, let’s first complete the oracle example.
create index messages_todo on messages (pi_processed(processed, receiver));
this index has only rows
where processed in ('n')
—otherwise the function returns null, which is not put into the index (there is no other column that could be non-null). voilà: a partial index in the oracle database.
to use this index, just use the pi_processed function in the where clause:
select message from messages where pi_processed(processed, receiver) = ?
this is functionally equivalent to:
select message from messages where processed = 'n' and receiver = ?
so far, so ugly. if you go for this approach, you’d better need the partial index desperately.
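outside the database, the equivalence of the two queries is easy to see. a tiny python sketch of the same mapping (sample rows and names are made up for illustration):

```python
# hypothetical stand-in for the oracle pi_processed function:
# pass receiver through for unprocessed rows, map the rest to None
def pi_processed(processed, receiver):
    return receiver if processed == 'n' else None

# made-up sample rows: (processed, receiver, message)
messages = [('n', 7, 'todo'), ('y', 7, 'done'), ('n', 3, 'other')]
receiver = 7

# filtering on the mapped value...
via_function = [m for p, r, m in messages if pi_processed(p, r) == receiver]
# ...selects exactly the same rows as the direct two-term predicate
direct = [m for p, r, m in messages if p == 'n' and r == receiver]
print(via_function, direct)
```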
to make this approach work in db2 we need two components: (1) the
exclude null keys
clause (no-brainer); (2) a way to map processed rows to null without using a user-defined function, so it can be used in a db2 index.
although the second one might seem to be hard, it is actually very simple: db2 can do expression based indexing, just not on user-defined functions. the mapping we need can be accomplished with regular sql expressions:
case when processed = 'n' then receiver else null end
this implements the very same mapping as the pi_processed function above. remember that case expressions are first-class citizens in sql—they can be used in db2 index definitions (on luw only since 10.5):
create index messages_not_processed_pi
    on messages (case when processed = 'n'
                      then receiver
                      else null
                  end)
 exclude null keys;
this index uses the case expression to map the rows that should not be indexed to null, and the exclude null keys feature to prevent those rows from being stored in the index. voilà: a partial index in db2 luw 10.5.
to use the index, just use the case expression in the where clause and check the execution plan:
select * from messages where (case when processed = 'n' then receiver else null end) = ?;
explain plan
-------------------------------------------------------
 id | operation        |                   rows |  cost
  1 | return           |                        | 49686
  2 |  tbscan messages | 900 of 999999 (  .09%) | 49686

predicate information
  2 - sarg (q1.processed = 'n')
      sarg (q1.receiver = ?)
oh, that’s a big disappointment: the optimizer didn’t take the index. it does a full table scan instead. what’s wrong?
if you have a very close look at the execution plan above, which i created with my last_explained view, you might see something suspicious.
look at the predicate information. what happened to the case expression that we used in the query? the db2 optimizer was smart enough to rewrite it as
where processed = 'n' and receiver = ?
. isn’t that great? absolutely!…except that this smartness has just ruined my attempt to use the partial index. that’s what i meant when i said that case expressions are first-class citizens in sql: the database has a pretty good understanding of what they do and can transform them.
we need a way to apply our magic null-mapping, but we can’t use functions (they can’t be indexed) nor can we use case expressions, because they are optimized away. dead-end? au contraire: it’s pretty easy to confuse an optimizer. all you need to do is obfuscate the case expression so that the optimizer doesn’t transform it anymore. adding zero to a numeric column is always my first attempt in such cases:
case when processed = 'n' then receiver + 0 else null end
the case expression is essentially the same; i’ve just added zero to the receiver column, which is numeric. if i use this expression in the index and the query, i get this execution plan:
 id | operation                            |            rows |  cost
  1 | return                               |                 | 13071
  2 |  fetch messages                      |  40000 of 40000 | 13071
  3 |   ridscn                             |  40000 of 40000 |  1665
  4 |    sort (unique)                     |  40000 of 40000 |  1665
  5 |     ixscan messages_not_processed_pi | 40000 of 999999 |  1646

predicate information
  2 - sarg (case when (q1.processed = 'n') then (q1.receiver + 0) else null end = ?)
  5 - start (case when (q1.processed = 'n') then (q1.receiver + 0) else null end = ?)
      stop  (case when (q1.processed = 'n') then (q1.receiver + 0) else null end = ?)
the partial index is used as intended. the case expression appears unchanged in the predicate information section.
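for comparison, sqlite (reachable via python’s sqlite3 module) also supports expression indexes since 3.9, and it does not appear to apply db2’s case-to-predicate rewrite, so the plain expression index can be matched without any obfuscation. a sketch, with illustrative names; note that sqlite has no exclude null keys, so the null entries are stored in the index anyway:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table messages (receiver integer, processed char(1), message text);
    -- expression index; unlike db2's exclude null keys variant,
    -- sqlite keeps the null entries in the index
    create index messages_not_processed
        on messages (case when processed = 'n' then receiver else null end);
""")

# the query must use the exact same expression as the index definition
plan = con.execute(
    "explain query plan"
    " select message from messages"
    "  where (case when processed = 'n' then receiver else null end) = ?", (1,)
).fetchall()
print(plan)
```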
i haven’t checked any other ways to emulate partial indexes in db2 (e.g., using partitions like in more recent oracle versions).
as always: just because you can do something doesn’t mean you should. this approach is so ugly—even more ugly than the oracle workaround—that you must desperately need a partial index to justify this maintenance nightmare. further, it will stop working whenever the optimizer becomes smart enough to optimize the obfuscation away. however, then you just need to put an even more ugly obfuscation in there.
include clause only for unique indexes
with the include clause you can add extra columns to an index for the sole purpose of allowing index-only scans when these columns are selected. i knew the include clause before because sql server offers it too, but there are some differences:
in sql server, include columns are only added to the leaf nodes of the index—not to the root and branch nodes. this limits the impact on the b-tree’s depth when adding many or long columns to an index. it also allows you to bypass some limitations (number of columns, total index row length, allowed data types). that doesn’t seem to be the case in db2.
in db2, the include clause is only valid for unique indexes. it allows you to enforce the uniqueness of the key columns only—the include columns are just not considered when checking for uniqueness. this is the same in sql server, except that sql server supports include columns on non-unique indexes too (to leverage the above-mentioned benefits).
almost no nulls first/last support
the nulls first and nulls last modifiers to the order by clause allow you to specify whether null values are considered larger or smaller than non-null values during sorting. strictly speaking, you must always specify the desired order when sorting nullable columns because the sql standard doesn’t specify a default. as you can see in the following chart, the default order of nulls is indeed different across various databases:
figure a.1. database/feature matrix
in this chart, you can also see that db2 doesn’t support nulls first/nulls last—neither in the order by clause nor in the index definition. however, note that this is a simplified statement. in fact, db2 accepts nulls first/nulls last when it is in line with the default null order (db2 considers null larger than any other value). in other words, order by col asc nulls last is valid, but it doesn’t change the result—nulls last is anyway the default for an ascending sort. the same is true for order by col desc nulls first—accepted, but doesn’t change anything. the other two combinations are not valid at all and yield a syntax error.
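the cross-database differences are easy to observe from a sql prompt. a sketch with python’s sqlite3 module, assuming a reasonably recent sqlite (the nulls first/last modifiers need 3.30+); sqlite treats null as smaller than any value, so its ascending default is the opposite of db2’s:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (x integer)")
con.executemany("insert into t values (?)", [(2,), (None,), (1,)])

# sqlite's default: null sorts as the smallest value, so it comes first
asc_default = [r[0] for r in con.execute("select x from t order by x")]
# the explicit modifier overrides the default
asc_nulls_last = [r[0] for r in con.execute(
    "select x from t order by x nulls last")]
print(asc_default, asc_nulls_last)
```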
sql:2008 fetch first but not offset
db2 has supported the fetch first … rows only clause for a while now—kind of impressive considering it was “just” added with the sql:2008 standard. however, db2 doesn’t support the offset clause, which was introduced with the very same release of the sql standard. although it might look like an arbitrary omission, it is in fact a very wise move that i deeply respect. offset is the root of so much evil. in the next section, i’ll explain how to live without offset.
side note: if you have code using offset that you cannot change, you can still activate the mysql compatibility vector that makes limit and offset available in db2. funny enough, combining fetch first with offset is then still not possible (even though that would be standard compliant).
decent row-value predicates support
sql row values are multiple scalar values grouped together by parentheses to form a single logical value. in-lists are a common use case:
where (col_a, col_b) in (select col_a, col_b from…)
this is supported by pretty much every database. however, there is a second, hardly known use case that has pretty poor support in today’s sql databases: keyset pagination, or offset-less pagination. keyset pagination uses a where clause that basically says “i’ve seen everything up till here, just give me the next rows”. in the simplest case it looks like this:
select … from … where time_stamp < ? order by time_stamp desc fetch first 10 rows only
imagine you’ve already fetched a bunch of rows and need to get the next few. for that you’d use the time_stamp value of the last entry you’ve got as the bind value (?). the query then just returns the rows from there on. but what if there are two rows with the very same time_stamp value? then you need a tiebreaker: a second column—preferably a unique column—in the where and order by clauses that unambiguously marks the place up to which you have the result. this is where row-value predicates come in:
select … from … where (time_stamp, id) < (?, ?) order by time_stamp desc, id desc fetch first 10 rows only
the order by clause is extended to make sure there is a well-defined order if there are equal time_stamp values, and the where clause just selects what comes after the row specified by the time_stamp/id pair. it couldn’t be any simpler to express this selection criterion. unfortunately, neither the oracle database nor sqlite nor sql server understands this syntax—even though it has been in the sql standard since 1992! however, it is possible to
apply the same logic without row-value predicates
—but that’s rather inconvenient and easy to get wrong.
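the row-value-free formulation can be sketched end to end with python’s sqlite3 module (table and data are made up for illustration); the or-based predicate is the logical equivalent of where (time_stamp, id) < (?, ?):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table log (id integer primary key, time_stamp integer)")
# duplicate time_stamp values make the id tiebreaker necessary
con.executemany("insert into log values (?, ?)",
                [(i, i // 2) for i in range(1, 11)])

page1 = con.execute(
    "select time_stamp, id from log"
    " order by time_stamp desc, id desc limit 3").fetchall()
last_ts, last_id = page1[-1]

# equivalent of: where (time_stamp, id) < (?, ?)
page2 = con.execute(
    "select time_stamp, id from log"
    " where time_stamp < ?"
    "    or (time_stamp = ? and id < ?)"
    " order by time_stamp desc, id desc limit 3",
    (last_ts, last_ts, last_id)).fetchall()
print(page1, page2)
```

note how the last row of page1 has to be passed in twice—exactly the kind of clumsiness that makes this formulation easy to get wrong.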
even if a database understands row-value predicates, it doesn’t necessarily understand them well enough to make proper use of indexes that support the order by clause. this is where mysql fails—although it applies the logic correctly and delivers the right result, it does not use an index for that and is thus rather slow. in the end, db2 luw (since 10.1) and postgresql (since 8.4) are the only two databases that support row-value predicates the way they should be supported.
the fact that db2 luw has everything you need for convenient keyset pagination is also the reason why there is absolutely no reason to complain about the missing offset functionality. in fact, i think that offset should not have been added to the sql standard, and i’m happy to see a vendor that resisted the urge to add it just because it became part of the standard. sometimes the standard is wrong—just sometimes, not very often ;) i can’t change the standard—all i can do is teach how to do it right and start campaigns like #nooffset.
figure a.2. database/feature matrix
if you like my way of explaining things, you’ll love my book “sql performance explained” .
Published at DZone with permission of Markus Winand, DZone MVB. See the original article here.