Pagination and Querying in Cassandra
The usual trick of paging through data by slicing columns, using the last fetched column key as the start of the next slice, raises several concerns:

1. What if we want to fetch rows batch-wise instead of columns?
2. If there are updates during the paged retrieval, there is a chance that some items will be missed. For example, let’s say the last access was at the column with key ‘florence’; the next retrieval would fetch a batch starting from ‘florence’ onwards. What if a column with key ‘cologne’ has been newly added? Since it sorts before ‘florence’, it would not get included in any future retrieval.
3. There may also be a use case where it is required to paginate the results obtained by filtering with a range query, rather than fetching all the rows page-wise in the column family. (This was actually our use case.)
So let’s have a look at how we took a stab at the beast: Cassandra pagination. Before that, let me explain our use case fully so that it’s easier to grasp what we did and why we did it. Our main use case was to paginate access to the results returned from a range query, which can be cleanly expressed in SQL lingo as follows:
select * from <column_family> where <column_name_1> between <from_1> and <to_1> and <column_name_2> between <from_2> and <to_2> ... and <column_name_n> between <from_n> and <to_n>
Here each column_name is an index (actually a sub-index of a composite index; for a description of our indexing scheme, refer to my earlier blog post, Cassandra: Lessons Learnt).
So our use case is a bit complicated in that it requires paginating access to the result set obtained from a range query. Our requirement was also to fetch all the rows satisfying this criteria without missing any row, given that there would be new additions while we are retrieving rows in batches. In fact, there may be a considerable time lapse between two fetches, since in our use case the retrieved data are processed by a scheduled task with a configurable interval. Additionally, we had to leave room for non-batched access to the range query result as well. And of course we were not using the OrderedPartitioner (the evils of OrderedPartitioner are well documented elsewhere: suboptimal load balancing, creation of hot spots, etc.). Had we used OrderedPartitioner our life would have been a bit easier, since we would have been able to do a range query on the rows. But since we were using RandomPartitioner, no ordering of rows by row key can be assumed.
OK, that’s enough about the predicament we were in a couple of months back when faced with the task of ‘Cassandrafication’ of our data layer. Hope you got the idea. Now let’s see what we did to improve the situation.
First we had to deal with our inability to do range queries on rows. Cassandra has this nice property that the columns of a particular row are always sorted by their column keys. So we utilized this nicety to impose an ordering on rows: we always maintain a meta row in which all the row keys are stored as columns (each row key becomes a column key, and the column value is left empty).
Let’s say this row is ‘RowIndex’ (see Figure 1). Now when doing a query on the column family, we first query this meta row using a range query to get the row keys matching the criteria, and then do the real row fetching one by one using the fetched row keys. You might be wondering how the range query is constructed to match the WHERE clauses in the SQL given above. In our scheme, the row key is constituted by concatenating the value of each index. (An index is in fact a column in a particular row, and we use the column value as the index value. This will become clearer by having a look at the first step of the illustration given in Figure 2.) So this is the scheme we used for non-batched retrieval of rows satisfying a particular query.
Figure 1: Column family with meta row ‘RowIndex’
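The post itself doesn’t show code, so here is a minimal sketch of this non-batched scheme using pycassa, a Python client from the same era. The keyspace name, column family name, and the '--' key separator are illustrative assumptions, not details from the post.

```python
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

# assumed keyspace/column family names, purely for illustration
pool = ConnectionPool('EventsKeyspace', ['localhost:9160'])
events = ColumnFamily(pool, 'Events')

def fetch_matching_rows(range_first, range_last):
    # Columns of a row are always sorted by column key, so slicing the
    # 'RowIndex' meta row between the two bounds is effectively a range
    # query over row keys.
    index = events.get('RowIndex',
                       column_start=range_first,
                       column_finish=range_last,
                       column_count=10000)  # upper bound on matches
    # Fetch the actual rows using the keys found in the meta row.
    return events.multiget(list(index.keys()))

# e.g. all rows for server 'esb' between 08:00 and 09:00
rows = fetch_matching_rows('esb--08:00', 'esb--09:00')
```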
But for the paginated use case this proved to be insufficient, due to the second shortcoming outlined earlier. We realized that there needs to be an ordering by timestamp to catch a newly added row, even if its row key places it at a position in the sorted order before the last accessed row key. So we introduced another meta row storing the insertion timestamp of each row. Let’s say the row key of this meta row is ‘TimeStampIndex’. Each column of this row holds the insertion timestamp as the column key and the row key of the row inserted at that timestamp as the column value. So now we need to do four things when we add a row to the column family (a code sketch follows the list below).
Figure 2: Row insertion algorithm
1. Create the row key using the defined indexes. Here we use ‘server’ and ‘time’ as the indexes.
2. Insert the row key into the ‘RowIndex’ meta row as a column.
3. Insert the insertion timestamp, along with the row key, as a column into the ‘TimeStampIndex’ meta row.
4. Add the row itself to the column family.
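Here is a sketch of these four steps, reusing the pycassa events column family from the earlier snippet. Storing the timestamp as a zero-padded string is my assumption to keep ‘TimeStampIndex’ columns sorted; a real implementation might use a LongType comparator instead.

```python
import time

def insert_row(server, hour, columns):
    # 1. Build the row key by concatenating the index values
    #    ('--' is an assumed separator, not necessarily the post's).
    row_key = '%s--%s' % (server, hour)
    # 2. Record the row key in the 'RowIndex' meta row; the column
    #    value stays empty, only the sorted column key matters.
    events.insert('RowIndex', {row_key: ''})
    # 3. Record insertion timestamp -> row key in 'TimeStampIndex'.
    #    Zero-padding keeps string timestamps in chronological order.
    ts = '%020d' % int(time.time() * 1000000)
    events.insert('TimeStampIndex', {ts: row_key})
    # 4. Add the row itself to the column family.
    events.insert(row_key, columns)

insert_row('esb', '08:23', {'status': 'ok', 'latency': '120ms'})
```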
‘RowIndex’ is to be used for non-batched access of the range query result, while ‘TimeStampIndex’ is to be used for batched access.
Now when we want to fetch rows in batches satisfying the range query criteria, we first get a batch-size chunk of timestamps from ‘TimeStampIndex’. Then for each row associated with those timestamps, we check whether the row matches the filter criteria. This is a simple string comparison to check whether the row key falls between the range-first and range-last values.
Say, for example, the filter criteria for the above illustration is the following WHERE clause:
where 'server' between 'esb' and 'esb' and 'hour' between '08:00' and '09:00'
Now the range-first value of the query would be ‘esb—08:00’ and the range-last value would be ‘esb—09:00’. This will select events for server ‘esb’ during the hours from ‘08:00’ to ‘09:00’. So if the row key is ‘esb—08:23’ it will get picked, and if it is ‘esb—09:23’ it won’t.
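This filter really is just lexical string comparison on the composite keys, as a quick check shows (using the '--' separator assumed in the earlier sketches):

```python
>>> 'esb--08:00' <= 'esb--08:23' <= 'esb--09:00'
True
>>> 'esb--08:00' <= 'esb--09:23' <= 'esb--09:00'
False
```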
So, as can be seen, for this scenario we didn’t use the ‘RowIndex’ meta row; it’s for non-batched use only. And in this way, using ‘TimeStampIndex’, we can catch newly added rows without missing any row.
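Putting the pieces together, here is one possible shape of a paginated fetch, again with pycassa and the assumed names from the earlier sketches. The NotFoundException handling and the treatment of the inclusive column_start are details I have added; the post doesn’t specify them.

```python
from pycassa import NotFoundException

def fetch_next_batch(last_seen_ts, batch_size, range_first, range_last):
    # Slice the next chunk of (timestamp -> row key) columns from
    # 'TimeStampIndex'. column_start is inclusive, so fetch one extra
    # column and drop the timestamp we already processed.
    try:
        chunk = events.get('TimeStampIndex',
                           column_start=last_seen_ts,
                           column_count=batch_size + 1)
    except NotFoundException:
        return {}, last_seen_ts
    items = [(ts, key) for ts, key in chunk.items() if ts != last_seen_ts]
    # Simple string comparison: keep row keys between the range bounds.
    matching = [key for ts, key in items if range_first <= key <= range_last]
    rows = events.multiget(matching) if matching else {}
    new_last_ts = items[-1][0] if items else last_seen_ts
    return rows, new_last_ts

# first page: start from the beginning of 'TimeStampIndex'
rows, last_ts = fetch_next_batch('', 100, 'esb--08:00', 'esb--09:00')
```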
However, this scheme is not without its own drawbacks:
1. The batch size is not consistent. Even though a batch-size chunk is picked from the query, some of those rows will be discarded since they do not match the filtering criteria. A solution would be to fetch multiple chunks until the batch-size number of rows fulfilling the filter criteria is found, but for now we are OK with inconsistent batch sizes. (A sketch of this fix follows the list below.)
2. What if an existing row is updated? It will get fetched a second time, since the algorithm will not miss any newly added or updated row. This may or may not be desirable, according to the use case. For us this is in fact the needed behavior, since we want any new update to an already fetched row, so we are OK with that too.
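For completeness, the multi-chunk fix mentioned in point 1 could look something like this, building on the fetch_next_batch sketch above:

```python
def fetch_full_batch(last_seen_ts, batch_size, range_first, range_last):
    # Keep pulling chunks until batch_size matching rows are collected
    # or 'TimeStampIndex' runs out of new columns.
    collected = {}
    while len(collected) < batch_size:
        rows, new_last_ts = fetch_next_batch(last_seen_ts, batch_size,
                                             range_first, range_last)
        collected.update(rows)
        if new_last_ts == last_seen_ts:
            break  # no progress: nothing new to read
        last_seen_ts = new_last_ts
    return collected, last_seen_ts
```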
So that concludes our escapade with Cassandra pagination. The (love) story continues... (Hope you saw the sarcasm sign, unlike Sheldon.)
Source: http://chamibuddhika.wordpress.com/2011/12/11/pagination-and-querying-in-cassandra/