The Redshift Query Queues
Working with query queues without a predefined tool is more of an art than a science. You will come to your optimal configuration after some trial and error.
Setting up a Redshift cluster that hangs on some number of query executions is always a hassle. When users run queries in Amazon Redshift, the queries are routed to query queues. Usually, these hangups can be mitigated in advance with a good Redshift query queue setup.
Amazon Redshift has implemented a mechanism with which we can modify the queues to our advantage. In this article, you will learn about the challenges and some best practices for modifying query queues and the execution of queries to maintain an optimized query runtime.
Key Components
Before we go into the challenges, let's start by discussing the key components of Redshift.
Workload Manager (WLM)
Amazon Redshift Workload Manager is a tool for managing user-defined query queues in a flexible manner. We use Redshift's workload management console to define new user-defined queues and to define or modify their parameters. We can also use it to define the parameters of existing default queues.
3 Queue Types
By default, Amazon Redshift has three queue types: the superuser queue, the default queue, and user-defined queues.
1. Superuser Queue
The superuser queue is reserved for running commands related to the system, for troubleshooting, or for emergency manual operations. This queue cannot be configured and can only process one query at a time.
2. Default Queue
Every Redshift cluster has a default queue. The default queue comes with a default concurrency level of 5. For the default queue, you can change the concurrency, timeout, and memory allocation. Any queries that are not routed to other queues run in the default queue.
3. User-Defined Queues
Besides the default queue, you can add other user-defined queues. For user-defined queues, in addition to the parameters available on the default queue, you can also configure user group and query group parameters.
5 Queue Parameters
Smart use of queue parameters allows users to optimize the time and execution cost of a query. We will look at the following queue parameters:
1. Concurrency Level
Specifies the number of queries that run concurrently within a particular queue.
2. User Groups
A query executed by a member of a user group runs inside the queue assigned to that user group.
3. Query Groups
A query group is a simple label. Users can assign queries to a particular queue on the fly using this label (see the short example after this list).
4. Memory Allocation
You have the option of changing the percentage of memory assigned to each queue by setting the WLM memory percent parameter. The percentages for all the queues add up to 100%.
5. Timeout
With this parameter, you specify the amount of time, in milliseconds, that Redshift waits for a query to execute before canceling it. Note that the timeout is based on query execution time, which doesn't include time spent waiting in a queue.
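For example, a session can route its next statements to a particular queue by setting the query group label and resetting it afterward. A minimal sketch (the table name is hypothetical; the fast_etl_execution label matches the example configuration later in this article):

set query_group to 'fast_etl_execution';
select count(*) from orders; -- hypothetical table; runs in the matching queue
reset query_group;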
Now that we know the main points, let's move on to the challenges.
The Challenge
Unlike transactional systems, which have queries of uniform size and execution cost, data warehouse queries vary greatly in execution cost, time, and result-set size. Optimal execution of these queries necessitates a balanced structure of execution queue configurations dedicated to different query sizes and/or priorities. We want to make sure that slow-running queries are not blocking fast-running queries that execute in a matter of minutes or seconds.
Arriving at an optimal queue setting for the Redshift cluster is a challenge and needs to take into account the specific user requirements you are implementing.
The WLM configuration properties are either dynamic or static. If you change any of the dynamic properties, you don't need to reboot your cluster for the changes to take effect, unlike changes to the static properties.
The following WLM properties are static:
- User groups.
- User group wildcards.
- Query groups.
- Query group wildcards.
On top of the defined queue parameters, dynamic execution parameters can be set for specific queries, impacting their performance.
The following WLM properties are dynamic:
- Concurrency.
- Percent of memory to use.
- Timeout.
As mentioned above, the user can change dynamic properties without restarting the Redshift cluster. This allows dynamic memory management when needed; we will look at some examples in the tips section.
An Example
An example of a WLM configuration that handles a solid DWH/BI setup looks something like this: we defined the fast_etl_execution queue with the user group called etl. This user group handles ETL executions. Another queue is for BI-related queries. In this configuration, ad-hoc queries are handled by the default queue. It is important to define the etl and bi user groups beforehand, or you will have to restart your Redshift cluster, as these parameters are static.
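Expressed as a wlm_json_configuration parameter value, such a setup might look like the sketch below: the first queue is fast_etl_execution (with a 2-minute timeout), the second is slow_etl_execution, the third serves the bi user group, and the last entry is the default queue. All concurrency, memory, and timeout figures here are illustrative assumptions, not recommendations:

[
  {"query_group": ["fast_etl_execution"],
   "user_group": ["etl"],
   "query_concurrency": 5,
   "memory_percent_to_use": 20,
   "max_execution_time": 120000},
  {"user_group": ["etl"],
   "query_concurrency": 2,
   "memory_percent_to_use": 40},
  {"user_group": ["bi"],
   "query_concurrency": 5,
   "memory_percent_to_use": 20},
  {"query_concurrency": 5,
   "memory_percent_to_use": 20}
]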
Assign all your ETL users to the etl user group:
create group etl with user etl_execution;
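If you create more ETL users later, they can be added to the same group; the user name below is a placeholder:

alter group etl add user another_etl_user;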
Now, when the user etl_execution executes an ETL job, if it takes more than 2 minutes (120,000 milliseconds), the timeout parameter of the first queue defined for the user (fast_etl_execution) cancels the execution in that queue and routes the query to the slow_etl_execution queue. The slow_etl_execution queue has more memory and a lower concurrency level, so each query has more power to finish the job.
We can check the memory allocation of our queues with the following statement:
select
name
, service_class
, num_query_tasks as slots
, query_working_mem as memory
from stv_wlm_service_class_config;
The result shows the memory and the available slots for the different “Service class #x” queues, where x denotes a queue mapped to the Redshift console “Query x” queue.
You can also see the internal query queues, which are not accessible to users (service_class 1-4), and the superuser query queue (service_class 5).
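To see where queries are sitting right now, one option is the stv_wlm_query_state system table. A minimal sketch (queue_time and exec_time are reported in microseconds):

select query
, service_class
, state -- query state, such as queued or executing
, queue_time
, exec_time
from stv_wlm_query_state
order by service_class;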
Queue Management Tips
Let's look at some general tips on working with query queues.
Queue Setup
Having only the default execution queue can cause bottlenecks. If a large, time-consuming query blocks the only default queue, small, fast queries have to wait. Make sure you create at least one user-defined query queue besides the Redshift default queue. I recommend creating separate query queues for fast and slow queries; in our example, fast_etl_execution and slow_etl_execution. Mind the level of concurrent processes that run across all the query queues in Redshift. Another recommendation is to have a concurrency level of at least 2 in each queue.
Dynamic Management for Loads
We can modify the dynamic properties to tune the execution of particular queries within a queue via memory allocation. The main way you control this is with the wlm_query_slot_count parameter. You should change wlm_query_slot_count dynamically when you perform resource-intensive statements like the following:
- vacuum, which reclaims space and re-sorts rows in either a specified table or all tables in the current database.
- analyze, which gathers table statistics for Redshift's optimizer.
- copy, which transfers data into Redshift.
You should set the statement to use all the available resources of the query queue. In this case, where the concurrency setting of the queue is 10, we set the slot count to 10, meaning the following query will use all the available slots of the queue:
set wlm_query_slot_count to 10;
vacuum;
set wlm_query_slot_count to 1;
After the statement finishes (vacuum will take some time if you have a large database), you reset the session to use the normal slot count of one.
It is wise to increase the query slot count for copy statements when ingesting data into your Redshift cluster. copy works best with maximal parallelism so that Redshift can route all the data into the nodes simultaneously.
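A sketch of that pattern, assuming a queue with a concurrency level of 4; the table name, S3 path, and IAM role ARN are placeholders:

set wlm_query_slot_count to 4; -- claim every slot in the queue
copy sales
from 's3://my-bucket/sales/'
iam_role 'arn:aws:iam::123456789012:role/myRedshiftRole'
format as csv;
set wlm_query_slot_count to 1; -- return to the normal single slot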
Superuser Queue for Housekeeping
You can switch the query group of a query to the superuser queue for housekeeping activities like analyzing or even killing a query.
Changing the queue is done with the set query_group command. The command to gather statistics with the superuser queue is:
set query_group to 'superuser';
analyze;
reset query_group;
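Killing a runaway query follows the same pattern: look up its process ID (for example, in the stv_recents system table) and cancel it from the superuser queue. The pid below is a placeholder:

select pid, user_name, duration, query
from stv_recents
where status = 'Running';

set query_group to 'superuser';
cancel 18764; -- placeholder pid taken from the lookup above
reset query_group;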
Summary
As usual, there is no one universal setup to cover all the query setups of a Redshift cluster; it heavily depends on the user requirements that you are implementing. Working with query queues without a predefined tool is more of an art than a science. You will come to your optimal configuration after some trial and error.
We covered some rules that get you to a great Redshift cluster setup. More often than not, you will set up a separate user-defined queue besides the default one. You will set a concurrency level of at least 2 for a query queue. Finally, you will tune the execution of your more demanding statements to use all the resources available in the query queue.
Published at DZone with permission of Alon Brody.