
Meditations on Writing a Queue, Part 1


In this article, a DZone MVB walks us through his thought process as he teaches himself how to construct queues in the C programming language.


What is a queue besides the line for the little teacups at Disney? In programming, a queue is a very useful data structure that can simplify our programs, especially when it comes to threading. In this series, I’m going to walk you through building a queue in C, talk about how to effectively use a queue, and also compare it to the Queue implementation that ships with Ruby.

What Is a Queue?

While there are different types of queues, the most common is a FIFO (first in first out). The first person in line to ride Space Mountain is the first person who leaves the waiting area (and they also get the best seat).

Queues can have many different operations, but the most basic are push, which adds something to the queue, and pop, which removes the oldest thing from the queue (assume all queues in this post are FIFO).

The next really important thing about a queue is that it’s threadsafe. What do I mean by that? If you’re trying to pop an element off of the queue in two different threads, you won’t pop the same element off twice, and you won’t cause a SEGV.

Because of this thread safety, we can use a queue to safely transport data between a worker (consumer) and a boss (producer) thread. Because pushing and popping are atomic (more thread-safe jargon), we can use them without having to introduce locks at our top level code. This can simplify the design of our program and make it more readable. Readable code is maintainable code, so that’s why I like using queues.

Queues are used all over in programming; a common one is in web servers like NGINX. As requests come in, they wait at the socket until a worker is available to pop one from the queue and begin processing it. In industrial engineering speak, a queue gives us the ability to “accumulate” work. This means that our system can have a burst capacity that is higher than its actual capacity (as long as the average is at or below capacity, see Little’s Law). We even see queues in our day-to-day life such as at the grocery store.

Note: I’m using “capacity” in this paragraph to indicate “throughput.”

Why Build a Queue?

The C programming language does not come with a queue structure out of the box like Ruby, but it provides all the primitives to build one. I’m learning C currently and I’ve been writing threading code, so having a queue structure helps simplify my end programs. If you know C, feel free to critique (but not criticize) my implementation: feedback is how we all get better.

Build a Queue Data Structure

I’ve got all the code online. If you want, you can skip past this explanation and go straight to this commit of the C code. I recommend opening that up in another browser window to follow along as I explain what’s going on. Note that while I might update the code on GitHub, it’s a pain to keep a post in sync, so my explanations will always match an early version of this lib. For a more up-to-date version, you can check out the repo.

First up, we’ll need a way to store our queue. For this, I introduce a struct called tiny_queue_t. In C there are no objects; instead, we can build value objects using structs. Here’s the definition:

typedef struct tiny_queue_t {
  struct tiny_linked_list_t* head;
  struct tiny_linked_list_t* tail;
  pthread_mutex_t mutex;
  pthread_cond_t wakeup;
} tiny_queue_t;

This struct has a pointer to a linked list (which I’ll get to later) called head, and another called tail. It also has a mutex called mutex and a condition variable called wakeup.

If you’ve not written any thread-safe code before, a mutex is like a “talking stick” that only allows the current owner to run. Another example would be a stop light that only allows one car through at a time. Feel free to stop and Google here if you need to. I’ll talk more about the condition variable and mutex later.

Next up we need a linked list implementation.

typedef struct tiny_linked_list_t {
  void *data;
  struct tiny_linked_list_t* next;
} tiny_linked_list_t;

The list is called tiny_linked_list_t. One thing to note is that I prefixed all my calls with tiny_ since C does not have namespaces and I want to be able to use it with other code that has the same name. This struct has a generic pointer to data; this is where the elements in our queue will live. It then has a pointer to another tiny_linked_list_t called next.

In C a “pointer” means a “reference to.” So “a pointer to a linked list” means “a reference to a linked list.”

I wanted my queue to be able to grow to arbitrary length without putting any constraints on the system. I chose to use a linked list to do this. Each element in the list contains some data and a pointer to the next element in the list. When we have access to the first element in the list, then we can iterate through all the elements in the list. This is why our tiny_queue_t data type has a head pointer. It has a tail pointer so we can know when we’re at the end of the list.

Allocate a Queue

Next up we need to be able to allocate a queue instance. Here’s the code:

tiny_queue_t* tiny_queue_create() {
  struct tiny_queue_t* queue = (struct tiny_queue_t*)malloc(sizeof(struct tiny_queue_t));
  queue->head = NULL;
  queue->tail = NULL;

  queue->mutex  = (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER;
  queue->wakeup = (pthread_cond_t)PTHREAD_COND_INITIALIZER;
  return queue;
}

The function definition tells us that tiny_queue_create() takes no arguments and returns a pointer to a tiny_queue_t type. Next up we have to allocate the queue:

struct tiny_queue_t* queue = (struct tiny_queue_t*)malloc(sizeof(struct tiny_queue_t));

Here our variable name is queue; it is a pointer to a tiny_queue_t, and we are telling malloc to reserve the size of a struct tiny_queue_t. This asks the OS for space in the heap to store our variable. Once allocated, our queue is empty, so we set the head and tail to be NULL. We do this so that later we can explicitly check for the condition of having a NULL head.

The syntax, if you’ve not guessed it, is that queue->head means that we want the head variable contained in the queue struct. This is similar to accessing an attribute from a value object in Ruby. We can read and write to values in structs like this:

In C NULL is like nil in Ruby, you can read more about null pointers here.

Next up we have to allocate our mutex and our condition variable. Honestly, these lines are kinda like voodoo to me:

queue->mutex  = (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER;
queue->wakeup = (pthread_cond_t)PTHREAD_COND_INITIALIZER;

I do know that PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER are macros, which I don’t entirely understand yet. I also know that the type casting is required, but I’m not sure why. Either way, just know that we’re setting these variables.

At this point, we can allocate a queue instance:

tiny_queue_t *my_queue = tiny_queue_create();

But we can’t use it yet: we don’t have any way to push data onto the queue or to pop data off of it. It makes sense to look at the push first since we can’t pop what isn’t there.

Push to the Queue

void tiny_queue_push(tiny_queue_t *queue, void *x) {
  pthread_mutex_lock(&queue->mutex);
    struct tiny_linked_list_t* new_node = (struct tiny_linked_list_t*)malloc(sizeof(struct tiny_linked_list_t));
    new_node->data = x;
    new_node->next = NULL;

    if(queue->head == NULL && queue->tail == NULL){
      queue->head = queue->tail = new_node;
    } else {
      queue->tail->next = new_node;
      queue->tail = new_node;
    }
  pthread_mutex_unlock(&queue->mutex);
  pthread_cond_signal(&queue->wakeup);
}
I know this looks intimidating, so I’ll walk through it. Our method signature says that the first argument is a pointer to a tiny_queue_t instance, we access this in the queue variable. The second argument is a pointer to anything we want to store in the queue, this is passed in as the x variable. There is no return from this function.

Before we can push something to our queue we have to make sure that no one else is trying to also write to the queue, or take something off of the queue. This is where our mutexes come in. When we “lock” a mutex, no one else can acquire the mutex. This is similar to how in a discussion circle, only the person holding a “talking stick” can speak. In C you must manually lock and unlock the mutex. The code is indented to help visually identify this behavior.

pthread_mutex_lock(&queue->mutex);
  // ...
pthread_mutex_unlock(&queue->mutex);

Anything between this lock and unlock is “protected”: as long as all other code modifying the same values is also wrapped in a similar lock/unlock, then we are safe.

This is the first time we’ve seen & in our code. The & is kind of like the companion of *. I think in this case that queue->mutex is the actual mutex, but the method signature of pthread_mutex_lock requires a pointer to a mutex. You can get this from:

$ man pthread_mutex_lock
PTHREAD_MUTEX_LOCK(3)    BSD Library Functions Manual    PTHREAD_MUTEX_LOCK(3)

     pthread_mutex_lock -- lock a mutex

     #include <pthread.h>

     int
     pthread_mutex_lock(pthread_mutex_t *mutex);
# ...

On the last line notice it takes a pointer pthread_mutex_t *mutex.

Inside of the actual code, we initialize a new linked list node of type tiny_linked_list_t and we store the pointer we passed in (x) on the node:

struct tiny_linked_list_t* new_node = (struct tiny_linked_list_t*)malloc(sizeof(struct tiny_linked_list_t));
new_node->data = x;
new_node->next = NULL;

At this point we have a node that is not linked to anything, and it points at nothing.

Next up we need to check for the case where our list is currently empty. This is when head and tail are both NULL:

queue->head == NULL && queue->tail == NULL

When this happens, we can set both head and tail to the same element because the list only has one item, the thing we just passed in:

queue->head = queue->tail = new_node;

In this case, new_node->next is still NULL because there is no second node to point at.

If the list was not empty then we must add our new node to the end of the list. We set the next value of our current last item in the list to this new node:

queue->tail->next = new_node;

Then we make this node the new last item in the list:

queue->tail = new_node;

That’s all there is to it. We just pushed a node onto our list. The last thing we do after unlocking our mutex is to signal to our condition variable with pthread_cond_signal. You’ll see what this does in the next section.

Tune in tomorrow for the thrilling conclusion! 



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
