
“Volatile” can be harmful …


What could be wrong with this code?

volatile uint16_t myDelay;

void wait(uint16_t time) {
  myDelay = 0;
  while (myDelay < time) {
    /* wait ... */
  }
}

void TimerInterrupt(void) {
  myDelay++;
}

Obviously, given this post's title and the usage of volatile in the above source, this is about a C/C++ keyword that is very important for every embedded systems programmer.

The Volatile Keyword and the Type Qualifier

volatile is a reserved word ('keyword') in C/C++. It is a "type qualifier", which means it is added as an extra attribute to the type, similar to const. The following is a typical example:

volatile int flags;

This is a variable named "flags" of the type "int", and it is marked as "volatile".

Compiler, don’t be smart!

"Volatile" means the variable might change at any given time. And this is exactly what it says to the compiler:

  1. The variable or memory location can change even without an access from the program code.
  2. Reading or writing to that memory location might cause side effects--it might trigger changes in other memory locations, or it might change the variable itself.

Consequently, this tells the compiler not to be smart and not to optimize accesses to the variable.

Volatile for Peripheral Registers

A typical usage is for peripheral registers, e.g., for a variable that maps to a timer register:

extern volatile uint16_t TIMER_A; /* Timer A counter register */

Another way to use such a timer register is:

#define TIMER_A (*((volatile uint16_t*)0xAA)) /* Timer A counter register at address 0xAA */

This casts the integer 0xAA to a pointer to a volatile 16-bit value and dereferences it.

Using it like this:

timerVal = TIMER_A;

will read the content of the timer register.

Using volatile makes sure that the compiler does not transform this code:

TIMER_A = 0;
while (TIMER_A == 0) {
  /* wait until timer gets incremented */
}
into an endless loop. The compiler knows that the variable might change, so it generates "dumb" instructions: it re-reads the variable on every iteration instead of keeping it in a register or optimizing the access away.

Volatile also ensures that the compiler does not remove a "read only" access to a variable:

TIMER_A; /* read access to register, not optimized because volatile */

Without volatile in the type, the compiler could remove the access.

:!: While using the C language for this kind of access might be fine, I recommend using assembly or an inline assembler when the hardware requires a very specific access method.

Keep in mind that volatile is a qualifier to the type. So

volatile int *ptr;

is a pointer to a volatile int, while

int *volatile ptr;

is a volatile pointer to a (normal) int.

Volatile for Local Variables

volatile can be useful for local variables or parameters, too:

void foo(volatile int param) {
  ...
}

Or, as in this function:

void bar(int flags) {
  volatile int tmp;
  ...
}

As outlined above, the compiler will not optimize this code. There are two use cases for this:

  1. Making the code easier to debug: if you are not sure what the compiler does, using volatile makes sure that the compiler generates simple code, so it might be easier to follow the code sequence and to debug a (user code) problem. Of course, you might want to remove volatile afterwards.
  2. As a workaround for a compiler problem: compilers might optimize things so aggressively that the code is wrong. :-( In that case, volatile should defeat the optimization and can be used as a workaround. :-)

Volatile and Interrupts

Because volatile informs the compiler that the variable can be changed, it is a perfect way to mark shared variables between interrupt functions and the main program:

#include <stdbool.h>

static volatile bool dataSentFlag = false;

void myTxInterrupt(void) {
  dataSentFlag = true;
}

void main(void) {
  dataSentFlag = false;
  TxData(); /* send data, will raise myTxInterrupt() */
  while (!dataSentFlag) {
    /* wait until interrupt sets flag */
  }
}

While usage of volatile is perfect here, there is a general misconception about volatile: it does NOT :!: guarantee atomic or re-entrant access to the variable!

The above example is only free of problems if the microcontroller reads and writes the shared variable in an atomic way.

:idea: "Atomic" means that the access to the variable is in "one piece", and cannot be interrupted or performed in multiple steps that can be interrupted.

This might not be the case for the following example that counts the number of transmitted bytes:

static volatile uint32_t nofTxBytes = 0;

void myTxInterrupt(void) {
  nofTxBytes++;
}

void main(void) {
  nofTxBytes = 0;
  TxData(); /* send data, will raise myTxInterrupt() */
  while (nofTxBytes < nofDataSent) { /* compare against how much we sent */
    /* wait until transaction is done */
  }
}

What happens now depends on the microcontroller and bus architecture: if the 32-bit nofTxBytes is wider than the data bus, reading or incrementing it takes multiple bus accesses, so the access is not atomic anymore, likely resulting in wrong run-time behavior. To avoid race conditions, access to the shared variable needs to be protected with a critical section.

:idea: Critical sections can be easily implemented by disabling and re-enabling interrupts. Processor Expert generates the EnterCritical() and ExitCritical() macros, which disable and re-enable interrupts. These macros have the advantage that the interrupt state gets preserved. Keep in mind that the EnterCritical() and ExitCritical() macros cannot be nested!

Serializing Memory Access

Things can get even more complicated. Sometimes it is necessary to understand the underlying bus accesses of the microcontroller. I was recently reading an article about the Serialization of Memory Operations and Events: just doing a write in my code does not mean that the write is effective immediately. Because of the way the bus works, and because of caching and wait states, the write can happen much later than I would expect. So if I write to a register and, based on that write, need to read something else (a write-to-read dependency), I might not get the result I expect because of the bus cycles. Instead, I need to read the register back to enforce the write ("serialize the memory access"):

  1. Write to the register
  2. Immediately read the register again to serialize the memory access

Without doing so, subtle timing problems might occur.

ARM Memory Barrier Instructions

Below is FreeRTOS source code that requests a context switch from within an interrupt service routine:

void vPortYieldFromISR(void) {
  /* Set a PendSV to request a context switch. */
  portNVIC_INT_CTRL_REG = portNVIC_PENDSVSET_BIT;
  /* Barriers are normally not required but do ensure the code is completely
     within the specified behavior for the architecture. */
  __asm volatile("dsb");
  __asm volatile("isb");
}

Notice the two assembly instructions at the end: DSB (data synchronization barrier) and ISB (instruction synchronization barrier). They ensure that data and instructions get serialized. See this ARM Infocenter article for details.

Critical Section

Coming back to the example at the beginning, one possible solution is this:

volatile uint16_t myDelay;

void wait(uint16_t time) {
  uint16_t tmp;

  myDelay = 0;
  do {
    EnterCritical();
    tmp = myDelay;
    ExitCritical();
  } while (tmp < time);
}

void TimerInterrupt(void) {
  myDelay++;
}

It assumes that:

  1. EnterCritical() and ExitCritical() build a critical section, e.g., by disabling and re-enabling interrupts.
  2. TimerInterrupt() itself is not interrupted (has the highest priority or no nested interrupts).
  3. The interrupt is not so fast that it would immediately overflow the counter.


Summary

The volatile keyword tells the compiler not to be smart about memory accesses. Basically, it prevents compiler optimizations and is used to mark peripheral registers that have side effects. On modern microprocessors, volatile alone is not enough to guarantee serialization or re-entrant access to memory, and relying only on volatile can be harmful: additional measures like read-after-write, memory barriers, or disabling interrupts are necessary.

Happy Volatiling :-)



Published at DZone with permission of Erich Styger, DZone MVB. See the original article here.
