“Volatile” can be harmful …
What could be wrong with this code?
volatile uint16_t mydelay;

void wait(uint16_t time) {
  mydelay = 0;
  while (mydelay < time) {
    /* wait ... */
  }
}

void timerinterrupt(void) {
  mydelay++;
}
Obviously, given this post's title and the usage of volatile in the above source, this is about a C/C++ keyword that is very important for every embedded systems programmer.
The volatile Keyword and the Type Qualifier
volatile is a reserved word (keyword) in C/C++. It is a "type qualifier", which means that it is added as an extra attribute or qualifier to the type, similar to const. The following is a typical example:
volatile int flags;
This is a variable named "flags" of the type "int", and it is marked as "volatile".
Compiler, Don't Be Smart!
"volatile" means the variable might change at any given time. and this is exactly what it says to the compiler:
- the variable or memory location can change even without an access from the program code.
- reading or writing to that memory location might cause side effects--it might trigger changes in other memory locations, or it might change the variable itself.
consequently, this tells the compiler not to be smart and to not optimize accesses to the variable.
Volatile for Peripheral Registers
A typical usage of this is for volatile peripheral registers, e.g., for a variable that maps to a timer register:
extern volatile uint16_t timer_a; /* timer a counter register */
Another way to use such a timer register is:
#define timer_a (*((volatile uint16_t*)0xaa)) /* timer a counter register at address 0xaa */
This casts the address 0xaa to a pointer to a volatile 16-bit value.
Using it like this:
timerval = timer_a;
will read the content of the timer register.
Using volatile makes sure that the compiler does not transform this code:
timer_a = 0;
while (timer_a == 0) {
  /* wait until timer gets incremented */
}
into an endless loop. The compiler knows that the variable might change, so it generates "dumb" instructions: it does not keep the variable in a register or optimize away the access.
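To illustrate the point, here is a sketch of what an optimizer is allowed to do without volatile (not the output of any particular compiler): the compiler may read timer_a once, cache the value, and reuse it, so the loop above effectively degenerates into:

uint16_t cached = timer_a; /* single read; the value lives in a register from now on */
while (cached == 0) {
  /* never exits: the cached value is never re-read from the hardware */
}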
volatile also has the effect that the compiler will not optimize away a "read only" access to a variable:
timer_a; /* read access to register, not optimized because volatile */
Without volatile in the type, the compiler could remove the access.
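This matters for registers with read side effects. A hedged example (the register name UART_STATUS and its address are made up for illustration; many UARTs clear error or status flags when the status register is read):

#define UART_STATUS (*((volatile uint8_t*)0x40)) /* hypothetical status register */

void clear_status(void) {
  (void)UART_STATUS; /* dummy read: kept by the compiler because the access is volatile */
}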
While using the C language for this kind of access might be fine, I recommend using assembly or inline assembler when the hardware requires a very specific access method.
Keep in mind that volatile is a qualifier to the type. So
volatile int *ptr;
is a pointer to a volatile int, while
int *volatile ptr;
is a volatile pointer to a (normal) int.
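Both qualifiers can also be combined; a small sketch for reference:

volatile int *p1;          /* pointer to volatile int: *p1 is re-read on every access */
int *volatile p2;          /* volatile pointer to int: p2 itself is re-read on every access */
volatile int *volatile p3; /* both the pointer and the pointed-to int are volatile */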
Volatile for Local Variables
volatile can be useful for local variables or parameters, too:
void foo(volatile int param) { ... }
Or, as in this function:
void bar(int flags) { volatile int tmp; ... }
As outlined above, the compiler will not optimize this code. There are two use cases for this:
- Making the code "easier to debug": if you are not sure about what the compiler does, using volatile will make sure that the compiler generates simple code, so it might be easier to follow the code sequence and to debug a (user code) problem. Of course, you might want to remove volatile afterwards. A minimal sketch follows below.
- As a workaround for a compiler problem: compilers might optimize things so much that the code is wrong. In that case, volatile should defeat the optimization and can be used as a workaround.
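A minimal sketch of the debugging use case (the function and variable names are made up; the point is only that the intermediate value stays visible in the debugger):

#include <stdint.h>

int32_t filter(int32_t sample) {
  volatile int32_t intermediate; /* not optimized away: can be watched or inspected in the debugger */

  intermediate = sample * 3;
  intermediate += 5;
  return intermediate;
}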
Volatile and Interrupts
Because volatile informs the compiler that the variable can be changed, it is a perfect way to mark variables shared between interrupt functions and the main program:
static volatile bool datasentflag = false;

void mytxinterrupt(void) {
  ...
  datasentflag = true;
  ...
}

void main(void) {
  ...
  datasentflag = false;
  txdata(); /* send data, will raise mytxinterrupt() */
  while(!datasentflag) {
    /* wait until interrupt sets flag */
  }
  ...
}
While usage of volatile is perfect here, there is a general misconception about volatile: it does not guarantee re-entrant access to the variable!
The above example does not expose a problem as long as the microcontroller reads and writes the shared variable in an atomic way. "Atomic" means that the access to the variable happens in "one piece": it cannot be interrupted and is not performed in multiple steps that could be interrupted.
This might not be the case for the following example, which counts the number of transmitted bytes:
static volatile uint32_t noftxbytes = 0;

void mytxinterrupt(void) {
  ...
  noftxbytes++;
  ...
}

void main(void) {
  ...
  noftxbytes = 0;
  txdata(); /* send data, will raise mytxinterrupt() */
  while(noftxbytes < nofdatasent) { /* compare against how much we sent */
    /* wait until transaction is done */
  }
  ...
}
It now all depends on the microcontroller and bus architecture what happens: the access and usage of noftxbytes is not atomic anymore, which will likely result in wrong run-time behavior. To avoid race conditions, access to the shared variable needs to be protected with a critical section.
Critical sections can easily be implemented by disabling and re-enabling interrupts. Processor Expert generates the entercritical() and exitcritical() macros, which disable and re-enable interrupts. These macros have the advantage that the interrupt state gets preserved. Keep in mind that the entercritical() and exitcritical() macros cannot be nested!
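A minimal sketch of how such a critical section could be implemented on a Cortex-M with GCC-style inline assembly (this is not the actual Processor Expert implementation, only the idea of saving, disabling, and restoring the interrupt state; the function names are made up):

#include <stdint.h>

static uint32_t SaveAndDisableInterrupts(void) {
  uint32_t primask;

  __asm volatile("mrs %0, primask" : "=r"(primask));  /* save current interrupt mask */
  __asm volatile("cpsid i" ::: "memory");             /* disable interrupts */
  return primask;
}

static void RestoreInterrupts(uint32_t primask) {
  __asm volatile("msr primask, %0" : : "r"(primask) : "memory"); /* restore previous interrupt state */
}

/* usage: */
uint32_t state = SaveAndDisableInterrupts();
noftxbytes = 0; /* protected access to the shared variable */
RestoreInterrupts(state);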
Serializing Memory Access
Things can get even more complicated. Sometimes it is necessary to understand the underlying bus accesses of the microcontroller. An article I read recently covers the serialization of memory operations and events: just doing a write in my code does not mean that the write is effective immediately! Because of the way the bus works, and because of caching and wait states, the write can happen much later than I would expect. So if I have a register write, and based on that write I need to read something else (a write-to-read dependency), I might not get the result I expect because of the bus cycles. Instead, I need to do a read of the register to enforce the write ("serialize the memory access"):
- Write to the register.
- Immediately read the register again to serialize the memory access.
Without doing so, subtle timing problems might occur.
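A minimal sketch of the write-then-read-back pattern (the register name and address are made up for illustration):

#include <stdint.h>

#define TIMER_CTRL (*((volatile uint16_t*)0xb0)) /* hypothetical control register */

void start_timer(void) {
  TIMER_CTRL = 0x0001;  /* write to the register */
  (void)TIMER_CTRL;     /* read it back immediately to serialize the memory access */
}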
ARM Memory Barrier Instructions
Below is FreeRTOS source code that requests a context switch from within an interrupt service routine:
void vPortYieldFromISR(void) {
  /* Set a PendSV to request a context switch. */
  *(portNVIC_INT_CTRL) = portNVIC_PENDSVSET_BIT;

  /* Barriers are normally not required but do ensure the code is completely
     within the specified behavior for the architecture. */
  __asm volatile("dsb");
  __asm volatile("isb");
}
Notice the two assembly instructions at the end: DSB (Data Synchronization Barrier) and ISB (Instruction Synchronization Barrier). They ensure that data accesses and instruction fetches get serialized. See this ARM Infocenter article for details.
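With CMSIS available, the same barriers can be written with intrinsics instead of inline assembly; a sketch (assuming the CMSIS core header is pulled in through your device header, here MKL25Z4.h as an example):

#include "MKL25Z4.h" /* example device header; provides the CMSIS core functions */

void flush_barriers(void) { /* hypothetical helper */
  __DSB(); /* data synchronization barrier: previous memory accesses complete first */
  __ISB(); /* instruction synchronization barrier: flushes the pipeline */
}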
Critical Section
Coming back to the example at the beginning, one possible solution is this:
volatile uint16_t mydelay;

void wait(uint16_t time) {
  uint16_t tmp;

  entercritical();
  mydelay = 0;
  exitcritical();
  do {
    entercritical();
    tmp = mydelay; /* copy the shared counter inside the critical section */
    exitcritical();
  } while(tmp < time);
}

void timerinterrupt(void) {
  mydelay++;
}
It assumes that:
- entercritical() and exitcritical() build a critical section, e.g., by disabling and re-enabling interrupts.
- timerinterrupt() itself is not interrupted (it has the highest priority, or there are no nested interrupts).
- The interrupt does not fire so fast that it would immediately overflow the counter.
Summary
The volatile keyword tells the compiler not to be smart about a memory access. Basically, it suppresses compiler optimization of the access and is used to mark peripheral registers that have side effects. On modern microprocessors, volatile alone is not enough to guarantee serialization or to ensure re-entrant access to memory. Relying on volatile alone can be harmful: additional measures like read-after-write, memory barriers, or disabling interrupts are necessary.
Happy Volatiling!
Published at DZone with permission of Erich Styger, DZone MVB. See the original article here.