Application Memory Management in .NET Framework
In this article, we focus on how .NET framework implements resource management, specifically the internals of memory management.
Application memory management is about providing the memory a program's objects and data structures need from the limited resources available, and recycling that memory for reuse once it is no longer required.
Programmers nowadays rarely write code to manage application memory themselves, as they can easily leverage the built-in memory/resource management mechanisms that come with software development frameworks. However, it is useful to know the internals of resource management in order to write more efficient and advanced code.
Before we get into the internals of .NET resource management, let’s briefly acknowledge the importance of memory management.
Why Resource Management Is Important
There are several issues that we are trying to solve by correctly managing resources. Some of them are:
Memory leaks that lead to out-of-memory conditions. The programmer forgets to release allocated memory/resources. The memory held by the application grows until it eventually runs out, and the application crashes.
Memory corruption. An application tries to access an object that was already freed, through dangling pointers that remain because the programmer did not reset the reference variables.
Fragmentation. Memory is not released in time, and the application ends up allocating memory all over the heap (memory address space), leaving unusable gaps between live objects.
Where Memory Gets Allocated
Primarily, memory gets allocated in an application either on the call stack or on the application heap.
Stack: This is a special region for temporary variables created by each function. The stack grows and shrinks as functions push and pop local variables. No memory management is required; variables are allocated and freed automatically. The stack has a size limit.
Heap: Free-floating region(s) of memory. There are multiple types of heaps in an application, such as the code heap, the small object heap (SOH), the large object heap (LOH), and the process heap. Memory in the heap needs to be managed.
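As a quick illustration of the distinction (a sketch; the class and names are purely illustrative), value-type locals live in a method's stack frame, while objects live on the managed heap:

```csharp
using System;
using System.Collections.Generic;

class StackVsHeap
{
    // Illustrative only: the value-type local 'x' is stored in this method's
    // stack frame, while the List<int> object is allocated on the managed
    // heap and reclaimed later by the GC.
    public static int Demo()
    {
        int x = 42;                 // stack: freed automatically when the frame pops
        var list = new List<int>(); // heap: lifetime managed by the GC
        list.Add(x);
        return list[0];
    }

    static void Main() => Console.WriteLine(Demo());
}
```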
The Managed Heap
This is the heap that is managed by the .NET Framework; more specifically, it refers to the small object heap managed by the CLR. When the process initializes, the CLR reserves a contiguous region of address space for the managed heap.
The programmer never ‘deletes’ objects from the managed heap. The .NET garbage collector (GC) is solely responsible for freeing memory.
Steps in .NET Memory Management
Allocate: In the .NET Framework, MSIL (IL) uses the ‘newobj’ instruction; the ‘new’ keyword in C# is used to allocate objects.
Initialize: Initializes the memory and sets the initial state; typically, the type constructor is responsible for this.
Use: The application uses the resource as required.
Tear Down: Tears down the state.
Free: Frees up the memory. The GC is solely responsible for freeing managed resources.
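The five steps above can be sketched in C#; here, MemoryStream is just a convenient stand-in for any resource with state to tear down:

```csharp
using System;
using System.IO;

class LifecycleDemo
{
    // Allocate + Initialize: 'new' (IL 'newobj') reserves heap memory and runs
    // the constructor. Use: the application works with the resource.
    // Tear Down: Dispose (called by 'using') tears down the state deterministically.
    // Free: the GC reclaims the managed memory later, on its own schedule.
    public static long WriteOneByte()
    {
        using (var stream = new MemoryStream()) // Allocate + Initialize
        {
            stream.WriteByte(1);                // Use
            return stream.Length;
        }                                       // Tear Down: Dispose runs here
    }

    static void Main() => Console.WriteLine(WriteOneByte()); // prints 1
}
```

Note that Tear Down (deterministic, via Dispose) and Free (non-deterministic, via the GC) are separate steps: disposing a resource does not immediately release its managed memory.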
The .NET garbage collection is a major part of .NET resource management. Let’s look at how the internals of the GC work.
Memory Allocation in the Managed Heap
Memory gets allocated contiguously in the managed heap. This gives a performance boost: objects can be accessed faster because they sit next to each other, and the working set is smaller and more likely to reside in cache. The managed heap maintains a “NextObjPtr” pointer (initially set to the base address of the managed heap); every time memory is allocated, NextObjPtr is advanced past the newly allocated object.
Objects A, B, C, D, E, and F are allocated one after another in the managed heap.
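A minimal sketch of this bump-pointer allocation scheme (the class and field names here are illustrative; the real CLR allocator is far more involved):

```csharp
using System;

// Simulates contiguous allocation: each allocation simply advances a pointer.
class BumpAllocator
{
    private readonly byte[] _heap;
    private int _nextObjPtr;   // plays the role of the managed heap's NextObjPtr

    public BumpAllocator(int size) { _heap = new byte[size]; }

    // Returns the start offset of the new object, or -1 when the heap is
    // exhausted (in the real CLR, exhaustion would trigger a garbage collection).
    public int Allocate(int sizeInBytes)
    {
        if (_nextObjPtr + sizeInBytes > _heap.Length) return -1;
        int addr = _nextObjPtr;
        _nextObjPtr += sizeInBytes; // bump the pointer past the new object
        return addr;
    }

    static void Main()
    {
        var heap = new BumpAllocator(64);
        Console.WriteLine(heap.Allocate(16)); // 0:  object A at the base
        Console.WriteLine(heap.Allocate(16)); // 16: object B, right after A
        Console.WriteLine(heap.Allocate(64)); // -1: no room left
    }
}
```

This is why allocation on the managed heap is so cheap: it is usually just a pointer increment, unlike a traditional heap allocator that must search free lists.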
Once the application is finished using the memory, the GC starts the teardown of that memory. The GC is based on object reference tracking, so when no references to a particular object exist any longer, the GC teardown process kicks in.
Once NULL is assigned to the variables that hold an object's references, the object becomes an orphan and is deleted by the .NET GC.
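We can observe this with a WeakReference, which tracks an object without keeping it alive (a sketch; the exact moment of collection is up to the runtime, so the comments say “typically”):

```csharp
using System;

class OrphanDemo
{
    // Returns true if the orphaned object was reclaimed by the forced GC.
    public static bool IsCollectedAfterNull()
    {
        var obj = new object();
        var weak = new WeakReference(obj); // observes obj without rooting it
        obj = null;                        // obj is now an orphan: no references left
        GC.Collect();                      // force a full collection
        return !weak.IsAlive;              // typically true: the orphan was deleted
    }

    static void Main() => Console.WriteLine(IsCollectedAfterNull());
}
```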
Compacting (Defragmentation) of Memory
Once the dead objects are deleted, the remaining objects are moved together to form a contiguous block; this is called compacting, or defragmentation.
Generational Garbage Collection
The managed heap is segmented into multiple generations: Generation 0, 1, and 2. This is solely for performance reasons. The GC algorithm is based on the idea that younger objects die soon, and that objects which survive longer tend to live even longer. Generation 0 is for newly created objects. Objects that survive a GC scan are promoted to Generation 1, and so on.
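The promotion behavior can be observed directly with GC.GetGeneration (a sketch; exact promotion timing can vary with the runtime and GC mode, hence “typically” in the comment):

```csharp
using System;

class GenerationDemo
{
    // Records the object's generation before and after forced collections.
    public static int[] GenerationsAcrossCollections()
    {
        var obj = new object();
        int g0 = GC.GetGeneration(obj); // newly allocated: generation 0
        GC.Collect();                   // obj survives and is promoted
        int g1 = GC.GetGeneration(obj);
        GC.Collect();                   // survives again, promoted further
        int g2 = GC.GetGeneration(obj);
        GC.KeepAlive(obj);              // keep obj rooted through the collections
        return new[] { g0, g1, g2 };    // typically [0, 1, 2]
    }

    static void Main() =>
        Console.WriteLine(string.Join(", ", GenerationsAcrossCollections()));
}
```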
The GC algorithm in .NET uses reference tracking instead of the (strong) reference counting that systems like COM used to manage object lifetime, because reference counting does not work with circular references.
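Because the GC traces reachability from roots rather than counting references, two objects that reference each other are still reclaimed once nothing reachable refers to them (a sketch; Node is an illustrative type, and the comment hedges with “typically” since collection timing is up to the runtime):

```csharp
using System;

class Node { public Node Next; }

class CycleDemo
{
    // Builds A <-> B and returns a weak reference so we can watch the cycle die.
    static WeakReference MakeCycle()
    {
        var a = new Node();
        var b = new Node();
        a.Next = b;
        b.Next = a;                    // circular: a refcount would never reach zero
        return new WeakReference(a);   // does not keep the cycle alive
    }

    public static bool CycleIsCollected()
    {
        var weak = MakeCycle();        // after this call, nothing roots a or b
        GC.Collect();                  // tracing finds no path from any root
        return !weak.IsAlive;          // typically true: the whole cycle is reclaimed
    }

    static void Main() => Console.WriteLine(CycleIsCollected());
}
```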
Steps in the GC process are as follows:
Suspend all application threads so they cannot access objects or change object state while the CLR examines the heap.
Walk through all the objects in the heap, setting a bit in each object's sync block index field to 0 (0 means the GC can delete the object).
Then, the CLR looks at all active roots (references) to see which objects they refer to. This is what makes the CLR’s GC a reference tracking GC.
If there are any active roots (references) for an object, the GC marks the bit in its sync block index field to 1. If the bit is already 1, the object is skipped (this is what distinguishes reference tracking from reference counting: circular references pose no problem).
Now, the GC deletes all objects whose sync block index bit is 0. Then, the GC starts the compacting phase, which moves the surviving objects together to form a contiguous block.
Now shift the pointers: the CLR subtracts from each root (reference) the number of bytes by which the object it referred to was moved down in memory. This ensures that every root (reference) refers to the same object it did before compacting.
After the heap memory is compacted, the managed heap’s NextObjPtr pointer is set to point to a location just after the last surviving object.
Resume all threads.
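The mark-and-compact cycle above can be sketched as a toy simulation (SimObject and its Marked flag stand in for real objects and the sync block index bit; the real CLR implementation is far more involved):

```csharp
using System;
using System.Collections.Generic;

// Toy object: Marked plays the role of the bit in the sync block index field.
class SimObject
{
    public string Name;
    public bool Marked;
    public List<SimObject> Refs = new List<SimObject>();
}

class MarkCompactSketch
{
    static void Mark(SimObject obj)
    {
        if (obj.Marked) return;            // bit already 1: skip (cycles are safe)
        obj.Marked = true;                 // bit set to 1: object is reachable
        foreach (var r in obj.Refs) Mark(r);
    }

    // Returns the survivors, packed into a new contiguous list
    // (standing in for the compacting phase).
    public static List<SimObject> Collect(List<SimObject> heap, List<SimObject> roots)
    {
        foreach (var o in heap) o.Marked = false;  // clear all bits to 0
        foreach (var root in roots) Mark(root);    // trace from the active roots
        var compacted = new List<SimObject>();
        foreach (var o in heap)
            if (o.Marked) compacted.Add(o);        // unmarked objects are "deleted"
        return compacted;
    }

    static void Main()
    {
        var a = new SimObject { Name = "A" };
        var b = new SimObject { Name = "B" };
        var c = new SimObject { Name = "C" };
        a.Refs.Add(b);
        b.Refs.Add(a);                             // A <-> B cycle, but unrooted
        var heap = new List<SimObject> { a, b, c };
        var roots = new List<SimObject> { c };     // only C is reachable
        Console.WriteLine(Collect(heap, roots).Count); // 1: A and B are reclaimed
    }
}
```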
Thank you very much for taking the time to read; your feedback is greatly appreciated.
Opinions expressed by DZone contributors are their own.