Pinned Object Heap in .NET 5

In the upcoming .NET 5 a very interesting change is coming to the GC – a dedicated Pinned Object Heap, a brand new type of managed heap segment (next to the Small and Large Object Heaps we have had so far). Pinning has its own costs, because it introduces fragmentation (and in general complicates object compaction a lot). We have developed some good practices around it, like “pin only for…:

  • a very short time”, so the GC will not bother – this reduces the probability that a GC happens while many objects are pinned. That’s the scenario for the fixed keyword, which is in fact only a very lightweight way of flagging a particular local variable as a pinned reference. As long as no GC happens, there is no additional overhead.
  • a very long time”, so the GC will promote those objects to generation 2 – as gen2 GCs should not be that common, the impact will also be minimized. That’s the scenario for a GCHandle of type Pinned, which carries a slightly bigger overhead because we need to allocate/free the handle. Both approaches are sketched in the snippet below.

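To make the two scenarios concrete, here is a minimal sketch of both classic pinning approaches (illustrative only – it assumes an unsafe compilation context and a buffer being handed to native code):

    using System;
    using System.Runtime.InteropServices;

    byte[] buffer = new byte[1024];

    // 1. Short-lived pinning with the fixed keyword – essentially free
    //    as long as no GC happens while we are inside the fixed block.
    unsafe
    {
        fixed (byte* p = buffer)
        {
            // use p, e.g. pass it to native code
        }
    }

    // 2. Long-lived pinning with a pinned GCHandle – the buffer will eventually
    //    be promoted to gen2, where compacting collections are rare.
    GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        IntPtr address = handle.AddrOfPinnedObject();
        // use the address for as long as the handle stays allocated
    }
    finally
    {
        handle.Free();
    }
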
However, even when applied, those rules will produce some fragmentation, depending on how much you pin, for how long, what the resulting layout of the pinned objects in memory is, and many other intermittent conditions.

So, in the end, it would be perfect just to get rid of pinned objects from the SOH/LOH and move them to a different place. By design, this separate place would simply be ignored by the GC when considering heap compaction, so we would get pinning behaviour out of the box.

However, while the concept is simple, it is not straightforward to implement with the current allocation API in C#. Currently, pinning is a “two-phase process”:

  1. we allocate an object and store the resulting reference somewhere
  2. we pin the object with the help of the fixed keyword or a GCHandle

In other words, the allocator knows nothing about the fact that the object being created will be pinned in the future.

So that’s why, together with the Pinned Object Heap introduced in .NET 5, a new allocation API has been provided (for the very first time since .NET 1.0?). Instead of using the new operator, we are able to allocate arrays with the help of two methods:

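Both live on the System.GC class and take a pinned flag (signatures as exposed by .NET 5):

    public static T[] AllocateArray<T>(int length, bool pinned = false);
    public static T[] AllocateUninitializedArray<T>(int length, bool pinned = false);
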
As we see, the new allocation API allows us to specify that we want to have the object pinned, which lets the runtime allocate it directly in the POH instead of the SOH/LOH. The question arises: why only arrays?! As Microsoft says:

“Allowing to allocate a non array object is certainly possible but at this time we do not see much value in it”

That’s mostly because of the scenarios where pinning is really used – pinning buffers for various purposes. And buffers are arrays. In other words, while technically the POH could contain any object, currently it only supports arrays due to the provided allocation API. You can read the detailed design of the Pinned Object Heap in the runtime documentation.

It is important to remember that allocation in the Pinned Object Heap is a little slower than regular SOH allocation. It is not based on per-thread allocation contexts, but on a synchronized free list of gaps (like in the LOH). Thus, when we allocate in the POH, an appropriate free space must be found in one of the POH segments. That’s why we should treat the POH as a replacement for long-living, GCHandle-based pinning rather than for short-living, fixed-based pinning.

There is yet another very important limitation of the Pinned Object Heap. The second design decision was to limit it to arrays of types that are not reference types and do not contain references (“blittable” types). Again, it is not a technical limitation but a decision derived from the typical pinning use cases – we mostly pin buffers of unmanaged data like primitive types (int, byte). This decision has an additional performance benefit, because the GC may skip the POH while marking reachable objects. In other words, as there are no outgoing references, the GC does not need to scan POH-allocated objects for references to other objects.

The corresponding check is made at runtime because it depends on the pinned flag:

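Roughly, GC.AllocateArray<T> does something like the following – a simplified sketch based on the .NET 5 runtime sources, with the internal helper names being approximations:

    public static T[] AllocateArray<T>(int length, bool pinned = false)
    {
        GC_ALLOC_FLAGS flags = GC_ALLOC_FLAGS.GC_ALLOC_NO_FLAGS;
        if (pinned)
        {
            // only a pinned allocation triggers the "no references" check
            if (RuntimeHelpers.IsReferenceOrContainsReferences<T>())
                ThrowHelper.ThrowInvalidTypeWithPointersNotSupported(typeof(T));
            flags = GC_ALLOC_FLAGS.GC_ALLOC_PINNED_OBJECT_HEAP;
        }
        return Unsafe.As<T[]>(AllocateNewArray(typeof(T[]).TypeHandle.Value, length, flags));
    }
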
Thus, we will be able to compile the code as follows:

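An illustrative snippet (any reference element type behaves the same way):

    // compiles fine, although a string[] can never end up on the POH
    string[] array = GC.AllocateArray<string>(100, pinned: true);
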
But it will throw an exception during execution:

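Something along these lines (the exact message may differ between runtime builds):

    System.ArgumentException: Cannot use type 'System.String'.
    Only value types without pointers or references are supported.
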
You may wonder why a runtime check is preferred over the unmanaged generic constraint. Using that constraint, obviously, would require creating a dedicated method only for pinning:

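Such a method might look roughly like this (a hypothetical signature, shown only for illustration – it is not an actual .NET API):

    // hypothetical: the unmanaged constraint would enforce the "no references"
    // rule at compile time instead of at runtime
    public static T[] AllocatePinnedArray<T>(int length) where T : unmanaged;
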
while the .NET team is planning to extend AllocateArray with some additional parameters in the future (like specifying in which generation we want to allocate), or at least does not want to limit itself by providing such specialized methods.

There is also an ongoing discussion about whether to add a GC.IsPinnedHeapObject(obj) API for checking whether a given object has been allocated in the POH. It is not decided yet, as there is some overhead to such a check that would probably outweigh the benefits in typical scenarios.

As a last word, we can think of various usage scenarios for arrays allocated with the help of this API. Typically, we will still need to take their address, so the fixed keyword may still be necessary:

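A minimal sketch (assuming the buffer is handed to native code, hence the raw pointer and the unsafe context):

    // the array is allocated directly on the Pinned Object Heap
    byte[] buffer = GC.AllocateUninitializedArray<byte>(4096, pinned: true);

    unsafe
    {
        // fixed is still required by the language to obtain a raw pointer
        fixed (byte* ptr = buffer)
        {
            // pass ptr to native code, a socket API, etc.
        }
    }
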
But now this is in fact a kind of no-op, because it applies to an object already living on the POH, so there are no bad consequences for the GC.

Summary

As we can see, the very first usages are showing up, like the very recent PR added to Kestrel by Ben Adams. And knowing all the facts presented in this post, in the next one we will write our own implementation of ArrayPool based on arrays allocated in the Pinned Object Heap!

7 comments

  1. What happens if you use Array.Resize on the pinned array? If it is enlarged and moved to a new spot, is this new array also pinned in the POH?

      1. It does not allocate on the POH – based on the code decompiled in ILSpy, the resized array is created with a plain new T[newSize]:

        public static void Resize<T>([Nullable(new byte[] { 2, 1 })][NotNull] ref T[] array, int newSize)
        {
            if (newSize < 0)
            {
                ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.newSize, ExceptionResource.ArgumentOutOfRange_NeedNonNegNum);
            }
            T[] array2 = array;
            if (array2 == null)
            {
                array = new T[newSize];
            }
            else if (array2.Length != newSize)
            {
                T[] array3 = new T[newSize];
                Buffer.Memmove(ref MemoryMarshal.GetArrayDataReference(array3), ref MemoryMarshal.GetArrayDataReference(array2), (UIntPtr)(uint)Math.Min(newSize, array2.Length));
                array = array3;
            }
        }
