
What do you know about .NET memory management? Should you care at all? After all, it is all managed and automatic and the Garbage Collector takes care of everything. That’s a great achievement of managed runtimes in general – they provide this super nice abstraction of memory that is just there, and we do not need to think about it at all, right?

Well, yes and no. For sure you may develop business applications for years, earning money, and not touch the topic of .NET memory management at all. I’ve been there too. But there is always more. There is always some edge case, some scalability problem, a memory leak, or just inefficient resource utilization (wasting money💸). I’m not here today to convince you of that, though. There will be time for this 🙂

Today I just wanted to invite you to my new initiative – a small quiz about .NET memory management. 32 questions that may, or may not, shed some light on how well you know this topic:

👉 Take a quiz 👈

Share your result and discuss the questions! And as you will see at the end, you can subscribe to the newsletter where I’ll be providing explanations of the correct answers for every question.

And yes, this is the beginning of a bigger initiative https://dotnetmemoryexpert.com – more about it soon. In general, more and more GC- and memory-related content is coming!😍

I’ve decided to make a series of at least 8 free weekly webinars about the in-depth implementation details of the .NET GC and… I’m super happy with it! Why the idea? Many of my other activities are about more practical “.NET memory management”, like my book or the workshops/trainings/consultancy I give. But during all these practice-oriented events there is never enough time to explain in detail how the .NET GC is implemented. Obviously, I always explain some concepts and algorithms, to the level that helps in understanding the whole picture. But not with the level of detail that I am satisfied with.

Hence the idea – make separate content that will be just as deep as I like 🙂 So, I will cover details on the level of bits, bytes and source code, not only on the level of the overall algorithm description.

The first episode was yesterday, feel invited to watch:

Continue reading


We are all using RAM; probably some DDR4 is just sitting there in your PC and serving memory at an outstanding speed, even while you are reading this sentence. But have you ever wondered how DRAM works internally?

If so, please find my new poster about DRAM anatomy. It provides a slightly oversimplified view, for illustrative purposes. But it explains DRAM internals “good enough” for any regular, mortal developer like you and me. This is a great basis for understanding why linear memory access is so much preferred over random access, what cryptic memory access timings like 8-8-8-24 mean, and for explaining bugs like Rowhammer. Or just hang it on the wall as a nerdy decoration 🙂

The poster is available for free in a printable version from https://goodies.dotnetos.org.

Here are some additional materials that you can follow while interpreting/digging into this poster:

Happy printing!

Imagine we have a simple class, a wrapper around some array of structs (better data locality etc.):
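For illustration, something along these lines (SomeStruct and all the names below are assumed, not taken from the original snippet):

```csharp
public struct SomeStruct
{
    public long X;
    public long Y; // 16 bytes in total, just for illustration
}

public class StructArrayWrapper
{
    private readonly SomeStruct[] _array = new SomeStruct[16];
}
```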

Now, I would like to have efficient access to every element. Obviously, a trivial indexer would be inefficient here, as it would return a copy of the given array element (a struct):
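A sketch of such a trivial indexer (illustrative):

```csharp
public SomeStruct this[int index] => _array[index]; // returns a 16-byte copy of the element
```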

Luckily, since C# 7.0 we can “ref return” to efficiently return a reference to a given array element, which is super nice (refer to my article about ref for more info):
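A sketch of the ref-returning version (illustrative):

```csharp
public ref SomeStruct GetRef(int index) => ref _array[index]; // returns a reference, no copying
```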

Here, 99.9999% of devs will stop, satisfied with the semantics and the performance results. But… if we know we will call it tremendously often, can we do better?!

First of all, let’s see what is being JITted by the .NET Core x64 runtime (5.0rc) when accessing the 9th element (index is 8):
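An approximation of what such JITted code looks like (an illustrative sketch, not the verbatim 5.0rc output; it assumes the 16-byte SomeStruct from above):

```asm
01: sub  rsp, 0x28                 ; stack frame creation
02: mov  rax, [rcx+0x8]            ; load the _array field
03: mov  edx, 0x8                  ; the constant index
04: cmp  edx, dword ptr [rax+0x8]  ; bound check: index vs array length
05: jae  THROW_RANGE               ; out of range -> jump to the throw helper
06: lea  rax, [rax+0x90]           ; element address: 0x10 array header + 8 * 16 bytes
07: add  rsp, 0x28                 ; stack frame teardown
08: ret
```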

For those who know a little assembler, it may be clear what is going on here. But let’s make a short summary:

  • we see a bit of “stack frame” creation here (sub/add rsp) – could we get rid of it in such a simple method?
  • we see a bound check in line 4 (cmp the index to 8) that checks whether we are accessing the array with a correct index – could we get rid of it because we trust our code? 😇

Disclaimer: Getting rid of bound checks is very risky and the resulting dangers will probably outweigh the performance benefits. Thus, use it only after heavy consideration, if you are sure why you need it and you can ensure the calling code will be correct (providing valid indices).

To continue, we will now be walking on the thin ice of unsafe code.

The first idea is to use Unsafe.Add to provide a kind of “pointer arithmetic” – add the index to a reference to the first element:
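A sketch (Unsafe lives in System.Runtime.CompilerServices):

```csharp
public ref SomeStruct GetRefUnsafe(int index) =>
    ref Unsafe.Add(ref _array[0], index); // note: _array[0] is still bound-checked
```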

The “problem” here is that it produces almost identical results, because _array[0] is still a bound-checked array access (and we do not get rid of the stack frame either).

Hence, a non-trivial question arises – how do we get the address of/reference to the first element of an array?

We could think of doing some Span-based magic (using MemoryMarshal.GetReference):
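A sketch (MemoryMarshal lives in System.Runtime.InteropServices):

```csharp
public ref SomeStruct GetRefViaSpan(int index) =>
    ref MemoryMarshal.GetReference(_array.AsSpan(index));
```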

But you can probably feel it – this produces even slower and bigger code (Span creation handling etc.) while the bound check will still be there (Span is “safe”).

So, we somehow need to find a better way of getting the address of the array’s first element. The thing is, the internal structure of the array type is an implementation detail (although a well-known one). How can we overcome that?

The idea is… to rely on that implementation detail. This approach is used by the DangerousGetReferenceAt method from the Microsoft.Toolkit.HighPerformance package maintained by Sergio Pedri. The DangerousGetReferenceAt source code explains it well:
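The implementation looks roughly like this (slightly simplified):

```csharp
public static ref T DangerousGetReferenceAt<T>(this T[] array, int i)
{
    // reinterpret the array reference as a reference to a class with a matching layout
    var arrayData = Unsafe.As<RawArrayData>(array);
    ref T r0 = ref Unsafe.As<byte, T>(ref arrayData.Data);
    ref T ri = ref Unsafe.Add(ref r0, i);
    return ref ri;
}

// a class whose layout mimics the runtime's internal array layout (x64)
private sealed class RawArrayData
{
    public IntPtr Length; // the array length, stored right after the method table pointer
    public byte Data;     // the first byte of the array's data
}
```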

So, we are casting (reinterpreting) an array reference as a reference to some artificial RawArrayData class, which has a layout corresponding to the array layout. Thus, getting the “data” reference is now trivial. No bound checks at all!

The good news is that this method has been ported to .NET 5! So, in .NET 5.0rc we can already use MemoryMarshal.GetArrayDataReference, which does exactly the same thing:
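Its implementation is roughly the same trick (a simplified sketch, not the verbatim runtime source):

```csharp
public static ref T GetArrayDataReference<T>(T[] array) =>
    ref Unsafe.As<byte, T>(ref Unsafe.As<RawArrayData>(array).Data);
```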

Thus, without any external dependencies our code in .NET 5 may be rewritten to:
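For example (illustrative, reusing the names from the sketches above):

```csharp
public ref SomeStruct GetRefDangerous(int index) =>
    ref Unsafe.Add(ref MemoryMarshal.GetArrayDataReference(_array), index);
```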

And the resulting code is indeed much more lightweight:
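Again an approximation rather than the verbatim output:

```asm
mov  rax, [rcx+0x8]    ; load the _array field
add  rax, 0x90         ; 0x10 array header + 8 * 16 bytes – no bound check
ret                    ; and no stack frame
```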

No bound checks, and as an additional reward for the method’s simplicity – no stack frame.

Benchmarks indeed show a noticeable (well, on the nanosecond order of magnitude) difference.

This simply means that we are now about 5x faster than with the initial solution!

Disclaimer #2: The approach taken here, with the usage of GetArrayDataReference, is super dangerous. As Levi Broderick, one of the .NET framework developers, said: “Also, read the method documentation. It does more than remove bounds checks; it also removes array variance checks. So it might not be valid to write to the ref, even if the index is within bounds. Misuse of the method will bite you in the ass, guaranteed.” Moreover, the documentation clearly states that “a reference may be used for pinning but must never be dereferenced” 😇


In the upcoming .NET 5 a very interesting change is being added to the GC – a dedicated Pinned Object Heap, a brand new type of managed heap segment (alongside the Small and Large Object Heaps we have had so far). Pinning has its own costs, because it introduces fragmentation (and in general complicates object compaction a lot). We are used to having some good practices about it, like “pin only for…:

  • a very short time” so the GC will not bother – to reduce the probability that a GC happens while many objects are pinned. That’s the scenario for the fixed keyword, which is in fact only a very lightweight way of flagging a particular local variable as a pinned reference. As long as no GC happens, there is no additional overhead (both pinning scenarios are sketched after this list).
  • a very long time”, so the GC will promote those objects to generation 2 – as gen2 GCs should not be so common, the impact will also be minimized. That’s the scenario for a GCHandle of type Pinned, which has a slightly bigger overhead because we need to allocate/free the handle.
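Both scenarios sketched, using the standard APIs (illustrative usage):

```csharp
using System;
using System.Runtime.InteropServices;

byte[] buffer = new byte[1024];

// Scenario 1: pin for a very short time with `fixed` –
// the buffer stays pinned only inside this block
unsafe
{
    fixed (byte* p = buffer)
    {
        // call native code with p here
    }
}

// Scenario 2: pin for a very long time with a pinned GCHandle
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    IntPtr address = handle.AddrOfPinnedObject();
    // hand the address to native code that keeps it for a while
}
finally
{
    handle.Free(); // remember to release the handle
}
```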

However, even if applied, those rules will produce some fragmentation, depending on how much you pin, for how long, what the resulting layout of the pinned objects in memory is, and many other, intermittent conditions.

So, in the end, it would be perfect to just get rid of pinned objects and move them to a different place than the SOH/LOH. This separate place would be, by the GC design, simply ignored when considering heap compaction, so we would get the pinning behaviour out of the box.
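In .NET 5 this place can be targeted directly at allocation time – a minimal illustration using the new GC.AllocateArray API:

```csharp
// allocate an array directly on the Pinned Object Heap –
// it will never be moved by the GC, no fixed/GCHandle needed
byte[] buffer = GC.AllocateArray<byte>(4096, pinned: true);
```

Continue reading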

Mobius Overview

A .NET application is “just” a piece of CIL bytecode to be executed by the .NET runtime. And the .NET runtime is “just” a program that is able to perform this task. It just happens that currently the .NET Framework/.NET Core runtimes are written in C++. I am also fully aware of CoreRT, a .NET runtime with many parts rewritten in C# (like the type system), but still, the crucial parts (including the JIT compiler and the GC) were left written in C++.

But what if we wrote a .NET runtime as… a .NET application? Is it possible at all? I mean, literally no native/C++ code, everything running as a .NET Core application written in C#? Does this sound like a kind of inception and infinite recursion? It would require running one .NET runtime on top of another .NET runtime, right?

I decided to check it out and that’s how the Mobius runtime idea was coined! Yeah, I know it sounds strange and I do not expect it will be anything close to a production-ready thingy in the nearest century. I am fully aware of the amount of code that needs to be written to make a full .NET runtime. However, I found it interesting to validate such an idea, and I can see small usages for it as well. Imagine a NuGet package with a separate runtime that you can add to your application 😉

Continue reading

GC posters

In short, I’ve prepared two posters about .NET memory management. They provide a comprehensive summary of “what’s inside the .NET GC”, based on .NET Core (although almost all of the information is relevant for .NET Framework as well).

The first shows a static point of view – how memory is organized into segments and generations, what the roots are, etc.:

Poster I

The second shows a dynamic point of view – how GC threads work and what GC modes are available:

Poster II

You can download them for FREE in vector PDF format from my https://prodotnetmemory.com site. Take them and print them!

As a part of my consultancy job, I have the pleasure of helping various customers with problems that could be described collectively as GC-related (or memory-related in general). One day Tamir Dresher from the Clarizen company (BTW, an author of Rx.NET in Action) contacted me with an extremely interesting message (emphasis mine):

We are experiencing a phenomenon of GC duration of 15 minutes in our backend servers. (…) Do you think we can have a session with you and perhaps you’ll have ideas on how to find the root cause?

15 minutes! That’s an eternity! If we see something like this, one thought comes to mind – something really serious must be happening there! As nowadays most of such problems may be diagnosed remotely, after signing NDAs we could go straight into attacking the problem. Clarizen provided a very well-prepared and concise summary of their architecture and current findings.

Continue reading


A few months ago I wrote an article about Zero GC in .NET Core 2.0. This proof of concept was based on a preview version of .NET Core 2.0, in which the possibility to plug in a custom garbage collector was added. Such a “standalone GC”, as it was named, required a custom CoreCLR compilation because it was not enabled by default. Quite a lot of other tweaks were necessary to make this work – especially, including the required headers from the CoreCLR code was very cumbersome.

However, the upcoming .NET Core 2.1 contains many improvements in that field, so I’ve decided to write a follow-up post. I’ve also answered one of the questions bothering me for a long time (well, at least started answering…) – what would real usage of Zero GC look like in the context of an ASP.NET Core application?

.NET Core 2.1 changes

Here is a short summary of the most important changes. I’ve updated the CoreCLR.Zero repository to reflect them.

  • first of all, as previously mentioned, standalone GC is now pluggable by default so no custom CoreCLR is required. We are able to plug in our custom GC just by setting a single environment variable (see the snippet right after this list)
  • as the standalone GC matured, documentation appeared in CoreCLR
  • a great improvement is that the code between the library implementing the standalone GC and CoreCLR has been greatly decoupled. Now it is possible to include only a few files directly from the CoreCLR code to have things compiled.

    Previously I had to create my own headers with some of the declarations copy-pasted from CoreCLR, which was obviously not maintainable and cumbersome.
  • the loading path has been refactored slightly. InitializeGarbageCollector inside CoreCLR calls GCHeapUtilities::LoadAndInitialize(), which either initializes the default GC or loads the standalone one, depending on the configuration.

    Inside LoadAndInitializeGC there is a brand new functionality – verification of the GC/EE interface version match. It checks whether the version used by the standalone GC library (returned by the GC_VersionInfo function) matches the runtime version – the major version must match and the minor version must be equal or higher. Additionally, the GC initialization function has been renamed to GC_Initialize.
  • the core logic of my poor man’s allocator remained the same, so please refer to the original article for details
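Plugging in the custom GC then boils down to something like this (ZeroGC.dll is a placeholder – use the file name of your compiled standalone GC library):

```cmd
set COMPlus_GCName=ZeroGC.dll
```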

ASP.NET Core 2.1 integration

As this CoreCLR feature has matured, I’ve decided to use the standard .NET CLI instead of CoreRun.exe. This allowed me to easily test the question bothering me for a long time – how will even the simplest ASP.NET Core application consume memory without garbage collection? .NET Core 2.1 is still in preview, so I’ve just used the latest daily build of the .NET CLI to create a WebApi project:
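Something along these lines (the project name is illustrative):

```cmd
dotnet new webapi -o ZeroGCWebApi
```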

I’ve modified the Controller a little to do something more dynamic than just returning two string literals:
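For example, something like this (an illustrative modification, not necessarily the exact original code):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    // return freshly allocated strings so that every request allocates a little
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new[] { DateTime.UtcNow.ToString("O"), Guid.NewGuid().ToString() };
    }
}
```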

Additionally, I’ve disabled Server GC, which is enabled by default. Obviously, setting the GC mode does not make sense as there is no GC at all, right? However, Server GC crashes the runtime because JIT_WriteBarrier_SVR64 is being used, which requires a valid card table address – and there are no card tables either 🙂
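Disabling it is a single property in the csproj (a standard MSBuild setting, shown here for completeness):

```xml
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```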

Then we simply compile and run, remembering to set the environment variable:
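For example (the library name is again a placeholder):

```cmd
REM ZeroGC.dll stands for the compiled standalone GC library
set COMPlus_GCName=ZeroGC.dll
dotnet run
```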

Everything should be running fine so… congratulations! We’ve just run an ASP.NET Core application on .NET Core with a standalone GC plugged in, which is doing nothing but allocating.

Benchmarks

I’ve created the same WebApi via the regular .NET Core 2.0 CLI for reference. Then, via SuperBenchmarker, I’ve started a simple load test: 10 concurrent users making 100,000 requests in total with a 10 ms delay between each request.

.NET Core 2.1 with Zero GC:

[SuperBenchmarker results: .NET Core 2.1 with Zero GC]

.NET Core 2.0:

[SuperBenchmarker results: .NET Core 2.0]

As we can see, the classic GC from .NET Core was able to process slightly more requests (357.8 requests/second) compared to the version with Zero GC plugged in. This does not surprise me at all because my version uses the most primitive allocation based on calloc. I’m quite surprised that Zero GC is doing so well after all. However, this is not so interesting because I assume that replacing calloc with a simple bump-a-pointer allocation would improve performance noticeably.

What is interesting is the memory usage over time. As you can see in the chart below, after a minute of such a test, the process using Zero GC takes around 1 GB of memory. This is… quite a lot. I’m not sure yet how to interpret this. The version with the regular GC ended with a stable 120 MB size. Both started from a fresh run.

[Chart: memory usage over time – Zero GC vs regular GC]

This would mean that each REST WebApi request triggers around 55 kB of allocations. Any comments will be appreciated here…

Update 30.01.2018: After debugging allocations during a single ASP.NET request, it turns out most of them come from RouterMiddleware. This is no surprise as currently this application does almost nothing but routing… I’ve uploaded a sample log of such a single request, which seems to be minimal (others allocate some buffers from time to time). It consumes around 7 kB of memory.

We can often hear that allocation of objects is “cheap” in .NET. I fully support this sentence because the most important part is its continuation – allocation is cheap, but allocating a lot of objects will hit you back, as sooner or later the garbage collector will kick in and start messing around. Thus, the fewer allocations, the better.

However, I would like to add a few words about “allocation is cheap” itself. This is true to some extent because the typical path of object allocation is indeed really fast. The so-called bump-a-pointer technique is most often used. It consists of the following simple steps (sketched in code after this list):

  • it uses the so-called allocation pointer as the address of a newly created object
  • it increases the allocation pointer by the requested size (so the next object will be created there)
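A minimal sketch of this idea in C# (a hypothetical toy allocator for illustration – the real implementation lives inside the GC, with a slow path handling exhausted regions):

```csharp
// a toy bump-a-pointer allocator – illustration only, not the real GC code
public unsafe class BumpAllocator
{
    private byte* _allocationPointer; // address where the next object will be placed
    private readonly byte* _limit;    // end of the current allocation region

    public BumpAllocator(byte* start, int size)
    {
        _allocationPointer = start;
        _limit = start + size;
    }

    public byte* Allocate(int size)
    {
        byte* result = _allocationPointer;
        if (result + size > _limit)
            return null; // in the real GC this is the slow path: get a new region or trigger a collection

        _allocationPointer += size; // bump the pointer – the next allocation starts right after this object
        return result;
    }
}
```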

Continue reading