I’m a guy talking and showing a lot of stuff about .NET memory management. I’ve written a book about it, I’ve done webinars, conference talks, infographics – a lot. And there is ALWAYS at least one question like “Ok, that’s nice, but why should I care about it? It’s all automatic, isn’t it?”. Answering the same question again and again is pretty tiresome. Hence, I’ve eventually decided to make a webinar just about that – why should I care? Why should YOU care? If you have more than an hour to watch it, I strongly invite you. But if you don’t, stay with me for a written version here.

First of all, I would rephrase the question – “Why should YOU care about learning and understanding the tools you use…?”. Why should you care about async/await, or the database or ORM you use? In the end, what do you really need to know to program and earn money in a given technology?

Let’s pause these questions for a moment and let’s be clear about one thing – .NET GC (and automatic memory management in general) is a convenient abstraction. As our dear Wikipedia says, an abstraction is “a simplification of something much more complicated that is going on under the covers”. And automatic memory management is a simplification of much more complex manual memory management – giving you a promise that you have an infinite amount of memory (you only allocate) and don’t need to care about freeing it. In IT we love abstractions. Abstractions simplify our lives, A LOT. But there is also The Law of Leaky Abstractions coined by Joel Spolsky (co-founder of Stack Overflow and Trello) – “All non-trivial abstractions, to some degree, are leaky.”

So, TCP is an abstraction of a reliable connection built on unreliable IP. An ORM like Entity Framework is an abstraction of classes and methods representing DB tables and SQL queries. And ASP.NET Web Forms has an abstraction of statefulness over the stateless HTTP protocol, with the help of ViewState. But if you have ever developed something bigger than a school project, you have seen those abstractions leak. Observing TCP latency drops due to IP retransmissions, analyzing SQL queries generated by your ORM, being surprised how big a ViewState is transferred between the client and server again and again. Abstractions fail. We are fooled by abstractions. Sometimes they leak more, sometimes less.

Ok, so the .NET GC abstraction leaks, too? But for sure someone will say: “I have been programming in C# for X years and have never needed to care about it…”. Where X is like 10, 20 or a gazillion. But putting jokes aside, it makes sense. Because, as I said, abstractions make our lives so much easier! That’s it, that’s the result – you MAY not think about it and live.

That’s why the “80% rule” works here – MOST of the time you don’t need to think about it. Until you do. Or until you want to, for your own benefit.

You can be in the “80%” of people that don’t care – that’s TOTALLY fine. But you can be in the “20%” of experts that do care! If someone says “I have been programming in C# for X years and have never needed to care about it”, I can reverse the question: what would you have been working on IF you had cared about it during those X years? Maybe they were unmeasured, inefficient apps wasting resources, and you could have helped save some money for your company? Or maybe they were just pretty boring apps? Or maybe you could just be a better Software Craftsman, caring to deliver not only working software, but well-crafted working software? I believe this all leads to one crucial question – do you want to be an expert or a regular developer? If you want to be just a regular .NET developer, absolutely, YMMV!

Ok, let’s pause for a moment. There is one more aspect of it… There are two main perspectives you can look at it from:

  • application developer perspective – what’s said above perfectly suits here. “You don’t have to care about it until it becomes a problem” 👍 You are in a good position here – you know your application, its context, requirements, current and expected workload. You can measure it (if you know how…). You know what the bottlenecks are. So, the goal here is not to write everything with pointers and Span. But, we return to the “expert” story again – you can still keep performance in mind, know good and bad practices, be aware of abstraction leaks. And have “entry points” for how to fix, trace, debug, measure… You can be an expert!
  • library author perspective – well, you don’t know your customers. It just should be fast. And performance, among features, documentation and clean design, becomes one of the distinguishing factors when we choose this library or another. The sky is the limit here. The goal is to write everything with pointers and Span 😉 (yes, a joke!). What I’m trying to say is that this perspective favors writing performance- and memory-aware code a little more.
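To make that perspective a little more concrete, here is a tiny, hypothetical sketch of the kind of API a memory-aware library author might expose (the `CsvField` helper is invented for illustration) – accepting `ReadOnlySpan<char>` lets callers pass strings, slices of strings, or stack buffers without allocating any substrings:

```csharp
static class CsvField
{
    // Counts fields in a CSV line without allocating a single
    // substring – the kind of detail that matters in a library
    // hot path, where every caller pays for our allocations.
    public static int CountFields(ReadOnlySpan<char> line)
    {
        if (line.IsEmpty) return 0;
        int count = 1;
        foreach (char c in line)
            if (c == ',') count++;
        return count;
    }
}
```

Since `string` converts implicitly to `ReadOnlySpan<char>`, callers keep the convenient API – `CsvField.CountFields("a,b,c")` – while advanced callers can pass slices of a larger buffer for free.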

Bonus question to you: Do you want to only consume .NET libraries, EF Core, ASP.NET Core, Unity, Xamarin, RavenDB, … or produce them? For money? Guess what you need to have a chance of doing that. Yes, you have to scratch your comfortable abstractions off the surface. “The only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting”, as Joel Spolsky said in the article cited above. The whole Mechanical Sympathy movement is about that. Understand one or two layers below the one you use.

Summarizing, and answering the title question, here is my pragmatic approach to .NET memory management – split it into levels and decide which level suits you the most:

  • Level 0 – I really don’t care, I just want to get things done.
  • Level 1 – Keep in mind some knowledge of abstractions, some basic sanity checks like: use a struct here, don’t use LINQ there, set the Capacity of a List, know what the consequences of allocating a lot in the LOH are (and what the LOH is in the first place), and so on and so forth. Just clean code for performance. Explaining that you should be there is just like explaining you should write clean code and tests. Or wash your hands before you eat. This allows you to feel comfortable and “expert-ish” in the “80%” zone.
  • Level 2 – Be more interested. Know how to measure and diagnose if something bad happens. Have a good toolbox for measuring and troubleshooting. Have some more knowledge of abstractions that may leak. Move to the “20%” zone, become an expert. Get a better salary, seek interesting projects (and they will seek you!).
  • Level 3 – Optimize and be really memory-aware, like crazy. Use advanced C# tricks, refs, Span, stackalloc. Get some internals knowledge. Sell your unique knowledge for serious money.
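As an illustration of the Level 1 “sanity checks”, here is a minimal sketch (the `Squares` helper is hypothetical) of the List Capacity point – when the final size is known up front, pre-sizing the list avoids the internal array being reallocated and copied over and over as it grows:

```csharp
using System.Collections.Generic;

static class Level1
{
    // Level 1 sanity check: with the capacity given up front, the
    // backing array is allocated once. Without it, List<T> starts
    // small and doubles repeatedly, copying elements each time.
    public static List<int> Squares(int n)
    {
        var result = new List<int>(capacity: n);
        for (int i = 0; i < n; i++)
            result.Add(i * i);
        return result;
    }
}
```

Nothing clever – exactly the kind of “wash your hands” habit that costs nothing to apply and quietly removes garbage from hot paths.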

I’m personally not paralyzed by high-perf code concerns either. I start from a working prototype and only then measure and optimize, if needed at all. But having those sanity checks allows me to avoid stupid mistakes. It is just a “gut feeling” you gain when moving to the higher levels. BTW, this all applies to everything we use – architecture, async/await, ORM, the .NET runtime itself, …

This is where most of the story ends for you and me. But there’s a little more – a broader, company perspective. When talking about taking care of .NET memory and performance, I often meet these three mantras:

  • #1 “Infra will take it” – I agree, sometimes it is just cheaper to put more $$$ on Azure instead of paying a dev. Especially if you are a startup. But this approach should have some end, I strongly believe. There is the “Software disenchantment” article from 2018, which received some attention in my circles recently, and with which I agree SO MUCH – “Everything is unbearably slow”, and inefficient. I highly recommend reading it as a follow-up to this post! And well, throwing money at hardware to excuse our laziness… I’d really love to start seeing some “Ecological Sympathy” among companies. Yes, consuming more CPU/RAM on a phone or in the cloud in the end translates to draining natural resources.
  • #2 “We have C++ for that” – Yes… but does a .NET-based company really want to hire a single C/C++ developer and install a whole new build ecosystem into the pipeline… for some critical 3% part of the system being written in C++, while C# allows doing the same with just a little learning involved?
  • #3 “We just don’t care” – Well… Milliseconds Make Millions. There are so many reports showing clearly how directly the slowness of an app translates to worse conversions, income, retention – however you call it.

But again, this is all more of a company perspective. What does this mean for you? Just sell it, become an expert – whether it is architecture, scalability or performance. Become a Software Craftsman. It waits for you!

HOW can you learn? There’s a lot of free stuff, like the .NET Memory Performance Analysis document by Maoni Stephens, Garbage Collection Design from “The Book of the Runtime”, or my .NET GC Tips & Tricks and .NET GC Internals series on YT. There are conferences and webinars. And if you want to pay, there’s my book and my recently announced .NET Memory Course, which is just for that.

PS. Bonus sidenote for people seeing some small optimizations and immediately misquoting Donald Knuth’s “premature optimization is the root of all evil”. The whole context of this quote, coming from the 1974 “Structured Programming with go to Statements” paper, is: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”


What do you know about .NET memory management? Should you care at all? Yes, it is all managed and automatic and the Garbage Collector takes care of everything. That’s a great achievement of managed runtimes in general – they provide this super nice abstraction of memory that is just there and we do not need to think about it at all, right?

Well, yes and no. For sure you may develop business applications for years, earning money, without touching the topic of .NET memory management at all. I’ve been there too. But there is always more. There is always some edge case, some scalability problem, a memory leak or just inefficient resource utilization (wasting money💸). I’m not here today to convince you of that, though. There will be time for this 🙂

Today I just wanted to invite you to my new initiative – a small quiz about .NET memory management. 32 questions that may, or may not, shed some light on your knowledge of this topic:

👉 Take a quiz 👈

Share your result, discuss the questions! And as you will see at the end, you can subscribe to the newsletter where I’ll be providing explanations of the correct answers for every question.

And yes, this is the beginning of a bigger initiative – https://dotnetmemoryexpert.com – more about it soon. In general, more and more GC- and memory-related content is coming!😍




Have you ever wondered what happens when you create and use breakpoints in .NET? Here’s a little picture that answers that question (if you don’t like the font, there is a different version at the bottom).

We have the main actors here as follows:

  • .NET Application – our regular .NET application that we want to debug. Methods, as provided by the compiler in Intermediate Language (IL) form, are Just-in-Time compiled to native code when called. So, imagine that our “e8 50 ff ff” represents example binary code of a line we want to debug (no matter what it does for now)
  • Debugger Runtime Control Thread (hereinafter referred to as Debugger RC Thread) – it is a special thread inside every .NET process for debugging purposes, serving as a bridge between the CLR and an external debugger. It runs a so-called “debugger loop”, listening for events coming from the Debug Port (supported by the OS). Please note that in the case of native debugging, such a special thread is typically injected into the debuggee process. But we don’t need to do that here, as the .NET runtime provides it. Moreover, this thread understands the CLR data structures, so it is able to cooperate with the JIT and so on.
  • external Debugger – it is the external process that we cooperate with. Imagine it as the tooling part of Visual Studio or another IDE you use. It uses a set of COM objects that are able to communicate via an Inter-process communication (IPC) mechanism with the Debugger Runtime Control Thread.
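As a side note to the picture: on x86, a software breakpoint boils down to the debugger overwriting the first byte of the target instruction with the one-byte `int 3` opcode (0xCC), so the CPU traps when execution reaches it, and restoring the original byte to resume. Here is a toy C# sketch of that byte-patching idea – the class and methods are invented for illustration; a real debugger patches the debuggee’s memory, not a managed array:

```csharp
static class BreakpointDemo
{
    const byte Int3 = 0xCC; // x86 "int 3" – the software breakpoint opcode

    // What a debugger conceptually does: remember the original first
    // byte of the instruction, then overwrite it with INT3 so the CPU
    // traps into the debugger when execution reaches this address.
    public static byte SetBreakpoint(byte[] code, int offset)
    {
        byte original = code[offset];
        code[offset] = Int3;
        return original; // kept by the debugger to restore on resume
    }

    // Putting the saved byte back lets the original instruction run again.
    public static void ClearBreakpoint(byte[] code, int offset, byte original)
        => code[offset] = original;
}
```

Applied to our example bytes `{ 0xE8, 0x50, 0xFF, 0xFF }`: after `SetBreakpoint` the first byte reads 0xCC, and after `ClearBreakpoint` it is 0xE8 again.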


Sometimes you may hear opinions that there are too many changes introduced to C#, and/or too fast. That people get lost and confused about all the new syntax and features added here and there. And while one may argue or not, I would like to look at this topic from a different angle – what are you missing in C#? What single functionality would you enjoy the most?

I’ve asked the same question here and there (in Polish) and here are the aggregated results, sorted by popularity, with some additional remarks from my side:

1. Discriminated unions

The clear winner is the possibility to use “discriminated unions”. There is an ongoing proposal about them already. The most often mentioned wishes here are to have good pattern matching support (we hope so!) and F# DU compatibility (unlikely, as in the case of records).

Hi and welcome to the third episode of the .NET GC internals! Yesterday it was again 1.5h of talking about the Mark phase, this time in its Concurrent flavour. Most of the time I described the solutions, and their implementation, to two problems:

  • Problem #1 – how to mark an object while it is being used? – in the non-Concurrent Mark phase the MethodTable pointer is used for this. But now, while the application is running and may access this pointer, it is not the best place ever 🙂
  • Problem #2 – how to get a consistent view while references are changing? – an even more difficult problem, as the GC tries to discover what is reachable while references between objects are constantly being changed by the application
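To give Problem #1 some flavour: objects are at least pointer-aligned, so the lowest bit of a MethodTable pointer is always zero – which is why the non-concurrent Mark phase can borrow it as the “marked” flag. A toy sketch of that bit trick, using plain `ulong` arithmetic rather than real GC code:

```csharp
static class MarkBit
{
    // Pointer-aligned addresses always have their low bits clear,
    // so bit 0 of the MethodTable pointer is free to act as the
    // "marked" flag during the non-concurrent Mark phase.
    const ulong MarkFlag = 1;

    public static ulong Mark(ulong methodTablePtr)     => methodTablePtr | MarkFlag;
    public static bool  IsMarked(ulong methodTablePtr) => (methodTablePtr & MarkFlag) != 0;
    public static ulong Unmark(ulong methodTablePtr)   => methodTablePtr & ~MarkFlag;
}
```

It also makes Problem #1 obvious: while the application is running concurrently, it dereferences that very pointer to call methods – so silently flipping its bits is no longer a safe place to stash the flag.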


Enjoy watching the recording on YouTube and if you have any questions, do not hesitate to ask!

As usual, you can get the slides from our goodies page.

And do not forget to join on Wednesday, February 10, 7PM CET – this time I’ll cover the Plan phase, describing such internal stuff as “plugs”, “gaps”, “brick tables” and more!

Hi and welcome to the second episode of the .NET GC internals! Yesterday it was 1.5h of talking about the (non-concurrent) Mark phase. The one responsible for discovering which objects are “reachable” and which may be garbage collected. I’ve covered topics like the object graph traversal algorithm, the pinning and marking flags, the mark stack and mark list data structures. And obviously, some deep dive into gc.cpp at the end.


Enjoy watching the recording on YouTube and if you have any questions, do not hesitate to ask!

I’ve decided to make a series of at least 8 free weekly webinars about the in-depth implementation details of the .NET GC and… I’m super happy with it! Why the idea? Many of my other activities are about the more practical “.NET memory management”, like my book or the workshops/trainings/consultancy I give. But during all these practically-oriented events there is never enough time to explain in detail how the .NET GC is implemented. Obviously, I always explain some concepts and algorithms, to the level that helps in understanding the whole picture. But not with the level of detail that I would be satisfied with.

Hence the idea – make separate content that will be just as deep as I like 🙂 So, I will cover details on the level of bits, bytes and source code, not only on the level of the overall algorithm description.

The first episode was yesterday, feel invited to watch:



We are all using RAM – probably DDR4 is just sitting there in your PC and serving memory with outstanding speed, even while you are reading this sentence. But have you ever wondered how DRAM works internally?

If so, please find my new poster about DRAM anatomy. It provides a slightly oversimplified view, for illustrative purposes. But it explains DRAM internals “well enough” for any regular, mortal developer like you and me. This is a great basis for understanding why linear memory access is so much preferred over random access, what cryptic memory access timings like 8-8-8-24 mean, and for explaining bugs like the Rowhammer bug. Or just hang it on the wall as a nerdy decoration 🙂
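If you want to feel the “linear vs random” point yourself, a sketch like the following (a hypothetical helper – time it with a Stopwatch or BenchmarkDotNet on a large array to see the gap) sums the same data sequentially and in a shuffled order. Both passes compute the same result, but the sequential one plays along with the DRAM row buffer and the CPU prefetcher:

```csharp
static class AccessPattern
{
    // Sums the same array twice: once in order (row-buffer- and
    // prefetcher-friendly) and once following a shuffled index
    // permutation. Same arithmetic, very different memory behavior.
    public static (long sequential, long random) Sum(int[] data, int[] shuffledIndices)
    {
        long seq = 0, rnd = 0;
        for (int i = 0; i < data.Length; i++)
            seq += data[i];            // walks consecutive addresses
        foreach (int i in shuffledIndices)
            rnd += data[i];            // jumps around the array
        return (seq, rnd);
    }
}
```

On arrays that don’t fit in the CPU caches, the random pass typically takes several times longer – exactly the effect the poster explains.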

The poster is available for free in a printable version from https://goodies.dotnetos.org.

Here are some additional materials that you can follow while interpreting/digging in this poster:

Happy printing!


Testing shows the presence of errors in a product, but “cannot prove that there are no defects” – you probably know that quote. I remember so many hours spent on debugging those little, mean bugs hiding deep in the code’s edge cases. But what’s worse, I remember even more hours trying to understand and reproduce an error that happens only in the production environment. Here’s my top 5 of the most popular issues I’ve met during the last years:

  • app hangs due to deadlocks (in the app or an external library)
  • memory issues like memory leaks, long GC pauses or high CPU usage due to the GC
  • a swallowed exception preventing some logic from running, with no logs available
  • threading issues like thread-pool starvation
  • intermittent errors due to resource shortages, like running out of sockets or file handles
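As an example of the thread-pool starvation item, here is a minimal sketch (all names invented for illustration) of the classic “sync-over-async” pattern that causes it, next to the asynchronous fix:

```csharp
using System.Threading.Tasks;

static class SyncOverAsync
{
    static async Task<int> FetchAsync()
    {
        await Task.Delay(10);   // stands in for real I/O (DB, HTTP, …)
        return 42;
    }

    // Anti-pattern: blocks a thread-pool thread until the task
    // finishes. Under load every blocked thread makes starvation
    // worse, because task completions also need pool threads to run.
    public static int FetchBlocking() => FetchAsync().GetAwaiter().GetResult();

    // Fix: stay asynchronous all the way up the call chain, so the
    // thread is returned to the pool while the I/O is in flight.
    public static Task<int> FetchProperlyAsync() => FetchAsync();
}
```

A plain console app won’t deadlock on the blocking version, which is exactly why this pattern so often survives testing and only bites in production under real load.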

BTW, what’s your top 5?



A lot of C# 9-related content is around. Very often, records are mentioned as one of the most interesting new features. So, while we can find A LOT of buzz around them, I wanted to provide a distilled set of facts typically not presented when describing them.

Fact #1. You can use them in pre-.NET 5

Records have been announced as a C# 9 feature (and thus .NET 5), and that is the officially supported way. But you can “not officially” use most C# 9 features in earlier frameworks, as they don’t need new runtime support. So, if being not “officially supported” does not bother you too much, just set the proper LangVersion in the csproj and you are (almost) done:
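For illustration, the project file could look something like this – the netcoreapp3.1 target is just an example, any pre-.NET 5 TFM works the same way:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- Opt into C# 9 syntax even though the TFM predates .NET 5 -->
    <LangVersion>9.0</LangVersion>
  </PropertyGroup>
</Project>
```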

Trying to compile a super-typical example like the following:
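(The original snippet is not reproduced here, but any positional record – such as this hypothetical one – is enough to hit the issue:)

```csharp
// A positional record – the compiler synthesizes init-only setters
// for FirstName and LastName, which is exactly what drags in the
// IsExternalInit requirement at compile time.
public record Person(string FirstName, string LastName);
```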

it will still not compile, complaining about the lack of the mysterious IsExternalInit type:
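The compiler message is along these lines (exact wording may differ between compiler versions):

```
error CS0518: Predefined type 'System.Runtime.CompilerServices.IsExternalInit' is not defined or imported
```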

Funnily enough, the workaround is just to define it in your project (exactly as it is in the newer CoreLib, shipped with .NET 5):
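The compiler only checks that a type with this exact name exists, and in CoreLib it is just an empty marker class – so the shim is a one-liner:

```csharp
namespace System.Runtime.CompilerServices
{
    // Empty marker type; its mere presence satisfies the compiler's
    // check when emitting init-only setters on older frameworks.
    internal static class IsExternalInit { }
}
```

Many projects put this in a single `IsExternalInit.cs` file, optionally wrapped in a `#if !NET5_0_OR_GREATER` guard so it disappears once the project targets .NET 5 or later.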

BTW, IsExternalInit is not required for record usage by itself, but for init, as discussed in https://github.com/dotnet/runtime/issues/34978 and https://github.com/dotnet/runtime/pull/37763. So if creating mutable records is ok for you, there’s no need for that.

Sidenote: If you are interested in what more you can “not officially” use, look at the Using C# 9 outside .NET 5 #47701 discussion.