Why should you care about .NET GC…?

I’m a guy talking and showing a lot of stuff about .NET memory management. I’ve written a book about it, I’ve done webinars, conference talks, infographics, a lot. And there is ALWAYS at least one question like “Ok, that’s nice, but why should I care about it? It’s all automatic, isn’t it?”. Answering the same question again and again is pretty tiresome. Hence, I’ve eventually decided to make a webinar just about that – why should I care? Why should YOU care? If you have more than an hour to watch it, I strongly invite you to. But if you don’t, stay with me for a written version here.

First of all, I would rephrase the question – “Why should YOU care about learning and understanding the tools you use…?”. Why should you care about async/await, the database or the ORM you use? In the end, what do you really need to know to program and earn money in a given technology?

Let’s pause these questions for a moment and let’s be clear about one thing – .NET GC (and automatic memory management in general) is a convenient abstraction. As our dear Wikipedia says, an abstraction is “a simplification of something much more complicated that is going on under the covers”. And automatic memory management is a simplification of the much more complex manual memory management – giving a promise that you have an infinite amount of memory (you only allocate) and don’t need to care about freeing it. In IT we love abstractions. Abstractions simplify our lives, A LOT. But there is also The Law of Leaky Abstractions coined by Joel Spolsky (co-founder of StackOverflow and Trello) – “All non-trivial abstractions, to some degree, are leaky.”

So, TCP is an abstraction of a reliable connection built on the unreliable IP. An ORM like Entity Framework is an abstraction of classes and methods representing DB tables and SQL queries. And ASP.NET Web Forms has an abstraction of statefulness over the stateless HTTP protocol, with the help of ViewState. But if you have ever developed something bigger than a school project, you have seen those abstractions leak: observing TCP latency drops due to IP retransmissions, analyzing the SQL queries generated by your ORM, being surprised how much ViewState is transferred between the client and the server again and again. Abstractions fail. We are fooled by abstractions. Sometimes they leak more, sometimes less.

Ok, so the .NET GC abstraction leaks, too? But for sure someone will say: “I have been programming in C# for X years and have never needed to care about it…”. Where X is like 10, 20 or a gazillion. But putting jokes aside, it makes sense. Because, as I said, abstractions make our lives so much easier! That’s exactly the result – you MAY not think about it and live.

That’s why the “80% rule” works here – MOST of the time you don’t need to think about it. Until you do. Or until you want to, for your own benefit.

You can be in the “80%” of people that don’t care – that’s TOTALLY fine. But you can be in the “20%” of experts that do care! If someone says “I have been programming in C# for X years and have never needed to care about it”, I can reverse the question: what would you be working on IF you had cared about it during those X years? Maybe they were unmeasured, inefficient apps wasting resources, and you could have helped save some money for your company? Or maybe they were just pretty boring apps? Or maybe you could just be a better Software Craftsman, caring to deliver not only working software, but well-crafted working software? I believe this all leads to one crucial question – do you want to be an expert or a regular developer? If you want to be just a regular .NET developer, that’s absolutely fine, YMMV!

Ok, let’s pause for a moment. There is one more aspect to it… There are two main perspectives you can look at it from:

  • application developer perspective – what’s said above perfectly suits here. “You don’t have to care about it until it becomes a problem” 👍 You are in a good position here – you know your application, its context, requirements, current and expected workload. You can measure it (if you know how…). You know where the bottlenecks are. So, the goal here is not to write everything with pointers and Span. But, we return to the “expert” story again – you can still keep performance in mind, know good and bad practices, be aware of abstraction leaks. And have “entry points” for fixing, tracing, debugging, measuring… You can be an expert!
  • library author perspective – well, you don’t know your customers. It just should be fast. And performance, among features, documentation and clean design, becomes one of the distinguishing factors when we choose this library or another. The sky is the limit here. The goal is to write everything with pointers and Span 😉 (yes, a joke!). What I’m trying to say is that this perspective favors writing performance- and memory-aware code a little more.

Bonus question to you: do you want to only consume .NET libraries, EF Core, ASP.NET Core, Unity, Xamarin, RavenDB, … or produce them? For money? Guess what you need in order to have a chance of doing that. Yes, you have to scratch your comfortable abstractions off the surface. “The only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting”, as Joel Spolsky said in the article cited above. The whole Mechanical Sympathy movement is about that: understand one or two layers below the one you use.

Summarizing, and answering the title question, here is my pragmatic approach to .NET memory management – split it into levels and decide which level suits you the most:

  • Level 0 – I really don’t care, I just want to get things done.
  • Level 1 – Keep in mind some knowledge of the abstractions, some basic sanity checks like: use a struct here, don’t use LINQ there, set the Capacity of a List, know what the consequences of allocating a lot on the LOH are (and what the LOH is in the first place), and so on and so forth. Just clean code for performance. Explaining that you should be there is just like explaining that you should write clean code and tests. Or wash your hands before you eat. This allows you to feel comfortable and “expert-ish” in the “80%” zone.
  • Level 2 – Be more interested. Know how to measure and diagnose if something bad happens. Have a good toolbox for measuring and troubleshooting. Have some more knowledge of the abstractions that may leak. Move to the “20%” zone, become an expert. Get a better salary, seek interesting projects (and they will seek you!).
  • Level 3 – Optimize and be really memory-aware, like crazy. Use advanced C# tricks, refs, Span, stackalloc. Get some internals knowledge. Sell your unique knowledge for serious money.
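To make the Level 1 sanity checks above concrete, here is a minimal C# sketch of what they look like in code. The class and method names are my own inventions for illustration, not from any particular codebase; the LOH threshold of 85,000 bytes is the documented default for the .NET runtime.

```csharp
using System;
using System.Collections.Generic;

public static class Level1Checks
{
    // Sanity check: pre-size the List when the final count is known.
    // An empty List<int> grows by doubling its internal array, leaving
    // a trail of discarded arrays for the GC to collect.
    public static List<int> Squares(int count)
    {
        var squares = new List<int>(capacity: count);
        for (int i = 0; i < count; i++)
        {
            // Sanity check: a plain loop on a hot path instead of
            // Enumerable.Range(count).Select(...) avoids the iterator
            // and delegate allocations LINQ would introduce here.
            squares.Add(i * i);
        }
        return squares;
    }

    // Sanity check: a small, immutable value type as a readonly struct
    // lives inline (on the stack or inside its containing array/object)
    // instead of being a separate heap object per instance.
    public readonly struct Point
    {
        public readonly int X;
        public readonly int Y;
        public Point(int x, int y) { X = x; Y = y; }
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(",", Squares(5))); // 0,1,4,9,16

        // An array of 1000 Points is ONE heap allocation, not 1001.
        // Note: byte[] of 85,000 bytes or more would land on the LOH,
        // which is collected only with expensive Gen 2 collections.
        var points = new Point[1000];
        points[999] = new Point(3, 4);
        Console.WriteLine(points[999].X + points[999].Y); // 7
    }
}
```

None of this is “Level 3 crazy” optimization – it is exactly the kind of near-free hygiene the Level 1 bullet describes, applied only where a hot path makes it matter.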

I’m personally not paralyzed by high-perf code either. I start from a working prototype and only then do I measure and optimize, if needed at all. But having those sanity checks allows me to avoid stupid mistakes. It is just a “gut feeling” you get when moving to the higher levels. BTW, this all applies to everything we use – architecture, async/await, ORMs, the .NET runtime itself, …

This is where most of the story ends for you and me. But there’s a little more – a broader, company perspective. When talking about taking care of .NET memory and performance, I often meet these three mantras:

  • #1 “Infra will take it” – I agree, sometimes it is just cheaper to put more $$$ into Azure instead of paying a dev. Especially if you are a startup. But this approach should have some end, I strongly believe. There is the “Software disenchantment” article from 2018, which received some attention in my circles recently, and with which I agree SO MUCH – “Everything is unbearably slow”, and inefficient. I highly recommend reading it as a follow-up to this post! And well, throwing money at hardware to excuse our laziness… I’d really love to start seeing some “Ecological Sympathy” among companies. Yes, consuming more CPU/RAM on a phone or in the cloud in the end translates into draining natural resources.
  • #2 “We have C++ for that” – Yes… but does a .NET-based company really want to hire a single C/C++ developer and install a whole new build ecosystem into the pipeline… all for some critical 3% of it being written in C++, while C# allows you to do the same with just a little learning involved?
  • #3 “We just don’t care” – Well… Milliseconds Make Millions. There are so many reports clearly showing how directly the slowness of an app translates to worse conversions, income, retention, whatever you call it.

But again, this is all more of a company perspective. What does this mean for you? Just sell it, become an expert – whether it is in architecture, scalability or performance. Become a Software Craftsman. It’s waiting for you!

HOW can you learn? There’s a lot of free stuff, like the .NET Memory Performance Analysis document by Maoni Stephens, the Garbage Collection Design chapter from “The Book of the Runtime”, or my .NET GC Tips & Tricks and .NET GC Internals series on YT. There are conferences and webinars. And if you want to pay, there’s my book and my recently announced .NET Memory Course, which is exactly for that.

PS. A bonus sidenote for people who see some small optimizations and immediately misquote Donald Knuth saying “premature optimization is the root of all evil”. The whole context of this quote, coming from the 1974 “Structured Programming with go to Statements” paper, is: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”
