
What do you know about .NET memory management? Should you care at all? Yes, it is all managed and automatic and the Garbage Collector takes care of everything. That’s a great achievement of managed runtimes in general – they provide this super nice abstraction of memory that is just there and we do not need to think about it at all, right?

Well, yes and no. Sure, you may develop money-earning business applications for years and never touch the topic of .NET memory management at all. I’ve been there too. But there is always more. There is always some edge case, some scalability problem, a memory leak or just inefficient resource utilization (wasting money💸). I’m not here today to convince you of that, though. There will be time for this 🙂

Today I just wanted to invite you to my new initiative – a small quiz about .NET memory management. 32 questions that may, or may not, shed some light on this topic for you:

👉 Take a quiz 👈

Share your result and discuss the questions! And, as you will see at the end, you can subscribe to the newsletter where I’ll be providing explanations of the correct answer to every question.

And yes, this is the beginning of a bigger initiative – https://dotnetmemoryexpert.com – more about it soon. In general, more and more GC- and memory-related content is coming!😍


Sometimes you may hear opinions that too many changes are being introduced to C#, or that they come too fast. That people get lost and confused by all the new syntax and features added here and there. And while one may argue with that or not, I would like to look at this topic from a different angle – what are you missing in C#? What single functionality would you enjoy the most?

I’ve asked the same question here and there (in Polish) and here are the aggregated results, sorted by popularity, with some additional remarks from my side:

1. Discriminated unions

The clear winner is the possibility to use “discriminated unions”. There is already an ongoing proposal about them. The most often mentioned wishes here are good pattern matching support (we hope so!) and F# DU compatibility (unlikely, as in the case of records).

Continue reading

I’ve decided to make a series of at least 8 free weekly webinars about the in-depth implementation details of the .NET GC and… I’m super happy with it! Why the idea? Many of my other activities are about the more practical side of “.NET memory management”, like my book or the workshops/trainings/consultancy I give. But during all those practice-oriented events there is never enough time to explain in detail how the .NET GC is implemented. Obviously, I always explain some concepts and algorithms, to the level that helps in understanding the whole picture. But not with the level of detail that satisfies me.

Hence the idea – create separate content that is just as deep as I like 🙂 So, I will cover details at the level of bits, bytes and source code, not only at the level of the overall algorithm description.

The first episode was yesterday, feel invited to watch:

Continue reading


Testing shows the presence of errors in a product, but “cannot prove that there are no defects” – you probably know that quote. I remember so many hours spent on debugging those little, mean bugs hiding deep in the code’s edge cases. But what’s worse, I remember even more hours trying to understand and reproduce an error that happens only in the production environment. Here are the top 5 most popular issues I’ve met during the last years:

  • app hangs due to deadlocks (in the app or an external library)
  • memory issues like memory leaks, long GC pauses or high CPU usage due to the GC
  • swallowed exceptions preventing some logic from running, with no logs available
  • threading issues like thread-pool starvation
  • intermittent errors due to resource shortages, like running out of sockets or file handles

BTW, what’s your top 5?

Continue reading


Awaitables are the types on which await can be called. This works thanks to “duck typing” – the only thing that makes a type “awaitable” is the existence of a GetAwaiter() method returning a type that implements the INotifyCompletion interface. That’s it.

Obviously, the most popular awaitables are Task and Task<T>, so we can await them – which gives us the possibility to call:
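
Something along these lines (DoSomethingAsync is just a placeholder name here):

```csharp
var result = await DoSomethingAsync();
```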

if the DoSomethingAsync method returns Task or Task<T>. Other popular awaitables are ValueTask & ValueTask<T>, as well as ConfiguredTaskAwaitable & ConfiguredValueTaskAwaitable (returned when you use ConfigureAwait).

But nothing stops us from writing our own awaitables. Moreover, as I said, duck typing is involved here, so the type itself does not even have to define a proper GetAwaiter method. We can use an extension method and it will satisfy duck typing just as well.

Thus, we can make ANY type awaitable. So, let’s make bool awaitable:
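
A minimal sketch of such an extension method (BoolAwaiterExtensions is my name for it; the awaiter itself is shown right below):

```csharp
public static class BoolAwaiterExtensions
{
    // Duck typing only requires that *some* GetAwaiter method is reachable
    // for the type – an extension method is perfectly enough.
    public static BoolAwaiter GetAwaiter(this bool value) => new BoolAwaiter(value);
}
```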

The awaiter uses yet another set of duck-typed methods to satisfy the underlying state machine:
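
A possible implementation of such an awaiter (a sketch for illustration only):

```csharp
using System;
using System.Runtime.CompilerServices;

public readonly struct BoolAwaiter : INotifyCompletion
{
    private readonly bool _value;

    public BoolAwaiter(bool value) => _value = value;

    // Always "completed" – the state machine will never schedule a continuation.
    public bool IsCompleted => true;

    // Never called in this scenario, because IsCompleted is always true.
    public void OnCompleted(Action continuation) { }

    // The "result" of awaiting a bool is simply the bool itself.
    public bool GetResult() => _value;
}
```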

Because IsCompleted is always true (it is checked by the underlying async state machine), GetResult will be called immediately for the result. OnCompleted is never called here, because it is a callback invoked when the operation “completes” (to somehow execute the continuation).

So now we can write:
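
For example (inside any async method, with the extension above in scope):

```csharp
bool result = await true;   // completes immediately, result == true
```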

And because GetResult returns bool… which is itself awaitable, we can chain the awaits:
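
Something like:

```csharp
bool result = await await await true;   // each await returns a bool, which is awaitable again
```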

And because in the end it is a bool, all its normal operators still apply:
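
For example:

```csharp
if (await true && !(await false))
{
    Console.WriteLine("It makes little sense, but it compiles and runs.");
}
```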

Summary

The BoolAwaiter presented here is just for educational purposes, to better understand awaitables. And for fun, a little. You can write your own, for example making int awaitable 🙂 Obviously, it has no practical usage and may even be dangerous (imagine a DoAsync method returning int instead of Task<int> by mistake).

If you found it interesting, this is just an example of the work for a much bigger project – I really do believe it’s going to be the best online course on the market about asynchronous and concurrent programming in .NET.

Sync over async in .NET is always bad and there is no better advice than to simply avoid it. What does “sync over async” mean exactly? It happens when you synchronously wait for the result of an asynchronous operation with the help of .Result, .Wait() or similar. Why is it bad? First of all, it blocks (wastes) one thread just to wait for a result – which may lead to thread-pool starvation. But even worse, it may deadlock your operation and (sometimes) the whole application.

You’ve probably heard all that before. I just wanted to present a picture, “worth a thousand words”, to explain why it happens.

[Drawing: sync over async on the UI thread, with the WinForms SynchronizationContext]

There is a concept of SynchronizationContext in .NET – an abstraction that knows how/where to schedule a work item (like an async/await continuation). When you await something, the current SynchronizationContext is captured. And when the continuation is about to run, that captured SynchronizationContext is used to run it “somewhere”. The SynchronizationContext implementation differs between scenarios (console, UI, web, mobile applications), because there are different needs for “synchronizing” work items. The main example is a GUI-based application. When we start an asynchronous operation on the UI thread, we expect its continuations to “return” to the same thread.

But if we call .Result on that operation, the main UI thread is blocked waiting for the result, so it is not able to process anything (including mouse/keyboard events). So there is no way the continuation (which would set the result) can run, and thus we wait for the result endlessly – a deadlock.
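
To make it concrete, here is a minimal WinForms-flavoured sketch of that deadlock (all names and details are mine, just for illustration):

```csharp
using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public class DeadlockForm : Form
{
    public DeadlockForm()
    {
        var button = new Button { Text = "Deadlock me" };
        button.Click += (s, e) =>
        {
            // Runs on the UI thread; .Result blocks it until the task completes...
            var text = GetTextAsync().Result;
            MessageBox.Show(text);
        };
        Controls.Add(button);
    }

    private static async Task<string> GetTextAsync()
    {
        await Task.Delay(1000);   // the continuation is posted back to the captured
                                  // WinForms SynchronizationContext...
        return "done";            // ...whose only thread is blocked in .Result – deadlock
    }
}
```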

[Drawing: the same WinForms scenario with ConfigureAwait(false)]

That’s why ConfigureAwait helps – it allows us to say “I don’t care about scheduling the continuation to the original (captured) context”. Thanks to that, the asynchronous operation’s continuation is scheduled to a different thread (a thread pool one) and sets the result with no problem. This resumes the main UI thread, and there is no deadlock.
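
In the sketch above it is enough to configure the awaited task accordingly (again, just an illustration):

```csharp
private static async Task<string> GetTextAsync()
{
    // "I don't care about resuming on the captured context" – the continuation
    // runs on a thread-pool thread, so the blocked UI thread gets its result.
    await Task.Delay(1000).ConfigureAwait(false);
    return "done";
}
```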

Those were just two simple drawings. If you’d like to know more, refer to the great ConfigureAwait FAQ by Stephen Toub.

Again, all this is just part of the work on a much bigger project, which is the awesome Async Expert online course about asynchronous and concurrent programming in .NET. If you found it interesting, stay tuned by subscribing to the newsletter on the above-mentioned page!

It is said that a picture is worth a thousand words, and I agree. That’s why I like preparing technical drawings to explain various concepts. So, here it is – a short story of how async/await works in .NET.

[Drawing: how async/await works in .NET – there is no thread]

The main power behind async/await is that while we “await” an ongoing I/O operation, the calling thread may be released to do other work. This provides great thread re-usability and thus better scalability – a much smaller number of threads is able to handle the same amount of operations compared to the synchronous, blocking approach.

The main role here is played by so-called overlapped I/O (in the case of Windows), which allows us to asynchronously delegate an I/O operation to the operating system; only after completion does the provided callback notify us about the result. The main workforce here is the so-called I/O completion port (IOCP).

Continue reading


In the upcoming .NET 5 a very interesting change is being added to the GC – a dedicated Pinned Object Heap, a brand new type of managed heap segment (next to the Small and Large Object Heaps we have had so far). Pinning has its own costs, because it introduces fragmentation (and in general complicates object compaction a lot). We are used to having some good practices around it, like “pin only for…:

  • a very short time”, so the GC will not bother – this reduces the probability that a GC happens while many objects are pinned. That’s the scenario for the fixed keyword, which is in fact only a very lightweight way of flagging a particular local variable as a pinned reference. As long as no GC happens, there is no additional overhead.
  • a very long time”, so the GC will promote those objects to generation 2 – as gen2 GCs should not be that common, the impact will be minimized as well. That’s the scenario for a GCHandle of type Pinned, which has a slightly bigger overhead because we need to allocate/free the handle. (Both patterns are sketched below.)
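
A minimal sketch of both patterns (the buffer and its usage are hypothetical), with the new .NET 5 allocation API added as a comment:

```csharp
using System;
using System.Runtime.InteropServices;

byte[] buffer = new byte[256];

// 1. Pin "for a very short time" – fixed only flags the local as a pinned
//    reference; it costs nothing unless a GC actually happens meanwhile.
unsafe
{
    fixed (byte* ptr = buffer)
    {
        // use ptr briefly, e.g. pass it to a short native call
    }
}

// 2. Pin "for a very long time" – an explicitly allocated pinned GCHandle;
//    the object will sooner or later end up in gen2.
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    IntPtr ptr = handle.AddrOfPinnedObject();
    // hand ptr over to long-lived native code...
}
finally
{
    handle.Free();
}

// 3. The .NET 5 way – allocate directly on the Pinned Object Heap:
// byte[] pohBuffer = GC.AllocateArray<byte>(256, pinned: true);
```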

However, even if applied, those rules will still produce some fragmentation, depending on how much you pin, for how long, what the resulting layout of the pinned objects in memory is, and many other, intermittent conditions.

So, in the end, it would be perfect to just get rid of pinned objects and move them to a different place than the SOH/LOH. This separate place would then, by the GC design, simply be ignored when considering heap compaction, so we would get the pinning behaviour out of the box.

Continue reading


Everyone knows that C# is a strongly typed language and incorrect type usage is simply not possible there. So, the following program will just not compile:
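
For example, something along these lines (a reconstruction consistent with the description below, not necessarily the original listing):

```csharp
using System;

class Program
{
    static int ConsumeString(string str) => str.Length;

    static void Main()
    {
        Console.WriteLine(ConsumeString("Test"));
        // error CS1503: Argument 1: cannot convert from 'object' to 'string'
        Console.WriteLine(ConsumeString(new object()));
    }
}
```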

That’s good – it means we can trust Roslyn (the C# compiler) not to generate type-unsafe code. But what if we rewrite the same code directly in the Common Intermediate Language, omitting C# and its compiler completely?

First of all, it will be assembled by the ILASM tool without any errors, because it is syntactically correct CIL. And ILASM is not a compiler, so it will not do any type checks on its own. So we end up with an assembly file with some smelly CIL inside. Instead of using ILASM, we could also simply modify the CIL with the help of a tool like dnSpy.

Ok, let’s say that is fine. But what will happen when we try to execute such code? Will the .NET runtime somehow verify the CIL of those methods? Surely the Just-In-Time compiler will notice the type mismatch and do something to prevent executing it, right?

What will happen is that the program will just execute without any errors and will print 4 (the length of “Test”) followed by… 0 on a new line. The truth is that neither the JIT nor any other part of the .NET runtime verifies type safety here.

Why is the result 0? Because when the JIT emits the native code of a particular method, it uses the type layout information of the data/types being used. And it happens that the string.Length property is just an inlined method call that accesses the very first int field of an object (because the string length is stored there):
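
In the current CoreCLR sources that getter is (roughly) just a field read – a heavily simplified view:

```csharp
// Simplified sketch of System.String from System.Private.CoreLib:
public sealed class String
{
    private int _stringLength;   // the very first field of every string object
    private char _firstChar;

    public int Length => _stringLength;   // trivial getter, aggressively inlined by the JIT
}
```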

As we pass a newly created object instance, which always has one pointer-sized field initialized to zero (this is a requirement of the current GC), the result is 0.

And yes, if we pass a reference to an object with some int field, its value will be returned (again, instead of throwing any type-safety related runtime exception). The following code (when converted to CIL) will execute with no errors and print 44!
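
For illustration, a class like this (the type-mismatched call itself is only possible at the CIL level, so it is shown as a comment):

```csharp
class SomeClass
{
    public int Field = 44;   // the first (and only) int field of the object
}

// At the CIL level nothing stops us from passing it where a string is expected:
// Console.WriteLine(ConsumeString(new SomeClass()));   // prints 44
```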

All this may be quite surprising, so what does the ECMA-335 standard say about it? Point “II.3 Validation and verification” lists all the CIL verification rules and algorithms and states:

“Aside from these rules, this standard leaves as unspecified:

  • The time at which (if ever) such an algorithm should be performed.
  • What a conforming implementation should do in the event of a verification failure.”

And:

“Ordinarily, a conforming implementation of the CLI can allow unverifiable code (valid code that does not pass verification) to be executed, although this can be subject to administrative trust controls that are not part of this standard.”

While the .NET runtime does indeed do some validation, it does not verify the IL. The difference? If we run the following code:
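
The original listing is a piece of hand-written, invalid CIL that is not reproduced here; as a rough stand-in, here is one way to produce similarly invalid IL from C# via Reflection.Emit (my own sketch, not the post’s example):

```csharp
using System;
using System.Reflection.Emit;

// The method promises to return an int, but returns with an empty evaluation
// stack – structurally invalid IL.
var method = new DynamicMethod("Invalid", typeof(int), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ret);

var invalid = (Func<int>)method.CreateDelegate(typeof(Func<int>));
invalid();   // throws System.InvalidProgramException when the IL gets JIT-compiled
```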

It will end up with System.InvalidProgramException: Common Language Runtime detected an invalid program. being thrown. So, we can summarize it as the fact that invalid CIL code may trigger an InvalidProgramException in some cases, but in others the program will just be allowed to execute (with many unexpected results). And all this happens only during JIT compilation, at runtime.

So, what can we do to protect ourselves before deploying and running it in production? We need to verify our IL on our own. There is the PEVerify tool for exactly that purpose, shipped with the .NET Framework SDK. You can find it in a folder similar to c:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64\.

When running against our example, it will indeed detect an incorrect method with a proper explanation:

The only problem with PEVerify is… it does not support .NET Core.

What about .NET Core then? There is ILVerify, a cross-platform, open source counterpart of it, developed as a part of the CoreRT runtime (although it supports analyzing both .NET Framework and .NET Core assemblies). Currently, to have it working we need to compile the whole CoreRT (How to run ILVerify? issue #6198) OR we can use the unofficial Microsoft.DotNet.ILVerification package to write our own command line tool (inspired by the original Program.cs).

So, nothing officially supported and shipped with the runtime itself, yet. And BTW, there is an ongoing effort to make Roslyn’s IL verification fully working as well.

Sidenote

The previous example was a little simplified, because ConsumeString(string) called a virtual get_Length method on the sealed string type, so it was aggressively inlined. If we experiment with a regular virtual method on a non-sealed type, things become more intermittent, because now the call goes through the virtual stub dispatch (VSD) mechanism. In the following example (again, if rewritten to CIL), how Consume behaves depends on what we have passed as an argument and where the VSD pointers will lead (most likely triggering an access violation).

Conclusions

  • if you do write in CIL to have more power in your hands (like using Reflection.Emit, manipulating CIL for code weaving or any other magic like the whole Unsafe class), please be aware of the difference between validation and verification. And verify your assembly on your own, as the JIT compiler will not do it!
  • if you want to FULLY trust your app, run IL verification before executing it. It could probably even be added to your CI pipeline as an additional check – you may trust your code, but not someone else’s code (or the code modified by the tools you use). And yes, it is currently not straightforward in the .NET Core case.

Subscribe to my mailing list dedicated to .NET performance and internals related stuff!


Mobius Overview

A .NET application is “just” a piece of CIL bytecode to be executed by the .NET runtime. And the .NET runtime is “just” a program that is able to perform this task. It happens that currently the .NET Framework/.NET Core runtimes are written in C++. I am also fully aware of CoreRT, a .NET runtime with many parts rewritten in C# (like the type system), but still, its crucial parts (including the JIT compiler and the GC) are written in C++.

But what if we wrote a .NET runtime as… a .NET application? Is it possible at all? I mean, literally no native/C++ code, everything running as a .NET Core application written in C#? Does this sound like a kind of inception and infinite recursion? It would require running one .NET runtime on top of another .NET runtime, right?

I decided to check it out and that’s how the Mobius runtime idea was coined! Yeah, I know it sounds strange and I do not expect it to be anything close to a production-ready thingy in the nearest century. I am fully aware of the amount of code that needs to be written to make a full .NET runtime. However, I found it interesting to validate such an idea, and I can see small usages for it as well. Imagine a NuGet package with a separate runtime that you can add to your application 😉

Continue reading