
Have you ever wondered what happens when you create and use breakpoints in .NET? Here’s a little picture that answers that question (if you don’t like the font, there is a different version at the bottom).

The main actors here are as follows:

  • .NET Application – our regular .NET application that we want to debug. Methods, provided by the compiler in Intermediate Language (IL) form, are Just-in-Time compiled to native code when called. So, imagine that our “e8 50 ff ff” represents example binary code of a line we want to debug (no matter what it does for now).
  • Debugger Runtime Control Thread (hereinafter referred to as the Debugger RC Thread) – a special thread that exists inside every .NET process for debugging purposes and serves as a bridge between the CLR and an external debugger. It runs a so-called “debugger loop”, listening for events coming from the Debug Port (supported by the OS). Please note that in the case of native debugging, such a special thread is typically injected into the debuggee process. We don’t need to do that here, as the .NET runtime provides it. Moreover, this thread understands the CLR data structures, so it is able to cooperate with the JIT and so on.
  • external Debugger – the external process that we cooperate with. Imagine it as the tooling part of Visual Studio or whichever IDE you use. It uses a set of COM objects that communicate with the Debugger Runtime Control Thread via an inter-process communication (IPC) mechanism.

Continue reading

Sometimes you may hear opinions that there are too many changes being introduced to C#, and/or that they come too fast. That people get lost and confused by all the new syntax and features added here and there. And while one may argue with that or not, I would like to look at this topic from a different angle – what are you missing in C#? Which single feature would you enjoy the most?

I’ve asked the same question here and there (in Polish) and here are the aggregated results, sorted by popularity, with some additional remarks from my side:

1. Discriminated unions

The clear winner is the possibility to use “discriminated unions”. There is already an ongoing proposal about them. The most often mentioned wishes here are good pattern matching support (we hope so!) and F# DU compatibility (unlikely, as in the case of records). A minimal sketch of how they are typically emulated today follows below.

Continue reading
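Here is that hedged illustration (all type names invented): a record hierarchy consumed with pattern matching, which mimics a discriminated union but without the compiler-enforced exhaustiveness a real DU would provide.

```csharp
using System;

// Invented example types – a "poor man's discriminated union" built from records.
public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Rectangle(double Width, double Height) : Shape;

public static class Geometry
{
    public static double Area(Shape shape) => shape switch
    {
        Circle c => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        // No compiler-enforced exhaustiveness – one of the reasons people want real DUs.
        _ => throw new ArgumentOutOfRangeException(nameof(shape))
    };
}
```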

Hi and welcome to the third episode of the .NET GC internals! Yesterday it was again 1.5 hours of talking about the Mark phase, this time in its concurrent flavour. Most of the time I described the solutions to, and implementations of, two problems:

  • Problem #1 – how to mark an object while it is being used? – in the non-concurrent Mark phase the MethodTable pointer is used for this (see the sketch after this list). But now, while the application is running and may access this pointer, it is not the best place ever 🙂
  • Problem #2 – how to get a consistent view while references are changing? – an even more difficult problem, as the GC tries to discover what is reachable while references between objects are constantly being changed by the application
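As a conceptual illustration of Problem #1 only (this is not runtime code, just bit manipulation on an imaginary MethodTable pointer value), the non-concurrent mark-flag idea looks roughly like this:

```csharp
// Conceptual sketch: the GC can reuse the lowest bit of the (pointer-aligned)
// MethodTable pointer as the "marked" flag, which is only safe while managed
// threads are suspended – exactly the assumption that concurrent marking breaks.
static class MarkFlag
{
    private const ulong MarkBit = 1;

    public static ulong Mark(ulong methodTablePointer) => methodTablePointer | MarkBit;
    public static bool IsMarked(ulong methodTablePointer) => (methodTablePointer & MarkBit) != 0;
    public static ulong Unmark(ulong methodTablePointer) => methodTablePointer & ~MarkBit;
}
```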


Enjoy watching the recording on YouTube and if you have any questions, do not hesitate to ask!

As usual, you can get the slides from our goodies page.

And do not forget to join on Wednesday, February 10, 7 PM CET – this time I’ll cover the Plan phase, describing such internal stuff as “plugs”, “gaps”, “brick tables” and more!

Hi and welcome to the second episode of the .NET GC internals! Yesterday it was 1.5 hours of talking about the (non-concurrent) Mark phase – the one responsible for discovering which objects are “reachable” and which may be garbage collected. I covered topics like the object graph traversal algorithm, the pinning and marking flags, and the mark stack and mark list data structures. And obviously, some deep dive into gc.cpp at the end.
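For those who have not watched the episode yet, here is a highly simplified, conceptual sketch of the traversal idea (all names invented; the real implementation in gc.cpp is of course far more involved):

```csharp
using System.Collections.Generic;

// Illustrative object model, not the real runtime one.
sealed class ObjectNode
{
    public bool IsMarked;
    public List<ObjectNode> References = new List<ObjectNode>();
}

static class MarkPhaseSketch
{
    // Depth-first traversal from the roots using an explicit mark stack,
    // marking every reachable object exactly once.
    public static void Mark(IEnumerable<ObjectNode> roots)
    {
        var markStack = new Stack<ObjectNode>();
        foreach (var root in roots)
            markStack.Push(root);

        while (markStack.Count > 0)
        {
            var obj = markStack.Pop();
            if (obj.IsMarked)
                continue; // already visited
            obj.IsMarked = true; // in the real GC: a bit in the MethodTable pointer
            foreach (var reference in obj.References)
                markStack.Push(reference);
        }
    }
}
```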


Enjoy watching the recording on YouTube and if you have any questions, do not hesitate to ask!

Continue reading

I’ve decided to make a series of at least 8 free weekly webinars about the in-depth implementation details of the .NET GC and… I’m super happy with it! Why the idea? Many of my other activities are about the more practical side of “.NET memory management”, like my book or the workshops, trainings and consultancy I give. But during all these practice-oriented events there is never enough time to explain in detail how the .NET GC is implemented. Obviously, I always explain some concepts and algorithms, to the level that helps in understanding the whole picture. But not with the level of detail that I am satisfied with.

Hence the idea – make separate content that is just as deep as I like 🙂 So, I will cover details at the level of bits, bytes and source code, not only at the level of the overall algorithm description.

The first episode was yesterday, feel invited to watch:

Continue reading


We are all using RAM – probably DDR4 is just sitting there in your PC, serving memory at an outstanding speed even while you are reading this sentence. But have you ever wondered how DRAM works internally?

If so, please find my new poster about DRAM anatomy. It provides a slightly oversimplified view, for illustrative purposes. But it explains DRAM internals “well enough” for any regular, mortal developer like you and me. It is a great basis for understanding why linear memory access is so much preferred over random access, what cryptic memory access timings like 8-8-8-24 mean, and for explaining bugs like the Rowhammer bug. Or just to hang it on the wall as a nerdy decoration 🙂
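If you want to feel the difference the poster explains, here is a minimal, self-contained sketch (not a rigorous benchmark – just a rough illustration) comparing sequential and random access over the same array:

```csharp
using System;
using System.Diagnostics;

class LinearVsRandomAccess
{
    static void Main()
    {
        var data = new int[16 * 1024 * 1024];
        var indices = new int[data.Length];
        var random = new Random(42);
        for (int i = 0; i < indices.Length; i++)
            indices[i] = random.Next(data.Length);

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < data.Length; i++)
            sum += data[i]; // sequential: cache-, prefetcher- and DRAM-row-buffer-friendly
        Console.WriteLine($"Linear: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < indices.Length; i++)
            sum += data[indices[i]]; // random: frequent cache misses and DRAM row switches
        Console.WriteLine($"Random: {sw.ElapsedMilliseconds} ms (checksum {sum})");
    }
}
```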

The poster is available for free in a printable version from https://goodies.dotnetos.org.

Here are some additional materials that you can follow while interpreting/digging into this poster:

Happy printing!


Testing shows the presence of errors in a product, but “cannot prove that there are no defects” – you probably know that quote. I remember so many hours spent on debugging those little, mean bugs hiding deep in code edge cases. But what’s worse, I remember even more hours spent trying to understand and reproduce an error that happens only in the production environment. Here are the top 5 most popular issues I’ve met during the last few years:

  • app hangs due to deadlocks (in the app or in an external library)
  • memory issues like memory leaks, long GC pauses or high CPU usage caused by the GC
  • swallowed exceptions silently preventing some logic from running, with no logs available
  • threading issues like thread-pool starvation (see the sketch after this list)
  • intermittent errors due to resource shortages, like running out of sockets or file handles
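As an example of the thread-pool starvation item above, here is a minimal sketch (the ReportService type and its usage are invented for illustration) of the classic “sync over async” pattern that tends to cause it under load:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class ReportService
{
    private static readonly HttpClient Client = new HttpClient();

    // Anti-pattern: blocking a thread-pool thread on an asynchronous operation.
    // Under load, every waiting request occupies a pool thread, new work items
    // (including async continuations) queue up, and latency explodes.
    public string GetReport(string url)
    {
        return Client.GetStringAsync(url).Result;
    }

    // Preferred: stay asynchronous end-to-end and let pool threads do real work.
    public Task<string> GetReportAsync(string url)
    {
        return Client.GetStringAsync(url);
    }
}
```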

BTW, what’s your top 5?

Continue reading


A lot of C# 9-related content is around. Very often, records are mentioned as one of the most interesting new features. So, while we can find A LOT of buzz around them, I wanted to provide a distilled set of facts that are typically not presented when describing them.

Fact #1. You can use them in pre-.NET 5

Records have been announced as a C# 9 feature (and thus a .NET 5 one), and that is the officially supported way to use them. But you can “unofficially” use most C# 9 features on earlier frameworks, as they don’t need new runtime support. So, if not being “officially supported” does not bother you too much, just set the proper LangVersion in the csproj and you are (almost) done:
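For illustration (assuming a pre-.NET 5 target such as .NET Core 3.1; only the LangVersion property is the relevant part here):

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp3.1</TargetFramework>
  <LangVersion>9.0</LangVersion>
</PropertyGroup>
```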

Trying to compile a super typical example like the following:
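For instance (a hedged guess at the kind of example meant here – any positional record will do, since it generates init-only property setters under the hood):

```csharp
public record Person(string FirstName, string LastName);
```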

will still not compile, complaining about the lack of the mysterious IsExternalInit type:
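If I remember the exact wording correctly, the compiler reports something along these lines:

```
error CS0518: Predefined type 'System.Runtime.CompilerServices.IsExternalInit' is not defined or imported
```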

To be funny, the workaround is just to define it yourself in your project (exactly as it is defined in the newer CoreLib, shipped with .NET 5):
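A commonly used shape of that workaround looks like this (whether you mirror CoreLib’s accessibility or keep it internal is up to you):

```csharp
// Satisfies the compiler's requirement for init-only setters on pre-.NET 5 frameworks.
namespace System.Runtime.CompilerServices
{
    internal static class IsExternalInit { }
}
```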

BTW, IsExternalInit is not required for record usage by itself, but for init, as discussed in https://github.com/dotnet/runtime/issues/34978 and https://github.com/dotnet/runtime/pull/37763. So if creating mutable records is OK for you, there is no need for it.

Sidenote: If you are interested in what else you can “unofficially” use, look at the Using C# 9 outside .NET 5 #47701 discussion.

Continue reading

Imagine we have a simple class, a wrapper around some array of structs (for better data locality etc.):
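Something along these lines (the struct layout and all the names – Item, ItemsContainer, _array – are invented for illustration; the following snippets extend this partial class):

```csharp
public struct Item
{
    public int Value;
    public long Payload;
}

public partial class ItemsContainer
{
    private readonly Item[] _array = new Item[64];
}
```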

Now, I would like to have efficient access to every element. Obviously, a trivial indexer would be inefficient here, as it would return a copy of the given array element (a struct):
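In the invented ItemsContainer above, such a trivial indexer could look like this:

```csharp
public partial class ItemsContainer
{
    // Returns a copy of the struct – every access copies the whole Item.
    public Item this[int index] => _array[index];
}
```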

Luckily, since C# 7.0 we can “ref return” to efficiently return a reference to a given array element, which is super nice (refer to my article about ref for more info):
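Continuing the sketch (the method name ItemRef is invented; a method is used instead of a second indexer so it can live next to the previous snippet):

```csharp
public partial class ItemsContainer
{
    // Returns a reference to the element stored in the array – no copying.
    public ref Item ItemRef(int index) => ref _array[index];
}
```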

Here, 99.9999% of devs will stop and be satisfied with the semantics and the performance results. But… if we know we will call it tremendously often, can we do better?!

First of all, let’s see what is being JITted by the .NET Core x64 runtime (5.0 RC) when accessing the 9th element (index 8):

For those who know a little assembler, it may be clear what is going on here. But let’s make a short summary:

  • we see a little “stack frame” creation here (sub/add rsp) – could we get rid of it in such a simple method?
  • we see a bounds check in line 4 (cmp of the index against 8) that verifies we are accessing the array with a correct index – could we get rid of it because we trust our code? 😇

Disclaimer: Getting rid of bounds checks is very risky and the resulting dangers will probably outweigh the performance benefits. Thus, use it only after careful consideration, if you are sure why you need it and you can ensure the caller’s code will be correct (providing valid indices).

To continue, we will now be walking on the thin ice of unsafe code.

The first idea is to use Unsafe.Add to provide a kind of “pointer arithmetic” – add an index-element offset to the first element:
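A sketch of that idea, still in the invented ItemsContainer (method name invented again):

```csharp
using System.Runtime.CompilerServices;

public partial class ItemsContainer
{
    // "Pointer arithmetic" on managed references: take a reference to the first
    // element and advance it by index elements.
    public ref Item ItemRefUnsafeAdd(int index) => ref Unsafe.Add(ref _array[0], index);
}
```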

The “problem” here is that it produces almost identical results, because _array[0] is still a bounds-checked array access (and we don’t get rid of the stack frame either):

Hence a non-trivial question arises – how do we get the address of, or a reference to, the first element of an array?

We could think of doing some Span-based magic (using MemoryMarshal.GetReference):
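For example, something along these lines (only a sketch; the slicing itself is where the bounds check happens):

```csharp
using System;
using System.Runtime.InteropServices;

public partial class ItemsContainer
{
    // Slices the array from the given index and takes a reference to the first
    // element of the resulting span.
    public ref Item ItemRefViaSpan(int index) =>
        ref MemoryMarshal.GetReference(_array.AsSpan(index));
}
```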

But you can probably feel it – this produces even slower and bigger code (Span creation handling etc.), while the bounds check is still there (Span is “safe”).

So, we somehow need to find a better way of getting the address of the first array element. The thing is, the internal structure of the array type is an implementation detail (although a well-known one). How can we overcome that?

The idea is… to rely on that implementation detail. This approach is used by the DangerousGetReferenceAt method from the Microsoft.Toolkit.HighPerformance package maintained by Sergio Pedri. The DangerousGetReferenceAt source code explains it well:
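Without quoting the library verbatim, an illustrative reconstruction of the trick (matching the description that follows) might look roughly like this:

```csharp
using System;
using System.Runtime.CompilerServices;

// Illustrative reconstruction – a class whose layout mirrors the internal layout
// of a managed array (length field followed by the element data).
internal sealed class RawArrayData
{
    public IntPtr Length;
    public byte Data;
}

public static class ArrayExtensions
{
    public static ref T DangerousGetReferenceAt<T>(this T[] array, int index)
    {
        // Reinterpret the array reference as a RawArrayData reference...
        var arrayData = Unsafe.As<RawArrayData>(array);
        // ...so that the "Data" field gives us a reference to the first element...
        ref T firstElement = ref Unsafe.As<byte, T>(ref arrayData.Data);
        // ...which we can advance without any bounds check.
        return ref Unsafe.Add(ref firstElement, index);
    }
}
```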

So, we are casting (reinterpreting) an array reference as a reference to some artificial RawArrayData class, which has a layout corresponding to the array layout. Thus, getting the “data” reference is now trivial. No bounds checks at all!

The good news is that this method has been ported to .NET 5! So, in .NET 5.0 RC we can already use MemoryMarshal.GetArrayDataReference, which does exactly the same thing:

Thus, without any external dependencies, our code in .NET 5 may be rewritten to:
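Sticking to the invented ItemsContainer sketch from before, that rewrite could look as follows:

```csharp
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

public partial class ItemsContainer
{
    // Takes a reference to the first array element (no bounds check, no variance
    // check) and advances it by index elements – the caller must guarantee validity.
    public ref Item ItemRefNoChecks(int index) =>
        ref Unsafe.Add(ref MemoryMarshal.GetArrayDataReference(_array), index);
}
```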

And the resulting code is indeed much more lightweight:

No bounds checks and, as an additional reward from the method’s simplicity, no stack frame.

Benchmarks indeed show a noticeable (well, in the order of nanoseconds) difference:

Which simply means that we are now about 5x faster than with the initial solution!

Disclaimer #2: The approach taken here with the usage of GetArrayDataReference is super dangerous. As Levi Broderick, one of the .NET framework developers, said: “Also, read the method documentation. It does more than remove bounds checks; it also removes array variance checks. So it might not be valid to write to the ref, even if the index is within bounds. Misuse of the method will bite you in the ass, guaranteed.” Moreover, the documentation clearly states that “a reference may be used for pinning but must never be dereferenced” 😇


We all use libraries from NuGet tremendously often. But how many times have you created your own, even the simplest library, and published it there? Not so often, I expect… Neither had I. Thus, I decided to make a super simple example of doing exactly that, while utilizing the new and shiny GitHub Actions for all the automation.

This serves many purposes, as is often the case when I do something:

  • a Minimum Viable Product (or example) of creating your own library that is automatically tested/versioned and published as a NuGet package – it requires combining various building blocks and turns out to be not so trivial. My goal is to provide it as a kind of template for creating new NuGet-based libraries.
  • a playground for fresh open source contributors who are willing to make some contributions that will then be propagated to a regular NuGet package – without the risk and fear of destroying something
  • funny, joke-like package that, still, may be extended/improved
  • and obviously, a great place to learn all this stuff (myself included)

So this is how Strings.Abbreviations has been coined – a very simple C#-based library that does the simplest thing I could imagine: it provides a set of methods that resolve popular abbreviations like BRB or IMHO. For example, Strings.BRB() will return the “Be right back” string. To add some “logic”, there is an additional option to use “title casing”. So, Strings.BRB(titleCase: true) will return “Be Right Back”.
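Based on that description, the public surface can be imagined roughly like this (a sketch only – the actual implementation in the repository may differ):

```csharp
using System.Globalization;

public static class Strings
{
    // Each abbreviation gets its own method; titleCase switches "Be right back"
    // into "Be Right Back".
    public static string BRB(bool titleCase = false) => Expand("Be right back", titleCase);
    public static string IMHO(bool titleCase = false) => Expand("In my humble opinion", titleCase);

    private static string Expand(string text, bool titleCase) =>
        titleCase
            ? CultureInfo.InvariantCulture.TextInfo.ToTitleCase(text.ToLowerInvariant())
            : text;
}
```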

Obviously, this is trivial, but that is by design! Thanks to that, there is a very low entry barrier for any contributor. Additionally, the automation workflow is not hidden behind complex logic and dependencies.

Initial design

So, this repository is based on a regular solution containing a .NET Core library and a test project. After a recent PR, there is also a trivial F#-based library (because we can!). As I said, the core part here is that Strings.Abbreviations is built, tested and published as a NuGet package with the use of GitHub Actions. All the workflows we define live inside the repository, under the .github/workflows folder. Currently there are two working workflows defined.

Building and testing

Every push and PR to master will trigger building and unit testing of the whole solution. What’s super nice – thanks to that, the results of the tests are available as an additional PR check. And we have a “badge” available at the top of this file (we will return to that).

Setting this up was trivial, as we can use the default .NET Core workflow suggested when using Actions for the very first time. Underneath, this workflow uses the dotnet restore/build/test commands.

Building and publishing a NuGet package

Every push to master that bumps up the Version in the Strings.Abbreviations.csproj file will trigger building and publishing of a new NuGet package. Additionally, a new git tag will be pushed with the given version. It uses the Publish NuGet custom GitHub Action, which was also really easy to set up thanks to its documentation. All package metadata is taken from the csproj file (no need for a separate .nuspec file). The only additional thing required here is to add the NUGET_API_KEY secret in the repository’s Settings tab.

Badges

To make our repository more professional and fancy, it is nice to add some “badges”. The Readme.md file contains a few. One is provided by the build-and-test workflow, and the others (like the current NuGet version or the number of downloads) are provided by the super-popular https://shields.io/ site:

What’s next?

It is not a full MVP yet. There is room for many improvements to make it a true template for potential library authors, including:

  • the next big thing to add is producing GitHub releases every time a new version is created. I wanted to depend on the version tags already being pushed by the publish-nuget workflow. Unfortunately, I have an issue with that.
  • GitHub Actions for creating prerelease versions
  • some simple documentation auto-generation example, like the list of currently supported abbreviations
  • more automated tests, like code coverage or even benchmarks
  • improvements to the core “logic”, like auto-generating title-case versions, more abbreviations, more languages etc.
  • …? Any other good, not-too-complex automation ideas

So, in the end, please feel invited to look around and help me build it with your contributions. Any contributions to the “logic” itself are also welcome!