It is said that a picture is worth a thousand words, and I agree. That’s why I like preparing technical drawings to explain various concepts. So, here it is – a short story of how async/await works in .NET.

thereisnothread

The main power behind async/await is that while we “await” an ongoing I/O operation, the calling thread may be released to do other work. This provides great thread reusability and thus better scalability – a much smaller number of threads can handle the same number of operations compared to the synchronous, blocking approach.
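
To make this more concrete, here is a minimal C# sketch of my own (not taken from the original post) showing the idea – while the read is awaited, no thread sits blocked on it:

using System.IO;
using System.Threading.Tasks;

public static class AsyncIoSketch
{
    // While the ReadAsync operation is in flight, the calling thread returns to the
    // thread pool. The code after 'await' runs only when the OS signals completion.
    public static async Task<int> ReadSomeBytesAsync(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, bufferSize: 4096, useAsync: true))
        {
            var buffer = new byte[4096];
            return await stream.ReadAsync(buffer, 0, buffer.Length);
        }
    }
}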

The main role here is played by so-called overlapped I/O (in the case of Windows), which allows the I/O operation to be delegated asynchronously to the operating system; only after completion does the provided callback notify us about the result. The main workforce here is the so-called I/O completion port (IOCP).

Continue reading

poh01

In the upcoming .NET 5 a very interesting change is being added to the GC – a dedicated Pinned Object Heap, a brand new type of managed heap segment (alongside the Small and Large Object Heaps we have had so far). Pinning has its own costs, because it introduces fragmentation (and, in general, complicates object compaction a lot). We are used to having some good practices about it, like “pin only for…:

  • a very short time” so the GC will not bother – to reduce the probability that a GC happens while many objects are pinned. That’s the scenario for the fixed keyword, which is in fact only a very lightweight way of flagging a particular local variable as a pinned reference. As long as no GC happens, there is no additional overhead.
  • a very long time”, so the GC will promote those objects to generation 2 – as gen 2 GCs should not be that common, the impact will also be minimized. That’s the scenario for a GCHandle of type Pinned, which has a slightly bigger overhead because we need to allocate/free the handle (both scenarios are sketched right after this list).
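
A minimal illustration of mine (assuming an unsafe context for the fixed part; the names are made up):

using System;
using System.Runtime.InteropServices;

class PinningSketch
{
    // "Pin only for a very short time": the buffer is pinned only inside the fixed block.
    static unsafe void PassToNativeBriefly(byte[] buffer)
    {
        fixed (byte* p = buffer)
        {
            // hand 'p' to native code here; the pin disappears when the block ends
        }
    }

    // "Pin for a very long time": a pinned GCHandle keeps the object in place until Free() is called;
    // the object will sooner or later be promoted to generation 2.
    static GCHandle PinForNativeLifetime(byte[] buffer)
    {
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        IntPtr address = handle.AddrOfPinnedObject(); // stable address usable from native code
        return handle; // the caller must call handle.Free() when done
    }
}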

However, even when applied, those rules will produce some fragmentation, depending on how much you pin, for how long, what the resulting layout of the pinned objects in memory is, and many other intermittent conditions.

So, in the end, it would be perfect to just get rid of pinned objects and move them to a different place than the SOH/LOH. This separate place would simply be ignored, by the GC design, when considering heap compaction, so we would get pinning behaviour out of the box.

Continue reading

cilvalid

Everyone knows that C# is a strongly typed language and incorrect type usage is simply not possible there. So, the following program will just not compile:
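
The original listing is not reproduced here, but based on the description that follows (a string-typed parameter fed with a plain object), a hypothetical reconstruction could look like this – and the second call indeed will not compile:

using System;

class Program
{
    static void ConsumeString(string s) => Console.WriteLine(s.Length);

    static void Main()
    {
        ConsumeString("Test");        // fine, prints 4
        ConsumeString(new object());  // compile error: cannot convert from 'object' to 'string'
    }
}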

That’s good; it means we can trust Roslyn (the C# compiler) not to generate code that violates type safety. But what if we rewrite the same code in Common Intermediate Language, completely omitting C# and its compiler?

First of all, it will be assembled by the ILASM tool without any errors because it is syntactically correct CIL. And ILASM is not a compiler, so it will not do any type checks on its own. So we end up with an assembly file with smelly CIL inside. Instead of using ILASM, we could also simply modify the CIL with the help of a tool like dnSpy.

Ok, let’s say that is fine. But what will happen when we try to execute such code? Will the .NET runtime somehow verify the CIL of those methods? Surely the Just-In-Time compiler will notice the type mismatch and do something to prevent executing it, right?

What will happen is that the program will just execute without any errors and will print 4 (the length of “Test”) followed by… 0 on a new line. The truth is that neither the JIT nor any other part of the .NET runtime examines type safety.

Why is the result 0? Because when the JIT emits the native code of a particular method, it uses the type layout information of the data/types being used. And it happens that the string.Length property is just an inlined method call that accesses the very first int field of the object (because the string length is stored there):

As we pass a newly created object instance, which always has one pointer-sized field initialized to zero (this is a requirement of the current GC), the result is 0.

And yes, if we pass a reference to an object with some int field, its value will be returned (again, instead of any type-safety-related runtime exception being thrown). The following code (when converted to CIL) will execute with no errors and print 44!
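
Again, the original listing is only described, not shown; a hypothetical reconstruction of the trick could look like this (the marked call compiles only when written directly in CIL):

using System;

class NotAString
{
    public int Value = 44; // the very first (int) field – exactly where string.Length is read from
}

class Program
{
    static void ConsumeString(string s) => Console.WriteLine(s.Length);

    static void Main()
    {
        // In C# the line below does not compile; when the equivalent call is hand-written in CIL,
        // the JIT happily reads the first int field of NotAString and prints 44.
        // ConsumeString(new NotAString());
    }
}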

All this may be quite surprising, so what does the ECMA-335 standard say about it? Section “II.3 Validation and verification” describes the CIL verification rules and algorithms and states:

“Aside from these rules, this standard leaves as unspecified:

  • The time at which (if ever) such an algorithm should be performed.
  • What a conforming implementation should do in the event of a verification failure.”

And:

“Ordinarily, a conforming implementation of the CLI can allow unverifiable code (valid code that does not pass verification) to be executed, although this can be subject to administrative trust controls that are not part of this standard.”

While the .NET runtime indeed does some validation, it does not verify the IL. What’s the difference? If we run the following code:

It will end up with System.InvalidProgramException: Common Language Runtime detected an invalid program. being thrown. So, to summarize: invalid CIL code may trigger InvalidProgramException in some cases, but in others the program will just be allowed to execute (with many unexpected results). And all this happens only during JIT compilation, at runtime.

So, what can we do to protect ourselves before deploying and running such code in production? We need to verify our IL on our own. There is the PEVerify tool for exactly that purpose, shipped with the .NET Framework SDK. You can find it in a folder similar to c:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64\.

When run against our example, it will indeed detect the incorrect method with a proper explanation:

The only problem with PEVerify is… it does not support .NET Core.

What about .NET Core then? There is ILVerify, a cross-platform, open-source counterpart developed as a part of the CoreRT runtime (although it supports analyzing both .NET Framework and .NET Core assemblies). Currently, to get it working we need to compile the whole CoreRT (How to run ILVerify? issue #6198), OR you can use the unofficial Microsoft.DotNet.ILVerification package to write your own command-line tool (inspired by the original Program.cs).

So, nothing officially supported and shipped with the runtime itself, yet. And by the way, there is an ongoing effort to make Roslyn IL verification fully work as well.

Sidenote

The previous example was a little simplified because ConsumeString(string) called the virtual get_Length method on the sealed string type, so it was aggressively inlined. If we experiment with a regular virtual method on a non-sealed type, things become more intermittent because now the call goes through the virtual stub dispatch mechanism. In the following example (again, if rewritten to CIL), how Consume behaves depends on what we have passed as an argument and where the VSD pointers will lead (most likely triggering an access violation).

Conclusions

  • if you do write CIL to have more power at hand (like using Reflection.Emit, manipulating CIL for code weaving, or any other magic like the whole Unsafe class), please be aware of the difference between validation and verification. And verify your assembly on your own, as the JIT compiler will not do it!
  • if you want to trust your app FULLY, run IL verification before executing it. It could probably even be added to your CI pipeline as an additional check – you may trust your code, but not someone else’s code (and the code modified by the tools you use). And yes, it is currently not straightforward in the .NET Core case.

Subscribe to my mailing list dedicated to .NET performance and internals related stuff!


Mobius Overview

A .NET application is “just” a piece of CIL bytecode to be executed by the .NET runtime. And the .NET runtime is “just” a program that is able to perform this task. It happens that currently the .NET Framework/.NET Core runtimes are written in C++. I am also fully aware of CoreRT, a .NET runtime with many parts rewritten in C# (like the type system), but still, its crucial parts (including the JIT compiler and the GC) were left written in C++.

But what if we wrote a .NET runtime as… a .NET application? Is it possible at all? I mean, literally no native/C++ code, everything running as a .NET Core application written in C#? Does this sound like some kind of inception and infinite recursion? It would require running one .NET runtime on top of another .NET runtime, right?

I decided to check it out, and that’s how the Mobius runtime idea was coined! Yeah, I know it sounds strange and I do not expect it to be anything close to production-ready in the nearest century. I am fully aware of the amount of code that needs to be written to make a full .NET runtime. However, I found it interesting to validate such an idea, and I can see small usages for it as well. Imagine a NuGet package with a separate runtime that you can add to your application 😉

Continue reading

GC posters

In short, I’ve prepared two posters about .NET memory management. They provide a comprehensive summary of “what’s inside the .NET GC”, based on .NET Core (although almost all of the information is relevant also for .NET Framework).

The first shows a static point of view – how memory is organized into segments and generations, what the roots are, etc.:

Poster I

The second shows a dynamic point of view – how GC threads are working and what GC modes are available:

Poster II


You can download them for FREE in vector PDF format from my site, https://prodotnetmemory.com. Take them and print them!


bothgamesbanner_white

Two developers from Poland join forces to publish their IT-related card games, from devs to devs – OutOfMemory! and IT Startup!

Each of us already has a published book and now we want to share some knowledge through fun games! Remember all the original 151 Pokémon names? How about playing a game that lets you remember something useful instead – like which technologies are worth learning to become a better developer! One of the games has already been published in Poland and sold over 3k copies (so we already have some publishing know-how). Now we want to start a company and publish two of our IT-related card games worldwide in English.

Have fun while playing our print and play prototypes!

We plan to publish the games as one company on Kickstarter in Q1 2020.

Continue reading

outofmemory

In the previous post, I announced that I’m working on a card game for .NET developers. In this post, I would like to make yet another exciting announcement – the OutOfMemory! game will be Kickstarted and… even better – it will not be alone! As another IT-related card game was going to be announced at around the same time as I had planned, we decided to join forces and benefit from the synergy instead of stepping on each other’s toes. The decision was quite obvious – we are both developers, we are both from Poland and we are both planning to offer an IT-related card game 🙂

So, I am pleased to announce that as of today we are joining our design, marketing, and social media forces, which will eventually result in a joint Kickstarter campaign for two card games at once:

  • OutOfMemory! game for .NET developers (and I’m strongly considering other languages/ecosystems as an option) – in short, a game about building an application from features, with a strong emphasis on their performance and memory consumption
  • IT Startup – a game about building an IT company by hiring developers and taking care of their burnout. This project has already been successfully crowdfunded here in Poland and now its author – Mateusz Kupilas – wants to expand into the international market.

With our combined social media channels and Mateusz’s experience of selling over 2,500 copies of IT Startup, we believe everyone will benefit – most of all, potential players!

As a project supporter, you will have the option to support one of those games or both (with a proper discount, of course!). We even considered designing both games in a way that allows combining them into one, if one wishes, but this idea is currently postponed.

When will both games show up on Kickstarter? Not in the upcoming months, for sure. Most probably at the beginning of February 2020. Until then there will be a lot of time to polish the game design and build some community around the prototypes.

If you want to learn more about IT Startup, please feel invited to visit its dedicated page at http://playitstartup.com/, or follow its social media channels on the IT Startup – The Card Game Facebook page or on Twitter @playitstartup. Mateusz also writes about our cooperation on his blog (in Polish) – https://www.javadevmatt.pl/lacze-sily-z-konradem/


prototype07

So… after quite a serious undertaking, which was writing the Pro .NET Memory Management book, I’ve decided to experiment with a little pet project to have some more fun. I have quite a few very interesting ideas going on in my head. Yet I needed to choose one!

And that’s how the idea of the OutOfMemory game prototype materialized! Ladies and gentlemen, please meet the world’s first card game about .NET-based high-performance and memory-aware programming. Sounds nerdy, doesn’t it?! That’s by design and I like it! From a developer to developers, with love 😉 It contains a huge amount of stuff related to programming, hardware architecture and… the GC, obviously! This is going to be a physical card game, not a computer or mobile one.

The goal of the game is to build an application. You build it by playing so-called Feature cards on the table – the first player to reach a given number of Feature points wins!

prototype01

But each Feature has its cost! Each Feature card allocates Memory and consumes CPU Ticks. At the end of each player’s turn, you add the appropriate amount of Memory. If you hit a given limit – OutOfMemory occurs! CPU Ticks also play an important role, limiting the possible actions in a turn.

In each turn, the player takes a single card from the deck. Or two, if she has an appropriate special card, which is… Adam Sitnik‘s card (an example of a Hero card). Another example of a .NET Hero card is Ben Adams – very powerful, as it reduces both Memory and CPU Ticks, has a special ability (it removes all the nasty Issues played against you by your opponents) and even adds a single Feature point (having Ben contribute to your app is a feature in itself)!

prototype02b

Guess who is on the other Hero cards?! Currently, I plan eight such cards!

Card(s) taken from the deck can be used immediately or kept in hand (but there is a limit on the number of cards in hand – reflecting the CPU cache, my dear!)

There are various other types of cards. For example, there are Garbage Collection Action cards, so if you are lucky enough (and plan ahead to keep them) you can clean your Memory periodically.

prototype03

A lot of cards help you keep Features while reducing their Memory and/or Ticks costs, like Span<T> or Lock-free programming.

prototype04

There are also nasty Bug and Issue cards that you can play against your opponents, removing their Feature points or making their app allocate more and run slower!

prototype05b

Additionally, there are various Action cards that can be played to receive a short, single-turn benefit. Some influence only you, some a given opponent, and some all the players – like the Black Friday card that makes all Features allocate twice as much in the next turn (due to high-volume traffic!).

prototype06

All this is at a very early prototype stage, possibly requiring quite a lot of rethinking. And a lot of balancing is required to create a playable and enjoyable deck. Currently, my prototype consists of 80 cards.

prototype07

I am playing this game a lot with… myself, to balance the very first prototype. When it’s ready, I plan to publish a very rough, self-printable version and I hope there will be .NET developers out there willing to try it out! With your feedback, we may create an amazing game!

Nevertheless, I would LOVE to start receiving your feedback RIGHT away! Do you like this idea? Do you have ideas for Hero, Bug, Issue, Fix and Feature cards?


If you want to be informed about further work and the prototype, feel invited to subscribe on a dedicated page!


Note also that this initiative is part of the bigger Dotnetos initiative – although currently only I am involved in the design of this game, most probably sooner or later all three Dotnetos will be somehow involved in it – I hope so!

Note. All graphics on the cards and the cards themselves are prototypes and do not reflect the final quality. Moreover, all cliparts were taken from Free Vectors via vecteezy.com.

gcbook

It’s been about half a year since my book was published and people started to read it. Some have even already finished it! I thought it would be nice to gather all the reviews I’ve seen so far. There are not that many yet – I was able to find 13 reviews on various sites. Still, a big thank you to those who wrote them!

I must admit, I am very, very pleased with those opinions! There are almost no negatives! The greatest satisfaction comes from statements that someone felt like a better programmer after reading it, or applied it in practice immediately. Some reviews are quite comprehensive, some are pretty short. To help you skim through them all, I’ve highlighted the most interesting (and positive!) parts. Negative parts are also highlighted – luckily there are so few of them!


review05 review11



Obviously, general statements about the book being very well-written or structured are also very pleasing! Hearing “amazing” about your work is awesome!



review10

review04



One of the nicest things is to hear that this book is a “must read” for every .NET developer!



review09



The same goes for hearing that it is “one of the best books about .NET” or even the best book on this topic!



review08

review07

review12



And even though two of those reviews were written by friends of mine, I know them well enough to be sure they are writing sincerely. If they did not like something, they’d say it honestly!

So after more than two years of writing, this is a kind of reward for me to read all this!



review02

review03



The very first review was kind of problematic because of an Amazon formatting issue. Not my fault, unfortunately. But luckily the reviewer revised his review after corresponding with Amazon (AFAIK).



review06



So in summary – I am very pleased to hear that people like the approach I took in designing this book. I wanted it to be both very theoretical (a lot of deep-inside stuff and its theoretical basis) and very practical (the presented Scenarios and Rules). And it seems it worked!




review13

review01



Reviews presented here were taken from the following pages:

If you see another review elsewhere, or you’ve written one, please leave a comment!

And if you’ve read my book already, please write a review somewhere!


.NET reference

Almost every article about .NET memory tells the same story – “value types are allocated on the stack and reference types are allocated on the heap”. And, “classes are reference types while structs are value types”. There are many popular job interview questions for .NET developers touching on this topic. But this is by far not the most appropriate way of seeing the difference between value types and reference types. Why is it not quite correct? Because it describes the concept from the implementation point of view, not from the point of view that explains the true difference behind those two categories of types. This was already explained in the popular articles The Stack Is An Implementation Detail, Part One and The Stack Is An Implementation Detail, Part Two.

We will delve into implementation details later, but it is worth noting that they are still only implementation details. And like all implementations behind some kind of abstraction, they are subject to change. What really matters is the abstraction they provide to the developer. So instead of taking the same implementation-driven approach, I would like to present to you the rationale behind it. Only then can we reach the point where understanding the current implementation is possible (and sensible).

Let’s start from the beginning, which is the ECMA-335 standard. Unfortunately, the definitions we need are a little blurry, and you can get lost in the different meanings of words like type, value, value type, value of a type, and so on and so forth. In general, it is worth remembering that this standard states that:

“any value described by a type is called an instance of that type”

In other words, we can speak about a value (or an instance, interchangeably) of a value type or a reference type. Going further, those are defined as:

“type, value: A type such that an instance of it directly contains all its data. (…) The values described by a value type are self-contained.”

“type, reference: A type such that an instance of it contains a reference to its data. (…) A value described by a reference type denotes the location of another value.”

We can spot here the true difference in the abstraction that those two kinds of types provide: instances (values) of value types contain all their data in place (they are, in fact, the values themselves), while values of reference types only point to data located “somewhere” (they reference something). But this data-location abstraction implies very significant consequences that relate to some fundamental topics (illustrated by the short sketch after the lists below):

Lifetime:

  • Values of value types contain all their data – we can see such a value as a single, self-contained being. The data lives as long as the instance of the value type itself.
  • Values of reference types denote the location of another value whose lifetime is not defined by the definition itself.

Sharing:

  • A value type’s value cannot be shared by default – if we would like to use it in another place (for example, as a method argument or another local variable, although we are touching a bit of implementation detail here), it will be copied byte by byte by default. We then speak of pass-by-value semantics. And as a copy of the value is passed to another place, the lifetime of the original value does not change.
  • A reference type’s value can be shared by default – if we would like to use it in another place, pass-by-reference semantics will be used by default. Hence, after that, one more reference denotes the same value location. We have to somehow track all references to discover the value’s lifetime.

Identity:

  • Value types do not have an identity. Value types are identical if and only if the bit sequences of their data are the same.
  • Reference types are identical if and only if their locations are the same.
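
A short C# sketch of mine illustrating the sharing and identity differences listed above:

struct PointValue { public int X; }
class PointReference { public int X; }

class SemanticsDemo
{
    static void Main()
    {
        var v1 = new PointValue { X = 1 };
        var v2 = v1;                       // byte-by-byte copy – two independent values
        v2.X = 2;
        System.Console.WriteLine(v1.X);    // 1 – the original value is untouched

        var r1 = new PointReference { X = 1 };
        var r2 = r1;                       // only the reference is copied – the data is shared
        r2.X = 2;
        System.Console.WriteLine(r1.X);    // 2 – both references denote the same location
        System.Console.WriteLine(ReferenceEquals(r1, r2)); // True – same identity
    }
}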

Again, there is not a single mention of heap or stack in this context at all. Keeping in mind those differences and definitions should clarify things a little, although you may need a while to get used to them. The next time you are asked during a job interview where value types are stored, you may start with such an alternative, extended elaboration.

Note. There is yet another category of types we should know – immutable types. An immutable type is a type whose value cannot be changed after creation. No more and no less. This says nothing about its value or reference semantics. In other words, both value types and reference types can be immutable. We can enforce immutability in object-oriented programming simply by not exposing any methods and properties that would lead to changing an object’s value.
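
For illustration (my own sketch), both flavours – an immutable value type and an immutable reference type:

// An immutable value type: its value cannot change after construction.
public readonly struct Money
{
    public decimal Amount { get; }
    public Money(decimal amount) => Amount = amount;
}

// An immutable reference type: no setters or mutating methods are exposed.
public sealed class PersonName
{
    public string First { get; }
    public string Last { get; }
    public PersonName(string first, string last) { First = first; Last = last; }
}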

Locations

When considering the .NET stack machine, we should mention an important concept of locations. For the storage of the various values required for program execution, a few logical locations exist (a short C# illustration follows the list):

  • local variables in a method
  • arguments of a method
  • instance field of another value
  • static field (inside class, interface or module)
  • local memory pool
  • temporarily on the evaluation stack
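
Roughly, these logical locations map onto the following C# constructs (the evaluation stack itself is not directly visible from C#; this mapping is my illustration):

using System;

class Locations
{
    static int staticField;                 // static field
    int instanceField;                      // instance field of another value

    void Method(int argument)               // argument of a method
    {
        int local = argument + 1;           // local variable in a method
        Span<int> pool = stackalloc int[8]; // local memory pool (stackalloc)
        pool[0] = argument;
        // intermediate results of the expression below live temporarily on the evaluation stack
        staticField = local + pool[0] + instanceField;
    }
}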

Type Storage

One could insist on asking: where is the place here that implies using the stack or the heap for those two basic kinds of types? The answer is – there is none! This is an implementation decision taken during the design of Microsoft’s CLI implementation, the .NET Framework. Because it was for years overwhelmingly the most popular one, the “value types are allocated on the stack and reference types are allocated on the heap” story has been repeated again and again like a mantra, without deep reflection. And since it is a very good design decision, it was repeated in the different CLI implementations we have discussed earlier. Keep in mind that this sentence is not entirely true in the first place. As we will see in the following sections, there are exceptions to that rule. Different locations can be treated differently as to how the value is stored. And this is exactly the case with the CLI, as we will soon see.

Nevertheless, we can only think about the storage of value types and reference types when designing a CLI implementation for a specific platform. We simply need to know whether a stack or a heap is available at all on that particular platform! As the vast majority of today’s computers have both, the decision is simple. But then we probably also have CPU registers, and no one mentions them in the “value types are allocated on the…” mantra, although they are the same level of implementation detail as using the stack or the heap.

The truth is that the storage implementation of one type or another lies mostly in the JIT compiler design. This is a component designed for the specific platform on which it is running, so we know what resources will be available there. An x86/x64-based JIT obviously has the stack, the heap, and registers at its disposal. However, the decision on where to store a given type’s value can be made not only at the JIT compiler level. We can allow the compiler to influence this decision based on the analysis it performs. And we can even somehow expose such a decision to the developer at the language level (exactly as in C++, where you can allocate objects both on the stack and on the heap).

There is an even simpler approach, taken by Java, where there are no user-defined value types at all, hence no problem of where to store them exists! A few built-in primitives (integers and so forth) are said to be value types there, but everything else is allocated on the heap (not taking into consideration escape analysis, described later). In the case of the .NET design, we could also decide to allocate all type instances (including value types) on the heap, and it would be perfectly fine as long as the value type and reference type semantics were not violated. When talking about memory location, the ECMA-335 standard gives complete freedom:

“The four areas of the method state – incoming arguments array, local variables array, local memory pool and evaluation stack – are specified as if logically distinct areas. A conforming implementation of the CLI can map these areas into one contiguous array of memory, held as a conventional stack frame on the underlying target architecture, or use any other equivalent representation technique.”

Why these and not other implementation decisions were taken will be more practical to explain in the following sections, which discuss value types and reference types separately.

Note. There is only a single important remark left. Even though we now know that talking about the stack and the heap is an implementation detail, it can still be reasonable to do so. Unfortunately, there is a place where “as it should be” is at odds with “as it is practical”. And this place is performance and memory usage optimization. If we are writing our code in C# targeting x86/x64 or ARM computers, we know perfectly well that the heap, the stack, and registers will be used by those types in certain scenarios. So, as The Law of Leaky Abstractions says, the value or reference type abstraction can leak here. And if we want, we can take advantage of it for performance reasons.

Value Types

As previously said, a value type “directly contains all its data”. ECMA-335 defines a value as:

” A simple bit pattern for something like an integer or a float. Each value has a type that describes both the storage that it occupies and the meanings of the bits in its representation, and also the operations that can be performed on that representation. Values are intended for representing the simple types and non-objects in programming languages.”

So what about the “value types are stored on the stack” part of the story? Regarding implementation, there is nothing stopping us from storing all value types on the heap, irrespective of the location used. Except for the fact that there is a better solution – using the stack or a CPU register. The stack is quite a lightweight mechanism. We can “allocate” and “deallocate” objects there by simply creating a properly sized activation frame and dismissing it when it is no longer needed. As the stack seems to be so fast, we should use it all the time, right?

The problem is that it is not always possible, mainly because of the lifetime of the stack data versus the desired lifetime of the value itself. It is the life span and value sharing that determine which mechanism we can use to store value type data.

Let’s now consider each possible location of a value type and what storage we can use there:

  • local variables in a method – they have a very strict and well-defined lifetime, which is the lifetime of a method call (and all its subcalls). We could allocate all value-type local variables on the heap and then just deallocate them when the method ends. But we could also use the stack here because we know there is only a single instance of the value (there is no sharing of it). So there is no risk that someone will try to use this value after the method ends or concurrently from another thread. It is then perfectly fine to use the stack inside an activation frame as storage for local value types (or to use CPU register(s)).
  • arguments of a method – they can be treated exactly like local variables here, so again we can use the stack instead of the heap.
  • instance field of a reference type – its lifetime depends on the lifetime of the containing object. It may well live longer than the current or any other activation frame, so the stack is not the right place for it. Hence, value types that are fields of reference types (like classes) will be allocated on the heap along with them (which we know as one of the boxing reasons).
  • instance field of another value type – here the situation is slightly more complicated. If the containing value is on the stack, we will also use the stack. If it is already on the heap, we will use the heap for the field’s value as well.
  • static field (inside a class, interface or module) – here the situation is similar to an instance field of a reference type. A static field has the lifetime of the type in which it is defined. This means we cannot use the stack as storage, as an activation frame may live for a much shorter time.
  • local memory pool – its lifetime is strictly related to the method’s lifetime (ECMA says “the local memory pool is reclaimed on method exit”). This means we can use the stack without a problem, and that’s why the local memory pool is implemented as a growth of the activation frame.
  • temporarily on the evaluation stack – a value on the evaluation stack has a lifetime strictly controlled by the JIT. It knows perfectly well why this value is needed and when it will be consumed. Hence, it has complete freedom in whether it would like to use the heap, the stack, or a register. For performance reasons, it will obviously try to use CPU registers and the stack.

So that is how we come to the first part – “value types are stored on the stack”. As we can see, the more accurate statement is: “value types are stored on the stack when the value is a local variable or lives inside the local memory pool, but they are stored on the heap when they are part of other objects on the heap or are a static field. And they can always be stored in a CPU register as part of evaluation stack processing”. Slightly more complicated, isn’t it?
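
A hedged C# illustration of that refined statement (my own example; this is how it typically plays out on x86/x64 with the current runtimes, and it remains an implementation detail):

class Holder
{
    public int Counter;            // value type as an instance field of a reference type – lives on the heap, inside the Holder object
    public static int Shared;      // static value type field – also not on any thread's stack
}

class StorageSketch
{
    static int Sum(int a, int b)   // arguments – typically passed in registers or on the stack
    {
        int result = a + b;        // local value type – typically a register or a slot in the activation frame
        return result;
    }

    static void Main()
    {
        var h = new Holder();      // the Holder object (including its Counter field) is allocated on the heap
        h.Counter = Sum(1, 2);
        Holder.Shared = h.Counter;
    }
}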

Reference Types

When talking about reference types, it is convenient to consider them as consisting of two entities:

  • the reference – a value of a reference type is a reference to its data. This reference is, in particular, an address of data stored elsewhere. A reference itself can be seen as a value type because internally it is just a 32- or 64-bit-wide address. References have copy-by-value semantics, so when passed between locations, they are just copied.
  • the reference type’s data – this is the memory region denoted by the reference. The standard does not define where this data should be stored. It is just stored elsewhere.

.NET reference

Considering the possible storage for each location of a reference type is simpler than for value types. As mentioned, because references can share data, their lifetime is
not well-defined. In general cases, it is impossible to store reference types on instances the stack because their lifetime is probably much longer than an activation frame life (method call duration). Hence it is quite an obvious implementation decision where to store them and that is how we come to “reference types are stored on the heap” part of the story.

Regarding the heap allocation possibilities for reference types – there is one exception. If we could know that a reference type instance has the same characteristics as a local value-type variable, we could allocate it on the stack, as usual for value types. This particularly means we should know that the reference does not escape from its local scope (does not escape the stack or the thread) and start to be shared among other references. A way of checking this is called Escape Analysis. It has been successfully implemented in Java, where it’s especially beneficial because of Java’s approach of allocating almost everything on the heap by default. At the time of this writing, the .NET environment does not support Escape Analysis yet. Well, at least not officially. And this is the topic we will look at in the next blog post!