✅ What is the deal with default JIT optimizations?

According to my memory debugger
interface ITest1
{
}
interface ITest2 { }

struct Test1 : ITest1
{
}

[System.Runtime.CompilerServices.MethodImplAttribute(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
static bool test1<T>(ref T value)
{
    return value is ITest1;
}

[System.Runtime.CompilerServices.MethodImplAttribute(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
static bool test2<T>(ref T value)
{
    return value is ITest2;
}

[System.Runtime.CompilerServices.MethodImplAttribute(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
static object Test<T>(ref T value)
{
    bool flag = true;
    int x = 0;
    int y = 0;
    while (flag)
    {
        if (test1(ref value))
            x++;
        if (test2(ref value))
            y++;
    }
    return null;
}
The generic value given to test1 and test2 gets boxed at every iteration. Now in a debug scenario this isn't surprising, given that the emitted IL for this is indeed boxing. But I thought boxing like this was supposed to be optimized out by default in a release build. To make this code get optimized like it should be, the "AggressiveOptimization" flag has to be applied manually. My project settings are standard .NET 7 settings. Unless I'm missing something this seems rather odd. In fact, the implementation of hot and common code paths like GenericEqualityComparer depends on a similar optimization happening. Obviously that is getting optimized, otherwise simple collection lookups would blow the GC up on a regular basis. Having optimizations disabled by default is surely not the intended behavior here? Or is my memory debugger making some pretty heavy assumptions without actually analyzing memory allocations correctly? Thanks in advance for any insight you guys can give.
20 Replies
mtreit
mtreit2y ago
This kind of question is something the folks in #allow-unsafe-blocks that work on things like the JIT can probably answer if you ask there.
Aaron
Aaron2y ago
it's probably because of tiered compilation - the first pass of JITing a method is done with fewer optimizations enabled to increase startup speed. Debuggers also usually disable optimizations using the debugging API to make the debugging experience better, which might be another cause. If you look at sharplab, which shows the decompilation of tier 1 (which is full opts currently), you will see there is no box
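(A minimal sketch of taking tiered compilation out of the picture for one method, assuming .NET Core 3.0+ where MethodImplOptions.AggressiveOptimization is available - the method is then compiled with full optimizations on its first compilation instead of going through tier 0:)

using System.Runtime.CompilerServices;

// Sketch only: same method shape as the snippet above, but opted out of
// tiering so the tier-0 box never appears in the first place.
[MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.AggressiveOptimization)]
static bool test1<T>(ref T value)
{
    return value is ITest1;
}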
MODiX
MODiX2y ago
Windows10CE#8553
sharplab.io
public class C {
    interface ITest1 {
    }
    interface ITest2 { }
    struct Test1 : ITest1 {
    }
    [JitGeneric(typeof(Test1))]
    [System.Runtime.CompilerServices.MethodImplAttribu...
    static bool test1<T>(ref T value) {
        return value is ITest1;
        // 24 more lines. Follow the link to view.
Tacti Tacoz
Tacti TacozOP2y ago
Ohh yeah, looks like it initially runs the methods in no-optimization mode but optimizes them after around 50K iterations. Thanks, that makes sense. Are there any details available regarding when a method is considered a "hot code path" and thus optimized? Microsoft's documentation entry regarding tiered compilation doesn't really give details
interface ITest
{
    bool Valid { get; }
}
struct Test1 : ITest
{
    public bool valid;
    public bool Valid => valid;
}
struct Test2<T> : ITest
{
    public Test1 test1;
    public T value;
    public bool Valid => test1.Valid;
}

[System.Runtime.CompilerServices.MethodImplAttribute(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
unsafe static bool test1<T>(ref T value)
{
    if (value is ITest)
    {
        // ldarg_0
        // constrained.
        // callvirt ITest.Valid
        // ret
    }
    return false;
}
On another note, is there really no way to call ITest.Valid with the constrained IL modifier like shown above? Like how it would be done automatically if T was constrained to ITest. I could do ((ITest)value).Valid of course, but that will cause boxing if T refers to a value type. Another way is to create a separate method constrained to ITest and then call that from test1, but there seems to be no way to force that either. For the Test1 case, value is Test1 t && t.Valid can be used, which is fine. But that can't be done for Test2 since it's generic, which is why we need the interface here. I'm aware this will get solved if I make sure only reference types will be fed to test1 (or even constrain it to 'class'), but in my use case I need it to support value types without boxing
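(A small sketch of the two workarounds described above, reusing the ITest/Test1 names from the earlier snippet; the interface cast boxes a struct T, while the concrete pattern match avoids that but cannot cover Test2<T>:)

// Option 1: interface cast - works for any T, but boxes when T is a struct.
static bool ViaCast<T>(ref T value)
{
    if (value is ITest)               // type test
        return ((ITest)value).Valid;  // the cast itself boxes a value-type T
    return false;
}

// Option 2: concrete pattern match - no interface needed, but only covers
// the known struct Test1; it cannot handle the generic Test2<T>, which is
// why the interface is needed in the first place.
static bool ViaConcreteMatch<T>(ref T value)
{
    return value is Test1 t && t.Valid;
}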
Aaron
Aaron2y ago
no, that would require a new language feature to constrain generics within blocks with checks like that
Tacti Tacoz
Tacti TacozOP2y ago
no way to force it using Unsafe or anything like that?
Aaron
Aaron2y ago
no
Tacti Tacoz
Tacti TacozOP2y ago
hmm ok
Aaron
Aaron2y ago
using unsafe to do it would require you to know the concrete type (not the generic) ahead of time
Tacti Tacoz
Tacti TacozOP2y ago
hmm alright. Let's say I knew the concrete type ahead of time
Aaron
Aaron2y ago
then you can just Unsafe.As from T to the concrete type
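(A hedged sketch of that approach, assuming Test1 is the known concrete type - Unsafe.As just reinterprets the reference without boxing, so it is only valid when T really is Test1 at runtime:)

using System.Runtime.CompilerServices;

// Sketch only: reinterpret ref T as ref Test1 and call Valid without boxing.
// There is no runtime safety net beyond the typeof check below.
static bool ViaUnsafeAs<T>(ref T value)
{
    if (typeof(T) == typeof(Test1))
        return Unsafe.As<T, Test1>(ref value).Valid;
    return false;
}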
Tacti Tacoz
Tacti TacozOP2y ago
issue is though that one of the types is generic, so I suppose I can't know the concrete type then
Aaron
Aaron2y ago
iirc, it's 30 iterations, then 300 milliseconds with no methods JITed, then running the method again will cause it to promote to tier 1 in the background. Then there's not much you can do. You could generate that IL, either in an assembly you reference or at runtime with DynamicMethod or something (note that DynamicMethod would kill AOT support)
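(A rough sketch of the DynamicMethod route, emitting the ldarg.0 / constrained. / callvirt sequence from the commented snippet above; the delegate and class names here are made up for illustration, and the generated code is only valid when the runtime T actually implements ITest:)

using System;
using System.Reflection;
using System.Reflection.Emit;

delegate bool RefPredicate<T>(ref T value);

// Sketch: emit "ldarg.0; constrained. T; callvirt ITest.get_Valid; ret" at
// runtime, calling Valid through the interface without boxing a struct T.
static class ValidCaller<T>   // deliberately not constrained to ITest
{
    public static readonly RefPredicate<T> Invoke = Build();

    private static RefPredicate<T> Build()
    {
        MethodInfo getter = typeof(ITest).GetProperty(nameof(ITest.Valid)).GetGetMethod();

        var dm = new DynamicMethod(
            "CallValid",
            typeof(bool),
            new[] { typeof(T).MakeByRefType() });

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);                // push the ref T
        il.Emit(OpCodes.Constrained, typeof(T)); // constrained call avoids the box
        il.Emit(OpCodes.Callvirt, getter);       // ITest.get_Valid
        il.Emit(OpCodes.Ret);

        return (RefPredicate<T>)dm.CreateDelegate(typeof(RefPredicate<T>));
    }
}

Usage inside the unconstrained test1<T> would then be something like: if (value is ITest) return ValidCaller<T>.Invoke(ref value);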
Tacti Tacoz
Tacti TacozOP2y ago
yeah I'll look into using the dynamic method system (and yeah right now I'm doing server related work so AOT won't need to be on my mind for a while. When I start using AOT I'll use an assembly generator instead of dynamic method at runtime)
Aaron
Aaron2y ago
though tier 1 should also elide that box, so I'd probably try to avoid prematurely optimizing here and observe whether that box is actually a problem at runtime
Tacti Tacoz
Tacti TacozOP2y ago
hmm ok. I'll do some testing
Anton
Anton2y ago
use benchmark.net
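(For example, a minimal BenchmarkDotNet setup with the memory diagnoser would show whether the box survives once tier 1 kicks in; the class name is a placeholder and test1 refers to the method from the snippet above:)

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Sketch: MemoryDiagnoser reports allocations per operation, so a surviving
// box would show up in the "Allocated" column after warmup.
[MemoryDiagnoser]
public class BoxingBenchmark
{
    private Test1 _value;

    [Benchmark]
    public bool IsITest() => test1(ref _value);

    // Entry point: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<BoxingBenchmark>();
}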
Tacti Tacoz
Tacti TacozOP2y ago
Yeah you are right, if you don't use pattern matching it gets optimized out (aka using the result of the box directly instead of assigning it to an ITest variable first, which makes sense). Thanks 🙂
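(One possible reading of that, as a sketch - the first form stores the boxed result in an ITest pattern variable, the second consumes the type test and cast directly; whether the box is actually elided still depends on the JIT tier:)

// Pattern-matching form: the boxed ITest reference is stored in a local,
// so it has to be materialized.
static bool WithPatternVariable<T>(ref T value)
{
    return value is ITest test && test.Valid;
}

// Direct form: the type test and cast results are used in place; per the
// message above, this version was optimized out in testing.
static bool WithoutPatternVariable<T>(ref T value)
{
    return value is ITest && ((ITest)value).Valid;
}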
Accord
Accord2y ago
Was this issue resolved? If so, run /close - otherwise I will mark this as stale and this post will be archived until there is new activity.
Tacti Tacoz
Tacti TacozOP2y ago
/close