C#8mo ago
The Mist

✅ Dictionary<int, ...> lookup time seems a bit too slow

I was profiling a program, and I saw this:
100 % HasObject • 136 ms • 189 217 calls • Engine.Runtime.Heap.HasObject(Int32)
50.3 % FindEntry • 68 ms • 189 217 calls • System.Collections.Generic.Dictionary`2.FindEntry(TKey)
Here is the code in question:
private Dictionary<int, RawObjectType> _objectId_object_map = new Dictionary<int, RawObjectType>();

public bool HasObject(int id)
{
return _objectId_object_map.ContainsKey(id);
}
136 ms for 190,000 calls translates to about 1,400 lookups per millisecond. That seems a little slow to me, is it not? If it is, why could this be? I figured looking up an int in a hashmap should be faster than that.
13 Replies
HtmlCompiler
HtmlCompiler8mo ago
just to be clear, this was measured in release mode, right?
The Mist
The MistOP8mo ago
This was done with a line-by-line profiler; the configuration is set to Release mode, and I used the option highlighted in the screenshot.
[screenshot of the profiler configuration]
HtmlCompiler
HtmlCompiler8mo ago
then i would expect this to be at least 10x faster
The Mist
The MistOP8mo ago
What could be the reason for it to be so slow? Maybe I should just chalk this up to line-by-line profiling miscalculations for now
mtreit
mtreit8mo ago
I don't know what line by line profiling is, but doing 1 million lookups on my machine took 5 milliseconds. Just measured with Stopwatch.
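A Stopwatch measurement along these lines (a sketch, not mtreit's actual code; the dictionary size and key pattern are assumptions) should reproduce that kind of number in Release mode:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class LookupTiming
{
    static void Main()
    {
        // Assumed shape: ~200,000 int keys, roughly matching the call count in the profile.
        var map = new Dictionary<int, object>();
        for (int i = 0; i < 200_000; i++)
            map[i] = new object();

        const int iterations = 1_000_000;
        int hits = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Same operation HasObject performs: ContainsKey with an int key.
            if (map.ContainsKey(i % 200_000))
                hits++;
        }
        sw.Stop();

        // Print hits so the lookup loop can't be optimized away.
        Console.WriteLine($"{iterations:N0} lookups ({hits:N0} hits) in {sw.ElapsedMilliseconds} ms");
    }
}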
canton7
canton78mo ago
I don't know how line-by-line works under the hood, but if it's instrumenting the code I'd expect that to add a small overhead. When compared to something as cheap as a dictionary lookup, that overhead may well be significant
Joschi
Joschi8mo ago
This is what I got from a quick benchmark that has a pre-existing dictionary/hashset and tries to get a random number from it n times.
[screenshot of benchmark results]
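Joschi's exact code isn't visible in the screenshot, but a BenchmarkDotNet comparison in that spirit might look roughly like this (the collection size, key count, and random seed below are assumptions):

using System;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LookupBenchmarks
{
    private const int Size = 200_000;
    private Dictionary<int, object> _dictionary;
    private HashSet<int> _hashSet;
    private int[] _keys;

    [GlobalSetup]
    public void Setup()
    {
        _dictionary = new Dictionary<int, object>();
        _hashSet = new HashSet<int>();
        for (int i = 0; i < Size; i++)
        {
            _dictionary[i] = new object();
            _hashSet.Add(i);
        }

        // Pre-generate random keys so RNG cost isn't part of the measurement.
        var rng = new Random(42);
        _keys = new int[1_000];
        for (int i = 0; i < _keys.Length; i++)
            _keys[i] = rng.Next(Size);
    }

    [Benchmark]
    public int DictionaryContainsKey()
    {
        int hits = 0;
        foreach (int key in _keys)
            if (_dictionary.ContainsKey(key)) hits++;
        return hits;
    }

    [Benchmark]
    public int HashSetContains()
    {
        int hits = 0;
        foreach (int key in _keys)
            if (_hashSet.Contains(key)) hits++;
        return hits;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<LookupBenchmarks>();
}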
reflectronic
reflectronic8mo ago
i agree that profiling is not going to be very helpful for measuring the absolute performance of a line or two of code. it's much more helpful for measuring relative performance: how much longer you spend in some function compared to all the rest. the distortion caused by the profiling applies about equally, so it balances out. if you want a microbenchmark to compare things precisely, you should use a library like BenchmarkDotNet (as the previous post did)
The Mist
The MistOP8mo ago
Thanks guys! I don't get to use line-by-line profilers too often, so maybe I'm just overestimating how well it accounts for its own overhead in the benchmark results. I've gotten the performance to an acceptable threshold for now by optimizing other things; if it falls under the threshold again, I'll try using BenchmarkDotNet to see if there is really some issue with the dictionary/method or if it was just the profiler overhead after all
canton7
canton78mo ago
I suspect it doesn't account for overhead. It's meant to show you which of your lines are more expensive compared to other lines
The Mist
The MistOP8mo ago
Yeah, that's fair
reflectronic
reflectronic8mo ago
$close
MODiX
MODiX8mo ago
Use the /close command to mark a forum thread as answered