Fast look-up algorithms
I'm looking to use a sort of cache of objects, each stored with a unique key.
Currently, I do this simply using a Dictionary<TKey, TValue>. This prevents having to fetch the items anew when I can simply get them from that cache.
My question is, what are some quicker and more efficient collections or algorithms to use for storing pairs of keys and values and fetching them when needed? (Obviously the logic to "refresh" the cache will run when an item cannot be found.)
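For reference, the pattern being described might be sketched like this. This is only an illustration, not the asker's actual code; LoadItem is a made-up stand-in for the expensive fetch.

```csharp
using System;
using System.Collections.Generic;

var fetchCount = 0;
string LoadItem(string key)
{
    fetchCount++;                    // count real fetches for demonstration
    return "value-for-" + key;
}

var cache = new Cache<string, string>(LoadItem);
Console.WriteLine(cache.Get("a"));   // fetched from source: value-for-a
Console.WriteLine(cache.Get("a"));   // served from the dictionary: value-for-a
Console.WriteLine(fetchCount);       // 1

// Minimal cache: look up in the dictionary, fetch and store on a miss.
class Cache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();
    private readonly Func<TKey, TValue> _fetch;

    public Cache(Func<TKey, TValue> fetch) => _fetch = fetch;

    public TValue Get(TKey key)
    {
        // TryGetValue avoids the double lookup of ContainsKey + indexer.
        if (_items.TryGetValue(key, out var value))
            return value;

        // Cache miss: "refresh" by fetching and storing the item.
        value = _fetch(key);
        _items[key] = value;
        return value;
    }
}
```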
You're not gonna get faster than an in-memory dictionary
Oh? I don't know much about this stuff, but I see O(n) and O(log n) thrown around a lot. Isn't Dictionary O(n) and therefore on the worse side?
Or was this strictly about sorting and I'm conflating it with lookups
Typically it's more efficient than that
As in, dictionaries and hashtables in general
Docs say nearly O(1) https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2?view=net-6.0#remarks
Hm, does that include TryGetValue?
Most of my TKeys will be strings
I don't see why it wouldn't
O(1) lookups for the typical case is the whole reason for dictionaries
So yes, that's what you want
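To make the TryGetValue part concrete, here's a quick illustration with string keys (the keys and values are made up):

```csharp
using System;
using System.Collections.Generic;

var cache = new Dictionary<string, string>
{
    ["user:1"] = "Alice",
    ["user:2"] = "Bob",
};

// TryGetValue does a single hashed lookup and reports hit/miss,
// without the exception the indexer throws on a missing key.
if (cache.TryGetValue("user:1", out var name))
    Console.WriteLine(name);    // Alice

bool found = cache.TryGetValue("user:99", out _);
Console.WriteLine(found);       // False
```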
Actually, I guess the speed would depend on how quick the TKey's Equals method is, right?
What makes you say that?
I mean, do the keys not still need to be compared for equality in some way?
So there's a couple of problems with that statement
Keys are put in buckets by their hash code
You normally won't have hash code collisions causing multiple keys to fall into the same bucket
If they do happen to fall in the same bucket, you do have to compare the keys by value
However, comparing the keys by value won't affect the big O speed
Big-O is about how the time grows with size
O(1) doesn't mean "fast"
It means "the time won't increase as the collection size increases"
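To make the bucket/Equals point concrete, here's a sketch with a deliberately broken hash. BadKey is a made-up type whose GetHashCode puts every key in the same bucket, which is exactly the situation you normally avoid:

```csharp
using System;
using System.Collections.Generic;

var dict = new Dictionary<BadKey, int>
{
    [new BadKey("a")] = 1,
    [new BadKey("b")] = 2,
};

// Both keys hash to the same bucket, so the dictionary must call
// Equals to tell them apart. Lookups are still correct, but with
// every key in one bucket they degrade from O(1) toward O(n).
Console.WriteLine(dict[new BadKey("b")]);   // 2

// A deliberately bad key type that forces collisions.
class BadKey : IEquatable<BadKey>
{
    public string Name { get; }
    public BadKey(string name) => Name = name;
    public override int GetHashCode() => 0;   // every key collides
    public bool Equals(BadKey other) => other != null && Name == other.Name;
    public override bool Equals(object obj) => Equals(obj as BadKey);
}
```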