Reputation: 167
Depending on my agent's position and rotation, I need to determine the distance to a wall. Since this function takes a little while and needs to be called a lot, my idea was to cache the distances by discretizing the x and y position as well as the angle.
My function is therefore called as follows:
float GetWallDistance(int x, int y, int angle)
{
    return CalcWallDistance(x, y, angle);
}
where x and y range from 0 to 500 and the angle ranges from 0 to 360 (all inclusive). My first idea was to store the results in a multidimensional array as follows:
Initialize the array somewhere using float[,,] saveArray = new float[501, 501, 361]; (one slot per possible value, since the ranges are inclusive)
float GetWallDistance(int x, int y, int angle)
{
    float distance = 0;
    // A value of 0 means "not computed yet" (note: this also re-computes
    // entries whose actual distance happens to be exactly 0).
    if (saveArray[x, y, angle] == 0)
    {
        distance = CalcWallDistance(x, y, angle);
        saveArray[x, y, angle] = distance;
    }
    else
    {
        distance = saveArray[x, y, angle];
    }
    return distance;
}
This tremendously sped up the calculation, but the problem is that saveArray takes up quite a big chunk of memory (500 × 500 × 360 floats is roughly 360 MB), and the agent will most likely not visit the entire search space. A lot of that memory is therefore wasted.
I therefore used a dictionary to store the data in a more RAM-efficient way, as follows:
Initialize the dictionary somewhere using Dictionary<double, float> saveDictionary = new Dictionary<double, float>();
float GetWallDistance(int x, int y, int angle)
{
    // Fold x, y and angle into one key; this is collision-free because
    // y < 1000 and angle / 1000 < 1.
    double key = (double)x * 1000 + (double)y + (double)angle / 1000;
    float distance = 0;
    if (!saveDictionary.TryGetValue(key, out distance))
    {
        distance = CalcWallDistance(x, y, angle);
        saveDictionary.Add(key, distance);
    }
    return distance;
}
(I also tried using a string as the dictionary key, but concatenating x, y and the angle turned out to be quite slow.)
This method is indeed a lot more memory-efficient, but the dictionary lookup is much slower than indexing into the multidimensional array.
Does anyone know a way to store this data memory-efficiently while keeping lookups fast?
Upvotes: 4
Views: 230
Reputation: 1104
Of your listed options, the matrix approach seems to be your best bet in terms of both performance and memory allocation.
I have run the following benchmarks:
[Benchmark(Baseline = true)]
public void MatrixTest()
{
    // float[,,] saveArray = new float[501, 501, 361];
    for (int x = 0; x <= 500; x++)
        for (int y = 0; y <= 500; y++)
            for (int angle = 0; angle <= 360; angle++)
                if (saveArray[x, y, angle] == 0) saveArray[x, y, angle] = 42;
}
[Benchmark]
public void IntKeyDictionaryTest()
{
    // Dictionary<int, float> saveDictionary = new Dictionary<int, float>();
    for (int x = 0; x <= 500; x++)
        for (int y = 0; y <= 500; y++)
            for (int angle = 0; angle <= 360; angle++)
            {
                int key = (x << 18) | (y << 9) | angle;
                if (!saveDictionary.TryGetValue(key, out float d)) saveDictionary[key] = 42;
            }
}
[Benchmark]
public void DoubleKeyDictionaryTest()
{
    // Dictionary<double, float> saveDictionary = new Dictionary<double, float>();
    for (int x = 0; x <= 500; x++)
        for (int y = 0; y <= 500; y++)
            for (int angle = 0; angle <= 360; angle++)
            {
                double key = (double)x * 1000 + (double)y + (double)angle / 1000;
                if (!saveDictionary.TryGetValue(key, out float d)) saveDictionary[key] = 42;
            }
}
with following results:
Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
------------------------ |------------:|----------:|----------:|------:|--------:|------------:|------------:|------------:|--------------------:|
MatrixTest | 727.9 ms | 5.733 ms | 5.363 ms | 1.00 | 0.00 | - | - | - | - |
IntKeyDictionaryTest | 4,682.1 ms | 12.017 ms | 11.241 ms | 6.43 | 0.05 | - | - | - | - |
DoubleKeyDictionaryTest | 12,804.1 ms | 66.425 ms | 62.134 ms | 17.59 | 0.17 | - | - | - | - |
I also managed to make a more efficient key for your dictionary: x, y and the angle each fit in 9 bits (their maximum values are below 512), so the combined key fits in 27 bits, well within an int. Even so, from the looks of it, the matrix approach is the winner.
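For reference, the bit-packed key can be wrapped in a pair of helpers along these lines (a sketch; the method names are mine, not part of the benchmark code):

```csharp
// Pack (x, y, angle) into a single int key: 9 bits per component, 27 bits total.
// Valid for 0 <= x, y, angle <= 511, which covers the 0..500 / 0..360 ranges.
static int PackKey(int x, int y, int angle)
{
    return (x << 18) | (y << 9) | angle;
}

// Recover the components by shifting and masking off 9 bits (0x1FF == 511).
static (int x, int y, int angle) UnpackKey(int key)
{
    return ((key >> 18) & 0x1FF, (key >> 9) & 0x1FF, key & 0x1FF);
}
```

UnpackKey(PackKey(x, y, angle)) round-trips exactly as long as each component stays within 9 bits.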
Upvotes: 2
Reputation: 171246
The .NET Dictionary uses a fast algorithm but still has a fairly high overhead. I experimented with making it faster a while ago and found I could make it 6x faster by deleting features I did not need and by making other design changes.
For example, Dictionary uses the modulo operator to map hash codes to buckets. % is surprisingly slow; I think it takes around 31 CPU cycles. When I replaced it with hashCode & bucketCountMask, where the bucket count is a power of two and bucketCountMask is buckets.Length - 1, I saw a big performance gain immediately.
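The masking trick only works when the bucket count is a power of two; a minimal sketch (variable names and values are mine, for illustration):

```csharp
// For a power-of-two bucket count, (hash & mask) equals (hash % bucketCount)
// for non-negative hash codes, but avoids the costly integer division.
int bucketCount = 1024;                 // must be a power of two
int bucketCountMask = bucketCount - 1;  // 1023, i.e. the low 10 bits set

int hashCode = 123456789;
int bucketViaModulo = hashCode % bucketCount;    // integer division under the hood
int bucketViaMask   = hashCode & bucketCountMask; // a single AND instruction
// Both expressions yield the same bucket index.
```

The trade-off is that the table can only grow by doubling, whereas Dictionary's prime-sized tables can grow in smaller steps.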
I also deleted support for removing items and the iterator versioning feature, and I exposed the slots array directly so that callers could mutate the data in place. This custom type was a lot faster because it was specialized to my needs, but its API was also a lot more difficult to use.
The .NET code on GitHub contains a DictionarySlim type for internal use. Maybe you can use that.
Upvotes: 4