Reputation: 11436
While processing a large text file, I came across the following (unexpected) performance degradation that I can't explain. My constraints for this question are:

- the array element type is not int[] but MyComplexType[]
- MyComplexType is a class, not a struct
- MyComplexType contains some string properties

Consider the following C# program:
namespace Test
{
    public static class Program
    {
        // Simple data structure
        private sealed class Item
        {
            public Item(int i)
            {
                this.Name = "Hello " + i;
                //this.Name = "Hello";
                //this.Name = null;
            }

            public readonly string Name;
        }

        // Test program
        public static void Main()
        {
            const int length = 1000000;
            var items = new Item[length];

            // Create one million items but don't assign to array
            var w = System.Diagnostics.Stopwatch.StartNew();
            for (var i = 0; i < length; i++)
            {
                var item = new Item(i);
                if (!string.IsNullOrEmpty(item.Name)) // reference the item and its Name property
                {
                    items[i] = null; // do not remember the item
                }
            }
            System.Console.Error.WriteLine("Without assignment: " + w.Elapsed);

            // Create one million items and assign to array
            w.Restart();
            for (var i = 0; i < length; i++)
            {
                var item = new Item(i);
                if (!string.IsNullOrEmpty(item.Name)) // reference the item and its Name property
                {
                    items[i] = item; // remember the item
                }
            }
            System.Console.Error.WriteLine("   With assignment: " + w.Elapsed);
        }
    }
}
It contains two almost-identical loops. Each loop creates one million instances of the Item class. The first loop uses the created item and then throws away the reference (never storing it in the items array). The second loop uses the created item and then stores the reference in the items array. The array item assignment is the only difference between the loops.
When I run a Release build (optimizations turned on) on my machine, I get the following results:
Without assignment: 00:00:00.2193348
With assignment: 00:00:00.8819170
The loop with array assignment is significantly (~4x) slower than the one without.
If I change the Item constructor to assign a constant string to the Name property:
public Item(int i)
{
    //this.Name = "Hello " + i;
    this.Name = "Hello";
    //this.Name = null;
}
I get the following results:
Without assignment: 00:00:00.0228067
With assignment: 00:00:00.0718317
The loop with assignment is still ~3x slower than the one without.
Finally, if I assign null to the Name property:
public Item(int i)
{
    //this.Name = "Hello " + i;
    //this.Name = "Hello";
    this.Name = null;
}
I get the following result:
Without assignment: 00:00:00.0146696
With assignment: 00:00:00.0105369
Once no string is allocated, the version without assignment is finally slightly slower (I assume because all those instances immediately become eligible for garbage collection).
Why is array item assignment slowing the test program so much?
Is there an attribute/language construct/etc that will speed up the assignment?
PS: I tried investigating the slowdown using dotTrace, but it was inconclusive. One thing I saw was a lot more string copying and garbage collection overhead in the loop with assignment than in the loop without assignment (even though I expected the reverse).
Upvotes: 19
Views: 1834
Reputation: 11
In my opinion, you are a victim of branch prediction. Let's look in detail at what you are doing:
In the "without assignment" case, you just assign null to every element of the items array. After some iterations of the for loop, the processor learns that you are assigning the same value (even null) to the array, so the if statement effectively costs nothing: your program runs faster.
In the "with assignment" case, the processor has no idea about the progression of the newly generated items: the if statement must be evaluated on each iteration of the for loop, which leads to a slower program...
This behavior relies on a part of the processor hardware called the Branch Prediction Unit (which consumes a significant proportion of the chip's transistors...). A similar topic is well illustrated here: Why is it faster to process a sorted array than an unsorted array?
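The effect described in that linked question can be sketched directly. This is an illustrative benchmark of my own (names, seed, and sizes are arbitrary), not the asker's code: the same data is summed twice through a data-dependent branch, once unsorted and once sorted, and only the branch predictability differs.

```csharp
using System;
using System.Diagnostics;

public static class BranchDemo
{
    private static long SumAboveThreshold(int[] values)
    {
        long sum = 0;
        foreach (var v in values)
        {
            if (v >= 128) // data-dependent branch
            {
                sum += v;
            }
        }
        return sum;
    }

    public static void Main()
    {
        var rng = new Random(42);
        var data = new int[1000000];
        for (var i = 0; i < data.Length; i++)
        {
            data[i] = rng.Next(256); // random values make the branch unpredictable
        }

        var w = Stopwatch.StartNew();
        var unsortedSum = SumAboveThreshold(data);
        Console.Error.WriteLine("Unsorted: " + w.Elapsed);

        Array.Sort(data); // sorting makes the branch predictable
        w.Restart();
        var sortedSum = SumAboveThreshold(data);
        Console.Error.WriteLine("  Sorted: " + w.Elapsed);

        // Same elements either way, so the sums must match.
        Console.WriteLine(unsortedSum == sortedSum); // True
    }
}
```

Whether this explains the asker's numbers is another matter, but the sketch shows what a branch-prediction effect looks like in isolation.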
Upvotes: 0
Reputation: 152624
To try and solve your actual problem (although this was an interesting puzzle to solve), I would recommend a few things:

Use a get accessor to return the string values. That takes the string concatenation out of the picture when diagnosing array allocation. If you want to "cache" the computed value when you first get it, that should be OK.

Upvotes: 0
Reputation: 5212
I don't believe this really has anything to do with array assignment. It has to do with how long the item and its contained objects have to be kept around, just in case you may later reference them. In other words, it's about heap allocation and garbage collection generations.
When first allocated, the item and its strings will be in "generation 0". This is garbage collected often, and is very hot, maybe even cached, memory. It's very likely that within the next few iterations of the loop the whole of "generation 0" will be GC'ed and the memory re-used for new items and their strings. When we add the assignment to the array, the object cannot be garbage collected because there is still a reference to it. This causes increased memory consumption.
I believe you will see memory increasing during the execution of your code: the problem is memory allocation in the heap combined with cache misses, because the program always has to use "fresh" memory and cannot benefit from hardware memory caching.
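The generation-0 claim above can be checked with GC.GetTotalMemory and GC.CollectionCount. This is a minimal sketch (the Item class is copied from the question; the instrumentation is my own), not a rigorous benchmark:

```csharp
using System;

public static class GcDemo
{
    private sealed class Item
    {
        public readonly string Name;
        public Item(int i) { this.Name = "Hello " + i; }
    }

    public static void Main()
    {
        const int length = 1000000;

        // Discarded items: each one dies in generation 0 almost immediately.
        long heapBefore = GC.GetTotalMemory(true);
        int gen0Before = GC.CollectionCount(0);
        for (var i = 0; i < length; i++)
        {
            var item = new Item(i);
            GC.KeepAlive(item); // keep the allocation from being elided
        }
        Console.WriteLine("Discarded: heap grew by "
            + (GC.GetTotalMemory(false) - heapBefore) + " bytes after "
            + (GC.CollectionCount(0) - gen0Before) + " gen0 collections");

        // Retained items: every Item (and its string) must survive collection.
        var items = new Item[length];
        heapBefore = GC.GetTotalMemory(true);
        gen0Before = GC.CollectionCount(0);
        for (var i = 0; i < length; i++)
        {
            items[i] = new Item(i);
        }
        Console.WriteLine(" Retained: heap grew by "
            + (GC.GetTotalMemory(false) - heapBefore) + " bytes after "
            + (GC.CollectionCount(0) - gen0Before) + " gen0 collections");
        GC.KeepAlive(items);
    }
}
```

On a typical run the retained loop should show the heap growing by far more bytes, since every Item and concatenated string survives its gen0 collections instead of being reclaimed.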
Upvotes: 1
Reputation: 22914
My guess is that the compiler is being really smart and sees that you don't need to do anything significant with the Item in the case where you're not assigning it. It probably just reuses the Item object's memory in the first loop, since it can. In the second loop, separate bits of heap need to be allocated, since the items are all independent and referenced later.
I guess this kind of agrees with what you saw relating to garbage collection: one item is created in the first loop vs. many in the second.
A quick note: the first loop is probably using object pooling as its optimization. This article may provide insight. As Reed is quick to point out, the article talks about app-level optimizations, but I imagine the allocator itself has a lot of optimizations that do similar things.
Upvotes: 2
Reputation: 8190
Okay, I'm still looking, but MSDN suggests you use a collection (presumably List<T> or HashSet<T> or something similar) rather than an array. From the MSDN documentation:
Class library designers might need to make difficult decisions about when to use an array and when to return a collection. Although these types have similar usage models, they have different performance characteristics. In general, you should use a collection when Add, Remove, or other methods for manipulating the collection are supported.
Maybe there's something in the .NET specification docs.
Upvotes: -3
Reputation: 564771
I suspect most of the timing issues are related to memory allocation.
When you assign the items into the array, they never become eligible for garbage collection. When you have a string as a property that isn't a constant (interned) or null, this is going to cause your memory allocation requirements to go up.
In the first case, I suspect what's happening is that you're churning through the objects quickly, so they stay in Gen0 and can be GC'ed cheaply, and that memory segment can be reused. This means that you never have to allocate more memory from the OS.
In the second case, you're creating strings within your objects, which means two allocations per item (the object and its string), then storing these so they aren't eligible for GC. At some point, you'll need more memory, so you'll have to allocate it.
As for your final check: when you set the Name to null, the if (!string.IsNullOrEmpty(item.Name)) check will prevent the item from being added. As such, the two code paths, and therefore the timings, become (effectively) identical, though the first is marginally slower (most likely due to the JIT running the first time).
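The "constant (interned) or null" point can be seen directly: a string literal is interned once for the whole program, while run-time concatenation such as "Hello " + i produces a fresh string object every time. A small sketch (my own example, not from the question):

```csharp
using System;

public static class InternDemo
{
    public static void Main()
    {
        // Both expressions compile down to the same interned literal,
        // so the constant-Name variant allocates no new strings at all.
        string a = "Hello";
        string b = "Hell" + "o"; // constant-folded at compile time
        Console.WriteLine(ReferenceEquals(a, b)); // True

        // Run-time concatenation allocates a distinct object each time.
        int i = 42;
        string c = "Hello " + i;
        string d = "Hello " + i;
        Console.WriteLine(ReferenceEquals(c, d)); // False
        Console.WriteLine(c == d); // True: equal contents, distinct objects
    }
}
```

That is why the "Hello " + i variant is the expensive one: one million distinct strings are allocated, and in the "with assignment" loop they all stay reachable.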
Upvotes: 27