Nik

Reputation: 77

"in" modifier with primitive value types?

I've read an article on MSDN. It explains why "in" should only be used with custom readonly structs, because otherwise there are performance penalties. However, I didn't quite understand how to use "in" with primitive types. Since all built-in value types in C# are immutable, does that mean passing them by reference with the "in" modifier will slightly improve performance compared to passing them by value?

Example:

public class Product
{
    private readonly int _weight;
    private readonly decimal _price;

    public Product(in decimal price, in int weight)
    {
        _price = price;
        _weight = weight;
    }
}

vs

public class Product
{
    private readonly int _weight;
    private readonly decimal _price;

    public Product(decimal price, int weight)
    {
        _price = price;
        _weight = weight;
    }
}

Upvotes: 2

Views: 159

Answers (1)

Dai

Reputation: 155503

The in modifier helps performance by avoiding unnecessary copies of value-types when invoking a method. Note that value-types (i.e. structs) are not actually automatically immutable; the C# compiler merely gives the appearance of immutability by making "defensive copies" and then overwriting the entire parent struct when setting a property value, as explained in the article you linked to.
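To illustrate the defensive-copy behaviour (the type and method names below are invented for this sketch), calling a mutating member on an in parameter of a non-readonly struct operates on a hidden compiler-generated copy, so the caller's variable is never changed:

```csharp
using System;

// A mutable struct: NOT declared `readonly`, so the compiler cannot
// assume its members leave the instance unchanged.
public struct MutablePoint
{
    public int X;
    public int Y;

    // This method mutates the struct.
    public void MoveRight() => X++;

    public override string ToString() => $"({X}, {Y})";
}

public static class Demo
{
    // `x` is an `in` parameter: a read-only alias of the caller's variable.
    public static void Inspect(in MutablePoint x)
    {
        // Because MutablePoint is not a readonly struct, this call runs
        // against a hidden defensive copy, not against `x` itself...
        x.MoveRight();

        // ...so the mutation is silently discarded:
        Console.WriteLine(x); // prints "(0, 0)"
    }

    public static void Main()
    {
        var p = new MutablePoint { X = 0, Y = 0 };
        Inspect(in p);
        Console.WriteLine(p); // still "(0, 0)"
    }
}
```

Declaring the type as a readonly struct (with readonly fields) lets the compiler skip these defensive copies entirely, which is why the article recommends using in only with readonly structs.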

Given that structs are copied in their entirety when passed to a method without in (or out/ref), you can improve performance by passing only a pointer (a reference, in .NET terms) to a struct that lives higher up the call-stack, because a pointer is smaller than a large struct. However, this only actually avoids a copy if the struct is also truly immutable.

C#'s built-in value-types (Int16, Int32, Double, UInt64, etc.) are all smaller than, or the same size as, a pointer on 64-bit systems (except Decimal, which is a 128-bit type, and String, which is a reference-type anyway), which means there is zero benefit to using the in modifier with those types. You will instead suffer a performance hit from the cost of a pointer dereference, which may also cause a processor cache miss.

Consider some different scenarios below (all examples assume x64 and no optimizations that change the semantics of the method calls or the calling convention):

Passing a small value-type

public static void Main()
{
    Int32 value = 123; // 4 bytes
    Foo( in value ); // Get an 8-byte pointer to `value`, then pass that
}

public static void Foo( in Int32 x ) { ... }

There is a performance hit because now the computer is passing an 8-byte pointer value that also needs dereferencing instead of a 4-byte value that can be used immediately.

Passing a large value-type

public struct MyBigStruct
{
    public Decimal Foo;
    public Decimal Bar;
    public Decimal Baz;
}

public static void Main()
{
    MyBigStruct value = default; // 48 bytes; must be initialized before use
    Foo( in value ); // Get an 8-byte pointer to `value`, then pass that
}

public static void Foo( in MyBigStruct x ) { ... }

There is a likely performance gain because the computer passes an 8-byte pointer instead of copying a 48-byte value, but the pointer dereference may still be more expensive than copying the extra 40 bytes. You should profile at runtime to decide whether the change is worthwhile. Using in also makes x read-only inside Foo; otherwise Foo could modify value in Main through the reference.
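As a rough sketch of how such profiling might look (the helper method names are invented, and a serious measurement should use a proper harness such as BenchmarkDotNet rather than Stopwatch), one can time by-value and by-in calls side by side:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

// Declared `readonly` so the `in` version incurs no defensive copies.
public readonly struct MyBigStruct
{
    public readonly decimal Foo, Bar, Baz; // 48 bytes total
}

public static class Bench
{
    // NoInlining keeps the JIT from optimizing the calls away,
    // which would make the comparison meaningless.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static decimal ByValue(MyBigStruct s) => s.Foo; // copies 48 bytes per call

    [MethodImpl(MethodImplOptions.NoInlining)]
    static decimal ByIn(in MyBigStruct s) => s.Foo; // passes an 8-byte reference

    public static void Main()
    {
        var s = default(MyBigStruct);
        const int N = 50_000_000;
        decimal sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink += ByValue(s);
        Console.WriteLine($"by value: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sink += ByIn(in s);
        Console.WriteLine($"by in:    {sw.ElapsedMilliseconds} ms");

        GC.KeepAlive(sink); // keep the loop results observable
    }
}
```

The absolute numbers will vary by machine and JIT behaviour; the point is only that the decision should rest on measurement, not intuition.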

Upvotes: 5
