scubasteve

Reputation: 2868

Is there a more efficient way to convert double to float?

I have a need to convert a multi-dimensional double array to a jagged float array. The sizes will vary from [2][5] up to around [6][1024].

I was curious how simply looping and casting each double to a float would perform, and it's not TOO bad: about 225µs for a [2][5] array. Here's the code:

const int count = 5;
const int numCh = 2;
double[,] dbl = new double[numCh, count];
float[][] flt = new float[numCh][];

for (int i = 0; i < numCh; i++)
{
    flt[i] = new float[count];
    for (int j = 0; j < count; j++)
    {
        flt[i][j] = (float)dbl[i, j];
    }
}

However, if there are more efficient techniques, I'd like to use them. I should mention that I ONLY timed the two nested loops, not the allocations before them.

After experimenting a little more, I think 99% of the time is burned in the loops, even without the assignment!

Upvotes: 6

Views: 3129

Answers (4)

fixagon

Reputation: 5566

If you could also use Lists in your case, you could take the LINQ approach:

List<List<double>> t = new List<List<double>>();
// adding test data
t.Add(new List<double>() { 12343, 345, 3, 23, 2, 1 });
t.Add(new List<double>() { 43, 123, 3, 54, 233, 1 });
// conversion: each inner List<double> becomes a List<float>
List<List<float>> q = t.ConvertAll(
    inList => inList.ConvertAll(inValue => (float)inValue));

Whether it's faster you'll have to measure yourself (doubtful), but you could parallelize it, which could speed it up (PLINQ); a sketch is below.
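For illustration, a minimal PLINQ sketch of that idea, assuming the same t list as above (for inputs this small the parallelization overhead will almost certainly dominate):

using System.Collections.Generic;
using System.Linq;

// convert the outer lists concurrently; AsOrdered keeps the row order stable
List<List<float>> q = t
    .AsParallel()
    .AsOrdered()
    .Select(inList => inList.ConvertAll(inValue => (float)inValue))
    .ToList();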

Upvotes: 0

Seph

Reputation: 8703

This will run faster. For small data it's not worth doing Parallel.For(0, count, (j) => ...); it actually runs considerably slower for very small data, which is why I have commented that section out.

double* dp0;
float* fp0;

fixed (double* dp1 = dbl)
{
    dp0 = dp1;

    float[] newFlt = new float[count];
    fixed (float* fp1 = newFlt)
    {
        fp0 = fp1;
        for (int i = 0; i < numCh; i++)
        {
            //Parallel.For(0, count, (j) =>
            for (int j = 0; j < count; j++)
            {
                fp0[j] = (float)dp0[i * count + j];
            }
            //});
            flt[i] = newFlt.Clone() as float[];
        }
    }
}

This runs faster because accessing rectangular double arrays ([,]) is really taxing in .NET due to the array bounds checking. The newFlt.Clone() just means we're not fixing and unfixing new pointers all the time (as there is a slight overhead in doing so).

You will need to put the code in an unsafe context and compile with /unsafe; a sketch of a standalone method wrapper is below.
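For reference, a minimal sketch of such a wrapper as a standalone method (ConvertRows is an illustrative name, and this variant fixes each target row directly instead of cloning a scratch row):

// compile with /unsafe; the unsafe modifier enables the fixed/pointer code
static unsafe void ConvertRows(double[,] dbl, float[][] flt)
{
    int numCh = dbl.GetLength(0);
    int count = dbl.GetLength(1);
    fixed (double* dp = dbl)
    {
        for (int i = 0; i < numCh; i++)
        {
            flt[i] = new float[count];
            fixed (float* fp = flt[i])
            {
                for (int j = 0; j < count; j++)
                    fp[j] = (float)dp[i * count + j];
            }
        }
    }
}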

But really you should be testing with data closer to 5000 x 5000, not 5 x 2. If something takes less than 1000 ms, you need to either add more loop iterations or increase the data size, because at that level a minor spike in CPU activity can add a lot of noise to your profiling. A minimal harness is sketched below.
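For example, a bare-bones Stopwatch harness, assuming the dbl/flt arrays and numCh/count from the question (the iteration count is illustrative):

using System;
using System.Diagnostics;

const int iterations = 100000;
Stopwatch sw = Stopwatch.StartNew();
for (int n = 0; n < iterations; n++)
{
    for (int i = 0; i < numCh; i++)
        for (int j = 0; j < count; j++)
            flt[i][j] = (float)dbl[i, j];
}
sw.Stop();
// average time per full conversion, in microseconds
Console.WriteLine("{0:F3} µs", sw.Elapsed.TotalMilliseconds * 1000.0 / iterations);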

Upvotes: 6

Żubrówka

Reputation: 720

I don't really think you can optimize your code much more. One option would be to make it parallel, but for your input data size ([2][5] up to around [6][1024]) I don't think you would profit much, if you saw any profit at all. In fact, I wouldn't even bother optimizing that piece of code...

Anyway, to optimize it, the only thing I would do (if it fits what you want to do) would be to use fixed-width (rectangular) arrays instead of jagged ones, even if you waste some memory that way; see the sketch below.
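A minimal sketch of that idea, assuming the dbl, numCh, and count from the question (flt2 is an illustrative name):

// rectangular target: one contiguous allocation, no per-row arrays
float[,] flt2 = new float[numCh, count];
for (int i = 0; i < numCh; i++)
    for (int j = 0; j < count; j++)
        flt2[i, j] = (float)dbl[i, j];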

Upvotes: 0

TomTom

Reputation: 62127

In your example, I think you aren't measuring the double-to-float conversion so much (which should be a single internal processor instruction) as the array accesses (which involve a lot of indirection, plus, obviously, the array bounds checks behind the IndexOutOfRangeException).

I would suggest timing a test without arrays; something like the sketch below.
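A minimal sketch of such a test (names and iteration count are illustrative; printing f keeps the JIT from eliminating the loop as dead code):

using System;
using System.Diagnostics;

double d = 1234.5678;
float f = 0f;
Stopwatch sw = Stopwatch.StartNew();
for (int n = 0; n < 10000000; n++)
{
    f = (float)d; // pure conversion, no array access
}
sw.Stop();
Console.WriteLine("{0} ms (result: {1})", sw.ElapsedMilliseconds, f);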

Upvotes: 0
