Andreass

Reputation: 3228

Why does using float instead of double not improve Android performance?

Since all smart phones (at least the ones that I can find specs on) have 32-bit processors, I would imagine that using single-precision floating-point values in extensive calculations would perform significantly better than using doubles. However, that doesn't seem to be the case.

Even when I avoid type casts and use the FloatMath package whenever possible, I can hardly see any performance improvement, apart from reduced memory use, when comparing float-based methods to double-based ones.

I am currently working on a rather large, calculation-intensive sound analysis tool, which performs several million multiplications and additions per second. Since a double-precision multiplication on a 32-bit processor takes several clock cycles versus one for single precision, I assumed the type change would be noticeable... but it isn't :-(
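A comparison like the one described can be sketched as a micro-benchmark (this is illustrative, not the original poster's code; loop counts and timings are assumptions, and results vary widely by device, VM, and JIT warm-up):

```java
// Micro-benchmark sketch: compare float vs. double multiply-add throughput.
// On hardware where both are fully pipelined, the two timings come out
// roughly equal, which matches the behavior described in the question.
public class FloatVsDouble {

    static float mulAddFloat(int n) {
        float acc = 0f;
        for (int i = 0; i < n; i++) {
            acc += 1.000001f * i;  // one multiply-add per iteration
        }
        return acc;
    }

    static double mulAddDouble(int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += 1.000001 * i;
        }
        return acc;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long t0 = System.nanoTime();
        float f = mulAddFloat(n);
        long t1 = System.nanoTime();
        double d = mulAddDouble(n);
        long t2 = System.nanoTime();
        System.out.printf("float: %d ms, double: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

Note that the accumulator is returned from each method so the VM cannot dead-code-eliminate the loop.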

Is there a good explanation for this? Is it due to the way the Dalvik VM works, or what?

Upvotes: 12

Views: 4872

Answers (3)

Stephen Canon

Reputation: 106127

It seems that you're using a Nexus One, which has a Scorpion core.

I believe that both single- and double-precision scalar floating point are fully pipelined in Scorpion, so although the latency of the operations may differ, the throughput is the same.

That said, I believe that Scorpion also has a SIMD unit which is capable of operating on floats, but not doubles. In theory, a program written against the NDK that takes advantage of the SIMD instructions can run substantially faster on single precision than on double precision, but only with significant effort from the programmer.

Upvotes: 3

Gabe

Reputation: 86718

Floating-point units on typical CPUs perform all of their calculations in double-precision (or better) and simply round or convert to whatever the final precision is. In other words, even 32-bit CPUs have 64-bit FPUs.

Many phones have CPUs that include FPUs, but have the FPUs disabled to save power, causing the floating-point operations to be slowly emulated (in which case 32-bit floats would be an advantage).

There are also vector units that have 32-bit FPUs, causing 64-bit floating-point operations to take longer. Some SIMD units (like those that execute SSE instructions) perform 32-bit and 64-bit operations in the same amount of time, so you could do twice as many 32-bit ops at a time, but a single 32-bit op won't go any faster than a single 64-bit op.

Upvotes: 17

CommonsWare

Reputation: 1006634

Many, perhaps most, Android devices have no floating-point co-processor.

"I am currently working on a rather large, calculation intensive sound analysis tool, which is doing several million multiplications and additions per second."

That's not going to work very well on Android devices lacking a floating-point co-processor.

Move it into C/C++ with the NDK, then limit your targets to ARMv7 devices, which have a floating-point co-processor.

Or, change your math to work in fixed-point mode. For example, Google Maps does not deal with decimal degrees for latitude and longitude, but rather microdegrees (10^6 times degrees), specifically so that it can do its calculations using fixed-point math.

Upvotes: 8
