Reputation: 23
Okay, I've been banging my head against this for the last day, and I'm sure it's something simple, so here goes. Why does this code not work? I'm using Xcode 3.2.5 with LLVM, and when I try to compile something like this:
uint16x8_t testUnsigned = {1,2,3,4,5,6,7,8};
int16x8_t testSigned;
testSigned = vreinterpretq_s16_u16(testUnsigned);
I get the error "Assigning to 'int16x8_t' from incompatible type 'int'". All my other intrinsics work fine, but for some reason I can't reinterpret a vector. Any ideas? Thanks in advance.
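For reference, here's a complete file that shows what I'm doing (assuming an armv7 device target with NEON enabled):

#include <arm_neon.h>

int main(void)
{
    /* Same snippet as above, just wrapped so it compiles on its own. */
    uint16x8_t testUnsigned = {1, 2, 3, 4, 5, 6, 7, 8};
    int16x8_t testSigned;
    testSigned = vreinterpretq_s16_u16(testUnsigned); /* error on this line */
    return 0;
}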
Upvotes: 2
Views: 1255
Reputation: 8432
As Hiroshi points out, there appears to be a bug in this particular intrinsic. However, since it is just a cast under the hood, you can go by way of any other type without any runtime penalty. For example, I tested this, and it works:
testSigned = vreinterpretq_s16_f32(vreinterpretq_f32_u16(testUnsigned));
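If you need this in more than one place, you can hide the detour behind a small inline helper (the function name is mine, not something from arm_neon.h):

#include <arm_neon.h>

/* Round-trips through float32x4_t; both reinterprets are compile-time
   casts, so no instructions are generated. */
static inline int16x8_t reinterpret_s16_u16(uint16x8_t v)
{
    return vreinterpretq_s16_f32(vreinterpretq_f32_u16(v));
}

and then call it as testSigned = reinterpret_s16_u16(testUnsigned);.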
Upvotes: 0
Reputation: 7251
/Developer/Platforms/iPhoneOS.platform/Developer/usr/llvm-gcc-4.2/lib/gcc/arm-apple-darwin10/4.2.1/include/arm_neon_gcc.h:6947
#define vreinterpretq_s16_u16(__a) \
(int16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)
It looks like the macro casts the argument to a signed type before handing it to the builtin, which smells like a bug. I'm not sure, but you could try:
testSigned = vreinterpretq_s16_u16((int16x8_t)testUnsigned);
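For what it's worth, since the NEON vector types are GCC vector extensions underneath, a plain C-style cast between same-size vector types may also compile as a workaround. I haven't verified this on llvm-gcc 4.2, so treat it as a guess:

testSigned = (int16x8_t)testUnsigned;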
Upvotes: 0