Reputation: 353
I'm new to AVX intrinsics (and AVX in general), and I'm attempting to speed up some code that uses a 256-bit lookup table consisting of 32 unsigned chars. Currently the code (with dummy data) is written as such:
unsigned char lookup_table[32] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                                   16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 };
unsigned char result[8];
unsigned char indices[8] = { 0, 4, 8, 12, 16, 20, 24, 28};
for(int i = 0; i < 8; i++)
{
result[i] = lookup_table[indices[i]];
}
Which works fine and results in the following being placed into "result":
0, 4, 8, 12, 16, 20, 24, 28
In an attempt to speed this up, I've replaced the above code with the following AVX intrinsics:
unsigned char lookup_table[32] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                                   16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 };
unsigned char result[8];
unsigned char indices[8] = { 0, 4, 8, 12, 16, 20, 24, 28};
__m256i avxTable = _mm256_loadu_si256((__m256i*)&lookup_table);
__m256i avxIndices = _mm256_loadu_si256((__m256i*)&indices);
__m256i avxResult = _mm256_shuffle_epi8(avxTable, avxIndices);
Which results in the following output:
0, 4, 8, 12, 0, 4, 8, 12
What I've gathered is that the _mm256_shuffle_epi8 intrinsic ANDs each index with 0x0F (according to the pseudocode at https://software.intel.com/en-us/node/524017), effectively making any index above 15 "wrap around" again, hence the repeat of the (0, 4, 8, 12).
Am I using the wrong AVX call? Am I totally off base with the way I believe this should work?
Upvotes: 2
Views: 3075
Reputation: 213060
Here is a solution using SSE rather than AVX. Note that it performs 16 lookups in parallel (you can't do fewer than this with 128-bit SIMD and 8-bit elements):
#include <stdio.h>
#include <smmintrin.h> // SSE 4.1
int main()
{
unsigned char lookup_table[32] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 };
unsigned char result[16];
unsigned char indices[16] = { 0, 4, 8, 12, 16, 20, 24, 28, 2, 6, 10, 14, 18, 22, 26, 30 };
__m128i vIndices, vSelect, vTable0, vTable1, vResult0, vResult1, vResult;
vIndices = _mm_loadu_si128((__m128i *)&indices);
vSelect = _mm_cmpgt_epi8(vIndices, _mm_set1_epi8(15));
vTable0 = _mm_loadu_si128((__m128i *)&lookup_table[0]);
vTable1 = _mm_loadu_si128((__m128i *)&lookup_table[16]);
vResult0 = _mm_shuffle_epi8(vTable0, vIndices);
vResult1 = _mm_shuffle_epi8(vTable1, vIndices);
vResult = _mm_blendv_epi8(vResult0, vResult1, vSelect);
_mm_storeu_si128((__m128i *)result, vResult);
for (int i = 0; i < 16; i++)
    printf("%d ", result[i]);
printf("\n");
return 0;
}
Compile and test:
$ gcc -Wall test_lut.c -msse4 && ./a.out
0 4 8 12 16 20 24 28 2 6 10 14 18 22 26 30
Upvotes: 7