Question

The following loop is executed hundreds of times.
elma and elmc are both unsigned long (64-bit) arrays, and so are res1 and res2.

unsigned long simdstore[2];  
__m128i *p, simda, simdb, simdc;  
p = (__m128i *) simdstore;  

for (i = 0; i < _polylen; i++)  
{  

    u1 = (elma[i] >> l) & 15;  
    u2 = (elmc[i] >> l) & 15;  
    for (k = 0; k < 20; k++)  
    {     

    1.   //res1[i + k] ^= _mulpre1[u1][k];
    2.   //res2[i + k] ^= _mulpre2[u2][k];
    3.   _mm_prefetch ((const void *) &_mulpre2[u2][k], _MM_HINT_T0);
    4.   _mm_prefetch ((const void *) &_mulpre1[u1][k], _MM_HINT_T0);
    5.   simda = _mm_set_epi64x (_mulpre2[u2][k], _mulpre1[u1][k]);
    6.   _mm_prefetch ((const void *) &res2[i + k], _MM_HINT_T0);
    7.   _mm_prefetch ((const void *) &res1[i + k], _MM_HINT_T0);
    8.   simdb = _mm_set_epi64x (res2[i + k], res1[i + k]);
    9.   simdc = _mm_xor_si128 (simda, simdb);
    10.  _mm_store_si128 (p, simdc);
    11.  res1[i + k] = simdstore[0];
    12.  res2[i + k] = simdstore[1];
    }     
}  

Within the for loop, the scalar version of the code (commented out) runs twice as fast as the SIMD code. The cachegrind output (instruction reads) for the numbered lines above is shown below.

Line 1: 668,460,000 2 2
Line 2: 668,460,000 1 1
Line 3: 89,985,000 1 1
Line 4: 89,985,000 1 1
Line 5: 617,040,000 2 2
Line 6: 44,992,500 0 0
Line 7: 44,992,500 0 0
Line 8: 539,910,000 1 1
Line 9: 128,550,000 0 0
Line 10: . . .
Line 11: 205,680,000 0 0
Line 12: 205,680,000 0 0

From the figures above, it appears that the commented-out scalar code requires significantly fewer instructions than the SIMD code.

How can this code be made faster?


Solution

Take out the _mm_prefetch intrinsics - they achieve nothing in this context and may even be hurting performance. Prefetch is only of benefit if (a) you have memory bandwidth to spare and (b) you can issue the prefetch hint several hundred clock cycles ahead of when the data is actually needed. I think neither (a) nor (b) is true in your case.
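
For reference, a minimal sketch of the inner loop with the prefetch hints removed (everything else is unchanged from the code in the question):

    for (k = 0; k < 20; k++)
    {
        /* same as before, just without the _mm_prefetch calls */
        simda = _mm_set_epi64x (_mulpre2[u2][k], _mulpre1[u1][k]);
        simdb = _mm_set_epi64x (res2[i + k], res1[i + k]);
        simdc = _mm_xor_si128 (simda, simdb);
        _mm_store_si128 (p, simdc);    /* spill through the stack buffer */
        res1[i + k] = simdstore[0];
        res2[i + k] = simdstore[1];
    }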

OTHER TIPS

Your performance problem is this:

_mm_set_epi64x (_mulpre2[u2][k], _mulpre1[u1][k]);

The _mm_set(a, b, c, d) class of intrinsics is very slow. Only the single-parameter set intrinsics (aka broadcast) are fast.

I looked at what they do in assembly code.

They basically create a temporary array on the stack, move your two integers from the multidimensional arrays they currently reside in into that stack array using normal memory moves (mov DWORD), and then load the pair from the stack array using an XMM memory move (mov XMMWORD).
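
In C terms, that sequence looks roughly like the sketch below (illustrative only, not verbatim compiler output; the temporary tmp is an invented name):

    /* Rough C-level picture of what the _mm_set_epi64x line expands to when
       the operands are not already in registers (sketch, not compiler output) */
    unsigned long tmp[2];                        /* scratch pair on the stack   */
    tmp[0] = _mulpre1[u1][k];                    /* ordinary 64-bit memory move */
    tmp[1] = _mulpre2[u2][k];                    /* ordinary 64-bit memory move */
    simda  = _mm_loadu_si128 ((__m128i *) tmp);  /* one 128-bit load from stack */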

The scalar version goes directly from memory to registers. FASTER!

You see, the overhead comes from the fact that the XMM register can only be loaded and stored 128 bits at a time, so your program is first assembling the 128 bits in another area of memory before loading them.

If there's a way to move 64-bit values directly between a normal register and an XMM register, I'm still looking for it.

To get a speed boost from using SSE/XMM registers, your data will probably need to be laid out in order in memory already. Loading out-of-order data into an XMM register is only worth it if you can do several XMM operations per out-of-order load. Here you're doing a single XOR operation.
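
As an illustration of that last point, one option is to interleave res1 and res2 into a single array of pairs so the result side can be handled with straight 128-bit loads and stores. This is only a sketch under an assumed layout: res_pair is an invented name for an unsigned long [][2] array standing in for res1/res2, assumed 16-byte aligned; simda still has to be gathered because u1 and u2 index different rows.

    /* Hypothetical layout: res_pair[j][0] plays the role of res1[j] and
       res_pair[j][1] the role of res2[j]; res_pair assumed 16-byte aligned. */
    for (k = 0; k < 20; k++)
    {
        simda = _mm_set_epi64x (_mulpre2[u2][k], _mulpre1[u1][k]);
        simdb = _mm_load_si128 ((__m128i *) res_pair[i + k]);   /* one 128-bit load  */
        simdc = _mm_xor_si128 (simda, simdb);
        _mm_store_si128 ((__m128i *) res_pair[i + k], simdc);   /* one 128-bit store */
    }

That removes the gather for simdb and the round trip through simdstore, which is where a large share of the instruction reads in the cachegrind listing come from (lines 8, 11 and 12).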

Licensed under: CC-BY-SA with attribution