
Re: Atomic test&exchange on alpha - assembler code for verification



Hi, 

Here is a cut and paste from /usr/src/linux/include/asm-alpha/system.h:

static inline unsigned long
__cmpxchg_u32(volatile int *m, int old, int new)
{
        unsigned long prev, cmp;

        __asm__ __volatile__(
        "1:     ldl_l %0,%5\n"
        "       cmpeq %0,%3,%1\n"
        "       beq %1,2f\n"
        "       mov %4,%1\n"
        "       stl_c %1,%2\n"
        "       beq %1,3f\n"
#ifdef CONFIG_SMP
        "       mb\n"
#endif
        "2:\n"
        ".subsection 2\n"
        "3:     br 1b\n"
        ".previous"
        : "=&r"(prev), "=&r"(cmp), "=m"(*m)
        : "r"((long) old), "r"(new), "m"(*m) : "memory");

        return prev;
}


For the rest (e.g. the 64-bit variant, __cmpxchg_u64, and the cmpxchg() wrapper) see the same header file. 
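
Since your _svmt_word is 64-bit you want the ldq_l/stq_c pair rather than
ldl_l/stl_c. Completely untested, but adapting the kernel code above to
your function (the _svmh/_svmt names and JNI_TRUE/JNI_FALSE are yours,
the rest is only a rough sketch along the kernel's lines) would look
something like this:

static inline jboolean
_svmh_compare_and_swap (volatile _svmt_word *pword,
                        _svmt_word old_value, _svmt_word new_value)
{
        unsigned long prev, tmp;

        __asm__ __volatile__(
        "1:     ldq_l %0,%5\n"      /* prev = *pword (load-locked) */
        "       cmpeq %0,%3,%1\n"   /* tmp = (prev == old_value) */
        "       beq %1,2f\n"        /* no match -> done, store nothing */
        "       mov %4,%1\n"        /* tmp = new_value */
        "       stq_c %1,%2\n"      /* conditional store; tmp = 1 if it stuck */
        "       beq %1,3f\n"        /* lost the reservation -> retry */
        "       mb\n"               /* order later accesses (SMP) */
        "2:\n"
        ".subsection 2\n"
        "3:     br 1b\n"            /* out-of-line retry loop */
        ".previous"
        : "=&r"(prev), "=&r"(tmp), "=m"(*pword)
        : "r"(old_value), "r"(new_value), "m"(*pword)
        : "memory");

        return (prev == old_value) ? JNI_TRUE : JNI_FALSE;
}

A few notes on your version: the leading mb should not be needed, it is
the mb after a successful stq_c that gives you the ordering guarantee on
SMP; stq_c writes its success flag back into the register it stores from,
so "stq_c %3,%4" clobbers the input register holding new_value; a failed
stq_c (which can happen spuriously) must be retried, otherwise you can
return success without actually having stored anything; and you want a
"memory" clobber so gcc does not cache *pword across the asm.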

-aneesh 




On Wed, 2002-08-14 at 16:55, Grzegorz Prokopski wrote:
> Hello!
> 
> I am porting SableVM (a JVM) to the alpha platform. I am almost finished,
> but as I don't really know alpha assembler very well, and Google can't
> guarantee that what I created works as I intended, I decided to ask here.
> 
> In the app there is a small piece of assembler code which does the following:
> 
> function test&exchange(volatile *pword, old_value, new_value)
> {
>   if (*pword == old_value)
>     {
>       *pword = new_value;
>       return 1;
>     }
>   else
>     {
>       return 0;
>     }
> }
> 
> But it cannot be written in C. This part of the operation:
> 
> if (*pword == old_value)
>      {
>        *pword = new_value;
> 
> must:
> - be atomic / non-preemptible "in the middle" by
>   exceptions/interrupts/etc.
> - make sure that other processors see exactly the same data
>   if they also try to do that operation (i.e. ensure coherency between
>   main memory, the processor cache and, it seems, the register while
>   the operation is being done)
> 
> I spent all evening and night digging into many sources, from the gcc
> inline assembler documentation and alpha assembler guides to the bsd-alpha
> and postgresql mailing list archives.
> 
> And finally - here's what I wrote:
> 
> static inline jboolean
> _svmh_compare_and_swap (volatile _svmt_word *pword, _svmt_word
> 		 old_value, _svmt_word new_value) {
>   register int result, tmp;
> 
>   __asm__ __volatile__ (
> "1:  mb\n\t"                    // make sure (may be unneded?)
> "    ldq_l      %1,%4\n\t"      // load *pword into tmp (reg,<= mem)
> // does above make sure main memory->cache->register are coherent?
> "    cmpeq      %1,%5,%0\n\t"   // result = (*pword == tmp)
> "    beq        %0,2f\n\t"      // nothing to do if they differ we
> // get 0 just jump away (what happens to processor lock then?)
> "    stq_c      %3,%4\n\t"      // *pword = new_value (reg,=> mem)
> "    mb\n\t"                    // make sure everything was put back to
> // main mem
> "2:  nop"
>          : "=&r"(result), "=&r"(tmp), "=m"(*pword)
>          : "r" (new_value), "m" (*pword), "r" (old_value));
> 
>   return result ? JNI_TRUE : JNI_FALSE;
> }
> 
> _svmt_word is 64 bit (on alpha)
> 
> Please verify, comment and elaborate if you can.
> Cc: me on replies. Thanks.
> 
> Regards
> 
> 					Grzegorz B. Prokopski
> 
> PS: I have never written alpha assembler before, but I have written for
> i386, i8051 and Motorola HC11.
> PPS: If nobody is able to verify this code (can that happen?), please
> point me to where I should ask.
> 



