This comes from the Linux patches 15e7e5c1ebf5 for arm32 and 4ecf7ccb1973 for
arm64 by Will Deacon and Catalin Marinas respectively. The Linux commit
message says:
    An exclusive store instruction may fail for reasons other than lock
    contention (e.g. a cache eviction during the critical section) so, in
    line with other architectures using similar exclusive instructions
    (alpha, mips, powerpc), retry the trylock operation if the lock appears
    to be free but the strex reported failure.
I have observed this due to register_cpu_notifier containing:
    if ( !spin_trylock(&cpu_add_remove_lock) )
        BUG(); /* Should never fail as we are called only during boot. */
which was spuriously failing.
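
Not part of the patch, but the same failure mode can be sketched in portable C: the "weak" form of GCC's __atomic_compare_exchange_n is allowed to fail spuriously, much as strex can after a cache eviction, so a trylock built on it needs the same retry-while-the-lock-still-looks-free loop that this patch adds. A minimal sketch under that analogy (demo_lock_t and demo_trylock are illustrative names, not Xen code):

    #include <stdbool.h>

    /* Lock word: 0 = free, 1 = held, matching what the asm below assumes. */
    typedef struct { unsigned int lock; } demo_lock_t;

    static bool demo_trylock(demo_lock_t *l)
    {
        unsigned int expected;

        do {
            expected = 0;
            /* weak = true: the exchange may fail spuriously, like strex. */
            if (__atomic_compare_exchange_n(&l->lock, &expected, 1,
                                            true, __ATOMIC_ACQUIRE,
                                            __ATOMIC_RELAXED))
                return true;          /* lock acquired */
        } while (expected == 0);      /* looked free but the store failed: retry */

        return false;                 /* genuinely contended */
    }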
The ARMv8 variant is taken directly from the Linux patch. For v7 I had to
reimplement since we don't currently use ticket locks.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
static always_inline int _raw_spin_trylock(raw_spinlock_t *lock)
{
-    unsigned long tmp;
-
-    __asm__ __volatile__(
-    "   ldrex   %0, [%1]\n"
-    "   teq     %0, #0\n"
-    "   strexeq %0, %2, [%1]"
-    : "=&r" (tmp)
-    : "r" (&lock->lock), "r" (1)
-    : "cc");
-
-    if (tmp == 0) {
+    unsigned long contended, res;
+
+    do {
+        __asm__ __volatile__(
+        "   ldrex   %0, [%2]\n"
+        "   teq     %0, #0\n"
+        "   strexeq %1, %3, [%2]\n"
+        "   movne   %1, #0\n"
+        : "=&r" (contended), "=r" (res)
+        : "r" (&lock->lock), "r" (1)
+        : "cc");
+    } while (res);
+
+    if (!contended) {
        smp_mb();
        return 1;
    } else {
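
For reference, here is the new v7 asm with each step annotated (the same instructions as in the hunk above; the comments are added here purely for explanation, and %0 is contended, %1 is res, %2 is &lock->lock, %3 is the constant 1):

    "   ldrex   %0, [%2]\n"     /* sample the lock word, open the exclusive monitor */
    "   teq     %0, #0\n"       /* is the lock free?                                */
    "   strexeq %1, %3, [%2]\n" /* if free, try to store 1; res = 0 stored, 1 failed */
    "   movne   %1, #0\n"       /* if it was held, clear res so the loop exits      */

The "} while (res)" therefore spins only when the exclusive store failed on a lock that was observed free, and "contended" alone decides the return value. The ARMv8 hunk that follows does the same retry inside the asm itself: the new "2:" label and "cbnz %w0, 2b" branch back to re-issue the ldaxr/stxr pair whenever the store-exclusive fails, while the existing "cbnz %w0, 1f" still bails out immediately when the lock is genuinely held.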
unsigned int tmp;
asm volatile(
- " ldaxr %w0, %1\n"
+ "2: ldaxr %w0, %1\n"
" cbnz %w0, 1f\n"
" stxr %w0, %w2, %1\n"
+ " cbnz %w0, 2b\n"
"1:\n"
: "=&r" (tmp), "+Q" (lock->lock)
: "r" (1)