xen/arm64: Assembly optimized bitops from Linux
authorIan Campbell <ian.campbell@citrix.com>
Fri, 19 Jul 2013 15:20:08 +0000 (16:20 +0100)
committerIan Campbell <ian.campbell@citrix.com>
Thu, 22 Aug 2013 14:47:44 +0000 (15:47 +0100)
commit79474832b5e068aa4d131f8e586b0fa674a7ee3e
treed888e28a0b8bcaa5f04c3c5eae4e6467b64b1f52
parent7f4f85a70d645bff93230ab86c0276a5eb67c3bc
xen/arm64: Assembly optimized bitops from Linux

This patch replaces the previous hashed-lock implementation of bitops with
assembly-optimized ones taken from Linux v3.10-rc4.

The Linux-derived assembly only supports 8-byte-aligned bitmaps (which under
Linux are unsigned long * rather than our void *). We do actually have uses of
4-byte alignment (e.g. the bitmaps in struct xmem_pool) which trigger alignment
faults.

Therefore adjust the assembly to work in 4-byte increments, which involved:
    - the bit offset is now bits 4:0 => mask with #31 not #63
    - use wN registers, not xN, in the load/modify/store loop.

There is no need to adjust the shift used to calculate the word offset; the
difference is already accounted for in the #63->#31 change.

NB: Xen's build system cannot cope with the change from a .c to a .S file;
remove xen/arch/arm/arm64/lib/.bitops.o.d or clean your build tree.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
xen/arch/arm/arm64/lib/bitops.S [new file with mode: 0644]
xen/arch/arm/arm64/lib/bitops.c [deleted file]
xen/include/asm-arm/arm64/bitops.h