- android-platform-external-boringssl (10.0.0+r36-1+rpi1) bullseye-staging; urgency=medium
++android-platform-external-boringssl (13~preview2-7+rpi1) bookworm-staging; urgency=medium
+
++ [changes brought forward from 10.0.0+r36-1+rpi1 by Peter Michael Green <plugwash@raspbian.org> at Tue, 19 Jan 2021 16:01:40 +0000]
+ * Mark asm as armv6 to avoid setting off armv7 contamination checker.
+ * Set __ARM_MAX_ARCH__ to 6.
+ * Disable HWAES define.
+ * Disable GHASH_ASM_ARM define.
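+ (HWAES and GHASH_ASM_ARM otherwise enable runtime dispatch to ARMv8
+ AES/PMULL routines; see the internal.h hunks in armv6.patch below.)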
+
- -- Peter Michael Green <plugwash@raspbian.org> Tue, 19 Jan 2021 16:01:40 +0000
++ -- Peter Michael Green <plugwash@raspbian.org> Thu, 14 Jul 2022 13:55:23 +0000
++
+ android-platform-external-boringssl (13~preview2-7) unstable; urgency=medium
+
+ * Team upload.
+ * [again] Use lld as linker on available platforms.
+
+ -- Roger Shimizu <rosh@debian.org> Tue, 28 Jun 2022 02:04:55 +0900
+
+ android-platform-external-boringssl (13~preview2-6) unstable; urgency=medium
+
+ * Team upload.
+ * Use lld as linker on available platforms.
+ * debian/patches/0[12]: Add patch description.
+ * d/source/lintian-overrides: Adapt new rule to source filename.
+ * Add debian/upstream/metadata.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 27 Jun 2022 19:38:58 +0900
+
+ android-platform-external-boringssl (13~preview2-5) unstable; urgency=medium
+
+ * Team upload.
+ * debian/*.mk: Fix ftbfs for mips*el.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 19 Jun 2022 02:14:20 +0900
+
+ android-platform-external-boringssl (13~preview2-4) unstable; urgency=medium
+
+ * Team upload.
+ * Add patch from upstream tag platform-tools-33.0.1.
+ * Move -pie from debian/rules to debian/*.mk executable build.
+ * [ubuntu] debian/rules: ignore dh_dwz error.
+ * Use lld as linker when available.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 19 Jun 2022 00:21:10 +0900
+
+ android-platform-external-boringssl (13~preview2-3) unstable; urgency=medium
+
+ * Team upload.
+ * d/rules: Move common CPPFLAGS from d/*.mk to d/rules.
+ * d/control: Move android-libboringssl-dev from Architecture: all to
+ each supported architecture.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 13 Jun 2022 00:52:06 +0900
+
+ android-platform-external-boringssl (13~preview2-2) unstable; urgency=medium
+
+ * Team upload.
+ * debian/*.mk: Fix ftbfs for armel.
+
+ -- Roger Shimizu <rosh@debian.org> Tue, 07 Jun 2022 18:35:21 +0900
+
+ android-platform-external-boringssl (13~preview2-1) unstable; urgency=medium
+
+ * Team upload.
+ * debian/*.mk: Using the "gnu11" variant means we don't need _XOPEN_SOURCE.
+ Additionally, using C11 makes the faster refcount implementation
+ available. This setting is from upstream.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 06 Jun 2022 21:09:55 +0900
+
+ android-platform-external-boringssl (13~preview2-1~exp1) unstable; urgency=medium
+
+ * Team upload.
+ * New upstream version 13~preview2
+ * debian/patches: Refresh patches.
+ * debian/sources.mk: Update by script.
+
+ -- Roger Shimizu <rosh@debian.org> Wed, 01 Jun 2022 04:00:31 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-2) unstable; urgency=medium
+
+ * Team upload.
+ * debian/tests/control: Limit architecture.
+
+ -- Roger Shimizu <rosh@debian.org> Tue, 31 May 2022 01:41:52 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1) unstable; urgency=medium
+
+ * Team upload.
+ * Upload to unstable.
+ * debian/rules: Build and test only for -arch build.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 30 May 2022 19:06:02 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp10) experimental; urgency=medium
+
+ * Team upload.
+ * debian/crypto_test.mk: Fall back to gcc for mips64el.
+ Thanks to Adrian Bunk for fixing this test for mips64el.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 29 May 2022 16:50:06 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp9) experimental; urgency=medium
+
+ * Team upload.
+ * debian/control: Update Depends version.
+ * debian/tests/control: Add autopkgtest test.
+
+ -- Roger Shimizu <rosh@debian.org> Sat, 28 May 2022 18:38:29 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp8) experimental; urgency=medium
+
+ * Team upload.
+ * debian/rules:
+ - Disable building bssl-tools for Hurd.
+ - Add -pie to LDFLAGS to enhance the hardening.
+ * debian/control:
+ - Add android-boringssl package to include the tool.
+ * debian/copyright & debian/source/lintian-overrides:
+ - Adapt with new upstream.
+
+ -- Roger Shimizu <rosh@debian.org> Thu, 26 May 2022 01:00:02 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp7) experimental; urgency=medium
+
+ * Team upload.
+ * debian/tool_test.mk and debian/rules:
+ - Add bssl-tool to build.
+ * debian/rules:
+ - Still run failing test for mips64el, just ignore the result.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 23 May 2022 21:01:59 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp6) experimental; urgency=medium
+
+ * Team upload.
+ * Disable crypto_test for mips64el temporarily.
+ * Split test_support out as an independent library.
+ * d/rules:
+ - Make makefile rules dependency-driven.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 16 May 2022 23:23:41 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp5) experimental; urgency=medium
+
+ * Team upload.
+ * Update eureka.mk and source it in debian/*.mk.
+ * d/{crypto,ssl}_test.mk:
+ - Link with atomic for armel.
+ * d/patches:
+ - Update 01 patch to fix x32.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 16 May 2022 02:25:59 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp4) experimental; urgency=medium
+
+ * Team upload.
+ * debian/patches:
+ - Update patch to fix sh4 and x32.
+ * Add debian/{crypto,ssl}_test.mk to test built libraries.
+ * d/lib{crypto,ssl}.mk:
+ - Import source list from eureka.mk.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 15 May 2022 19:24:19 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp3) experimental; urgency=medium
+
+ * Team upload.
+ * debian/control: Add all little-endian architectures, to check the
+ buildd result.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 15 May 2022 03:27:01 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp2) experimental; urgency=medium
+
+ * Team upload.
+ * Try to build on new architectures (little endian): ia64, riscv64, sh4, x32.
+
+ -- Roger Shimizu <rosh@debian.org> Sun, 15 May 2022 01:59:06 +0900
+
+ android-platform-external-boringssl (12.1.0+r5-1~exp1) experimental; urgency=medium
+
+ * New upstream version 12.1.0+r5
+ * debian/control:
+ - Fix multiarch issues.
+ - Add ppc64el support.
+ * debian/rules:
+ - Use clang as default compiler.
+
+ -- Roger Shimizu <rosh@debian.org> Sat, 14 May 2022 02:09:14 +0900
+
+ android-platform-external-boringssl (10.0.0+r36-2~exp1) experimental; urgency=medium
+
+ * Team upload.
+
+ [ Hans-Christoph Steiner ]
+ * gitlab-ci: exclude tags, pristine-tar, upstream
+
+ [ Roger Shimizu ]
+ * debian/control:
+ - Add mips*el to build.
+
+ -- Roger Shimizu <rosh@debian.org> Mon, 11 Jan 2021 03:31:07 +0900
android-platform-external-boringssl (10.0.0+r36-1) unstable; urgency=medium
--- /dev/null
- Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/aes-armv4.S
+Description: Mark asm as armv6 to avoid setting off armv7 contamination checker.
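+ The ".arch armv7-a" directives in the generated assembly are there to
+ silence binutils warnings about ARMv8-deprecated IT instructions (see
+ the comments in the files below); the ARMv7-only NEON paths sit behind
+ "#if __ARM_MAX_ARCH__>=7" guards. With __ARM_MAX_ARCH__ lowered to 6
+ in arm_arch.h those paths are compiled out, so lowering the directives
+ to ".arch armv6" should cost nothing on an armv6 baseline while
+ keeping the contamination checker quiet.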
+Author: Peter Michael Green <plugwash@raspbian.org>
+
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/chacha/chacha-armv4.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/chacha/chacha-armv4.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/chacha/chacha-armv4.S
+@@ -16,7 +16,7 @@
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__) || defined(__clang__)
+@@ -808,7 +808,7 @@ ChaCha20_ctr32:
+ ldmia sp!,{r4,r5,r6,r7,r8,r9,r10,r11,pc}
+ .size ChaCha20_ctr32,.-ChaCha20_ctr32
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type ChaCha20_neon,%function
- --- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/aes-armv4.S
- +++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/aes-armv4.S
- @@ -61,7 +61,7 @@
- @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
- @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions. (ARMv8 AES
- @ instructions are in aesv8-armx.pl.)
++Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/vpaes-armv7.S
+===================================================================
-
- .text
- #if defined(__thumb2__) && !defined(__APPLE__)
++--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/vpaes-armv7.S
+++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/vpaes-armv7.S
++@@ -64,1 +64,1 @@
+-.arch armv7-a
++.arch armv6
- Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/aes-armv4.pl
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/aesv8-armx32.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/aesv8-armx32.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/aesv8-armx32.S
+@@ -16,7 +16,7 @@
+
+ #if __ARM_MAX_ARCH__>=7
+ .text
+-.arch armv7-a @ don't confuse not-so-latest binutils with argv8 :-)
++.arch armv6 @ don't confuse not-so-latest binutils with argv8 :-)
+ .fpu neon
+ .code 32
+ #undef __thumb2__
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/armv4-mont.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/armv4-mont.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/armv4-mont.S
+@@ -16,7 +16,7 @@
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__)
+@@ -210,7 +210,7 @@ bn_mul_mont:
+ #endif
+ .size bn_mul_mont,.-bn_mul_mont
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type bn_mul8x_mont_neon,%function
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/bsaes-armv7.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/bsaes-armv7.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/bsaes-armv7.S
+@@ -84,7 +84,7 @@
+ #endif
+
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .text
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/ghash-armv4.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/ghash-armv4.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/ghash-armv4.S
+@@ -17,7 +17,7 @@
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions. (ARMv8 PMULL
+ @ instructions are in aesv8-armx.pl.)
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__) || defined(__clang__)
+@@ -367,7 +367,7 @@ gcm_gmult_4bit:
+ #endif
+ .size gcm_gmult_4bit,.-gcm_gmult_4bit
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .globl gcm_init_neon
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha1-armv4-large.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/sha1-armv4-large.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha1-armv4-large.S
+@@ -506,7 +506,7 @@ sha1_block_data_order:
+ .align 2
+ .align 5
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type sha1_block_data_order_neon,%function
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha256-armv4.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/sha256-armv4.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha256-armv4.S
+@@ -67,7 +67,7 @@
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors. It does have ARMv8-only code, but those
+ @ instructions are manually-encoded. (See unsha256.)
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__)
+@@ -1892,7 +1892,7 @@ sha256_block_data_order:
+ #endif
+ .size sha256_block_data_order,.-sha256_block_data_order
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .globl sha256_block_data_order_neon
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha512-armv4.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/fipsmodule/sha512-armv4.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/fipsmodule/sha512-armv4.S
+@@ -79,7 +79,7 @@
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ #ifdef __ARMEL__
+ # define LO 0
+@@ -550,7 +550,7 @@ sha512_block_data_order:
+ #endif
+ .size sha512_block_data_order,.-sha512_block_data_order
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .globl sha512_block_data_order_neon
+Index: android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/test/trampoline-armv4.S
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/linux-arm/crypto/test/trampoline-armv4.S
++++ android-platform-external-boringssl-10.0.0+r36/linux-arm/crypto/test/trampoline-armv4.S
+@@ -14,7 +14,7 @@
+ #endif
+ .syntax unified
+
+-.arch armv7-a
++.arch armv6
+ .fpu vfp
+
+ .text
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/chacha/asm/chacha-armv4.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/chacha/asm/chacha-armv4.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/chacha/asm/chacha-armv4.pl
+@@ -173,7 +173,7 @@ $code.=<<___;
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__) || defined(__clang__)
+@@ -665,7 +665,7 @@ my ($a,$b,$c,$d,$t)=@_;
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type ChaCha20_neon,%function
- --- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/aes/asm/aes-armv4.pl
- +++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/aes-armv4.pl
- @@ -79,7 +79,7 @@ $code=<<___;
- @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
- @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions. (ARMv8 AES
- @ instructions are in aesv8-armx.pl.)
- -.arch armv7-a
- +.arch armv6
-
- .text
- #if defined(__thumb2__) && !defined(__APPLE__)
++Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/vpaes-armv7.pl
+===================================================================
++--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/aes/asm/vpaes-armv7.pl
+++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/vpaes-armv7.pl
++@@ -82,1 +82,1 @@ $code=<<___;
++-.arch armv7-a
+++.arch armv6
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/aesv8-armx.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/aes/asm/aesv8-armx.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/aesv8-armx.pl
+@@ -60,7 +60,7 @@ $code=<<___;
+ ___
+ $code.=".arch armv8-a+crypto\n" if ($flavour =~ /64/);
+ $code.=<<___ if ($flavour !~ /64/);
+-.arch armv7-a // don't confuse not-so-latest binutils with argv8 :-)
++.arch armv6 // don't confuse not-so-latest binutils with argv8 :-)
+ .fpu neon
+ .code 32
+ #undef __thumb2__
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/bsaes-armv7.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/aes/asm/bsaes-armv7.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/asm/bsaes-armv7.pl
+@@ -725,7 +725,7 @@ $code.=<<___;
+ #endif
+
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .text
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/bn/asm/armv4-mont.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/bn/asm/armv4-mont.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/bn/asm/armv4-mont.pl
+@@ -99,7 +99,7 @@ $code=<<___;
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__)
+@@ -306,7 +306,7 @@ my ($tinptr,$toutptr,$inner,$outer,$bnpt
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type bn_mul8x_mont_neon,%function
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/modes/asm/ghash-armv4.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/modes/asm/ghash-armv4.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/modes/asm/ghash-armv4.pl
+@@ -145,7 +145,7 @@ $code=<<___;
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions. (ARMv8 PMULL
+ @ instructions are in aesv8-armx.pl.)
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__) || defined(__clang__)
+@@ -429,7 +429,7 @@ ___
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .global gcm_init_neon
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha1-armv4-large.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/sha/asm/sha1-armv4-large.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha1-armv4-large.pl
+@@ -525,7 +525,7 @@ sub Xloop()
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .type sha1_block_data_order_neon,%function
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha256-armv4.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/sha/asm/sha256-armv4.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha256-armv4.pl
+@@ -184,7 +184,7 @@ $code=<<___;
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors. It does have ARMv8-only code, but those
+ @ instructions are manually-encoded. (See unsha256.)
+-.arch armv7-a
++.arch armv6
+
+ .text
+ #if defined(__thumb2__)
+@@ -475,7 +475,7 @@ sub body_00_15 () {
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .global sha256_block_data_order_neon
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha512-armv4.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/sha/asm/sha512-armv4.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/sha/asm/sha512-armv4.pl
+@@ -210,7 +210,7 @@ $code=<<___;
+
+ @ Silence ARMv8 deprecated IT instruction warnings. This file is used by both
+ @ ARMv7 and ARMv8 processors and does not use ARMv8 instructions.
+-.arch armv7-a
++.arch armv6
+
+ #ifdef __ARMEL__
+ # define LO 0
+@@ -606,7 +606,7 @@ ___
+
+ $code.=<<___;
+ #if __ARM_MAX_ARCH__>=7
+-.arch armv7-a
++.arch armv6
+ .fpu neon
+
+ .global sha512_block_data_order_neon
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/test/asm/trampoline-armv4.pl
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/test/asm/trampoline-armv4.pl
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/test/asm/trampoline-armv4.pl
+@@ -49,7 +49,7 @@ my ($func, $state, $argv, $argc) = ("r0"
+ my $code = <<____;
+ .syntax unified
+
+-.arch armv7-a
++.arch armv6
+ .fpu vfp
+
+ .text
+Index: android-platform-external-boringssl-10.0.0+r36/src/include/openssl/arm_arch.h
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/include/openssl/arm_arch.h
++++ android-platform-external-boringssl-10.0.0+r36/src/include/openssl/arm_arch.h
+@@ -100,7 +100,7 @@
+
+ // Even when building for 32-bit ARM, support for aarch64 crypto instructions
+ // will be included.
+-#define __ARM_MAX_ARCH__ 8
++#define __ARM_MAX_ARCH__ 6
+
+ // ARMV7_NEON is true when a NEON unit is present in the current CPU.
+ #define ARMV7_NEON (1 << 0)
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/internal.h
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/aes/internal.h
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/aes/internal.h
+@@ -42,7 +42,7 @@ OPENSSL_INLINE int vpaes_capable(void) {
+ return (OPENSSL_ia32cap_get()[1] & (1 << (41 - 32))) != 0;
+ }
+
+-#elif defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64)
++#elif false
+ #define HWAES
+
+ OPENSSL_INLINE int hwaes_capable(void) { return CRYPTO_is_ARMv8_AES_capable(); }
+Index: android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/modes/internal.h
+===================================================================
+--- android-platform-external-boringssl-10.0.0+r36.orig/src/crypto/fipsmodule/modes/internal.h
++++ android-platform-external-boringssl-10.0.0+r36/src/crypto/fipsmodule/modes/internal.h
+@@ -314,7 +314,7 @@ void gcm_ghash_4bit_mmx(uint64_t Xi[2],
+ size_t len);
+ #endif // OPENSSL_X86
+
+-#elif defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64)
++#elif false
+ #define GHASH_ASM_ARM
+ #define GCM_FUNCREF_4BIT
+
+ 01-Add-new-Arch-ia64-riscv64-sh4-x32.patch
+ 02-sources-mk.patch
+ Sync-to-81502beeddc5f116d44d0898c.patch
+armv6.patch
#else
.code 32
#endif
-
- .type rem_4bit,%object
- .align 5
- rem_4bit:
- .short 0x0000,0x1C20,0x3840,0x2460
- .short 0x7080,0x6CA0,0x48C0,0x54E0
- .short 0xE100,0xFD20,0xD940,0xC560
- .short 0x9180,0x8DA0,0xA9C0,0xB5E0
- .size rem_4bit,.-rem_4bit
-
- .type rem_4bit_get,%function
- rem_4bit_get:
- #if defined(__thumb2__)
- adr r2,rem_4bit
- #else
- sub r2,pc,#8+32 @ &rem_4bit
- #endif
- b .Lrem_4bit_got
- nop
- nop
- .size rem_4bit_get,.-rem_4bit_get
-
- .globl gcm_ghash_4bit
- .hidden gcm_ghash_4bit
- .type gcm_ghash_4bit,%function
- .align 4
- gcm_ghash_4bit:
- #if defined(__thumb2__)
- adr r12,rem_4bit
- #else
- sub r12,pc,#8+48 @ &rem_4bit
- #endif
- add r3,r2,r3 @ r3 to point at the end
- stmdb sp!,{r3,r4,r5,r6,r7,r8,r9,r10,r11,lr} @ save r3/end too
-
- ldmia r12,{r4,r5,r6,r7,r8,r9,r10,r11} @ copy rem_4bit ...
- stmdb sp!,{r4,r5,r6,r7,r8,r9,r10,r11} @ ... to stack
-
- ldrb r12,[r2,#15]
- ldrb r14,[r0,#15]
- .Louter:
- eor r12,r12,r14
- and r14,r12,#0xf0
- and r12,r12,#0x0f
- mov r3,#14
-
- add r7,r1,r12,lsl#4
- ldmia r7,{r4,r5,r6,r7} @ load Htbl[nlo]
- add r11,r1,r14
- ldrb r12,[r2,#14]
-
- and r14,r4,#0xf @ rem
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nhi]
- add r14,r14,r14
- eor r4,r8,r4,lsr#4
- ldrh r8,[sp,r14] @ rem_4bit[rem]
- eor r4,r4,r5,lsl#28
- ldrb r14,[r0,#14]
- eor r5,r9,r5,lsr#4
- eor r5,r5,r6,lsl#28
- eor r6,r10,r6,lsr#4
- eor r6,r6,r7,lsl#28
- eor r7,r11,r7,lsr#4
- eor r12,r12,r14
- and r14,r12,#0xf0
- and r12,r12,#0x0f
- eor r7,r7,r8,lsl#16
-
- .Linner:
- add r11,r1,r12,lsl#4
- and r12,r4,#0xf @ rem
- subs r3,r3,#1
- add r12,r12,r12
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nlo]
- eor r4,r8,r4,lsr#4
- eor r4,r4,r5,lsl#28
- eor r5,r9,r5,lsr#4
- eor r5,r5,r6,lsl#28
- ldrh r8,[sp,r12] @ rem_4bit[rem]
- eor r6,r10,r6,lsr#4
- #ifdef __thumb2__
- it pl
- #endif
- ldrplb r12,[r2,r3]
- eor r6,r6,r7,lsl#28
- eor r7,r11,r7,lsr#4
-
- add r11,r1,r14
- and r14,r4,#0xf @ rem
- eor r7,r7,r8,lsl#16 @ ^= rem_4bit[rem]
- add r14,r14,r14
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nhi]
- eor r4,r8,r4,lsr#4
- #ifdef __thumb2__
- it pl
- #endif
- ldrplb r8,[r0,r3]
- eor r4,r4,r5,lsl#28
- eor r5,r9,r5,lsr#4
- ldrh r9,[sp,r14]
- eor r5,r5,r6,lsl#28
- eor r6,r10,r6,lsr#4
- eor r6,r6,r7,lsl#28
- #ifdef __thumb2__
- it pl
- #endif
- eorpl r12,r12,r8
- eor r7,r11,r7,lsr#4
- #ifdef __thumb2__
- itt pl
- #endif
- andpl r14,r12,#0xf0
- andpl r12,r12,#0x0f
- eor r7,r7,r9,lsl#16 @ ^= rem_4bit[rem]
- bpl .Linner
-
- ldr r3,[sp,#32] @ re-load r3/end
- add r2,r2,#16
- mov r14,r4
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r4,r4
- str r4,[r0,#12]
- #elif defined(__ARMEB__)
- str r4,[r0,#12]
- #else
- mov r9,r4,lsr#8
- strb r4,[r0,#12+3]
- mov r10,r4,lsr#16
- strb r9,[r0,#12+2]
- mov r11,r4,lsr#24
- strb r10,[r0,#12+1]
- strb r11,[r0,#12]
- #endif
- cmp r2,r3
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r5,r5
- str r5,[r0,#8]
- #elif defined(__ARMEB__)
- str r5,[r0,#8]
- #else
- mov r9,r5,lsr#8
- strb r5,[r0,#8+3]
- mov r10,r5,lsr#16
- strb r9,[r0,#8+2]
- mov r11,r5,lsr#24
- strb r10,[r0,#8+1]
- strb r11,[r0,#8]
- #endif
-
- #ifdef __thumb2__
- it ne
- #endif
- ldrneb r12,[r2,#15]
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r6,r6
- str r6,[r0,#4]
- #elif defined(__ARMEB__)
- str r6,[r0,#4]
- #else
- mov r9,r6,lsr#8
- strb r6,[r0,#4+3]
- mov r10,r6,lsr#16
- strb r9,[r0,#4+2]
- mov r11,r6,lsr#24
- strb r10,[r0,#4+1]
- strb r11,[r0,#4]
- #endif
-
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r7,r7
- str r7,[r0,#0]
- #elif defined(__ARMEB__)
- str r7,[r0,#0]
- #else
- mov r9,r7,lsr#8
- strb r7,[r0,#0+3]
- mov r10,r7,lsr#16
- strb r9,[r0,#0+2]
- mov r11,r7,lsr#24
- strb r10,[r0,#0+1]
- strb r11,[r0,#0]
- #endif
-
- bne .Louter
-
- add sp,sp,#36
- #if __ARM_ARCH__>=5
- ldmia sp!,{r4,r5,r6,r7,r8,r9,r10,r11,pc}
- #else
- ldmia sp!,{r4,r5,r6,r7,r8,r9,r10,r11,lr}
- tst lr,#1
- moveq pc,lr @ be binary compatible with V4, yet
- .word 0xe12fff1e @ interoperable with Thumb ISA:-)
- #endif
- .size gcm_ghash_4bit,.-gcm_ghash_4bit
-
- .globl gcm_gmult_4bit
- .hidden gcm_gmult_4bit
- .type gcm_gmult_4bit,%function
- gcm_gmult_4bit:
- stmdb sp!,{r4,r5,r6,r7,r8,r9,r10,r11,lr}
- ldrb r12,[r0,#15]
- b rem_4bit_get
- .Lrem_4bit_got:
- and r14,r12,#0xf0
- and r12,r12,#0x0f
- mov r3,#14
-
- add r7,r1,r12,lsl#4
- ldmia r7,{r4,r5,r6,r7} @ load Htbl[nlo]
- ldrb r12,[r0,#14]
-
- add r11,r1,r14
- and r14,r4,#0xf @ rem
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nhi]
- add r14,r14,r14
- eor r4,r8,r4,lsr#4
- ldrh r8,[r2,r14] @ rem_4bit[rem]
- eor r4,r4,r5,lsl#28
- eor r5,r9,r5,lsr#4
- eor r5,r5,r6,lsl#28
- eor r6,r10,r6,lsr#4
- eor r6,r6,r7,lsl#28
- eor r7,r11,r7,lsr#4
- and r14,r12,#0xf0
- eor r7,r7,r8,lsl#16
- and r12,r12,#0x0f
-
- .Loop:
- add r11,r1,r12,lsl#4
- and r12,r4,#0xf @ rem
- subs r3,r3,#1
- add r12,r12,r12
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nlo]
- eor r4,r8,r4,lsr#4
- eor r4,r4,r5,lsl#28
- eor r5,r9,r5,lsr#4
- eor r5,r5,r6,lsl#28
- ldrh r8,[r2,r12] @ rem_4bit[rem]
- eor r6,r10,r6,lsr#4
- #ifdef __thumb2__
- it pl
- #endif
- ldrplb r12,[r0,r3]
- eor r6,r6,r7,lsl#28
- eor r7,r11,r7,lsr#4
-
- add r11,r1,r14
- and r14,r4,#0xf @ rem
- eor r7,r7,r8,lsl#16 @ ^= rem_4bit[rem]
- add r14,r14,r14
- ldmia r11,{r8,r9,r10,r11} @ load Htbl[nhi]
- eor r4,r8,r4,lsr#4
- eor r4,r4,r5,lsl#28
- eor r5,r9,r5,lsr#4
- ldrh r8,[r2,r14] @ rem_4bit[rem]
- eor r5,r5,r6,lsl#28
- eor r6,r10,r6,lsr#4
- eor r6,r6,r7,lsl#28
- eor r7,r11,r7,lsr#4
- #ifdef __thumb2__
- itt pl
- #endif
- andpl r14,r12,#0xf0
- andpl r12,r12,#0x0f
- eor r7,r7,r8,lsl#16 @ ^= rem_4bit[rem]
- bpl .Loop
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r4,r4
- str r4,[r0,#12]
- #elif defined(__ARMEB__)
- str r4,[r0,#12]
- #else
- mov r9,r4,lsr#8
- strb r4,[r0,#12+3]
- mov r10,r4,lsr#16
- strb r9,[r0,#12+2]
- mov r11,r4,lsr#24
- strb r10,[r0,#12+1]
- strb r11,[r0,#12]
- #endif
-
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r5,r5
- str r5,[r0,#8]
- #elif defined(__ARMEB__)
- str r5,[r0,#8]
- #else
- mov r9,r5,lsr#8
- strb r5,[r0,#8+3]
- mov r10,r5,lsr#16
- strb r9,[r0,#8+2]
- mov r11,r5,lsr#24
- strb r10,[r0,#8+1]
- strb r11,[r0,#8]
- #endif
-
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r6,r6
- str r6,[r0,#4]
- #elif defined(__ARMEB__)
- str r6,[r0,#4]
- #else
- mov r9,r6,lsr#8
- strb r6,[r0,#4+3]
- mov r10,r6,lsr#16
- strb r9,[r0,#4+2]
- mov r11,r6,lsr#24
- strb r10,[r0,#4+1]
- strb r11,[r0,#4]
- #endif
-
- #if __ARM_ARCH__>=7 && defined(__ARMEL__)
- rev r7,r7
- str r7,[r0,#0]
- #elif defined(__ARMEB__)
- str r7,[r0,#0]
- #else
- mov r9,r7,lsr#8
- strb r7,[r0,#0+3]
- mov r10,r7,lsr#16
- strb r9,[r0,#0+2]
- mov r11,r7,lsr#24
- strb r10,[r0,#0+1]
- strb r11,[r0,#0]
- #endif
-
- #if __ARM_ARCH__>=5
- ldmia sp!,{r4,r5,r6,r7,r8,r9,r10,r11,pc}
- #else
- ldmia sp!,{r4,r5,r6,r7,r8,r9,r10,r11,lr}
- tst lr,#1
- moveq pc,lr @ be binary compatible with V4, yet
- .word 0xe12fff1e @ interoperable with Thumb ISA:-)
- #endif
- .size gcm_gmult_4bit,.-gcm_gmult_4bit
#if __ARM_MAX_ARCH__>=7
-.arch armv7-a
+.arch armv6
.fpu neon
.globl gcm_init_neon
--- /dev/null
-.arch armv7-a
+ // This file is generated from a similarly-named Perl script in the BoringSSL
+ // source tree. Do not edit by hand.
+
+ #if !defined(__has_feature)
+ #define __has_feature(x) 0
+ #endif
+ #if __has_feature(memory_sanitizer) && !defined(OPENSSL_NO_ASM)
+ #define OPENSSL_NO_ASM
+ #endif
+
+ #if !defined(OPENSSL_NO_ASM)
+ #if defined(__arm__)
+ #if defined(BORINGSSL_PREFIX)
+ #include <boringssl_prefix_symbols_asm.h>
+ #endif
+ .syntax unified
+
++.arch armv6
+ .fpu neon
+
+ #if defined(__thumb2__)
+ .thumb
+ #else
+ .code 32
+ #endif
+
+ .text
+
+ .type _vpaes_consts,%object
+ .align 7 @ totally strategic alignment
+ _vpaes_consts:
+ .Lk_mc_forward:@ mc_forward
+ .quad 0x0407060500030201, 0x0C0F0E0D080B0A09
+ .quad 0x080B0A0904070605, 0x000302010C0F0E0D
+ .quad 0x0C0F0E0D080B0A09, 0x0407060500030201
+ .quad 0x000302010C0F0E0D, 0x080B0A0904070605
+ .Lk_mc_backward:@ mc_backward
+ .quad 0x0605040702010003, 0x0E0D0C0F0A09080B
+ .quad 0x020100030E0D0C0F, 0x0A09080B06050407
+ .quad 0x0E0D0C0F0A09080B, 0x0605040702010003
+ .quad 0x0A09080B06050407, 0x020100030E0D0C0F
+ .Lk_sr:@ sr
+ .quad 0x0706050403020100, 0x0F0E0D0C0B0A0908
+ .quad 0x030E09040F0A0500, 0x0B06010C07020D08
+ .quad 0x0F060D040B020900, 0x070E050C030A0108
+ .quad 0x0B0E0104070A0D00, 0x0306090C0F020508
+
+ @
+ @ "Hot" constants
+ @
+ .Lk_inv:@ inv, inva
+ .quad 0x0E05060F0D080180, 0x040703090A0B0C02
+ .quad 0x01040A060F0B0780, 0x030D0E0C02050809
+ .Lk_ipt:@ input transform (lo, hi)
+ .quad 0xC2B2E8985A2A7000, 0xCABAE09052227808
+ .quad 0x4C01307D317C4D00, 0xCD80B1FCB0FDCC81
+ .Lk_sbo:@ sbou, sbot
+ .quad 0xD0D26D176FBDC700, 0x15AABF7AC502A878
+ .quad 0xCFE474A55FBB6A00, 0x8E1E90D1412B35FA
+ .Lk_sb1:@ sb1u, sb1t
+ .quad 0x3618D415FAE22300, 0x3BF7CCC10D2ED9EF
+ .quad 0xB19BE18FCB503E00, 0xA5DF7A6E142AF544
+ .Lk_sb2:@ sb2u, sb2t
+ .quad 0x69EB88400AE12900, 0xC2A163C8AB82234A
+ .quad 0xE27A93C60B712400, 0x5EB7E955BC982FCD
+
+ .byte 86,101,99,116,111,114,32,80,101,114,109,117,116,97,116,105,111,110,32,65,69,83,32,102,111,114,32,65,82,77,118,55,32,78,69,79,78,44,32,77,105,107,101,32,72,97,109,98,117,114,103,32,40,83,116,97,110,102,111,114,100,32,85,110,105,118,101,114,115,105,116,121,41,0
+ .align 2
+ .size _vpaes_consts,.-_vpaes_consts
+ .align 6
+ @@
+ @@ _aes_preheat
+ @@
+ @@ Fills q9-q15 as specified below.
+ @@
+ .type _vpaes_preheat,%function
+ .align 4
+ _vpaes_preheat:
+ adr r10, .Lk_inv
+ vmov.i8 q9, #0x0f @ .Lk_s0F
+ vld1.64 {q10,q11}, [r10]! @ .Lk_inv
+ add r10, r10, #64 @ Skip .Lk_ipt, .Lk_sbo
+ vld1.64 {q12,q13}, [r10]! @ .Lk_sb1
+ vld1.64 {q14,q15}, [r10] @ .Lk_sb2
+ bx lr
+
+ @@
+ @@ _aes_encrypt_core
+ @@
+ @@ AES-encrypt q0.
+ @@
+ @@ Inputs:
+ @@ q0 = input
+ @@ q9-q15 as in _vpaes_preheat
+ @@ [r2] = scheduled keys
+ @@
+ @@ Output in q0
+ @@ Clobbers q1-q5, r8-r11
+ @@ Preserves q6-q8 so you get some local vectors
+ @@
+ @@
+ .type _vpaes_encrypt_core,%function
+ .align 4
+ _vpaes_encrypt_core:
+ mov r9, r2
+ ldr r8, [r2,#240] @ pull rounds
+ adr r11, .Lk_ipt
+ @ vmovdqa .Lk_ipt(%rip), %xmm2 # iptlo
+ @ vmovdqa .Lk_ipt+16(%rip), %xmm3 # ipthi
+ vld1.64 {q2, q3}, [r11]
+ adr r11, .Lk_mc_forward+16
+ vld1.64 {q5}, [r9]! @ vmovdqu (%r9), %xmm5 # round0 key
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0
+ vtbl.8 d2, {q2}, d2 @ vpshufb %xmm1, %xmm2, %xmm1
+ vtbl.8 d3, {q2}, d3
+ vtbl.8 d4, {q3}, d0 @ vpshufb %xmm0, %xmm3, %xmm2
+ vtbl.8 d5, {q3}, d1
+ veor q0, q1, q5 @ vpxor %xmm5, %xmm1, %xmm0
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+
+ @ .Lenc_entry ends with a bnz instruction which is normally paired with
+ @ subs in .Lenc_loop.
+ tst r8, r8
+ b .Lenc_entry
+
+ .align 4
+ .Lenc_loop:
+ @ middle of middle round
+ add r10, r11, #0x40
+ vtbl.8 d8, {q13}, d4 @ vpshufb %xmm2, %xmm13, %xmm4 # 4 = sb1u
+ vtbl.8 d9, {q13}, d5
+ vld1.64 {q1}, [r11]! @ vmovdqa -0x40(%r11,%r10), %xmm1 # .Lk_mc_forward[]
+ vtbl.8 d0, {q12}, d6 @ vpshufb %xmm3, %xmm12, %xmm0 # 0 = sb1t
+ vtbl.8 d1, {q12}, d7
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = sb1u + k
+ vtbl.8 d10, {q15}, d4 @ vpshufb %xmm2, %xmm15, %xmm5 # 4 = sb2u
+ vtbl.8 d11, {q15}, d5
+ veor q0, q0, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = A
+ vtbl.8 d4, {q14}, d6 @ vpshufb %xmm3, %xmm14, %xmm2 # 2 = sb2t
+ vtbl.8 d5, {q14}, d7
+ vld1.64 {q4}, [r10] @ vmovdqa (%r11,%r10), %xmm4 # .Lk_mc_backward[]
+ vtbl.8 d6, {q0}, d2 @ vpshufb %xmm1, %xmm0, %xmm3 # 0 = B
+ vtbl.8 d7, {q0}, d3
+ veor q2, q2, q5 @ vpxor %xmm5, %xmm2, %xmm2 # 2 = 2A
+ @ Write to q5 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 d10, {q0}, d8 @ vpshufb %xmm4, %xmm0, %xmm0 # 3 = D
+ vtbl.8 d11, {q0}, d9
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 0 = 2A+B
+ vtbl.8 d8, {q3}, d2 @ vpshufb %xmm1, %xmm3, %xmm4 # 0 = 2B+C
+ vtbl.8 d9, {q3}, d3
+ @ Here we restore the original q0/q5 usage.
+ veor q0, q5, q3 @ vpxor %xmm3, %xmm0, %xmm0 # 3 = 2A+B+D
+ and r11, r11, #~(1<<6) @ and $0x30, %r11 # ... mod 4
+ veor q0, q0, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = 2A+3B+C+D
+ subs r8, r8, #1 @ nr--
+
+ .Lenc_entry:
+ @ top of round
+ vand q1, q0, q9 @ vpand %xmm0, %xmm9, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0 # 1 = i
+ vtbl.8 d10, {q11}, d2 @ vpshufb %xmm1, %xmm11, %xmm5 # 2 = a/k
+ vtbl.8 d11, {q11}, d3
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 d6, {q10}, d0 @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 d7, {q10}, d1
+ vtbl.8 d8, {q10}, d2 @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 d9, {q10}, d3
+ veor q3, q3, q5 @ vpxor %xmm5, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 d4, {q10}, d6 @ vpshufb %xmm3, %xmm10, %xmm2 # 2 = 1/iak
+ vtbl.8 d5, {q10}, d7
+ vtbl.8 d6, {q10}, d8 @ vpshufb %xmm4, %xmm10, %xmm3 # 3 = 1/jak
+ vtbl.8 d7, {q10}, d9
+ veor q2, q2, q1 @ vpxor %xmm1, %xmm2, %xmm2 # 2 = io
+ veor q3, q3, q0 @ vpxor %xmm0, %xmm3, %xmm3 # 3 = jo
+ vld1.64 {q5}, [r9]! @ vmovdqu (%r9), %xmm5
+ bne .Lenc_loop
+
+ @ middle of last round
+ add r10, r11, #0x80
+
+ adr r11, .Lk_sbo
+ @ Read to q1 instead of q4, so the vtbl.8 instruction below does not
+ @ overlap table and destination registers.
+ vld1.64 {q1}, [r11]! @ vmovdqa -0x60(%r10), %xmm4 # 3 : sbou
+ vld1.64 {q0}, [r11] @ vmovdqa -0x50(%r10), %xmm0 # 0 : sbot .Lk_sbo+16
+ vtbl.8 d8, {q1}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbou
+ vtbl.8 d9, {q1}, d5
+ vld1.64 {q1}, [r10] @ vmovdqa 0x40(%r11,%r10), %xmm1 # .Lk_sr[]
+ @ Write to q2 instead of q0 below, to avoid overlapping table and
+ @ destination registers.
+ vtbl.8 d4, {q0}, d6 @ vpshufb %xmm3, %xmm0, %xmm0 # 0 = sb1t
+ vtbl.8 d5, {q0}, d7
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = sb1u + k
+ veor q2, q2, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = A
+ @ Here we restore the original q0/q2 usage.
+ vtbl.8 d0, {q2}, d2 @ vpshufb %xmm1, %xmm0, %xmm0
+ vtbl.8 d1, {q2}, d3
+ bx lr
+ .size _vpaes_encrypt_core,.-_vpaes_encrypt_core
+
+ .globl vpaes_encrypt
+ .hidden vpaes_encrypt
+ .type vpaes_encrypt,%function
+ .align 4
+ vpaes_encrypt:
+ @ _vpaes_encrypt_core uses r8-r11. Round up to r7-r11 to maintain stack
+ @ alignment.
+ stmdb sp!, {r7,r8,r9,r10,r11,lr}
+ @ _vpaes_encrypt_core uses q4-q5 (d8-d11), which are callee-saved.
+ vstmdb sp!, {d8,d9,d10,d11}
+
+ vld1.64 {q0}, [r0]
+ bl _vpaes_preheat
+ bl _vpaes_encrypt_core
+ vst1.64 {q0}, [r1]
+
+ vldmia sp!, {d8,d9,d10,d11}
+ ldmia sp!, {r7,r8,r9,r10,r11, pc} @ return
+ .size vpaes_encrypt,.-vpaes_encrypt
+
+ @
+ @ Decryption stuff
+ @
+ .type _vpaes_decrypt_consts,%object
+ .align 4
+ _vpaes_decrypt_consts:
+ .Lk_dipt:@ decryption input transform
+ .quad 0x0F505B040B545F00, 0x154A411E114E451A
+ .quad 0x86E383E660056500, 0x12771772F491F194
+ .Lk_dsbo:@ decryption sbox final output
+ .quad 0x1387EA537EF94000, 0xC7AA6DB9D4943E2D
+ .quad 0x12D7560F93441D00, 0xCA4B8159D8C58E9C
+ .Lk_dsb9:@ decryption sbox output *9*u, *9*t
+ .quad 0x851C03539A86D600, 0xCAD51F504F994CC9
+ .quad 0xC03B1789ECD74900, 0x725E2C9EB2FBA565
+ .Lk_dsbd:@ decryption sbox output *D*u, *D*t
+ .quad 0x7D57CCDFE6B1A200, 0xF56E9B13882A4439
+ .quad 0x3CE2FAF724C6CB00, 0x2931180D15DEEFD3
+ .Lk_dsbb:@ decryption sbox output *B*u, *B*t
+ .quad 0xD022649296B44200, 0x602646F6B0F2D404
+ .quad 0xC19498A6CD596700, 0xF3FF0C3E3255AA6B
+ .Lk_dsbe:@ decryption sbox output *E*u, *E*t
+ .quad 0x46F2929626D4D000, 0x2242600464B4F6B0
+ .quad 0x0C55A6CDFFAAC100, 0x9467F36B98593E32
+ .size _vpaes_decrypt_consts,.-_vpaes_decrypt_consts
+
+ @@
+ @@ Decryption core
+ @@
+ @@ Same API as encryption core, except it clobbers q12-q15 rather than using
+ @@ the values from _vpaes_preheat. q9-q11 must still be set from
+ @@ _vpaes_preheat.
+ @@
+ .type _vpaes_decrypt_core,%function
+ .align 4
+ _vpaes_decrypt_core:
+ mov r9, r2
+ ldr r8, [r2,#240] @ pull rounds
+
+ @ This function performs shuffles with various constants. The x86_64
+ @ version loads them on-demand into %xmm0-%xmm5. This does not work well
+ @ for ARMv7 because those registers are shuffle destinations. The ARMv8
+ @ version preloads those constants into registers, but ARMv7 has half
+ @ the registers to work with. Instead, we load them on-demand into
+ @ q12-q15, registers normally used for preloaded constants. This is fine
+ @ because decryption doesn't use those constants. The values are
+ @ constant, so this does not interfere with potential 2x optimizations.
+ adr r7, .Lk_dipt
+
+ vld1.64 {q12,q13}, [r7] @ vmovdqa .Lk_dipt(%rip), %xmm2 # iptlo
+ lsl r11, r8, #4 @ mov %rax, %r11; shl $4, %r11
+ eor r11, r11, #0x30 @ xor $0x30, %r11
+ adr r10, .Lk_sr
+ and r11, r11, #0x30 @ and $0x30, %r11
+ add r11, r11, r10
+ adr r10, .Lk_mc_forward+48
+
+ vld1.64 {q4}, [r9]! @ vmovdqu (%r9), %xmm4 # round0 key
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0
+ vtbl.8 d4, {q12}, d2 @ vpshufb %xmm1, %xmm2, %xmm2
+ vtbl.8 d5, {q12}, d3
+ vld1.64 {q5}, [r10] @ vmovdqa .Lk_mc_forward+48(%rip), %xmm5
+ @ vmovdqa .Lk_dipt+16(%rip), %xmm1 # ipthi
+ vtbl.8 d0, {q13}, d0 @ vpshufb %xmm0, %xmm1, %xmm0
+ vtbl.8 d1, {q13}, d1
+ veor q2, q2, q4 @ vpxor %xmm4, %xmm2, %xmm2
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+
+ @ .Ldec_entry ends with a bnz instruction which is normally paired with
+ @ subs in .Ldec_loop.
+ tst r8, r8
+ b .Ldec_entry
+
+ .align 4
+ .Ldec_loop:
+ @
+ @ Inverse mix columns
+ @
+
+ @ We load .Lk_dsb* into q12-q15 on-demand. See the comment at the top of
+ @ the function.
+ adr r10, .Lk_dsb9
+ vld1.64 {q12,q13}, [r10]! @ vmovdqa -0x20(%r10),%xmm4 # 4 : sb9u
+ @ vmovdqa -0x10(%r10),%xmm1 # 0 : sb9t
+ @ Load sbd* ahead of time.
+ vld1.64 {q14,q15}, [r10]! @ vmovdqa 0x00(%r10),%xmm4 # 4 : sbdu
+ @ vmovdqa 0x10(%r10),%xmm1 # 0 : sbdt
+ vtbl.8 d8, {q12}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sb9u
+ vtbl.8 d9, {q12}, d5
+ vtbl.8 d2, {q13}, d6 @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sb9t
+ vtbl.8 d3, {q13}, d7
+ veor q0, q4, q0 @ vpxor %xmm4, %xmm0, %xmm0
+
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+
+ @ Load sbb* ahead of time.
+ vld1.64 {q12,q13}, [r10]! @ vmovdqa 0x20(%r10),%xmm4 # 4 : sbbu
+ @ vmovdqa 0x30(%r10),%xmm1 # 0 : sbbt
+
+ vtbl.8 d8, {q14}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbdu
+ vtbl.8 d9, {q14}, d5
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 d2, {q0}, d10 @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 d3, {q0}, d11
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 d2, {q15}, d6 @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbdt
+ vtbl.8 d3, {q15}, d7
+ @ vmovdqa 0x20(%r10), %xmm4 # 4 : sbbu
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+ @ vmovdqa 0x30(%r10), %xmm1 # 0 : sbbt
+
+ @ Load sbd* ahead of time.
+ vld1.64 {q14,q15}, [r10]! @ vmovdqa 0x40(%r10),%xmm4 # 4 : sbeu
+ @ vmovdqa 0x50(%r10),%xmm1 # 0 : sbet
+
+ vtbl.8 d8, {q12}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbbu
+ vtbl.8 d9, {q12}, d5
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 d2, {q0}, d10 @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 d3, {q0}, d11
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 d2, {q13}, d6 @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbbt
+ vtbl.8 d3, {q13}, d7
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+
+ vtbl.8 d8, {q14}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbeu
+ vtbl.8 d9, {q14}, d5
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 d2, {q0}, d10 @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 d3, {q0}, d11
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 d2, {q15}, d6 @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbet
+ vtbl.8 d3, {q15}, d7
+ vext.8 q5, q5, q5, #12 @ vpalignr $12, %xmm5, %xmm5, %xmm5
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+ subs r8, r8, #1 @ sub $1,%rax # nr--
+
+ .Ldec_entry:
+ @ top of round
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0 # 1 = i
+ vtbl.8 d4, {q11}, d2 @ vpshufb %xmm1, %xmm11, %xmm2 # 2 = a/k
+ vtbl.8 d5, {q11}, d3
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 d6, {q10}, d0 @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 d7, {q10}, d1
+ vtbl.8 d8, {q10}, d2 @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 d9, {q10}, d3
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ veor q4, q4, q2 @ vpxor %xmm2, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 d4, {q10}, d6 @ vpshufb %xmm3, %xmm10, %xmm2 # 2 = 1/iak
+ vtbl.8 d5, {q10}, d7
+ vtbl.8 d6, {q10}, d8 @ vpshufb %xmm4, %xmm10, %xmm3 # 3 = 1/jak
+ vtbl.8 d7, {q10}, d9
+ veor q2, q2, q1 @ vpxor %xmm1, %xmm2, %xmm2 # 2 = io
+ veor q3, q3, q0 @ vpxor %xmm0, %xmm3, %xmm3 # 3 = jo
+ vld1.64 {q0}, [r9]! @ vmovdqu (%r9), %xmm0
+ bne .Ldec_loop
+
+ @ middle of last round
+
+ adr r10, .Lk_dsbo
+
+ @ Write to q1 rather than q4 to avoid overlapping table and destination.
+ vld1.64 {q1}, [r10]! @ vmovdqa 0x60(%r10), %xmm4 # 3 : sbou
+ vtbl.8 d8, {q1}, d4 @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbou
+ vtbl.8 d9, {q1}, d5
+ @ Write to q2 rather than q1 to avoid overlapping table and destination.
+ vld1.64 {q2}, [r10] @ vmovdqa 0x70(%r10), %xmm1 # 0 : sbot
+ vtbl.8 d2, {q2}, d6 @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sb1t
+ vtbl.8 d3, {q2}, d7
+ vld1.64 {q2}, [r11] @ vmovdqa -0x160(%r11), %xmm2 # .Lk_sr-.Lk_dsbd=-0x160
+ veor q4, q4, q0 @ vpxor %xmm0, %xmm4, %xmm4 # 4 = sb1u + k
+ @ Write to q1 rather than q0 so the table and destination registers
+ @ below do not overlap.
+ veor q1, q1, q4 @ vpxor %xmm4, %xmm1, %xmm0 # 0 = A
+ vtbl.8 d0, {q1}, d4 @ vpshufb %xmm2, %xmm0, %xmm0
+ vtbl.8 d1, {q1}, d5
+ bx lr
+ .size _vpaes_decrypt_core,.-_vpaes_decrypt_core
+
+ .globl vpaes_decrypt
+ .hidden vpaes_decrypt
+ .type vpaes_decrypt,%function
+ .align 4
+ vpaes_decrypt:
+ @ _vpaes_decrypt_core uses r7-r11.
+ stmdb sp!, {r7,r8,r9,r10,r11,lr}
+ @ _vpaes_decrypt_core uses q4-q5 (d8-d11), which are callee-saved.
+ vstmdb sp!, {d8,d9,d10,d11}
+
+ vld1.64 {q0}, [r0]
+ bl _vpaes_preheat
+ bl _vpaes_decrypt_core
+ vst1.64 {q0}, [r1]
+
+ vldmia sp!, {d8,d9,d10,d11}
+ ldmia sp!, {r7,r8,r9,r10,r11, pc} @ return
+ .size vpaes_decrypt,.-vpaes_decrypt
+ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+ @@ @@
+ @@ AES key schedule @@
+ @@ @@
+ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+
+ @ This function diverges from both x86_64 and aarch64 in which constants are
+ @ pinned. x86_64 has a common preheat function for all operations. aarch64
+ @ separates them because it has enough registers to pin nearly all constants.
+ @ armv7 does not have enough registers, but needing explicit loads and stores
+ @ also complicates using x86_64's register allocation directly.
+ @
+ @ We pin some constants for convenience and leave q14 and q15 free to load
+ @ others on demand.
+
+ @
+ @ Key schedule constants
+ @
+ .type _vpaes_key_consts,%object
+ .align 4
+ _vpaes_key_consts:
+ .Lk_dksd:@ decryption key schedule: invskew x*D
+ .quad 0xFEB91A5DA3E44700, 0x0740E3A45A1DBEF9
+ .quad 0x41C277F4B5368300, 0x5FDC69EAAB289D1E
+ .Lk_dksb:@ decryption key schedule: invskew x*B
+ .quad 0x9A4FCA1F8550D500, 0x03D653861CC94C99
+ .quad 0x115BEDA7B6FC4A00, 0xD993256F7E3482C8
+ .Lk_dkse:@ decryption key schedule: invskew x*E + 0x63
+ .quad 0xD5031CCA1FC9D600, 0x53859A4C994F5086
+ .quad 0xA23196054FDC7BE8, 0xCD5EF96A20B31487
+ .Lk_dks9:@ decryption key schedule: invskew x*9
+ .quad 0xB6116FC87ED9A700, 0x4AED933482255BFC
+ .quad 0x4576516227143300, 0x8BB89FACE9DAFDCE
+
+ .Lk_rcon:@ rcon
+ .quad 0x1F8391B9AF9DEEB6, 0x702A98084D7C7D81
+
+ .Lk_opt:@ output transform
+ .quad 0xFF9F4929D6B66000, 0xF7974121DEBE6808
+ .quad 0x01EDBD5150BCEC00, 0xE10D5DB1B05C0CE0
+ .Lk_deskew:@ deskew tables: inverts the sbox's "skew"
+ .quad 0x07E4A34047A4E300, 0x1DFEB95A5DBEF91A
+ .quad 0x5F36B5DC83EA6900, 0x2841C2ABF49D1E77
+ .size _vpaes_key_consts,.-_vpaes_key_consts
+
+ .type _vpaes_key_preheat,%function
+ .align 4
+ _vpaes_key_preheat:
+ adr r11, .Lk_rcon
+ vmov.i8 q12, #0x5b @ .Lk_s63
+ adr r10, .Lk_inv @ Must be aligned to 8 mod 16.
+ vmov.i8 q9, #0x0f @ .Lk_s0F
+ vld1.64 {q10,q11}, [r10] @ .Lk_inv
+ vld1.64 {q8}, [r11] @ .Lk_rcon
+ bx lr
+ .size _vpaes_key_preheat,.-_vpaes_key_preheat
+
+ .type _vpaes_schedule_core,%function
+ .align 4
+ _vpaes_schedule_core:
+ @ We only need to save lr, but ARM requires an 8-byte stack alignment,
+ @ so save an extra register.
+ stmdb sp!, {r3,lr}
+
+ bl _vpaes_key_preheat @ load the tables
+
+ adr r11, .Lk_ipt @ Must be aligned to 8 mod 16.
+ vld1.64 {q0}, [r0]! @ vmovdqu (%rdi), %xmm0 # load key (unaligned)
+
+ @ input transform
+ @ Use q4 here rather than q3 so .Lschedule_am_decrypting does not
+ @ overlap table and destination.
+ vmov q4, q0 @ vmovdqa %xmm0, %xmm3
+ bl _vpaes_schedule_transform
+ adr r10, .Lk_sr @ Must be aligned to 8 mod 16.
+ vmov q7, q0 @ vmovdqa %xmm0, %xmm7
+
+ add r8, r8, r10
+ tst r3, r3
+ bne .Lschedule_am_decrypting
+
+ @ encrypting, output zeroth round key after transform
+ vst1.64 {q0}, [r2] @ vmovdqu %xmm0, (%rdx)
+ b .Lschedule_go
+
+ .Lschedule_am_decrypting:
+ @ decrypting, output zeroth round key after shiftrows
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ vtbl.8 d6, {q4}, d2 @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 d7, {q4}, d3
+ vst1.64 {q3}, [r2] @ vmovdqu %xmm3, (%rdx)
+ eor r8, r8, #0x30 @ xor $0x30, %r8
+
+ .Lschedule_go:
+ cmp r1, #192 @ cmp $192, %esi
+ bhi .Lschedule_256
+ beq .Lschedule_192
+ @ 128: fall through
+
+ @@
+ @@ .schedule_128
+ @@
+ @@ 128-bit specific part of key schedule.
+ @@
+ @@ This schedule is really simple, because all its parts
+ @@ are accomplished by the subroutines.
+ @@
+ .Lschedule_128:
+ mov r0, #10 @ mov $10, %esi
+
+ .Loop_schedule_128:
+ bl _vpaes_schedule_round
+ subs r0, r0, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle @ write output
+ b .Loop_schedule_128
+
+ @@
+ @@ .aes_schedule_192
+ @@
+ @@ 192-bit specific part of key schedule.
+ @@
+ @@ The main body of this schedule is the same as the 128-bit
+ @@ schedule, but with more smearing. The long, high side is
+ @@ stored in q7 as before, and the short, low side is in
+ @@ the high bits of q6.
+ @@
+ @@ This schedule is somewhat nastier, however, because each
+ @@ round produces 192 bits of key material, or 1.5 round keys.
+ @@ Therefore, on each cycle we do 2 rounds and produce 3 round
+ @@ keys.
+ @@
+ .align 4
+ .Lschedule_192:
+ sub r0, r0, #8
+ vld1.64 {q0}, [r0] @ vmovdqu 8(%rdi),%xmm0 # load key part 2 (very unaligned)
+ bl _vpaes_schedule_transform @ input transform
+ vmov q6, q0 @ vmovdqa %xmm0, %xmm6 # save short part
+ vmov.i8 d12, #0 @ vpxor %xmm4, %xmm4, %xmm4 # clear 4
+ @ vmovhlps %xmm4, %xmm6, %xmm6 # clobber low side with zeros
+ mov r0, #4 @ mov $4, %esi
+
+ .Loop_schedule_192:
+ bl _vpaes_schedule_round
+ vext.8 q0, q6, q0, #8 @ vpalignr $8,%xmm6,%xmm0,%xmm0
+ bl _vpaes_schedule_mangle @ save key n
+ bl _vpaes_schedule_192_smear
+ bl _vpaes_schedule_mangle @ save key n+1
+ bl _vpaes_schedule_round
+ subs r0, r0, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle @ save key n+2
+ bl _vpaes_schedule_192_smear
+ b .Loop_schedule_192
+
+ @@
+ @@ .aes_schedule_256
+ @@
+ @@ 256-bit specific part of key schedule.
+ @@
+ @@ The structure here is very similar to the 128-bit
+ @@ schedule, but with an additional "low side" in
+ @@ q6. The low side's rounds are the same as the
+ @@ high side's, except no rcon and no rotation.
+ @@
+ .align 4
+ .Lschedule_256:
+ vld1.64 {q0}, [r0] @ vmovdqu 16(%rdi),%xmm0 # load key part 2 (unaligned)
+ bl _vpaes_schedule_transform @ input transform
+ mov r0, #7 @ mov $7, %esi
+
+ .Loop_schedule_256:
+ bl _vpaes_schedule_mangle @ output low result
+ vmov q6, q0 @ vmovdqa %xmm0, %xmm6 # save cur_lo in xmm6
+
+ @ high round
+ bl _vpaes_schedule_round
+ subs r0, r0, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle
+
+ @ low round. swap xmm7 and xmm6
+ vdup.32 q0, d1[1] @ vpshufd $0xFF, %xmm0, %xmm0
+ vmov.i8 q4, #0
+ vmov q5, q7 @ vmovdqa %xmm7, %xmm5
+ vmov q7, q6 @ vmovdqa %xmm6, %xmm7
+ bl _vpaes_schedule_low_round
+ vmov q7, q5 @ vmovdqa %xmm5, %xmm7
+
+ b .Loop_schedule_256
+
+ @@
+ @@ .aes_schedule_mangle_last
+ @@
+ @@ Mangler for last round of key schedule
+ @@ Mangles q0
+ @@ when encrypting, outputs out(q0) ^ 63
+ @@ when decrypting, outputs unskew(q0)
+ @@
+ @@ Always called right before return... jumps to cleanup and exits
+ @@
+ .align 4
+ .Lschedule_mangle_last:
+ @ schedule last round key from xmm0
+ adr r11, .Lk_deskew @ lea .Lk_deskew(%rip),%r11 # prepare to deskew
+ tst r3, r3
+ bne .Lschedule_mangle_last_dec
+
+ @ encrypting
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10),%xmm1
+ adr r11, .Lk_opt @ lea .Lk_opt(%rip), %r11 # prepare to output transform
+ add r2, r2, #32 @ add $32, %rdx
+ vmov q2, q0
+ vtbl.8 d0, {q2}, d2 @ vpshufb %xmm1, %xmm0, %xmm0 # output permute
+ vtbl.8 d1, {q2}, d3
+
+ .Lschedule_mangle_last_dec:
+ sub r2, r2, #16 @ add $-16, %rdx
+ veor q0, q0, q12 @ vpxor .Lk_s63(%rip), %xmm0, %xmm0
+ bl _vpaes_schedule_transform @ output transform
+ vst1.64 {q0}, [r2] @ vmovdqu %xmm0, (%rdx) # save last key
+
+ @ cleanup
+ veor q0, q0, q0 @ vpxor %xmm0, %xmm0, %xmm0
+ veor q1, q1, q1 @ vpxor %xmm1, %xmm1, %xmm1
+ veor q2, q2, q2 @ vpxor %xmm2, %xmm2, %xmm2
+ veor q3, q3, q3 @ vpxor %xmm3, %xmm3, %xmm3
+ veor q4, q4, q4 @ vpxor %xmm4, %xmm4, %xmm4
+ veor q5, q5, q5 @ vpxor %xmm5, %xmm5, %xmm5
+ veor q6, q6, q6 @ vpxor %xmm6, %xmm6, %xmm6
+ veor q7, q7, q7 @ vpxor %xmm7, %xmm7, %xmm7
+ ldmia sp!, {r3,pc} @ return
+ .size _vpaes_schedule_core,.-_vpaes_schedule_core
+
+ @@
+ @@ .aes_schedule_192_smear
+ @@
+ @@ Smear the short, low side in the 192-bit key schedule.
+ @@
+ @@ Inputs:
+ @@ q7: high side, b a x y
+ @@ q6: low side, d c 0 0
+ @@
+ @@ Outputs:
+ @@ q6: b+c+d b+c 0 0
+ @@ q0: b+c+d b+c b a
+ @@
+ .type _vpaes_schedule_192_smear,%function
+ .align 4
+ _vpaes_schedule_192_smear:
+ vmov.i8 q1, #0
+ vdup.32 q0, d15[1]
+ vshl.i64 q1, q6, #32 @ vpshufd $0x80, %xmm6, %xmm1 # d c 0 0 -> c 0 0 0
+ vmov d0, d15 @ vpshufd $0xFE, %xmm7, %xmm0 # b a _ _ -> b b b a
+ veor q6, q6, q1 @ vpxor %xmm1, %xmm6, %xmm6 # -> c+d c 0 0
+ veor q1, q1, q1 @ vpxor %xmm1, %xmm1, %xmm1
+ veor q6, q6, q0 @ vpxor %xmm0, %xmm6, %xmm6 # -> b+c+d b+c b a
+ vmov q0, q6 @ vmovdqa %xmm6, %xmm0
+ vmov d12, d2 @ vmovhlps %xmm1, %xmm6, %xmm6 # clobber low side with zeros
+ bx lr
+ .size _vpaes_schedule_192_smear,.-_vpaes_schedule_192_smear
+
+ @@
+ @@ .aes_schedule_round
+ @@
+ @@ Runs one main round of the key schedule on q0, q7
+ @@
+ @@ Specifically, runs subbytes on the high dword of q0
+ @@ then rotates it by one byte and xors into the low dword of
+ @@ q7.
+ @@
+ @@ Adds rcon from low byte of q8, then rotates q8 for
+ @@ next rcon.
+ @@
+ @@ Smears the dwords of q7 by xoring the low into the
+ @@ second low, result into third, result into highest.
+ @@
+ @@ Returns results in q7 = q0.
+ @@ Clobbers q1-q4, r11.
+ @@
+ .type _vpaes_schedule_round,%function
+ .align 4
+ _vpaes_schedule_round:
+ @ extract rcon from xmm8
+ vmov.i8 q4, #0 @ vpxor %xmm4, %xmm4, %xmm4
+ vext.8 q1, q8, q4, #15 @ vpalignr $15, %xmm8, %xmm4, %xmm1
+ vext.8 q8, q8, q8, #15 @ vpalignr $15, %xmm8, %xmm8, %xmm8
+ veor q7, q7, q1 @ vpxor %xmm1, %xmm7, %xmm7
+
+ @ rotate
+ vdup.32 q0, d1[1] @ vpshufd $0xFF, %xmm0, %xmm0
+ vext.8 q0, q0, q0, #1 @ vpalignr $1, %xmm0, %xmm0, %xmm0
+
+ @ fall through...
+
+ @ low round: same as high round, but no rotation and no rcon.
+ _vpaes_schedule_low_round:
+ @ The x86_64 version pins .Lk_sb1 in %xmm13 and .Lk_sb1+16 in %xmm12.
+ @ We pin other values in _vpaes_key_preheat, so load them now.
+ adr r11, .Lk_sb1
+ vld1.64 {q14,q15}, [r11]
+
+ @ smear xmm7
+ vext.8 q1, q4, q7, #12 @ vpslldq $4, %xmm7, %xmm1
+ veor q7, q7, q1 @ vpxor %xmm1, %xmm7, %xmm7
+ vext.8 q4, q4, q7, #8 @ vpslldq $8, %xmm7, %xmm4
+
+ @ subbytes
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0 # 1 = i
+ veor q7, q7, q4 @ vpxor %xmm4, %xmm7, %xmm7
+ vtbl.8 d4, {q11}, d2 @ vpshufb %xmm1, %xmm11, %xmm2 # 2 = a/k
+ vtbl.8 d5, {q11}, d3
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 d6, {q10}, d0 @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 d7, {q10}, d1
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ vtbl.8 d8, {q10}, d2 @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 d9, {q10}, d3
+ veor q7, q7, q12 @ vpxor .Lk_s63(%rip), %xmm7, %xmm7
+ vtbl.8 d6, {q10}, d6 @ vpshufb %xmm3, %xmm10, %xmm3 # 2 = 1/iak
+ vtbl.8 d7, {q10}, d7
+ veor q4, q4, q2 @ vpxor %xmm2, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 d4, {q10}, d8 @ vpshufb %xmm4, %xmm10, %xmm2 # 3 = 1/jak
+ vtbl.8 d5, {q10}, d9
+ veor q3, q3, q1 @ vpxor %xmm1, %xmm3, %xmm3 # 2 = io
+ veor q2, q2, q0 @ vpxor %xmm0, %xmm2, %xmm2 # 3 = jo
+ vtbl.8 d8, {q15}, d6 @ vpshufb %xmm3, %xmm13, %xmm4 # 4 = sbou
+ vtbl.8 d9, {q15}, d7
+ vtbl.8 d2, {q14}, d4 @ vpshufb %xmm2, %xmm12, %xmm1 # 0 = sb1t
+ vtbl.8 d3, {q14}, d5
+ veor q1, q1, q4 @ vpxor %xmm4, %xmm1, %xmm1 # 0 = sbox output
+
+ @ add in smeared stuff
+ veor q0, q1, q7 @ vpxor %xmm7, %xmm1, %xmm0
+ veor q7, q1, q7 @ vmovdqa %xmm0, %xmm7
+ bx lr
+ .size _vpaes_schedule_round,.-_vpaes_schedule_round
+
+ @@
+ @@ .aes_schedule_transform
+ @@
+ @@ Linear-transform q0 according to tables at [r11]
+ @@
+ @@ Requires that q9 = 0x0F0F... as in preheat
+ @@ Output in q0
+ @@ Clobbers q1, q2, q14, q15
+ @@
+ .type _vpaes_schedule_transform,%function
+ .align 4
+ _vpaes_schedule_transform:
+ vld1.64 {q14,q15}, [r11] @ vmovdqa (%r11), %xmm2 # lo
+ @ vmovdqa 16(%r11), %xmm1 # hi
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb $4, %xmm0, %xmm0
+ vtbl.8 d4, {q14}, d2 @ vpshufb %xmm1, %xmm2, %xmm2
+ vtbl.8 d5, {q14}, d3
+ vtbl.8 d0, {q15}, d0 @ vpshufb %xmm0, %xmm1, %xmm0
+ vtbl.8 d1, {q15}, d1
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+ bx lr
+ .size _vpaes_schedule_transform,.-_vpaes_schedule_transform
+
+ @@
+ @@ .aes_schedule_mangle
+ @@
+ @@ Mangles q0 from (basis-transformed) standard version
+ @@ to our version.
+ @@
+ @@ On encrypt,
+ @@ xor with 0x63
+ @@ multiply by circulant 0,1,1,1
+ @@ apply shiftrows transform
+ @@
+ @@ On decrypt,
+ @@ xor with 0x63
+ @@ multiply by "inverse mixcolumns" circulant E,B,D,9
+ @@ deskew
+ @@ apply shiftrows transform
+ @@
+ @@
+ @@ Writes out to [r2], and increments or decrements it
+ @@ Keeps track of round number mod 4 in r8
+ @@ Preserves q0
+ @@ Clobbers q1-q5
+ @@
+ .type _vpaes_schedule_mangle,%function
+ .align 4
+ _vpaes_schedule_mangle:
+ tst r3, r3
+ vmov q4, q0 @ vmovdqa %xmm0, %xmm4 # save xmm0 for later
+ adr r11, .Lk_mc_forward @ Must be aligned to 8 mod 16.
+ vld1.64 {q5}, [r11] @ vmovdqa .Lk_mc_forward(%rip),%xmm5
+ bne .Lschedule_mangle_dec
+
+ @ encrypting
+ @ Write to q2 so we do not overlap table and destination below.
+ veor q2, q0, q12 @ vpxor .Lk_s63(%rip), %xmm0, %xmm4
+ add r2, r2, #16 @ add $16, %rdx
+ vtbl.8 d8, {q2}, d10 @ vpshufb %xmm5, %xmm4, %xmm4
+ vtbl.8 d9, {q2}, d11
+ vtbl.8 d2, {q4}, d10 @ vpshufb %xmm5, %xmm4, %xmm1
+ vtbl.8 d3, {q4}, d11
+ vtbl.8 d6, {q1}, d10 @ vpshufb %xmm5, %xmm1, %xmm3
+ vtbl.8 d7, {q1}, d11
+ veor q4, q4, q1 @ vpxor %xmm1, %xmm4, %xmm4
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ veor q3, q3, q4 @ vpxor %xmm4, %xmm3, %xmm3
+
+ b .Lschedule_mangle_both
+ .align 4
+ .Lschedule_mangle_dec:
+ @ inverse mix columns
+ adr r11, .Lk_dksd @ lea .Lk_dksd(%rip),%r11
+ vshr.u8 q1, q4, #4 @ vpsrlb $4, %xmm4, %xmm1 # 1 = hi
+ vand q4, q4, q9 @ vpand %xmm9, %xmm4, %xmm4 # 4 = lo
+
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x00(%r11), %xmm2
+ @ vmovdqa 0x10(%r11), %xmm3
+ vtbl.8 d4, {q14}, d8 @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 d5, {q14}, d9
+ vtbl.8 d6, {q15}, d2 @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 d7, {q15}, d3
+ @ Load .Lk_dksb ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x20(%r11), %xmm2
+ @ vmovdqa 0x30(%r11), %xmm3
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+ vtbl.8 d6, {q13}, d10 @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 d7, {q13}, d11
+
+ vtbl.8 d4, {q14}, d8 @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 d5, {q14}, d9
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ vtbl.8 d6, {q15}, d2 @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 d7, {q15}, d3
+ @ Load .Lk_dkse ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x40(%r11), %xmm2
+ @ vmovdqa 0x50(%r11), %xmm3
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+ vtbl.8 d6, {q13}, d10 @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 d7, {q13}, d11
+
+ vtbl.8 d4, {q14}, d8 @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 d5, {q14}, d9
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ vtbl.8 d6, {q15}, d2 @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 d7, {q15}, d3
+ @ Load .Lk_dks9 ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x60(%r11), %xmm2
+ @ vmovdqa 0x70(%r11), %xmm4
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+
+ vtbl.8 d4, {q14}, d8 @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 d5, {q14}, d9
+ vtbl.8 d6, {q13}, d10 @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 d7, {q13}, d11
+ vtbl.8 d8, {q15}, d2 @ vpshufb %xmm1, %xmm4, %xmm4
+ vtbl.8 d9, {q15}, d3
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ veor q3, q4, q2 @ vpxor %xmm2, %xmm4, %xmm3
+
+ sub r2, r2, #16 @ add $-16, %rdx
+
+ .Lschedule_mangle_both:
+ @ Write to q2 so table and destination do not overlap.
+ vtbl.8 d4, {q3}, d2 @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 d5, {q3}, d3
+ add r8, r8, #64-16 @ add $-16, %r8
+ and r8, r8, #~(1<<6) @ and $0x30, %r8
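+ @ r8 points into .Lk_sr, which is 64-byte aligned, so adding 48 and
+ @ clearing bit 6 steps backwards by 16 mod 64 through the four .Lk_sr
+ @ rows.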
+ vst1.64 {q2}, [r2] @ vmovdqu %xmm3, (%rdx)
+ bx lr
+ .size _vpaes_schedule_mangle,.-_vpaes_schedule_mangle
+
+ .globl vpaes_set_encrypt_key
+ .hidden vpaes_set_encrypt_key
+ .type vpaes_set_encrypt_key,%function
+ .align 4
+ vpaes_set_encrypt_key:
+ stmdb sp!, {r7,r8,r9,r10,r11, lr}
+ vstmdb sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+
+ lsr r9, r1, #5 @ shr $5,%eax
+ add r9, r9, #5 @ $5,%eax
+ str r9, [r2,#240] @ mov %eax,240(%rdx) # AES_KEY->rounds = nbits/32+5;
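+ @ Sanity check (not from upstream): 128 >> 5 == 4, so this field is 9
+ @ for AES-128, 11 for AES-192, and 13 for AES-256, one fewer than the
+ @ usual 10/12/14; see the "one fewer round count" notes in the
+ @ key-to-bsaes converters below.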
+
+ mov r3, #0 @ mov $0,%ecx
+ mov r8, #0x30 @ mov $0x30,%r8d
+ bl _vpaes_schedule_core
+ eor r0, r0, r0
+
+ vldmia sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+ ldmia sp!, {r7,r8,r9,r10,r11, pc} @ return
+ .size vpaes_set_encrypt_key,.-vpaes_set_encrypt_key
+
+ .globl vpaes_set_decrypt_key
+ .hidden vpaes_set_decrypt_key
+ .type vpaes_set_decrypt_key,%function
+ .align 4
+ vpaes_set_decrypt_key:
+ stmdb sp!, {r7,r8,r9,r10,r11, lr}
+ vstmdb sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+
+ lsr r9, r1, #5 @ shr $5,%eax
+ add r9, r9, #5 @ $5,%eax
+ str r9, [r2,#240] @ mov %eax,240(%rdx) # AES_KEY->rounds = nbits/32+5;
+ lsl r9, r9, #4 @ shl $4,%eax
+ add r2, r2, #16 @ lea 16(%rdx,%rax),%rdx
+ add r2, r2, r9
+
+ mov r3, #1 @ mov $1,%ecx
+ lsr r8, r1, #1 @ shr $1,%r8d
+ and r8, r8, #32 @ and $32,%r8d
+ eor r8, r8, #32 @ xor $32,%r8d # nbits==192?0:32
+ bl _vpaes_schedule_core
+
+ vldmia sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+ ldmia sp!, {r7,r8,r9,r10,r11, pc} @ return
+ .size vpaes_set_decrypt_key,.-vpaes_set_decrypt_key
+
+ @ Additional constants for converting to bsaes.
+ .type _vpaes_convert_consts,%object
+ .align 4
+ _vpaes_convert_consts:
+ @ .Lk_opt_then_skew applies skew(opt(x)) XOR 0x63, where skew is the linear
+ @ transform in the AES S-box. 0x63 is incorporated into the low half of the
+ @ table. This was computed with the following script:
+ @
+ @ def u64s_to_u128(x, y):
+ @ return x | (y << 64)
+ @ def u128_to_u64s(w):
+ @ return w & ((1<<64)-1), w >> 64
+ @ def get_byte(w, i):
+ @ return (w >> (i*8)) & 0xff
+ @ def apply_table(table, b):
+ @ lo = b & 0xf
+ @ hi = b >> 4
+ @ return get_byte(table[0], lo) ^ get_byte(table[1], hi)
+ @ def opt(b):
+ @ table = [
+ @ u64s_to_u128(0xFF9F4929D6B66000, 0xF7974121DEBE6808),
+ @ u64s_to_u128(0x01EDBD5150BCEC00, 0xE10D5DB1B05C0CE0),
+ @ ]
+ @ return apply_table(table, b)
+ @ def rot_byte(b, n):
+ @ return 0xff & ((b << n) | (b >> (8-n)))
+ @ def skew(x):
+ @ return (x ^ rot_byte(x, 1) ^ rot_byte(x, 2) ^ rot_byte(x, 3) ^
+ @ rot_byte(x, 4))
+ @ table = [0, 0]
+ @ for i in range(16):
+ @ table[0] |= (skew(opt(i)) ^ 0x63) << (i*8)
+ @ table[1] |= skew(opt(i<<4)) << (i*8)
+ @ print(" .quad 0x%016x, 0x%016x" % u128_to_u64s(table[0]))
+ @ print(" .quad 0x%016x, 0x%016x" % u128_to_u64s(table[1]))
+ .Lk_opt_then_skew:
+ .quad 0x9cb8436798bc4763, 0x6440bb9f6044bf9b
+ .quad 0x1f30062936192f00, 0xb49bad829db284ab
+
+ @ .Lk_decrypt_transform is a permutation which performs an 8-bit left-rotation
+ @ followed by a byte-swap on each 32-bit word of a vector. E.g., 0x11223344
+ @ becomes 0x22334411 and then 0x11443322.
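+ @ A quick check of that description (illustrative Python, not upstream):
+ @
+ @ def decrypt_transform_word(w):
+ @     w = ((w << 8) | (w >> 24)) & 0xffffffff               # rotate left 8
+ @     return int.from_bytes(w.to_bytes(4, "big"), "little")  # byte-swap
+ @
+ @ assert decrypt_transform_word(0x11223344) == 0x11443322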
+ .Lk_decrypt_transform:
+ .quad 0x0704050603000102, 0x0f0c0d0e0b08090a
+ .size _vpaes_convert_consts,.-_vpaes_convert_consts
+
+ @ void vpaes_encrypt_key_to_bsaes(AES_KEY *bsaes, const AES_KEY *vpaes);
+ .globl vpaes_encrypt_key_to_bsaes
+ .hidden vpaes_encrypt_key_to_bsaes
+ .type vpaes_encrypt_key_to_bsaes,%function
+ .align 4
+ vpaes_encrypt_key_to_bsaes:
+ stmdb sp!, {r11, lr}
+
+ @ See _vpaes_schedule_core for the key schedule logic. In particular,
+ @ _vpaes_schedule_transform(.Lk_ipt) (section 2.2 of the paper),
+ @ _vpaes_schedule_mangle (section 4.3), and .Lschedule_mangle_last
+ @ contain the transformations not in the bsaes representation. This
+ @ function inverts those transforms.
+ @
+ @ Note also that bsaes-armv7.pl expects aes-armv4.pl's key
+ @ representation, which does not match the other aes_nohw_*
+ @ implementations. The ARM aes_nohw_* stores each 32-bit word
+ @ byteswapped, as a convenience for (unsupported) big-endian ARM, at the
+ @ cost of extra REV and VREV32 operations in little-endian ARM.
+
+ vmov.i8 q9, #0x0f @ Required by _vpaes_schedule_transform
+ adr r2, .Lk_mc_forward @ Must be aligned to 8 mod 16.
+ add r3, r2, 0x90 @ .Lk_sr+0x10-.Lk_mc_forward = 0x90 (Apple's toolchain doesn't support the expression)
+
+ vld1.64 {q12}, [r2]
+ vmov.i8 q10, #0x5b @ .Lk_s63 from vpaes-x86_64
+ adr r11, .Lk_opt @ Must be aligned to 8 mod 16.
+ vmov.i8 q11, #0x63 @ .Lk_s63 without .Lk_ipt applied
+
+ @ vpaes stores one fewer round count than bsaes, but the number of keys
+ @ is the same.
+ ldr r2, [r1,#240]
+ add r2, r2, #1
+ str r2, [r0,#240]
+
+ @ The first key is transformed with _vpaes_schedule_transform(.Lk_ipt).
+ @ Invert this with .Lk_opt.
+ vld1.64 {q0}, [r1]!
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [r0]!
+
+ @ The middle keys have _vpaes_schedule_transform(.Lk_ipt) applied,
+ @ followed by _vpaes_schedule_mangle. _vpaes_schedule_mangle XORs 0x63,
+ @ multiplies by the circulant 0,1,1,1, then applies ShiftRows.
+ .Loop_enc_key_to_bsaes:
+ vld1.64 {q0}, [r1]!
+
+ @ Invert the ShiftRows step (see .Lschedule_mangle_both). Note we cycle
+ @ r3 in the opposite direction and start at .Lk_sr+0x10 instead of 0x30.
+ @ We use r3 rather than r8 to avoid a callee-saved register.
+ vld1.64 {q1}, [r3]
+ vtbl.8 d4, {q0}, d2
+ vtbl.8 d5, {q0}, d3
+ add r3, r3, #16
+ and r3, r3, #~(1<<6)
+ vmov q0, q2
+
+ @ Handle the last key differently.
+ subs r2, r2, #1
+ beq .Loop_enc_key_to_bsaes_last
+
+ @ Multiply by the circulant. This is its own inverse.
+ vtbl.8 d2, {q0}, d24
+ vtbl.8 d3, {q0}, d25
+ vmov q0, q1
+ vtbl.8 d4, {q1}, d24
+ vtbl.8 d5, {q1}, d25
+ veor q0, q0, q2
+ vtbl.8 d2, {q2}, d24
+ vtbl.8 d3, {q2}, d25
+ veor q0, q0, q1
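+ @ Why three applications invert one (a sketch, not an upstream claim):
+ @ with R the per-word byte rotation from .Lk_mc_forward, the block above
+ @ computes (R + R^2 + R^3)(q0) over GF(2). Since R^4 = I and squaring
+ @ kills the cross terms, (R + R^2 + R^3)^2 = R^2 + R^4 + R^6 = I.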
+
+ @ XOR and finish.
+ veor q0, q0, q10
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [r0]!
+ b .Loop_enc_key_to_bsaes
+
+ .Loop_enc_key_to_bsaes_last:
+ @ The final key does not have a basis transform (note
+ @ .Lschedule_mangle_last inverts the original transform). It only XORs
+ @ 0x63 and applies ShiftRows. The latter was already inverted in the
+ @ loop. Note that, because we act on the original representation, we use
+ @ q11, not q10.
+ veor q0, q0, q11
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [r0]
+
+ @ Wipe registers which contained key material.
+ veor q0, q0, q0
+ veor q1, q1, q1
+ veor q2, q2, q2
+
+ ldmia sp!, {r11, pc} @ return
+ .size vpaes_encrypt_key_to_bsaes,.-vpaes_encrypt_key_to_bsaes
+
+ @ void vpaes_decrypt_key_to_bsaes(AES_KEY *vpaes, const AES_KEY *bsaes);
+ .globl vpaes_decrypt_key_to_bsaes
+ .hidden vpaes_decrypt_key_to_bsaes
+ .type vpaes_decrypt_key_to_bsaes,%function
+ .align 4
+ vpaes_decrypt_key_to_bsaes:
+ stmdb sp!, {r11, lr}
+
+ @ See _vpaes_schedule_core for the key schedule logic. Note vpaes
+ @ computes the decryption key schedule in reverse. Additionally,
+ @ aes-x86_64.pl shares some transformations, so we must only partially
+ @ invert vpaes's transformations. In general, vpaes computes in a
+ @ different basis (.Lk_ipt and .Lk_opt) and applies the inverses of
+ @ MixColumns, ShiftRows, and the affine part of the AES S-box (which is
+ @ split into a linear skew and XOR of 0x63). We undo all but MixColumns.
+ @
+ @ Note also that bsaes-armv7.pl expects aes-armv4.pl's key
+ @ representation, which does not match the other aes_nohw_*
+ @ implementations. The ARM aes_nohw_* stores each 32-bit word
+ @ byteswapped, as a convenience for (unsupported) big-endian ARM, at the
+ @ cost of extra REV and VREV32 operations in little-endian ARM.
+
+ adr r2, .Lk_decrypt_transform
+ adr r3, .Lk_sr+0x30
+ adr r11, .Lk_opt_then_skew @ Input to _vpaes_schedule_transform.
+ vld1.64 {q12}, [r2] @ Reuse q12 from encryption.
+ vmov.i8 q9, #0x0f @ Required by _vpaes_schedule_transform
+
+ @ vpaes stores one fewer round count than bsaes, but the number of keys
+ @ is the same.
+ ldr r2, [r1,#240]
+ add r2, r2, #1
+ str r2, [r0,#240]
+
+ @ Undo the basis change and reapply the S-box affine transform. See
+ @ .Lschedule_mangle_last.
+ vld1.64 {q0}, [r1]!
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [r0]!
+
+ @ See _vpaes_schedule_mangle for the transform on the middle keys. Note
+ @ it simultaneously inverts MixColumns and the S-box affine transform.
+ @ See .Lk_dksd through .Lk_dks9.
+ .Loop_dec_key_to_bsaes:
+ vld1.64 {q0}, [r1]!
+
+ @ Invert the ShiftRows step (see .Lschedule_mangle_both). Because vpaes
+ @ wrote the decryption schedule in reverse, walking the keys forwards
+ @ cancels the direction flip, so r3 cycles the same way r8 did there. We
+ @ use r3 rather than r8 to avoid a callee-saved register.
+ vld1.64 {q1}, [r3]
+ vtbl.8 d4, {q0}, d2
+ vtbl.8 d5, {q0}, d3
+ add r3, r3, #64-16
+ and r3, r3, #~(1<<6)
+ vmov q0, q2
+
+ @ Handle the last key differently.
+ subs r2, r2, #1
+ beq .Loop_dec_key_to_bsaes_last
+
+ @ Undo the basis change and reapply the S-box affine transform.
+ bl _vpaes_schedule_transform
+
+ @ Rotate each word by 8 bytes (cycle the rows) and then byte-swap. We
+ @ combine the two operations in .Lk_decrypt_transform.
+ @
+ @ TODO(davidben): Where does the rotation come from?
+ vtbl.8 d2, {q0}, d24
+ vtbl.8 d3, {q0}, d25
+
+ vst1.64 {q1}, [r0]!
+ b .Loop_dec_key_to_bsaes
+
+ .Loop_dec_key_to_bsaes_last:
+ @ The final key only inverts ShiftRows (already done in the loop). See
+ @ .Lschedule_am_decrypting. Its basis is not transformed.
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [r0]!
+
+ @ Wipe registers which contained key material.
+ veor q0, q0, q0
+ veor q1, q1, q1
+ veor q2, q2, q2
+
+ ldmia sp!, {r11, pc} @ return
+ .size vpaes_decrypt_key_to_bsaes,.-vpaes_decrypt_key_to_bsaes
+
+ .globl vpaes_ctr32_encrypt_blocks
+ .hidden vpaes_ctr32_encrypt_blocks
+ .type vpaes_ctr32_encrypt_blocks,%function
+ .align 4
+ vpaes_ctr32_encrypt_blocks:
+ mov ip, sp
+ stmdb sp!, {r7,r8,r9,r10,r11, lr}
+ @ This function uses q4-q7 (d8-d15), which are callee-saved.
+ vstmdb sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+
+ cmp r2, #0
+ @ r8 is passed on the stack.
+ ldr r8, [ip]
+ beq .Lctr32_done
+
+ @ _vpaes_encrypt_core expects the key in r2, so swap r2 and r3.
+ mov r9, r3
+ mov r3, r2
+ mov r2, r9
+
+ @ Load the IV and counter portion.
+ ldr r7, [r8, #12]
+ vld1.8 {q7}, [r8]
+
+ bl _vpaes_preheat
+ rev r7, r7 @ The counter is big-endian.
+
+ .Lctr32_loop:
+ vmov q0, q7
+ vld1.8 {q6}, [r0]! @ Load input ahead of time
+ bl _vpaes_encrypt_core
+ veor q0, q0, q6 @ XOR input and result
+ vst1.8 {q0}, [r1]!
+ subs r3, r3, #1
+ @ Update the counter.
+ add r7, r7, #1
+ rev r9, r7
+ vmov.32 d15[1], r9
+ bne .Lctr32_loop
+
+ .Lctr32_done:
+ vldmia sp!, {d8,d9,d10,d11,d12,d13,d14,d15}
+ ldmia sp!, {r7,r8,r9,r10,r11, pc} @ return
+ .size vpaes_ctr32_encrypt_blocks,.-vpaes_ctr32_encrypt_blocks
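+
+ @ A rough Python model of the loop above (illustrative only;
+ @ aes_encrypt_block is a hypothetical one-block helper):
+ @
+ @ def ctr32_encrypt_blocks(inp, key, ivec, blocks):
+ @     prefix, ctr = ivec[:12], int.from_bytes(ivec[12:], "big")
+ @     out = b""
+ @     for i in range(blocks):
+ @         block = prefix + ((ctr + i) & 0xffffffff).to_bytes(4, "big")
+ @         pad = aes_encrypt_block(key, block)
+ @         out += bytes(p ^ c for p, c in zip(pad, inp[16*i:16*i+16]))
+ @     return out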
+ #endif
+ #endif // !OPENSSL_NO_ASM
+ .section .note.GNU-stack,"",%progbits
--- /dev/null
-.arch armv7-a
+ #! /usr/bin/env perl
+ # Copyright 2015-2016 The OpenSSL Project Authors. All Rights Reserved.
+ #
+ # Licensed under the OpenSSL license (the "License"). You may not use
+ # this file except in compliance with the License. You can obtain a copy
+ # in the file LICENSE in the source distribution or at
+ # https://www.openssl.org/source/license.html
+
+
+ ######################################################################
+ ## Constant-time SSSE3 AES core implementation.
+ ## version 0.1
+ ##
+ ## By Mike Hamburg (Stanford University), 2009
+ ## Public domain.
+ ##
+ ## For details see http://shiftleft.org/papers/vector_aes/ and
+ ## http://crypto.stanford.edu/vpaes/.
+ ##
+ ######################################################################
+ # Adapted from the original x86_64 version and <appro@openssl.org>'s ARMv8
+ # version.
+ #
+ # armv7, aarch64, and x86_64 differ in several ways:
+ #
+ # * x86_64 SSSE3 instructions are two-address (destination operand is also a
+ # source), while NEON is three-address (destination operand is separate from
+ # two sources).
+ #
+ # * aarch64 has 32 SIMD registers available, while x86_64 and armv7 have 16.
+ #
+ # * x86_64 instructions can take memory references, while ARM is a load/store
+ # architecture. This means we sometimes need a spare register.
+ #
+ # * aarch64 and x86_64 have 128-bit byte shuffle instructions (tbl and pshufb),
+ # while armv7 only has a 64-bit byte shuffle (vtbl).
+ #
+ # This means this armv7 version must be a mix of both aarch64 and x86_64
+ # implementations. armv7 and aarch64 have analogous SIMD instructions, so we
+ # base the instructions on aarch64. However, we cannot use aarch64's register
+ # allocation. x86_64's register count matches, but x86_64 is two-address.
+ # vpaes-armv8.pl already accounts for this in the comments, which use
+ # three-address AVX instructions instead of the original SSSE3 ones. We base
+ # register usage on these comments, which are preserved in this file.
+ #
+ # This means we do not use separate input and output registers as in aarch64 and
+ # cannot pin as many constants in the preheat functions. However, the load/store
+ # architecture means we must still deviate from x86_64 in places.
+ #
+ # Next, we account for the byte shuffle instructions. vtbl takes 64-bit source
+ # and destination and 128-bit table. Fortunately, armv7 also allows addressing
+ # upper and lower halves of each 128-bit register. The lower half of q{N} is
+ # d{2*N}. The upper half is d{2*N+1}. Instead of the following non-existent
+ # instruction,
+ #
+ # vtbl.8 q0, q1, q2 @ Index each of q2's 16 bytes into q1. Store in q0.
+ #
+ # we write:
+ #
+ # vtbl.8 d0, q1, d4 @ Index each of d4's 8 bytes into q1. Store in d0.
+ # vtbl.8 d1, q1, d5 @ Index each of d5's 8 bytes into q1. Store in d1.
+ #
+ # For readability, we write d0 and d1 as q0#lo and q0#hi, respectively, and
+ # post-process before outputting. (This is adapted from ghash-armv4.pl.) Note,
+ # however, that destination (q0) and table (q1) registers may no longer match.
+ # We adjust the register usage from x86_64 to avoid this. (Unfortunately, the
+ # two-address pshufb always matched these operands, so this is common.)
+ #
+ # This file also runs against the limit of ARMv7's ADR pseudo-instruction. ADR
+ # expands to an ADD or SUB of the pc register to find an address. That immediate
+ # must fit in ARM's encoding scheme: 8 bits of constant and 4 bits of rotation.
+ # This means larger values must be more aligned.
+ #
+ # ARM additionally has two encodings, ARM and Thumb mode. Our assembly files may
+ # use either encoding (do we actually need to support this?). In ARM mode, the
+ # distances get large enough to require 16-byte alignment. Moving constants
+ # closer to their use resolves most of this, but common constants in
+ # _vpaes_consts are used by the whole file. Affected ADR instructions must be
+ # placed at 8 mod 16 (the pc register is 8 ahead). Instructions with this
+ # constraint have been commented.
+ #
+ # For details on ARM's immediate value encoding scheme, see
+ # https://alisdair.mcdiarmid.org/arm-immediate-value-encoding/
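+ #
+ # As a rough model of that encoding (an illustration, not taken from the
+ # page above): a 32-bit value is a valid ARM-mode immediate iff some even
+ # left-rotation of it fits in 8 bits.
+ #
+ # def arm_imm_ok(v):
+ #     rol = lambda x, n: ((x << n) | (x >> (32 - n))) & 0xffffffff
+ #     return any(rol(v, n) <= 0xff for n in range(0, 32, 2))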
+ #
+ # Finally, a summary of armv7 and aarch64 SIMD syntax differences:
+ #
+ # * armv7 prefixes SIMD instructions with 'v', while aarch64 does not.
+ #
+ # * armv7 SIMD registers are named like q0 (and d0 for the half-width ones).
+ # aarch64 names registers like v0, and denotes half-width operations in an
+ # instruction suffix (see below).
+ #
+ # * aarch64 embeds size and lane information in register suffixes. v0.16b is
+ # 16 bytes, v0.8h is eight u16s, v0.4s is four u32s, and v0.2d is two u64s.
+ # armv7 embeds the total size in the register name (see above) and the size of
+ # each element in an instruction suffix, which may look like vmov.i8,
+ # vshr.u8, or vtbl.8, depending on instruction.
+
+ use strict;
+
+ my $flavour = shift;
+ my $output;
+ while (($output=shift) && ($output!~/\w[\w\-]*\.\w+$/)) {}
+
+ $0 =~ m/(.*[\/\\])[^\/\\]+$/;
+ my $dir=$1;
+ my $xlate;
+ ( $xlate="${dir}arm-xlate.pl" and -f $xlate ) or
+ ( $xlate="${dir}../../../perlasm/arm-xlate.pl" and -f $xlate) or
+ die "can't locate arm-xlate.pl";
+
+ open OUT,"| \"$^X\" \"$xlate\" $flavour \"$output\"";
+ *STDOUT=*OUT;
+
+ my $code = "";
+
+ $code.=<<___;
+ .syntax unified
+
++.arch armv6
+ .fpu neon
+
+ #if defined(__thumb2__)
+ .thumb
+ #else
+ .code 32
+ #endif
+
+ .text
+
+ .type _vpaes_consts,%object
+ .align 7 @ totally strategic alignment
+ _vpaes_consts:
+ .Lk_mc_forward: @ mc_forward
+ .quad 0x0407060500030201, 0x0C0F0E0D080B0A09
+ .quad 0x080B0A0904070605, 0x000302010C0F0E0D
+ .quad 0x0C0F0E0D080B0A09, 0x0407060500030201
+ .quad 0x000302010C0F0E0D, 0x080B0A0904070605
+ .Lk_mc_backward:@ mc_backward
+ .quad 0x0605040702010003, 0x0E0D0C0F0A09080B
+ .quad 0x020100030E0D0C0F, 0x0A09080B06050407
+ .quad 0x0E0D0C0F0A09080B, 0x0605040702010003
+ .quad 0x0A09080B06050407, 0x020100030E0D0C0F
+ .Lk_sr: @ sr
+ .quad 0x0706050403020100, 0x0F0E0D0C0B0A0908
+ .quad 0x030E09040F0A0500, 0x0B06010C07020D08
+ .quad 0x0F060D040B020900, 0x070E050C030A0108
+ .quad 0x0B0E0104070A0D00, 0x0306090C0F020508
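+
+ @ As a sanity check (an observation, not upstream's generator): row i of
+ @ .Lk_sr above is the byte permutation k -> ((1 + 4*i)*k) mod 16, with
+ @ row 1 being ShiftRows on a column-major state. In Python:
+ @
+ @ for i in range(4):
+ @     print(["%02x" % (((1 + 4*i) * k) % 16) for k in range(16)])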
+
+ @
+ @ "Hot" constants
+ @
+ .Lk_inv: @ inv, inva
+ .quad 0x0E05060F0D080180, 0x040703090A0B0C02
+ .quad 0x01040A060F0B0780, 0x030D0E0C02050809
+ .Lk_ipt: @ input transform (lo, hi)
+ .quad 0xC2B2E8985A2A7000, 0xCABAE09052227808
+ .quad 0x4C01307D317C4D00, 0xCD80B1FCB0FDCC81
+ .Lk_sbo: @ sbou, sbot
+ .quad 0xD0D26D176FBDC700, 0x15AABF7AC502A878
+ .quad 0xCFE474A55FBB6A00, 0x8E1E90D1412B35FA
+ .Lk_sb1: @ sb1u, sb1t
+ .quad 0x3618D415FAE22300, 0x3BF7CCC10D2ED9EF
+ .quad 0xB19BE18FCB503E00, 0xA5DF7A6E142AF544
+ .Lk_sb2: @ sb2u, sb2t
+ .quad 0x69EB88400AE12900, 0xC2A163C8AB82234A
+ .quad 0xE27A93C60B712400, 0x5EB7E955BC982FCD
+
+ .asciz "Vector Permutation AES for ARMv7 NEON, Mike Hamburg (Stanford University)"
+ .size _vpaes_consts,.-_vpaes_consts
+ .align 6
+ ___
+
+ {
+ my ($inp,$out,$key) = map("r$_", (0..2));
+
+ my ($invlo,$invhi) = map("q$_", (10..11));
+ my ($sb1u,$sb1t,$sb2u,$sb2t) = map("q$_", (12..15));
+
+ $code.=<<___;
+ @@
+ @@ _aes_preheat
+ @@
+ @@ Fills q9-q15 as specified below.
+ @@
+ .type _vpaes_preheat,%function
+ .align 4
+ _vpaes_preheat:
+ adr r10, .Lk_inv
+ vmov.i8 q9, #0x0f @ .Lk_s0F
+ vld1.64 {q10,q11}, [r10]! @ .Lk_inv
+ add r10, r10, #64 @ Skip .Lk_ipt, .Lk_sbo
+ vld1.64 {q12,q13}, [r10]! @ .Lk_sb1
+ vld1.64 {q14,q15}, [r10] @ .Lk_sb2
+ bx lr
+
+ @@
+ @@ _aes_encrypt_core
+ @@
+ @@ AES-encrypt q0.
+ @@
+ @@ Inputs:
+ @@ q0 = input
+ @@ q9-q15 as in _vpaes_preheat
+ @@ [$key] = scheduled keys
+ @@
+ @@ Output in q0
+ @@ Clobbers q1-q5, r8-r11
+ @@ Preserves q6-q8 so you get some local vectors
+ @@
+ @@
+ .type _vpaes_encrypt_core,%function
+ .align 4
+ _vpaes_encrypt_core:
+ mov r9, $key
+ ldr r8, [$key,#240] @ pull rounds
+ adr r11, .Lk_ipt
+ @ vmovdqa .Lk_ipt(%rip), %xmm2 # iptlo
+ @ vmovdqa .Lk_ipt+16(%rip), %xmm3 # ipthi
+ vld1.64 {q2, q3}, [r11]
+ adr r11, .Lk_mc_forward+16
+ vld1.64 {q5}, [r9]! @ vmovdqu (%r9), %xmm5 # round0 key
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0
+ vtbl.8 q1#lo, {q2}, q1#lo @ vpshufb %xmm1, %xmm2, %xmm1
+ vtbl.8 q1#hi, {q2}, q1#hi
+ vtbl.8 q2#lo, {q3}, q0#lo @ vpshufb %xmm0, %xmm3, %xmm2
+ vtbl.8 q2#hi, {q3}, q0#hi
+ veor q0, q1, q5 @ vpxor %xmm5, %xmm1, %xmm0
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+
+ @ .Lenc_entry ends with a bne instruction which is normally paired with
+ @ subs in .Lenc_loop.
+ tst r8, r8
+ b .Lenc_entry
+
+ .align 4
+ .Lenc_loop:
+ @ middle of middle round
+ add r10, r11, #0x40
+ vtbl.8 q4#lo, {$sb1t}, q2#lo @ vpshufb %xmm2, %xmm13, %xmm4 # 4 = sb1u
+ vtbl.8 q4#hi, {$sb1t}, q2#hi
+ vld1.64 {q1}, [r11]! @ vmovdqa -0x40(%r11,%r10), %xmm1 # .Lk_mc_forward[]
+ vtbl.8 q0#lo, {$sb1u}, q3#lo @ vpshufb %xmm3, %xmm12, %xmm0 # 0 = sb1t
+ vtbl.8 q0#hi, {$sb1u}, q3#hi
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = sb1u + k
+ vtbl.8 q5#lo, {$sb2t}, q2#lo @ vpshufb %xmm2, %xmm15, %xmm5 # 4 = sb2u
+ vtbl.8 q5#hi, {$sb2t}, q2#hi
+ veor q0, q0, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = A
+ vtbl.8 q2#lo, {$sb2u}, q3#lo @ vpshufb %xmm3, %xmm14, %xmm2 # 2 = sb2t
+ vtbl.8 q2#hi, {$sb2u}, q3#hi
+ vld1.64 {q4}, [r10] @ vmovdqa (%r11,%r10), %xmm4 # .Lk_mc_backward[]
+ vtbl.8 q3#lo, {q0}, q1#lo @ vpshufb %xmm1, %xmm0, %xmm3 # 0 = B
+ vtbl.8 q3#hi, {q0}, q1#hi
+ veor q2, q2, q5 @ vpxor %xmm5, %xmm2, %xmm2 # 2 = 2A
+ @ Write to q5 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 q5#lo, {q0}, q4#lo @ vpshufb %xmm4, %xmm0, %xmm0 # 3 = D
+ vtbl.8 q5#hi, {q0}, q4#hi
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 0 = 2A+B
+ vtbl.8 q4#lo, {q3}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm4 # 0 = 2B+C
+ vtbl.8 q4#hi, {q3}, q1#hi
+ @ Here we restore the original q0/q5 usage.
+ veor q0, q5, q3 @ vpxor %xmm3, %xmm0, %xmm0 # 3 = 2A+B+D
+ and r11, r11, #~(1<<6) @ and \$0x30, %r11 # ... mod 4
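+ @ Clearing bit 6 wraps the offset through 0x10/0x20/0x30/0x00:
+ @ _vpaes_consts is 128-byte aligned (.align 7 above), so bit 6 of the
+ @ address comes only from the offset into the table.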
+ veor q0, q0, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = 2A+3B+C+D
+ subs r8, r8, #1 @ nr--
+
+ .Lenc_entry:
+ @ top of round
+ vand q1, q0, q9 @ vpand %xmm0, %xmm9, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0 # 1 = i
+ vtbl.8 q5#lo, {$invhi}, q1#lo @ vpshufb %xmm1, %xmm11, %xmm5 # 2 = a/k
+ vtbl.8 q5#hi, {$invhi}, q1#hi
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 q3#lo, {$invlo}, q0#lo @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 q3#hi, {$invlo}, q0#hi
+ vtbl.8 q4#lo, {$invlo}, q1#lo @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 q4#hi, {$invlo}, q1#hi
+ veor q3, q3, q5 @ vpxor %xmm5, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 q2#lo, {$invlo}, q3#lo @ vpshufb %xmm3, %xmm10, %xmm2 # 2 = 1/iak
+ vtbl.8 q2#hi, {$invlo}, q3#hi
+ vtbl.8 q3#lo, {$invlo}, q4#lo @ vpshufb %xmm4, %xmm10, %xmm3 # 3 = 1/jak
+ vtbl.8 q3#hi, {$invlo}, q4#hi
+ veor q2, q2, q1 @ vpxor %xmm1, %xmm2, %xmm2 # 2 = io
+ veor q3, q3, q0 @ vpxor %xmm0, %xmm3, %xmm3 # 3 = jo
+ vld1.64 {q5}, [r9]! @ vmovdqu (%r9), %xmm5
+ bne .Lenc_loop
+
+ @ middle of last round
+ add r10, r11, #0x80
+
+ adr r11, .Lk_sbo
+ @ Read to q1 instead of q4, so the vtbl.8 instruction below does not
+ @ overlap table and destination registers.
+ vld1.64 {q1}, [r11]! @ vmovdqa -0x60(%r10), %xmm4 # 3 : sbou
+ vld1.64 {q0}, [r11] @ vmovdqa -0x50(%r10), %xmm0 # 0 : sbot .Lk_sbo+16
+ vtbl.8 q4#lo, {q1}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbou
+ vtbl.8 q4#hi, {q1}, q2#hi
+ vld1.64 {q1}, [r10] @ vmovdqa 0x40(%r11,%r10), %xmm1 # .Lk_sr[]
+ @ Write to q2 instead of q0 below, to avoid overlapping table and
+ @ destination registers.
+ vtbl.8 q2#lo, {q0}, q3#lo @ vpshufb %xmm3, %xmm0, %xmm0 # 0 = sb1t
+ vtbl.8 q2#hi, {q0}, q3#hi
+ veor q4, q4, q5 @ vpxor %xmm5, %xmm4, %xmm4 # 4 = sb1u + k
+ veor q2, q2, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 0 = A
+ @ Here we restore the original q0/q2 usage.
+ vtbl.8 q0#lo, {q2}, q1#lo @ vpshufb %xmm1, %xmm0, %xmm0
+ vtbl.8 q0#hi, {q2}, q1#hi
+ bx lr
+ .size _vpaes_encrypt_core,.-_vpaes_encrypt_core
+
+ .globl vpaes_encrypt
+ .type vpaes_encrypt,%function
+ .align 4
+ vpaes_encrypt:
+ @ _vpaes_encrypt_core uses r8-r11. Round up to r7-r11 to maintain stack
+ @ alignment.
+ stmdb sp!, {r7-r11,lr}
+ @ _vpaes_encrypt_core uses q4-q5 (d8-d11), which are callee-saved.
+ vstmdb sp!, {d8-d11}
+
+ vld1.64 {q0}, [$inp]
+ bl _vpaes_preheat
+ bl _vpaes_encrypt_core
+ vst1.64 {q0}, [$out]
+
+ vldmia sp!, {d8-d11}
+ ldmia sp!, {r7-r11, pc} @ return
+ .size vpaes_encrypt,.-vpaes_encrypt
+
+ @
+ @ Decryption stuff
+ @
+ .type _vpaes_decrypt_consts,%object
+ .align 4
+ _vpaes_decrypt_consts:
+ .Lk_dipt: @ decryption input transform
+ .quad 0x0F505B040B545F00, 0x154A411E114E451A
+ .quad 0x86E383E660056500, 0x12771772F491F194
+ .Lk_dsbo: @ decryption sbox final output
+ .quad 0x1387EA537EF94000, 0xC7AA6DB9D4943E2D
+ .quad 0x12D7560F93441D00, 0xCA4B8159D8C58E9C
+ .Lk_dsb9: @ decryption sbox output *9*u, *9*t
+ .quad 0x851C03539A86D600, 0xCAD51F504F994CC9
+ .quad 0xC03B1789ECD74900, 0x725E2C9EB2FBA565
+ .Lk_dsbd: @ decryption sbox output *D*u, *D*t
+ .quad 0x7D57CCDFE6B1A200, 0xF56E9B13882A4439
+ .quad 0x3CE2FAF724C6CB00, 0x2931180D15DEEFD3
+ .Lk_dsbb: @ decryption sbox output *B*u, *B*t
+ .quad 0xD022649296B44200, 0x602646F6B0F2D404
+ .quad 0xC19498A6CD596700, 0xF3FF0C3E3255AA6B
+ .Lk_dsbe: @ decryption sbox output *E*u, *E*t
+ .quad 0x46F2929626D4D000, 0x2242600464B4F6B0
+ .quad 0x0C55A6CDFFAAC100, 0x9467F36B98593E32
+ .size _vpaes_decrypt_consts,.-_vpaes_decrypt_consts
+
+ @@
+ @@ Decryption core
+ @@
+ @@ Same API as encryption core, except it clobbers q12-q15 rather than using
+ @@ the values from _vpaes_preheat. q9-q11 must still be set from
+ @@ _vpaes_preheat.
+ @@
+ .type _vpaes_decrypt_core,%function
+ .align 4
+ _vpaes_decrypt_core:
+ mov r9, $key
+ ldr r8, [$key,#240] @ pull rounds
+
+ @ This function performs shuffles with various constants. The x86_64
+ @ version loads them on-demand into %xmm0-%xmm5. This does not work well
+ @ for ARMv7 because those registers are shuffle destinations. The ARMv8
+ @ version preloads those constants into registers, but ARMv7 has half
+ @ the registers to work with. Instead, we load them on-demand into
+ @ q12-q15, registers normally used for preloaded constants. This is fine
+ @ because decryption doesn't use those constants. The values are
+ @ constant, so this does not interfere with potential 2x optimizations.
+ adr r7, .Lk_dipt
+
+ vld1.64 {q12,q13}, [r7] @ vmovdqa .Lk_dipt(%rip), %xmm2 # iptlo
+ lsl r11, r8, #4 @ mov %rax, %r11; shl \$4, %r11
+ eor r11, r11, #0x30 @ xor \$0x30, %r11
+ adr r10, .Lk_sr
+ and r11, r11, #0x30 @ and \$0x30, %r11
+ add r11, r11, r10
+ adr r10, .Lk_mc_forward+48
+
+ vld1.64 {q4}, [r9]! @ vmovdqu (%r9), %xmm4 # round0 key
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0
+ vtbl.8 q2#lo, {q12}, q1#lo @ vpshufb %xmm1, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q12}, q1#hi
+ vld1.64 {q5}, [r10] @ vmovdqa .Lk_mc_forward+48(%rip), %xmm5
+ @ vmovdqa .Lk_dipt+16(%rip), %xmm1 # ipthi
+ vtbl.8 q0#lo, {q13}, q0#lo @ vpshufb %xmm0, %xmm1, %xmm0
+ vtbl.8 q0#hi, {q13}, q0#hi
+ veor q2, q2, q4 @ vpxor %xmm4, %xmm2, %xmm2
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+
+ @ .Ldec_entry ends with a bne instruction which is normally paired with
+ @ subs in .Ldec_loop.
+ tst r8, r8
+ b .Ldec_entry
+
+ .align 4
+ .Ldec_loop:
+ @
+ @ Inverse mix columns
+ @
+
+ @ We load .Lk_dsb* into q12-q15 on-demand. See the comment at the top of
+ @ the function.
+ adr r10, .Lk_dsb9
+ vld1.64 {q12,q13}, [r10]! @ vmovdqa -0x20(%r10),%xmm4 # 4 : sb9u
+ @ vmovdqa -0x10(%r10),%xmm1 # 0 : sb9t
+ @ Load sbd* ahead of time.
+ vld1.64 {q14,q15}, [r10]! @ vmovdqa 0x00(%r10),%xmm4 # 4 : sbdu
+ @ vmovdqa 0x10(%r10),%xmm1 # 0 : sbdt
+ vtbl.8 q4#lo, {q12}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sb9u
+ vtbl.8 q4#hi, {q12}, q2#hi
+ vtbl.8 q1#lo, {q13}, q3#lo @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sb9t
+ vtbl.8 q1#hi, {q13}, q3#hi
+ veor q0, q4, q0 @ vpxor %xmm4, %xmm0, %xmm0
+
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+
+ @ Load sbb* ahead of time.
+ vld1.64 {q12,q13}, [r10]! @ vmovdqa 0x20(%r10),%xmm4 # 4 : sbbu
+ @ vmovdqa 0x30(%r10),%xmm1 # 0 : sbbt
+
+ vtbl.8 q4#lo, {q14}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbdu
+ vtbl.8 q4#hi, {q14}, q2#hi
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 q1#lo, {q0}, q5#lo @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 q1#hi, {q0}, q5#hi
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 q1#lo, {q15}, q3#lo @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbdt
+ vtbl.8 q1#hi, {q15}, q3#hi
+ @ vmovdqa 0x20(%r10), %xmm4 # 4 : sbbu
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+ @ vmovdqa 0x30(%r10), %xmm1 # 0 : sbbt
+
+ @ Load sbd* ahead of time.
+ vld1.64 {q14,q15}, [r10]! @ vmovdqa 0x40(%r10),%xmm4 # 4 : sbeu
+ @ vmovdqa 0x50(%r10),%xmm1 # 0 : sbet
+
+ vtbl.8 q4#lo, {q12}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbbu
+ vtbl.8 q4#hi, {q12}, q2#hi
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 q1#lo, {q0}, q5#lo @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 q1#hi, {q0}, q5#hi
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 q1#lo, {q13}, q3#lo @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbbt
+ vtbl.8 q1#hi, {q13}, q3#hi
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+
+ vtbl.8 q4#lo, {q14}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbeu
+ vtbl.8 q4#hi, {q14}, q2#hi
+ @ Write to q1 instead of q0, so the table and destination registers do
+ @ not overlap.
+ vtbl.8 q1#lo, {q0}, q5#lo @ vpshufb %xmm5, %xmm0, %xmm0 # MC ch
+ vtbl.8 q1#hi, {q0}, q5#hi
+ @ Here we restore the original q0/q1 usage. This instruction is
+ @ reordered from the ARMv8 version so we do not clobber the vtbl.8
+ @ below.
+ veor q0, q1, q4 @ vpxor %xmm4, %xmm0, %xmm0 # 4 = ch
+ vtbl.8 q1#lo, {q15}, q3#lo @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sbet
+ vtbl.8 q1#hi, {q15}, q3#hi
+ vext.8 q5, q5, q5, #12 @ vpalignr \$12, %xmm5, %xmm5, %xmm5
+ veor q0, q0, q1 @ vpxor %xmm1, %xmm0, %xmm0 # 0 = ch
+ subs r8, r8, #1 @ sub \$1,%rax # nr--
+
+ .Ldec_entry:
+ @ top of round
+ vand q1, q0, q9 @ vpand %xmm9, %xmm0, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0 # 1 = i
+ vtbl.8 q2#lo, {$invhi}, q1#lo @ vpshufb %xmm1, %xmm11, %xmm2 # 2 = a/k
+ vtbl.8 q2#hi, {$invhi}, q1#hi
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 q3#lo, {$invlo}, q0#lo @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 q3#hi, {$invlo}, q0#hi
+ vtbl.8 q4#lo, {$invlo}, q1#lo @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 q4#hi, {$invlo}, q1#hi
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ veor q4, q4, q2 @ vpxor %xmm2, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 q2#lo, {$invlo}, q3#lo @ vpshufb %xmm3, %xmm10, %xmm2 # 2 = 1/iak
+ vtbl.8 q2#hi, {$invlo}, q3#hi
+ vtbl.8 q3#lo, {$invlo}, q4#lo @ vpshufb %xmm4, %xmm10, %xmm3 # 3 = 1/jak
+ vtbl.8 q3#hi, {$invlo}, q4#hi
+ veor q2, q2, q1 @ vpxor %xmm1, %xmm2, %xmm2 # 2 = io
+ veor q3, q3, q0 @ vpxor %xmm0, %xmm3, %xmm3 # 3 = jo
+ vld1.64 {q0}, [r9]! @ vmovdqu (%r9), %xmm0
+ bne .Ldec_loop
+
+ @ middle of last round
+
+ adr r10, .Lk_dsbo
+
+ @ Write to q1 rather than q4 to avoid overlapping table and destination.
+ vld1.64 {q1}, [r10]! @ vmovdqa 0x60(%r10), %xmm4 # 3 : sbou
+ vtbl.8 q4#lo, {q1}, q2#lo @ vpshufb %xmm2, %xmm4, %xmm4 # 4 = sbou
+ vtbl.8 q4#hi, {q1}, q2#hi
+ @ Write to q2 rather than q1 to avoid overlapping table and destination.
+ vld1.64 {q2}, [r10] @ vmovdqa 0x70(%r10), %xmm1 # 0 : sbot
+ vtbl.8 q1#lo, {q2}, q3#lo @ vpshufb %xmm3, %xmm1, %xmm1 # 0 = sb1t
+ vtbl.8 q1#hi, {q2}, q3#hi
+ vld1.64 {q2}, [r11] @ vmovdqa -0x160(%r11), %xmm2 # .Lk_sr-.Lk_dsbd=-0x160
+ veor q4, q4, q0 @ vpxor %xmm0, %xmm4, %xmm4 # 4 = sb1u + k
+ @ Write to q1 rather than q0 so the table and destination registers
+ @ below do not overlap.
+ veor q1, q1, q4 @ vpxor %xmm4, %xmm1, %xmm0 # 0 = A
+ vtbl.8 q0#lo, {q1}, q2#lo @ vpshufb %xmm2, %xmm0, %xmm0
+ vtbl.8 q0#hi, {q1}, q2#hi
+ bx lr
+ .size _vpaes_decrypt_core,.-_vpaes_decrypt_core
+
+ .globl vpaes_decrypt
+ .type vpaes_decrypt,%function
+ .align 4
+ vpaes_decrypt:
+ @ _vpaes_decrypt_core uses r7-r11.
+ stmdb sp!, {r7-r11,lr}
+ @ _vpaes_decrypt_core uses q4-q5 (d8-d11), which are callee-saved.
+ vstmdb sp!, {d8-d11}
+
+ vld1.64 {q0}, [$inp]
+ bl _vpaes_preheat
+ bl _vpaes_decrypt_core
+ vst1.64 {q0}, [$out]
+
+ vldmia sp!, {d8-d11}
+ ldmia sp!, {r7-r11, pc} @ return
+ .size vpaes_decrypt,.-vpaes_decrypt
+ ___
+ }
+ {
+ my ($inp,$bits,$out,$dir)=("r0","r1","r2","r3");
+ my ($rcon,$s0F,$invlo,$invhi,$s63) = map("q$_",(8..12));
+
+ $code.=<<___;
+ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+ @@ @@
+ @@ AES key schedule @@
+ @@ @@
+ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+
+ @ This function diverges from both x86_64 and aarch64 in which constants are
+ @ pinned. x86_64 has a common preheat function for all operations. aarch64
+ @ separates them because it has enough registers to pin nearly all constants.
+ @ armv7 does not have enough registers, but needing explicit loads and stores
+ @ also complicates using x86_64's register allocation directly.
+ @
+ @ We pin some constants for convenience and leave q14 and q15 free to load
+ @ others on demand.
+
+ @
+ @ Key schedule constants
+ @
+ .type _vpaes_key_consts,%object
+ .align 4
+ _vpaes_key_consts:
+ .Lk_dksd: @ decryption key schedule: invskew x*D
+ .quad 0xFEB91A5DA3E44700, 0x0740E3A45A1DBEF9
+ .quad 0x41C277F4B5368300, 0x5FDC69EAAB289D1E
+ .Lk_dksb: @ decryption key schedule: invskew x*B
+ .quad 0x9A4FCA1F8550D500, 0x03D653861CC94C99
+ .quad 0x115BEDA7B6FC4A00, 0xD993256F7E3482C8
+ .Lk_dkse: @ decryption key schedule: invskew x*E + 0x63
+ .quad 0xD5031CCA1FC9D600, 0x53859A4C994F5086
+ .quad 0xA23196054FDC7BE8, 0xCD5EF96A20B31487
+ .Lk_dks9: @ decryption key schedule: invskew x*9
+ .quad 0xB6116FC87ED9A700, 0x4AED933482255BFC
+ .quad 0x4576516227143300, 0x8BB89FACE9DAFDCE
+
+ .Lk_rcon: @ rcon
+ .quad 0x1F8391B9AF9DEEB6, 0x702A98084D7C7D81
+
+ .Lk_opt: @ output transform
+ .quad 0xFF9F4929D6B66000, 0xF7974121DEBE6808
+ .quad 0x01EDBD5150BCEC00, 0xE10D5DB1B05C0CE0
+ .Lk_deskew: @ deskew tables: inverts the sbox's "skew"
+ .quad 0x07E4A34047A4E300, 0x1DFEB95A5DBEF91A
+ .quad 0x5F36B5DC83EA6900, 0x2841C2ABF49D1E77
+ .size _vpaes_key_consts,.-_vpaes_key_consts
+
+ .type _vpaes_key_preheat,%function
+ .align 4
+ _vpaes_key_preheat:
+ adr r11, .Lk_rcon
+ vmov.i8 $s63, #0x5b @ .Lk_s63
+ adr r10, .Lk_inv @ Must be aligned to 8 mod 16.
+ vmov.i8 $s0F, #0x0f @ .Lk_s0F
+ vld1.64 {$invlo,$invhi}, [r10] @ .Lk_inv
+ vld1.64 {$rcon}, [r11] @ .Lk_rcon
+ bx lr
+ .size _vpaes_key_preheat,.-_vpaes_key_preheat
+
+ .type _vpaes_schedule_core,%function
+ .align 4
+ _vpaes_schedule_core:
+ @ We only need to save lr, but ARM requires an 8-byte stack alignment,
+ @ so save an extra register.
+ stmdb sp!, {r3,lr}
+
+ bl _vpaes_key_preheat @ load the tables
+
+ adr r11, .Lk_ipt @ Must be aligned to 8 mod 16.
+ vld1.64 {q0}, [$inp]! @ vmovdqu (%rdi), %xmm0 # load key (unaligned)
+
+ @ input transform
+ @ Use q4 here rather than q3 so .Lschedule_am_decrypting does not
+ @ overlap table and destination.
+ vmov q4, q0 @ vmovdqa %xmm0, %xmm3
+ bl _vpaes_schedule_transform
+ adr r10, .Lk_sr @ Must be aligned to 8 mod 16.
+ vmov q7, q0 @ vmovdqa %xmm0, %xmm7
+
+ add r8, r8, r10
+ tst $dir, $dir
+ bne .Lschedule_am_decrypting
+
+ @ encrypting, output zeroth round key after transform
+ vst1.64 {q0}, [$out] @ vmovdqu %xmm0, (%rdx)
+ b .Lschedule_go
+
+ .Lschedule_am_decrypting:
+ @ decrypting, output zeroth round key after shiftrows
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ vtbl.8 q3#lo, {q4}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q4}, q1#hi
+ vst1.64 {q3}, [$out] @ vmovdqu %xmm3, (%rdx)
+ eor r8, r8, #0x30 @ xor \$0x30, %r8
+
+ .Lschedule_go:
+ cmp $bits, #192 @ cmp \$192, %esi
+ bhi .Lschedule_256
+ beq .Lschedule_192
+ @ 128: fall through
+
+ @@
+ @@ .schedule_128
+ @@
+ @@ 128-bit specific part of key schedule.
+ @@
+ @@ This schedule is really simple, because all its parts
+ @@ are accomplished by the subroutines.
+ @@
+ .Lschedule_128:
+ mov $inp, #10 @ mov \$10, %esi
+
+ .Loop_schedule_128:
+ bl _vpaes_schedule_round
+ subs $inp, $inp, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle @ write output
+ b .Loop_schedule_128
+
+ @@
+ @@ .aes_schedule_192
+ @@
+ @@ 192-bit specific part of key schedule.
+ @@
+ @@ The main body of this schedule is the same as the 128-bit
+ @@ schedule, but with more smearing. The long, high side is
+ @@ stored in q7 as before, and the short, low side is in
+ @@ the high bits of q6.
+ @@
+ @@ This schedule is somewhat nastier, however, because each
+ @@ round produces 192 bits of key material, or 1.5 round keys.
+ @@ Therefore, on each cycle we do 2 rounds and produce 3 round
+ @@ keys.
+ @@
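+ @@ For example (a rough count, matching the code below): AES-192 needs
+ @@ 13 round keys. The zeroth is written before .Lschedule_go; the four
+ @@ passes of the loop write 3+3+3+2 keys; and .Lschedule_mangle_last
+ @@ writes the last one.
+ @@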
+ .align 4
+ .Lschedule_192:
+ sub $inp, $inp, #8
+ vld1.64 {q0}, [$inp] @ vmovdqu 8(%rdi),%xmm0 # load key part 2 (very unaligned)
+ bl _vpaes_schedule_transform @ input transform
+ vmov q6, q0 @ vmovdqa %xmm0, %xmm6 # save short part
+ vmov.i8 q6#lo, #0 @ vpxor %xmm4, %xmm4, %xmm4 # clear 4
+ @ vmovhlps %xmm4, %xmm6, %xmm6 # clobber low side with zeros
+ mov $inp, #4 @ mov \$4, %esi
+
+ .Loop_schedule_192:
+ bl _vpaes_schedule_round
+ vext.8 q0, q6, q0, #8 @ vpalignr \$8,%xmm6,%xmm0,%xmm0
+ bl _vpaes_schedule_mangle @ save key n
+ bl _vpaes_schedule_192_smear
+ bl _vpaes_schedule_mangle @ save key n+1
+ bl _vpaes_schedule_round
+ subs $inp, $inp, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle @ save key n+2
+ bl _vpaes_schedule_192_smear
+ b .Loop_schedule_192
+
+ @@
+ @@ .aes_schedule_256
+ @@
+ @@ 256-bit specific part of key schedule.
+ @@
+ @@ The structure here is very similar to the 128-bit
+ @@ schedule, but with an additional "low side" in
+ @@ q6. The low side's rounds are the same as the
+ @@ high side's, except no rcon and no rotation.
+ @@
+ .align 4
+ .Lschedule_256:
+ vld1.64 {q0}, [$inp] @ vmovdqu 16(%rdi),%xmm0 # load key part 2 (unaligned)
+ bl _vpaes_schedule_transform @ input transform
+ mov $inp, #7 @ mov \$7, %esi
+
+ .Loop_schedule_256:
+ bl _vpaes_schedule_mangle @ output low result
+ vmov q6, q0 @ vmovdqa %xmm0, %xmm6 # save cur_lo in xmm6
+
+ @ high round
+ bl _vpaes_schedule_round
+ subs $inp, $inp, #1 @ dec %esi
+ beq .Lschedule_mangle_last
+ bl _vpaes_schedule_mangle
+
+ @ low round. swap xmm7 and xmm6
+ vdup.32 q0, q0#hi[1] @ vpshufd \$0xFF, %xmm0, %xmm0
+ vmov.i8 q4, #0
+ vmov q5, q7 @ vmovdqa %xmm7, %xmm5
+ vmov q7, q6 @ vmovdqa %xmm6, %xmm7
+ bl _vpaes_schedule_low_round
+ vmov q7, q5 @ vmovdqa %xmm5, %xmm7
+
+ b .Loop_schedule_256
+
+ @@
+ @@ .aes_schedule_mangle_last
+ @@
+ @@ Mangler for last round of key schedule
+ @@ Mangles q0
+ @@ when encrypting, outputs out(q0) ^ 0x63
+ @@ when decrypting, outputs unskew(q0)
+ @@
+ @@ Always called right before return... jumps to cleanup and exits
+ @@
+ .align 4
+ .Lschedule_mangle_last:
+ @ schedule last round key from xmm0
+ adr r11, .Lk_deskew @ lea .Lk_deskew(%rip),%r11 # prepare to deskew
+ tst $dir, $dir
+ bne .Lschedule_mangle_last_dec
+
+ @ encrypting
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10),%xmm1
+ adr r11, .Lk_opt @ lea .Lk_opt(%rip), %r11 # prepare to output transform
+ add $out, $out, #32 @ add \$32, %rdx
+ vmov q2, q0
+ vtbl.8 q0#lo, {q2}, q1#lo @ vpshufb %xmm1, %xmm0, %xmm0 # output permute
+ vtbl.8 q0#hi, {q2}, q1#hi
+
+ .Lschedule_mangle_last_dec:
+ sub $out, $out, #16 @ add \$-16, %rdx
+ veor q0, q0, $s63 @ vpxor .Lk_s63(%rip), %xmm0, %xmm0
+ bl _vpaes_schedule_transform @ output transform
+ vst1.64 {q0}, [$out] @ vmovdqu %xmm0, (%rdx) # save last key
+
+ @ cleanup
+ veor q0, q0, q0 @ vpxor %xmm0, %xmm0, %xmm0
+ veor q1, q1, q1 @ vpxor %xmm1, %xmm1, %xmm1
+ veor q2, q2, q2 @ vpxor %xmm2, %xmm2, %xmm2
+ veor q3, q3, q3 @ vpxor %xmm3, %xmm3, %xmm3
+ veor q4, q4, q4 @ vpxor %xmm4, %xmm4, %xmm4
+ veor q5, q5, q5 @ vpxor %xmm5, %xmm5, %xmm5
+ veor q6, q6, q6 @ vpxor %xmm6, %xmm6, %xmm6
+ veor q7, q7, q7 @ vpxor %xmm7, %xmm7, %xmm7
+ ldmia sp!, {r3,pc} @ return
+ .size _vpaes_schedule_core,.-_vpaes_schedule_core
+
+ @@
+ @@ .aes_schedule_192_smear
+ @@
+ @@ Smear the short, low side in the 192-bit key schedule.
+ @@
+ @@ Inputs:
+ @@ q7: high side, b a x y
+ @@ q6: low side, d c 0 0
+ @@
+ @@ Outputs:
+ @@ q6: b+c+d b+c 0 0
+ @@ q0: b+c+d b+c b a
+ @@
+ .type _vpaes_schedule_192_smear,%function
+ .align 4
+ _vpaes_schedule_192_smear:
+ vmov.i8 q1, #0
+ vdup.32 q0, q7#hi[1]
+ vshl.i64 q1, q6, #32 @ vpshufd \$0x80, %xmm6, %xmm1 # d c 0 0 -> c 0 0 0
+ vmov q0#lo, q7#hi @ vpshufd \$0xFE, %xmm7, %xmm0 # b a _ _ -> b b b a
+ veor q6, q6, q1 @ vpxor %xmm1, %xmm6, %xmm6 # -> c+d c 0 0
+ veor q1, q1, q1 @ vpxor %xmm1, %xmm1, %xmm1
+ veor q6, q6, q0 @ vpxor %xmm0, %xmm6, %xmm6 # -> b+c+d b+c b a
+ vmov q0, q6 @ vmovdqa %xmm6, %xmm0
+ vmov q6#lo, q1#lo @ vmovhlps %xmm1, %xmm6, %xmm6 # clobber low side with zeros
+ bx lr
+ .size _vpaes_schedule_192_smear,.-_vpaes_schedule_192_smear
+
+ @@
+ @@ .aes_schedule_round
+ @@
+ @@ Runs one main round of the key schedule on q0, q7
+ @@
+ @@ Specifically, runs subbytes on the high dword of q0
+ @@ then rotates it by one byte and xors into the low dword of
+ @@ q7.
+ @@
+ @@ Adds rcon from low byte of q8, then rotates q8 for
+ @@ next rcon.
+ @@
+ @@ Smears the dwords of q7 by xoring the low into the
+ @@ second low, result into third, result into highest.
+ @@
+ @@ Returns results in q7 = q0.
+ @@ Clobbers q1-q4, r11.
+ @@
+ .type _vpaes_schedule_round,%function
+ .align 4
+ _vpaes_schedule_round:
+ @ extract rcon from xmm8
+ vmov.i8 q4, #0 @ vpxor %xmm4, %xmm4, %xmm4
+ vext.8 q1, $rcon, q4, #15 @ vpalignr \$15, %xmm8, %xmm4, %xmm1
+ vext.8 $rcon, $rcon, $rcon, #15 @ vpalignr \$15, %xmm8, %xmm8, %xmm8
+ veor q7, q7, q1 @ vpxor %xmm1, %xmm7, %xmm7
+
+ @ rotate
+ vdup.32 q0, q0#hi[1] @ vpshufd \$0xFF, %xmm0, %xmm0
+ vext.8 q0, q0, q0, #1 @ vpalignr \$1, %xmm0, %xmm0, %xmm0
+
+ @ fall through...
+
+ @ low round: same as high round, but no rotation and no rcon.
+ _vpaes_schedule_low_round:
+ @ The x86_64 version pins .Lk_sb1 in %xmm13 and .Lk_sb1+16 in %xmm12.
+ @ We pin other values in _vpaes_key_preheat, so load the .Lk_sb1 pair now.
+ adr r11, .Lk_sb1
+ vld1.64 {q14,q15}, [r11]
+
+ @ smear xmm7
+ vext.8 q1, q4, q7, #12 @ vpslldq \$4, %xmm7, %xmm1
+ veor q7, q7, q1 @ vpxor %xmm1, %xmm7, %xmm7
+ vext.8 q4, q4, q7, #8 @ vpslldq \$8, %xmm7, %xmm4
+
+ @ subbytes
+ vand q1, q0, $s0F @ vpand %xmm9, %xmm0, %xmm1 # 0 = k
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0 # 1 = i
+ veor q7, q7, q4 @ vpxor %xmm4, %xmm7, %xmm7
+ vtbl.8 q2#lo, {$invhi}, q1#lo @ vpshufb %xmm1, %xmm11, %xmm2 # 2 = a/k
+ vtbl.8 q2#hi, {$invhi}, q1#hi
+ veor q1, q1, q0 @ vpxor %xmm0, %xmm1, %xmm1 # 0 = j
+ vtbl.8 q3#lo, {$invlo}, q0#lo @ vpshufb %xmm0, %xmm10, %xmm3 # 3 = 1/i
+ vtbl.8 q3#hi, {$invlo}, q0#hi
+ veor q3, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3 # 3 = iak = 1/i + a/k
+ vtbl.8 q4#lo, {$invlo}, q1#lo @ vpshufb %xmm1, %xmm10, %xmm4 # 4 = 1/j
+ vtbl.8 q4#hi, {$invlo}, q1#hi
+ veor q7, q7, $s63 @ vpxor .Lk_s63(%rip), %xmm7, %xmm7
+ vtbl.8 q3#lo, {$invlo}, q3#lo @ vpshufb %xmm3, %xmm10, %xmm3 # 2 = 1/iak
+ vtbl.8 q3#hi, {$invlo}, q3#hi
+ veor q4, q4, q2 @ vpxor %xmm2, %xmm4, %xmm4 # 4 = jak = 1/j + a/k
+ vtbl.8 q2#lo, {$invlo}, q4#lo @ vpshufb %xmm4, %xmm10, %xmm2 # 3 = 1/jak
+ vtbl.8 q2#hi, {$invlo}, q4#hi
+ veor q3, q3, q1 @ vpxor %xmm1, %xmm3, %xmm3 # 2 = io
+ veor q2, q2, q0 @ vpxor %xmm0, %xmm2, %xmm2 # 3 = jo
+ vtbl.8 q4#lo, {q15}, q3#lo @ vpshufb %xmm3, %xmm13, %xmm4 # 4 = sbou
+ vtbl.8 q4#hi, {q15}, q3#hi
+ vtbl.8 q1#lo, {q14}, q2#lo @ vpshufb %xmm2, %xmm12, %xmm1 # 0 = sb1t
+ vtbl.8 q1#hi, {q14}, q2#hi
+ veor q1, q1, q4 @ vpxor %xmm4, %xmm1, %xmm1 # 0 = sbox output
+
+ @ add in smeared stuff
+ veor q0, q1, q7 @ vpxor %xmm7, %xmm1, %xmm0
+ veor q7, q1, q7 @ vmovdqa %xmm0, %xmm7
+ bx lr
+ .size _vpaes_schedule_round,.-_vpaes_schedule_round
+
+ @@
+ @@ .aes_schedule_transform
+ @@
+ @@ Linear-transform q0 according to tables at [r11]
+ @@
+ @@ Requires that q9 = 0x0F0F... as in preheat
+ @@ Output in q0
+ @@ Clobbers q1, q2, q14, q15
+ @@
+ .type _vpaes_schedule_transform,%function
+ .align 4
+ _vpaes_schedule_transform:
+ vld1.64 {q14,q15}, [r11] @ vmovdqa (%r11), %xmm2 # lo
+ @ vmovdqa 16(%r11), %xmm1 # hi
+ vand q1, q0, $s0F @ vpand %xmm9, %xmm0, %xmm1
+ vshr.u8 q0, q0, #4 @ vpsrlb \$4, %xmm0, %xmm0
+ vtbl.8 q2#lo, {q14}, q1#lo @ vpshufb %xmm1, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q14}, q1#hi
+ vtbl.8 q0#lo, {q15}, q0#lo @ vpshufb %xmm0, %xmm1, %xmm0
+ vtbl.8 q0#hi, {q15}, q0#hi
+ veor q0, q0, q2 @ vpxor %xmm2, %xmm0, %xmm0
+ bx lr
+ .size _vpaes_schedule_transform,.-_vpaes_schedule_transform
+
+ @@
+ @@ .aes_schedule_mangle
+ @@
+ @@ Mangles q0 from (basis-transformed) standard version
+ @@ to our version.
+ @@
+ @@ On encrypt,
+ @@ xor with 0x63
+ @@ multiply by circulant 0,1,1,1
+ @@ apply shiftrows transform
+ @@
+ @@ On decrypt,
+ @@ xor with 0x63
+ @@ multiply by "inverse mixcolumns" circulant E,B,D,9
+ @@ deskew
+ @@ apply shiftrows transform
+ @@
+ @@
+ @@ Writes out to [r2], and increments or decrements it
+ @@ Keeps track of round number mod 4 in r8
+ @@ Preserves q0
+ @@ Clobbers q1-q5
+ @@
+ .type _vpaes_schedule_mangle,%function
+ .align 4
+ _vpaes_schedule_mangle:
+ tst $dir, $dir
+ vmov q4, q0 @ vmovdqa %xmm0, %xmm4 # save xmm0 for later
+ adr r11, .Lk_mc_forward @ Must be aligned to 8 mod 16.
+ vld1.64 {q5}, [r11] @ vmovdqa .Lk_mc_forward(%rip),%xmm5
+ bne .Lschedule_mangle_dec
+
+ @ encrypting
+ @ Write to q2 so we do not overlap table and destination below.
+ veor q2, q0, $s63 @ vpxor .Lk_s63(%rip), %xmm0, %xmm4
+ add $out, $out, #16 @ add \$16, %rdx
+ vtbl.8 q4#lo, {q2}, q5#lo @ vpshufb %xmm5, %xmm4, %xmm4
+ vtbl.8 q4#hi, {q2}, q5#hi
+ vtbl.8 q1#lo, {q4}, q5#lo @ vpshufb %xmm5, %xmm4, %xmm1
+ vtbl.8 q1#hi, {q4}, q5#hi
+ vtbl.8 q3#lo, {q1}, q5#lo @ vpshufb %xmm5, %xmm1, %xmm3
+ vtbl.8 q3#hi, {q1}, q5#hi
+ veor q4, q4, q1 @ vpxor %xmm1, %xmm4, %xmm4
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ veor q3, q3, q4 @ vpxor %xmm4, %xmm3, %xmm3
+
+ b .Lschedule_mangle_both
+ .align 4
+ .Lschedule_mangle_dec:
+ @ inverse mix columns
+ adr r11, .Lk_dksd @ lea .Lk_dksd(%rip),%r11
+ vshr.u8 q1, q4, #4 @ vpsrlb \$4, %xmm4, %xmm1 # 1 = hi
+ vand q4, q4, $s0F @ vpand %xmm9, %xmm4, %xmm4 # 4 = lo
+
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x00(%r11), %xmm2
+ @ vmovdqa 0x10(%r11), %xmm3
+ vtbl.8 q2#lo, {q14}, q4#lo @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q14}, q4#hi
+ vtbl.8 q3#lo, {q15}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q15}, q1#hi
+ @ Load .Lk_dksb ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x20(%r11), %xmm2
+ @ vmovdqa 0x30(%r11), %xmm3
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+ vtbl.8 q3#lo, {q13}, q5#lo @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q13}, q5#hi
+
+ vtbl.8 q2#lo, {q14}, q4#lo @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q14}, q4#hi
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ vtbl.8 q3#lo, {q15}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q15}, q1#hi
+ @ Load .Lk_dkse ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x40(%r11), %xmm2
+ @ vmovdqa 0x50(%r11), %xmm3
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+ vtbl.8 q3#lo, {q13}, q5#lo @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q13}, q5#hi
+
+ vtbl.8 q2#lo, {q14}, q4#lo @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q14}, q4#hi
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ vtbl.8 q3#lo, {q15}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q15}, q1#hi
+ @ Load .Lk_dks9 ahead of time.
+ vld1.64 {q14,q15}, [r11]! @ vmovdqa 0x60(%r11), %xmm2
+ @ vmovdqa 0x70(%r11), %xmm4
+ @ Write to q13 so we do not overlap table and destination.
+ veor q13, q3, q2 @ vpxor %xmm2, %xmm3, %xmm3
+
+ vtbl.8 q2#lo, {q14}, q4#lo @ vpshufb %xmm4, %xmm2, %xmm2
+ vtbl.8 q2#hi, {q14}, q4#hi
+ vtbl.8 q3#lo, {q13}, q5#lo @ vpshufb %xmm5, %xmm3, %xmm3
+ vtbl.8 q3#hi, {q13}, q5#hi
+ vtbl.8 q4#lo, {q15}, q1#lo @ vpshufb %xmm1, %xmm4, %xmm4
+ vtbl.8 q4#hi, {q15}, q1#hi
+ vld1.64 {q1}, [r8] @ vmovdqa (%r8,%r10), %xmm1
+ veor q2, q2, q3 @ vpxor %xmm3, %xmm2, %xmm2
+ veor q3, q4, q2 @ vpxor %xmm2, %xmm4, %xmm3
+
+ sub $out, $out, #16 @ add \$-16, %rdx
+
+ .Lschedule_mangle_both:
+ @ Write to q2 so table and destination do not overlap.
+ vtbl.8 q2#lo, {q3}, q1#lo @ vpshufb %xmm1, %xmm3, %xmm3
+ vtbl.8 q2#hi, {q3}, q1#hi
+ add r8, r8, #64-16 @ add \$-16, %r8
+ and r8, r8, #~(1<<6) @ and \$0x30, %r8
+ vst1.64 {q2}, [$out] @ vmovdqu %xmm3, (%rdx)
+ bx lr
+ .size _vpaes_schedule_mangle,.-_vpaes_schedule_mangle
+
+ .globl vpaes_set_encrypt_key
+ .type vpaes_set_encrypt_key,%function
+ .align 4
+ vpaes_set_encrypt_key:
+ stmdb sp!, {r7-r11, lr}
+ vstmdb sp!, {d8-d15}
+
+ lsr r9, $bits, #5 @ shr \$5,%eax
+ add r9, r9, #5 @ \$5,%eax
+ str r9, [$out,#240] @ mov %eax,240(%rdx) # AES_KEY->rounds = nbits/32+5;
+
+ mov $dir, #0 @ mov \$0,%ecx
+ mov r8, #0x30 @ mov \$0x30,%r8d
+ bl _vpaes_schedule_core
+ eor r0, r0, r0
+
+ vldmia sp!, {d8-d15}
+ ldmia sp!, {r7-r11, pc} @ return
+ .size vpaes_set_encrypt_key,.-vpaes_set_encrypt_key
+
+ .globl vpaes_set_decrypt_key
+ .type vpaes_set_decrypt_key,%function
+ .align 4
+ vpaes_set_decrypt_key:
+ stmdb sp!, {r7-r11, lr}
+ vstmdb sp!, {d8-d15}
+
+ lsr r9, $bits, #5 @ shr \$5,%eax
+ add r9, r9, #5 @ \$5,%eax
+ str r9, [$out,#240] @ mov %eax,240(%rdx) # AES_KEY->rounds = nbits/32+5;
+ lsl r9, r9, #4 @ shl \$4,%eax
+ add $out, $out, #16 @ lea 16(%rdx,%rax),%rdx
+ add $out, $out, r9
+
+ mov $dir, #1 @ mov \$1,%ecx
+ lsr r8, $bits, #1 @ shr \$1,%r8d
+ and r8, r8, #32 @ and \$32,%r8d
+ eor r8, r8, #32 @ xor \$32,%r8d # nbits==192?0:32
+ bl _vpaes_schedule_core
+
+ vldmia sp!, {d8-d15}
+ ldmia sp!, {r7-r11, pc} @ return
+ .size vpaes_set_decrypt_key,.-vpaes_set_decrypt_key
+ ___
+ }
+
+ {
+ my ($out, $inp) = map("r$_", (0..1));
+ my ($s0F, $s63, $s63_raw, $mc_forward) = map("q$_", (9..12));
+
+ $code .= <<___;
+
+ @ Additional constants for converting to bsaes.
+ .type _vpaes_convert_consts,%object
+ .align 4
+ _vpaes_convert_consts:
+ @ .Lk_opt_then_skew applies skew(opt(x)) XOR 0x63, where skew is the linear
+ @ transform in the AES S-box. 0x63 is incorporated into the low half of the
+ @ table. This was computed with the following script:
+ @
+ @ def u64s_to_u128(x, y):
+ @ return x | (y << 64)
+ @ def u128_to_u64s(w):
+ @ return w & ((1<<64)-1), w >> 64
+ @ def get_byte(w, i):
+ @ return (w >> (i*8)) & 0xff
+ @ def apply_table(table, b):
+ @ lo = b & 0xf
+ @ hi = b >> 4
+ @ return get_byte(table[0], lo) ^ get_byte(table[1], hi)
+ @ def opt(b):
+ @ table = [
+ @ u64s_to_u128(0xFF9F4929D6B66000, 0xF7974121DEBE6808),
+ @ u64s_to_u128(0x01EDBD5150BCEC00, 0xE10D5DB1B05C0CE0),
+ @ ]
+ @ return apply_table(table, b)
+ @ def rot_byte(b, n):
+ @ return 0xff & ((b << n) | (b >> (8-n)))
+ @ def skew(x):
+ @ return (x ^ rot_byte(x, 1) ^ rot_byte(x, 2) ^ rot_byte(x, 3) ^
+ @ rot_byte(x, 4))
+ @ table = [0, 0]
+ @ for i in range(16):
+ @ table[0] |= (skew(opt(i)) ^ 0x63) << (i*8)
+ @ table[1] |= skew(opt(i<<4)) << (i*8)
+ @ print("\t.quad\t0x%016x, 0x%016x" % u128_to_u64s(table[0]))
+ @ print("\t.quad\t0x%016x, 0x%016x" % u128_to_u64s(table[1]))
+ .Lk_opt_then_skew:
+ .quad 0x9cb8436798bc4763, 0x6440bb9f6044bf9b
+ .quad 0x1f30062936192f00, 0xb49bad829db284ab
+
+ @ .Lk_decrypt_transform is a permutation which performs an 8-bit left-rotation
+ @ followed by a byte-swap on each 32-bit word of a vector. E.g., 0x11223344
+ @ becomes 0x22334411 and then 0x11443322.
+ .Lk_decrypt_transform:
+ .quad 0x0704050603000102, 0x0f0c0d0e0b08090a
+ .size _vpaes_convert_consts,.-_vpaes_convert_consts
+
+ @ void vpaes_encrypt_key_to_bsaes(AES_KEY *bsaes, const AES_KEY *vpaes);
+ .globl vpaes_encrypt_key_to_bsaes
+ .type vpaes_encrypt_key_to_bsaes,%function
+ .align 4
+ vpaes_encrypt_key_to_bsaes:
+ stmdb sp!, {r11, lr}
+
+ @ See _vpaes_schedule_core for the key schedule logic. In particular,
+ @ _vpaes_schedule_transform(.Lk_ipt) (section 2.2 of the paper),
+ @ _vpaes_schedule_mangle (section 4.3), and .Lschedule_mangle_last
+ @ contain the transformations not in the bsaes representation. This
+ @ function inverts those transforms.
+ @
+ @ Note also that bsaes-armv7.pl expects aes-armv4.pl's key
+ @ representation, which does not match the other aes_nohw_*
+ @ implementations. The ARM aes_nohw_* stores each 32-bit word
+ @ byteswapped, as a convenience for (unsupported) big-endian ARM, at the
+ @ cost of extra REV and VREV32 operations in little-endian ARM.
+
+ vmov.i8 $s0F, #0x0f @ Required by _vpaes_schedule_transform
+ adr r2, .Lk_mc_forward @ Must be aligned to 8 mod 16.
+ add r3, r2, 0x90 @ .Lk_sr+0x10-.Lk_mc_forward = 0x90 (Apple's toolchain doesn't support the expression)
+
+ vld1.64 {$mc_forward}, [r2]
+ vmov.i8 $s63, #0x5b @ .Lk_s63 from vpaes-x86_64
+ adr r11, .Lk_opt @ Must be aligned to 8 mod 16.
+ vmov.i8 $s63_raw, #0x63 @ .Lk_s63 without .Lk_ipt applied
+
+ @ vpaes stores one fewer round count than bsaes, but the number of keys
+ @ is the same.
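+ @ (e.g. AES-128: vpaes stores nbits/32+5 = 9 where bsaes stores 10;
+ @ both schedules hold 11 round keys.)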
+ ldr r2, [$inp,#240]
+ add r2, r2, #1
+ str r2, [$out,#240]
+
+ @ The first key is transformed with _vpaes_schedule_transform(.Lk_ipt).
+ @ Invert this with .Lk_opt.
+ vld1.64 {q0}, [$inp]!
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [$out]!
+
+ @ The middle keys have _vpaes_schedule_transform(.Lk_ipt) applied,
+ @ followed by _vpaes_schedule_mangle. _vpaes_schedule_mangle XORs 0x63,
+ @ multiplies by the circulant 0,1,1,1, then applies ShiftRows.
+ .Loop_enc_key_to_bsaes:
+ vld1.64 {q0}, [$inp]!
+
+ @ Invert the ShiftRows step (see .Lschedule_mangle_both). Note we cycle
+ @ r3 in the opposite direction and start at .Lk_sr+0x10 instead of 0x30.
+ @ We use r3 rather than r8 to avoid a callee-saved register.
+ vld1.64 {q1}, [r3]
+ vtbl.8 q2#lo, {q0}, q1#lo
+ vtbl.8 q2#hi, {q0}, q1#hi
+ add r3, r3, #16
+ and r3, r3, #~(1<<6)
+ vmov q0, q2
+
+ @ Handle the last key differently.
+ subs r2, r2, #1
+ beq .Loop_enc_key_to_bsaes_last
+
+ @ Multiply by the circulant. This is its own inverse.
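+ @ Concretely, q0 becomes rot(x) ^ rot^2(x) ^ rot^3(x), where rot cycles
+ @ each 32-bit word by one byte via $mc_forward; the circulant's leading
+ @ 0 drops the x term.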
+ vtbl.8 q1#lo, {q0}, $mc_forward#lo
+ vtbl.8 q1#hi, {q0}, $mc_forward#hi
+ vmov q0, q1
+ vtbl.8 q2#lo, {q1}, $mc_forward#lo
+ vtbl.8 q2#hi, {q1}, $mc_forward#hi
+ veor q0, q0, q2
+ vtbl.8 q1#lo, {q2}, $mc_forward#lo
+ vtbl.8 q1#hi, {q2}, $mc_forward#hi
+ veor q0, q0, q1
+
+ @ XOR and finish.
+ veor q0, q0, $s63
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [$out]!
+ b .Loop_enc_key_to_bsaes
+
+ .Loop_enc_key_to_bsaes_last:
+ @ The final key does not have a basis transform (note
+ @ .Lschedule_mangle_last inverts the original transform). It only XORs
+ @ 0x63 and applies ShiftRows. The latter was already inverted in the
+ @ loop. Note that, because we act on the original representation, we use
+ @ $s63_raw, not $s63.
+ veor q0, q0, $s63_raw
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [$out]
+
+ @ Wipe registers which contained key material.
+ veor q0, q0, q0
+ veor q1, q1, q1
+ veor q2, q2, q2
+
+ ldmia sp!, {r11, pc} @ return
+ .size vpaes_encrypt_key_to_bsaes,.-vpaes_encrypt_key_to_bsaes
+
+ @ void vpaes_decrypt_key_to_bsaes(AES_KEY *bsaes, const AES_KEY *vpaes);
+ .globl vpaes_decrypt_key_to_bsaes
+ .type vpaes_decrypt_key_to_bsaes,%function
+ .align 4
+ vpaes_decrypt_key_to_bsaes:
+ stmdb sp!, {r11, lr}
+
+ @ See _vpaes_schedule_core for the key schedule logic. Note vpaes
+ @ computes the decryption key schedule in reverse. Additionally,
+ @ bsaes-armv7.pl shares some transformations, so we must only partially
+ @ invert vpaes's transformations. In general, vpaes computes in a
+ @ different basis (.Lk_ipt and .Lk_opt) and applies the inverses of
+ @ MixColumns, ShiftRows, and the affine part of the AES S-box (which is
+ @ split into a linear skew and XOR of 0x63). We undo all but MixColumns.
+ @
+ @ Note also that bsaes-armv7.pl expects aes-armv4.pl's key
+ @ representation, which does not match the other aes_nohw_*
+ @ implementations. The ARM aes_nohw_* stores each 32-bit word
+ @ byteswapped, as a convenience for (unsupported) big-endian ARM, at the
+ @ cost of extra REV and VREV32 operations in little-endian ARM.
+
+ adr r2, .Lk_decrypt_transform
+ adr r3, .Lk_sr+0x30
+ adr r11, .Lk_opt_then_skew @ Input to _vpaes_schedule_transform.
+ vld1.64 {$mc_forward}, [r2] @ Reuse $mc_forward from encryption.
+ vmov.i8 $s0F, #0x0f @ Required by _vpaes_schedule_transform
+
+ @ vpaes stores one fewer round count than bsaes, but the number of keys
+ @ is the same.
+ ldr r2, [$inp,#240]
+ add r2, r2, #1
+ str r2, [$out,#240]
+
+ @ Undo the basis change and reapply the S-box affine transform. See
+ @ .Lschedule_mangle_last.
+ vld1.64 {q0}, [$inp]!
+ bl _vpaes_schedule_transform
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [$out]!
+
+ @ See _vpaes_schedule_mangle for the transform on the middle keys. Note
+ @ it simultaneously inverts MixColumns and the S-box affine transform.
+ @ See .Lk_dksd through .Lk_dks9.
+ .Loop_dec_key_to_bsaes:
+ vld1.64 {q0}, [$inp]!
+
+ @ Invert the ShiftRows step (see .Lschedule_mangle_both). Because the
+ @ decryption schedule was generated in reverse, walking the keys
+ @ forwards here cancels that inversion, so r3 cycles in the same
+ @ direction as in the schedule. We use r3 rather than r8 to avoid a
+ @ callee-saved register.
+ vld1.64 {q1}, [r3]
+ vtbl.8 q2#lo, {q0}, q1#lo
+ vtbl.8 q2#hi, {q0}, q1#hi
+ add r3, r3, #64-16
+ and r3, r3, #~(1<<6)
+ vmov q0, q2
+
+ @ Handle the last key differently.
+ subs r2, r2, #1
+ beq .Loop_dec_key_to_bsaes_last
+
+ @ Undo the basis change and reapply the S-box affine transform.
+ bl _vpaes_schedule_transform
+
+ @ Rotate each word by 8 bits (cycle the rows) and then byte-swap. We
+ @ combine the two operations in .Lk_decrypt_transform.
+ @
+ @ TODO(davidben): Where does the rotation come from?
+ vtbl.8 q1#lo, {q0}, $mc_forward#lo
+ vtbl.8 q1#hi, {q0}, $mc_forward#hi
+
+ vst1.64 {q1}, [$out]!
+ b .Loop_dec_key_to_bsaes
+
+ .Loop_dec_key_to_bsaes_last:
+ @ The final key only inverts ShiftRows (already done in the loop). See
+ @ .Lschedule_am_decrypting. Its basis is not transformed.
+ vrev32.8 q0, q0
+ vst1.64 {q0}, [$out]!
+
+ @ Wipe registers which contained key material.
+ veor q0, q0, q0
+ veor q1, q1, q1
+ veor q2, q2, q2
+
+ ldmia sp!, {r11, pc} @ return
+ .size vpaes_decrypt_key_to_bsaes,.-vpaes_decrypt_key_to_bsaes
+ ___
+ }
+
+ {
+ # Register-passed parameters.
+ my ($inp, $out, $len, $key) = map("r$_", 0..3);
+ # Temporaries. _vpaes_encrypt_core already uses r8..r11, so overlap $ivec and
+ # $tmp. $ctr is r7 because it must be preserved across calls.
+ my ($ctr, $ivec, $tmp) = map("r$_", 7..9);
+
+ # void vpaes_ctr32_encrypt_blocks(const uint8_t *in, uint8_t *out, size_t len,
+ # const AES_KEY *key, const uint8_t ivec[16]);
+ $code .= <<___;
+ .globl vpaes_ctr32_encrypt_blocks
+ .type vpaes_ctr32_encrypt_blocks,%function
+ .align 4
+ vpaes_ctr32_encrypt_blocks:
+ mov ip, sp
+ stmdb sp!, {r7-r11, lr}
+ @ This function uses q4-q7 (d8-d15), which are callee-saved.
+ vstmdb sp!, {d8-d15}
+
+ cmp $len, #0
+ @ $ivec is passed on the stack.
+ ldr $ivec, [ip]
+ beq .Lctr32_done
+
+ @ _vpaes_encrypt_core expects the key in r2, so swap $len and $key.
+ mov $tmp, $key
+ mov $key, $len
+ mov $len, $tmp
+ ___
+ my ($len, $key) = ($key, $len);
+ $code .= <<___;
+
+ @ Load the IV and counter portion.
+ ldr $ctr, [$ivec, #12]
+ vld1.8 {q7}, [$ivec]
+
+ bl _vpaes_preheat
+ rev $ctr, $ctr @ The counter is big-endian.
+
+ .Lctr32_loop:
+ vmov q0, q7
+ vld1.8 {q6}, [$inp]! @ Load input ahead of time
+ bl _vpaes_encrypt_core
+ veor q0, q0, q6 @ XOR input and result
+ vst1.8 {q0}, [$out]!
+ subs $len, $len, #1
+ @ Update the counter.
+ add $ctr, $ctr, #1
+ rev $tmp, $ctr
+ vmov.32 q7#hi[1], $tmp
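+ @ Only this low 32-bit word is updated; a carry never propagates
+ @ into the rest of the IV (hence "ctr32").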
+ bne .Lctr32_loop
+
+ .Lctr32_done:
+ vldmia sp!, {d8-d15}
+ ldmia sp!, {r7-r11, pc} @ return
+ .size vpaes_ctr32_encrypt_blocks,.-vpaes_ctr32_encrypt_blocks
+ ___
+ }
+
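+ # NEON Q registers alias D registers as q<N> = {d<2N>, d<2N+1>}, so the
+ # substitution maps q<N>#lo to d<2N> and q<N>#hi to d<2N+1> (e.g.
+ # q7#hi becomes d15).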
+ foreach (split("\n",$code)) {
+ s/\bq([0-9]+)#(lo|hi)/sprintf "d%d",2*$1+($2 eq "hi")/geo;
+ print $_,"\n";
+ }
+
+ close STDOUT or die "error closing STDOUT: $!";
#if defined(OPENSSL_X86_64)
#define VPAES_CTR32
#endif
- OPENSSL_INLINE int vpaes_capable(void) {
- return (OPENSSL_ia32cap_get()[1] & (1 << (41 - 32))) != 0;
- }
+ #define VPAES_CBC
+ OPENSSL_INLINE int vpaes_capable(void) { return CRYPTO_is_SSSE3_capable(); }
-#elif defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64)
+#elif false
#define HWAES
OPENSSL_INLINE int hwaes_capable(void) { return CRYPTO_is_ARMv8_AES_capable(); }
#if defined(OPENSSL_X86)
#define GHASH_ASM_X86
- void gcm_gmult_4bit_mmx(uint64_t Xi[2], const u128 Htable[16]);
- void gcm_ghash_4bit_mmx(uint64_t Xi[2], const u128 Htable[16], const uint8_t *inp,
- size_t len);
#endif // OPENSSL_X86
-#elif defined(OPENSSL_ARM) || defined(OPENSSL_AARCH64)
+#elif false
#define GHASH_ASM_ARM
- #define GCM_FUNCREF_4BIT
+ #define GCM_FUNCREF
OPENSSL_INLINE int gcm_pmull_capable(void) {
return CRYPTO_is_ARMv8_PMULL_capable();
// ARMV8_PMULL indicates support for carryless multiplication.
#define ARMV8_PMULL (1 << 5)
-#define __ARM_MAX_ARCH__ 8
+ // ARMV8_SHA512 indicates support for hardware SHA-512 instructions.
+ #define ARMV8_SHA512 (1 << 6)
+
+ #if defined(__ASSEMBLER__)
+
+ // We require the ARM assembler provide |__ARM_ARCH| from Arm C Language
+ // Extensions (ACLE). This is supported in GCC 4.8+ and Clang 3.2+. MSVC does
+ // not implement ACLE, but we require Clang's assembler on Windows.
+ #if !defined(__ARM_ARCH)
+ #error "ARM assembler must define __ARM_ARCH"
+ #endif
+
+ // __ARM_ARCH__ is used by OpenSSL assembly to determine the minimum target ARM
+ // version.
+ //
+ // TODO(davidben): Switch the assembly to use |__ARM_ARCH| directly.
+ #define __ARM_ARCH__ __ARM_ARCH
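+ // For example, ACLE-conforming compilers define __ARM_ARCH to 7 when
+ // targeting Armv7-A and to 8 when targeting Armv8-A.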
+
+ // Upstream sets __ARM_MAX_ARCH__ to 8 so that support for aarch64 crypto
+ // instructions is included even when building for 32-bit ARM; this build
+ // caps it at 6 so that only ARMv6 code paths are assembled.
++#define __ARM_MAX_ARCH__ 6
+
+ // Support macros for
+ // - Armv8.3-A Pointer Authentication and
+ // - Armv8.5-A Branch Target Identification
+ // features which require emitting a .note.gnu.property section with the
+ // appropriate architecture-dependent feature bits set.
+ //
+ // |AARCH64_SIGN_LINK_REGISTER| and |AARCH64_VALIDATE_LINK_REGISTER| expand to
+ // PACIxSP and AUTIxSP, respectively. |AARCH64_SIGN_LINK_REGISTER| should be
+ // used immediately before saving the LR register (x30) to the stack.
+ // |AARCH64_VALIDATE_LINK_REGISTER| should be used immediately after restoring
+ // it. Note |AARCH64_SIGN_LINK_REGISTER|'s modifications to LR must be undone
+ // with |AARCH64_VALIDATE_LINK_REGISTER| before RET. The SP register must also
+ // have the same value at the two points. For example:
+ //
+ // .global f
+ // f:
+ // AARCH64_SIGN_LINK_REGISTER
+ // stp x29, x30, [sp, #-96]!
+ // mov x29, sp
+ // ...
+ // ldp x29, x30, [sp], #96
+ // AARCH64_VALIDATE_LINK_REGISTER
+ // ret
+ //
+ // |AARCH64_VALID_CALL_TARGET| expands to BTI 'c'. Either it, or
+ // |AARCH64_SIGN_LINK_REGISTER|, must be used at every point that may be an
+ // indirect call target. In particular, all symbols exported from a file must
+ // begin with one of these macros. For example, a leaf function that does not
+ // save LR can instead use |AARCH64_VALID_CALL_TARGET|:
+ //
+ // .globl return_zero
+ // return_zero:
+ // AARCH64_VALID_CALL_TARGET
+ // mov x0, #0
+ // ret
+ //
+ // A non-leaf function which does not immediately save LR may need both macros
+ // because |AARCH64_SIGN_LINK_REGISTER| appears late. For example, the function
+ // may jump to an alternate implementation before setting up the stack:
+ //
+ // .globl with_early_jump
+ // with_early_jump:
+ // AARCH64_VALID_CALL_TARGET
+ // cmp x0, #128
+ // b.lt .Lwith_early_jump_128
+ // AARCH64_SIGN_LINK_REGISTER
+ // stp x29, x30, [sp, #-96]!
+ // mov x29, sp
+ // ...
+ // ldp x29, x30, [sp], #96
+ // AARCH64_VALIDATE_LINK_REGISTER
+ // ret
+ //
+ // .Lwith_early_jump_128:
+ // ...
+ // ret
+ //
+ // These annotations are only required with indirect calls. Private symbols that
+ // are only the target of direct calls do not require annotations. Also note
+ // that |AARCH64_VALID_CALL_TARGET| is only valid for indirect calls (BLR), not
+ // indirect jumps (BR). Indirect jumps in assembly are currently not supported
+ // and would require a macro for BTI 'j'.
+ //
+ // Although not necessary, it is safe to use these macros in 32-bit ARM
+ // assembly. This may be used to simplify dual 32-bit and 64-bit files.
+ //
+ // References:
+ // - "ELF for the ArmĀ® 64-bit Architecture"
+ // https://github.com/ARM-software/abi-aa/blob/master/aaelf64/aaelf64.rst
+ // - "Providing protection for complex software"
+ // https://developer.arm.com/architectures/learn-the-architecture/providing-protection-for-complex-software
+
+ #if defined(__ARM_FEATURE_BTI_DEFAULT) && __ARM_FEATURE_BTI_DEFAULT == 1
+ #define GNU_PROPERTY_AARCH64_BTI (1 << 0) // Has Branch Target Identification
+ #define AARCH64_VALID_CALL_TARGET hint #34 // BTI 'c'
+ #else
+ #define GNU_PROPERTY_AARCH64_BTI 0 // No Branch Target Identification
+ #define AARCH64_VALID_CALL_TARGET
+ #endif
+
+ #if defined(__ARM_FEATURE_PAC_DEFAULT) && \
+ (__ARM_FEATURE_PAC_DEFAULT & 1) == 1 // Signed with A-key
+ #define GNU_PROPERTY_AARCH64_POINTER_AUTH \
+ (1 << 1) // Has Pointer Authentication
+ #define AARCH64_SIGN_LINK_REGISTER hint #25 // PACIASP
+ #define AARCH64_VALIDATE_LINK_REGISTER hint #29 // AUTIASP
+ #elif defined(__ARM_FEATURE_PAC_DEFAULT) && \
+ (__ARM_FEATURE_PAC_DEFAULT & 2) == 2 // Signed with B-key
+ #define GNU_PROPERTY_AARCH64_POINTER_AUTH \
+ (1 << 1) // Has Pointer Authentication
+ #define AARCH64_SIGN_LINK_REGISTER hint #27 // PACIBSP
+ #define AARCH64_VALIDATE_LINK_REGISTER hint #31 // AUTIBSP
+ #else
+ #define GNU_PROPERTY_AARCH64_POINTER_AUTH 0 // No Pointer Authentication
+ #if GNU_PROPERTY_AARCH64_BTI != 0
+ #define AARCH64_SIGN_LINK_REGISTER AARCH64_VALID_CALL_TARGET
+ #else
+ #define AARCH64_SIGN_LINK_REGISTER
+ #endif
+ #define AARCH64_VALIDATE_LINK_REGISTER
+ #endif
+
+ #if GNU_PROPERTY_AARCH64_POINTER_AUTH != 0 || GNU_PROPERTY_AARCH64_BTI != 0
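+ // The directives below emit an ELF note: n_namesz = 4, n_descsz = 0x10,
+ // n_type = 5 (NT_GNU_PROPERTY_TYPE_0), the name "GNU", then a single
+ // property: pr_type GNU_PROPERTY_AARCH64_FEATURE_1_AND (0xc0000000),
+ // pr_datasz = 4, the feature bits, and four bytes of padding to an
+ // 8-byte boundary.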
+ .pushsection .note.gnu.property, "a";
+ .balign 8;
+ .long 4;
+ .long 0x10;
+ .long 0x5;
+ .asciz "GNU";
+ .long 0xc0000000; /* GNU_PROPERTY_AARCH64_FEATURE_1_AND */
+ .long 4;
+ .long (GNU_PROPERTY_AARCH64_POINTER_AUTH | GNU_PROPERTY_AARCH64_BTI);
+ .long 0;
+ .popsection;
+ #endif
+
+ #endif // __ASSEMBLER__
+
+ #endif // __ARMEL__ || _M_ARM || __AARCH64EL__ || _M_ARM64
#endif // OPENSSL_HEADER_ARM_ARCH_H