Import ffcall_2.0-2+rpi1.debian.tar.xz
author Peter Michael Green <plugwash@raspbian.org>
Thu, 7 Dec 2017 01:23:49 +0000 (01:23 +0000)
committer Peter Michael Green <plugwash@raspbian.org>
Thu, 7 Dec 2017 01:23:49 +0000 (01:23 +0000)
[dgit import tarball ffcall 2.0-2+rpi1 ffcall_2.0-2+rpi1.debian.tar.xz]

23 files changed:
changelog [new file with mode: 0644]
compat [new file with mode: 0644]
control [new file with mode: 0644]
copyright [new file with mode: 0644]
libavcall1.install [new file with mode: 0644]
libavcall1.symbols [new file with mode: 0644]
libcallback1.install [new file with mode: 0644]
libcallback1.symbols [new file with mode: 0644]
libffcall-dev.docs [new file with mode: 0644]
libffcall-dev.install [new file with mode: 0644]
libffcall1b.install [new file with mode: 0644]
libffcall1b.symbols [new file with mode: 0644]
libtrampoline1.install [new file with mode: 0644]
libtrampoline1.symbols [new file with mode: 0644]
patches/fix-powerpcspe.patch [new file with mode: 0644]
patches/mips-fpxx.patch [new file with mode: 0644]
patches/raspbian.patch [new file with mode: 0644]
patches/series [new file with mode: 0644]
patches/trampoline-mips64el.patch [new file with mode: 0644]
rules [new file with mode: 0755]
source/format [new file with mode: 0644]
upstream/signing-key.asc [new file with mode: 0644]
watch [new file with mode: 0644]

diff --git a/changelog b/changelog
new file mode 100644 (file)
index 0000000..f81fff5
--- /dev/null
+++ b/changelog
@@ -0,0 +1,259 @@
+ffcall (2.0-2+rpi1) buster-staging; urgency=medium
+
+  * Replace movw/movt with ldr pseudo instruction.
+  * Mark binaries as armv6, not armv7.
+  * Disable the testsuite; it fails on some of our buildboxes.
+
+ -- Peter Michael Green <plugwash@raspbian.org>  Thu, 07 Dec 2017 01:23:49 +0000
+
+ffcall (2.0-2) unstable; urgency=medium
+
+  * Fix FTBFS on armhf by disabling PIE (for vacall test).
+  * trampoline-mips64el.patch: new patch, fixes FTBFS on mips64el.
+  * Fix symbols file on sparc64 (callback_get_receiver not present there).
+  * mips-fpxx.patch: regenerate avcall assembly files using -fno-tree-dce.
+    This fixes test failures on mips and mipsel.
+
+ -- Sébastien Villemot <sebastien@debian.org>  Sun, 26 Nov 2017 18:23:10 +0100
+
+ffcall (2.0-1) experimental; urgency=medium
+
+  * New upstream version 2.0
+  * Rewrite debian/copyright using machine-readable format 1.0.
+  * Move under Debian Common Lisp Team maintenance.
+    Thanks to Christoph Egger for his past work on this package.
+  * Verify PGP signature of upstream tarball.
+  * Drop Fix_MIPS_N32_test.patch, no longer needed.
+  * Bump to debhelper compat level 10.
+  * Make testsuite failures fatal at build time.
+  * Reorganize binary packages:
+    - libffcall1-dev renamed to libffcall-dev (with a transitional package)
+    - libffcall1a renamed to libffcall1b: now contains the new libffcall shared
+      library, which combines avcall and callback
+    - new libavcall1 and libcallback1 packages, for providing the separate
+      libraries, which are now deprecated
+    - new shared library package libtrampoline1
+  * Ship all docs under /usr/share/doc/libffcall-dev.
+  * Mark all binary packages as M-A same.
+  * Bump Standards-Version to 4.1.1.
+
+ -- Sébastien Villemot <sebastien@debian.org>  Thu, 09 Nov 2017 16:08:34 +0100
+
+ffcall (1.13-0.2) unstable; urgency=low
+
+  * Non-maintainer upload.
+  * Rename libffcall1 to libffcall1a because ABI was broken in 1.13-0.1.
+    (Closes: #874068)
+  * Update symbols file. This should fix FTBFS on hurd-i386, powerpcspe and
+    sparc64.
+
+ -- Sébastien Villemot <sebastien@debian.org>  Sat, 02 Sep 2017 23:43:43 +0200
+
+ffcall (1.13-0.1) unstable; urgency=low
+
+  * Non-maintainer upload.
+  * Thanks to Frédéric Bonnard for crafting a first version of this upload.
+    (Closes: #868021)
+  * New upstream version 1.13. (Closes: #806992)
+    + better ppc64el support (Closes: #871567)
+    + better arm64 support (Closes: #758379)
+    + fixes testsuite failure on any-amd64 (Closes: #451356)
+    + executable stack no longer needed (Closes: #445895)
+  * d/copyright: ffcall has been relicensed to GPL-2+
+  * Drop patches no longer needed:
+    + 0001-fix-callback-on-x86_64.patch
+    + ppc64el-elfv2.patch
+  * New mips-fpxx.patch, needed for compiling on mips and mipsel.
+  * Tell aclocal to look into gnulib-m4 (because it is now patched).
+    Also effectively run libtoolize. (Closes: #727848)
+  * No longer mark libffcall1-dev as Multi-Arch: same. (Closes: #824725)
+  * Fix URL in debian/watch.
+  * Fix Vcs-Browser URL.
+  * Drop obsolete README.source.
+
+ -- Sébastien Villemot <sebastien@debian.org>  Sat, 02 Sep 2017 16:42:44 +0200
+
+ffcall (1.10+cvs20100619-4) unstable; urgency=medium
+
+  [ Fernando Seiti Furusato ]
+  * Replaced the .opd section, which does not exist on ELFv2, with
+    .localentry for the entry point within *.s files; this fixes the FTBFS
+    on ppc64el. Additionally made some other modifications to the endianness
+    definition for ppc64 in header files (Closes: #768236).
+
+  [ Christoph Egger ]
+  * Import patch by James Cowgill to fix MIPS n32 (Closes: #782590)
+  * Use autoreconf dh addon, including relibtoolizing
+  * Update VCS URLs
+  * Update compat version
+  * Implement compat 9 + multiarch
+
+ -- Christoph Egger <christoph@debian.org>  Wed, 18 May 2016 17:11:00 +0200
+
+ffcall (1.10+cvs20100619-3) unstable; urgency=medium
+
+  * Update config.* during build (Closes: #727848)
+  * Add patch by Roland Stigge to add support for powerpcspe (Closes:
+    #731647)
+  * Add patch by Andrey Kutejko to save needed register content (Closes:
+    #636849)
+
+ -- Christoph Egger <christoph@debian.org>  Tue, 24 Dec 2013 12:55:04 +0100
+
+ffcall (1.10+cvs20100619-2) unstable; urgency=low
+
+  * Ship to unstable
+
+ -- Christoph Egger <christoph@debian.org>  Sat, 26 Jun 2010 15:29:30 +0200
+
+ffcall (1.10+cvs20100619-1) experimental; urgency=low
+
+  * New upstream CVS snapshot (LP: #274951) (Closes: #504515)
+    * Adding support for armel
+  * Upload to experimental for now
+
+ -- Christoph Egger <christoph@debian.org>  Sat, 19 Jun 2010 17:43:53 +0200
+
+ffcall (1.10+2.41-4) unstable; urgency=low
+
+  * Adopt package
+  * Import into git, set relevant headers
+  * set Homepage: field
+  * Update for Policy 3.8.4
+  * Jump to debhelper 7 (non-dh for now)
+  * Cleanup packaging
+
+ -- Christoph Egger <christoph@debian.org>  Fri, 11 Jun 2010 18:32:21 +0200
+
+ffcall (1.10+2.41-3) unstable; urgency=low
+
+  * Uhm.  It helps if I regenerate the configure scripts too.
+
+ -- Hubert Chan <uhoreg@debian.org>  Tue, 21 Nov 2006 15:09:04 -0500
+
+ffcall (1.10+2.41-2) unstable; urgency=low
+
+  * Patch ffcall/m4/general.m4 to support mipsel. (cross fingers)
+
+ -- Hubert Chan <uhoreg@debian.org>  Thu, 16 Nov 2006 17:37:32 -0500
+
+ffcall (1.10+2.41-1) unstable; urgency=low
+
+  * Update from clisp-2.41, fixing powerpc build failure.
+  * New maintainer.  (Thanks to doko for his work.)
+  * Bump standards version to 3.7.2. (no changes)
+  * Sync copyright info.
+
+ -- Hubert Chan <uhoreg@debian.org>  Mon, 13 Nov 2006 15:03:56 -0500
+
+ffcall (1.10-3) unstable; urgency=low
+
+  * Update libtool scripts to build on GNU/k*BSD (Petr Salinger).
+    Closes: #380118.
+
+ -- Matthias Klose <doko@debian.org>  Mon, 14 Aug 2006 01:34:55 +0200
+
+ffcall (1.10-2) unstable; urgency=low
+
+  * Add support for mipsel (Thiemo Seufer). Closes: #189992.
+
+ -- Matthias Klose <doko@debian.org>  Mon, 28 Mar 2005 14:57:56 +0200
+
+ffcall (1.10-1) unstable; urgency=low
+
+  * New upstream version.
+  * Apply patch from arm
+    (http://savannah.gnu.org/bugs/?func=detailitem&item_id=9468).
+
+ -- Matthias Klose <doko@debian.org>  Sat,  3 Jul 2004 06:27:52 +0200
+
+ffcall (1.9-1) unstable; urgency=low
+
+  * New upstream version (amd64 support added).
+
+ -- Matthias Klose <doko@debian.org>  Tue, 27 Jan 2004 07:58:04 +0100
+
+ffcall (1.8.20030831-1) unstable; urgency=low
+
+  * New maintainer (closes: #130293).
+  * New upstream version taken from the clisp-2003-08-31 snapshot.
+    - Builds on ia64 (closes: #110080).
+  * mips: Don't rely on obsolete header file (closes: #189991).
+
+ -- Matthias Klose <doko@debian.org>  Sun, 31 Aug 2003 21:04:43 +0200
+
+ffcall (1.8-5) unstable; urgency=low
+
+  * Patch to fix build scripts on newer Alphas.
+    Thanks to Christopher Chimes.  (Closes: #153959)
+  * Acknowledging NMUs below:
+    (Closes: #104638)
+    (Closes: #130213)
+    (Closes: #129141)
+    Thank you for the help.
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Fri,  9 Aug 2002 15:43:03 -0400
+
+ffcall (1.8-4.2) unstable; urgency=low
+
+  * NMU
+  * Fix hppa compile failures.  Closes: #104638
+  * Do a better job on detecting arm.  Closes: #130213.
+
+ -- LaMont Jones <lamont@debian.org>  Thu,  4 Apr 2002 22:14:46 -0700
+
+ffcall (1.8-4.1) unstable; urgency=low
+
+  * Non-maintainer upload
+  * s390 ffcall patches applied. (Closes: #129141)
+
+ -- Gerhard Tonn <gt@debian.org>  Mon, 21 Jan 2002 07:46:15 +0100
+
+ffcall (1.8-4) unstable; urgency=low
+
+  * Added .la files into -dev, added some man pages and docs
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Sat, 28 Apr 2001 19:32:27 -0400
+
+ffcall (1.8-3) unstable; urgency=low
+
+  * Updated Standards Version
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Tue, 24 Apr 2001 13:44:14 -0400
+
+ffcall (1.8-2) unstable; urgency=low
+
+  * Fixed some lintian warnings
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Mon, 26 Mar 2001 00:35:00 -0500
+
+ffcall (1.8-1) unstable; urgency=low
+
+  * Updated Standards-Version
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Sun, 25 Mar 2001 23:17:29 -0500
+
+ffcall (1.8-0) unstable; urgency=low
+
+  * NMU.
+  * New upstream release
+
+ -- Matthias Klose <doko@debian.org>  Sun, 25 Feb 2001 07:37:32 +0100
+
+ffcall (1.7-2) unstable; urgency=low
+
+  * Bugfixes in package
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Fri, 16 Feb 2001 20:02:41 -0500
+
+ffcall (1.7-1) unstable; urgency=low
+
+  * New upstream release
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Mon,  8 Jan 2001 15:53:12 -0500
+
+ffcall (1.6-1) unstable; urgency=low
+
+  * Initial Release.
+
+ -- Matthew Danish <mdanish@andrew.cmu.edu>  Sat,  6 Jan 2001 23:29:53 -0500
diff --git a/compat b/compat
new file mode 100644 (file)
index 0000000..f599e28
--- /dev/null
+++ b/compat
@@ -0,0 +1 @@
+10
diff --git a/control b/control
new file mode 100644 (file)
index 0000000..9ba24c8
--- /dev/null
+++ b/control
@@ -0,0 +1,113 @@
+Source: ffcall
+Section: libs
+Priority: optional
+Maintainer: Debian Common Lisp Team <pkg-common-lisp-devel@lists.alioth.debian.org>
+Uploaders: Sébastien Villemot <sebastien@debian.org>
+Build-Depends: debhelper (>= 10)
+Standards-Version: 4.1.1
+Vcs-Git: https://anonscm.debian.org/git/pkg-common-lisp/ffcall.git
+Vcs-Browser: https://anonscm.debian.org/cgit/pkg-common-lisp/ffcall.git
+Homepage: https://savannah.gnu.org/projects/libffcall/
+
+Package: libffcall-dev
+Architecture: any
+Multi-Arch: same
+Section: libdevel
+Depends: libffcall1b (= ${binary:Version}),
+         libavcall1 (= ${binary:Version}),
+         libcallback1 (= ${binary:Version}),
+         libtrampoline1 (= ${binary:Version}),
+         ${misc:Depends}
+Breaks: libffcall1-dev (<< 2.0)
+Replaces: libffcall1-dev (<< 2.0)
+Description: foreign function call libraries - development files
+ ffcall is a collection of libraries which can be used to build
+ foreign function call interfaces in embedded interpreters.
+ .
+ The main libffcall library consists of two parts:
+ .
+    avcall - calling C functions with variable arguments
+ .
+    callback - closures with variable arguments as first-class C functions
+ .
+ The avcall and callback modules are also provided as separate
+ libraries, but those are deprecated and are installed only for backward
+ compatibility.
+ .
+ Two other libraries are provided:
+ .
+    vacall - C functions accepting variable argument prototypes
+    (non-reentrant variant of part of 'callback')
+ .
+    trampoline - closures as first-class C functions
+    (non-reentrant variant of part of 'callback')
+ .
+ This package also includes documentation, in HTML format and as manual pages.
+
+Package: libffcall1-dev
+Architecture: all
+Section: oldlibs
+Depends: libffcall-dev,
+         ${misc:Depends}
+Description: foreign function call libraries - transitional package
+ This transitional package can be safely removed once libffcall-dev has been
+ installed.
+
+Package: libffcall1b
+Architecture: any
+Multi-Arch: same
+Depends: ${shlibs:Depends},
+         ${misc:Depends}
+Description: foreign function call libraries - main shared library
+ ffcall is a collection of libraries which can be used to build
+ foreign function call interfaces in embedded interpreters.
+ .
+ This package installs a shared version of the main libffcall library,
+ which consists of two parts:
+ .
+    avcall - calling C functions with variable arguments
+ .
+    callback - closures with variable arguments as first-class C functions
+
+Package: libavcall1
+Architecture: any
+Multi-Arch: same
+Depends: ${shlibs:Depends},
+         ${misc:Depends}
+Description: foreign function call libraries - calling C functions with variable arguments
+ ffcall is a collection of libraries which can be used to build
+ foreign function call interfaces in embedded interpreters.
+ .
+ This package installs a shared library version of the avcall library, which
+ can be used for calling C functions with variable arguments.
+ .
+ The use of this shared library is deprecated. The main libffcall library,
+ which embeds avcall, should be preferred.
+
+Package: libcallback1
+Architecture: any
+Multi-Arch: same
+Depends: ${shlibs:Depends},
+         ${misc:Depends}
+Description: foreign function call libraries - closures with variable arguments in C
+ ffcall is a collection of libraries which can be used to build
+ foreign function call interfaces in embedded interpreters.
+ .
+ This package installs a shared library version of the callback library, which
+ provides closures with variable arguments as first-class C functions.
+ .
+ The use of this shared library is deprecated. The main libffcall library,
+ which embeds callback, should be preferred.
+
+Package: libtrampoline1
+Architecture: any
+Multi-Arch: same
+Depends: ${shlibs:Depends},
+         ${misc:Depends}
+Description: foreign function call libraries - closures in C (non-reentrant variant)
+ ffcall is a collection of libraries which can be used to build
+ foreign function call interfaces in embedded interpreters.
+ .
+ This package installs a shared library version of the trampoline library,
+ which implements closures as first-class C functions. This is a non-reentrant
+ variant of part of the 'callback' library.
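
The avcall interface that the package descriptions above refer to builds a C
function call at run time through the av_* macros from <avcall.h>. The
following is a minimal sketch, assuming the documented avcall(3) API; the
function and variable names are purely illustrative:

  #include <avcall.h>

  /* An ordinary C function that will be called through avcall. */
  static double weighted_sum(int n, double *values)
  {
    double s = 0.0;
    for (int i = 0; i < n; i++)
      s += values[i];
    return s;
  }

  int main(void)
  {
    double values[3] = { 1.0, 2.5, 3.5 };
    double result;
    av_alist alist;

    /* Describe the call: target function, return slot, then each argument. */
    av_start_double(alist, &weighted_sum, &result);
    av_int(alist, 3);
    av_ptr(alist, double *, values);
    av_call(alist);               /* performs weighted_sum(3, values) */

    return result == 7.0 ? 0 : 1;
  }

Such a program would be linked against the combined libffcall library shipped
in libffcall1b (or, for backward compatibility only, against the deprecated
libavcall1).
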
diff --git a/copyright b/copyright
new file mode 100644 (file)
index 0000000..f0dc589
--- /dev/null
+++ b/copyright
@@ -0,0 +1,129 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: libffcall
+Upstream-Contact: Bruno Haible <bruno@clisp.org>
+Source: https://savannah.gnu.org/projects/libffcall/
+
+Files: *
+Copyright: 1993-1995 Bill Triggs <Bill.Triggs@inrialpes.fr>        (original avcall)
+           1995-2017 Bruno Haible <bruno@clisp.org>                (everything)
+           1997      Jörg Höhle <Joerg.Hoehle@gmd.de>              (m68k AmigaOS support)
+           2000      Adam Fedor <fedor@gnu.org>                    (PowerPC MacOS support)
+           2001-2012 Sam Steingold <sds@gnu.org>                   (build infrastructure)
+           2001-2002 Gerhard Tonn <GerhardTonn@swol.de> <gt@debian.org> (s390 support)
+           2004      Paul Guyot <pguyot@kallisys.net>              (PowerPC MacOS support)
+           2005      Thiemo Seufer <ths@debian.org>                (MIPS EL support)
+           2009      Max Lapan <max.lapan@gmail.com>               (ARM EL support)
+           2010      Valery Ushakov <uwe@netbsd.org>               (SPARC64 improvements)
+           1986-2017, Free Software Foundation, Inc.
+License: GPL-2+
+
+Files: callback/callback.3
+       callback/trampoline_r/trampoline_r.3
+       trampoline/trampoline.3
+       vacall/vacall.3
+Copyright: 1995-2017, Bruno Haible
+License: GFDL-NIV-1.2+ or GPL-2+
+
+Files: gnulib-lib/*
+Copyright: 1994-2017, Free Software Foundation, Inc.
+License: GPL-3+
+
+Files: gnulib-m4/*
+       m4/endianness.m4
+       m4/ltoptions.m4
+       m4/ltversion.m4
+       m4/libtool.m4
+       m4/ltsugar.m4
+       m4/lt~obsolete.m4
+Copyright: 1989, 1991, 1996-2017, Free Software Foundation, Inc.
+License: permissive
+ This file is free software; the Free Software Foundation
+ gives unlimited permission to copy and/or distribute it,
+ with or without modifications, as long as this notice is preserved.
+
+Files: gnulib-m4/gnulib-cache.m4
+       gnulib-m4/gnulib-comp.m4
+       gnulib-m4/onceonly.m4
+Copyright: 1994-2017, Free Software Foundation, Inc.
+License: GPL-3+ with Autoconf exception
+
+Files: debian/*
+Copyright: 2001-2002 Matthew Danish <mdanish@andrew.cmu.edu>
+           2003-2006 Matthias Klose <doko@debian.org>
+           2006 Hubert Chan <uhoreg@debian.org>
+           2010-2016 Christoph Egger <christoph@debian.org>
+           2017 Sébastien Villemot <sebastien@debian.org>
+License: GPL-2+
+
+License: GPL-2+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License as
+ published by the Free Software Foundation; either version 2,
+ or (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License along
+ with this program. If not, see <http://www.gnu.org/licenses/>.
+ .
+ On Debian systems, the complete text of the GNU General Public
+ License, version 2, can be found in the file
+ `/usr/share/common-licenses/GPL-2'.
+
+License: GPL-3+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ .
+ On Debian systems, the complete text of the GNU General Public
+ License, version 3, can be found in the file
+ `/usr/share/common-licenses/GPL-3'.
+
+License: GPL-3+ with Autoconf exception
+ This program is free software: you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the
+ Free Software Foundation, either version 3 of the License, or (at your
+ option) any later version.
+ .
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License along
+ with this program. If not, see <http://www.gnu.org/licenses/>.
+ .
+ On Debian systems, the complete text of the GNU General Public
+ License, version 3, can be found in the file
+ `/usr/share/common-licenses/GPL-3'.
+ .
+ As a special exception to the GNU General Public License,
+ this file may be distributed as part of a program
+ that contains a configuration script generated by Autoconf, under
+ the same distribution terms as the rest of that program.
+
+License: GFDL-NIV-1.2+
+ This manual is covered by the GNU FDL.  Permission is granted to copy,
+ distribute and/or modify this document under the terms of the
+ GNU Free Documentation License (FDL), either version 1.2 of the
+ License, or (at your option) any later version published by the
+ Free Software Foundation (FSF); with no Invariant Sections, with no
+ Front-Cover Text, and with no Back-Cover Texts
+ .
+ A copy of the license is at <https://www.gnu.org/licenses/old-licenses/fdl-1.2>.
+ .
+ On Debian systems, the complete text of the GNU Free Documentation
+ License, version 1.2, can be found in the file
+ `/usr/share/common-licenses/GFDL-1.2'.
diff --git a/libavcall1.install b/libavcall1.install
new file mode 100644 (file)
index 0000000..6fae34f
--- /dev/null
+++ b/libavcall1.install
@@ -0,0 +1 @@
+usr/lib/*/libavcall.so.*
diff --git a/libavcall1.symbols b/libavcall1.symbols
new file mode 100644 (file)
index 0000000..10c6401
--- /dev/null
+++ b/libavcall1.symbols
@@ -0,0 +1,15 @@
+libavcall.so.1 libavcall1 #MINVER#
+ __builtin_avcall@Base 2.0
+ avcall_arg_double@Base 2.0
+ avcall_arg_float@Base 2.0
+ avcall_arg_long@Base 2.0
+ avcall_arg_longlong@Base 2.0
+ avcall_arg_ptr@Base 2.0
+ avcall_arg_struct@Base 2.0
+ avcall_arg_ulong@Base 2.0
+ avcall_arg_ulonglong@Base 2.0
+ avcall_call@Base 2.0
+ avcall_overflown@Base 2.0
+ avcall_start@Base 2.0
+ avcall_start_struct@Base 2.0
+ avcall_structcpy@Base 2.0
diff --git a/libcallback1.install b/libcallback1.install
new file mode 100644 (file)
index 0000000..96117cd
--- /dev/null
+++ b/libcallback1.install
@@ -0,0 +1 @@
+usr/lib/*/libcallback.so.*
diff --git a/libcallback1.symbols b/libcallback1.symbols
new file mode 100644 (file)
index 0000000..29f2c19
--- /dev/null
+++ b/libcallback1.symbols
@@ -0,0 +1,49 @@
+libcallback.so.1 libcallback1 #MINVER#
+ alloc_callback@Base 2.0
+ callback_address@Base 2.0
+ callback_arg_char@Base 2.0
+ callback_arg_double@Base 2.0
+ callback_arg_float@Base 2.0
+ callback_arg_int@Base 2.0
+ callback_arg_long@Base 2.0
+ callback_arg_longlong@Base 2.0
+ callback_arg_ptr@Base 2.0
+ callback_arg_schar@Base 2.0
+ callback_arg_short@Base 2.0
+ callback_arg_struct@Base 2.0
+ callback_arg_uchar@Base 2.0
+ callback_arg_uint@Base 2.0
+ callback_arg_ulong@Base 2.0
+ callback_arg_ulonglong@Base 2.0
+ callback_arg_ushort@Base 2.0
+ callback_data@Base 2.0
+ (arch=!sparc64)callback_get_receiver@Base 2.0
+ (arch=armhf sparc64)callback_receiver@Base 2.0
+ callback_return_char@Base 2.0
+ callback_return_double@Base 2.0
+ callback_return_float@Base 2.0
+ callback_return_int@Base 2.0
+ callback_return_long@Base 2.0
+ callback_return_longlong@Base 2.0
+ callback_return_ptr@Base 2.0
+ callback_return_schar@Base 2.0
+ callback_return_short@Base 2.0
+ callback_return_struct@Base 2.0
+ callback_return_uchar@Base 2.0
+ callback_return_uint@Base 2.0
+ callback_return_ulong@Base 2.0
+ callback_return_ulonglong@Base 2.0
+ callback_return_ushort@Base 2.0
+ callback_return_void@Base 2.0
+ callback_start@Base 2.0
+ callback_start_struct@Base 2.0
+ callback_structcpy@Base 2.0
+ callback_trampoline_address@Base 2.0
+ callback_trampoline_alloc@Base 2.0
+ callback_trampoline_data0@Base 2.0
+ callback_trampoline_data1@Base 2.0
+ callback_trampoline_free@Base 2.0
+ callback_trampoline_is@Base 2.0
+ free_callback@Base 2.0
+ is_callback@Base 2.0
+ trampoline_r_data0@Base 2.0
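
The callback entry points listed above are normally reached through the
callback(3) macros rather than called directly. A minimal sketch of the
documented usage pattern, assuming the alloc_callback/free_callback interface
and the reentrant va_* macros from <callback.h>; the handler and variable
names, and the exact function-pointer casts, are illustrative assumptions:

  #include <callback.h>

  /* Closure body: `data` is the pointer registered with alloc_callback(),
   * `alist` carries the arguments of the actual call. */
  static void adder(void *data, va_alist alist)
  {
    int step = *(int *)data;
    va_start_int(alist);          /* the generated closure returns int */
    int x = va_arg_int(alist);    /* fetch its single int argument */
    va_return_int(alist, x + step);
  }

  int main(void)
  {
    static int step = 5;
    /* alloc_callback() yields a fresh function pointer that behaves like
     * int f(int) but carries `step` with it. */
    void (*cb) () = alloc_callback(&adder, &step);
    int (*f)(int) = (int (*)(int)) cb;
    int r = f(37);                /* expected: 42 */
    free_callback(cb);            /* release the generated closure */
    return r == 42 ? 0 : 1;
  }

As with avcall above, this would be linked against libffcall1b (or the
deprecated libcallback1).
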
diff --git a/libffcall-dev.docs b/libffcall-dev.docs
new file mode 100644 (file)
index 0000000..5f74ca1
--- /dev/null
+++ b/libffcall-dev.docs
@@ -0,0 +1,8 @@
+NEWS
+README
+PLATFORMS
+debian/tmp/usr/share/html/vacall.html
+debian/tmp/usr/share/html/callback.html
+debian/tmp/usr/share/html/trampoline.html
+debian/tmp/usr/share/html/avcall.html
+
diff --git a/libffcall-dev.install b/libffcall-dev.install
new file mode 100644 (file)
index 0000000..b03cc5f
--- /dev/null
+++ b/libffcall-dev.install
@@ -0,0 +1,14 @@
+usr/include/
+usr/lib/*/libffcall.so
+usr/lib/*/libffcall.a
+usr/lib/*/libcallback.so
+usr/lib/*/libcallback.a
+usr/lib/*/libavcall.so
+usr/lib/*/libavcall.a
+usr/lib/*/libtrampoline.so
+usr/lib/*/libtrampoline.a
+usr/lib/*/libvacall.a
+usr/share/man/man3/vacall.3
+usr/share/man/man3/callback.3
+usr/share/man/man3/avcall.3
+usr/share/man/man3/trampoline.3
diff --git a/libffcall1b.install b/libffcall1b.install
new file mode 100644 (file)
index 0000000..b07ab5c
--- /dev/null
+++ b/libffcall1b.install
@@ -0,0 +1 @@
+usr/lib/*/libffcall.so.*
diff --git a/libffcall1b.symbols b/libffcall1b.symbols
new file mode 100644 (file)
index 0000000..96fbfe2
--- /dev/null
+++ b/libffcall1b.symbols
@@ -0,0 +1,62 @@
+libffcall.so.0 libffcall1b #MINVER#
+ alloc_callback@Base 2.0
+ avcall_arg_double@Base 2.0
+ avcall_arg_float@Base 2.0
+ avcall_arg_long@Base 2.0
+ avcall_arg_longlong@Base 2.0
+ avcall_arg_ptr@Base 2.0
+ avcall_arg_struct@Base 2.0
+ avcall_arg_ulong@Base 2.0
+ avcall_arg_ulonglong@Base 2.0
+ avcall_call@Base 2.0
+ avcall_overflown@Base 2.0
+ avcall_start@Base 2.0
+ avcall_start_struct@Base 2.0
+ avcall_structcpy@Base 2.0
+ callback_address@Base 2.0
+ callback_arg_char@Base 2.0
+ callback_arg_double@Base 2.0
+ callback_arg_float@Base 2.0
+ callback_arg_int@Base 2.0
+ callback_arg_long@Base 2.0
+ callback_arg_longlong@Base 2.0
+ callback_arg_ptr@Base 2.0
+ callback_arg_schar@Base 2.0
+ callback_arg_short@Base 2.0
+ callback_arg_struct@Base 2.0
+ callback_arg_uchar@Base 2.0
+ callback_arg_uint@Base 2.0
+ callback_arg_ulong@Base 2.0
+ callback_arg_ulonglong@Base 2.0
+ callback_arg_ushort@Base 2.0
+ callback_data@Base 2.0
+ (arch=!sparc64)callback_get_receiver@Base 2.0
+ (arch=armhf sparc64)callback_receiver@Base 2.0
+ callback_return_char@Base 2.0
+ callback_return_double@Base 2.0
+ callback_return_float@Base 2.0
+ callback_return_int@Base 2.0
+ callback_return_long@Base 2.0
+ callback_return_longlong@Base 2.0
+ callback_return_ptr@Base 2.0
+ callback_return_schar@Base 2.0
+ callback_return_short@Base 2.0
+ callback_return_struct@Base 2.0
+ callback_return_uchar@Base 2.0
+ callback_return_uint@Base 2.0
+ callback_return_ulong@Base 2.0
+ callback_return_ulonglong@Base 2.0
+ callback_return_ushort@Base 2.0
+ callback_return_void@Base 2.0
+ callback_start@Base 2.0
+ callback_start_struct@Base 2.0
+ callback_structcpy@Base 2.0
+ callback_trampoline_address@Base 2.0
+ callback_trampoline_alloc@Base 2.0
+ callback_trampoline_data0@Base 2.0
+ callback_trampoline_data1@Base 2.0
+ callback_trampoline_free@Base 2.0
+ callback_trampoline_is@Base 2.0
+ ffcall_get_version@Base 2.0
+ free_callback@Base 2.0
+ is_callback@Base 2.0
diff --git a/libtrampoline1.install b/libtrampoline1.install
new file mode 100644 (file)
index 0000000..ca56810
--- /dev/null
+++ b/libtrampoline1.install
@@ -0,0 +1 @@
+usr/lib/*/libtrampoline.so.*
diff --git a/libtrampoline1.symbols b/libtrampoline1.symbols
new file mode 100644 (file)
index 0000000..687ac47
--- /dev/null
+++ b/libtrampoline1.symbols
@@ -0,0 +1,7 @@
+libtrampoline.so.1 libtrampoline1 #MINVER#
+ alloc_trampoline@Base 2.0
+ free_trampoline@Base 2.0
+ is_trampoline@Base 2.0
+ trampoline_address@Base 2.0
+ trampoline_data@Base 2.0
+ trampoline_variable@Base 2.0
diff --git a/patches/fix-powerpcspe.patch b/patches/fix-powerpcspe.patch
new file mode 100644 (file)
index 0000000..b681d48
--- /dev/null
+++ b/patches/fix-powerpcspe.patch
@@ -0,0 +1,584 @@
+Description: Fix compilation on powerpcspe
+Author: Roland Stigge <stigge@antcom.de>
+Bug-Debian: https://bugs.debian.org/731647
+Reviewed-By: Christoph Egger <christoph@debian.org>
+Forwarded: no
+Last-Update: 2017-11-26
+--- a/avcall/Makefile.in
++++ b/avcall/Makefile.in
+@@ -162,6 +162,7 @@ avcall-powerpc.lo : avcall-powerpc.s
+ avcall-powerpc.s : $(srcdir)/avcall-powerpc-aix.s $(srcdir)/avcall-powerpc-linux-macro.S $(srcdir)/avcall-powerpc-macos.s $(srcdir)/avcall-powerpc-sysv4-macro.S
+       case "$(OS)" in \
+         aix*) syntax=aix;; \
++        linux-gnuspe) syntax=linux-gnuspe;; \
+         linux* | netbsd* | openbsd*) syntax=linux;; \
+         macos* | darwin*) syntax=macos;; \
+         *) syntax=sysv4;; \
+--- /dev/null
++++ b/avcall/avcall-powerpc-linux-gnuspe.s
+@@ -0,0 +1,241 @@
++      .file   "avcall-powerpc.c"
++gcc2_compiled.:
++      .section        ".text"
++      .align 2
++      .globl __builtin_avcall
++      .type    __builtin_avcall,@function
++__builtin_avcall:
++      stwu 1,-1040(1)
++      mflr 0
++      stw 31,1036(1)
++      stw 0,1044(1)
++      mr 31,3
++      addi 7,1,8
++      lwz 9,20(31)
++      addi 11,9,-40
++      subf 11,31,11
++      srawi 11,11,2
++      lwz 9,1064(31)
++      addi 10,9,-1072
++      subf 10,31,10
++      srawi 10,10,3
++      subfic 3,10,8
++      cmpw 0,3,11
++      bc 4,0,.L4
++      addi 8,31,40
++.L6:
++      add 9,10,3
++      slwi 9,9,2
++      add 9,9,7
++      slwi 0,3,2
++      lwzx 0,8,0
++      stw 0,-32(9)
++      addi 3,3,1
++      cmpw 0,3,11
++      bc 12,0,.L6
++.L4:
++      lwz 9,1064(31)
++      addi 11,9,-1072
++      subf 11,31,11
++      srawi. 11,11,3
++      bc 12,2,.L9
++      cmpwi 0,11,1
++      bc 12,2,.L12
++      cmpwi 0,11,2
++      bc 12,2,.L15
++      cmpwi 0,11,3
++      bc 12,2,.L18
++      cmpwi 0,11,4
++      bc 12,2,.L21
++      cmpwi 0,11,5
++      bc 12,2,.L24
++      cmpwi 0,11,6
++      bc 12,2,.L27
++      cmpwi 0,11,7
++      bc 12,2,.L30
++      cmpwi 0,11,8
++      bc 12,2,.L33
++      cmpwi 0,11,9
++      bc 12,2,.L36
++      cmpwi 0,11,10
++      bc 12,2,.L39
++      cmpwi 0,11,11
++      bc 12,2,.L42
++      cmpwi 0,11,12
++      bc 12,2,.L45
++.L45:
++.L42:
++.L39:
++.L36:
++.L33:
++.L30:
++.L27:
++.L24:
++.L21:
++.L18:
++.L15:
++.L12:
++.L9:
++      lwz 11,0(31)
++      lwz 3,40(31)
++      lwz 4,44(31)
++      lwz 5,48(31)
++      lwz 6,52(31)
++      lwz 7,56(31)
++      lwz 8,60(31)
++      lwz 9,64(31)
++      lwz 10,68(31)
++      mtlr 11
++      crxor 6,6,6
++      blrl
++      lwz 0,12(31)
++      cmpwi 0,0,1
++      bc 12,2,.L50
++      cmpwi 0,0,0
++      bc 12,2,.L102
++      lwz 0,12(31)
++      cmpwi 0,0,2
++      bc 12,2,.L103
++      lwz 0,12(31)
++      cmpwi 0,0,3
++      bc 12,2,.L103
++      lwz 0,12(31)
++      cmpwi 0,0,4
++      bc 12,2,.L103
++      lwz 0,12(31)
++      cmpwi 0,0,5
++      bc 12,2,.L104
++      lwz 0,12(31)
++      cmpwi 0,0,6
++      bc 12,2,.L104
++      lwz 0,12(31)
++      cmpwi 0,0,7
++      bc 12,2,.L102
++      lwz 0,12(31)
++      cmpwi 0,0,8
++      bc 12,2,.L102
++      lwz 0,12(31)
++      cmpwi 0,0,9
++      bc 12,2,.L102
++      lwz 0,12(31)
++      cmpwi 0,0,10
++      bc 12,2,.L102
++      lwz 9,12(31)
++      addi 9,9,-11
++      cmplwi 0,9,1
++      bc 4,1,.L105
++      lwz 0,12(31)
++      cmpwi 0,0,13
++      bc 4,2,.L73
++      lwz 9,8(31)
++      b .L50
++.L73:
++      lwz 0,12(31)
++      cmpwi 0,0,14
++      bc 4,2,.L75
++      lwz 9,8(31)
++      b .L50
++.L75:
++      lwz 0,12(31)
++      cmpwi 0,0,15
++      bc 12,2,.L102
++      lwz 0,12(31)
++      cmpwi 0,0,16
++      bc 4,2,.L50
++      lwz 0,4(31)
++      andi. 9,0,1
++      bc 12,2,.L80
++      lwz 0,16(31)
++      cmpwi 0,0,1
++      bc 4,2,.L81
++      lwz 9,8(31)
++      lbz 0,0(3)
++      stb 0,0(9)
++      b .L50
++.L81:
++      lwz 0,16(31)
++      cmpwi 0,0,2
++      bc 4,2,.L83
++      lwz 9,8(31)
++      lhz 0,0(3)
++      sth 0,0(9)
++      b .L50
++.L83:
++      lwz 0,16(31)
++      cmpwi 0,0,4
++      bc 4,2,.L85
++      lwz 9,8(31)
++      lwz 0,0(3)
++      stw 0,0(9)
++      b .L50
++.L85:
++      lwz 0,16(31)
++      cmpwi 0,0,8
++      bc 4,2,.L87
++      lwz 9,8(31)
++      lwz 0,0(3)
++      stw 0,0(9)
++      lwz 9,8(31)
++      lwz 0,4(3)
++      stw 0,4(9)
++      b .L50
++.L87:
++      lwz 9,16(31)
++      addi 10,9,3
++      srwi 10,10,2
++      addic. 10,10,-1
++      bc 12,0,.L50
++.L91:
++      lwz 11,8(31)
++      slwi 9,10,2
++      lwzx 0,9,3
++      stwx 0,9,11
++      addic. 10,10,-1
++      bc 4,0,.L91
++      b .L50
++.L80:
++      lwz 0,4(31)
++      andi. 9,0,512
++      bc 12,2,.L50
++      lwz 0,16(31)
++      cmpwi 0,0,1
++      bc 4,2,.L95
++.L103:
++      lwz 9,8(31)
++      stb 3,0(9)
++      b .L50
++.L95:
++      lwz 0,16(31)
++      cmpwi 0,0,2
++      bc 4,2,.L97
++.L104:
++      lwz 9,8(31)
++      sth 3,0(9)
++      b .L50
++.L97:
++      lwz 0,16(31)
++      cmpwi 0,0,4
++      bc 4,2,.L99
++.L102:
++      lwz 9,8(31)
++      stw 3,0(9)
++      b .L50
++.L99:
++      lwz 0,16(31)
++      cmpwi 0,0,8
++      bc 4,2,.L50
++.L105:
++      lwz 9,8(31)
++      stw 3,0(9)
++      lwz 9,8(31)
++      stw 4,4(9)
++.L50:
++      li 3,0
++      lwz 0,1044(1)
++      mtlr 0
++      lwz 31,1036(1)
++      la 1,1040(1)
++      blr
++.Lfe1:
++      .size    __builtin_avcall,.Lfe1-__builtin_avcall
++      .ident  "GCC: (GNU) 2.95.2 19991024 (release/franzo)"
+--- /dev/null
++++ b/vacall/vacall-powerpc-linux-gnuspe.s
+@@ -0,0 +1,149 @@
++      .file   "vacall-powerpc.c"
++gcc2_compiled.:
++      .section        ".text"
++      .align 2
++      .globl __vacall
++      .type    __vacall,@function
++__vacall:
++      stwu 1,-208(1)
++      mflr 0
++      stw 0,212(1)
++      stw 3,152(1)
++      stw 4,156(1)
++      stw 5,160(1)
++      stw 6,164(1)
++      stw 7,168(1)
++      stw 8,172(1)
++      stw 9,176(1)
++      stw 10,180(1)
++      li 9,0
++      stw 9,8(1)
++      addi 0,1,152
++      stw 0,12(1)
++      addi 0,1,216
++      stw 0,184(1)
++      stw 9,188(1)
++      stw 9,16(1)
++      stw 9,20(1)
++      addi 0,1,48
++      stw 0,40(1)
++      lis 9,vacall_function@ha
++      lwz 0,vacall_function@l(9)
++      addi 3,1,8
++      mtlr 0
++      blrl
++      lwz 0,20(1)
++      cmpwi 0,0,0
++      bc 12,2,.L4
++      cmpwi 0,0,1
++      bc 12,2,.L42
++      lwz 0,20(1)
++      cmpwi 0,0,2
++      bc 4,2,.L7
++      lbz 0,32(1)
++      extsb 3,0
++      b .L4
++.L7:
++      lwz 0,20(1)
++      cmpwi 0,0,3
++      bc 4,2,.L9
++.L42:
++      lbz 3,32(1)
++      b .L4
++.L9:
++      lwz 0,20(1)
++      cmpwi 0,0,4
++      bc 4,2,.L11
++      lha 3,32(1)
++      b .L4
++.L11:
++      lwz 0,20(1)
++      cmpwi 0,0,5
++      bc 4,2,.L13
++      lhz 3,32(1)
++      b .L4
++.L13:
++      lwz 0,20(1)
++      cmpwi 0,0,6
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,7
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,8
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,9
++      bc 12,2,.L43
++      lwz 9,20(1)
++      addi 9,9,-10
++      cmplwi 0,9,1
++      bc 12,1,.L23
++      lwz 3,32(1)
++      lwz 4,36(1)
++      b .L4
++.L23:
++      lwz 0,20(1)
++      cmpwi 0,0,12
++      bc 4,2,.L25
++      b .L4
++.L25:
++      lwz 0,20(1)
++      cmpwi 0,0,13
++      bc 4,2,.L27
++      b .L4
++.L27:
++      lwz 0,20(1)
++      cmpwi 0,0,14
++      bc 4,2,.L29
++.L43:
++      lwz 3,32(1)
++      b .L4
++.L29:
++      lwz 0,20(1)
++      cmpwi 0,0,15
++      bc 4,2,.L4
++      lwz 0,8(1)
++      andi. 9,0,1
++      bc 12,2,.L32
++      lwz 3,16(1)
++      b .L4
++.L32:
++      lwz 0,8(1)
++      andi. 9,0,1024
++      bc 12,2,.L4
++      lwz 0,24(1)
++      cmpwi 0,0,1
++      bc 4,2,.L35
++      lwz 9,16(1)
++      lbz 3,0(9)
++      b .L4
++.L35:
++      lwz 0,24(1)
++      cmpwi 0,0,2
++      bc 4,2,.L37
++      lwz 9,16(1)
++      lhz 3,0(9)
++      b .L4
++.L37:
++      lwz 0,24(1)
++      cmpwi 0,0,4
++      bc 4,2,.L39
++      lwz 9,16(1)
++      lwz 3,0(9)
++      b .L4
++.L39:
++      lwz 0,24(1)
++      cmpwi 0,0,8
++      bc 4,2,.L4
++      lwz 9,16(1)
++      lwz 3,0(9)
++      lwz 4,4(9)
++.L4:
++      lwz 0,212(1)
++      mtlr 0
++      la 1,208(1)
++      blr
++.Lfe1:
++      .size    __vacall,.Lfe1-__vacall
++      .ident  "GCC: (GNU) 2.95.2 19991024 (release/franzo)"
+--- a/vacall/Makefile.in
++++ b/vacall/Makefile.in
+@@ -146,6 +146,7 @@ vacall-powerpc.@OBJEXT@ : vacall-powerpc
+ vacall-powerpc.s : $(srcdir)/vacall-powerpc-aix.s $(srcdir)/vacall-powerpc-linux-macro.S $(srcdir)/vacall-powerpc-macos.s $(srcdir)/vacall-powerpc-sysv4-macro.S
+       case "$(OS)" in \
+         aix*) syntax=aix;; \
++        linux-gnuspe) syntax=linux-gnuspe;; \
+         linux* | netbsd* | openbsd*) syntax=linux;; \
+         macos* | darwin*) syntax=macos;; \
+         *) syntax=sysv4;; \
+--- /dev/null
++++ b/callback/vacall_r/vacall-powerpc-linux-gnuspe.s
+@@ -0,0 +1,149 @@
++      .file   "vacall-powerpc.c"
++gcc2_compiled.:
++      .section        ".text"
++      .align 2
++      .globl __vacall_r
++      .type    __vacall_r,@function
++__vacall_r:
++      stwu 1,-208(1)
++      mflr 0
++      stw 0,212(1)
++      stw 3,152(1)
++      stw 4,156(1)
++      stw 5,160(1)
++      stw 6,164(1)
++      stw 7,168(1)
++      stw 8,172(1)
++      stw 9,176(1)
++      stw 10,180(1)
++      li 9,0
++      stw 9,8(1)
++      addi 0,1,152
++      stw 0,12(1)
++      addi 0,1,216
++      stw 0,184(1)
++      stw 9,188(1)
++      stw 9,16(1)
++      stw 9,20(1)
++      addi 0,1,48
++      stw 0,40(1)
++      lwz 9,0(11)
++      lwz 3,4(11)
++      addi 4,1,8
++      mtlr 9
++      blrl
++      lwz 0,20(1)
++      cmpwi 0,0,0
++      bc 12,2,.L4
++      cmpwi 0,0,1
++      bc 12,2,.L42
++      lwz 0,20(1)
++      cmpwi 0,0,2
++      bc 4,2,.L7
++      lbz 0,32(1)
++      extsb 3,0
++      b .L4
++.L7:
++      lwz 0,20(1)
++      cmpwi 0,0,3
++      bc 4,2,.L9
++.L42:
++      lbz 3,32(1)
++      b .L4
++.L9:
++      lwz 0,20(1)
++      cmpwi 0,0,4
++      bc 4,2,.L11
++      lha 3,32(1)
++      b .L4
++.L11:
++      lwz 0,20(1)
++      cmpwi 0,0,5
++      bc 4,2,.L13
++      lhz 3,32(1)
++      b .L4
++.L13:
++      lwz 0,20(1)
++      cmpwi 0,0,6
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,7
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,8
++      bc 12,2,.L43
++      lwz 0,20(1)
++      cmpwi 0,0,9
++      bc 12,2,.L43
++      lwz 9,20(1)
++      addi 9,9,-10
++      cmplwi 0,9,1
++      bc 12,1,.L23
++      lwz 3,32(1)
++      lwz 4,36(1)
++      b .L4
++.L23:
++      lwz 0,20(1)
++      cmpwi 0,0,12
++      bc 4,2,.L25
++      b .L4
++.L25:
++      lwz 0,20(1)
++      cmpwi 0,0,13
++      bc 4,2,.L27
++      b .L4
++.L27:
++      lwz 0,20(1)
++      cmpwi 0,0,14
++      bc 4,2,.L29
++.L43:
++      lwz 3,32(1)
++      b .L4
++.L29:
++      lwz 0,20(1)
++      cmpwi 0,0,15
++      bc 4,2,.L4
++      lwz 0,8(1)
++      andi. 9,0,1
++      bc 12,2,.L32
++      lwz 3,16(1)
++      b .L4
++.L32:
++      lwz 0,8(1)
++      andi. 9,0,1024
++      bc 12,2,.L4
++      lwz 0,24(1)
++      cmpwi 0,0,1
++      bc 4,2,.L35
++      lwz 9,16(1)
++      lbz 3,0(9)
++      b .L4
++.L35:
++      lwz 0,24(1)
++      cmpwi 0,0,2
++      bc 4,2,.L37
++      lwz 9,16(1)
++      lhz 3,0(9)
++      b .L4
++.L37:
++      lwz 0,24(1)
++      cmpwi 0,0,4
++      bc 4,2,.L39
++      lwz 9,16(1)
++      lwz 3,0(9)
++      b .L4
++.L39:
++      lwz 0,24(1)
++      cmpwi 0,0,8
++      bc 4,2,.L4
++      lwz 9,16(1)
++      lwz 3,0(9)
++      lwz 4,4(9)
++.L4:
++      lwz 0,212(1)
++      mtlr 0
++      la 1,208(1)
++      blr
++.Lfe1:
++      .size    __vacall_r,.Lfe1-__vacall_r
++      .ident  "GCC: (GNU) 2.95.2 19991024 (release/franzo)"
+--- a/callback/vacall_r/Makefile.in
++++ b/callback/vacall_r/Makefile.in
+@@ -150,6 +150,7 @@ vacall-powerpc.lo : vacall-powerpc.s
+ vacall-powerpc.s : $(srcdir)/vacall-powerpc-aix.s $(srcdir)/vacall-powerpc-linux-macro.S $(srcdir)/vacall-powerpc-macos.s $(srcdir)/vacall-powerpc-sysv4-macro.S
+       case "$(OS)" in \
+         aix*) syntax=aix;; \
++        linux-gnuspe) syntax=linux-gnuspe;; \
+         linux* | netbsd* | openbsd*) syntax=linux;; \
+         macos* | darwin*) syntax=macos;; \
+         *) syntax=sysv4;; \
diff --git a/patches/mips-fpxx.patch b/patches/mips-fpxx.patch
new file mode 100644 (file)
index 0000000..40048dd
--- /dev/null
+++ b/patches/mips-fpxx.patch
@@ -0,0 +1,5104 @@
+Description: Update assembly code for new ABI on mips and mipsel
+ There was a change in the ABI of mips and mipsel since gcc-5 (see #825342) that
+ makes ffcall FTBFS on those arches, because it uses odd-numbered floating-point
+ registers, which is no longer possible with the new ABI.
+ .
+ This patch regenerates the assembly files with the new ABI. Note that the flag
+ -fno-tree-dce was added for generating the avcall files, because otherwise the
+ allocation on the stack with __builtin_alloca is optimized out.
+ .
+ On mips, the new assembly files have been created with the following commands:
+ .
+   rm avcall/avcall-mipseb* vacall/vacall-mipseb* callback/vacall_r/vacall-mipseb*
+   make -C avcall -f Makefile.devel avcall-mipseb-linux.s avcall-mipseb-macro.S
+   make -C vacall -f Makefile.devel vacall-mipseb-linux.s vacall-mipseb-macro.S
+   make -C callback/vacall_r -f Makefile.devel vacall-mipseb-linux.s vacall-mipseb-macro.S
+ .
+ On mipsel, the commands are the same after substituting mipseb by mipsel.
+Author: Sébastien Villemot <sebastien@debian.org>
+Forwarded: https://savannah.gnu.org/bugs/index.php?52510
+Last-Update: 2017-11-26
+---
+This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
+--- a/avcall/Makefile.devel
++++ b/avcall/Makefile.devel
+@@ -48,13 +48,13 @@ avcall-m68k.motorola.S : avcall-m68k-lin
+ avcall-mipseb-linux.s : avcall-mips.c avcall-internal.h avcall.h avcall-alist.h $(THISFILE)
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -meb $(GCCFLAGS) -D__mips__ -S avcall-mips.c -o avcall-mipseb-linux.s
++      mips-linux-gnu-gcc $(GCCFLAGS) -fno-tree-dce -D__mips__ -S avcall-mips.c -o avcall-mipseb-linux.s
+ avcall-mipseb-macro.S : avcall-mipseb-linux.s ../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../common/asm-mips.sh < avcall-mipseb-linux.s) > avcall-mipseb-macro.S
+ avcall-mipsel-linux.s : avcall-mips.c avcall-internal.h avcall.h avcall-alist.h $(THISFILE)
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -mel $(GCCFLAGS) -D__mips__ -S avcall-mips.c -o avcall-mipsel-linux.s
++      mipsel-linux-gnu-gcc $(GCCFLAGS) -fno-tree-dce -D__mips__ -S avcall-mips.c -o avcall-mipsel-linux.s
+ avcall-mipsel-macro.S : avcall-mipsel-linux.s ../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../common/asm-mips.sh < avcall-mipsel-linux.s) > avcall-mipsel-macro.S
+--- a/callback/vacall_r/Makefile.devel
++++ b/callback/vacall_r/Makefile.devel
+@@ -49,13 +49,13 @@ vacall-m68k.motorola.S : vacall-m68k-lin
+ vacall-mipseb-linux.s : ../../vacall/vacall-mips.c ../../vacall/vacall-internal.h vacall_r.h $(THISFILE)
+ # For references to symbols: -mno-explicit-relocs ensures a syntax that the IRIX assembler understands.
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -meb -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S ../../vacall/vacall-mips.c -I../../vacall -I. -o vacall-mipseb-linux.s
++      mips-linux-gnu-gcc -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S ../../vacall/vacall-mips.c -I../../vacall -I. -o vacall-mipseb-linux.s
+ vacall-mipseb-macro.S : vacall-mipseb-linux.s ../../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../../common/asm-mips.sh < vacall-mipseb-linux.s) > vacall-mipseb-macro.S
+ vacall-mipsel-linux.s : ../../vacall/vacall-mips.c ../../vacall/vacall-internal.h vacall_r.h $(THISFILE)
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -mel -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S ../../vacall/vacall-mips.c -I../../vacall -I. -o vacall-mipsel-linux.s
++      mipsel-linux-gnu-gcc -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S ../../vacall/vacall-mips.c -I../../vacall -I. -o vacall-mipsel-linux.s
+ vacall-mipsel-macro.S : vacall-mipsel-linux.s ../../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../../common/asm-mips.sh < vacall-mipsel-linux.s) > vacall-mipsel-macro.S
+--- a/vacall/Makefile.devel
++++ b/vacall/Makefile.devel
+@@ -49,13 +49,13 @@ vacall-m68k.motorola.S : vacall-m68k-lin
+ vacall-mipseb-linux.s : vacall-mips.c vacall-internal.h vacall.h $(THISFILE)
+ # For references to global symbols: -mno-explicit-relocs ensures a syntax that the IRIX assembler understands.
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -meb -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S vacall-mips.c -o vacall-mipseb-linux.s
++      mips-linux-gnu-gcc -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S vacall-mips.c -o vacall-mipseb-linux.s
+ vacall-mipseb-macro.S : vacall-mipseb-linux.s ../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../common/asm-mips.sh < vacall-mipseb-linux.s) > vacall-mipseb-macro.S
+ vacall-mipsel-linux.s : vacall-mips.c vacall-internal.h vacall.h $(THISFILE)
+-      $(CROSS_TOOL) mips64-linux gcc -V 4.0.2 -mabi=32 -mel -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S vacall-mips.c -o vacall-mipsel-linux.s
++      mipsel-linux-gnu-gcc -mno-explicit-relocs $(GCCFLAGS) -D__mips__ -S vacall-mips.c -o vacall-mipsel-linux.s
+ vacall-mipsel-macro.S : vacall-mipsel-linux.s ../common/asm-mips.sh $(THISFILE)
+       (echo '#include "asm-mips.h"' ; ../common/asm-mips.sh < vacall-mipsel-linux.s) > vacall-mipsel-macro.S
+--- a/avcall/avcall-mipseb-linux.s
++++ b/avcall/avcall-mipseb-linux.s
+@@ -1,325 +1,310 @@
+       .file   1 "avcall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
+       .globl  avcall_call
++      .set    nomips16
++      .set    nomicromips
+       .ent    avcall_call
+       .type   avcall_call, @function
+ avcall_call:
+       .frame  $fp,40,$31              # vars= 0, regs= 3/0, args= 16, gp= 8
+-      .mask   0xc0010000,-8
++      .mask   0xc0010000,-4
+       .fmask  0x00000000,0
+       addiu   $sp,$sp,-40
+-      sw      $fp,28($sp)
++      lw      $8,24($4)
++      lw      $3,20($4)
++      lw      $5,40($4)
++      sw      $fp,32($sp)
+       move    $fp,$sp
+-      sw      $31,32($sp)
+-      sw      $16,24($sp)
+-      .cprestore      16
+-      lw      $3,24($4)
+-      lw      $2,20($4)
+-      move    $16,$4
+-      lw      $4,40($4)
+-      subu    $2,$2,$3
++      sw      $16,28($sp)
++      subu    $3,$3,$8
++      sw      $31,36($sp)
+       addiu   $sp,$sp,-1032
+-      andi    $3,$4,0x1
+-      sra     $7,$2,2
++      move    $16,$4
++      andi    $4,$5,0x1
++      move    $2,$sp
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$0,$L2
+-      move    $5,$sp
++      beq     $4,$0,$L2
++      sra     $7,$3,2
+       .set    macro
+       .set    reorder
+ #APP
++ # 76 "avcall-mips.c" 1
+       l.s $f12,48($16)
++ # 0 "" 2
+ #NO_APP
+ $L2:
+-      lw      $3,44($16)
+-      #nop
+-      andi    $2,$3,0x1
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L63
+-      andi    $2,$4,0x2
+-      .set    macro
+-      .set    reorder
+-
++      lw      $4,44($16)
++      andi    $6,$4,0x1
++      beq     $6,$0,$L3
+ #APP
++ # 78 "avcall-mips.c" 1
+       l.d $f12,56($16)
++ # 0 "" 2
+ #NO_APP
+-      andi    $2,$4,0x2
+-$L63:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L64
+-      andi    $2,$3,0x2
+-      .set    macro
+-      .set    reorder
+-
++$L3:
++      andi    $5,$5,0x2
++      beq     $5,$0,$L4
+ #APP
++ # 80 "avcall-mips.c" 1
+       l.s $f14,52($16)
++ # 0 "" 2
+ #NO_APP
+-      andi    $2,$3,0x2
+-$L64:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L65
+-      slt     $2,$7,5
+-      .set    macro
+-      .set    reorder
+-
++$L4:
++      andi    $4,$4,0x2
++      beq     $4,$0,$L5
+ #APP
++ # 82 "avcall-mips.c" 1
+       l.d $f14,64($16)
++ # 0 "" 2
+ #NO_APP
+-      slt     $2,$7,5
+-$L65:
++$L5:
++      slt     $3,$3,17
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L51
+-      li      $4,4                    # 0x4
++      bne     $3,$0,$L6
++      addiu   $4,$8,16
+       .set    macro
+       .set    reorder
+-      lw      $6,24($16)
+-$L10:
+-      sll     $2,$4,2
+-      addu    $2,$2,$6
+-      lw      $3,0($2)
+-      addiu   $4,$4,1
+-      sw      $3,16($5)
++      addiu   $2,$2,16
++      li      $3,4                    # 0x4
++$L7:
++      lw      $6,0($4)
++      addiu   $3,$3,1
++      addiu   $4,$4,4
++      slt     $5,$3,$7
++      sw      $6,0($2)
+       .set    noreorder
+       .set    nomacro
+-      bne     $7,$4,$L10
+-      addiu   $5,$5,4
++      bne     $5,$0,$L7
++      addiu   $2,$2,4
+       .set    macro
+       .set    reorder
+-$L11:
+-      lw      $4,0($6)
+-      lw      $5,4($6)
+-      lw      $7,12($6)
++$L6:
++      lw      $5,4($8)
++      lw      $4,0($8)
+       lw      $25,4($16)
+-      lw      $6,8($6)
+-      jalr    $25
+-      lw      $4,12($16)
+-      move    $5,$2
+-      li      $2,1                    # 0x1
+-      lw      $28,16($fp)
+-      beq     $4,$2,$L12
++      lw      $7,12($8)
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L57
+-      li      $2,2                    # 0x2
++      jalr    $25
++      lw      $6,8($8)
+       .set    macro
+       .set    reorder
++      li      $5,1                    # 0x1
++      lw      $4,12($16)
++      beq     $4,$5,$L8
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,3                    # 0x3
++      beq     $4,$0,$L42
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,4                    # 0x4
++      beq     $4,$5,$L45
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,5                    # 0x5
++      beq     $4,$5,$L45
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,6                    # 0x6
++      beq     $4,$5,$L45
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,7                    # 0x7
++      beq     $4,$5,$L46
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,8                    # 0x8
++      beq     $4,$5,$L46
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,9                    # 0x9
++      beq     $4,$5,$L42
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,10                   # 0xa
++      beq     $4,$5,$L42
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      addiu   $2,$4,-11
++      beq     $4,$5,$L42
++      li      $5,10                   # 0xa
+       .set    macro
+       .set    reorder
+-      sltu    $2,$2,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L60
+-      li      $2,13                   # 0xd
++      beq     $4,$5,$L42
++      addiu   $5,$4,-11
+       .set    macro
+       .set    reorder
++      sltu    $5,$5,2
++      bne     $5,$0,$L48
++      li      $3,13                   # 0xd
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L61
+-      li      $2,14                   # 0xe
++      beq     $4,$3,$L49
++      li      $3,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L62
+-      li      $2,15                   # 0xf
++      beq     $4,$3,$L50
++      li      $3,15                   # 0xf
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,16                   # 0x10
++      beq     $4,$3,$L42
++      li      $3,16                   # 0x10
+       .set    macro
+       .set    reorder
+-      bne     $4,$2,$L12
+-      lw      $2,0($16)
+-      #nop
+-      andi    $2,$2,0x2
++      bne     $4,$3,$L8
++      lw      $3,0($16)
++      andi    $3,$3,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $2,$0,$L12
+-      li      $2,1                    # 0x1
++      beq     $3,$0,$L8
++      li      $4,1                    # 0x1
+       .set    macro
+       .set    reorder
+       lw      $3,16($16)
+-      #nop
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L56
+-      li      $2,2                    # 0x2
++      beq     $3,$4,$L45
++      li      $4,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L58
+-      li      $2,4                    # 0x4
++      beq     $3,$4,$L46
++      li      $4,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      beq     $3,$2,$L57
+-$L12:
++      beq     $3,$4,$L42
++$L8:
+       move    $sp,$fp
+-      lw      $31,32($fp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $6,24($16)
+-      b       $L11
+-$L57:
+-      lw      $2,8($16)
++$L42:
++      lw      $3,8($16)
++      sw      $2,0($3)
+       move    $sp,$fp
+-      sw      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L56:
+-      lw      $2,8($16)
++$L45:
++      lw      $3,8($16)
++      sb      $2,0($3)
+       move    $sp,$fp
+-      sb      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L58:
+-      lw      $2,8($16)
++$L46:
++      lw      $3,8($16)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sh      $5,0($2)
++      b       $L8
++      sh      $2,0($3)
+       .set    macro
+       .set    reorder
+-$L61:
+-      lw      $2,8($16)
++$L48:
++      lw      $4,8($16)
++      sw      $2,0($4)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f0,0($2)
++      b       $L8
++      sw      $3,4($4)
+       .set    macro
+       .set    reorder
+-$L60:
++$L49:
+       lw      $2,8($16)
+-      #nop
+-      sw      $3,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sw      $5,0($2)
++      b       $L8
++      swc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+-$L62:
++$L50:
+       lw      $2,8($16)
+-      #nop
+-      swc1    $f0,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f1,0($2)
++      b       $L8
++      sdc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+       .end    avcall_call
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   avcall_call, .-avcall_call
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/avcall/avcall-mipseb-macro.S
++++ b/avcall/avcall-mipseb-macro.S
+@@ -1,322 +1,307 @@
+ #include "asm-mips.h"
+       .file   1 "avcall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
+       .globl  avcall_call
++      .set    nomips16
++      .set    nomicromips
+       .ent    avcall_call
+       DECLARE_FUNCTION(avcall_call)
+ avcall_call:
+       .frame  $fp,40,$31              
+-      .mask   0xc0010000,-8
++      .mask   0xc0010000,-4
+       .fmask  0x00000000,0
+       addiu   $sp,$sp,-40
+-      sw      $fp,28($sp)
++      lw      $8,24($4)
++      lw      $3,20($4)
++      lw      $5,40($4)
++      sw      $fp,32($sp)
+       move    $fp,$sp
+-      sw      $31,32($sp)
+-      sw      $16,24($sp)
+-      .cprestore      16
+-      lw      $3,24($4)
+-      lw      $2,20($4)
+-      move    $16,$4
+-      lw      $4,40($4)
+-      subu    $2,$2,$3
++      sw      $16,28($sp)
++      subu    $3,$3,$8
++      sw      $31,36($sp)
+       addiu   $sp,$sp,-1032
+-      andi    $3,$4,0x1
+-      sra     $7,$2,2
++      move    $16,$4
++      andi    $4,$5,0x1
++      move    $2,$sp
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$0,$L2
+-      move    $5,$sp
++      beq     $4,$0,$L2
++      sra     $7,$3,2
+       .set    macro
+       .set    reorder
++ 
+       l.s $f12,48($16)
++ 
+ $L2:
+-      lw      $3,44($16)
+-      
+-      andi    $2,$3,0x1
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L63
+-      andi    $2,$4,0x2
+-      .set    macro
+-      .set    reorder
+-
++      lw      $4,44($16)
++      andi    $6,$4,0x1
++      beq     $6,$0,$L3
++ 
+       l.d $f12,56($16)
++ 
+-      andi    $2,$4,0x2
+-$L63:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L64
+-      andi    $2,$3,0x2
+-      .set    macro
+-      .set    reorder
+-
++$L3:
++      andi    $5,$5,0x2
++      beq     $5,$0,$L4
++ 
+       l.s $f14,52($16)
++ 
+-      andi    $2,$3,0x2
+-$L64:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L65
+-      slt     $2,$7,5
+-      .set    macro
+-      .set    reorder
+-
++$L4:
++      andi    $4,$4,0x2
++      beq     $4,$0,$L5
++ 
+       l.d $f14,64($16)
++ 
+-      slt     $2,$7,5
+-$L65:
++$L5:
++      slt     $3,$3,17
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L51
+-      li      $4,4                    
++      bne     $3,$0,$L6
++      addiu   $4,$8,16
+       .set    macro
+       .set    reorder
+-      lw      $6,24($16)
+-$L10:
+-      sll     $2,$4,2
+-      addu    $2,$2,$6
+-      lw      $3,0($2)
+-      addiu   $4,$4,1
+-      sw      $3,16($5)
++      addiu   $2,$2,16
++      li      $3,4                    
++$L7:
++      lw      $6,0($4)
++      addiu   $3,$3,1
++      addiu   $4,$4,4
++      slt     $5,$3,$7
++      sw      $6,0($2)
+       .set    noreorder
+       .set    nomacro
+-      bne     $7,$4,$L10
+-      addiu   $5,$5,4
++      bne     $5,$0,$L7
++      addiu   $2,$2,4
+       .set    macro
+       .set    reorder
+-$L11:
+-      lw      $4,0($6)
+-      lw      $5,4($6)
+-      lw      $7,12($6)
++$L6:
++      lw      $5,4($8)
++      lw      $4,0($8)
+       lw      $25,4($16)
+-      lw      $6,8($6)
+-      jalr    $25
+-      lw      $4,12($16)
+-      move    $5,$2
+-      li      $2,1                    
+-      lw      $28,16($fp)
+-      beq     $4,$2,$L12
++      lw      $7,12($8)
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L57
+-      li      $2,2                    
++      jalr    $25
++      lw      $6,8($8)
+       .set    macro
+       .set    reorder
++      li      $5,1                    
++      lw      $4,12($16)
++      beq     $4,$5,$L8
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,3                    
++      beq     $4,$0,$L42
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,4                    
++      beq     $4,$5,$L45
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,5                    
++      beq     $4,$5,$L45
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,6                    
++      beq     $4,$5,$L45
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,7                    
++      beq     $4,$5,$L46
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,8                    
++      beq     $4,$5,$L46
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,9                    
++      beq     $4,$5,$L42
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,10                   
++      beq     $4,$5,$L42
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      addiu   $2,$4,-11
++      beq     $4,$5,$L42
++      li      $5,10                   
+       .set    macro
+       .set    reorder
+-      sltu    $2,$2,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L60
+-      li      $2,13                   
++      beq     $4,$5,$L42
++      addiu   $5,$4,-11
+       .set    macro
+       .set    reorder
++      sltu    $5,$5,2
++      bne     $5,$0,$L48
++      li      $3,13                   
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L61
+-      li      $2,14                   
++      beq     $4,$3,$L49
++      li      $3,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L62
+-      li      $2,15                   
++      beq     $4,$3,$L50
++      li      $3,15                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,16                   
++      beq     $4,$3,$L42
++      li      $3,16                   
+       .set    macro
+       .set    reorder
+-      bne     $4,$2,$L12
+-      lw      $2,0($16)
+-      
+-      andi    $2,$2,0x2
++      bne     $4,$3,$L8
++      lw      $3,0($16)
++      andi    $3,$3,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $2,$0,$L12
+-      li      $2,1                    
++      beq     $3,$0,$L8
++      li      $4,1                    
+       .set    macro
+       .set    reorder
+       lw      $3,16($16)
+-      
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L56
+-      li      $2,2                    
++      beq     $3,$4,$L45
++      li      $4,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L58
+-      li      $2,4                    
++      beq     $3,$4,$L46
++      li      $4,4                    
+       .set    macro
+       .set    reorder
+-      beq     $3,$2,$L57
+-$L12:
++      beq     $3,$4,$L42
++$L8:
+       move    $sp,$fp
+-      lw      $31,32($fp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $6,24($16)
+-      b       $L11
+-$L57:
+-      lw      $2,8($16)
++$L42:
++      lw      $3,8($16)
++      sw      $2,0($3)
+       move    $sp,$fp
+-      sw      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L56:
+-      lw      $2,8($16)
++$L45:
++      lw      $3,8($16)
++      sb      $2,0($3)
+       move    $sp,$fp
+-      sb      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L58:
+-      lw      $2,8($16)
++$L46:
++      lw      $3,8($16)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sh      $5,0($2)
++      b       $L8
++      sh      $2,0($3)
+       .set    macro
+       .set    reorder
+-$L61:
+-      lw      $2,8($16)
++$L48:
++      lw      $4,8($16)
++      sw      $2,0($4)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f0,0($2)
++      b       $L8
++      sw      $3,4($4)
+       .set    macro
+       .set    reorder
+-$L60:
++$L49:
+       lw      $2,8($16)
+-      
+-      sw      $3,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sw      $5,0($2)
++      b       $L8
++      swc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+-$L62:
++$L50:
+       lw      $2,8($16)
+-      
+-      swc1    $f0,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f1,0($2)
++      b       $L8
++      sdc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+       .end    avcall_call
++      .size   avcall_call, .-avcall_call
+--- a/avcall/avcall-mipsel-linux.s
++++ b/avcall/avcall-mipsel-linux.s
+@@ -1,325 +1,310 @@
+       .file   1 "avcall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
+       .globl  avcall_call
++      .set    nomips16
++      .set    nomicromips
+       .ent    avcall_call
+       .type   avcall_call, @function
+ avcall_call:
+       .frame  $fp,40,$31              # vars= 0, regs= 3/0, args= 16, gp= 8
+-      .mask   0xc0010000,-8
++      .mask   0xc0010000,-4
+       .fmask  0x00000000,0
+       addiu   $sp,$sp,-40
+-      sw      $fp,28($sp)
++      lw      $8,24($4)
++      lw      $3,20($4)
++      lw      $5,40($4)
++      sw      $fp,32($sp)
+       move    $fp,$sp
+-      sw      $31,32($sp)
+-      sw      $16,24($sp)
+-      .cprestore      16
+-      lw      $3,24($4)
+-      lw      $2,20($4)
+-      move    $16,$4
+-      lw      $4,40($4)
+-      subu    $2,$2,$3
++      sw      $16,28($sp)
++      subu    $3,$3,$8
++      sw      $31,36($sp)
+       addiu   $sp,$sp,-1032
+-      andi    $3,$4,0x1
+-      sra     $7,$2,2
++      move    $16,$4
++      andi    $4,$5,0x1
++      move    $2,$sp
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$0,$L2
+-      move    $5,$sp
++      beq     $4,$0,$L2
++      sra     $7,$3,2
+       .set    macro
+       .set    reorder
+ #APP
++ # 76 "avcall-mips.c" 1
+       l.s $f12,48($16)
++ # 0 "" 2
+ #NO_APP
+ $L2:
+-      lw      $3,44($16)
+-      #nop
+-      andi    $2,$3,0x1
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L63
+-      andi    $2,$4,0x2
+-      .set    macro
+-      .set    reorder
+-
++      lw      $4,44($16)
++      andi    $6,$4,0x1
++      beq     $6,$0,$L3
+ #APP
++ # 78 "avcall-mips.c" 1
+       l.d $f12,56($16)
++ # 0 "" 2
+ #NO_APP
+-      andi    $2,$4,0x2
+-$L63:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L64
+-      andi    $2,$3,0x2
+-      .set    macro
+-      .set    reorder
+-
++$L3:
++      andi    $5,$5,0x2
++      beq     $5,$0,$L4
+ #APP
++ # 80 "avcall-mips.c" 1
+       l.s $f14,52($16)
++ # 0 "" 2
+ #NO_APP
+-      andi    $2,$3,0x2
+-$L64:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L65
+-      slt     $2,$7,5
+-      .set    macro
+-      .set    reorder
+-
++$L4:
++      andi    $4,$4,0x2
++      beq     $4,$0,$L5
+ #APP
++ # 82 "avcall-mips.c" 1
+       l.d $f14,64($16)
++ # 0 "" 2
+ #NO_APP
+-      slt     $2,$7,5
+-$L65:
++$L5:
++      slt     $3,$3,17
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L51
+-      li      $4,4                    # 0x4
++      bne     $3,$0,$L6
++      addiu   $4,$8,16
+       .set    macro
+       .set    reorder
+-      lw      $6,24($16)
+-$L10:
+-      sll     $2,$4,2
+-      addu    $2,$2,$6
+-      lw      $3,0($2)
+-      addiu   $4,$4,1
+-      sw      $3,16($5)
++      addiu   $2,$2,16
++      li      $3,4                    # 0x4
++$L7:
++      lw      $6,0($4)
++      addiu   $3,$3,1
++      addiu   $4,$4,4
++      slt     $5,$3,$7
++      sw      $6,0($2)
+       .set    noreorder
+       .set    nomacro
+-      bne     $7,$4,$L10
+-      addiu   $5,$5,4
++      bne     $5,$0,$L7
++      addiu   $2,$2,4
+       .set    macro
+       .set    reorder
+-$L11:
+-      lw      $4,0($6)
+-      lw      $5,4($6)
+-      lw      $7,12($6)
++$L6:
++      lw      $5,4($8)
++      lw      $4,0($8)
+       lw      $25,4($16)
+-      lw      $6,8($6)
+-      jalr    $25
+-      lw      $4,12($16)
+-      move    $5,$2
+-      li      $2,1                    # 0x1
+-      lw      $28,16($fp)
+-      beq     $4,$2,$L12
++      lw      $7,12($8)
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L57
+-      li      $2,2                    # 0x2
++      jalr    $25
++      lw      $6,8($8)
+       .set    macro
+       .set    reorder
++      li      $5,1                    # 0x1
++      lw      $4,12($16)
++      beq     $4,$5,$L8
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,3                    # 0x3
++      beq     $4,$0,$L42
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,4                    # 0x4
++      beq     $4,$5,$L45
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,5                    # 0x5
++      beq     $4,$5,$L45
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,6                    # 0x6
++      beq     $4,$5,$L45
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,7                    # 0x7
++      beq     $4,$5,$L46
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,8                    # 0x8
++      beq     $4,$5,$L46
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,9                    # 0x9
++      beq     $4,$5,$L42
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,10                   # 0xa
++      beq     $4,$5,$L42
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      addiu   $2,$4,-11
++      beq     $4,$5,$L42
++      li      $5,10                   # 0xa
+       .set    macro
+       .set    reorder
+-      sltu    $2,$2,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L60
+-      li      $2,13                   # 0xd
++      beq     $4,$5,$L42
++      addiu   $5,$4,-11
+       .set    macro
+       .set    reorder
++      sltu    $5,$5,2
++      bne     $5,$0,$L48
++      li      $3,13                   # 0xd
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L61
+-      li      $2,14                   # 0xe
++      beq     $4,$3,$L49
++      li      $3,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L62
+-      li      $2,15                   # 0xf
++      beq     $4,$3,$L50
++      li      $3,15                   # 0xf
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,16                   # 0x10
++      beq     $4,$3,$L42
++      li      $3,16                   # 0x10
+       .set    macro
+       .set    reorder
+-      bne     $4,$2,$L12
+-      lw      $2,0($16)
+-      #nop
+-      andi    $2,$2,0x2
++      bne     $4,$3,$L8
++      lw      $3,0($16)
++      andi    $3,$3,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $2,$0,$L12
+-      li      $2,1                    # 0x1
++      beq     $3,$0,$L8
++      li      $4,1                    # 0x1
+       .set    macro
+       .set    reorder
+       lw      $3,16($16)
+-      #nop
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L56
+-      li      $2,2                    # 0x2
++      beq     $3,$4,$L45
++      li      $4,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L58
+-      li      $2,4                    # 0x4
++      beq     $3,$4,$L46
++      li      $4,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      beq     $3,$2,$L57
+-$L12:
++      beq     $3,$4,$L42
++$L8:
+       move    $sp,$fp
+-      lw      $31,32($fp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $6,24($16)
+-      b       $L11
+-$L57:
+-      lw      $2,8($16)
++$L42:
++      lw      $3,8($16)
++      sw      $2,0($3)
+       move    $sp,$fp
+-      sw      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L56:
+-      lw      $2,8($16)
++$L45:
++      lw      $3,8($16)
++      sb      $2,0($3)
+       move    $sp,$fp
+-      sb      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L58:
+-      lw      $2,8($16)
++$L46:
++      lw      $3,8($16)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sh      $5,0($2)
++      b       $L8
++      sh      $2,0($3)
+       .set    macro
+       .set    reorder
+-$L61:
+-      lw      $2,8($16)
++$L48:
++      lw      $4,8($16)
++      sw      $2,0($4)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f0,0($2)
++      b       $L8
++      sw      $3,4($4)
+       .set    macro
+       .set    reorder
+-$L60:
++$L49:
+       lw      $2,8($16)
+-      #nop
+-      sw      $3,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sw      $5,0($2)
++      b       $L8
++      swc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+-$L62:
++$L50:
+       lw      $2,8($16)
+-      #nop
+-      swc1    $f0,0($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f1,4($2)
++      b       $L8
++      sdc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+       .end    avcall_call
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   avcall_call, .-avcall_call
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/avcall/avcall-mipsel-macro.S
++++ b/avcall/avcall-mipsel-macro.S
+@@ -1,322 +1,307 @@
+ #include "asm-mips.h"
+       .file   1 "avcall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
+       .globl  avcall_call
++      .set    nomips16
++      .set    nomicromips
+       .ent    avcall_call
+       DECLARE_FUNCTION(avcall_call)
+ avcall_call:
+       .frame  $fp,40,$31              
+-      .mask   0xc0010000,-8
++      .mask   0xc0010000,-4
+       .fmask  0x00000000,0
+       addiu   $sp,$sp,-40
+-      sw      $fp,28($sp)
++      lw      $8,24($4)
++      lw      $3,20($4)
++      lw      $5,40($4)
++      sw      $fp,32($sp)
+       move    $fp,$sp
+-      sw      $31,32($sp)
+-      sw      $16,24($sp)
+-      .cprestore      16
+-      lw      $3,24($4)
+-      lw      $2,20($4)
+-      move    $16,$4
+-      lw      $4,40($4)
+-      subu    $2,$2,$3
++      sw      $16,28($sp)
++      subu    $3,$3,$8
++      sw      $31,36($sp)
+       addiu   $sp,$sp,-1032
+-      andi    $3,$4,0x1
+-      sra     $7,$2,2
++      move    $16,$4
++      andi    $4,$5,0x1
++      move    $2,$sp
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$0,$L2
+-      move    $5,$sp
++      beq     $4,$0,$L2
++      sra     $7,$3,2
+       .set    macro
+       .set    reorder
++ 
+       l.s $f12,48($16)
++ 
+ $L2:
+-      lw      $3,44($16)
+-      
+-      andi    $2,$3,0x1
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L63
+-      andi    $2,$4,0x2
+-      .set    macro
+-      .set    reorder
+-
++      lw      $4,44($16)
++      andi    $6,$4,0x1
++      beq     $6,$0,$L3
++ 
+       l.d $f12,56($16)
++ 
+-      andi    $2,$4,0x2
+-$L63:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L64
+-      andi    $2,$3,0x2
+-      .set    macro
+-      .set    reorder
+-
++$L3:
++      andi    $5,$5,0x2
++      beq     $5,$0,$L4
++ 
+       l.s $f14,52($16)
++ 
+-      andi    $2,$3,0x2
+-$L64:
+-      .set    noreorder
+-      .set    nomacro
+-      beq     $2,$0,$L65
+-      slt     $2,$7,5
+-      .set    macro
+-      .set    reorder
+-
++$L4:
++      andi    $4,$4,0x2
++      beq     $4,$0,$L5
++ 
+       l.d $f14,64($16)
++ 
+-      slt     $2,$7,5
+-$L65:
++$L5:
++      slt     $3,$3,17
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L51
+-      li      $4,4                    
++      bne     $3,$0,$L6
++      addiu   $4,$8,16
+       .set    macro
+       .set    reorder
+-      lw      $6,24($16)
+-$L10:
+-      sll     $2,$4,2
+-      addu    $2,$2,$6
+-      lw      $3,0($2)
+-      addiu   $4,$4,1
+-      sw      $3,16($5)
++      addiu   $2,$2,16
++      li      $3,4                    
++$L7:
++      lw      $6,0($4)
++      addiu   $3,$3,1
++      addiu   $4,$4,4
++      slt     $5,$3,$7
++      sw      $6,0($2)
+       .set    noreorder
+       .set    nomacro
+-      bne     $7,$4,$L10
+-      addiu   $5,$5,4
++      bne     $5,$0,$L7
++      addiu   $2,$2,4
+       .set    macro
+       .set    reorder
+-$L11:
+-      lw      $4,0($6)
+-      lw      $5,4($6)
+-      lw      $7,12($6)
++$L6:
++      lw      $5,4($8)
++      lw      $4,0($8)
+       lw      $25,4($16)
+-      lw      $6,8($6)
+-      jalr    $25
+-      lw      $4,12($16)
+-      move    $5,$2
+-      li      $2,1                    
+-      lw      $28,16($fp)
+-      beq     $4,$2,$L12
++      lw      $7,12($8)
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L57
+-      li      $2,2                    
++      jalr    $25
++      lw      $6,8($8)
+       .set    macro
+       .set    reorder
++      li      $5,1                    
++      lw      $4,12($16)
++      beq     $4,$5,$L8
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,3                    
++      beq     $4,$0,$L42
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,4                    
++      beq     $4,$5,$L45
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L56
+-      li      $2,5                    
++      beq     $4,$5,$L45
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,6                    
++      beq     $4,$5,$L45
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L58
+-      li      $2,7                    
++      beq     $4,$5,$L46
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,8                    
++      beq     $4,$5,$L46
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,9                    
++      beq     $4,$5,$L42
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,10                   
++      beq     $4,$5,$L42
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      addiu   $2,$4,-11
++      beq     $4,$5,$L42
++      li      $5,10                   
+       .set    macro
+       .set    reorder
+-      sltu    $2,$2,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $2,$0,$L60
+-      li      $2,13                   
++      beq     $4,$5,$L42
++      addiu   $5,$4,-11
+       .set    macro
+       .set    reorder
++      sltu    $5,$5,2
++      bne     $5,$0,$L48
++      li      $3,13                   
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L61
+-      li      $2,14                   
++      beq     $4,$3,$L49
++      li      $3,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L62
+-      li      $2,15                   
++      beq     $4,$3,$L50
++      li      $3,15                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$2,$L57
+-      li      $2,16                   
++      beq     $4,$3,$L42
++      li      $3,16                   
+       .set    macro
+       .set    reorder
+-      bne     $4,$2,$L12
+-      lw      $2,0($16)
+-      
+-      andi    $2,$2,0x2
++      bne     $4,$3,$L8
++      lw      $3,0($16)
++      andi    $3,$3,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $2,$0,$L12
+-      li      $2,1                    
++      beq     $3,$0,$L8
++      li      $4,1                    
+       .set    macro
+       .set    reorder
+       lw      $3,16($16)
+-      
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L56
+-      li      $2,2                    
++      beq     $3,$4,$L45
++      li      $4,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $3,$2,$L58
+-      li      $2,4                    
++      beq     $3,$4,$L46
++      li      $4,4                    
+       .set    macro
+       .set    reorder
+-      beq     $3,$2,$L57
+-$L12:
++      beq     $3,$4,$L42
++$L8:
+       move    $sp,$fp
+-      lw      $31,32($fp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $6,24($16)
+-      b       $L11
+-$L57:
+-      lw      $2,8($16)
++$L42:
++      lw      $3,8($16)
++      sw      $2,0($3)
+       move    $sp,$fp
+-      sw      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L56:
+-      lw      $2,8($16)
++$L45:
++      lw      $3,8($16)
++      sb      $2,0($3)
+       move    $sp,$fp
+-      sb      $5,0($2)
+-      lw      $31,32($sp)
+-      lw      $fp,28($sp)
+-      lw      $16,24($sp)
+       move    $2,$0
++      lw      $31,36($sp)
++      lw      $fp,32($sp)
++      lw      $16,28($sp)
+       .set    noreorder
+       .set    nomacro
+-      j       $31
++      jr      $31
+       addiu   $sp,$sp,40
+       .set    macro
+       .set    reorder
+-$L58:
+-      lw      $2,8($16)
++$L46:
++      lw      $3,8($16)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sh      $5,0($2)
++      b       $L8
++      sh      $2,0($3)
+       .set    macro
+       .set    reorder
+-$L61:
+-      lw      $2,8($16)
++$L48:
++      lw      $4,8($16)
++      sw      $2,0($4)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f0,0($2)
++      b       $L8
++      sw      $3,4($4)
+       .set    macro
+       .set    reorder
+-$L60:
++$L49:
+       lw      $2,8($16)
+-      
+-      sw      $3,4($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      sw      $5,0($2)
++      b       $L8
++      swc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+-$L62:
++$L50:
+       lw      $2,8($16)
+-      
+-      swc1    $f0,0($2)
+       .set    noreorder
+       .set    nomacro
+-      b       $L12
+-      swc1    $f1,4($2)
++      b       $L8
++      sdc1    $f0,0($2)
+       .set    macro
+       .set    reorder
+       .end    avcall_call
++      .size   avcall_call, .-avcall_call
+--- a/callback/vacall_r/vacall-mipseb-linux.s
++++ b/callback/vacall_r/vacall-mipseb-linux.s
+@@ -1,9 +1,14 @@
+       .file   1 "vacall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_receiver
+       .type   callback_receiver, @function
+ callback_receiver:
+@@ -17,162 +22,164 @@ callback_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $0,44($fp)
+-      sw      $5,108($fp)
+-      addiu   $5,$fp,120
+       sw      $4,104($fp)
+-      sw      $5,56($fp)
+-      lw      $4,4($2)
+-      addiu   $5,$fp,104
+-      lw      $25,0($2)
+-      sw      $5,40($fp)
+-      sw      $6,112($fp)
++      addiu   $4,$fp,104
++      sw      $5,108($fp)
+       addiu   $5,$fp,24
++      sw      $4,40($fp)
++      addiu   $4,$fp,120
++      .cprestore      16
++      sw      $4,56($fp)
++      sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,84($fp)
+-      swc1    $f13,80($fp)
+-      swc1    $f14,92($fp)
+-      swc1    $f15,88($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
++      sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
++      lw      $4,4($2)
++      lw      $25,0($2)
++      sdc1    $f12,80($fp)
++      sdc1    $f14,88($fp)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L1
++      li      $5,1                    # 0x1
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L23
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    # 0x3
++      beq     $4,$5,$L23
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L29
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    # 0x5
++      beq     $4,$5,$L30
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    # 0x6
++      beq     $4,$5,$L31
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    # 0x7
++      beq     $4,$5,$L27
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    # 0x8
++      beq     $4,$5,$L27
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    # 0x9
++      beq     $4,$5,$L27
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   # 0xc
++      bne     $5,$0,$L32
++      li      $5,12                   # 0xc
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   # 0xd
++      beq     $4,$5,$L33
++      li      $5,13                   # 0xd
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   # 0xe
++      beq     $4,$5,$L34
++      li      $5,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   # 0xf
++      beq     $4,$5,$L27
++      li      $5,15                   # 0xf
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    # 0x1
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L35
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L36
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -181,7 +188,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -193,118 +200,110 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,36($fp)
+-      lwc1    $f1,32($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    callback_receiver
++      .size   callback_receiver, .-callback_receiver
+       .align  2
+       .globl  callback_get_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_get_receiver
+       .type   callback_get_receiver, @function
+ callback_get_receiver:
+       .frame  $fp,8,$31               # vars= 0, regs= 1/0, args= 0, gp= 0
+-      .mask   0x40000000,-8
++      .mask   0x40000000,-4
+       .fmask  0x00000000,0
+       .set    noreorder
+       .cpload $25
+       .set    reorder
+       addiu   $sp,$sp,-8
+-      sw      $fp,0($sp)
++      sw      $fp,4($sp)
+       move    $fp,$sp
+       move    $sp,$fp
+-      lw      $fp,0($sp)
+       la      $2,callback_receiver
++      lw      $fp,4($sp)
+       .set    noreorder
+       .set    nomacro
+       j       $31
+@@ -313,4 +312,5 @@ callback_get_receiver:
+       .set    reorder
+       .end    callback_get_receiver
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   callback_get_receiver, .-callback_get_receiver
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/callback/vacall_r/vacall-mipseb-macro.S
++++ b/callback/vacall_r/vacall-mipseb-macro.S
+@@ -1,7 +1,12 @@
+ #include "asm-mips.h"
+       .file   1 "vacall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_receiver
+       DECLARE_FUNCTION(callback_receiver)
+ callback_receiver:
+@@ -15,162 +20,164 @@ callback_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $0,44($fp)
+-      sw      $5,108($fp)
+-      addiu   $5,$fp,120
+       sw      $4,104($fp)
+-      sw      $5,56($fp)
+-      lw      $4,4($2)
+-      addiu   $5,$fp,104
+-      lw      $25,0($2)
+-      sw      $5,40($fp)
+-      sw      $6,112($fp)
++      addiu   $4,$fp,104
++      sw      $5,108($fp)
+       addiu   $5,$fp,24
++      sw      $4,40($fp)
++      addiu   $4,$fp,120
++      .cprestore      16
++      sw      $4,56($fp)
++      sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,84($fp)
+-      swc1    $f13,80($fp)
+-      swc1    $f14,92($fp)
+-      swc1    $f15,88($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
++      sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
++      lw      $4,4($2)
++      lw      $25,0($2)
++      sdc1    $f12,80($fp)
++      sdc1    $f14,88($fp)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    
++      beq     $4,$0,$L1
++      li      $5,1                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    
++      beq     $4,$5,$L23
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    
++      beq     $4,$5,$L23
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    
++      beq     $4,$5,$L29
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    
++      beq     $4,$5,$L30
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    
++      beq     $4,$5,$L31
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    
++      beq     $4,$5,$L27
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    
++      beq     $4,$5,$L27
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    
++      beq     $4,$5,$L27
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   
++      bne     $5,$0,$L32
++      li      $5,12                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   
++      beq     $4,$5,$L33
++      li      $5,13                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   
++      beq     $4,$5,$L34
++      li      $5,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   
++      beq     $4,$5,$L27
++      li      $5,15                   
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    
++      beq     $4,$5,$L35
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    
++      beq     $4,$5,$L36
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -179,7 +186,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -191,118 +198,110 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,36($fp)
+-      lwc1    $f1,32($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    callback_receiver
++      .size   callback_receiver, .-callback_receiver
+       .align  2
+       .globl  callback_get_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_get_receiver
+       DECLARE_FUNCTION(callback_get_receiver)
+ callback_get_receiver:
+       .frame  $fp,8,$31               
+-      .mask   0x40000000,-8
++      .mask   0x40000000,-4
+       .fmask  0x00000000,0
+       .set    noreorder
+       .cpload $25
+       .set    reorder
+       addiu   $sp,$sp,-8
+-      sw      $fp,0($sp)
++      sw      $fp,4($sp)
+       move    $fp,$sp
+       move    $sp,$fp
+-      lw      $fp,0($sp)
+       la      $2,callback_receiver
++      lw      $fp,4($sp)
+       .set    noreorder
+       .set    nomacro
+       j       $31
+@@ -311,3 +310,4 @@ callback_get_receiver:
+       .set    reorder
+       .end    callback_get_receiver
++      .size   callback_get_receiver, .-callback_get_receiver
+--- a/callback/vacall_r/vacall-mipsel-linux.s
++++ b/callback/vacall_r/vacall-mipsel-linux.s
+@@ -1,9 +1,14 @@
+       .file   1 "vacall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_receiver
+       .type   callback_receiver, @function
+ callback_receiver:
+@@ -17,162 +22,164 @@ callback_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $0,44($fp)
+-      sw      $5,108($fp)
+-      addiu   $5,$fp,120
+       sw      $4,104($fp)
+-      sw      $5,56($fp)
+-      lw      $4,4($2)
+-      addiu   $5,$fp,104
+-      lw      $25,0($2)
+-      sw      $5,40($fp)
+-      sw      $6,112($fp)
++      addiu   $4,$fp,104
++      sw      $5,108($fp)
+       addiu   $5,$fp,24
++      sw      $4,40($fp)
++      addiu   $4,$fp,120
++      .cprestore      16
++      sw      $4,56($fp)
++      sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,80($fp)
+-      swc1    $f13,84($fp)
+-      swc1    $f14,88($fp)
+-      swc1    $f15,92($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
++      sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
++      lw      $4,4($2)
++      lw      $25,0($2)
++      sdc1    $f12,80($fp)
++      sdc1    $f14,88($fp)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L1
++      li      $5,1                    # 0x1
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L23
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    # 0x3
++      beq     $4,$5,$L23
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L29
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    # 0x5
++      beq     $4,$5,$L30
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    # 0x6
++      beq     $4,$5,$L31
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    # 0x7
++      beq     $4,$5,$L27
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    # 0x8
++      beq     $4,$5,$L27
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    # 0x9
++      beq     $4,$5,$L27
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   # 0xc
++      bne     $5,$0,$L32
++      li      $5,12                   # 0xc
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   # 0xd
++      beq     $4,$5,$L33
++      li      $5,13                   # 0xd
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   # 0xe
++      beq     $4,$5,$L34
++      li      $5,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   # 0xf
++      beq     $4,$5,$L27
++      li      $5,15                   # 0xf
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    # 0x1
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L35
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L36
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -181,7 +188,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -193,118 +200,110 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,32($fp)
+-      lwc1    $f1,36($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    callback_receiver
++      .size   callback_receiver, .-callback_receiver
+       .align  2
+       .globl  callback_get_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_get_receiver
+       .type   callback_get_receiver, @function
+ callback_get_receiver:
+       .frame  $fp,8,$31               # vars= 0, regs= 1/0, args= 0, gp= 0
+-      .mask   0x40000000,-8
++      .mask   0x40000000,-4
+       .fmask  0x00000000,0
+       .set    noreorder
+       .cpload $25
+       .set    reorder
+       addiu   $sp,$sp,-8
+-      sw      $fp,0($sp)
++      sw      $fp,4($sp)
+       move    $fp,$sp
+       move    $sp,$fp
+-      lw      $fp,0($sp)
+       la      $2,callback_receiver
++      lw      $fp,4($sp)
+       .set    noreorder
+       .set    nomacro
+       j       $31
+@@ -313,4 +312,5 @@ callback_get_receiver:
+       .set    reorder
+       .end    callback_get_receiver
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   callback_get_receiver, .-callback_get_receiver
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/callback/vacall_r/vacall-mipsel-macro.S
++++ b/callback/vacall_r/vacall-mipsel-macro.S
+@@ -1,7 +1,12 @@
+ #include "asm-mips.h"
+       .file   1 "vacall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_receiver
+       DECLARE_FUNCTION(callback_receiver)
+ callback_receiver:
+@@ -15,162 +20,164 @@ callback_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $0,44($fp)
+-      sw      $5,108($fp)
+-      addiu   $5,$fp,120
+       sw      $4,104($fp)
+-      sw      $5,56($fp)
+-      lw      $4,4($2)
+-      addiu   $5,$fp,104
+-      lw      $25,0($2)
+-      sw      $5,40($fp)
+-      sw      $6,112($fp)
++      addiu   $4,$fp,104
++      sw      $5,108($fp)
+       addiu   $5,$fp,24
++      sw      $4,40($fp)
++      addiu   $4,$fp,120
++      .cprestore      16
++      sw      $4,56($fp)
++      sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,80($fp)
+-      swc1    $f13,84($fp)
+-      swc1    $f14,88($fp)
+-      swc1    $f15,92($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
++      sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
++      lw      $4,4($2)
++      lw      $25,0($2)
++      sdc1    $f12,80($fp)
++      sdc1    $f14,88($fp)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    
++      beq     $4,$0,$L1
++      li      $5,1                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    
++      beq     $4,$5,$L23
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    
++      beq     $4,$5,$L23
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    
++      beq     $4,$5,$L29
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    
++      beq     $4,$5,$L30
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    
++      beq     $4,$5,$L31
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    
++      beq     $4,$5,$L27
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    
++      beq     $4,$5,$L27
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    
++      beq     $4,$5,$L27
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   
++      bne     $5,$0,$L32
++      li      $5,12                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   
++      beq     $4,$5,$L33
++      li      $5,13                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   
++      beq     $4,$5,$L34
++      li      $5,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   
++      beq     $4,$5,$L27
++      li      $5,15                   
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    
++      beq     $4,$5,$L35
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    
++      beq     $4,$5,$L36
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -179,7 +186,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -191,118 +198,110 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,32($fp)
+-      lwc1    $f1,36($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    callback_receiver
++      .size   callback_receiver, .-callback_receiver
+       .align  2
+       .globl  callback_get_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    callback_get_receiver
+       DECLARE_FUNCTION(callback_get_receiver)
+ callback_get_receiver:
+       .frame  $fp,8,$31               
+-      .mask   0x40000000,-8
++      .mask   0x40000000,-4
+       .fmask  0x00000000,0
+       .set    noreorder
+       .cpload $25
+       .set    reorder
+       addiu   $sp,$sp,-8
+-      sw      $fp,0($sp)
++      sw      $fp,4($sp)
+       move    $fp,$sp
+       move    $sp,$fp
+-      lw      $fp,0($sp)
+       la      $2,callback_receiver
++      lw      $fp,4($sp)
+       .set    noreorder
+       .set    nomacro
+       j       $31
+@@ -311,3 +310,4 @@ callback_get_receiver:
+       .set    reorder
+       .end    callback_get_receiver
++      .size   callback_get_receiver, .-callback_get_receiver
+--- a/vacall/vacall-mipseb-linux.s
++++ b/vacall/vacall-mipseb-linux.s
+@@ -1,10 +1,15 @@
+       .file   1 "vacall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
+       .globl  vacall_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    vacall_receiver
+       .type   vacall_receiver, @function
+ vacall_receiver:
+@@ -18,162 +23,164 @@ vacall_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $4,104($fp)
+-      la      $8,vacall_function
+-      addiu   $4,$fp,120
+-      sw      $4,56($fp)
+-      lw      $25,0($8)
+-      addiu   $4,$fp,104
+       sw      $5,108($fp)
+-      sw      $4,40($fp)
++      addiu   $5,$fp,104
++      sw      $4,104($fp)
++      addiu   $4,$fp,24
++      sw      $5,40($fp)
++      addiu   $5,$fp,120
++      sdc1    $f12,80($fp)
++      sw      $5,56($fp)
++      la      $5,vacall_function
++      sdc1    $f14,88($fp)
++      .cprestore      16
+       sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,84($fp)
+-      swc1    $f13,80($fp)
+-      swc1    $f14,92($fp)
+-      swc1    $f15,88($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
+       sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
+-      addiu   $4,$fp,24
++      lw      $25,0($5)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L1
++      li      $5,1                    # 0x1
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L23
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    # 0x3
++      beq     $4,$5,$L23
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L29
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    # 0x5
++      beq     $4,$5,$L30
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    # 0x6
++      beq     $4,$5,$L31
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    # 0x7
++      beq     $4,$5,$L27
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    # 0x8
++      beq     $4,$5,$L27
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    # 0x9
++      beq     $4,$5,$L27
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   # 0xc
++      bne     $5,$0,$L32
++      li      $5,12                   # 0xc
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   # 0xd
++      beq     $4,$5,$L33
++      li      $5,13                   # 0xd
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   # 0xe
++      beq     $4,$5,$L34
++      li      $5,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   # 0xf
++      beq     $4,$5,$L27
++      li      $5,15                   # 0xf
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    # 0x1
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L35
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L36
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -182,7 +189,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -194,99 +201,89 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,36($fp)
+-      lwc1    $f1,32($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    vacall_receiver
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   vacall_receiver, .-vacall_receiver
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/vacall/vacall-mipseb-macro.S
++++ b/vacall/vacall-mipseb-macro.S
+@@ -1,8 +1,13 @@
+ #include "asm-mips.h"
+       .file   1 "vacall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
+       .globl  vacall_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    vacall_receiver
+       DECLARE_FUNCTION(vacall_receiver)
+ vacall_receiver:
+@@ -16,162 +21,164 @@ vacall_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $4,104($fp)
+-      la      $8,vacall_function
+-      addiu   $4,$fp,120
+-      sw      $4,56($fp)
+-      lw      $25,0($8)
+-      addiu   $4,$fp,104
+       sw      $5,108($fp)
+-      sw      $4,40($fp)
++      addiu   $5,$fp,104
++      sw      $4,104($fp)
++      addiu   $4,$fp,24
++      sw      $5,40($fp)
++      addiu   $5,$fp,120
++      sdc1    $f12,80($fp)
++      sw      $5,56($fp)
++      la      $5,vacall_function
++      sdc1    $f14,88($fp)
++      .cprestore      16
+       sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,84($fp)
+-      swc1    $f13,80($fp)
+-      swc1    $f14,92($fp)
+-      swc1    $f15,88($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
+       sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
+-      addiu   $4,$fp,24
++      lw      $25,0($5)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    
++      beq     $4,$0,$L1
++      li      $5,1                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    
++      beq     $4,$5,$L23
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    
++      beq     $4,$5,$L23
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    
++      beq     $4,$5,$L29
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    
++      beq     $4,$5,$L30
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    
++      beq     $4,$5,$L31
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    
++      beq     $4,$5,$L27
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    
++      beq     $4,$5,$L27
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    
++      beq     $4,$5,$L27
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   
++      bne     $5,$0,$L32
++      li      $5,12                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   
++      beq     $4,$5,$L33
++      li      $5,13                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   
++      beq     $4,$5,$L34
++      li      $5,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   
++      beq     $4,$5,$L27
++      li      $5,15                   
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    
++      beq     $4,$5,$L35
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    
++      beq     $4,$5,$L36
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -180,7 +187,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -192,98 +199,88 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,36($fp)
+-      lwc1    $f1,32($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    vacall_receiver
++      .size   vacall_receiver, .-vacall_receiver
+--- a/vacall/vacall-mipsel-linux.s
++++ b/vacall/vacall-mipsel-linux.s
+@@ -1,10 +1,15 @@
+       .file   1 "vacall-mips.c"
+       .section .mdebug.abi32
+       .previous
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .abicalls
+       .text
+       .align  2
+       .globl  vacall_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    vacall_receiver
+       .type   vacall_receiver, @function
+ vacall_receiver:
+@@ -18,162 +23,164 @@ vacall_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $4,104($fp)
+-      la      $8,vacall_function
+-      addiu   $4,$fp,120
+-      sw      $4,56($fp)
+-      lw      $25,0($8)
+-      addiu   $4,$fp,104
+       sw      $5,108($fp)
+-      sw      $4,40($fp)
++      addiu   $5,$fp,104
++      sw      $4,104($fp)
++      addiu   $4,$fp,24
++      sw      $5,40($fp)
++      addiu   $5,$fp,120
++      sdc1    $f12,80($fp)
++      sw      $5,56($fp)
++      la      $5,vacall_function
++      sdc1    $f14,88($fp)
++      .cprestore      16
+       sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,80($fp)
+-      swc1    $f13,84($fp)
+-      swc1    $f14,88($fp)
+-      swc1    $f15,92($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
+       sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
+-      addiu   $4,$fp,24
++      lw      $25,0($5)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L1
++      li      $5,1                    # 0x1
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L23
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    # 0x3
++      beq     $4,$5,$L23
++      li      $5,3                    # 0x3
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L29
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    # 0x5
++      beq     $4,$5,$L30
++      li      $5,5                    # 0x5
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    # 0x6
++      beq     $4,$5,$L31
++      li      $5,6                    # 0x6
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    # 0x7
++      beq     $4,$5,$L27
++      li      $5,7                    # 0x7
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    # 0x8
++      beq     $4,$5,$L27
++      li      $5,8                    # 0x8
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    # 0x9
++      beq     $4,$5,$L27
++      li      $5,9                    # 0x9
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   # 0xc
++      bne     $5,$0,$L32
++      li      $5,12                   # 0xc
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   # 0xd
++      beq     $4,$5,$L33
++      li      $5,13                   # 0xd
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   # 0xe
++      beq     $4,$5,$L34
++      li      $5,14                   # 0xe
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   # 0xf
++      beq     $4,$5,$L27
++      li      $5,15                   # 0xf
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    # 0x1
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    # 0x1
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    # 0x2
++      beq     $4,$5,$L35
++      li      $5,2                    # 0x2
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    # 0x4
++      beq     $4,$5,$L36
++      li      $5,4                    # 0x4
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -182,7 +189,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -194,99 +201,89 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,32($fp)
+-      lwc1    $f1,36($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    vacall_receiver
+-      .ident  "GCC: (GNU) 4.0.2"
++      .size   vacall_receiver, .-vacall_receiver
++      .ident  "GCC: (Debian 7.2.0-11) 7.2.0"
+--- a/vacall/vacall-mipsel-macro.S
++++ b/vacall/vacall-mipsel-macro.S
+@@ -1,8 +1,13 @@
+ #include "asm-mips.h"
+       .file   1 "vacall-mips.c"
++      .nan    legacy
++      .module fp=xx
++      .module nooddspreg
+       .text
+       .align  2
+       .globl  vacall_receiver
++      .set    nomips16
++      .set    nomicromips
+       .ent    vacall_receiver
+       DECLARE_FUNCTION(vacall_receiver)
+ vacall_receiver:
+@@ -16,162 +21,164 @@ vacall_receiver:
+       sw      $fp,96($sp)
+       move    $fp,$sp
+       sw      $31,100($sp)
+-      .cprestore      16
+-      sw      $4,104($fp)
+-      la      $8,vacall_function
+-      addiu   $4,$fp,120
+-      sw      $4,56($fp)
+-      lw      $25,0($8)
+-      addiu   $4,$fp,104
+       sw      $5,108($fp)
+-      sw      $4,40($fp)
++      addiu   $5,$fp,104
++      sw      $4,104($fp)
++      addiu   $4,$fp,24
++      sw      $5,40($fp)
++      addiu   $5,$fp,120
++      sdc1    $f12,80($fp)
++      sw      $5,56($fp)
++      la      $5,vacall_function
++      sdc1    $f14,88($fp)
++      .cprestore      16
+       sw      $6,112($fp)
+       sw      $7,116($fp)
+-      swc1    $f12,80($fp)
+-      swc1    $f13,84($fp)
+-      swc1    $f14,88($fp)
+-      swc1    $f15,92($fp)
+-      swc1    $f12,68($fp)
+-      swc1    $f14,72($fp)
+       sw      $0,24($fp)
+       sw      $0,44($fp)
+       sw      $0,48($fp)
+       sw      $0,60($fp)
+       sw      $0,64($fp)
+-      addiu   $4,$fp,24
++      lw      $25,0($5)
++      swc1    $f12,68($fp)
++      swc1    $f14,72($fp)
+       jal     $25
+-      lw      $5,48($fp)
++      lw      $4,48($fp)
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$0,$L38
+-      li      $4,1                    
++      beq     $4,$0,$L1
++      li      $5,1                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,2                    
++      beq     $4,$5,$L23
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L39
+-      li      $4,3                    
++      beq     $4,$5,$L23
++      li      $5,3                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L45
+-      li      $4,4                    
++      beq     $4,$5,$L29
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L46
+-      li      $4,5                    
++      beq     $4,$5,$L30
++      li      $5,5                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L47
+-      li      $4,6                    
++      beq     $4,$5,$L31
++      li      $5,6                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,7                    
++      beq     $4,$5,$L27
++      li      $5,7                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,8                    
++      beq     $4,$5,$L27
++      li      $5,8                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,9                    
++      beq     $4,$5,$L27
++      li      $5,9                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      addiu   $4,$5,-10
++      beq     $4,$5,$L27
++      addiu   $5,$4,-10
+       .set    macro
+       .set    reorder
+-      sltu    $4,$4,2
++      sltu    $5,$5,2
+       .set    noreorder
+       .set    nomacro
+-      bne     $4,$0,$L48
+-      li      $4,12                   
++      bne     $5,$0,$L32
++      li      $5,12                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L49
+-      li      $4,13                   
++      beq     $4,$5,$L33
++      li      $5,13                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L50
+-      li      $4,14                   
++      beq     $4,$5,$L34
++      li      $5,14                   
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L43
+-      li      $4,15                   
++      beq     $4,$5,$L27
++      li      $5,15                   
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
++      .set    noreorder
++      .set    nomacro
++      bne     $4,$5,$L1
+       lw      $4,24($fp)
++      .set    macro
++      .set    reorder
++
+       andi    $4,$4,0x2
+       .set    noreorder
+       .set    nomacro
+-      beq     $4,$0,$L31
+-      li      $4,1                    
++      beq     $4,$0,$L16
++      lw      $4,52($fp)
+       .set    macro
+       .set    reorder
+-      lw      $5,52($fp)
++      li      $5,1                    
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L51
+-      li      $4,2                    
++      beq     $4,$5,$L35
++      li      $5,2                    
+       .set    macro
+       .set    reorder
+       .set    noreorder
+       .set    nomacro
+-      beq     $5,$4,$L52
+-      li      $4,4                    
++      beq     $4,$5,$L36
++      li      $5,4                    
+       .set    macro
+       .set    reorder
+-      bne     $5,$4,$L38
+-      lw      $4,44($fp)
+-      lw      $2,0($4)
+-$L38:
++      bne     $4,$5,$L1
++      lw      $2,44($fp)
++      lw      $2,0($2)
++$L1:
+       move    $sp,$fp
+-$L53:
+-      lw      $31,100($sp)
++      lw      $31,100($fp)
+       lw      $fp,96($sp)
+       .set    noreorder
+       .set    nomacro
+@@ -180,7 +187,7 @@ $L53:
+       .set    macro
+       .set    reorder
+-$L39:
++$L23:
+       move    $sp,$fp
+       lb      $2,32($fp)
+       lw      $31,100($sp)
+@@ -192,98 +199,88 @@ $L39:
+       .set    macro
+       .set    reorder
+-$L46:
+-      lh      $2,32($fp)
++$L29:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L43:
+-      lw      $2,32($fp)
++$L27:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L45:
+-      lbu     $2,32($fp)
++$L30:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lh      $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L47:
+-      lhu     $2,32($fp)
++$L31:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,32($fp)
+       .set    macro
+       .set    reorder
+-$L49:
+-      lwc1    $f0,32($fp)
++$L32:
++      lw      $2,32($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $3,36($fp)
+       .set    macro
+       .set    reorder
+-$L48:
+-      lw      $2,32($fp)
+-      lw      $3,36($fp)
++$L33:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lwc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L50:
+-      lwc1    $f0,32($fp)
+-      lwc1    $f1,36($fp)
++$L34:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      ldc1    $f0,32($fp)
+       .set    macro
+       .set    reorder
+-$L31:
+-      lw      $2,44($fp)
++$L16:
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lw      $2,44($fp)
+       .set    macro
+       .set    reorder
+-$L51:
+-      lw      $4,44($fp)
+-      lbu     $2,0($4)
++$L35:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lbu     $2,0($2)
+       .set    macro
+       .set    reorder
+-$L52:
+-      lw      $4,44($fp)
+-      lhu     $2,0($4)
++$L36:
++      lw      $2,44($fp)
+       .set    noreorder
+       .set    nomacro
+-      b       $L53
+-      move    $sp,$fp
++      b       $L1
++      lhu     $2,0($2)
+       .set    macro
+       .set    reorder
+       .end    vacall_receiver
++      .size   vacall_receiver, .-vacall_receiver
diff --git a/patches/raspbian.patch b/patches/raspbian.patch
new file mode 100644 (file)
index 0000000..18ccaaf
--- /dev/null
+++ b/patches/raspbian.patch
@@ -0,0 +1,86 @@
+Description: ffcall changes for raspbian.
+   * Replace movw/movt with ldr pseudo-instruction.
+   * Mark binaries as armv6, not armv7.
+Author: Peter Michael Green <plugwash@raspbian.org>
+
+---
+The information above should follow the Patch Tagging Guidelines; please
+check out http://dep.debian.net/deps/dep3/ to learn about the format. Here
+are templates for supplementary fields that you might want to add:
+
+Origin: <vendor|upstream|other>, <url of original patch>
+Bug: <url in upstream bugtracker>
+Bug-Debian: https://bugs.debian.org/<bugnumber>
+Bug-Ubuntu: https://launchpad.net/bugs/<bugnumber>
+Forwarded: <no|not-needed|url proving that it has been forwarded>
+Reviewed-By: <name and email of someone who approved the patch>
+Last-Update: 2017-12-07
+
+--- ffcall-2.0.orig/avcall/avcall-armhf-macro.S
++++ ffcall-2.0/avcall/avcall-armhf-macro.S
+@@ -1,5 +1,5 @@
+ #include "asm-arm.h"
+-      .arch armv7-a
++      .arch armv6
+       .eabi_attribute 28, 1
+       .eabi_attribute 20, 1
+       .eabi_attribute 21, 1
+@@ -15,7 +15,7 @@
+       .global C(avcall_call)
+       .syntax unified
+       .arm
+-      .fpu vfpv3-d16
++      .fpu vfpv2
+       .type   avcall_call, %function
+ FUNBEGIN(avcall_call)
+       // args = 0, pretend = 0, frame = 0
+--- ffcall-2.0.orig/callback/vacall_r/vacall-armhf-macro.S
++++ ffcall-2.0/callback/vacall_r/vacall-armhf-macro.S
+@@ -1,5 +1,5 @@
+ #include "asm-arm.h"
+-      .arch armv7-a
++      .arch armv6
+       .eabi_attribute 28, 1
+       .eabi_attribute 20, 1
+       .eabi_attribute 21, 1
+@@ -15,7 +15,7 @@
+       .global C(callback_receiver)
+       .syntax unified
+       .arm
+-      .fpu vfpv3-d16
++      .fpu vfpv2
+       .type   callback_receiver, %function
+ FUNBEGIN(callback_receiver)
+       // args = 28, pretend = 0, frame = 176
+--- ffcall-2.0.orig/vacall/vacall-armhf-macro.S
++++ ffcall-2.0/vacall/vacall-armhf-macro.S
+@@ -1,5 +1,5 @@
+ #include "asm-arm.h"
+-      .arch armv7-a
++      .arch armv6
+       .eabi_attribute 28, 1
+       .eabi_attribute 20, 1
+       .eabi_attribute 21, 1
+@@ -15,7 +15,7 @@
+       .global C(vacall_receiver)
+       .syntax unified
+       .arm
+-      .fpu vfpv3-d16
++      .fpu vfpv2
+       .type   vacall_receiver, %function
+ FUNBEGIN(vacall_receiver)
+       // args = 20, pretend = 16, frame = 176
+@@ -25,8 +25,11 @@ FUNBEGIN(vacall_receiver)
+       push    {r4, r5, fp, lr}
+       add     fp, sp, $12
+       add     lr, fp, $4
+-      movw    r4, $:lower16:C(vacall_function)
+-      movt    r4, $:upper16:C(vacall_function)
+      #raspbian mod, replace movw/movt with ldr pseudo-instruction
++      #movw   r4, $:lower16:C(vacall_function)
++      #movt   r4, $:upper16:C(vacall_function)
++      ldr r4, =C(vacall_function)
++      #end raspbian mod.
+       sub     sp, sp, $176
+       add     r5, fp, $20
+       stm     lr, {r0, r1, r2, r3}
diff --git a/patches/series b/patches/series
new file mode 100644 (file)
index 0000000..701c76e
--- /dev/null
+++ b/patches/series
@@ -0,0 +1,4 @@
+fix-powerpcspe.patch
+mips-fpxx.patch
+trampoline-mips64el.patch
+raspbian.patch
diff --git a/patches/trampoline-mips64el.patch b/patches/trampoline-mips64el.patch
new file mode 100644 (file)
index 0000000..65a274c
--- /dev/null
+++ b/patches/trampoline-mips64el.patch
@@ -0,0 +1,47 @@
+Description: Fix endianness issue in trampoline on mips64el
+ Even though the assembly instructions look as if they could be swapped in pairs,
+ the NOP must come *after* the jump (J), because it fills the jump's “branch-delay slot”.
+ .
+ For background, see:
+  https://stackoverflow.com/questions/3807480/weird-mips-assembler-behavior-with-jump-and-link-instruction
+Author: Sébastien Villemot <sebastien@debian.org>
+Bug: https://savannah.gnu.org/bugs/index.php?52502
+Applied-Upstream: http://git.savannah.gnu.org/gitweb/?p=libffcall.git;a=commitdiff;h=7bb35bc0149b1034f4b6ff5bb6df440024e4c77e
+                  http://git.savannah.gnu.org/gitweb/?p=libffcall.git;a=commitdiff;h=0363b6fda4de25be575c0e7a090f8fa8032ee54d
+Last-Update: 2017-11-26
+---
+This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
+--- a/trampoline/trampoline.c
++++ b/trampoline/trampoline.c
+@@ -760,17 +760,30 @@ trampoline_function_t alloc_trampoline (
+    *    .dword <data>                 <data>
+    *    .dword <address>              <address>
+    */
++#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+   *(long *)          (function + 0) = 0xDF220018DF230020L;
+   *(long *)          (function + 8) = 0xFC430000DF390028L;
+   *(long *)          (function +16) = 0x0320000800000000L;
++#else
++  *(long *)          (function + 0) = 0xDF230020DF220018L;
++  *(long *)          (function + 8) = 0xDF390028FC430000L;
++  *(long *)          (function +16) = 0x0000000003200008L;
++#endif
+   *(unsigned long *) (function +24) = (unsigned long) variable;
+   *(unsigned long *) (function +32) = (unsigned long) data;
+   *(unsigned long *) (function +40) = (unsigned long) address;
+ #define TRAMP_CODE_LENGTH  24
+-#define is_tramp(function)  \
++#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
++# define is_tramp(function)  \
+   *(long *)          (function + 0) == 0xDF220018DF230020L && \
+   *(long *)          (function + 8) == 0xFC430000DF390028L && \
+   *(long *)          (function +16) == 0x0320000800000000L
++#else
++# define is_tramp(function)  \
++  *(long *)          (function + 0) == 0xDF230020DF220018L && \
++  *(long *)          (function + 8) == 0xDF390028FC430000L && \
++  *(long *)          (function +16) == 0x0000000003200008L
++#endif
+ #define tramp_address(function)  \
+   *(unsigned long *) (function +40)
+ #define tramp_variable(function)  \
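
The endianness guards in the patch above keep the generated instruction stream identical in memory on both byte orders: the two halves of each 64-bit literal are swapped so that the jump is always stored before the NOP that fills its branch-delay slot. A minimal standalone sketch of that packing rule (hypothetical helper in C, not part of the package):

  #include <stdint.h>

  /* Illustrative only (not from ffcall): pack two 32-bit MIPS instruction
   * words into one 64-bit literal so that `first` always lands at the lower
   * address when the literal is stored, regardless of host byte order. */
  static uint64_t pack_insns(uint32_t first, uint32_t second)
  {
  #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
      return ((uint64_t)first << 32) | second;   /* high half is stored first */
  #else
      return ((uint64_t)second << 32) | first;   /* low half is stored first */
  #endif
  }

  /* e.g. pack_insns(0x03200008, 0x00000000) keeps "jr $25" ahead of the NOP,
   * matching 0x0320000800000000 (big-endian) / 0x0000000003200008 (little-endian). */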
diff --git a/rules b/rules
new file mode 100755 (executable)
index 0000000..dd05edf
--- /dev/null
+++ b/rules
@@ -0,0 +1,16 @@
+#!/usr/bin/make -f
+
+include /usr/share/dpkg/default.mk
+
+# Disable PIE on armhf: it triggers an FTBFS when compiling the vacall test
+ifeq (armhf,$(DEB_HOST_ARCH))
+export DEB_BUILD_MAINT_OPTIONS=hardening=-pie
+endif
+
+export ACLOCAL = aclocal -I$(CURDIR)/m4 -I$(CURDIR)/gnulib-m4
+
+%:
+       dh $@ --no-parallel
+
+override_dh_auto_test:
+       echo testsuite disabled.
diff --git a/source/format b/source/format
new file mode 100644 (file)
index 0000000..163aaf8
--- /dev/null
+++ b/source/format
@@ -0,0 +1 @@
+3.0 (quilt)
diff --git a/upstream/signing-key.asc b/upstream/signing-key.asc
new file mode 100644 (file)
index 0000000..814142b
--- /dev/null
+++ b/upstream/signing-key.asc
@@ -0,0 +1,53 @@
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFJQWp0BEADquWF30FIja/DgTROcki+lIhbtxhm7eagnA2+p+c3G6D4AwhWW
+1vOgfFNiR+MyFmF1oEanNyJJ1OpiH0EUNTVBH7TNH8XAx9lSuUOAoHEVzKfuKjow
+PgbbVMVKHHoUx6XzHM4CUj4E2apmgMdH62/brNtFvElGjXEUOFKrvI3ucVUKJWw0
+8lwMhZoybtvwsojROcitePmoaQRfwJfMqIl/t0LsRD0bqfajD7AczptdYJgmJ/04
+UDCTI51p1VKYkLqMuqnNq8nnrX11AqKf9K31XFtZk8XhqL2VsAM5Jw8zAFkXJ9Mx
+DxhIH/LWs8i3DnS2cjjcN7oEPsLa37bDZHu9S6vGq6HzIRtEVEhepDDcnTbZb45Q
+z5RWOHu3fsTZj+eHQerHxGtoaYuk44pxJB48dca77fAOPlQkvSQxhy6HKXVHfwmh
+KCc5NpDu4/hk9Vi8eqDvCPXvdBQMZZXHxRBJJbhT8o6k/TawepMnWSOI/Y2iZAwr
+Z3hX9bvcdY5+urqlFtjKG+KWkDkq5WcfRxsBlY78nfxkhlwq61zBemTscjZ1HVXW
+EZywklay1vKS+d3/F3elgBVr94ta0SHo7OPBG/gOxiAMbwiLaWE0Hg6Ycs8exT8t
+NTv/ZG5wbEWqBuzHo9mL9/j3tOQLNoVlYZ1SYpbhDVEkDo6uBXe10ngt4QARAQAB
+tDhCcnVubyBIYWlibGUgKE9wZW4gU291cmNlIERldmVsb3BtZW50KSA8YnJ1bm9A
+Y2xpc3Aub3JnPokCPgQTAQIAKAUCUlBanQIbAwUJC0c1AAYLCQgHAwIGFQgCCQoL
+BBYCAwECHgECF4AACgkQT0lKlC5GFsI3YBAAlCcuYgNDi6EmuoMBId2cXLX8uHoD
+BlB/T9c58EDZrzmiDu62zVtXTeK5ML8k74ZkzqNufM7XjinWcwhr/TMfL6l+imA3
+hGc5ZKKtACdLywJU2WJzVaFNN7249Sx+/c6DqhpDHVAPM0grdfdT+9AJPPcVj3lo
+p5dGIK8zRByEqI7FT2YhxbGuhqqW2ufKe85HdgRSK9Od8K9DMXjW4XY0xX9+Ru9M
+CMApIog2zruiTmVijack9jndcSBIuQRyrwsgLb3zoKsLWP0wS9czSdJD6uDT3nvd
+sIAJ+gQM507y5p8gBks71J3MkiXnV82MBQIil9xLbqui22bN2VcPnZaREqJ8LWZM
+2BTNm1NHt/epU2kxGyy4Vxc41xi52b5KqyOkUAiSRKvDb6pNorYaAgTeHc0w/nTd
+QJ2IKSvRyDwCNvj9H4S0HabAN+mh5EbgIKlVKcWPxlhMG6rWNVAHtWUjj+/RtC17
+xhHptyVt2/N3CooixOm1bQK2l5WxxdiAxjI4xOKCSh2goKhXS+Jy0QSYUllROxlh
+BpNfQ7TAIWQC81Y2jEyMpaCkgaXGFDK5fE9U5GPrKMbLCIIsOOeXU6h1AIwjthlr
+EaOj6zOMBTS1P35t+NqZlyVP7qEgFCJM4OqAWRVaqsmTQJwJeski/c81XWdsCZ8T
+eD3PYKd5uQ4CBBCIRgQQEQIABgUCUlBbTwAKCRDHGkxl8Fmx0d9/AJwNJOK2oZVq
+/LJoxrUmG6cMDSxbbQCbBw8Y7qUqQTvp7ItuTbs90pNSdfy5Ag0EUlBanQEQANJJ
+rOnwJYaX6jC/EgQ1LOuB5th5UkXFeGNMl+5kgsgcBJUPG7x3IpRUmSr1eW7D0/1E
+88UWB1IFJtpYEd5g7VDQIdY2Abb4fMRGBn5DI20ht0pD1O+ypIVIu9VRq1wWWRFm
+TfDLCPSPa1ahy/NCXYiXK/behxP6kJvVXhdt+XzEJ31rz040l5dFgxokWNdV7/gt
+hNcazSJrTVBF7uK4CHLcfISKJdM8Xq/CLuKf8Qm9V/DXpiKSswIu6SpQCkuxaDVz
+B8/50HvlCeGZHbxfEy8hCaOjoUPGkVEwM6XzU5cn4A/LbLBcTJX6cBV0RiqdJTZd
+yDDa9YeRX4e4Ks5/i47fGH9Im27ddVZPkQerYmok/Y11GNA/jpgijb/HckMa39Xk
+gWHfKjZf2XrTTnjDH4K8Xj+LaWSIohjcHZ5Vjqou40BAqOPniot+h9PFPFsAtYBs
+PJ2nq4yGKBci8+srWj7jAH8LCXXA6NwZmqEvW+xsCTKtr5RqGOSQaF8QH6Y3RE3X
+QBGR3JTQFYe9+EYcqk8YYg6Yh4iuNWDDtGeCpZ2B7xPEVik17oco4nEHXnyzXY9N
+4LKXS+TZzUCM9QNeQ3HYVkLQ2thSZlBLIfeFbiuvVuzsdkjmCZloYApYUlZqkCPJ
+K2YfOBAgnEKfG6RNrkauwQBfrki+LyCjhnLH/2AxABEBAAGJAiUEGAECAA8FAlJQ
+Wp0CGwwFCQtHNQAACgkQT0lKlC5GFsJeQhAAv9dRPOoGmHO6UVzjazkZxDSlsrnI
+XqU2Jz9KP4Etw5FFDhWakdBgSwYEpJWuGXcGEZqlSIHsNeVH1lS9udCQC/yGmvX5
+xYChJMiMvyROjZhVD7tfVykGJlChD0xLVvGy1MIWY5cR6L1ofFzv6AB1jgEmCwGa
+dQM/22/qJHuhHXO0hwFYKOYsLxNcM8kh4vdg6f/0VjAGSeb7Ih5a6PN/xAImSV/c
+VGOVUMBnCWFFadqZAZwjEWr7fCh2f606vT9Gvnikggdr0TRRdMOhVhaKAWx7RBQz
+hJ19PE8ekDOxOHpYpFSoEN5kVrmoNgIfsuKTXGLYMYXEu43HiwkajEYV01XIFNPb
+110x8akbZ7h8N7cd84YNd/iqOKiCDuNTlD6C+YHjUXYcJqWtWjp94dFQdM8+VV0c
+Iw6qZ7V7/WK+/13/I5K9JgnbKBrcnHtvU8w7sclfGO3AiDDG9vOC/2yTd3i85mPs
+/5+hhXvpfMmGt2G4B4hrclxgEsNP2OFDVxYJVWNhV1NrXfCDOxHWG06SDaNlh8vu
+3FfFRPJ4W0YIZSwIrnM5WBjdGgEb1kWqapJ8bIJImCim3NNAhfYD3CIQObORq4ZV
+1Fqf+rBhagh56auelseH3mt9vqDRFRsKHPYC0NakdjPzojRiUSuvJdkopUeeAJrQ
+fBVUDvSsJHDvpaM=
+=BuIp
+-----END PGP PUBLIC KEY BLOCK-----
diff --git a/watch b/watch
new file mode 100644 (file)
index 0000000..b179ee2
--- /dev/null
+++ b/watch
@@ -0,0 +1,2 @@
+version=4
+opts=pgpsigurlmangle=s/$/.sig/ https://ftpmirror.gnu.org/gnu/libffcall/libffcall-(.*).tar.gz