Currently the mapping from pages to zones (fls(mfn) - 1) causes the page
at machine address 0 to go into zone -1 (since fls(0) == 0) and the page
at address 4096 (MFN 1) to go into zone 0, which is the Xen heap zone,
confusing various assertions.

Arrange instead for the mapping to be such that zone 0 is always
reserved for Xen and all other pages map to a zone >= 1.
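
As an illustration, a minimal standalone sketch of the two mappings (not
Xen code: fls_(), old_zone() and new_zone() are hypothetical stand-ins
for the real macros, assuming Xen's fls() semantics, i.e. fls(0) == 0
and fls(1) == 1):

    /* Sketch only: compare the old and new zone mappings for low MFNs. */
    #include <stdio.h>

    static int fls_(unsigned long x)          /* stand-in for Xen's fls() */
    {
        int bit = 0;

        while ( x )
        {
            bit++;
            x >>= 1;
        }
        return bit;
    }

    static int old_zone(unsigned long mfn) { return fls_(mfn) - 1; }

    static int new_zone(unsigned long mfn)
    {
        int z = fls_(mfn);

        return z ? z : 1;
    }

    int main(void)
    {
        unsigned long mfn;

        for ( mfn = 0; mfn <= 2; mfn++ )
            printf("MFN %lu: old zone %d, new zone %d\n",
                   mfn, old_zone(mfn), new_zone(mfn));
        /* Prints:
         *   MFN 0: old zone -1, new zone 1
         *   MFN 1: old zone 0, new zone 1   (old mapping hit MEMZONE_XEN)
         *   MFN 2: old zone 1, new zone 2
         */
        return 0;
    }
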
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Cc: jbeulich@suse.com
Acked-by: Tim Deegan <tim@xen.org>
*/
#define MEMZONE_XEN 0
-#define NR_ZONES    (PADDR_BITS - PAGE_SHIFT)
+#define NR_ZONES    (PADDR_BITS - PAGE_SHIFT + 1)
-#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 0 : ((b) - PAGE_SHIFT - 1))
+#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT))
#define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN :  \
-                          (fls(page_to_mfn(pg)) - 1))
+                          (fls(page_to_mfn(pg)) ? : 1))
typedef struct page_list_head heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
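
Note that the new page_to_zone() uses the GNU C conditional with an
omitted middle operand: "a ? : b" evaluates to a when a is non-zero and
to b otherwise, with a evaluated only once.  NR_ZONES also grows by one
because ordinary pages now occupy zones 1 ... (PADDR_BITS - PAGE_SHIFT)
inclusive, with zone 0 reserved for Xen on top of that (e.g. with
PADDR_BITS == 52 and PAGE_SHIFT == 12 that is zones 1-40 plus zone 0,
i.e. 41 == 52 - 12 + 1 zones).  A small sketch of the "? :" equivalence
(hypothetical code, requires GCC's GNU extensions):

    /* Sketch only: the GNU "a ? : b" extension used by page_to_zone(). */
    #include <assert.h>

    static int elvis(int a, int b)
    {
        return a ? : b;   /* same as (a ? a : b), but a is evaluated once */
    }

    int main(void)
    {
        assert( elvis(0, 1) == 1 );  /* fls(mfn) == 0 (MFN 0) -> zone 1 */
        assert( elvis(7, 1) == 7 );  /* fls(mfn) != 0 -> zone fls(mfn) */
        return 0;
    }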