Documentation updates.
3f5ef5a24IaQasQE2tyMxrfxskMmvw README
3f5ef5a2l4kfBYSQTUaOyyD76WROZQ README.CD
3f69d8abYB1vMyD_QVDvzxy5Zscf1A TODO
-405ef604hIZH5pGi2uwlrlSvUMrutw docs/Console-HOWTO.txt
+405ef604hIZH5pGi2uwlrlSvUMrutw docs/HOWTOs/Console-HOWTO
+4083e798FbE1MIsQaIYvjnx1uvFhBg docs/HOWTOs/Sched-HOWTO
+40083bb4LVQzRqA3ABz0__pPhGNwtA docs/HOWTOs/VBD-HOWTO
+4021053fmeFrEyPHcT8JFiDpLNgtHQ docs/HOWTOs/Xen-HOWTO
+4022a73cgxX1ryj1HgS-IwwB6NUi2A docs/HOWTOs/XenDebugger-HOWTO
3f9e7d53iC47UnlfORp9iC1vai6kWw docs/Makefile
-4083e798FbE1MIsQaIYvjnx1uvFhBg docs/Sched-HOWTO.txt
-40083bb4LVQzRqA3ABz0__pPhGNwtA docs/VBD-HOWTO.txt
-4021053fmeFrEyPHcT8JFiDpLNgtHQ docs/Xen-HOWTO.txt
3f9e7d60PWZJeVh5xdnk0nLUdxlqEA docs/eps/xenlogo.eps
3f9e7d63lTwQbp2fnx7yY93epWS-eQ docs/figs/dummy
3f9e7d564bWFB-Czjv1qdmE6o0GqNg docs/interface.tex
-4022a73cgxX1ryj1HgS-IwwB6NUi2A docs/pdb.txt
3f9e7d58t7N6hjjBMxSn-NMxBphchA docs/style.tex
3f9e7d5bz8BwYkNuwyiPVu7JJG441A docs/xenstyle.cls
3f815144d1vI2777JI-dO4wk49Iw7g extras/mini-os/Makefile
+++ /dev/null
- New console I/O infrastructure in Xen 1.3
- =========================================
-
- Keir Fraser, University of Cambridge, 3rd June 2004
-
- I thought I'd write a quick note about using the new console I/O
- infrastructure in Xen 1.3. Significant new features compared with 1.2,
- and with older revisions of 1.3, include:
- - bi-directional console access
- - log in to a Xenolinux guest OS via its virtual console
- - a new terminal client (replaces the use of telnet in character mode)
- - proper handling of terminal emulation
-
-Accessing the virtual console from within the guest OS
-------------------------------------------------------
- Every Xenolinux instance owns a bidirectional 'virtual console'.
- The device node to which this console is attached can be configured
- by specifying 'xencons=' on the OS command line:
- 'xencons=off' --> disable virtual console
- 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
- 'xencons=ttyS' --> attach console to /dev/ttyS0
- The default is to attach to /dev/tty1, and also to create dummy
- devices for /dev/tty2-63 to avoid warnings from many standard distro
- startup scripts. The exception is domain 0, which by default attaches
- to /dev/ttyS0.
-
-Domain 0 virtual console
-------------------------
- The virtual console for domain 0 is shared with Xen's console. For
- example, if you specify 'console=com1' as a boot parameter to Xen,
- then domain 0 will have bi-directional access to the primary serial
- line. Boot-time messages can be directed to the virtual console by
- specifying 'console=ttyS0' as a boot parameter to Xenolinux.
-
-Connecting to the virtual console
----------------------------------
- Domain 0 console may be accessed using the supplied 'miniterm' program
- if raw serial access is desired. If the Xen machine is connected to a
- serial-port server, then the supplied 'xencons' program may be used to
- connect to the appropriate TCP port on the server:
- # xencons <server host> <server port>
-
-Logging in via virtual console
-------------------------------
- It is possible to log in to a guest OS via its virtual console if a
- 'getty' is running. In most domains the virtual console is named tty1
- so standard startup scripts and /etc/inittab should work
- fine. Furthermore, tty2-63 are created as dummy console devices to
- suppress warnings from standard startup scripts. If the OS has
- attached the virtual console to /dev/ttyS0 then you will need to
- start a 'mingetty' on that device node.
-
-Virtual console for other domains
----------------------------------
- Every guest OS has a virtual console that is accessible via
- 'console=tty0' at boot time (or 'console=xencons0' for domain 0), and
- mingetty running on /dev/tty1 (or /dev/xen/cons for domain 0).
- However, domains other than domain 0 do not have access to the
- physical serial line. Instead, their console data is sent to and from
- a control daemon running in domain 0. When properly installed, this
- daemon can be started from the init scripts (e.g., rc.local):
- # /usr/sbin/xend start
-
- Alternatively, Redhat- and LSB-compatible Linux installations can use
- the provided init.d script. To integrate startup and shutdown of xend
- in such a system, you will need to run a few configuration commands:
- # chkconfig --add xend
- # chkconfig --level 35 xend on
- # chkconfig --level 01246 xend off
- This will avoid the need to run xend manually from rc.local, for example.
-
- Note that, when a domain is created using xc_dom_create.py, xend MUST
- be running. If everything is set up correctly then xc_dom_create will
- print the local TCP port to which you should connect to perform
- console I/O. A suitable console client is provided by the Python
- module xenctl.console_client: running this module from the command
- line with <host> and <port> parameters will start a terminal
- session. This module is also installed as /usr/bin/xencons, from a
- copy in tools/misc/xencons. For example:
- # xencons localhost 9600
-
- An alternative to manually running a terminal client is to specify
- '-c' to xc_dom_create.py, or add 'auto_console=True' to the defaults
- file. This will cause xc_dom_create.py to automatically become the
- console terminal after starting the domain.
--- /dev/null
+ New console I/O infrastructure in Xen 1.3
+ =========================================
+
+ Keir Fraser, University of Cambridge, 3rd June 2004
+
+ I thought I'd write a quick note about using the new console I/O
+ infrastructure in Xen 1.3. Significant new features compared with 1.2,
+ and with older revisions of 1.3, include:
+ - bi-directional console access
+ - log in to a Xenolinux guest OS via its virtual console
+ - a new terminal client (replaces the use of telnet in character mode)
+ - proper handling of terminal emulation
+
+Accessing the virtual console from within the guest OS
+------------------------------------------------------
+ Every Xenolinux instance owns a bidirectional 'virtual console'.
+ The device node to which this console is attached can be configured
+ by specifying 'xencons=' on the OS command line:
+ 'xencons=off' --> disable virtual console
+ 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
+ 'xencons=ttyS' --> attach console to /dev/ttyS0
+ The default is to attach to /dev/tty1, and also to create dummy
+ devices for /dev/tty2-63 to avoid warnings from many standard distro
+ startup scripts. The exception is domain 0, which by default attaches
+ to /dev/ttyS0.
+
+Domain 0 virtual console
+------------------------
+ The virtual console for domain 0 is shared with Xen's console. For
+ example, if you specify 'console=com1' as a boot parameter to Xen,
+ then domain 0 will have bi-directional access to the primary serial
+ line. Boot-time messages can be directed to the virtual console by
+ specifying 'console=ttyS0' as a boot parameter to Xenolinux.
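+
+ For example, an illustrative Grub configuration combining the two
+ (the memory and root-partition settings are only placeholders):
+     kernel /boot/xen.gz console=com1 com1=115200,8n1
+     module /boot/xenolinux.gz console=ttyS0 root=/dev/sda4 ro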
+
+Connecting to the virtual console
+---------------------------------
+ Domain 0 console may be accessed using the supplied 'miniterm' program
+ if raw serial access is desired. If the Xen machine is connected to a
+ serial-port server, then the supplied 'xencons' program may be used to
+ connect to the appropriate TCP port on the server:
+ # xencons <server host> <server port>
+
+Logging in via virtual console
+------------------------------
+ It is possible to log in to a guest OS via its virtual console if a
+ 'getty' is running. In most domains the virtual console is named tty1
+ so standard startup scripts and /etc/inittab should work
+ fine. Furthermore, tty2-63 are created as dummy console devices to
+ suppress warnings from standard startup scripts. If the OS has
+ attached the virtual console to /dev/ttyS0 then you will need to
+ start a 'mingetty' on that device node.
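+
+ For example, an /etc/inittab line such as the following (standard
+ Linux inittab syntax, shown purely for illustration) runs a getty
+ on the virtual console:
+     c1:2345:respawn:/sbin/mingetty tty1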
+
+Virtual console for other domains
+---------------------------------
+ Every guest OS has a virtual console that is accessible via
+ 'console=tty0' at boot time (or 'console=xencons0' for domain 0), and
+ via a mingetty running on /dev/tty1 (or /dev/xen/cons for domain 0).
+ However, domains other than domain 0 do not have access to the
+ physical serial line. Instead, their console data is sent to and from
+ a control daemon running in domain 0. When properly installed, this
+ daemon can be started from the init scripts (e.g., rc.local):
+ # /usr/sbin/xend start
+
+ Alternatively, Redhat- and LSB-compatible Linux installations can use
+ the provided init.d script. To integrate startup and shutdown of xend
+ in such a system, you will need to run a few configuration commands:
+ # chkconfig --add xend
+ # chkconfig --level 35 xend on
+ # chkconfig --level 01246 xend off
+ This will avoid the need to run xend manually from rc.local, for example.
+
+ Note that, when a domain is created using xc_dom_create.py, xend MUST
+ be running. If everything is set up correctly then xc_dom_create will
+ print the local TCP port to which you should connect to perform
+ console I/O. A suitable console client is provided by the Python
+ module xenctl.console_client: running this module from the command
+ line with <host> and <port> parameters will start a terminal
+ session. This module is also installed as /usr/bin/xencons, from a
+ copy in tools/misc/xencons. For example:
+ # xencons localhost 9600
+
+ An alternative to manually running a terminal client is to specify
+ '-c' to xc_dom_create.py, or add 'auto_console=True' to the defaults
+ file. This will cause xc_dom_create.py to automatically become the
+ console terminal after starting the domain.
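+
+ For example (assuming the default configuration file at
+ /etc/xc/defaults):
+ # xc_dom_create.py -f /etc/xc/defaults -c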
--- /dev/null
+Xen Scheduler HOWTO
+===================
+
+by Mark Williamson
+(c) 2004 Intel Research Cambridge
+
+
+Introduction
+------------
+
+Xen offers a choice of CPU schedulers. All available schedulers are
+included in Xen at compile time and the administrator may select a
+particular scheduler using a boot-time parameter to Xen. It is
+expected that administrators will choose the scheduler most
+appropriate to their application and configure the machine to boot
+with that scheduler.
+
+Note: the default scheduler is the Borrowed Virtual Time (BVT)
+scheduler which was also used in previous releases of Xen. No
+configuration changes are required to keep using this scheduler.
+
+This file provides a brief description of the CPU schedulers available
+in Xen, what they are useful for and the parameters that are used to
+configure them. This information is necessarily fairly technical at
+the moment. The recommended way to fully understand the scheduling
+algorithms is to read the relevant research papers.
+
+The interface to the schedulers is basically "raw" at the moment,
+without sanity checking - administrators should be careful when
+setting the parameters since it is possible for a mistake to hang
+domains, or the entire system (in particular, double check parameters
+for sanity and make sure that DOM0 will get enough CPU time to remain
+usable). Note that xc_dom_control.py takes time values in
+nanoseconds.
+
+Future tools will implement friendlier control interfaces.
+
+
+Borrowed Virtual Time (BVT)
+---------------------------
+
+All releases of Xen have featured the BVT scheduler, which is used to
+provide proportional fair shares of the CPU based on weights assigned
+to domains. BVT is "work conserving" - the CPU will never be left
+idle if there are runnable tasks.
+
+BVT uses "virtual time" to make decisions on which domain should be
+scheduled on the processor. Each time a scheduling decision is
+required, BVT evaluates the "Effective Virtual Time" of all domains
+and then schedules the domain with the least EVT. Domains are allowed
+to "borrow" virtual time by "time warping", which reduces their EVT by
+a certain amount, so that they may be scheduled sooner. In order to
+maintain long term fairness, there are limits on when a domain can
+time warp and for how long. [ For more details read the SOSP'99 paper
+by Duda and Cheriton ]
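+
+In outline (following the formulation in that paper, not Xen's exact
+variable names): each runnable domain i accrues "actual virtual time"
+A_i at a rate inversely proportional to its weight, and its effective
+virtual time is
+
+    E_i = A_i - (W_i if domain i is currently warped, else 0)
+
+where W_i is the domain's warp value; the runnable domain with the
+smallest E_i runs next.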
+
+In the Xen implementation, domains time warp when they unblock, so
+that domain wakeup latencies are reduced.
+
+The BVT algorithm uses the following per-domain parameters (set using
+xc_dom_control.py cpu_bvtset):
+
+* mcuadv - the MCU (Minimum Charging Unit) advance determines the
+ proportional share of the CPU that a domain receives. It
+ is set inversely proportional to a domain's sharing weight.
+* warp - the amount of "virtual time" the domain is allowed to warp
+ backwards
+* warpl - the warp limit is the maximum time a domain can run warped for
+* warpu - the unwarp requirement is the minimum time a domain must
+ run unwarped for before it can warp again
+
+BVT also has the following global parameter (set using
+xc_dom_control.py cpu_bvtslice):
+
+* ctx_allow - the context switch allowance is similar to the "quantum"
+ in traditional schedulers. It is the minimum time that
+ a scheduled domain will be allowed to run before being
+ pre-empted. This prevents thrashing of the CPU.
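+
+For example, a hypothetical invocation (the argument order here simply
+follows the parameter lists above - check the tool's usage output
+before relying on it; times are in nanoseconds) giving domain 1 a
+large share with warping disabled, and a 5ms context switch allowance:
+
+# xc_dom_control.py cpu_bvtset 1 10 0 0 0
+# xc_dom_control.py cpu_bvtslice 5000000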
+
+BVT can now be selected by passing the 'sched=bvt' argument to Xen at
+boot-time and is the default scheduler if no 'sched' argument is
+supplied.
+
+Atropos
+-------
+
+Atropos is a scheduler originally developed for the Nemesis multimedia
+operating system. Atropos can be used to reserve absolute shares of
+the CPU. It also includes some features to improve the efficiency of
+domains that block for I/O and to allow spare CPU time to be shared
+out.
+
+The Atropos algorithm has the following parameters for each domain
+(set using xc_dom_control.py cpu_atropos_set):
+
+ * slice - The length of time per period that a domain is guaranteed.
+ * period - The period over which a domain is guaranteed to receive
+ its slice of CPU time.
+ * latency - The latency hint is used to control how soon after
+ waking up a domain should be scheduled.
+ * xtratime - This is a true (1) / false (0) flag that specifies whether
+ a domain should be allowed a share of the system slack time.
+
+Every domain has an associated period and slice. The domain should
+receive 'slice' nanoseconds every 'period' nanoseconds. This allows
+the administrator to configure both the absolute share of the CPU a
+domain receives and the frequency with which it is scheduled. When
+domains unblock, their period is reduced to the value of the latency
+hint (the slice is scaled accordingly so that they still get the same
+proportion of the CPU). For each subsequent period, the slice and
+period times are doubled until they reach their original values.
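+
+For example, to reserve 10ms of CPU in each 100ms period for domain 1,
+with a 5ms latency hint and access to slack time (a hypothetical
+invocation - the argument order follows the list above; times in
+nanoseconds):
+
+# xc_dom_control.py cpu_atropos_set 1 10000000 100000000 5000000 1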
+
+Atropos is selected by adding 'sched=atropos' to Xen's boot-time
+arguments.
+
+Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
+more CPU than is available - the utilisation should be kept to
+slightly less than 100% in order to ensure predictable behaviour).
+
+Round-Robin
+-----------
+
+The Round-Robin scheduler is provided as a simple example of Xen's
+internal scheduler API. For production systems, one of the other
+schedulers should be used, since they are more flexible and more
+efficient.
+
+The Round-robin scheduler has one global parameter (set using
+xc_dom_control.py cpu_rrobin_slice):
+
+ * rr_slice - The time for which each domain runs before the next
+ scheduling decision is made.
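+
+For example, a 5ms slice (in nanoseconds; values are illustrative):
+
+# xc_dom_control.py cpu_rrobin_slice 5000000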
+
+The Round-Robin scheduler can be selected by adding 'sched=rrobin' to
+Xen's boot-time arguments.
--- /dev/null
+Virtual Block Devices / Virtual Disks in Xen - HOWTO
+====================================================
+
+HOWTO for Xen 1.2
+
+Mark A. Williamson (mark.a.williamson@intel.com)
+(C) Intel Research Cambridge 2004
+
+Introduction
+------------
+
+This document describes the new Virtual Block Device (VBD) and Virtual Disk
+features available in Xen release 1.2. First, a brief introduction to some
+basic disk concepts on a Xen system:
+
+Virtual Block Devices (VBDs):
+ VBDs are the disk abstraction provided by Xen. All XenoLinux disk accesses
+ go through the VBD driver. Using the VBD functionality, it is possible
+ to selectively grant domains access to portions of the physical disks
+ in the system.
+
+ A virtual block device can also consist of multiple extents from the
+ physical disks in the system, allowing them to be accessed as a single
+ uniform device from the domain with access to that VBD. The
+ functionality is somewhat similar to that underpinning LVM, since
+ you can combine multiple regions from physical devices into a single
+ logical device, from the point of view of a guest virtual machine.
+
+ Everyone who boots Xen / XenoLinux from a hard drive uses VBDs
+ but for some uses they can almost be ignored.
+
+Virtual Disks (VDs):
+ VDs are an abstraction built on top of the functionality provided by
+ VBDs. The VD management code maintains a "free pool" of disk space on
+ the system that has been reserved for use with VDs. The tools can
+ automatically allocate collections of extents from this free pool to
+ create "virtual disks" on demand.
+
+ VDs can then be used just like normal disks by domains. VDs appear
+ just like any other disk to guest domains, since they use the same VBD
+ abstraction, as provided by Xen.
+
+ Using VDs is optional, since it's always possible to dedicate
+ partitions, or entire disks to your virtual machines. VDs are handy
+ when you have a dynamically changing set of virtual machines and you
+ don't want to have to keep repartitioning in order to provide them with
+ disk space.
+
+ Virtual Disks are rather like "logical volumes" in LVM.
+
+If that didn't all make sense, it doesn't matter too much ;-) Using the
+functionality is fairly straightforward and some examples will clarify things.
+The text below expands a bit on the concepts involved, finishing up with a
+walk-through of some simple virtual disk management tasks.
+
+
+Virtual Block Devices
+---------------------
+
+Before covering VD management, it's worth discussing some aspects of the VBD
+functionality that will be useful to know.
+
+A VBD is made up of a number of extents from physical disk devices. The
+extents for a VBD don't have to be contiguous, or even on the same device. Xen
+performs address translation so that they appear as a single contiguous
+device to a domain.
+
+When the VBD layer is used to give access to entire drives or entire
+partitions, the VBDs simply consist of a single extent that corresponds to the
+drive or partition used. Lists of extents are usually only used when virtual
+disks (VDs) are being used.
+
+Xen 1.2 and its associated XenoLinux release support automatic registration /
+removal of VBDs. It has always been possible to add a VBD to a running
+XenoLinux domain but it was then necessary to run the "xen_vbd_refresh" tool in
+order for the new device to be detected. Nowadays, when a VBD is added, the
+domain it's added to automatically registers the disk, with no special action
+by the user being required.
+
+Note that it is possible to use the VBD functionality to allow multiple domains
+write access to the same areas of disk. This is almost always a bad thing!
+The provided example scripts for creating domains do their best to check that
+disk areas are not shared unsafely and will catch many cases of this. Setting
+the vbd_expert variable in config files for xc_dom_create.py controls how
+unsafe VBD mappings are allowed to be - level 0 (only read-only sharing
+allowed) should be right for most people ;-). Level 1 allows at most one
+writer to any area of disk. Level 2 allows multiple writers (i.e. anything!).
+
+
+Virtual Disk Management
+-----------------------
+
+The VD management code runs entirely in user space. The code is written in
+Python and can therefore be accessed from custom scripts, as well as from the
+convenience scripts provided. The underlying VD database is a SQLite database
+in /var/db/xen_vdisks.sqlite.
+
+Most virtual disk management can be performed using the xc_vd_tool.py script
+provided in the tools/examples/ directory of the source tree. It supports the
+following operations:
+
+initialise - "Formats" a partition or disk device for storing
+ virtual disks. This does not actually write data to the
+ specified device. Rather, it adds the device to the VD
+ free-space pool, for later allocation.
+
+ You should only add devices that correspond directly to
+ physical disks / partitions - trying to use a VBD that you
+ have created yourself as part of the free space pool has
+ undefined (possibly nasty) results.
+
+create - Creates a virtual disk of specified size by allocating space
+ from the free space pool. The virtual disk is identified
+ in future by the unique ID returned by this script.
+
+ The disk can be given an expiry time, if desired. For
+ most users, the best idea is to specify a time of 0 (which
+ has the special meaning "never expire") and then
+ explicitly delete the VD when finished with it -
+ otherwise, VDs will disappear if allowed to expire.
+
+delete - Explicitly delete a VD. Makes it disappear immediately!
+
+setexpiry - Allows the expiry time of a (not yet expired) virtual disk
+ to be modified. Be aware the VD will disappear when the
+ time has expired.
+
+enlarge - Increase the allocation of space to a virtual disk.
+ Currently this will not be immediately visible to running
+ domain(s) using it. You can make it visible by destroying
+ the corresponding VBDs and then using xc_dom_control.py to
+ add them to the domain again. Note: doing this to
+ filesystems that are in use may well cause errors in the
+ guest Linux, or even a crash, although it will probably be
+ OK if you stop the domain before updating the VBD and
+ restart afterwards.
+
+import - Allocate a virtual disk and populate it with the contents of
+ some disk file. This can be used to import root file system
+ images or to restore backups of virtual disks, for instance.
+
+export - Write the contents of a virtual disk out to a disk file.
+ Useful for creating disk images for use elsewhere, such as
+ standard root file systems and backups.
+
+list - List the non-expired virtual disks currently available in the
+ system.
+
+undelete - Attempts to recover an expired (or deleted) virtual disk.
+
+freespace - Get the free space (in megabytes) available for allocating
+ new virtual disk extents.
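+
+As a brief illustration, a hypothetical session (the exact argument
+syntax is not shown here - run the script without arguments for its
+usage summary):
+
+  # xc_vd_tool.py create 512            (allocate a 512MB VD; prints its ID)
+  # xc_vd_tool.py export <id> root.img  (back the VD up to a disk file)
+  # xc_vd_tool.py delete <id>           (delete it, freeing its extents)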
+
+The functionality provided by these scripts is also available directly from
+Python functions in the xenctl.utils module - you can use this functionality in
+your own scripts.
+
+Populating VDs:
+
+Once you've created a VD, you might want to populate it from DOM0 (for
+instance, to put a root file system onto it for a guest domain). This can be
+done by creating a VBD for dom0 to access the VD through - this is discussed
+below.
+
+More detail on how virtual disks work:
+
+When you "format" a device for virtual disks, the device is logically split up
+into extents. These extents are recorded in the Virtual Disk Management
+database in /var/db/xen_vdisks.sqlite.
+
+When you use xc_vd_tool.py to create a virtual disk, some of the extents in
+the free space pool are reallocated for that virtual disk and a record for that
+VD is added to the database. When VDs are mapped into domains as VBDs, the
+system looks up the allocated extents for the virtual disk in order to set up
+the underlying VBD.
+
+Free space is identified by the fact that it belongs to an "expired" disk.
+When "initialising" with xc_vd_tool.py adds a real device to the free pool, it
+actually divides the device into extents and adds them to an already-expired
+virtual disk. The allocated device is not written to during this operation -
+its availability is simply recorded into the virtual disks database.
+
+If you set an expiry time on a VD, its extents will be liable to be reallocated
+to new VDs as soon as that expiry time runs out. Therefore, be careful when
+setting expiry times! Many users will find it simplest to set all VDs to not
+expire automatically, then explicitly delete them later on.
+
+Deleted / expired virtual disks may sometimes be undeleted - currently this
+only works when none of the virtual disk's extents have been reallocated to
+other virtual disks, since that's the only situation where the disk is likely
+to be fully intact. You should try undeletion as soon as you realise you've
+mistakenly deleted (or allowed to expire) a virtual disk. At some point in the
+future, an "unsafe" undelete which can recover what remains of partially
+reallocated virtual disks may also be implemented.
+
+Security note:
+
+The disk space for VDs is not zeroed when it is initially added to the free
+space pool OR when a VD expires OR when a VD is created. Therefore,
+unless the space is zeroed manually, a domain may read a VD to determine what
+was written by previous owners of its constituent extents. If this is a
+problem, users should manually clean VDs in some way either on allocation, or
+just before deallocation (automated support for this may be added at a later
+date).
+
+
+Side note: The xvd* devices
+---------------------------
+
+The examples in this document make frequent use of the xvd* device nodes for
+representing virtual block devices. It is not a requirement to use these with
+Xen, since VBDs can be mapped to any IDE or SCSI device node in the system.
+Changing the references to xvd* nodes in the examples below to refer to
+some unused hd* or sd* node would also be valid.
+
+They can be useful when accessing VBDs from dom0, since binding VBDs to xvd*
+devices will avoid clashes with real IDE or SCSI drives.
+
+There is a shell script provided in tools/misc/xen-mkdevnodes to create these
+nodes. Specify on the command line the directory that the nodes should be
+placed under (e.g. /dev):
+
+> cd {root of Xen source tree}/tools/misc/
+> ./xen-mkdevnodes /dev
+
+
+Dynamically Registering VBDs
+----------------------------
+
+The domain control tool (xc_dom_control.py) includes the ability to add and
+remove VBDs to / from running domains. As usual, the command format is:
+
+xc_dom_control.py [operation] [arguments]
+
+The operations (and their arguments) are as follows:
+
+vbd_add dom uname dev mode - Creates a VBD corresponding to either a physical
+ device or a virtual disk and adds it as a
+ specified device under the target domain, with
+ either read or write access.
+
+vbd_remove dom dev - Removes the VBD associated with a specified device
+ node from the target domain.
+
+These scripts are most useful when populating VDs. VDs can't be populated
+directly, since they don't correspond to real devices. Using:
+
+ xc_dom_control.py vbd_add 0 vd:your_vd_id /dev/whatever w
+
+you can make a virtual disk available to DOM0. Sensible devices to map VDs to
+in DOM0 are the /dev/xvd* nodes, since that makes it obvious that they are Xen
+virtual devices that don't correspond to real physical devices.
+
+You can then format, mount and populate the VD through the nominated device
+node. When you've finished, use:
+
+ xc_dom_control.py vbd_remove 0 /dev/whatever
+
+to revoke DOM0's access to it. It's then ready for use in a guest domain.
+
+
+
+You can also use this functionality to grant access to a physical device to a
+guest domain - you might use this to temporarily share a partition, or to add
+access to a partition that wasn't granted at boot time.
+
+When playing with VBDs, remember that in general, it is only safe for two
+domains to have access to a file system if they both have read-only access. You
+shouldn't be trying to share anything which is writable, even if only by one
+domain, unless you're really sure you know what you're doing!
+
+
+Granting access to real disks and partitions
+--------------------------------------------
+
+During the boot process, Xen automatically creates a VBD for each physical disk
+and gives Dom0 read / write access to it. This makes it look like Dom0 has
+normal access to the disks, just as if Xen wasn't being used - in reality, even
+Dom0 talks to disks through Xen VBDs.
+
+To give another domain access to a partition or whole disk, you need to
+create a corresponding VBD for that partition, for use by that domain. As for
+virtual disks, you can grant access to a running domain, or specify that the
+domain should have access when it is first booted.
+
+To grant access to a physical partition or disk whilst a domain is running, use
+the xc_dom_control.py script - the usage is very similar to the case of adding
+access to virtual disks to a running domain (described above). Specify the device
+as "phy:device", where device is the name of the device as seen from domain 0,
+or from normal Linux without Xen. For instance:
+
+> xc_dom_control.py vbd_add 2 phy:hdc /dev/whatever r
+
+will grant domain 2 read-only access to the device /dev/hdc (as seen from Dom0
+/ normal Linux running on the same machine - i.e. the master drive on the
+secondary IDE chain), as /dev/whatever in the target domain.
+
+Note that you can use this within domain 0 to map disks / partitions to other
+device nodes within domain 0. For instance, you could map /dev/hda to also be
+accessible through /dev/xvda. This is not generally recommended, since if you
+(for instance) mount both device nodes read / write you could cause corruption
+to the underlying filesystem. It's also quite confusing ;-)
+
+To grant a domain access to a partition or disk when it boots, the appropriate
+VBD needs to be created before the domain is started. This can be done very
+easily using the tools provided. To specify this to the xc_dom_create.py tool
+(either in a startup script or on the command line) use triples of the format:
+
+ phy:dev,target_dev,perms
+
+Where dev is the device name as seen from Dom0, target_dev is the device you
+want it to appear as in the target domain and perms is 'w' if you want to give
+write privileges, or 'r' otherwise.
+
+These may either be specified on the command line or in an initialisation
+script. For instance, to grant the same access rights as described by the
+command example above, you would use the triple:
+
+ phy:hdc,/dev/whatever,r
+
+If you are using a config file, then you should add this triple into the
+vbd_list variable, for instance using the line:
+
+ vbd_list = [ ('phy:hdc', '/dev/whatever', 'r') ]
+
+(Note that you need to use quotes here, since config files are really small
+Python scripts.)
+
+To specify the mapping on the command line, you'd use the -d switch and supply
+the triple as the argument, e.g.:
+
+> xc_dom_create.py [other arguments] -d phy:hdc,/dev/whatever,r
+
+(You don't need to explicitly quote things in this case.)
+
+
+Walk-through: Booting a domain from a VD
+----------------------------------------
+
+As an example, here is a sequence of commands you might use to create a virtual
+disk, populate it with a root file system and boot a domain from it. These
+steps assume that you've installed the example scripts somewhere on your PATH -
+if you haven't done that, you'll need to specify a fully qualified pathname in
+the examples below. It is also assumed that you know how to use the
+xc_dom_create.py tool (apart from configuring virtual disks!)
+
+[ This example is intended only for users of virtual disks (VDs). You don't
+need to follow this example if you'll be booting a domain from a dedicated
+partition, since you can create that partition and populate it, directly from
+Dom0, as normal. ]
+
+First, if you haven't done so already, you'll initialise the free space pool by
+adding a real partition to it. The details are stored in the database, so
+you'll only need to do it once. You can also use this command to add further
+partitions to the existing free space pool.
+
+> xc_vd_tool.py format /dev/<real partition>
+
+Now you'll want to allocate the space for your virtual disk. Do so using the
+following, specifying the size in megabytes.
+
+> xc_vd_tool.py create <size in megabytes>
+
+At this point, the program will tell you the virtual disk ID. Note it down, as
+it is how you will identify the virtual device in future.
+
+If you don't want the VD to be bootable (i.e. you're booting a domain from some
+other medium and just want it to be able to access this VD), you can simply add
+it to the vbd_list used by xc_dom_create.py, either by putting it in a config
+file or by specifying it on the command line. Formatting / populating of the
+VD could then be done from that domain once it's started.
+
+If you want to boot off your new VD as well then you need to populate it with a
+standard Linux root filesystem. You'll need to temporarily add the VD to DOM0
+in order to do this. To give DOM0 r/w access to the VD, use the following
+command line, substituting the ID you got earlier.
+
+> xc_dom_control.py vbd_add 0 vd:<id> /dev/xvda w
+
+This attaches the VD to the device /dev/xvda in domain zero, with read / write
+privileges - you can use other device nodes if you choose to.
+
+Now make a filesystem on this device, mount it and populate it with a root
+filesystem. These steps are exactly the same as under normal Linux. When
+you've finished, unmount the filesystem again.
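+
+For instance (a sketch - the filesystem type and the root archive
+/path/to/rootfs.tar.gz are purely illustrative):
+
+> mkfs -t ext3 /dev/xvda
+> mount /dev/xvda /mnt
+> tar -C /mnt -zxf /path/to/rootfs.tar.gz
+> umount /mnt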
+
+You should now remove the VD from DOM0. This will prevent you accidentally
+changing it in DOM0, whilst the guest domain is using it (which could cause
+filesystem corruption, and confuse Linux).
+
+> xc_dom_control.py vbd_remove 0 /dev/xvda
+
+It should now be possible to boot a guest domain from the VD. To do this, you
+should specify the VD's details in some way so that xc_dom_create.py will
+be able to set up the corresponding VBD for the domain to access. If you're
+using a config file, you should include:
+
+ ('vd:<id>', '/dev/whatever', 'w')
+
+In the vbd_list, substituting the appropriate virtual disk ID, device node and
+read / write setting.
+
+To specify access on the command line, as you start the domain, you would use
+the -d switch (note that you don't need to use quote marks here):
+
+> xc_dom_create.py [other arguments] -d vd:<id>,/dev/whatever,w
+
+To tell Linux which device to boot from, you should either include:
+
+ root=/dev/whatever
+
+in your cmdline_root in the config file, or specify it on the command line,
+using the -R option:
+
+> xc_dom_create.py [other arguments] -R root=/dev/whatever
+
+That should be it: sit back and watch your domain boot off its virtual disk!
+
+
+Getting help
+------------
+
+The main source of help using Xen is the developer's e-mail list:
+<xen-devel@lists.sourceforge.net>. The developers will help with problems,
+listen to feature requests and do bug fixes. It is, however, helpful if you
+can look through the mailing list archives and HOWTOs provided to make sure
+your question is not answered there. If you post to the list, please provide
+as much information as possible about your setup and your problem.
+
+There is also a general Xen FAQ, kindly started by Jan van Rensburg, which (at
+time of writing) is located at: <http://xen.epiuse.com/xen-faq.txt>.
+
+Contributing
+------------
+
+Patches and extra documentation are also welcomed ;-) and should also be posted
+to the xen-devel e-mail list.
--- /dev/null
+###########################################
+Xen HOWTO
+
+University of Cambridge Computer Laboratory
+
+http://www.cl.cam.ac.uk/netos/xen
+###########################################
+
+
+Get Xen Source Code
+=============================
+
+The public master BK repository for the 1.2 release lives at:
+'bk://xen.bkbits.net/xeno-1.2.bk'
+The current unstable release (1.3) is available at:
+'bk://xen.bkbits.net/xeno-unstable.bk'
+
+To fetch a local copy, first download the BitKeeper tools at:
+http://www.bitmover.com/download with username 'bitkeeper' and
+password 'get bitkeeper'.
+
+Then install the tools and run:
+# bk clone bk://xen.bkbits.net/xeno-1.2.bk
+
+Under your current directory, a new directory named 'xeno-1.2.bk' has
+been created, which contains all the necessary source code for the
+Xen hypervisor and Linux guest OSes.
+
+To get the newest changes to the repository, run:
+# cd xeno-1.2.bk
+# bk pull
+
+
+Configuring Xen
+=============================
+
+Xen's build configuration is managed via a set of environment
+variables. These should be set before invoking make
+(e.g., 'export debug=y; make', 'debug=y make').
+
+The options that can be configured are as follows (all options default
+to 'n' or off):
+
+ debug=y -- Enable debug assertions and console output.
+ (Primarily useful for tracing bugs in Xen).
+
+ debugger=y -- Enable the in-Xen pervasive debugger (PDB).
+ This can be used to debug Xen, guest OSes, and
+ applications. For more information see the
+ XenDebugger-HOWTO.
+
+ old_drivers=y -- Enable the old hardware-device architecture, in
+ which network and block devices are managed by
+ Xen. The new (and default) model requires such
+ devices to be managed by a suitably-privileged
+ guest OS (e.g., within domain 0).
+
+ perfc=y -- Enable performance-counters for significant events
+ within Xen. The counts can be reset or displayed
+ on Xen's console via console control keys.
+
+ trace=y -- Enable per-cpu trace buffers which log a range of
+ events within Xen for collection by control
+ software.
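+
+For example, to enable debug assertions and performance counters for
+the next build (see the build instructions below):
+
+# export debug=y perfc=y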
+
+
+Build Xen
+=============================
+
+Hint: To see how to build Xen and all the control tools, inspect the
+tools/misc/xen-clone script in the BK repository. This script can be
+used to clone the repository and perform a full build.
+
+To build Xen manually:
+
+# cd xeno-1.2.bk/xen
+# make clean
+# make
+
+This will (should) produce a file called 'xen' in the current
+directory. This is the ELF 32-bit LSB executable file of Xen. You
+can also find a gzipped version, named 'xen.gz'.
+
+To install the built files on your server under /usr, type 'make
+install' at the root of the BK repository. You will need to be root to
+do this!
+
+Hint: There is also a 'make dist' rule which copies built files to an
+install directory just outside the BK repo; if this suits your setup,
+go for it.
+
+
+Build Linux as a Xen guest OS
+==============================
+
+This is a little more involved since the repository only contains a
+"sparse" tree -- this is essentially an 'overlay' on a standard linux
+kernel source tree. It contains only those files currently 'in play'
+which are either modified versions of files in the vanilla linux tree,
+or brand new files specific to the Xen port.
+
+So, first you need a vanilla linux-2.4.26 tree, which is located at:
+http://www.kernel.org/pub/linux/kernel/v2.4
+
+Then:
+ # mv linux-2.4.26.tar.gz /xeno-1.2.bk
+ # cd /xeno-1.2.bk
+ # tar -zxvf linux-2.4.26.tar.gz
+
+You'll find a new directory 'linux-2.4.26' which contains all
+the vanilla Linux 2.4.26 kernel source code.
+
+Hint: You should choose the vanilla linux kernel tree that has the
+same version as the "sparse" tree.
+
+Next, you need to 'overlay' this sparse tree on the full vanilla Linux
+kernel tree:
+
+ # cd /xeno-1.2.bk/xenolinux-2.4.26-sparse
+ # ./mkbuildtree ../linux-2.4.26
+
+Finally, rename the buildtree since it is now a 'xenolinux' buildtree.
+
+ # cd /xeno-1.2.bk
+ # mv linux-2.4.26 xenolinux-2.4.26
+
+Now that the buildtree is there, you can build the xenolinux kernel.
+The default configuration should work fine for most people (use 'make
+oldconfig') but you can customise using one of the other config tools
+if you want.
+
+ # cd /xeno-1.2.bk/xenolinux-2.4.26
+ # ARCH=xen make oldconfig { or menuconfig, or xconfig, or config }
+ # ARCH=xen make dep
+ # ARCH=xen make bzImage
+
+Assuming the build works, you'll end up with
+/xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz. This is the
+gzipped XenoLinux kernel image.
+
+
+Build the Domain Control Tools
+==============================
+
+Under '/xeno-1.2.bk/tools', there are three sub-directories:
+'balloon', 'xc' and 'misc', each containing
+a group of tools. You can enter any of these sub-directories
+and type 'make' to compile the corresponding group of tools.
+Or you can type 'make' under '/xeno-1.2.bk/tools' to compile
+all the tools.
+
+In order to compile the control-interface library in 'xc' you must
+have zlib and development headers installed. Also you will need at
+least Python v2.2.
+
+'make install' in the tools directory will place executables and
+libraries in /usr/bin and /usr/lib. You will need to be root to do this!
+
+As noted earlier, 'make dist' installs files to a local 'install'
+directory just outside the BK repository. These files will then need
+to be installed manually onto the server.
+
+The Example Scripts
+===================
+
+The scripts in tools/examples/ are generally useful for
+administering a Xen-based system. You can install them by running
+'make install' in that directory.
+
+The python scripts (*.py) are the main tools for controlling
+Xen domains.
+
+'defaults' and 'democd' are example configuration files for starting
+new domains.
+
+'xendomains' is a Sys-V style init script for starting and stopping
+Xen domains when the system boots / shuts down.
+
+These will be discussed below in more detail.
+
+
+Installation
+==============================
+
+First:
+# cp /xeno-1.2.bk/xen/xen.gz /boot/xen.gz
+# cp /xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz /boot/xenolinux.gz
+
+Second, you must have 'GNU Grub' installed. Then you need to edit
+the Grub configuration file '/boot/grub/menu.lst'.
+
+A typical Grub menu option might look like:
+
+title Xen 1.2 / XenoLinux 2.4.26
+ kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 noht
+ module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
+
+The first line specifies which Xen image to use, and what command line
+arguments to pass to Xen. In this case we set the maximum amount of
+memory to allocate to domain0, and enable serial I/O at 115200 baud.
+We could also disable smp support (nosmp) or disable hyper-threading
+support (noht). If you have multiple network interfaces you can use
+ifname=ethXX to select which one to use. If your network card is
+unsupported, use ifname=dummy.
+
+The second line specifies which XenoLinux image to use, and the
+standard linux command line arguments to pass to the kernel. In this
+case, we're configuring the root partition and stating that it should
+(initially) be mounted read-only (normal practice).
+
+The following is a list of command line arguments to pass to Xen:
+
+ ignorebiostables Disable parsing of BIOS-supplied tables. This may
+ help with some chipsets that aren't fully supported
+ by Xen. If you specify this option then ACPI tables are
+ also ignored, and SMP support is disabled.
+
+ noreboot Don't reboot the machine automatically on errors.
+ This is useful to catch debug output if you aren't
+ catching console messages via the serial line.
+
+ nosmp Disable SMP support.
+ This option is implied by 'ignorebiostables'.
+
+ noacpi Disable ACPI tables, which confuse Xen on some chipsets.
+ This option is implied by 'ignorebiostables'.
+
+ watchdog Enable NMI watchdog which can report certain failures.
+
+ noht Disable Hyperthreading.
+
+ ifname=ethXX Select which Ethernet interface to use.
+
+ ifname=dummy Don't use any network interface.
+
+ com1=<baud>,DPS[,<io_base>,<irq>]
+ com2=<baud>,DPS[,<io_base>,<irq>]
+ Xen supports up to two 16550-compatible serial ports.
+ For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
+ 9600-baud port, 8 data bits, no parity, 1 stop bit,
+ I/O port base 0x408, IRQ 5.
+ If the I/O base and IRQ are standard (com1:0x3f8,4;
+ com2:0x2f8,3) then they need not be specified.
+
+ console=<specifier list>
+ Specify the destination for Xen console I/O.
+ This is a comma-separated list of, for example:
+ vga: use VGA console and allow keyboard input
+ com1: use serial port com1
+ com2H: use serial port com2. Transmitted chars will
+ have the MSB set. Received chars must have
+ MSB set.
+ com2L: use serial port com2. Transmitted chars will
+ have the MSB cleared. Received chars must
+ have MSB cleared.
+ The latter two examples allow a single port to be
+ shared by two subsystems (eg. console and
+ debugger). Sharing is controlled by the MSB of each
+ transmitted/received character.
+ [NB. Default for this option is 'com1,tty']
+
+ dom0_mem=xxx Set the maximum amount of memory for domain0.
+
+ tbuf_size=xxx Set the size of the per-cpu trace buffers, in pages
+ (default 1). Note that the trace buffers are only
+ enabled in debug builds. Most users can ignore
+ this feature completely.
+
+ sched=xxx Select the CPU scheduler Xen should use. The current
+ possibilities are 'bvt', 'atropos' and 'rrobin'. The
+ default is 'bvt'. For more information see
+ the Sched-HOWTO.
+
+Boot into Domain 0
+==============================
+
+Reboot your computer. After selecting the kernel to boot, stand back
+and watch Xen boot, closely followed by "domain 0" running the
+XenoLinux kernel. Depending on which root partition you have assigned
+to the XenoLinux kernel in the Grub configuration file, you can use the
+corresponding username / password to log in.
+
+Once logged in, it should look just like any regular linux box. All
+the usual tools and commands should work as usual.
+
+
+Start New Domains
+==============================
+
+You must be 'root' to start new domains.
+
+Make sure you have successfully configured at least one
+physical network interface. Then:
+
+# xen_nat_enable
+
+The xc_dom_create.py program is useful for starting Xen domains.
+You can specify configuration files using the -f switch on the command
+line. The default configuration is in /etc/xc/defaults. You can
+create custom versions of this to suit your local configuration.
+
+You can override the settings in a configuration file using command
+line arguments to xc_dom_create.py. However, you may find it simplest
+to create a separate configuration file for each domain you start.
+
+xc_dom_create.py will print the local TCP port to which you should
+connect to perform console I/O. A suitable console client is provided
+by the Python module xenctl.console_client: running this module from
+the command line with <host> and <port> parameters will start a
+terminal session. This module is also installed as /usr/bin/xencons,
+from a copy in tools/misc/xencons. An alternative to manually running
+a terminal client is to specify '-c' to xc_dom_create.py, or add
+'auto_console=True' to the defaults file. This will cause
+xc_dom_create.py to automatically become the console terminal after
+starting the domain.
+
+Boot-time output will be directed to this console by default, because
+the console name is tty0. It is also possible to log in via the
+virtual console --- once again, your normal startup scripts will work
+as normal (e.g., by running mingetty on tty1-7). The device node to
+which the virtual console is attached can be configured by specifying
+'xencons=' on the OS command line:
+ 'xencons=off' --> disable virtual console
+ 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
+ 'xencons=ttyS' --> attach console to /dev/ttyS0
+
+
+Manage Running Domains
+==============================
+
+You can see a list of existing domains with:
+# xc_dom_control.py list
+
+In order to stop a domain, you use:
+# xc_dom_control.py stop <domain_id>
+
+To shutdown a domain cleanly use:
+# xc_dom_control.py shutdown <domain_id>
+
+To destroy a domain immediately:
+# xc_dom_control.py destroy <domain_id>
+
+There are other more advanced options, including pinning domains to
+specific CPUs and saving / resuming domains to / from disk files. To
+get more information, run the tool without any arguments:
+# xc_dom_control.py
+
+There is more information available in the Xen README files, the
+VBD-HOWTO and the contributed FAQ / HOWTO documents on the web.
+
+
+Other Control Tasks using Python
+================================
+
+A Python module 'Xc' is installed as part of the tools-install
+process. This can be imported, and an 'xc object' instantiated, to
+provide access to privileged command operations:
+
+# import Xc
+# xc = Xc.new()
+# dir(xc)
+# help(xc.domain_create)
+
+In this way you can see that the class 'xc' contains useful
+documentation for you to consult.
+
+A further package of useful routines (xenctl) is also installed:
+
+# import xenctl.utils
+# help(xenctl.utils)
+
+You can use these modules to write your own custom scripts or you can
+customise the scripts supplied in the Xen distribution.
+
+
+Automatically start / stop domains at boot / shutdown
+=====================================================
+
+A Sys-V style init script for RedHat systems is provided in
+tools/examples/xendomains. When you run 'make install' in that
+directory, it should be automatically copied to /etc/init.d/. You can
+then enable it using the chkconfig command, e.g.:
+
+# chkconfig --add xendomains
+
+By default, this will start the boot-time domains in runlevels 3, 4
+and 5. To specify that a domain should start at boot-time, place its
+configuration file (or a link to it) under /etc/xc/auto/.
+
+The script will also stop ALL domains when the system is shut down,
+even domains that it did not start originally.
+
+You can also use the "service" command (part of the RedHat standard
+distribution) to run this script manually, e.g.:
+
+# service xendomains start
+
+Starts all the domains with config files under /etc/xc/auto/.
+
+# service xendomains stop
+
+Shuts down ALL running Xen domains.
--- /dev/null
+Pervasive Debugging
+===================
+
+Alex Ho (alex.ho at cl.cam.ac.uk)
+
+Introduction
+------------
+
+The pervasive debugging project is leveraging Xen to
+debug distributed systems. We have added a gdb stub
+to Xen to allow for remote debugging of both Xen and
+guest operating systems. More information about the
+pervasive debugger is available at: http://www.cl.cam.ac.uk/netos/pdb
+
+
+Implementation
+--------------
+
+The gdb stub communicates with gdb running over a serial line.
+The main entry point is pdb_handle_exception() which is invoked
+from: pdb_key_pressed() ('D' on the console)
+ do_int3_exception() (interrupt 3: breakpoint exception)
+ do_debug() (interrupt 1: debug exception)
+
+This accepts characters from the serial port and passes gdb
+commands to pdb_process_command() which implements the gdb stub
+interface. This file draws heavily from the kgdb project and
+sample gdbstub provided with gdb.
+
+The stub can examine registers, single step and continue, and
+read and write memory (in Xen, a domain, or a Linux process'
+address space). The debugger does not currently trace the
+current process, so all bets are off if a context switch occurs
+in the domain.
+
+
+Setup
+-----
+
+ +-------+ telnet +-----------+ serial +-------+
+ | GDB |--------| nsplitd |--------| Xen |
+ +-------+ +-----------+ +-------+
+
+To run pdb, Xen must be appropriately configured and
+a suitable serial interface attached to the target machine.
+GDB and nsplitd can run on the same machine.
+
+Xen Configuration
+
+ Add the "pdb=xxx" option to your Xen boot command line
+ where xxx is one of the following values:
+ com1 gdb stub should communicate on com1
+ com1H gdb stub should communicate on com1 (with high bit set)
+ com2 gdb stub should communicate on com2
+ com2H gdb stub should communicate on com2 (with high bit set)
+
+ Symbolic debugging information is quite helpful too:
+ xeno.bk/xen/arch/i386/Rules.mk
+ add -g to CFLAGS to compile Xen with symbols
+ xeno.bk/xenolinux-2.4.24-sparse/arch/xen/Makefile
+ add -g to CFLAGS to compile Linux with symbols
+
+ You may also want to consider dedicating a register to the
+ frame pointer (disable the -fomit-frame-pointer compile flag).
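+
+  For instance, an illustrative change to the Xen Rules.mk:
+      CFLAGS += -g -fno-omit-frame-pointer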
+
+ When booting Xen and domain 0, look for the console text
+ "Initializing pervasive debugger (PDB)" just before DOM0 starts up.
+
+Serial Port Configuration
+
+ pdb expects to communicate with gdb using the serial port. Since
+ this port is often shared with the machine's console output, pdb can
+ discriminate its communication by setting the high bit of each byte.
+
+ A new tool has been added to the source tree which splits
+ the serial output from a remote machine into two streams:
+ one stream (without the high bit) is the console and
+ one stream (with the high bit stripped) is the pdb communication.
+
+ See: xeno.bk/tools/nsplitd
+
+ nsplitd configuration
+ ---------------------
+ hostname$ more /etc/xinetd.d/nsplit
+ service nsplit1
+ {
+ socket_type = stream
+ protocol = tcp
+ wait = no
+ user = wanda
+ server = /usr/sbin/in.nsplitd
+ server_args = serial.cl.cam.ac.uk:wcons00
+ disable = no
+ only_from = 128.232.0.0/17 127.0.0.1
+ }
+
+ hostname$ egrep 'wcons00|nsplit1' /etc/services
+ wcons00 9600/tcp # Wanda remote console
+ nsplit1 12010/tcp # Nemesis console splitter ports.
+
+ Note: nsplitd was originally written for the Nemesis project
+ at Cambridge.
+
+ After nsplitd accepts a connection on <port> (12010 in the above
+ example), it starts listening on port <port + 1>. Characters sent
+ to the <port + 1> will have the high bit set and vice versa for
+ characters received.
+
+ You can connect to the nsplitd using
+ 'tools/xenctl/lib/console_client.py <host> <port>'
+
+GDB 6.0
+ pdb has been tested with gdb 6.0. It should also work with
+ earlier versions.
+
+
+Usage
+-----
+
+1. Boot Xen and Linux
+2. Interrupt Xen by pressing 'D' at the console
+ You should see the console message:
+ (XEN) pdb_handle_exception [0x88][0x101000:0xfc5e72ac]
+ At this point Xen is frozen and the pdb stub is waiting for gdb commands
+ on the serial line.
+3. Attach with gdb
+ (gdb) file xeno.bk/xen/xen
+ Reading symbols from xeno.bk/xen/xen...done.
+ (gdb) target remote <hostname>:<port + 1> /* contact nsplitd */
+ Remote debugging using serial.srg:12131
+ continue_cpu_idle_loop () at current.h:10
+ warning: shared library handler failed to enable breakpoint
+ (gdb) break __enter_scheduler
+ Breakpoint 1 at 0xfc510a94: file schedule.c, line 330.
+ (gdb) cont
+ Continuing.
+
+ Program received signal SIGTRAP, Trace/breakpoint trap.
+ __enter_scheduler () at schedule.c:330
+ (gdb) step
+ (gdb) step
+ (gdb) print next /* the variable prev has been optimized away! */
+ $1 = (struct task_struct *) 0x0
+ (gdb) delete
+ Delete all breakpoints? (y or n) y
+4. You can add additional symbols to gdb
+ (gdb) add-sym xenolinux-2.4.24/vmlinux
+ add symbol table from file "xenolinux-2.4.24/vmlinux" at
+ (y or n) y
+ Reading symbols from xenolinux-2.4.24/vmlinux...done.
+ (gdb) x/s cpu_vendor_names[0]
+ 0xc01530d2 <cpdext+62898>: "Intel"
+ (gdb) break free_uid
+ Breakpoint 2 at 0xc0012250
+ (gdb) cont
+ Continuing. /* run a command in domain 0 */
+
+ Program received signal SIGTRAP, Trace/breakpoint trap.
+ free_uid (up=0xbffff738) at user.c:77
+
+ (gdb) print *up
+ $2 = {__count = {counter = 0}, processes = {counter = 135190120}, files = {
+ counter = 0}, next = 0x395, pprev = 0xbffff878, uid = 134701041}
+ (gdb) finish
+ Run till exit from #0 free_uid (up=0xbffff738) at user.c:77
+
+ Program received signal SIGTRAP, Trace/breakpoint trap.
+ release_task (p=0xc2da0000) at exit.c:51
+ (gdb) print *p
+ $3 = {state = 4, flags = 4, sigpending = 0, addr_limit = {seg = 3221225472},
+ exec_domain = 0xc016a040, need_resched = 0, ptrace = 0, lock_depth = -1,
+ counter = 1, nice = 0, policy = 0, mm = 0x0, processor = 0,
+ cpus_runnable = 1, cpus_allowed = 4294967295, run_list = {next = 0x0,
+ prev = 0x0}, sleep_time = 18995, next_task = 0xc017c000,
+ prev_task = 0xc2f94000, active_mm = 0x0, local_pages = {next = 0xc2da0054,
+ prev = 0xc2da0054}, allocation_order = 0, nr_local_pages = 0,
+ ...
+5. To resume Xen, enter the "continue" command to gdb.
+ This sends the packet $c#63 along the serial channel.
+
+ (gdb) cont
+ Continuing.
+
+Debugging Multiple Domains & Processes
+--------------------------------------
+
+pdb supports debugging multiple domains & processes. You can switch
+between different domains and processes within domains and examine
+variables in each.
+
+The pdb context identifies the current debug target. It is stored
+in the xen variable pdb_ctx and defaults to xen.
+
+ target pdb_ctx.domain pdb_ctx.process
+ ------ -------------- ---------------
+ xen -1 -1
+ guest os 0,1,2,... -1
+ process 0,1,2,... 0,1,2,...
+
+Unfortunately, gdb doesn't understand debugging multiple processes
+simultaneously (we're working on it), so at present you are limited
+to just one set of symbols for symbolic debugging. When debugging
+processes, pdb currently supports just Linux 2.4.
+
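+For example, a gdb macro can load all the relevant symbol tables in
+one step (the paths here are those used in this example session):
+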
+ define setup
+ file xeno-clone/xeno.bk/xen/xen
+ add-sym xeno-clone/xenolinux-2.4.25/vmlinux
+ add-sym ~ach61/a.out
+ end
+
+
+1. Connect with gdb as before. A couple of Linux-specific
+ symbols need to be defined.
+
+ (gdb) target remote <hostname>:<port + 1> /* contact nsplitd */
+ Remote debugging using serial.srg:12131
+ continue_cpu_idle_loop () at current.h:10
+ warning: shared library handler failed to enable breakpoint
+ (gdb) set pdb_pidhash_addr = &pidhash
+ (gdb) set pdb_init_task_union_addr = &init_task_union
+
+2. The pdb context defaults to Xen and we can read Xen's memory.
+ An attempt to access domain 0 memory fails.
+
+ (gdb) print pdb_ctx
+ $1 = {valid = 0, domain = -1, process = -1, ptbr = 1052672}
+ (gdb) print hexchars
+ $2 = "0123456789abcdef"
+ (gdb) print cpu_vendor_names
+ Cannot access memory at address 0xc0191f80
+
+3. Now we change to domain 0. In addition to changing pdb_ctx.domain,
+   we need to change pdb_ctx.valid to notify pdb of the change.
+ It is now possible to examine Xen and Linux memory.
+
+ (gdb) set pdb_ctx.domain=0
+ (gdb) set pdb_ctx.valid=1
+ (gdb) print hexchars
+ $3 = "0123456789abcdef"
+ (gdb) print cpu_vendor_names
+ $4 = {0xc0158b46 "Intel", 0xc0158c37 "Cyrix", 0xc0158b55 "AMD",
+ 0xc0158c3d "UMC", 0xc0158c41 "NexGen", 0xc0158c48 "Centaur",
+ 0xc0158c50 "Rise", 0xc0158c55 "Transmeta"}
+
+4. Now change to a process within domain 0. Again, we need to
+ change pdb_ctx.valid in addition to pdb_ctx.process.
+
+ (gdb) set pdb_ctx.process=962
+ (gdb) set pdb_ctx.valid =1
+ (gdb) print pdb_ctx
+ $1 = {valid = 0, domain = 0, process = 962, ptbr = 52998144}
+ (gdb) print aho_a
+ $2 = 20
+
+5. Now we can read the same variable from another process running
+ the same executable in another domain.
+
+ (gdb) set pdb_ctx.domain=1
+ (gdb) set pdb_ctx.process=1210
+ (gdb) set pdb_ctx.valid=1
+ (gdb) print pdb_ctx
+ $3 = {valid = 0, domain = 1, process = 1210, ptbr = 70574080}
+ (gdb) print aho_a
+ $4 = 27
+
+
+
+
+Changes
+-------
+
+04.02.05 aho creation
+04.03.31 aho add description on debugging multiple domains
+++ /dev/null
-Xen Scheduler HOWTO
-===================
-
-by Mark Williamson
-(c) 2004 Intel Research Cambridge
-
-
-Introduction
-------------
-
-Xen offers a choice of CPU schedulers. All available schedulers are
-included in Xen at compile time and the administrator may select a
-particular scheduler using a boot-time parameter to Xen. It is
-expected that administrators will choose the scheduler most
-appropriate to their application and configure the machine to boot
-with that scheduler.
-
-Note: the default scheduler is the Borrowed Virtual Time (BVT)
-scheduler which was also used in previous releases of Xen. No
-configuration changes are required to keep using this scheduler.
-
-This file provides a brief description of the CPU schedulers available
-in Xen, what they are useful for and the parameters that are used to
-configure them. This information is necessarily fairly technical at
-the moment. The recommended way to fully understand the scheduling
-algorithms is to read the relevant research papers.
-
-The interface to the schedulers is basically "raw" at the moment,
-without sanity checking - administrators should be careful when
-setting the parameters since it is possible for a mistake to hang
-domains, or the entire system (in particular, double check parameters
-for sanity and make sure that DOM0 will get enough CPU time to remain
-usable). Note that xc_dom_control.py takes time values in
-nanoseconds.
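-(For example, a 20ms time value is passed as 20000000.)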
-
-Future tools will implement friendlier control interfaces.
-
-
-Borrowed Virtual Time (BVT)
----------------------------
-
-All releases of Xen have featured the BVT scheduler, which is used to
-provide proportional fair shares of the CPU based on weights assigned
-to domains. BVT is "work conserving" - the CPU will never be left
-idle if there are runnable tasks.
-
-BVT uses "virtual time" to make decisions on which domain should be
-scheduled on the processor. Each time a scheduling decision is
-required, BVT evaluates the "Effective Virtual Time" of all domains
-and then schedules the domain with the least EVT. Domains are allowed
-to "borrow" virtual time by "time warping", which reduces their EVT by
-a certain amount, so that they may be scheduled sooner. In order to
-maintain long term fairness, there are limits on when a domain can
-time warp and for how long. [ For more details read the SOSP'99 paper
-by Duda and Cheriton ]
-
-In the Xen implementation, domains time warp when they unblock, so
-that domain wakeup latencies are reduced.
-
-The BVT algorithm uses the following per-domain parameters (set using
-xc_dom_control.py cpu_bvtset):
-
-* mcuadv - the MCU (Minimum Charging Unit) advance determines the
-           proportional share of the CPU that a domain receives. Its
-           value is inversely proportional to the domain's sharing weight.
-* warp - the amount of "virtual time" the domain is allowed to warp
- backwards
-* warpl - the warp limit is the maximum time a domain can run warped for
-* warpu - the unwarp requirement is the minimum time a domain must
- run unwarped for before it can warp again
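-
-As an illustration (the argument order shown here is an assumption -
-run xc_dom_control.py without arguments to check its usage summary),
-these parameters might be set with something like:
-
-# xc_dom_control.py cpu_bvtset 1 10 50000000 100000000 100000000
-
-which would give domain 1 an mcuadv of 10, 50ms of warp, a 100ms warp
-limit and a 100ms unwarp requirement.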
-
-BVT also has the following global parameter (set using
-xc_dom_control.py cpu_bvtslice):
-
-* ctx_allow - the context switch allowance is similar to the "quantum"
- in traditional schedulers. It is the minimum time that
-              a scheduled domain will be allowed to run before being
-              pre-empted. This prevents thrashing of the CPU.
-
-BVT can now be selected by passing the 'sched=bvt' argument to Xen at
-boot-time and is the default scheduler if no 'sched' argument is
-supplied.
-
-Atropos
--------
-
-Atropos is a scheduler originally developed for the Nemesis multimedia
-operating system. Atropos can be used to reserve absolute shares of
-the CPU. It also includes some features to improve the efficiency of
-domains that block for I/O and to allow spare CPU time to be shared
-out.
-
-The Atropos algorithm has the following parameters for each domain
-(set using xc_dom_control.py cpu_atropos_set):
-
- * slice - The length of time per period that a domain is guaranteed.
- * period - The period over which a domain is guaranteed to receive
- its slice of CPU time.
- * latency - The latency hint is used to control how soon after
- waking up a domain should be scheduled.
- * xtratime - This is a true (1) / false (0) flag that specifies whether
- a domain should be allowed a share of the system slack time.
-
-Every domain has an associated period and slice. The domain should
-receive 'slice' nanoseconds every 'period' nanoseconds. This allows
-the administrator to configure both the absolute share of the CPU a
-domain receives and the frequency with which it is scheduled. When
-domains unblock, their period is reduced to the value of the latency
-hint (the slice is scaled accordingly so that they still get the same
-proportion of the CPU). For each subsequent period, the slice and
-period times are doubled until they reach their original values.
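-
-For example, a domain with a 100ms period and a 10ms slice that
-specifies a latency hint of 25ms will, on unblocking, be scheduled
-with a 25ms period and a 2.5ms slice (the same 10% share); these then
-double to 50ms / 5ms and finally return to the original 100ms / 10ms.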
-
-Atropos is selected by adding 'sched=atropos' to Xen's boot-time
-arguments.
-
-Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
-more CPU than is available - the utilisation should be kept to
-slightly less than 100% in order to ensure predictable behaviour).
-
-Round-Robin
------------
-
-The Round-Robin scheduler is provided as a simple example of Xen's
-internal scheduler API. For production systems, one of the other
-schedulers should be used, since they are more flexible and more
-efficient.
-
-The Round-robin scheduler has one global parameter (set using
-xc_dom_control.py cpu_rrobin_slice):
-
- * rr_slice - The time for which each domain runs before the next
- scheduling decision is made.
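-
-For example (again treating the argument order as an assumption to be
-checked against the script's usage summary), a 50ms slice would be set
-with:
-
-# xc_dom_control.py cpu_rrobin_slice 50000000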
-
-The Round-Robin scheduler can be selected by adding 'sched=rrobin' to
-Xen's boot-time arguments.
+++ /dev/null
-Virtual Block Devices / Virtual Disks in Xen - HOWTO
-====================================================
-
-HOWTO for Xen 1.2
-
-Mark A. Williamson (mark.a.williamson@intel.com)
-(C) Intel Research Cambridge 2004
-
-Introduction
-------------
-
-This document describes the new Virtual Block Device (VBD) and Virtual Disk
-features available in Xen release 1.2. First, a brief introduction to some
-basic disk concepts on a Xen system:
-
-Virtual Block Devices (VBDs):
- VBDs are the disk abstraction provided by Xen. All XenoLinux disk accesses
- go through the VBD driver. Using the VBD functionality, it is possible
- to selectively grant domains access to portions of the physical disks
- in the system.
-
- A virtual block device can also consist of multiple extents from the
- physical disks in the system, allowing them to be accessed as a single
- uniform device from the domain with access to that VBD. The
- functionality is somewhat similar to that underpinning LVM, since
- you can combine multiple regions from physical devices into a single
- logical device, from the point of view of a guest virtual machine.
-
- Everyone who boots Xen / XenoLinux from a hard drive uses VBDs
- but for some uses they can almost be ignored.
-
-Virtual Disks (VDs):
- VDs are an abstraction built on top of the functionality provided by
- VBDs. The VD management code maintains a "free pool" of disk space on
- the system that has been reserved for use with VDs. The tools can
- automatically allocate collections of extents from this free pool to
- create "virtual disks" on demand.
-
- VDs can then be used just like normal disks by domains. VDs appear
- just like any other disk to guest domains, since they use the same VBD
- abstraction, as provided by Xen.
-
- Using VDs is optional, since it's always possible to dedicate
- partitions, or entire disks to your virtual machines. VDs are handy
- when you have a dynamically changing set of virtual machines and you
- don't want to have to keep repartitioning in order to provide them with
- disk space.
-
- Virtual Disks are rather like "logical volumes" in LVM.
-
-If that didn't all make sense, it doesn't matter too much ;-) Using the
-functionality is fairly straightforward and some examples will clarify things.
-The text below expands a bit on the concepts involved, finishing up with a
-walk-through of some simple virtual disk management tasks.
-
-
-Virtual Block Devices
----------------------
-
-Before covering VD management, it's worth discussing some aspects of the VBD
-functionality that will be useful to know.
-
-A VBD is made up of a number of extents from physical disk devices. The
-extents for a VBD don't have to be contiguous, or even on the same device. Xen
-performs address translation so that they appear as a single contiguous
-device to a domain.
-
-When the VBD layer is used to give access to entire drives or entire
-partitions, the VBDs simply consist of a single extent that corresponds to the
-drive or partition used. Lists of extents are usually only used when virtual
-disks (VDs) are being used.
-
-Xen 1.2 and its associated XenoLinux release support automatic registration /
-removal of VBDs. It has always been possible to add a VBD to a running
-XenoLinux domain but it was then necessary to run the "xen_vbd_refresh" tool in
-order for the new device to be detected. Nowadays, when a VBD is added, the
-domain it's added to automatically registers the disk, with no special action
-by the user being required.
-
-Note that it is possible to use the VBD functionality to allow multiple domains
-write access to the same areas of disk. This is almost always a bad thing!
-The provided example scripts for creating domains do their best to check that
-disk areas are not shared unsafely and will catch many cases of this. Setting
-the vbd_expert variable in config files for xc_dom_create.py controls how
-unsafe it allows VBD mappings to be - 0 (read only sharing allowed) should be
-right for most people ;-). Level 1 attempts to allow at most one writer to any
-area of disk. Level 2 allows multiple writers (i.e. anything!).
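-
-For example, adding the line:
-
-    vbd_expert = 1
-
-to a domain's config file would allow at most one writer to any area
-of disk.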
-
-
-Virtual Disk Management
------------------------
-
-The VD management code runs entirely in user space. The code is written in
-Python and can therefore be accessed from custom scripts, as well as from the
-convenience scripts provided. The underlying VD database is a SQLite database
-in /var/db/xen_vdisks.sqlite.
-
-Most virtual disk management can be performed using the xc_vd_tool.py script
-provided in the tools/examples/ directory of the source tree. It supports the
-following operations:
-
-initialise - "Formats" a partition or disk device for storing
- virtual disks. This does not actually write data to the
- specified device. Rather, it adds the device to the VD
- free-space pool, for later allocation.
-
- You should only add devices that correspond directly to
- physical disks / partitions - trying to use a VBD that you
- have created yourself as part of the free space pool has
- undefined (possibly nasty) results.
-
-create - Creates a virtual disk of specified size by allocating space
- from the free space pool. The virtual disk is identified
- in future by the unique ID returned by this script.
-
- The disk can be given an expiry time, if desired. For
- most users, the best idea is to specify a time of 0 (which
- has the special meaning "never expire") and then
- explicitly delete the VD when finished with it -
- otherwise, VDs will disappear if allowed to expire.
-
-delete - Explicitly delete a VD. Makes it disappear immediately!
-
-setexpiry - Allows the expiry time of a (not yet expired) virtual disk
- to be modified. Be aware the VD will disappear when the
- time has expired.
-
-enlarge - Increase the allocation of space to a virtual disk.
- Currently this will not be immediately visible to running
- domain(s) using it. You can make it visible by destroying
- the corresponding VBDs and then using xc_dom_control.py to
- add them to the domain again. Note: doing this to
- filesystems that are in use may well cause errors in the
- guest Linux, or even a crash although it will probably be
- OK if you stop the domain before updating the VBD and
- restart afterwards.
-
-import - Allocate a virtual disk and populate it with the contents of
- some disk file. This can be used to import root file system
- images or to restore backups of virtual disks, for instance.
-
-export - Write the contents of a virtual disk out to a disk file.
- Useful for creating disk images for use elsewhere, such as
- standard root file systems and backups.
-
-list - List the non-expired virtual disks currently available in the
- system.
-
-undelete - Attempts to recover an expired (or deleted) virtual disk.
-
-freespace - Get the free space (in megabytes) available for allocating
- new virtual disk extents.
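-
-For example, to check the space available in the free pool and to list
-the current virtual disks (neither operation should need further
-arguments):
-
-> xc_vd_tool.py freespace
-> xc_vd_tool.py list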
-
-The functionality provided by these scripts is also available directly from
-Python functions in the xenctl.utils module - you can use this functionality in
-your own scripts.
-
-Populating VDs:
-
-Once you've created a VD, you might want to populate it from DOM0 (for
-instance, to put a root file system onto it for a guest domain). This can be
-done by creating a VBD for dom0 to access the VD through - this is discussed
-below.
-
-More detail on how virtual disks work:
-
-When you "format" a device for virtual disks, the device is logically split up
-into extents. These extents are recorded in the Virtual Disk Management
-database in /var/db/xen_vdisks.sqlite.
-
-When you use xc_vd_tool.py to create a virtual disk, some of the extents in
-the free space pool are reallocated for that virtual disk and a record for that
-VD is added to the database. When VDs are mapped into domains as VBDs, the
-system looks up the allocated extents for the virtual disk in order to set up
-the underlying VBD.
-
-Free space is identified by the fact that it belongs to an "expired" disk.
-When "initialising" with xc_vd_tool.py adds a real device to the free pool, it
-actually divides the device into extents and adds them to an already-expired
-virtual disk. The allocated device is not written to during this operation -
-its availability is simply recorded into the virtual disks database.
-
-If you set an expiry time on a VD, its extents will be liable to be reallocated
-to new VDs as soon as that expiry time runs out. Therefore, be careful when
-setting expiry times! Many users will find it simplest to set all VDs to not
-expire automatically, then explicitly delete them later on.
-
-Deleted / expired virtual disks may sometimes be undeleted - currently this
-only works when none of the virtual disk's extents have been reallocated to
-other virtual disks, since that's the only situation where the disk is likely
-to be fully intact. You should try undeletion as soon as you realise you've
-mistakenly deleted (or allowed to expire) a virtual disk. At some point in the
-future, an "unsafe" undelete which can recover what remains of partially
-reallocated virtual disks may also be implemented.
-
-Security note:
-
-The disk space for VDs is not zeroed when it is initially added to the free
-space pool OR when a VD expires OR when a VD is created. Therefore, if this is
-not done manually it is possible for a domain to read a VD to determine what
-was written by previous owners of its constituent extents. If this is a
-problem, users should manually clean VDs in some way either on allocation, or
-just before deallocation (automated support for this may be added at a later
-date).
-
-
-Side note: The xvd* devices
----------------------------
-
-The examples in this document make frequent use of the xvd* device nodes for
-representing virtual block devices. It is not a requirement to use these with
-Xen, since VBDs can be mapped to any IDE or SCSI device node in the system.
-Changing the references to xvd* nodes in the examples below to refer to
-some unused hd* or sd* node would be equally valid.
-
-They can be useful when accessing VBDs from dom0, since binding VBDs to
-xvd* devices will avoid clashes with real IDE or SCSI drives.
-
-There is a shell script provided in tools/misc/xen-mkdevnodes to create these
-nodes. Specify on the command line the directory that the nodes should be
-placed under (e.g. /dev):
-
-> cd {root of Xen source tree}/tools/misc/
-> ./xen-mkdevnodes /dev
-
-
-Dynamically Registering VBDs
-----------------------------
-
-The domain control tool (xc_dom_control.py) includes the ability to add and
-remove VBDs to / from running domains. As usual, the command format is:
-
-xc_dom_control.py [operation] [arguments]
-
-The operations (and their arguments) are as follows:
-
-vbd_add dom uname dev mode - Creates a VBD corresponding to either a physical
- device or a virtual disk and adds it as a
- specified device under the target domain, with
- either read or write access.
-
-vbd_remove dom dev - Removes the VBD associated with a specified device
- node from the target domain.
-
-These scripts are most useful when populating VDs. VDs can't be populated
-directly, since they don't correspond to real devices. Using:
-
- xc_dom_control.py vbd_add 0 vd:your_vd_id /dev/whatever w
-
-you can make a virtual disk available to DOM0. Sensible devices to map VDs to
-in DOM0 are the /dev/xvd* nodes, since that makes it obvious that they are Xen
-virtual devices that don't correspond to real physical devices.
-
-You can then format, mount and populate the VD through the nominated device
-node. When you've finished, use:
-
- xc_dom_control.py vbd_remove 0 /dev/whatever
-
-to revoke DOM0's access to it. It's then ready for use in a guest domain.
-
-
-
-You can also use this functionality to grant access to a physical device to a
-guest domain - you might use this to temporarily share a partition, or to add
-access to a partition that wasn't granted at boot time.
-
-When playing with VBDs, remember that in general, it is only safe for two
-domains to have access to a file system if they both have read-only access. You
-shouldn't be trying to share anything which is writable, even if only by one
-domain, unless you're really sure you know what you're doing!
-
-
-Granting access to real disks and partitions
---------------------------------------------
-
-During the boot process, Xen automatically creates a VBD for each physical disk
-and gives Dom0 read / write access to it. This makes it look like Dom0 has
-normal access to the disks, just as if Xen wasn't being used - in reality, even
-Dom0 talks to disks through Xen VBDs.
-
-To give another domain access to a partition or whole disk, you need to
-create a corresponding VBD for that partition, for use by that domain. As for
-virtual disks, you can grant access to a running domain, or specify that the
-domain should have access when it is first booted.
-
-To grant access to a physical partition or disk whilst a domain is running, use
-the xc_dom_control.py script - the usage is very similar to the case of adding
-access to virtual disks for a running domain (described above). Specify the device
-as "phy:device", where device is the name of the device as seen from domain 0,
-or from normal Linux without Xen. For instance:
-
-> xc_dom_control.py vbd_add 2 phy:hdc /dev/whatever r
-
-will grant domain 2 read-only access to the device /dev/hdc (as seen from Dom0
-/ normal Linux running on the same machine - i.e. the master drive on the
-secondary IDE chain), as /dev/whatever in the target domain.
-
-Note that you can use this within domain 0 to map disks / partitions to other
-device nodes within domain 0. For instance, you could map /dev/hda to also be
-accessible through /dev/xvda. This is not generally recommended, since if you
-(for instance) mount both device nodes read / write you could cause corruption
-to the underlying filesystem. It's also quite confusing ;-)
-
-To grant a domain access to a partition or disk when it boots, the appropriate
-VBD needs to be created before the domain is started. This can be done very
-easily using the tools provided. To specify this to the xc_dom_create.py tool
-(either in a startup script or on the command line) use triples of the format:
-
- phy:dev,target_dev,perms
-
-Where dev is the device name as seen from Dom0, target_dev is the device you
-want it to appear as in the target domain and perms is 'w' if you want to give
-write privileges, or 'r' otherwise.
-
-These may either be specified on the command line or in an initialisation
-script. For instance, to grant the same access rights as described by the
-command example above, you would use the triple:
-
- phy:hdc,/dev/whatever,r
-
-If you are using a config file, then you should add this triple into the
-vbd_list variable, for instance using the line:
-
-    vbd_list = [ ('phy:hdc', '/dev/whatever', 'r') ]
-
-(Note that you need to use quotes here, since config files are really small
-Python scripts.)
-
-To specify the mapping on the command line, you'd use the -d switch and supply
-the triple as the argument, e.g.:
-
-> xc_dom_create.py [other arguments] -d phy:hdc,/dev/whatever,r
-
-(You don't need to explicitly quote things in this case.)
-
-
-Walk-through: Booting a domain from a VD
-----------------------------------------
-
-As an example, here is a sequence of commands you might use to create a virtual
-disk, populate it with a root file system and boot a domain from it. These
-steps assume that you've installed the example scripts somewhere on your PATH -
-if you haven't done that, you'll need to specify a fully qualified pathname in
-the examples below. It is also assumed that you know how to use the
-xc_dom_create.py tool (apart from configuring virtual disks!)
-
-[ This example is intended only for users of virtual disks (VDs). You don't
-need to follow this example if you'll be booting a domain from a dedicated
-partition, since you can create that partition and populate it, directly from
-Dom0, as normal. ]
-
-First, if you haven't done so already, you'll initialise the free space pool by
-adding a real partition to it. The details are stored in the database, so
-you'll only need to do it once. You can also use this command to add further
-partitions to the existing free space pool.
-
-> xc_vd_tool.py format /dev/<real partition>
-
-Now you'll want to allocate the space for your virtual disk. Do so using the
-following, specifying the size in megabytes.
-
-> xc_vd_tool.py create <size in megabytes>
-
-At this point, the program will tell you the virtual disk ID. Note it down, as
-it is how you will identify the virtual device in future.
-
-If you don't want the VD to be bootable (i.e. you're booting a domain from some
-other medium and just want it to be able to access this VD), you can simply add
-it to the vbd_list used by xc_dom_create.py, either by putting it in a config
-file or by specifying it on the command line. Formatting / populating of the
-VD could then be done from that domain once it's started.
-
-If you want to boot off your new VD as well then you need to populate it with a
-standard Linux root filesystem. You'll need to temporarily add the VD to DOM0
-in order to do this. To give DOM0 r/w access to the VD, use the following
-command line, substituting the ID you got earlier.
-
-> xc_dom_control.py vbd_add 0 vd:<id> /dev/xvda w
-
-This attaches the VD to the device /dev/xvda in domain zero, with read / write
-privileges - you can use other device nodes if you choose to.
-
-Now make a filesystem on this device, mount it and populate it with a root
-filesystem. These steps are exactly the same as under normal Linux. When
-you've finished, unmount the filesystem again.
-
-You should now remove the VD from DOM0. This will prevent you accidentally
-changing it in DOM0, whilst the guest domain is using it (which could cause
-filesystem corruption, and confuse Linux).
-
-> xc_dom_control.py vbd_remove 0 /dev/xvda
-
-It should now be possible to boot a guest domain from the VD. To do this, you
-should specify the VD's details in some way so that xc_dom_create.py will
-be able to set up the corresponding VBD for the domain to access. If you're
-using a config file, you should include:
-
- ('vd:<id>', '/dev/whatever', 'w')
-
-In the vbd_list, substituting the appropriate virtual disk ID, device node and
-read / write setting.
-
-To specify access on the command line, as you start the domain, you would use
-the -d switch (note that you don't need to use quote marks here):
-
-> xc_dom_create.py [other arguments] -d vd:<id>,/dev/whatever,w
-
-To tell Linux which device to boot from, you should either include:
-
- root=/dev/whatever
-
-in your cmdline_root in the config file, or specify it on the command line,
-using the -R option:
-
-> xc_dom_create.py [other arguments] -R root=/dev/whatever
-
-That should be it: sit back and watch your domain boot off its virtual disk!
-
-
-Getting help
-------------
-
-The main source of help using Xen is the developer's e-mail list:
-<xen-devel@lists.sourceforge.net>. The developers will help with problems,
-listen to feature requests and do bug fixes. It is, however, helpful if you
-can look through the mailing list archives and HOWTOs provided to make sure
-your question is not answered there. If you post to the list, please provide
-as much information as possible about your setup and your problem.
-
-There is also a general Xen FAQ, kindly started by Jan van Rensburg, which (at
-time of writing) is located at: <http://xen.epiuse.com/xen-faq.txt>.
-
-Contributing
-------------
-
-Patches and extra documentation are also welcomed ;-) and should also be posted
-to the xen-devel e-mail list.
+++ /dev/null
-###########################################
-Xen HOWTO
-
-University of Cambridge Computer Laboratory
-
-http://www.cl.cam.ac.uk/netos/xen
-#############################
-
-
-Get Xen Source Code
-=============================
-
-The public master BK repository for the 1.2 release lives at:
-'bk://xen.bkbits.net/xeno-1.2.bk'
-
-To fetch a local copy, first download the BitKeeper tools at:
-http://www.bitmover.com/download with username 'bitkeeper' and
-password 'get bitkeeper'.
-
-Then install the tools and run:
-# bk clone bk://xen.bkbits.net/xeno-1.2.bk
-
-Under your current directory, a new directory named 'xeno-1.2.bk' has
-been created, which contains all the necessary source code for the
-Xen hypervisor and Linux guest OSes.
-
-To get the newest changes to the repository, run
-# cd xeno-1.2.bk
-# bk pull
-
-
-Build Xen
-=============================
-
-Hint: To see how to build Xen and all the control tools, inspect the
-tools/misc/xen-clone script in the BK repository. This script can be
-used to clone the repository and perform a full build.
-
-To build Xen manually:
-
-# cd xeno-1.2.bk/xen
-# make clean
-# make
-
-This will (should) produce a file called 'xen' in the current
-directory. This is the ELF 32-bit LSB executable of Xen. You
-will also find a gzipped version, named 'xen.gz'.
-
-To install the built files on your server under /usr, type 'make
-install' at the root of the BK repository. You will need to be root to
-do this!
-
-Hint: There is also a 'make dist' rule which copies built files to an
-install directory just outside the BK repo; if this suits your setup,
-go for it.
-
-
-Build Linux as a Xen guest OS
-==============================
-
-This is a little more involved since the repository only contains a
-"sparse" tree -- this is essentially an 'overlay' on a standard linux
-kernel source tree. It contains only those files currently 'in play'
-which are either modified versions of files in the vanilla linux tree,
-or brand new files specific to the Xen port.
-
-So, first you need a vanilla linux-2.4.24 tree, which can be downloaded
-from: http://www.kernel.org/pub/linux/kernel/v2.4
-
-Then:
- # mv linux-2.4.24.tar.gz /xeno-1.2.bk
- # cd /xeno-1.2.bk
- # tar -zxvf linux-2.4.24.tar.gz
-
-You'll find a new directory 'linux-2.4.24' which contains the
-vanilla Linux 2.4.24 kernel source.
-
-Hint: You should choose the vanilla linux kernel tree that has the
-same version as the "sparse" tree.
-
-Next, you need to 'overlay' this sparse tree on the full vanilla Linux
-kernel tree:
-
- # cd /xeno-1.2.bk/xenolinux-2.4.24-sparse
- # ./mkbuildtree ../linux-2.4.24
-
-Finally, rename the buildtree since it is now a 'xenolinux' buildtree.
-
- # cd /xeno-1.2.bk
- # mv linux-2.4.24 xenolinux-2.4.24
-
-Now that the buildtree is there, you can build the xenolinux kernel.
-The default configuration should work fine for most people (use 'make
-oldconfig') but you can customise using one of the other config tools
-if you want.
-
- # cd /xeno-1.2.bk/xenolinux-2.4.24
- # ARCH=xen make oldconfig { or menuconfig, or xconfig, or config }
- # ARCH=xen make dep
- # ARCH=xen make bzImage
-
-Assuming the build works, you'll end up with
-/xeno-1.2.bk/xenolinux-2.4.24/arch/xen/boot/xenolinux.gz. This is the
-gzipped XenoLinux kernel image.
-
-
-Build the Domain Control Tools
-==============================
-
-Under '/xeno-1.2.bk/tools', there are three sub-directories:
-'balloon', 'xc' and 'misc', each containing a group of tools.
-You can enter any of these sub-directories and type 'make' to
-compile the corresponding group of tools, or type 'make' under
-'/xeno-1.2.bk/tools' to compile them all.
-
-In order to compile the control-interface library in 'xc' you must
-have zlib and its development headers installed. You will also need
-at least Python v2.2.
-
-'make install' in the tools directory will place executables and
-libraries in /usr/bin and /usr/lib. You will need to be root to do this!
-
-As noted earlier, 'make dist' installs files to a local 'install'
-directory just outside the BK repository. These files will then need
-to be installed manually onto the server.
-
-The Example Scripts
-===================
-
-The scripts in tools/examples/ are generally useful for
-administering a Xen-based system. You can install them by running
-'make install' in that directory.
-
-The python scripts (*.py) are the main tools for controlling
-Xen domains.
-
-'defaults' and 'democd' are example configuration files for starting
-new domains.
-
-'xendomains' is a Sys-V style init script for starting and stopping
-Xen domains when the system boots / shuts down.
-
-These will be discussed below in more detail.
-
-
-Installation
-==============================
-
-First:
-# cp /xeno-1.2.bk/xen/xen.gz /boot/xen.gz
-# cp /xeno-1.2.bk/xenolinux-2.4.24/arch/xen/boot/xenolinux.gz /boot/xenolinux.gz
-
-Second, you must have 'GNU Grub' installed. Then you need to edit
-the Grub configuration file '/boot/grub/menu.lst'.
-
-A typical Grub menu option might look like:
-
-title Xen 1.2 / XenoLinux 2.4.24
- kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 noht
- module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
-
-The first line specifies which Xen image to use, and what command line
-arguments to pass to Xen. In this case we set the maximum amount of
-memory to allocate to domain0, and enable serial I/O at 115200 baud.
-We could also disable smp support (nosmp) or disable hyper-threading
-support (noht). If you have multiple network interfaces you can use
-ifname=ethXX to select which one to use. If your network card is
-unsupported, use ifname=dummy.
-
-The second line specifies which XenoLinux image to use, and the
-standard linux command line arguments to pass to the kernel. In this
-case, we're configuring the root partition and stating that it should
-(initially) be mounted read-only (normal practice).
-
-The following is a list of command line arguments to pass to Xen:
-
- ignorebiostables Disable parsing of BIOS-supplied tables. This may
- help with some chipsets that aren't fully supported
- by Xen. If you specify this option then ACPI tables are
- also ignored, and SMP support is disabled.
-
- noreboot Don't reboot the machine automatically on errors.
- This is useful to catch debug output if you aren't
- catching console messages via the serial line.
-
- nosmp Disable SMP support.
- This option is implied by 'ignorebiostables'.
-
- noacpi Disable ACPI tables, which confuse Xen on some chipsets.
- This option is implied by 'ignorebiostables'.
-
- watchdog Enable NMI watchdog which can report certain failures.
-
- noht Disable Hyperthreading.
-
- ifname=ethXX Select which Ethernet interface to use.
-
- ifname=dummy Don't use any network interface.
-
- com1=<baud>,DPS[,<io_base>,<irq>]
- com2=<baud>,DPS[,<io_base>,<irq>]
- Xen supports up to two 16550-compatible serial ports.
- For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
- 9600-baud port, 8 data bits, no parity, 1 stop bit,
- I/O port base 0x408, IRQ 5.
- If the I/O base and IRQ are standard (com1:0x3f8,4;
- com2:0x2f8,3) then they need not be specified.
-
- console=<specifier list>
- Specify the destination for Xen console I/O.
- This is a comma-separated list of, for example:
- vga: use VGA console and allow keyboard input
- com1: use serial port com1
- com2H: use serial port com2. Transmitted chars will
- have the MSB set. Received chars must have
- MSB set.
- com2L: use serial port com2. Transmitted chars will
- have the MSB cleared. Received chars must
- have MSB cleared.
- The latter two examples allow a single port to be
- shared by two subsystems (eg. console and
- debugger). Sharing is controlled by MSB of each
- transmitted/received character.
- [NB. Default for this option is 'com1,tty']
-
- dom0_mem=xxx Set the maximum amount of memory for domain0.
-
- tbuf_size=xxx Set the size of the per-cpu trace buffers, in pages
- (default 1). Note that the trace buffers are only
- enabled in debug builds. Most users can ignore
- this feature completely.
-
- sched=xxx Select the CPU scheduler Xen should use. The current
- possibilities are 'bvt', 'atropos' and 'rrobin'. The
- default is 'bvt'. For more information see
- Sched-HOWTO.txt.
-
-Boot into Domain 0
-==============================
-
-Reboot your computer. After selecting the kernel to boot, stand back
-and watch Xen boot, closely followed by "domain 0" running the
-XenoLinux kernel. Depending on which root partition you have assigned
-to the XenoLinux kernel in the Grub configuration file, you can use the
-corresponding username / password to log in.
-
-Once logged in, it should look just like any regular linux box. All
-the usual tools and commands should work as usual.
-
-
-Start New Domains
-==============================
-
-You must be 'root' to start new domains.
-
-Make sure you have successfully configured at least one
-physical network interface. Then:
-
-# xen_nat_enable
-
-The xc_dom_create.py program is useful for starting Xen domains.
-You can specify configuration files using the -f switch on the command
-line. The default configuration is in /etc/xc/defaults. You can
-create custom versions of this to suit your local configuration.
-
-You can override the settings in a configuration file using command
-line arguments to xc_dom_create.py. However, you may find it simplest
-to create a separate configuration file for each domain you start.
-
-xc_dom_create.py will print the local TCP port to which you should
-connect to perform console I/O. A suitable console client is provided
-by the Python module xenctl.console_client: running this module from
-the command line with <host> and <port> parameters will start a
-terminal session. This module is also installed as /usr/bin/xencons,
-from a copy in tools/misc/xencons. An alternative to manually running
-a terminal client is to specify '-c' to xc_dom_create.py, or add
-'auto_console=True' to the defaults file. This will cause
-xc_dom_create.py to automatically become the console terminal after
-starting the domain.
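-
-For example, to start a domain from a custom configuration file and
-attach to its console immediately (the file name here is only an
-illustration):
-
-# xc_dom_create.py -f /etc/xc/mydomain -c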
-
-Boot-time output will be directed to this console by default, because
-the console name is tty0. It is also possible to log in via the
-virtual console --- once again, your normal startup scripts will work
-as normal (e.g., by running mingetty on tty1-7). The device node to
-which the virtual console is attached can be configured by specifying
-'xencons=' on the OS command line:
- 'xencons=off' --> disable virtual console
- 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
- 'xencons=ttyS' --> attach console to /dev/ttyS0
-
-
-Manage Running Domains
-==============================
-
-You can see a list of existing domains with:
-# xc_dom_control.py list
-
-In order to stop a domain, you use:
-# xc_dom_control.py stop <domain_id>
-
-To shutdown a domain cleanly use:
-# xc_dom_control.py shutdown <domain_id>
-
-To destroy a domain immediately:
-# xc_dom_control.py destroy <domain_id>
-
-There are other more advanced options, including pinning domains to
-specific CPUs and saving / resuming domains to / from disk files. To
-get more information, run the tool without any arguments:
-# xc_dom_control.py
-
-There is more information available in the Xen README files, the
-VBD-HOWTO and the contributed FAQ / HOWTO documents on the web.
-
-
-Other Control Tasks using Python
-================================
-
-A Python module 'Xc' is installed as part of the tools-install
-process. This can be imported, and an 'xc object' instantiated, to
-provide access to privileged command operations:
-
-# import Xc
-# xc = Xc.new()
-# dir(xc)
-# help(xc.domain_create)
-
-In this way you can see that the class 'xc' contains useful
-documentation for you to consult.
-
-A further package of useful routines (xenctl) is also installed:
-
-# import xenctl.utils
-# help(xenctl.utils)
-
-You can use these modules to write your own custom scripts or you can
-customise the scripts supplied in the Xen distribution.
-
-
-Automatically start / stop domains at boot / shutdown
-=====================================================
-
-A Sys-V style init script for RedHat systems is provided in
-tools/examples/xendomains. When you run 'make install' in that
-directory, it should be automatically copied to /etc/init.d/. You can
-then enable it using the chkconfig command, e.g.:
-
-# chkconfig --add xendomains
-
-By default, this will start the boot-time domains in runlevels 3, 4
-and 5. To specify that a domain should start at boot-time, place its
-configuration file (or a link to it) under /etc/xc/auto/.
-
-The script will also stop ALL domains when the system is shut down,
-even domains that it did not start originally.
-
-You can also use the "service" command (part of the RedHat standard
-distribution) to run this script manually, e.g.:
-
-# service xendomains start
-
-Starts all the domains with config files under /etc/xc/auto/.
-
-# service xendomains stop
-
-Shuts down ALL running Xen domains.
+++ /dev/null
-Pervasive Debugging
-===================
-
-Alex Ho (alex.ho at cl.cam.ac.uk)
-
-Introduction
-------------
-
-The pervasive debugging project leverages Xen to
-debug distributed systems. We have added a gdb stub
-to Xen to allow for remote debugging of both Xen and
-guest operating systems. More information about the
-pervasive debugger is available at: http://www.cl.cam.ac.uk/netos/pdb
-
-
-Implementation
---------------
-
-The gdb stub communicates with gdb running over a serial line.
-The main entry point is pdb_handle_exception() which is invoked
-from: pdb_key_pressed() ('D' on the console)
- do_int3_exception() (interrupt 3: breakpoint exception)
- do_debug() (interrupt 1: debug exception)
-
-This accepts characters from the serial port and passes gdb
-commands to pdb_process_command() which implements the gdb stub
-interface. This file draws heavily from the kgdb project and the
-sample gdbstub provided with gdb.
-
-The stub can examine registers, single step and continue, and
-read and write memory (in Xen, a domain, or a Linux process'
-address space). The debugger does not currently trace the
-current process, so all bets are off if a context switch occurs
-in the domain.
-
-
-Setup
------
-
- +-------+ telnet +-----------+ serial +-------+
- | GDB |--------| nsplitd |--------| Xen |
- +-------+ +-----------+ +-------+
-
-To run pdb, Xen must be appropriately configured and
-a suitable serial interface attached to the target machine.
-GDB and nsplitd can run on the same machine.
-
-Xen Configuration
-
- Add the "pdb=xxx" option to your Xen boot command line
- where xxx is one of the following values:
- com1 gdb stub should communicate on com1
- com1H gdb stub should communicate on com1 (with high bit set)
- com2 gdb stub should communicate on com2
- com2H gdb stub should communicate on com2 (with high bit set)
-
- Symbolic debugging information is quite helpful too:
- xeno.bk/xen/arch/i386/Rules.mk
- add -g to CFLAGS to compile Xen with symbols
- xeno.bk/xenolinux-2.4.24-sparse/arch/xen/Makefile
- add -g to CFLAGS to compile Linux with symbols
-
- You may also want to consider dedicating a register to the
- frame pointer (disable the -fomit-frame-pointer compile flag).
-
- When booting Xen and domain 0, look for the console text
- "Initializing pervasive debugger (PDB)" just before DOM0 starts up.
-
-Serial Port Configuration
-
- pdb expects to communicate with gdb using the serial port. Since
- this port is often shared with the machine's console output, pdb can
- discriminate its communication by setting the high bit of each byte.
-
- A new tool has been added to the source tree which splits
- the serial output from a remote machine into two streams:
- one stream (without the high bit) is the console and
- one stream (with the high bit stripped) is the pdb communication.
-
- See: xeno.bk/tools/nsplitd
-
- nsplitd configuration
- ---------------------
- hostname$ more /etc/xinetd.d/nsplit
- service nsplit1
- {
- socket_type = stream
- protocol = tcp
- wait = no
- user = wanda
- server = /usr/sbin/in.nsplitd
- server_args = serial.cl.cam.ac.uk:wcons00
- disable = no
- only_from = 128.232.0.0/17 127.0.0.1
- }
-
- hostname$ egrep 'wcons00|nsplit1' /etc/services
- wcons00 9600/tcp # Wanda remote console
- nsplit1 12010/tcp # Nemesis console splitter ports.
-
- Note: nsplitd was originally written for the Nemesis project
- at Cambridge.
-
- After nsplitd accepts a connection on <port> (12010 in the above
- example), it starts listening on port <port + 1>. Characters sent
- to the <port + 1> will have the high bit set and vice versa for
- characters received.
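-
- For example, with the xinetd configuration shown above, the console
- is reached on port 12010 and gdb should connect to port 12011.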
-
- You can connect to the nsplitd using
- 'tools/xenctl/lib/console_client.py <host> <port>'
-
-GDB 6.0
- pdb has been tested with gdb 6.0. It should also work with
- earlier versions.
-
-
-Usage
------
-
-1. Boot Xen and Linux
-2. Interrupt Xen by pressing 'D' at the console
- You should see the console message:
- (XEN) pdb_handle_exception [0x88][0x101000:0xfc5e72ac]
- At this point Xen is frozen and the pdb stub is waiting for gdb commands
- on the serial line.
-3. Attach with gdb
- (gdb) file xeno.bk/xen/xen
- Reading symbols from xeno.bk/xen/xen...done.
- (gdb) target remote <hostname>:<port + 1> /* contact nsplitd */
- Remote debugging using serial.srg:12131
- continue_cpu_idle_loop () at current.h:10
- warning: shared library handler failed to enable breakpoint
- (gdb) break __enter_scheduler
- Breakpoint 1 at 0xfc510a94: file schedule.c, line 330.
- (gdb) cont
- Continuing.
-
- Program received signal SIGTRAP, Trace/breakpoint trap.
- __enter_scheduler () at schedule.c:330
- (gdb) step
- (gdb) step
- (gdb) print next /* the variable prev has been optimized away! */
- $1 = (struct task_struct *) 0x0
- (gdb) delete
- Delete all breakpoints? (y or n) y
-4. You can add additional symbols to gdb
- (gdb) add-sym xenolinux-2.4.24/vmlinux
- add symbol table from file "xenolinux-2.4.24/vmlinux" at
- (y or n) y
- Reading symbols from xenolinux-2.4.24/vmlinux...done.
- (gdb) x/s cpu_vendor_names[0]
- 0xc01530d2 <cpdext+62898>: "Intel"
- (gdb) break free_uid
- Breakpoint 2 at 0xc0012250
- (gdb) cont
- Continuing. /* run a command in domain 0 */
-
- Program received signal SIGTRAP, Trace/breakpoint trap.
- free_uid (up=0xbffff738) at user.c:77
-
- (gdb) print *up
- $2 = {__count = {counter = 0}, processes = {counter = 135190120}, files = {
- counter = 0}, next = 0x395, pprev = 0xbffff878, uid = 134701041}
- (gdb) finish
- Run till exit from #0 free_uid (up=0xbffff738) at user.c:77
-
- Program received signal SIGTRAP, Trace/breakpoint trap.
- release_task (p=0xc2da0000) at exit.c:51
- (gdb) print *p
- $3 = {state = 4, flags = 4, sigpending = 0, addr_limit = {seg = 3221225472},
- exec_domain = 0xc016a040, need_resched = 0, ptrace = 0, lock_depth = -1,
- counter = 1, nice = 0, policy = 0, mm = 0x0, processor = 0,
- cpus_runnable = 1, cpus_allowed = 4294967295, run_list = {next = 0x0,
- prev = 0x0}, sleep_time = 18995, next_task = 0xc017c000,
- prev_task = 0xc2f94000, active_mm = 0x0, local_pages = {next = 0xc2da0054,
- prev = 0xc2da0054}, allocation_order = 0, nr_local_pages = 0,
- ...
-5. To resume Xen, enter the "continue" command to gdb.
- This sends the packet $c#63 along the serial channel.
-
- (gdb) cont
- Continuing.
-
-Debugging Multiple Domains & Processes
---------------------------------------
-
-pdb supports debugging multiple domains & processes. You can switch
-between different domains and processes within domains and examine
-variables in each.
-
-The pdb context identifies the current debug target. It is stored
-in the xen variable pdb_ctx and defaults to xen.
-
- target pdb_ctx.domain pdb_ctx.process
- ------ -------------- ---------------
- xen -1 -1
- guest os 0,1,2,... -1
- process 0,1,2,... 0,1,2,...
-
-Unfortunately, gdb doesn't support debugging multiple processes
-simultaneously (we're working on it), so at present you are limited
-to just one set of symbols for symbolic debugging. When debugging
-processes, pdb currently supports just Linux 2.4.
-
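-For convenience, the symbol files can be loaded in one step with a
-user-defined gdb command, such as the following:
-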
- define setup
- file xeno-clone/xeno.bk/xen/xen
- add-sym xeno-clone/xenolinux-2.4.25/vmlinux
- add-sym ~ach61/a.out
- end
-
-
-1. Connect with gdb as before. A couple of Linux-specific
- symbols need to be defined.
-
- (gdb) target remote <hostname>:<port + 1> /* contact nsplitd */
- Remote debugging using serial.srg:12131
- continue_cpu_idle_loop () at current.h:10
- warning: shared library handler failed to enable breakpoint
- (gdb) set pdb_pidhash_addr = &pidhash
- (gdb) set pdb_init_task_union_addr = &init_task_union
-
-2. The pdb context defaults to Xen and we can read Xen's memory.
- An attempt to access domain 0 memory fails.
-
- (gdb) print pdb_ctx
- $1 = {valid = 0, domain = -1, process = -1, ptbr = 1052672}
- (gdb) print hexchars
- $2 = "0123456789abcdef"
- (gdb) print cpu_vendor_names
- Cannot access memory at address 0xc0191f80
-
-3. Now we change to domain 0. In addition to changing pdb_ctx.domain,
-   we need to change pdb_ctx.valid to notify pdb of the change.
- It is now possible to examine Xen and Linux memory.
-
- (gdb) set pdb_ctx.domain=0
- (gdb) set pdb_ctx.valid=1
- (gdb) print hexchars
- $3 = "0123456789abcdef"
- (gdb) print cpu_vendor_names
- $4 = {0xc0158b46 "Intel", 0xc0158c37 "Cyrix", 0xc0158b55 "AMD",
- 0xc0158c3d "UMC", 0xc0158c41 "NexGen", 0xc0158c48 "Centaur",
- 0xc0158c50 "Rise", 0xc0158c55 "Transmeta"}
-
-4. Now change to a process within domain 0. Again, we need to
- change pdb_ctx.valid in addition to pdb_ctx.process.
-
- (gdb) set pdb_ctx.process=962
- (gdb) set pdb_ctx.valid =1
- (gdb) print pdb_ctx
- $1 = {valid = 0, domain = 0, process = 962, ptbr = 52998144}
- (gdb) print aho_a
- $2 = 20
-
-5. Now we can read the same variable from another process running
- the same executable in another domain.
-
- (gdb) set pdb_ctx.domain=1
- (gdb) set pdb_ctx.process=1210
- (gdb) set pdb_ctx.valid=1
- (gdb) print pdb_ctx
- $3 = {valid = 0, domain = 1, process = 1210, ptbr = 70574080}
- (gdb) print aho_a
- $4 = 27
-
-
-
-
-Changes
--------
-
-04.02.05 aho creation
-04.03.31 aho add description on debugging multiple domains