The Mosix HOWTO

Kris Buytaert

Revision History
Revision v0.15, 13 March 2002
Revision v0.13, 18 Feb 2002
Revision ALPHA 0.03, 09 October 2001

Table of Contents
1. Introduction
1.1. Introduction
1.2. Disclaimer
1.3. Distribution policy
1.4. New versions of this document
1.5. Feedback
2. So what is Mosix anyway?
2.1. A very, very brief introduction to clustering
2.2. The story so far
2.3. Mosix in action: An example
2.4. Components
2.5. Work in Progress
3. Features of Mosix
3.1. Pros of Mosix
3.2. Cons of Mosix
3.3. Extra Features in OpenMosix
4. Requirements and Planning
4.1. Hardware requirements
4.2. Hardware Setup Guidelines
4.3. Software requirements
4.4. Planning your cluster
5. Distribution specific installations
5.1. Installing Mosix
5.2. Getting Mosix
5.3. Getting OpenMosix
5.4. openMosix General Instructions
5.5. RedHat
5.6. Suse 7.1 and Mosix
5.7. Debian and Mosix
5.8. Other distributions
6. Cluster Installation
6.1. Cluster Installations
6.2. Installation scripts [LUI, ALINKA]
6.3. The easy way: Automatic installation
6.4. The hard way: When scripts don't work
6.5. Kick Start Installations
6.6. DSH, Distributed Shell
7. ClumpOS
7.1. What is Clump/OS
7.2. How does it work
7.3. Requirements
7.4. Getting Started
7.5. Problems ?
7.6. Expert Mode
8. Administrating openMosix
8.1. Basic Administration
8.2. Configuration
8.3. Information about the other nodes
8.4. Additional information about processes
8.5. The userspace tools
9. Tuning Mosix
9.1. Optimising Mosix
9.2. Where to place your files
10. Special Cases
10.1. Laptops and PCMCIA Cards
10.2. Diskless nodes
10.3. Very large clusters
11. Common Problems
11.1. My processes won't migrate
11.2. setpe reports
11.3. I don't see all my nodes
12. Other Programs
12.1. mexec
12.2. mosixview
12.3. mpi
12.4. mps
12.5. pmake
12.6. pvm
12.7. qps
13. Hints and Tips
13.1. Locked Processes
13.2. Choosing your processes
A. More Info
A.1. Further Reading
A.2. Links
A.3. Supporting Mosix
B. Credits
C. GNU Free Documentation License
0. PREAMBLE
1. APPLICABILITY AND DEFINITIONS
2. VERBATIM COPYING
3. COPYING IN QUANTITY
4. MODIFICATIONS
5. COMBINING DOCUMENTS
6. COLLECTIONS OF DOCUMENTS
7. AGGREGATION WITH INDEPENDENT WORKS
8. TRANSLATION
9. TERMINATION
10. FUTURE REVISIONS OF THIS LICENSE
How to use this License for your documents

Chapter 1. Introduction

1.1. Introduction

This document gives a brief description of Mosix, a software package that turns a network of GNU/Linux computers into a computer cluster. Along the way, some background on parallel processing is given, as well as a brief introduction to programs that make special use of Mosix's capabilities. The HOWTO expands on the existing documentation by providing more background information and discussing the quirks of various distributions.

Kris Buytaert got involved in this piece of work when Scot Stevenson was looking for somebody to take over the job; this was during February 2002. The first new versions of this HOWTO are rewrites of the Mosix HOWTO draft and the SuSE Mosix HOWTO.

("FEHLT", in case you are wondering, is German for "missing"). You will notice that some of the headings are not as serious as they could be. Scot had planned to write the HOWTO in a slightly lighter style, as the world (and even the part of the world with a burping penguin as a mascot) is full of technical literature that is deadly. Therefore some parts still have these comments

Initially this was a draft version of a text intended to help Linux users with SuSE distributions install the Mosix cluster computer package - in other words, to turn networked computers running SuSE Linux into a Mosix cluster. This HOWTO is written on the basis of a monkey-see, monkey-do knowledge of Mosix, not with any deep insight into the workings of the system.

The original text did not cover Mosix installations based on the 2.4.* kernel. Note that SuSE 7.1 does not ship with the vanilla sources to that kernel series.


Chapter 2. So what is Mosix anyway?

2.1. A very, very brief introduction to clustering

Most of the time, your computer is bored. Start a program like xload or top that monitors your system use, and you will probably find that your processor load is not even hitting the 1.0 mark. If you have two or more computers, chances are that at any given time, at least one of them is doing nothing. Unfortunately, when you really do need CPU power - during a C++ compile, or while encoding Ogg Vorbis music files - you need a lot of it at once. The idea behind clustering is to spread these loads among all available computers, using the resources that are free on other machines.

The basic unit of a cluster is a single computer, also called a "node". Clusters can grow in size - they "scale" - by adding more machines. A cluster as a whole will be more powerful the faster the individual computers and the faster their connection speeds are. In addition, the operating system of the cluster must make the best use of the available hardware in response to changing conditions. This becomes more of a challenge if the cluster is composed of different hardware types (a "heterogeneous" cluster), if the configuration of the cluster changes unpredictably (machines joining and leaving the cluster), and if the loads cannot be predicted ahead of time.


2.1.1. A very, very brief introduction to clustering

2.1.1.1. HPC vs Failover vs Loadbalancing

Basically there are three types of clusters: the Failover cluster, the Loadbalancing cluster and the High Performance Computing cluster. The most commonly deployed are probably the first two.

Failover clusters consist of two or more network-connected computers with a separate heartbeat connection between the hosts. The heartbeat connection between the machines is used to monitor whether all the services are still working; as soon as a service on one machine breaks down, the other machine tries to take over.

With loadbalancing clusters the concept is that when a request for, say, a webserver comes in, the cluster checks which machine is the least busy and then sends the request to that machine. Actually, most of the time a loadbalancing cluster is also a failover cluster, but with the extra load-balancing functionality and often with more nodes.

The last variation of clustering is the High Performance Computing cluster: machines configured specially to give data centers that require extreme performance the computing power they need. Beowulfs have been developed especially to give research facilities the computing speed they need. These kinds of clusters also have some loadbalancing features: they try to spread different processes over more machines in order to gain performance. But what it mainly comes down to in this situation is that a process is parallelised and that routines that can be run separately are spread over different machines instead of having to wait until they are done one after another.


2.2. The story so far


2.2.3. openMosix

openMosix is in addition to whatever you find at mosix.org and is developed in full appreciation of and respect for Prof. Barak's leadership in the outstanding Mosix project.

Moshe Bar has been involved for a number of years with the Mosix project (www.mosix.com) and was co-project manager of the Mosix project and general manager of the commercial Mosix company.

After a difference of opinions on the commercial future of Mosix, he started a new clustering company - Qlusters, Inc. - and Prof. Barak decided not to participate for the moment in this venture (although he did seriously consider joining) and held long-running negotiations with investors. It appears that Mosix is no longer openly supported as a GPL project. Because there is a significant user base out there (about 1000 installations world-wide), Moshe Bar has decided to continue the development and support of the Mosix project under a new name, openMosix, under the full GPL2 license. Whatever code in openMosix comes from the old Mosix project is Copyright 2002 by Amnon Barak. All the new code is Copyright 2002 by Moshe Bar.

openMosix is a Linux-kernel patch which provides full compatibility with standard Linux for IA32-compatible platforms. The internal load-balancing algorithm transparently migrates processes to other cluster members. The advantage is better load-sharing between the nodes. The cluster itself tries to optimize utilization at any time (of course the sysadmin can influence the automatic load-balancing by manual configuration during runtime).

This transparent process-migration feature makes the whole cluster look like a big SMP system with as many processors as available cluster nodes (multiplied by two for dual-processor systems, of course). openMosix also provides a powerful optimized filesystem (oMFS) for HPC applications, which unlike NFS provides cache consistency, time stamp consistency and link consistency.

There could (and will) be significant changes in the architecture of future openMosix versions. New concepts for auto-configuration, node discovery and new user-land tools are being discussed on the openMosix mailing list.

To approach standardization and future compatibility the proc interface changed from /proc/mosix to /proc/hpc, and /etc/mosix.map was replaced by /etc/hpc.map. Adapted command-line user-space tools for openMosix are already available on the project's web page, and as of its current version (1.1) Mosixview supports openMosix as well.

The hpc.map will be replaced in the future with a node-autodiscovery system.
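
For illustration (the addresses here are examples only), /etc/hpc.map uses the same three-column syntax as the old /etc/mosix.map: a node number, the IP address (or hostname) of the first node of a range, and the number of consecutive nodes in that range:
1 192.168.1.10 1
2 192.168.1.20 1
A range of four nodes with consecutive addresses can also be written as a single line:
1 192.168.1.10 4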

openMosix is supported by various competent people (see www.openMosix.org) working together around the world. The goal of the project is to create a standardized clustering environment for all kinds of HPC applications.

openMosix also has a project web page at http://openMosix.sourceforge.net with a CVS tree and mailing lists for developers and users.


2.3. Mosix in action: An example

Mosix clusters can take various forms. To demonstrate, let's assume you are a student and share a dorm room with a rich computer science guy, with whom you have linked computers to form a Mosix cluster. Let's also assume you are currently converting music files from your CDs to Ogg Vorbis for your private use, which is legal in your country. Your roommate is working on a project in C++ that he says will bring World Peace. However, at just this moment he is in the bathroom doing unspeakable things, and his computer is idle.

So when you start a program called FEHLT to convert Bach's .... from .wav to .ogg format, the Mosix routines on your machine compare the load on both nodes and decide that things will go faster if that process is sent from your Pentium-233 to his Athlon XP. This happens automatically: you just type or click your commands as you would on a standalone machine. All you notice is that when you start two more encoding runs, things go a lot faster, and the response time doesn't suffer.

Now while you're still typing ...., your roommate comes back, mumbling something about red chile peppers in cafeteria food. He resumes his tests, using a program called 'pmake', a version of 'make' optimized for parallel execution. Whatever he's doing, it uses up so much CPU time that Mosix even starts to send subprocesses to your machine to balance the load.

This setup is called *single-pool*: all computers are used as a single cluster. The advantage/disadvantage of this is that your computer is part of the pool: your stuff will run on other computers, but their stuff will run on yours, too.


Chapter 3. Features of Mosix


Chapter 4. Requirements and Planning


Chapter 5. Distribution specific installations


5.4. openMosix General Instructions


5.4.3. MFS

First the CONFIG_MOSIX_FS option in the kernel configuration has to be enabled. If the current kernel was compiled without this option, recompilation with this option enabled is required. Also the UIDs and GIDs must be the same on all nodes of the cluster. The CONFIG_MOSIX_DFSA option in the kernel is optional, but of course required if DFSA should be used. To mount MFS on the cluster there has to be an additional fstab entry in each node's /etc/fstab.

for DFSA enabled:
mfs_mnt         /mfs            mfs     dfsa=1          0 0
for DFSA disabled:
mfs_mnt          /mfs           mfs     dfsa=0          0 0
the syntax of this fstab-entry is:
[device_name]           [mount_point]   mfs     defaults        0 0
After mounting the /mfs mount-point on each node, each node's filesystem is accessible through the /mfs/[openMosix_ID]/ directories.
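
For example, assuming two nodes with openMosix IDs 1 and 3 that both have /mfs mounted, a file written on node 1 can be read directly on node 3 through MFS:
on node1 :      echo "hello cluster" > /tmp/testfile
on node3 :      cat /mfs/1/tmp/testfile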

With the help of some symbolic links all cluster nodes can access the same data, e.g. /work on node1:
on node2 :      ln -s /mfs/1/work /work
on node3 :      ln -s /mfs/1/work /work
on node4 :      ln -s /mfs/1/work /work
...
Now every node can read from and write to /work !

The following special files are excluded from the MFS:

the /proc directory
special files which are not regular files, directories or symbolic links (e.g. /dev/hda1)

Creating links like:
ln -s /mfs/1/mfs/1/usr         
or
ln -s /mfs/1/mfs/3/usr
is invalid.

The following system calls are supported without sending the migrated process (which executes the call on its remote node) back to its home node:

read, readv, write, writev, readahead, lseek, llseek, open, creat, close, dup, dup2, fcntl/fcntl64, getdents, getdents64, old_readdir, fsync, fdatasync, chdir, fchdir, getcwd, stat, stat64, newstat, lstat, lstat64, newlstat, fstat, fstat64, newfstat, access, truncate, truncate64, ftruncate, ftruncate64, chmod, chown, chown16, lchown, lchown16, fchmod, fchown, fchown16, utime, utimes, symlink, readlink, mkdir, rmdir, link, unlink, rename

Here are situations in which system calls on DFSA-mounted filesystems may not work:

different MFS/DFSA configuration on the cluster nodes
dup2 if the second file-pointer is non-DFSA
chdir/fchdir if the parent dir is non-DFSA
pathnames that leave the DFSA-filesystem
when the process which executes the system-call is being traced
if there are pending requests for the process which executes the system-call


5.6. Suse 7.1 and Mosix


5.6.3. Setup


5.7. Debian and Mosix

Installing Mosix on a Debian-based machine can be done as described below. The first step is downloading the packages from the net. Since we are using a Debian setup we needed:
http://packages.debian.org/unstable/net/mosix.html
http://packages.debian.org/unstable/net/kernel-patch-mosix.html
http://packages.debian.org/unstable/net/mps.html
You can also apt-get install them ;) The next part is making the kernel Mosix-capable. Copy the patch for your kernel version (e.g. patches.2.4.10) into your /usr/src/linux-$version directory and run
patch -p0 < patches.2.4.10
Check your kernel config and run
make dep ; make clean ; make bzImage ; make modules ; make modules_install
You now will need to edit your /etc/mosix/mosix.map. This file has a bit of a strange layout. We have two machines, 192.168.10.65 and 192.168.10.94. This gives us a mosix.map that looks like
1 192.168.10.65 1
2 192.168.10.94 1
After rebooting with this kernel (lilo etc., you know the drill), you should have a cluster of Mosix machines that talk to each other and migrate processes. You can test that by running the following small script
awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'
a couple of times, and monitor its behaviour with mon, where you will see that it spreads the load between the two different nodes. If you have enabled process-arrival messages in your kernel you will notice that each time a remote (guest) process arrives on your node a "Weeeeeee" is printed on your console, and each time a local process returns you will see a "Woooooo". So basically, if you don't see any of those messages while a program is running and you have this option enabled in your kernel, you might conclude that no processes migrate (a small test loop is sketched at the end of this section). We also set up Mosixview (0.8) on the Debian machine:
apt-get install mosixview
In order to be able to actually use Mosixview you will need to run it as a user who can log in to the different nodes as root. We suggest you set this up using ssh. Please note that there is a difference between the ssh and ssh2 implementations: if you have an identity.pub, ssh will check authorized_keys; if you have an id_rsa.pub, you will need authorized_keys2! Mosixview gives you a nice interface that shows the load of the different machines and gives you the possibility to migrate processes manually. A detailed discussion of Mosixview can be found elsewhere in this document.
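
As a quick test of migration (a rough sketch; the number of copies is arbitrary), you can also start several instances of the awk one-liner shown above in the background and watch the load spread with mon:
for i in 1 2 3 4; do
  awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}' &
done
mon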


Chapter 6. Cluster Installation


Chapter 7. ClumpOS


7.2. How does it work

At boot-time, clump/os will autoprobe for network cards, and, if any are detected, try to configure them via DHCP. If successful, it will create a mosix.map file based on the assumption that all nodes are on local CLASS C networks, and configure MOSIX using this information. clump/os Release 4 best supports machines with a single connected network adapter. The MOSIX map created in such cases will consist of a single entry for the CLASS-C network detected, with the node number assigned reflecting the IP address received from DHCP. (On the 192.168.1 network, node #1 will be 192.168.1.1, etc.) If you use multiple network adapters Expert mode is recommended as the assignment of node numbers is sensitive to the order in which network adapters are detected. (Future releases will support complex topologies and feature more intelligent MOSIX map creation.) clump/os will then display a simple SVGA monitor (clumpview) indicating whether the node is configured, and, if it is, showing the load on all active nodes on the network. When you've finished using this node, simply press [ESC] to exit the interface and shutdown.

Alternatively, or if autoconfiguration doesn't work for you, then you can use clump/os in Expert mode. Please note that clump/os is not a complete distribution or a rescue disk; the functionality present is the bare minimum required for a working MOSIX server node.

It works for us, but may not work for you; if you experience difficulties, please email us with as much information about your system as possible -- after you have investigated the problem. (See Problems? and Expert mode. You might also consider subscribing to the clump/os mailing list.)


7.3. Requirements

As the purpose of clump/os is to add nodes to a cluster, it is assumed that you already have a running MOSIX cluster -- or perhaps only a single MOSIX node -- from which you will be initiating jobs. All machines in the cluster must conform to the following requirements:

clump/os machine(s):    586+ CPU
                        bootable CDROM drive
                        NIC
                        64M+ RAM (the system is loaded entirely into a ramdisk; this means that you should have at least 64M of RAM, and likely more, to accommodate the approx. 16M ramdisk, the space needed for Linux itself, and the space for your work. This approach was chosen so that the same CDROM can be used to configure multiple systems.)
Master machine(s):      Linux 2.4.17, MOSIX 1.5.7 (manually configured)
Network environment:    a running DHCP server (if you don't, or won't, run DHCP, you can still manually configure your system; see Problems? and Expert Mode. Using DHCP is highly recommended, however, and will greatly simplify your life in the long run.)

The following network modules are present, although not all support autoprobing; if you don't see support for your card in this list, then clump/os will not work for you even in Expert Mode.

3c501.o 3c503.o 3c505.o 3c507.o 3c509.o 3c515.o 3c59x.o 8139cp.o 8139too.o 82596.o 8390.o ac3200.o acenic.o at1700.o cs89x0.o de4x5.o depca.o dgrs.o dl2k.o dmfe.o dummy.o e2100.o eepro.o eepro100.o eexpress.o epic100.o eth16i.o ewrk3.o fealnx.o hamachi.o hp-plus.o hp.o hp100.o lance.o lp486e.o natsemi.o ne.o ne2k-pci.o ni5010.o ni52.o ni65.o ns83820.o pcnet32.o sis900.o sk98lin.o smc-ultra.o smc9194.o starfire.o sundance.o sungem.o sunhme.o tlan.o tulip.o via-rhine.o wd.o winbond-840.o yellowfin.o

Please also note that clump/os may not work on a laptop, definitely doesn't support PCMCIA cards, and will probably not configure MOSIX properly if your machine contains multiple connected ethernet adapters; see Note 1. This is a temporary limitation of the configuration scripts and of the Release 3/4 kernels, which are compiled without CONFIG_MOSIX_TOPOLOGY.


7.5. Problems ?

If you don't find your issue here, please consider posting to the clump/os mailing list. (Please note that only subscribers are permitted to post; click on the link for instructions.) You should also make certain that you are using the latest versions of MOSIX and clump/os, and that the versions -- clump/os R4.x and MOSIX 1.5.2 at the time of this writing -- are in sync.


7.6. Expert Mode

If you hold down shift during the boot process, you have the option of booting into Expert mode; this will cause clump/os to boot to a shell rather than to the graphical interface. From this shell you can attempt to insert the appropriate module for your network adapter (if autoprobing failed), and/or configure your network and MOSIX manually. Type "halt" to shut down the system. (Note that since the system resides in RAM you can't hurt yourself too badly by rebooting the hard way if you have to -- unless you have manually mounted any partitions rw, that is, and we don't recommend doing so at this point.)
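
If autoprobing fails, a manual session in Expert mode might look roughly like the following sketch (the module name, addresses and map contents are only examples; adapt them to your hardware and network):
insmod eepro100
ifconfig eth0 192.168.1.5 netmask 255.255.255.0 up
cat > /etc/mosix.map << EOF
1 192.168.1.1 8
EOF
setpe -w -f /etc/mosix.map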

If you want to run clumpview, execute:
     open -s -w -- clumpview --drone --svgalib
This will force the node into 'drone' mode (local processes will not migrate), and will force clumpview to use SVGALIB; the open command will ensure that a separate vt is used.

Please be advised that the environment provided was initially intentionally minimalistic; if you require additional files, or wish to copy files from the system to another machine, your only options are nc (netcat -- a great little utility, btw) or mfs if MOSIX is configured. From version R5.4 on, size is no longer a primary consideration.

Expert mode (and clump/os for that matter) is 'single-user'; this is one of the reasons that utilities such as ssh are not included. These and other similar decisions were made in order to keep clump/os relatively small, and do not affect cluster operation.

From version R5.4, if you experience problems in Expert Mode, you can boot into Safe Mode; in Safe Mode no attempt is made at autoconfiguration.


Chapter 8. Administrating openMosix


8.3. Information about the other nodes

/proc/hpc/nodes/[openMosix_ID]/cpus             - how many CPUs the node has
/proc/hpc/nodes/[openMosix_ID]/load             - the openMosix load of this node
/proc/hpc/nodes/[openMosix_ID]/mem              - available memory as openMosix believes it to be
/proc/hpc/nodes/[openMosix_ID]/rmem             - available memory as Linux believes it to be
/proc/hpc/nodes/[openMosix_ID]/speed            - speed of the node relative to a PIII/1GHz
/proc/hpc/nodes/[openMosix_ID]/status           - status of the node
/proc/hpc/nodes/[openMosix_ID]/tmem             - available memory
/proc/hpc/nodes/[openMosix_ID]/util             - utilization of the node
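
From the shell you can simply read these files for a quick overview; a minimal sketch (assuming a few nodes are already configured):
for node in /proc/hpc/nodes/*; do
  echo "node `basename $node`: load `cat $node/load` - status `cat $node/status`"
done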


8.5. The userspace tools

The following tools provide easy administration of openMosix clusters.
migrate -send a migrate request to a process
                syntax: 
                        migrate [PID] [openMosix_ID]
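
For example, to ask the process with PID 4233 to migrate to the node with openMosix ID 2 (both values are of course just examples):
                        migrate 4233 2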


mon             -an ncurses-based terminal monitor;
                 several pieces of information about the current status are displayed as bar-charts

mosctl          -is the openMosix main configuration utility
                syntax:
                        mosctl  [stay|nostay]
                                [lstay|nolstay]
                                [block|noblock]
                                [quiet|noquiet]
                                [nomfs|mfs]
                                [expel|bring]
                                [gettune|getyard|getdecay]

                        mosctl  whois   [openMosix_ID|IP-address|hostname]

                        mosctl  [getload|getspeed|status|isup|getmem|getfree|getutil]   [openMosix_ID]

                        mosctl  setyard [Processor-Type|openMosix_ID||this]

                        mosctl  setspeed        integer-value

                        mosctl  setdecay interval       [slow fast]

more detailed:

stay            -no automatic process migration
nostay          -automatic process migration (default)
lstay           -local processes should stay
nolstay         -local processes could migrate
block           -block arriving of guest processes
noblock         -allow arriving of guest processes
quiet           -disable gathering of load-balancing information
noquiet         -enable gathering of load-balancing information
nomfs           -disables MFS
mfs             -enables MFS
expel           -send away guest processes
bring           -bring all migrated processes home
gettune         -shows the current overhead parameter
getyard         -shows the currently used Yardstick
getdecay        -shows the current decay parameter
whois           -resolves openMosix-ID, ip-addresses and hostnames of the cluster
getload         -display the (openMosix-) load
getspeed        -shows the (openMosix-) speed
status          -displays the current status and configuration
isup            -checks whether a node is up or down (an openMosix kind of ping)
getmem          -shows logical free memory
getfree         -shows physical free mem
getutil         -display utilization
setyard         -sets a new Yardstick-value
setspeed        -sets a new (openMosix-) speed value
setdecay        -sets a new decay-interval
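
A few typical calls, as a sketch (node ID 3 is only an example): check whether a node is up, look at its load, temporarily refuse new guest processes on the local node while you work on it, send the guests that are already there back home, and finally accept guests again:
mosctl isup 3
mosctl getload 3
mosctl block
mosctl expel
mosctl noblock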





mosrun          -run a specially configured command on a chosen node
                syntax:
                        mosrun  [-h|openMosix_ID| list_of_openMosix_IDs] command [arguments]

The mosrun command can be executed with several more command-line options. To make this easier there are several preconfigured run scripts for executing jobs with a special (openMosix) configuration.

nomig           -runs a command which process(es) won't migrate
runhome         -executes a command locked to its home node
runon           -runs a command which will be directly migrated and locked to a node
cpujob          -tells the openMosix cluster that this is a cpu-bound process
iojob           -tells the openMosix cluster that this is an io-bound process
nodecay         -executes a command and tells the cluster not to refresh the load-balancing statistics
slowdecay       -executes a command with a slow decay interval for collecting load-balancing statistics
fastdecay       -executes a command with a fast decay interval for collecting load-balancing statistics
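
For example (the program names are placeholders, and the exact argument order is documented in the corresponding man pages), one might start a job locked to node 3, keep an interactive tool on its home node, and mark a batch job as cpu-bound:
runon 3 ./heavy_calculation &
runhome ./interactive_tool
cpujob ./number_cruncher &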



setpe           -manual node configuration utility
                syntax:
                        setpe   -w -f   [hpc_map]
                        setpe   -r [-f  [hpc_map]]
                        setpe   -off

-w reads the openMosix configuration from a file (typically /etc/hpc.map)
-r writes the current openMosix configuration to a file (typically /etc/hpc.map)
-off turns the current openMosix configuration off


tune            openMosix calibration and optimizations utility.
                (for further information see the tune man page)

In addition to the /proc interface and the command-line openMosix utilities (which use the /proc interface) there are patched versions of "ps" and "top" available (called "mps" and "mtop") which also display the openMosix node ID in an extra column. This is useful for finding out where a specific process is currently being computed.
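
For example ("myjob" is just a placeholder, and this assumes the patched tools accept the usual ps and top options), to find out on which node the instances of a program are currently running:
mps ax | grep myjob
or watch the whole cluster-aware process list interactively with:
mtop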

The administrator can get an overview of the current status of the cluster and its nodes with the "Mosix Cluster Information Tool PHP", which can be found at http://wijnkist.warande.uu.nl/mosix/ . (The path to the NODESDIR has to be adjusted to $NODESDIR="/proc/hpc/nodes/".)

For smaller clusters it might also be useful to use Mosixview, which is a GUI for the most common administration tasks.


Chapter 9. Tuning Mosix

9.1. Optimising Mosix

Log in to a normal terminal as root. Type
       setpe -r 
which, if everything went right, will give you a listing of your /etc/mosix.map. If things did not go right, try
        setpe -w -f /etc/mosix.map 
to set up your node. Then, type
       cat /proc/$$/lock
to see if your child processes are locked to your node (1) or can migrate (0). If for some reason you find your processes are locked, you can change this with
        echo 0 > /proc/$$/lock
until you fix the problem. Repeat the whole configuration scheme for a second computer. The programs tune_kernel and prep_tune that Mosix uses to calibrate the individual nodes do not work with the SuSE distribution. However, you can fake it. First, bring the computer you want to tune and another computer with Mosix installed down to single user mode by typing
        init 1
as root. All other computers on the network should be shut down if possible. On both machines, run the following commands:
        /etc/init.d/network start
        /etc/init.d/mosix start
        echo 1 > /proc/mosix/admin/quiet
This fakes prep_tune and the first parts of tune_kernel. Note that if you have a laptop with a pcmcia network card, you will have to run
        /etc/init.d/pcmcia start
instead of "/etc/init.d/network start". On the computer you want to tune, run tune_kernel and follow instructions. Depending on your machines, this can take a while - if you have a dog, this might be the time to go on that long, long walk you've always promised him. tune_kernel will create a program called "pg" in /root for testing reasons. Ignore it. After tuning is over, copy the contents of /tmp/overheads to the file /etc/overheads (and/or recompile the kernel). Repeat the tuning procedure for each computer. Reboot, enjoy Mosix, and don't forget to brag to your friends about your new cluster.


Chapter 10. Special Cases


10.2. Diskless nodes

First you have to set up a DHCP server which answers the DHCP request for an IP address when a diskless client boots. This DHCP server (I call it the master in this howto) additionally acts as an NFS server which exports the whole client filesystem, so the diskless cluster nodes (I call them slaves in this howto) can grab this FS (filesystem) for booting as soon as they have their IP. Just run a "normal" MOSIX setup on the master node. Be sure you included NFS-server support in your kernel configuration. There are two kinds (or maybe a lot more) of NFS:
kernel-nfs
or
nfs-daemon
It does not matter which one you use, but my experience suggests using kernel-nfs in "older" kernels (like 2.2.18) and daemon-nfs in "newer" ones, since the NFS in newer kernels sometimes does not work properly. If your master node is running with the new MOSIX kernel, start with one filesystem as slave node. Here are the steps to create it: Calculate at least 300-500 MB for each slave. Create an extra directory for the whole cluster filesystem and make a symbolic link to /tftpboot. (The /tftpboot directory or link is required because the slaves search for a directory named /tftpboot/ip-address-of-slave for booting. You can change this only by editing the kernel sources.) Then create a directory named like the IP of the first slave you want to configure, e.g. mkdir /tftpboot/192.168.45.45. Depending on the space you have on the cluster filesystem, now copy the whole filesystem from the master node to the directory of the first slave. If you have less space just copy:
/bin
/usr/bin
/usr/sbin
/etc
/var
You can configure the slave to get the rest per NFS later. Be sure to create empty directories for the mount points. The filesystem structure in /tftpboot/192.168.45.45/ has to be similar to / on the master.
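
As a sketch (the IP address and the directory list follow the example above; adjust both to your setup), the minimal copy and the empty mount points could be created like this:
mkdir -p /tftpboot/192.168.45.45/usr
cp -a /bin /etc /var /tftpboot/192.168.45.45/
cp -a /usr/bin /usr/sbin /tftpboot/192.168.45.45/usr/
cd /tftpboot/192.168.45.45
mkdir -p proc root opt data cdrom usr/local usr/X11R6 usr/share usr/lib usr/include
Then adjust the node-specific configuration files: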
/tftpboot/192.168.45.45/etc/HOSTNAME                    //insert the hostname of the slave
/tftpboot/192.168.45.45/etc/hosts                       //insert the hostname+ip of the slave
Depending on your distribution you have to change the ip-configuration of the slave :
/tftpboot/192.168.45.45/etc/rc.config
/tftpboot/192.168.45.45/etc/sysconfig/network
/tftpboot/192.168.45.45/etc/sysconfig/network-scripts/ifcfg-eth0
Change the ip-configuration for the slave as you like. Edit the file
/tftpboot/192.168.45.45/etc/fstab               //the FS the slave will get per NFS
corresponding to
/etc/exports                                    //the FS the master will export to the slaves
e.g. for a slave fstab:
master:/tftpboot/192.168.88.222  /       nfs     hard,intr,rw    0 1
none    /proc   nfs     defaults        0 0
master:/root     /root   nfs     soft,intr,rw    0 2
master:/opt      /opt    nfs     soft,intr,ro    0 2
master:/usr/local        /usr/local      nfs     soft,intr,ro    0 2
master:/data/      /data nfs     soft,intr,rw    0 2
master:/usr/X11R6        /usr/X11R6      nfs     soft,intr,ro    0 2
master:/usr/share        /usr/share      nfs     soft,intr,ro    0 2
master:/usr/lib        /usr/lib      nfs     soft,intr,ro    0 2
master:/usr/include        /usr/include      nfs     soft,intr,ro    0 2
master:/cdrom        /cdrom      nfs     soft,intr,ro    0 2
master:/var/log  /var/log        nfs     soft,intr,rw    0 2
e.g. for a master exports:
/tftpboot/192.168.45.45       	*(rw,no_all_squash,no_root_squash)
/usr/local  			*(rw,no_all_squash,no_root_squash)
/root			        *(rw,no_all_squash,no_root_squash)
/opt			        *(ro)
/data			        *(rw,no_all_squash,no_root_squash)
/usr/X11R6      		*(ro)
/usr/share		  	*(ro)
/usr/lib			*(ro)
/usr/include		      	*(ro)
/var/log		        *(rw,no_all_squash,no_root_squash)
/usr/src		        *(rw,no_all_squash,no_root_squash)
If you mount /var/log (rw) from the NFS server you have one central log file! (It worked very well for me: just "tail -f /var/log/messages" on the master and you always know what is going on.)

The cluster filesystem for your first slave should be ready now. Now configure the slave kernel. If you have the same hardware on your cluster you can reuse the configuration of the master node. Change the configuration for the slave like the following:
CONFIG_IP_PNP_DHCP=y
and
CONFIG_ROOT_NFS=y
Use as few modules as possible (maybe no modules at all) because the configuration is a bit tricky. Now (it is well described in the Beowulf howtos) you have to create an nfsroot device. It is only used for patching the slave kernel to boot from NFS.
mknod /dev/nfsroot b 0 255
rdev bzImage /dev/nfsroot
Here "bzImage" has to be your diskless-slave-kernel you find it in /usr/src/linux-version/arch/i386/boot after succesfull compilation. Then you have to change the root-device for that kernel
rdev -o 498 -R bzImage 0
and copy the kernel to a floppy-disk
dd if=bzImage of=/dev/fd0
Now you are nearly ready! You just have to configure DHCP on the master. You need the MAC address (hardware address) of the network card of your first slave. The easiest way to get this address is to boot the client with the already created boot floppy (it will fail, but it will tell you its MAC address). If the kernel was configured right for the slave, the system should come up from the floppy, booting the diskless kernel, detecting its network card and sending a DHCP and ARP request. It will tell you its hardware address at that moment! It looks like: 68:00:10:37:09:83. Edit the file /etc/dhcpd.conf like the following sample:
option subnet-mask 255.255.255.0;
default-lease-time 6000;
max-lease-time 72000;
subnet 192.168.45.0 netmask 255.255.255.0 {
     range 192.168.45.253 192.168.45.254;
     option broadcast-address 192.168.45.255;
     option routers 192.168.45.1;
}  
host firstslave
{
     hardware ethernet 68:00:10:37:09:83;
     fixed-address firstslave;
     server-name "master";
}
Now you can start DHCP and NFS with their init scripts:
/etc/init.d/nfsserver start
/etc/init.d/dhcp start
You got it!! It is (nearly) ready now!

Boot your first slave with the boot floppy (again). It should work now. Shortly after recognizing its network card, the slave gets its IP address from the DHCP server and its root filesystem (and the rest) per NFS.

Note that modules included in the slave kernel config must exist on the master too, because the slaves mount the /lib directory from the master, so they use the same modules (if any).

It will be easier to update or install additional libraries or applications if you mount as much as possible from the master. On the other hand, if all slaves have their own complete filesystem in /tftpboot your cluster may be a bit faster because of fewer read/write hits on the NFS server.

You have to add a .rhosts file in /root (for user root) on each cluster member, which should look like this:
  
node1   root
node2   root
node3   root
....
You also have to enable remote login per rsh in /etc/inetd.conf. You should have these two lines in it

if your linux-distribution uses inetd:
  
shell   stream  tcp     nowait  root    /bin/mosrun mosrun -l -z /usr/sbin/tcpd in.rshd -L
login   stream  tcp     nowait  root    /bin/mosrun mosrun -l -z /usr/sbin/tcpd in.rlogind
And for xinetd:
  
service shell
{
socket_type     = stream
protocol        = tcp
wait            = no
user            = root
server          = /usr/sbin/in.rshd
server_args     = -L
}
service login
{
socket_type     = stream
protocol        = tcp
wait            = no
user            = root
server          = /usr/sbin/in.rlogind
server_args     = -n
}
You have to restart inetd afterwards so that it reads the new configuration.
  
/etc/init.d/inetd restart
Alternatively, there may be a switch in your distribution's configuration utility where you can configure the security of the system; change it to "enable remote root login". Do not use this in insecure environments!!! Use SSH instead of RSH! You can use MOSIXVIEW with either RSH or SSH. Configuring SSH for remote login without a password is a bit tricky; take a look at the "HOWTO use MOSIX/MOSIXVIEW with SSH?" at this website. If you want to copy files to a node in this diskless cluster you now have two possibilities: you can use rcp or scp to copy remotely, or you can just use cp and copy files on the master into the cluster filesystem of one node. The following two commands are equal:
rcp /etc/hosts 192.168.45.45:/etc
cp /etc/hosts /tftpboot/192.168.45.45/etc/


Chapter 11. Common Problems


Chapter 12. Other Programs


12.2. mosixview


12.2.3. Installation of the RPM-distribution

Download the latest version of the MOSIXVIEW rpm package for your Linux distribution. Then just execute e.g.:
rpm -i mosixview-1.0.suse72.rpm
This will install all the binaries in /usr/bin. To uninstall:
rpm -e mosixview
Installation of the source distribution: download the latest version of MOSIXVIEW, copy the tarball to e.g. /usr/local/ and unzip+untar the sources:
gunzip mosixview-1.0.tar.gz
tar -xvf mosixview-1.0.tar
Automatic setup script: just cd to the mosixview directory and execute
./setup [your_qt_2.3.x_installation_directory]
Manual compiling: set the QTDIR variable to your actual QT distribution, e.g.
export QTDIR=/usr/lib/qt-2.3.0  (for bash)
or
setenv QTDIR /usr/lib/qt-2.3.0          (for csh)
Hints (from the testers of mosixview who compiled it on different Linux distributions, thanks again): create the link /usr/lib/qt pointing to your QT-2.3.x installation, e.g. if QT-2.3.x is installed in /usr/local/qt-2.3.0:
ln -s /usr/local/qt-2.3.0 /usr/lib/qt
Then you have to set the QTDIR environment variable to
export QTDIR=/usr/lib/qt        (for bash)
or
setenv QTDIR /usr/lib/qt                (for csh)
There is no need to "make clean" and to delete config.cache and Makefile, because all versions >= 0.6 already contain "cleaned" source code. That means there are no precompiled binaries any more and (maybe) fewer problems compiling it yourself! (If compiling fails because qwidget.h, qobject.h or other header files are not found, you have to delete the files config.cache and Makefile and then run configure and make again; this happens on my RedHat cluster.) After that the rest should work fine:
./configure
make
then do the same in the subdirectories mosixcollector, mosixload, mosixmem, mosixhistory and mosixview_client.
cd mosixcollector
./configure
make
cd ..
cd mosixload
./configure
make
cd ..
cd mosixmem
./configure
make
cd ..
cd mosixhistory
./configure
make
cd ..
cd mosixview_client
./configure
make
cd ..
Copy all binaries to /usr/bin
cp mosixview/mosixview /usr/bin
cp mosixview_client/mosixview_client/mosixview_client /usr/bin
cp mosixcollector/mosixcollector_daily_restart /usr/bin
cp mosixcollector/mosixcollector/mosixcollector /usr/bin
cp mosixload/mosixload/mosixload /usr/bin
cp mosixload/mosixload/mosixmem /usr/bin
cp mosixload/mosixload/mosixhistory /usr/bin
And copy the mosixcollector init script to your init directory, e.g.
cp mosixcollector/mosixcollector.init /etc/init.d/mosixcollector
or
cp mosixcollector/mosixcollector.init /etc/rc.d/init.d/mosixcollector
Now copy the mosixview_client binary to /usr/bin/mosixview_client on each of your cluster nodes:
rcp mosixview_client/mosixview_client your_node:/usr/bin/mosixview_client
You can now execute mosixview (cd .. to quit the subdirectory mosixview_client)
./mosixview/mosixview
(Do not use the & to force mosixview into the background!) If "make install" fails, just copy the mosixview binary wherever you want, or create a symbolic link from /usr/bin/install (or wherever install is) to /usr/bin/ginstall and run "make install" again.


12.2.4. the main window

This picture shows the main application window of MOSIXVIEW. Its functions are explained in the following sections.

MOSIXVIEW reads /etc/mosix.map at startup and builds a row with a lamp, a button, a slider, an LCD number, two progress bars and some labels for each cluster member. The lights at the left display the MOSIX ID and the status of the cluster node: red if down, green if available. The status, like the other dynamic objects, can be set to auto-refresh with the checkbox. If you click on a button displaying a hostname (or IP), a configuration dialog pops up. By default it shows the MOSIX name and some buttons to execute the most commonly used "mosctl" commands (described later in this HowTo). Use the "nslookup" checkbox to get both hostname and IP in the config dialog; do not enable this option if your cluster nodes only have IP addresses and no hostnames in DNS. With the speed sliders you can set the MOSIX speed for each host; the current speed is displayed by the LCD number. The load-balancing of the whole cluster can be influenced by these values: processes in a MOSIX cluster migrate more easily to a node with more MOSIX speed than to nodes with less speed. It is not the physical speed you set, but the speed MOSIX "thinks" a node has; e.g. a cpu-intensive job on a cluster node whose speed is set to the lowest value of the whole cluster will search for a better processor to run on and migrate away easily. The progress bars in the middle give an overview of the load on each cluster member. They display percentages, so they do not represent exactly the load written to the file /proc/mosix/nodes/x/load (by MOSIX), but they should give an overview. The next progress bar shows the memory usage of each node: the currently used memory as a percentage of the available memory on the host (the label to the right displays the available memory). How many CPUs your cluster has is shown in the box to the right. The last line of the main window contains a configuration button for "all nodes"; with this option you can configure all nodes in your cluster at once. How well the load-balancing works is displayed by the progress bar in the last line: 100% is very good and means that all nodes have nearly the same load.


12.2.5. the configuration-window

This dialog pops up if a "cluster-node" button is clicked. If all your cluster members have DNS hostnames, the "nslookup" option in the main window can be set to "enabled"; the hostname and the IP address will then be shown, otherwise only the MOSIX name is displayed. The MOSIX configuration of each host can now be changed easily. All commands are executed per "rsh" or "ssh" on the remote hosts (even on the local node), so "root" has to be able to "rsh" (or "ssh") to each host in the cluster without being prompted for a password (how to configure this is well described in the Beowulf documentation and in the HowTos on this page). The commands are:
automigration on/off
quiet yes/no
bring/lstay yes/no
expel	yes/no
mosix start/stop

If the MOSIXVIEW client is properly installed on the remote cluster nodes, click the "remote proc-box" button to open the MOSIXVIEW client (proc-box) remotely. xhost +hostname will be set and the display will point to your localhost. The client is also executed on the remote node per "rsh" or "ssh" (the binary mosixview_client must be copied to e.g. /usr/bin on each host of the cluster). The MOSIXVIEW client is a process box for managing your programs; it is useful for managing programs started and running locally on the remote nodes. The client is also described later in this HowTo. If you are logged in to your cluster from a remote workstation, insert your local hostname in the edit box below the "remote proc-box"; the MOSIXVIEW client will then be displayed on your workstation and not on the cluster member you are logged in on (maybe you have to set "xhost +clusternode" on your workstation). There is a history in the combo box so you only have to write the hostname once.


12.2.11. the MOSIXCOLLECTOR

The MOSIXCOLLECTOR is a daemon which should/could be started on one cluster member. It logs the MOSIX load of each node to the directory /tmp/mosixview/*. These history log files, analyzed by MOSIXLOAD, MOSIXMEM and MOSIXHISTORY (as described later), give a nonstop overview of the load, memory and processes in your cluster. There is one main log file called /tmp/mosixview/mosix.load; in addition there are further files in this directory to which the data is written. At startup MOSIXCOLLECTOR writes its PID (process id) to /tmp/mosixcollector.pid; it won't start if this file exists! The MOSIXCOLLECTOR daemon restarts once a day (depending on when it was started) and saves the current history to /tmp/mosixview[date]/*. These backups are done automatically, but you can also trigger them manually. There is an option to write a checkpoint to the history; these checkpoints are graphically marked as a blue vertical line when you analyze the history log files with MOSIXLOAD or MOSIXMEM. For example, you can set a checkpoint when you start a job on your cluster and another one at the end. Here is the explanation of the possible command-line arguments:
mosixcollector -d      //starts the collector as a daemon
mosixcollector -k      //stops the collector
mosixcollector -c      //stops the collector and deletes the history-files
mosixcollector -n      //writes a checkpoint to the history
mosixcollector -r      //saves the current history and starts a new one
mosixcollector -help   //prints out a short help
mosixcollector -h      //prints out a short help
You can start this daemon with its init script in /etc/init.d or /etc/rc.d/init.d. You just have to create a symbolic link in one of the runlevel directories for automatic startup (see the sketch below). How to analyze the created logfiles is described in the following MOSIXLOAD section.
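
For example, for a SysV-style init in runlevel 3 (the exact runlevel directory and start number depend on your distribution; this is only a sketch):
ln -s /etc/init.d/mosixcollector /etc/init.d/rc3.d/S99mosixcollector
or
ln -s /etc/rc.d/init.d/mosixcollector /etc/rc.d/rc3.d/S99mosixcollector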


12.2.12. MOSIXLOAD

This picture shows the graphical Log-Analyzer MOSIXLOAD

With MOSIXLOAD you have a non-stop MOSIX load history. The history log files created by MOSIXCOLLECTOR are displayed graphically so that you have a long-time overview of what happened and happens on your cluster. MOSIXLOAD can analyze the current "online" logfiles, but you can also open older backups of your MOSIXCOLLECTOR history logs via the file menu. The logfiles are placed in /tmp/mosixview/* (the backups in /tmp/mosixview[date]/*) and you only have to open the main history file "mosix.load" to take a look at older load information. (The [date] in the backup directories for the log files is the date the history was saved.) The start time is displayed at the top left and you have a full-day view in MOSIXLOAD (24 h). If you are using MOSIXLOAD for looking at "online" logfiles (the current history) you can enable the "refresh" checkbox and the view will auto-refresh (or use the manual refresh button). The load lines are normally black if the load of a node is smaller than 50. If the load increases above 50 the lines are drawn yellow, and red if the load is higher than 80. These values are MOSIX information; MOSIXLOAD gets them from the files /proc/mosix/nodes/[mosix ID]/load. The X-button of each node calculates the node's average MOSIX load. Clicking it will open a small new window in which you get the average load value and a graphic which displays it coloured (black ok, yellow critical, red alert). If there are checkpoints written to the load history by the MOSIXCOLLECTOR they are displayed as a vertical blue line. You can now compare the load values at a certain moment much more easily.


12.2.13. MOSIXMEM

This picture shows the graphical Log-Analyzer MOSIXMEM

With MOSIXMEM you have a non-stop memory history similar to MOSIXLOAD. The history log files created by MOSIXCOLLECTOR are displayed graphically so that you have a long-time overview of what happened and happens on your cluster. MOSIXMEM can analyze the current "online" logfiles, but you can also open older backups of your MOSIXCOLLECTOR history logs via the file menu. The logfiles are placed in /tmp/mosixview/* (the backups in /tmp/mosixview[date]/*) and you only have to open the main history file "mosix.load" to take a look at older memory information. (The [date] in the backup directories for the log files is the date the history was saved.) The start time is displayed at the top left and you have a full-day view in MOSIXMEM (24 h). If you are using MOSIXMEM for looking at "online" logfiles (the current history) you can enable the "refresh" checkbox and the view will auto-refresh (or use the manual refresh button). The displayed values are MOSIX information; MOSIXMEM gets them from the files
/proc/mosix/nodes/[mosix ID]/mem.
/proc/mosix/nodes/[mosix ID]/rmem.
/proc/mosix/nodes/[mosix ID]/tmem.
The X-button of each node calculates the node's average MOSIX memory value. Clicking it will open a small new window in which you get the average memory value. If there are checkpoints written to the history by MOSIXCOLLECTOR they are displayed as a vertical blue line. You can now compare the memory values at a certain moment much more easily.


Chapter 13. Hints and Tips


Appendix A. More Info


Appendix B. Credits

Scot W. Stevenson

I have to thank Scot W. Stevenson for all the work he did on this HOWTO before I took over. He made a great start for this document.

Assaf Spanier

worked together with Scot in drafting the layout and the chapters of this HOWTO, and has now promised to help me out with this document.

Matthias Rechenburg

Matthias Rechenburg should be thanked for the work he did on Mosixview and the accompanying documentation, which we included in this HOWTO.

Jean-David Marrow

is the author of clump/os; he contributed the documentation on his distribution to this HOWTO.


Appendix C. GNU Free Documentation License

Version 1.1, March 2000

Copyright (C) 2000 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.


1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you".

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (For example, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, whose contents can be viewed and edited directly and straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup has been designed to thwart or discourage subsequent modification by readers is not Transparent. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML designed for human modification. Opaque formats include PostScript, PDF, proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.


3. COPYING IN QUANTITY

If you publish printed copies of the Document numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a publicly-accessible computer-network location containing a complete Transparent copy of the Document, free of added material, which the general network-using public has access to download anonymously at no charge using public-standard network protocols. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.


4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

  1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

  2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has less than five).

  3. State on the Title page the name of the publisher of the Modified Version, as the publisher.

  4. Preserve all the copyright notices of the Document.

  5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

  6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

  7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

  8. Include an unaltered copy of this License.

  9. Preserve the section entitled "History", and its title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

  10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

  11. In any section entitled "Acknowledgements" or "Dedications", preserve the section's title, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

  12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

  13. Delete any section entitled "Endorsements". Such a section may not be included in the Modified Version.

  14. Do not retitle any existing section as "Endorsements" or to conflict in title with any Invariant Section.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.


10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.