The beauty of OpenMosix is that you can run standard Linux programs and potentially reap
the benefits of a pseudo-SMP machine.
Back-end nodes can be added or removed without the need to kill any processes. Only the home
node needs to stay alive throughout a process's life.
In early March 2003 I was presented with 7x Dell PowerEdge 2650s, each with dual 2.8 GHz Xeons
with Hyper-Threading and 4 GB of ECC RAM. These came with rack kits and I have to say
that they are by far the easiest rack units I have ever installed. I had no choice
in the hardware, but the Dell boxes have performed flawlessly.
Grab the OpenMosix patch from
http://openmosix.sourceforge.net/ that matches your kernel; I used
openMosix-2.4.20-2.gz.
Note that I use the .config from Red Hat 7.3's linux-2.4.18 but build a 2.4.20
kernel. This was done to keep the new kernel as compatible with the existing system as possible.
$ mkdir ~/Build
$ cd ~/Build
$ gtar -xjf ~/linux-2.4.20.tar.bz2
$ gunzip -cd ~/openMosix-2.4.20-2.gz | patch -p0
$ cd ~/Build/linux-2.4.20/
$ cp /usr/src/linux-2.4.18-3/configs/kernel-2.4.18-i686-smp.config .config
$ make oldconfig
openMosix process migration support (CONFIG_MOSIX) [N/y/?] (NEW) y
Support clusters with a complex network topology (CONFIG_MOSIX_TOPOLOGY) [N/y/?] (NEW) n
Stricter security on openMosix ports (CONFIG_MOSIX_SECUREPORTS) [N/y/?] (NEW) y
Level of process-identity disclosure (0-3) (CONFIG_MOSIX_DISCLOSURE) [1] (NEW) 1
openMosix File-System (CONFIG_MOSIX_FS) [N/y/?] (NEW) y
Poll/Select exceptions on pipes (CONFIG_MOSIX_PIPE_EXCEPTIONS) [N/y/?] (NEW) y
Disable OOM Killer (CONFIG_openMosix_NO_OOM) [N/y/?] (NEW) n
...

I had a problem compiling these modules so just dropped them as I didn't need them:

# CONFIG_DRM is not set
# CONFIG_FUSION is not set
# CONFIG_INTERMEZZO_FS is not set
...

$ vi Makefile
EXTRAVERSION = -cm1

$ make dep clean bzImage modules
$ sudo make modules_install
$ sudo cp System.map /boot/System.map-2.4.20-cm1
$ sudo cp .config /boot/config-2.4.20-cm1
$ sudo cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-cm1
$ sudo /sbin/mkinitrd /boot/initrd-2.4.20-cm1.img 2.4.20-cm1

$ sudo vi /etc/grub.conf
title OpenMosix 2.4.20 HT
        root (hd0,0)
        kernel /vmlinuz-2.4.20-cm1 ro root=/dev/sda2
        initrd /initrd-2.4.20-cm1.img
title OpenMosix 2.4.20 No HT
        root (hd0,0)
        kernel /vmlinuz-2.4.20-cm1 ro root=/dev/sda2 noht
        initrd /initrd-2.4.20-cm1.img
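After rebooting into the new kernel it is worth a quick sanity check. The commands below are
just a sketch; I'm assuming the openMosix /proc interface turns up under /proc/hpc on this kernel:

$ uname -r              # should report 2.4.20-cm1
$ ls /proc/hpc          # openMosix /proc interface (admin/, nodes/, ...)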
A simple test program I wrote performed twice as well with Hyper-Threading enabled, so I decided
to leave it on. At light loads (load average of 4 and below) processes tend not to migrate, whereas
at load 2 they should. At higher loads things tend to balance out. This cluster is intended to be driven
hard with long batch jobs, so I would hope the load stays above 4. The new development
kernel is supposed to have an HT-aware scheduler, so it should perform better at lower loads.
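The test program itself isn't reproduced here. As a rough stand-in (not the original program;
the worker count and loop size are arbitrary), a script like this can be timed once under the
HT grub entry and once under "No HT":

#!/bin/sh
# ht-test.sh: spawn N CPU-bound workers and wait for them all to finish.
# (Illustrative stand-in only; not the original test program.)
N=${1:-4}
i=0
while [ $i -lt $N ]; do
    awk 'BEGIN { for (j = 0; j < 20000000; j++) x += j }' &
    i=`expr $i + 1`
done
wait

$ time sh ht-test.sh 8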
Add the following line to /etc/rc.d/rc.local on each of your cluster nodes. This assumes that eth0 is bound
to the private network, so you may need to alter things to suit your network.
/sbin/omdiscd -i eth0
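If you prefer not to rely on autodiscovery, the node table can instead be listed statically in
/etc/openmosix.map and loaded with setpe from openmosix-tools. The addresses below are made up;
substitute your own private network:

# /etc/openmosix.map -- static node list (alternative to omdiscd autodiscovery)
# node-number   IP-address      range-size
1               192.168.1.1     7

setpe -w -f /etc/openmosix.map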
To set up MFS/DFSA, create the mount point /mfs on each node and add the following line to
/etc/fstab, again on all nodes:
none /mfs mfs dfsa=1 0 0
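Once mounted, every node's filesystem appears under /mfs keyed by node number, which is handy
for moving files around without setting up NFS. The node numbers (and file name) below are just
examples:

$ ls /mfs/1/etc                 # node 1's /etc as seen through MFS
$ cp results.dat /mfs/3/tmp/    # drop a file into node 3's /tmp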
openmosix-tools: NEEDED. openmosixview: USEFUL but BETA.

Launching openmosixview presents a screen similar to this:
The most useful of these I find to be openmosixmigmon:
Mousing over the green/black squares (which represent PIDs) displays the process
information, as shown. A neat feature is that you can drag and drop processes
between servers.
A screenshot of mtop, an OpenMosix-aware top, is shown below:
Note that the highlighted column (titled N#) shows the node that each process is
running on. You don't need to know this, but it's nice to see. :)
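A quick way to see the N# column doing something is to kick off a handful of CPU-bound jobs and
watch them wander off to other nodes. This is just a throwaway example, not a benchmark:

$ for i in 1 2 3 4 5 6 7 8; do awk 'BEGIN { while (1) x++ }' & done
$ mtop            # after a few seconds some of the awks show up on remote nodes
$ killall awk     # clean up when done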
Thu Aug 14 13:32:52 NZST 2003  c.mills  Created
Fri Aug 15 08:21:33 NZST 2003  c.mills  Fix typos, HDD per node comment, "simple test program" code, ip_fwd

Clark Mills <c.mills@auckland.ac.nz>