If you plan to use SSI clustering in a production system, you probably want to move to a hardware-based cluster. That way you can take advantage of the high availability and scalability that a hardware-based SSI cluster can offer.
Hardware-based SSI clusters have significantly higher availability. If the UML host kernel panics, or the host machine suffers a hardware failure, the entire UML-based SSI cluster goes down. In a hardware-based cluster, by contrast, if one of the SSI kernels panics or one of the nodes fails, the cluster continues to run. Centralized kernel services can fail over to a new node, and critical user-mode programs can be restarted by the application monitoring and restart daemon.
Hardware-based SSI clusters also have significantly higher scalability. Each node has one or more CPUs that truly work in parallel, whereas a UML-based cluster merely simulates having multiple nodes by time-sharing on the host machine's CPUs. Adding nodes to a hardware-based cluster increases the volume of work it can handle, but adding nodes to a UML-based cluster bogs it down with more processes to run on the same number of CPUs.
You can build hardware-based SSI clusters with x86 or Alpha machines. More architectures, such as IA64, may be added in the future. Note that an SSI cluster must be homogeneous. You cannot mix architectures in the same cluster.
The cluster interconnect must support TCP/IP networking; 100 Mbps Ethernet is acceptable. For security reasons, the interconnect should be a private network, and each node should have a second network interface for external traffic.
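As a rough sketch of what this dual-interface setup might look like on one node, the commands below bring up a private interconnect interface and a second, external-facing interface. The interface names and addresses are assumptions for illustration only; substitute whatever your distribution and site layout use.

    # Sketch only: interface names and addresses are assumptions.
    # eth0 carries only intra-cluster traffic on the private interconnect.
    ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up

    # eth1 is the second interface, used for external traffic.
    ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up
    route add default gw 192.168.1.1

Each node would use its own pair of addresses; only the private interconnect addresses need to be reachable from the other nodes.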
Right now, the most expensive requirement of an SSI cluster is the shared drive needed for the shared GFS root. This requirement will go away once CFS, described below, is available. The typical configuration for the shared drive is a hardware RAID disk cabinet attached to all nodes through a Fibre Channel SAN. For a two-node cluster it is also possible to use shared SCSI, although that is not directly supported by the current cluster management tools.
The GFS shared root also requires one Linux machine outside the cluster to act as the lock server. It need not be the same architecture as the cluster nodes; it just has to run memexpd, a user-mode daemon. Eventually, GFS will work with a Distributed Lock Manager (DLM). That would eliminate the need for the external lock server, which is a single point of failure, and would free up that machine to become another node in your cluster.
In the near future, the Cluster File System (CFS) will be an option for the shared root. It is a stateful version of NFS that uses a token mechanism to provide tight coherency guarantees. With CFS, the shared root can be stored on the internal disk of one of the nodes, and the on-disk format can be any journaling file system, such as ext3 or ReiserFS.
The initial version of CFS will not provide high availability. Future versions will allow the root to be mirrored across the internal disks of two nodes, using a technology such as the Distributed Replicated Block Device (DRBD). This is a low-cost solution for the shared root, although it carries a performance penalty.
Future versions will also allow the root to be stored on a disk shared by two or more nodes, but not necessarily by all nodes. If the CFS server node crashes, its responsibilities would fail over to another node attached to the shared disk.
Start with the installation instructions for SSI.
If you'd like to install SSI from CVS code, follow these instructions to check out the ssic-linux module, and these instructions to check out the ci-linux and cluster-tools modules. Read the INSTALL and INSTALL.cvs files in both the ci-linux and ssic-linux sandboxes, and look at the README file in the cluster-tools sandbox.
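For illustration, an anonymous CVS checkout along the lines of the linked instructions might look like the following. The server paths assume the code lives in SourceForge CVS repositories named ssic-linux and ci-linux; defer to the instructions above for the authoritative details.

    # Sketch only: repository locations are assumptions; see the linked instructions.
    # Press Enter when prompted for a password.
    cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/ssic-linux login
    cvs -z3 -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/ssic-linux checkout ssic-linux

    cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/ci-linux login
    cvs -z3 -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/ci-linux checkout ci-linux cluster-tools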
For more information, read Section 7.