metaset - configure disk sets
/usr/sbin/metaset -s setname [-M] -a -h hostname...
/usr/sbin/metaset -s setname -A{enable | disable}
/usr/sbin/metaset -s setname [-A{enable | disable}] -a -h hostname...
/usr/sbin/metaset -s setname -a [-l length] [-L] drivename...
/usr/sbin/metaset -s setname -C {take | release | purge}
/usr/sbin/metaset -s setname -d [-f] -h hostname...
/usr/sbin/metaset -s setname -d [-f] drivename...
/usr/sbin/metaset -s setname -j
/usr/sbin/metaset -s setname -r
/usr/sbin/metaset -s setname -w
/usr/sbin/metaset -s setname -t [-f] [-u tagnumber] [-y]
/usr/sbin/metaset -s setname -b
/usr/sbin/metaset -s setname -P
/usr/sbin/metaset -s setname -q
/usr/sbin/metaset -s setname -o [-h hostname]
/usr/sbin/metaset [-s setname]
/usr/sbin/metaset [-s setname] -a | -d -m mediator_host_list
The metaset command administers sets of disks in named disk sets. Named disk sets include any disk set that is not in the local set. While disk sets enable a high-availability configuration, Solaris Volume Manager itself does not actually provide a high-availability environment.
A single-owner disk set configuration manages storage on a SAN or fabric-attached storage, or provides namespace control and state database replica management for a specified set of disks.
In a shared disk set configuration, multiple hosts are physically connected to the same set of disks. When one host fails, another host has exclusive access to the disks. Each host can control a shared disk set, but only one host can control it at a time.
When you add a new disk to any disk set, Solaris Volume Manager checks the disk format. If necessary, it repartitions the disk to ensure that the disk has an appropriately configured reserved slice 7 (or slice 6 on an EFI labelled device) with adequate space for a state database replica. The precise size of slice 7 (or slice 6 on an EFI labelled device) depends on the disk geometry. For traditional disk sets, the slice is no less than 4 Mbytes, and probably closer to 6 Mbytes, depending on where the cylinder boundaries lie. For multi-owner disk sets, the slice is a minimum of 256 Mbytes. The minimal size for slice 7 might change in the future, based on a variety of factors, including the size of the state database replica and the information to be stored in it.
For use in disk sets, disks must have a dedicated slice (six or seven) that meets specific criteria: the slice must start at cylinder 0 and must have sufficient space for a state database replica.
If the existing partition table does not meet these criteria, or if the -L flag is specified, Solaris Volume Manager repartitions the disk. A small portion of each drive is reserved in slice 7 (or slice 6 on an EFI labelled device) for use by Solaris Volume Manager. The remainder of the space on each drive is placed into slice 0. Any existing data on the disks is lost by repartitioning.
After you add a drive to a disk set, it can be repartitioned as necessary, with the exception that slice 7 (or slice 6 on an EFI labelled device) is not altered in any way.
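As an illustrative check (using a drive name from the examples below), you can inspect the partition table that results from adding a drive with prtvtoc(1M):
# prtvtoc /dev/rdsk/c2t0d0s2
Slice 7 (or slice 6 on an EFI labelled device) should start at cylinder 0, with the remainder of the drive's space in slice 0.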
After a disk set is created and metadevices are set up within the set, the metadevice name is in the following form:
/dev/md/setname/{dsk,rdsk}/dnumber
where setname is the name of the disk set, and number is the number of the metadevice (0-127).
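For example (using the set and drive names from the examples below), a one-way concatenation created inside a disk set with metainit(1M) appears under the set's own device directory:
# metainit -s relo-red d0 1 1 c2t0d0s0
# ls -l /dev/md/relo-red/dsk/d0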
If you have disk sets that you upgraded from Solstice DiskSuite software, the default state database replica size on those sets is 1034 blocks, not the 8192-block size from Solaris Volume Manager. Also, slice 7 on disks that were added under Solstice DiskSuite is correspondingly smaller than slice 7 on disks that were added under Solaris Volume Manager.
If disks you add to a disk set have acceptable slice 7s (that start at cylinder 0 and that have sufficient space for the state database replica), they are not reformatted.
Hot spare pools within local disk sets use standard Solaris Volume Manager naming conventions. Hot spare pools within shared disk sets use the following convention:
setname/hot_spare_pool
where setname is the name of the disk set, and hot_spare_pool is the name of the hot spare pool associated with the disk set.
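For example (the pool name hsp000 is hypothetical; the set and drive names are taken from the examples below), a hot spare pool hsp000 associated with the disk set relo-red is referenced as relo-red/hsp000, and could be created with metahs(1M):
# metahs -s relo-red -a hsp000 c2t5d0s0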
To create and work with a disk set in a multi-node environment, root must be a member of Group 14 on all hosts, or the /.rhosts file must contain an entry for all other host names. This is not required in a SunCluster 3.x environment.
Tagged data occurs when there are different versions of a disk set's replicas. The tagged data consists of the set owner's nodename, the hardware serial number of the owner, and the time the data was written out to the available replicas. The system administrator can use this information to determine which replica contains the correct data.
When a disk set is configured with an even number of storage enclosures and has replicas balanced across them evenly, it is possible that up to half of the replicas can be lost (for example, through a power failure of half of the storage enclosures). After the enclosure that went down is rebooted, half of the replicas are not recognized by SVM. When the set is retaken, the metaset command returns an error of "stale databases", and all of the metadevices are in a read-only state.
Some of the replicas that are not recognized need to be deleted. The action of deleting the replicas also causes updates to the replicas that are not being deleted. In a dual hosted disk set environment, the second node can access the deleted replicas instead of the existing replicas when it takes the set. This leads to the possibility of getting the wrong replica record on a disk set take. An error message is displayed, and user intervention is required.
Use the -q option to query the disk set, and the -t, -u, and -y options to select the tag and take the disk set. See OPTIONS.
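A sketch of the recovery sequence (the set name and tag number are hypothetical, matching Examples 5 and 6 below) might look like this:
# metaset -s relo-red -q
# metaset -s relo-red -t -u 2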
SVM provides support for a low-end HA solution consisting of two hosts that share only two strings of drives. The hosts in this type of configuration, referred to as mediators or mediator hosts, run a special daemon, rpc.metamedd(1M). The mediator hosts take on additional responsibilities to ensure that data is available in the case of host or drive failures.
A mediator configuration can survive the failure of a single host or a single string of drives, without administrative intervention. If both a host and a string of drives fail (multiple failures), the integrity of the data cannot be guaranteed. At this point, administrative intervention is required to make the data accessible. See mediator(7D) for further details.
Use the -m option to add or delete a mediator host. See OPTIONS.
The following options are supported:
-a
-a | -d -m mediator_host_list
In a single metaset command you can add or delete two mediator hosts. See EXAMPLES.
-A {enable | disable}
-b
-C {take | release | purge}
take
release
purge
-d
This option fails on a multi-owner disk set if attempting to withdraw the master node while other nodes are in the set.
-f
When used to forcibly take ownership of the disk set, this causes the disk set to be grabbed whether or not another host owns the set. All of the disks within the set are taken over (reserved) and fail fast is enabled, causing the other host to panic if it had disk set ownership. The metadevice state database is read in by the host performing the take, and the shared metadevices contained in the set are accessible.
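For example (using the set name from the examples below), a forced take of a set whose owner has failed might look like this:
# metaset -s relo-red -t -f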
You can use this option to delete the last drive in the disk set, because this drive would implicitly contain the last state database replica.
You can use the -f option to delete hosts from a set. When specified with a partial list of hosts, it can be used for one-host administration. One-host administration can be useful when a host is known to be non-functional, avoiding timeouts and failed commands. When specified with a complete list of hosts, the set is completely deleted. The option is generally specified with a complete list of hosts to clean up after one-host administration has been performed.
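For example (using host and set names from the examples below), to delete a non-functional host blue from a set during one-host administration:
# metaset -s relo-red -d -f -h blue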
-h hostname...
-j
As a host boots and is brought online, it must go through three configuration levels before it can use a multi-owner disk set. At the final level, the host issues
metaset -s multinodesetname -j
to join the host to the owner list. After the cluster reconfiguration, when the host reenters the cluster, the node is automatically joined to the set. The metaset -j command joins the host to all multi-owner sets that the host has been added to. In a single node situation, joining the node to the disk set starts any necessary resynchronizations.
-L
-l length
-M
This option is required when creating a multi-owner disk set. Its use is optional for all other operations on a multi-owner disk set, where it has no effect. Existing disk sets cannot be converted to multi-owner sets.
-o
-P
If you need to delete a disk set but cannot take ownership of the set, use the -P option.
-q
This option is not for use with a multi-owner disk set.
-r
This option is not for use with a multi-owner disk set.
-s setname
-t
This option is not for use with a multi-owner disk set.
-u tagnumber
-w
Instead of releasing a set, a host can issue
metaset -s multinodesetname -w
to withdraw from the owner list. A host automatically withdraws on a reboot. A host can also be manually withdrawn if it should be prevented from using the set but should be able to rejoin at a later time. A host that withdrew due to a reboot can still appear joined from the perspective of other hosts in the set until a reconfiguration cycle occurs.
metaset -w withdraws from ownership of all multi-owner sets of which the host is a member. This option fails if you attempt to withdraw the master node while other nodes are in the disk set owner list. This option cancels all resyncs running on the node. A cluster reconfiguration process that is removing a node from the cluster membership list effectively withdraws the host from the ownership list.
-y
Example 1 Defining a Disk Set
This example defines a disk set.
# metaset -s relo-red -a -h red blue
The name of the disk set is relo-red. The names of the first and second hosts added to the set are red and blue, respectively. (The hostname is found in /etc/nodename.) Adding the first host creates the disk set. A disk set can be created with just one host, with the second added later. The last host cannot be deleted until all of the drives within the set have been deleted.
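For instance, the same set could have been created with one host and the second host added later:
# metaset -s relo-red -a -h red
# metaset -s relo-red -a -h blue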
Example 2 Adding Drives to a Disk Set
This example adds drives to a disk set.
# metaset -s relo-red -a c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
The name of the previously created disk set is relo-red. The names of the drives are c2t0d0, c2t1d0, c2t2d0, c2t3d0, c2t4d0, and c2t5d0. There is no slice identifier ("sx") at the end of the drive names.
Example 3 Adding Multiple Mediator Hosts
The following command adds two mediator hosts to the specified disk set.
# metaset -s mydiskset -a -m myhost1,alias1 myhost2,alias2
Example 4 Purging a Disk Set from the Node
The following command purges the disk set relo-red from the node:
# metaset -s relo-red -P
Example 5 Querying a Disk Set for Tagged Data
The following command queries the disk set relo-red for a list of the tagged data:
# metaset -s relo-red -q
This command produces the following results:
The following tag(s) were found:
 1 - vha-1000c - Fri Sep 20 17:20:08 2002
 2 - vha-1000c - Mon Sep 23 11:01:27 2002
Example 6 Selecting a Tag and Taking a Disk Set
The following command selects a tag and takes the disk set relo-red:
# metaset -s relo-red -t -u 2
Example 7 Defining a Multi-Owner Disk Set
The following command defines a multi-owner disk set:
# metaset -s blue -M -a -h hahost1 hahost2
The name of the disk set is blue. The names of the first and second hosts added to the set are hahost1 and hahost2, respectively. The hostname is found in /etc/nodename. Adding the first host creates the multi-owner disk set. A disk set can be created with just one host, with additional hosts added later. The last host cannot be deleted until all of the drives within the set have been deleted.
/etc/lvm/md.tab
Contains the list of metadevice configurations.
The following exit values are returned:
0
Successful completion.
>0
An error occurred.
See attributes(5) for descriptions of the attributes of this command.
mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M), md.tab(4), md.cf(4), mddb.cf(4), attributes(5), md(7D)
Disk set administration, including the addition and deletion of hosts and drives, requires all hosts in the set to be accessible from the network.