Clusters.



_ RU.OS.CMP (2:5077/15.22) ________________________________________ RU.OS.CMP _
 From : Nikita V. Belenki                   2:5030/251.28   30 Jan 28  18:59:46
 Subj : Clusters.
_______________________________________________________________________________
Hello All!

Indeed, where else?

=== Cut ===
From: "Main, Kerry" <[email protected]>
Newsgroups: comp.os.vms
Subject: RE: VMS Market Positioning: time to open the source vault
Message-ID: <[email protected]>
Date: Sat, 29 Jan 2000 13:23:04 -0500
Organization: Info-Vax<==>Comp.Os.Vms Gateway
X-Gateway-Source-Info: Mailing List
Lines: 150
Content-Type: text/plain;
              charset="iso-8859-1"
Mime-Version: 1.0
Path: nuq-read.news.verio.net!iad-artgen.news.verio.net!sea-feed.news.verio.net
      !news1.ltinet.net!news-spur1.maxwell.syr.edu!news.maxwell.syr.edu
      !newsfeed.cwix.com!mvb.saic.com!info-vax
Xref: iad-artgen.news.verio.net comp.os.vms:41117

John,

[beware - long response]

Even long-time OVMS system managers are not that familiar with this
capability, so I will expand here (by the way, this is in use by a local
Customer and their 5 mission-critical clusters across Canada):

>>> How do you upgrade an OS without rebooting the node and how do you
reboot a node without disturbing the logged-in user? <<<

Setup :
- OVMS Cluster V6.2 and above with three systems - I'll call them SYSA, SYSB
and SYSC.
- applications/users that depend on a TCP DNS alias or a DECnet alias for
access to the systems.
- UCX V4.1, V4.2 or V5.0A on all systems.
- OVMS-based DNS (can integrate with other non-VMS DNS nodes via the TCPIP
V5.0A BIND 8.1.2 support).
- metric process running on each system (set up under ucx$config).
- generic batch and print queue setups (see the queue sketch below).
- a 3-node cluster with each node's HW config'ed such that any two nodes can
handle the peak load, i.e. you plan the HW so that you can migrate a node in
and out as required.
- each node has its own system disk, but they use common cluster files as
much as possible (SYSUAF, queue files, etc.). Yes, it is more work to
manage, but it allows OS upgrades with ZERO application impact - read on.
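
As a rough sketch of that queue point (all queue names here are invented
for illustration), the layering of one execution queue per node under a
single cluster-wide generic queue looks something like this:

  $ ! One batch execution queue per node
  $ INITIALIZE /QUEUE /BATCH /ON=SYSA:: SYSA_BATCH
  $ INITIALIZE /QUEUE /BATCH /ON=SYSB:: SYSB_BATCH
  $ INITIALIZE /QUEUE /BATCH /ON=SYSC:: SYSC_BATCH
  $ ! A generic queue that feeds jobs to whichever node queue is available
  $ INITIALIZE /QUEUE /GENERIC=(SYSA_BATCH, SYSB_BATCH, SYSC_BATCH) CLUSTER_BATCH
  $ START /QUEUE SYSA_BATCH
  $ START /QUEUE SYSB_BATCH
  $ START /QUEUE SYSC_BATCH
  $ START /QUEUE CLUSTER_BATCH
  $ ! Users and applications submit to the generic queue only
  $ SUBMIT /QUEUE=CLUSTER_BATCH NIGHTLY_REPORT.COM

Since jobs only ever target CLUSTER_BATCH, an individual node queue can be
drained and stopped without anyone noticing - which is exactly what the
procedure below exploits.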

How do you implement ZERO application availability impact with OpenVMS ?

First, do not confuse APPLICATION availability with SYSTEM availability -
the two are tied together on other OS and HW platforms, but not with the
OVMS config above.

Business Requirement:

Due to increasing business demand, Operations determines that each system in
the cluster needs to have its memory upgraded (fill in any requirement which
normally requires a system reboot).

Under most OS platforms, this needs to be scheduled with the users and
business groups, because end users will be impacted as each node is brought
down.

Not so with OpenVMS. Each system is upgraded transparently to the end users.
Let's look at how SYSB is done (the other nodes are done a day or two later).

At 5:00pm Wednesday evening, before going home, the Operators disable logins
on SYSB (and also set the system to keep logins disabled on reboot if you
want to run AUTOGEN or some other utility after the system comes back up).
Current users and application connections continue. All new connections and
telnet sessions go to SYSA or SYSC. Users don't know (or care) what system
they are on.

[An easy way to verify this is working is to ping the TCP cluster alias a
number of times. If it is working, only the IP addresses of SYSA and SYSC
should be returned.]
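
The login step is one line of DCL; the alias check below uses the TCPIP V5
command style (UCX V4 spells it a little differently), and CLUALIAS stands
in for whatever your cluster alias host name actually is:

  $ ! On SYSB: stop accepting new interactive logins; existing sessions
  $ ! are not touched
  $ SET LOGINS /INTERACTIVE=0
  $ ! From any node, repeat a few times: the alias should now answer only
  $ ! with the addresses of SYSA and SYSC
  $ TCPIP PING CLUALIAS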

The OPS folks do a "$ STOP /QUEUE /NEXT" on the system-specific queues. This
allows current jobs to complete, while new jobs are passed to the other
systems config'ed as part of the generic queue. If DECnet is involved, they
do an "NCP> SET EXEC ALIAS INCOMING DISABLED", which prevents any new DECnet
alias connection requests to that system (SYSA and SYSC will handle them).
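
Spelled out (the queue name is again an invented example), those two
operator commands look like:

  $ ! Let running jobs finish; new work flows to SYSA/SYSC via the
  $ ! generic queue
  $ STOP /QUEUE /NEXT SYSB_BATCH
  $ ! If DECnet is in use, stop new alias connections landing on SYSB
  $ MCR NCP
  NCP> SET EXECUTOR ALIAS INCOMING DISABLED
  NCP> EXIT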

Assuming the application does not require a persistent connection after a
user logs out and goes home, when the OPS folks come in the next morning at
07:00 they check to ensure no users or application connections are attached
to SYSB. If some users did not log out (telnet or db connection still
active), then for security reasons they should consider an inactive-process
killer for after-hours and weekend monitoring. However, these few users can
be called and asked to simply log back in to the application. This will
force them to either SYSA or SYSC.
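
The morning check is mostly a matter of looking for leftover processes and
connections; something along these lines (the TCPIP line is V5 syntax):

  $ ! Any interactive sessions still alive on SYSB?
  $ SHOW USERS
  $ ! Any network server processes still serving clients?
  $ SHOW SYSTEM /NETWORK
  $ ! Any open TCP connections (idle telnets, db links)?
  $ TCPIP SHOW DEVICE_SOCKET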

When there are no users left on the system, they simply shut it down. In
this case, since they want to run AUTOGEN after adding memory, they make
sure logins and the DECnet cluster alias / queues remain disabled on SYSB.
Memory is added. System rebooted. AUTOGEN run. System rebooted, and logins /
alias / queues are re-enabled.
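
A sketch of that sequence in DCL (the login limit of 64 and the queue name
are illustrative values, not gospel):

  $ ! Orderly shutdown once the system is empty
  $ @SYS$SYSTEM:SHUTDOWN
  $ ! ... memory installed, system booted with logins still disabled ...
  $ ! Recompute system parameters for the new memory, then reboot
  $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
  $ ! After the final boot, let SYSB take load again
  $ SET LOGINS /INTERACTIVE=64
  $ START /QUEUE SYSB_BATCH
  $ MCR NCP
  NCP> SET EXECUTOR ALIAS INCOMING ENABLED
  NCP> EXIT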

SYSB now begins to take its share of the load with the new memory. Users
have seen ZERO application availability impact and SYSB has had its memory
upgraded.

Other than OpenVMS, do you know of any other OS platform that can do this ?

>>> Can a user distinguish between a network break and a system break? <<<

A user does not need to know what system they are on in the cluster. A
VMS-based UCX/TCPIP V5 DNS does true load balancing (standard DNS does
round robin only) and will not direct a user request to a system that has
its logins disabled. Perhaps others can comment on whether other TCP
packages have a similar capability ? Hopefully they do, as they could then
implement the same solution with their offerings as well.

>>> Zero downtime, as far as the user is concerned, is a myth <<<

Again, SYSTEM downtime and APPLICATION downtime are two different topics if
you are using OpenVMS. Most users do NOT CARE if systems are rebooted as
long as their APPLICATION is available.

>>> It's unnecessary to fire testimonials at this list <<<

Many complain about Compaq not promoting OpenVMS, so don't you think it's
good to point out recent testimonials that illustrate Customers with
requirements similar to a thread's question - in this case, Customers using
TB storage solutions on OpenVMS ?

:-)

Regards,

Kerry Main
Senior Consultant,
Compaq Canada
Professional Services
Voice : 613-592-4660
FAX   : 819-772-7036
Email : [email protected]



-----Original Message-----
From: John Macallister [mailto:[email protected]]
Sent: Saturday, January 29, 2000 12:02 PM
To: [email protected]
Subject: RE: VMS Market Positioning: time to open the source vault


>Also, with respect to uptime, how many other OS's (on ANY platform) can
>state they do not need any (read ZERO) availability downtime for OS/HW
>upgrades, tuning reboots etc.

How do you upgrade an OS without rebooting the node and how do you reboot a
node without disturbing the logged-in user?

Can a user distinguish between a network break and a system break?

Zero downtime, as far as the user is concerned, is a myth. A VMScluster, as
a whole, may run continuously for long periods, and it is a convenient way
to manage a continuously available service, but an active connection is
vulnerable to software or hardware interruptions on VMS as on any other
system.

It's unnecessary to fire testimonials at this list, although it's
interesting to hear about VMS applications, because you're preaching to the
converted. I was trying to point out that the lack of support for newer,
higher-capacity devices must be losing Compaq orders for VMS.

John

Name: John B. Macallister  E-mail: [email protected]
Post: Nuclear and Astrophysics Laboratory, Keble Road, Oxford OX1 3RH, UK
Phone: +44-1865-273388 (direct)  273333 (reception)  273418 (Fax)
=== Cut ===

Kit.

--- FMail/Win32 1.46
 * Origin: Handle with Care (overseas) (2:5030/251.28)



