Since this has come up a few times as a recommendation (but without
details on how to do it), I figured I'd provide my [really terse
and unclean] notes on how to set up a SIMH/VAX image you can use
to temporarily satellite-boot VAXen into a cluster, just to lay a
fresh copy of VMS down onto them (so that on reboot the targets
come up as standalone machines).
Please let me know if you have suggestions for cleaning this up!
I'd love feedback for this.
Also, apologies to Oleg & co in Area 63, that's just the random area I
chose when writing these notes up and it's only used for the
satellite boot host temporarily. Feel free to clean that up
for "your" install!
Note that this assumes OpenVMS VAX 7.3. You could swap in any
5.x/6.x/7.x version without much trouble; the saveset names (e.g.
VMS073.B below) track the release.
Make sure you have nothing you care about on any of the drives,
read carefully, and adjust device names/paths/addresses/etc. to
suit your install. These notes were written up and tested with two
simh instances (one for the boot host, one for the satellite).
simh config looks like:
;;; vms.ini
set cpu 256m
set cpu idle=vms
set console telnet=4000
set cpu conhalt
;
load -r conf/ka655x.bin
;
attach nvr conf/nvram.bin
;
set rq0 ra72
att rq0 disk/ra72-0.img
;
set rq3 lock
set rq3 rrd40
attach -r rq3 dist/your_vms_vax_install_cd.iso
;
set rl disable
set ts disable
set tq enable
set tq tk70
;
set dz disable
set vh enable
set vh lines=16
attach -am vh 3000
;
set xq enable
; DECnet Phase IV node 63.1
set xq mac=AA-00-04-00-01-FC
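; (for reference: that MAC is just the Phase IV address behind the
;  AA-00-04-00 prefix, low byte first: 63*1024 + 1 = 64513 = 0xFC01,
;  hence 01-FC)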
; this will probably be completely different depending on
; your simh release and host OS
att xq e1000g0
;;;; END
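To bring it up, start your simh VAX binary against the ini file and
connect to the console port it opens (the binary name/path is just an
assumption for your host):
./vax vms.ini
(then, from another terminal on the host)
telnet localhost 4000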
boot install media.
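At the chevron console, the CD attached as rq3 shows up under the
usual MSCP naming as DUA3, so (assuming that mapping):
>>> BOOT DUA3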
from standalone BACKUP, restore the required saveset:
$ backup dua3:vms073.b/save_set dua0:/init /list
halt, reboot.
set boot dua0
boot
install just these:
The following options will be provided:
OpenVMS library
OpenVMS optional
OpenVMS Help Message
DECnet Phase IV
SYSTEM: <whatever>
SYSTEST: <whatever>
FIELD: <whatever>
SCSNODENAME: CLBOOT
SCSSYSTEMID: 64513 (1024*[area]63 + [node]1)
skip loading PAKs for now
set UTC for timezone
(box reboots)
log in as SYSTEM
load your PAKs
edit SYS$SYSTEM:MODPARAMS.DAT
should have:
SCSNODE="CLBOOT"
SCSSYSTEMID=64513
VAXCLUSTER=0
add these:
WINDOW_SYSTEM=0
DUMPFILE = 0 ! Disallow AUTOGEN to create or size dump file
MIN_INTSTKPAGES=20
MIN_CHANNELCNT=255
MIN_SPTREQ=6000
MIN_GBLPAGES=100000
MIN_GBLPAGFIL=6024
MIN_GBLSECTIONS=550
MIN_NPAGEDYN=5374720
MIN_NPAGEVIR=23675136
MIN_PAGEDYN=274000
MIN_PQL_MBYTLM=40000
save the file: (^z)
apply these changes:
@sys$update:autogen getdata shutdown nofeedback
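AUTOGEN's SHUTDOWN end phase sets the new parameters and halts rather
than rebooting, so bring the box back up from the console (the default
boot device was set to DUA0 earlier):
>>> BOOT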
going to make SYSTEM's password not expire to make life easy
for the moment:
set def sys$system
run authorize
UAF> modify system/FLAG=NODISUSER/NOPWDEXPIR
^z
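If you'd rather SYSTEM's password never expire at all (instead of just
clearing the flags), the usual AUTHORIZE knob is /NOPWDLIFETIME:
UAF> MODIFY SYSTEM/NOPWDLIFETIME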
get the cluster configured:
set def SYS$MANAGER
@cluster_config_lan
1. ADD CLBOOT to existing cluster, or form a new cluster.
Will the LAN be used for cluster communications (Y/N)? y
Enter this cluster's group number: 4095
# http://h71000.www7.hp.com/doc/731final/4477/4477pro_003.html#cluster_group_number
# The cluster group number uniquely identifies each OpenVMS Cluster system
# on a LAN. This number must be from 1 to 4095 or from 61440 to 65535.
# Rule: If you plan to have more than one OpenVMS Cluster system on a LAN,
# you must coordinate the assignment of cluster group numbers among system
# managers.
Enter this cluster's password: <whatever>
Will CLBOOT be a boot server [Y]? y
Enter a value for CLBOOT's ALLOCLASS parameter [0]: 1
# http://h71000.www7.hp.com/doc/84final/4477/4477pro_010.html#disk_alloclass
# 6.2.2 Specifying Node Allocation Classes
# A node allocation class can be assigned to computers, HSG or HSV
# controllers. The node allocation class is a numeric value from 1 to 255
# that is assigned by the system manager.
# The default node allocation class value is 0. A node allocation class
# value of 0 is appropriate only when serving a local, single-pathed disk.
# If a node allocation class of 0 is assigned, served devices are named
# using the node-name$device-name syntax, that is, the device name prefix
# reverts to the node name.
# The following rules apply to specifying node allocation class values:
# When serving satellites, the same nonzero node allocation class value
# must be assigned to the serving computers and controllers.
# All cluster-accessible devices on computers with a nonzero node
# allocation class value must have unique names throughout the cluster. For
# example, if two computers have the same node allocation class value, it is
# invalid for both computers to have a local disk named DGA0 or a tape named
# MUA0. This also applies to HSG and HSV subsystems.
# System managers provide node allocation classes separately for disks and
# tapes. The node allocation class for disks and the node allocation class
# for tapes can be different.
# The node allocation class names are constructed as follows:
# $disk-allocation-class$device-name
# $tape-allocation-class$device-name
# Caution: Failure to set node allocation class values and device unit
# numbers correctly can endanger data integrity and cause locking conflicts
# that suspend normal cluster operations.
# Figure 6-5 includes satellite nodes that access devices $1$DUA17 and
# $1$MUA12 through the JUPITR and NEPTUN computers. In this configuration,
# the computers JUPITR and NEPTUN require node allocation classes so that
# the satellite nodes are able to use consistent device names regardless of
# the access path to the devices.
# Note: System management is usually simplified by using the same node
# allocation class value for all servers, HSG and HSV subsystems; you can
# arbitrarily choose a number between 1 and 255. Note, however, that to
# change a node allocation class value, you must shut down and reboot the
# entire cluster (described in Section 8.6). If you use a common node
# allocation class for computers and controllers, ensure that all devices
# have unique unit numbers.
Does this cluster contain a quorum disk [N]? n
Do you want to run AUTOGEN now [Y]? y
<reboots>
set up LANCP to handle MOP booting
MCR LANCP
LANCP> LIST DEVICE/MOPDLL
LANCP> DEFINE DEVICE XQA0:/MOPDLL=ENABLE
(or, for the running system, on one line: MCR LANCP SET DEVICE
XQA0:/MOPDLL=ENABLE; note that DEFINE edits the permanent database
and SET the volatile one, so you generally want both)
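To double-check that MOP service really is enabled on the device
(XQA0 assumed, as above):
$ MCR LANCP
LANCP> SHOW DEVICE XQA0:/MOPDLL
LANCP> EXIT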
... if you're running DECnet and want to populate LANCP with node
names/hardware addresses from the NCP database, run this ...
@SYS$EXAMPLES:LAN$POPULATE.COM
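LAN$POPULATE pulls node names/addresses out of the DECnet database, so
it only helps if the nodes are defined there already; a hypothetical
entry (node SAT1 at 63.2, made-up hardware address) would look
something like:
$ MCR NCP
NCP> DEFINE NODE 63.2 NAME SAT1
NCP> DEFINE NODE SAT1 HARDWARE ADDRESS 08-00-2B-AA-BB-CC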
READY TO ROCK!!!!
for every satellite, on the master:
@CLUSTER_CONFIG_LAN and select
1. ADD a VAX node to the cluster.
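Then trigger the satellite's MOP boot from its own console so it joins
the cluster; on a MicroVAX-class box that's typically:
>>> B XQA0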
### installing VMS from a remote CD on a satellite-booted node:
$ mount/over=id $1$dua3:
(that's $ALLOCLASS$REMOTEDEVICE:)
$ MOUNT/FOREIGN DUA0:
$ backup $1$dua3:[000000]vms073.b/save_set dua0:/init /image/verify/list
$ DISMOUNT DUA0:
$ MOUNT/OVER=ID DUA0:
$ COPY $1$dua3:[000000]vms073.* DUA0:[000000]
### if you want DECWindows you'll need DECW073.* too
$ DISMOUNT DUA0:
$ @SYS$SYSTEM:SHUTDOWN
(make sure to pick REMOVE_NODE)
SET BOOT DUA0:
BOOT
...
set up the satellite:
### FROM
http://labs.hoffmanlabs.com/node/192
OpenVMS Satellite Bootstrap with LANCP
LANCP is used by CLUSTER_CONFIG_LAN.COM to configure the MOP
satellite bootstrap.
MOP downloads can be performed using LANCP. LANCP is available on V6.2 and
later (the following shows V8.3 syntax), and can be enabled to service
cluster and DECserver MOP service (download) requests.
An OpenVMS Alpha satellite at the hardware address 01-02-03-04-05-06:
$ RUN SYS$SYSTEM:LANCP
SET NODE HLSAT1 -
/ADDRESS=01-02-03-04-05-06 -
/FILE=APB.EXE -
/ROOT=HL_ALPHA_SYS: -
/BOOT_TYPE=ALPHA_SATELLITE
An OpenVMS VAX satellite at the hardware address 11-22-33-44-55-66:
$ RUN SYS$SYSTEM:LANCP
SET NODE VAXSYS/ADDRESS=11-22-33-44-55-66 -
/FILE=NISCS_LOAD.EXE -
/ROOT=HL_VAX_SYS: -
/BOOT_TYPE=VAX_SATELLITE
The /BOOT_TYPE keyword values are ALPHA_SATELLITE, I64_SATELLITE,
VAX_SATELLITE, and (for everything else) OTHER.
To enable MOP operations on a specific device or on all network
devices, see the LANCP command DEFINE DEVICE /ALL /MOPDLL=NOEXCLUSIVE,
or potentially /MOPDLL=(ENABLE,EXCLUSIVE).
OpenVMS I64 systems and Integrity servers do not use MOP in any form in
the console bootstrap processing. To perform a satellite bootstrap of an
Integrity Itanium host, you must establish the necessary settings within
the EFI console for a network bootstrap, and must configure a BOOTP,
TFTP, and PXE server to respond to the EFI console requests.
OpenVMS I64 and OpenVMS Alpha systems running V8.* releases can service
PXE requests from OpenVMS I64 hosts.
These download services are typically implemented using the host-based
InfoServer support.
### END
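Once a satellite has joined, a quick sanity check from any member:
$ SHOW CLUSTER
(or SHOW CLUSTER/CONTINUOUS to watch members come and go)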
-Ryan