Application Dependencies on mounted disks (NAS/Synology)
This page is an attempt to capture the NFS-mounted disk dependencies that require actions on other hosts when mount point definitions change.
General
- obs machines get home directories from the NFS-mounted disks.
- Keep in mind the mountain IDL license server is on mountainapp1 (used by OBS machines, potentially AO, and the LBC CMU), so if that machine is not available, we have to change where those machines point for their license.
- Some machines need a hard power-cycle when the IP address of the disks is swapped, because they get stuck unmounting the bad NFS mounts. This was more of a problem with the old SAN, because we swapped IPs to use different nodes and the /etc/hosts file on many machines had the disk machine's IP address hard-coded. This may not be a problem with the latest NAS.
- During the Oct-2016 SAN problems we somehow lost a link on the mountain that the LBTPlot tools need to find TCS telemetry for the current year (previous years are stored under 2015, 2014, ...). Manually created the link in /lbt/data/telemetry: 2016 -> tcs
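The missing-link fix above can be sketched as below; the paths follow the layout described on this page, but should be checked against the live system before running:

```shell
# Recreate the current-year link that LBTPlot needs
# (sketch - assumes the current year's TCS telemetry
# lives in the "tcs" directory, as described above)
cd /lbt/data/telemetry
# -sfn replaces a stale link atomically instead of
# creating a new link *inside* an existing target
ln -sfn tcs "$(date +%Y)"
ls -l "$(date +%Y)"   # should show: <year> -> tcs
```

Using `-sfn` makes the command safe to re-run each January without nesting links.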
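For the stuck-unmount problem above, it can help to probe each mount with a timeout before deciding a host needs a power-cycle; a lazy unmount (`umount -l`) often frees the machine without one. A minimal sketch (the mount points are examples taken from this page):

```shell
# Probe NFS mounts without letting a dead server hang the shell.
# 'ls' on a stale NFS mount blocks, so wrap it in a timeout.
for mp in /lbt/data /lbt/data/logs /lbt/data/telemetry/tcs; do
    if timeout 5 ls "$mp" >/dev/null 2>&1; then
        echo "OK    $mp"
    else
        # a lazy unmount detaches the path even if the server is gone:
        echo "STALE $mp  (try: umount -l $mp)"
    fi
done
```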
The TCS servers mount the disks:
- /lbt/data (from /volume3/lbto_data) - contains much miscellaneous stuff
- /lbt/data/logs (from /volume3/lbto_logs) - contains log files
- /lbt/data/telemetry/tcs (from /volume6/lbto_telemetry_tcs) - contains telemetry files
- /lbt/data/repository (from /volume6/lbto_repository) - contains GCS images

Note that /lbt is a gluster mount.
jet mounts the following:
- /lbt/data (from /volume3/lbto_data) - the only directory needed is /lbt/data/UT
- /lbt/data/telemetry/tcs (from /volume6/lbto_telemetry_tcs) - contains telemetry files

Note that /lbt is a local mount. The NFS mounts are all for telemetry. jet does not have any gluster mounts.
jet is in the same boat as the TCS hosts - it would be nice if the machine didn't hang! Maybe jet should write its telemetry locally all the time; we did this before and just rsync'ed from there. It is dangerous for jet to hang. It is very nice for the UT leapseconds file to be shared, however - maybe that could be shared via gluster?
How is gluster affected? Gluster supposedly should "just work", but sometimes it needs to be checked (Hooper/TCS team).
At least some of the gluster mount points are directories that live inside an NFS mount. So if the NFS mount fails, the gluster mount succeeds, but when the NFS mount finally happens, the gluster mount point is "covered up": the 'mount' command shows the gluster mount, yet the mount point directory is empty.
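The covered-up symptom described above can be checked for mechanically. A sketch, where the mount point is an example - substitute the real gluster mount points:

```shell
# A covered gluster mount still appears in the mount table,
# but the directory the shell actually sees is the empty
# NFS-side one. Flag mounted-but-empty directories.
mp=/lbt/data/repository   # example mount point
if mount | grep -qF " $mp "; then
    if [ -z "$(ls -A "$mp" 2>/dev/null)" ]; then
        echo "WARNING: $mp is mounted but empty - possibly covered by NFS"
    else
        echo "$mp looks sane"
    fi
else
    echo "$mp is not in the mount table"
fi
```

On hosts with util-linux, `findmnt --target "$mp"` also shows which filesystem actually answers for the path, which helps confirm a cover-up.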
AO
The AO machines (adsecdx, wfsdx, adsecsx, wfssx) mount ao-data and lbto-data.
Instruments
| mt-archive | uses NFS mounts - because it writes to the repository disk? |
| DIMM | writes telemetry to a mounted disk, which is used by the LBTPlot tool; writes log data to a mounted disk |
| LBC | the repository disk is mounted to write the guide thumbnails for FACSUM; the IDL license file points to mountainapp1 |
| | the logs disk is mounted to copy logs to |
| LUCI | no dependencies - it uses newdata, which is mounted from mt-archive, and doesn't write telemetry |
| MODS | no dependencies - it uses newdata |
Miscellaneous
| ovms | writes telemetry to a mounted disk |
| ssh | rsyncs telemetry to Tucson, so the disks are mounted there |
| linuxapps | will eventually write telemetry and maybe logs |
Not Affected
- remote ops
- big displays
- weatherstation PC