Two Different Ways to Run the Simulation
There are two different ways to use the TCS (Telescope Control System) and the MCSPU (Mount Control System Processor Unit) in simulation mode. The older method (in use before June 2017) uses an architecture very similar to the architecture of the real TCS. The newer method uses a "canned" VirtualBox appliance which combines both pieces of software and which is accessed via a web browser.
This section describes how to use the older method. The older method is useful to LBT programmers who are modifying the TCS/MCSPU software and who are using the simulation mode to test their modifications. It provides all the MCSPU and TCS logs and event files which the real system produces, so it allows the programmer to look inside the TCS/MCSPU software while it is running, to tinker with the configuration files, and so on.
The newer method is more appropriate for users whose main concern is an instrument and how it interacts with the TCS. For this type of user the TCS/MCSPU simulation is a black box: the user just wants it to respond to their instrument software with a faithful imitation of the real telescope. The older type of simulation can be used with instrument software too, but it can actually be more work to use if you have no interest in looking under the hood of the TCS. For documentation on using the newer, canned version of the TCS simulator, refer to the wiki section entitled Using TCS Simulator Appliance.
LBT Simulation Services
The control software that runs on each LBT instrument connects to the LBT Telescope Control System (TCS) by communicating with the TCS subsystem using an ICE RPC framework. ICE is a software product made by an organization called ZeroC. RPC, or Remote Procedure Call, is a general software concept describing any scheme that allows a program to, in effect, make calls to functions (or subroutines, or methods) which reside in a completely different program. The other program can be running on the same computer as the program making the call, or it can be on some other computer as long as there is a network connection between the two computers. So ICE RPC allows the instrument control software to execute functions in the TCS in order to make the telescope do things like presets and offsets, and to read back TCS status information.
In normal operation the TCS software is connected to a large number of hardware devices which control all the telescope's systems. Although not originally designed to run in a simulation mode, it was not too hard to add this feature to the TCS. This means we can run a copy of the TCS software on a computer which is connected to no telescope hardware at all, yet it behaves as though it is connected to a real telescope. At least, well enough that any instrument software connected to it via ICE RPC cannot detect the difference. As long as the RPC commands return the right things at the right time to the instrument, the illusion is complete.
The control software for any LBT instrument can be directed to use a simulation copy of the TCS just by changing the IP address which it uses to communicate with the TCS IIF subsystem. So, any LBTO instrument can be connected to a simulation of the TCS for testing. The programmers who wrote the LUCI control software went a step further and gave the LUCI control software its own simulation mode as well. So it is possible to run a copy of the LUCI control software on a computer which is not connected to any of the LUCI instrument hardware. That LUCI simulation can be pointed at the TCS simulation, and normal LUCI observation scripts can be run with no risk to (or wear and tear on) any hardware. The timing behavior of the TCS simulator is roughly the same as that of the real TCS. (Actual telescope timing for things like guide star acquisition and offsetting varies with observing conditions and countless other environmental factors, so you cannot use the simulation to get precise timing information.)
The TCS and MCSPU Simulations
The simulation of the TCS usually runs on a computer called tcs-sim.tucson.lbto.org. (It can also be run on the programmer's desktop machine or a VM.) That simulation in turn depends on another simulation, of the MCSPU system. The MCSPU controls the mount and rotator movement, among other things. It usually runs on a computer called mcspu-sim.tucson.lbto.org. (It, too, can be run on the programmer's desktop machine or a VM.) Just as the real TCS is running nearly all the time, we endeavor to keep the simulations on tcs-sim and mcspu-sim running all the time too, so users do not have to worry about starting and stopping the TCS simulation. The simulations probably will need to be restarted occasionally; they are not quite as robust as the real thing. Contact the software group if you find that the TCS simulation is having trouble. If you are using tcs-sim, log into it as user "tcs". If you are using mcspu-sim, log into that machine as user "telescope".
A programmer may wish to compile and run the TCS on his or her own desktop machine or on a VM. See document T481s00430.pdf for information on how to configure the software to run locally in simulation mode.
If you are using machines other than tcs-sim and mcspu-sim, then make sure the computer running the TCS simulation has had its config files set up to tell it how to communicate with the computer running the MCSPU simulation. The computer running the TCS simulation should have the TCSLOCALE environment variable set to "simulator". On that machine, the mcs/etc/mcspu.conf file in the programmer's TCS source code should identify the IP of the computer running the MCSPU in the "simulator:mcspuIP" parameter. Refer to T481s00430 for more details.
Access to and control of the TCS Simulation
Using the TCS simulation software will require you to do a few things to the TCS which, at the real telescope, are normally handled by the telescope operator. This is true no matter which instrument or simulated instrument you are using.
First log in to the computer which is running the TCS simulation. Normally this is done with "ssh" using the -X option. If you are using tcs-sim, it is already properly configured. If you are using some other computer on which you have just compiled the TCS, then you have to make sure the PATH environment variable is correct. Normally this is done by cd'ing to the trunk/ directory which holds the TCS source you compiled, and executing a script there called "run-from-here". At any time you can check whether this is set up correctly by typing:

which TCSGUI
If the system cannot find TCSGUI, you have to run the run-from-here script. If this "which" command shows that there is a TCSGUI program in your PATH, make sure that it is the one you intend to use. In the Tucson environment, for instance, if it shows "/lbt/tcs/current/tcs/bin/TCSGUI", that is not the right one. You want it to show the path to the version of the TCS code you just compiled.
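Put together, a login-and-PATH check might look like the sketch below. The checkout directory is an assumption, and whether run-from-here must be sourced or simply executed depends on how that script is written:

```shell
# Log in with X forwarding so GUI windows display on your local screen:
#   ssh -X tcs@tcs-sim.tucson.lbto.org

# On a machine with a freshly compiled TCS, put that build's programs on
# the PATH.  The checkout location below is hypothetical; use your own:
TRUNK=${TRUNK:-$HOME/tcs/trunk}
if [ -d "$TRUNK" ]; then
    cd "$TRUNK" && . ./run-from-here   # may need sourcing to affect this shell
fi

# Check which TCSGUI the shell will actually run:
which TCSGUI || echo "TCSGUI not on PATH - run run-from-here first"
```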
Make sure that the mcspu program is running. "ssh -X" into the computer which is running the mcspu simulation. Then, if that computer is mcspu-sim, type the MCSPU start command; if it is any other computer (such as your desktop or a VM made for the purpose), run it in the appropriate user's home directory. If the MCSPU program is already running, it will tell you so and display a copy of the Engineering Interface (see T48s00197 for instructions on how to use the Engineering Interface). If it is not running, this command will start it and then open an Engineering Interface window. The mcspu software must be running before you try to use the TCS software, or at least before you do anything with the TCS that moves the simulated telescope or rotators.
The commands you will need most of the time are those which bring up 5 different TCS GUIs: TCSGUI, IIFGUI, PSFGUI, PCSGUI, MCSGUI. (TCSGUI is really all you need, since it can be used to open all the others.) These GUIs will allow you to control all the things one normally needs to control in order to use the simulation.
Note that whether a subsystem's GUI is open or not has no effect on the operation of the subsystem. The GUI shows you what the subsystem is doing and allows you to command it, but closing an open GUI has no effect on the TCS. They are completely separate programs.
The first GUI to use is the TCSGUI. This shows you which TCS subsystems are running and allows you to access all the other subsystem GUIs. Type:

TCSGUI
In simulation mode we usually do not run the DDS, the ENV, and the ECS subsystems, so those will normally show as "stopped". All the rest should be running, except possibly the AOS. The text next to the subsystem names shows, in green letters on a black background, the name of the computer which the subsystem is running on. Click "Start" and "Stop" buttons as needed to make the GUI look like the example above. If the scripts you are going to run do not use the AO system, you should stop it (click the AOS "stop" button on both left and right sides of the GUI). In that case, you may need to set up the PSF system for a "rigid" secondary, or vice versa. See the PSF section below.
The next thing to check is the instrument authorization. This is done with the IIF GUI. Click the "GUI" button for the IIF subsystem (third one from the top). That will open this:
You will probably want to keep this GUI open because it shows you what is happening to the telescope during execution of an instrument script. The scrolling window at the bottom shows important IIF error messages. If a script fails because the object is below the horizon or the co-pointing limit was exceeded, it will show up there. (The LSS GUI can be used to see messages from ALL subsystems.) Disregard the label at the top center of this GUI which says "Simulator No". That is what you need it to say in order to use the TCS in simulation mode. (See next GUI.)
The first thing to look at is the authorization at the top on the left and right. If you are using LUCIFER on both sides, then it should show both sides authorized as lucifer (as in this example). In that case you can use it as it is (skip the "IIF control" GUI described next). However, if your script is, for example, monocular, using only the left side LUCI (or needs a different combination of instruments), then you need to change the instrument authorization. This is done with the "Control" button at the top of the IIF GUI. Click that to open this GUI:
The top section allows you to change the authorization. Pull down the menu at the top under "left instrument" and select the desired instrument. Do the same on the right side. So, in our left monocular example it would show "LUCIFER" on the left and "none" on the right. Then click the "authorize" button in the top row center. Notice that this GUI also allows you to select pseudo-monocular mode (center region check boxes) and to cancel or abort a preset. (If the IIF gets stuck, and cancel or abort don't work, re-authorizing will often get it sorted out: just re-click the "authorize" button. More on that later.) There are 2 additional pull-down lists below the instrument selection pull-downs, labelled "Left Focal Station" and "Right Focal Station". These allow you to further specify the identity of the focal station in the case where LUCI is the selected instrument. LUCI being used with ARGOS is treated by the TCS as a different focal station from LUCI not being used with ARGOS.
You can close the control menu immediately after authorizing, but completion of authorization can take many seconds. Watch the scrolling messages at the bottom of the IIF GUI for the messages that say the left side is authorized and the right side is authorized. (It can take about 30 seconds to move the imaginary M3.)
Just below the "authorize" button is the "Simulator" button. Do not click this button. The text next to it should say "No". One might think that, since we are simulating, this should say yes, but that is not the case. This is for a different kind of simulation which is not useful in this context. This text is also echoed on the IIF GUI, so don't be misled by it.
After the authorization is configured the way you want it, the next step is to open the MCSGUI. Click on the "GUI" button for the MCS. It will open this GUI:
The "Tracking State" status in the top section should show "HOLDING" for AZ and EL. If the last user left the TCS while it was tracking a target, the Tracking State will be "TRACKING". In that case, you should probably click the "HOLD" button on the left and/or right side so that the tracking state becomes "HOLDING". If the Tracking State shows "TRACKER_VEL_MODE", as in this example, then you must click the associated "HOLD" button (the left one in this case) so that it shows "HOLDING". If the status displays in this GUI show just a hyphen, "-", then the TCS is not able to communicate with the mcspu simulator: it may not be running, its rpcserver may not be running, or the mcspuIP parameter in mcs.conf may not be set correctly. You may find it useful to keep the MCSGUI open while you are running your instrument.
If you are using MODS, then you should next check the state of the instrument rotators for that instrument. The rotators which your simulation will use must be turned on. When you click on the "Rotators" button (center left), this GUI will pop up:
Select the rotator GUI which you need to see. For LUCI-1 it would be "Left Front". That will bring up a rotator GUI which will probably look like one of the 2 different states shown below:
Notice the top left "Drive Power" status in the GUI. If it says "off", as in the top image of the GUI, then you have to click the "ON" button to turn on the rotator. It takes about 15 seconds to turn on. If your left front rotator GUI looks like the bottom version, then the rotator is already ON and ready to use. The cable chain "Tracker State" must show "override on" in simulation mode. That allows the rotator to move even though the associated cable chain is not enabled. (The cable chain is not simulated.) You can change this by using the GUI which is opened when the "Modal Cmds" button is clicked.
One usually closes these GUIs after the rotators are turned on (unless you need to watch the rotator behavior while you run your simulated observations).
The PCS Subsystem GUI
The PCSGUI shows very detailed information on RADEC and DETXY offset status, which may be of interest to an observer verifying that a script is doing what it is intended to do. If this is of interest, open the PCS GUI by clicking on the "GUI" button for the PCS subsystem. Like any of the GUIs, it can be closed at any time without affecting the operation of the underlying subsystem.
AOS Control System
If you intend to use scripts which make AO observations, then the AOS subsystem (in the TCSGUI) must be running. Further, the PSF subsystem may need to be configured for your AO requirements. If you are doing a seeing-limited observation (non-AO), then the PSF GUI should show that the "rigid" secondary is being used. For an AO observation it should show that the "Adaptive" secondary is being used. Open the left or right PSF GUI by clicking on the "GUI" button for the left or right PSF. The PSF GUI is shown below.
Note that next to the "simulator" button at the left side it must always say "yes". "Yes" is the default state for the PSF in the TCS simulation mode (i.e., when TCSLOCALE=="simulator"), but it could be changed to "No" if someone accidentally clicked the "Simulator" button. So, verify that it says "yes".
If you intend to do seeing-limited observations then the "Mirror" status should show "Rigid". In that case the AOS subsystem can be running or stopped; it does not matter. If your scripts intend to use the AO system, then the Mirror status MUST be "Adaptive" and the AOS subsystem must be running.
To change the "Mirror" status, click the "control" button in the PSF
GUI. That will open the GUI shown below:
To change the type of secondary, click the "secondary" button in the upper left corner. It will toggle between rigid and adaptive with each click. Regardless of whether you intend to explicitly use AO or not, if the Secondary has been selected to be "Adaptive", then the AO subsystem must be running.
Useful TCS Information
These are some useful facts about the TCS and some explanation of TCS behavior.
There Must be an Active Preset to Command the Telescope
After a preset is sent to the TCS, and the preset acquisition phase is able to complete (does not fail), there then exists an "active preset". That means the TCS has an astronomical target, is tracking it, and is guiding on it too (if the user specified a telescope mode that does guiding). The IIFGUI has a left and a right "preset" status display. Those displays are yellow while a preset's acquisition phase is in progress. Typically "acquisition in progress" means the telescope is slewing to the astronomical target. After it gets to that target, then, if the preset mode is GUIDE or ACTIVE, the guider has to acquire the guide star and the telescope has to be collimated. All of this can go on for a minute or two after the telescope has arrived at the target and the MCSGUI shows that it is "tracking" and "on Source". After all that is done successfully, the preset's acquisition phase is said to be completed or "finished" and the preset is now "active". The yellow preset "Active" status in the IIFGUI will turn to green on black when this state change happens. If the preset mode is "TRACK", then there is no guide star, so no guiding or collimation is done; the acquisition phase will be finished as soon as the telescope is "on Source" as shown by the MCSGUI. If there is an error because, for example, the guider could not find the guide star, then the preset will fail. The telescope will continue tracking open loop in this case. That is, the PCS will continue sending tracking polynomials to the MCSPU, which will continue to use them to drive the telescope, but the TCS preset will not be "active".
Once there is an active preset, the telescope can respond to offset commands. A typical observing script has a preset as its first observation item. After that it has a series of offsets and exposures and possibly even other presets. When the script finishes running and all your data is saved, the preset is STILL active; the telescope is still tracking. If a preset is active, you could then execute a script which does only an offset and nothing else. (Not every script has to contain a preset.) If there is no active preset, then an attempt to execute that same script will generate an error from the TCS. The error (displayed in the IIF event log window) will say "no active preset".
Stopping A Preset, IIF Error Recovery
There may be times when a preset is running and you want to stop it or cancel it. If the MCSGUI shows that the "tracking state" is "TRACKING", then you can abort the preset gracefully by clicking the AZ and EL "HOLD" buttons in the MCSGUI. This makes the simulated telescope mount stop moving and ignore the tracking polynomials coming from the PCS. Within a few seconds the difference between the telescope's actual position and the commanded position will become too large for the IIF to tolerate, and it will cancel all active presets and re-initialize itself.
The "control" button at the top of the IIFGU will open the control GUI. That GUI also gives you the opportunity to cancel or abort a preset. Experience has shown that in simulation mode the "cancel" and "abort" buttons may not be effective if there has been some kind of error (one that didn't fail the preset). In the real world, the cancel and abort buttons always work, but in simulation mode they are not so reliable. The method described above ("hold" buttons in the MCSGUI) is more effective, if the MCSGUI shows that it is "TRACKING". If you have a problem getting the IIFGUI "preset" displays to show "cancelled" or "failed", one way to usually get the IIF
straightened out is to open the "Control" GUI again and re-authorize. That is, just click the "authorize" button at the top of the GUI. That reinitializes a lot of IIF
logic and stops any active preset.
Mount and Rotator Control
The mount can be commanded manually even if there is no preset active. This is done with the MCSGUI using the "HOLD" and "SlewToHold" buttons. If the mount is in the "TRACKING" state (as shown by the MCSGUI), clicking one of the HOLD buttons will make the associated axis come to a stop and ignore the tracking polynomials it is receiving from the PCS. If there is an active preset, this will kill it. To move the AZ or EL axis to a particular position and have it hold there, type the desired position (in degrees) into the text box next to the "SlewToHold" button for that axis. While you type, your keystrokes will be echoed in red. After you hit carriage return, the number will turn black if it is legal. If it remains red, it is out of range and was ignored; try again. Once the text in the text box is black, you can click the associated "SlewToHold" button and the axis will move to the indicated position. Most of the other buttons (like stow pins or HBS) do not do anything useful in simulation mode.
If you open one of the Rotator GUIs, you will find it has the same type of manual control as the AZ and EL axes.
IIF Subsystem restart
In rare cases it may be necessary to stop and restart the IIF subsystem. If that is done, the ICE connection which LUCI, MODS, or any other instrument has becomes stale, and the instrument will no longer be able to communicate with the TCS. For LUCI, this is nearly always fixed by clicking the "reconnect to IIF" button in the LUCI Telescope Service GUI.
Failure of a Subsystem
Rarely, you will look at the TCSGUI and see that a subsystem has stopped. If so, you can just restart it. Rarely, a subsystem will get hung somehow and will not respond to clicking its "stop" button on the TCSGUI. In that case you can use the "Kill" button, which will always stop the subsystem.
Starting the LUCI Software Simulation
1- Using x2go or opennx (at the moment only x2go works; we should have opennx soon), connect to luci-sim2.tucson.lbto.org as user "lucifer" (or to some other VM that has been set up for LUCI simulation). You can use ssh instead of nx for all of it, but Java GUIs are particularly slow over ssh.
2- Follow the instructions for starting LUCI in simulation mode which you will find HERE. They describe setting up both LUCI-1 and LUCI-2. If you only need one of them, you can just start that one. Note that the wiki instructions describe a shutdown procedure too.
3- Once the LUCI software is running and initialized, you can execute whatever LUCI observing scripts you need to run, or other special scripts. Just copy them into /home/lucifer/lcsp/scripts/ (or some other local directory). When you click the scheduler GUI's "Load" button it will ask for a script file, so just navigate to the directory holding your script and select the desired script file.
4- Note that the simulated data files get written into /home/readout2/DATA/ and /home/readout1/DATA/ on luci-sim2. These FITS files contain just simulated detector noise. They can use up the available disk space fairly quickly (one or two days of heavy use will do it). You can manually delete some files by going to ~readout1/DATA/ and ~readout2/DATA/. Do NOT delete everything in those DATA directories. There must be at least one directory in there (because of a GEIRS bug), and it should be the latest directory. Files within today's directory can be deleted, but always maintain at least one directory there. (Do a mkdir manually if you have to.)
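The cleanup rule in step 4 (remove old files, but always leave at least one directory in DATA) can be sketched as a small shell function. This is only an illustration of the rule, not an official LBTO tool, and the keep-the-newest-by-name logic is an assumption:

```shell
# Free disk space in a GEIRS DATA directory while always leaving at least
# one subdirectory in place (a GEIRS bug requires one to exist).
cleanup_data() {
    data_dir=$1
    # Newest subdirectory by name sort (assumes date-named directories):
    keep=$(ls -1 "$data_dir" | sort | tail -n 1)
    for d in "$data_dir"/*/; do
        name=$(basename "$d")
        if [ "$name" != "$keep" ]; then
            rm -rf "$d"          # older date directory: remove entirely
        fi
    done
    # Safety net: if nothing is left, recreate an empty directory.
    [ -n "$keep" ] || mkdir -p "$data_dir/empty"
}

# Example usage (paths from the text above):
#   cleanup_data /home/readout1/DATA
#   cleanup_data /home/readout2/DATA
```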
Stop LUCI Software
It is desirable to stop the simulation when you are done using it. If it continues to run, it will continue putting messages in the logs, and if the TCS is stopped, it will generate lots of log messages indicating that it cannot communicate with the TCS. The machine running the simulation does not have huge amounts of disk space, so this has a noticeable effect. To shut down the simulation, do the shutdown steps described HERE.
Stop GEIRS Software
For instructions on stopping the GEIRS, click HERE.