The attached MS Word documents are from Doug Summers and contain his analysis of pointing models constructed from PCS logging data.

The following VCAN documents may be related to Pointing/Collimation:
  • LBTO T481x00003, (P. Wallace, "A rigorous algorithm for telescope pointing").
  • LBTO T481s00220, (Functional requirements for the LBT AGW-SW).
  • LBTO T481x00200, (pay more attention starting at page 15: PCS pointing log data).
  • LBTO M008s00022, (Discussion of pointing preset tests, and pointing models).
  • LBTO I603o00706, (start at page 34, section 6; this is specific to MODS, but still of some interest).
  • LBTO T481x00130, (Range balancing).
  • LBTO T481s00066, (Binodal Astigmatism).
  • LBTO M002s00101, (Telescope and Enclosure Coordinate System Definitions).
  • LBTO M002s00105, (LBTO Coordinate System Description)
  • LBTO T509g00502, (Optical Path Difference and Vibration Monitoring System).

In addition, the following external articles may be of interest as well:
  • An explanation of the motivation and brief theory behind the TPoint software (also available in PDF form).
  • TPoint user's guides (helpful when using the TPoint software to produce pointing models). There does not seem to be an official TPoint user manual. Software Bisque has incorporated TPoint as an add-on to their own proprietary software suite (with an integrated GUI), which is probably not applicable to LBTO since we use the Linux command-line version. Various versions of the Linux TPoint guide can be found on the Internet; one such document is attached here for reference.
  • SPIE 8131-81310H-1, "Maintaining Hexapod Range." This article, prepared by LBTO staff, appeared in an SPIE conference and can be found on SPIE's website.

The following (currently) open IT issues may also be related to collimation and pointing (they do not necessarily need to be resolved right away, but it is good to be aware of them and keep an eye on them):

The following are some notes on the existing scripts and procedures that Doug Summers uses to generate and improve pointing models. We may rewrite some of these tools in Python 3 in the near future to provide a better interface and more functionality; updated instructions will be provided once the newer versions are released.

First, we will briefly describe the concept of a pointing model. Pointing refers to aiming the telescope system (the mount and, eventually, the camera sensor) at a target object (a star or something else) on the sky. Such pointing needs to be very accurate to be useful for acquiring the target quickly; otherwise, excessive searching and adjustment will have to be performed.

Due to several factors (mechanical, optical, and others), the telescope system on its own is difficult to point accurately. The software control system must therefore correct and compensate for these defects so that accurate pointing can be achieved. One popular telescope pointing-modeling package, "TPoint", is used worldwide by many observatories as well as by advanced amateur astronomers.

We also use TPoint in the telescope control system to correct pointing errors. A typical TPoint setup consists of taking several measurements on the sky. For example, slew the telescope to a known star and record the position reported by the telescope control system; call this "p_t". Since the star's position is known, call it "p_s". Ideally, "p_t" and "p_s" would be identical if the mount operated perfectly. In reality there is usually a difference, delta, which we can compute and record. Each such star-position measurement is called a sample. After collecting enough samples (usually from a few dozen to hundreds), the TPoint software can build a sophisticated model accounting for all kinds of "features" of the telescope system that affect its pointing.
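The arithmetic behind a sample is simply the difference between the two positions. A minimal sketch in Python, using made-up coordinates (the values and the azimuth/elevation representation are purely illustrative, not real LBT data):

```python
# Sketch of the sampling idea: each sample pairs the catalog position p_s
# of a known star with the position p_t reported by the mount; the
# per-sample delta is what TPoint models. All numbers are made up.

samples = [
    # (p_s_az, p_s_el, p_t_az, p_t_el) in degrees -- hypothetical values
    (120.000, 45.000, 120.004, 44.997),
    (200.000, 60.000, 200.006, 59.995),
]

for az_s, el_s, az_t, el_t in samples:
    d_az = (az_t - az_s) * 3600.0  # delta in arcseconds
    d_el = (el_t - el_s) * 3600.0
    print(f'delta_az = {d_az:+.1f}"  delta_el = {d_el:+.1f}"')
```

A real sample set would of course come from the PCS logs described below, not from hand-typed coordinates.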

We do not usually collect such samples manually for the sole purpose of constructing pointing models, since telescope time is very valuable. Such a procedure is only needed when there is no model to start from in the telescope control system (for example, when everything is lost, or when significant structural changes to the telescope require starting over). When it is needed, Dave Thompson usually performs the sampling, and it is a relatively long process.

Normally, every night the telescope is operated, the PCS subsystem in the TCS writes out target acquisition information. The log files normally live in "/lbt/data/share/PointingLogs", with one log file per instrument. Each observation of a known target (i.e., one whose position is known) usually requires an acquisition correction (moving the object to the center of the field of view, whether the correction is made manually or automatically). Each such acquisition record can be used as a TPoint sample. As we accumulate more acquisition corrections, we have more TPoint samples, which we can use to build and refine our TPoint models and, hopefully, improve the telescope's pointing accuracy.

Combining PCS acquisition logs and using them as TPoint input samples is the normal process for generating and refining the telescope's pointing model. Usually one waits until there are enough PCS pointing acquisition log entries, runs the scripts described below to convert them into acceptable TPoint input data, and then runs a TPoint modeling session to generate a new pointing model. Note that pointing models are currently per instrument and focal station. The pointing models in use are listed in "/lbt/data/config/tcs/PCS/PCSInstrument.conf", which records the model for each focal station. Also note that these directories (including the one mentioned in the previous paragraph) all live on the mountain storage; you will need to log in to a mountain machine (such as obs3) to access them.

We will now describe how to process the PCS pointing logs into TPoint input data. Below is a list of the current scripts, with a brief comment on each one's role and function; later we will explain how to use them. Currently these scripts live on the local SVN server: "". There is a plan to move them to GitHub in the near future; updates will be provided when that happens.
  • "": this is a perl script that is used to break up a file into multiple files with a numeric suffix (e.g., 001-NNN).
  • "": this is a perl script which I believe is used to remove PCS observation acquisition log entries based on some given filters.
  • "": this is a perl script used to extract acquisition log entries within certain dates by using a Modified Julian Date time.
  • "": this is a perl script used to remove multiple headers from a collected PCS pointing acquisition log file.
  • "": this is a perl script used to remove the first N observation records from each header delineated record set in the logs.
  • "": this is a perl script used to break one log file into two, based on an input offset time in seconds.
  • "": this is a bash script that is the entry to start the overall whole process.
  • "": this is a bash script mostly identical to the previous one, other than some minor customization made by Doug.

Note that some of these scripts are not intended to be used alone on the command line; most of them are used internally by the bash script. Some of the scripts may also seem odd in how they are designed or how they work. For example, "" is used to split each log file into two parts, and there is no obvious reason why things should be done this way. "" only accepts input dates listed in a file, and you cannot specify consecutive dates as a range (you have to list all of them, even if that means listing 100 consecutive dates). The way the TPoint inputs are generated can also be quite confusing and involves too much manual processing. The reason for mentioning this is that if you feel the same way while reading the instructions below, that is normal. These scripts were probably designed quickly to test some hypotheses a while back and then stuck the way they were. These are also some of the reasons for the plan to redesign them.
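As a taste of how a rewritten tool could remove the list-every-date limitation, here is a sketch of a filter that accepts inclusive MJD ranges. The log format is an assumption: the sketch supposes each record line carries its MJD in a whitespace-separated column, which may not match the real PCS log layout:

```python
# Sketch of an MJD range filter for a hypothetical Python 3 rewrite.
# Assumption: each record line has its MJD in column `mjd_column`;
# header or malformed lines are skipped.

def in_ranges(mjd, ranges):
    """True if mjd falls within any (start, end) inclusive range."""
    return any(lo <= mjd <= hi for lo, hi in ranges)

def filter_records(lines, ranges, mjd_column=0):
    kept = []
    for line in lines:
        fields = line.split()
        try:
            mjd = float(fields[mjd_column])
        except (IndexError, ValueError):
            continue  # not a record line (header, blank, etc.)
        if in_ranges(mjd, ranges):
            kept.append(line)
    return kept
```

With a design like this, "100 consecutive dates" becomes a single `(start, end)` pair instead of 100 lines in a filter file.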

Here is an outline of the normal process:
  • First of all, pointing models should be constructed per focal station. So suppose we are constructing pointing models for MODS on the SX side. Go to "/lbt/data/share/PointingLogs" and obtain the related PCS log file "mods_directgregorian_sx_gcs.log".
  • We will extract the acquisition correction records within a certain time range using the script "". Suppose we have a small file called "filterdates" that contains the MJD dates we are interested in (one per line).
  • Use the command: "./ -i filterdates mods_directgregorian_sx_gcs.log mods_sx.log". The "-i" flag says we want to include the times listed in the file "filterdates"; without it, the script excludes those times instead. The final argument, "mods_sx.log", is the name of the new file to be generated, containing only the records within the specified time range.
  • Also note that the name "mods_sx.log" is NOT arbitrary: the bash script we will use has it hardcoded. The main difference between "" and "" is the hardcoded file names inside; these scripts do not let you specify the names on the command line or in a configuration file elsewhere. I guess the second one, "", was created to experiment on files with different names. So remember to name your files correctly: for MODS, "mods_sx.log" or "mods_dx.log"; for the LUCI cameras, "luci_sx.log" or "luci_dx.log"; for ARGOS, "argos_sx.log" or "argos_dx.log". Currently we only construct pointing models for these focal stations. This is one more reason to rewrite some of these tools to be more flexible.
  • We then call the bash script: "./ sx_dg 23000 2". The "sx_dg" argument tells the script to use the "mods_sx.log" file we just generated. Remember that the whole process must happen within the same directory, since all file paths are relative to the current directory. Below is the mapping from focal station name to log file name:
    • sx_bg -> luci_sx.log
    • dx_bg -> luci_dx.log
    • sx_dg -> mods_sx.log
    • dx_dg -> mods_dx.log
    • sx_argos -> argos_sx.log
    • dx_argos -> argos_dx.log
  • Continuing with the previous command: "23000" is an offset in seconds. It is used by the "" script (called internally by "") to split each of the temporary log files produced by "" into two parts. The final argument, "2", is passed to another internally called script, "", to discard the first two acquisition records. You might wonder why it works this way, and whether other values could be used, such as "./ sx_dg 5000 1". I think they could, but it has always been run like this, so I suggest using the values "23000" and "2" when working with these scripts. The workflow is a bit awkward as well; once we have newer tools, we will design a better workflow and review all the steps and decisions involved to evaluate whether they are appropriate and necessary.
  • After this step, you will find multiple new files written to the current directory. The file "tp_sx_dg.out" contains a list of all the data files that were generated. The data files have names of the form "TP_N.B.IN" or "TP_N.IN", where the "N" part is a digit such as "0", "1", or "2". The total number of data files varies with the size of the input PCS log. I think the "TP_N.B.IN" file is the first in the series of files sharing the same "N": for example, "TP_0.B.IN" is the first file numbered "0", and "TP_0.IN" is the second. In any case, we should follow the order listed in the file "tp_sx_dg.out". I don't know why it was designed this way, and the way these files are named and organized may cause some confusion, but that is the workflow with these scripts. For reference, here is what I have in the file "tp_sx_dg.out"; we will use this content in the next step:
indat TP_0.B.IN
indat TP_0.IN
indat TP_1.B.IN
indat TP_1.IN
indat TP_2.B.IN
indat TP_2.IN

  • Finally, once we have these files, we will create another file ourselves to use as an input file to the TPoint software. Let's call this file ""; the ".pro" suffix stands for a TPoint procedure file. Here is what we need to put into it, based on the output of the previous step:
indat TP_0.B.IN
indat TP_0.IN
indat TP_1.B.IN
indat TP_1.IN
indat TP_2.B.IN
indat TP_2.IN

  • As you can see, most of the contents are the same as the file "tp_sx_dg.out" generated in the previous step, so we can simply rename that file and do a little editing. What we put here is a TPoint procedure that concatenates all the data files (the "TP_N.B.IN" and "TP_N.IN" files) into a single dataset, in the order specified in "tp_sx_dg.out". Lastly, we call "outdat" to write this combined dataset out to a new file named "".
  • Now start a TPoint session by launching the "tpoint" command. This assumes you have installed the latest TPoint version (the 2016 version) and have it on your system path. Once at the TPoint shell, execute the command "inpro", which instructs TPoint to load the file "" we just created; note that we must be in the same directory or TPoint will not be able to find the file. Next, execute the procedure "READ_LBT" defined in the pro file by typing ".read_lbt" in the TPoint shell. Note that it is fine to use a lower-case name even though the name defined in "" is all upper case, and the leading dot is required, so be careful not to omit it when typing ".read_lbt". After this command, you will see a screen dump of the entire dataset as it is loaded by TPoint. For now, quit the TPoint shell (either type "end" or press Ctrl-D).
  • Back in the bash shell, you can see that the file "" named in the procedure "" has indeed been generated in the current directory. From now on, this is all we need. I suggest renaming it to include a timestamp and then removing all the files the scripts generated, including all the "TP_N.B.IN" and "TP_N.IN" files and the "tp_sx_dg.out" file (if you did not rename it to create ""). Deleting these files is important, because if you rerun the scripts later, they may be confused by the presence of leftover "TP*" files in the current directory.
  • From now on, we will use TPoint and the dataset contained in "" to generate new pointing models.
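The copy-and-edit step above (turning "tp_sx_dg.out" into the procedure file) is mechanical enough to script. A minimal sketch, which only extracts the "indat" lines in their original order and deliberately leaves the surrounding procedure definition and the final "outdat" line to you, since their exact form depends on your TPoint setup:

```python
# Sketch: pull the "indat" lines, in order, out of a tp_sx_dg.out-style
# listing. These lines form the body of the new TPoint procedure file;
# the procedure wrapper and the closing "outdat" line are added by hand.

def pro_body(out_listing):
    """Return the 'indat' lines from the listing text, preserving order."""
    return [line for line in out_listing.splitlines()
            if line.startswith("indat ")]
```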

The previous steps describe how to transform a PCS acquisition log file into a usable TPoint dataset. Below we provide a brief introduction to, and discussion of, using TPoint to analyze and generate pointing models. Note that the steps below are purely TPoint-based and should not change even if we update to newer tools that let you obtain TPoint input datasets more easily and consistently. In other words, what was described before may change in the future, while what follows will likely stay the same.

[TPoint brief discussion to be written ...]

Here is a minimal flow for generating a new pointing model based on the data we just generated (""):
  • Open a TPoint session by running the "tpoint" command in the same directory as the "" file.
  • Issue the command "indat" (this loads the data in the file ""; you will see a list of the data displayed as well).
  • Run the command "fauto" (this lets TPoint search for an optimal model fit; TPoint will also open a 9-panel window showing some plots, to be discussed more later).
  • Run the command "outmod mods_sx.model" (this writes the TPoint model itself to the current directory).
  • Run "end" to quit the current TPoint session.

The above is the bare minimum needed to obtain a TPoint model from the input data. Usually, though, we also explore the data in TPoint and try out various settings to see how the resulting model is affected and whether better results can be achieved. Below are some brief discussions of these considerations.

[More TPoint operations to be written ...]

[Back testing discussion to be written ...]
Attachments:
  • Pointing-Analysis-March_2017-2.docx (432 K, 13 Jun 2018, YangZhang): Doug Summers pointing model analysis
  • Pointing-Analysis-March_2017.docx (1 MB, 13 Jun 2018, YangZhang): Doug Summers pointing model analysis
  • TpointLinuxManual.pdf (367 K, 13 Jun 2018, YangZhang): TPoint Linux user guide (found on the Internet; a fairly old version, but probably still okay)
Topic revision: r9 - 30 Aug 2018, YangZhang