BG February Com Night 3, February 3 2009
Observers: Dave Ashby and MCS team for first part of night, then Andrew Rakich
Telescope Operator: David Gonzalez
Participants: Tom Sargent, Dan Cox and Dave Ashby.
We worked until about MST 20:00 testing and debugging the CPP status interface (item 2). It now seems fairly stable, but mcstemp does drop a core when the program shuts down. Tom needs to get some sleep before he can look for the reason. We also looked very briefly at trajectory sequencing, but more work is needed.
We installed build v111 of the FPGA code (item 1) and found that the motor encoders seemed to work, but the absolute encoders did not. We also could not reset the motor encoders. After reloading v110, the encoders still failed to load; this is apparently due to a subtle problem in Dan's code. Later we discovered that the azimuth DSP would hang after a number of cycles. Dan spent the rest of the night working on these two issues. Both appear to be tied to his new code (item 4).
In parallel with Dan's debugging effort, we also tested the new build of the servo and estimator code (item 5). Several bugs were identified and corrected. At
the end of the night, the new controller could operate in rate mode and could hold in position mode. Moves in position mode were erratic. We will continue
with this tomorrow.
We handed the telescope over to Andrew at MST 3:00, but we are less likely to do so tomorrow.
Can't speak for MCS. Field data taking was a wash; for almost all of the night the telescope struggled to collimate, with dome seeing the likely culprit. Noticed a few image jumps on the ~1" scale when M1 wasn't moving. Tried to up the WFSC exposure to 40 s, but this seemed to choke the WFSC (see below). Had to restart the AGW a few times before I realized that the 40 s exposures were probably making it spit the dummy.
Got some more "fast" data, but the fact that WFS was having problems converging has to be taken into account when analyzing it.
Still no field data
This is the first of three nights of MCS testing. We are not planning on opening the dome, but we do need operator support, as this testing is to take place remotely.
On the list for the following three nights:
1. Install a new build of the FPGA VHDL code:
The current version of the FPGA VHDL code is v110. This code has been in place, unmodified, for four years. We will first upgrade to v111, a version with updated SSI logic. This build will be performed on a new version of the Xilinx ISE software, so our main goal here is not to break the telescope. Changing this code early will give us a chance to fully test it over the next few nights. It is possible that this new version will fix the absolute encoders. However, if it does not, we may not be able to debug the issue remotely. If the new build proves to be stable but fails to correct the SSI encoder problem, we may need to schedule a trip to debug the problem.
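For reference, many absolute SSI encoders clock out a Gray-coded position word MSB first; whether the mount's encoders use Gray code, and what the frame width is, isn't stated in this report, so the following is only a hedged host-side sketch of the kind of decoding the FPGA's SSI logic ultimately performs:

```python
def gray_to_binary(gray: int) -> int:
    """Convert a Gray-coded word to plain binary (standard XOR cascade)."""
    mask = gray
    while mask:
        mask >>= 1
        gray ^= mask
    return gray

def decode_ssi_frame(bits):
    """Assemble an MSB-first bit stream into a word and decode it.

    `bits` stands in for the levels sampled on the SSI data line at each
    clock edge; the real frame width and coding are set by the encoder.
    """
    word = 0
    for b in bits:
        word = (word << 1) | (b & 1)
    return gray_to_binary(word)
```

For example, a position of 42 counts Gray-encodes as 42 ^ (42 >> 1); clocking those bits back through decode_ssi_frame recovers 42.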
2. Test the CPP status interface
Though we had changed the CPP during the last testing block, the CPP status information was lacking. Dan and Tom expect to have this working. Testing involves verifying the functionality of the slew and track flags as well as the time-to-intercept clock. This information appears to be accurate in the telemetry, but has not been communicated to the higher-level software.
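As an illustration only (the real CPP status word layout and clock units are not documented in this report; the bit positions and the 1 kHz tick rate below are assumptions), the slew/track flags and time-to-intercept clock might be unpacked on the host roughly like this:

```python
SLEW_BIT = 1 << 0    # assumed bit position, not the real CPP layout
TRACK_BIT = 1 << 1   # assumed bit position

def decode_cpp_status(status_word: int, tti_counts: int, tick_hz: float = 1000.0):
    """Unpack a hypothetical CPP status word and time-to-intercept counter
    into host-friendly fields (flag bits and a tick rate are assumed)."""
    return {
        "slewing": bool(status_word & SLEW_BIT),
        "tracking": bool(status_word & TRACK_BIT),
        "time_to_intercept_s": tti_counts / tick_hz,
    }
```

The point of the test item is exactly this hand-off: the flags and clock look right in the telemetry, so what remains is verifying they arrive intact at the higher-level software.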
3. Test the trajectory sequence numbers
Tom, Michele and Dan have added a sequence number to the polynomials. Their hope is to identify the source of the epoch jumps that were identified last time. They are operating on the theory that the order of the polynomials is being changed somewhere along the path. If at some point we are able to reproduce one of the epoch jumps, we should have a bit more information about the pedigree of the trajectory.
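With sequence numbers attached, spotting a reordered or dropped polynomial reduces to a monotonicity check; a minimal sketch (the field name `seq` is hypothetical):

```python
def find_sequence_anomalies(polys):
    """Return (index, expected, got) for every polynomial whose sequence
    number does not follow its predecessor by exactly one."""
    anomalies = []
    prev = None
    for i, p in enumerate(polys):
        seq = p["seq"]
        if prev is not None and seq != prev + 1:
            anomalies.append((i, prev + 1, seq))
        prev = seq
    return anomalies
```

Run against a recorded trajectory stream, any non-empty result would localize where in the path the ordering was disturbed.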
4. Test the new DSP-to-Host interface
Dan has developed a new interface between the DSPs and the host where data is snatched from the telemetry rather than through PCI fetches from the DSP memory. This new method, which is how the rotator currently works, dramatically reduces the amount of PCI traffic because of its extensive use of DMA rather than individual PCI fetches. This method also ensures that the data from the DSPs is atomic. The goal here is to streamline the DSP operation as much as possible which will reduce the likelihood of interrupt overruns.
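The atomicity point can be illustrated with a sequence-lock pattern (a common technique, not necessarily what Dan's code does): the writer bumps a counter around each update, and the reader retries until it sees an even, unchanged counter, guaranteeing an untorn snapshot. Grabbing a whole DMA'd block this way replaces many individual PCI reads, each of which could interleave with a DSP update.

```python
class TelemetryBlock:
    """Single-writer telemetry buffer guarded by a sequence counter."""

    def __init__(self):
        self._seq = 0        # even = consistent, odd = update in progress
        self._data = {}

    def publish(self, data: dict):
        self._seq += 1       # mark update in progress (odd)
        self._data = dict(data)
        self._seq += 1       # mark consistent again (even)

    def snapshot(self) -> dict:
        while True:
            before = self._seq
            if before % 2:   # writer is mid-update; retry
                continue
            copy = dict(self._data)
            if self._seq == before:   # nothing changed while we copied
                return copy
```

The reader never blocks the writer, which matters on a DSP where the goal is to avoid interrupt overruns.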
5. Test new controller code
The mount uses four DSPs per axis:
- FPGA interface - Oswald
- Host Interface - Mother
- Encoder Interpolation - Interp
- Motion Control - Servo
Of the four DSPs, two (Oswald and Mother) are responsible for communication and two (Servo and Interp) are responsible for signal processing. This code is difficult to maintain because it was extensively modified in place to add functionality and to fix bugs over the course of the last few years. Because of the way the software is written, different DSPs must be built using a specific version of the compiler. In particular, Mother is built with version 3.0 and the rest of the DSPs use version 5.0. The API developed for the instrument rotators proved to be much better and we have since been moving the mount software toward the same model. For the time being, we have been concentrating our efforts on the Servo DSP.
The Servo DSP provides the following functionality:
- Trajectory Evaluation - needs some work
- Command Preprocessor - deployed and tested
- Estimator - code is written and is ready for testing
- Controller (the servo) - code is written and is ready for testing
- Function Generator - code has not been ported yet
It is important that we not compromise the performance or stability of this essential code, so we are testing individual blocks of code and deploying them as they mature. The last piece of code to be deployed was the command preprocessor (CPP). Next on the list are the estimator and the controller.
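As a toy illustration of the estimator-plus-controller pairing (the real Servo DSP filter structure and gains are not described in this report; this uses a textbook alpha-beta estimator feeding a proportional position loop, with made-up gains, and models the axis as a simple rate integrator):

```python
def alpha_beta_update(pos, vel, meas, dt, alpha=0.85, beta=0.005):
    """One textbook alpha-beta estimator step: predict, then correct."""
    pred = pos + vel * dt
    resid = meas - pred
    return pred + alpha * resid, vel + (beta / dt) * resid

def hold_position(target, dt=0.01, kp=2.0, steps=2000):
    """Toy position mode: estimate the axis state from measurements,
    command a rate proportional to position error, integrate that rate
    as a stand-in for the mount axis."""
    pos_est, vel_est, actual = 0.0, 0.0, 0.0
    for _ in range(steps):
        pos_est, vel_est = alpha_beta_update(pos_est, vel_est, actual, dt)
        rate_cmd = kp * (target - pos_est)   # position loop outputs a rate
        actual += rate_cmd * dt              # axis modeled as an integrator
    return actual
```

This mirrors the layering under test: position mode is built on top of a working rate mode, which is consistent with what we saw tonight (rate mode and position hold working, position moves still erratic).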
Once the major pieces of the servo DSP code are reworked and tested, the underlying communication (Oswald and Mother) code will be updated and tested. We will not be able to address this issue this time, but this is on the horizon.
PS: The MCS guys having gone to bed, we are opening up for field measurements.
Details (Times in MST):
10:10 Open Dome, T=1.8 degC. , Wind (LBTO) 2-3 m/sec.
10:20 preset to BS 9145 GS 0 in Acquire mode
10:30 acquired star, about 10" off the hotspot, resending preset in ACTIVE and collimating approaching M1 y Limit IE = -23 CA = -13
10:31 came too close to ylimit (-2.43) add 1 mm to global Y on M1 and M2. M1 global Y goes from -0.5 to +0.5; M2 global Y goes from -0.2 to +0.8
10:35 seeing ~ 0.5"
10:40 switching to IDL collimating on GS 0. Image exhibiting 1" jumps when doing active optics with M1; will switch to split mode and keep an eye on active optics buildup on M2.
*BS9145 - Field Aberration Measurements
|| Doug's Angle
10:53 GS 3 not found. Back on axis to collimate , readjust pointing IE = -27 CA = -8
|| Doug's Angle
11:10 collimation not converging well and image bouncing around 1" or so with active optics in split mode. Seeing degraded to ~1". I suspect that we are suffering from having just opened the dome recently.
11:30 collimation still not converging, mostly bouncing in Z4-Z6. Sending preset in acquire mode; will look at a defocused image to see if dome seeing is obvious. Can't really draw any conclusions from that, but it looks pretty ugly.
11:36 will get another data set for our "fast" WFS experiments. I'll just leave the telescope collimating in an active optics loop with 5-second exposures, and we can compare this directly to the previous long collimation loop (WFSC 21-55).
11:37 collimating with 5 s exposures wfsc 57-77 This should show us worse convergence, but it shouldn't blow up.
11:50 seeing looks mildly better, will try again to converge collimating with EXP = 40 s
11:55 IDL having problems "error finding new filename". I've tried a few things, now will try making a new directory and starting again.
12:00 that didn't work. Tried sending an active preset in GCS. Failed waiting on image from AZCAM.
12:10 killed and restarted GCS. Sent ACTIVE preset. GCS is happily guiding but not doing any WFSing.
12:12 stopped and restarted AGW. Stopped and restarted GCS. Sent active preset. That did it. Now back to 40 s collimation loops...
12:20 notice that when I send EXP=40, IDL reports "readWFSCam -e -25536"; does this mean that we can't do 40 s exposures?
12:22 seems that the AZCam hung again. Cycling power on AGW again; this time I won't try sending 40 s exposures, will limit it to 30 s.
12:30 collimating in "primary" mode wfsc 80-93
12:35 noticed a large image jump (~1") while no collimation was happening.
12:40 failing to converge to any useful values with collimation. This is with primary-only mode. Suspect dome seeing, even though the vent doors are open, the wind is 0.8-2.8 m/s, and seeing in the guide image is ~0.5"; will take another set of "fast" data. Writing to /090203w_fast
12:45 "collimating" wfsc 3-7
12:53 taking 12X 5sec exposures wfsc 8-19
12:56 collimating wfsc 20- (failing to converge below 300 nm in two successive measurements in ~ 0.6-0.7" seeing)
13:01 12X 5 sec EXP wfsc 25-36
13:05 collimating wfsc 37-41
13:10 12X 5 sec EXP wfsc 42-53
13:14 collimating wfsc 54-57
13:18 12X 5 sec EXP wfsc 58-69 (seeing 0.4" - 0.5")
13:23 collimating wfsc 70-72 (seems that just as dawn is breaking we can finally collimate our telescope to <200 nm)
13:25 12X 5 sec EXP wfsc 73-84
13:26 note that for a while now the guide image has had a tendency to be elongated from bottom left to top right in the display, despite measured aberrations well below the seeing limit. Also, individual subaps in the WFSC image are often elongated. Vibration again, it seems.
HH:MM Close dome T=?degC, D=?degC, wind (LBTO) ? m/sec.
- 15 Jan 2009