Foswiki > Software Web > SubSystems > TelemetryController (06 Oct 2009, NormCushing)

Development Process

  1. Create the requirements specification and put it in the CAN.
  2. Create the preliminary design and put it in the CAN. The preliminary design covers the interfaces and architecture only.
  3. Iterate over the following steps until the released product meets all of the stated requirements.
    1. Extract the subset of requirements which are not yet implemented but are needed most. This subset shall be logically consistent and have a predicted development time between two and four weeks.
    2. Develop a test plan to cover this increment.
    3. Extend the design to cover the increment.
    4. Extend the implementation.
    5. Test the implementation.
    6. Deploy to the mountain.
    7. Test the deployment.

Requirements Collection

This section defines how the telemetry requirements are collected and provides notes on what is collected. The actual requirements are recorded in Jira; the document CAN 481s345a contains the official requirements.


In this section we define the different viewpoints of the telemetry controller. Viewpoints help us classify the sources of requirements.


Domain viewpoints represent domain characteristics that influence the system requirements. The following is a list of domain viewpoints for telemetry.

  • Coding Standards
  • Subsystem Architectural Specification
  • UI Standards


Indirect viewpoints represent stakeholders who won't use telemetry directly but still influence the requirements. These viewpoints typically provide business requirements. The following is a list of indirect viewpoints.

  • Technical Direction - John Hill (interviewed)
  • Instrument Science - Mark Wagner (interviewed)
  • Product Management - Norm Cushing and Joar Brynnel (interviewed)
  • Mountain Operations - John Little (interviewed)
  • Systems Engineering - Dave Ashby (interviewed)


Interactors represent users of telemetry. These viewpoints provide user and system interface requirements. The following is a list of telemetry interactors.

  • Diagnostician
    • Hardware Engineer - Dave Ashby (interviewed)
    • Instrument Commissioner - Olga Kuhn (interviewed)
    • Network and Systems Administrator - Alex Lovell-Troy (interviewed)
    • Software Engineer - Luca Fini, Michele De La Peña and Titus Purdin (interviewed)
  • Network Infrastructure - Alex Lovell-Troy (data)
  • Observer - David Thompson (interviewed)
  • TCS Subsystems
    • AOS - Luca Fini (data)
    • ECS - Michele De La Peña (data)
    • GCS - Torsten Leibold (data)
    • MCS - Tom Sargent (data)
    • MCSPU - Tom Sargent (data)
    • OSS - Paul Grenz (data)
    • PCS - Michele De La Peña (data)
    • PMC - Chris Biddick (data)
    • PSF - Chris Biddick (data)
  • Telescope Operator - Aaron Ceranski (interviewed)


My preliminary method of requirements discovery was through interviews. After I've completed all of the interviews, I'll observe all applicable users in action to determine if any requirements have been forgotten.


For each indirect viewpoint, I interviewed at least one person with that viewpoint. Because the opportunity presented itself, I also interviewed Olga Kuhn, David Thompson, Luca Fini and Aaron Ceranski. Some of these interviews were combined. The interview recordings are attached below.

Analysis Strategy

When deciding which requirements to develop first, we considered the following scheduling strategies.

  • Highest risk - The requirements representing the greatest risk are implemented first.
  • Most architecturally significant - The requirements which when implemented contribute the most to the final architecture are scheduled first.
  • Most general - The requirements which provide the most functionality are implemented first.
  • Most important - The requirements which are most important in the long term are implemented first.
  • Most urgent - The requirements which satisfy the most immediate needs of the stakeholders are implemented first.
  • Smallest impact - The requirements which when implemented affect any existing infrastructure the least are scheduled first.

Since high-risk requirements may fail to be satisfied and thus are more likely to be changed, the highest risk strategy pushes the requirements volatility to the beginning of the project. As the project progresses, the requirements stabilize relatively quickly.

When the architecture of the software is focused on first, the project can be partitioned into independent pieces more quickly. Thus the most architecturally significant strategy works well for big projects where the work needs to be spread over many people.

If the relationship between the stakeholders and the project team is not secure, quickly making the product functional helps demonstrate the team's competence, increasing the stakeholders' confidence in the project team. The most general strategy helps build up this confidence.

If the project is at risk of having its funding cut before completion but must still produce a viable product, the most important functionality needs to be implemented first. If the risk is realized, the product is more likely to please the stakeholders despite its lack of completion. If this is the situation, the most important strategy would be the best choice.

If the stakeholders have immediate, critical needs that the product would satisfy if it already existed, the most urgent strategy would be appropriate. This allows for the stakeholders to gain significant utility relatively quickly from early, immature releases of the product.

If the product is to be integrated into an existing critical system, early immature releases of the product have a significant risk of failure. The functionality of the early releases should be chosen to minimize the impact of product failure upon the entire system. Thus, in this scenario, the smallest impact strategy would be appropriate.

These strategies are not mutually exclusive. More than one may be combined when selecting which requirements to implement first. Also strategies can adjust over time. For instance, once the high risk requirements are implemented, the project could switch to satisfying the most architecturally significant requirements next.

We are our own stakeholders, so the most general strategy is not appropriate. We don't have any immediate, critical needs for telemetry since some critical telemetry data can already be collected. Thus the most urgent strategy is not appropriate either. The stakeholders recognize telemetry is important for the LBT to meet its science requirements, so it is unlikely project funding will be cut. Thus the most important strategy is probably not the best choice. The telemetry controller is being integrated into the existing LBT infrastructure, but a failure of the telemetry controller will not significantly impact the existing operation of the telescope. Thus the smallest impact strategy is not appropriate either. Only the highest risk and most architecturally significant strategies remain to be considered.

Telemetry will interact with many other software components. These interactions will require changes to existing code. Because of our code ownership culture, the telemetry project developers are unlikely to be the ones making the changes to the other software components; thus coordination will be necessary. This is an argument for the project to use the most architecturally significant strategy. If early on we focus on the architecture required to define an API and Application Binary Interface (ABI), the risk of coordination problems impacting the telemetry release is reduced.

Telemetry does have some high-risk requirements. Previous work on telemetry was unable to collect telemetry data at 4 kHz; collecting data at this rate is a requirement on the telemetry controller. We also have data throughput constraints. We have a dedicated 1 Gbit/s VLAN for use by telemetry. Anecdotal evidence suggests that the PCI bus is limited to 500 Mibit/s (Mi = mebi = 2^20) of throughput, with hard-drive throughput limited to 60 MiB/s. It is not certain we'll be able to guarantee collection of all of the telemetry data at its minimum usable rate all of the time and still supply a single 4 kHz diagnostic telemetry stream. Creating a relational database that can tolerate schema changes, provide sufficient data access for the live feed, and still be searchable when searches may cross the temporal schema transition boundaries may prove impossible with the currently allocated hardware.
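As a rough feasibility check of the budgets above, the numbers can be compared directly. The 64-byte sample size for the 4 kHz diagnostic stream is a placeholder assumption for illustration; the real record sizes are not yet specified.

```python
MI = 2 ** 20  # mebi

vlan_Bps = 1_000_000_000 / 8   # dedicated 1 Gbit/s VLAN (decimal gigabit)
pci_Bps = 500 * MI / 8         # anecdotal PCI bus limit, 500 Mibit/s
disk_Bps = 60 * MI             # hard-drive throughput limit, 60 MiB/s

# Hypothetical 4 kHz diagnostic stream with an assumed 64-byte sample.
stream_Bps = 4000 * 64         # 256,000 B/s

bottleneck = min(vlan_Bps, pci_Bps, disk_Bps)
print(bottleneck)              # the disk is the tightest budget: 62,914,560 B/s
print(stream_Bps / bottleneck) # a single stream uses under 0.5% of that budget
```

Under these assumptions the single diagnostic stream is cheap; the real risk is the aggregate of all sources at their minimum rates, which this sketch does not attempt to model.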

From these arguments, the most appropriate strategy for choosing the first requirements to satisfy is the highest risk strategy. Primarily, we analyze the high-risk requirements mentioned above. Secondarily, we analyze how the telemetry controller will communicate with the subsystems providing telemetry data. The results of the secondary analysis will yield the software interface requirements which will define the API and ABI.

Telemetry Data Definitions

Network and System

For each of the 25 computers, the following is collected:
  • Name: Network Data
  • Minimum generation rate: once per 5 minutes
  • Structure:
    • inbound tcp traffic count (unsigned long)
    • outbound tcp traffic count (unsigned long)
    • memory usage (float)
    • cpu usage (float)
    • cpu percent idle (float)
    • cpu percent in user space (float)
    • cpu percent in kernel space (float)
    • disk space in '/' (unsigned long)
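As a sketch of what one per-host record might look like on the wire, the fields above can be expressed with Python's `struct` module. The field order, packing, and type widths (64-bit counters, 32-bit floats) are assumptions for illustration, not the controller's actual ABI.

```python
import struct

# Hypothetical packed layout for one "Network Data" sample.
# '<' = little-endian, no padding; Q = unsigned 64-bit, f = 32-bit float.
NETWORK_DATA = struct.Struct(
    "<"
    "Q"   # inbound tcp traffic count
    "Q"   # outbound tcp traffic count
    "f"   # memory usage
    "f"   # cpu usage
    "f"   # cpu percent idle
    "f"   # cpu percent in user space
    "f"   # cpu percent in kernel space
    "Q"   # disk space in '/'
)

sample = NETWORK_DATA.pack(123456, 654321, 0.42, 0.10, 88.5, 7.5, 4.0,
                           52_000_000_000)
print(NETWORK_DATA.size)                 # 44 bytes per sample
print(NETWORK_DATA.unpack(sample)[0])    # 123456
```

At 44 bytes per host every 5 minutes across 25 computers, this stream is a negligible load against the throughput budgets discussed in the analysis strategy.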




  • Name: guiding image
  • Minimum generation rate: 1 Hz (This needs to be verified by David Thompson)
  • Structure:
    • FITS file containing 100x100 pixels with each pixel being 16 bits. This is about 22 KB
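The "about 22 KB" figure is consistent with FITS overhead: FITS files are organized in 2880-byte logical blocks, so the image occupies a header block plus the pixel data padded to a block boundary. A quick check, assuming a single header block:

```python
import math

FITS_BLOCK = 2880        # FITS standard logical record size, in bytes
width = height = 100
bytes_per_pixel = 2      # 16-bit pixels

raw = width * height * bytes_per_pixel        # 20,000 bytes of pixel data
data_blocks = math.ceil(raw / FITS_BLOCK)     # data padded to block boundary
total = (1 + data_blocks) * FITS_BLOCK        # one header block assumed
print(raw, total)                             # 20000 23040 (about 22.5 KiB)
```

At 1 Hz this is roughly 22.5 KiB/s per guider, again small compared to the disk budget.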


MCSPU_telemData.sxc: MCSPU telemetry data descriptions


Hexapod Telemetry Data (both sides, updated every half second)
  • brakereleased (true or false)
  • connected (is the subsystem connected to the UMAC)
  • error (current error code)
  • ishomed (has the hexapod been homed)
  • kinpos[0..5] <float*6> (the six axes kinematic positions)
  • legpos[0..5] <float*6> (the actual leg positions)
  • legstate[0..5] <int*6> (the state of the physical legs)
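The hexapod record can be sketched the same way. The field order and widths below (1-byte booleans, 32-bit ints and floats, no padding) are assumptions for illustration, not the UMAC's actual wire format.

```python
import struct

# Hypothetical packed layout for one hexapod telemetry sample (one side).
HEXAPOD = struct.Struct(
    "<"
    "?"    # brakereleased
    "?"    # connected (is the subsystem connected to the UMAC)
    "i"    # error (current error code)
    "?"    # ishomed (has the hexapod been homed)
    "6f"   # kinpos[0..5]  (the six axes kinematic positions)
    "6f"   # legpos[0..5]  (the actual leg positions)
    "6i"   # legstate[0..5] (the state of the physical legs)
)

# Both sides updated every half second -> 2 samples/s each.
print(HEXAPOD.size)   # 79 bytes per sample, so roughly 316 B/s total
```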


PCS_TELVariables.xls: PCS telemetry data definitions


PMC_Telemetry_Requirements.doc: PMC telemetry data descriptions


PSF_Telemetry_Requirements.doc: PSF telemetry data definitions



A source provides data to TEL. TEL will be running all of the time. Since sources may come and go, they will need to be able to register with the controller when they come on-line. Since the type of data a source sends may change from session to session, the source will need to describe its data to the controller each time it registers.
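The register-then-describe handshake might look like the following sketch. All names here (`TelemetryController`, `register_source`, the schema format of name/type pairs) are hypothetical, chosen only to illustrate the session-scoped registration described above.

```python
# Minimal sketch: sources register per session and describe their data
# each time, because a source's schema may change between sessions.
class TelemetryController:
    def __init__(self):
        self.sources = {}  # source name -> schema for the current session

    def register_source(self, name, schema):
        # Called each time a source comes on-line.
        self.sources[name] = dict(schema)

    def unregister_source(self, name):
        # Called when a source goes off-line; its schema is forgotten.
        self.sources.pop(name, None)

ctrl = TelemetryController()
ctrl.register_source("ECS", [("chiller_temp", "float"), ("pump_on", "bool")])
print(ctrl.sources["ECS"]["pump_on"])   # bool
```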


The monitor will operate like KSystemGuard.


This will have a web interface. The exporter will interact with the user in the following manner:
  • User -> Exporter: User enters time range of interest
  • User <- Exporter: Exporter presents all possible types of telemetry available during that time range
  • User -> Exporter: User selects one or more sources of interest
  • User <- Exporter: Exporter presents user with selection
  • User -> Exporter: User accepts selection
  • User -> Exporter: User selects export format
  • User -> Exporter: User indicates where to write data
  • User <- Exporter: Exporter writes data
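Server-side, that dialogue reduces to two queries over the archive: which telemetry types exist in the time range, and which matching records to write out. The sketch below illustrates this; the function names, the record format, and the "csv" export format are all hypothetical.

```python
# Hypothetical exporter back-end mirroring the user dialogue.
def available_types(archive, start, end):
    """Telemetry types with any data inside [start, end)."""
    return sorted({rec["type"] for rec in archive if start <= rec["t"] < end})

def export(archive, start, end, selected, fmt, write):
    """Write records of the selected types in the range via write()."""
    for rec in archive:
        if start <= rec["t"] < end and rec["type"] in selected:
            if fmt == "csv":
                write(f"{rec['t']},{rec['type']},{rec['value']}\n")
            else:
                write(repr(rec))

archive = [{"t": 1, "type": "ECS", "value": 3.5},
           {"t": 9, "type": "PCS", "value": 7}]
print(available_types(archive, 0, 10))   # ['ECS', 'PCS']
```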

-- TonyEdgin - 07 May 2007
Attachments:
  • AOS_adaptive_secondary_mirror_telemetry.doc (203 K, 19 Dec 2006) - Adaptive secondary mirror telemetry data definitions
  • Aaron_Ceranski_2006_12_01.wav (28 MB, 06 Dec 2006) - Interview of Aaron Ceranski
  • Alex_LovellTroy_2007_01_18.mp3 (5 MB, 23 Jan 2007) - Interview of Alex Lovell-Troy
  • ECS_TELVariables.xls (20 K, 11 Jan 2007) - ECS telemetry data definitions
  • Joar_Brynnel_2006_11_07.mp3 (8 MB, 10 Nov 2006) - Interview of Joar Brynnel and Dave Ashby
  • John_Hill_2006_11_01.wav (20 MB, 07 Nov 2006) - Interview of John Hill
  • John_Little_2006_11_13.mp3 (3 MB, 16 Nov 2006) - Interview of John Little
  • Luca_Fini_2006_09_29.mp3 (4 MB, 17 Nov 2006) - Interview of Luca Fini
  • MCSPU_telemData.sxc (6 K, 19 Dec 2006) - MCSPU telemetry data descriptions
  • Mark_Wagner_2006_11_03.mp3 (5 MB, 10 Nov 2006) - Interview of Mark Wagner, Dave Thompson and Olga Kuhn
  • Norm_Cushing_2006_10_12.mp3 (3 MB, 10 Nov 2006) - Interview of Norm Cushing
  • PCS_TELVariables.xls (19 K, 23 Jan 2007) - PCS telemetry data definitions
  • PMC_Telemetry_Requirements.doc (33 K, 19 Dec 2006) - PMC telemetry data descriptions
  • PSF_Telemetry_Requirements.doc (22 K, 19 Dec 2006) - PSF telemetry data definitions
  • Titus_Purdin_2007_01_31.mp3 (8 MB, 31 Jan 2007) - Interview of Titus Purdin and Michele De La Pena