
Using LabVIEW Real-Time to Control the World's Largest Telescope

As Featured on NI.com

Original Authors: Jason Spyromilio, European Southern Observatory

Edited by Cyth Systems

The Very Large Telescope (VLT) is currently operated by the European Southern Observatory in Cerro Paranal, Chile.

The Challenge

Using commercial off-the-shelf (COTS) solutions for high-performance computing (HPC) in active and adaptive optics real-time control in extremely large telescopes.


The Solution

Combining the NI LabVIEW graphical programming environment with multicore processors to develop a real-time control system and prove that COTS technology can control the optics in the European Extremely Large Telescope (E-ELT), which is currently in its design and prototyping phases.


The European Southern Observatory (ESO) is an astronomical research organization supported by 13 European countries. We have experience developing and deploying some of the world’s most advanced telescopes. Our organization currently operates at three sites in the Chilean Andes – the La Silla, Paranal, and Chajnantor observatories. We have consistently deployed highly innovative technology, from the first common-user adaptive optics systems at the 3.6 m telescope on La Silla to the deployment of active optics at La Silla’s 3.5 m New Technology Telescope (NTT) to the integrated operation of the Very Large Telescope (VLT) and the associated interferometer at Paranal. In addition, we are collaborating with our North American and East Asian partners in constructing the Atacama Large Millimeter Array (ALMA), a $1 billion (USD) 66-antenna submillimeter telescope scheduled for completion at the Llano de Chajnantor in 2012.


The next project on our design board is the E-ELT. The design for this telescope, with a 42 m diameter primary mirror, is in phase B and has received $100 million (USD) in funding for preliminary design and prototyping. Following phase B, operations are expected to begin in 2025.

Left: For a size comparison, two humans and a car stand next to the E-ELT. The M1 primary mirror, which is 42 m in diameter, features segmented mirror construction.

Right: The E-ELT features a total of five mirrors.


Grand-Scale Active and Adaptive Optics

The 42 m telescope draws on the ESO and astronomical community experience with active and adaptive optics and segmented mirrors. Active optics incorporates a combination of sensors, actuators, and a control system so that the telescope can maintain the correct mirror shape or collimation. We actively maintain the correct configuration for the telescope to reduce any residual aberrations in the optical design and increase efficiency and fault tolerance. These telescopes require active optics corrections every minute of the night, so the images are limited only by atmospheric effects.

Adaptive optics uses a similar methodology to monitor atmospheric effects at frequencies of hundreds of hertz and corrects for them by suitably deforming a thin mirror. The turbulence scale length determines the number of actuators on these deformable mirrors. The wavefront sensors must run fast enough to sample the atmosphere and transform any measured aberrations into mirror commands, which requires extremely fast and precise hardware and software.

Controlling such a complex system requires an extreme amount of processing capability. For systems deployed in the past, we developed proprietary control systems based on VMEbus real-time hardware, which were expensive and time-consuming to build and maintain. We are working with NI engineers to benchmark the control system for the E-ELT primary segmented mirror, called M1, using COTS software and hardware. Together we are also exploring possible COTS-based solutions for real-time control of the telescope’s adaptive mirror, called M4.

M1 is a segmented mirror consisting of 984 hexagonal segments (Figure 1), each weighing nearly 330 lb and measuring between 1.5 and 2 m across, for a total diameter of 42 m. In comparison, the primary mirror of the Hubble Space Telescope is 2.4 m in diameter. The primary mirror of the E-ELT alone will be roughly four times the diameter of any optical telescope on Earth, and the telescope incorporates five mirrors in total (Figure 2).


Defining the Extreme Computational Requirements of the Control System

In M1 operation, adjacent mirror segments may tilt with respect to one another. We monitor this deviation using edge sensors, and each segment rests on actuator legs that can realign it in three degrees of freedom when needed. Across the 984 mirror segments, this amounts to 3,000 actuators and 6,000 sensors (Figure 3).

The system, controlled by LabVIEW software, must read the sensors to determine the mirror segment locations and, if the segments move, use the actuators to realign them. LabVIEW computes the product of a 3,000 by 6,000 matrix with a 6,000-element vector and must complete this computation up to 1,000 times per second to produce effective mirror adjustments.
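The scale of this per-loop computation can be sketched in a few lines of NumPy. This is an illustration only, not the actual LabVIEW code; the matrix contents are random placeholders.

```python
import numpy as np

# Illustrative sketch of the per-loop M1 computation described above:
# a 3,000 x 6,000 control matrix applied to a 6,000-element sensor vector,
# yielding 3,000 actuator commands. Values are random placeholders.
n_actuators, n_sensors = 3_000, 6_000
control_matrix = np.random.rand(n_actuators, n_sensors).astype(np.float32)
sensor_values = np.random.rand(n_sensors).astype(np.float32)

# One control-loop iteration; the real system must finish this up to
# 1,000 times per second, i.e. within a 1 ms budget.
actuator_commands = control_matrix @ sensor_values
```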

Sensors and actuators also control the M4 adaptive mirror. However, M4 is a thin deformable mirror – 2.5 m in diameter and spread over 8,000 actuators (Figure 4). This problem is similar to the M1 active control, but instead of retaining the shape, we must adapt the shape based on measured wavefront image data. The wavefront data maps to a 14,000-value vector, and we must update the 8,000 actuators every few milliseconds, creating a matrix-vector multiply of an 8 k by 14 k control matrix with a 14 k vector. Rounding the problem up to 9 k by 15 k, this requires about 15 times the computation of the large segmented M1 control loop.
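As a rough sanity check on that scaling claim, the per-iteration multiply-accumulate counts of the two problems can be compared directly; the difference between the 7.5x per-iteration ratio and the ~15x figure quoted above presumably reflects the differing update rates and system overheads.

```python
# Back-of-the-envelope comparison of the per-iteration work in the two
# matrix-vector products described above.
m1_ops = 3_000 * 6_000      # M1: 18 million multiply-accumulates per loop
m4_ops = 9_000 * 15_000     # M4, rounded up as in the text: 135 million
print(m4_ops / m1_ops)      # 7.5x per iteration, before accounting for rates
```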

Left: A thin, flexible mirror spread across 8,000 actuators, the M4 can be deformed every few milliseconds to compensate for atmospheric interference.

Right: LabVIEW software controls the M1 system, composed of 984 segments of roughly 1.5 m each, with six sensors and three actuator legs providing three degrees of freedom of movement.


NI engineers are simulating the layout and designing the high-channel-count data acquisition, synchronization system, and control loop. At the heart of all these operations is a very large LabVIEW matrix-vector function that executes the bulk of the computation. M1 and M4 control requires enormous computational ability, which we approached with multiple multicore systems. Because the M4 control problem decomposes into roughly 15 submatrix problems, each comparable to the 3 k M1 computation, we require 15 machines, each containing as many cores as possible. Therefore, the control system must exploit multicore processing, a capability that LabVIEW offers using COTS solutions, making it a very attractive proposition for this problem.
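The row-block decomposition implied here can be sketched briefly. This illustrates only the partitioning idea, with dimensions scaled down 10x from the real 8 k by 14 k M4 problem; it is not the deployed LabVIEW implementation.

```python
import numpy as np

# Each machine (or core) computes a contiguous block of output rows against
# the full input vector; concatenating the partial results reproduces the
# full matrix-vector product.
def partitioned_matvec(matrix, vector, n_workers):
    row_blocks = np.array_split(matrix, n_workers, axis=0)
    # In the real system each block would run on its own machine; here we
    # simply compute the blocks sequentially.
    return np.concatenate([block @ vector for block in row_blocks])

A = np.random.rand(800, 1_400).astype(np.float32)  # scaled-down M4 matrix
x = np.random.rand(1_400).astype(np.float32)       # scaled-down wavefront vector
y = partitioned_matvec(A, x, n_workers=15)
```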


Addressing the Problem with LabVIEW in Multicore HPC Functionality

Because the control system had to be engineered before the E-ELT’s construction began, its configuration could influence the construction characteristics of the telescope. It was critical to thoroughly test the solution as if it were running the actual telescope. To meet this challenge, NI engineers implemented not only the control system but also a system that runs a real-time simulation of the M1 mirror to perform a hardware-in-the-loop (HIL) control system test. HIL is a testing method commonly used in automotive and aerospace control design to validate a controller against an accurate, real-time system simulator. NI engineers created an M1 mirror simulator that responds to the control system outputs and validates its performance. The NI team developed the control system and mirror simulation using LabVIEW and deployed them to multicore PCs running the LabVIEW Real-Time Module for deterministic execution.

In real-time HPC applications like this one, communication and computation tasks are closely related: failures in the communication system result in whole-system failures. Therefore, the application development process must consider the interplay between communication and computation from the start. NI engineers needed fast, deterministic data exchange at the core of the system and quickly determined that the application could not rely on standard Ethernet for communication, because the underlying network protocol is nondeterministic. Instead, they used the LabVIEW Real-Time Module time-triggered network feature to exchange data between the control system and the M1 mirror simulator, resulting in a network that moves 36 MB/s deterministically.
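The quoted 36 MB/s is consistent with exchanging the full sensor and actuator vectors in single precision at the 1 kHz loop rate; the 4-byte value size below is an assumption, not stated in the original.

```python
# Rough reconstruction of the 36 MB/s figure: 6,000 sensor values plus
# 3,000 actuator values per iteration, 4 bytes each (single precision,
# assumed), at a 1,000 Hz loop rate.
values_per_loop = 6_000 + 3_000
bytes_per_value = 4
loop_rate_hz = 1_000
throughput_mb_s = values_per_loop * bytes_per_value * loop_rate_hz / 1e6
print(throughput_mb_s)  # 36.0
```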

NI developed the full M1 solution around two Dell Precision T7400 workstations, each with eight cores, and a notebook that provides the operator interface. It also includes two networks – a standard network that connects both real-time targets to the notebook, and a 1 Gbit/s time-triggered Ethernet network between the real-time targets for exchanging I/O data (Figure 5).

During each loop, the controller receives 6,000 sensor values, executes the control algorithm to align the segments, and outputs 3,000 actuator values. The NI team built the control system to achieve these results and also produced a real-time simulation of the telescope in operation, called “the mirror.” The mirror receives the 3,000 actuator outputs, adds a variable representing atmospheric disturbances such as wind, executes the mirror algorithm to simulate M1, and outputs 6,000 sensor values to complete the loop. The entire control loop must complete in less than 1 ms to adequately control the mirror (Figure 6).
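The controller/mirror loop described above can be sketched as two alternating matrix-vector stages. Everything here – the matrices, their scaling, and the wind term – is a placeholder for illustration, not the NI algorithm.

```python
import numpy as np

# Hardware-in-the-loop structure: the controller turns 6,000 sensor values
# into 3,000 actuator commands; "the mirror" simulator turns those commands
# (plus a wind-like disturbance) back into 6,000 sensor values.
n_act, n_sens = 3_000, 6_000
control_matrix = (np.random.rand(n_act, n_sens).astype(np.float32) - 0.5) * 1e-4
mirror_matrix = (np.random.rand(n_sens, n_act).astype(np.float32) - 0.5) * 1e-4

sensors = np.zeros(n_sens, dtype=np.float32)
for _ in range(10):  # each pass represents one sub-millisecond control loop
    actuators = control_matrix @ sensors          # controller step
    wind = np.random.randn(n_sens).astype(np.float32) * 0.01
    sensors = mirror_matrix @ actuators + wind    # mirror simulation step
```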

The benchmarks NI engineers established for their matrix-vector multiplications include the following:

  • LabVIEW Real-Time Module on a machine with two quad-core processors, using four cores at single precision: 0.7 ms

  • LabVIEW Real-Time Module on the same machine, using all eight cores at single precision: 0.5 ms

The M4 compensates for measured atmospheric wavefront aberrations, and NI engineers determined the problem could only be solved with a state-of-the-art multicore blade system. Dell invited the team to test the solution on its M1000, a 16-blade system (Figure 7), and the results were encouraging. Each M1000 blade features eight cores, so the engineers distributed the LabVIEW control problem across 128 cores.

NI engineers proved that we can, in fact, use LabVIEW and the LabVIEW Real-Time Module to implement a COTS-based solution and control multicore computation for real-time results. Because of this performance breakthrough, our team continues to set benchmarks for both computer science and astronomy in E-ELT implementation, which will further scientific advancements in space and atmospheric observation.

Left: NI engineers validated the mirror control system (right) with the M1 mirror HIL simulation (left).

Right: To achieve required loop rates, NI engineers set up a highly deterministic network and called it from an application using timed sequences and timed loops.

Bottom: This illustrates the current NI approach to implement M4. The problem is approximately 15 times more demanding than the M1 controller.

