IRPN - Image Recognition and Processing for Navigation

Duration: 
01/2015 - 05/2018 
Project leader: 
Prof. Dr. techn. K. Janschek 
Staff: 
Dr.-Ing. F. Schnitzer, Dipl.-Ing. A. Sonnenburg, Dr.-Ing. S. Dyblenko, Dipl.-Inf. (FH) Mario Herhold, Dipl.-Inf. (FH) Christian Beissert 
Funding: 
ESA/ESTEC 
Cooperation: 
Airbus Defence and Space GmbH (Bremen, Germany), Airbus Defence and Space GmbH (Friedrichshafen, Germany), Astos Solutions GmbH (Stuttgart, Germany), Jena-Optronik GmbH (Jena, Germany) 
Publications: 
F. Schnitzer, A. Sonnenburg, K. Janschek, and M. Sanchez Gestido, "Lessons-learned from on-ground testing of image-based non-cooperative rendezvous navigation with visible-spectrum and thermal infrared cameras," in 10th International ESA Conference on Guidance, Navigation & Control Systems - ESA GNC 2017, Salzburg, Austria, May/Jun. 2017.
A. Sonnenburg, "Image Recognition and Processing for Navigation (IRPN)," presentation at ESA Clean Space Industrial Days, May 2016.

Due to the increasing amount of space debris and defunct satellites in Earth orbit, On-Orbit Servicing has moved more and more into the focus of research in recent years. The various challenges of spacecraft (S/C) operations have led to several strategies and projects. However, the majority of the demonstrated On-Orbit Servicing experiments aimed at rendezvous with cooperative targets.

ESA's Clean Space initiative aims at Active Debris Removal (ADR) for de-orbiting defunct satellites or pieces of space junk. In such a rendezvous mission the target is uncooperative. In order to perform the approach sequence, several constraints have to be taken into account, especially in low Earth orbit (LEO), where the priority debris for removal is located (mostly in sun-synchronous orbits). Vision-based relative navigation is a key technology for these missions.

At the Institute of Automation at TU Dresden (TUD), vision-based relative navigation with uncooperative targets has been studied in the past, including the estimation of the target object's pose and motion and the 3D reconstruction of the shape of an unknown target.

Thus, in the context of the Clean Space initiative, TUD was prime contractor of the activity Image Recognition and Processing for Navigation (IRPN). The developed system estimates the relative pose between the chaser S/C and the uncooperative target satellite ENVISAT from complementary vision-based sensors (cameras in the visible spectrum and the thermal infrared spectrum) and light detection and ranging (LIDAR), using image processing techniques and an estimation filter.


Fig. 1: Illustration of a chaser spacecraft approaching the satellite ENVISAT

The conditions (target with reflective surface materials, ambiguities of geometric features, rapidly changing illumination due to target rotation/tumbling) result in very demanding requirements for navigation in the ADR scenario, in particular for the image processing. Thus, the verification and validation scenarios for testing these algorithms have to consider the aforementioned effects to guarantee the algorithms' performance and robustness. As a consequence, several test environments with several test scenarios have been developed and set up in IRPN.

PROJECT AND SYSTEM OVERVIEW

The Image Recognition and Processing for Navigation (IRPN) activity consisted of the development and testing of a distributed vision-based navigation system for relative navigation between spacecraft by means of different sensors. In the reference mission a chaser S/C approaches ENVISAT in LEO from a distance of 100 m down to 2 m. ENVISAT was assumed to be out of control, performing a tumbling motion, and to be uncooperative in the sense of being uncontrolled and not prepared for any kind of rendezvous technique.

The main functional parts of the system developed in IRPN are shown in Fig. 2: The system estimates the relative pose between the S/Cs using different complementary vision-based sensors (cameras in the visible spectrum (VIS) and infrared spectrum (IR)) and light detection and ranging (LIDAR), sensor-specific processing algorithms (image recognition and processing, IRP) and a navigation function (NAV). The measurements from the different sensors are first processed by the IRPs; these estimates are then merged and filtered in a relative navigation filter within the navigation function. The main computing rate is 10 Hz, except for the LIDAR part of the system, which has a computing interval of 0.3 s.
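
This multi-rate structure can be illustrated with a small scheduling sketch (Python, illustrative only: irp_stub and nav_update are trivial stand-ins, not the project code): the camera IRPs deliver a relative pose estimate at every 0.1 s step, the LIDAR IRP only at every third step, and the navigation function fuses whatever measurements are available.

    import numpy as np

    DT = 0.1           # main computing rate: 10 Hz
    LIDAR_EVERY = 3    # LIDAR computing interval: 0.3 s = 3 main steps

    def irp_stub(sensor_data):
        # Stand-in for an IRP: would process the sensor data and return a
        # 6-DOF relative pose measurement; here it just returns a dummy pose.
        return np.zeros(6)

    def nav_update(state, measurements, dt):
        # Stand-in for the navigation function: averages the available poses.
        if measurements:
            state = np.mean([pose for _, pose in measurements], axis=0)
        return state

    def run_step(k, vis_image, ir_image, lidar_cloud, state):
        meas = [("VIS", irp_stub(vis_image)), ("IR", irp_stub(ir_image))]
        if k % LIDAR_EVERY == 0:   # a LIDAR result is only available every 0.3 s
            meas.append(("LIDAR", irp_stub(lidar_cloud)))
        return nav_update(state, meas, DT)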


Fig. 2: In the IRPN project algorithms are developed to process camera images (visible spectrum and thermal infrared) and/or LIDAR point clouds to determine the relative pose of the uncooperative satellite ENVISAT.

The activity was performed by the following contractors: The Institute of Automation at Technische Universität Dresden was prime contractor, responsible for the IRP algorithms for VIS and IR, for the navigation function and for the test campaigns. Airbus Defence & Space (Bremen) contributed the IRP algorithms for processing the LIDAR data. For testing the algorithms with synthetic image data, the Camera Simulator from Astos Solutions GmbH was used. Jena-Optronik GmbH provided the software component for simulating the realistic behaviour and errors of the LIDAR sensor. The sensor data was generated according to trajectory data that had been simulated by Airbus Defence & Space (Friedrichshafen).


Fig. 3: System concept, consortium and allocation of responsibilities in the IRPN project.

All IRPs and the overall IRPN system have been tested in different test environments (model-in-the-loop (MIL), processor-in-the-loop (PIL)), both with synthetic (rendered) images and with real images (generated on-ground in a rendezvous simulator).

ALGORITHMS FOR IMAGE RECOGNITION AND PROCESSING FOR NAVIGATION

The IRPN algorithms can be divided into four main parts: three IRPs (VIS, IR and LIDAR) and NAV. Each IRP directly determines the relative state between sensor and target and consists of two stages: an initialization stage and a tracking stage.

For initialization, the camera IRPs (VIS and IR) use a dense stereo reconstruction to determine a coarse initial relative pose estimate by comparing disparity-based hypotheses for their congruence with the image data. For tracking, they use this initial pose to further improve the accuracy and to track the relative movement over time, utilizing the a priori known 3D model of the target: the model is fitted to the image by matching projected model lines to edges detected in the image and minimizing the distances between them.
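
The core idea of this tracking stage can be sketched as a small least-squares pose refinement (Python, under simplifying assumptions: a pinhole camera with an assumed focal length and known point correspondences instead of line-to-edge matching; not the project implementation):

    import numpy as np
    from scipy.optimize import least_squares

    F = 800.0  # assumed focal length in pixels (illustrative pinhole camera)

    def rot(rvec):
        # Rodrigues formula: axis-angle vector -> rotation matrix.
        theta = np.linalg.norm(rvec)
        if theta < 1e-12:
            return np.eye(3)
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def project(pts, pose):
        # Project 3D model points for pose = (rotation vector, translation).
        pc = pts @ rot(pose[:3]).T + pose[3:]
        return F * pc[:, :2] / pc[:, 2:3]

    def residuals(pose, model_pts, edge_pts):
        # 2D distances between projected model edges and observed image edges.
        return (project(model_pts, pose) - edge_pts).ravel()

    # Toy example: points on two perpendicular model edges, observed after a
    # small, unknown relative motion that the refinement has to recover.
    edge_a = np.stack([np.linspace(0, 2, 20), np.zeros(20), np.full(20, 10.0)], 1)
    edge_b = np.stack([np.zeros(20), np.linspace(0, 2, 20), np.full(20, 10.0)], 1)
    model = np.vstack([edge_a, edge_b])
    true_pose = np.array([0.02, -0.01, 0.03, 0.05, 0.02, 0.2])
    observed = project(model, true_pose)
    fit = least_squares(residuals, np.zeros(6), args=(model, observed))
    print(fit.x)  # converges towards true_pose from the coarse initial guess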


Fig. 4: Image-based pose estimation using line tracking algorithms to determine the relative pose and the uncertainty of this estimate for the known target.

An additional measurement is derived from the image data by determining Optical Flow fields, which characterize the motion of the target satellite within the field of view of the camera. In the IRPN activity a simulation model of the Optical Correlator (OCOR) was designated for this task. The Optical Flow vectors are used to improve the expected relative pose before the actual pose tracking is performed. The LIDAR-based pose estimation is designed around a smart scanning LIDAR which is able to control its field of view and its laser power level. For initialization, the LIDAR IRP only assumes that the target is in the field of view and computes a coarse estimate of the relative pose. Its tracking algorithm performs an iterative matching scheme between the point cloud and the known geometry of the target, providing a very robust and lighting-independent relative pose estimate.
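
The iterative matching used for LIDAR tracking can be illustrated with a basic iterative closest point (ICP) alignment against the known target geometry (Python, illustrative only: point-to-point ICP with the model sampled as a point set; the project's LIDAR IRP is more elaborate):

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        # Least-squares rotation/translation mapping src onto dst (Kabsch).
        cs, cd = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(cloud, model, iters=30):
        # Align the measured point cloud to the known model point set.
        tree = cKDTree(model)
        R_tot, t_tot = np.eye(3), np.zeros(3)
        for _ in range(iters):
            _, idx = tree.query(cloud)            # closest model point per sample
            R, t = best_rigid_transform(cloud, model[idx])
            cloud = cloud @ R.T + t
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot                        # relative pose estimate

    # Toy usage: model = sampled target surface; cloud = rotated/shifted copy.
    rng = np.random.default_rng(0)
    model = rng.uniform(-1, 1, (500, 3))
    a = 0.1
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    cloud = model @ Rz.T + np.array([0.05, -0.02, 0.1])
    R_est, t_est = icp(cloud, model)
    print(R_est, t_est)   # approximately inverts the applied motion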


Fig. 5: Determination of Optical Flow fields (see green lines in the right figure) with TUD's optical correlator hardware (OCOR).

The Navigation Function performs fusion and filtering of all available information (including local sensor-level measurements) by means of an Extended Kalman Filter (EKF). In particular, it provides estimates of all chaser-target spacecraft relative states with low latency, together with the corresponding estimated uncertainty. Besides the relative-navigation-specific sensor measurements (camera/LIDAR pose estimates), information from the chaser AOCS (chaser position/attitude, acceleration) and the known (relative) S/C dynamics are also considered by the filter algorithm for better robustness and performance. Furthermore, NAV is designed to be able to deal with different delays of the sensor measurements.
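
The fusion principle can be sketched with a reduced, linear Kalman filter (Python; a minimal sketch assuming a constant-velocity translational model and direct relative-position measurements with assumed noise levels, whereas the actual NAV filter is an EKF over the full relative state that also uses AOCS data and handles measurement delays):

    import numpy as np

    DT = 0.1                                        # 10 Hz filter rate
    F = np.block([[np.eye(3), DT * np.eye(3)],      # state: [rel. position, rel. velocity]
                  [np.zeros((3, 3)), np.eye(3)]])
    Q = 1e-4 * np.eye(6)                            # assumed process noise
    H = np.hstack([np.eye(3), np.zeros((3, 3))])    # IRPs measure relative position

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, R):
        # Fuse one relative-position measurement with covariance R.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        return x, P

    # Toy run: predict, fuse a camera-based pose at every 10 Hz step and, every
    # third step, an additional (assumed more accurate) LIDAR-based pose.
    x, P = np.zeros(6), np.eye(6)
    for k in range(30):
        x, P = predict(x, P)
        x, P = update(x, P, z=np.array([50.0, 0.0, 0.0]), R=0.05 * np.eye(3))
        if k % 3 == 0:
            x, P = update(x, P, z=np.array([50.0, 0.0, 0.0]), R=0.01 * np.eye(3))
    print(x[:3], np.sqrt(np.diag(P)[:3]))   # estimate and 1-sigma uncertainty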


Fig. 6: Navigation Function for estimating relative position, velocity, attitude and attitude rate of satellite ENVISAT

ALGORITHM VERIFICATION AND VALIDATION

The test campaigns for verification and validation of the algorithms have been performed in three test environments: model-in-the-loop (MIL) tests with a MATLAB Simulink facility and synthetic image data, processor-in-the-loop (PIL) tests on an Extended dSPACE System with synthetic image data, and tests using real on-ground image data (rendezvous spacecraft simulator). All images, simulated LIDAR measurements, AOCS data and Optical Flow data were generated offline, based on representative reference trajectories. Using the offline-generated input data, the test campaigns could be performed in real time (or faster).

MIL Tests

A MATLAB Simulink model has been developed in order to validate the IRPN algorithms. It contains the IRP algorithms and the navigation function as well as the different interface blocks for reading the data from hard disk and saving the test results to files. The step time of the Simulink model was set to 0.1 s; on a Core i7-4790K CPU (4 × 4.0 GHz) the algorithms performed faster than real time.


Fig. 7: The IRPN MATLAB Simulink MIL model for processing the image and LIDAR data.

For Monte Carlo experiments in the MIL facility, synthetic image and LIDAR data were used based on trajectories from different approach scenarios. Each trajectory has a length of 4000 s, comprising 40,000 image sets each for VIS and IR as well as 13,334 LIDAR data sets.


Fig. 8: VIS (left) and IR (right) synthetic images from trajectories of different scenarios (top/bottom).

PIL Tests

In the PIL facility, automatic code generation and compilation for an external hardware target as well as the possibilities for parallelization have been demonstrated. This showed the general real-time capability of the IRPN system.


Fig. 9: Architecture overview for the PIL facility. The IRPN algorithms are implemented on different boards of the dSPACE System. Gateway PCs provide the LIDAR and image data for each time step. A Control PC starts the system and logs the result data.

In general, the PIL tests showed the same results as the MIL tests. The LIDAR IRP allowed robust initialization at a distance of 100 m. As expected, LIDAR IRP tracking was the most robust due to its independence from external illumination. Combining the LIDAR IRP with the VIS and/or IR IRP algorithms improved the quality of the estimates, especially after the synchronization with the target rotation.


Fig. 11: Mean absolute relative position error over all experiments of two scenarios (x, y and z coordinates of the target body frame w.r.t. the chaser body frame; in m) and standard deviation (3σ envelope). At t ≈ 800 s the chaser starts to move to the target's rotation axis; at t ≈ 1600 s the chaser starts to synchronize its rotation with the target rotation; after t ≈ 1750 s the chaser moves towards the target in a direct approach. There are hold points during the approach at t ≈ 0 ... 500 s (100 m), t ≈ 700 ... 1200 s (50 m), t ≈ 2000 ... 2400 s (11 m) and t ≈ 2600 ... 4000 s (2 m). The estimation errors increase when starting the transfer to the rotation axis and decrease when synchronizing with the target's rotation. At a distance of approx. 10 m to 15 m (t ≈ 1800 ... 2500 s) the highest accuracy is reached. After this, the target is only partially visible to the sensors and hence the estimation errors increase slightly.


Fig. 12: Results of a simulation experiment (model data (red, based on the estimated pose) overlaid on the IR image data).

Generally, the highest accuracy is reached when the target is at a distance of approx. 10 m to 15 m, when the LIDAR scan pattern covers most of the target and most of the edges of ENVISAT are visible in the camera images. In these parts of the trajectory the position error was less than 1-3 cm and the attitude error was less than 0.3°.

Tests with Images Generated with On-ground S/C Rendezvous Simulator

In a final stage of the activity, real image sensors (VIS cameras and IR cameras, no LIDAR) were used to generate real image data for testing the IRP algorithms. The cameras were attached to TUD's Spacecraft Rendezvous Simulator MiPOS. The camera motion was based on the same trajectories as used in the MIL and PIL tests. During this image acquisition phase the image data were saved to hard disk and only processed offline afterwards.

The results of the tests with the real camera images differed clearly from the results of the MIL and PIL tests. This was caused mainly by the edges of the silhouette and of other features, which were well recognizable in the synthetic image data but faded in the real image data. Furthermore, in the real image data there were larger differences between the mock-ups and the a priori known model used for the pose tracking. Additionally, the barrel distortion of the IR images turned out to be much larger than expected. The differences in the test results showed that tests with real image data are necessary and that one should not rely on synthetic image data only.


Fig. 13: VIS (left) and IR (right) on-ground rendezvous simulator images for mid-range/close-range (top/bottom).

IMAGE GENERATION WITH ON-GROUND S/C RENDEZVOUS SIMULATOR

For the generation of on-ground image data, the Spacecraft Rendezvous Simulator MiPOS (Mini Proximity Operation Simulator) was used. Two cameras for VIS and two cameras for IR were mounted to the 6-DOF portal robot and moved step-by-step along an approach trajectory facing a mock-up of ENVISAT. At every step the four images were saved to hard disk. Due to the limited workspace of MiPOS and the resulting limitations in the cameras' depth of field, two down-scaled mock-ups of ENVISAT had to be used: one with scale 1:25 for mid-range (simulated target distance from 50 m to 11 m) and another with scale 1:5 for close-range (simulated target distance from 5 m to 2 m).
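
As a back-of-the-envelope check (Python; assuming simple geometric scaling, i.e. a simulated distance d with a 1:s mock-up corresponds to a physical camera-to-mock-up distance of d/s), the physical distances required inside the MiPOS workspace are:

    def physical_distance(simulated_m, scale):
        # Simple geometric scaling assumption: physical distance = simulated / scale.
        return simulated_m / scale

    for scale, (far, near) in {25: (50.0, 11.0), 5: (5.0, 2.0)}.items():
        print(f"1:{scale} mock-up: {physical_distance(far, scale):.2f} m "
              f"to {physical_distance(near, scale):.2f} m physical distance")
    # -> 1:25 mock-up: 2.00 m to 0.44 m; 1:5 mock-up: 1.00 m to 0.40 m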


Fig. 14: Spacecraft Rendezvous Simulator MiPOS used for the HIL test image acquisition.

Due to the usage of thermal infrared cameras, the mock-ups must not only have a visual signature but also a thermal signature. To generate the thermal signatures, we used the effects that different materials heat up with different gradients and that bare metal parts reflect the temperature of the surroundings, which is colder than the heated mock-up.

For the mid-range simulations, a detailed model of ENVISAT was provided by ESA. It has real MLI foil on its surface, and the sensor/payload models are made of metal. However, the internal structure of the real satellite is not modelled. Halogen spotlights were used for illumination and heating during image generation.

For the close-range simulations, the mock-up was developed at TU Dresden. It comprises only those features of ENVISAT that are important at this distance/scale and mainly consists of two large aluminium plates representing the front side of the Payload Module and the ASAR antenna. Stickers on the front, which are up-scaled photographs of the surfaces of the 1:25 scale mock-up, add visible details to the model; the radiator plates were cut out of the stickers (bare metal surface). For heating the stickers, resistors are mounted on the rear of the plates. A halogen floodlight was used for the illumination.


Fig. 15: ENVISAT mock-up with 1:25 scale (left) for the mid-range HIL tests and ENVISAT mock-up with 1:5 scale (right) for the close-range HIL tests.

ACTIVITY EXTENSION FOR REAL-TIME ON-BOARD PROCESSING

Looking forward to future applications of algorithms for spacecraft rendezvous, the project extension RTOP (Real-Time On-board Processing) was performed by TU Dresden and Jena-Optronik GmbH. This extension focused on real-time prototyping and testing of selected algorithms for processing infrared images and LIDAR data on representative, commercially available processing hardware: a representative flight computer and a representative FPGA. The algorithms were selected taking into account their possible usage for real-time on-board image processing in the context of Active Debris Removal scenarios. Another objective of the extension was a preliminary allocation of the IRPN algorithms to a (representative) hardware architecture. This allocation was iterated over several system configurations, starting from a single LEON4-N2X CPU (Cobham Gaisler AB) and ending with a system consisting of multiple Freescale P4080 CPUs (NXP Semiconductors) and radiation-hardened Virtex-5QV FPGAs (Xilinx, Inc.).

The second part of the extension was aimed at the implementation and testing of selected basic LIDAR algorithms and infrared image recognition and processing (IR-IRP) algorithms in a PIL facility consisting of one LEON4-N2X CPU and one Virtex-4 XC4VLX200 FPGA. The PIL facility was built on the basis of the RASTA development platform provided by ESA as Customer Furnished Item (CFI). The LIDAR algorithms were provided, implemented and tested by Jena-Optronik GmbH using symmetric multiprocessing on the LEON4-N2X. FPGA implementations and tests of selected IR-IRP algorithms were accomplished by TU Dresden using a High-Level Synthesis approach. One IR-IRP algorithm was also implemented on the LEON4 by Jena-Optronik GmbH for comparison, and it was found that the FPGA implementation can run about 100 times faster than a CPU of comparable clock frequency. The FPGA implementations can also be ported to the radiation-hardened Virtex-5QV FPGA.


Fig. 16: RASTA Development Platform (ESA CFI).

ACKNOWLEDGMENTS

The activity Image Recognition and Processing for Navigation was funded by ESA/ESTEC. The Institute of Automation at Technische Universität Dresden was the prime contractor. The project consortium furthermore comprised Airbus Defence & Space (Friedrichshafen and Bremen), Astos Solutions GmbH and Jena-Optronik GmbH. The views expressed in this publication can in no way be taken to reflect the official opinion of the European Space Agency.
