English | Ελληνικά
Aircraft Recognition and Tracking Device Project
Algorithms
A series of filters was developed to achieve aircraft reconnaissance and surveillance. As mentioned before, the system is divided into two subsystems in order to improve performance so that it can achieve real-time operation.
A. Target recognition algorithm

The first subsystem carries out the image-processing part. The first step in this section is to receive, process and save images (CreateImages()); a pre-processing step has taken place in advance to prepare the system's movement data that will be needed afterwards (SystemMovement()). The next step is to activate the Motion Detection Filter (MDF) (BGShiftThreshold()). The camera's movement data received from the motor subsystem are fed into this filter (the data packets have been checked beforehand and any errors corrected), along with all the available images. The filter implements a modification of the SAD (Sum of Absolute Differences) algorithm, which is presented below:

Equation 1:

    SAD(Δx, Δy) = Σ_(i, j) | I_t(i, j) − I_(t−1)(i + Δx, j + Δy) |

where I_t and I_(t−1) are the current and previous frames and (Δx, Δy) is the background shift derived from the camera's movement data.
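As a rough illustration of the filter's core, the sketch below applies this shift-compensated SAD test to two grayscale frames. The function name, threshold and NumPy formulation are our assumptions, not the project's actual code.

```python
import numpy as np

def bg_shift_sad(frame_prev, frame_curr, dx, dy, pixel_thresh=25):
    """Hypothetical sketch of the BGShiftThreshold() idea: compensate the
    camera's own motion, then apply an absolute-difference test."""
    # Align the static background: shift the previous frame by the camera
    # motion (dx, dy) predicted from the motor subsystem's data.
    # (np.roll wraps around at the borders; a real implementation would
    # crop or pad instead.)
    shifted = np.roll(frame_prev, shift=(dy, dx), axis=(0, 1))
    diff = np.abs(frame_curr.astype(np.int16) - shifted.astype(np.int16))
    sad = int(diff.sum())            # total SAD, used to trigger detection
    motion_mask = (diff > pixel_thresh).astype(np.uint8) * 255
    return sad, motion_mask
```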
Then, if the motion detector triggers the recognition mechanism, a series of filters is activated to ensure that the moving object is either qualified as a valid target or rejected as noise. The first of these filters examines the image's morphological elements (TargetPassFilter()). It accepts as input a mask image indicating the areas in which motion was detected. The filter first processes these areas with the "proper closing" method, which consists of "opening" and "closing", themselves built from "dilation" and "erosion". At this point the "hole filling" process steps in to ensure fully closed areas. The filling process is followed by "image reconstruction by erosion", which removes from the mask any objects that do not meet the necessary size and circularity criteria, according to Heywood's circularity factor.
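A minimal sketch of this stage, assuming OpenCV and SciPy with illustrative thresholds, could look as follows; plain contour filtering stands in here for the "reconstruction by erosion" step.

```python
import cv2
import numpy as np
from scipy import ndimage

def target_pass_filter(mask, min_area=20, max_heywood=2.5):
    """Hypothetical sketch of TargetPassFilter(): proper closing, hole
    filling, then removal of blobs failing size/circularity criteria."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # "Proper closing": closing then opening, built from dilation/erosion.
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
    filled = ndimage.binary_fill_holes(opened > 0).astype(np.uint8) * 255

    out = np.zeros_like(filled)
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area < min_area or perim == 0:
            continue
        # Heywood circularity factor: perimeter over the perimeter of the
        # equal-area circle (1.0 for a perfect disc, larger for ragged blobs).
        heywood = perim / (2.0 * np.sqrt(np.pi * area))
        if heywood <= max_heywood:
            cv2.drawContours(out, [c], -1, 255, thickness=cv2.FILLED)
    return out
```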
The next step is to divide the detected objects into those that exist in the last frame and those that exist in the previous frame (ClassifyTargets()). This procedure is based on the presence of high variance in the brightness of the image; in that way we can determine both the current and the former position of a moving object. Combined with the foregoing methods, this is an alternative way of calculating the optical flow, restricted to objects of interest, and it is fast and efficient compared with other algorithms.
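One possible reading of this step, sketched below with illustrative names, is that each motion blob is attributed to the frame whose pixels show the higher local brightness variance (labels as produced by, e.g., scipy.ndimage.label).

```python
def classify_targets(labels, frame_prev, frame_curr):
    """Hypothetical sketch of ClassifyTargets(): attribute each motion blob
    to the last or the previous frame via local brightness variance."""
    current_ids, previous_ids = [], []
    for obj_id in range(1, int(labels.max()) + 1):
        region = labels == obj_id
        # Assumption: the frame actually containing the object exhibits
        # the higher brightness variance inside the blob.
        if frame_curr[region].var() >= frame_prev[region].var():
            current_ids.append(obj_id)
        else:
            previous_ids.append(obj_id)
    return current_ids, previous_ids
```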
At this point the processing of the image ends and the process of making logical decisions begins. The first criterion for deciding whether an object is a potential target is confirming its existence in the last two frames (LocateTargets()). This is achieved by checking its size in both frames. The algorithm, benefiting from time locality (a possible target will be found in two time-contiguous frames), checks possible previous positions for each object and creates object position tables.
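The following sketch shows one way such a two-frame consistency check could be built; the pairing thresholds and data layout are our assumptions.

```python
import math

def locate_targets(curr_objs, prev_objs, max_dist=40.0, size_tol=0.5):
    """Hypothetical sketch of LocateTargets(): pair objects across the two
    most recent frames by size similarity and spatial proximity.

    curr_objs / prev_objs: lists of ((x, y), area) for each frame.
    Returns a position table of (current, previous) centroid pairs.
    """
    table = []
    for (cx, cy), ca in curr_objs:
        for (px, py), pa in prev_objs:
            close = math.hypot(cx - px, cy - py) <= max_dist   # time locality
            similar = abs(ca - pa) <= size_tol * max(ca, pa)   # size check
            if close and similar:
                table.append(((cx, cy), (px, py)))
    return table
```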
At the core of decision making, potential targets are classified as "verified", "under examination" or "new". In this step, the most complicated one, the movement of objects is identified (IdentifyTargetMovement()). "Verified" targets are a special case and are examined at the beginning of the subsystem to achieve instant recognition; this step is analyzed later on. To introduce new targets into the system, the tables of the previous step are organized into pairs of indexes and entered into the Shape and Color Recognition Filter (SCRF) (ShapeColorMatch()). This filter extracts the internal shape and color data of each object. Specifically, the shape is characterized by calculations of circularity, elongation, convexity and thinness, and by the holes it contains. As for the color information, the major peaks of the target's image histogram are collected in order to find its optimum position in the last frame. An object "under examination" is checked against the same criteria, except that its shape and color data (stored previously) are now directly available for processing. It is also checked for changes in the direction of flight and in the distance from its last position.
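As an illustration of the shape half of this filter, the sketch below computes the named descriptors using standard formulas; the project's exact definitions may differ.

```python
import cv2
import numpy as np

def shape_descriptors(contour, n_holes=0):
    """Hypothetical sketch of the shape measurements in ShapeColorMatch()."""
    area = max(cv2.contourArea(contour), 1e-6)
    perim = max(cv2.arcLength(contour, True), 1e-6)
    hull = cv2.convexHull(contour)
    hull_perim = cv2.arcLength(hull, True)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    return {
        "circularity": 4.0 * np.pi * area / perim ** 2,  # 1.0 for a disc
        "elongation": max(w, h) / max(min(w, h), 1e-6),  # bounding-box ratio
        "convexity": hull_perim / perim,                 # 1.0 if convex
        "thinness": perim ** 2 / area,                   # inverse compactness
        "holes": n_holes,                                # from contour hierarchy
    }
```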
The final step in decision making is the validation of targets (AircraftValidation()). The quasi-linear movement of an aircraft (in most cases) is one of the criteria for target validation. Constant checks on the angle and the area created by successive target positions complete the validation process.
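A minimal sketch of such a quasi-linearity test over three successive positions follows; the thresholds are illustrative, not the project's.

```python
import math

def quasi_linear(p1, p2, p3, max_area=30.0, max_turn_deg=25.0):
    """Hypothetical sketch of the linearity check in AircraftValidation()."""
    # Area of the triangle formed by three successive positions
    # (about zero when the track is a straight line).
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    # Heading change between the two displacement vectors.
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(p3[1] - p2[1], p3[0] - p2[0])
    turn = abs(math.degrees(a2 - a1)) % 360.0
    turn = min(turn, 360.0 - turn)
    return area <= max_area and turn <= max_turn_deg
```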
As mentioned above, a filter is fitted before the Motion Detection Filter (MDF) for target verification purposes (AircraftVerification()). If the core of the filter (VerifyMatch()) finds a valid target in the expected position (derived from the direction of its motion and the relative motion of the system), the target is considered "verified". The procedure is based on the appropriate selection of the peaks of the image histogram after a normalized cross-correlation (NCC) process.
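The sketch below shows the NCC search idea, assuming OpenCV's template matching and restricting the search to a window around the expected position; names and thresholds are illustrative.

```python
import cv2

def verify_match(frame, template, expected_xy, radius=60, min_score=0.7):
    """Hypothetical sketch of VerifyMatch(): search for the stored target
    template by NCC, but only around the expected position."""
    ex, ey = expected_xy
    th, tw = template.shape[:2]
    # Search window around the expected position (assumed to lie inside
    # the frame and to be at least as large as the template).
    y0, y1 = max(ey - radius, 0), min(ey + radius + th, frame.shape[0])
    x0, x1 = max(ex - radius, 0), min(ex + radius + tw, frame.shape[1])
    scores = cv2.matchTemplate(frame[y0:y1, x0:x1], template,
                               cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    return best >= min_score, (x0 + loc[0], y0 + loc[1]), best
```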
B. Target tracking algorithm

The second subsystem carries out the control of the servo motor. It takes as input data packets from the output of the target recognition algorithm. These packets contain the target's position on the image, the magnification level, the target's speed and direction, etc.
Initially, these packets go through a transitional stage in which they are transformed into a form the second subsystem can use (TransformCoordinations()). These transformations are necessary because the data coming from the recognition algorithm are measured in pixels; consequently, the target's position on the image must be converted into degrees (Pixels2Degrees()). Here arises the problem of the variable pixels-per-degree ratio (due to the variable magnification). This problem was solved by calibrating the camera: the ratios obtained from the calibration give the camera's angle of view as a function of the magnification (AOP()), so for each magnification value the corresponding angle of view is known. Then the absolute position of the target relative to the reference point of the servo motor is calculated (ABSPositioning()). This is the output of the transitional stage, and it is channeled to the next algorithms/controllers of the system ((Filtering()), (PTUSpeedController())).
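As a rough illustration, the conversion could be sketched as below; aov_deg() stands in for the calibrated AOP() curve, and its inverse-zoom form is only an assumption.

```python
def pixels_to_degrees(px_offset, image_width_px, zoom):
    """Hypothetical sketch of Pixels2Degrees(): convert a pixel offset from
    the image centre into degrees, using the calibrated angle of view."""
    def aov_deg(z, aov_at_min_zoom=48.0):
        # Stand-in for the calibrated AOP() curve; this particular shape
        # is an assumption for illustration only.
        return aov_at_min_zoom / z

    deg_per_px = aov_deg(zoom) / image_width_px
    return px_offset * deg_per_px
```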
In the next step, the target's absolute position (ABSPositioning()) is given as input to the Target Position Filter (TPF). This filter has an internal memory that stores the target's past positions (FilterMemory()). The positions pass through a combination of low-pass filters (MFiltering()) that operate on the memory of previous position estimates; in this way momentary erroneous assessments can be rejected, based mainly on the target's movement at earlier times (Filtering()).
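A minimal sketch of this idea, assuming a single moving-average low-pass stage with a rejection gate (window size and gate are illustrative):

```python
from collections import deque

class TargetPositionFilter:
    """Hypothetical sketch of the TPF: a short memory of past positions
    (FilterMemory()) feeding a moving-average low-pass stage with simple
    outlier rejection."""

    def __init__(self, depth=5, gate_deg=5.0):
        self.memory = deque(maxlen=depth)
        self.gate = gate_deg

    def update(self, angle_deg):
        if self.memory:
            avg = sum(self.memory) / len(self.memory)
            # Reject a momentary erroneous assessment that falls far from
            # the target's movement at earlier times.
            if abs(angle_deg - avg) > self.gate:
                angle_deg = avg
        self.memory.append(angle_deg)
        return sum(self.memory) / len(self.memory)
```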
Next, the output of the Target Position Filter (TPF) enters the Target Position Predictor (TPP), which estimates future target positions (CoordinationsPrediction()). This algorithm is based on extrapolation: new points are created beyond the known ones, from past values of the input signal, using the cubic Hermite spline method.
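Assuming SciPy's cubic Hermite spline with finite-difference slope estimates, the prediction step might look like this sketch (names are ours):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def predict_position(t_hist, theta_hist, t_future):
    """Hypothetical sketch of CoordinationsPrediction(): extrapolate a
    future angle from past samples with a cubic Hermite spline."""
    t = np.asarray(t_hist, dtype=float)
    th = np.asarray(theta_hist, dtype=float)
    dth = np.gradient(th, t)                 # slope estimates at the samples
    spline = CubicHermiteSpline(t, th, dth)  # extrapolates beyond t[-1]
    return float(spline(t_future))
```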
Finally, the output of the last algorithm (CoordinationsPredictor()) enters the Servo Position Controller (SPC) (PTUPositionController()). The major problem here is the identification of the servo motor system: we have no knowledge of the internal structure of the servo motor and its controllers separately, so we cannot know the matrices A, B, C, D of the state-space model and, consequently, the state vector.
Because we need to track angles rather than perform set-point control, we do not use a PID controller but a state-feedback controller with a specific time-varying feed-forward element, whose parameters are chosen so that the error converges to zero (PositionController()).
In reality, the controller is a "degenerated" state-feedback and output-feedback controller with a feed-forward element. In a genuine state-feedback controller we would need a gain vector; here, however, the gain vector reduces to a single gain K, because we estimated and approximated our system as a first-order one. The controlled system is characterized by the following differential equation:

Equation 2:

    dθ/dt = p · (θref − θ)


Moreover, the error e (the deviation from the desired position) is expressed by the following equations:

Equation 3:

    e = θdes − θ


Equation 4:

    de/dt = dθdes/dt − dθ/dt


As mentioned above, simple set-point control is not what is desired, but rather a controller that takes into account the time variation of the reference signal (in effect, it reformulates the error dynamics). Imposing the convergence law de/dt = −K · e on Equation 4 and solving Equation 2 for the reference angle yields the controller:

Equation 5:

    θref = θ + (1/p) · (dθdes/dt + K · e)


In the foregoing equations, K (the gain) is the inverse of the time constant with which the servo motor's error converges to zero, θ is its angle, θ⁻ is its previous angle (used to approximate derivatives in the discrete-time implementation), θdes is the desired angle of convergence, θref is the angle produced by the position controller as the reference signal, p is the pole of the servo motor system, and the derivative term dθdes/dt is the feed-forward element.
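Under the reconstruction above, a discrete-time form of the controller could be sketched as follows; this is our interpretation, with the feed-forward derivative approximated by finite differences.

```python
def position_controller(theta, theta_des, theta_des_prev, dt, K, p):
    """Hypothetical sketch of PositionController() implementing Equation 5
    as reconstructed above."""
    e = theta_des - theta                              # Equation 3
    dtheta_des = (theta_des - theta_des_prev) / dt     # feed-forward element
    # Equation 5: reference angle enforcing the error dynamics de/dt = -K*e.
    return theta + (dtheta_des + K * e) / p
```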
At the same time, the algorithm adjusts the angular velocity of the servo motor by processing the absolute target position (ABSPositioning()). It should be noted that the angular velocity of the servo motor is regulated with a digital signal and not with a voltage input; therefore the angular velocity is a "setting" of the system and not the derivative of its position.
The Servo Angular Velocity Controller (SAVC) intervenes to change the angular velocity of the servo motor as a function of time (PTUSpeedController()). The main reason for its existence is to keep the system from developing its maximum speed: a high angular velocity would make the system faster but also much more sensitive. (The controller's form is comparable to that of the Servo Movement Controller (SMC).)
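A minimal sketch of this speed-capping idea, assuming a simple proportional law with an explicit ceiling (gain and limit are illustrative, not the project's values):

```python
def speed_controller(error_deg, k_v=0.8, v_max_deg_s=40.0):
    """Hypothetical sketch of the SAVC idea: command an angular velocity
    that grows with the positioning error but is capped well below the
    motor's maximum, trading a little speed for robustness."""
    v = k_v * error_deg
    return max(-v_max_deg_s, min(v, v_max_deg_s))
```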