
29/03 - 02/04/04 OCIAM Oxford University


Distribution-independent safety analysis

Keith Slater and Paul Cowell, National Air Traffic Services



In various contexts, NATS need to ensure low probabilities of errors. A representative example is given below, but generically the error probabilities depend on the tails of some probability distributions for which there is no theoretical model, but considerable amounts of data. In these circumstances, questions that arise include:

  1. What are the best probability density functions (p.d.f.s) to fit?

  2. How sensitive are the results to the choice of p.d.f.?

  3. Can results be statistically justified without underlying theoretical models?

  4. Are there other ways of arriving at conclusions without fitting a p.d.f. to the data?

  5. What results from extreme value theory or other areas might help?

Lots of real data will be supplied to the Study Group.
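As an illustration of question 5, extreme value theory suggests fitting a generalised Pareto distribution (GPD) to the excesses over a high threshold (the peaks-over-threshold method), rather than fitting a p.d.f. to the whole sample. The sketch below is illustrative only: the threshold, the synthetic data, and the function names are assumptions, not part of the NATS analysis.

```python
import numpy as np
from scipy.stats import genpareto

def fit_tail_gpd(data, threshold):
    """Peaks-over-threshold: fit a GPD to the excesses over `threshold`.

    Returns the fitted shape xi and scale beta, plus a function estimating
    P(X > x) for x above the threshold via
    P(X > x) = P(X > u) * GPD survival function of (x - u).
    """
    excesses = data[data > threshold] - threshold
    # Excesses over the threshold start at 0, so fix the location there.
    xi, _, beta = genpareto.fit(excesses, floc=0)
    p_exceed = np.mean(data > threshold)

    def tail_prob(x):
        return p_exceed * genpareto.sf(x - threshold, xi, loc=0, scale=beta)

    return xi, beta, tail_prob

# Synthetic error magnitudes with an exponential tail (purely illustrative).
rng = np.random.default_rng(0)
data = rng.exponential(scale=0.1, size=100_000)
xi, beta, tail_prob = fit_tail_gpd(data, threshold=0.3)
```

For exponential data the true GPD shape is xi = 0 with scale equal to the exponential mean, so the fitted xi being near zero is itself diagnostic of an exponential-type tail; a fitted xi > 0 would indicate a heavier, polynomial tail.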

Illustrative example

Part of the NATS safety system requires that all of its radars at 22 sites have declared maximum safe ranges. NATS regularly undertakes an analysis of its radar performance to confirm or modify such ranges. The maximum safe range for a radar depends on the separation required between aircraft. The declaration is typically of the form "radar X can support 5 Nm separations between pairs of aircraft at any range up to 120 Nm from the radar". It should be noted that radar performance is not constant: it can depend on many factors, including icing conditions at the radar and other weather factors; the presence of new structures such as wind farms; and modifications to the radar and its associated equipment.

To determine the maximum safe range, the following process is followed:

  1. Several hours of data are recorded from many radars simultaneously.

  2. The recorded data are post-processed to determine the true position of targets and hence the individual position errors for each radar return and for each radar.

  3. The error data are then analysed statistically by another partly automated process to produce a maximum safe range estimate for a particular radar.
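Step 2 above can be sketched in miniature: if several radars observe the same targets simultaneously, one crude estimate of a target's true position is the cross-radar mean, and each radar's error is its residual from that estimate. NATS' actual post-processing reconstructs trajectories and is far more sophisticated; the averaging below is an illustrative simplification, and all names and numbers are assumptions.

```python
import numpy as np

def position_errors(measurements):
    """Estimate each target's true position as the cross-radar mean of the
    simultaneous measurements, then return the per-radar residual errors.

    measurements: array of shape (n_radars, n_targets, 2), (x, y) in km.
    Returns residuals of the same shape.
    """
    est_true = measurements.mean(axis=0)   # (n_targets, 2)
    return measurements - est_true         # broadcast over radars

# Synthetic scenario: 6 radars, 50 targets, 50 m measurement noise.
rng = np.random.default_rng(1)
true_pos = rng.uniform(0, 200, size=(50, 2))
noise = rng.normal(0, 0.05, size=(6, 50, 2))
errs = position_errors(true_pos + noise)
```

By construction the residuals sum to zero across radars for each target, so they slightly understate the true per-radar error; a real reconstruction would correct for this.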

In this last stage, the distribution of radar bearing errors, x, is currently fitted by a sum of two symmetric exponentials,

    f(x) = p1 (lambda1/2) exp(-lambda1 |x|) + p2 (lambda2/2) exp(-lambda2 |x|),    (1)

where pi>0, p1+p2=1, lambda1>lambda2>0, so that the central part of the density is dominated by the first term and the tails by the second. This is an empirical choice: we know no theory of the distribution. The fitted distribution is then used to estimate the Horizontal Overlap Probabilities (HOP), i.e. the probability that 2 aircraft that are really on the same bearing appear to be separated by more than a certain angle theta0:

    HOP(theta0) = P(|X1 - X2| > theta0),

where X1 and X2 are independent samples from the distribution (1), so the question is answered using the tail probabilities of the convolution of (1) with itself.
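A maximum-likelihood fit of (1) can be sketched as follows. The parameterisation, starting values, and synthetic data are assumptions for illustration; a production fit would need more care with starting points and identifiability (the labels of the two components can swap).

```python
import numpy as np
from scipy.optimize import minimize

def mix_exp_pdf(x, p1, lam1, lam2):
    """Density (1): p1*(lam1/2)exp(-lam1|x|) + (1-p1)*(lam2/2)exp(-lam2|x|)."""
    a = np.abs(x)
    return (p1 * 0.5 * lam1 * np.exp(-lam1 * a)
            + (1 - p1) * 0.5 * lam2 * np.exp(-lam2 * a))

def fit_mix_exp(x, start=(0.9, 20.0, 2.0)):
    """Maximum-likelihood estimate of (p1, lambda1, lambda2) for density (1)."""
    def nll(theta):
        p1, lam1, lam2 = theta
        return -np.sum(np.log(mix_exp_pdf(x, p1, lam1, lam2)))
    res = minimize(nll, start, method="L-BFGS-B",
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (1e-6, None)])
    return res.x

# Synthetic bearing errors drawn from (1) itself (illustrative parameters).
rng = np.random.default_rng(2)
n = 200_000
comp = rng.random(n) < 0.9
lam = np.where(comp, 25.0, 3.0)
x = rng.exponential(1.0 / lam) * rng.choice([-1.0, 1.0], size=n)
p1_hat, lam1_hat, lam2_hat = fit_mix_exp(x)
```

On data generated from (1) the fit recovers the parameters well; the harder question, raised above, is how it behaves when the data do not actually come from (1).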

The question then arises as to how well the observed data really characterize the tail of the distribution: do the computed HOP depend more or less directly on the data (and so are fairly robust to the form of distribution assumed), or is there quite a strong dependence on the form fitted? This is therefore a typical context in which the five questions listed earlier arise.
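The sensitivity to the fitted form can be made concrete: estimate the HOP by Monte Carlo under the mixture (1), then compare with the HOP that a Gaussian of the same variance would give. The parameters below are illustrative assumptions, not NATS' fitted values.

```python
import numpy as np
from math import erfc, sqrt

def sample_mix_exp(rng, n, p1, lam1, lam2):
    """Draw n samples from density (1)."""
    lam = np.where(rng.random(n) < p1, lam1, lam2)
    return rng.exponential(1.0 / lam) * rng.choice([-1.0, 1.0], size=n)

def hop_mc(p1, lam1, lam2, theta0, n=1_000_000, seed=0):
    """Monte Carlo estimate of HOP = P(|X1 - X2| > theta0) under (1)."""
    rng = np.random.default_rng(seed)
    x1 = sample_mix_exp(rng, n, p1, lam1, lam2)
    x2 = sample_mix_exp(rng, n, p1, lam1, lam2)
    return np.mean(np.abs(x1 - x2) > theta0)

def hop_gaussian(sigma, theta0):
    """HOP if errors were instead N(0, sigma^2): X1 - X2 ~ N(0, 2 sigma^2)."""
    return erfc(theta0 / (2.0 * sigma))

# Illustrative parameters (bearing errors in degrees, say).
p1, lam1, lam2 = 0.9, 25.0, 3.0
# Gaussian matched to the mixture's variance: Var of Laplace(lam) is 2/lam^2.
sigma = sqrt(p1 * 2 / lam1**2 + (1 - p1) * 2 / lam2**2)
hop_tail = hop_mc(p1, lam1, lam2, theta0=1.0)
hop_gauss = hop_gaussian(sigma, theta0=1.0)
```

Even though the two models agree in variance, the Gaussian puts orders of magnitude less mass in the tail at theta0 = 1, so the computed HOP depends heavily on the assumed form; this is exactly the robustness issue posed above.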

Other examples

Looking further afield than radar, we find that all of our safety data come in this form. Typical data are from:

  1. radar azimuth or position errors (as above);

  2. altimetry system errors;

  3. gain/loss in longitudinal separations between pairs of aircraft under procedural control;

  4. crash location data around airports used for constructing individual risk contours.

We model radar errors as mixed exponentials, and altimetry system errors as mixed Gaussians or mixed exponentials, among other combinations. There is no real model for the gain/loss data, and the crash location distribution is Weibull (at least in part).

Decisions on separation standards and maximum safe ranges need to err on the cautious side without being unrealistically constraining. Such decisions usually depend on some sort of convolution of the fitted p.d.f.s, so sensitivity to the treatment of the error data is important.


This page last modified by C. Breward
Monday, 15-Mar-2004 10:57:27 GMT