PHT Interactive Analysis User Manual - (V9.0)

5.) The processing algorithms

The PHT Interactive Analysis includes all the data reduction/correction steps which are known to be necessary (and achievable at the present stage of knowledge) for deriving useful astronomical data from the raw ISOPHOT telemetry. Furthermore, PIA offers a user-friendly way to access, test and change all the parameters governing these steps.

The correction steps can be classified by the reduction level at which they are applied. Almost all the correction steps in PIA can be freely chosen. Each reduction step takes the data from one level to the next.

In this section we briefly describe the processing algorithms used in PIA per reduction level, with emphasis on the meaning of the parameters accessible from the graphical I/F. Please note that we discuss neither the physical origin of the effects to be corrected for nor the accuracy of the methods applied. For these subjects please refer to the ISO Baseline Requirements Document for Off-line Processing (BRD).


5.1) ERD level

On the ERD level two corrections can be applied to the data: ramp linearity correction and ramp deglitching. For the latter, two different algorithms can be used. In addition, an automatic recognition of CRE saturation is applied. The only reduction step possible is reducing the data from read-out level to signal-per-ramp level.

Linearity correction

Nonlinearities in the CRE (Cold Read-out Electronics) output voltages and also de-biasing effects (mainly affecting the long-wave detectors with low bias: P3, C100 and C200) both result in ramp nonlinearities. These generally follow a fixed pattern for every detector pixel. The patterns are modelled in the Cal G files P#CRELIN.FITS. For read-out voltages measured as CRE output, the nonlinearities are given as voltage differences with respect to the correct (calibrated) values.

The linearity correction factors are given in 20 mV intervals, from -1.2 V to 1.2 V, thus covering the full CRE dynamic range. Typical corrections are of the order of a few percent. When applying the correction, every read-out voltage is corrected by adding the value resulting from linear interpolation between the correction factors of the corresponding 20 mV interval. A keyword 'PR_LINE' is written to the measurement header to indicate that the linearity correction has been performed; PIA checks for this keyword before applying the correction, so that it cannot be applied twice by mistake.

Note: the linearity corrections are NOT tuned to produce 'more' linear ramps. In general the corrected ramps do fit a straight line better, but in individual cases the opposite effect can occur.
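As an illustration, the interpolation scheme above can be sketched as follows. The correction table values here are purely hypothetical; the real factors come from the P#CRELIN.FITS Cal G files.

```python
import numpy as np

def linearize_readouts(voltages, grid, corrections):
    """Add the interpolated nonlinearity correction to every read-out voltage.

    grid        : table voltages, -1.2 V to 1.2 V in 20 mV steps
    corrections : tabulated voltage difference to the calibrated value
    """
    v = np.asarray(voltages, dtype=float)
    # linear interpolation inside the enclosing 20 mV interval
    return v + np.interp(v, grid, corrections)

# Hypothetical correction table (illustrative values of a few percent only)
grid = np.arange(-1.2, 1.2 + 1e-9, 0.02)   # 20 mV intervals
corrections = 0.02 * grid**2               # NOT the real calibration

corrected = linearize_readouts([0.0, 0.5, 1.0], grid, corrections)
```

The correction is additive, as described above, so a zero tabulated difference leaves a read-out unchanged.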

Main PIA routine: lin_voltages

Ramp deglitching correction

At a rate of the order of 0.1 Hz, a cosmic particle hits a PHT detector pixel with enough energy to seriously disturb the voltage ramp being measured at that moment.

One of the most common resulting effects is a local 'jump' between two (or even three) read-outs, followed by an immediate relaxation. The general observation is thus normal ramp behaviour, a jump, and then the same behaviour as before the jump, leaving two perfect parts of a ramp with a gap between them.

Since the normal operational read-out frequency of the PHT detectors ranges from 8 to 64 Hz, depending on the detector used, the information redundancy on voltage differences between read-outs allows us to easily pick out those anomalous jumps and to correct the ramps affected.

Algorithm: the individual voltage differences between consecutive read-outs are computed in every ramp with at least a given number (MINP) of read-outs. The average and standard deviation of all the differences but the highest one are calculated. The voltage differences which deviate from the average by more than a given number of standard deviations (FSIG) are determined. Every such outlier and the value following it are replaced by the average difference found. This process is iterated a given number of times (ITER) for every ramp. The processing thus has three changeable parameters: MINP, FSIG and ITER. At the time the correction is performed, the three keywords 'PR_DGLP', 'PR_DGLF' and 'PR_DGLI' with their corresponding values are attached to the measurement header.
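A minimal sketch of this scheme, with the simplification that each outlying difference is replaced by the average difference and the ramp rebuilt from the repaired differences (the real PIA routine ramp_deglitch may differ in detail):

```python
import numpy as np

def ramp_deglitch(ramp, minp=4, fsig=3.0, n_iter=2):
    """Repair single-read-out jumps in one voltage ramp (illustrative sketch).

    Parameter names mirror MINP/FSIG/ITER in the text; the defaults here
    are illustrative assumptions.
    """
    ramp = np.asarray(ramp, dtype=float).copy()
    if ramp.size < minp:
        return ramp                          # too few read-outs: leave as-is
    for _ in range(n_iter):
        diffs = np.diff(ramp)
        # statistics of all differences but the highest one
        rest = np.delete(diffs, np.argmax(np.abs(diffs)))
        avg, sig = rest.mean(), rest.std()
        bad = np.abs(diffs - avg) > fsig * sig
        # replace outlying differences by the average difference,
        # then rebuild the ramp from the repaired differences
        diffs[bad] = avg
        ramp = ramp[0] + np.concatenate(([0.0], np.cumsum(diffs)))
    return ramp
```

For a ramp with one glitch jump, e.g. [0, 1, 2, 8, 9, 10], the repaired ramp recovers the undisturbed slope.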

Main PIA routine: ramp_deglitch

Two threshold glitch recognition

An alternative to the ramp deglitching algorithm is offered by PIA. It makes use of the knowledge that a momentary increase of the responsivity occurs very often after a cosmic hit, with a relaxation time depending on several parameters (illumination of the detector, energy of the hit, etc.). In this case not just one read-out but several can be seriously affected. To cope with this, two thresholds are used: one for detecting a main glitch (and thus a serious statistical outlier), and a second one for flagging all following read-outs still affected. This algorithm takes a different approach to the general stability of a signal measurement, which makes it possible to extend the statistical analysis to all read-outs corresponding to an expected constant signal (a chopper plateau or a raster point). This also extends the applicability to measurements with few read-outs per ramp, which cannot be deglitched by the "Ramp deglitching correction" presented above because a statistical analysis per ramp is impossible.

Algorithm: for a read-out distribution corresponding to an expected constant signal, the voltage differences of all read-out pairs are computed, except those between a destructive read-out and the first read-out of the following ramp. Since this distribution generally shows transient behaviour, which can affect the statistical analysis, it is normalised by dividing it by the result of its median filtering. A running window for median filtering with PR_MEDW elements is used, which should be large enough to preserve local fluctuations while still tracing the general transient behaviour. This baseline-corrected distribution is then used for computing an average value and a standard deviation. All points lying more than PR_THR1 standard deviations away from the average value are flagged as main glitches and excluded from the next computation of a new average and standard deviation. This is repeated PR_ITER times. All normalised voltage differences following a main glitch are flagged as bad points, until the first one lies within PR_THR2 standard deviations of the average value. None of the flagged points are used when computing the slope of a ramp. Again the parameters are quoted in the header of the measurement.
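The following sketch illustrates the two-threshold logic on a single block of pair differences. The median-filter baseline handling, parameter defaults and iteration details are illustrative assumptions, not the exact PIA implementation:

```python
import numpy as np

def two_threshold_flags(diffs, medw=9, thr1=4.0, thr2=2.0, n_iter=2):
    """Flag read-out pair differences affected by glitches (sketch).

    diffs : voltage differences of consecutive read-out pairs of one
            expected-constant-signal block (destructive read-out pairs
            already excluded). medw/thr1/thr2/n_iter stand in for the
            PR_MEDW/PR_THR1/PR_THR2/PR_ITER parameters of the text.
    Returns a boolean mask, True where a difference must be discarded.
    Assumes the median baseline is nonzero everywhere.
    """
    diffs = np.asarray(diffs, dtype=float)
    # normalise away the slow transient: divide by a running-median baseline
    half = medw // 2
    padded = np.pad(diffs, half, mode="edge")
    baseline = np.array([np.median(padded[i:i + medw])
                         for i in range(diffs.size)])
    norm = diffs / baseline
    main = np.zeros(diffs.size, dtype=bool)
    for _ in range(n_iter):
        good = norm[~main]
        avg, sig = good.mean(), good.std()
        main |= np.abs(norm - avg) > thr1 * sig
    # flag everything after a main glitch until back within thr2 sigma
    flags = main.copy()
    tail = False
    for i, v in enumerate(norm):
        if main[i]:
            tail = True
        elif tail:
            if abs(v - avg) <= thr2 * sig:
                tail = False
            else:
                flags[i] = True
    return flags
```

A main glitch followed by a decaying tail is flagged as a whole, while the stable differences before and after it survive.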

Main PIA routine: pair_deglitch

Saturation recognition

Integration of a ramp above the saturation limit (~1.2 V) is not possible. In this case the CRE output is normally just the limit level for every read-out after saturation has been reached. However, anomalies have been observed in cases of extremely high saturation, producing read-out voltages which vary in the range 0.8-1.2 V. An automatic recognition covering all cases has been introduced in PIA.

Algorithm: all read-outs with voltages above a certain value (MAXVOLT) are flagged as bad, as are all voltages below a certain value (MINVOLT; this has nothing to do with saturation, but allows control of the accepted dynamic range). In addition, if a negative gradient is found for read-out values above 0.6 V, all the read-outs following this gradient within a ramp are flagged as bad. The parameters MAXVOLT and MINVOLT are changeable. The keywords 'PR_LVOLT' and 'PR_FVOLT' with the corresponding values are added to the measurement header.
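A sketch of these rules follows; the default limits used here are illustrative stand-ins for MAXVOLT/MINVOLT, which are set through the PIA interface:

```python
import numpy as np

def flag_saturation(ramp, maxvolt=1.2, minvolt=-1.2, grad_level=0.6):
    """Flag invalid read-outs of one ramp (sketch of the rules in the text).

    maxvolt/minvolt mirror the adjustable MAXVOLT/MINVOLT parameters;
    grad_level is the fixed 0.6 V level above which a negative gradient
    marks the rest of the ramp as bad. Returns True where a read-out
    must be rejected.
    """
    v = np.asarray(ramp, dtype=float)
    bad = (v > maxvolt) | (v < minvolt)
    for i in range(1, v.size):
        if v[i] < v[i - 1] and v[i - 1] > grad_level:
            bad[i:] = True                # everything after the drop is bad
            break
    return bad
```

A ramp that turns over above 0.6 V thus loses all read-outs after the turnover, even if none of them exceeds MAXVOLT.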

Main PIA routine: acc_volt

Processing to SRD level

Processing from ERD to SRD is basically the reduction from read-outs to signals per ramp (V -> V/s). Not only the voltages are reduced, however, but also all the other variables contained in the phterd structure. Please note that from PIA V8.0 onwards a special procedure for reducing PHT-C and PHT-P chopped measurements is available, under the name "Pattern procedure". This is the default for processing such measurements, although the traditional data reduction ("Ramp per ramp") is still possible.

Ramp per ramp procedure:

The reduction from voltages to a signal per ramp is done by fitting a first- to third-order polynomial to the read-out voltages of a ramp versus the corresponding times (the 'normal' reduction uses the first-order polynomial; all the calibration applies to this case only). The fitted slope of the polynomial (in V/s) becomes the signal of the ramp, and the uncertainty computed on this parameter by the fitting procedure becomes the signal uncertainty. Both signal and uncertainty are accommodated for every ramp and every detector pixel in the phtsrd structure, together with the initial time of the ramp. In addition, a flag is set for every signal, coded bitwise and containing several pieces of information, partly depending on the number of valid read-outs used for fitting the slope. If this number is exactly 2, a flag bit ("2 readouts/ramp") is set, indicating that the uncertainty is not completely valid (*); if it is less than 2, the value and the corresponding uncertainty are set to 0 and another flag bit ("1 readout/ramp") is set. In this last case the fitting procedure is of course not applied. Invalid read-outs result not only from the saturation recognition explained above, but also from the read-out selection criteria applied. In addition, if all the read-outs corresponding to a ramp are "Not on target", another flag bit is set. This makes it possible to accept the "Not on target" read-outs on the ERD level for deriving signals while rejecting those signals from further processing (the default choice).
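For the first-order case, the slope fit and its formal uncertainty can be sketched as follows (a plain least-squares implementation, not the actual PIA fitting code):

```python
import numpy as np

def ramp_to_signal(times, volts):
    """Reduce one ramp's valid read-outs to (signal [V/s], uncertainty).

    First-order (straight-line) case only -- the 'normal' reduction for
    which the calibration applies. With exactly 2 read-outs the formal
    uncertainty is undefined and returned as 0.0 (cf. the "2 readouts/ramp"
    flag); with fewer than 2, both values are 0 (cf. "1 readout/ramp").
    """
    t = np.asarray(times, dtype=float)
    v = np.asarray(volts, dtype=float)
    n = t.size
    if n < 2:
        return 0.0, 0.0
    tbar, vbar = t.mean(), v.mean()
    stt = np.sum((t - tbar) ** 2)
    slope = np.sum((t - tbar) * (v - vbar)) / stt
    if n == 2:
        return slope, 0.0
    # formal slope uncertainty from the fit residuals
    resid = v - (vbar + slope * (t - tbar))
    sigma2 = np.sum(resid ** 2) / (n - 2)
    return slope, np.sqrt(sigma2 / stt)

# e.g. a clean 1 V/s ramp sampled at 32 Hz
t = np.arange(8) / 32.0
sig, unc = ramp_to_signal(t, 1.0 * t + 0.05)
```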

The other ERD variables are reduced by taking the medians of their values within each ramp. The only exceptions are the variables STIM (slope time), which is taken as the initial time of every ramp, and TEMP (detector temperature), which is interpolated to the central point of every ramp, taking into account that this variable is not read with the same frequency as the read-outs but with a 2 second sampling.

The keyword 'PR_NDEG' appended to the measurement header indicates the degree of the polynomial fitted for reducing the data to the SRD level.

(*) In the case of a ramp with only two read-outs the signal uncertainty cannot be derived from the ramp slope uncertainty. PIA then looks at the whole distribution (all the signals corresponding to the same chopper plateau) and artificially creates an uncertainty, depending on whether or not the distribution contains several ramps with more than two read-outs. In the first case, with points with valid signal uncertainties present, all signals whose uncertainty cannot be derived get an uncertainty set to four times the median of the valid ones. In the second case the uncertainties are set to four times the median of all individual differences between neighbouring signals.
 

A special treatment: ramp sub-division
A better time resolution on the signal level can be reached by using the method of ramp sub-division. The user can define a pseudo-ramp with fewer read-outs [N_p] than the original one [N_o] (this makes sense if N_p <= N_o / 2). If so, the original ramps are separated into several pieces of N_p read-outs each (the read-outs remaining after all N_p pieces of an original ramp have been defined are taken as an extra pseudo-ramp if their number is larger than N_p / 2), and a signal is derived for each of these pseudo-ramps.
An important reason for ramp sub-division is that in cases with a very low number of ramps per chopper plateau, the statistical treatment is at best dubious, and significances and probabilities can hardly be derived. The disadvantage of sub-dividing ramps is that the high frequency read-out noise is less band-pass filtered. Measurements reduced using ramp sub-division record this through a keyword 'PR_SEPAR' set to the number of read-outs taken per pseudo-ramp.
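The sub-division rule can be sketched as:

```python
def subdivide_ramp(readouts, n_p):
    """Split one original ramp's read-outs into pseudo-ramps of n_p
    read-outs each (sensible only if n_p <= len(readouts) // 2).

    A remainder longer than n_p / 2 becomes an extra pseudo-ramp,
    as described in the text; a shorter remainder is dropped.
    """
    pieces = [readouts[i:i + n_p] for i in range(0, len(readouts), n_p)]
    if pieces and len(pieces[-1]) < n_p:      # the remainder piece
        if len(pieces[-1]) <= n_p / 2:
            pieces.pop()                      # too short: discard
    return pieces
```

For a 10 read-out ramp with N_p = 4 the 2 read-out remainder is dropped; with 11 read-outs the 3 read-out remainder survives as an extra pseudo-ramp.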
Main PIA routine: process_tmperd

Ramps Pattern procedure:

This is a special procedure developed for reducing PHT-C and PHT-P data taken in one of the chopper modes (rectangular, triangular or sawtooth), except P32 data. Basically, all the data from the on-source chopper plateaux are compressed into one on-source chopper plateau and all the off-source chopper plateaux into one off-source chopper plateau, each of them with n pseudo-ramps (by default 4 ramps on and 4 ramps off), independently of the original number of ramps per chopper plateau.
This is done in the following way: pairs of valid read-outs are taken and their voltage differences computed (the differences between a destructive read-out and the first read-out of the next ramp are not taken). All the voltage differences are then grouped into n logical ramps (n/2 off and n/2 on), depending on the time difference to the beginning of the corresponding chopper plateau. The n averages are then the signals. In this way a very high signal-to-noise ratio is reached, while the dominating "pattern" per chopper plateau (induced mainly by the flux change when chopping between on and off) is maintained. All the other variables are reduced in the same way and accommodated in the phtsrd structure. A keyword ("PR_PATT") is added to the header of the measurement to indicate that it was reduced under this scheme and to steer further data reduction. The procedure is complemented on the SRD level, as explained in the next section under Processing to the SCP level, Ramps Pattern.
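A sketch of the binning step for one plateau type (on or off), assuming the pair differences and their time offsets from the plateau start are already available; the real procedure additionally handles the on/off grouping and the reduction of the other variables:

```python
import numpy as np

def pattern_signals(pair_diffs, t_in_plateau, plateau_len, n=4):
    """Compress the valid read-out pair differences of all plateaux of
    one type (on or off) into n logical pseudo-ramps (sketch).

    pair_diffs   : voltage differences of valid read-out pairs
    t_in_plateau : each pair's time offset from its plateau start
    plateau_len  : duration of one chopper plateau
    Returns the per-bin average differences, i.e. the "signals"
    (assumes every bin receives at least one pair).
    """
    diffs = np.asarray(pair_diffs, dtype=float)
    t = np.asarray(t_in_plateau, dtype=float)
    edges = np.linspace(0.0, plateau_len, n + 1)
    idx = np.clip(np.digitize(t, edges) - 1, 0, n - 1)
    return np.array([diffs[idx == k].mean() for k in range(n)])
```

Since all plateaux of one type contribute to the same n bins, the averages retain the per-plateau "pattern" while gaining signal to noise.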

Main PIA routine: process_erd_to_pattern.pro


5.2) SRD level

On the SRD level there are also three corrections which can be applied to the data: reset interval normalisation, signal deglitching and dark current subtraction. A special automatic signal selection can be applied by performing the so-called stability analysis; a more refined treatment is given by the signal drift analysis. The reduction to the next step is from signal per ramp to signal per chopper plateau (SCP).

Reset Interval normalisation

A dependence of the signal on the reset interval used for read-out sampling has been established. The effect is detector and CRE dependent and leads to signal differences if the same flux is measured using different ramp integration times. Tables for correcting this effect have been designed, and the signal can be normalised to the one that would have been measured using an integration time of 1/4 s.
Since this correction has been applied when generating the calibration files, which are of course affected by this effect, its application is mandatory for a correct calibration of the analysed data.

Main PIA routine: corr_reset_int

Signal deglitching correction

The main effect produced by cosmic hits on the detectors was described in the previous chapter, including the momentary increase of the responsivity, which can have a relaxation time of the order of several signals. Although, in principle, this can be corrected by using the "Two threshold glitch recognition" on the read-out level, in several cases residual effects are still present on the signal level. Here again, the signal redundancy helps to a large extent to eliminate such effects. In general there are several signals per chopper plateau, enough to look for local changes in the ramp slopes.

Algorithm: for every chopper position a 'running mean' method is applied. A signal average is computed over a region composed of a pre-determined number of valid signals (NSIG). The individual signals in this region, with their uncertainties, are compared to the region average and its standard deviation, and those signals deviating by more than a number of sigmas (SIGMA) are flagged as suspicious. A new signal region is then taken, shifted by a number of signals (NJUMP), and the same procedure is applied to it. All signals flagged as suspicious a certain number of times (NFLAG) are flagged as bad. The method is therefore controlled by four parameters, NSIG, SIGMA, NJUMP and NFLAG, which are accessible and changeable through the corresponding test interfaces.
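A sketch of the running-mean flagging; the parameter defaults here are illustrative, not the PIA defaults:

```python
import numpy as np

def deglitch_signals(signals, nsig=10, sigma=3.0, njump=1, nflag=2):
    """Flag glitch-affected signals of one chopper position (sketch of
    the running-mean scheme; NSIG/SIGMA/NJUMP/NFLAG as in the text).
    Returns True where a signal is declared bad.
    """
    s = np.asarray(signals, dtype=float)
    suspects = np.zeros(s.size, dtype=int)
    # slide a window of nsig signals in steps of njump
    for start in range(0, max(s.size - nsig + 1, 1), njump):
        region = s[start:start + nsig]
        avg, sig = region.mean(), region.std()
        if sig == 0:
            continue                        # perfectly flat region
        out = np.abs(region - avg) > sigma * sig
        suspects[start:start + nsig] += out  # count suspicious flags
    return suspects >= nflag
```

A single spiked signal inside an otherwise flat distribution is flagged by every window covering it, and so easily exceeds NFLAG.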

The keyword 'PRS_DEGL' is written to the measurement header indicating that this correction has been performed.

Main PIA routine: deglitch_all

Orbital position dependent dark current subtraction

The Cal G files P##DARKC (with ## standing for P1, P2, C1, etc.) contain tables with dark currents for every detector pixel, binned to reflect the dependence of the dark currents on the orbital position. This dependence is not surprising, since for several detectors (mainly Ge:Ga) cosmic radiation plays a dominant role in the so-called dark currents. Subtraction of this background is performed by interpolating the values in those tables to the actual orbital position of the measurement. Within PIA, interpolation can be chosen at the level of one value per signal, per raster point, per measurement or per AOT.

When applying the dark current subtraction to a measurement, the keyword 'PRS_DARK' is written to the measurement header to indicate that this correction has been performed; PIA checks for this keyword before applying the correction, so that it cannot be applied twice by mistake.

Main PIA routine: darkcur_orb

Simple Dark current subtraction (obsolete)

Average values for the ISOPHOT detector pixel dark currents are quoted in the Cal G files PDARKCURR, from which they are read by PIA. They are used simply by subtracting them from every signal. There is no parameter associated with this correction. As stated in the title, this correction is now obsolete; it has been replaced by the orbital position dependent dark current subtraction explained above.

When applying the dark current subtraction to a measurement, the keyword 'PRS_DARK' is written to the measurement header to indicate that this correction has been performed; PIA checks for this keyword before applying the correction, so that it cannot be applied twice by mistake.

Main PIA routine: darkcur

Signal linearization

A dependence of the detectors' response on the illumination level has been established. Since all measurements are affected by this effect (including the FCS measurements used for establishing the responsivity), the linearization of the detector response is best handled by linearizing all signals as recorded by the detectors. Tables normalizing the signals to a default central detector response are used to correct for this effect. Since several calibration files have been produced using this normalization, the correction is mandatory, but the level at which it has to be applied depends on the data reduction type.
The correction simply replaces an input signal by an output signal: the one that would be seen if the detector responsivity were constant for all signal levels.
This correction should be applied on the SRD level only for chopped measurements which are processed by the "Ramps Pattern" procedure (since the files derived for calibrating this mode used signal-linearized data). For all other data it should first be applied on the SCP level. The reason lies in the fluctuations of normal signal distributions, which are minimized when averaging over a large number of signals (a special worst case is very low illumination, when some signals in the distribution are negative).

Stability analysis

The ISOPHOT detectors need a certain operation time until they stabilize. Since this time depends on the detector material, the flux applied and the flux seen before, among other parameters (e.g. temperature), we are generally confronted with drifting signals, which may or may not reach a stable value, depending on the measurement time in addition to all the parameters mentioned before.

In order to be able to recognize the total or partial validity of a measurement (and thus, have confidence in main signals and uncertainties quoted), a stability analysis was introduced into PIA, which should tell us if a measurement has been completely stable, partially stable or not stable at all, and, depending on this, make a selection of signals for obtaining a valid region before performing the reduction to the SCP level.

Algorithm: based on the 'Mann method' for trend recognition in a data sample, PIA looks for an upward or downward trend in the slope distribution within a staring measurement, per raster point. The trend is given with a certain level of confidence (parameter DCLV). If the distribution shows no trend, it is considered stable and all the slopes are valid. If not, the 'last' portion of the distribution is taken, according to the parameter INTDIV (from 1 to 3), through deselection of the first 1/2^DINT part of the distribution under analysis. This processing is repeated until either a stable part of the distribution is found, or no more than a given number of points is left (DMNP), which should be at least large enough (~10) to perform this kind of statistical analysis. The region taken in the last iteration then determines the valid part of the distribution, invalidating all the signals before this cut-off for the computation of signal means, medians, etc. The parameter set used for the analysis is written into the measurement header through the keywords 'PRS_DCLV', 'PRS_DINT' and 'PRS_DMNP' respectively.
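The trend test itself can be sketched with the standard Mann(-Kendall) statistic; the exact 'Mann method' variant used inside PIA may differ in detail:

```python
from statistics import NormalDist

def mann_trend(x, conf=0.95):
    """Mann(-Kendall) trend test on a signal distribution (sketch).

    Returns +1 / -1 / 0 for an upward / downward / no trend at the
    given confidence level (cf. the DCLV parameter in the text).
    """
    n = len(x)
    # S statistic: sum of signs over all ordered pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    if s == 0:
        return 0
    var = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S (no ties)
    z = (s - 1) / var ** 0.5 if s > 0 else (s + 1) / var ** 0.5
    zcrit = NormalDist().inv_cdf(0.5 + conf / 2.0)   # two-sided threshold
    if abs(z) > zcrit:
        return 1 if s > 0 else -1
    return 0
```

A distribution reported as trending would then be cut down iteratively, as described above, until a stable part (or the DMNP floor) is reached.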

For statistical analysis (and also for a better judgement of the drift behaviour and its influence on the values quoted), a text file is produced at the time of the analysis, which quotes the stability level (total, partial or non-existent) per raster point, the number of points accepted, and the signal variation per minute in the accepted region relative to the signal average in this region (in [%/minute]). For this, a linear fit is applied to the accepted signals only, and its slope (and uncertainty) is divided by the mean signal.

Main PIA routine: pia_mann

Signal Drifts' modelling

The most sophisticated way of coping with the problems described at the beginning of the previous section is to fit a function describing the drift behaviour of the detector signals, in order to derive the final signal which would be measured after observing for an infinite time. PIA offers several functions and graphical interfaces for deriving final signals from signal distributions. A complete overview is given in the chapter PIA Signal Drifts' Modelling I/F.

Main PIA routines: pia_drift_mod, drift_mod_proc


Processing to the SCP level

Chopped measurements which have been processed using the "Ramps Pattern" procedure can be further processed using the special calibration derived for this processing. This is the default choice. However, they can also be processed with the traditional data reduction of simply averaging signals in the "Chopper Plateau by Chopper Plateau" procedure. For PHT-S staring measurements there is a specially developed data reduction strategy called the "Dynamic Calibration", which is explained at length in section 3.3.8.

Chopper Plateau by Chopper Plateau:
Processing from SRD to SCP means performing averages and medians over the signals per chopper plateau or per raster point (or over the whole measurement for the staring case and a unique pointing). In addition all the accompanying variables in the SRD level are also reduced to values per chopper plateau or raster point.
[For a chopped raster measurement divided into chopper plateaux (P32 case): chopper plateaux and raster positions are in this case not synchronized. It is therefore necessary to provide for the case of a chopper plateau beginning in one raster position and continuing into the next one, after the on-raster flag is ON again. If this happens, all the signals from a raster point are valid until the on-raster flag goes OFF; from that point until the end of the chopper plateau all signals are considered invalid, regardless of their raster flags.]

Algorithm: the 'weighted mean' method is used for averaging the signals, unless it is disabled or the number of signals is too low (< 15). Every mean computed from two or more elements (N > 1) is considered completely valid. Those chopper plateaux or raster points containing only one valid element get <S>=S and <sigma<S>> = unc, and the flag bit "Only one signal" is set. Chopper plateaux with only invalid signals get <S>=0 and a flag bit "No valid signal", and are not considered in the further data reduction.
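A sketch of the weighted-mean step, assuming the usual inverse-variance weighting (the exact PIA weighting scheme is not specified here):

```python
import numpy as np

def weighted_mean(signals, uncertainties):
    """Weighted mean of the valid signals of one chopper plateau or
    raster point (sketch): weights are the inverse variances, and the
    quoted uncertainty is that of the weighted mean itself.
    """
    s = np.asarray(signals, dtype=float)
    w = 1.0 / np.asarray(uncertainties, dtype=float) ** 2
    mean = np.sum(w * s) / np.sum(w)
    unc = 1.0 / np.sqrt(np.sum(w))
    return mean, unc
```

Signals with large uncertainties thus contribute little, which is the point of preferring the weighted mean when enough signals are available.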

In addition medians, first and third quartiles are calculated, per chopper plateau or raster point, using all the valid signals. Also the medians of all the other variables are taken, except for the time, for which the central time of the valid points is computed, and the raster point IDs, for which the unique raster point is taken.

Ramps Pattern:
The "signals" obtained in the "Ramps Pattern" reduction from ERD to SRD are used for deriving one on- and one off-signal, corrected with calibration tables for signal losses due to chopping. The user has no influence on which signals are taken in this case (the choice is fixed independently for each detector).

Main PIA routine: process_tmpsrd
 

"Dynamic Calibration":
As explained in section 3.3.8, this method is based on a comparison of the temporal evolution of the signals per ramp of a sky measurement with parametrised signals corresponding to a set of calibrators. Each calibrator covers (for a given detector pixel) a certain flux range. The calibration is based on the fact that the transient behaviour of a PHT-S detector pixel (operated in the P40 staring mode) depends mainly on the flux it is seeing. Due to the dark current measurement at the beginning of the P40 observation template, the starting conditions are always very similar. The characteristic transient behaviour changes slowly with increasing flux, so a good quality calibration can be obtained by interpolating between the fluxes of the known calibrators showing the most similar transient behaviour. The output is calibrated data in Jy / MJy/sr, thus on AAP level.

Main PIA routine: calib_dynsrd


5.3) SCP level

On the SCP level, three data corrections can be applied, the dark current subtraction (which can also be applied on the SRD level and is therefore not described here), the vignetting correction and the straylight subtraction. For processing there are several alternatives, some of which depend on the measurement type: background subtraction, within the same measurement if source chopped against background, or using another measurement as background; responsivity calculation for an FCS measurement; power calculation for source measurements (reducing to SPD level), including responsivity calculation for the absolute photometry case.

Vignetting correction

The ISOPHOT detector pixels can be telescope vignetted depending on the aperture used and on the chopper throw. The Cal G files PPVIGN, PC1VIGN and PC2VIGN (for the P detectors, C100 and C200 respectively), contain the inverse of the vignetting percentages per pixel, aperture and chopper throw. PIA reads the parameters directly from the corresponding Cal G files. They are used simply by multiplying the SCP signals with them. There is no parameter associated with this correction.

When applying this correction, a keyword 'PRC_VIGN' is written into the measurement header.

Main PIA routine: vignet

Straylight subtraction

The FCS measurements are affected by straylight coming from a part of the field mirror which lies outside the central field of view (the part pointing to the source in the case of a source measurement); the contribution is thus generally correlated with the background level of the observations. This contribution must therefore be subtracted from the signals measured with the FCS for the tables used in deriving the detector responsivity to be fully valid. The problem is that, until now, no full assessment of the straylight has been made. There are only indications of the approximate level of the straylight, without discrimination between wavelengths, apertures or detectors used, nor the exact location of the straylight source.

A (very!) first-order approximation is obtained by multiplying the signal value from the celestial background contribution by the straylight factor and subtracting this value from the FCS signal prior to the responsivity calculation. The user can change the straylight factor manually.

Please note that this is a very crude approximation and should give the PIA user just a hint about whether or not the straylight level could be important for the responsivity determination. In the future a better assessment of the straylight problem will surely be followed by the creation of a new calibration file, according to the parameters on which this correction depends.

Main PIA routine: pia_get_strayl

Background subtraction within the measurement

If a measurement contains at least two chopper steps and none of them corresponds to an FCS position, PIA offers the possibility of subtracting one of them from another. The only parameters associated with this subtraction are the chopper steps. The default choices are: a) for triangular and sawtooth chopped modes, the central step as source position and the interpolation of the neighbouring steps as background; b) for rectangular chopped measurements, the second step as source position and the interpolation of the chopper plateaux before and after each chopper plateau corresponding to the second step. Interpolation is always applied linearly. In addition, individual choices are possible, in which case the subtractions are taken one by one for every pair of chosen chopper positions.

The mean and median valid signals are subtracted. Normal uncertainty propagation is calculated for the means (using the signal uncertainties) for every subtracted signal, i.e. every point belonging on the SCP level to the 'source' step. The differences calculated are accommodated in a new structure, and the mean of the valid mean differences and the median of the valid median differences are computed.
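For the rectangular default (background plateaux before and after the source plateau, linearly interpolated, here reduced to the midpoint), one subtraction with uncertainty propagation can be sketched as:

```python
import numpy as np

def subtract_background(src, src_unc, bkg_before, bkg_after,
                        unc_before, unc_after):
    """One 'source' chopper plateau minus the linear interpolation
    (here: midpoint) of the background plateaux before and after it,
    with standard uncertainty propagation (illustrative sketch).
    """
    bkg = 0.5 * (np.asarray(bkg_before, dtype=float) +
                 np.asarray(bkg_after, dtype=float))
    # uncertainty of the interpolated background
    bkg_unc = 0.5 * np.sqrt(np.asarray(unc_before, dtype=float) ** 2 +
                            np.asarray(unc_after, dtype=float) ** 2)
    diff = np.asarray(src, dtype=float) - bkg
    unc = np.sqrt(np.asarray(src_unc, dtype=float) ** 2 + bkg_unc ** 2)
    return diff, unc
```

Computing each difference individually, as described above, keeps the drift contribution to the subtracted-mean uncertainty small.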

Valid points are those with a flag value of less than 2. A subtracted value gets flag 0 if both values involved in the subtraction have flag 0, flag 1 if at least one of them has flag 1, and flag 2 if at least one of them has flag 2; in this last case the subtraction is declared invalid and the subtracted value is set to 0.

The reason for performing the subtraction this way, instead of directly subtracting the means per chopper step, is the influence of the general signal drifts on the uncertainty of the subtracted mean, which is minimized in this way. Computing all subtracted values individually also allows control of the stability of the signal difference, and this is fully exploited by the PIA graphical I/F.

PIA can handle the product of a subtraction as a new (pseudo-)measurement. In this case the measurement looks like a staring measurement, and its signal (mean or median) should correspond to a measurement of the source without any background. The other parameters (temperature, FCS powers, etc.) are reduced simply using the medians, except for the assigned time, which is just the central time of the observation. In this way full use of the graphical I/F is possible, and saving, restoring and further reduction are also possible. PIA recognizes from the internal name that this 'measurement' is the product of a subtraction and indicates this in the graphical I/F handling.

Main PIA routine: subtr_bckg

Background subtraction using another measurement

From every non-FCS measurement another non-FCS measurement can be subtracted. No parameter is involved here; only the characteristics of the two measurements are taken into account. The 'background' measurement has to be a staring measurement, thus containing only one unique average signal and one unique median signal. If the 'source' measurement is chopped, those values are subtracted from every signal, mean from means and median from medians. Uncertainty propagation and flag setting are similar to those described for background subtraction within a measurement. If the measurements were taken with different apertures, the necessary normalisation is done automatically. Trying to subtract measurements taken with different filters yields an error message.

The same applies here as in the previous paragraph concerning accommodating the subtracted values as a new (pseudo-)measurement. In this case the variables included in the corresponding buffer are derived exclusively from the source measurement.

Main PIA routine: subtr_meas

Compensation for signal losses due to chopping frequency

Whenever chopping between different signal levels is used (e.g. source vs. background), the differential signal suffers a loss, because neither of the two levels can stabilize within the short dwell times. These signal losses depend strongly on the chopping frequency. To correct for this effect, tables per detector pixel and chopper frequency have been determined and put into calibration files. The tables contain the predicted "seen" portion of the differential signal together with its uncertainty. This correction is not necessary (and the PIA GUIs do not allow it) for chopped measurements which have been reduced using the "Ramp Pattern" procedure.
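In essence, the correction divides the subtracted signal by the tabulated "seen" fraction and propagates its uncertainty. A minimal sketch (the function name and inputs are illustrative; the real fractions come from the per-pixel calibration tables):

```python
def correct_chop_loss(signal, unc, seen_frac, seen_unc):
    """Divide a differential signal by the tabulated fraction of the
    signal actually 'seen' at this chopper frequency, propagating
    the uncertainties of both quantities."""
    corrected = signal / seen_frac
    # standard error propagation for a quotient of independent values
    rel = ((unc / signal) ** 2 + (seen_unc / seen_frac) ** 2) ** 0.5
    return corrected, abs(corrected) * rel
```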

Main PIA routine: chop_freq_corr

Actual responsivity calculation

Pure FCS staring measurements and also chopped measurements of type 'source vs FCS#' or 'FCS1 vs FCS2' can be used for determining the detector responsivity level at the time the measurement was performed (called the 'actual responsivity', as opposed to the 'default responsivity', which is the nominal value per detector pixel). This does not apply to the PHT-S array, since, for several reasons, the PHT observations corresponding to PHT-S do not include FCS measurements.

Algorithm: the SCP signals corresponding to an FCS illumination (with a given electrical power P_el) are multiplied by the capacitance corresponding to the detector used and divided by the expected in-band power from the FCS illumination. This last value is derived from the FCS power tables (e.g. PPFCSPOW.FITS for the PHT-P detectors). The FCS power tables contain, for a set of optical powers, the FCS electrical powers necessary to illuminate the detector at such a level. PIA determines the optical power level by interpolating logarithmically with the actual electrical power applied to the FCS. In the case of the PHT-P detectors the in-band powers in the FCS power tables are quoted in units of [W/mm^2], and therefore have to be normalized by multiplying with the aperture area used. For the C-arrays this is not necessary, since the unit is [W/pixel].
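The logarithmic interpolation in the FCS power table and the responsivity formula can be sketched as follows (the table values used in the test are invented for illustration; the calibrated pairs live in files such as PPFCSPOW.FITS):

```python
import math

def fcs_inband_power(p_el, table_p_el, table_p_in):
    """Interpolate logarithmically in a (sorted) FCS power table to get
    the expected in-band power for the applied electrical power p_el."""
    lx = math.log(p_el)
    xs = [math.log(p) for p in table_p_el]
    ys = [math.log(p) for p in table_p_in]
    for i in range(len(xs) - 1):
        if xs[i] <= lx <= xs[i + 1]:
            frac = (lx - xs[i]) / (xs[i + 1] - xs[i])
            return math.exp(ys[i] + frac * (ys[i + 1] - ys[i]))
    raise ValueError("electrical power outside the table range")

def actual_responsivity(signal, capacitance, p_inband):
    """R = s * C / P_inband; for PHT-P, p_inband must already be
    normalized by the aperture area (the table unit is W/mm^2)."""
    return signal * capacitance / p_inband
```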

Correction for non-flatness of the FCS illumination: the FCS illumination of the detectors is not flat. For the C-arrays a correction is applied via the so-called FCS illumination matrices, contained in the Cal G files P##FCSILL.FITS, which give the asymmetry per pixel of the array. A similar correction should be applied for the different apertures used in PHT-P observations. This has not yet been implemented, pending calibration results.

Responsivities obtained using both means and medians are computed within PIA, and the graphical I/F makes use of this, giving the user the possibility of choosing between them. The accepted responsivities are accommodated into a buffer of 'actual' responsivities, which can be used for the power calibration, as explained below.

Main PIA routine: pia_respons

Interpolating responsivities

In the case of mapping, the AOT structure foresees two FCS calibration measurements, one before and one after the raster measurement is performed. The intention is to assess the two responsivity levels, which could be rather different in the case of a long raster measurement. A crude approach to using both responsivities is linear interpolation in time: every raster point is corrected according to the time of its observation, using the linear interpolation between the two responsivities. This approach works for cases in which long-term responsivity drifts are large in comparison to local fluctuations (e.g. after "seeing" the source).

Algorithm: same as for establishing the actual responsivity, but performed for the two FCS measurements FCS1 and FCS2. The results R1 and R2 are computed together with the corresponding measurement times T1 and T2. When converting a point measured at time Ti, the following interpolation applies: Ri = R1 + (R2-R1) / (T2-T1) * (Ti-T1)
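The interpolation formula translates directly into code (a trivial sketch; at the midpoint Ti = (T1+T2)/2 it reduces to the average of R1 and R2):

```python
def interpolate_responsivity(r1, t1, r2, t2, ti):
    """Ri = R1 + (R2 - R1) / (T2 - T1) * (Ti - T1)."""
    return r1 + (r2 - r1) / (t2 - t1) * (ti - t1)
```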

(At the time of interpolating, the average of R1 and R2 is also computed as the actual responsivity. Thus performing power calibration using the actual responsivity immediately after interpolation is an alternative way of using the responsivity average.)

Main PIA routine: respons_interpol.pro (in conjunction with process_scp.pro)

Power calibration (+ Filter to filter [external flat-field] correction)

The conversion from signals per chopper plateau/raster point in V/s to in-band powers (also per chopper plateau/raster point) is done by multiplying the signal by the capacitance and dividing by the responsivity (P = s * C / R). The capacitances are fixed parameters per PHT sub-system and in principle default numbers, which are not subject to calibration. In order to maintain a default responsivity per detector, and to make it possible to calibrate data with an actual responsivity obtained from an FCS measurement taken with a different filter, a filter-to-filter correction factor is necessary for PHT-P and PHT-C data. In the case of the C detector arrays an external flat-field matrix has to be used, accounting also for the non-flat illumination of the pixels, which is filter dependent, yielding: P_i = s_i * C / (R_i * Ff_i), with i denoting the corresponding pixel and Ff_i being the filter-to-filter / flat-field factor corresponding to the pixel and filter used.
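A sketch of this conversion, with the flat-field factor defaulting to 1 for detectors without one (parameter names are illustrative; the capacitances and Ff factors come from the calibration files):

```python
def power_calibrate(signal, capacitance, responsivity, ff=1.0):
    """P_i = s_i * C / (R_i * Ff_i); Ff_i = 1 when no external
    flat-field factor applies (e.g. PHT-P)."""
    return signal * capacitance / (responsivity * ff)
```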

PIA contains four different power calibration methods: a) using the default responsivities per detector pixel and orbital position (as contained in the Cal G files P#RES.PRO for the P (#=P) and C detectors (#=C1 or #=C2) in [A/W], or deriving them from PSPECAL.FITS for PHT-S); b) using the actual responsivities as obtained from a corresponding FCS measurement; c) the absolute photometry case; and d) interpolating actual responsivities as obtained from two different FCS measurements.

The only conversion possible for PHT-S is using default responsivities, since there are no FCS measurements associated with PHT-S observations. The unit for the calibration values given in the corresponding Cal G file PSPECAL.FITS is basically [V/s / Jy], reflecting the fact that for PHT-S the conversion goes directly from signals to fluxes. In order to maintain a uniform s/w architecture among the different PHT sub-instruments, PIA performs a pseudo-conversion into optical powers in Watts. For this purpose the calibration values are divided by the variables for the conversion from optical power into fluxes (internally coded as 'C1' variables), as given in Table 13 of the PHT Observers Manual. The same values are used again for flux extraction in the further reduction and therefore cancel out, hence the term pseudo-conversion.

In the case of absolute photometry the individual signals per chopper plateau corresponding to the FCS signals are used for deriving an actual responsivity distribution. This distribution is then used point by point on the source signals.

The responsivities (per pixel) used for the conversion are quoted in the header of the measurement under the keywords 'PRC_R###' (### = pixel number). In the absolute photometry case, an average of the responsivities used per pixel is written into the header.

Main PIA routine: process_scp
 

A special case: PHT-S chopped data using the dedicated "dynamic calibration for chopped data"

This calibration method is explained in Section 3.3.9. It is based on a calibration using a spectral response function (SPRF) obtained from calibration observations taken exclusively in chopped modes, applying a first-order correction linked to the signal level.
 

Main PIA routine: calib_choppmeas_sl.pro



5.4) SPD level

On the SPD level no data corrections are applied. For processing there are two alternatives: background subtraction as on the SCP level (within the same measurement if the observation mode is 'source chopped versus background', or using another measurement as background), and flux conversion.

Background subtraction on the SPD level

The background subtraction on this level is completely equivalent to the one described on the former level ('SCP background subtraction'). There are reasons for performing the background subtraction on one or the other level. Generally, internal background subtraction (within the measurement) should be done on the SCP level, to avoid an artificial increase of the uncertainties added by the power calibration because of the responsivity uncertainty. For external background subtraction, on the other hand, the SPD level is recommended, since the responsivities to be used for source and background are generally different in this case.

Main PIA routine: subtr_bckg

Flux density / Surface brightness extraction

To maintain full flexibility, PIA does not distinguish between point and extended sources: it performs both conversions, to fluxes and to brightnesses, at the same time, leaving the user to decide how to use the results. The conversions are defined by

F_nu = P / (C1 * FPSF) for extracting the flux density in the case of P1, P2 or P3 observations or
F_nu = P / C1 for extracting the flux density per beam in the case of C100/C200 observations (*)

and

B_nu = P / (C1 * OBS * Omega) for surface brightness extraction,

'P' being the optical power as the main SPD value, 'C1' a constant per detector (as given in the Cal G file 'PFLUXCONV.FITS' for PHT-P and PHT-C, and internally coded for PHT-S), 'FPSF' the point spread function factor corresponding to the detector, filter and aperture used (extracted from the Cal G file 'PPSF.FITS'), 'OBS' an obscuration factor of the secondary mirror of the ISO telescope (OBS = 0.91) and 'Omega' the solid angle of the aperture or array projected onto the sky. It has been found that the Omega values are not simply geometrical, but depend on filter and aperture. The calibration values are contained in the Cal G files 'P#OMEGA.FITS'.
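The three conversions can be sketched as plain functions, with all calibration constants passed in as numbers (the real values come from the Cal G files quoted above):

```python
def flux_density_p(power, c1, fpsf):
    """F_nu = P / (C1 * FPSF), for P1/P2/P3 observations."""
    return power / (c1 * fpsf)

def flux_density_c(power, c1):
    """F_nu = P / C1, flux density per beam for C100/C200."""
    return power / c1

def surface_brightness(power, c1, omega, obs=0.91):
    """B_nu = P / (C1 * OBS * Omega), with OBS the telescope
    obscuration factor (0.91)."""
    return power / (c1 * obs * omega)
```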

The reason for the special treatment of PHT-S data is explained in the 'Power calibration' section. Here the same variables C1 are used as defined there.

(*) The Point Spread Function factors for C100 and C200 are defined both for the whole array and for an individual pixel. However, they are first used in the flux integration when performing the final photometry in the astrophysical applications. In C200 staring or chopped observations the detector array points centrally at the chosen object, i.e. at the intersection of all 4 pixels; in these cases the PSF factor for individual pixels is therefore of no use. In observations with the C100 detector the central pixel (#5) points centrally at the chosen object. PIA accounts for this by using the individual-pixel factor if only pixel 5 is chosen for final photometry, as explained further below.

Main PIA routine: process_spd


5.5) Astrophysical applications

On the AAP level the only data correction which can be applied is a flat-fielding correction for PHT-C100 or PHT-C200 data. The methods applicable directly to the AAP data are all those described under the paragraph 'Mapping flat fielding' further below, except the so-called "Central FF", which can be used only for mapping. Please note that once the data have been corrected by applying a flat fielding, they MUST be saved into the internal buffer (or into a file to be read in later) so that the correction is contained in the measurement buffer when it is read for map creation.

For general common processing there are two alternatives:

  • background subtraction as on the SCP level: within the same measurement if the observation mode is 'source chopped versus background', or using another measurement as background.
  • concatenation with other measurements: several measurements can be put together with all their information (e.g. sky positions), so that images can be created out of several single staring measurements (e.g. from sparse map observations), etc.


    The possibilities for further data reduction diverge according to the different observation types: multi-filter and multi-aperture photometry, polarimetry, mapping and spectro-photometry. The only data correction which can be applied on this level is the colour correction to the fluxes coming from multi-measurement observations.

    The only real data reduction implemented on this level is mapping, since the others are in principle just combinations of several measurements' fluxes and/or brightnesses (photometry and polarimetry, with the additional possibility of performing colour corrections) or line and continuum fitting (spectro-photometry).

    Mapping is possible with and without pointing information, which refers to the information on central pointing of every raster point position, as measured by ISO, and contained in the products 'IRPH######.FITS'.

    Colour Corrections - (applicable to photometry and polarimetry)

    Colour correction factors have been determined for black-body, 1/nu and 1/nu^2 spectra of different temperatures and for power-law spectra of different indices, by folding them with the PHT filter transmission spectra. Colour corrections for any temperature or power-law index required by the user are determined by interpolating within these tables.

    Multi-filter photometry

    A combination of fluxes and surface brightnesses from different detector pixels, and also from different measurements, is possible within this processing. Depending on the observation type and the processing these observations have undergone in PIA, photometric observations on the AAP level may contain one or several chopper plateaux. For instance, an absolute photometry observation contains fluxes for several chopper plateaux, while a source vs background chopped observation should have been reduced through background subtraction on the SCP level and then reduced to AAP, thus showing just one main flux on this level.

    Algorithm: fluxes and brightnesses are combined (weighted by their uncertainties) for every chopper plateau within one measurement. For the PHT-C arrays the addition of several pixels is performed and the PSF correction applied in this case by dividing by the PSF factor for the whole array (for PHT-P measurements this is already done by the flux extraction). Several measurements (also from different detectors) can be processed together in the same way, thus obtaining a flux/brightness spectral distribution. For this a normalization is applied according to the apertures used in the case of PHT-P measurements.
    C100 / C200 pixels can be selected or deselected for the photometry. For C200 observations only one pixel at a time may be deselected; its value is replaced by the average of the other 3 pixels before the PSF correction is applied. For C100 observations up to one corner pixel and one side pixel of the 3x3 detector pixels can be deselected, their values being replaced by the averages of the pixels of the same type. As a special, very important case, it is possible to deselect all but the central pixel (#5). In this case PIA will use the point spread function factor CPSF for an individual C100 pixel for converting into flux.
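    The replacement of a deselected C200 pixel described above can be sketched as follows (the function name is illustrative, not a PIA routine):

```python
def c200_replace_deselected(fluxes, deselected):
    """Replace the one deselected pixel of the 4-pixel C200 array by
    the average of the other three, before the array PSF correction."""
    out = list(fluxes)
    others = [f for i, f in enumerate(fluxes) if i != deselected]
    out[deselected] = sum(others) / 3.0
    return out
```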

    Main PIA routine: pia_multi_aap

    Multi-aperture photometry

    A combination of fluxes and surface brightnesses from measurements using different apertures and the same detector is possible within this processing. Apart from the fact that only data from one (PHT-P) detector can be used, the algorithm is very similar to the one used in multi-filter photometry.

    Algorithm: fluxes and brightnesses are combined (weighted by their uncertainties) for every chopper plateau within one measurement. Several measurements (only from one given detector and filter) can be processed together in the same way, thus obtaining flux/brightness vs. area distributions.

    Main PIA routine: pia_multi_aap

    Spectro-photometry

    With this processing a PHT-S spectrum per measurement and raster point can be analysed and searched for lines. All the fluxes/brightnesses corresponding to one raster position are passed to an interactive programme designed for line and continuum fitting, which is described under 'Multiple spectral fitting'.

    Main PIA routine: pia_specphot

    Mapping without pointing information

    Mapping is basically a transformation from time coordinates (as given in the phtaap buffer) into a system of spatial coordinates. For this all the points corresponding to a measurement within a certain spatial coordinates bin have to be combined. Since PHT has the capability of oversampling with different detector pixels, co-addition of surface brightnesses from different detector pixels also has to be performed.

    Algorithm: for every chopper plateau the theoretical position is calculated. This is derived from the raster point ID, together with the central map position and the numbers of raster lines and raster points (all information present in the measurement header), combined with the chopper position and the pixel offset. Using all the positions calculated, a map is defined according to the maximum resolution achieved by the number of raster points and chopper positions given. Brightnesses corresponding to the same position are co-added, taking into account the dwell time on every position, to obtain the brightness value.

    Pixels can be individually selected or deselected. Only the selected pixels (by default all pixels) are used for producing the map.
     

    Main PIA routine: pia_mapw

    Mapping using pointing information

    In this case the coordinates for every raster position are taken from the corresponding 'IRPH######.FITS' file instead of using the theoretical positions, while the algorithm basically works in a similar way to mapping without pointing information. The differences are mainly given by the necessary conversion between RA-DEC and map x-y coordinates, and by the correspondence between observed positions and the map grid no longer being one-to-one.

    Algorithm: for every chopper plateau the real position is calculated by combining chopper position, raster position and pixel offset. The binning of the map is user defined. The surface brightnesses which correspond to a map cell are combined, taking into account the dwell time on every position and the overlap factor. The overlap factor is given by the portion of the detector pixel at a certain position falling into the map cell. The RA-DEC positions for every map pixel are calculated using a coordinate transformation matrix, defined by a reference sky position, a reference map point, the pixel size of the map and the rotation of the map [x,y] axes relative to the sky [-RA,DEC], the so-called ISO roll angle.

    Main PIA routine: pia_mapw

    Mapping flat fielding

    Due to the observational redundancy which mapping offers (the same sky positions seen by different pixels, sometimes several times), it provides a good possibility of correcting for the residual response differences between the different C100 and C200 pixels.

    Various flat-fielding options can be chosen:

      • Central FF: individual maps are produced for every selected pixel, and the central part of all the individual maps (common to all of them) is used for deriving mean quotients per pixel relative to the first selected pixel. These factors are then used for normalizing the pixels before the general map co-addition.
        Main PIA routines: ffmap and ffmap_simple
      • Normalisation with a median filtered distribution: a baseline correction is applied to the data by dividing the original individual pixel flux distributions by their median normalised distributions (using a certain width for obtaining the median). The map is co-added using the baseline corrected (and thus implicitly flat-fielded) distributions and then re-normalised to the total flux in the original distribution.
        Main PIA routine: ff_med_smooth
      • Normalisation with partial distributions: it is possible to choose a part of the flux distribution for each pixel which should yield in total the same flux level (e.g. part of the background). The regions thus chosen are averaged per pixel, and pixel-to-pixel normalisation factors are obtained from the averages.
        Main PIA routine: ff_bck
      • Normalisation using the 1st quartile: this is a very simple method, computing the 1st quartile Q_i of each detector pixel (as a value which can be representative of the background measured) and normalising by multiplying fluxes, brightnesses and uncertainties of each pixel i with FF_i = <Q> / Q_i. It is especially useful for mini-maps, to which the other methods cannot be applied.
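    The first-quartile method reduces to a few lines; the following sketch uses a simple quartile estimate (the exact estimator and routine used by PIA may differ):

```python
def quartile_flatfield_factors(pixel_fluxes):
    """FF_i = <Q> / Q_i, with Q_i the 1st quartile of pixel i's flux
    distribution and <Q> the mean of the quartiles over all pixels."""
    def first_quartile(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 4]   # crude quartile estimate
    q = [first_quartile(v) for v in pixel_fluxes]
    q_mean = sum(q) / len(q)
    return [q_mean / qi for qi in q]
```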


    5.6) Algorithms of general use

    This subsection describes algorithms which are used in several parts of PIA.

    Weighted Mean:

    This is used by PIA as the default whenever an average of data elements with associated uncertainties is calculated. A minimum of 15 elements is required. If weighting with uncertainties is not desired, this option can be centrally disabled (see Customize in the PIA Top Window, Section 3.1).

    The squares of the inverse uncertainties are used as weights for every element: w_i = 1 / (unc_i)^2

    All elements with invalid uncertainties are given a lower weight, taking as their weight the median weight in the measurement reduced by a factor of 16.

    The mean is computed by <S>= SUM (S_i * w_i) / SUM (w_i)

    while the uncertainty is <sig<S>> = SQRT (1/(N-1) * SUM( (<S>-S_i)^2 * w_i^2) / SUM(w_i^2))

    with N the total number of elements S_i. E.g., the uncertainty of a signal per chopper plateau is the standard deviation of the mean value. In addition, the standard deviation of the signal distribution is quoted as Sigma = <sig<S>> * SQRT(N-1).
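    The weighted mean and its uncertainty, as defined above, in a minimal sketch (the handling of invalid uncertainties and the 15-element minimum are omitted here):

```python
import math

def weighted_mean(values, uncs):
    """<S> = SUM(S_i * w_i) / SUM(w_i), with w_i = 1 / unc_i^2, and
    sig = SQRT( SUM((<S> - S_i)^2 * w_i^2) / SUM(w_i^2) / (N - 1) )."""
    w = [1.0 / u ** 2 for u in uncs]
    n = len(values)
    mean = sum(s * wi for s, wi in zip(values, w)) / sum(w)
    num = sum((mean - s) ** 2 * wi ** 2 for s, wi in zip(values, w))
    sig = math.sqrt(num / sum(wi ** 2 for wi in w) / (n - 1))
    return mean, sig
```

    For equal uncertainties this reduces to the plain mean and the standard deviation of the mean.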



     
     
     Chapter history:

       Date         Author                           Description
       15/06/1996   Carlos GABRIEL (ESA VILSPA-SAI)  First Version
       10/06/1997   Carlos GABRIEL (ESA VILSPA-SAI)  Update (V6.3)
       13/10/1997   Carlos GABRIEL (ESA VILSPA-SAI)  Update (V6.5)
       16/02/1998   Carlos GABRIEL (ESA VILSPA-SAI)  Update (V7.0)
       20/08/1999   Carlos GABRIEL (ESA VILSPA-SAI)  Update (V8.0)