Companies are still failing to meet the expectations of regulatory authorities

Published: 9-Oct-2017

The latest white paper from NSF, written by Pete Gough, outlines out of specification (OOS) and out of trend (OOT) results and suggests how companies can remain compliant

The issue of OOS results first came to prominence with the Barr case nearly 20 years ago. Although Judge Wolin gave his landmark ruling back in February 1993, companies are still failing to meet the expectations of regulatory authorities in this vital compliance area.

In the first half of 2011 the US FDA issued no fewer than five warning letters to companies for failing to adequately investigate and follow up OOS results as part of the batch release process. The companies, based in Sweden, Israel, Spain, Mexico and Germany, are all global organisations and include generics manufacturers.

Since the Barr case the US FDA has led the way in defining standards for the investigation of OOS results, culminating in the publication of the final Guidance for Industry on this subject in October 2006.

This American guidance has become the generally accepted global standard but in 2010 the UK MHRA published its own guidance as not all pharmaceutical quality control laboratories were following the accepted practice when OOS results occurred. Although it is less detailed, in general the MHRA guideline is compatible with that of the FDA and it improves upon it in some areas.

Both the US and the UK guidance make it clear that the investigation process to be followed should be the same for analytical results that are OOS, OOT or indeed for any result that is outside the usual pattern of results (often referred to as atypical results). In order to be able to identify OOT and atypical results it is essential that laboratory results are continuously trended in some way.

For release test results this is normally accomplished by plotting them on a control chart and for stability programme results by plotting the regression line.
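The trending described above can be sketched in code. The following is a minimal illustration, not a validated method: release results are checked against 3-sigma control limits derived from historical data, and stability results are fitted with a least-squares regression line. All figures are hypothetical.

```python
from statistics import mean, stdev

def control_limits(history):
    """Return (lower, upper) 3-sigma control limits from historical results."""
    m, s = mean(history), stdev(history)
    return m - 3 * s, m + 3 * s

def is_oot(result, history):
    """Flag a result as out of trend if it falls outside the control limits."""
    lo, hi = control_limits(history)
    return not (lo <= result <= hi)

def stability_slope(months, assays):
    """Least-squares slope (% assay per month) for stability programme data."""
    mx, my = mean(months), mean(assays)
    num = sum((x - mx) * (y - my) for x, y in zip(months, assays))
    den = sum((x - mx) ** 2 for x in months)
    return num / den

# Hypothetical release history clustered around 99.85% of label claim
history = [99.5, 100.1, 99.8, 100.0, 99.7, 99.9, 100.2, 99.6]
print(is_oot(99.9, history))   # within trend
print(is_oot(97.0, history))   # outside the 3-sigma control limits

# Hypothetical stability data: assay declining slowly over 24 months
months = [0, 3, 6, 9, 12, 18, 24]
assays = [100.0, 99.8, 99.7, 99.5, 99.3, 99.0, 98.6]
print(round(stability_slope(months, assays), 3))  # negative slope: downward trend
```

In practice such charts and regressions would be produced by a validated statistical package; the point is simply that both release and stability results need a quantitative trend against which an atypical result can be recognised.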

The investigation process flow is similar in the US and UK guides; an initial laboratory investigation which, if inconclusive, is followed by an investigation in production and possible additional laboratory testing.

In order that laboratories can perform a meaningful investigation following an OOS or OOT result, it is essential that all apparatus and instruments are preserved after finishing the analysis until after the results have been checked against both the applicable specification and the normal pattern of results.

If an OOS or OOT result is identified, then this must be immediately reported to a supervisor and the initial laboratory investigation started. It is considered appropriate to re-measure previously prepared solutions, provided this is done to support a written hypothesis as to the cause of the suspect result.

If the laboratory investigation identifies an error that justifies invalidating the original result, then this should be documented and the original analysis repeated exactly as per the method; i.e. with no additional replication.

If, on the other hand, the laboratory investigation is inconclusive then the investigation must proceed outside the laboratory. This production investigation should seek to identify any errors or deviations within the manufacture or packaging of the lot that could cause the suspect result.

Obviously, if such a production error is identified, the disposition of the batch should be determined on the basis of the original laboratory result.

If the investigation is still inconclusive after the production investigation then, and only then, can further testing of the sample originally submitted to the laboratory be considered. This is defined as re-testing.

One issue that caused much debate for many years after the Barr judgement was just how many re-tests should be performed. Judge Wolin proposed that seven would be a reasonable number but gave no justification for this.

Many statisticians have looked at this issue in the intervening 18 years and the generally accepted view is that the minimum number of re-tests is five if one is to be able to have any degree of confidence when comparing the re-test results to the original results.

It is also generally accepted that the law of diminishing returns applies once the number of re-tests exceeds about nine or ten. So it turns out that Judge Wolin was about right.

Today, a common practice is to have the original analyst and a more experienced one each run three re-tests, to give a total of six results to compare with the original figure.
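As a hedged illustration of what such a comparison might involve, the sketch below measures how many sample standard deviations the original suspect figure lies from the mean of the six re-test results. Neither the FDA nor the MHRA guidance prescribes this exact statistic, and the data are invented; it is shown only to make the comparison concrete.

```python
from statistics import mean, stdev

def distance_in_sds(original, retests):
    """Distance of the original result from the re-test mean,
    expressed in sample standard deviations of the re-tests."""
    return abs(original - mean(retests)) / stdev(retests)

# Hypothetical data: three re-tests each from two analysts
retests = [99.6, 99.9, 99.7, 99.8, 100.0, 99.7]
original = 94.2  # the suspect OOS result

print(round(distance_in_sds(original, retests), 1))
```

A result many standard deviations away from a tight cluster of re-tests supports, but does not by itself prove, the hypothesis that the original figure was aberrant; the documented investigation remains the basis for any batch disposition decision.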

The full version of the white paper is available to download from NSF.
