Impending IDMP deadline means it is time to act

Published: 3-Nov-2015

Data interoperability is key to improving the efficiency of drug development and authorisation. Katrin Spaepen, Director of Strategy, Vault Submissions, Veeva Systems, looks at impending requirements of the Identification of Medicinal Products standards


The European Medicines Agency (EMA) has instituted regulatory requirements for Identification of Medicinal Products (IDMP), a set of standards defined by the International Organisation for Standardisation (ISO). Despite broad expectations that the deadline will be extended, compliance is currently required by 1 July 2016. IDMP requires life sciences companies selling products in Europe to assemble data sourced from multiple systems and produce a single output file for submission.


The emergence of IDMP and other cross-disciplinary standards is driving interoperability, while encouraging life sciences companies to find ways to co-ordinate across various systems, departments and companies – including external service providers.

The initiative has enjoyed cross-border support, with Europe officially adopting the standard, the US and Canada voicing support, and other regions expected to follow suit. This standard, like many others, will ultimately benefit life sciences companies by compelling them to align business processes and their supporting systems in ways that lead to dramatic efficiency improvements. Historically, efficiency improvements were not enough to drive widespread change, but with new regulations, there’s no more waiting it out.

Systems need to intercommunicate

The purpose of IDMP is to improve the accuracy of safety reporting by standardising how drug products are identified globally. As with other big changes, significant preparations are required. One of the main challenges in complying with the new IDMP requirements lies in the fragmented landscape of technology systems that house a company’s data. Information is generated and managed in departmental systems within safety, regulatory, manufacturing, and many other departments. To aggregate the necessary information, these systems need to ‘talk to each other’.

Aggregating the data requires more than just technical integrations. For example, data points must be traced to their origins and subsequent points of usage. Vocabularies and definitions must be mapped to the IDMP standard.

Although it is a challenge, herein lies the opportunity: the process of establishing a consistent nomenclature and authoritative source for each data point will compel significant process improvements across a number of functional areas, including regulatory, manufacturing, supply chain management and marketing.

With standard product nomenclature and a mechanism to unify related data about registrations, regulatory groups can manage product and label changes more effectively. Manufacturing teams can better assess and manage the impact of product changes. Product marketing teams can dramatically reduce the manual churn associated with gathering data, including local trade names, indications and dosage strengths for promotional materials.

To comply with IDMP, companies will need to focus on interoperability – which is easier said than done in today’s disjointed technology landscape. Information is locked in disparate systems, each supporting a different functional group, with additional systems at work to manage data for local affiliates. Even more challenging, data lives in unstructured file sources, such as investigator brochures, summary of product characteristics (SmPCs), 3.2.S/3.2.P sections of dossiers, and protocols. To aggregate data from these disconnected domains, life sciences companies need to develop consistent interfaces that use the same vocabulary and data definitions.
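As a minimal sketch of what a consistent interface might look like, the structure below defines one shared record format that every source system maps into before aggregation. All field names, system names, and values here are hypothetical illustrations, not IDMP-mandated structures.

```python
from dataclasses import dataclass

# Hypothetical shared data definition: every source system translates
# its local field names and vocabularies into this one structure.
@dataclass
class ProductRecord:
    product_name: str      # invented/common name of the product
    dose_form: str         # controlled term, e.g. 'Tablet'
    strength: str          # e.g. '50 mg'
    route: str             # controlled term, e.g. 'Oral use'
    source_system: str     # provenance, so each value can be traced

def from_safety_system(raw: dict) -> ProductRecord:
    # Each source gets its own adapter; the adapter, not the
    # downstream consumers, absorbs the local naming differences.
    return ProductRecord(
        product_name=raw["drug_name"],
        dose_form=raw["form"].capitalize(),
        strength=raw["dose"],
        route=raw["admin_route"],
        source_system="safety",
    )

record = from_safety_system(
    {"drug_name": "Examplamab", "form": "tablet",
     "dose": "50 mg", "admin_route": "Oral use"}
)
print(record.dose_form)  # Tablet
```

The design choice is that each system owns a small adapter, so vocabulary differences are resolved once at the boundary rather than repeatedly by every consumer.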

Start with the source applications

To manage the changes effectively, companies need a mutual understanding between IT and the business departments. This extends beyond IDMP compliance and involves the product data management capabilities of the company. All parties need a common understanding of the data flows, including ownership of roles and responsibilities. Once there is agreement, the technology side can be put in place.


Since data required for IDMP is housed in a variety of source applications and unstructured content files, companies must first determine where the ‘golden’ records reside. This assessment requires companies to identify the source applications and content that will generate the data, while accommodating geo-specific differences. As data will be sourced from multiple systems around the globe, there are likely to be regional instances housing country-specific data for things such as labelling or contract manufacturers.
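One simple way to record the outcome of that assessment is a registry that names the authoritative source per data element, with regional overrides for country-specific systems. The element names and system names below are hypothetical stand-ins.

```python
# Hypothetical registry of 'golden' sources per IDMP data element.
# Regional overrides capture country-specific systems, e.g. a local
# labelling repository or a regional supply database.
GOLDEN_SOURCES = {
    "invented_name":   {"default": "RIM"},
    "manufacturer":    {"default": "ERP", "JP": "local_supply_db"},
    "label_text":      {"default": "labelling_repo", "DE": "de_labelling_repo"},
}

def golden_source(element: str, region: str = "default") -> str:
    # Fall back to the global default when no regional override exists.
    sources = GOLDEN_SOURCES[element]
    return sources.get(region, sources["default"])

print(golden_source("manufacturer", "JP"))  # local_supply_db
print(golden_source("manufacturer", "US"))  # ERP
```

Making the registry explicit gives IT and the business a shared artefact to agree on ownership, rather than leaving the 'golden' source implicit in integration code.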

With the data sources identified, the next step is to determine the best path for aggregation. This is likely to involve an extract, transform, and load strategy, in which programmatic integrations extract the identified source data from each transactional system. There are other extraction methods, but point-in-time extractions introduce risks around data discrepancies as the systems get out of sync.
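The extract, transform, and load flow described above can be sketched as follows. The two in-memory 'systems', their field names, and the sample values are all hypothetical; in practice the extract step would be an API call or database query against each transactional system.

```python
# Minimal extract-transform-load sketch with hypothetical sources.
safety_db = [{"drug": "Examplamab", "adr_count": 3}]
reg_db = [{"product_name": "Examplamab", "ma_number": "EU/1/16/0001"}]

def extract(source):
    # Stand-in for a programmatic integration against a live system.
    return list(source)

def transform(rows, field_map):
    # Rename each system's local fields into the common vocabulary.
    return [{field_map.get(k, k): v for k, v in row.items()} for row in rows]

def load(target, rows):
    target.extend(rows)

idmp_staging = []
load(idmp_staging, transform(extract(safety_db), {"drug": "invented_name"}))
load(idmp_staging, transform(extract(reg_db), {"product_name": "invented_name"}))

# Both sources now describe the product under one field name.
print(idmp_staging[0]["invented_name"])  # Examplamab
```

Running the extraction programmatically, rather than as a one-off point-in-time export, is what keeps the staging area from drifting out of sync with the source systems.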

Once the source data is identified, quality issues must be addressed. To cleanse the data, companies need to check for errors and inconsistencies using rules and logic that govern terminology mapping and data matching. Robust data governance processes are needed to monitor changes to data, as authoritative data can be generated in many different ways (e.g. the concatenation of data elements from multiple systems).
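A minimal sketch of such cleansing rules, with an invented term table: local spellings are mapped onto a standard term, unmapped values are rejected for review, and an authoritative value is derived by concatenating elements that originate in different systems.

```python
# Hypothetical terminology-mapping rule for dose forms.
TERM_MAP = {
    "tab": "Tablet",
    "tablet": "Tablet",
    "oral soln": "Oral solution",
}

def cleanse_dose_form(value: str) -> str:
    key = value.strip().lower()
    if key not in TERM_MAP:
        # Unmapped terms are surfaced to data stewards, not guessed at.
        raise ValueError(f"Unmapped dose form: {value!r}")
    return TERM_MAP[key]

def full_name(invented_name: str, strength: str, dose_form: str) -> str:
    # Authoritative value derived by concatenating elements that
    # live in separate source systems.
    return f"{invented_name} {strength} {dose_form}"

print(cleanse_dose_form("  Tab "))                      # Tablet
print(full_name("Examplamab", "50 mg", "Tablet"))       # Examplamab 50 mg Tablet
```

Failing loudly on unmapped terms, rather than passing them through, is what gives the governance process something to monitor.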

The clean data can then be brought into the IDMP system of record. Regulatory information management (RIM) systems managing product and registration information already house a substantive portion of the data required by IDMP. In many ways they are the ideal gateway to deliver IDMP data to health authorities and pharmacovigilance teams that generate individual case safety reports (ICSRs).

Data and semantics

For systems to interoperate, the data housed in the systems also needs to interoperate. This semantic interoperability is key to a successful implementation of IDMP. Each functional group (and system) needs a shared understanding of the data fields and the values they capture. For example, there can be only one way to describe the therapeutic area of a product.
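A simple consistency check makes that rule concrete: if two systems carry different values for the same product's therapeutic area, the conflict is flagged for harmonisation. The records and system names below are hypothetical.

```python
# Hypothetical records for one product held in two source systems.
records = [
    {"product": "Examplamab", "system": "safety",
     "therapeutic_area": "Oncology"},
    {"product": "Examplamab", "system": "regulatory",
     "therapeutic_area": "oncology"},
]

def conflicts(records, field):
    # Group values by product; any product with more than one
    # distinct value violates the 'only one way' rule.
    by_product = {}
    for r in records:
        by_product.setdefault(r["product"], set()).add(r[field])
    return {p: vals for p, vals in by_product.items() if len(vals) > 1}

# Flags 'Examplamab' because two spellings of the same area exist.
print(conflicts(records, "therapeutic_area"))
```

Even a case difference like this one counts as a conflict until the systems agree on a single canonical value.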


In addition, organisations need to embrace coded values and controlled vocabularies. Such standardised values make information transfer more automated and less reliant on cleansing. The IDMP modellers recognised this and responded with the ISO 11239 specification, which provides controlled terms for a variety of IDMP data, including pharmaceutical dose forms, units of presentation, routes of administration and packaging.
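In the spirit of those controlled vocabularies, a coded value pairs a stable code with a display term, so systems exchange the code rather than free text. The codes and terms below are illustrative, not the actual ISO 11239 identifiers.

```python
# Illustrative coded-value table for pharmaceutical dose forms
# (codes invented for this sketch, not real controlled-term codes).
DOSE_FORMS = {
    "10000001": "Tablet",
    "10000002": "Film-coated tablet",
    "10000003": "Solution for injection",
}

def display_term(code: str) -> str:
    # Systems pass codes between them; the term is looked up only
    # when a human-readable value is needed.
    return DOSE_FORMS[code]

print(display_term("10000001"))  # Tablet
```

Because the code is the value of record, two systems can agree on a dose form without ever comparing (or cleansing) free-text strings.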

Once compliant, the data quality improvements and visibility will produce countless efficiency improvements for regulatory, pharmacovigilance, manufacturing, and supply chain management teams. The controlled vocabularies in IDMP will also help companies build interoperable systems.

In turn, interoperability will automate the exchange and cleansing of data, thereby providing the entire organisation with an authoritative source for product information worldwide. Although implementing the necessary process and system changes will not be easy, companies will be closer to their goal of truly global operations.
