“The only thing constant in life is change.” – François de La Rochefoucauld (1613–1680)
Traditionally, upgrading or replacing an interface engine follows the “Test, Switch, Pray” methodology: the new interface engine is run through a battery of tests that attempt to cover every use case it may encounter in production. In scope, this is equivalent to the testing a new interface receives after development. Once testing is complete, the interface engine is switched over and the production department prays that there are no problems.
While the thoroughness of this approach may ease the fears of production staff, it rarely delivers on its intended goal of eliminating the inherent risk of upgrading an interface engine. By spending significant time testing interfaces without the payoff of a flawless transition, organizations are often left with blown budgets and deflated expectations.
To help organizations better prepare for and minimize the risk of performing these updates, iNTERFACEWARE, with nearly two decades of integration experience, designed a more efficient and accurate migration strategy. A basic tenet of iNTERFACEWARE’s philosophy is to begin with real data when developing and testing interfaces. As a result, testing is not only faster but also more secure when upgrading or replacing an interface.
Nothing is riskier to upgrade or replace than an interface engine within an integrated multi-corporate environment. Not only does the interface engine play a key role in the production infrastructure, but it also connects two different systems, significantly increasing the impact and visibility of an unsuccessful migration. Minimizing the risk and associated cost is therefore of paramount concern to virtually every stakeholder.
Rather than attempting to recreate the production environment by creating a number of simulated test messages, iNTERFACEWARE advocates testing new interfaces with the best test messages available: messages from the real world.
By adhering to the following approach, organizations can greatly increase the testing accuracy of new or updated interfaces by using the same data in both their test and production environments. Additionally, this process, which uses a lightweight modern interface engine to create a concurrent flow of data and then reconcile the original and new results for analysis, allows a much more rapid testing cycle by focusing only on items where differences exist. As a result, testing and implementation times are reduced by up to 85% while all but eliminating the ‘pray’ phase of the aforementioned methodology.
At its simplest, an interface engine acts as a bridge between two systems. The source system generates the message and submits it to the interface system. The interface system translates, transforms, and transmits the modified message to the target system.
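The bridge role described above can be sketched as a three-stage pipeline. The function names (`translate`, `transform`, `transmit`) and the placeholder transformation rule below are illustrative only, not any engine’s actual API:

```python
# Minimal sketch of an interface engine as a bridge between two systems.
# All names and the sample rule are illustrative assumptions.

def translate(raw: str) -> dict:
    """Parse the source message into a neutral structure (e.g. HL7 segments)."""
    return {"segments": raw.strip().split("\r")}

def transform(msg: dict) -> dict:
    """Apply site-specific mapping rules before delivery (placeholder rule)."""
    msg["segments"] = [s.upper() for s in msg["segments"]]
    return msg

def transmit(msg: dict) -> str:
    """Serialize the modified message for the target system."""
    return "\r".join(msg["segments"])

def bridge(raw: str) -> str:
    """Source message in, translated/transformed/transmitted message out."""
    return transmit(transform(translate(raw)))
```

A real engine would, of course, parse and map far richer structures; the point is only that the engine sits between source and target and owns each of these three steps.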
In order to gather the messages, a new interface, created using our lightweight interface engine, is inserted between the source system and the old (production) interface:
It is important to use a lightweight but full-featured interface engine in order to guarantee the stability and security of the entire integration process.
Note: For our example, we will be using the Iguana Integration Engine to create our new interfaces. Each Iguana logo used throughout our illustrations represents a single channel (interface) managed from one central instance of Iguana.
It is also important not to modify the old interface, as that would require an entirely new series of tests. The new interface acquires the message from the source system and passes it on to the old interface without modifying it. It also writes the message as a single unformatted data string (along with the date/time stamp and a status flag) to an incoming message repository. (The incoming message repository will normally be a relational database.)
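A minimal sketch of such an incoming message repository follows, using SQLite for illustration; a real deployment would use the site’s own relational database, and the table and column names here are assumptions, not a prescribed schema:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical incoming message repository: each message is stored as a
# single unformatted string with a date/time stamp and a status flag.

def init_repo(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS incoming_messages (
            id       INTEGER PRIMARY KEY AUTOINCREMENT,
            received TEXT NOT NULL,   -- date/time stamp
            status   TEXT NOT NULL,   -- e.g. 'N' normal, 'S' submit
            message  TEXT NOT NULL    -- unformatted message string
        )""")

def store_incoming(conn: sqlite3.Connection, raw_message: str) -> int:
    """Record the pass-through message; the original is forwarded unmodified."""
    cur = conn.execute(
        "INSERT INTO incoming_messages (received, status, message) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), "N", raw_message))
    conn.commit()
    return cur.lastrowid
```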
A second new interface polls the database for messages with a specific status (e.g. ‘S’ for submit), changes the status back to normal, and submits the original message to the new (test) interface. By driving the submittal from a status field, simple SQL UPDATE statements can be used on the incoming message repository to select (or reselect) any number of messages for submittal and testing.
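The flag-then-poll cycle just described might be sketched as below. The table layout follows the incoming repository description above, and the `submit` callback standing in for delivery to the test interface is an assumption for illustration:

```python
# Hedged sketch: select messages for testing via a status flag, then
# poll, reset the flag, and hand each original message to the test side.

def flag_for_submittal(conn, message_ids) -> None:
    """A simple SQL UPDATE selects (or reselects) messages for testing."""
    conn.executemany(
        "UPDATE incoming_messages SET status = 'S' WHERE id = ?",
        [(mid,) for mid in message_ids])
    conn.commit()

def poll_and_submit(conn, submit) -> int:
    """Pick up flagged messages, restore their status, and submit each one."""
    rows = conn.execute(
        "SELECT id, message FROM incoming_messages WHERE status = 'S'").fetchall()
    for mid, message in rows:
        conn.execute(
            "UPDATE incoming_messages SET status = 'N' WHERE id = ?", (mid,))
        submit(message)  # deliver the unmodified original to the new interface
    conn.commit()
    return len(rows)
```

Because selection is just an UPDATE on the status column, any subset of stored production messages can be rerun at will without touching the source system.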
The new (test) interface then processes these messages exactly as its production counterpart would.
The next step is to insert a similar function in between the old interface and the target system. This component will act in the same manner as the lightweight interface between the source system and the old interface. All it needs to do is pass the message on to the target system after placing a copy of it in an outgoing message repository.
A fourth and final lightweight interface receives the outgoing test message from the new interface, retrieves the matching message sent from the old interface from the outgoing message repository, and compares the two. Both messages and the comparison results are placed into a results repository for further analysis.
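The comparison step can be sketched as a simple function. Recording a unified diff for non-matching pairs is one reasonable design choice (an assumption here, not a mandated format), since reviewers only need to inspect the pairs that differ:

```python
import difflib

# Hedged sketch of the comparison step: given the old interface's output
# and the new interface's output for the same source message, record
# whether they match and, if not, a human-readable diff for analysis.

def compare_outgoing(old_msg: str, new_msg: str) -> dict:
    if old_msg == new_msg:
        return {"match": True, "diff": ""}
    diff = "\n".join(difflib.unified_diff(
        old_msg.splitlines(), new_msg.splitlines(),
        fromfile="old_interface", tofile="new_interface", lineterm=""))
    return {"match": False, "diff": diff}
```

The `match` flag lets identical pairs be filtered out wholesale, so the testing cycle focuses only on the differences that must be tracked and explained.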
The message results repository holds the results generated by matching the outgoing messages of the old and new systems. These results can be used to ignore messages that remain the same and ensure that any message pairs that differ can be tracked and explained based on the functional requirements of the update/upgrade.
This test structure offers a number of benefits not normally available when testing an interface:
- Test data is readily available
- Test data accurately reflects production data even if the messages have changed over time
- Test messages can be modified, rerun, or deleted without affecting the production system
- The production data flow itself is never interrupted
There are some site-specific requirements to consider when this approach is used:
- “Matching Criteria” needs to be customized so that each incoming message can be properly associated with its outgoing message. This will normally require extracting parts of the source message or using a serial number.
- If the message includes Personal Health Information or other Confidential Information, encryption, de-identification or other security measures may need to be taken.
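As an illustration of the matching-criteria point above, and assuming HL7-style messages, the MSH-10 message control ID is one natural matching key; the segment and field positions below are assumptions about the local message format, and a serial number remains the fallback when no such identifier exists:

```python
# Illustrative matching key: extract the MSH-10 message control ID from an
# HL7-style message. Field positions assume '|' delimiters and '\r' segment
# separators; adapt (or fall back to a serial number) for other formats.

def matching_key(raw_message: str) -> str:
    for segment in raw_message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "MSH" and len(fields) > 9:
            return fields[9]  # MSH-10 (MSH-1 is the '|' separator itself)
    raise ValueError("no usable MSH segment; fall back to a serial number")
```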
By employing a testing methodology that combines real data with the direct comparison of live and test results, a highly accurate and responsive test environment can be deployed without undue expense or risk. This lowers the cost, risk, and time required to deploy an upgraded or replacement interface system.