Modern metallurgical processes often rely on strict control to achieve the desired end result. An example is the separation of the platinum group metals, where close control of the refining processes is necessary to ensure clean separation of the products and acceptable purity of the final product.
Frequently, the success of these processes depends on the results of laboratory testing and chemical analysis, and quick turnaround of these test results helps keep the processes under control.
However, in many cases the notification that materials are ready for testing, as well as the transfer of the test results themselves, is a manual process. This leaves the operation open to delays in the transfer of information and to errors in transcription.
Computerised systems
In modern operations it is likely that both sides of the equation will make use of computerised systems to ensure control and adequate monitoring of their individual processes. It therefore makes sense to use an electronic means of data transfer between the systems. The advantages of such an approach are manifold:
* The possibility of transcription mistakes is much reduced. Such errors include, for example, mistyping a sample number, leading to action being taken on the wrong material, or, possibly even worse, entering a critical test result incorrectly.
* Manual transfer of data consumes valuable time on both sides, and such mundane tasks are hardly job-enriching.
* Lastly, delays will inevitably occur; in processes treating valuable materials such as the platinum metals, this can increase the cost of process inventory.
Figure 1 illustrates a possible integration chain between a process monitoring/control system and a laboratory information management system (LIMS).
When a process transaction is recorded in the process system database, an application programming interface (API) captures the details, either via a trigger activated by the process system or by scanning the database at intervals for new transactions.
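The polling approach can be sketched as follows; this is purely illustrative, with the table name, column names and use of SQLite standing in for whatever the actual process database provides.

```python
# Illustrative polling sketch: scan a hypothetical process_transactions table
# for rows not yet forwarded to the LIMS side. Table and column names, and the
# use of sqlite3, are assumptions standing in for the real process database.
import sqlite3
import time

def poll_new_transactions(conn: sqlite3.Connection, handle) -> None:
    """Pass each unforwarded transaction to `handle` and mark it as forwarded."""
    rows = conn.execute(
        "SELECT id, material_code, batch_no, sample_no "
        "FROM process_transactions WHERE forwarded = 0"
    ).fetchall()
    for row in rows:
        handle(row)  # e.g. send to the intermediate server
        conn.execute(
            "UPDATE process_transactions SET forwarded = 1 WHERE id = ?", (row[0],)
        )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("process.db")   # hypothetical database file
    while True:
        poll_new_transactions(conn, print)
        time.sleep(60)                     # scan interval in seconds
```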
For convenience, the APIs used for the whole interfacing package may be located on a separate server, or they can equally well reside on the process server. In the implementation illustrated, the API passes the details of the transaction (material, batch/lot/sample number) to an intermediate server, from where an API residing on the LIMS server reads the details of the transaction and registers them in its own system. At the same time a barcode printer automatically prints a barcoded label with the same details to accompany the sample to the laboratory for testing. In the laboratory the barcoded label is scanned into the LIMS and the intra-laboratory process is initiated.
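A minimal sketch of this hand-off might look like the following, assuming a simple file-based exchange area on the intermediate server; the exchange directory, field names and registration step are illustrative only, not the actual interface.

```python
# Illustrative sketch of the hand-off via the intermediate server: the process-side
# API drops a small JSON message in a shared exchange area, and the LIMS-side API
# later reads it, registers the sample and prints the barcoded label. The exchange
# directory, field names and print step are assumptions for illustration.
import json
import uuid
from pathlib import Path

EXCHANGE_DIR = Path("/data/exchange/to_lims")   # hypothetical shared location

def send_to_intermediate(material_code: str, batch_no: str, sample_no: str) -> Path:
    """Process-side API: write one registration request for the LIMS side to pick up."""
    message = {
        "material_code": material_code,
        "batch_no": batch_no,
        "sample_no": sample_no,
    }
    path = EXCHANGE_DIR / f"{uuid.uuid4()}.json"
    path.write_text(json.dumps(message))
    return path

def register_in_lims(path: Path) -> None:
    """LIMS-side API: read the message, register the sample, print the label."""
    message = json.loads(path.read_text())
    # The actual LIMS registration and barcode-printer calls are system-specific
    # and omitted here; this just shows the information carried across.
    print(f"Register sample {message['sample_no']} "
          f"(batch {message['batch_no']}, material {message['material_code']})")
```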
Automatic notification
When test results are available and have been approved by the appropriate laboratory staff, an API, again resident on the LIMS server, passes the results to the intermediate server. From here they are picked up by a process API resident on the API server and passed to the process server. The API can be programmed so that when critical test results become available, or when the complete test regimen has been reported for a particular material, a notification that the batch can be moved on is generated automatically.
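The notification logic can be sketched along the following lines; the regimen contents, material code and choice of critical tests are hypothetical examples, not those of the actual implementation.

```python
# Sketch of the return-path notification logic: a notification is raised when a
# critical result arrives, or when every test in the regimen for a material has
# been reported. Regimen contents, material codes and the set of critical tests
# are hypothetical examples.
from typing import Dict, List

REGIMENS: Dict[str, List[str]] = {
    "PGM-CONC": ["Pt", "Pd", "Rh", "moisture"],   # assumed test regimen
}
CRITICAL_TESTS = {"Pt"}                           # assumed critical test(s)

def should_notify(material_code: str, reported: Dict[str, float]) -> bool:
    """True if the batch can be flagged to move on."""
    if CRITICAL_TESTS & reported.keys():          # a critical result has arrived
        return True
    regimen = REGIMENS.get(material_code, [])
    return bool(regimen) and all(test in reported for test in regimen)

print(should_notify("PGM-CONC", {"Pt": 42.7}))            # True: critical result reported
print(should_notify("PGM-CONC", {"Pd": 1.3, "Rh": 0.4}))  # False: regimen not yet complete
```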
In the implementation illustrated, an intermediate data store was available and was used as a buffer between the two sets of APIs on the process and LIMS sides. While not essential, this enabled the two sets of APIs to be developed and tested independently of each other, and it also insulated each system from programming faults on the other. Even so, it is critical that the two teams of programmers develop a good working relationship and work together from the very first stages of the implementation.
As an example, in the implementation on which this article is largely based, the process and LIMS systems used different material codes, and it was important that these differences were recognised and allowed for. In this case, database tables were created in the process system to convert between the two sets of codes. This approach also makes it easy to add new materials in the future.
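A conversion lookup of this kind might look like the sketch below, assuming a simple two-column mapping table in the process database; the table and column names are illustrative. Adding a new material then amounts to inserting another row in the mapping table, with no change to the API code.

```python
# Sketch of the material-code conversion, assuming a simple two-column mapping
# table maintained in the process database. Table and column names are illustrative.
import sqlite3

def to_lims_code(conn: sqlite3.Connection, process_code: str) -> str:
    """Translate a process-system material code into its LIMS equivalent."""
    row = conn.execute(
        "SELECT lims_code FROM material_code_map WHERE process_code = ?",
        (process_code,),
    ).fetchone()
    if row is None:
        raise KeyError(f"No LIMS code mapped for process code {process_code!r}")
    return row[0]
```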
Such integrations will only work successfully if there is buy-in from both sides. If either side refuses to accept the efficiency of electronic data transfer and insists on processing data by old-fashioned manual methods, the potential benefits will be severely jeopardised, if not eliminated completely.
Insistence on registering samples manually in the LIMS, for example, will inevitably lead to mistakes in batch identification entries. The API that returns results to the process side will then be unable to identify the material batch to which the results refer, leading in turn to delays in the processing of materials.
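One way of containing such mismatches, sketched below under assumed names, is for the returning API to park results with unrecognised batch identifiers for manual review rather than dropping them or posting them against the wrong batch.

```python
# Sketch of defensive handling on the return path, under assumed names: results
# whose batch identifier cannot be matched are parked for manual review rather
# than being silently dropped or posted against the wrong batch.
def post_result(known_batches: set, result: dict, review_queue: list) -> bool:
    """Return True if the result was posted, False if parked for review."""
    if result["batch_no"] not in known_batches:
        review_queue.append(result)   # unrecognised (e.g. mistyped) batch ID
        return False
    # process_system.record_result(result) would be called here in a real implementation
    return True
```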
Practical experience has shown that data transfer between two disparate systems via custom-developed APIs delivers the expected benefits to users of both systems.
For more information contact Tim Wickins, ApplyIT, +27 (0)12 349 2822, [email protected], www.applyit.com