Tuesday 29 September 2015

AUTOVERIFICATION IN CLINICAL LABORATORY



The automatic release of results from clinical instruments via algorithms running in a laboratory information system (LIS) may improve efficiency, reduce overall turnaround time, and be accomplished within current regulatory frameworks. Autoverification is a process whereby clinical laboratory results are released without manual human intervention, using predefined computer rules to govern the release of results. Autoverification rules may include decisions based on instrument error flags (e.g. short sample, possible bubbles, or clot), interference indices (e.g. hemolysis, icterus, and lipemia), reference ranges, the analytical measurement range (AMR), critical values, and delta checks (comparison of the current value to previous values, if available, from the same patient). Rules may also define physiologically improbable ("absurd") values for some analytes, and may additionally control automated dilutions and the conditions for repeat analysis of specimens. More sophisticated applications of autoverification rules can generate customized interpretive text based on patterns of laboratory values. Autoverification is commonly performed in the LIS and/or in middleware software that resides between the laboratory instruments and the LIS.

Autoverification can greatly reduce manual review time and effort by laboratory staff, limiting the screen fatigue caused by reviewing and verifying hundreds to thousands of results per shift. Ideally, autoverification allows laboratory staff to focus manual review on the small portion of specimens and test results that are potentially problematic. However, improperly designed autoverification can lead to the release of results that should have been held, potentially harming patient management. The Clinical and Laboratory Standards Institute (CLSI) has published a guideline on autoverification of clinical laboratory test results, CLSI AUTO 10-A (Volume 26, Number 4), which focuses on the process for validating and implementing autoverification protocols.
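As a concrete illustration, here is a minimal sketch of how rules of these kinds might be expressed in code. The `Result` structure, field names, and thresholds are hypothetical, invented for the example; in practice such rules are configured in the LIS or middleware rather than hand-coded.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical result record; all field names are illustrative only.
@dataclass
class Result:
    analyte: str
    value: float
    instrument_flags: set                    # e.g. {"SHORT_SAMPLE", "CLOT"}
    hemolysis_index: float                   # interference index
    previous_value: Optional[float] = None   # prior value for delta checking

# Illustrative limits for a single analyte (e.g. potassium, mmol/L).
REFERENCE_RANGE = (3.5, 5.1)   # outside this range: manual review
AMR = (1.0, 10.0)              # analytical measurement range
CRITICAL = (2.8, 6.2)          # critical values are always held
DELTA_LIMIT = 1.0              # maximum allowed change from prior result
HEMOLYSIS_LIMIT = 50           # interference index cutoff

def autoverify(r: Result) -> bool:
    """Return True only if every predefined rule passes (release);
    any failure holds the result for manual review."""
    if r.instrument_flags:                                  # instrument error flag
        return False
    if r.hemolysis_index > HEMOLYSIS_LIMIT:                 # interference
        return False
    if not (AMR[0] <= r.value <= AMR[1]):                   # outside AMR
        return False
    if r.value <= CRITICAL[0] or r.value >= CRITICAL[1]:    # critical value
        return False
    if not (REFERENCE_RANGE[0] <= r.value <= REFERENCE_RANGE[1]):
        return False
    if r.previous_value is not None and \
            abs(r.value - r.previous_value) > DELTA_LIMIT:  # delta check
        return False
    return True
```

Note the structure: the result is released only if it passes every rule, so any single failure routes the specimen to manual review.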

The autoverification rules evolved over more than a decade, with a steady increase in the autoverification rate to its current level of 99.5%. The high rate is driven in large part by the highest-volume tests and test panels (e.g. basic metabolic panel, albumin, alanine aminotransferase, and troponin T), all of which have autoverification rates exceeding 99.0%. This frees staff time to deal with assays, such as certain drug levels or endocrinology tests, that require offline steps such as manual dilutions, or to investigate questionable test results. Some tests in our study currently have autoverification rates under 90%; however, these tests comprise a small fraction of the total test volume. Informatics support is critical to successful implementation and maintenance of autoverification. The most common problems interfering with autoverification are interruptions of the network, the LIS, the middleware, and/or the interfaces between these systems.
The biggest risk associated with this type of process is releasing large numbers of results without proper review or editing; this results from poor planning, poor implementation, or failure to follow procedures. A lesser risk is failing to release results that do meet the autoverification criteria. Other than affecting turnaround time and workflow, the latter is a "safe" failure. Any autofiling procedures or modifications to software should not interfere with or inactivate the normal procedures and functions of either the instrument or the LIS. To as large an extent as possible, the process should be tolerant of user mistakes; the worst that should happen is that results are not released when they could have been. The process should be tested thoroughly against all possible instrument flags, errors, and ranges, and it should be revalidated periodically. During testing and validation, each criterion for autofiling needs to be tested with an individual example to confirm that it performs as expected: the system releases data when it is supposed to be released and holds data when it is supposed to be held. Combinations of criteria need to be considered, as does the order in which results are released by an analyzer. Ultimately, the process will only be as good as the planning and testing behind it.
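A minimal sketch of what per-criterion validation might look like, reusing the hypothetical `Result` and `autoverify` definitions from the earlier sketch (the specific values are invented for the example):

```python
import unittest

class AutoverificationValidation(unittest.TestCase):
    # Each criterion gets its own test with known inputs, one that
    # should release and ones that should hold; assumes the Result
    # and autoverify sketch defined above is importable here.
    def test_clean_result_is_released(self):
        r = Result("K", 4.2, set(), hemolysis_index=10, previous_value=4.0)
        self.assertTrue(autoverify(r))

    def test_instrument_flag_holds_result(self):
        r = Result("K", 4.2, {"CLOT"}, hemolysis_index=10)
        self.assertFalse(autoverify(r))

    def test_critical_value_is_held(self):
        r = Result("K", 6.8, set(), hemolysis_index=10)
        self.assertFalse(autoverify(r))

    def test_delta_check_failure_is_held(self):
        r = Result("K", 4.2, set(), hemolysis_index=10, previous_value=2.9)
        self.assertFalse(autoverify(r))

if __name__ == "__main__":
    unittest.main()
```

Real validation would go further, covering combinations of criteria and the order in which the analyzer releases results, but the principle is the same: every rule is exercised with an example that proves it fires.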
Autofiling/autoverification does not "make decisions." The common misconception is that the LIS is "deciding" to release or not release data. The software cannot make decisions or deal with ambiguity or inference; it is not an expert system. It is not capable of "learning," and it can only deal with situations, data, and patterns of results that have been predetermined. An algorithm is followed, and actions are taken or not taken based on the conditions presented. If an instrument flag or result is not accounted for, the algorithm cannot address or act on it, and it may perform an action (releasing a result) that was not intended. Autofiling will not make manual follow-up disappear. For example, if the policy for platelet counts below 30 K is that they are routinely confirmed by a manual smear review before results are released, an autofiling program cannot change that process: it should hold the results, but it will not eliminate the need for the review.

A final point is important to consider: do not overestimate the effect that autoverification has on workflow. Instrument flags and alerts tend to be indications that the data are suspect or require attention. Data from abnormal samples will carry numerous flags and alerts, and an autoverification program will not make those flags disappear. Generally, an inpatient population will have a higher percentage of specimens that flag than an outpatient population; data from specimens from an intensive care unit may never autofile, while those from an executive health clinic may all file. The rate of release depends on the acuity of the patient population, not on the instrument or software in use.




The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) does not appear to have standards that are directly applicable to autoverification. Standards under quality control that may apply indirectly include QC 1.3 and QC 1.4. QC 1.3 states, "The laboratory's quality control system includes daily surveillance of results by appropriate personnel." The description of the intent of this standard includes the statement that "a computer may be used to screen results, using similar specific criteria, so that only outliers need to be reviewed manually." QC 1.4 states, "The laboratory takes remedial action for deficiencies identified through quality control measures or authorized inspections and documents such actions." Autoverification must meet the intent of this standard by incorporating quality control into the process.

An LIS requirement for autofiling is the ability to use bar-coded specimens. Bar codes are ubiquitous in laboratories today, although there may still be a small subset of samples that are labeled without them. The bar-coded or otherwise machine-readable label allows positive identification of the sample, which is necessary so that the correct criteria, particularly delta-checking information, are applied by the autoverification algorithms. An LIS that requires creating lists of accessions, especially manually created lists, and then applying an algorithm to the list in sequence introduces too many chances for mix-ups or phase shifts.
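To illustrate why positive identification matters for delta checking, here is a hedged sketch in which a prior result is looked up by the patient identifier decoded from the bar-coded label; the data structures, identifiers, and limits are all hypothetical.

```python
# Hypothetical store of each patient's most recent verified result,
# keyed by the patient ID decoded from the bar-coded specimen label.
# Without positive identification, the wrong history could be compared.
previous_results = {
    "PT-1001": {"CREA": 1.1},   # creatinine, mg/dL
    "PT-1002": {"CREA": 3.4},
}

DELTA_LIMITS = {"CREA": 0.5}    # illustrative absolute-change limit

def delta_check(patient_id: str, analyte: str, value: float) -> bool:
    """Return True if the result passes the delta check (or there is
    no prior value to compare against); False means hold for review."""
    prior = previous_results.get(patient_id, {}).get(analyte)
    if prior is None:
        return True                       # no history: rule cannot fire
    return abs(value - prior) <= DELTA_LIMITS[analyte]

print(delta_check("PT-1001", "CREA", 1.2))   # True: change of 0.1
print(delta_check("PT-1002", "CREA", 1.2))   # False: change of 2.2
```

The same new result passes for one patient and fails for the other, which is exactly why a mix-up or phase shift in specimen identity defeats the rule.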
The LIS software must have some facility or function for autofiling as a feature. For regulatory reasons, laboratory personnel must work within the constraints of the software or vendor; modifications to instruments or software can introduce the possibility of FDA oversight. Some vendors will perform custom programming for a fee, but this tends to be expensive and makes support and upgrades very difficult. Finally, algorithms and programs should be designed from the assumption of error, with the criteria for release explicitly spelled out. The opposite approach would be to design the release algorithm on the assumption that all results presented by the instrument are acceptable for release unless criteria are met that invalidate them. This is a subtle but important distinction. For example, suppose the data module of an analyzer flags results with a 0 if there is no error and a 1, 2, or 3 to indicate various alert conditions. A criterion can be defined to release results provided the instrument flag is not a 2 or 3; the assumption is that a 0 or 1 is acceptable. But what if the instrument sends a 4 or, more likely, a text character that indicates a severe instrument malfunction? The results would be released if the criterion for release is "release if flag is not equal to 2 or 3." If the criterion is instead "release if flag is equal to 0 or 1," then unexpected characters in the data stream will not cause accidental release of results (illustrated in the sketch below).

The LIS should provide the functionality to hold or fail an entire cup (an entire specimen, such as a CBC) as well as to fail individual tests (HGB only), allowing maximum flexibility in defining criteria. The LIS needs to be capable of holding failed cups or tests in a recheck or re-filing queue. Not every sample will be autofiled, and manual data release should take place in the normal fashion. If samples are re-run, the autoverification program should be able to display both the original results and the re-check results; autoverification should not be applied to re-check data.

Once implemented, the process remains dynamic. Initially, the rate of autoverification should be calculated to determine whether it meets the goals established during implementation. This rate may change over time as the patient population changes. Changes in reimbursement may affect test-ordering patterns, and as the complexity of tests ordered changes, autoverification rates will vary. Large numbers of relatively normal patients being screened are what make this process most efficient. With managed care, screening testing is becoming less frequent, and patient populations may consist of a larger percentage of "abnormal" samples, lowering autoverification rates. New instrument models with expanded capabilities and parameters may make this process easier: data handling abilities are becoming more sophisticated, with much more versatility in turning flags on and off, setting ranges, and storing information from the LIS. Laboratory information systems are also becoming more sophisticated, with better integration of autofiling functionality. Setting the criteria and rules will become easier, with resulting efficiencies. Autoverification can positively affect workflow in the laboratory.
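Returning to the flag example above, a minimal sketch contrasting the fail-open criterion ("release if flag is not 2 or 3") with the fail-closed one ("release if flag is 0 or 1"); the flag codes follow the hypothetical 0-3 scheme in the text.

```python
def release_blacklist(flag) -> bool:
    # Fail-open: releases anything not explicitly 2 or 3,
    # including unexpected codes such as 4 or a text character.
    return flag not in (2, 3)

def release_whitelist(flag) -> bool:
    # Fail-closed: releases only the explicitly acceptable codes.
    return flag in (0, 1)

for flag in (0, 1, 2, 3, 4, "E"):
    print(flag, release_blacklist(flag), release_whitelist(flag))

# An unexpected flag of 4 or "E" (severe malfunction) is released by
# the blacklist rule but correctly held by the whitelist rule.
```

The two functions agree on every expected flag; they differ only on inputs nobody planned for, which is precisely where a fail-closed design earns its keep.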

USE OF MIDDLEWARE



Middleware Overview
With increasing demands on productivity and decreasing resources, clinical laboratories are looking for ways to reduce staffing costs, reduce review rates, optimize sample throughput, and improve accuracy and consistency of reported results. While laboratory information systems help to achieve these goals, the use of middleware to increase efficiency has become an industry standard.
Middleware, software that optimizes the flow of data from analyzer to LIS, has become a practical necessity for a viable laboratory. Through the use of customizable rule sets, QC monitors, and tools that help technologists release results quickly and accurately, middleware improves efficiency and productivity. It eliminates the need for staff to review every result and also reduces the number (and cost) of paper printouts.

Autoverification
Perhaps the main benefit of middleware is autoverification. By applying rules to test results coming from the analyzer, the middleware filters the data before it reaches the LIS. Results that do not meet release criteria are either automatically scheduled for repeat analysis or are held for review by technologists, who have control over the results from that point.

How Does Autoverification Work?
Middleware programs use Boolean logic ("IF", "AND", "OR" statements) to establish rules for reviewing analyzer data. Results from the analyzer that meet release criteria are sent directly to the LIS, while results that fail these criteria are held for further action by a technologist or other laboratory staff. For example, if chemistry analyzer test results fail release criteria, the test(s) may be automatically re-ordered, or the technologist may be alerted to review the results and decide whether further testing is required or the results can be released. In a hematology application, differential results that fail release criteria may require that a blood smear be made and reviewed microscopically by a technologist before the results are released to the LIS.
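As a hedged illustration of such IF/AND/OR logic, here is a sketch of a hematology rule that routes a result to release, rerun, or smear review. The field names and thresholds are invented for the example and would be set by each laboratory's own policy.

```python
def route(r: dict) -> str:
    # IF the analyzer flagged the differential AND this is the first
    # run, rerun the sample before escalating to a person.
    if r["diff_flag"] and r["first_run"]:
        return "RERUN"
    # IF the differential is flagged OR the platelet count is below
    # the smear threshold, hold for manual smear review.
    if r["diff_flag"] or r["plt"] < 30:
        return "SMEAR_REVIEW"
    # Otherwise the result meets release criteria.
    return "RELEASE_TO_LIS"

# Hypothetical CBC results; units and values are illustrative.
print(route({"wbc": 18.5, "plt": 25, "diff_flag": True,
             "first_run": True}))                        # RERUN
print(route({"wbc": 7.2, "plt": 25, "diff_flag": False,
             "first_run": False}))                       # SMEAR_REVIEW
print(route({"wbc": 7.2, "plt": 210, "diff_flag": False,
             "first_run": True}))                        # RELEASE_TO_LIS
```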
The Clinical and Laboratory Standards Institute® recommends the following as minimum requirements when considering software tools for autoverification:
- ability to use multiple data elements in an unrestricted fashion
- ability of the laboratory to define and implement changes to algorithms quickly and easily
- retrieval of selected information from multiple data sources (e.g., EMR, pharmacy, instrument results, other laboratory data, diagnosis code)
- application of algorithms in real time
- a flexible user interface that provides laboratory-defined information on the autoverification process in real time

Laboratory Performance Metrics
Another benefit of middleware is the ability to track laboratory performance metrics. Because results are released in real time (as they come off the analyzer or as they are reviewed by a technologist), the software can track analyzer throughput as well as technologist performance and overall turnaround times. A further benefit is the ability to track data at a granular level, such as determining the review rate by review rule, follow-up action, patient age, or species (in multi-species applications).
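A minimal sketch of the kind of granular metric this enables, here the review rate broken down by the rule that held each result; the per-result event records are hypothetical.

```python
from collections import Counter

# Hypothetical per-result event log emitted by the middleware:
# whether the result was held, and if so, by which rule.
events = [
    {"held": False, "rule": None},
    {"held": True,  "rule": "DELTA_CHECK"},
    {"held": True,  "rule": "CRITICAL_VALUE"},
    {"held": False, "rule": None},
    {"held": True,  "rule": "DELTA_CHECK"},
]

total = len(events)
held_by_rule = Counter(e["rule"] for e in events if e["held"])

print(f"overall review rate: {sum(e['held'] for e in events) / total:.0%}")
for rule, n in held_by_rule.most_common():
    print(f"{rule}: {n / total:.0%} of all results")
```

The same breakdown could be computed per follow-up action, patient age band, or species, which is what makes rule-level tuning of the review rate possible.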

Implementation
Implementing middleware in a laboratory involves working closely with a middleware vendor (such as LabThroughPut). A team of laboratory operations managers, IT administrators, and vendor programmers works together to install the middleware, to ensure that instrument interfaces and data transfer are properly implemented, and to verify that review rules are properly created and that the right follow-up actions are performed. Any middleware's autoverification rules must be customizable to meet the laboratory's specific standard operating procedures, and so that changes can be implemented when needed.