Saturday 12 September 2009

A paper that suggests considering dependability rather than safety and security in isolation

Stud Health Technol Inform. 1996;27:190-9.

Safety and security of information systems.

Shaw R.

Lloyd's Register, Croydon, UK.

This paper discusses some of the similarities and differences between the attributes of safety and security. It places these attributes within the broader topic of dependability and tries to identify what aspects of safety and security are unique and which aspects may be viewed within the attributes of reliability and availability. The paper then suggests that, rather than analyse systems from the single perspective of safety or security, they should be analysed from the broader perspective of dependability.

More references to the six safety first principles of health information systems

Foundations for an Electronic Medical Record
http://www.opengalen.org/download/FoundationsofEMR.pdf
A.L. Rector, W.A. Nolan, S. Kay
Medical Informatics Group, Department of Computer Science, University of Manchester, Manchester M13 9PL
Tel +44-161-275-6188/7183  FAX: +44-161-275-6204  email {arector,skay}@cs.man.ac.uk
http://www.cs.man.ac.uk   Published in Methods of Information in Medicine 30: 179-86, 1991

References include:

17. Barber B, Jensen OA, Lamberts H, Roger-France R, De Schouwer P, Zollner H. The six safety first principles of health information systems: a programme of implementation, part 1: Safety and Security. In: O’Moore R, Bengtsson S, Bryant JR, Bryden JS, eds. MIE 90. Lecture Notes in Medical Informatics no. 40. Berlin: Springer-Verlag, 1990: 608-13.
18. Barber B, Jensen OA, Lamberts H, Roger-France R, De Schouwer P, Zollner H. The six safety first principles of health information systems: A programme of implementation. Part 2: Convenience and Legal Issues. In: O’Moore R, Bengtsson S, Bryant JR, Bryden JS, eds. MIE 90. Lecture Notes in Medical Informatics no. 40. Berlin: Springer-Verlag, 1990: 614-9.

Sunday 6 September 2009

An Early Bibliography on Clinical Safety

[3] Barber B et al.: The Six Safety First Principles of Health Information Systems: A Programme of Implementation - Part 1 Safety and Security; O.A. Jensen et al.: Part 2 Convenience and Legal Issues, pp 608-619; in O'Moore et al (eds): Medical Informatics Europe 90, Lecture Notes in Medical Informatics No 40, Springer Verlag, Berlin 1990.

[4] Barber B, Vincent R, Scholes M: Worst Case Scenarios: The Legal and Ethical Imperative; in Richards B et al (eds): HC92 Current Perspectives in Healthcare Computing, 1992, British Journal of Healthcare Computing, pp 282-288.

Info-Vigilance or Safety in Health Information Systems

The paper examines the issues of security and safety in Health Information Systems and focuses on the need to develop appropriate guidelines for the effective use of the IEC 61508 standard.

http://cmbi.bjmu.edu.cn/news/report/2001/medinfo_2001/Papers/Ch14/809_Barber.pdf

Health Informatics Requirements for an EMR

http://www.opengalen.org/download/FoundationsofEMR.pdf

Safety as a System Property

While tidying my office this morning I came across a seminal article written by Nancy Leveson in 1995 in which she describes safety as an emergent property of a system. This is a short article that I would recommend to everyone involved in Clinical Safety.
The text of the article, which was also published in Communications of the ACM (Nov 1995, Vol 38, No 11; http://portal.acm.org/citation.cfm?doid=219717.219816), is on the following page from Peter Neumann's website, http://www.csl.sri.com/users/neumann/risks-new.html, in a section entitled "Section 7.1 Safety as a System Property" (a new section after Section 7.1, p 297 in the original book).

When computers are used to control potentially dangerous devices, new issues and concerns are raised for software engineering. Simply focusing on building software that matches its specifications is not enough. Accidents occur even when the individual system components are highly reliable and have not "failed." That is, accidents in complex systems often arise in the interactions among the system components, each one operating according to its specified behavior but together creating a hazardous system state. In general, safety is not a component property but an emergent property as defined by system theory. Emergent properties arise when system components operate together. Such properties are meaningless when examining the components in isolation -- they are imposed by constraints on the freedom of action of the individual parts. For example, the shape of an apple, although eventually explainable in terms of the cells of the apple, has no meaning at that lower level of description.
One implication of safety being an emergent property is that reuse of software components, such as commercial off-the-shelf software, will not necessarily result in safer systems. The same reused software components that killed people when used to control the Therac-25 had no dangerous effects in the Therac-20. Safety does not even exist as a concept when looking only at a piece of software -- it is a property that arises when the software is used within a particular overall system design. Individual components of a system cannot be evaluated for safety without knowing the context within which the component will be used.

Therefore, solutions to software safety problems must start with system engineering, not with software engineering. In the standard system safety engineering approach, system hazards (states that can lead to accidents or losses) are identified and traced to constraints on individual component behavior. Hazards are then either eliminated from the overall system design or they are controlled by providing protection (such as interlocks) against hazardous behavior. This protection may be at the system or component level or both. Building the software for these systems requires changes in the entire software development process and integration with the system-level safety efforts. (See [22] for more information about this approach).
One of the most important changes requires imposing discipline on the engineering process and product. Computers allow more interactive, tightly coupled, and error-prone designs to be built, and thus may encourage the introduction of unnecessary and dangerous complexity. Trevor Kletz suggests that computers do not introduce new forms of error, but they increase the scope for introducing conventional errors by increasing the complexity of the processes that can be controlled. In addition, the software controlling the process may itself be unnecessarily complex and tightly coupled.
Adding even more complexity in an attempt to make the software "safer" may cause more accidents than it prevents. Proposals for safer software design need to be evaluated as to whether any added complexity is such that more errors will be introduced than eliminated and whether a simpler way exists to achieve the same goal.
Besides the software process itself, new requirements are needed for the training and education of the software engineers who work on safety-critical projects. Most accidents are not the result of unknown scientific principles but rather of a failure to apply well-known, standard engineering practices. Engineering has accumulated much knowledge about how accidents occur, and procedures to prevent them have been incorporated into engineering practice. We are now replacing electromechanical devices with computers, but those building the software often know little about basic safety engineering practices and safeguards. It would be tragic if we had to repeat the mistakes of the past simply because we refused to learn them. The most surprising response to my new book has been complaints from software engineers that it includes analysis of accidents not caused by computers.
Finally, safety is a complex, socio-technical problem for which there is no simple solution. The technical flaws that lead to accidents often can be traced back to root causes in the organizational culture. Concentrating only on technical issues and ignoring managerial and organizational deficiencies will not result in effective safety programs. In addition, blaming accidents on human operators and not recognizing the impact of system design on human errors is another dead-end approach. Solving the safety problem will require experts in multiple fields, such as system engineering, software engineering, cognitive psychology, and organizational sociology working together as a team.

Saturday 28 February 2009

The use of phased safety cases in other domains

Both of the emerging Health Informatics standards for Clinical Risk Management, TS 29321 and TR 29322, include the concept of staged safety case reports. Whilst some in the industry may see the production of so many documents as an issue, this does not particularly concern me, because the documentation for each successive phase can simply be appended to that of the previous phase. However, I do see two areas where additional guidance is necessary:
  • no guidance is provided about how the stages relate to each other;
  • in particular, no guidance is provided on what information might flow from one safety case to another.
This additional information would be particularly useful when different contractual parties are involved in the supply chain. Including it in a future issue of the standard would enable parties to enter into contracts and agreements on the basis of the Standard, rather than having to negotiate these aspects separately unless there were good reasons for doing things differently.

The following illustrates the generic approach. I envisage developing this further to meet the specific characteristics of heavily configured COTS as used in health care.
  1. Concept / requirements Safety Case Report. Produced when the role and broad functionality of the new system is determined. This document identifies the safety objectives of the system and its applicable system safety requirements which are based on regulatory requirements and the service provider’s internal safety standards as appropriate;
  2. Design Safety Case Report. Produced once the system has been designed and developed to meet the specified operational and/or engineering requirements. This document describes the system configuration, identifies safety requirements for the installation, commissioning and operational phases, and describes how the safety objectives and requirements have been met within the evolving design. A full hazard and risk assessment is usually included at this stage;
  3. Installation and pre-commissioning Safety Case Report. Produced when the system is undergoing procedural and/or engineering readiness testing against the design specifications, followed by operational trials. At this phase, the risk assessment is tested and validated by actual trials and testing of the installed system, and specific safety related operational, engineering and/or management procedures are developed to obviate or control the identified risks; and
  4. Commissioning and routine operations Safety Case Report. Produced prior to release to service. Demonstrates how the safety of the system will continue to be monitored and improved as hazards arise and are identified, and how risks are mitigated during actual operations.
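
To make the inter-stage information flow concrete, here is a minimal sketch in Python of how these staged reports might be modelled. All of the class and field names are hypothetical illustrations of the generic approach above; nothing here is defined in TS 29321 or TR 29322.

# Illustrative only: hypothetical names, not defined in TS 29321 / TR 29322.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SafetyCaseReport:
    """One report in the generic four-stage safety case model."""
    phase: str                          # e.g. "Concept/Requirements"
    safety_objectives: List[str]        # carried forward between phases
    safety_requirements: List[str]      # refined at each phase
    hazard_log: List[str] = field(default_factory=list)
    previous: Optional["SafetyCaseReport"] = None  # earlier report, kept as an appendix

    def next_phase(self, phase: str) -> "SafetyCaseReport":
        """Create the report for the next phase, inheriting everything from this one."""
        return SafetyCaseReport(
            phase=phase,
            safety_objectives=list(self.safety_objectives),
            safety_requirements=list(self.safety_requirements),
            hazard_log=list(self.hazard_log),
            previous=self,
        )

concept = SafetyCaseReport(
    phase="Concept/Requirements",
    safety_objectives=["No patient harmed by anomalous system behaviour"],
    safety_requirements=["Applicable regulatory requirements"],
)
design = concept.next_phase("Design")
design.hazard_log.append("H1: wrong patient record displayed")
installation = design.next_phase("Installation/Pre-commissioning")
operations = installation.next_phase("Commissioning/Routine operations")

The design point is that each report inherits the objectives, requirements and hazard log of its predecessor, which is precisely the inter-stage flow I would like a future issue of the standard to make explicit.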

ISO/TS 25238 - Health Informatics - Classification of Safety Risks from Health Software

Whilst the controls that are justified for a high level of risk may also be suitable for lower levels of risk, their application at lower levels of risk may not be justified when other considerations are taken into account.

ISO/TS 25238 provides a framework for grouping health software products in a set of classes or types according to the risk that they may present. This provides a mechanism for screening individual products to allow different levels of rigour in the application of design and production controls that are broadly matched to risk.

The approach advocated in this TS is in effect a Preliminary Hazard Assessment whose output, effectively a Safety Integrity Level (SIL), is linked to a set of recommended controls defined by the organisation undertaking the assessment. The categorisation of risk is done using a two-dimensional risk matrix but, importantly, the likelihood is defined such that it does not take account of the effect of any controls within the product. In essence the approach is as follows:
  1. First, identify the foreseeable hazards that a health software product might present to a patient if it were to malfunction or to be the cause of an adverse event.
  2. Then assign a Consequence category using a predefined table of qualitative categories and descriptions.
  3. Then assign a Likelihood category using a predefined table of values. This is estimated based on the likelihood of the occurrence of the identified Consequence without taking account of any mitigation expected from the product itself; however, expected mitigation from the environment should be included.
The output of the risk matrix is a set of classes to which appropriate process and product based controls can be applied.
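
To make the mechanics concrete, here is a minimal sketch in Python of such a classification lookup. The consequence and likelihood labels and the class boundaries are placeholders of my own; the real tables are defined in ISO/TS 25238 itself.

# Placeholder categories and boundaries; the real tables are in ISO/TS 25238.
CONSEQUENCES = ["minor", "significant", "considerable", "major", "catastrophic"]
LIKELIHOODS = ["very low", "low", "medium", "high", "very high"]

def risk_class(consequence: str, likelihood: str) -> str:
    """Look up a class from a two-dimensional risk matrix.

    Likelihood is estimated without credit for mitigations inside the
    product itself; mitigation from the environment may be counted.
    """
    score = CONSEQUENCES.index(consequence) + LIKELIHOODS.index(likelihood)
    if score >= 6:
        return "Class A (highest risk)"
    if score >= 4:
        return "Class B"
    if score >= 2:
        return "Class C"
    return "Class D (lowest risk)"

# A decision-support malfunction judged "major" in consequence and
# "medium" in likelihood before any in-product mitigation:
print(risk_class("major", "medium"))  # -> Class B

Note that the likelihood argument must be estimated without taking credit for mitigations within the product, as described in step 3 above.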

The Lifecycle of Health Software

I have some observations about the life-cycle of Health Software as modelled in TR 29322. Here is Section 4.4 in its entirety; I will comment separately:

The life cycle of a health software system typically comprises:
a) concept development and requirements capture;
b) detailed design;
c) software development;
d) software verification;
e) software release/marketing;
f) system validation and deployment;
g) use;
h) decommissioning.
ISO/TS 29321 applies to all life-cycle stages for which the manufacturer is responsible; typically, these will be a) to c) although, depending on the contract, the customer may be involved in concept definition/design. This Technical Report applies to all those life-cycle stages for which the customer/health organization is responsible, which will normally at least include deployment, use and decommissioning, although an out-sourcing contract may place these in the hands of the manufacturer.

However “deployment” can be in the hands of the manufacturer, the health organization or both. The manufacturer is likely, for example, to be heavily involved with the first deployment of a health software product as part of a health organization system. The standard which will apply to deployment will depend primarily on which body is responsible for ensuring patient safety. Where the manufacturer and the health organization work together on deployment and perhaps share responsibility for risk analysis etc., the manufacturer may work to ISO/TS 29321 (and thereby use the experience to build on the manufacturer clinical safety case) and the health organization may work to this Technical Report (using the experience to build the organization's clinical safety case and draw on the manufacturer's deployment clinical safety case report).
Since the hand-over from implementation in the user environment to live use will often involve the manufacturer/supplier and the user, a formal user acceptance protocol should be agreed and documented, and should include:
  • a procedural walk-through with users;
  • a dress rehearsal.
Whereas defining responsibilities for the purpose of determining which standard applies is important, the fact that the processes in ISO/TS 29321 and this Technical Report are the same makes the boundary less important.
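
My reading of that allocation can be summarised as a small table in code. The assignments below reflect the "typical" case described in Section 4.4 and are my own interpretation: the contract ultimately determines responsibility for each stage, and stages d) and e) are not explicitly allocated in the text.

# My own summary of the "typical" allocation in Section 4.4;
# the contract can move any stage between the parties.
LIFECYCLE_STANDARD = {
    "a) concept development and requirements capture": "ISO/TS 29321 (manufacturer)",
    "b) detailed design":                              "ISO/TS 29321 (manufacturer)",
    "c) software development":                         "ISO/TS 29321 (manufacturer)",
    "d) software verification":                        "contract-dependent",
    "e) software release/marketing":                   "contract-dependent",
    "f) system validation and deployment":             "shared: TS 29321 and/or TR 29322",
    "g) use":                                          "ISO/TR 29322 (health organization)",
    "h) decommissioning":                              "ISO/TR 29322 (health organization)",
}

for stage, standard in LIFECYCLE_STANDARD.items():
    print(f"{stage}: {standard}")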

Wednesday 25 February 2009

Safety Cases - A historical overview

I have just come across this paper by Peter Wilkinson, which provides a review of the historical background to the development of safety cases as a tool to manage and regulate major hazard industries, primarily in the UK. The paper describes a number of applications where Safety Cases have been used and considers some successes and failures in their application.

Safety Standards in Health Informatics - Interest Group

Earlier today I attended the second meeting of an interest group that has been formed under the auspices of Intellect to review two emerging Health Informatics Standards related to Clinical Risk Management ISO/TS 29321 and ISO/TR 29322. Both of these Standards are based on Safety Case principles - a concept that is relatively new in healthcare.

Monday 9 February 2009

Health IT Risks - Background

Risk management is at the heart of many practices within medicine and healthcare. It is clearly in every patient's interest for clinicians to hold the principles of the Hippocratic Oath close to their hearts. With the emergence of more and more specialist disciplines and the continual drive for improvements, care is increasingly being delivered by multidisciplinary teams. However, multidisciplinary team working introduces huge challenges for communication and the sharing of information.

Information Technology (IT) offers tremendous opportunities for joining up disparate business processes. There is also a perception that the introduction of computer systems will inevitably improve safety in many areas of clinical practice. Information technology certainly has huge potential to improve the consistency of the healthcare experience and the quality of clinical record keeping. However, we need to be careful: interactions within healthcare are complex, and there are numerous ways in which clinical business processes can fail and cause harm to patients.

For many, the most obvious pitfalls are those relating to Information Governance, and privacy in particular. The Health Insurance Portability and Accountability Act (HIPAA) in the US and Data Protection Act (DPA) legislation in Europe have established statutory requirements that must be met to address these risks. However, this legislation and the associated standards do not properly address the risks of harm to patients arising from anomalous behaviour within information systems.

Examples of anomalous behaviour that could lead to harm include:
  • Incorrect output from clinical decision support functionality
  • Failure to record details of an allergy correctly
  • Failure to reliably transfer or refer the care of a patient to another provider
Since these scenarios have the potential to increase the risk of harm to patients, they are described as Clinical Hazards. The practice of identifying, risk-assessing and mitigating Clinical Hazards is termed Clinical Risk Management.
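
For those new to the discipline, the sketch below illustrates the kind of record a Clinical Risk Management process might keep for each hazard, using the allergy example above. The field names and example values are my own illustrations, not taken from any standard or real hazard log.

# Illustrative hazard log entry; field names are my own, not from any standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClinicalHazard:
    hazard_id: str            # e.g. "CH-001"
    description: str          # the hazardous situation
    causes: List[str]         # anomalous behaviours that could trigger it
    clinical_effect: str      # how the patient could be harmed
    initial_risk: str         # assessed before mitigation
    mitigations: List[str] = field(default_factory=list)
    residual_risk: str = "unassessed"

allergy_hazard = ClinicalHazard(
    hazard_id="CH-001",
    description="Failure to record details of an allergy correctly",
    causes=["truncated free-text field", "mapping error on data migration"],
    clinical_effect="patient prescribed a drug to which they are allergic",
    initial_risk="high",
)
allergy_hazard.mitigations.append("mandatory coded allergy entry with clinical review")
allergy_hazard.residual_risk = "medium"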

Sunday 8 February 2009

Background

I have implemented Safety Management Systems and developed Safety Cases in both Aviation and Healthcare. This blog is a repository of links to published papers that I have found relevant or interesting, and a platform from which to share my experience of Clinical Risk Management with others engaged in this area of work.

James Savage MSc BSc RAF (Retd)