Saturday 12 September 2009

A paper that suggests considering dependability rather than safety or security in isolation

Stud Health Technol Inform. 1996;27:190-9.

Safety and security of information systems.

Shaw R.

Lloyd's Register, Croydon, UK.

This paper discusses some of the similarities and differences between the attributes of safety and security. It places these attributes within the broader topic of dependability and tries to identify what aspects of safety and security are unique and which aspects may be viewed within the attributes of reliability and availability. The paper then suggests that, rather than analyse systems from the single perspective of safety or security, they should be analysed from the broader perspective of dependability.

More References to the Six Safety First Principles of Health Information Systems

Foundations for an Electronic Medical Record
http://www.opengalen.org/download/FoundationsofEMR.pdf
A.L. Rector, W.A. Nolan, S. Kay
Medical Informatics Group, Department of Computer Science, University of Manchester, Manchester M13 9PL
Tel +44-161-275-6188/7183  FAX: +44-161-275-6204  email {arector,skay}@cs.man.ac.uk
http://www.cs.man.ac.uk   Published in Methods of Information in Medicine 30: 179-86, 1991

References include:

17. Barber B, Jensen OA, Lamberts H, Roger-France R, De Schouwer P, Zollner H. The six safety first principles of health information systems: a programme of implementation, Part 1: Safety and Security. In: O'Moore R, Bengtsson S, Bryant JR, Bryden JS, eds. MIE 90. Lecture Notes in Medical Informatics no. 40. Berlin: Springer-Verlag, 1990: 608-13.
18. Barber B, Jensen OA, Lamberts H, Roger-France R, De Schouwer P, Zollner H. The six safety first principles of health information systems: a programme of implementation, Part 2: Convenience and Legal Issues. In: O'Moore R, Bengtsson S, Bryant JR, Bryden JS, eds. MIE 90. Lecture Notes in Medical Informatics no. 40. Berlin: Springer-Verlag, 1990: 614-9.

Sunday 6 September 2009

An Early Bibliography on Clinical Safety

[3] Barber B et al.: The Six Safety First Principles of Health Information Systems: A Programme of Implementation - Part 1 Safety and Security; O.A. Jensen et al.: Part 2 Convenience and Legal Issues, pp 608-619; in O'Moore et al. (eds): Medical Informatics Europe 90, Lecture Notes in Medical Informatics No 40, Springer Verlag, Berlin 1990.

[4] Barber B, Vincent R, Scholes M: Worst Case Scenarios: The Legal and Ethical Imperative; in Richards B et al. (eds): HC92 Current Perspectives in Healthcare Computing, 1992, British Journal of Healthcare Computing, pp 282-288.

Info-Vigilance or Safety in Health Information Systems

The paper examines the issues of security and safety in Health Information Systems and focuses on the need to develop appropriate guidelines for the effective use of the IEC 61508 standard.

http://cmbi.bjmu.edu.cn/news/report/2001/medinfo_2001/Papers/Ch14/809_Barber.pdf

Health Informatics Requirements for an EMR

http://www.opengalen.org/download/FoundationsofEMR.pdf

Safety as a System Property

While tidying my office this morning I came across a seminal article written by Nancy Leveson in 1995 in which she describes safety as an emergent property of a system. This is a short article that I would recommend to everyone involved in Clinical Safety.
The text of the article, which was also published in Communications of the ACM (Nov 1995, Vol 38, No 11), is on the following page of Peter Neumann's website, http://www.csl.sri.com/users/neumann/risks-new.html, in a section entitled "Section 7.1 Safety as a System Property" (a new section after Section 7.1, p. 297 in the original book). The ACM version is at http://portal.acm.org/citation.cfm?doid=219717.219816

When computers are used to control potentially dangerous devices, new issues and concerns are raised for software engineering. Simply focusing on building software that matches its specifications is not enough. Accidents occur even when the individual system components are highly reliable and have not "failed." That is, accidents in complex systems often arise in the interactions among the system components, each one operating according to its specified behavior but together creating a hazardous system state. In general, safety is not a component property but an emergent property as defined by system theory. Emergent properties arise when system components operate together. Such properties are meaningless when examining the components in isolation -- they are imposed by constraints on the freedom of action of the individual parts. For example, the shape of an apple, although eventually explainable in terms of the cells of the apple, has no meaning at that lower level of description.
One implication of safety being an emergent property is that reuse of software components, such as commercial off-the-shelf software, will not necessarily result in safer systems. The same reused software components that killed people when used to control the Therac-25 had no dangerous effects in the Therac-20. Safety does not even exist as a concept when looking only at a piece of software -- it is a property that arises when the software is used within a particular overall system design. Individual components of a system cannot be evaluated for safety without knowing the context within which the component will be used.

Therefore, solutions to software safety problems must start with system engineering, not with software engineering. In the standard system safety engineering approach, system hazards (states that can lead to accidents or losses) are identified and traced to constraints on individual component behavior. Hazards are then either eliminated from the overall system design or they are controlled by providing protection (such as interlocks) against hazardous behavior. This protection may be at the system or component level or both. Building the software for these systems requires changes in the entire software development process and integration with the system-level safety efforts. (See [22] for more information about this approach).
One of the most important changes requires imposing discipline on the engineering process and product. Computers allow more interactive, tightly coupled, and error-prone designs to be built, and thus may encourage the introduction of unnecessary and dangerous complexity. Trevor Kletz suggests that computers do not introduce new forms of error, but they increase the scope for introducing conventional errors by increasing the complexity of the processes that can be controlled. In addition, the software controlling the process may itself be unnecessarily complex and tightly coupled.
Adding even more complexity in an attempt to make the software "safer" may cause more accidents than it prevents. Proposals for safer software design need to be evaluated as to whether any added complexity is such that more errors will be introduced than eliminated and whether a simpler way exists to achieve the same goal.
Besides the software process itself, new requirements are needed for the training and education of the software engineers who work on safety-critical projects. Most accidents are not the result of unknown scientific principles but rather of a failure to apply well-known, standard engineering practices. Engineering has accumulated much knowledge about how accidents occur, and procedures to prevent them have been incorporated into engineering practice. We are now replacing electromechanical devices with computers, but those building the software often know little about basic safety engineering practices and safeguards. It would be tragic if we had to repeat the mistakes of the past simply because we refused to learn them. The most surprising response to my new book has been complaints from software engineers that it includes analysis of accidents not caused by computers.
Finally, safety is a complex, socio-technical problem for which there is no simple solution. The technical flaws that lead to accidents often can be traced back to root causes in the organizational culture. Concentrating only on technical issues and ignoring managerial and organizational deficiencies will not result in effective safety programs. In addition, blaming accidents on human operators and not recognizing the impact of system design on human errors is another dead-end approach. Solving the safety problem will require experts in multiple fields, such as system engineering, software engineering, cognitive psychology, and organizational sociology working together as a team.
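
Leveson's interlock example can be made concrete with a small sketch. In the Python below (my own illustration, not from her article; all class and method names are hypothetical), a beam controller and a turntable each meet their individual specifications, yet the combination "beam on, shield retracted" is hazardous. The safety constraint identified by hazard analysis is therefore enforced at the system level rather than inside either component.

from dataclasses import dataclass


@dataclass
class BeamController:
    """Meets its own spec: fires the beam whenever commanded."""
    firing: bool = False

    def fire(self) -> None:
        self.firing = True


@dataclass
class Turntable:
    """Meets its own spec: moves the shield into or out of position."""
    shield_in_place: bool = True

    def retract_shield(self) -> None:
        self.shield_in_place = False


class SafetyInterlock:
    """System-level hazard control. The hazard (beam on while the shield
    is out) is a property of the combination of components, so the
    constraint derived from hazard analysis is enforced here."""

    def __init__(self, beam: BeamController, table: Turntable) -> None:
        self.beam = beam
        self.table = table

    def fire_beam(self) -> None:
        # Safety constraint: never fire unless the shield is in place.
        if not self.table.shield_in_place:
            raise RuntimeError("interlock: beam blocked, shield not in place")
        self.beam.fire()


if __name__ == "__main__":
    beam, table = BeamController(), Turntable()
    system = SafetyInterlock(beam, table)
    table.retract_shield()   # a correct Turntable operation
    try:
        system.fire_beam()   # a correct BeamController request...
    except RuntimeError as err:
        print(err)           # ...blocked by the system-level interlock

Run as a script, this prints the interlock message instead of entering the hazardous state. The point of the design is that neither component knows about the hazard, because the hazard exists only in their interaction.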
