A systematic review was conducted to identify the strategies used in the development of patient safety indicators for acute care hospitals. The data sources were MEDLINE, EMBASE, websites, and the bibliographic references of the selected documents. Fourteen indicator development projects were included. The projects used various terms related to quality and patient safety, with varied definitions, and were characterized by literature review and the participation of experts and other stakeholders. Of the 285 indicators identified, 125 were classified in more than one quality dimension, the most frequent combination being safety and effectiveness. Medication indicators were the most numerous, and most indicators convey outcome information. The review underscored the importance of considering, in the development of indicators, cultural variations, clinical practice, the availability of information systems, and the capacity of hospitals to implement effective monitoring systems.
Safety; Quality Indicators in Health Care; Hospitalization
Unsafe health care results in significant avoidable morbidity and mortality and in additional spending to maintain health systems, and is a major concern today 1 . Studies in hospitals in several countries show the association between the occurrence of adverse events, incidents that cause harm to patients 2 , and increased length of stay, mortality, and hospital expenditure 3,4 . Estimates from developed countries indicate that at least 5% of patients admitted to hospitals acquire an infection 1 . In Brazil, recent research in three teaching hospitals in Rio de Janeiro identified adverse events in 7.6% of patients, 66.7% of which were preventable 5 . This context has stimulated, in the last decade, different initiatives to promote safer health care 6,7 . Among them, the creation of programs to monitor quality and safety based on indicators stands out.
The quality of health care is defined by the World Health Organization (WHO) as "the degree to which health services for individuals and populations increase the likelihood of desired outcomes and are consistent with current professional knowledge" 2 (p. 22). Safety is an important dimension of quality that refers to the right of people to have the "risk of unnecessary harm associated with health care reduced to an acceptable minimum" 2 (p. 21). Errors, violations, and failures in the care process increase the risk of incidents that cause harm to patients 2 .
Quality is a multidimensional concept, which requires different approaches to its evaluation. A quality indicator can be defined as a quantitative measure of some aspect of patient care 8 . Indicators allow health service performance to be monitored, guide quality improvement actions, and help patients make more informed choices. Their usefulness depends on their validity, reliability, and feasibility 8,9,10 .
Hospitals are responsible for a significant and complex portion of the health care provided to patients. The inclusion of safety indicators in quality monitoring programs represents an important strategy to guide measures that promote the safety of hospitalized patients. In Brazil, there is still no set of indicators defined for this purpose, and research on the subject is scarce.
The objective of this review is to identify strategies for the development of patient safety indicators in hospitals, in order to contribute to the development of a set of safety measures adapted to the Brazilian reality.
A systematic review of the literature on the strategies used in the development of patient safety indicators for acute care hospitals was carried out. The electronic search used the following sources: MEDLINE, through the PubMed interface, and EMBASE. The period covered was 2002 to 2008. Forty-five articles on patient safety selected in MEDLINE provided the Medical Subject Headings (MeSH) used to index them, which formed the basis for composing the search equation, later adapted for EMBASE (Table 1).
The selection criteria included articles that: (a) described the process used in the development of the indicators; and (b) developed specific patient safety indicators or referred to safety as one of the evaluation dimensions. No restrictions were placed on patient age, health problem, or hospital sector. The exclusion criteria were: (a) indicators only for long-term care or outpatient care; and (b) indicators/events related to notification and surveillance systems 11 . Letters, editorials, news items, professional commentaries, case studies, and articles without an abstract were not selected.
In addition to the electronic search, a manual search for documents was performed on websites. The bibliographic references of the articles and documents included in the review were also checked. Several articles and documents referred to a single indicator development project; the project was therefore taken as the unit of analysis of this review.
The websites surveyed were restricted to those cited by the projects included in the review and representing national or international organizations (Table 2). Within the sites, a targeted search was made in the publications sections or on the pages devoted to patient safety and indicators. The search fields and the links sections were not used. Additional information on the included projects was also sought on their respective websites.
The titles and abstracts of the articles retrieved in the electronic search were independently assessed by two reviewers, applying the inclusion and exclusion criteria; differences were resolved by consensus. The website search, the verification of bibliographic references, and the complete reading of the selected texts were performed by one of the authors alone, using the same inclusion and exclusion criteria.
The study variables were: country(ies) where the project was developed, organization(s) involved, year of the study, and description/objective(s); conceptual model, methods, and criteria for selection and validation of indicators; and title of the indicator, data source, level of information, and quality dimension(s). To collect data, a standard form was developed based on analysis of the literature on the development of indicators 9,10,12 and pre-tested with three projects.
In order to quantify the total number of indicators, they were aggregated when no differences in the specifications (numerator, denominator, target population, and time period of the measure) were identified; when variations were observed, they were counted individually. The indicators were classified by clinical area or care sector, namely: clinical care; surgery/anesthesia; adult intensive care unit (ICU); pediatric ICU; obstetrics/gynecology; infection control; medication use; and others (indicators not classified in the previous items). The indicators were also grouped according to the population covered: children (under 18 years) and adults; when an indicator referred to practices or processes applicable to both age groups, it was placed in the "both" category. Indicators that could be assigned to more than one area were classified in the one where the clinical processes were more specialized; for example, the indicator "central venous catheter-related infections in adult ICU" was classified in the adult ICU category, not in the infection control category. Finally, the indicators were classified according to the level of information (structure, process, and outcome) provided by the developing organization; when this information was not available, the criteria developed by Donabedian 13 were used to classify them.
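The aggregation rule described above — counting indicators as one when all four specification fields match, and separately when any field varies — can be sketched as follows. The field names and sample indicators are illustrative, not taken from the projects reviewed:

```python
# Aggregate indicators whose specifications are identical; count variants separately.
# Specification fields mirror those named in the text; the sample data are invented.

def count_unique_indicators(indicators):
    """Group indicators by their full specification tuple and count distinct ones."""
    seen = set()
    for ind in indicators:
        spec = (ind["numerator"], ind["denominator"],
                ind["target_population"], ind["time_period"])
        seen.add(spec)
    return len(seen)

indicators = [
    {"numerator": "CVC-related infections", "denominator": "CVC-days",
     "target_population": "adult ICU", "time_period": "monthly"},
    {"numerator": "CVC-related infections", "denominator": "CVC-days",
     "target_population": "adult ICU", "time_period": "monthly"},      # identical: aggregated
    {"numerator": "CVC-related infections", "denominator": "CVC-days",
     "target_population": "pediatric ICU", "time_period": "monthly"},  # variant: counted
]

print(count_unique_indicators(indicators))  # 2
```

Under this rule, the two identical adult ICU entries count once, while the pediatric ICU variant counts separately.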
A total of 2,047 articles and documents were identified in the review (1,950 through the electronic search, 55 in the bibliographic references, and 42 on websites). Of these, 142 were selected for complete reading of the text and 47 were included in the study, referring to 14 projects. Eight projects were retrieved through the electronic search 14,15,16,17,18,19,20,21 , four on websites 22,23,24,25 , and two in the references 26,27 . Four projects are from the United States 14,15,19,20 , three from Australia 23,25,27 , two from Canada 21,22 , and one from the Netherlands 16 .
Only four projects aimed specifically to develop patient safety indicators: the study by the American Agency for Healthcare Research and Quality (AHRQ) 14 , the study by the international organization OECD 18 , and the Safety Improvement for Patients in Europe (SimPatIE) project of the European Society for Quality in Healthcare (ESQH) 24 . The fourth is a Canadian study focusing on the medication area 21 .
In the other projects, safety was approached as one of the quality dimensions 15,16,17,19,20,22,23,25,26,27 . Four are national 16,22,25 or international 17 initiatives to monitor the performance and quality of health systems and organizations. Five targeted specific sectors or populations: adult ICUs 15 , pediatric ICUs 19 , children 20 , maternity units 23 , and medications 27 . One project corresponds to an initiative for the development and continuous revision of indicators linked to the accreditation process of an Australian national agency 26 .
Development process of indicators
• Conceptual model
Eleven projects 14,15,17,18,20,24,25,27,28,29,30 presented a definition of patient safety or safety. Although they vary, the definitions share the common notion of reducing the risk of harm to patients caused by health care, as in the definition adopted by Canada: "the potential risks of an intervention or the environment are avoided or minimized" 29 . Although restricted to the hospital setting, the safety concept of the Performance Assessment Tool for Quality Improvement in Hospitals (PATH) project 17 (p. 490) addresses the broadest context, defining safety as a dimension of performance that encompasses the safety of patients, staff, and the environment.
Two projects presented a definition for patient safety indicators 14,24 . In the AHRQ study, patient safety indicators (PSIs) are defined as "specific quality indicators that also reflect the quality of care in hospitals, but focus on aspects of patient safety": potentially preventable complications and adverse events that patients experience as a result of exposure to the health system, and that are amenable to prevention through changes at the system or provider level 31 . For the SimPatIE project 24 , PSIs are quality indicators that "directly or indirectly monitor preventable adverse events."
The scope of AHRQ's work was limited to the identification of potentially avoidable outcomes of the care process, based on the diagnoses registered in hospital administrative databases and coded according to the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). The SimPatIE project included in its proposal both measures of organizational processes that can expose patients to undesirable effects and outcome measures.
Of the projects that approach safety as a dimension of quality, seven presented the conceptual models used 15,17,22,25,27,28,30 . Two 15,30 organized indicators according to the dimensions proposed by the United States Institute of Medicine (IOM) 32 : safety, effectiveness, patient-centered care, timeliness, efficiency, and equity (Table 3). The Canadian health indicators project 22 and the accrediting agency Australian Council on Healthcare Standards (ACHS) 28,33 organize indicators according to the performance evaluation dimensions of their respective national health systems. The dimensions of safety, effectiveness, appropriateness, efficiency, accessibility, continuity, and competence/capability are common to the Canadian and Australian models. The Canadian model also includes the dimension of acceptability, and the Australian model, the dimensions of responsiveness and sustainability.
It is interesting to note that the Australian Institute of Health and Welfare (AIHW), in developing its proposed performance indicators 25 , established the following axes to organize the indicators, based on the Australian conceptual model and national health priorities: better health; focus on prevention; access (fair and timely care); high quality of care (appropriate and safe); integration and continuity of care; patient care; efficiency; sustainability; and equity.
Two other categorizations are those of the PATH project 17 , which includes the dimensions of safety, clinical effectiveness, efficiency, staff orientation, responsive governance, and patient-centered care; and that of the Australian medication-specific project, which includes careful selection, appropriate choice, and safe and effective use of drugs 27 . Finally, the OECD project also defined a conceptual model for its indicator program, after the publication of its patient safety indicators, based on the models of member countries such as Canada and Australia 34 .
It was observed that the projects use a variety of terms to describe the dimensions of quality, and the definitions used for each sometimes overlap, making comparison difficult. In general, dimensions such as acceptability, responsiveness, and patient-centered care share a common attribute related to meeting patients' needs and respecting their preferences 17,32,33,34 . Similarly, the continuity, accessibility, and timeliness dimensions encompass common aspects related to the ability of the health system to respond promptly to patients' continuing health needs 22,32,33 .
• Methods and criteria for selection and validation of indicators
All projects mention reviewing information on indicators used by national and international monitoring programs and/or reviewing the literature to retrieve scientific evidence on potential indicators. For eight projects 16,17,18,22,25,26,27,30 , little information was obtained on these reviews. The project to develop indicators for pediatric ICUs 30 issued a public, national request for indicator proposals to allow the participation of different hospitals and other organizations.
Another characteristic common to the 14 projects was the participation of groups of specialists. In eight projects 17,18,22,23,24,25,27,30 , these professionals were involved from the initial phases of indicator development, which in four projects 17,22,24,25 included participation in defining the conceptual models. Eight projects 21,22,23,25,27,30,31,35 used multidisciplinary groups, with the participation of physicians, nurses, pharmacists, and/or other specialists. Nine projects 16,21,22,23,24,25,26,27,30 also mentioned the participation of representatives of private and government agencies, researchers, and/or managers, and six 22,23,24,25,26,27 the participation of consumers through representative organizations. Eight projects sought to ensure the representativeness of the participating groups with respect to aspects such as geographical location, professional practice, or hospital profile 21,22,23,25,26,30,31,35 .
It is worth highlighting the observation made in the maternity indicators project about the limitations found in the literature review, which stemmed from the different levels of clinical knowledge and the diversity of the participants' origins and interests 23 . Although this diversity laid the basis for a broad debate, it also produced inconsistencies in the prioritization and classification of indicators. The project team estimated that it would have been better to separate the groups early in the work, with a list of indicators first identified by clinicians; the opinions of consumers, epidemiologists, and other professionals would then be heard in the final selection decision.
Six projects 23,27,30,31,35,36 mentioned some additional evaluation of the indicators to support the analysis by the expert groups and the selection of the indicators. In the two projects developed by the AHRQ, data from administrative databases were analyzed to help select and refine the specifications of the indicators 31,35 and also to explore potential biases and the relationships among them 35 .
The PATH project 36 carried out a survey in 11 European countries to analyze the relevance of the indicators and the work required to collect the data. Although the results were analyzed cautiously because of sample biases, they were considered important for combining theory with practice and for facilitating the selection of indicators. The Australian maternity indicators project 23 used secondary data to test the robustness of some indicators and to observe their prevalence, in order to evaluate their inclusion in the set to be selected. Field testing in Australian hospitals assisted in the final selection of the set of indicators on medication use 27 . The pediatric ICU indicators project conducted a national Internet survey to obtain approval of the seven final candidate indicators and gather recommendations on them, using an instrument developed by the Joint Commission, the US accrediting agency 30 . A total of 286 representative responses were received from 135 acute care hospitals.
All 14 projects established some formal process for the selection of indicators. The following consensus techniques were used: the modified Delphi method (RAND/UCLA appropriateness method) 14,18,20 , in three projects; variations of the Delphi technique 21,22,23 , also in three; and the nominal group technique with modifications 15,17 , in two. In addition to the Delphi technique, the Canadian health indicators project 22,29 held two national consensus conferences.
There was significant variability in the terms used to denote the selection criteria for the indicators. To classify them, the aspects mentioned by the projects were grouped into three broad categories: relevance and importance of the indicator 15,16,17,18,22,23,24,25,26,27,30,31,35 ; strength of evidence 15,17,18,22,23,24,25,27,30,31,35 (i.e., validity and reliability of the indicators); and feasibility 15,16,17,18,22,23,24,25,26,27,30 . Importance and relevance range from the meaning attributed by different representative groups (professionals, consumers, politicians, and others) to the aspect of care being measured, to the potential of the measure to promote improvements and positive impacts on health, and to the capacity of providers and health systems to act on the results obtained. Feasibility, in turn, ranges from analysis of data availability to assessment of the resources needed to develop data sources and maintain monitoring systems.
Nine projects 16,23,27,29,30,37,38,39,40 mentioned conducting pilot tests; for three of them, no further information was obtained 16,29,30 . The scope and objectives of the analyses varied considerably. In one project 37 , teams from 13 US ICUs evaluated the validity (construct and content) and reliability of potential indicators developed for these units. Independent reviewers evaluated the indicators developed for maternity hospitals in order to analyze their usefulness and statistical validity for monitoring the quality of care 23 , using a method that scored the indicators on a scale of 1 to 4 on 20 preselected items. Another pilot sought to identify training needs and other required resources, assist in the refinement of the indicators, and discuss strategies for large-scale project implementation 39 . The OECD project conducted two surveys to investigate the availability and comparability of data for the construction of indicators among participating countries 40,41 .
More recently, the AHRQ has established a pilot project to assess the validity of PSI criteria by reviewing medical records 42 .
• Characteristics of patient safety indicators
A total of 368 indicators were identified. Of these, 83 were excluded: 47 because they were duplicates; 18 because they provided no details about their specifications; 16 because they were intended for outpatient care; and two because they addressed processes/structures specific to certain countries. This left 285 indicators (the complete list is available from the authors), of which 160 (56.1%) were patient safety indicators only and 125 (43.9%) were classified in more than one dimension. The most frequent combination was safety and effectiveness (112; 89.6%), that is, indicators that measure both the safety and the effectiveness of care. The highest concentration of indicators was in the medication area (22.1%). More than half of the indicators convey outcome information (63.9%) (Table 4). The data source was not specified for 62.1% of the indicators, mainly because two projects 16,26 leave the choice of data source for constructing the indicators to the hospitals.
All the indicators identified in this review have face validity, attributed by specialists, and only 64 (24.5%) underwent some additional validation during the development phase. Although the projects sought to construct the indicators based on the best available scientific evidence or on the experience of other monitoring programs, considerable variability was observed in the degree and availability of this evidence 14,17,18,19,20,23,24 . The need for further analysis and validation was particularly indicated when the indicators are intended for comparisons between hospitals 14,20,21,27 .
The development of the 20 patient safety indicators by the AHRQ 14 in 2002 is considered pioneering (Table 5). This work had a significant influence on the development of the PSIs of the OECD 18 and SimPatIE 24 projects. Together, the three projects developed a total of 44 safety indicators, 13 of which originated in the AHRQ study and were selected by both the OECD and the SimPatIE project (Table 5). Since their development, the 20 AHRQ PSIs have been the most studied, used in research, public reporting, and comparisons between hospitals 3,4 . Preliminary results of tests of the criterion validity of five PSIs 42 show variations in their positive predictive values (the percentage of identified cases that actually present the event) (Table 5). The National Quality Forum (NQF) 43 , which assesses indicators through a formal consensus process, endorsed five AHRQ PSIs for adult patient care, listed in Table 5. In addition, the NQF endorsed four pediatric safety indicators developed by the AHRQ 20 : accidental puncture and laceration; decubitus ulcer; iatrogenic pneumothorax in non-neonates; and transfusion reaction 43 .
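The positive predictive value used in these validation studies is a simple proportion: of the hospitalizations flagged by an indicator, the fraction confirmed by chart review to actually present the event. A minimal sketch, with invented counts rather than data from the studies cited:

```python
def positive_predictive_value(true_positives, flagged_cases):
    """Percentage of cases flagged by an indicator that, on chart review,
    actually present the event (true positives / all flagged cases)."""
    return 100.0 * true_positives / flagged_cases

# Illustrative only: of 80 hospitalizations flagged by a PSI,
# chart review confirms the event in 60.
ppv = positive_predictive_value(60, 80)
print(f"PPV = {ppv:.1f}%")  # PPV = 75.0%
```

A low PPV means the indicator flags many false positives, which is why criterion validation against medical records is recommended before using PSIs for hospital comparisons.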
In this review, the use of several terms related to quality and patient safety, with varied definitions, was observed. Polysemy has characterized the patient safety field, prompting the WHO to set up a working group in 2005 to develop a comprehensive, recently published taxonomy 2 . In conducting this review, however, it was found that similar efforts have been undertaken in parallel, such as the development by the SimPatIE project 11 of a vocabulary for the patient safety area. It was also observed that, although countries and international projects seek to build their conceptual models on those of other projects, there are variations in the terms and definitions adopted, which makes comparisons difficult.
A concern observed in all 14 projects was to ensure the feasibility of monitoring programs without imposing excessive costs and workload for data collection and the construction of new information systems. The need to strike a reasonable balance between the feasibility and the validity of the indicators is clearly addressed by the Dutch project 16 , which notes that the quest for a high degree of validity can raise costs and make indicators less interpretable and transparent by requiring increasingly sophisticated adjustment models. Similarly, the projects were concerned with maintaining a small set of indicators that could be managed more easily and used optimally.
The need to improve data quality and/or incorporate new data and/or develop information systems was addressed in eight projects 22,23,24,25,31,35,39,41 . Results of the survey conducted by the PATH project showed significant variations in data quality and availability among the 11 participating countries 36 . Among the issues raised were the continued use of the International Classification of Diseases, 9th Revision, instead of the 10th; the relative or absolute lack of coding of secondary diagnoses; problems in data recording reflecting forms of financing and culture; and the limited linkage between hospitals and primary care. The project adopted the strategy of dividing the indicators into two categories: (a) "core" indicators, whose data are generally available in most European countries, are constructed on the basis of the best available scientific evidence, and are considered valid; and (b) "tailored" indicators, to be used in specific situations, given variability in data availability, use in hospitals of different profiles, or their validity. For reasons similar to PATH's, the SimPatIE project did not recommend a single set of PSIs for use across Europe. Its panel of experts classified the indicators developed into four categories: (1) immediately applicable to all European health systems; (2) immediately applicable to parts of European health systems; (3) implementation in Europe not currently feasible (future use); and (4) not appropriate as a PSI for Europe. Finally, the Dutch project includes indicators that are mandatory for hospitals and others that are not, given the variability in data availability 16 .
The set of 285 indicators identified in this review covers a broad spectrum of the care hospitals provide to patients in different age groups. It includes the areas where safety problems are most frequent, such as medication, hospital infection, and surgical care 1,3,4,5 .
This review also found interest in the use of administrative databases to monitor the quality and safety of health care 14,17,18,20 . The OECD project, recognizing that these data are the most readily available for international comparisons, developed a manual 44 to adapt the AHRQ PSIs to the International Classification of Diseases, 10th Revision, used by several of its member countries. Not all indicators, however, are based on retrospective analysis of administrative data. Standardized instruments for data collection from small samples of records may allow prospective monitoring, considering the feasibility for hospitals and the costs involved in this process 17,24,27,37,38 . Moreover, the poor quality of medical records, a constant obstacle to monitoring, can be circumvented by the use of small surveys as a data source for the construction of indicators 24,45 .
Beyond the multiplicity of concepts and definitions, this review observed that the terms safety, quality and quality’s other dimensions are not yet clearly differentiated, and the boundaries between them remain unclear. Perhaps for this reason, one strategy used is the construction of monitoring systems whose indicators measure several quality dimensions together 15,16,17,23,26,27 . The use of the same indicator to evaluate different aspects of the quality of care does not eliminate the need for clear and standardized concepts, since different actions are evidently required, for example, to act on the efficiency and on the safety of care.
Limitations of the systematic review
This review has some limitations that need to be highlighted. Although we sought to increase the number of projects by using the Internet to retrieve unpublished material, not all possibilities were covered. The websites searched were restricted to those of nationally and internationally representative organizations, which may have excluded regional experiences. The electronic search strategy, although comprehensive, may have missed processes of indicator development in specific areas. Documents in German, French, Italian, Korean and Norwegian retrieved by the review were not analyzed, so initiatives from those countries may have been excluded on account of language. Nevertheless, no other review on strategies for developing patient safety indicators has been published, and we conclude that the present work has brought together an important and comprehensive body of information. In addition, it includes initiatives from countries well known in the area of patient safety, such as the United States, Australia and Canada, as well as from international organizations such as the OECD and WHO.
The results obtained in this review reinforce the importance of developing patient safety indicators based on the best available scientific evidence and of adapting them to the reality of each country to ensure their viability. This process should consider cultural and clinical variations, the availability of information systems, and the capacity of hospitals and health systems to implement effective quality monitoring programs. The participation of different stakeholder groups, such as consumers, in the development of indicators legitimizes these processes and helps meet the different needs and expectations of the various actors involved. In addition, it supports the construction of more understandable and feasible indicators.
Patient safety is part of a broader concept, the quality of health care. Implementing a global, multidimensional program to assess and improve the quality of health care should be a priority for governments at all levels. The projects included in this review developed indicators to allow monitoring and comparison, aiming to guide the design of quality improvement actions. To this end, investing in the development of local capacities and in existing information systems is an essential condition.
CSD Gouvêa and C. Travassos contributed to the writing, critical review of the content and final approval of the version to be published.
To Frederico Tadeu Oliveira Clerk, for his participation in the review and selection of abstracts for the systematic review.
1. World Health Organization/World Alliance for Patient Safety. Summary of the evidence on patient safety: implications for research. The Research Priority Setting Working Group of the World Alliance for Patient Safety. Geneva: World Health Organization; 2008.
2. Runciman W, Hibbert P, Thomson R, Van Der Schaaf T, Sherman H, Lewalle P. Towards an International Classification for Patient Safety: key concepts and terms. Int J Qual Health Care 2009; 21: 18-26.
3. Raleigh VS, Cooper J, Bremner SA, Scobie S. Patient safety indicators for England from hospital administrative data: case-control analysis and comparison with US data. BMJ 2008; 337: a1702.
4. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA 2003; 290: 1868-74.
5. Mendes W, Martins M, Rozenfeld S, Travassos C. The assessment of adverse events in hospitals in Brazil. Int J Qual Health Care 2009; 21: 279-84.
6. Allegranzi B, Storr J, Dziekan G, Leotsakos A, Donaldson L, Pittet D. The First Global Patient Safety Challenge “Clean Care is Safer Care”: from launch to current progress and achievements. J Hosp Infect 2007; 65 Suppl 2: 115-23.
7. Catalan K. JCAHO’S national patient safety goals 2006. J Perianesth Nurs 2006; 21: 6-11.
8. Mainz J. Defining and classifying clinical indicators for quality improvement. Int J Qual Health Care 2003; 15: 523-30.
9. McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med 1998; 14 Suppl 3: 14-21.
10. Campbell SM, Braspenning J, Hutchinson A, Marshall M. Research methods used in developing quality indicators in primary care. Qual Saf Health Care 2002; 11: 358-64.
11. Kristensen S, Mainz J, Bartels P. A patient safety vocabulary. Safety Improvement for Patients in Europe. SimPatIE – Work Package 4; 2007. http://www.simpatie.org/Main/pf1175587453/wp1175588035/wp1176820943 (accessed 16 Sep 2007).
12. Joint Commission on Accreditation of Healthcare Organizations. Primer on indicator development and application. Measuring quality in health care. Chicago: Joint Commission on Accreditation of Healthcare Organizations; 1990.
13. Donabedian A. The quality of care: how can it be assessed? JAMA 1988; 260: 1743-8.
14. Romano PS, Geppert JJ, Davies S, Miller MR, Elixhauser A, McDonald KM. A national profile of patient safety in US hospitals. Health Aff (Millwood) 2003; 22: 154-66.
15. Berenholtz SM, Dorman T, Ngo K, Pronovost PJ. Qualitative review of intensive care unit quality indicators. J Crit Care 2002; 17: 1-12.
16. Berg M, Meijerink Y, Gras M, Goossensen A, Schellekens W, Haeck J, et al. Feasibility first: developing public performance indicators on patient safety and clinical effectiveness for Dutch hospitals. Health Policy 2005; 75: 59-73.
17. Veillard J, Champagne F, Klazinga N, Arah OA, Guisset AL. A performance assessment framework for hospitals: the WHO Regional Office for Europe PATH project. Int J Qual Health Care 2005; 17: 487-96.
18. McLoughlin V, Millar J, Mattke S, France M, Jonsson PM, Somekh D, et al. Selecting indicators for patient safety at the health system level in OECD countries. Int J Qual Health Care 2006; 18 Suppl 1: 14-20.
19. Scanlon MC, Mistry KP, Jeffries HE. Determining pediatric intensive care unit quality indicators for measuring pediatric intensive care unit safety. Pediatr Crit Care Med 2007; 8 Suppl 2: S3-10.
20. McDonald KM, Davies SM, Haberland CA, Geppert JJ, Ku A, Romano PS. Preliminary assessment of pediatric health care quality and patient safety in the United States using readily available administrative data. Pediatrics 2008; 122: 344-25.
21. Nigam R, MacKinnon NJ, UD, Hartnell NR, Levy AR, Gurnham ME, et al. Development of Canadian safety indicators for medication use. Healthc Q 2008; 11 (3 Spec No.): 47-53.
22. Canadian Institute for Health Information. The Health Indicators Project: the next 5 years. Report from the Second Consensus Conference on Population Health Indicators; 2005. http://secure.cihi.ca/cihiweb/dispPage.jsp?cw_page=indicators_e#consensus (accessed 15 Feb 2009).
23. Women’s Hospitals Australasia. Supporting excellence in maternity care: The Core Maternity Indicators Project. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/compubs_InfoStrategy (accessed 24 Feb 2009).
24. Kristensen S, Mainz J, Bartels P. Establishing a set of patient safety indicators. Safety Improvement for Patients in Europe. SImPatIE – Work Package 4; 2007. http://www.simpatie.org/Main/pf1175587453/wp1175588035/wp1176820943 (accessed 16 Sep 2007).
25. Australian Institute of Health and Welfare. A set of performance indicators across the health and aged care system. http://www.aihw.gov.au/indicators/index.cfm (accessed 9 Jan 2009).
26. The Australian Council on Healthcare Standards. Australasian Clinical Indicator Report: 1998-2006. Determining the potential to improve quality of care: 8th edition. http://www.achs.org.au/cireports (accessed 25 Jan 2009).
28. The Australian Council on Healthcare Standards. ACHS Clinical Indicator Summary Guide 2004: an approach to demonstrating the dimensions of quality. Ultimo: Australian Council on Healthcare Standards; 2004.
29. Canadian Institute for Health Information. National Consensus Conference on Population Health Indicators. Final report. http://secure.cihi.ca/cihiweb/dispPage.jsp?cw_page=indicators_e#consensus (accessed 15 Feb 2009).
29. National Association of Children’s Hospitals and Related Institutions/The Child Health Corporation of America/Medical Management Planning, Inc. National pediatric practices & measures. Focus on PICU. http://www.childrenshospitals.net/ (accessed 2 Mar 2009).
30. McDonald KM, Romano PS, Geppert J, Davies SM, Duncan BW, Shojania KG, et al. Measures of patient safety based on hospital administrative data – the patient safety indicators. http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed 9 Aug 2007).
31. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington DC: National Academies Press; 2001.
32. National Health Performance Committee 2004. http://www.aihw.gov.au/publications/index.cfm/title/10085 (accessed 11 Mar 2009).
34. McDonald KM, Romano PS, Davies S, Haberland C, Geppert J, Ku A, et al. Technical report on pediatric health care quality based on hospital administrative data: The Pediatric Quality Indicators. http://www.qualityindicators.ahrq.gov/pdi_download.htm (accessed 3 Dec 2008).
35. Veillard J, Guisset AL, Garcia-Barbero M. Selection of indicators for hospital performance measurement: a report on the 3rd and 4th Workshop. http://www.euro.who.int/document/e84679.pdf (accessed 9 Apr 2009).
36. Pronovost PJ, Berenholtz SM, Ngo K, McDowell M, Holzmueller C, Haraden C, et al. Developing and pilot testing quality indicators in the intensive care unit. J Crit Care 2003; 18: 145-55.
37. Nigam R, MacKinnon NJ, Nguyen T. Validation of Canadian medication-use safety indicators. http://www.patientsafetyinstitute.ca/English/research/cpsiResearchCompetitions/2005/Pages/MacKinnon.aspx (accessed 19 Dec 2008).
38. Groene O. Pilot test of the performance assessment tool for quality improvement in hospitals. Report on WHO Workshop. http://www.euro.who.int/document/hph/path_performanceass.pdf (accessed 10 Mar 2009).
39. Mattke S, Kelley E, Scherer P, Hurst J, Lapetra MLG; The HCQI Expert Group Members. Health Care Quality Indicators Project. Initial indicators report. OECD Health Technical Papers, 22. http://www.oecd.org/health/hcqi (accessed 9 Apr 2009).
40. Armesto SG, Lapetra MLG, Wei L, Kelley E; The Members of the HCQI Expert Group. Health Care Quality Indicators Project 2006. Data collection update report. OECD Health Working Papers, 29. http://www.oecd.org/health/hcqi (accessed 9 Sep 2009).
41. Lessons from the AHRQ PSI Validation Pilot Project. Slide presentation from the AHRQ 2008 Annual Conference (text version). http://www.ahrq.gov/about/annualmtg08/091008slides/Romano.htm (accessed 9 Apr 2009).
42. National Quality Forum. National voluntary consensus standards for hospital care 2007: performance measures. A consensus report. http://www.qualityforum.org/Measures_List.aspx (accessed 4 May 2009).
43. Drösler S. Facilitating cross-national comparisons of indicators for patient safety at the health system level in the OECD countries. OECD Health Technical Papers, 19. http://www.oecd.org/health/hcqi (accessed 9 Sep 2009).
44. WHO Regional Office for Europe. Performance assessment tool for quality improvement in hospitals. Indicators descriptions (core). http://www.pathqualityproject.eu/ (accessed 26 Apr 2009).
45. The Australian Council on Healthcare Standards. Hospital-wide clinical indicators. Clinical indicator users’ manual. Version 10 for use in 2007. http://www.achs.org.au/pdf/HOSPITAL_WIDE_INDICATORS_Example.pdf (accessed 15 Mar 2009).
46. The Australian Council on Healthcare Standards. Gynecology version 6. ACHS clinical indicator users’ handbook 2008. Version 6 for use in 2008. http://www.ranzcog.edu.au/fellows/pdfs/Gynaecology_Indicators.pdf (accessed 10 Mar 2009).
47. The Australian Council on Healthcare Standards. Obstetrics version 6. ACHS clinical indicator users’ manual 2008. Version 6 for use in 2008. http://www.ranzcog.edu.au/fellows/pdfs/Obstetric_Indicators.pdf (accessed 10 Mar 2009).
48. AHRQ Quality Indicators. Patient safety indicators: technical specifications. Version 3.2. http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed 15 Dec 2008).
50. Agency for Healthcare Research and Quality. Quality indicators. Patient Safety Quality Indicators Composite Measure Workgroup final report. http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed 15 Dec 2008).
52. World Health Organization. 1st Workshop on Pilot Implementation of the Performance Assessment Tool for Quality Improvement in Hospitals. http://www.euro.who.int/document/E84680.pdf (accessed 10 Mar 2009).
53. Millar J, Mattke S; The Members of the OECD Safety Panel. Selecting indicators for patient safety at the level of health systems in OECD countries. OECD Health Technical Papers, 18. http://www.oecd.org/dataoecd/53/26/33878001.pdf (accessed 9 Apr 2009).
54. AHRQ Quality Indicators. Pediatric quality indicators: technical specifications. Version 3.2. http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed 15 Dec 2008).
55. Agency for Healthcare Research and Quality. Pediatric Quality Indicators Composite Measure Workgroup final report. http://www.qualityindicators.ahrq.gov/pdi_download.htm (accessed 15 Dec 2008).
56. Kristensen S, Mainz J, Bartels P. Catalog of Patient Safety Indicators. Safety Improvement for Patients in Europe. SImPatIE – Work Package 4. http://www.simpatie.org/Main/pf1175587453/wp1175588035/wp1176820943 (accessed 16 Sep 2007).