
Bridging language barriers in developing valid health policy research tools: insights from the translation and validation process of the SHEMESH questionnaire | Israel Journal of Health Policy Research


The lack of validated Hebrew-language research tools for innovations in the clinical setting

In many countries where English is not the national language, validated research tools require translation. However, translated versions usually have not undergone a validation process of their own, which may threaten the validity of information collected using these tools [1]. In Israel, for instance, there is a general lack of validated Hebrew-language research tools for examining innovations in clinical settings, and the ones that exist have not undergone a complete validation process.

For example, Tal et al. (2019) examined hospital staff members’ perceptions of adopting technological innovations using a translated version of a questionnaire originally developed in Spanish. To validate the questionnaire, they conducted a pre-test among 25 physicians using the Hebrew version [2]. Although the original questionnaire had been validated in Spanish, it is unclear whether the translated version was examined for errors, what procedures were used, what pre-testing was performed before the pilot testing, and whether cultural adaptations were required.

The importance of a validation process

A validation process helps ensure that a research tool is both valid (measuring what it is meant to measure) and reliable (measuring the same way every time) [3]. Validation is particularly important for research tools developed in another language, where special considerations apply, such as attention to cultural differences between the original and translated versions [1, 4]. Translating a research tool without cultural adaptation risks distorting its original meaning [5]. The translation process should ensure that the meaning and structure of the translated version match those of the original. This is not merely a pragmatic and technical task; it also requires professional skill to adapt the research tool culturally [6].

For example, the Beliefs about Medicines Questionnaire examines perceptions of medication, e.g., the necessity of a prescribed medication or concerns about its use. The questionnaire underwent a validation process in its original language [7]. Over the years, it was translated into different languages and administered in diverse cultures. Surprisingly, Garans et al. (2014) showed that some items carried different meanings in the Norwegian, Swedish, and Danish versions, and that each version diverged from the original [8]. This calls into question whether all the versions can truly be called the Beliefs about Medicines Questionnaire, and whether they all measure the same thing in the same way. The example demonstrates the importance of conducting a formal, structured validation process, especially when translating research tools across languages and cultures [7], and for tool translations in the public health field [9].
Validation methodology comprises several methods, each with its own advantages and disadvantages.

Validation process methods: back-translation

One well-known method for validating the translation of a research tool is back-translation. This method involves translating the research tool from the original language into the target language, and then translating that version back into the original language to detect inconsistencies [10].

While back-translation is a helpful tool, it is insufficient to accomplish validation by itself. The back-translation process can itself perpetuate or even create errors [8]. Translations should also consider cultural adaptation, which can be difficult to back-translate correctly [1]. That is, a mere word-for-word translation, even a correct one, does not always convey the intended meaning [11]. Lastly, it is difficult to detect translation failures and discrepancies between different translators [12].

To increase back-translation’s accuracy, it is recommended to use several independent professional translators and to have a multidisciplinary committee compare the original and translated versions and resolve discrepancies [1, 13]. This requires the investigators themselves to have a strong command of both the original and the new language, along with cultural competence.

Validation process methods: cognitive interviews

In addition to back-translation, cognitive interviews should be used to identify and correct errors in research tools, especially questionnaires [14]. Cognitive interviewing is conducted with a small sample and explores how respondents understand and interpret the questions, in order to detect items whose wording may be interpreted differently across respondents rather than meaning the same thing to everyone [15]. The two techniques for cognitive interviews are: (1) think-aloud, a respondent-driven method in which interviewees are asked to share their thoughts as they answer; and/or (2) probing, an interviewer-driven method in which interviewees are asked specific questions about their answers [16]. Cognitive interviews have clear benefits; however, the approach has been criticized for potential biases due to the small sample, the artificial conditions under which the interviews are held, and the lack of a conceptual framework to guide the exploration, which leaves room for interviewer subjectivity [17].

Psychometric validation

The final step of the validation process is pilot testing and psychometric validation. Psychometric validation is the process of examining the statistical properties of a research tool when it is subjected to pilot testing [18], and consists of several parts. The most common procedure is to compute coefficient alpha (Cronbach’s alpha) for the subscales of the instrument. One wants the alpha to be high, e.g., 0.8 or above, which indicates a high degree of internal reliability among the items. It is also possible to administer the test to the same people on different days to compute test–retest reliability, but this is not always necessary. Reporting psychometric indices such as coefficient alpha, and sometimes test–retest reliability, is an important part of being able to claim that a new instrument has been ‘validated’ for use. In addition, factor analyses are often used as part of this validation step [19].
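To make the coefficient-alpha step concrete, the sketch below computes Cronbach’s alpha from a respondents-by-items score matrix using the standard formula, alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). The function name and the sample data are illustrative only and are not drawn from the SHEMESH study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the subscale
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-respondent, 4-item Likert data (1-5 scale)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))
```

In practice, alpha would be computed separately for each subscale of the instrument, e.g., the Evidence, Context, and Facilitation scales of the ORCA.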

In this paper we will present a case report of the translation and validation process of the SHEMESH questionnaire (‘Organizational Readiness to Change Assessment’; In Hebrew: ‘SHE’elon Muchanut Ergunit le’SHinuy’). SHEMESH is an implementation science research tool adapted from the Organizational Readiness to Change Assessment (ORCA), originally in English. Both the ORCA and the SHEMESH are intended to measure how favorable the environment is, at a particular study site, to introduce an innovation in care. In this instance, we plan to use the SHEMESH as part of our study of a change in practice in the psychiatric emergency department, here in Israel.

Implementation science

Over the past three decades, it has been increasingly recognized that it is not enough to develop new treatments or prove their effectiveness [20]. There is a necessary additional step, namely to help ensure that proven treatments are adopted and sustained [21, 22]. This has led to the development of a new field of inquiry called Implementation Science. The purpose of Implementation Science is to develop reproducible ways of facilitating the uniform adoption of proven clinical practices, and of addressing the many barriers that can prevent such adoption [21, 23]. Implementation Science is dedicated to better understanding the complexity of translating interventions into practice in healthcare settings [24]. The number of investigators, publications, and grants in this field has increased many-fold over the years [24, 25], reflecting an increasing interest in it and appreciation of its importance.

PARIHS (Promoting Action on Research Implementation in Health Services): a conceptual model for implementation science

Implementation science theoretical frameworks help simplify the complexity of implementation, in order to focus on key factors to measure and assess their influence [26]. One widely used framework is the Promoting Action on Research Implementation in Health Services (PARIHS) framework [27]. The PARIHS has undergone revisions as a result of empirical and theoretical work [27], but the original PARIHS framework proposed that Successful Implementation (SI) is a function of three inputs: (1) Evidence—end users’ assessments of the evidence strength for the innovation, including their expectations that it will be feasible to use in their setting, and applicable to their patients, and their patients’ unique needs; (2) Context—factors in the environment that support (or resist) the implementation of changes in practice; and (3) Facilitation—efforts of the research team or champions within the clinical team to promote the change. SI is the extent to which the innovation is completely implemented and adopted as part of standard practice, as opposed to incompletely adopted, or resisted [28].
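In the PARIHS literature, this relationship is commonly summarized as a simple function of the three inputs described above:

```latex
% Successful Implementation as a function of Evidence, Context, and Facilitation
\mathrm{SI} = f(\mathrm{E}, \mathrm{C}, \mathrm{F})
```

The notation signals that SI depends jointly on all three inputs, without specifying their exact weights or interactions.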

The need for a validated research tool: ORCA

The Organizational Readiness for Change Assessment (ORCA) questionnaire was developed to assess PARIHS constructs in the context of implementation studies and programs [29]. It was developed within the Veterans Health Administration in the United States, but has since been used in different settings and contexts [30,31,32]. The purpose of ORCA is to help apply the PARIHS framework by providing actionable measures of the key components of the framework. ORCA is based on the three PARIHS constructs and is organized in three sections: Evidence, Context, and Facilitation. ORCA has been validated for use, and its three scales showed acceptable-to-high Cronbach’s alpha reliability (0.74, 0.75, and 0.95, respectively) in the original validation study [29]. While the ORCA has been validated in English, there is currently no Hebrew version of ORCA. In fact, to our knowledge, there are no validated instruments in Hebrew for use as part of implementation science projects.

It should be noted that when one uses the ORCA in any language, the questions must be adapted anew for each new study. That is, the questions’ wording must be changed to reflect the study’s setting and the innovation being adopted. Thus, no two uses of the ORCA are entirely the same. Nevertheless, much of the language in the ORCA is conserved from use to use, and therefore, one can say that the underlying ORCA has been validated [29].

ORCA is intended to support implementation projects by assessing factors related to organizational readiness to change. Measuring ORCA at baseline may help identify which sites will have difficulty achieving SI, or which factors pose challenges at a given site, such as sites where there are weak perceptions of the evidence in favor of the change, or where some aspect of the underlying context is weak. These trouble spots can then potentially be addressed through facilitation. Used over time during the intervention, the ORCA can help reveal whether implementation strategies have helped improve the Evidence or Context constructs, and to what extent improvements differ among study sites [29]. A Facilitation scale can be added to ORCA at the middle or the end of a project, but this is not appropriate at baseline, before facilitation has begun [29].

Goals of this paper

This case report study outlines the translation and validation process of the SHEMESH questionnaire. The SHEMESH will be used in our larger study examining the use of remote video-link for patient triage and admission decisions at psychiatric hospitals. The broader study is organized using the PARIHS (Promoting Action on Research Implementation in Health Services) model, and so we will be using the ORCA (Organizational Readiness for Change Assessment) as part of this project [33]. As mentioned earlier, to the best of our knowledge, there is currently no validated research tool in Hebrew that examines attitudes towards innovations in clinical settings, nor is there a research tool to collect data for implementation studies. Therefore, before using ORCA, we set out to create a Hebrew version and validate our version. We named the ORCA’s Hebrew version SHEMESH, which means ‘sun’ in Hebrew and is the acronym derived from ‘SHE’elon Muchanut Ergunit le’SHinuy’, translating to Questionnaire of Organizational Readiness to Change. Once validated, the SHEMESH would be fit for use not only in our study, but as part of other Implementation Science studies in Israel. Our process, as described here, can also be an example for other Israeli researchers for how they can develop valid tools for their own research, as opposed to relying on informal and unvalidated adaptations that may compromise the validity of their research conclusions.


