Page history last edited by Quan Nha HONG 1 year, 9 months ago

Questions on the MMAT version 2018


1. I would like to request your permission to use the MMAT in a review.

Please feel free to use the MMAT. It is public and free for use in education and research.


2. I would like to use the new version of the MMAT. Where can I find it? 

The latest version of the MMAT can be downloaded from this page: Download the MMAT.


3. In the 2011 version of the MMAT, you provided readers with a scoring metric, but you discourage its use in the new version. Can I still use the metric, or should I avoid applying it altogether?

In the current literature on critical appraisal, the use of an overall score is discouraged because it is not informative: a single number does not reveal which aspects of a study are problematic. For that reason, we suggest in the new version avoiding an overall score. That being said, we understand that it might be easier to present a number in a paper, and there is no single right way to present the results. If you feel that it is better to present a single number, or the reviewers of your paper ask for it, you can use one. However, do not limit yourself to presenting a single number without any description of the quality. We often see people presenting a global score and nothing else in the results, discussion, or description of included studies, which raises the question of why the quality appraisal was performed at all. See Question 13 below for more information about scoring.


4. Do you have a recommendation for what qualifies a study as low or high quality?

We have not studied cut-off values that could characterize low- vs. high-quality studies. The categories are arbitrary. We have seen some papers with 2 categories (e.g., lower vs. higher quality) and others with 3 (e.g., low, medium, and high). What is important is that you clearly indicate how you interpreted and used the results of the appraisal in your review (transparency); e.g., the appraisal can be used to compare results with vs. without lower-quality studies.


5. Does a study have to be mixed methods to use the MMAT?

No. The MMAT was developed for reviews that combine qualitative, quantitative and/or mixed methods studies. Thus, the MMAT can be used to appraise the quality of qualitative studies, quantitative studies and mixed methods studies. 


6. There is a comment box in the form. Should a researcher write a comment if the answer is "No" or "Can't tell"?

The comment box is useful for adding a note to justify your ratings. This is especially useful when there is more than one reviewer and you have to discuss discrepancies.


7. Why is the MMAT preferable to other critical appraisal tools?

The MMAT was developed to be used when conducting a systematic mixed studies review (i.e., systematic reviews that include qualitative, quantitative and/or mixed methods studies). If your review is not mixed, it might be difficult to justify its use. There are currently more than 500 critical appraisal tools. Several criteria have been suggested for choosing an appropriate tool such as tool purpose, tool development, validity, reliability, available guide, etc. (e.g. see Bai et al. (2012). Quality assessment tools project report. Ottawa, ON: Canadian Agency for Drugs and Technologies in Health; Crowe, M., & Sheppard, L. (2011). A review of critical appraisal tools show they lack rigor: Alternative tool structure is proposed. Journal of Clinical Epidemiology, 64(1), 79-89. doi:10.1016/j.jclinepi.2010.02.008; Deeks et al. (2003). Evaluating non-randomised intervention studies. Health Technology Assessment, 7(27), i-186. doi:10.3310/hta7270). We are also developing a website on critical appraisal tools: https://www.catevaluation.ca/.


8. Do I need to rate the 2 screening questions?

The screening questions are used to exclude non-empirical papers. Since the MMAT focuses on methodological quality, its criteria are not applicable to non-empirical papers; the two screening questions make sure that the MMAT can be used at all. Some reviewers have contacted us to ask whether they could skip these two screening questions because their inclusion criteria were already limited to empirical studies, so all included studies would be rated "Yes" on both questions and they did not see the relevance of answering them.


9. What is sensitivity analysis?

Sensitivity analysis is usually performed during the synthesis stage of a systematic review (after selection, data extraction, and quality appraisal). It is a way of dealing with the results of the quality appraisal. Sensitivity analysis consists of removing the low-quality studies from your synthesis and checking whether the results remain similar. For example, if you do a thematic synthesis in your review, you will identify a list of themes mentioned across the included studies. When performing a sensitivity analysis, you can check which themes were identified in the low-quality studies and whether they differ from the rest. If they do, you should be more cautious about these themes. The same can be done for other synthesis methods such as meta-analysis (e.g., removing the findings from low-quality studies to see if the results of the meta-analysis change). Also, in the interpretation of your findings, you can discuss the impact of these low-quality studies and nuance your recommendations accordingly.
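The meta-analysis variant of this idea can be sketched in a few lines of code. This is a minimal illustration with entirely hypothetical study names, effect sizes, weights, and quality labels (the quality label would come from your MMAT appraisal); it is not an implementation of any particular meta-analytic method.

```python
# Hypothetical data: each study has an effect size, a weight, and a
# quality label assigned by the review team from the MMAT appraisal.
studies = [
    {"name": "Study A", "effect": 0.40, "weight": 10, "quality": "high"},
    {"name": "Study B", "effect": 0.35, "weight": 8,  "quality": "high"},
    {"name": "Study C", "effect": 0.90, "weight": 3,  "quality": "low"},
]

def pooled_effect(studies):
    """Weighted mean effect size across studies."""
    total_weight = sum(s["weight"] for s in studies)
    return sum(s["effect"] * s["weight"] for s in studies) / total_weight

all_studies = pooled_effect(studies)
without_low = pooled_effect([s for s in studies if s["quality"] != "low"])

print(f"All studies:         {all_studies:.3f}")
print(f"Without low quality: {without_low:.3f}")
# If the two estimates differ markedly, be cautious about findings that
# depend on the low-quality studies.
```

The same filter-and-compare pattern applies to a thematic synthesis: list the themes found with and without the low-quality studies and inspect the difference.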


10. How to appraise mixed methods studies with the MMAT?

For appraising the quality of mixed methods studies, we suggest using three categories of MMAT items: category #1, plus one of the quantitative categories (#2, #3, or #4), plus category #5. The individual components are appraised using the appropriate categories (category #1 for the qualitative component, and one of the quantitative categories, #2 to #4, for the quantitative component). Category #5 addresses the mixing of the two components.


11. I have two articles that discuss the same study, and both meet the inclusion criteria. When using the MMAT, should each article be appraised separately, or should both articles be appraised as a single study?

We would suggest considering both as a single study.


12. In item 2.5 (Did the participants adhere to the assigned intervention?), are we focusing on the participants who were not lost to follow-up throughout the intervention, or on the specific components of the intervention that participants adhered to?

Item 2.5 relates to compliance/adherence with the intervention assessed (not loss to follow-up). For example, did the participants complete/receive the intervention that they were assigned to? Were there participants who crossed over from one treatment to another?


13. I would like to present an overall score. What should I do?

Several MMAT users have contacted us for advice to report the results of the MMAT. Please find some suggestions in this document.


14. Can you send me examples of reviews that have used the MMAT?

We have created a Zotero library group with some selected recent papers reporting systematic reviews in which the MMAT 2018 was used and appraisal results presented. You can access this library at this link: https://www.zotero.org/groups/2536629/mmat/library. If you have a new publication, please add it to the library or send us the reference (mixed.methods.appraisal.tool@gmail.com). Thank you.


15. To rate the screening questions, is it acceptable if a paper only presents a research objective?

Yes, some papers will present research objectives instead of research questions. The two screening questions could then read:

S1. Are there clear research questions or clear research objectives?

S2. Do the collected data allow addressing the research questions or research objectives?


16. What should we do if there is variability in criteria interpretation? 

At the beginning of the appraisal process, we suggest that team members independently rate some papers (ideally at least one paper for each MMAT category). Then, we suggest that you meet to compare your ratings and agree on how to interpret the MMAT criteria.


17. What does "Can't tell" mean in the response category? 

The ‘Can’t tell’ response category means that the paper does not report the information needed to answer ‘Yes’ or ‘No’, or that it reports unclear information related to the criterion. Rating ‘Can’t tell’ could lead you to look for companion papers, or to contact authors for more information or clarification when needed.





Questions on the MMAT version 2011


1. Would the MMAT be appropriate for a systematic literature review that identified papers consisting of either a qualitative or quantitative research design (not both within the same study)? 

Yes. The MMAT has been designed for systematic mixed studies reviews that include studies with diverse designs (qualitative, quantitative and mixed methods).


2. When there are multiple publications from a single study, do we critique each publication separately or critique the publications as one study?

If all the publications relate to the same study, they can be assessed as a set of publications (i.e., complete the MMAT only once, not for each article). The MMAT criteria should be applied to all main outcomes reported in the set of articles. For example, for criterion 2.3, 80% or more of complete outcome data should be reported across the set of articles.


3. What is the meaning of “can´t tell”?

Typically, the code ‘I cannot tell’ is associated with papers that do not report the information needed to answer ‘yes’ or ‘no’ for a criterion, or that report unclear information related to this criterion. Coding ‘I cannot tell’ could lead you to look for additional information on the study or to contact authors for more information or clarification when needed.


4. How to compute “can´t tell” in the overall quality score?

As mentioned in our wiki, “For each retained study, an overall quality score may be not informative (in comparison to a descriptive summary using MMAT criteria)”. Indeed, an overall score gives the same ‘weight’ to each criterion, which is an issue. Having said this, we understand that an overall score may be useful to summarize the results of the appraisal and to compare papers. For scoring, ‘I cannot tell’ can be counted as a “no” (0).
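The scoring rule described above can be made concrete in a short sketch. The ratings below are hypothetical, and the percentage scale (one equal share per criterion in a component) is only one possible convention, kept here to show how "I cannot tell" counting as 0 affects the result.

```python
# One rating per MMAT criterion in a component; "can't tell" and "no"
# both contribute 0 to the score under the convention discussed above.
ratings = ["yes", "no", "can't tell", "yes"]

def component_score(ratings):
    """Percentage of criteria rated 'yes'; 'can't tell' counts as 0."""
    met = sum(1 for r in ratings if r == "yes")
    return 100 * met / len(ratings)

print(component_score(ratings))  # 2 of 4 criteria met -> 50.0
```

Whatever convention you choose, report the per-criterion ratings alongside any summary number so readers can see which aspects were problematic.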


5. Is the response option “can´t tell” also the appropriate response option if a criterion is “not applicable”?

In the MMAT, we developed methodological quality criteria for five study types: (1) qualitative, (2) randomized controlled, (3) non-randomized, (4) quantitative descriptive, and (5) mixed methods. If the studies included in your review fall into one of these study types, all the related criteria should be applicable. If you have examples of situations where a ‘not applicable’ code would be relevant, please send them to us so we can see how the tool could be modified accordingly.


6. In the explanations of your items, what should I do when an article meets only some of the criteria? For example, the following description is provided for item 1.1:

1.1. Are the sources of qualitative data (archives, documents, informants, observations) relevant to address the research question (objective)?

E.g., consider whether (a) the selection of the participants is clear and appropriate for collecting relevant and rich data; and (b) reasons why certain potential participants chose not to participate are explained.

Certain studies meet criterion (a) very well but do not appear to meet criterion (b), i.e., they do not state reasons why certain potential participants may have chosen not to participate. In this case, should I rate this item as “yes” or “no”?

The explanations provided in the manual are only examples. Hence, for item 1.1, a reviewer might decide that the sources of data are relevant when the selection of the participants is clear. Reviewers can also use other criteria not found in the manual but relevant for their topic. What is important is that the item be applied in a consistent way across all included papers. Also, we suggest that the two reviewers meet before reviewing the papers to agree on how to rate each item.


7. Regarding item 2.3 (Are there complete outcome data (80% or above)?), how should we rate this item when there is more than 80% of data for the primary outcomes but not for the secondary outcomes?

There are two possible approaches to rating this item: (1) consider only the primary outcomes of the study, or (2) consider only the outcome(s) of interest in the systematic mixed studies review (for example, if your project is on the re-hospitalization of patients enrolled in a home-based intervention, you can decide to look only at the re-hospitalization outcome even if it is a secondary outcome in the study). The evaluators should establish their approach before starting the appraisal.


8. How should we calculate the withdrawal/drop-out rate to check whether it is below 20% (item 2.4)?

The withdrawal/drop-out rate should be calculated for each follow-up in each group and the denominator should be the number of randomized patients in each group. 


9. We are encountering several studies for which it is difficult to determine whether they belong to category “4. Quantitative descriptive” or “3. Quantitative non-randomized”. That is, there are several studies with a single-group design (surveys) in which categorical variables are regressed on the outcome variable. Under some definitions of cross-sectional studies, these would be cross-sectional and thus fall under the MMAT’s quantitative non-randomized category. However, as we read the MMAT criteria for cross-sectional studies, we have the impression that you are looking for studies in which existing or created groups, with or without an intervention or exposure, are compared. Would you recommend discussing these single-group survey papers using regression under the header of descriptive studies or of quantitative non-randomized studies?

Cross-sectional designs are in a grey zone in the MMAT, and we will try to clarify this in the update of the tool. Some cross-sectional studies can fall in the quantitative non-randomized category, since they describe a population and measure the likelihood and significance of relationships between variables (‘cross-sectional analytic study’ as defined in the MMAT). Our suggestion is to use the non-randomized set for analytical studies and the descriptive set for purely descriptive surveys (e.g., prevalence/incidence studies). We have discussed this issue with others, and opinions are divided. You can report this limitation in your results. Obviously, it is important to use the same set of MMAT criteria for all papers with a similar design in your review.


10. Why is ‘complete outcome data’ not a quality requirement for descriptive studies as it is for quantitative non-randomized studies (i.e., compare items 3.4 and 4.4)? We are eager to know the reasoning behind this choice so as to better appraise the studies.

The difference between items 3.4 and 4.4 is the ‘outcome data’ (included in 3.4, but not in 4.4). For descriptive studies, we generally talk about response rate, not outcome data (see references in the manual, e.g., Draugalis et al., 2008). Outcome data are generally collected to test effectiveness or associations between variables. Hence, we did not include ‘complete outcome data’ as a methodological criterion for descriptive studies.


11. Our question concerns item 3.4 of the MMAT: “Are there complete outcome data (80% or above), and, when applicable, an acceptable response rate (60% or above), or an acceptable follow-up rate for cohort studies (depending on the duration of follow-up)?” Would it be possible to give an example of when a response rate is ‘not applicable’? It is hard for me to think of one.

Response rates are not appropriate when the data are administrative data (e.g., patient files), or when a large convenience sample is sufficient for the analysis (e.g., systematic sampling until the sample size required by the power calculation is reached). Hence, for several designs, such as case-control or quasi-experimental studies, we would not talk about a response rate per se, but rather about complete outcome data.


12. For criterion 3.4, do outcome data refer to the percentage of complete data among the participants who responded to the questionnaire, or among all participants who were invited to participate (so that non-response is integrated into the outcome-data percentage)?

Having complete or incomplete outcome data is linked with the concepts of attrition, exclusion, and missing data. There are several reasons why data might not be collected for the whole sample (such as withdrawal, loss to follow-up, participants failing to complete a questionnaire, enrolled participants later found ineligible, …).


13. Is the "Mixed Methods Appraisal Tool (MMAT) – Version 2011" the latest version of the MMAT?

Yes. We are currently working on updating this tool. 


14. I would like to request permission to use the MMAT in my systematic review. 

This is a public resource; you can use it. Please cite the latest version of the MMAT.


15. In order to categorize articles into high and low quality, which cut-off value do you use (e.g., is 50% low or high quality)?

There is no rule, as it depends on each review’s context: some mixed studies reviews include almost only “high quality” studies (very few “low quality”), while others include almost only “low quality” studies (very few “high quality”). Remember that “low” and “high” are based on the appraisal of methodological quality, which in turn depends on what is reported in the articles. Thus, we suggest comparing results from studies with lower vs. higher MMAT scores; there is no absolute ‘high’ or absolute ‘low’. You can decide and describe, for your review’s context, what ‘lower’ and ‘higher’ scores mean, as this depends on the studies included in your review, and then compare results from ‘lower score’ and ‘higher score’ studies (similar to a sensitivity analysis comparing older vs. newer studies).


16. How to get an overall score for mixed methods studies? 

Since the mixed methods component only has 3 items (compared to 4 in the other components), a 0 in MM would be equivalent to 1 in QUAL or QUAN (see table below). The numbers in the table correspond to items rated “yes”.  

 Here are some examples of overall scorings for mixed methods studies:  

17. We included single-group pre-post study designs in our review. We are not sure which category to choose in the MMAT (i.e., non-randomized or descriptive study). If we choose the non-randomized category, how should we interpret item 3.3 on the comparability of groups?

This question has been raised by several MMAT users. Some have decided to remove item 3.3 on the comparability of groups, or to replace it with another item they judge important; others have interpreted item 3.3 for single-group pre-post designs by checking whether the pre- and post-test participants are comparable (e.g., whether there is a high drop-out rate at post-test); and others have used the descriptive set of criteria (since the absence of a comparison group is a common characteristic of descriptive studies). Appraisal is a value judgment, and the interpretation of the items may vary with the context of the review (types of studies). We usually recommend that the research team agree on how to interpret the items (based on a clear rationale) and apply them uniformly across all included studies.


18. Can you send me a list of the papers that have cited and used the MMAT?

Here is a non-exhaustive list of papers: 

1. Akhlaq, A., A. Sheikh, and C. Pagliari, Barriers and facilitators to health information exchange in low-and middle-income country settings: a systematic review protocol. Journal of Innovation in Health Informatics, 2015. 22(2): p. 284-292.

2. Anderiesen, H., et al., A systematic review–physical activity in dementia: The influence of the nursing home environment. Applied ergonomics, 2014. 45(6): p. 1678-1686.

3. Asghari, S., et al., HIV Point-of-Care Testing in Canadian Settings.

4. Ball, L., et al., Effect of nutrition care provided by primary health professionals on adults’ dietary behaviours: a systematic review. Family practice, 2015: p. cmv067.

5. Belanger, E., C. Rodríguez, and D. Groleau, Shared decision-making in palliative care: a systematic mixed studies review using narrative synthesis. Palliative Medicine, 2011: p. 0269216310389348.

6. Benjamin, K. and T.T. Donnelly, Barriers and facilitators influencing the physical activity of Arabic adults: A literature review. Avicenna, 2013: p. 8.

7. Bishop, F.L. and M.M. Holmes, Mixed methods in CAM research: a systematic review of studies published in 2012. Evidence-Based Complementary and Alternative Medicine, 2013.

8. Blondell, S.J., et al., Barriers and Facilitators to HIV Testing in Migrants in High-Income Countries: A Systematic Review. AIDS and Behavior, 2015: p. 1-13.

9. Bluebond-Langner, M., et al., Preferred place of death for children and young people with life-limiting and life-threatening conditions: a systematic review of the literature and recommendations for future inquiry and policy. Palliative medicine, 2013. 27(8): p. 705-713.

10. Boerleider, A.W., et al., Factors affecting the use of prenatal care by non-western women in industrialized western countries: a systematic review. BMC pregnancy and childbirth, 2013. 13(1): p. 81.

11. Brom, L., et al., Congruence between patients’ preferred and perceived participation in medical decision-making: a review of the literature. BMC medical informatics and decision making, 2014. 14(1): p. 25.

12. Brunero, S., Y.H. Jeon, and K. Foster, Mental health education programmes for generalist health professionals: An integrative review. International journal of mental health nursing, 2012. 21(5): p. 428-444.

13. Busetto, L., K.G. Luijkx, and H.J.M. Vrijhoef, Implementation of integrated care for type 2 diabetes: a protocol for mixed methods research. International journal of integrated care, 2014. 14.

14. Busetto, L., et al., Intervention types and outcomes of integrated care for diabetes mellitus type 2: a systematic review. Journal of evaluation in clinical practice, 2016.

15. Cabieses, B., et al., A systematic review on the development of asthma and allergic diseases in relation to international immigration: the leading role of the environment confirmed. PLOS, 2014.

16. Chan, Z.C., et al., Gender differences in the academic and clinical performances of undergraduate nursing students: A systematic review. Nurse education today, 2014. 34(3): p. 377-388.

17. Dagenais, C., et al., Knowledge transfer on complex social interventions in public health: a scoping study. PLOS, 2013.

18. Dahan‐Oliel, N., et al., Quality of life in osteogenesis imperfecta: A mixed‐methods systematic review. American Journal of Medical Genetics Part A, 2016. 170(1): p. 62-76.

19. Davis, K.G. and S.E. Kotowski, Prevalence of Musculoskeletal Disorders for Nurses in Hospitals, Long-Term Care Facilities, and Home Health Care A Comprehensive Review. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2015: p. 0018720815581933.

20. Dawson, A., et al., Collaborative approaches towards building midwifery capacity in low income countries: A review of experiences. Midwifery, 2014. 30(4): p. 391-402.

21. Dawson, A. and D. Jackson, The primary health care service experiences and needs of homeless youth: A narrative synthesis of current evidence. Contemporary nurse, 2013. 44(1): p. 62-75.

22. Dawson, A., et al., Improving Access to Emergency Contraception Pills through Strengthening Service Delivery and Demand Generation: A Systematic Review of Current Evidence in Low and Middle-Income Countries. PLOS, 2014.

23. Dawson, A., et al., Workforce interventions to improve access to emergency contraception pills: a systematic review of current evidence in low-and middle-income countries and recommendations for improving performance. BMC health services research, 2015. 15(1): p. 180.

24. Dawson, A.J., et al., Task shifting and sharing in maternal and reproductive health in low-income countries: a narrative synthesis of current evidence. Health policy and planning, 2013: p. czt026.

25. Dawson, A.J. and C.S. Homer, How does the mining industry contribute to sexual and reproductive health in developing countries? A narrative synthesis of current evidence to inform practice. Journal of clinical nursing, 2013. 22(23-24): p. 3597-3609.

26. de Knegt, N.C., et al., Behavioral pain indicators in people with intellectual disabilities: a systematic review. The Journal of Pain, 2013. 14(9): p. 885-896.


28. Dheensa, S., et al., Health-care professionals' responsibility to patients' relatives in genetic medicine: a systematic review and synthesis of empirical research. Genetics in Medicine, 2015.

29. Donovan, L.A., et al., Hospital-based bereavement services following the death of a child: A mixed study review. Palliative medicine, 2015. 29(3): p. 193-210.

30. Drageset, J., et al., Cancer-Related Pain and Symptoms Among Nursing Home Residents: A Systematic Review. Journal of pain and symptom management, 2014. 48(4): p. 699-710. e1.

31. Druetz, T., et al., The community case management of pneumonia in Africa: a review of the evidence. Health policy and planning, 2015. 30(2): p. 253-266.

32. Durham, J. and A. Bains, Research protocol: a realist synthesis of contestability in community-based mental health markets. Systematic reviews, 2015. 4(1): p. 32.

33. Ellis, S., et al., Supporting children facing a parent's cancer diagnosis: a systematic review of children's psychosocial needs and existing interventions. European journal of cancer care, 2016.

34. Engels, G., et al., Clinical pain in schizophrenia: a systematic review. The Journal of Pain, 2014. 15(5): p. 457-467.

35. Frantzen, K.K., et al., Parental Self-perception in the Autism Spectrum Disorder Literature: a Systematic Mixed Studies Review. Review Journal of Autism and Developmental Disorders, 2015: p. 1-19.

36. Friedrichs, A., et al., Patient Preferences and Shared Decision Making in the Treatment of Substance Use Disorders: A Systematic Review of the Literature. PloS one, 2016. 11(1).

37. Gagnon, M.-P., et al., Systematic review of factors influencing the adoption of information and communication technologies by healthcare professionals. Journal of medical systems, 2012. 36(1): p. 241-277.

38. Gagnon, M.-P., et al., Barriers and facilitators to implementing electronic prescription: a systematic review of user groups' perceptions. Journal of the American Medical Informatics Association, 2014. 21(3): p. 535-541.

39. Grawburg, M., et al., A systematic review of the positive outcomes for family members of people with aphasia. Evidence-Based Communication Assessment and Intervention, 2012. 6(3): p. 135-149.

40. Grawburg, M., et al., Third-party disability in family members of people with aphasia: a systematic review. Disability and rehabilitation, 2013. 35(16): p. 1324-1341.

41. Henneghan, A., Modifiable factors and cognitive dysfunction in breast cancer survivors: a mixed-method systematic review. Supportive Care in Cancer, 2016. 24(1): p. 481-497.

42. Humphries, S., et al., Barriers and facilitators to evidence-use in program management: a systematic review of the literature. BMC health services research, 2014. 14(1): p. 171.

43. Hutchinson, M., et al., Deriving consensus on the characteristics of advanced practice nursing: meta-summary of more than 2 decades of research. Nursing research, 2014. 63(2): p. 116-128.

44. Igwesi-Chidobe, C.N., E.L. Godfrey, and A.P. Kengne, Effective components of exercise and physical activity-related behaviour-change interventions for chronic non-communicable diseases in Africa: protocol for a systematic mixed studies review with meta-analysis. BMJ open, 2015. 5(8): p. e008036.

45. Jais, C., et al., How can human factors be used in the design of dementia care environments? 2015.

46. Johnson, M., et al., A systematic review of speech recognition technology in health care. BMC medical informatics and decision making, 2014. 14(1): p. 94.

47. Joseph, A., Y.-S. Choi, and X. Quan, Impact of the Physical Environment of Residential Health, Care, and Support Facilities (RHCSF) on Staff and Residents A Systematic Review of the Literature. Environment and Behavior, 2015: p. 0013916515597027.

48. Kanagasabai, P.S., et al., Association between motor functioning and leisure participation of children with physical disability: an integrative review. Developmental Medicine & Child Neurology, 2014. 56(12): p. 1147-1162.

49. Khanassov, V., I. Vedel, and P. Pluye, Barriers to implementation of case management for patients with dementia: a systematic mixed studies review. The Annals of Family Medicine, 2014. 12(5): p. 456-465.

50. Khanassov, V., I. Vedel, and P. Pluye, Case management for dementia in primary health care: a systematic mixed studies review based on the diffusion of innovation model. Clinical interventions in aging, 2014. 9: p. 915.

51. Lamont, S., Y.H. Jeon, and M. Chiarella, Assessing patient capacity to consent to treatment: an integrative review of instruments and tools. Journal of clinical nursing, 2013. 22(17-18): p. 2387-2403.

52. Lamont, S., Y.-H. Jeon, and M. Chiarella, Health-care professionals’ knowledge, attitudes and behaviours relating to patient capacity to consent to treatment An integrative review. Nursing ethics, 2013. 20(6): p. 684-707.

53. Langston, B., et al., The prevalence and severity of fatigue in men with prostate cancer: a systematic review of the literature. Supportive Care in Cancer, 2013. 21(6): p. 1761-1771.

54. Leblanc, J., et al., The role of nurses in HIV screening in health care facilities: A systematic review. International journal of nursing studies, 2015.

55. Lee, K., et al., The use of movement-based interventions with children diagnosed with autism for psychosocial outcomes—A scoping review. Research in Autism Spectrum Disorders, 2016. 24: p. 52-67.

56. Leeman, J., et al., What strategies are used to build practitioners' capacity to implement community-based interventions and are they effective?: a systematic review. Implementation Science, 2015. 10(1): p. 80.

57. Lewis, K.B., D. Stacey, and D.D. Matlock, Making Decisions About Implantable Cardioverter-Defibrillators from Implantation to End of Life: An Integrative Review of Patients’ Perspectives. The Patient-Patient-Centered Outcomes Research, 2014. 7(3): p. 243-260.

58. Li, H., et al., What are the similarities and differences in structure and function among the three main models of community health centers in China: a systematic review. BMC health services research, 2015. 15(1): p. 1.

59. Macpherson, E., C. Dhaliwal, and J. Richardson, The relationship between the physical functioning of older adults and their Use of a personal health record: a systematic review. Current Geriatrics Reports, 2014. 3(3): p. 142-154.

60. Madhani, F.I., et al., An Integrative Review of the Methods Used to Research the Prevalence of Violence against Women in Pakistan. Advances in Nursing, 2014. 2014.

61. Marcus, R. and E. Page, Economic strengthening activities in child protection interventions: an adapted systematic review. ODI-Oak Foundation, 2013.

62. Marcus, R. and E. Page, Changing discriminatory norms affecting adolescent girls through communication activities. 2014.

63. Martin, S., et al., Effectiveness and impact of networked communication interventions in young people with mental health conditions: A systematic review. Patient education and counseling, 2011. 85(2): p. e108-e119.

64. McCann, D., R. Bull, and T. Winzenberg, Sleep Deprivation in Parents Caring for Children With Complex Needs at Home: A Mixed Methods Systematic Review. Journal of family nursing, 2014: p. 1074840714562026.

65. McCloughen, A., et al., Physical health and wellbeing of emerging and young adults with mental illness: An integrative review of international literature. International Journal of Mental Health Nursing, 2012. 21(3): p. 274-288.

66. McGinn, C.A., et al., Comparison of user groups' perspectives of barriers and facilitators to implementing electronic health records: a systematic review. BMC medicine, 2011. 9(1): p. 46.

67. McInnes, S., et al., An integrative review of facilitators and barriers influencing collaboration and teamwork between general practitioners and nurses working in general practice. Journal of advanced nursing, 2015.

68. McLachlan, S.-A., A. Clements, and J. Austoker, Patients’ experiences and reported barriers to colonoscopy in the screening context—a systematic review of the literature. Patient education and counseling, 2012. 86(2): p. 137-146.

69. Moeller-Saxone, K., et al., Promoting resilience in adults with experience of intimate partner violence or child maltreatment: a narrative synthesis of evidence across settings. Journal of public health, 2014: p. fdu030.

70. Morton, K.L., et al., The school environment and adolescent physical activity and sedentary behaviour: a mixed-studies systematic review. Obesity Reviews, 2016. 17(2): p. 142-158.

71. Mpofu, E., et al., Socio-structural influences on the work participation of refugees: an exploratory systematic mixed studies review. Vulnerable Groups & Inclusion, 2012. 3.

72. Owiti, J.A., et al., Illness perceptions and explanatory models of viral hepatitis B & C among immigrants and refugees: a narrative systematic review. BMC public health, 2015. 15(1): p. 151.

73. Park, C., S. Nam, and R. Whittemore, Incorporating Cultural Perspectives into Diabetes Self-Management Programs for East Asian Immigrants: A Mixed-Study Review. Journal of Immigrant and Minority Health, 2015: p. 1-14.

74. Pedersen, V.H., J. Armes, and E. Ream, Perceptions of prostate cancer in Black African and Black Caribbean men: a systematic review of the literature. Psycho‐Oncology, 2012. 21(5): p. 457-468.

75. Peek, S.T., et al., Factors influencing acceptance of technology for aging in place: a systematic review. International journal of medical informatics, 2014. 83(4): p. 235-248.

76. Pieper, M.J., et al., Interventions targeting pain or behaviour in dementia: A systematic review. Ageing research reviews, 2013. 12(4): p. 1042-1055.

77. Piil, K., et al., Controlled rehabilitative and supportive care intervention trials in patients with high-grade gliomas and their caregivers: a systematic review. BMJ supportive & palliative care, 2014: p. bmjspcare-2013-000593.

78. Puts, M., et al., Factors influencing adherence to cancer treatment in older adults with cancer: a systematic review. Annals of oncology, 2014. 25(3): p. 564-577.

79. Puts, M.T., et al., A systematic review of factors influencing older adults’ decision to accept or decline cancer treatment. Cancer treatment reviews, 2015. 41(2): p. 197-215.

80. Radhakrishnan, K., et al., Barriers and Facilitators for Sustainability of Tele‐Homecare Programs: A Systematic Review. Health services research, 2016. 51(1): p. 48-75.

81. Randall, D., A. Anderson, and J. Taylor, Protecting children in research: Safer ways to research with children who may be experiencing violence or abuse. Journal of Child Health Care, 2015: p. 1367493515587060.

82. Rehm, R.S., Nursing's contribution to research about parenting children with complex chronic conditions: An integrative review, 2002 to 2012. Nursing outlook, 2013. 61(5): p. 266-290.

83. Reichenpfader, U., et al., Leadership in evidence-based practice: a systematic review. Leadership in Health Services, 2015. 28(4).

84. Renzi, C., K.L. Whitaker, and J. Wardle, Over-reassurance and undersupport after a ‘false alarm’: a systematic review of the impact on subsequent cancer symptom attribution and help seeking. BMJ open, 2015. 5(2): p. e007002.

85. Robert, E., et al., Protocol: a realist review of user fee exemption policies for health services in Africa. BMJ open, 2012. 2(1): p. e000706.

86. Roelofs, T.S., K.G. Luijkx, and P.J. Embregts, Intimacy and sexuality of nursing home residents with dementia: a systematic review. International Psychogeriatrics, 2015. 27(3): p. 367-384.

87. Schadewaldt, V., et al., Views and experiences of nurse practitioners and medical practitioners with collaborative practice in primary health care–an integrative review. BMC family practice, 2013. 14(1): p. 132.

88. Scott, S.D., et al., A protocol for a systematic review of the use of process evaluations in knowledge translation research. 2014.

89. Shaw, R.L., M. Larkin, and P. Flowers, Expanding the evidence within evidence-based healthcare: thinking about the context, acceptability and feasibility of interventions. Evidence Based Medicine, 2014. 19(6): p. 201-203.

90. Simone, K., et al., What are the features of targeted or system-wide initiatives aimed at increasing diversity in health professions trainees? A BEME systematic review.

91. Sturgess, D., J. Woodhams, and M. Tonkin, Treatment Engagement From the Perspective of the Offender: Reasons for Noncompletion and Completion of Treatment: A Systematic Review. International journal of offender therapy and comparative criminology, 2015: p. 0306624X15586038.

92. Taylor, E. and S. Hignett, Evaluating evidence: defining levels and quality using critical appraisal mixed methods tools. HERD: Health Environments Research & Design Journal, 2014. 7(3): p. 144-151.

93. Taylor, G. and P. Pereznieto, Review of evaluation approaches and methods used by interventions on women and girls’ economic empowerment. Overseas Development Institute, 2014: p. 1-62.

94. Tobiano, G., et al., Patient participation in nursing care on medical wards: An integrative review. International journal of nursing studies, 2015. 52(6): p. 1107-1120.

95. Trépanier, A., et al., Factors associated with intended and effective settlement of nursing students and newly graduated nurses in a rural setting after graduation: A mixed-methods review. International journal of nursing studies, 2013. 50(3): p. 314-325.

96. Tretteteig, S., S. Vatne, and A.M.M. Rokstad, The influence of day care centres for people with dementia on family caregivers: an integrative review of the literature. Aging & mental health, 2015(ahead-of-print): p. 1-13.

97. Truant, T.L., et al., Complementary and alternative medicine (CAM) use in advanced cancer: a systematic review. J Support Oncol, 2013. 11(3): p. 105-13.

98. Uphoff, E., et al., A systematic review of socioeconomic position in relation to asthma and allergic diseases. European Respiratory Journal, 2014: p. ERJ-01145-2014.

99. van der Maaden, T., et al., Antibiotic Use and Associated Factors in Patients with Dementia: A Systematic Review. Drugs & aging, 2015. 32(1): p. 43-56.

100. Van Kessel, G., C. MacDougall, and L. Gibbs, Resilience—Rhetoric to Reality: A Systematic Review of Intervention Studies After Disasters. Disaster medicine and public health preparedness, 2014. 8(5): p. 452-460.

101. Velvin, G., et al., Systematic review of the psychosocial aspects of living with Marfan syndrome. Clinical genetics, 2015. 87(2): p. 109-116.

102. Wheeler, C., et al., Implementation of the Crisis Resolution Team model in adult mental health settings: a systematic review. BMC psychiatry, 2015. 15(1): p. 74.

103. Whitaker, R., et al., Intervention Now To Eliminate Repeat Unintended Pregnancy in Teenagers (INTERUPT): a systematic review of intervention effectiveness and cost-effectiveness, qualitative and realist synthesis of implementation factors and user engagement. BMJ open, 2014. 4(4): p. e004733.

104. Whittemore, R., et al., Psychological experience of parents of children with type 1 diabetes: a systematic mixed-studies review. The Diabetes Educator, 2012. 38(4): p. 562-579.

105. Witteman, H.O., et al., User-centered design and the development of patient decision aids: protocol for a systematic review. 2015.

106. Wu, R.C., et al., Effects of clinical communication interventions in hospitals: a systematic review of information and communication technology adoptions for improved communication between clinicians. International journal of medical informatics, 2012. 81(11): p. 723-732.

107. Zwanikken, P.A., et al., A systematic review of outcome and impact of Master’s in health and health care. BMC medical education, 2013. 13(1): p. 18.
