International Journal of General Medicine, Volume 15

Agreement Between International Radiologists on the Appropriateness and Urgency in Lumbar Spine MRI Referrals

Authors Alanazi AH, Cradock A, Toomey R, Galligan M, Ryan J, Stowe J, Rainford L

Received 23 March 2022

Accepted for publication 13 June 2022

Published 28 July 2022 · Volume 2022:15 · Pages 6315–6324

DOI https://doi.org/10.2147/IJGM.S366653

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Scott Fraser



Ali Hasayan Alanazi,1 Andrea Cradock,1 Rachel Toomey,1 Marie Galligan,2 John Ryan,3 John Stowe,1,† Louise Rainford1

1Radiography and Diagnostic Imaging, University College Dublin, Dublin, Ireland; 2School of Medicine, University College Dublin, Dublin, Ireland; 3Ziltron LTD, Dublin, Ireland

†John Stowe passed away on July 27, 2021

Correspondence: Ali Hasayan Alanazi, South Central Buildings, APT 29, Sandyford, Dublin, Eircode D18 RW02, Ireland, Tel +353 83 378 2878, Email [email protected]

Purpose: To determine how radiologists across health-care jurisdictions internationally assess the appropriateness and urgency levels of lumbar spine magnetic resonance imaging (MRI) referrals.
Patients and Methods: Clinical information was extracted from 203 lumbar spine MRI referrals. Texts were divided into 10 datasets and embedded into software to facilitate the classification process. Participant radiologists were recruited at the Image Perception Lab at the Radiological Society of North America Congress 2019 and through the institution radiology network. Radiologists were asked whether they use referral guidelines in their practice. Radiologists assigned appropriateness and urgency levels based on the referral text. Appropriateness level descriptors were: indicated; indicated but needs more information; or not indicated. Urgency levels were categorized as: urgent, semi-urgent, or not urgent. All cases containing neurological symptoms with/without red flags were extracted, and exact agreement between radiologists’ responses on the indication status was calculated.
Results: Seventy radiologists from 25 countries participated; 42% of participants indicated non-use of referral guidelines. Poor to moderate agreement between radiologists was recorded for appropriateness and referral urgency level decisions. Of responses to cases containing neurological symptoms with/without red flags, 79.6% classed the cases as indicated for scanning.
Conclusion: Despite promotion of referral guidelines, nearly half of participants stated non-usage. Correspondingly, varied agreement levels were found in assigning the appropriateness of the referrals. Appropriateness of referrals with neurological symptoms (with/without red flags) recorded good agreement.

Keywords: back pain, lumbar spine referral, magnetic resonance imaging, examination appropriateness

Introduction

Referral vetting is a fundamental principle for radiation protection and patient safety in radiology departments.1 It involves scrutiny of referred requests by radiologists or nominated specialist imaging radiographers to ensure that the written indications for scanning adhere to the guidelines adopted by the department – often international guidelines, such as the American College of Radiology (ACR) Appropriateness Criteria2 or the Royal College of Radiologists (RCR) guidelines,3 or national and local guidelines. Although these guidelines have been shown to minimize inappropriate scanning,4–7 some have a considerable level of ambiguity, which causes variance in their interpretation.1,8 Several studies have reported various levels of agreement between radiologists or radiographers on the appropriateness of lumbar spine MRI. For example, a study in Spain9 reported substantial agreement (kappa = 0.62) between radiologists in assigning the appropriateness of lumbar spine MRI referrals according to the ACR criteria.2 In Ireland,10 fair agreement (kappa = 0.26) was found between three experienced MR radiographers recruited to assign the appropriateness of 1021 lumbar spine MRI referrals based on the RCR guidelines (iRefer).3

Radiologists, or in a limited number of European countries MRI specialist radiographers, are responsible for vetting MRI requests to ensure referred requests are appropriate. Lumbar spine MRI examination (LSMRI) referrals are of particular interest as demand for LSMRI is rapidly growing, and inappropriate management of patient referrals can impact waiting lists and the appropriate prioritization of patients.11 In addition to avoidable delays in diagnosis and treatment for patients, alternative examinations might be required for those inappropriately referred for LSMRI, resulting in increased workloads, higher health-care costs, and poorer health-care outcomes.12 In contrast, appropriate referral vetting helps reduce unjustified use of diagnostic imaging, improves health-care quality and patient safety, and reduces cost and resource use.13

However, the literature demonstrates variations between radiologists and radiographers from the same institution or country in assigning referrals’ appropriateness, and there remains a paucity of literature on how radiologists with different levels of experience across different health-care jurisdictions internationally assess the appropriateness of lumbar spine MRI referrals.

This paper aims to investigate the extent of variation between radiologists from different countries in assigning the appropriateness and urgency level of a bank of 203 clinical LSMRI referral cases. The study also aims to identify the extent of variation in radiologists’ categorization of the appropriateness of cases with neurological symptoms with/without red flags as stated in the ACR guidelines.2 This paper is novel in its approach as it is the first to investigate this variation in opinion at an international level, which could give insight into how international radiologists interpret written indications and what factors influence their judgements and agreement.

Materials and Methods

Referral Cases

Ethical approval was granted by the institutional ethics committee of University College Dublin and formal confirmation of the participating radiology department was attained as part of the ethics process. To develop a text referral case bank, a public hospital, which is an affiliated university teaching hospital, agreed to participate and facilitate the use of referral texts to be collated for use in the study. Clinical indications and patients’ demographics as well as referring departments were retrospectively extracted from 203 LSMRI referrals in the form of “referral texts” in an Excel sheet. Information pertaining to the local service and patients’ identities was edited to ensure referrals were anonymized. Information pertaining to clinical indications was not edited.

Referral Text

The extracted text data were randomly divided into ten datasets, each containing 20–21 referral texts. Ten datasets of 20–21 cases each were chosen to minimize the time commitment required from participants. The Appendix presents an example of dataset number 4. The datasets were inserted into a password-protected, web-based user interface (Ziltron Ltd., Dublin, Ireland). The data collection tool was accessed via four 4th-generation Apple iPad tablet computers running iOS 10.3.3 (Apple, Cupertino, CA, USA). The resultant data were saved in real time in secure cloud-based storage.

Participating Radiologists

Radiologists from different countries and experience levels volunteered to assess the appropriateness of the collected LSMRI referral texts, as well as the examination urgency level, at the Medical Image Perception Lab at the Radiological Society of North America (RSNA) 2019 conference, which was funded by the National Cancer Institute (NCI). Prior to commencing the research activity, the aim of the study was explained and verbal consent to proceed was obtained from all participants. As part of the consent process, the participants agreed to the use of their classifications and their information in the research, such as years of experience, country, and involvement in referral vetting. Advertisement of the Image Perception Lab research activity was approved by the RSNA scientific committee, and the research activity was promoted at the conference venue.

Each participant was assigned to classify one of the 10 datasets (20–21 cases) and was requested to provide the following demographic details: country of practice, number of years of experience post radiology qualification, and number of reported LSMRI cases per month. Participants were also asked whether guidelines were used in their practice. Resident radiologists, whilst having experience of MRI referral vetting, were categorized as zero years to facilitate recognition of their early-stage professional status within the radiology participant cohorts. The participant then proceeded to view individual text referrals and used a drop-down menu to select whether the referral was indicated, indicated as routine but subject to additional information, or not indicated. If the referral was indicated, a further drop-down menu asked the participant to indicate the urgency level of the case, categorized as: urgent (scan within 48 hours), semi-urgent (scan within four weeks), or routine (scan can wait more than four weeks). The time intervals were chosen arbitrarily based on current practice in the hospital from which the data were extracted.

After the conference finished, the number of participants who had assessed each dataset was unequal, which caused some difficulty in testing agreement between the radiologists across the whole stacked datasets as one score. Another recruitment round therefore took place through the institution radiology network. We contacted 9 MSK radiologists directly, none of whom had been involved in the previous text review. Participants were given access to the Ziltron software in which the datasets were stored and directed to the dataset they were required to complete.

Analysis

All data analysis tests were performed using SPSS (version 26). Different inter-rater agreement tests were applied to calculate the agreement in different sets as follows:

  1. Intra-class correlation coefficient (ICC) based on a mean rating (k = 7), absolute agreement, and a two-way random-effects model was used to determine agreement between participants in each dataset for both appropriateness and urgency level. In addition, ICC with a one-way random-effects model was applied to the whole stacked set of referrals (n = 203) to measure agreement between all radiologists on the appropriateness and urgency level of referrals. Levels of agreement followed those suggested by Koo and Li, in which an ICC below 0.5 indicates poor agreement, 0.5–0.75 moderate, 0.75–0.9 good, and above 0.9 excellent.14
  2. Agreement between experienced participants (board-certified participants) from the same country in each dataset was calculated using ICC with absolute agreement and a two-way random-effects model. The aim was to investigate whether experienced radiologists from the same country who classified the same dataset showed higher agreement than that found between all participants in the same dataset.
  3. Referrals containing neurological symptoms, with and without red flags, were extracted: 17 referrals with both neurological symptoms and red flags (Set A), and 23 referrals with neurological symptoms only (Set B). Percentage agreement was calculated for both Set A and Set B manually by dividing the number of exact agreements in observations (eg, indicated) by the total number of observations (all ratings, including indicated, indicated but needs more information, and not indicated). Percentage agreement was selected so that agreement on each class (eg, indicated) could be calculated separately. Referrals in Set A were also categorized under the ACR red flag categories to calculate percentages of exact agreement on the class “indicated” and urgency level for each category.2
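The agreement measures above can be sketched in code. The following is a minimal illustration, not the authors' SPSS analysis: it implements the single-rater, one-way random-effects ICC (ICC(1)) from standard ANOVA mean squares, the Koo and Li interpretation bands, and the class-specific percentage agreement described in point 3. Function names and the numeric coding of appropriateness categories are illustrative assumptions.

```python
def icc_oneway(ratings):
    """ICC(1): one-way random-effects, single-rater form.

    ratings: list of rows, one per referral; each row holds the scores
    given by the k raters (eg, appropriateness coded 0/1/2).
    """
    n, k = len(ratings), len(ratings[0])
    grand_mean = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects mean squares from one-way ANOVA.
    msb = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)


def koo_li_band(icc):
    """Interpretation bands suggested by Koo and Li (2016)."""
    if icc < 0.5:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.9:
        return "good"
    return "excellent"


def percent_agreement(responses, target):
    """Share of all ratings equal to one class, eg 'indicated'."""
    return 100.0 * sum(r == target for r in responses) / len(responses)
```

Under this banding, for example, `koo_li_band(0.621)` returns `"moderate"` and `koo_li_band(0.464)` returns `"poor"`, consistent with how the whole-dataset appropriateness and urgency ICCs are interpreted in the Results.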

Results

A total of seventy radiologists (n = 70) were recruited to assign referral appropriateness and urgency levels. Of those, 61 were recruited at the Medical Image Perception Lab and the remaining 9 were recruited after the conference. Participants originated from 25 countries, the majority from the United States, UK, Mexico, and Saudi Arabia. Figure 1 shows the number of participants from each country. Overall, 42% of the participants (n = 30) indicated no use of referral guidelines in their daily practice (Table 1). Agreement between radiologists in assigning referral appropriateness varied: poor agreement was found in 3 datasets, moderate in 6 datasets, and only one dataset recorded good agreement (ICC across the ten datasets = −0.019 to 0.834). Agreement on referral appropriateness for all datasets together (n = 203) was moderate, with ICC = 0.621 (95% CI [0.53–0.69]). For referral urgency, poor agreement was found in 6 datasets and moderate agreement in 4 datasets (ICC = −0.410 to 0.699). Poor agreement was also found between the radiologists in assigning the urgency level of the whole set of referrals (ICC = 0.464, 95% CI [0.34–0.57]). Table 2 shows the agreement on the appropriateness and urgency level of LSMRIs in each dataset and in the whole stacked sets together, and Table 1 provides the participants’ demographics and the dataset(s) completed.

Table 1 Demographics of Radiologists Who Participated in Datasets 1–10

Table 2 Inter-Rater Agreement (ICC) on the Appropriateness and Urgency Level of LSMRI Cases

Figure 1 A histogram showing the number of participants and their country of practice.

Agreement levels were higher between experienced radiologists from the same country than the agreements recorded for all experienced and non-experienced radiologists from different countries. Most comparisons recorded moderate agreement, with ICCs ranging from 0.547 to 0.628, and good agreement (ICC = 0.776) was recorded between a pair of experienced participants from France. Table 3 shows the agreement results and participants’ nationalities in the datasets.

Table 3 Agreement Between Experienced Radiologists from the Same Countries

Further analysis of Set A and Set B together (cases with neurological symptoms with/without red flags) recorded a percentage agreement of 79.6% for participant responses classing cases as indicated, 8.2% agreeing that referrals were indicated but needed more information, and 12.1% agreeing that the referrals were not indicated, despite neurological symptoms with and without red flags being present. For referrals with red flags, Set A (n = 17), 91.5% of the participants agreed that the cases were indicated, and 45.3% of the participants agreed that cases with red flags were urgent and needed to be scanned within 48 hours. Referrals including key red flags that conformed with those presented in the ACR guidelines are categorized and displayed in Table 4.

Table 4 ACR Categories and Percentage Agreement on Indicated Judgements and Urgency Levels for Referrals Including Neurological Symptoms and Red Flags

Discussion

This study investigated the variation amongst international radiologists in assigning the appropriateness and urgency level of LSMRI referrals. Varied agreement levels were found in assigning the appropriateness and urgency level of the referrals within the datasets. Moderate agreement on referral appropriateness across the whole stacked datasets was found, with ICC = 0.621 (95% CI [0.53, 0.69]), and poor agreement for urgency level, with ICC = 0.464 (95% CI [0.34, 0.57]). We found good consensus between radiologists (79.6% of responses were indicated) on the appropriateness of the 40 cases (Sets A and B) that included neurological symptoms with and without red flags. However, higher agreement (greater than 91%) was recorded for the indication of cases with red flags in Set A (Table 4).

The overall moderate agreement on referral appropriateness across all datasets is similar to the agreement score reported by Francisco et al;9 however, that study was conducted on a national scale (radiologists were from one country), whilst our study included radiologists from different countries. Within the datasets, most of the ten recorded moderate or poor agreement in assigning the appropriateness of LSMRI referrals, and only one set recorded good agreement. We speculate that this moderate to poor agreement (moderate to high variation) in radiologists’ opinions could be attributed to the differences in experience and country between participants in each dataset (Table 1), the service they come from (public or private), and their involvement in vetting MRI referrals.8,15 Moreover, we found that 42% of the participants stated they did not use referral guidelines in their workplaces, which is also in line with previous studies,16–20 and this lack of adherence to guidelines might also influence the extent of variation among the radiologists’ decisions. Causes of non-adherence to guidelines were not investigated as the study was not designed for this purpose; however, we might speculate on common reasons, such as lack of awareness or lack of guideline availability in workplaces.19 Another reason is that some radiologists rely on their clinical experience when vetting referrals without referring to the guidelines.10

To investigate the influence of factors such as experience and country of practice on the agreement variability found within the datasets, we measured agreement between only the experienced radiologists from the same countries, as presented in Table 3. We found that all agreement levels improved to moderate or good after excluding non-experienced participants and participants from different countries. This finding supports our speculation regarding the influence of differences in experience and country of practice between participants on the agreement found within the datasets.

Regarding the good agreement between all participants (experienced and non-experienced) in dataset 4, we identified that sentences in this set contained phrases clearly indicating the scans’ appropriateness. Randomization was applied using an Excel program; however, despite this process, dataset 4 contained texts that were more likely to achieve higher agreement than the other datasets. For instance, referrals included phrases such as “normal exam” and “non-specific low back pain”, which were classified as not indicated by most of the participants, while other referrals with phrases indicating serious illness, such as “for intervention”, “stenosis on previous MRI”, “reduced saddle sensation”, “urinary incontinence”, “progressively getting worse”, and “vertebral fractures”, were categorized as indicated by all radiologists.

Good agreement was found on the appropriateness of referrals containing clinical indications with neurological symptoms with and without red flag signs (79.6% of responses agreed that these cases were indicated). Most of the disagreement in this dataset came from cases with neurological symptoms but without red flags, where there was a range of opinions regarding indication status. For example, “note degenerative lumbar spine on x-ray with positive nerve tension signs bilaterally, ? Lumbar spine foraminal stenosis” was classified as not indicated by 2 radiologists, indicated but needing additional information by 2 radiologists, and indicated by 3 radiologists. The lack of information provided in some referrals was another cause of disagreement (16 out of 40 referrals), in which some radiologists agreed on the need for scanning but required more information relating to symptom history, physiotherapy, side and location of pain, and imaging protocol. However, in all cases with red flags such as urinary incontinence, bowel incontinence, and back pain not resolved with conservative management, we found consensus, with 100% of these referrals deemed appropriate for scanning. Referrals falling under other red flag categories, such as “global or progressive motor weakness in lower limbs” and “history of cancer”, recorded less consensus that these referrals were indicated, with percentages of 85.7% and 78.5%, respectively.

Investigating the variation between radiologists in assigning the urgency level of the referrals also revealed poor agreement (high variation) in 6 datasets and moderate agreement in 4 datasets. Overall agreement across the urgency classifications of all 203 cases also showed poor agreement. The literature review revealed that no studies have measured variability between radiologists or specialist MRI radiographers in assigning the urgency level of MRI or any other radiology referrals. However, it is possible that this variation exists because there are currently no guidelines or clear definitions for urgency examination timings, and radiologists might be influenced by differences in variables in their practices, such as the level of demand for MRI scanning. For instance, in some non-acute hospital departments, even the least urgent cases could be scanned within 48 hours, whilst in a department under high pressure, the least urgent cases might wait more than 4 weeks.21 In the datasets with moderate agreement (4, 7, 8, and 10), we noticed an abundance of negation phrases written within the referrals, which might have led most of the participants to select a routine scan time; comments included: “no red flags”, “no neuro”, “nil neurology”, “normal neuro”, “no motor features”, “no focal deficit”, and other instructions related to the timing of the scan such as “for annual surveillance”, “repeat up to date MRI”, “interval MRI”, “to be done in 3 months”, and “OPD in 4 months”. These datasets also contained referrals with words indicating the necessity of an urgent examination, such as “vertebra fracture”, “CSF leak”, “reduced saddle sensation” and “urinary incontinence”, for which most of the participants selected an urgent scan time.

With regard to the urgency level of cases that included red flags, we found good agreement between participants on the urgency level of cases with new onset of urinary and fecal symptoms in the context of LBP (Table 4), in which most responses suggested an urgent scan within 48 hours (71.4% agreement on cases with urinary incontinence and 92.8% agreement on cases with fecal incontinence). This finding is in accordance with the recommendations of Bell, Collie and Statham, who advise urgent scanning for patients presenting with both LBP and urinary symptoms.22 Other categories showed varying agreement between participants with no apparent trend, except that there was complete agreement between 7 radiologists that the case with “back pain not resolved with conservative management” could be considered semi-urgent and wait up to 4 weeks. In general, most of the variation in determining urgency level was found within cases stating LBP with a “history of cancer” or “global or progressive motor weakness in the lower limbs”.

This study has some limitations. First, only a few radiologists provided information regarding their opinions when selecting “indicated but subject to additional information”, which would have assisted in understanding what information they deemed missing. Second, the causes of non-adherence to guidelines were not investigated, as we did not ask participants their reasons for non-compliance. Third, it is acknowledged that a greater proportion of participating radiologists originated from the USA; this is to be expected at the RSNA venue, but every effort was made to recruit participants from different countries of practice.

Conclusion

To conclude, we generally report poor to moderate agreement between radiologists from different countries and varying experience levels in assigning the appropriateness and urgency level of LSMRI referrals. Overall moderate agreement on referral appropriateness was recorded for all datasets together. Improved agreement scores were noticed between experienced radiologists from the same countries. High appropriateness agreement was found for referrals with red flag indications. These results and their interpretation highlight differences in radiologists’ opinions, particularly across different health-care environments, and indicate the potential for improvement in the use of referral guidelines. Further research is warranted to investigate the causes of the variability between radiologists in assigning the appropriateness and urgency level of LSMRI referrals, the reasons for non-compliance with guidelines, and the subsequent potential impact on imaging resources and patient management.

Ethics Approval

Ethical approvals were obtained from the relevant institutional review board of University College Dublin (Reference Numbers: LS-E-19-171-Alanazi-Rainford and LS-E-19-69-Alanazi-Rainf).

Acknowledgments

We would like to thank all the participating radiologists who assigned the referrals’ appropriateness.

Funding

No funding was received for this study.

Disclosure

John Ryan is an employee of Ziltron LTD, Dublin, Ireland. The authors report no other potential conflicts of interest in this work.

References

1. Malone J, Guleria R, Craven C, et al. Justification of diagnostic medical exposures: some practical issues. Report of an International Atomic Energy Agency Consultation. Br J Radiol. 2012;85:523–538. doi:10.1259/bjr/42893576

2. Nandini D, Daniel F, Judah B, et al. ACR appropriateness criteria low back pain. J Am Coll Radiol. 2016;13:1069–1078. doi:10.1016/j.jacr.2016.06.008

3. Remedios D, France B, Alexander M. Making the best value of clinical radiology: iRefer Guidelines. Clin Radiol. 2017;72:705–707. doi:10.1016/j.crad.2017.05.009

4. Arye B, Sigal T, Anat M, et al. Preauthorization of CT and MRI examinations: assessment of a managed care preauthorization program based on the ACR appropriateness criteria® and the Royal College of Radiology guidelines. J Am Coll Radiol. 2006;3:851–859. doi:10.1016/j.jacr.2006.04.005

5. Avoundjian T, Risha G, Dorcas Y, et al. Evaluating two measures of lumbar Spine MRI overuse: administrative data versus chart review. J Am Coll Radiol. 2016;13:1057–1066. doi:10.1016/j.jacr.2016.04.013

6. European Society of Radiology. Summary of the proceedings of the international forum 2016: ‘Imaging referral guidelines and clinical decision support - how can radiologists implement imaging referral guidelines in clinical routine?’ Insights Imaging. 2017;8:1–9. doi:10.1007/s13244-016-0523-4

7. Kevin Y W, Christopher J Y, Melissa C, et al. Reducing inappropriate lumbar spine MRI for low back pain: radiology support, communication and alignment network. J Am Coll Radiol. 2018;15:116–122. doi:10.1016/j.jacr.2017.08.005

8. Kristin B L, Ingelin B. Geographical variation in radiological services: a nationwide survey. BMC Health Serv Res. 2007;7:1–11. doi:10.1186/1472-6963-7-1

9. Francisco K, Estanislao A, Ana R, et al. Appropriateness of lumbar spine magnetic resonance imaging in Spain. Eur J Radiol. 2013;82(6):1008–1014. doi:10.1016/j.ejrad.2013.01.017

10. Alanazi A, Cradock A, Alsharif W, Bisset J, Barber J, Rainford L. An investigation of lumbar spine magnetic resonance referrals in two Irish university teaching centres: radiology clinical judgement versus iRefer guideline compliance. Radiography. 2022;28(2):460–465. doi:10.1016/j.radi.2021.12.011

11. McCarthy A, Rainford L, Byrne C, Lohan D, Butler ML. Magnetic resonance imaging (MRI) waiting lists: a snapshot of current service reality. Eur Soc Radiol. 2019:1–13. doi:10.26044/ecr2019/C-0112

12. Hedayat S, Rahim O, Atefeh E, et al. Evidence for policy making: clinical appropriateness study of lumbar spine MRI prescriptions using RAND appropriateness method. Int J Heal Policy Manag. 2013;1:17–21. doi:10.15171/ijhpm.2013.04

13. Timothy WF, Britt S, Roger C. Appropriate use of diagnostic imaging in low back pain: a reminder that unnecessary imaging may do as much harm as good. J Orthop Sports Phys Ther. 2011;41:838–846. doi:10.2519/jospt.2011.3618

14. Terry KK, Mae Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15:155–163. doi:10.1016/j.jcm.2016.02.012

15. Navid M, Ferial F, Homayoun HK, Hossein M, Hossein K, Marzieh N. Appropriateness of physicians’ lumbosacral MRI requests in private and public centers in Tehran, Iran. Med J Islam Repub. 2016;30:1–7.

16. Stefan T, Douglas S, Manaster BJ. Do radiologists use the American College of Radiology musculoskeletal appropriateness criteria? Am J Roentgenol. 2000;175:545–547. doi:10.2214/ajr.175.2.1750545

17. Andre B, Anthony B, Barbara J, John J, Amish A, Judith K. Do clinicians use the American College of Radiology appropriateness criteria in the management of their patients? Am J Roentgenol. 2009;192:1581–1585. doi:10.2214/AJR.08.1622

18. Daniel KP, James ES. The use of ACR appropriateness criteria: a survey of radiology residents and program directors. Clin Imaging. 2015;39:334–338. doi:10.1016/j.clinimag.2014.10.011

19. Remedios D, Drinkwater K, Warwick R. National audit of appropriate imaging. Clin Radiol. 2014;69:1039–1044. doi:10.1016/j.crad.2014.05.109

20. Jonathan L, Jonathan P, Paul B, Louise R. Paediatric imaging radiation dose awareness and use of referral guidelines amongst radiology practitioners and radiographers. Insights Imaging. 2016;7:145–153. doi:10.1007/s13244-015-0449-2

21. Derek JE, Alan JF, Kaveh GS, Stephanie M. Management of MRI wait lists in Canada. Healthc Policy. 2009;4:76–86.

22. Bell DA, Collie D, Statham PF. Cauda equina syndrome - what is the correlation between clinical assessment and MRI scanning? Br J Neurosurg. 2007;21:201–203. doi:10.1080/02688690701317144

Creative Commons License © 2022 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.