Patient Preference and Adherence, Volume 16

Feasibility of Testing Client Preferences for Accessing Injectable Opioid Agonist Treatment (iOAT): A Pilot Study

Authors Dobischok S, Metcalfe RK, Matzinger EA, Lock K, Harrison S, MacDonald S, Amara S, Schechter MT, Bansback N, Oviedo-Joekes E

Received 28 September 2022

Accepted for publication 23 November 2022

Published 23 December 2022, Volume 2022:16, Pages 3405–3413


Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Johnny Chen

Sophia Dobischok,1 Rebecca K Metcalfe,1 Elizabeth Angela Matzinger,1 Kurt Lock,1,2 Scott Harrison,3 Scott MacDonald,3 Sherif Amara,4 Martin T Schechter,1,5 Nick Bansback,1,5 Eugenia Oviedo-Joekes1,5

1Centre for Health Evaluation & Outcome Sciences, Providence Health Care, Vancouver, BC, Canada; 2BC Centre for Disease Control, Provincial Health Services Authority, Vancouver, BC, Canada; 3Providence Health Care, Providence Crosstown Clinic, Vancouver, BC, Canada; 4SafePoint Supervised Consumption Site, Fraser Health Authority, Surrey, BC, Canada; 5School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada

Correspondence: Eugenia Oviedo-Joekes, Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, 575-1081 Burrard St, Vancouver, BC, V6Z 1Y6, Canada, Tel +1 604-682-2344 Ext. 62973, Fax +1-604-806-8210, Email [email protected]

Purpose: Injectable opioid agonist treatment (iOAT) is an effective treatment for opioid use disorder (OUD). To our knowledge, no research has systematically studied client preferences for accessing iOAT. Incorporating preferences could help meet the heterogeneous needs of clients and make addiction care more person-centred. This paper presents a pilot study of a best-worst scaling (BWS) preference elicitation survey that aimed to assess whether the survey was feasible and accessible for our population and to test that the survey could gather sound data suited to our planned analyses.
Patients and Methods: Current and former iOAT clients (n = 18) completed a BWS survey supported by an interviewer using a think-aloud approach. The survey was administered on PowerPoint, and responses and contextual field notes were recorded manually. Think-aloud audio was recorded on Audacity.
Results: Clients’ feedback fell into five categories: framing of the task, accessibility, conceptualization of attributes and levels, formatting, and behaviour-predicting questions. Survey repetitiveness was the most consistent feedback. The data simulation showed that 100 responses should provide an adequate sample size.
Conclusion: This pilot demonstrates the type of analysis that can be done with BWS in our population, suggests that such analysis is feasible, and highlights the importance of the interviewer and participant working side-by-side throughout the task.

Keywords: opioid use disorder, opioid agonist treatment, injectable opioid agonist treatment, diacetylmorphine, hydromorphone, best worst scaling


Introduction

Opioid use disorder (OUD),1 particularly if untreated, poses great harm to individuals, families, and communities,2 and contributes substantially to the global burden of disease.3 Improving access to evidence-based OUD care is one way to curb Canada’s public health crisis of opioid poisoning and overdose deaths, which has taken over 10,000 lives in British Columbia since the crisis was declared in 2016.4,5 Opioid agonist treatment (OAT) with long-lasting oral opioids such as methadone or Suboxone has expanded because traditional non-pharmacological therapies have not reached service users who might still use street opioids to meet their needs.6,7 OAT can retain clients in care and reduce the major risks of untreated OUD.8–10 At the same time, OAT retention rates can be much lower than the high rates its advocates anticipate,11 and clients who discontinue oral treatments face elevated overdose risk.12 Other opioid formulations are available for OUD, particularly when OAT is not fully effective,13,14 to widen the breadth of treatment options and meet clients’ heterogeneous needs and preferences.15

In a person-centered care approach, having a diversified selection of opioids and formulations could support clients’ individualized treatment goals, as there are significant interindividual variations in response to opioids.16,17 Injectable opioid agonist treatment (iOAT) with, for example, diacetylmorphine or hydromorphone, has been shown to be safe, effective, and cost-effective,18–23 particularly for people with long-term OUD for whom other approaches have been ineffective either at engaging them or retaining them in treatment. Over the years, iOAT has been provided in highly regulated and structured settings in which clients must visit the clinic in person up to three times a day, every day, for a witnessed injection at pre-specified times. The daily visits provide clients and the clinical team with an opportunity to engage in comprehensive care and develop a therapeutic relationship that can support clients’ diverse needs, either onsite or through referrals.24,25 In most contexts (eg, Canada, Europe), only a small number of clients are allowed to take the medication outside the iOAT site, and only for very short periods (eg, a few doses).26 In Canada, providers and stakeholders support iOAT expansion (eg, increasing the number of iOAT clients in pre-existing settings, establishing new iOAT sites with diverse approaches, increasing access to take-home doses, combining iOAT with a greater diversity of medications and formulations) within the continuum of care as another tool in their repertoire to provide person-centered addiction care.24,27

One way to support person-centered addiction care is to measure clients’ preferences with tools that reflect and respect their priorities.28 Prior studies have emphasized the effectiveness, safety and cost-effectiveness of iOAT. However, client preferences are infrequently considered in substance use treatment,29 and assessment of client preferences for iOAT delivery specifically remains a major gap in healthcare providers’ and policymakers’ provision of person-centered care for these clients.24,30 Healthcare research is increasingly recognizing that client preferences help predict and explain client behaviour31,32 and that congruency between clients’ preferred treatment and administered treatment optimizes care management and improves treatment outcomes.33–35

Research tools that quantitatively capture robust data while remaining relevant to clients risk becoming complex and cumbersome for both researchers and participants. Our survey, developed with input from service users, employs best-worst scaling (BWS), a quantitative preference elicitation tool in which participants select both the best and worst items from a list of at least three levels across multiple choice sets.36 BWS has built a strong reputation in healthcare research37,38 across settings including cancer research,39,40 stroke care,41 and mental health services.42 BWS has also been implemented with priority populations including unhoused women40,43 and Indigenous peoples.44 BWS has the advantage of being less cognitively burdensome43,45 than other preference elicitation tasks because participants make a single best and worst choice for separate choice sets rather than comparing multiple more complex scenarios at once.45 However, to our knowledge, BWS has never been applied to OAT, iOAT, or addiction care broadly, and iOAT preferences have never been investigated with robust quantitative methodologies.

As part of a study whose overarching objective is to determine how iOAT can be improved to increase its effectiveness and uptake, we developed a BWS survey to assess preferences for iOAT delivery amongst current and former iOAT clients. During the pilot stage, we gathered feedback on key task elements to meet two goals: 1) to assess whether the survey was feasible and accessible for our population, considering cognitive fatigue and the policy inequities that keep some desired options from being available to clients in a timely manner; and 2) to test that the survey could gather sound data that would meet standards for planned analyses. Ultimately, our work can support researchers and clinicians who seek to implement quantitative person-reported outcome measures28 by testing a framework of feasible data collection and output expectations.

Materials and Methods


Study Setting

The present pilot study was conducted in the Lower Mainland of British Columbia, Canada, where recruitment for the main study will take place. At the time of this pilot, there were five clinics offering iOAT: four in downtown Vancouver and one in Surrey. Crosstown Clinic is the site with the largest client base, as it was the purpose-built clinic for the NAOMI18 and SALOME23 clinical trials, in which many current Crosstown clients participated.24 Around the time of our study, clients at Crosstown (not specifically our participants) had an average age of 44.98 years (standard deviation [SD] = 9.46). Approximately one-third of clients self-identified as women, and one-third as having Indigenous ancestry. These clients faced multiple lifetime structural vulnerabilities, including unstable housing (66.7%) and histories of sex work (42.06%). More than half of these clients reported chronic medical problems, and one-quarter had a lifetime suicide attempt. Clients had a lifetime average of 15.19 (SD = 9.02) years of street heroin injection and reported extensive histories of substance use treatment, including oral OAT (eg, methadone), outpatient withdrawal treatments (eg, detoxification), and outpatient counseling.46

Pilot Study Participants

We sought 15–20 participants from different iOAT sites in BC’s Lower Mainland. Our recruitment plan was to purposively sample specific groups, including participants from different iOAT sites, current and former clients, and different genders. Special effort was made to recruit Indigenous clients and younger clients. When input from specific groups (eg, older men who are long-time iOAT clients) was saturated, we intentionally recruited participants with diverse experiences (eg, younger men recently engaged in iOAT). Interviews took place in our Vancouver field office or in a space provided at the different iOAT sites over a period of three weeks. All participants received a $30 honorarium per hour or fraction thereof. The study received behavioural ethics approval from the Providence Health Care Research Ethics Board in partnership with Fraser Health Authority [H19-00217], and all participants provided informed consent. This study complies with the Declaration of Helsinki.

Materials and Set Up

The methodology for developing this BWS survey has been described at length previously.47 Briefly, before piloting the survey, we engaged in extensive consultation with iOAT stakeholders, experts, and iOAT clients to generate a preliminary list of twenty-one items (seven attributes, three levels apiece) that were important to iOAT delivery (see Table 1). Based on this list, we designed a beta version of a BWS case 2 survey using an orthogonal main-effects plan so that individual-level preferences could be obtained. The survey consists of eighteen choice sets, each with seven levels, in which participants must select the most wanted and least wanted level. A test wireframe version of the survey was piloted in PowerPoint, where participants selected options using thumbs up/down icons. The pilot survey mimicked most of the features that would be used in a web version but enabled elements to be quickly and iteratively modified in response to participant feedback (see Figure 1).

Table 1 Preliminary List of Items That are Important for iOAT Delivery Informed by iOAT Experts, Stakeholders, and Clients

Figure 1 Example of a pilot BWS choice set from the preliminary survey in PowerPoint.

The first ten slides described the study rationale and instructions for the task, and provided overarching descriptions of the attributes and levels shown in Table 1. An example on an unrelated topic (ice cream preferences) was presented. The participant viewed the slides on a monitor while the interviewer controlled them from a separate computer to accommodate social distancing practices (the pilot occurred in November 2021). Participants had access to and control over a mouse to select their own most/least wanted option for each of the eighteen choice sets. To assess how clients’ actions relate to their preferences, we also piloted six follow-up behaviour-predicting questions.


Interviewer Training

In preparation for the pilot sessions, the interviewers worked with the investigators to develop a pre-written think-aloud interview script to probe participants consistently about specific aspects of the task. The interviewer led the participants through the educational material, then the participant navigated the survey with the interviewer’s support. Throughout the pilot, the interviewers met before and after the sessions with the principal investigator to discuss emerging interviewing strategies specific to the task. These meetings provided opportunities for ongoing training, since interviewing skills developed iteratively as it became clear which aspects of the task participants needed support with. Previously, the principal investigator had trained interviewers to work with the target population more generally, as our research projects interface directly with clients and require a person-centered perspective wherein the interviewer takes a listening role and provides space for the client to share their perspective. At all times, participants were reminded that they could take breaks and that participation was voluntary.

Think Aloud Approach

Each pilot interview followed a think aloud approach wherein participants verbalized their thoughts throughout the survey. The interviewer demonstrated thinking aloud prior to the task and prompted the participant (eg, What would you change about this question?) to solicit input on specific items including task framing; interpretation of the attributes and levels (eg, poor phrasing/terminology); survey accessibility (eg, font size, colours, amount of information presented per slide); survey formatting (eg, order of slides); and general feedback. Task framing was especially crucial and required that the interviewer competently explained:

  1. That some of the levels may be hypothetical (eg, the feature does not yet exist at any iOAT site in Canada, or the feature is present at their iOAT site, so its absence would be hypothetical).
  2. They are making their selections based on their current, not past or future, preferences.
  3. They are making their selections based on what they most/least want in iOAT (not most/least important/need).

Think-aloud audio was captured on Audacity and transcribed by an outside transcription service based in Canada. Feedback related to accessibility, phrasing, and formatting was integrated immediately and tested in the subsequent interviews. Substantial changes (eg, framing of the tasks, formatting the survey navigation) were discussed with preference elicitation task experts prior to integration.


Data Analysis

Survey responses and contextual field notes were manually recorded by the interviewer. Based on these responses, we conducted a simulation at a pragmatic sample size (n = 100) to test whether our model would generate sensible data and whether it would be amenable to informing policy decisions. With the simulated sample, we ran a conditional logit model and reported normalized results. The simulation also allowed us to assess whether 100 participants would provide sufficient precision and model convergence.
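The simulation and scoring described above can be sketched as follows. This is a minimal illustration, not the study's actual design or model: the latent utilities are invented, the balanced cyclic design stands in for the study's orthogonal main-effects plan, and normalized best-minus-worst counts stand in for the full conditional logit fit.

```python
import numpy as np

rng = np.random.default_rng(42)

n_participants = 100  # pragmatic sample size tested in the simulation
n_sets = 18           # choice sets per participant
n_shown = 7           # levels shown per choice set
n_items = 21          # 7 attributes x 3 levels

# Hypothetical latent utilities for the 21 items; illustrative values,
# not estimates from the study.
true_utility = rng.normal(0.0, 1.0, size=n_items)

# Balanced cyclic stand-in for the orthogonal main-effects plan:
# each of the 21 items appears in exactly 6 of the 18 sets.
choice_sets = [(s + 3 * np.arange(n_shown)) % n_items for s in range(n_sets)]

best = np.zeros(n_items)
worst = np.zeros(n_items)
shown = np.zeros(n_items)

for _ in range(n_participants):
    for items in choice_sets:
        # Random-utility (maxdiff) choices: "most wanted" maximizes
        # utility plus Gumbel noise; "least wanted" minimizes it.
        u = true_utility[items]
        best[items[np.argmax(u + rng.gumbel(size=n_shown))]] += 1
        worst[items[np.argmax(-u + rng.gumbel(size=n_shown))]] += 1
        shown[items] += 1

# Normalized best-minus-worst scores in [-1, 1]: a simple count-based
# approximation to conditional logit preference estimates.
scores = (best - worst) / shown
```

With a balanced design and 100 simulated respondents, the ranking of `scores` tracks the latent utilities closely, which is the sense in which the pilot's simulation "generated sensible data".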


Results

Eighteen current (n = 15) and former (n = 3) iOAT clients were recruited from four different iOAT sites in British Columbia’s Lower Mainland: Crosstown Clinic, Molson, Downtown Community Health Centre, and SafePoint. Seven clients were female, and the majority of clients were in the 36–59 age range; five younger clients were in the 18–35 age range, and one client was over 60. No clients self-identified as Indigenous, and one client self-identified as “half Indigenous”.

The participants’ feedback fell into five categories: framing of the task, accessibility, conceptualization of attributes and levels, formatting, and behaviour-predicting questions. All feedback and the revisions made in response are summarized in Appendix 1. Key findings are highlighted below.

Materials and Set Up

The participants found that the survey displayed on a monitor was accessible so long as the interviewer was present to click through or read sections as needed. Minor changes were made (eg, changing the background colour of the slides from black to white, making the thumbs up/down icons clickable) to make the survey more intuitive and accessible.

Overall Feedback

Survey repetitiveness was the most consistent feedback. Working side-by-side with the interviewer was important in mitigating the repetitiveness. The introduction slides were updated to warn participants about the repetition so that interviewers could verbally manage participant expectations before the task, and the interviewing guide was updated with participant engagement strategies (eg, explain the reason for the repetition, click for the participant, take breaks). Participants were excited to share their preferences, which helped sustain their engagement.

Completing the interview side-by-side was also important for maintaining task framing: participants were occasionally unsure whether they were choosing based on past/future wants, whether they should select items that were already available or not currently present at their clinic, and sometimes slipped into alternate framings (most/least important/needed) inconsistently (no specific levels prompted the alternate framings). To mitigate this, the interviewer verbally emphasized the three key elements of task framing described above (1–3) during the introductory slides and reinforced the framings throughout the interview.

Finally, one participant suggested a level that was not reflected in the survey (community support). Again, the interviewer reinforced to the participant that we were unable to cover everything, and that an open text box was available at the conclusion of the survey.

Example BWS Calculation

Data from our simulation of 100 participants, based on the responses of our 18 participants (Figure 2), indicated that the two most desired options were level 1.1, “I decide the type of injectable medications I am prescribed (my drug of choice)”, followed by level 5.2, “I come to take my injectable medications onsite and leave when I see fit”. The two least desired options were level 1.2, “The prescriber and the system decide the type of injectable medication I am prescribed”, and level 5.3, “The staff decides when and if I can take my injectable medication and when I can leave the site”. These results are consistent with the items expected to be most/least wanted by our population. The simplicity and strength of this presentation posed no interpretation difficulties (eg, it is readily apparent that service users want more autonomy over these options). The simulation results also confirmed that 100 responses should provide an adequate sample size. In this pilot, we did not run any preference subgroup analyses (eg, latent class analysis or predictors of subgroup membership), so it remains unclear whether this number of participants would be sufficient for more complex models.

Figure 2 Most and least wanted aspects of iOAT delivery: simulation of 100 participants based on the responses of our 18 participants.


Discussion

The present pilot study provides confidence that current and former iOAT clients can feasibly complete a BWS case 2 task, primarily due to their enthusiasm for sharing their preferences across the representative attributes and levels. Second, this pilot showed that a streamlined introduction and completing the task side-by-side with the support of an interviewer maintained task framing and ensured that the client could easily ask questions, seek clarifications, receive prompts, and make connections between their views and the task. The role of an interviewer trained in listening and opening space is crucial, as it allows participants’ voices to be captured within the BWS framework. A comprehensive interviewing guide is thus essential to prepare the interviewer to competently highlight the key features in the introductory slides (task framing, repetitiveness, space at the end of the survey to include items we missed) and sustain participant engagement. While high cognitive effort potentially affects survey completion,48 the support of a trained interviewer, coupled with the participants’ genuine enthusiasm to share their preferences on this topic, removed some of the barriers to participant engagement.

The simulation from our data set showed that the pilot provided meaningful data, as the most commonly desired and least desired options are consistent with our expectations from interviews with our population. Our aim was primarily to demonstrate the possibilities of the BWS analysis with no intention to provide conclusions regarding treatment preferences beyond this. We have demonstrated the type of insights that can be revealed when applying BWS methodology to our population, and we confirmed that it is viable to launch the task with a greater number of participants.

This pilot study has several limitations. Among the possible revisions to the final version, we were unable to accommodate a new attribute/level suggested by a participant, as it would compromise the BWS experimental design. To mitigate this, we included an open text box at the end of the survey for participants to share ideas not included in the set attributes and levels. The interviewer guide also emphasizes reminding participants at the beginning of the task that the survey cannot cover everything and that an open text box at the end will record their thoughts.


Conclusion

This pilot was the first application of a novel BWS survey to assess the preferences of current and former iOAT clients. Our pilot demonstrates how participants can be supported to make the eighteen-choice-set BWS case 2 design feasible with our population and demonstrates the type of analysis that can be done once full data collection occurs. Once this survey is carried out with a larger number of participants, we can leverage the preference data to engage clients with heterogeneous needs/preferences in treatment and improve the continuity of care for current iOAT clients.


Acknowledgments

The authors respectfully acknowledge the unceded and traditional territory of the Coast Salish Peoples, including the traditional territories of the xʷməθkwəy̓əm (Musqueam), Sḵwx̱wú7mesh (Squamish), and Səl̓ílwətaɬ (Tsleil-Waututh) Nations, upon which this research took place. We gratefully thank all the clients who provided feedback during this pilot, and the research and clinical teams.


Funding

This work was supported by the Canadian Institutes of Health Research Project Grant [F18-00932], the Canada Foundation for Innovation [JELF-CRC 40559], and the Canada Research Chairs program [F21-00475]. The funders were not involved in any research activities relevant to this paper.


Disclosure

Dr Martin T Schechter reports grants from CIHR during the conduct of the study. All research activities were approved by the Providence Health Care Research Ethics Board (harmonized board of record) and Fraser Health Authority (harmonized partner board) [H19-00217]. All participants provided informed consent and were monetarily compensated for their time. The authors have no other personal or financial conflicts of interest to disclose.


References

1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. American Psychiatric Publishing; 2013.

2. Jiang R, Lee I, Lee TA, Pickard AS. The societal cost of heroin use disorder in the United States. PLoS One. 2017;12(5). doi:10.1371/journal.pone.0177323

3. Degenhardt L, Charlson F, Ferrari A, et al. The global burden of disease attributable to alcohol and drug use in 195 countries and territories, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Psychiatry. 2018;5(12):987–1012. doi:10.1016/S2215-0366(18)30337-7

4. Eibl JK, Morin K, Leinonen E, Marsh DC. The state of opioid agonist therapy in Canada 20 years after federal oversight. Can J Psychiatry. 2017;62(7):444–450. doi:10.1177/0706743717711167

5. BC Coroners Service. Illicit Drug Toxicity Deaths in BC: January 1, 2012 – June 30, 2022; 2022.

6. de Jong CAJ, Roozen HG, van Rossum LGM, Krabbe PFM, Kerkhof AJFM. High abstinence rates in heroin addicts by a new comprehensive treatment approach. Am J Addict. 2007;16(2):124–130. doi:10.1080/10550490601184472

7. Srivastava A, Kahan M, Nader M. Primary care management of opioid use disorders: abstinence, methadone, or buprenorphine-naloxone? Can Fam Physician. 2017;63(3):200.

8. Mattick RP, Breen C, Kimber J, Davoli M. Buprenorphine maintenance versus placebo or methadone maintenance for opioid dependence. Cochrane Database Syst Rev. 2014;2014(2):548. doi:10.1002/14651858.CD002207.pub4

9. Mattick RP, Breen C, Kimber J, Davoli M. Methadone maintenance therapy versus no opioid replacement therapy for opioid dependence. Cochrane Database Syst Rev. 2009;1(3):1222. doi:10.1002/14651858.CD002209.pub2

10. Beck T, Haasen C, Verthein U, et al. Maintenance treatment for opioid dependence with slow-release oral morphine: a randomized cross-over, non-inferiority study versus methadone. Addiction. 2014;109(4):617–626. doi:10.1111/add.12440

11. Piske M, Zhou H, Min JE, et al. The cascade of care for opioid use disorder: a retrospective study in British Columbia, Canada. Addiction. 2020;115(8):1482–1493. doi:10.1111/add.14947

12. Pearce LA, Min JE, Piske M, et al. Opioid agonist treatment and risk of mortality during opioid overdose public health emergency: population based retrospective cohort study. BMJ. 2020;368:m772. doi:10.1136/bmj.m772

13. Wei X, Wang L, Wang X, Li J, Li H, Jia W. A study of 6-year retention in methadone maintenance treatment among opioid-dependent patients in xi’an. J Addict Med. 2013;7(5):342–348. doi:10.1097/ADM.0b013e31829da05b

14. Perreault M, Julien D, White ND, Rabouin D, Lauzon P, Milton D. Psychological predictors of retention in a low-threshold methadone maintenance treatment for opioid addicts: a 1-year follow-up study. Subst Use Misuse. 2015;50(1):24–31. doi:10.3109/10826084.2014.957769

15. Schottenfeld RS, O’Malley SS. Meeting the growing need for heroin addiction treatment. JAMA Psychiatry. 2016;73(5):437. doi:10.1001/jamapsychiatry.2016.0139

16. Taqi MM, Faisal M, Zaman H. OPRM1 A118G polymorphisms and its role in opioid addiction: implication on severity and treatment approaches. Pharmgenomics Pers Med. 2019;12:361–368. doi:10.2147/PGPM.S198654

17. Li Y, Kantelip JP, Gerritsen-Van Schieveen P, Davani S. Interindividual variability of methadone response: impact of genetic polymorphism. Mol Diagn Ther. 2008;12(2):109–124. doi:10.1007/BF03256276

18. Oviedo-Joekes E, Brissette S, Marsh DC, et al. Diacetylmorphine versus methadone for the treatment of opioid addiction. N Engl J Med. 2009;361(8):777–786. doi:10.1056/nejmoa0810635

19. Strang J, Metrebian N, Lintzeris N, et al. Supervised injectable heroin or injectable methadone versus optimised oral methadone as treatment for chronic heroin addicts in England after persistent failure in orthodox treatment (RIOTT): a randomised trial. Lancet. 2010;375(9729):1885–1895. doi:10.1016/S0140-6736(10)60349-2

20. Demaret I, Quertemont E, Litran G, et al. Efficacy of heroin-assisted treatment in Belgium: a randomised controlled trial. Eur Addict Res. 2015;21(4):179–187. doi:10.1159/000369337

21. Bansback N, Guh D, Oviedo-Joekes E, et al. Cost-effectiveness of hydromorphone for severe opioid use disorder: findings from the SALOME randomized clinical trial. Addiction. 2018;113(7):1264–1273. doi:10.1111/add.14171

22. Nosyk B, Guh DP, Bansback NJ, et al. Cost-effectiveness of diacetylmorphine versus methadone for chronic opioid dependence refractory to treatment. CMAJ. 2012;184(6):E317–E328. doi:10.1503/cmaj.110669

23. Oviedo-Joekes E, Guh D, Brissette S, et al. Hydromorphone compared with diacetylmorphine for long-term opioid dependence. JAMA Psychiatry. 2016;73(5):447. doi:10.1001/jamapsychiatry.2016.0109

24. Marchand K, Foreman J, MacDonald S, Harrison S, Schechter MT, Oviedo-Joekes E. Building healthcare provider relationships for patient-centered care: a qualitative study of the experiences of people receiving injectable opioid agonist treatment. Subst Abuse Treat Prev Policy. 2020;15(1). doi:10.1186/s13011-020-0253-y

25. Palis H, Marchand K, Beaumont S, et al. Physician communication in injectable opioid agonist treatment: collecting patient ratings with the communication assessment tool. J Addict Med. 2020;14(6):480–488. doi:10.1097/ADM.0000000000000631

26. Oviedo-Joekes E, MacDonald S, Boissonneault C, Harper K. Take home injectable opioids for opioid use disorder during and after the COVID-19 Pandemic is in urgent need: a case study. Subst Abuse Treat Prev Policy. 2021;16(1). doi:10.1186/s13011-021-00358-x

27. Eydt E, Glegg S, Sutherland C, et al. Service delivery models for injectable opioid agonist treatment in Canada: 2 sequential environmental scans. CMAJ Open. 2021;9(1):E115–E124. doi:10.9778/cmajo.20200021

28. Trujols J, Portella MJ, Iraurgi I, Campins MJ, Siñol N, Cobos JPDL. Patient-reported outcome measures: are they patient-generated, patient-centred or patient-valued? J Ment Health. 2013;22(6):555–562. doi:10.3109/09638237.2012.734653

29. Friedrichs A, Spies M, Härter M, Buchholz A. Patient preferences and shared decision making in the treatment of substance use disorders: a systematic review of the literature. PLoS One. 2016;11(1):e0145817. doi:10.1371/journal.pone.0145817

30. Roux P, Rojas Castro D, Ndiaye K, et al. Willingness to receive intravenous buprenorphine treatment in opioid-dependent people refractory to oral opioid maintenance treatment: results from a community-based survey in France. Subst Abuse Treat Prev Policy. 2017;12(1). doi:10.1186/s13011-017-0131-4

31. Ryan M. Using conjoint analysis to elicit preferences for health care. Br Med J. 2000;320(7248):1530–1533. doi:10.1136/bmj.320.7248.1530

32. Bridges JFP, Hauber AB, Marshall D, et al. Conjoint analysis applications in health - A checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403–413. doi:10.1016/j.jval.2010.11.013

33. Mühlbacher AC, Juhnke C. Patient preferences versus physicians’ judgement: does it make a difference in healthcare decision making? Appl Health Econ Health Policy. 2013;11(3):163–180. doi:10.1007/s40258-013-0023-3

34. Wilder CM, Elbogen EB, Moser LL, Swanson JW, Swartz MS. Medication preferences and adherence among individuals with severe mental illness and psychiatric advance directives. Psychiatric Services. 2010;61(4):380–385. doi:10.1176/ps.2010.61.4.380

35. Gaulen Z, Brenna IH, Fadnes LT, et al. The Predictive Value of Degree of Preference for Extended-Release Naltrexone for Treatment Adherence, Opioid Use, and Relapse. Eur Addict Res. 2022;28(1):56–67. doi:10.1159/000518436

36. Cheung KL, Wijnen BFM, Hollin IL, et al. Using Best–Worst Scaling to Investigate Preferences in Health Care. Pharmacoeconomics. 2016;34(12):1195–1209. doi:10.1007/s40273-016-0429-5

37. Flynn TN, Louviere JJ, Peters TJ, Coast J. Best-worst scaling: what it can do for health care research and how to do it. J Health Econ. 2007;26(1):171–189. doi:10.1016/j.jhealeco.2006.04.002

38. Mühlbacher AC, Kaczynski A, Zweifel P, Johnson FR. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview. Health Econ Rev. 2016;6(1):1–4. doi:10.1186/s13561-015-0079-x

39. Molassiotis A, Emsley R, Ashcroft D, et al. Applying Best-Worst scaling methodology to establish delivery preferences of a symptom supportive care intervention in patients with lung cancer. Lung Cancer. 2012;77(1):199–204. doi:10.1016/j.lungcan.2012.02.001

40. Wittenberg E, Bharel M, Saada A, Santiago E, Bridges JFP, Weinreb L. Measuring the preferences of homeless women for cervical cancer screening interventions: development of a best–worst scaling survey. Patient. 2015;8(5):455–467. doi:10.1007/s40271-014-0110-z

41. Mohapatra S, Cheung KL, Hiligsmann M, Anokye N. Most important factors for deciding rehabilitation provision for severe stroke survivors post hospital discharge: a study protocol for a best–worst scaling experiment. Methods Protoc. 2021;4(2):27. doi:10.3390/mps4020027

42. Castillo WC, Ross M, Tariq S, Dos Reis S. Best-worst scaling to prioritize outcomes meaningful to caregivers of youth with mental health multimorbidities: a pilot study. J Dev Behav Pediatr. 2018;39(2):101. doi:10.1097/DBP.0000000000000525

43. Wittenberg E, Bharel M, Bridges JFP, Ward Z, Weinreb L. Using best-worst scaling to understand patient priorities: a case example of Papanicolaou tests for homeless women. Ann Fam Med. 2016;14(4):359–364. doi:10.1370/afm.1937

44. Howard K, Anderson K, Cunningham J, et al. What Matters 2 Adults: a study protocol to develop a new preference-based wellbeing measure with Aboriginal and Torres Strait Islander adults (WM2Adults). BMC Public Health. 2020;20(1). doi:10.1186/s12889-020-09821-z

45. Ratcliffe J, Couzner L, Flynn T, et al. Valuing child health utility 9D health states with a young adolescent sample: a feasibility study to compare best-worst scaling discrete-choice experiment, standard gamble and time trade-off methods. Appl Health Econ Health Policy. 2011;9(1):15–27. doi:10.2165/11536960-000000000-00000

46. Palis H, Marchand K, Guh D, et al. Men’s and women’s response to treatment and perceptions of outcomes in a randomized controlled trial of injectable opioid assisted treatment for severe opioid use disorder. Subst Abuse Treat Prev Policy. 2017;12(1). doi:10.1186/s13011-017-0110-9

47. Dobischok S, Metcalfe R, Matzinger E, et al. Measuring the preferences of injectable opioid agonist treatment (iOAT) clients: development of a person-centered scale (best-worst scaling). Int J Drug Policy. 2022. In press.

48. Höhne JK, Schlosser S, Krebs D. Investigating cognitive effort and response quality of question formats in web surveys using paradata. Field Methods. 2017;29(4):365–382. doi:10.1177/1525822X17710640

© 2022 The Author(s). This work is published and licensed by Dove Medical Press Limited under the Creative Commons Attribution – Non Commercial (unported, v3.0) License.