Advances in Medical Education and Practice » Volume 12

A Survey-Weighted Analytic Hierarchy Process to Quantify Authorship

Authors Ing EB 

Received 9 July 2021

Accepted for publication 7 September 2021

Published 15 September 2021 Volume 2021:12 Pages 1021—1031


Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Md Anwarul Azim Majumder

Edsel B Ing

University of Toronto, Toronto, ON, Canada

Correspondence: Edsel B Ing
Michael Garron Hospital, 650 Sammon Ave, K306, Toronto, ON, M4C 5M5, Canada
Tel +1 416 465-7900
Fax +1 416 385-3880
Email [email protected]

Background: Authorship is a pinnacle activity in academic medicine that often involves collaboration and a mentor–mentee relationship. The International Committee of Medical Journal Editors criteria for authorship (ICMJEc) are intended to prevent abuses of authorship and are used by more than 5500 medical journals. However, the binary ICMJEc have not yet been quantified.
Aim: To develop a numeric scoring rubric for the ICMJEc to corroborate the authenticity of authorship claims.
Methods: The four ICMJEc were separated into the nine authorship components of conception, design, data acquisition, data analysis, interpretation of data, draft, revision, final approval and accountability. In spring 2021, members of an international association of medical editors rated the importance of each authorship component using an 11-point Likert scale ranging from 0 (no importance) to 10 (most important). The median component scores were used to calibrate the pairwise comparisons in an analytic hierarchy process (AHP). The AHP priority weights were multiplied against a four-level perceived effort/capability grade to calculate an authorship score.
Results: Sixty-six decision-making medical editors completed the survey. The components had the median scores/AHP weights: conception 7.5/5.3%; design 8/8.9%; data acquisition 7/3.6%; data analysis 7/3.6%; interpretation of data 8/8.9%; draft 8/8.9%; revision 8/8.9%; final approval 9/20.1%; and accountability 10/31.8%, with Kruskal–Wallis Chi2 = 65.11, p < 0.001.
Conclusion: The editors rated accountability as the most important component of authorship, followed by the final approval of the manuscript; data acquisition had the lowest median importance score for authorship. The scoring rubric transforms the binary tetrad ICMJEc into nine quantifiable components of authorship, providing a transparent method to objectively assess authorship contributions, determine authorship order and potentially decrease the abuse of authorship. If desired, individual journals can survey their editorial boards and use the AHP method to derive customized weightings for an ICMJEc-based authorship index.

Keywords: authorship, ICMJE, academic medicine, ethics, medical editors, analytic hierarchy process, survey


The authorship of medical publications is integral to academic medicine and encompasses multiple physician education competencies including scholarship, collaboration and health advocacy,1 and frequently involves a mentor–mentee relationship.2 Authorship is critical for scientific progress, academic advancement and the attainment of research grants. In a “publish or perish” environment, with the escalation of multi-authored articles and the increasing number of authors per manuscript,3 abuses of authorship such as guest authorship, gift authorship, ghost authorship, coercive authorship, and disputes in the order of authorship are increasingly recognized.4–6 Abuse of authorship violates the trust that is fundamental to scientific communication and impugns the research itself.7 As such, ensuring the authenticity of authorship claims is an important education leadership directive.

The objective of this paper is to decrease the abuses of authorship by developing a numeric index to improve the documentation of authorship claims, because the four International Committee of Medical Journal Editors (ICMJE) criteria8 are binary, not quantified, and several criteria combine rather than separate individual components. There have been several suggestions to quantify authorship9–12 but none have surveyed the opinion of medical editors, the acknowledged experts on authorship. Also, many of the previous works9–11 did not incorporate all four of the authorship criteria proposed by the ICMJE. This study used a cross-sectional survey of medical editors to rank the relative importance of the tetrad ICMJE criteria and applied the median responses to objectively calibrate an Analytic Hierarchy Process (AHP). The AHP method assigns a priority weight to each authorship criterion based on its perceived importance. The AHP priority weights were multiplied by an author “effort” rating to calculate a numeric authorship score. The advantages of this numeric index include more accurate documentation of author contributions, which can improve the authenticity of authorship claims, assist co-authors with publication disputes, guide editorial decision-making and policy, and facilitate evidence-based research on authorship.


The study was approved by the research ethics boards of Michael Garron Hospital and Johns Hopkins University and is compliant with the tenets of the Declaration of Helsinki.

The primary study population was the medical editors who belonged to the listserv of the World Association of Medical Editors (WAME). The study design was a cross-sectional online survey that rated the relative importance of the ICMJE criteria (ICMJEc) for authorship in spring 2021.

The four ICMJEc for authorship were segregated into nine distinct components.

  1. “Conception” or substantial contributions to the conception of the work (ICMJEc 1)
  2. “Design” or substantial contributions to the design of the work (ICMJEc 1)
  3. “Data acquisition” or substantial contributions to the acquisition of data for the work (ICMJEc 1)
  4. “Data analysis” or substantial contributions to the analysis of data for the work (ICMJEc 1)
  5. “Interpretation of data” or substantial contributions to the interpretation of data for the work (ICMJEc 1)
  6. “Draft” or drafting the work (ICMJEc 2)
  7. “Revision” or revising the work critically for important intellectual content (ICMJEc 2)
  8. “Final approval” or final approval of the version to be published (ICMJEc 3)
  9. “Accountability” or agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. (ICMJEc 4)

The survey platform was SurveyPlanet (SurveyPlanet LLC, Marina Del Rey, California). Nine Likert-like rating scales were placed in a matrix fashion to rate the relative importance of the nine ICMJE components of authorship (see Figure 1).

Figure 1 Survey of the Relative Importance of the Different Components of the International Committee of Medical Journal Editors Criteria for Authorship.

To approximate even gradations in the Likert-like rating scale, the balanced term method for adverb intensifiers of acceptability13 was adapted to construct an 11-point balanced scale (see top of Figure 1). The anchor values were designated from 0 to 10, with zero representing not at all important, and ten being the most important. In this study the term “somewhat” was substituted for Krsacok and Moroney’s descriptors of “quite” and “fairly”.13

Permission to post to the WAME listserv was obtained with the proviso that the “study does not necessarily reflect the views of WAME or its officers and members” and that no identifying details such as journal affiliation would be published. The email addresses of the WAME members were private and unavailable for the study. No monetary incentives were provided for survey completion in this unfunded study.

Although there are up to 844 members on the WAME listserv, it was not known how many were active decision-making medical editors as opposed to non-medical managing editors, copy editors, or translation editors. As such, it is difficult to determine an appropriate sample size. If there were 844 decision-making medical editors, which is doubtful, the survey sample size estimate assuming a 5% margin of error, 50% response distribution, and 95% confidence interval would be 265 respondents (33% survey response rate).14 This was an optimistic response rate for an external-institution online survey, which usually has a 10–15% response rate.15
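The cited sample-size estimate can be reproduced with the standard finite-population correction; this is a sketch assuming the usual inputs (z = 1.96 for a 95% confidence interval, p = 0.5 response distribution) used by calculators such as Raosoft:

```python
import math

def survey_sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Sample size for a finite population: infinite-population estimate
    n0 = z^2 p(1-p)/e^2, then the finite-population correction."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2        # 384.16 for these inputs
    return math.ceil(n0 / (1 + (n0 - 1) / population))

survey_sample_size(844)  # 265 respondents for a listserv of 844 members
```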

To optimize participation, the survey was designed for a completion time of under 4 minutes.16 Participants with no decision-making editorial experience were excluded. The age and gender of the medical editors were collected. The SurveyPlanet software automatically records the country of origin of survey respondents.

To prevent multiple submissions from the same participant, only one survey was allowed from each internet protocol (IP) address. Item non-response error was prevented by requiring a response to each question before the survey would proceed. Solicitations to complete the survey were distributed on the WAME listserv.

Statistical analysis was performed with SPSS 27 for Windows (IBM, Markham, Ontario) and Stata SE 15.1 for Windows (Stata Corp, College Station, Texas). A p-value of less than 0.05 was considered statistically significant.

The data of the nine authorship components were compared with the Kruskal–Wallis test and Wilcoxon signed-rank tests. The Analytic Hierarchy Process (AHP)17,18 was used to calibrate the relative importance of each of the ICMJE authorship components because an AHP can minimize cognitive errors and facilitate quantification of criteria that are difficult to express numerically.19 The reliability of the AHP pairwise comparisons was checked with a Consistency Ratio that measures how uniform the pairwise comparisons are relative to purely random judgments. An ideal AHP Consistency Ratio is less than 10%.
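To illustrate the mechanics, the priority weights and Consistency Ratio of a reciprocal pairwise-comparison matrix can be derived from the principal eigenvector (here via power iteration). This is a generic sketch on a hypothetical 3-criterion matrix, not the paper's 9 × 9 comparison matrix:

```python
def ahp_weights(M, iters=200):
    """Priority weights (principal eigenvector, power iteration) and
    Consistency Ratio for a reciprocal pairwise-comparison matrix M."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    # Estimate the principal eigenvalue lambda_max from (M w) / w
    mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(mw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)                    # consistency index
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                    7: 1.32, 8: 1.41, 9: 1.45}      # Saaty's random indices
    return w, ci / random_index[n]

# Hypothetical 3-criterion matrix using Saaty intensities of importance:
M = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
weights, cr = ahp_weights(M)   # weights sum to 1; CR below the 10% threshold
```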

The AHP intensity of importance for each pairwise comparison was entered in an online AHP calculator20 based on the difference in the median score of the authorship components, using a conversion key (bottom right of Figure 2).

Figure 2 Eight of Thirty-Six Pairwise Comparisons Entered in the Analytic Hierarchy Process (AHP) Calculator.

Notes: The Analytic Hierarchy Process weights were assigned based on the difference in the survey medians of the authorship component and the table in the bottom right of Figure 2. The first eight pairwise comparisons of the AHP are shown. The remaining comparisons are in the Supplementary Materials.

The nine resulting AHP authorship priority weights were multiplied against a four-level perceived effort/capability grade (No effort = 0, Low effort = 0.33, Medium effort = 0.67, High effort = 1) to derive an authorship score with a maximum score of 100. The scoring rubric was placed on an online spreadsheet.
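As a minimal sketch (an illustrative reconstruction, not the published spreadsheet), the sum-product can be reproduced with the AHP priority weights reported in the Results and the four effort grades above:

```python
# AHP priority weights (percent) for the nine authorship components,
# as reported in the Results; they sum to 100.
WEIGHTS = {
    "conception": 5.3, "design": 8.9, "data acquisition": 3.6,
    "data analysis": 3.6, "interpretation of data": 8.9, "draft": 8.9,
    "revision": 8.9, "final approval": 20.1, "accountability": 31.8,
}
# Four-level perceived effort/capability grade
EFFORT = {"no": 0.0, "low": 0.33, "medium": 0.67, "high": 1.0}

def sahp_score(grades):
    """Authorship score out of 100: sum over components of weight x effort."""
    return sum(WEIGHTS[c] * EFFORT[g] for c, g in grades.items())

top = sahp_score({c: "high" for c in WEIGHTS})   # maximum score of 100
```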

To test the generalizability of the WAME editor ratings, a separate sample of ophthalmology journal editors was recruited in spring 2021, and the results were compared.


Requests to complete the survey were distributed on the WAME listserv four times between March 4, 2021, and April 25, 2021. Of the 844 members on the WAME listserv, it was unknown how many were retired, non-medical managing editors, copy editors, or translation editors. There were 71 survey responses yielding a minimum response rate of 71/844 = 8.4% and an 11.1% margin of error. Five survey participants had no experience as a decision-making medical editor and were excluded leaving 66 respondents. Seventy-seven percent (51/66) of the editors were 55 years of age or older. Seventy-three percent (48/66) of the editors identified themselves as male. Ninety-two percent (61/66) of the editors had edited for more than 5 years, and almost half (32/66) were editors for more than 15 years. Approximately 51% (34/66) of the editors were from North America, 27% (18/66) from Asia, 11% (7/66) from Europe and the United Kingdom, 8% (5/66) from Oceania and 3% (2/66) from South America.

The median editors’ rating of the ICMJE components ranged from 7 to 10 (Table 1). The inter-rater reliability of the survey editors was low, with Krippendorff’s α = 0.10 (see Supplemental Materials). Notwithstanding the low inter-rater reliability, the Kruskal–Wallis H-test showed a statistically significant difference between the nine authorship components, χ2(8) = 65.11, p < 0.001, with a large effect size (eta-squared = 0.97, where 0.14 is the threshold for a large effect). Of the 36 Wilcoxon signed-rank tests, 24 (67%) pairs were statistically significant.

Table 1 The Importance of the ICMJE Authorship Components Rated by 66 World Association of Medical Editors*

Sub-analysis of the data by continent showed no statistically significant geographic trends (see Supplemental Materials).

The differences in the median scores of the authorship components were used to perform the 36 pairwise comparisons in the AHP calculator (see Figure 2). The consistency ratio for the AHP was 2.2%, well below the maximum tolerated limit of 10%.

The AHP priority weights for each authorship component ranged from 3.6% to 31.8% (Table 2). The AHP priority weightings were used for the authorship components instead of the percent proportions (third column of Table 2), because the medians were not derived from interval data, and because the percent proportions did not reflect the differential importance of the authorship components as suggested by the statistical tests.

Table 2 The Analytic Hierarchy Process Priority Weights Used to Determine the Authorship Component Scores

The sum-product of the AHP priority weights and the effort/capability levels in Figure 3 was tabulated with an online spreadsheet. The minimum score for authorship is 21.5%, but even higher scores may not qualify as authorship given the quadruple requirements of the ICMJE. The calculator informs users if they do not meet the ICMJE recommendations for authorship. The AHP priority weights can be hidden so that users do not “game” the system.
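The paper does not spell out the calculator's qualification logic; one plausible rule, assumed here purely for illustration, is that an author must report nonzero effort on at least one component of each of the four ICMJE criteria:

```python
# Hypothetical reconstruction of the calculator's ICMJE check: each of the
# four criteria must be covered by nonzero effort on at least one of its
# components, following the component groupings listed in the Methods.
CRITERION_COMPONENTS = {
    1: ("conception", "design", "data acquisition",
        "data analysis", "interpretation of data"),
    2: ("draft", "revision"),
    3: ("final approval",),
    4: ("accountability",),
}

def meets_icmje(effort):
    """effort: component name -> effort grade (0, 0.33, 0.67 or 1)."""
    return all(any(effort.get(c, 0) > 0 for c in comps)
               for comps in CRITERION_COMPONENTS.values())

meets_icmje({"design": 1, "draft": 0.67, "final approval": 1,
             "accountability": 0.33})   # True: all four criteria covered
```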

Figure 3 Sample Output from the Online S-AHP Model Authorship Calculator showing the Effort/Capability Levels for Each Authorship Component.

Notes: *The guarantor(s) accept responsibility for the scientific accuracy and overall integrity of the manuscript including study supervision, ethics, full access to the data, data handling, interpretation of results, reporting of results, study conduct, and the decision to publish.35 Unlike Ivanis et al,10 “Final Approval” was not considered a dichotomous variable. In a multi-authored paper where multiple authors with different areas of expertise, differing interpretations of the study data, and varying opinions on the literature revise a work, the final approval of a manuscript can have multiple levels of complexity.

To test the generalizability of the authorship ratings, an independent online sample of 36 ophthalmology journal editor volunteers was recruited. There was no statistically significant difference in the nine authorship component ratings by the ophthalmology versus WAME editors. Both groups rated the median score for accountability as “most important = 10”, final approval as “extremely important = 9”, design and data interpretation as “largely important = 8”, and data acquisition as “somewhat important = 7”. The remaining four authorship components differed by at most one adjacent importance category. The results are listed in the Supplemental Materials.


A numeric index using the tetrad ICMJEc, based on the opinions of experienced medical editors, objectively weighted by an AHP, with an accompanying online calculator has not been previously published. This model is hereafter designated as the Survey-Analytic Hierarchy Process or S-AHP numeric index. The S-AHP divides the four ICMJEc into 9 specific, individually weighted components of authorship, and requires authors to clarify the effort/capability level of each component. The specificity of the S-AHP and its stipulation for the advance attestation of morality helps to corroborate the authenticity of authorship claims.

Authorship is an important component of academic medicine that bestows credit for intellectual achievement with concomitant academic, social and financial ramifications, and indentures accountability for the publication.8 Authorship and the perceived contribution to co-authored articles may influence decisions on hiring, salary, resource allocation, grant applications, the attainment of advanced degrees, promotion, tenure, and honors.6

Despite the ICMJEc, abuses of authorship including undeserved credit (honorary authorship and guest authorship), coercive authorship, disputes in the order of authorship, and omission of authors or ghost authorship persist,21–23 compounded by the increasing number of co-authors on publications over time.24 A survey of researchers suggested that 58% of individuals credited as authors should not have been, 51% experienced unethical pressure regarding authorship order, and 35% were excluded from authorship when they qualified.5 As such, improving the authenticity of authorship is an important directive.

It is difficult to study, measure or compare authorship without numbers. The ICMJEc use terms such as “substantial contributions”, but what this constitutes is indeterminate without a metric.25 A numeric index for authorship can decrease confusion and abuses of authorship by enumerating the specific requirements for authorship; several prior attempts are reviewed here. The Quantitative Uniform Authorship Declaration (QUAD) uses four superscripted numbers following each author’s initials to indicate the percentage contribution to the article,9 but excludes the explicit final approval and accountability criteria of the ICMJE. Researchers may overestimate their perceived contribution to authorship26 and distort the QUAD results. A five-level ordinal rating for the pre-2013 ICMJEc,10 a seven-criteria, four-level index,27 harmonic authorship credits, fractional authorship credit based on the order of authorship,28 a percentage-based Author Contribution Index,12 an AHP model to increase the accountability of co-authors in collaborative research,18 and a 13-criteria Authorship Order Score11 have been described. Masud’s work primarily applies to the determination of authorship order and did not include the criterion of accountability.

Possible rationalizations for the S-AHP component weightings are as follows. Data acquisition and data analysis were assigned the lowest scores at 3.6% each. Although data acquisition is labor-intensive and must be performed accurately, it is remote from the intellectual challenge required to write a paper. Data analysis with a software program is not meaningful unless the proper test is performed and appropriately interpreted. Statistically significant associations may not be clinically significant or practical. Perhaps this is why the editors collectively assigned the interpretation of data a higher component score (8.9%) than data analysis. However, in some applications such as data mining projects, data analysis and the interpretation of data may be equally challenging.

Since it is difficult to formulate feasible, novel, and relevant research ideas, the conception of a project was assigned a higher weight (5.3%) than data acquisition and analysis. Project design had a component score of 8.9% in keeping with the deliberations needed to direct achievable goals and plan data collection and analysis. Drafting and revising the manuscript for intellectual content were also assigned scores of 8.9%. An article draft can be time-consuming, but revising a draft can be just as onerous, with repetition of calculations and further literature search. The S-AHP emphasizes the final approval (20.1%) and accountability (31.8%) components of authorship. Although authors may spend less time on these concluding elements, ICMJEc 3 and 4 are contingent on the proper supervision and performance of ICMJEc 1 and 2, and reflect the editors’ concern for scientific accuracy, ethics and academic integrity. In coauthored papers, the final approval of a manuscript acknowledges that all the authors have resolved their differences of opinion and collectively support the group’s scholarship. Accountability requires appropriate research training, knowledge and the willingness to be publicly responsible for the criticisms and corrections of the scientific work. Accountability is an essential bulwark against authorship misconduct, given that more than 18,000 papers have been retracted after publication.29,30

There are several limitations to this work. The minimum estimated 8% survey response rate is low, although the actual number of active, decision-making WAME editors was not known. During the eight-week survey period, fewer than 50 editors posted a message on the WAME listserv. The inter-rater agreement of the editors was low but given the 99 possible response options and an international pool of raters, this was not unexpected. The effort/capability grade was arbitrary, and the time that a researcher invests may not reflect quality or ability. One’s perceived self-efficacy to be responsible for all aspects of a publication may be incorrect, especially since accountability is largely prospective, in comparison with the other ICMJEc which are retrospective evaluations. Accountability transgressions may not appear until after publication unless the editors or article reviewers detect the impropriety ahead of time. Finally, the S-AHP will not prevent researchers from paltering with the ICMJEc. “Ensuring adherence to the standard guidelines and ethical scientific research rests entirely with the authors themselves”.31

Notwithstanding the above limitations, the individually specified authorship components of the S-AHP may yield more accurate data than the grouped ICMJEc questions.32 The behaviour-analytic literature suggests that there is greater fidelity in self-reporting when more accurate descriptions and reinforcement are provided;33 our four-level quantification instrument more accurately describes the effort level expected of each activity than a binary ICMJE tick sheet. Also, our online calculator asks users to sign their attestation at the beginning of the form rather than the end, which induces greater morality.34

Additional strengths of the S-AHP model include its use of the widely accepted tetrad ICMJEc. The survey participants were experienced medical editors. The large effect size of the omnibus Kruskal–Wallis test and 24/36 (67%) statistically significant Wilcoxon signed-rank pairwise comparisons support distinct differences in the relative importance of the ICMJE components for authorship. The AHP was objectively weighted using the median survey scores, and the 2.2% consistency ratio for 36 pairwise comparisons was exceptional. The generalizability of the S-AHP model was supported by the very similar ratings of WAME editors with their ophthalmology counterparts. There were no trends upon the geographic subanalysis of the ratings from our international editors, which further supports the generalizability of the S-AHP model. However, the editorial boards of each journal, or specialty or country can customize their own AHP weightings if desired.

The online calculator for the S-AHP numeric index has several advantages. It can be completed almost as quickly as a tick sheet, but the scoring instrument compels researchers to contemplate the effort they spent on each authorship component, with an emphasis on accountability. If an individual journal wants to develop its own numeric index for authorship by surveying its own editorial board, the AHP weights can be easily adjusted in the spreadsheet. The last line of the calculator indicates whether the tetrad ICMJEc for authorship are satisfied. Low S-AHP scores may make undeserving researchers realize that their claim to authorship is an ethical faux-pas. If no author selects the highest level of accountability (guarantor), this may be a red flag. If the principal author of a multi-authored publication reports high effort/capability scores for the first two ICMJEc but low scores for the last two criteria, the editors should ensure that the senior responsible author has thoroughly reviewed the final manuscript and is guarantor of the manuscript. The senior responsible author, who is often the last author, is expected to have high effort or capability scores for the final approval and accountability components of authorship.

Many other activities can enhance the integrity of authorship besides the S-AHP numeric index. Before the initiation of research, the eligibility, responsibilities of authorship and order of authorship should be clarified.35 The contemporaneous recording of dates and time logs for the various components of authorship, and correlation with research notes and lab files can increase the authenticity of authorship claims. To grow a culture of ethical authorship the ICMJEc should be learnt in medical schools, research training programs and continuing medical education courses. Journals that subscribe to the ICMJEc should post the most recent criteria in their author information section.36 The use of the Open Researcher and Contributor ID (ORCID) increases the transparency of a researcher’s ability by listing the education and qualifications, invited positions and distinctions, societal memberships and prior publications of the author. Designating the first author as guarantor of the article may most accurately identify the responsible individual when authorship misconduct occurs.37 Guest authorship may be disincentivized if the h-index scores of “middle man” coauthors38 (authors other than the first author or corresponding author) are diminished when there are more than 10 authors, or if the h-index credit is apportioned to coauthors based on their S-AHP score.

The ICMJEc for authorship exclude administrative support, fund-raising, and the donation of equipment or study subjects. To cultivate research collaborations and institutional cooperation, journals should incorporate contributorship models, and universities should acknowledge the academic value of contributorship. A Contributor Roles Taxonomy (CRediT) contributorship model with digital badges has been suggested in addition to traditional authorship to clarify attribution credit, reduce author disputes and increase collaboration and the sharing of data and code.39,40

Future work includes reanalysis with a larger pool of editors, and the comparison of the authorship scores from different specialty journals, and quantitative versus qualitative versus mixed methods research.

In conclusion, authorship is essential to academic medicine, but abuses of authorship harm the integrity and advancement of science. A numeric index for authorship promotes ethical research in medical education and practice by potentially decreasing abuses of authorship, and may help reform the “middle-man” co-author concerns38 with h-index citations. The unique attributes of the S-AHP numeric index include its use of all four of the most recent ICMJEc, its survey of medical editors (the acknowledged experts on medical authorship), and its ranking of the relative importance of the components of authorship with an analytic hierarchy process to minimize bias. The S-AHP found that the ICMJE components for authorship have different levels of importance. Accountability and final approval of the manuscript were the paramount components of authorship. Although data acquisition and data analysis were also important, they were assigned the lowest priority in the hierarchy of authorship components. Unlike previous authorship indices, the S-AHP numeric index transformed the binary tetrad ICMJEc into nine distinct weighted authorship components and combined this with a four-level ordinal effort scale to increase the specificity of authorship tasks. The study is the first to provide an online spreadsheet calculator for authorship. The S-AHP numeric index calculator is also unique because it solicits a preceding declaration of morality, yet requires little more time to complete than a typical ICMJEc declaration form. The specificity of the S-AHP numeric index calculator and the requirement for early attestation may help discourage practices such as guest authorship and gift authorship, and help adjudicate disputes in the order of authorship.


AHP, analytic hierarchy process; CRediT, Contributor Roles Taxonomy; ICMJE, International Committee of Medical Journal Editors; ICMJEc, Authorship criteria of the International Committee of Medical Journal Editors; ORCID, Open Researcher and Contributor ID; S-AHP, Survey-weighted Analytic Hierarchy Process; WAME, World Association of Medical Editors.

Data Sharing Statement

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Ethics Approval

Michael Garron Hospital REB; NR-297.


  1. This research was completed in part to meet the Capstone requirements of the Masters of Education for the Health Professions degree at the Johns Hopkins University Faculty of Education. I thank Professor John Shatzer (Johns Hopkins University) and Dr. Joseph Gasser (University of Toronto, Ophthalmology) for their invaluable supervision during this project.
  2. I thank Dr. Margaret Winkler of the World Association of Medical Editors (WAME) for her assistance in distributing the survey. (The survey results do not necessarily reflect the views of WAME or its officers or members.)
  3. I thank Royce C. Ing for his assistance in programming the online calculator.
  4. The survey results do not necessarily reflect the views of the World Association of Medical Editors or its officers or members.
  5. The material was presented online at the Johns Hopkins Decennial Masters of Education in the Health Professions Conference on July 27, 2021.

Author Contributions

The author made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; has agreed on the journal to which the article has been submitted; and agreed to be accountable for all aspects of the work.


There is no funding to report.


EI is a Professor at the University of Toronto Temerty Faculty of Medicine, a section editor for the Canadian Journal of Ophthalmology, a member of the World Association of Medical Editors, has a Masters of Public Health (Harvard), a PhD in diagnostic prediction models (Kingston), and is a fellow in the Masters of Education in the Health Professions program at the Johns Hopkins University Faculty of Education. The author reports no other conflicts of interest in this work.


1. Royal College of Physicians and Surgeons of Canada. CanMEDS: better standards, better physicians, better care; 2015 [cited December 11, 2020]. Available from: Accessed September 08, 2021.

2. Lypson M, Philibert I. Residents and authorship: rights, obligations, and avoiding the pitfalls. J Grad Med Educ. 2012;4(2):138–139. doi:10.4300/JGME-04-02-31

3. Papadakis M. How many scientists does it take to write a COVID-19 case report? Account Res. 2021;28(3):186–190. doi:10.1080/08989621.2020.1821369

4. Marušic A, Bošnjak L, Jeroncic A, Jefferson T. A systematic review of research on the meaning, ethics and practices of authorship across scholarly disciplines. PLoS One. 2011;6(9):e23477. doi:10.1371/journal.pone.0023477

5. Uijtdehaage S, Brian M, Durning S. Whose paper is it anyway? Authorship criteria according to established scholars in health professions education. Acad Med. 2018;93(8):1171–1175. doi:10.1097/ACM.0000000000002144

6. Chapman C, Bicca-Marques J, Pengfei F, et al. Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia. Proc R Soc B. 2019;286:2047. doi:10.1098/rspb.2019.2047

7. Council of Science Editors. Authorship and authorship responsibilities; 2020 [cited December 8, 2020]. Accessed September 8, 2021.

8. International Committee of Medical Journal Editors (ICMJE); 2021 [cited November 16, 2020]. Accessed September 8, 2021.

9. Verhagen J, Wallace K, Collins S, Scott T. QUAD system offers fair shares to all authors. Nature. 2003;426(6967):602. doi:10.1038/426602a

10. Ivanis A, Hren D, Sambunjak D, Marusić M, Marusić A. Quantification of authors’ contributions and eligibility for authorship: randomized study in a general medical journal. J Gen Intern Med. 2008;23(9):1303–1310. doi:10.1007/s11606-008-0599-8

11. Masud N, Masuadi E, Moukaddem A, et al. Development and validation of Authorship Order Score (AOS) for scientific publication. Health Prof Educ. 2020;6(3):434–443. doi:10.1016/j.hpe.2020.04.006

12. Boyer S, Ikeda T, Lefort M, Malumbres-Olarte J, Schmidt J. Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles. Res Integr Peer Rev. 2017;2(1):1–8. doi:10.1186/s41073-017-0042-y

13. Krsacok S, Moroney W. Quantification of adverb intensifiers for use in ratings of acceptability, adequacy, and relative goodness. Hum Factors Ergonom Soc Ann Meet Proc. 2002;46(24):1944–1948. doi:10.1177/154193120204602402

14. Raosoft. Sample size calculator; 2004 [cited December 12, 2020]. Accessed September 8, 2021.

15. PeoplePulse. Survey response rates; 2021. Accessed September 8, 2021.

16. Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Quart. 2009;73(2):349–360. doi:10.1093/poq/nfp031

17. Saaty T. The Analytical Hierarchy Process. New York: McGraw Hill; 1980.

18. Sheskin T. An analytic hierarchy process model to apportion co‐author responsibility. Sci Eng Ethics. 2006;12(3):555–565. doi:10.1007/s11948-006-0053-4

19. Song B, Kang S. A method of assigning weights using a ranking and nonhierarchy comparison. Adv Decis Sci. 2016;2016:1–9. doi:10.1155/2016/8963214

20. Goepel K. Business performance management Singapore; 2019 [cited December 12, 2020]. Accessed September 8, 2021.

21. Carneiro MA, Cangussú SD, Fernandes G. Ethical abuses in the authorship of scientific papers. Rev Bras. 2007;51(1):1–5. doi:10.1111/j.1440-1673.1994.tb00114.x

22. Kwok L. The White Bull effect: abusive coauthorship and publication parasitism. J Med Ethics. 2005;31(9):554–556. doi:10.1136/jme.2004.010553

23. Guglielmi G. Who gets credit? Survey digs into the thorny question of authorship. Nature. 2018.

24. MEDLINE/PubMed Resources. Number of authors per MEDLINE/PubMed citation; 2020 [cited December 11, 2020]. Accessed September 8, 2021.

25. Helgesson G, Master Z, Bülow W. How to handle co-authorship when not everyone’s research contributions make it into the paper. Sci Eng Ethics. 2021;27(2):27. doi:10.1007/s11948-021-00303-y

26. Herz N, Dan O, Censor N, Bar-Haim Y. Opinion: authors overestimate their contribution to scientific work, demonstrating a strong bias. PNAS. 2020;117:6282–6285. doi:10.1073/pnas.2003500117

27. Ahmed S, Maurana C, Engle J, Uddin D, Glaus K. A method for assigning authorship in multiauthored publications. Fam Med. 1997;29:42–44.

28. Hagen N, DeSalle R. Harmonic allocation of authorship credit: source-level correction of bibliometric bias assures accurate publication and citation analysis. PLoS One. 2008;3(12):e4021. doi:10.1371/journal.pone.0004021

29. Curran O. HowStuffWorks. Database of 18,000 retracted scientific papers now online; 2018. Accessed September 8, 2021.

30. Retraction Watch. The retraction watch leaderboard; 2021 [cited January 16, 2021]. Accessed September 8, 2021.

31. Ali M. ICMJE criteria for authorship: why the criticisms are not justified? Graefes Arch Clin Exp Ophthalmol. 2021;259:289–290. doi:10.1007/s00417-020-04825-2

32. Thompson F, Subar A, Brown C, et al. Cognitive research enhances accuracy of food frequency questionnaire reports: results of an experimental validation study. J Am Diet Assoc. 2002;102(2):212–225. doi:10.1016/s0002-8223(02)90050-7

33. Fryling M. A developmental-behavioral analysis of lying. Int J Psychol Psychol Ther. 2016;16(1):13–22.

34. Shu L, Mazar N, Gino F, Ariely D, Bazerman M. Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. PNAS. 2012;109(38):15197–15200. doi:10.1073/pnas.1209746109

35. Albert T, Wager E. How to handle authorship disputes: a guide for new researchers. COPE Rep. 2003. doi:10.24318/cope.2018.1.1

36. Misra D, Ravindran V, Agarwal V. Integrity of authorship and peer review practices: challenges and opportunities for improvement. J Korean Med Sci. 2018;33(46):e287. doi:10.3346/jkms.2018.33.e287

37. Hussinger K, Pellens M, Sartori G. Scientific misconduct and accountability in teams. PLoS One. 2019;14(5):e0215962. doi:10.1371/journal.pone.0215962

38. Kreiner G. The Slavery of the h-index – measuring the unmeasurable. Front Hum Neurosci. 2016;10:556. doi:10.3389/fnhum.2016.00556

39. Brand A, Allen L, Altman M, Hlawa M, Scott J. Beyond authorship: attribution, contribution, collaboration and credit. Learn Publ. 2015;28:151–155. doi:10.1087/20150211

40. British Medical Journal. Authorship &amp; contributorship. The BMJ; 2021. Accessed September 8, 2021.

Creative Commons License © 2021 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.