
Use of Novel Open-Source Deep Learning Platform for Quantification of Ki-67 in Neuroendocrine Tumors – Analytical Validation


Received 18 October 2023

Accepted for publication 22 November 2023

Published 4 December 2023 Volume 2023:16 Pages 5665–5673

DOI https://doi.org/10.2147/IJGM.S443952

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Scott Fraser



Talat Zehra,1 Mahin Shams,2 Rabia Ali,3 Asad Jafri,3 Amna Khurshid,3 Humaira Erum,3 Hanna Naqvi,3 Jamshid Abdul-Ghafar4

1Pathology Department, Jinnah Sindh Medical University, Karachi, Pakistan; 2Pathology Department, United Medical and Dental College, Karachi, Pakistan; 3Histopathology Department, Liaquat National Hospital, Karachi, Pakistan; 4Department of Pathology and Clinical Laboratory, French Medical Institute for Mothers and Children (FMIC), Kabul, Afghanistan

Correspondence: Jamshid Abdul-Ghafar, Department of Pathology and Clinical Laboratory, French Medical Institute for Mothers and Children (FMIC), Kabul, Afghanistan, Tel +93792827287, Email [email protected]

Background: Neuroendocrine tumors (NETs) represent a diverse group of neoplasms that arise from neuroendocrine cells, with Ki-67 immunostaining serving as a crucial biomarker for assessing tumor proliferation and prognosis. Accurate and reliable quantification of Ki-67 labeling index is essential for effective clinical management.
Methods: We aimed to evaluate the performance of the open-source, open-access, cloud-native deep learning platform DeepLIIF (https://deepliif.org) for the quantification of Ki-67 expression in gastrointestinal neuroendocrine tumors and to compare it with manual quantification.
Results: Our results demonstrate that DeepLIIF quantification of Ki-67 in NETs achieves a high degree of accuracy, with an intraclass correlation coefficient (ICC) of 0.885 (95% CI 0.848–0.916), indicating good reliability when compared with manual assessments by experienced pathologists. DeepLIIF exhibits excellent intra- and inter-observer agreement and ensures consistency in Ki-67 scoring. Additionally, DeepLIIF significantly reduces analysis time, making it a valuable tool for high-throughput clinical settings.
Conclusion: This study showcases the potential of open-source, open-access, user-friendly deep learning platforms, such as DeepLIIF, for the quantification of Ki-67 in neuroendocrine tumors. The analytical validation presented here establishes the reliability and robustness of this innovative method, paving the way for its integration into routine clinical practice. Accurate and efficient Ki-67 assessment is paramount for risk stratification and treatment decisions in NETs, and AI offers a promising solution for enhancing diagnostic accuracy and patient care in the field of neuroendocrine oncology.

Keywords: digital image analysis, histopathology, Ki-67 proliferation index, neuroendocrine tumors, machine learning

Introduction

Ki-67 is a widely used biomarker for quantifying cellular proliferation in tumors, including neuroendocrine tumors (NETs). The Ki-67 index is calculated by dividing the number of Ki-67-positive cells by the total number of cells evaluated in a tissue sample.1 A higher Ki-67 index indicates a higher rate of cellular proliferation and is associated with a more aggressive tumor phenotype. In NETs, Ki-67 quantification helps determine tumor grade and prognosis and is used to monitor response to treatment. However, Ki-67 is not specific for NETs, and its expression can be influenced by other factors such as the stage of the cell cycle; it should therefore be interpreted in the context of other clinical and pathological features.2 Artificial intelligence (AI) can support the detection and quantification of Ki-67 in NETs using techniques such as deep learning algorithms, computer vision and image analysis. Deep learning is a type of AI that has shown promise in the diagnosis and prognostication of NETs.2–4 Deep learning (also known as deep structured learning) is a subfield of machine learning based on artificial neural networks (ANNs), in which statistical models are learned from input training data.5 Deep learning algorithms can be trained on large amounts of medical imaging data to accurately identify and quantify Ki-67-positive cells in digital pathology images.3 The use of AI in Ki-67 detection has the potential to improve the accuracy and consistency of Ki-67 quantification and to reduce inter-observer variability.
Additionally, AI can reduce the time and resources required for manual Ki-67 quantification, allowing faster and more efficient evaluation of large numbers of tissue samples.4 Deep learning algorithms can also be used to predict patient outcomes and treatment response by analyzing imaging and clinical data to identify predictive features associated with disease progression and survival, enabling the development of personalized treatment plans. However, deep learning algorithms for NET analysis are still under development and require validation, and comparison of AI-based Ki-67 quantification with traditional manual methods is needed to ensure its accuracy and reliability.5–8
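As an illustration of the index definition above, the Ki-67 labeling index is simply the fraction of Ki-67-positive tumor cells, expressed as a percentage. A minimal sketch follows; the function name and counts are ours and purely hypothetical, not part of DeepLIIF.

```python
def ki67_index(positive_cells: int, total_cells: int) -> float:
    """Ki-67 labeling index: Ki-67-positive tumor cells as a
    percentage of all tumor cells evaluated in the hotspot."""
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    return 100.0 * positive_cells / total_cells

# Hypothetical hotspot count: 250 positive cells out of 1000 tumor cells
print(ki67_index(250, 1000))  # 25.0
```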

Various studies have previously examined Ki-67 quantification, all on whole slide images (WSIs).1–4,6 We conducted this study on digital images captured at 10x with a microscope-mounted camera and analyzed with open-source software. DeepLIIF is also the only AI-based immunohistochemical (IHC) scoring model freely available through a cloud-native platform with a user-friendly interface (https://deepliif.org), allowing anyone to upload images and obtain results. We applied this software to data from our own population and validated it. The DeepLIIF model was not adjusted for our dataset; we used it out of the box. The original DeepLIIF model was trained and validated on restained, co-registered scanned IHC and multiplex immunofluorescence-stained slides, and the DeepLIIF code, pretrained models, and training/validation/testing datasets are publicly available on GitHub (https://github.com/nadeemlab/DeepLIIF). Our aim was to show that pathologists who work with a microscope and lack access to a slide scanner can also benefit from AI-based deep learning models. These are proof-of-concept studies intended mainly for research. No previous studies have been performed in low-resource settings with fully open-source, state-of-the-art AI approaches that generalize to low-quality microscope snapshots with large tumor coverage (10x). For comprehensive analytical validation, an expert pathologist (P1) manually counted the IHC+/- tumor cells in our neuroendocrine tumor images, an extremely tedious and laborious task, especially at 10x resolution with much larger tumor coverage; no previous manual versus AI counting studies have been performed at 10x, as most were limited to 40x with far lower cell counts. Through this analytical validation, we aimed to show that the DeepLIIF model can provide an accurate estimate of the Ki-67 index over much larger tumor coverage in low-resource settings, where scanners and commercial AI solutions are not accessible or affordable.
This analytical validation gives us confidence and sets the stage for the much larger-scale clinical validation we are now undertaking. Moreover, DeepLIIF offers developing-region pathologists an accessible route to advanced AI solutions where they are needed most, given declining pathologist numbers and an increasing patient load. This study was conducted to assess the concordance between Ki-67 quantification by the open-source AI-based DeepLIIF software (https://deepliif.org) and manual quantification in NETs.

Materials and Methods

The study was conducted at Liaquat National Hospital from 2 June 2023 to 2 August 2023 and included all previously diagnosed cases of NETs from the Liaquat National Hospital Laboratory, after approval from the ethical review committee. Slides immunostained for Ki-67 were used. A total of 100 digital images were captured at 10x from hotspot regions using a microscope-mounted camera. Five consultant histopathologists (P1 to P5) scored the 100 digital images using the eyeballing method. Of the five, one histopathologist (P1), who manually counted the cells and prepared the scores, was taken as the standard, and this score was compared with the scores of the other pathologists and the AI software. Since Ki-67 scoring is performed only on tumor cells, P1 counted only the Ki-67+/- tumor cells. As shown in previous studies,9 disagreement among pathologists on tumor cell annotations (owing to their clear, large, irregular cellular morphology) on IHC images is less than 20%, making single-pathologist counts an adequate manual reference for Ki-67 scoring.

Different tools within the DeepLIIF user interface were used to select tumor regions and exclude stromal regions (Figure 1). The software offers an exclusion/inclusion region-of-interest lasso/selection tool, which was used to exclude all stromal cells after running DeepLIIF, as shown in Figure 1. This selection was performed manually by the pathologist using the tools provided by the software. Manual and automated scores were then compared for concordance, and agreement was assessed using the kappa statistic.

All statistical analyses were performed using SPSS software version 24. Normality of continuous data was assessed using the Kolmogorov–Smirnov test. Data are presented as median [IQR]. Agreement of P1 (gold standard) with P2, P3, P4, P5 and AI was measured by kappa analysis, and validity was assessed by correlation. ROC curves were plotted and the AUC evaluated for each rater to assess test quality. Diagnostic accuracy was also calculated for P2, P3, P4, P5 and AI using a Ki-67 cutoff score of 20.
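For illustration, the unweighted Cohen's kappa used in the agreement analysis can be sketched in pure Python. This is our own re-implementation under stated assumptions, not the SPSS procedure used in the study, and the dichotomized scores below are hypothetical.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical Ki-67 scores dichotomized at the 20% cutoff (1 = above 20%)
p1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # reference pathologist
ai = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # AI platform
print(round(cohen_kappa(p1, ai), 3))  # 0.8
```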

Results

Using the Kolmogorov–Smirnov test, the distributions of P1, P2, P3, P4, P5 and AI scores were found to be non-normal (p < 0.05). Descriptive statistics for these variables are reported in Table 1. In Table 2, agreement of P1 with P2, P3, P4, P5 and AI was assessed to determine inter-rater reliability. The kappa statistics were 0.798, 0.674, 0.589, 0.609 and 0.860 for P2, P3, P4, P5 and AI, respectively, representing moderate agreement with P4, good agreement with P2, P3 and P5, and excellent agreement with AI.

Table 1 Descriptive Statistics for P1, P2, P3, P4, P5 and AI

Table 2 Measurement of Agreement

Figure 2 shows that the Ki-67 findings of P1 are strongly positively correlated with P2, P3 and P4 and very highly positively correlated with P5 and AI. Cronbach's alpha was 0.981, indicating excellent internal consistency, and the intraclass correlation coefficient (ICC) was 0.885 with 95% CI (0.848–0.916), indicating good reliability. In Figure 3, ROC curves were plotted for P2, P3, P4, P5 and AI to assess test quality, taking P1 as the gold standard. Test quality was judged by the area under the curve, which indicated excellent test quality for all the pathologists and AI.

Figure 1 Digital image of Ki-67 neuroendocrine tumor prepared at 10x was uploaded in the software and the concordance between manual versus automated score was observed.

In Table 3, the diagnostic accuracy of P2, P3, P4, P5 and AI was assessed taking P1 as the gold standard and using a Ki-67 cutoff score of 20. Diagnostic accuracy was 90%, 84%, 80%, 81% and 93% for P2, P3, P4, P5 and AI, respectively. The highest diagnostic accuracy (93%) was achieved by AI, compared with the other pathologists.

Table 3 Sensitivity, Specificity, PPV, NPV and Diagnostic Accuracy of P2, P3, P4, P5 & AI Using P1 as Gold Standard
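The per-rater metrics reported in Table 3 follow directly from a 2x2 confusion matrix against P1 at the 20% cutoff. A minimal sketch of that computation follows; the function name and the labels in the example are hypothetical and not the study data.

```python
def diagnostic_metrics(truth, pred):
    """Sensitivity, specificity and accuracy of binary predictions
    against a reference (1 = Ki-67 index above the 20% cutoff)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(truth),
    }

# Hypothetical example: 4 high-grade and 6 low-grade cases
m = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
print(round(m["accuracy"], 2))  # 0.8
```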

Discussion

NETs are a diverse group of neoplasms that originate from neuroendocrine cells dispersed throughout the body. NETs can occur in various organs, including the gastrointestinal tract (GIT), pancreas, lungs, and more, reflecting the widespread distribution of neuroendocrine cells in the human body.10,11 These tumors exhibit a remarkable spectrum of clinical behaviors, ranging from indolent, slow-growing lesions to highly aggressive malignancies. Histopathological diagnosis plays a pivotal role in characterizing these tumors, enabling clinicians to determine their precise nature, grade, and potential for metastasis. One of the prognostic factors for GIT NETs is the proliferative grade of the tumor calculated using the proportion of tumor cells positive for Ki-67 immunostaining.12–14

Traditional methods of quantifying Ki-67 expression in NETs rely on manual assessment by pathologists. This process can be time-consuming, subjective and prone to inter-observer variability.14,15 Using a novel open-source deep learning platform for the quantification of Ki-67 in NETs has the potential to significantly enhance the accuracy, efficiency and reproducibility of the diagnostic procedure. The analytical validation is carried out to ensure that the results generated are reliable, consistent and comparable to established methods.1,15 Neural networks and advanced machine learning techniques can be utilized to automatically analyze histopathological images and accurately identify and quantify the Ki-67 protein expression, which is a marker for cell proliferation and an important factor in determining the aggressiveness of the tumor.1 The deep learning model is trained using the annotated dataset. The model learns to recognize patterns and features in the images that are indicative of Ki-67 expression levels. The model’s performance is then assessed using a separate unseen dataset. The results obtained from the deep learning platform should be compared to the gold standard, which involves manual assessment of Ki-67 expression by expert pathologists. This step is critical for establishing the accuracy and reliability of the deep learning platform.16,17

Deep learning models can process large volumes of histopathological images quickly and accurately, potentially reducing the time required for assessment. This is particularly valuable in clinical settings where timely diagnosis is critical. Deep learning algorithms provide consistent and objective results, mitigating the inherent variability associated with manual evaluations. This leads to more reliable and reproducible quantification of Ki-67 expression.18,19 Neural networks can detect subtle patterns in images that may not be easily discernible by human observers. This can lead to more precise quantification, especially in cases with varying staining intensities. Once trained, the deep learning model can be applied to a large number of images without diminishing the accuracy. This feature is particularly helpful when dealing with datasets that are too extensive for manual evaluation.4,5,20 Accurate and reliable quantification of Ki-67 expression can provide clinicians with essential information to determine the tumor’s aggressiveness, prognosis and potential treatment strategies. Automation and standardization of this process can lead to faster diagnoses, improved treatment planning, and enhanced patient outcomes.21,22

Recently, several studies demonstrated that deep learning algorithms can generate an automated Ki-67 proliferation index with high sensitivity and specificity in breast carcinomas and pancreatic neuroendocrine neoplasms.23–25 In this study, the median value was 25 for P1 and P2, 30 for P3, P4 and P5, and 21 for AI (Table 1). Cronbach's alpha was 0.981, indicating excellent internal consistency, and the ICC was 0.885 with 95% CI (0.848–0.916), indicating good reliability (Figure 2). The intraclass correlation coefficient results for digital image analysis in another study were 0.706, 0.606 and 0.912 for one, two and three fields of view, respectively.3 A study by Swati et al showed that the eyeball estimation method had the highest concordance rate (84.2%), followed by WSI analysis of the entire slide (73.7%).3

Figure 2 Validity by Correlation Matrix showing that the finding of Ki-67 by P1 is strongly positively correlated with P2, P3 and P4 while very highly positively correlated with P5 and AI.

Tang et al showed that the digital image analysis (DIA) and manual count were highly concordant (ICC = 0.98). The ICC between DIA and the mean eyeball estimation of all observers was 0.88. The ICC for intra-observer consistency was 0.39±0.26.20

In another study, the digital image analysis Ki-67 showed excellent agreement with the Ki-67 index in the routine pathology reports with 95% confidence interval (CI): 0.94–0.96. The observed kappa value was 0.86 (95% CI: 0.81–0.91).21 In our study, the strength of agreement was good for P2, P3 and P5, moderate for P4 and excellent for AI. The kappa values observed in this study were 0.798, 0.674, 0.589, 0.609 and 0.860 for P2, P3, P4, P5 and AI, respectively (Table 2). Sarag et al conducted a study in which digital image analysis of Ki-67 showed excellent correlation with manual counting.26

The diagnostic accuracy in our study was 90%, 84%, 80%, 81% and 93% for P2, P3, P4, P5 and AI, respectively; the highest (93%) was achieved by AI. The sensitivity was 88.89%, 77.78%, 68.89%, 68.89% and 97.78% for P2, P3, P4, P5 and AI, respectively. The specificity was 90.91%, 89.09%, 89.09%, 90.91% and 89.09% for P2, P3, P4, P5 and AI, respectively (Table 3 and Figure 2).

Figure 3 ROC plots prepared for pathologists P2, P3, P4, P5 and AI to assess the quality of the test, taking the pathologist P1 score as the gold standard.

Therefore, developing an automated counting method using digital image analysis can enhance the scoring process and aid the pathologist, but it requires careful configuration of the software, as it can introduce errors such as counting stromal cells or lymphocytes and can show discordant results due to nonspecific background staining.27,28

We have previously conducted studies on simple digital images across various pathologies.10,29–31 In doing so, we harnessed the power of AI on simple digital images in resource-limited setups. The results of the above-mentioned studies were highly encouraging and offered a way forward for pathologists working in developing countries to adopt digital and computational pathology in resource-limited setups.10

As stated above, our aim was to show that pathologists who work with a microscope and lack access to a slide scanner can also benefit from AI-based deep learning models. These proof-of-concept studies are intended mainly for research: no previous studies have been performed in low-resource settings with fully open-source, state-of-the-art AI approaches that generalize to low-quality microscope snapshots with large tumor coverage (10x), and no previous manual versus AI counting studies have been performed at 10x, as most were limited to 40x with far lower cell counts. Through this analytical validation, we have shown that the DeepLIIF model can provide an accurate estimate of the Ki-67 index over much larger tumor coverage in low-resource settings, where scanners and commercial AI solutions are not accessible or affordable.32,33 This gives us confidence and sets the stage for the much larger-scale clinical validation we are now undertaking. Moreover, DeepLIIF offers developing-region pathologists an accessible route to advanced AI solutions where they are needed most, given declining pathologist numbers and an increasing patient load.

Conclusion

The integration of a novel open-source deep learning platform for the quantification of Ki-67 in NETs has the potential to revolutionize the diagnostic process, and the medical community can confidently embrace this technology as a valuable tool in the assessment and management of these tumors. However, validation of an open-access AI platform for immunohistochemistry is crucial to ensure that the tool provides accurate and reliable results.

Abbreviations

NET, Neuroendocrine tumor; ANNs, Artificial neural networks; AI, Artificial intelligence; WSI, whole-slide image; IHC, Immunohistochemistry; ICC, Intraclass Correlation Coefficient; GIT, Gastrointestinal tract; DIA, digital image analysis.

Data Sharing Statement

Data and materials of this work are available from the corresponding author on reasonable request.

Ethics Approval and Consent to Participate

All procedures were performed in accordance with the ethical standards of the institutional ethics committee (ERC) and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The study was approved by the ERC of Liaquat National Hospital (App.#R.C – Histopathology – 03/2023/23). Informed consent was obtained from all subjects and/or their legal guardian(s) at the time of interview.

Acknowledgments

We are thankful to Dr. Saad Nadeem and the DeepLIIF team for releasing this software.

Funding

No financial support was provided for this study.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Vesterinen T, Säilä J, Blom S, Pennanen M, Leijon H, Arola J. Automated assessment of Ki-67 proliferation index in neuroendocrine tumors by deep learning. APMIS. 2022;130(1):11–20. PMID: 34741788; PMCID: PMC9299468. doi:10.1111/apm.13190

2. Wang HY, Li ZW, Sun W, et al. Automated quantification of Ki-67 index associates with pathologic grade of pulmonary neuroendocrine tumors. Chin Med J. 2019;132(5):551–561. PMID: 30807354; PMCID: PMC6416093. doi:10.1097/CM9.0000000000000109

3. Satturwar SP, Pantanowitz JL, Manko CD, et al. Ki-67 proliferation index in neuroendocrine tumors: can augmented reality microscopy with image analysis improve scoring? Cancer Cytopathol. 2020;128(8):535–544. doi:10.1002/cncy.22272

4. Liu SZ, Staats PN, Goicochea L, et al. Automated quantification of Ki-67 proliferative index of excised neuroendocrine tumors of the lung. Diagn Pathol. 2014;9(1):174. doi:10.1186/s13000-014-0174-z

5. Cui M, Zhang DY. Artificial intelligence and computational pathology. Laboratory Investigation. 2021;101(4):412–422. doi:10.1038/s41374-020-00514-0

6. Basile ML, Kuga FS, Del Carlo Bernardi F. Comparation of the quantification of the proliferative index KI-67 between eyeball and semi- automated digital analysis in gastro-intestinal neuroendocrine tumors. Surg Exp Pathol. 2019;2(21). doi:10.1186/s42047-019-

7. Cives M, Strosberg JR. Gastroenteropancreatic neuroendocrine tumors. CA Cancer J Clin. 2018;68(6):471–487. doi:10.3322/caac.21493

8. Uxa S, Castillo-Binder P, Kohler R, Stangner K, Müller GA, Engeland K. Ki-67 gene expression. Cell Death Differ. 2021;28(12):3357–3370. doi:10.1038/s41418-021-00823-x

9. Beck A, Glass B, Elliott H, et al. An empirical framework for validating artificial intelligence–derived PD-L1 positivity predictions applied to urothelial carcinoma. J Immunother Cancer. 2019;7(1):730.

10. Shaikh A, Jamal N, Shabbir A, Arif B, Ferozuddin N. Use of artificial intelligence in health diagnostics-a validation study on chorionic villi. Pk J Pathol. 2021;32(4):147–151.

11. Guilmette JM, Nose V. Neoplasms of the neuroendocrine pancreas: an update in the classification, definition and molecular genetic advances. Adv Anat Pathol. 2019;26(1):13–30. doi:10.1097/PAP.0000000000000201

12. Grosse C, Noack P, Silye R. Accuracy of grading pancreatic neuroendocrine neoplasms with Ki-67 index in fine-needle aspiration cellblock material. Cytopathology. 2019;30(2):187–193. doi:10.1111/cyt.12643

13. Abi-Raad R, Lavik JP, Barbieri AL, Zhang X, Adeniran AJ, Cai G. Grading pancreatic neuroendocrine tumors by Ki-67 index evaluated on fine-needle aspiration cell block material. Am J Clin Pathol. 2020;153(1):74–81. doi:10.1093/ajcp/aqz110

14. Volynskaya Z, Mete O, Pakbaz S, Al-Ghamdi D, Asa SL. Ki-67 quantitative interpretation: insights using image analysis. J Pathol Inform. 2019;10(1):8. doi:10.4103/jpi.jpi_76_18

15. Chen PC, Gadepalli K, Macdonald R. An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis. Nat Med. 2019;25(9):1453–1457. doi:10.1038/s41591-019-0539-7

16. Hacking SM, Sajjan S, Lee L, et al. Potential pitfalls in diagnostic digital image analysis: experience with Ki-67 and PHH3 in gastrointestinal neuroendocrine tumors. Pathol Res Pract. 2020;216(3):152753. doi:10.1016/j.prp.2019.152753

17. Dogukan FM, Yilmaz Ozguven B, Dogukan R, Kabukcuoglu F. Comparison of monitor-image and printout-image methods in Ki-67 scoring of gastroenteropancreatic neuroendocrine tumors. Endocr Pathol. 2019;30(1):17–23. doi:10.1007/s12022-018-9554-3

18. Tschuchnig ME, Oostingh GJ, Gadermayr M. Generative adversarial networks in digital pathology: a survey on trends and future potential. Patterns. 2020;1(6):100089. doi:10.1016/j.patter.2020.100089

19. Tang LH, Gonen M, Hedvat C, Modlin IM, Klimstra DS. Objective quantification of the Ki-67 proliferative index in neuroendocrine tumors of the gastroenteropancreatic system: a comparison of digital image analysis with manual methods. Am J Surg Pathol. 2012;36(12):1761–1770. PMID: 23026928. doi:10.1097/PAS.0b013e318263207c

20. Lea D, Gudlaugsson EG, Skaland I, Lillesand M, Søreide K, Søreide JA. Digital image analysis of the proliferation markers Ki-67 and Phosphohistone H3 in gastroenteropancreatic neuroendocrine neoplasms: accuracy of grading compared with routine manual hot spot evaluation of the Ki-67 index. Appl Immunohistochem Mol Morphol. 2021;29(7):499–505. PMID: 33758143; PMCID: PMC8354564. doi:10.1097/PAI.0000000000000934

21. Jahn SW, Plass M, Moinfar F. Digital pathology: advantages, limitations and emerging perspectives. J Clin Med. 2020;9(11):3697. doi:10.3390/jcm9113697

22. Fulawka L, Blaszczyk J, Tabakov M, Halon A. Assessment of ki-67 proliferation index with deep learning in DCIS (ductal carcinoma in situ). Sci Rep. 2022;12(1):3166. doi:10.1038/s41598-022-06555-3

23. Feng M, Deng Y, Yang L, et al. Automated quantitative analysis of ki-67 staining and he images recognition and registration based on whole tissue sections in breast carcinoma. Diagn Pathol. 2020;15(1):65. doi:10.1186/s13000-020-00957-5

24. Niazi MK, Tavolara TE, Arole V, Hartman DJ, Pantanowitz L, Gurcan MN. Identifying tumor in pancreatic neuroendocrine neoplasms from Ki-67 images using transfer learning. PloS one. 2018;13(4):e0195621. doi:10.1371/journal.pone.0195621

25. Boukhar SA, Gosse MD, Bellizzi AM, Rajan KDA. Ki-67 proliferation index assessment in gastroenteropancreatic neuroendocrine tumors by digital image analysis with stringent case and hotspot level concordance requirements. Am J Clin Pathol. 2021;156(4):607–619. doi:10.1093/AJCP/AQAA275

26. Owens R, Gilmore E, Bingham V, et al. Comparison of different anti-Ki-67 antibody clones and hotspot sizes for assessing proliferative index and grading in pancreatic neuroendocrine tumours using manual and image analysis. Histopathology. 2020;77(4):646–658. doi:10.1111/his.14200

27. Trikalinos NA, Chatterjee D, Lee J, et al. Accuracy of grading in pancreatic neuroendocrine neoplasms and effect on survival estimates: an institutional experience. Ann Surg Oncol. 2020;27(9):3542–3550. doi:10.1245/s10434-020-08377-x

28. Zehra T, Anjum S, Mahmood T, et al. A novel deep learning-based mitosis recognition approach and dataset for uterine leiomyosarcoma histopathology. Cancers. 2022;14(15):3785. doi:10.3390/cancers14153785

29. Zehra T, Parwani A, Abdul-Ghafar J, Ahmad Z. A suggested way forward for adoption of AI-Enabled digital pathology in low resource organizations in the developing world. Diagn Pathol. 2023;18(1):1–6. doi:10.1186/s13000-023-01352-6

30. Zehra T, Shams M, Ahmad Z, Chundriger Q, Ahmed A, Jaffar N. Ki-67 quantification in breast cancer by digital imaging ai software and its concordance with manual method. JCPSP. 2023;33(5):544–547.

31. Zehra T, Jaffar N, Shams M, et al. Use of a novel deep learning open-source model for quantification of Ki-67 in breast cancer patients in Pakistan: a comparative study between the manual and automated methods. Diagnostics. 2023;13(19):3105. doi:10.3390/diagnostics13193105

32. Ghahremani P, Marino J, Dodds R, Nadeem S. DeepLIIF: an online platform for quantification of clinical pathology slides. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022:21399–21405.

33. Ghahremani P, Marino J, Hernandez-Prera J, et al. An AI-ready multiplex staining dataset for reproducible and accurate characterization of tumor immune microenvironment. arXiv preprint arXiv:2305.16465v1; 2023. PMID: 37292462; PMCID: PMC10246071.
