Abstract
Early detection, regular monitoring, and post-surgery recurrence surveillance of eyelid tumors are crucial, yet frequent hospital visits are burdensome for patients in poor medical condition. This study validates a novel deep learning-based mobile application, built on the YOLOv5 and EfficientNet v2-B architectures, for self-diagnosing eyelid tumors, enabling improved health support systems for such patients. A total of 1195 preprocessed clinical ocular photographs with biopsy results were collected for model training. The best-performing model was converted into a smartphone-based application and further evaluated on an external validation dataset, achieving an accuracy of 0.921 for the triple-classification task (benign eyelid tumor, malignant eyelid tumor, or normal eye), generally superior to that of general physicians, resident doctors, and ophthalmology specialists. The Intelligent Eyelid Tumor Screening application offers a straightforward detection process, a user-friendly interface, and a treatment recommendation scheme; it provides preliminary evidence for recognizing eyelid tumors and could be used by healthcare professionals, patients, and caregivers for detection and monitoring purposes.
Introduction
The early diagnosis of eyelid tumors is critical for providing timely interventions, selecting appropriate treatments, and predicting prognoses. Different eyelid tumors have different treatment protocols owing to their distinct embryogenic origins. Nevus pigmentosus is the most common primary benign eyelid tumor, accounting for 35% of all eyelid lesions1,2. It is characterized by flat or raised lesions arising from melanocytes3. Patients often seek the removal of these lesions for cosmetic reasons, although conservative management is also a valid option. In contrast, surgical excision (even wide excision) is the first-line treatment for sebaceous gland carcinoma (SGC)4; even though it has a lower prevalence than basal cell carcinoma (BCC), it is potentially more aggressive. The risks of malignant tumor metastasis and recurrence increase without optimal clinical interventions, appropriate therapy protocols, or frequent follow-up. The prompt diagnosis and treatment of malignant eyelid tumors are essential for preventing intraorbital and intracranial extension and/or systemic spread. Therefore, the early diagnosis and timely treatment of eyelid tumors are crucial.
In recent years, artificial intelligence (AI) techniques such as deep learning and machine learning have played promising roles in ophthalmology, including blepharoptosis identification5, postoperative prediction6, eyelid measurement7, tumor classification8 and disease prediction9. However, few studies have demonstrated the application of deep learning-based methods in clinical practice.
Smartphones have powerful processors, large amounts of memory, high-resolution touch screens, wireless technology and multiple-lens cameras, which are well-suited for medical use. They have been used to collect patient data, access results, perform live video consultations and screen for diseases10,11,12,13. Because of the overlapping features exhibited by eyelid tumors in the early stages, it is difficult for ophthalmologists to conduct differential diagnoses with the naked eye. Thus there is a growing demand for the assistance of inexpensive and portable smartphone-based applications in the field of eyelid tumor detection.
In this study, we developed and validated a smartphone-based application, Intelligent Eyelid Tumor Screening System, to practically identify eyelid tumors using ocular clinical photographs in clinical settings.
Results
A total of 1195 eyes from 616 patients with eyelid tumors were retrospectively gathered for training, fine-tuning, and internally validating the AI system (Table 1). The mean (standard deviation, SD) age of the patients was 59.3 (17.5) years, and 63.76% of the patients were male. A total of 427 eyes of 410 patients were histopathologically diagnosed with benign tumors, whereas 206 eyes of 206 patients were diagnosed with malignant tumors; 562 eyes were normal eyes without tumors. All tumors had biopsy-confirmed diagnoses. The most common malignant eyelid tumor in our datasets was BCC (79/206, 38.35%), followed by SGC (49/206, 22.81%), squamous cell carcinoma (SCC) (19/206, 9.22%), and eyelid melanoma (11/206, 5.34%). The most common benign eyelid tumor was nevus (231/427, 54.10%), followed by cysts (31/427, 7.26%), seborrheic keratosis (22/427, 5.15%), and xanthelasma (17/427, 3.98%). Seventeen tumors were located in the bilateral eyelids, and 19 tumors were located in both the upper and lower eyelids.
Table 1 Components of the developmental dataset and external validation dataset
The mean average precision (mAP) of the YOLOv5-based detection model on the ocular localization task was 0.95, with an F1 score of 0.99. The triple-classification model (normal eye, malignant tumor, or benign tumor) achieved macro-averaged precision, recall, and F1 scores of 0.87 each on the internal validation set. The combined localization-and-classification model achieved an accuracy of 0.921 (95% CI 0.816–1.000), a sensitivity of 0.882 (95% CI 0.722–1.000), a specificity of 0.952 (95% CI 0.833–1.000), and an AUC of 0.917 (95% CI 0.828–1.000) on an external validation set. The detailed construction of the training, validation, and test datasets is shown in Fig. 1. The performance of the combined model is shown in Fig. 2, and representative outputs of the two models are shown in Fig. 3a, b. The corresponding losses and confusion matrices are presented in Supplementary Fig. 1. Gradient-weighted class activation mapping (Grad-CAM) was used to make the combined model more transparent by producing visual explanations, as shown in Fig. 4.
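The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. As a minimal sketch with hypothetical counts (the per-cell counts of the external validation run are not published at this granularity), the arithmetic looks like this in Python:

```python
# Hypothetical confusion-matrix counts for a binary "malignant vs. not" read-out;
# these numbers are assumptions chosen only to illustrate the formulas.
tp, fn = 15, 2   # malignant eyes correctly / incorrectly flagged (assumed)
tn, fp = 20, 1   # non-malignant eyes correctly / incorrectly cleared (assumed)

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # a.k.a. recall, or true-positive rate
specificity = tn / (tn + fp)   # true-negative rate

print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```

The same three ratios generalize to the triple-class setting by computing them one-vs-rest per class and macro-averaging, which is how the 0.87 macro scores above are formed.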
Fig. 1: Schematic diagram of this study.
The development dataset was randomly divided into training dataset 1 and validation dataset 1 in an 8:2 ratio for developing the ocular localization model. Training dataset 1 was then divided into training dataset 2 and validation dataset 2 in a 9:1 ratio for developing the tumor classification model. The combined model was evaluated on validation dataset 1.
Fig. 2: Performance of deep learning algorithms.
The performance of the localization and classification combined model. a The receiver operating characteristic (ROC) curves produced by the deep learning algorithms on the external test set. b The precision-recall (PR) curves yielded by the deep learning algorithms on the external test set.
Fig. 3: The representative outputs of the two models.
Representative results concerning the ability of the developed deep learning-based system to automatically predict eyelid tumors. a Ocular localization. b Triple-class classification.
Fig. 4: Grad-CAM for the combined model visual explanation.
Grad-CAM was used to make our combined model more transparent by producing visual explanations. Benign tumors are shown on the left and malignant tumors on the right side of the diagram.
The “Intelligent Eyelid Tumor Screening” WeChat application was established based on the developed models (Fig. 5 and the Supplementary Movie 1). Users can easily complete the identification process by following the given instructions (Fig. 5a). The standard preparation guidelines for higher-quality image input are shown in Fig. 5b. Users can look back at the previous identification records for long-term follow-up work (Fig. 5c) and attain a basic understanding of common eyelid tumors (Fig. 5d). Users can also register online for further offline checkups (Fig. 5e). The inner account allows professionals to review and record medical data (Fig. 5f).
Fig. 5: The operation interface of the “Intelligent Eyelid Tumor Screening” application.
Our application exhibited a straightforward detection process, user-friendly interface, outcomes consisting of the probabilities of benign and malignant tumors, as well as guidance for tumor introduction and further medical treatment recommendations. a The screening process. b Standard preparation guidelines. c Historical records. d Popularization of the science related to common eyelid tumors. e Offline checkup registration process. f Inner account.
Discussion
We developed and validated a smartphone-based application that provides a holistic and quantitative technique for detecting and identifying eyelid tumors in patients. Our system achieved an AUC of 0.917 on external datasets and produced reliable and stable results under different settings when used by patients and professionals. We evaluated this system in real-world clinical settings. Intelligent Eyelid Tumor Screening captured photographs of patients who visited our clinic, then automatically detected the eyes and categorized them as normal, benign, or malignant, together with predicted probabilities. No further information, including the chief complaints, basic information, or tumor descriptions of the patients, was considered during tumor screening and identification.
Eyelid tumors are common but are often not given sufficient care because they do not impact vision; most patients seek medical attention for cosmetic reasons. Although benign tumors account for most eyelid tumors, malignant tumors have considerable potential for morbidity and mortality. BCC is the most common malignant eyelid tumor and is usually observed in elderly individuals with excessive sun exposure. Even though it has a low fatality rate, BCC can be associated with significant morbidity and costs14. SCC is reported to be a common malignancy of the ocular surface, particularly in areas with high ultraviolet light exposure and skin damage; it is frequently over-diagnosed by pathologists and histologically confused with benign entities15. SGC is a rare but aggressive neoplasm, and its five-year mortality rate can reach 30%16. The early detection and identification of malignant tumors can increase the probability of timely treatment, further improving patient prognoses.
In our study, 33.54% of the eyelid tumors were malignant. However, malignant tumors accounted for approximately 12% to 15% of all tumors identified in previous epidemiological investigations conducted in both eastern and western countries17,18. This difference may be due to the sample size employed in our study and to the fact that hard-to-treat patients were more likely to visit our hospital. As in previous studies, BCC, SCC and SGC were the top three malignant tumors in our dataset, with nevus, xanthelasma and seborrheic keratosis being the top three benign tumors.
Differentiating malignant and benign tumors with the naked eye can be challenging for junior ophthalmologists and nonspecialist physicians because of the relative rarity of each subtype, the overlapping clinical features between different subtypes and a lack of ophthalmologic training, leading to minimal specialized clinical experience.
AI systems have the advantages of high accuracy and efficiency when capturing information in ways that the human brain cannot. Several automatic and semiautomatic approaches have been developed to detect eyelid tumors19,20,21. Li et al.20 developed an AI system for distinguishing malignant eyelid tumors from benign tumors in multicenter clinics. They trained a tumor localization model with an average precision of 76.2%, meaning that approximately one-quarter of the masses were incorrectly located. Lee et al.21 developed two models for classifying hand-cropped images into two or three categories without an ocular localization model. Therefore, in our study, human ophthalmologists were assigned to precisely delineate the tumors during the development stage of the AI model so that our model could achieve better performance; this approach might be more appropriate for decision-making in clinical settings.
Lord et al.22 first proposed the novel use of smartphones in ophthalmology. The detailed use of smartphone-based applications in ophthalmology was described later by various researchers23,24,25, and such applications have more recently been employed in various clinical practices10,11,12. Previous studies utilized photographs to detect ocular and visual abnormalities12,26. In our study, we analyzed more than 1200 ocular photographs and accurately distinguished malignant eyelid tumors from benign tumors and normal eyes. Moreover, to utilize our application in various scenarios, we used different blur, brightness and smartphone platforms to evaluate the stability of our system.
This study has several limitations. First, we focused on only a single static image type; different lighting conditions, camera settings, and shooting scenarios, as well as the lack of three-dimensional vision, may have influenced the feature extraction process, even though we evaluated the stability of our system under different conditions. However, our system introduced suggested shooting distances and lighting conditions and allowed users to continuously import images until a qualified photo was acquired, thereby improving the success rate of the identification process. Second, the system provided results based only on the given images, without additional clinical information assisting the analysis. Ting et al.27 suggested that the inclusion of clinical information may improve the accuracy of model detection and identification. Third, our recruited dataset may not have fully represented the epidemiological population; even though we achieved effective identification in clinical applications, large-scale screening is still needed to validate our system in a real-world setting. In addition, premalignant lesions were not included. Fourth, the use of AI with imaging modalities raises several ethical concerns. AI algorithms require large volumes of patient data for training, testing, and validation. Our system collects facial information from users, which raises questions about data access and privacy protection; we aim to add a virtual mask for privacy protection in future research. AI systems can also inherit biases present in their training data. Moreover, with the use of AI in medical imaging identification tasks, determining responsibility for errors and misdiagnoses can be challenging: a false positive may cause unnecessary stress, anxiety, and concern about the diagnosis, whereas a false negative can result in missed opportunities for early treatment or intervention, potentially worsening the patient's condition.
Given the complexity inherent in distinguishing eyelid tumors, the concept of accurately identifying eyelid tumors is attractive. At present, our model provides outcomes consisting of the probabilities of benign and malignant tumors, as well as guidance for tumor introduction and further medical treatment recommendations. In future work, we could focus on expanding our approach to three-dimensional graphs and further extending the subtype identification capabilities of our system to improve our model and identify more subtypes.
This study introduced the first smartphone-based portable eyelid tumor detection application for WeChat: “Intelligent Eyelid Tumor Screening”. Smartphone-based applications constitute an emerging research area with respect to designing small-sized, low-power, high-quality and affordable systems that can perform eyelid tumor screening and automated detection. Based on the obtained results, eyelid tumors could be identified within 2 seconds. At this stage, the recognition sensitivity can reach 88% with a specificity of 95%, and the sensitivity can be further improved by continuously learning from uploaded data. This smartphone-based eyelid tumor detection application has the potential to be used by healthcare professionals, patients and caregivers for detecting and monitoring eyelid tumors, providing an alternative to frequent hospital visits and invasive biopsy procedures.
Methods
Image Acquisition
Our dataset included data from subjects who visited Beijing Tongren Hospital (Beijing, China) from January 2014 to December 2022 for eye examinations and ophthalmology consultations because of the discovery of eyelid tumors. Photos were taken via a digital camera (DSC-F828, Sony, Japan) and smartphones (Xiaomi, iPhone, and Huawei) upon their first visits, with their eyes open or closed, with or without everted eyelids to best expose the tumors. The photographs were taken in outpatient clinics and inpatient wards; hence, the lighting and background conditions of the images were not uniform, further enriching the diversity of our datasets. We included participants who underwent surgical resection and received histopathological diagnoses for the AI model training. The flow chart demonstrating the detailed data selection and model development process of our proposed screening system is shown in Fig. 1.
Ground truth and image processing
We resized all images to a resolution of 640×640 pixels before developing the proposed AI model. We assessed the quality of the images and filtered out unqualified images. The images were labeled via LabelMe (https://github.com/wkentaro/labelme) based on their histopathological diagnoses, which were considered the ground truths in this study. The tumor and lateral healthy eye regions were then cropped to train the deep learning model.
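The paper states only that images were resized to 640×640 pixels. YOLO-family detectors commonly use an aspect-ratio-preserving "letterbox" resize (scale, then pad to square); a minimal sketch of that geometry, under the assumption that such a scheme was used here, is:

```python
def letterbox_params(w, h, target=640):
    """Compute the scale and padding needed to fit a w x h image into a
    target x target canvas while preserving aspect ratio (letterbox scheme;
    the padding step is an assumption, since the paper states only a resize)."""
    scale = min(target / w, target / h)          # shrink to fit the long side
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x = (target - new_w) / 2                 # symmetric horizontal padding
    pad_y = (target - new_h) / 2                 # symmetric vertical padding
    return scale, new_w, new_h, pad_x, pad_y

print(letterbox_params(1920, 1080))
```

Keeping the scale and padding allows predicted box coordinates to be mapped back to the original photograph, which matters when overlaying results on the user's image.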
Model architecture and training
The YOLOv5 framework introduced by Jocher et al.28 was used for ocular localization. YOLOv5 is a convolutional neural network for anchor-based object detection, instance segmentation, and image classification. Anchors are preset reference boxes of various sizes and aspect ratios that reduce the difficulty of model training. Three anchor boxes were preset to match the statistical characteristics of the ground-truth boxes, and an automatic anchor check was applied; class-specific confidence scores were calculated, and repeated experiments were conducted to train the model without underfitting. The resized images were augmented during the training stage to improve generalization and reduce the possibility of overfitting; the augmentation methods for the localization model included HSV color space transformation, random flipping, rotation, and translation. We fine-tuned models pre-trained on the ImageNet public dataset. A batch size of 64 and 100 epochs were applied. The loss was computed as a combination of the class loss, objectness loss, and localization loss, followed by backward gradient optimization. The box coordinates were predicted directly from the activations of the last layer.
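Matching preset anchors to ground-truth boxes rests on a box-overlap measure. A minimal sketch of intersection over union (IoU), the standard overlap metric in anchor-based detectors, is shown below; this is a simplified stand-in, not YOLOv5's actual internal matching code:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) pixel coordinates. Anchor-based detectors use this
    overlap measure when assigning anchors to ground-truth boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))
```

The reported mAP of 0.95 is itself computed by thresholding predictions on IoU against the ground-truth boxes and averaging precision over recall levels.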
Based on the output of the ocular localization model, a mobile-size EfficientNet v2-B framework was used to train the triple-class classification model. EfficientNet29 employs a simple yet highly effective compound scaling method that balances network width, depth, and resolution. Random flipping and HSV color space transformation were used for data augmentation during classification model development. A batch size of 32 and 100 epochs were applied. Class weights were used to offset the imbalanced distribution of samples across classes.
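Class weights of this kind are typically derived from inverse class frequencies, so that the loss is not dominated by the majority class. A minimal sketch, using the overall dataset counts from Table 1 as stand-ins for the (unreported) per-split counts:

```python
from collections import Counter

# Illustrative label distribution mirroring the three classes; the real
# per-split counts are not reported, so these totals are whole-dataset figures.
labels = ["normal"] * 562 + ["benign"] * 427 + ["malignant"] * 206

counts = Counter(labels)
n, k = len(labels), len(counts)
# "Balanced" inverse-frequency weighting: each class weight is
# n_samples / (n_classes * n_samples_in_class), so rare classes weigh more.
weights = {c: n / (k * counts[c]) for c in counts}
print({c: round(w, 3) for c, w in weights.items()})
```

Under this scheme the minority malignant class receives roughly twice the weight of the majority normal class, which is the intended corrective effect.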
To ensure fair training and evaluation across subsets, the development dataset was randomly divided into training dataset 1 and validation dataset 1 in an 8:2 ratio for developing the ocular localization model. Training dataset 1 was then divided into training dataset 2 and validation dataset 2 in a 9:1 ratio for developing the tumor classification model. The combined model was evaluated on validation dataset 1. Images from the same patient were never split between the training and validation datasets.
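The patient-level constraint can be enforced by splitting on patient IDs rather than on individual images. A minimal sketch with toy data and a hypothetical `patient_level_split` helper (not the authors' code):

```python
import random

def patient_level_split(records, val_frac=0.2, seed=0):
    """Split image records into train/val by patient ID so that no patient's
    images land in both subsets (the grouping rule described above)."""
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_val = max(1, round(val_frac * len(patients)))
    val_ids = set(patients[:n_val])
    train = [r for r in records if r[0] not in val_ids]
    val = [r for r in records if r[0] in val_ids]
    return train, val

# Toy records: (patient_id, image_name), 10 patients with 3 images each.
records = [(p, f"img_{p}_{i}") for p in range(10) for i in range(3)]
train, val = patient_level_split(records)
assert not {p for p, _ in train} & {p for p, _ in val}  # no patient overlap
```

Splitting at the patient level prevents near-duplicate photographs of the same lesion from leaking between training and validation, which would otherwise inflate the validation metrics.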
Evaluation of deep learning model performance
The performance of the combined model of localization and classification was further evaluated on an independent external validation dataset. We used the accuracy, sensitivity, specificity, and receiver operating characteristic curve metrics to assess the performance of the model. The area under the curve (AUC) with a 95% confidence interval (CI) was calculated. Statistical analyses were performed via Python 3.12.1 (https://www.python.org/). An additional 38 eyelid tumor images (11 eyes with malignant tumors, 27 eyes with benign tumors, and 38 eyes without tumors) were prospectively collected from 38 patients as the external validation dataset to test the combined model.
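Confidence intervals for proportions such as accuracy are often obtained by percentile bootstrap over the per-eye outcomes; the paper does not state which CI method was used, so the following is one plausible sketch, with a hypothetical 35-of-38-correct outcome vector:

```python
import random

def bootstrap_ci(correct, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for accuracy, given a 0/1
    per-sample correctness vector. One common choice of CI method, not
    necessarily the one the authors used."""
    rng = random.Random(seed)
    n = len(correct)
    # Resample with replacement and recompute accuracy on each resample.
    stats = sorted(sum(rng.choices(correct, k=n)) / n for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

correct = [1] * 35 + [0] * 3          # hypothetical outcomes, 35/38 correct
lo, hi = bootstrap_ci(correct)
print(f"point={sum(correct) / len(correct):.3f} CI=({lo:.3f}, {hi:.3f})")
```

With only 38 external samples, such intervals are necessarily wide, which is consistent with the broad CIs reported above (e.g., 0.816–1.000 for accuracy).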
Model explanation
Gradient-weighted class activation mapping (Grad-CAM) was calculated for the combined model to visualize model attention. This serves as supporting evidence of model performance and provides insight into the model's decision mechanism.
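The Grad-CAM computation itself reduces to a gradient-weighted sum of convolutional activation maps. A minimal NumPy sketch of that arithmetic on toy tensors (not tied to the actual network):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM arithmetic: average the class-score gradients over the
    spatial axes to get per-channel weights, take the weighted sum of the
    activation maps, and keep only positive evidence via ReLU."""
    weights = gradients.mean(axis=(1, 2))               # (C,) channel weights
    cam = np.einsum("c,chw->hw", weights, activations)  # weighted combination
    cam = np.maximum(cam, 0)                            # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))    # toy activations: 8 channels, 7x7 feature map
grads = rng.random((8, 7, 7))   # toy gradients of the class score w.r.t. acts
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

In practice the activations and gradients come from the last convolutional layer of the classifier, and the resulting heatmap is upsampled and overlaid on the input photograph, as in Fig. 4.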
In this study, all procedures were conducted in accordance with the Declaration of Helsinki. The Ethics Committee of Capital Medical University Affiliated Beijing Tongren Hospital (TRECKY2018-056-GZ(2022)-07) approved the study. Written informed consent to publish details, images, or videos was obtained from each subject. Areas of images or videos were cropped or blurred to protect patient anonymity.
Data availability
The data in this study are not publicly available, but may be available from the corresponding author upon reasonable request.
Code availability
Python 3.12.1 scripts enabling the main steps of the analysis are not publicly available, but may be available from the corresponding author on reasonable request.
References
1. Yu, S. S., Zhao, Y., Zhao, H., Lin, J. Y. & Tang, X. A retrospective study of 2228 cases with eyelid tumors. Int. J. Ophthalmol. 11, 1835–1841 (2018).
2. Wang, L. et al. Clinicopathological analysis of 5146 eyelid tumours and tumour-like lesions in an eye centre in South China, 2000-2018: a retrospective cohort study. BMJ Open 11, e041854 (2021).
3. Sun, M. T., Huang, S., Huilgol, S. C. & Selva, D. Eyelid lesions in general practice. Aust. J. Gen. Pract. 48, 509–514 (2019).
4. Owen, J. L. et al. Sebaceous carcinoma: evidence-based clinical practice guidelines. Lancet Oncol. 20, e699–e714 (2019).
5. Hung, J. Y. et al. An outperforming artificial intelligence model to identify referable blepharoptosis for general practitioners. J. Pers. Med. 12, 283 (2022).
6. Yoo, T. K., Choi, J. Y. & Kim, H. K. A generative adversarial network approach to predicting postoperative appearance after orbital decompression surgery for thyroid eye disease. Comput. Biol. Med. 118, 103628 (2020).
7. Chen, H. C. et al. Smartphone-based artificial intelligence-assisted prediction for eyelid measurements: algorithm development and observational validation study. JMIR Mhealth Uhealth 9, e32444 (2021).
8. Hui, S. et al. Noninvasive identification of benign and malignant eyelid tumors using clinical images via deep learning system. J. Big Data 9, 84 (2022).
9. Dong, L. et al. Retinal photograph-based deep learning system for detection of hyperthyroidism: a multicenter, diagnostic study. J. Big Data 10, 134 (2023).
10. Li, F. et al. Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection. NPJ Digit. Med. 3, 123 (2020).
11. Gupta, S., Thakur, S. & Gupta, A. Optimized hybrid machine learning approach for smartphone based diabetic retinopathy detection. Multimed. Tools Appl. 81, 14475–14501 (2022).
12. Chen, W. et al. Early detection of visual impairment in young children using a smartphone-based deep learning system. Nat. Med. 29, 493–503 (2023).
13. Raber, F. P., Gerbutavicius, R., Wolf, A. & Kortüm, K. Smartphone-based data collection in ophthalmology. Klin. Monbl. Augenheilkd. 237, 1420–1428 (2020).
14. Fania, L. et al. Basal cell carcinoma: from pathophysiology to novel therapeutic approaches. Biomedicines 8, 449 (2020).
15. Reifler, D. M. & Hornblass, A. Squamous cell carcinoma of the eyelid. Surv. Ophthalmol. 30, 349–365 (1986).
16. Adamopoulos, A., Chatzopoulos, E. G., Anastassopoulos, G. & Detorakis, E. Eyelid basal cell carcinoma classification using shallow and deep learning artificial neural networks. Evol. Syst. (Berl.) 12, 583–590 (2021).
17. Sendul, S. Y. et al. Clinical and pathological diagnosis and comparison of benign and malignant eyelid tumors. J. Fr. Ophtalmol. 44, 537–543 (2021).
18. Xu, X. L. et al. Eyelid neoplasms in the Beijing Tongren Eye Centre between 1997 and 2006. Ophthalmic Surg. Lasers Imaging 39, 367–372 (2008).
19. Wang, L. et al. Automated identification of malignancy in whole-slide pathological images: identification of eyelid malignant melanoma in gigapixel pathological slides using deep learning. Br. J. Ophthalmol. 104, 318–323 (2020).
20. Li, Z. et al. Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit. Med. 5, 23 (2022).
21. Lee, M. J. et al. Differentiating malignant and benign eyelid lesions using deep learning. Sci. Rep. 13, 4103 (2023).
22. Lord, R. K., Shah, V. A., San Filippo, A. N. & Krishna, R. Novel uses of smartphones in ophthalmology. Ophthalmology 117, 1274–1274.e3 (2010).
23. Hogarty, D. T., Hogarty, J. P. & Hewitt, A. W. Smartphone use in ophthalmology: what is their place in clinical practice? Surv. Ophthalmol. 65, 250–262 (2020).
24. Chhablani, J., Kaja, S. & Shah, V. A. Smartphones in ophthalmology. Indian J. Ophthalmol. 60, 127–131 (2012).
25. Zvornicanin, E., Zvornicanin, J. & Hadziefendic, B. The use of smart phones in ophthalmology. Acta Inform. Med. 22, 206–209 (2014).
26. Munson, M. C. et al. Autonomous early detection of eye disease in childhood photographs. Sci. Adv. 5, eaax6363 (2019).
27. Ting, D. S. J., Ang, M., Mehta, J. S. & Ting, D. S. W. Artificial intelligence-assisted telemedicine platform for cataract screening and management: a potential model of care for global eye health. Br. J. Ophthalmol. 103, 1537–1538 (2019).
28. Jocher, G. et al. ultralytics/yolov5: v7.0. Zenodo (2022).
29. Tan, M. & Le, Q. V. EfficientNet: rethinking model scaling for convolutional neural networks. In Proc. 36th International Conference on Machine Learning (2019).
Acknowledgements
This study was supported by the National Natural Science Foundation of China (82071005); the Science and Technology Innovation Program of Hunan Province (2024RC3249); the Sanming Project of Medicine in Shenzhen (No.SZSM202311018). The authors thank Springer Nature Author Services for the English language editing services.
Author information
Authors and Affiliations
Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
Shiqi Hui, Li Dong & Dongmei Li
Institute of Digital Ophthalmology and Visual Science, Changsha Aier Eye Hospital, Changsha, Hunan, China
Jing Xie & Weiwei Dai
Mingsii Co., Ltd, Beijing, China
Li Wei
Aier Academy of Ophthalmology, Central South University, Changsha, Hunan, China
Weiwei Dai & Dongmei Li
Authors
1. Shiqi Hui
2. Jing Xie
3. Li Dong
4. Li Wei
5. Weiwei Dai
6. Dongmei Li
Contributions
S.H., J.X., W.D., and D.L. contributed to overall study design and manuscript preparation. J.X., W.D., and D.L. contributed to technical and material support. S.H. and J.X. contributed to data analysis. S.H., J.X. W.D., and D.L. contributed to manuscript writing, preparation and submission. S.H., J.X., L.D., L.W., W.D., and D.L. contributed to manuscript review and revision and approved the final manuscript.
Corresponding authors
Correspondence to Weiwei Dai or Dongmei Li.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Hui, S., Xie, J., Dong, L. et al. Deep learning-based mobile application for efficient eyelid tumor recognition in clinical images. npj Digit. Med. 8, 185 (2025). https://doi.org/10.1038/s41746-025-01539-9
Received: 20 November 2024
Accepted: 24 February 2025
Published: 30 March 2025
DOI: https://doi.org/10.1038/s41746-025-01539-9