Learning feature dependencies for precise tumor region detection and segmentation in optical coherence tomography images

Nagarajan, Anandh and Megala, T. and Poongodai, A. and Udayasankaran, P. and Govindharaj, I. and Shobana, R. (2025) Learning feature dependencies for precise tumor region detection and segmentation in optical coherence tomography images. INTERNATIONAL OPHTHALMOLOGY, 46 (1). ISSN 0165-5701

Full text not available from this repository.

Abstract

Purpose: Accurate segmentation of tumor-infected regions in retinal Optical Coherence Tomography (OCT) images is critical for early diagnosis and clinical decision-making. However, conventional deep learning and transformer-based models often struggle to delineate overlapping and inter-dependent pixel features, reducing segmentation precision. This study proposes a novel Dependent Inter-Feature Segmentation Method (DIFSM) to improve the localization and segmentation of retinal tumor regions in OCT images.

Methods: The proposed DIFSM framework integrates advanced image preprocessing, inter-feature dependency analysis, and a Vision Transformer (ViT) architecture to distinguish differentiable from non-differentiable features. Overlapping pixel regions indicative of tumor-affected areas are identified through inter-feature intensity and gradient analysis. The Vision Transformer is trained on matched and unmatched inter-feature representations to enhance contextual learning and resolve feature ambiguities. Experiments were conducted on the OCTID dataset of high-resolution retinal OCT images, and performance was evaluated using the Dice coefficient, Intersection over Union (IoU), precision, sensitivity, specificity, and Mean Square Matching Error (MSME). Comparative analysis was performed against state-of-the-art segmentation models.

Results: The proposed DIFSM model achieved a Dice coefficient of 96.2% and an IoU of 94.8%, demonstrating excellent spatial overlap with expert-validated tumor regions. Precision, sensitivity, and specificity reached 96.8%, 96.6%, and 96.7%, respectively, while MSME was reduced to 6.11%. Compared with existing methods, DIFSM improved segmentation accuracy by 14.39% and precision by 14.11%, and reduced MSME by 13.5%. The model consistently outperformed benchmark approaches in detecting macular hole and central serous retinopathy-associated tumor regions while maintaining robustness to noise and structural variability.

Conclusion: The proposed DIFSM framework addresses the limitations of existing OCT segmentation methods by explicitly modeling inter-feature dependencies and resolving overlapping pixel ambiguities with a Vision Transformer. The significant improvements in segmentation accuracy and error reduction highlight its potential as a reliable, clinically applicable tool for automated retinal tumor detection in ophthalmic imaging. DIFSM offers a promising direction for enhancing OCT-based diagnostic systems and supporting ophthalmologists in early disease identification and treatment planning.
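The overlap metrics reported above (Dice coefficient and IoU) have standard definitions over binary masks: Dice = 2|A ∩ B| / (|A| + |B|) and IoU = |A ∩ B| / |A ∪ B|. The following is a minimal illustrative sketch of those two formulas in pure Python, not the authors' evaluation code; the mask values and shapes are invented for the example.

```python
def dice_coefficient(pred, target):
    # Dice = 2*|A ∩ B| / (|A| + |B|) over flattened binary masks (0/1 ints).
    inter = sum(p and t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, target):
    # IoU = |A ∩ B| / |A ∪ B| over flattened binary masks (0/1 ints).
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

# Toy 2x2 example: predicted mask overlaps ground truth in 3 of 4 positive pixels.
pred = [1, 1, 1, 1]   # 4 predicted tumor pixels
gt   = [1, 1, 1, 0]   # 3 ground-truth tumor pixels
print(dice_coefficient(pred, gt))  # 2*3 / (4+3) ≈ 0.857
print(iou(pred, gt))               # 3 / 4 = 0.75
```

A Dice of 96.2% with IoU of 94.8%, as reported for DIFSM, is consistent with these definitions, since Dice is always at least as large as IoU for the same pair of masks.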

Item Type: Article
Uncontrolled Keywords: Optical coherence tomography, Retinal tumor detection, Dependent inter-feature segmentation method, Vision transformer, Overlapping pixel segmentation, Automated clinical diagnosis
Subjects: Medicine > Ophthalmology
Divisions: Engineering and Technology > Aarupadai Veedu Institute of Technology, Chennai, India > Computer Science and Engineering
Depositing User: Unnamed user with email techsupport@mosys.org
Last Modified: 06 Feb 2026 07:15
URI: https://ir.vmrfdu.edu.in/id/eprint/7374
