Volume 10, Issue 2 (6-2025) | J Res Dent Maxillofac Sci 2025, 10(2): 111-124

Ethics code: IR.SBMU.DRC.REC.1402.102

Vazirizadeh Y, Mirmohamadsadeghi H, Behnaz M, Kavousinejad S. Development and Evaluation of a Convolutional Neural Network for Automated Detection of Lip Separation on Profile and Frontal Photographs. J Res Dent Maxillofac Sci 2025; 10 (2) :111-124
URL: http://jrdms.dentaliau.ac.ir/article-1-751-en.html
1- Department of Orthodontics, Faculty of Dentistry, Shahed University, Tehran, Iran; Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
2- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
3- Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran. Email: dr.shahab.k93@gmail.com
Abstract:
Background and Aim: Lip incompetence is defined as a habitual gap of more than 3-4 mm between the lips at rest, which can contribute to oral health problems and malocclusion. This study aimed to develop and evaluate a deep learning-based model for automatic detection of lip separation on profile and frontal orthodontic photographs.
Materials and Methods: This retrospective observational study employed a balanced dataset of 800 clinical images, comprising 400 cases of lip incompetence and 400 cases of lip competence. An auto-cropping technique based on averaged manual cropping coordinates was used to isolate the lip region. The cropped images were resized to 70×70 pixels and normalized before being fed into a novel attention-based residual connection convolutional neural network (ARN-CNN). The model incorporated both residual connections and attention modules to enhance feature learning and training stability. Data augmentation (e.g., rotation and scaling) was applied to improve generalizability. Training was conducted using 5-fold cross-validation, with an external test set used to evaluate performance and assess overfitting. Accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve (ROC-AUC), and a confusion matrix were used for performance evaluation.
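As a rough illustration of the pipeline summarized above, the sketch below uses TensorFlow/Keras. The crop-box format, layer widths, and the squeeze-and-excitation-style channel attention are assumptions made for illustration only; the exact ARN-CNN architecture, augmentation parameters, and training settings are described in the full text, not in this abstract.

# Minimal sketch under the assumptions stated above; not the authors' exact ARN-CNN.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 70  # cropped lip patches are resized to 70x70 pixels

def preprocess(image: np.ndarray, crop_box: tuple) -> np.ndarray:
    """Crop the lip region with a fixed (averaged) box, resize, and normalize to [0, 1]."""
    top, bottom, left, right = crop_box                       # assumed box format
    lip = image[top:bottom, left:right]
    lip = tf.image.resize(lip, (IMG_SIZE, IMG_SIZE)).numpy()
    return lip / 255.0

def attention_residual_block(x, filters):
    """Residual convolution block gated by a simple channel-attention module."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)   # match channel count
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])                            # residual connection
    y = layers.Activation("relu")(y)
    w = layers.GlobalAveragePooling2D()(y)                     # global context
    w = layers.Dense(filters // 4, activation="relu")(w)
    w = layers.Dense(filters, activation="sigmoid")(w)         # per-channel weights
    w = layers.Reshape((1, 1, filters))(w)
    return layers.Multiply()([y, w])                           # attention gating

def build_model():
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    x = attention_residual_block(inputs, 32)
    x = layers.MaxPooling2D()(x)
    x = attention_residual_block(x, 64)
    x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)         # competent vs. incompetent
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

In a training script of this kind, augmented images (e.g., small rotations and rescalings) would be split with 5-fold cross-validation (for example, scikit-learn's KFold), and the held-out external test set would then be scored with accuracy, precision, recall, F1, ROC-AUC, and a confusion matrix.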
Results: The ARN-CNN achieved 95% accuracy on the test set. For the competent class, precision was 0.97, recall was 0.94, and the F1 score was 0.96. These values were 0.94, 0.96, and 0.95, respectively, for the incompetent class, with an AUC of 0.98.
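For reference, the per-class values above follow the standard definitions, assuming TP, FP, and FN denote true positives, false positives, and false negatives for the class in question; F1 is the harmonic mean of precision and recall:

    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    F1        = 2 × Precision × Recall / (Precision + Recall)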
Conclusion: The ARN-CNN model effectively identified lip incompetence, highlighting the potential of deep learning to support orthodontic diagnosis through image-based analysis.
Full-Text [PDF 1003 kb]
Type of Study: Original article | Subject: Orthodontics

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2025 CC BY-NC 4.0 | Journal of Research in Dental and Maxillofacial Sciences
