Impact of Canny edge detection preprocessing on performance of machine learning models for Parkinson's disease classification.
Journal: Scientific Reports
PMID: 40355628
Abstract
This study investigates the classification of individuals as healthy or at risk of Parkinson's disease using machine learning (ML) models, focusing on the impact of dataset size and preprocessing techniques on model performance. Four datasets are created from an original dataset: D1 (the normal dataset), D2 (D1 subjected to Canny edge detection and Hessian filtering), D3 (augmented D1), and D4 (augmented D2). We evaluate a range of ML models, namely Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), Naive Bayes (NB), Support Vector Machine (SVM), and AdaBoost (AdB), on these datasets, analyzing prediction accuracy, model size, and prediction latency. The results show that while larger datasets increase model memory footprints and prediction latencies, the Canny edge detection preprocessing supplemented by Hessian filtering (used in D2 and D4) degrades the performance of most models. In our experiments, Random Forest (RF) maintains a stable memory footprint of 61 KB across all datasets, while models such as KNN and SVM show significant increases in memory usage, from 5.7-7 KB on D1 to 102-220 KB on D4, and similar increases in prediction time. Logistic Regression, Decision Tree, and Naive Bayes show stable memory footprints and fast prediction times across all datasets. XGBoost's prediction time increases from 180-200 ms on D1 to 700-3000 ms on D4.
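The edge-plus-ridge preprocessing described above can be sketched in pure NumPy. This is a simplified illustration, not the authors' pipeline: full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, and the input image, threshold, and combination step here are all assumptions.

```python
import numpy as np

def simple_edges(img, thresh=0.5):
    # Simplified stand-in for Canny: threshold the gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def hessian_ridge(img):
    # Basic Hessian ridge measure: per-pixel eigenvalue of the 2x2 Hessian
    # with the largest magnitude, computed from second finite differences.
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return np.where(np.abs(l1) >= np.abs(l2), l1, l2)

rng = np.random.default_rng(0)
drawing = rng.random((64, 64))           # placeholder for an input image
edge_map = simple_edges(drawing)         # boolean edge mask
ridge_map = hessian_ridge(drawing)       # float ridge response
preprocessed = edge_map.astype(float) * ridge_map  # one plausible combination
```

The combined map would then be flattened or summarized into features before being fed to the classifiers.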
Statistical analysis using the Mann-Whitney U test with 100 prediction-accuracy observations per model (98 degrees of freedom) reveals significant differences in performance between models trained on D1 and D2 (p-values < 1e-34 for most models), while the effect sizes, measured by estimating Cliff's delta values (approaching ±1), indicate large shifts in performance, especially for SVM and XGBoost. These findings highlight the importance of selecting lightweight models such as LR and DT for deployment in resource-constrained healthcare applications, as models like KNN, SVM, and XGBoost show significant increases in resource demands with larger datasets, particularly when Canny preprocessing is applied.
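The statistical comparison above can be illustrated with SciPy's Mann-Whitney U test and a hand-rolled Cliff's delta. The accuracy samples below are synthetic placeholders, not the paper's data, and the effect-size magnitudes are assumptions chosen only to show the mechanics.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(x, y):
    # Cliff's delta: P(x > y) - P(x < y) over all pairs.
    # |delta| near 1 means the two distributions barely overlap.
    x, y = np.asarray(x), np.asarray(y)
    diff = x[:, None] - y[None, :]
    return (np.sum(diff > 0) - np.sum(diff < 0)) / (x.size * y.size)

rng = np.random.default_rng(1)
acc_d1 = rng.normal(0.92, 0.01, 100)  # hypothetical accuracies on D1
acc_d2 = rng.normal(0.80, 0.02, 100)  # hypothetical accuracies on D2

u_stat, p_value = mannwhitneyu(acc_d1, acc_d2, alternative="two-sided")
delta = cliffs_delta(acc_d1, acc_d2)
```

With 100 observations per group and near-complete separation, the U statistic saturates and the p-value becomes vanishingly small, while Cliff's delta approaches 1, mirroring the pattern reported in the abstract.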