A machine-learning-based approach to predict early hallmarks of progressive hearing loss.

Journal: Hearing Research
Published Date:

Abstract

Machine learning (ML) techniques are increasingly being used to improve disease diagnosis and treatment. However, the application of these computational approaches to the early diagnosis of age-related hearing loss (ARHL), the most common sensory deficit in adults, remains underexplored. Here, we demonstrate the potential of ML for identifying early signs of ARHL in adult mice. We used auditory brainstem responses (ABRs), which are non-invasive electrophysiological recordings that can be performed in both mice and humans, as a readout of hearing function. We recorded ABRs from C57BL/6N mice (6N), which develop early-onset ARHL due to a hypomorphic allele of Cadherin23 (Cdh23), and from co-isogenic C57BL/6NTac mice (6N-Repaired), which do not harbour the Cdh23 allele and maintain good hearing until later in life. We evaluated several ML classifiers across different metrics for their ability to distinguish between the two mouse strains based on ABRs. Remarkably, the models accurately identified mice carrying the Cdh23 allele even in the absence of obvious signs of hearing loss at 1 month of age, surpassing the classification accuracy of human experts. Feature importance analysis using Shapley values indicated that subtle differences in ABR wave 1 were critical for distinguishing between the two genotypes. This superior performance underscores the potential of ML approaches in detecting subtle phenotypic differences that may elude manual classification. Additionally, we successfully trained regression models capable of predicting ARHL progression rate at older ages from ABRs recorded in younger mice. We propose that ML approaches are suitable for the early diagnosis of ARHL and could potentially improve the success of future treatments in humans by predicting the progression of hearing dysfunction.
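To make the workflow described above concrete, the following is a minimal illustrative sketch, not the authors' actual pipeline: it compares a few candidate scikit-learn classifiers by cross-validated accuracy on synthetic stand-in "ABR-like" feature vectors, then ranks feature importance with Shapley values via the shap library. The synthetic data, feature names, model choices, and hyperparameters are assumptions for illustration only; the paper's real features are derived from recorded ABR waveforms.

```python
# Illustrative sketch only: classifier comparison plus SHAP feature importance
# on synthetic data standing in for per-mouse ABR features (e.g. wave
# amplitudes and latencies). Not the published models or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
import shap

# Synthetic binary-labelled dataset (two "genotypes"), 10 hypothetical features.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)
feature_names = [f"abr_feature_{i}" for i in range(X.shape[1])]

# Compare several candidate classifiers with stratified cross-validation.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")

# Shapley-value feature importance for one fitted tree-based model.
best = models["gradient_boosting"].fit(X, y)
explainer = shap.TreeExplainer(best)
shap_values = explainer.shap_values(X)          # shape: (n_samples, n_features)
mean_abs_shap = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(mean_abs_shap)[::-1][:5]:
    print(f"{feature_names[idx]}: mean |SHAP| = {mean_abs_shap[idx]:.3f}")
```

The regression analysis mentioned in the abstract (predicting later ARHL progression rate from ABRs recorded at younger ages) would follow the same pattern, with a regressor such as GradientBoostingRegressor and a continuous progression-rate target in place of the binary genotype label; again, this is an assumed analogue rather than the published implementation.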

Authors

  • Federico Ceriani
    School of Biosciences, University of Sheffield, Sheffield S10 2TN, UK; Centre for Machine Intelligence, University of Sheffield, Sheffield S10 2TN, UK. Electronic address: f.ceriani@sheffield.ac.uk.
  • Joshua Giles
    Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 4DT, UK; Centre for Machine Intelligence, University of Sheffield, Sheffield S10 2TN, UK.
  • Neil J Ingham
    Wellcome Sanger Institute, Genome Campus, Hinxton, Cambridge CB10 1SA, UK; Wolfson Sensory, Pain and Regeneration Centre, Guy's Campus, King's College London, London SE1 1UL, UK.
  • Jing-Yi Jeng
    School of Biosciences, University of Sheffield, Sheffield S10 2TN, UK.
  • Morag A Lewis
    Wellcome Sanger Institute, Genome Campus, Hinxton, Cambridge CB10 1SA, UK; Wolfson Sensory, Pain and Regeneration Centre, Guy's Campus, King's College London, London SE1 1UL, UK.
  • Karen P Steel
    Wellcome Sanger Institute, Genome Campus, Hinxton, Cambridge CB10 1SA, UK; Wolfson Sensory, Pain and Regeneration Centre, Guy's Campus, King's College London, London SE1 1UL, UK.
  • Mahnaz Arvaneh
Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 4DT, UK.
  • Walter Marcotti
    School of Biosciences, University of Sheffield, Sheffield S10 2TN, UK; Neuroscience Institute, University of Sheffield, Sheffield S10 2TN, UK.