Ethical and security challenges in AI for forensic genetics: From bias to adversarial attacks.
Journal:
Forensic Science International: Genetics
PMID:
39874746
Abstract
Forensic scientists play a crucial role in assigning probabilities to evidence based on competing hypotheses, which is fundamental in legal contexts where propositions are usually presented by the prosecution and defense. The likelihood ratio (LR) is a well-established metric for quantifying the statistical weight of the evidence, facilitating the comparison of probabilities under these hypotheses. Developing accurate LR models is inherently complex, as it relies on cumulative scientific knowledge. Ensuring transparency and rigor in these models is essential for building trust and fostering broader adoption. This is especially true in forensic genetics, where LRs are widely applied. Recently, the integration of Artificial Intelligence (AI), especially deep learning and machine learning, has introduced novel methods for predicting physical traits, ancestry, and age. However, unlike traditional approaches, many of these AI-driven methods function as "black boxes", raising concerns within the forensic community about potential biases, lack of accountability, adversarial attacks, and other phenomena that could lead to erroneous outcomes. In this study, we use simulated scenarios as a proof-of-concept to illustrate two common applications of AI methods: (i) prediction of biogeographical ancestry and (ii) kinship inference. We critically examine cases where AI models can mislead forensic interpretation, which poses ethical and security challenges. We emphasize the need for rigorous evaluation and ethical oversight in the application of these methods.
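To make the LR concept concrete, the following is a minimal sketch (not taken from the paper) of an LR computation for a single-locus DNA match; the allele frequencies and the Hardy-Weinberg assumption are illustrative choices, not values from the study.

```python
# Hypothetical likelihood-ratio sketch for a single-locus heterozygous match.
# Assumes Hardy-Weinberg equilibrium; allele frequencies are made-up values.

def lr_single_locus_match(p: float, q: float) -> float:
    """LR for a matching heterozygous genotype with allele frequencies p and q.

    Hp (prosecution): the suspect is the source of the trace -> P(E | Hp) = 1
    Hd (defense): an unrelated individual is the source      -> P(E | Hd) = 2*p*q
    """
    prob_given_hp = 1.0          # genotypes match exactly under Hp
    prob_given_hd = 2 * p * q    # random-match probability under Hd
    return prob_given_hp / prob_given_hd

# Example: alleles with hypothetical population frequencies 0.1 and 0.05
print(round(lr_single_locus_match(0.1, 0.05), 1))  # prints 100.0
```

An LR of 100 means the evidence is 100 times more probable under the prosecution hypothesis than under the defense hypothesis; real casework multiplies such per-locus ratios across many loci and must also handle population substructure, which simple sketches like this one ignore.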