[Quality indicators artificial intelligence].

Journal: Der Nervenarzt
PMID:

Abstract

The ability of some artificial intelligence (AI) systems to evolve autonomously, together with the sometimes very limited means of comprehending their decision-making processes, poses new challenges for our legal system. At the European level this has prompted reform efforts, among them the proposal for a European AI Regulation, which promises to close regulatory gaps in existing product safety law through cross-sectoral, AI-specific safety requirements. A prerequisite, however, is that the EU legislator not only avoids duplications of and contradictions with existing safety requirements but also refrains from imposing exaggerated and unattainable demands. If this is taken into account, the new safety requirements could also be used to specify the undefined standard of care in liability law. Nevertheless, challenges concerning provability remain unresolved and risk rendering the aggrieved party's pursuit of legal redress ineffective. It remains to be seen whether the EU legislator will address this need for reform through the Commission's recently proposed reform of product liability law.

Authors

  • Annegret Lamadé
    Philipps-Universität Marburg, Marburg, Germany. annegretlamade@gmail.com.
  • Dustin Beekmann
    Philipps-Universität Marburg, Marburg, Germany.
  • Simon Eickhoff
    Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany.
  • Christian Grefkes
    Department of Neurology, University of Cologne, Cologne, Germany.
  • Caroline Tscherpel
    Center of Neurology and Neurosurgery, University Hospital Frankfurt, Frankfurt, Germany.
  • Uta Meyding-Lamadé
    Department of Neurology, Nordwestkrankenhaus, Frankfurt am Main, Germany.
  • Burc Bassa
    Department of Neurology, Nordwestkrankenhaus, Frankfurt am Main, Germany.