Large-scale foundation models and generative AI for BigData neuroscience.

Journal: Neuroscience Research
Published Date:

Abstract

Recent advances in machine learning have led to revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs), trained on BigData, have recently achieved human-like performance on a wide range of tasks. With the help of self-supervised learning (SSL) and transfer learning, these models may reshape the landscape of neuroscience research and make a significant impact in the future. Here we present a mini-review of recent advances in foundation models and generative AI models and their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions, and we discuss the accompanying challenges and opportunities.

Authors

  • Ran Wang
    Department of Psychiatry, The First Hospital of Hebei Medical University, Shijiazhuang, Hebei, China.
  • Zhe Sage Chen
    Department of Psychiatry, New York University School of Medicine, New York, NY 10016, United States of America.