Disturbance-Aware On-Chip Training with Mitigation Schemes for Massively Parallel Computing in Analog Deep Learning Accelerator.
Journal:
Advanced Science (Weinheim, Baden-Württemberg, Germany)
Published Date:
May 20, 2025
Abstract
On-chip training in analog in-memory computing (AIMC) holds great promise for reducing data latency and enabling user-specific learning. However, analog synaptic devices face significant challenges, particularly during parallel weight updates in crossbar arrays, where non-uniform programming and disturbances often arise. Despite their importance, the disturbances that occur during training are difficult to quantify without a clear mechanism, and as a result, their impact on training performance remains underexplored. This work precisely identifies and quantifies the disturbance effects in 6T1C synaptic devices based on oxide semiconductors and capacitors, whose endurance and variation have been validated but which suffer worsening disturbance as the devices are scaled down. By clarifying the disturbance mechanism, three simple operational schemes are proposed to mitigate these effects, and their efficacy is validated through device array measurements. Furthermore, to evaluate learning feasibility in large-scale arrays, real-time disturbance-aware training simulations are conducted by mapping synaptic arrays onto convolutional neural networks for the CIFAR-10 dataset. Software-equivalent accuracy is achieved even under intensified disturbance, using a cell capacitor of 50 fF, comparable to dynamic random-access memory. Combined with the inherent advantages in endurance and variation, this approach offers a practical route to hardware-based deep learning built on the 6T1C synaptic array.
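The abstract does not spell out the disturbance model or pulse scheme used in the simulations. The sketch below is a minimal Python/NumPy illustration, under stated assumptions, of how a disturbance-aware parallel weight update on a crossbar tile might be modeled: the function name `disturbed_outer_product_update`, the `disturb_rate` parameter, and the rule that half-selected cells drift toward a reference level are all hypothetical choices for illustration, not the paper's actual method.

```python
import numpy as np

def disturbed_outer_product_update(W, x, err, lr=0.01,
                                   disturb_rate=1e-3, w_ref=0.0):
    """Illustrative (assumed) model of a parallel crossbar weight update.

    W            : (rows, cols) analog weight array (e.g., stored as charge).
    x, err       : input activations and backpropagated errors for this tile.
    lr           : learning rate scaling the intended update.
    disturb_rate : assumed fraction by which half-selected cells relax
                   toward w_ref during each update cycle.
    """
    intended = lr * np.outer(err, x)          # ideal parallel (outer-product) update
    W += intended

    # Assumed disturbance rule: cells on a pulsed row or column that receive
    # no intended update drift slightly toward the reference level.
    half_selected = (intended == 0) & ((np.abs(err)[:, None] > 0) |
                                       (np.abs(x)[None, :] > 0))
    W[half_selected] += disturb_rate * (w_ref - W[half_selected])
    return W

# Example: a 4x3 weight tile updated with partially zero activations/errors,
# so some cells are half-selected and experience only the disturbance drift.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(4, 3))
x = np.array([1.0, 0.0, 0.5])
err = np.array([0.2, 0.0, -0.1, 0.0])
W = disturbed_outer_product_update(W, x, err)
```

In a full training simulation of the kind the abstract describes, a rule like this would be applied tile by tile to the weights of a convolutional network during each update step, so that accuracy on CIFAR-10 can be evaluated as the assumed disturbance strength is varied.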