A novel obfuscation method based on majority logic for preventing unauthorized access to binary deep neural networks.
Journal: Scientific Reports
Published Date: Jul 8, 2025
Abstract
The rapid expansion of deep learning applications has made the deep neural network (DNN) model a valuable asset that must be safeguarded from unauthorized access. This study proposes an innovative key-based algorithm-hardware co-design methodology to protect DNN models against such access. The proposed approach sharply reduces model accuracy when an incorrect key is applied, preventing unauthorized users from exploiting the design. The importance of binary neural networks (BNNs) in hardware implementations of state-of-the-art DNN models led us to develop the methodology for BNNs, although the technique can be applied broadly to other neural network accelerator designs. Across different BNN architectures and standard datasets, the proposed protection achieves greater efficiency than comparable solutions. We validate the proposed hardware design with post-layout simulations in the Cadence Virtuoso tool using the well-established TSMC 40 nm CMOS technology. The approach yields 43%, 79%, and 71% reductions in area, average power, and weight-modification energy per filter, respectively, in the neural network structures. Additionally, the security of the key circuit is analyzed and evaluated against Boolean-satisfiability-based attacks, structural attacks, reverse engineering, and power-based side-channel attacks.
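The paper's actual mechanism is a majority-logic-based hardware key circuit, whose details are not given in the abstract. As a rough software illustration of the key-dependent behavior described above, the sketch below stores a BNN layer's binary weights in masked form, so that only the correct key recovers the model while a wrong key leaves random sign flips and collapses accuracy to chance. The XOR-style sign-flip masking, the function names, and the toy binarized layer are all illustrative assumptions, not the authors' design.

# Minimal conceptual sketch of key-based BNN weight obfuscation.
# Assumptions: weights in {-1, +1}, per-weight key bits, XOR-style masking.

import numpy as np

rng = np.random.default_rng(0)

def obfuscate(weights: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Flip the sign of each binary weight where the key bit is 1."""
    return np.where(key == 1, -weights, weights)

def deobfuscate(stored: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Applying the same sign-flip mask twice is the identity."""
    return np.where(key == 1, -stored, stored)

# Toy binarized layer: sign(x @ W) with weights in {-1, +1}.
n_in, n_out = 256, 32
W = rng.choice([-1, 1], size=(n_in, n_out))
secret_key = rng.integers(0, 2, size=(n_in, n_out))

stored = obfuscate(W, secret_key)           # what is held on-chip

x = rng.choice([-1, 1], size=(100, n_in))   # random binarized inputs
ref = np.sign(x @ W)                        # reference layer outputs

good = np.sign(x @ deobfuscate(stored, secret_key))
wrong_key = rng.integers(0, 2, size=(n_in, n_out))
bad = np.sign(x @ deobfuscate(stored, wrong_key))

print("output match with correct key:", np.mean(good == ref))  # 1.0
print("output match with wrong key:  ", np.mean(bad == ref))   # ~0.5, chance level

A wrong key flips roughly half the weight signs, so the layer's outputs become essentially uncorrelated with the reference; this is the accuracy-collapse property the abstract claims, here demonstrated on a single toy layer rather than in hardware.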