Neural transition system abstraction for neural network dynamical system models and its application to Computational Tree Logic verification.
Journal:
Neural Networks: the official journal of the International Neural Network Society
PMID:
39999531
Abstract
This paper proposes an explainable abstraction-based verification method that prioritizes user interaction and enhances interpretability. By partitioning the system's state space through a data-driven process, we abstract the dynamics into words consisting of state labels. Given a trained neural network model, a set-valued reachability analysis method is introduced to estimate the transition relationships between the subsystems. We construct the neural transition system abstraction from the neural network model and the estimated relationships between partitions. The abstracted model can then be checked against Computational Tree Logic (CTL) specifications, enabling formal verification of the system's behavior. This approach substantially improves the interpretability of data-driven models and the ability to validate them against user-specified properties. Finally, abstraction examples of the Maglev model and the handwritten model are given to illustrate the proposed verification framework, demonstrating its advantages in enhancing model interpretability and in verifying user-specified properties expressed in CTL.
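To make the pipeline described in the abstract concrete, the sketch below walks through its three steps on a toy system: partitioning the state space, estimating transitions between partitions with a set-valued (here, sampling-based over-approximation) reachability step, and checking a CTL formula on the resulting transition system. This is a minimal illustration, not the authors' implementation: the dynamics `f` is a hypothetical stand-in for a trained neural network, the uniform grid replaces the paper's data-driven partitioning, and the reachability estimate is a simple Monte Carlo approximation rather than the paper's set-valued analysis.

```python
# Minimal sketch of abstraction-based CTL verification for a dynamical
# system model (illustrative assumptions noted inline).

import itertools
import numpy as np

# Stand-in for a trained neural network model x_{k+1} = f(x_k).
# (Hypothetical dynamics; the paper would use a trained NN such as Maglev.)
def f(x):
    return np.array([0.9 * x[0] + 0.1 * x[1],
                     -0.1 * x[0] + 0.9 * x[1]])

# Step 1: partition the state space [-1, 1]^2 into a uniform grid of cells.
# (The paper uses a data-driven partition; a grid keeps the sketch simple.)
N = 8                        # cells per dimension
lo, hi = -1.0, 1.0
width = (hi - lo) / N

def cell_of(x):
    """Map a state to its grid-cell index, clipped to the domain."""
    idx = np.clip(((x - lo) / width).astype(int), 0, N - 1)
    return tuple(int(i) for i in idx)

# Step 2: estimate the transition relation between cells. For each cell,
# propagate sampled states through f and record which cells the images
# land in (a sampling approximation of set-valued reachability).
def build_transitions(samples_per_cell=25, seed=0):
    rng = np.random.default_rng(seed)
    trans = {}
    for cell in itertools.product(range(N), repeat=2):
        low = lo + width * np.array(cell)
        pts = low + width * rng.random((samples_per_cell, 2))
        trans[cell] = {cell_of(f(p)) for p in pts}
    return trans

# Label cells with atomic propositions: here, 'goal' marks cells whose
# centers lie near the origin (a user-chosen property).
def labels(cell):
    center = lo + width * (np.array(cell) + 0.5)
    return {'goal'} if np.linalg.norm(center) < 0.3 else set()

# Step 3: check the CTL property EF goal via the standard backward least
# fixpoint: EF p = mu Z. (p or EX Z).
def check_EF(trans, prop):
    sat = {c for c in trans if prop in labels(c)}
    changed = True
    while changed:
        changed = False
        for c, succs in trans.items():
            if c not in sat and succs & sat:
                sat.add(c)
                changed = True
    return sat

transitions = build_transitions()
sat_EF_goal = check_EF(transitions, 'goal')
print(f"{len(sat_EF_goal)} of {N * N} cells satisfy EF goal")
print("cell (0, 0) satisfies EF goal:", (0, 0) in sat_EF_goal)
```

The fixpoint loop is the generic CTL model-checking recipe on finite transition systems; richer operators (AG, AF, EU, and so on) follow the same pattern with different fixpoint updates. Note that the sampling-based transition estimate may under-approximate reachability, whereas a sound set-valued analysis, as the paper proposes, over-approximates it, which is what makes the verification verdicts trustworthy.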