Abstract
Approximation error is a key measure in the validation and verification of neural networks. In this paper, the problem of guaranteed error estimation for neural networks is addressed, with applications to assured system modeling and assured neural network compression. First, a concept called guaranteed error estimation of feedforward neural networks is proposed, which aims to provide the worst-case approximation error of a trained neural network with respect to a compact input set, which in general contains infinitely many values. Given different prior information about the original system, two approaches, Lipschitz constant analysis and set-valued reachability analysis, are developed to efficiently compute upper bounds on the approximation error. Based on this guaranteed approximation error estimation framework, an optimization method for obtaining parameter values from data sets is proposed. Robotic arm and neural network compression examples are presented to illustrate the effectiveness of our approach.
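As a minimal sketch of the Lipschitz-constant approach described above: for a feedforward network with 1-Lipschitz activations (e.g., ReLU), the product of the layer weight matrices' spectral norms is a global Lipschitz upper bound, and combining it with a Lipschitz constant assumed for the original system yields a worst-case error bound over a compact input set sampled on a finite grid. All weights, constants (`L_g`, `eps`, `delta`), and the specific bound formula here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-layer ReLU network weights (illustrative only).
W1 = rng.standard_normal((8, 2))
W2 = rng.standard_normal((1, 8))

def lipschitz_upper_bound(weights):
    """Product of spectral norms: a (possibly loose) global Lipschitz
    upper bound for a feedforward net with 1-Lipschitz activations."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

L_net = lipschitz_upper_bound([W1, W2])

# Assumed setting: the original system g is L_g-Lipschitz, and the net
# matches g within eps at sample points spaced at most delta apart in
# the compact input set. Then for every x in the set,
#   |net(x) - g(x)| <= eps + (L_g + L_net) * delta / 2.
L_g, eps, delta = 3.0, 0.01, 0.05
worst_case = eps + (L_g + L_net) * delta / 2
print(f"Lipschitz bound: {L_net:.3f}, worst-case error: {worst_case:.4f}")
```

Because the spectral-norm product ignores activation patterns, the resulting bound is guaranteed but conservative; tighter bounds motivate the reachability-based analysis also mentioned in the abstract.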
Original language | English (US)
---|---
Pages (from-to) | 61-69
Number of pages | 9
Journal | Neural Networks
Volume | 151
State | Published - Jul 2022
Keywords
- Approximation error estimation
- Feedforward neural network
- Lipschitz constant
- Neural network compression
- Reachability
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence