Guaranteed approximation error estimation of neural networks and model modification

Yejiang Yang, Tao Wang, Jefferson P. Woolard, Weiming Xiang

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Approximation error is a key measure in the process of model validation and verification for neural networks. In this paper, the problems of guaranteed error estimation of neural networks and its applications to assured system modeling and assured neural network compression are addressed. First, the concept of guaranteed error estimation of feedforward neural networks is proposed, which aims to bound the worst-case approximation error of a trained neural network with respect to a compact input set that, in general, contains infinitely many values. Given different prior information about the original system, two approaches, one based on Lipschitz constant analysis and one on set-valued reachability analysis, are developed to efficiently compute upper bounds on the approximation error. Building on the guaranteed approximation error estimation framework, an optimization-based method for obtaining model parameter values from data sets is then proposed. A robotic arm modeling example and a neural network compression example are presented to illustrate the effectiveness of our approach.
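For illustration, the quantity being bounded is the worst-case error sup over x in X of |f(x) - Phi(x)|, where f is the original system, Phi the trained network, and X a compact input set. Below is a minimal sketch of a Lipschitz-style bound in that spirit; it is not the paper's algorithm, and every name and constant in it is hypothetical. If f and the network both admit known Lipschitz constants on an interval, the maximum sampled error on a grid can be inflated by (L_f + L_net) * delta / 2, with delta the grid spacing, to obtain a certified upper bound rather than a merely empirical maximum.

    import numpy as np

    def lipschitz_error_bound(f, net, lo, hi, L_f, L_net, n_grid=10_001):
        """Certified upper bound on sup_{x in [lo, hi]} |f(x) - net(x)|.

        Assumes f and net are Lipschitz on [lo, hi] with known constants
        L_f and L_net. Between a point x and its nearest grid point the
        error can grow by at most (L_f + L_net) * delta / 2, so the
        returned value upper-bounds the true worst-case error.
        """
        xs = np.linspace(lo, hi, n_grid)
        delta = (hi - lo) / (n_grid - 1)
        sampled_err = np.max(np.abs(f(xs) - net(xs)))
        return sampled_err + (L_f + L_net) * delta / 2.0

    # Hypothetical usage: sin as the "original system", a cubic Taylor
    # polynomial standing in for a trained network.
    f = np.sin                       # Lipschitz constant 1 everywhere
    net = lambda x: x - x**3 / 6.0   # |net'(x)| = |1 - x^2/2| <= 1 on [-2, 2]
    print(lipschitz_error_bound(f, net, -2.0, 2.0, L_f=1.0, L_net=1.0))

The bound tightens as the grid is refined, at the cost of more evaluations; set-valued reachability analysis, the second approach mentioned in the abstract, instead propagates whole input sets through the network rather than sampling points.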

Original language: English (US)
Pages (from-to): 61-69
Number of pages: 9
Journal: Neural Networks
Volume: 151
DOIs
State: Published - Jul 2022

Keywords

  • Approximation error estimation
  • Feedforward neural network
  • Lipschitz constant
  • Neural network compression
  • Reachability

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
