Learning DNN Abstractions using Gradient Descent

Bibliographic Details
Published in: IEEE/ACM International Conference on Automated Software Engineering: [proceedings], pp. 2299-2303
Main Authors: Mukhopadhyay, Diganta; Siddiqui, Sanaa; Karmarkar, Hrishikesh; Madhukar, Kumar; Katz, Guy
Format: Conference Proceeding
Language: English
Published: ACM 27.10.2024
ISSN: 2643-1572
Description
Summary: Deep Neural Networks (DNNs) are being trained and trusted for performing fairly complex tasks, even in business- and safety-critical applications. This necessitates that they be formally analyzed before deployment. Scalability of such analyses is a major bottleneck in their widespread use. There has been a lot of work on abstraction, and counterexample-guided abstraction refinement (CEGAR), of DNNs to address the scalability issue. However, these abstraction-refinement techniques explore only a subset of possible abstractions, and may miss an optimal abstraction. In particular, the refinement updates the abstract DNN based only on local information derived from the spurious counterexample in each iteration. The lack of a global view may result in a series of bad refinement choices, limiting the search to a region of sub-optimal abstractions. We propose a novel technique that parameterizes the construction of the abstract network in terms of continuous real-valued parameters. This allows us to use gradient descent to search through the space of possible abstractions, and ensures that the search never gets restricted to sub-optimal abstractions. Moreover, our parameterization can express more general abstractions than the existing techniques, enabling us to discover better abstractions than previously possible.
CCS Concepts: • Software and its engineering → Software verification; Model checking; Formal software verification; Software safety • Computing methodologies → Neural networks • Theory of computation → Abstraction
DOI: 10.1145/3691620.3695303
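
The summary above describes parameterizing the construction of an abstract network with continuous real-valued parameters, so that gradient descent can search the space of possible abstractions instead of relying on local, counterexample-driven refinement steps. The sketch below illustrates that general idea only; it is not the authors' implementation. It relaxes the merging of a hidden layer's neurons into a differentiable soft-assignment matrix and optimizes it by gradient descent to preserve the original network's outputs. All concrete choices here (the toy network sizes, the number of abstract neurons k_abstract, the mean-squared fidelity loss) are assumptions made for illustration.

    # Illustrative sketch: gradient-descent search over neuron-merging abstractions.
    import torch

    torch.manual_seed(0)

    # Toy "original" network 4 -> 8 -> 1 with fixed (pretrained) weights.
    W1 = torch.randn(8, 4)
    b1 = torch.randn(8)
    W2 = torch.randn(1, 8)
    b2 = torch.randn(1)

    def original(x):
        return torch.relu(x @ W1.T + b1) @ W2.T + b2

    # Abstract the 8 hidden neurons into k_abstract groups. The assignment of each
    # concrete neuron to a group is relaxed into a softmax over real-valued logits,
    # which makes the construction of the abstract network differentiable.
    k_abstract = 3
    logits = torch.randn(8, k_abstract, requires_grad=True)

    def abstract(x, logits):
        A = torch.softmax(logits, dim=1)   # soft assignment, shape (8, k_abstract)
        W1_abs = A.T @ W1                  # merged incoming weights, (k_abstract, 4)
        b1_abs = A.T @ b1                  # merged biases, (k_abstract,)
        W2_abs = W2 @ A                    # merged outgoing weights, (1, k_abstract)
        return torch.relu(x @ W1_abs.T + b1_abs) @ W2_abs.T + b2

    # Gradient descent over the space of abstractions: minimize the output
    # discrepancy between the abstract and the original network on sample inputs.
    opt = torch.optim.Adam([logits], lr=0.05)
    xs = torch.randn(256, 4)
    for step in range(200):
        opt.zero_grad()
        loss = torch.mean((abstract(xs, logits) - original(xs)) ** 2)
        loss.backward()
        opt.step()

    print("final discrepancy:", loss.item())
    print("hard assignment of each concrete neuron:", logits.argmax(dim=1).tolist())

In the paper's setting the objective would presumably be tied to the verification query rather than to plain output fidelity on random samples; the sketch only demonstrates how a continuous parameterization of the abstraction makes the search amenable to gradient descent.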