Youngmin Kwon

Papers from this author

Boundaries of Single-Class Regions in the Input Space of Piece-Wise Linear Neural Networks

Jay Hoon Jung, Youngmin Kwon


Auto-TLDR; Piece-wise Linear Neural Networks with Linear Constraints


An input space is the set of all possible inputs to a neural network. An element, or a group of elements, of the input space can easily be understood by projecting it onto its original form. Even though Piece-wise Linear Neural Networks (PLNNs) are nonlinear systems in general, a PLNN can also be expressed in terms of linear constraints because the Rectified Linear Unit (ReLU) function is piece-wise linear. A PLNN divides the input space into disjoint linear regions. We proved that all components of the outputs are continuous at the boundary between two adjacent regions. This continuity implies that the boundary corresponding to a unit should itself be continuous regardless of the regions. Furthermore, we also obtained the boundaries of a single-class region, in which all interior points share the same predicted class. Finally, we suggested that the point-wise robustness of a neural network can be calculated by investigating the boundaries of the linear regions and the single-class regions. We obtained adversarial examples whose Euclidean distances from the original inputs are less than 0.01 pixels.
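The core observation that a ReLU network is affine within each activation region, with region boundaries given by linear constraints, can be sketched in a minimal NumPy illustration. The one-hidden-layer network, its sizes, and its random weights below are arbitrary assumptions for demonstration, not the paper's setup:

```python
import numpy as np

def region_affine(W1, b1, W2, b2, x):
    """Return the affine map (A, c) that a one-hidden-layer ReLU
    network computes on the linear region containing x."""
    pre = W1 @ x + b1
    pattern = (pre > 0).astype(float)  # activation pattern identifies the region
    D = np.diag(pattern)               # zeroes out the inactive units
    A = W2 @ D @ W1                    # within this region: f(x) = A x + c
    c = W2 @ D @ b1 + b2
    return A, c, pattern

# The region is a polytope cut out by linear constraints: for each hidden
# unit i, the sign of W1[i] @ x + b1[i] is fixed, so the boundary attached
# to that unit is the hyperplane W1[i] @ x + b1[i] = 0.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

A, c, _ = region_affine(W1, b1, W2, b2, x)
reference = W2 @ np.maximum(W1 @ x + b1, 0) + b2  # ordinary forward pass
assert np.allclose(A @ x + c, reference)
```

The assertion confirms that, inside the region containing `x`, the nonlinear forward pass and the affine map agree exactly; the linear-constraint view of PLNNs in the abstract generalizes this construction to deeper networks.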

Color, Edge, and Pixel-Wise Explanation of Predictions Based on Interpretable Neural Network Model

Jay Hoon Jung, Youngmin Kwon


Auto-TLDR; Explainable Deep Neural Network with Edge Detecting Filters


We design an interpretable network model by introducing explainable components into a Deep Neural Network (DNN). We substituted the first-layer kernels of a Convolutional Neural Network (CNN) and a ResNet-50 with well-known edge-detecting filters such as Sobel and Prewitt. Each filter's relative importance score is measured with a variant of the Layer-wise Relevance Propagation (LRP) method proposed by Bach et al. Since the effects of the edge-detecting filters are well understood, our model provides three different scores to explain individual predictions: scores with respect to (1) colors, (2) edge filters, and (3) pixels of the image. Our method provides more tools for analyzing predictions by highlighting the locations of important edges and colors in the images. Furthermore, our scores can reveal the general features of a category as well as explain individual predictions. At the same time, the model does not degrade performance on the MNIST, Fruit360, and ImageNet datasets.
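The idea of a first layer whose behavior is understood in advance can be illustrated with fixed edge-detecting kernels applied by plain 2-D cross-correlation. This is a minimal sketch in NumPy; the tiny step-edge image and the `conv2d` helper are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Classic 3x3 edge-detecting kernels (horizontal-gradient variants).
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, as computed by a CNN's first layer."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge: dark on the left, bright on the right.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Because the kernels are fixed and well understood, a strong response
# directly signals a vertical edge at that location.
assert conv2d(img, SOBEL_X).max() == 4.0
assert conv2d(img, PREWITT_X).max() == 3.0
```

In the interpretable model described above, relevance propagated back to such fixed filters can be read directly as "this prediction relied on vertical edges here," which is what makes the edge-filter scores explainable.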