Christian Micheloni

Papers from this author

Deep Iterative Residual Convolutional Network for Single Image Super-Resolution

Rao Muhammad Umer, Gian Luca Foresti, Christian Micheloni

Auto-TLDR; ISRResCNet: Deep Iterative Super-Resolution Residual Convolutional Network for Single Image Super-resolution

Deep convolutional neural networks (CNNs) have recently achieved great success on the single image super-resolution (SISR) task due to their powerful feature representation capabilities. Most recent deep-learning-based SISR methods focus on designing deeper/wider models to learn the non-linear mapping between low-resolution (LR) inputs and high-resolution (HR) outputs. These existing SR methods do not take the image observation (physical) model into account and thus require a large number of trainable network parameters and a huge volume of training data. To address these issues, we propose a deep Iterative Super-Resolution Residual Convolutional Network (ISRResCNet) that exploits powerful image regularization and large-scale optimization techniques by training the deep network in an iterative manner with a residual learning approach. Extensive experimental results on various super-resolution benchmarks demonstrate that our method, with few trainable parameters, improves results for different scaling factors in comparison with state-of-the-art methods.
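
The abstract describes iterative refinement with residual learning and weight reuse. The following PyTorch sketch illustrates that general idea under stated assumptions: the module names, block depth, and number of iterations are illustrative and do not reproduce the exact ISRResCNet architecture.

```python
# Minimal sketch of an iterative residual refinement loop for SISR.
# Layer sizes and the number of iterations are illustrative assumptions,
# not the exact ISRResCNet design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualRefiner(nn.Module):
    """A small residual block whose weights are reused at every iteration."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)          # residual connection inside the block
        return self.tail(feat)                 # predicted residual image

class IterativeSR(nn.Module):
    """Upsample once, then iteratively add learned residual corrections."""
    def __init__(self, scale=4, steps=5):
        super().__init__()
        self.scale = scale
        self.steps = steps
        self.refiner = ResidualRefiner()        # shared weights -> few parameters

    def forward(self, lr):
        sr = F.interpolate(lr, scale_factor=self.scale, mode='bicubic',
                           align_corners=False)
        for _ in range(self.steps):
            sr = sr + self.refiner(sr)          # iterative residual update
        return sr

# Usage: a 4x model applied to a dummy 48x48 low-resolution batch.
model = IterativeSR(scale=4, steps=5)
lr_batch = torch.randn(2, 3, 48, 48)
sr_batch = model(lr_batch)                      # -> (2, 3, 192, 192)
```

Sharing the refiner across iterations is what keeps the parameter count small while still allowing several correction steps at inference time.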

Fixed Simplex Coordinates for Angular Margin Loss in CapsNet

Rita Pucci, Christian Micheloni, Gian Luca Foresti, Niki Martinel

Auto-TLDR; angular margin loss for capsule networks

A more stationary and discriminative embedding is necessary for robust classification of images. We focus our attention on the novel CapsNet model and propose an angular margin loss function in combination with the margin loss. We define a fixed classifier implemented with fixed weight vectors obtained from the vertex coordinates of a simplex polytope. The advantage of using a simplex polytope is that we obtain maximal symmetry for stationary, angularly centred features. Each weight vector is considered the centroid of a class in the dataset. The embedding of an image is obtained through the capsule network encoding phase and is identified with the digitcaps matrix. Given the centroids from the simplex coordinates and the embedding from the model, we compute the angular distance between the image embedding and the centroid of the corresponding class, and we take this angular distance as the angular margin loss. We keep the computation proposed for the margin loss in the original CapsNet architecture. We train the model to minimise the angle between the embedding and the centroid of the class and to maximise the magnitude of the embedding for the predicted class. Experiments on different datasets demonstrate that the angular margin loss improves the capability of capsule networks on complex datasets.
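
To make the construction concrete, the sketch below builds fixed simplex-vertex centroids (unit vectors with equal pairwise angles) and an angular distance term between embeddings and their class centroids. The embedding dimensionality, the way digitcaps vectors would be flattened, and the loss weighting are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of a fixed simplex classifier and an angular margin term.
# Dimensions and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def simplex_centroids(num_classes: int) -> torch.Tensor:
    """Return num_classes unit vectors with equal pairwise angles
    (vertices of a regular simplex centred at the origin)."""
    eye = torch.eye(num_classes)
    centred = eye - eye.mean(dim=0, keepdim=True)   # remove the common mean
    return F.normalize(centred, dim=1)              # pairwise cosine = -1/(C-1)

def angular_margin_loss(embeddings: torch.Tensor,
                        labels: torch.Tensor,
                        centroids: torch.Tensor) -> torch.Tensor:
    """Mean angle between each embedding and its class centroid."""
    emb = F.normalize(embeddings, dim=1)
    cos = (emb * centroids[labels]).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()                   # minimise the angle

# Usage with dummy 10-class embeddings (e.g. flattened capsule outputs).
centroids = simplex_centroids(10)
emb = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = angular_margin_loss(emb, labels, centroids)
loss.backward()
```

Because the centroids are fixed and maximally symmetric, only the encoder is trained; the angular term pulls each embedding toward its class vertex while the original margin loss can still act on the embedding magnitude.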

Self and Channel Attention Network for Person Re-Identification

Asad Munir, Niki Martinel, Christian Micheloni

Auto-TLDR; SCAN: Self and Channel Attention Network for Person Re-identification

Recent research has shown promising results for person re-identification by focusing on several trends. One is designing efficient metric learning loss functions, such as the triplet loss family, to learn the most discriminative representations. Another is learning local features by designing part-based architectures that form an informative descriptor from semantically coherent parts. Some efforts adjust distant outliers to their most similar positions by using soft attention and learn the relationships between distant similar features. However, only a few prior efforts focus on channel-wise dependencies and learn non-local sharp similar part features directly for degraded data in the person re-identification task. In this paper, we propose a novel Self and Channel Attention Network (SCAN) to model long-range dependencies between channels and feature maps. We add multiple classifiers to learn discriminative global features with a classification loss. A Self Attention (SA) module and a Channel Attention (CA) module are introduced to model non-local and channel-wise dependencies in the learned features. Spectral normalization is applied to the whole network to stabilize training. Experimental results on person re-identification benchmarks show that the proposed components achieve significant improvements over the baseline.
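
The two attention ingredients named in the abstract can be sketched as follows: a non-local self-attention block over spatial positions and a squeeze-and-excitation style channel attention block, with spectral normalization applied to the convolutions. Layer sizes and the reduction ratio are illustrative assumptions, not the exact SCAN configuration.

```python
# Minimal sketch of self attention (non-local spatial dependencies) and
# channel attention (channel-wise dependencies) with spectral normalization.
# Dimensions and the reduction ratio are illustrative assumptions.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention(nn.Module):
    """Model long-range (non-local) spatial dependencies in a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.key = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.value = spectral_norm(nn.Conv2d(channels, channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))       # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                     # residual connection

class ChannelAttention(nn.Module):
    """Re-weight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            spectral_norm(nn.Conv2d(channels, channels // reduction, 1)),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Conv2d(channels // reduction, channels, 1)),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

# Usage on a dummy backbone feature map.
feat = torch.randn(4, 256, 24, 8)
feat = SelfAttention(256)(feat)
feat = ChannelAttention(256)(feat)
```

Placing such blocks after backbone stages lets the network relate distant but similar body parts spatially while also emphasising the most informative channels, which is the behaviour the abstract attributes to the SA and CA modules.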