Toshihiko Yamasaki

Papers from this author

Feature Point Matching in Cross-Spectral Images with Cycle Consistency Learning

Ryosuke Furuta, Naoaki Noguchi, Xueting Wang, Toshihiko Yamasaki


Auto-TLDR; Unsupervised Learning for General Feature Point Matching in Cross-Spectral Settings


Feature point matching is an important problem because its applications cover a wide range of tasks in computer vision. Deep learning-based methods for learning local features have recently shown superior performance. However, collecting training data for these methods is difficult, especially in cross-spectral settings such as correspondence between RGB and near-infrared images. In this paper, we propose an unsupervised learning method for general feature point matching. We train a convolutional neural network as a feature extractor so that the correspondences between an input image pair satisfy cycle consistency; the proposed method therefore requires no supervision and works even in cross-spectral settings. In our experiments, we apply the proposed method to stereo matching, which is a dense feature point matching problem. Experimental results on three settings that simulate cross-spectral conditions, i.e., RGB stereo, RGB vs. gray-scale, and anaglyph (red vs. cyan), show that our proposed method outperforms the compared methods, which employ handcrafted features for stereo matching, by a significant margin.
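The cycle-consistency objective described above can be illustrated with a minimal sketch: soft correspondences are formed from descriptor similarities in both directions, and the loss penalizes round trips (A→B→A) that do not return to the starting point. This is an assumed formulation for illustration only, not the authors' implementation; the function name, the softmax temperature, and the mean-squared deviation from identity are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_loss(feat_a, feat_b, temperature=0.1):
    """Hypothetical cycle-consistency loss over soft correspondences.

    feat_a: (N, D) descriptors for N feature points in image A
    feat_b: (M, D) descriptors for M feature points in image B

    Soft matches A->B and B->A are composed; the round trip should
    come back to the identity mapping on A's points.
    """
    sim_ab = feat_a @ feat_b.T / temperature   # (N, M) similarity scores
    p_ab = softmax(sim_ab, axis=1)             # soft correspondence A -> B
    p_ba = softmax(sim_ab.T, axis=1)           # soft correspondence B -> A
    p_aba = p_ab @ p_ba                        # round-trip mapping (N, N)
    target = np.eye(feat_a.shape[0])           # ideal round trip: identity
    return float(np.mean((p_aba - target) ** 2))
```

Because the loss depends only on the two images themselves, no ground-truth correspondences are needed, which is what makes the training unsupervised.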

Predicting Online Video Advertising Effects with Multimodal Deep Learning

Jun Ikeda, Hiroyuki Seshime, Xueting Wang, Toshihiko Yamasaki


Auto-TLDR; An Optimized Framework for Predicting the Effect of Video Advertising on Click Through Rate


With the expansion of the video advertising market, research on predicting the effects of video advertising is attracting more attention. Although effect prediction for image advertising has been explored extensively, prediction for video advertising remains challenging and has rarely been studied. In this research, we propose a method for predicting the click-through rate (CTR) of video advertisements and analyzing the factors that determine the CTR. We demonstrate an optimized framework for accurately predicting the effects by taking advantage of the multimodal nature of online video advertisements, including video, text, and metadata features. In particular, the two types of metadata, i.e., categorical and continuous, are properly separated and normalized. To avoid overfitting, which is crucial in our task because the training data are limited, additional regularization layers are inserted. Experimental results show that our approach achieves a correlation coefficient as high as 0.695, a significant improvement over the baseline (0.487).
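The metadata handling described above (separating categorical from continuous fields and normalizing each appropriately) can be sketched as follows. This is a hypothetical illustration of the preprocessing step only; the function name, one-hot encoding for categorical fields, and z-score normalization for continuous fields are assumptions, not details from the paper.

```python
import numpy as np

def preprocess_metadata(categorical, continuous, num_categories):
    """Hypothetical metadata preprocessing for a multimodal model.

    categorical: (N,) integer category indices (e.g., ad genre IDs)
    continuous: (N, K) real-valued fields (e.g., duration, budget)

    Categorical fields are one-hot encoded; continuous fields are
    z-score normalized so both live on comparable scales before
    being concatenated into one metadata feature vector.
    """
    onehot = np.eye(num_categories)[np.asarray(categorical)]   # (N, num_categories)
    cont = np.asarray(continuous, dtype=float)
    mu = cont.mean(axis=0)
    sigma = cont.std(axis=0) + 1e-8                            # avoid division by zero
    normalized = (cont - mu) / sigma                           # zero mean, unit variance
    return np.concatenate([onehot, normalized], axis=1)
```

Keeping the two metadata types on comparable scales before concatenation is one common way to stop a few large-valued continuous fields from dominating the fused representation.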