Kiyoharu Aizawa

Papers from this author

Few-Shot Font Generation with Deep Metric Learning

Haruka Aoki, Koki Tsubota, Hikaru Ikuta, Kiyoharu Aizawa
Track 1: Artificial Intelligence, Machine Learning for Pattern Analysis
Wed 13 Jan 2021 at 12:00 in session PS T1.4


Auto-TLDR; Deep Metric Learning for Japanese Typographic Font Synthesis


Designing fonts for languages with a large number of characters, such as Japanese and Chinese, is an extremely labor-intensive and time-consuming task. In this study, we address the problem of automatically generating Japanese typographic fonts from only a few font samples, where the synthesized glyphs are expected to have coherent characteristics such as skeletons, contours, and serifs. Existing methods often fail to generate fine glyph images when the number of style reference glyphs is extremely limited. We propose a simple but powerful framework for extracting better style features, which introduces deep metric learning to the style encoders. We perform experiments on black-and-white and shape-distinctive font datasets and demonstrate the effectiveness of the proposed framework.
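The abstract does not spell out the metric-learning objective applied to the style encoder; a common choice for pulling same-font style embeddings together while pushing different fonts apart is a triplet margin loss. The sketch below is a minimal illustration of that general idea, not the paper's exact formulation (the function names and the margin value are assumptions):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on style embeddings: glyphs from the same font
    (anchor, positive) should be closer than glyphs from different fonts
    (anchor, negative) by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

With such a loss, the style encoder is trained so that a few reference glyphs of an unseen font map to a tight cluster in embedding space, from which a coherent style feature can be pooled.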

The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals

Takumi Kawashima, Qing Yu, Akari Asai, Daiki Ikami, Kiyoharu Aizawa
Track 1: Artificial Intelligence, Machine Learning for Pattern Analysis
Thu 14 Jan 2021 at 14:00 in session OS T1.5


Auto-TLDR; Aleatoric Uncertainty Estimation in Regression Problems


We propose a new optimization framework for aleatoric uncertainty estimation in regression problems. Existing methods can quantify the error in the target estimation, but they tend to underestimate it. To obtain the predictive uncertainty inherent in an observation, we propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting. By decoupling target estimation and uncertainty estimation, we also control the balance between signal estimation and uncertainty estimation. We conduct three types of experiments: regression with simulation data, age estimation, and depth estimation. We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
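For context, the jointly trained heteroscedastic Gaussian negative log-likelihood is the standard objective that, as the abstract notes, tends to underestimate uncertainty; the paper's separable formulation instead decouples target estimation from uncertainty estimation. The sketch below contrasts the two at the level of a single sample. It is an illustrative simplification under assumed per-sample losses, not the paper's exact method (in particular, the decoupled variant here uses the plain squared residual as a fixed target, whereas the paper introduces virtual residuals):

```python
import math

def joint_nll(y, mean, log_var):
    """Per-sample Gaussian NLL with a learned log-variance. When mean and
    variance are optimized jointly, inflating log_var can mask large errors
    during training, one route to distorted uncertainty estimates."""
    return 0.5 * math.exp(-log_var) * (y - mean) ** 2 + 0.5 * log_var

def decoupled_losses(y, mean, log_var):
    """Illustrative decoupling: fit the mean with plain squared error, and
    fit log_var against the squared residual treated as a constant target,
    so the uncertainty head cannot pull the signal estimate off course."""
    signal_loss = (y - mean) ** 2
    residual_sq = signal_loss  # detached: a fixed target for log_var
    uncertainty_loss = 0.5 * math.exp(-log_var) * residual_sq + 0.5 * log_var
    return signal_loss, uncertainty_loss
```

Separating the two terms also makes it straightforward to weight signal estimation against uncertainty estimation, matching the balance-control property described in the abstract.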

Translating Adult's Focus of Attention to Elderly's

Onkar Krishna, Go Irie, Takahito Kawanishi, Kunio Kashino, Kiyoharu Aizawa
Track 3: Computer Vision Robotics and Intelligent Systems
Fri 15 Jan 2021 at 13:00 in session OS T3.6


Auto-TLDR; Elderly Focus of Attention Prediction Using Deep Image-to-Image Translation


Predicting which part of a scene elderly people would pay attention to could be useful in assisting their daily activities, such as driving, walking, and searching. Many computational models for predicting focus of attention (FoA) have been developed. However, most of them focus on mimicking adult FoA and, due to age-related changes in human vision, do not work well for predicting the elderly's. Is it possible to leverage the predictions of an FoA model trained on general adults to accurately predict the elderly's FoA, rather than training a new network from scratch? In this paper, we consider the novel problem of translating adults' FoA to the elderly's and propose an approach based on deep image-to-image translation. Experimental results on two datasets covering both free-viewing and task-based viewing scenarios demonstrate that our model achieves markedly better prediction accuracy than the baselines.
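To make the problem setup concrete, one can think of the translation as learning a mapping from an adult attention map to an elderly attention map over paired data. The sketch below fits a deliberately tiny per-pixel affine stand-in (elderly ≈ a·adult + b) by least squares; it is a hypothetical simplification for illustration, whereas the paper uses a deep image-to-image translation network:

```python
def fit_affine_translation(adult_maps, elderly_maps):
    """Least-squares fit of elderly = a * adult + b over paired attention
    maps, flattened to pixel lists. A linear stand-in for illustrating the
    adult-to-elderly FoA translation setup (not the paper's deep model)."""
    xs = [p for m in adult_maps for p in m]
    ys = [p for m in elderly_maps for p in m]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b
```

A deep translation network plays the same role as (a, b) here, but can model spatially varying, nonlinear shifts in attention rather than a single global rescaling.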