Image Defocus Analysis for Finger Detection on a Virtual Keyboard

Miwa Michio, Honda Kenji, Sato Makoto

Auto-TLDR; Analysis of defocus information when a finger touches a virtual keyboard, using DCT (Discrete Cosine Transform) coefficients and without detecting the finger's 3D position

This paper describes the analysis of defocus information when a finger touches a virtual keyboard, using DCT (Discrete Cosine Transform) coefficients and without detecting the 3D position (in particular, the exact depth) of the finger. We use two cameras and a half mirror to capture images of the finger along the same optical axis. The focal lengths of the two cameras are slightly different. We can tell that the finger is touching the virtual keyboard when it lies midway between the two focal planes, because the virtual keyboard is placed at the middle position, where the defocus information in the two cameras is the same. Previous virtual keyboards are realized by first detecting a feature point of the finger and then checking whether the finger's position and depth match those of the keyboard. A problem with these systems is that a 3D point of the finger (a feature point, edge, or small region) must be detected, which is very time consuming. We overcome this problem by comparing the defocus information of the finger within the key area of the virtual keyboard in the two images captured by the two cameras. In this paper we describe the optical system of the virtual keyboard and the depth resolution for the finger. Experimental results make it clear that the middle range of the DCT coefficients is effective for detecting the finger. In our system, the finger touch is detected by comparing the DCT coefficients of the two images. The minimum detectable finger distance depends on the diameter and focal length of the lens and on the resolution of the image sensor. We formulate this theoretical minimum distance and verify it by experiment. Besides fingers, our system can also be generalized to detect whether other objects touch the keyboard.
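
As a rough sketch of this touch test (not the authors' implementation; the mid-frequency band limits and tolerance below are illustrative assumptions), one could compare the mid-band DCT energies of the key-area patches seen by the two co-axial cameras:

```python
import numpy as np
from scipy.fft import dctn

def midband_energy(patch, lo=4, hi=16):
    # Sum of absolute mid-band 2D DCT coefficients of a grayscale patch.
    # The band limits (lo, hi) are illustrative guesses, not the paper's values.
    coeffs = dctn(patch.astype(np.float64), norm="ortho")
    return np.abs(coeffs[lo:hi, lo:hi]).sum()

def finger_touches(patch_near, patch_far, tol=0.1):
    # The two co-axial cameras focus slightly in front of and behind the
    # virtual keyboard plane; a touch is declared when both see the same
    # amount of mid-frequency detail, i.e. equal defocus.
    e1, e2 = midband_energy(patch_near), midband_energy(patch_far)
    return abs(e1 - e2) / max(e1, e2, 1e-12) < tol
```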

Similar papers

Stabilized Calculation of Gaussian Smoothing and Its Differentials Using Attenuated Sliding Fourier Transform

Yukihiko Yamashita, Toru Wakahara

Auto-TLDR; An attenuated SFT for Gaussian smoothing

Gaussian smoothing and its first and second differentials are very important for image processing and computer vision. Methods based on the sliding Fourier transform (SFT) already exist for calculating Gaussian smoothing efficiently. However, errors in floating-point calculation accumulate quickly when we use the single-precision floating-point format. Also, the resultant discontinuities at the edges of SFT intervals easily generate false extreme points. To resolve the above-mentioned problems, we propose the attenuated SFT (ASFT), obtained by introducing a decay factor into the original SFT, together with a new criterion for determining coefficients that effectively approximate the Gaussian function and its differentials. Extensive experiments demonstrate a decisive superiority of the proposed ASFT over the original SFT.
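
A minimal single-bin sketch of the attenuation idea, assuming the common damped sliding-DFT recurrence S(n) = r·e^{j2πk/N}·S(n−1) + x(n) − r^N·x(n−N); the paper's coefficient-selection criterion for approximating the Gaussian is not reproduced here:

```python
import numpy as np

def attenuated_sliding_dft(x, N, k, r=0.999):
    # Recursively maintains S(n) = sum_{m=0}^{N-1} r^m * x[n-m] * e^{j*2*pi*k*m/N}.
    # The decay factor r < 1 keeps single-precision rounding errors from
    # accumulating, at the cost of slightly attenuating old samples.
    w = r * np.exp(2j * np.pi * k / N)
    rN = r ** N
    S = 0.0 + 0.0j
    out = np.empty(len(x), dtype=complex)
    for n in range(len(x)):
        x_old = x[n - N] if n >= N else 0.0
        S = w * S + x[n] - rN * x_old
        out[n] = S
    return out
```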

DCT/IDCT Filter Design for Ultrasound Image Filtering

Barmak Honarvar Shakibaei Asli, Jan Flusser, Yifan Zhao, John Ahmet Erkoyuncu, Rajkumar Roy

Auto-TLDR; Finite impulse response digital filter using DCT-II and inverse DCT

In this paper, a new recursive structure based on the convolution model of the discrete cosine transform (DCT) for designing a finite impulse response (FIR) digital filter is proposed. In our derivation, we start with the convolution model of DCT-II and use its Z-transform to develop the proposed filter structure. Moreover, using the same algorithm, a filter-based implementation of the inverse DCT (IDCT) for image reconstruction is developed. Timing experiments demonstrate that the proposed DCT/IDCT filters achieve faster elapsed CPU time than competing methods. The image filtering and reconstruction performance of the proposed approach on ultrasound images is presented to validate the theoretical framework.

Calibration and Absolute Pose Estimation of Trinocular Linear Camera Array for Smart City Applications

Martin Ahrnbom, Mikael Nilsson, Håkan Ardö, Kalle Åström, Oksana Yastremska-Kravchenko, Aliaksei Laureshyn

Auto-TLDR; Trinocular Linear Camera Array Calibration for Traffic Surveillance Applications

A method for calibrating a Trinocular Linear Camera Array (TLCA) for traffic surveillance applications, such as in smart cities, is presented. A TLCA-specific parametrization guarantees that the calibration finds a model where all the cameras are on a straight line. The method uses both a chequerboard close to the camera and measured 3D points far from the camera: points measured in world coordinates, together with their corresponding 2D points found manually in the images. Superior calibration accuracy can be obtained compared to standard methods using only a single data source, largely due to the use of chequerboards, while the line constraint in the parametrization allows for joint rectification. In our experiment, triangulation accuracy improves from 8-12 cm to around 6 cm when calibrating with 30-50 points, allowing better road user analysis. The method is demonstrated by a proof-of-concept application in which a point cloud is generated from multiple disparity maps, visualizing road user detections in 3D.

Camera Calibration Using Parallel Line Segments

Gaku Nakano

Auto-TLDR; Closed-Form Calibration of Surveillance Cameras using Parallel 3D Line Segment Projections

This paper proposes a camera calibration method suitable for surveillance cameras using the image projection of parallel 3D line segments of the same length. We assume that vertical line segments are perpendicular to the ground plane and their bottom end-points are on the ground plane. Under this assumption, the camera parameters can be directly solved by at least two line segments without estimating vanishing points. Extending the minimal solution, we derive a closed-form solution to the least squares case with more than two line segments. Lens distortion is jointly optimized in bundle adjustment. Synthetic data evaluation shows that the best depression angle of a camera is around 50 degrees. In real data evaluation, we use body joints of pedestrians as vertical line segments. The experimental results on publicly available datasets show that the proposed method with a human pose detector can correctly calibrate wide-angle cameras including radial distortion.

Generic Document Image Dewarping by Probabilistic Discretization of Vanishing Points

Gilles Simon, Salvatore Tabbone

Auto-TLDR; Robust Document Dewarping using vanishing points

Document image dewarping is still a challenge, especially when documents are captured with a single camera in an uncontrolled environment. In this paper we propose a generic approach based on vanishing points (VP) to reconstruct the 3D shape of document pages. Unlike previous methods, we do not need to segment the text included in the documents. Therefore, our approach is less sensitive to pre-processing and segmentation errors. The computation of the VPs is robust and relies on the a-contrario framework, which has only one parameter, whose setting is based on probabilistic reasoning instead of experimental tuning. Thus, our method can be applied to any kind of document, including text and non-text blocks, and extended to other kinds of images. Experimental results show that the proposed method is robust to a variety of distortions.

Feasibility Study of Using MyoBand for Learning Electronic Keyboard

Sharmila Mani, Madhav Rao

Auto-TLDR; Autonomous Finger-Based Music Instrument Learning Using MyoBand Electromyography and Machine Learning

Learning a musical instrument such as the piano or electronic keyboard takes, on average, a decade. Currently, instrument learning requires continuous supervision from a tutor, and self-learning to reach expert level is considered impossible. On the other hand, it often becomes unrealistic to stay connected with a music tutor for a long time, and many learners stop halfway. To address this issue, online distance-learning platforms have been implemented for music learning, yet they do not support self-learning, remain tutor-dependent, and are not scalable. In addition, these platforms have no way to verify whether the user pressed a key note with the intended finger, which is significant for learning finger-based musical instruments. To overcome this, we propose an autonomous system that evaluates and guides the learning process by continuously tracking finger movements via a non-camera-based solution. A finger press triggers muscle movements that are detected at the surface of the forearm in the form of surface electromyography (sEMG) signals. The paper proposes tracking finger presses on an electronic keyboard using the MyoBand [1] wearable device, which provides 8 channels of sEMG signals. A machine learning (ML) approach was considered, with eleven time- and frequency-domain features of the sEMG signals, to classify the musical note played on each finger press. The feature set was standardized using the standard-scaler approach, and vector dimensions were reduced by Linear Discriminant Analysis (LDA). The resulting reduced-dimension data was fed to a Random Forest (RF) classifier, which reported the best classification accuracy for our application. For training the RF model, several trials of 10-second sEMG signals were collected using the wearable MyoBand device. Experiments involved single finger presses to render a note on the instrument, and multiple finger presses to define chord sequences on an electronic musical keyboard. Further analysis was performed to maximize the classification accuracy over the number of trials and to optimize the electrode positions for successful identification of the musical note played. The proposed method achieves a classification accuracy of 74.25% for 5 musical notes played on an electronic keyboard with 4 MyoBand electrodes, and an accuracy of 95.83% with one electrode for distinguishing between four musical events, including two major chords and two musical notes.
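
The classification pipeline described above (standard scaling, LDA reduction, Random Forest) maps directly onto a few lines of scikit-learn; the hyperparameters below are illustrative, not the paper's:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

# X: (n_trials, n_features) matrix holding the eleven time/frequency-domain
# features per 8-channel sEMG window; y: the note or chord label per trial.
clf = make_pipeline(
    StandardScaler(),                 # standard-scaler feature normalization
    LinearDiscriminantAnalysis(),     # LDA dimensionality reduction
    RandomForestClassifier(n_estimators=200, random_state=0),
)
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```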

Extraction and Analysis of 3D Kinematic Parameters of Table Tennis Ball from a Single Camera

Jordan Calandre, Renaud Péteri, Laurent Mascarilla, Benoit Tremblais

Auto-TLDR; 3D Ball Trajectories Analysis using a Single Camera for Sport Gesture Analysis

Vision is the first indicator for coaches to assess the quality of a sport gesture. However, gesture analysis using computer vision is often restricted to laboratory experiments, far from the real conditions in which athletes train on a daily basis. In this perspective, we introduce 3D ball trajectory analysis using a single camera with very few acquisition constraints. A key point of the proposal is the estimation of the apparent ball size to obtain the ball-to-camera distance. For this purpose, a 2D CNN is trained on a generated dataset that enables reliable ball size extraction, even in cases of high motion blur. The final objective is not only to determine ball trajectories, but most importantly to retrieve their relevant physical parameters. With a precise estimation of those trajectories, it is indeed possible to extract the ball's tangential and rotational speed, related to the so-called Magnus effect. Validation experiments for characterizing table tennis strokes are presented on both a synthetic dataset and real video sequences.
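
The apparent-size-to-distance step reduces to the pinhole relation Z = f·D/d for a table-tennis ball of known diameter D = 40 mm; a minimal sketch (the paper's CNN supplies the apparent diameter under motion blur):

```python
def ball_distance_m(apparent_diameter_px, focal_length_px, ball_diameter_m=0.04):
    # Pinhole camera model: an object of real size D imaged with apparent
    # size d pixels by a camera of focal length f pixels lies at Z = f*D/d.
    return focal_length_px * ball_diameter_m / apparent_diameter_px
```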

Photometric Stereo with Twin-Fisheye Cameras

Jordan Caracotte, Fabio Morbidi, El Mustapha Mouaddib

Auto-TLDR; Photometric stereo problem for low-cost 360-degree cameras

In this paper, we introduce and solve, for the first time, the photometric stereo problem for low-cost 360-degree cameras. In particular, we present a spherical image irradiance equation which is adapted to twin-fisheye cameras, and an original algorithm for the estimation of light directions based on the specular highlights observed on mirror balls. Extensive experiments with synthetic and real-world images captured by a Ricoh Theta V camera, demonstrate the effectiveness and robustness of the proposed 3D reconstruction pipeline. To foster reproducible research, the image dataset and code developed for this paper are made publicly available at the address: https://home.mis.u-picardie.fr/~fabio/PhotoSphere.html
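
For context, the planar core of photometric stereo that the paper adapts to the spherical irradiance model is the classic least-squares normal recovery; a generic sketch (not the twin-fisheye formulation):

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    # I: (k, n) intensities of n pixels across k images; L: (k, 3) known
    # light directions. Solves I = L @ G for the albedo-scaled normals G.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G: (3, n)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals, albedo
```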

2D Discrete Mirror Transform for Image Non-Linear Approximation

Alessandro Gnutti, Fabrizio Guerrini, Riccardo Leonardi

Auto-TLDR; Discrete Mirror Transform (DMT)

In this paper, a new 2D transform named the Discrete Mirror Transform (DMT) is presented. The DMT is computed by decomposing a signal into its even and odd parts around an optimal location in a given direction, so that the signal energy is maximally split between the two components. After minimizing the information required to regenerate the original signal by removing redundant structures, the process is iterated, causing the signal energy to concentrate in an ever smaller set of coefficients. The DMT can be displayed as a binary tree, where each node represents the single (even or odd) signal derived from the decomposition in the previous level. An optimized version of the DMT (ODMT) is also introduced, which exploits the possibility of choosing different directions in which to perform the decomposition. Experimental simulations have been carried out to test the sparsity properties of the DMT and ODMT when applied to images: for both transforms, the results show superior performance with respect to the popular Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) in terms of non-linear approximation.
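
A 1-D toy of the first DMT level, under the assumption that "maximally split" means maximally unbalancing the energy between the even and odd parts (the 2-D, direction-selective version in the paper is more involved):

```python
import numpy as np

def even_odd_split(f, c):
    # even[i] = (f[i] + f[m]) / 2 and odd[i] = (f[i] - f[m]) / 2 with
    # m = 2c - i the index mirrored around c (clamped at the borders);
    # even + odd always reconstructs f exactly.
    idx = np.clip(2 * c - np.arange(len(f)), 0, len(f) - 1)
    return 0.5 * (f + f[idx]), 0.5 * (f - f[idx])

def best_split_location(f):
    # Choose the mirror centre that pushes as much energy as possible
    # into one of the two components.
    gaps = [abs((e**2).sum() - (o**2).sum())
            for e, o in (even_odd_split(f, c) for c in range(len(f)))]
    return int(np.argmax(gaps))
```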

Rotation Detection in Finger Vein Biometrics Using CNNs

Bernhard Prommegger, Georg Wimmer, Andreas Uhl

Auto-TLDR; A CNN based rotation detector for finger vein recognition

Finger vein recognition deals with the identification of subjects based on the venous pattern within their fingers. The recognition accuracy of finger vein recognition systems suffers from different internal and external factors. One of the major problems is misplacement of the finger during acquisition; in particular, longitudinal finger rotation poses a severe problem for such recognition systems. The detection and correction of such rotations is a difficult task, as finger vein scanners typically acquire only a single image of the vein pattern. Therefore, important information such as the shape of the finger or the depth of the veins within the finger, which is needed for rotation detection, is not available. This work presents a CNN-based rotation detector that is capable of estimating the rotational difference between vein images of the same finger without any additional information. The experiments show not only that the method delivers highly accurate results, but also that it generalizes, so that the trained CNN can be applied to data sets that were not included during training. Correcting the rotation difference between images using the CNN's rotation prediction leads to EER improvements of 50-260% for a well-established vein-pattern-based method (Maximum Curvature) on four public finger vein databases.

Extending Single Beam Lidar to Full Resolution by Fusing with Single Image Depth Estimation

Yawen Lu, Yuxing Wang, Devarth Parikh, Guoyu Lu

Auto-TLDR; Self-supervised LIDAR for Low-Cost Depth Estimation

Depth estimation plays an important role in indoor and outdoor scene understanding, autonomous driving, augmented reality and many other tasks. Vehicles and robots can use active illumination sensors such as LIDAR to obtain high-precision depth estimates. However, high-resolution LIDARs are usually too expensive, which limits their mass deployment in various applications. Though a single-beam LIDAR enjoys the benefit of low cost, one-beam depth sensing is usually not sufficient to perceive the surrounding environment in many scenarios. In this paper, we propose a learning-based framework that replicates similar or even better performance than costly LIDARs, using our designed self-supervised network and a low-cost single-beam LIDAR. After accurate calibration with a visible camera, the single-beam LIDAR can correct the scale uncertainty of the depth map estimated by the visible camera. The adjusted depth map enjoys the high resolution and sensing accuracy of a high-beam LIDAR while maintaining the low cost of a single-beam LIDAR. Thus we can achieve a sensing effect similar to that of a high-beam LIDAR at a 50-100 times lower price (e.g., an $80,000 Velodyne HDL-64E LIDAR vs. a $1,000 SICK TIM-781 2D LIDAR and a normal camera). The proposed approach is verified on our collected dataset and a public dataset with superior depth-sensing performance.
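
A minimal sketch of why one calibrated beam suffices to fix the monocular scale ambiguity (the paper uses a learned self-supervised network; this only illustrates the fusion idea):

```python
import numpy as np

def rescale_monocular_depth(pred_depth, beam_ranges, beam_pixels):
    # pred_depth: (h, w) scale-ambiguous monocular depth map.
    # beam_ranges: (n,) metric ranges from the single-beam LIDAR.
    # beam_pixels: (n, 2) integer (row, col) image locations of the beam
    # returns, known after LIDAR-camera calibration.
    rows, cols = beam_pixels[:, 0], beam_pixels[:, 1]
    scale = np.median(beam_ranges / pred_depth[rows, cols])
    return scale * pred_depth
```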

Computational Data Analysis for First Quantization Estimation on JPEG Double Compressed Images

Sebastiano Battiato, Oliver Giudice, Francesco Guarnera, Giovanni Puglisi

Auto-TLDR; Exploiting Discrete Cosine Transform Coefficients for Multimedia Forensics

The work of multimedia forensics experts consists in providing answers about the integrity of a specific media content and where it comes from. Exploiting traces in JPEG double compressed images is often one of the main investigative paths for these purposes. It is therefore fundamental to have tools and algorithms able to reliably estimate the first quantization matrix, in order to proceed with camera model identification and related tasks. In this paper, a technique based on extensive simulation is proposed, with the aim of inferring the first quantization for a certain number of Discrete Cosine Transform (DCT) coefficients by exploiting local image statistics, without using any a-priori knowledge. The method also provides a reliable confidence value for the estimation, which is of great importance for forensic purposes. Experimental results w.r.t. the state of the art demonstrate the effectiveness of the proposed technique both in terms of precision and overall reliability.

Surface Material Dataset for Robotics Applications (SMDRA): A Dataset with Friction Coefficient and RGB-D for Surface Segmentation

Donghun Noh, Hyunwoo Nam, Min Sung Ahn, Hosik Chae, Sangjoon Lee, Kyle Gillespie, Dennis Hong

Auto-TLDR; A Surface Material Dataset for Robotics Applications

In this paper, we introduce the Surface Material Dataset for Robotics Applications (SMDRA), a collection of RGB color images, depth data, and pixel-wise friction coefficient data for 10 different materials, aimed at computer vision research with robotics applications in mind that require physical contact between the robot and its environment, such as robotic manipulators or walking robots. The selected surface materials are both easily accessible in daily life and cover a wide range of friction coefficients. Our dataset is unique in that, while there is an abundance of RGB-D data due to the popularization of imaging sensors, additional pixel-wise aligned data of a different modality are not readily available. The depth data is collected by an active stereo camera, which has shown promise in a variety of robotic applications. In addition, the dataset is greatly expanded with friction coefficient data. Similarly to humans, this additional information can help ensure proper decision making in tasks ranging from grasping orientation and strength to path determination in an unstructured environment. A newly developed friction-measuring device was used to obtain this data. We verify that existing Convolutional Neural Network (CNN) architectures, the Fully Convolutional Network (FCN) and U-Net, can be trained on the SMDRA. This result demonstrates that the SMDRA can be utilized to train a neural network model for segmentation, and that these additional modalities are not just extra information but valuable modes that researchers can incorporate and exploit when applying computer vision algorithms on robotic platforms.

Extended Depth of Field Preserving Color Fidelity for Automated Digital Cytology

Alexandre Bouyssoux, Riadh Fezzani, Jean-Christophe Olivo-Marin

Auto-TLDR; Multi-Channel Extended Depth of Field for Digital cytology based on the stationary wavelet transform

This paper presents a multi-channel Extended Depth of Field (EDF) method for digital cytology based on the stationary wavelet transform. With a coefficient selection rule adapted to precise color recovery, a sharp image can be reconstructed even on images with transparent overlapping cells. The precision and the color fidelity of the proposed method are analyzed. Moreover, an experiment demonstrating the necessity of volume analysis in cytology to achieve precise segmentation on cell clumps is conducted, and the importance of color fidelity in this context is asserted. The proposed method was tested on pap-stained urothelial cells and gray-scale cervical cells with significant overlap.

Distortion-Adaptive Grape Bunch Counting for Omnidirectional Images

Ryota Akai, Yuzuko Utsumi, Yuka Miwa, Masakazu Iwamura, Koichi Kise

Auto-TLDR; Object Counting for Omnidirectional Images Using Stereographic Projection

This paper proposes the first object counting method for omnidirectional images. Because conventional object counting methods cannot handle the distortion of omnidirectional images, we propose to process them using stereographic projection, which enables conventional methods to obtain a good approximation of the density function. However, the images obtained by stereographic projection are still distorted. Hence, to manage this distortion, we propose two methods. One is a new data augmentation method designed for the stereographic projection of omnidirectional images. The other is a distortion-adaptive Gaussian kernel that generates a density map ground truth while taking into account the distortion of stereographic projection. Using the counting of grape bunches as a case study, we constructed an original grape-bunch image dataset consisting of omnidirectional images and conducted experiments to evaluate the proposed method. The results show that the proposed method performs better than a direct application of the conventional method, improving mean absolute error by 14.7% and mean squared error by 10.5%.
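
The projection step itself is standard: a viewing direction at polar angle θ from the projection axis maps to radius r = 2f·tan(θ/2), and stereographic projection is conformal, which is what lets a conventional counting network approximate the density function on the projected image. A sketch:

```python
import numpy as np

def stereographic_project(theta, phi, f=1.0):
    # Maps a viewing direction (polar angle theta from the projection
    # axis, azimuth phi) to image-plane coordinates; conformality means
    # local object shape is approximately preserved.
    r = 2.0 * f * np.tan(theta / 2.0)
    return r * np.cos(phi), r * np.sin(phi)
```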

Weight Estimation from an RGB-D Camera in Top-View Configuration

Marco Mameli, Marina Paolanti, Nicola Conci, Filippo Tessaro, Emanuele Frontoni, Primo Zingaretti

Auto-TLDR; Top-View Weight Estimation using Deep Neural Networks

The development of so-called soft biometrics aims at providing information related to the physical and behavioural characteristics of a person. This paper focuses on body weight estimation based on the observation from a top-view RGB-D camera. In fact, the capability to estimate the weight of a person can be of help in many different applications, from health-related scenarios to business intelligence and retail analytics. To deal with this issue, a TVWE (Top-View Weight Estimation) framework is proposed with the aim of predicting the weight. The approach relies on the adoption of Deep Neural Networks (DNNs) that have been trained on depth data. Each network has also been modified in its top section to replace classification with prediction inference. The performance of five state-of-the-art DNNs has been compared, namely VGG16, ResNet, Inception, DenseNet and EfficientNet. In addition, a convolutional auto-encoder has also been included for completeness. Considering the limited literature in this domain, the TVWE framework has been evaluated on a new publicly available dataset, the “VRAI Weight Estimation Dataset”, which also collects, for each subject, labels related to weight, gender, and height. The experimental results have demonstrated that the proposed methods are suitable for this task, bringing different and significant insights for the application of the solution in different domains.

Air-Writing with Sparse Network of Radars Using Spatio-Temporal Learning

Muhammad Arsalan, Avik Santra, Kay Bierzynski, Vadim Issakov

Auto-TLDR; An Air-writing System for Sparse Radars using Deep Convolutional Neural Networks

Hand gesture and motion sensing offer an intuitive and natural form of human-machine interface. Air-writing systems allow users to draw alphanumeric or linguistic characters on a virtual board in the air through hand gestures. Traditionally, radar-based air-writing systems have relied on a network of radars, at least three, to localize the hand target through a trilateration algorithm, followed by tracking to extract the drawn trajectory, and finally recognition of the drawn character, either by a Long Short-Term Memory (LSTM) network utilizing the sensed trajectory or by a Deep Convolutional Neural Network (DCNN) utilizing a 2D image reconstructed from the trajectory. However, the practical deployment of such systems is limited, since detection of the finger or hand target by all three radars cannot be guaranteed, leading to failure of the trilateration algorithm. Furthermore, placing three or more radars for the air-writing solution is neither always physically plausible nor cost-effective. Moreover, these solutions do not exploit the full potential of deep neural networks, which are generally capable of learning features implicitly. In this paper, we propose an air-writing system based on a network of sparse radars, i.e., strictly fewer than three, using a 1D DCNN-LSTM-1D transposed DCNN architecture to reconstruct and classify the drawn character utilizing only the range information from each radar. The paper employs real data from one and two 60 GHz millimeter-wave radar sensors to demonstrate the success of the proposed air-writing solution.

Better Prior Knowledge Improves Human-Pose-Based Extrinsic Camera Calibration

Olivier Moliner, Sangxia Huang, Kalle Åström

Auto-TLDR; Improving Human-pose-based Extrinsic Calibration for Multi-Camera Systems

Accurate extrinsic calibration of wide baseline multi-camera systems enables better understanding of 3D scenes for many applications and is of great practical importance. Classical Structure-from-Motion calibration methods require special calibration equipment so that accurate point correspondences can be detected between different views. In addition, an operator with some training is usually needed to ensure that data is collected in a way that leads to good calibration accuracy. This limits the ease of adoption of such technologies. Recently, methods have been proposed to use human pose estimation models to establish point correspondences, thus removing the need for any special equipment. The challenge with this approach is that human pose estimation algorithms typically produce much less accurate feature points compared to classical patch-based methods. Another problem is that ambient human motion might not be optimal for calibration. We build upon prior works and introduce several novel ideas to improve the accuracy of human-pose-based extrinsic calibration. Our first contribution is a robust reprojection loss based on a better understanding of the sources of pose estimation error. Our second contribution is a 3D human pose likelihood model learned from motion capture data. We demonstrate significant improvements in calibration accuracy by evaluating our method on four publicly available datasets.

Dynamic Resource-Aware Corner Detection for Bio-Inspired Vision Sensors

Sherif Abdelmonem Sayed Mohamed, Jawad Yasin, Mohammad-Hashem Haghbayan, Antonio Miele, Jukka Veikko Heikkonen, Hannu Tenhunen, Juha Plosila

Auto-TLDR; Three Layer Filtering-Harris Algorithm for Event-based Cameras in Real-Time

Event-based cameras are vision devices that transmit only brightness changes, with low latency and ultra-low power consumption. Such characteristics make event-based cameras attractive in the field of localization and object tracking in resource-constrained systems. Since the number of events generated by such cameras is huge, selecting and filtering the incoming events is beneficial both for increasing the accuracy of the features and for reducing the computational load. In this paper, we present an algorithm to detect asynchronous corners from a stream of events in real time on embedded systems. The algorithm is called the Three Layer Filtering-Harris, or TLF-Harris, algorithm. It is based on an event-filtering strategy whose purpose is 1) to increase the accuracy by deliberately eliminating some incoming events, i.e., noise, and 2) to improve the real-time performance of the system, i.e., preserving a constant throughput in terms of input events per second, by discarding unnecessary events with a limited accuracy loss. An approximation of the Harris algorithm, in turn, is used to exploit its high-quality detection capability with a low-complexity implementation, enabling seamless real-time performance on embedded computing platforms. The proposed algorithm is capable of selecting the best corner candidate among neighbors and achieves an average execution time saving of 59% compared with the conventional Harris score. Moreover, our approach outperforms competing methods, such as eFAST, eHarris, and FA-Harris, in terms of real-time performance, and surpasses Arc* in terms of accuracy.

A Two-Step Approach to Lidar-Camera Calibration

Yingna Su, Yaqing Ding, Jian Yang, Hui Kong

Auto-TLDR; Closed-Form Calibration of Lidar-camera System for Ego-motion Estimation and Scene Understanding

Autonomous vehicles and robots are typically equipped with both Lidar and camera. Hence, calibrating the Lidar-camera system is of extreme importance for ego-motion estimation and scene understanding. In this paper, we propose a two-step approach (coarse + fine) for the external calibration between a camera and a multiple-line Lidar. First, a new closed-form solution is proposed to obtain the initial calibration parameters. We compare our solution with the state-of-the-art SVD-based algorithm and show its benefits in both efficiency and stability. With the initial calibration parameters, an ICP-based calibration framework is used to register the point clouds extracted from the camera and Lidar coordinate frames, respectively. Our method has been applied to two Lidar-camera systems: an HDL-64E Lidar-camera system and a VLP-16 Lidar-camera system. Experimental results demonstrate that our method achieves promising performance and higher accuracy than two open-source methods.

Benchmarking Cameras for OpenVSLAM Indoors

Kevin Chappellet, Guillaume Caron, Fumio Kanehiro, Ken Sakurada, Abderrahmane Kheddar

Auto-TLDR; OpenVSLAM: Benchmarking Camera Types for Visual Simultaneous Localization and Mapping

In this paper we benchmark different types of cameras and evaluate their performance in terms of localization reliability and precision in Visual Simultaneous Localization and Mapping (vSLAM). Such benchmarking exists for visual odometry, but not for vSLAM. Existing studies usually compare several algorithms for a given camera; this work is the first to handle the dual problem, i.e., comparing several cameras for a given SLAM algorithm. The evaluation methodology we propose is applied to the recent OpenVSLAM framework, which is versatile enough to natively deal with perspective, fisheye and 360 cameras, in a monocular or stereoscopic setup, and in RGB or RGB-D modalities. Results on various sequences containing light variation and scenery modifications quantitatively establish the maximum localization rate for 360 vision. On the contrary, RGB-D vision shows the lowest localization rate, but the highest precision when localization is possible. Stereo-fisheye trades off localization rate and precision between 360 vision and RGB-D vision. The dataset with ground truth will be made available in open access to allow evaluating other/future vSLAM algorithms with respect to these camera types.

RISEdb: A Novel Indoor Localization Dataset

Carlos Sanchez Belenguer, Erik Wolfart, Álvaro Casado Coscollá, Vitor Sequeira

Auto-TLDR; Indoor Localization Using LiDAR SLAM and Smartphones: A Benchmarking Dataset

In this paper we introduce a novel public dataset for developing and benchmarking indoor localization systems. We have selected and 3D mapped a set of representative indoor environments, including a large office building, a conference room, a workshop, an exhibition area and a restaurant. Our acquisition pipeline is based on a portable LiDAR SLAM backpack to map the buildings and to accurately track the pose of the user as they move freely inside them. We introduce the calibration procedures that enable us to acquire and geo-reference live data coming from different independent sensors rigidly attached to the backpack. This has allowed us to collect long sequences of spherical and stereo images, together with all the sensor readings coming from a consumer smartphone, and to locate them inside the map with centimetre accuracy. The dataset addresses many of the limitations of existing indoor localization datasets regarding the scale and diversity of the mapped buildings; the number of acquired sequences under varying conditions; the accuracy of the ground-truth trajectory; the availability of a detailed 3D model and the availability of different sensor types. It enables the benchmarking of existing and the development of new indoor localization approaches, in particular for deep learning based systems that require large amounts of labeled training data.

Estimating Gaze Points from Facial Landmarks by a Remote Spherical Camera

Shigang Li

Auto-TLDR; Gaze Point Estimation from a Spherical Image from Facial Landmarks

From a spherical image, a gaze point, instead of a gaze vector, can be estimated directly, because a remote spherical camera can observe a user's face and the gaze target simultaneously. This paper investigates the problem of estimating a gaze point in a spherical image from facial landmarks. In contrast with existing methods, which usually assume gaze points move on a narrow plane, the proposed method can cope with situations where gaze points vary in depth over a relatively wide field of view. As shown in the results of comparative experiments, we find that the orthogonal coordinates of facial landmarks on a unit sphere are a reasonable representation compared with spherical polar coordinates, and that head-pose cues are helpful for improving the accuracy of gaze points. Consequently, the proposed method achieves gaze-point estimation accuracy comparable to state-of-the-art methods.
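
The favored representation is simply the landmark's direction expressed as orthogonal (Cartesian) coordinates on the unit sphere rather than as raw polar angles; a sketch of the conversion:

```python
import numpy as np

def sphere_to_cartesian(theta, phi):
    # theta: polar angle, phi: azimuth of a facial landmark seen in the
    # spherical image; returns the (x, y, z) point on the unit sphere
    # that would be fed to the gaze-point estimator.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```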

Cost Volume Refinement for Depth Prediction

João L. Cardoso, Nuno Goncalves, Michael Wimmer

Auto-TLDR; Refining the Cost Volume for Depth Prediction from Light Field Cameras

Light-field cameras are becoming more popular in the consumer market. Their data redundancy allows, in theory, to accurately refocus images after acquisition and to predict the depth of each point visible from the camera. Combined, these two features allow for the generation of full-focus images, which is impossible in traditional cameras. Multiple methods for depth prediction from light fields (or stereo) have been proposed over the years. A large subset of these methods relies on cost-volume estimates -- 3D objects where each layer represents a heuristic of whether each point in the image is at a certain distance from the camera. Generally, this volume is used to regress a disparity map, which is then refined for better results. In this paper, we argue that refining the cost volumes is superior to refining the disparity maps in order to further increase the accuracy of depth predictions. We propose a set of cost-volume refinement algorithms and show their effectiveness.
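
For reference, the usual step the authors argue should come after (rather than before) refinement: regressing a disparity map from the cost volume with a soft argmin. A generic sketch, not the paper's specific algorithms:

```python
import numpy as np

def soft_argmin_disparity(cost_volume):
    # cost_volume: (d, h, w), lower cost = more likely disparity.
    # Softmax over the disparity axis turns costs into per-pixel
    # probabilities; the expected disparity gives a sub-pixel map.
    p = np.exp(-cost_volume)
    p /= p.sum(axis=0, keepdims=True)
    d = np.arange(cost_volume.shape[0], dtype=np.float64)
    return np.tensordot(d, p, axes=1)
```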

User-Independent Gaze Estimation by Extracting Pupil Parameter and Its Mapping to the Gaze Angle

Sang Yoon Han, Nam Ik Cho

Auto-TLDR; Gaze Point Estimation using Pupil Shape for Generalization

Since gaze estimation plays a crucial role in recognizing human intentions, it has been researched for a long time, and its accuracy is ever increasing. However, due to the wide variation in eye shapes and focusing abilities among individuals, the accuracy of most algorithms varies from person to person in the test group, especially when the initial calibration is not well performed. To alleviate this user dependency, we attempt to derive features that are general for most people and use them as the input to a deep network, instead of using the images as the input. Specifically, we use the pupil shape as the core feature because it is directly related to the 3D eyeball rotation, and thus the gaze direction. While existing deep learning methods learn the gaze point by extracting various features from the image, we focus on the mapping function from the eyeball rotation to the gaze point by using the pupil shape as the input. It is shown that the accuracy of gaze point estimation also becomes robust for the uncalibrated points by following the characteristics of the mapping function. Also, our gaze network learns the gaze difference to facilitate the re-calibration process, fixing the calibration-drift problem that typically occurs with glass-type or head-mounted devices.

Vehicle Lane Merge Visual Benchmark

Kai Cordes, Hellward Broszio

Auto-TLDR; A Benchmark for Automated Cooperative Maneuvering Using Multi-view Video Streams and Ground Truth Vehicle Description

Automated driving is regarded as the most promising technology for improving road safety in the future. In this context, connected vehicles have an important role regarding their ability to perform cooperative maneuvers for challenging traffic situations. We propose a benchmark for automated cooperative maneuvers. The targeted cooperative maneuver is the vehicle lane merge where a vehicle on the acceleration lane merges into the traffic of a motorway. The benchmark enables the evaluation of vehicle localization approaches as well as the study of cooperative maneuvers. It consists of temporally synchronized multi-view video streams, highly accurate camera calibration, and ground truth vehicle descriptions, including position, heading, speed, and shape. For benchmark generation, the lane merge maneuver is performed by human drivers on a test track, resulting in 120 lane merge data sets with various traffic situations and video recording conditions.

Edge-Aware Monocular Dense Depth Estimation with Morphology

Zhi Li, Xiaoyang Zhu, Haitao Yu, Qi Zhang, Yongshi Jiang

Auto-TLDR; Spatio-Temporally Smooth Dense Depth Maps Using Only a CPU

Dense depth maps play an important role in computer vision and AR (Augmented Reality). For CV applications, a dense depth map is the cornerstone of 3D reconstruction, allowing real objects to be precisely displayed in the computer. In AR, dense depth maps can handle correct occlusion relationships between virtual content and real objects for a better user experience. However, the computational complexity involved limits the adoption of dense depth maps. We present a novel algorithm that produces low-latency, spatio-temporally smooth dense depth maps using only a CPU. The depth maps exhibit sharp discontinuities at depth edges at low computational cost. Our algorithm first obtains a sparse SLAM reconstruction, then extracts coarse depth edges from a down-sampled RGB image by morphological operations. Next, we thin the depth edges and align them with image edges. Finally, a warm-start initialization scheme and an improved optimization solver are adopted to accelerate convergence. We evaluate our proposal quantitatively, and the results show improvements in depth-map accuracy with respect to other state-of-the-art and baseline techniques.

P2D: A Self-Supervised Method for Depth Estimation from Polarimetry

Marc Blanchon, Desire Sidibe, Olivier Morel, Ralph Seulin, Daniel Braun, Fabrice Meriaudeau

Auto-TLDR; Polarimetric Regularization for Monocular Depth Estimation

Monocular depth estimation is a recurring subject in the field of computer vision. Its ability to describe scenes via a depth map, while reducing the constraints related to the formulation of perspective geometry, tends to favor its use. However, despite the constant improvement of algorithms, most methods exploit only colorimetric information. Consequently, robustness to phenomena to which this modality is not sensitive, such as specularity or transparency, is neglected. In response, we propose using polarimetry as the input to a self-supervised monodepth network, exploiting polarization cues to encourage accurate reconstruction of scenes. Furthermore, we add a polarimetric regularization term to a state-of-the-art method to take specific advantage of the data. Our method is evaluated both qualitatively and quantitatively, demonstrating that the contribution of this new information, together with an enhanced loss function, improves depth estimation results, especially for specular areas.

Polarimetric Image Augmentation

Marc Blanchon, Fabrice Meriaudeau, Olivier Morel, Ralph Seulin, Desire Sidibe

Auto-TLDR; Polarimetric Augmentation for Deep Learning in Robotics Applications

This paper deals with new augmentation methods for an unconventional imaging modality, sensitive to the physics of the observed scene, called polarimetry. In nature, polarized light is obtained by reflection or scattering. Robotics applications in urban environments encounter many obstacles that can be specular and therefore provide polarized light. These areas are prone to segmentation errors with standard modalities, but such errors could be resolved using the information carried by polarized light. Deep Convolutional Neural Networks (DCNNs) have shown excellent segmentation results, but require a significant amount of data to achieve their best performance. The lack of data is usually overcome by using augmentation methods. However, unlike RGB images, polarization images are not simply scalar (intensity) images, and standard augmentation techniques cannot be applied straightforwardly. We propose enhancing deep learning models through a regularized augmentation procedure applied to polarimetric data in order to characterize scenes more effectively under challenging conditions. We subsequently observe an average improvement of 18.1% in IoU between non-augmented and regularized training procedures on real-world data.

Documents Counterfeit Detection through a Deep Learning Approach

Darwin Danilo Saire Pilco, Salvatore Tabbone

Auto-TLDR; End-to-End Learning for Counterfeit Documents Detection using Deep Neural Network

The main topic of this work is the detection of counterfeit documents, especially banknotes. We propose an end-to-end learning model using a deep learning approach based on Adapnet++, which manages feature extraction at multiple scale levels using several residual units. Unlike previous models based on regions of interest (ROI) and high-resolution documents, our network is fed with simple input images (i.e., a single patch) and does not need high-resolution images. Besides, discriminative regions can be visualized at different scales. Our network learns by itself which regions of interest predict the best results. Experimental results show that we are competitive with the state of the art, and that our deep neural network generalizes well and can be applied to other kinds of documents, such as identity or administrative documents.

NetCalib: A Novel Approach for LiDAR-Camera Auto-Calibration Based on Deep Learning

Shan Wu, Amnir Hadachi, Damien Vivet, Yadu Prabhakar

Auto-TLDR; Automatic Calibration of LiDAR and Cameras using Deep Neural Network

Fusions of LiDAR and cameras are widely used in many robotics applications such as classification, segmentation, object detection, and autonomous driving. It is essential that the LiDAR sensor can measure distances accurately, making it a good complement to the cameras. Hence, calibrating the sensors before deployment is a mandatory step. Conventional methods rely on checkerboards, specific patterns, or human labeling, which is tedious and labor-intensive if the same calibration process must be repeated every time. The main purpose of this research work is to build a deep neural network that is capable of automatically finding the geometric transformation between LiDAR and cameras. The results show that our model manages to find the transformations from randomly sampled artificial errors. Besides, our work is open-sourced so that the community can fully utilize the advances of the methodology, further develop the approach, and initiate collaboration and innovation on the topic.

One Step Clustering Based on A-Contrario Framework for Detection of Alterations in Historical Violins

Alireza Rezaei, Sylvie Le Hégarat-Mascle, Emanuel Aldea, Piercarlo Dondi, Marco Malagodi

Auto-TLDR; A-Contrario Clustering for the Detection of Altered Violins using UVIFL Images

Preventive conservation is an important practice in cultural heritage. Constant monitoring of an artwork's state of conservation helps reduce the risk of damage and the number of interventions necessary. In this work, we propose a probabilistic approach for the detection of alterations on the surface of historical violins based on an a-contrario framework. Our method is a one-step NFA clustering solution that considers grey-level and spatial density information in one background model. The proposed method is robust to noise and avoids parameter tuning and any assumption about the extent of the worn-out areas. We have used UV-induced fluorescence (UVIFL) images as input, to capture details not perceivable with visible light. Tests were conducted on image sequences included in the "Violins UVIFL imagery" dataset. The results illustrate the ability of the algorithm to distinguish worn areas from the surrounding regions. Comparisons with state-of-the-art clustering methods show improved overall precision and recall.

5D Light Field Synthesis from a Monocular Video

Kyuho Bae, Andre Ivan, Hajime Nagahara, In Kyu Park

Auto-TLDR; Synthesis of Light Field Video from Monocular Video using Deep Learning

Commercially available light field cameras have difficulty capturing 5D (4D + time) light field videos: they can only capture still light field images, or are excessively expensive for normal users. To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video. Because no light field video dataset is available, we propose a new synthetic light field video dataset that renders photorealistic scenes using Unreal Engine. The proposed deep learning framework synthesizes the light field video with a full set (9x9) of sub-aperture images from a normal monocular video. The network consists of three sub-networks, namely feature extraction, 5D light field video synthesis, and temporal consistency refinement. Experimental results show that our model can successfully synthesize light field video for synthetic and real scenes and outperforms the previous frame-by-frame method both quantitatively and qualitatively.

Fourier Domain Pruning of MobileNet-V2 with Application to Video Based Wildfire Detection

Hongyi Pan, Diaa Badawi, E. Cetin

Auto-TLDR; Deep Convolutional Neural Network for Wildfire Detection

In this paper, we propose a deep convolutional neural network for camera-based wildfire detection. We train the neural network via transfer learning and use a window-based analysis strategy to increase the fire detection rate. To achieve computational efficiency, we calculate the frequency response of the kernels in convolutional and dense layers and eliminate those filters with a low-energy impulse response. Moreover, to reduce storage on edge devices, we compare the convolutional kernels in the Fourier domain and discard similar filters using the cosine similarity measure in the frequency domain. We test the performance of the neural network on a variety of wildfire video clips; the pruned system performs as well as the regular network in daytime wildfire detection and also works well on some nighttime wildfire video clips.
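
A toy version of the two pruning rules (the thresholds and the FFT size are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def prune_conv_kernels(kernels, energy_frac=0.1, sim_thresh=0.95):
    # kernels: (n, h, w) convolution filters. Rule 1: drop filters whose
    # frequency response carries little energy. Rule 2: among survivors,
    # drop filters whose Fourier magnitude is almost identical (cosine
    # similarity) to one already kept. Returns indices of kept filters.
    spectra = np.abs(np.fft.fft2(kernels, s=(16, 16))).reshape(len(kernels), -1)
    energy = np.linalg.norm(spectra, axis=1)
    unit = spectra / np.maximum(energy, 1e-12)[:, None]
    kept = []
    for i in np.argsort(-energy):
        if energy[i] < energy_frac * energy.max():
            continue                      # rule 1: low-energy response
        if any(unit[i] @ unit[j] >= sim_thresh for j in kept):
            continue                      # rule 2: near-duplicate filter
        kept.append(int(i))
    return sorted(kept)
```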

Three-Dimensional Lip Motion Network for Text-Independent Speaker Recognition

Jianrong Wang, Tong Wu, Shanyu Wang, Mei Yu, Qiang Fang, Ju Zhang, Li Liu

Auto-TLDR; Lip Motion Network for Text-Independent and Text-Dependent Speaker Recognition

Lip motion reflects behavioral characteristics of speakers, and thus can be used as a new kind of biometric in speaker recognition. In the literature, many works have used two-dimensional (2D) lip images to recognize speakers in a text-dependent context. However, 2D lip images are sensitive to face orientation. To this end, in this work, we present a novel end-to-end 3D lip motion Network (3LMNet) that utilizes sentence-level 3D lip motion (S3DLM) to recognize speakers in both text-independent and text-dependent contexts. A novel regional feedback module (RFM) is proposed to explore attention in different lip regions. Besides, prior knowledge of lip motion is investigated to complement the RFM, where landmark-level and frame-level features are merged to form a better feature representation. Moreover, we present two methods, i.e., coordinate transformation and face posture correction, to pre-process the LSD-AV dataset, which contains 68 speakers and 146 sentences per speaker. The evaluation results on this dataset demonstrate that our proposed 3LMNet is superior to the baseline models, i.e., LSTM, VGG-16 and ResNet-34, and outperforms the state of the art using 2D lip images as well as the 3D face. The code of this work is released at https://github.com/wutong18/Three-Dimensional-Lip-Motion-Network-for-Text-Independent-Speaker-Recognition.

Generic Merging of Structure from Motion Maps with a Low Memory Footprint

Gabrielle Flood, David Gillsjö, Patrik Persson, Anders Heyden, Kalle Åström

Auto-TLDR; A Low-Memory Footprint Representation for Robust Map Merge

With the development of cheap image sensors, the amount of available image data has increased enormously, and the possibility of using crowdsourced collection methods has emerged. This calls for the development of ways to handle all these data. In this paper, we present new tools that enable efficient, flexible and robust map merging. Assuming that separate optimisations have been performed for the individual maps, we show how only the relevant data can be stored in a low-memory-footprint representation. We use these representations to perform map merging so that the algorithm is invariant to the merging order and independent of the choice of coordinate system. The result is a robust algorithm that can be applied to several maps simultaneously. The result of a merge can also be represented in the same low-memory-footprint format, which enables further merging and updating of the map in a hierarchical way. Furthermore, the method can perform loop closing and also detect changes in the scene between the captures of the different image sequences. Using both simulated and real data — from both a hand-held mobile phone and a drone — we verify the performance of the proposed method.

Total Estimation from RGB Video: On-Line Camera Self-Calibration, Non-Rigid Shape and Motion

Antonio Agudo

Auto-TLDR; Joint Auto-Calibration, Pose and 3D Reconstruction of a Non-rigid Object from an uncalibrated RGB Image Sequence

In this paper we present a sequential approach to jointly retrieve camera auto-calibration, camera pose and the 3D reconstruction of a non-rigid object from an uncalibrated RGB image sequence, without assuming any prior information about the shape structure, nor the need for a calibration pattern, nor the use of training data at all. To this end, we propose a Bayesian filtering approach based on a sum-of-Gaussians filter composed of a bank of extended Kalman filters (EKF). For every EKF, we make use of dynamic models to estimate its state vector, which later will be Gaussianly combined to achieve a global solution. To deal with deformable objects, we incorporate a mechanical model solved by using the finite element method. Thanks to these ingredients, the resulting method is both efficient and robust to several artifacts such as missing and noisy observations as well as sudden camera motions, while being available for a wide variety of objects and materials, including isometric and elastic shape deformations. Experimental validation is proposed in real experiments, showing its strengths with respect to competing approaches.

Combined Invariants to Gaussian Blur and Affine Transformation

Jitka Kostkova, Jan Flusser, Matteo Pedone

Auto-TLDR; A new theory of combined moment invariants to Gaussian blur and spatial affine transformation

The paper presents a new theory of combined moment invariants to Gaussian blur and spatial affine transformation. The blur kernel may be arbitrarily oriented, scaled and elongated. No prior information about the kernel parameters or about the underlying affine transform is required. The main idea, expressed by the Substitution Theorem, is to substitute pure blur invariants into traditional affine moment invariants. Potential applications of the new descriptors are in blur-invariant image recognition and in robust template matching.

Real-Time Drone Detection and Tracking with Visible, Thermal and Acoustic Sensors

Fredrik Svanström, Cristofer Englund, Fernando Alonso-Fernandez

Auto-TLDR; Automatic multi-sensor drone detection using sensor fusion

This paper explores the process of designing an automatic multi-sensor drone detection system. Besides the common video and audio sensors, the system also includes a thermal infrared camera, which is shown to be a feasible solution to the drone detection task. Even with slightly lower resolution, its performance is just as good as that of a camera in the visible range. The detector performance as a function of the sensor-to-target distance is also investigated. In addition, using sensor fusion, the system is made more robust than the individual sensors, helping to reduce false detections. To counteract the lack of public datasets, a novel video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is also presented. The database is complemented by an audio dataset of the classes drones, helicopters and background noise.

Fingerprints, Forever Young?

Roman Kessler, Olaf Henniger, Christoph Busch

Auto-TLDR; Mated Similarity Scores for Fingerprint Recognition: A Hierarchical Linear Model

In the present study we analyzed longitudinal fingerprint data of 20 data subjects, acquired over a time span of up to 12 years. Using hierarchical linear modeling, we aimed to delineate mated similarity scores as a function of fingerprint quality and of the time interval between reference and probe images. Our results did not reveal effects on mated similarity scores caused by an increasing time interval across subjects, but rather individual effects on mated similarity scores. The results are in line with the general assumption that the fingerprint as a biometric characteristic and the features extracted from it do not change over the adult life span. However, it contradicts several related studies that reported noticeable template ageing effects. We discuss why different findings regarding ageing of references in fingerprint recognition systems were made.

Detection and Correspondence Matching of Corneal Reflections for Eye Tracking Using Deep Learning

Soumil Chugh, Braiden Brousseau, Jonathan Rose, Moshe Eizenman

Auto-TLDR; A Fully Convolutional Neural Network for Corneal Reflection Detection and Matching in Extended Reality Eye Tracking Systems

Eye tracking systems that estimate the point-of-gaze are essential in extended reality (XR) systems as they enable new interaction paradigms and technological improvements. It is important for these systems to maintain accuracy when the headset moves relative to the head (known as device slippage) due to head movements or user adjustment. One of the most accurate eye tracking techniques, which is also insensitive to shifts of the system relative to the head, uses two or more infrared (IR) light emitting diodes to illuminate the eye and an IR camera to capture images of the eye. An essential step in estimating the point-of-gaze in these systems is the precise determination of the location of two or more corneal reflections (virtual images of the IR-LEDs that illuminate the eye) in images of the eye. Eye trackers tend to have multiple light sources to ensure at least one pair of reflections for each gaze position. The use of multiple light sources introduces a difficult problem: the need to match the corneal reflections with the corresponding light source over the range of expected eye movements. Corneal reflection detection and matching often fail in XR systems due to the proximity of the camera and the steep illumination angles of the light sources with respect to the eye. The failures are caused by corneal reflections that vary in shape and intensity, disappear as the eye rotates, or are confounded by spurious reflections. We have developed a fully convolutional neural network, based on the UNET architecture, that solves the detection and matching problem in the presence of spurious and missing reflections. Eye images of 25 people were collected in a virtual reality headset using a binocular eye tracking module consisting of five infrared light sources per eye. A set of 4,000 eye images was manually labelled for each of the corneal reflections, and data augmentation was used to generate a dataset of 40,000 images. The network is able to correctly identify and match 91% of the corneal reflections present in the test set. This is comparable to a state-of-the-art deep learning system, but our approach requires 33 times less memory and executes 10 times faster. The proposed algorithm, when used in an eye tracker in a VR system, achieved a mean absolute gaze error of 1°. This is a significant improvement over state-of-the-art learning-based XR eye tracking systems, which have reported gaze errors of 2-3°.
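
One way to see how a segmentation network can solve detection and matching jointly: if the network emits one output channel per IR light source, matching reduces to taking the peak of each channel. A minimal post-processing sketch, where the channel layout and confidence floor are assumptions rather than the authors' exact design:

    import numpy as np

    def decode_reflections(heatmaps, min_conf=0.5):
        """heatmaps: (K, H, W) network output, one channel per IR light source.

        Returns {led_index: (row, col)} for the reflections that are present;
        missing or occluded reflections simply stay below the confidence floor.
        """
        matches = {}
        for k, hm in enumerate(heatmaps):
            r, c = np.unravel_index(np.argmax(hm), hm.shape)
            if hm[r, c] >= min_conf:
                matches[k] = (int(r), int(c))
        return matches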

On the Use of Benford's Law to Detect GAN-Generated Images

Nicolo Bonettini, Paolo Bestagini, Simone Milani, Stefano Tubaro

Auto-TLDR; Using Benford's Law to Detect GAN-generated Images from Natural Images

The advent of Generative Adversarial Network (GAN) architectures has given anyone the ability to generate incredibly realistic synthetic imagery. The malicious diffusion of GAN-generated images may lead to serious social and political consequences (e.g., fake news spreading, opinion formation, etc.). It is therefore important to regulate the widespread distribution of synthetic images by developing solutions able to detect them. In this paper, we study the possibility of using Benford’s law to discriminate GAN-generated images from natural photographs. Benford’s law describes the distribution of the most significant digit of quantized Discrete Cosine Transform (DCT) coefficients. Extending and generalizing this property, we show that it is possible to extract a compact feature vector from an image. This feature vector can be fed to an extremely simple classifier for GAN-generated image detection, even in data-scarcity scenarios where Convolutional Neural Network (CNN) architectures tend to fail.
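
A simplified sketch of a Benford-style feature: the paper extends and generalizes the law per DCT frequency, whereas this toy version compares a single global first-digit histogram against the ideal law, with an assumed quantization step.

    import numpy as np
    from scipy.fft import dctn

    def first_digit(x):
        """Most significant digit of |x| for nonzero x."""
        e = np.floor(np.log10(np.abs(x)))
        return np.floor(np.abs(x) / 10.0 ** e).astype(int)

    def benford_feature(img, quant_step=8):
        """Deviation of the quantized-DCT first-digit histogram from Benford's law."""
        coeffs = np.round(dctn(img.astype(float), norm="ortho") / quant_step)
        coeffs = coeffs[np.abs(coeffs) >= 1]              # drop zeroed coefficients
        digits = first_digit(coeffs)
        hist = np.array([(digits == d).mean() for d in range(1, 10)])
        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))  # ideal Benford pmf
        return hist - benford                             # 9-dim feature vector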

Multi-Camera Sports Players 3D Localization with Identification Reasoning

Yukun Yang, Ruiheng Zhang, Wanneng Wu, Yu Peng, Xu Min

Auto-TLDR; Probabilistic and Identified Occupancy Map for Sports Players 3D Localization

Multi-camera 3D localization of sports players is always a challenging task due to heavy occlusions in crowded sports scenes. Traditional methods can only provide players' locations without identification information, and existing localization methods may produce ambiguous detections with unsatisfactory precision and recall, especially when heavy occlusions occur. To solve this problem, we propose a generic localization method that provides distinguishable results: the probabilities of locations being occupied by players with unique ID labels. We design the algorithms with a multi-dimensional Bayesian model to create a Probabilistic and Identified Occupancy Map (PIOM). Using this model, we jointly apply deep-learning-based object segmentation and identification to obtain players' probable positions and their likely identity labels. This approach not only provides players' 3D locations but also gives ID information that distinguishes each player from the others. Experimental results demonstrate that our method outperforms previous localization approaches with reliable and distinguishable outcomes.
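
A toy sketch of the identified-occupancy idea (not the paper's full multi-dimensional Bayesian model): per-camera evidence for each player ID is fused on a ground-plane grid, under the simplifying assumptions of a uniform prior and independence across cameras.

    import numpy as np

    def fuse_identified_occupancy(likelihoods):
        """likelihoods: (C, P, H, W) evidence from C cameras for P player IDs
        on an H x W ground-plane grid, each value in (0, 1).
        Returns a (P, H, W) normalized posterior occupancy map."""
        post = np.prod(likelihoods, axis=0)           # combine cameras
        post /= post.sum(axis=(1, 2), keepdims=True)  # normalize per player ID
        return post

    def locate_players(posterior):
        """Most probable grid cell for each player ID."""
        P, H, W = posterior.shape
        flat = posterior.reshape(P, -1).argmax(axis=1)
        return [divmod(int(i), W) for i in flat]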

A Multi-Focus Image Fusion Method Based on Fractal Dimension and Guided Filtering

Nikoo Dehghani, Ehsanollah Kabir

Auto-TLDR; Fractal Dimension-based Multi-focus Image Fusion with Guide Filtering

Fractal Dimension (FD) is widely used for image segmentation because of its successful approach to quantifying texture information. In this paper, we present an FD-based multi-focus image fusion method that uses FD to identify focused regions as the primary step of the fusion process. The algorithm extracts the local FD features of each multi-focus pair, estimated using the differential box-counting method. A guided filter is employed to further refine the spatial information and increase the robustness of the FD features to noise. The outcome is then analyzed to obtain a focus map that identifies sharp regions in each partially focused image. Afterwards, the detected regions are combined into a single all-focused image. The experiments, along with objective assessments, demonstrate the competitive performance of the proposed method compared to several state-of-the-art multi-focus image fusion methods.
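
A compact sketch of the differential box-counting FD estimate (box sizes and patch handling are illustrative); in the method described, such local FD maps are then refined with a guided filter and compared across the partially focused images.

    import numpy as np

    def fractal_dimension_dbc(patch, sizes=(2, 4, 8, 16)):
        """Differential box-counting FD of a square grayscale patch.

        The patch side must be divisible by every box size in `sizes`.
        """
        M = patch.shape[0]
        G = 256.0                                 # gray-level range
        counts = []
        for s in sizes:
            h = s * G / M                         # box height in gray levels
            cells = patch.reshape(M // s, s, M // s, s)
            cmax = cells.max(axis=(1, 3))
            cmin = cells.min(axis=(1, 3))
            nr = np.ceil(cmax / h) - np.ceil(cmin / h) + 1
            counts.append(nr.sum())
        # FD is the slope of log N(s) against log(1/s).
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope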

IPN Hand: A Video Dataset and Benchmark for Real-Time Continuous Hand Gesture Recognition

Gibran Benitez-Garcia, Jesus Olivares-Mercado, Gabriel Sanchez-Perez, Keiji Yanai

Auto-TLDR; IPN Hand: A Benchmark Dataset for Continuous Hand Gesture Recognition

Continuous hand gesture recognition (HGR) is an essential part of human-computer interaction, with a wide range of applications in the automotive sector, consumer electronics, home automation, and others. In recent years, accurate and efficient deep learning models have been proposed for HGR. However, the currently available public datasets lack the real-world elements needed to build responsive and efficient HGR systems. In this paper, we introduce a new benchmark dataset named IPN Hand with sufficient size, variation, and real-world elements to train and evaluate deep neural networks. The dataset contains more than 4,000 gesture samples and 800,000 RGB frames from 50 distinct subjects. We design 13 different static and dynamic gestures focused on interaction with touchless screens. We especially consider the scenario in which continuous gestures are performed without transition states, and in which subjects perform natural hand movements as non-gesture actions. Gestures were collected from about 30 diverse scenes, with real-world variation in background and illumination. With our dataset, the performance of three 3D-CNN models is evaluated on the tasks of isolated and continuous real-time HGR. Furthermore, we analyze the possibility of increasing the recognition accuracy by adding multiple modalities derived from RGB frames, i.e., optical flow and semantic segmentation, while keeping the real-time performance of the 3D-CNN model. Our empirical study also provides a comparison with the publicly available nvGesture (NVIDIA) dataset. The experimental results show that the state-of-the-art ResNext-101 model loses about 30% accuracy on our real-world dataset, demonstrating that IPN Hand can serve as a benchmark and may help the community to advance continuous HGR.
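
A minimal sketch of the multi-modal input construction discussed, with an assumed channel layout: RGB frames, optical flow and a segmentation map are stacked channel-wise into a clip tensor for a 3D-CNN.

    import torch

    def build_clip(rgb, flow, seg):
        """rgb: (T, 3, H, W), flow: (T, 2, H, W), seg: (T, 1, H, W).

        Returns a (C, T, H, W) clip for a 3D-CNN, here with C = 6 channels.
        """
        clip = torch.cat([rgb, flow, seg], dim=1)      # (T, 6, H, W)
        return clip.permute(1, 0, 2, 3).contiguous()   # channels-first clip

    clip = build_clip(torch.rand(32, 3, 112, 112),
                      torch.rand(32, 2, 112, 112),
                      torch.rand(32, 1, 112, 112))
    print(clip.shape)  # torch.Size([6, 32, 112, 112])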

Mobile Augmented Reality: Fast, Precise, and Smooth Planar Object Tracking

Dmitrii Matveichev, Daw-Tung Lin

Auto-TLDR; Planar Object Tracking with Sparse Optical Flow Tracking and Descriptor Matching

We propose an innovative method for combining sparse optical flow tracking and descriptor matching algorithms. The proposed approach solves the following problems inherent to keypoint-based and optical-flow-based tracking algorithms: spatial jitter, extreme scale transformation, extreme perspective transformation, degradation in the number of tracking points, and drifting of tracking points. Our algorithm provides smooth object-position tracking under six-degrees-of-freedom transformations at a small computational cost, enabling a high-quality real-time AR experience on mobile platforms. We experimentally demonstrate that our approach outperforms state-of-the-art tracking algorithms while requiring less computation time. A mobile augmented reality (AR) application developed using our approach delivers planar object tracking at 30 FPS on modern mobile phones for a camera resolution of 1280×720. Finally, we compare the performance of our AR application with that of a Vuforia-based AR application on the same planar object database. The test results show that our AR application delivers a better AR experience than Vuforia in terms of smooth object-pose transitions between video frames.
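
A bare-bones OpenCV sketch of the two ingredients being combined; the file name, parameters and fallback logic are assumptions, not the authors' algorithm.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(1000)
    ref = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical target
    ref_kp, ref_des = orb.detectAndCompute(ref, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def redetect(frame):
        """Descriptor matching: recover correspondences from scratch."""
        kp, des = orb.detectAndCompute(frame, None)
        matches = matcher.match(ref_des, des)
        src = np.float32([ref_kp[m.queryIdx].pt for m in matches])
        dst = np.float32([kp[m.trainIdx].pt for m in matches])
        return src, dst

    def track(prev, frame, pts):
        """Sparse optical flow: cheap frame-to-frame point propagation.
        pts must be float32 with shape (N, 1, 2)."""
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        good = status.ravel() == 1
        return pts[good], nxt[good]

    # Per frame: propagate points with track(); if too few survive, fall back
    # to redetect(), then estimate the pose from the homography, e.g.
    # H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)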

GAN-Based Image Deblurring Using DCT Discriminator

Hiroki Tomosada, Takahiro Kudo, Takanori Fujisawa, Masaaki Ikehara

Auto-TLDR; DeblurDCTGAN: A Discrete Cosine Transform for Image Deblurring

In this paper, we propose high-quality image deblurring using the discrete cosine transform (DCT) with low computational complexity. Recently, Convolutional Neural Network (CNN) and Generative Adversarial Network (GAN) based algorithms have been proposed for image deblurring. Multi-scale CNN architectures restore blurred images clearly and suppress ringing artifacts and block noise, but they take much time to process. To solve these problems, we propose a method, named "DeblurDCTGAN", that preserves texture and suppresses ringing artifacts in the restored image without a multi-scale architecture, using a DCT-based loss. The loss compares the frequency-domain representations, obtained via the DCT, of the deblurred image and the ground-truth image. Hereby, DeblurDCTGAN can reduce block noise and ringing artifacts while maintaining deblurring performance. Our experimental results show that DeblurDCTGAN achieves the highest PSNR and SSIM among the compared conventional methods on both the GoPro and DVD test datasets, and its running time per image pair is also the fastest.
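
A hedged sketch of a DCT-domain loss of the kind described (the exact loss composition in DeblurDCTGAN may differ): both images are transformed with an orthonormal DCT-II and compared in the frequency domain.

    import math
    import torch

    def dct_matrix(n):
        """Orthonormal DCT-II transform matrix of size n x n."""
        k = torch.arange(n, dtype=torch.float32)
        C = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C = C * math.sqrt(2.0 / n)
        C[0] = C[0] / math.sqrt(2.0)
        return C

    def dct_loss(pred, target):
        """L1 distance between the 2-D DCT spectra of (B, C, H, W) images."""
        H, W = pred.shape[-2:]
        Ch = dct_matrix(H).to(pred)
        Cw = dct_matrix(W).to(pred)
        def dct2(x):
            return Ch @ x @ Cw.T      # separable 2-D DCT over H, then W
        return torch.mean(torch.abs(dct2(pred) - dct2(target)))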

Radar Image Reconstruction from Raw ADC Data Using Parametric Variational Autoencoder with Domain Adaptation

Michael Stephan, Thomas Stadelmayer, Avik Santra, Georg Fischer, Robert Weigel, Fabian Lurz

Auto-TLDR; Parametric Variational Autoencoder-based Human Target Detection and Localization for Frequency Modulated Continuous Wave Radar

This paper presents a parametric variational autoencoder-based human target detection and localization framework that works directly with the raw analog-to-digital converter data from a frequency-modulated continuous-wave radar. We propose a parametrically constrained variational autoencoder, with residual and skip connections, capable of generating clustered and localized target detections on the range-angle image. Furthermore, to circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies whereby we first train the neural network on ray-tracing-based model data and then adapt the network to work on real sensor data. This strategy ensures better generalization and scalability of the proposed neural network even though it is trained with limited radar data. We demonstrate the superior detection and localization performance of our proposed solution compared to the conventional signal processing pipeline and an earlier state-of-the-art deep U-Net architecture with range-Doppler images as inputs.
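
A generic sketch of the two-stage training strategy described; the optimizer, learning rates and epoch counts are assumptions, not the paper's settings.

    import torch

    def pretrain_then_adapt(model, synthetic_loader, real_loader, loss_fn,
                            pre_epochs=50, adapt_epochs=10):
        # Stage 1: train on ray-tracing-based model data.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(pre_epochs):
            for x, y in synthetic_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        # Stage 2: adapt to limited real sensor data with a smaller step size.
        opt = torch.optim.Adam(model.parameters(), lr=1e-5)
        for _ in range(adapt_epochs):
            for x, y in real_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model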