Xu Yang

Papers from this author

Activity and Relationship Modeling Driven Weakly Supervised Object Detection

Yinlin Li, Yang Qian, Xu Yang, Yuren Zhang


Auto-TLDR; Weakly Supervised Object Detection Using Activity Label and Relationship Modeling


This paper presents a weakly supervised object detection method based on activity labels and relationship modeling, motivated by the assumption that the configurations of humans and objects are similar within the same activity, and that jointly modeling the human, the active object, and the activity can aid the recognition of all three. In contrast to most weakly supervised methods, which treat each object as an independent instance, we first learn and filter active human and object proposals based on the class activation maps of a multi-label classifier. Second, a spatial relationship prior covering relative position, scale, overlap, etc. is learned conditioned on the action. Finally, a multi-stream object detection framework integrating the spatial prior and pairwise ROI pooling is proposed to jointly learn the object and action classes. Experiments are conducted on the HICO-DET dataset, and our approach outperforms state-of-the-art weakly supervised object detection methods.
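The first step of the abstract, scoring and filtering proposals with a class activation map, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the map construction (weighted sum of the final convolutional feature maps, as in the standard CAM formulation), the box format, and the threshold are all assumptions.

```python
def class_activation_map(feature_maps, class_weights):
    """Standard CAM: per-class weighted sum of the final conv feature maps.

    feature_maps: list of HxW maps (nested lists), one per channel.
    class_weights: the classifier's final-layer weights for one class.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for weight, fmap in zip(class_weights, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam

def score_proposal(cam, box):
    """Mean CAM activation inside a box (x0, y0, x1, y1), end-exclusive."""
    x0, y0, x1, y1 = box
    vals = [cam[i][j] for i in range(y0, y1) for j in range(x0, x1)]
    return sum(vals) / len(vals)

def filter_proposals(cam, boxes, threshold=0.5):
    """Keep only proposals whose mean activation clears the threshold."""
    return [b for b in boxes if score_proposal(cam, b) >= threshold]
```

On a toy 2x2 map with weights `[1.0, 0.0]`, a box over the single activated cell is kept while a box over a zero region is discarded, which is the filtering behavior the abstract describes.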

MixedFusion: 6D Object Pose Estimation from Decoupled RGB-Depth Features

Hangtao Feng, Lu Zhang, Xu Yang, Zhiyong Liu


Auto-TLDR; MixedFusion: Combining Color and Point Clouds for 6D Pose Estimation


Estimating the 6D pose of objects is an important step for intelligent systems to interact with the real world. As RGB-D sensors become more accessible, fusion-based methods have prevailed, since point clouds provide geometric information complementary to RGB values. However, due to the difference in feature space between color and depth images, network structures that directly perform point-to-point matching fusion do not fuse the two sets of features effectively. In this paper, we propose a simple but effective approach, named MixedFusion. Unlike prior works, we argue that the spatial correspondence between color and point clouds can be decoupled and reconnected, enabling a more flexible fusion scheme. With the proposed method, more informative points can be mixed and fused with rich color features. Extensive experiments on the challenging LineMod and YCB-Video datasets show that our method significantly boosts performance without introducing extra overhead. Furthermore, as the tolerance of the evaluation metric narrows, the proposed approach performs better, meeting high-precision demands.
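The contrast the abstract draws, fixed point-to-point pairing versus a decoupled and reconnected correspondence, can be illustrated with a toy sketch. This is not MixedFusion's network (which learns the pairing end-to-end); the feature vectors and the hand-written dot-product similarity here are purely illustrative assumptions.

```python
def dot(a, b):
    """Toy similarity between two feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def point_to_point_fusion(color_feats, geo_feats):
    """Baseline: concatenate features at fixed pixel-point correspondences."""
    return [c + g for c, g in zip(color_feats, geo_feats)]

def decoupled_fusion(color_feats, geo_feats, similarity):
    """Decoupled pairing: each point is reconnected to the color feature it
    matches best under `similarity`, then the two are concatenated."""
    fused = []
    for g in geo_feats:
        best = max(color_feats, key=lambda c: similarity(c, g))
        fused.append(best + g)
    return fused
```

In the baseline, the i-th point can only ever see the i-th color feature; in the decoupled variant the pairing is chosen by content, which is the flexibility the abstract argues for.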