Lianlei Shan
Papers from this author
Global-Local Attention Network for Semantic Segmentation in Aerial Images
Minglong Li, Lianlei Shan, Weiqiang Wang
Auto-TLDR; GLANet: Global-Local Attention Network for Semantic Segmentation
Errors in semantic segmentation can be classified into two types: large-area misclassification and locally inaccurate boundaries. Previous attention-based methods capture rich global contextual information, which helps reduce the first type of error, but local imprecision still exists. In this paper we propose the Global-Local Attention Network (GLANet), which considers global context and local details simultaneously. Specifically, GLANet is composed of two branches, a global attention branch and a local attention branch, and three different modules are embedded in the two branches to model semantic interdependencies in the spatial, channel, and boundary dimensions, respectively. We sum the outputs of the two branches to further improve the feature representation, leading to more precise segmentation results. The proposed method achieves very competitive segmentation accuracy on two public aerial image datasets, bringing significant improvements over the baseline.
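The abstract's core mechanism is summing the outputs of a global attention branch and a local (boundary-aware) branch. A minimal NumPy sketch of that fusion pattern is below; the function names (`channel_attention`, `local_boundary_attention`, `glanet_fuse`) and the particular attention computations are illustrative assumptions, not the paper's actual modules.

```python
import numpy as np

def channel_attention(feat):
    # feat: (C, H, W). A simple global branch: global average pooling gives
    # one weight per channel, squashed to (0, 1) with a sigmoid, and each
    # channel is rescaled by its weight. (Assumed stand-in for the paper's
    # channel-dimension module.)
    w = feat.mean(axis=(1, 2))                  # (C,)
    w = 1.0 / (1.0 + np.exp(-w))                # sigmoid
    return feat * w[:, None, None]

def local_boundary_attention(feat):
    # A rough stand-in for a boundary-aware local branch: emphasize
    # positions where the feature map changes quickly, using the spatial
    # gradient magnitude as an edge cue.
    gy, gx = np.gradient(feat, axis=(1, 2))
    edge = np.sqrt(gy ** 2 + gx ** 2)
    return feat * (1.0 + edge)

def glanet_fuse(feat):
    # As in the abstract, the two branch outputs are fused by summation
    # before the segmentation head.
    return channel_attention(feat) + local_boundary_attention(feat)

feat = np.random.rand(8, 16, 16).astype(np.float32)
out = glanet_fuse(feat)
assert out.shape == feat.shape
```

The element-wise sum keeps the fused tensor the same shape as its inputs, so it can replace the original feature map anywhere in a segmentation backbone.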
UHRSNet: A Semantic Segmentation Network Specifically for Ultra-High-Resolution Images
Auto-TLDR; Ultra-High-Resolution Segmentation with Local and Global Feature Fusion
Abstract—Semantic segmentation is a basic task in computer vision, but only limited attention has been devoted to ultra-high-resolution (UHR) image segmentation. Since UHR images occupy too much memory, they cannot be fed directly into a GPU for training. Previous methods either crop the images into small patches or downsample the whole image; cropping and downsampling cause the loss of context and details, both of which are essential for segmentation accuracy. To solve this problem, we improve and simplify the local and global feature fusion method of previous works: local features are extracted from patches, and global features come from the downsampled image. In addition, we propose a new fusion, called local feature fusion, for the first time, which allows each patch to obtain information from its surrounding patches. We call the network with these two fusions the ultra-high-resolution segmentation network (UHRSNet). The two fusions effectively and efficiently solve the problems caused by cropping and downsampling. Experiments show a remarkable improvement on the Deepglobe dataset.
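The abstract names two fusions: local-global fusion (patch features combined with features of the downsampled whole image) and local feature fusion (each patch borrowing information from its grid neighbours). The sketch below illustrates both on raw pixel arrays with NumPy; the function name `fuse_patches`, the nearest-neighbour upsampling, and the neighbour-averaging rule are illustrative assumptions, not UHRSNet's actual layers.

```python
import numpy as np

def fuse_patches(img, ps=64, factor=8):
    """Sketch of the two fusions on a single-channel image `img` (H, W),
    assuming H and W are divisible by `ps` and `factor`."""
    H, W = img.shape
    # Global branch: the whole image, heavily downsampled, then upsampled
    # back with nearest-neighbour interpolation (via a Kronecker product).
    g = img[::factor, ::factor]
    g_up = np.kron(g, np.ones((factor, factor)))
    # Local branch: non-overlapping full-resolution patches.
    gy, gx = H // ps, W // ps
    patches = np.empty((gy, gx, ps, ps))
    for i in range(gy):
        for j in range(gx):
            p = img[i*ps:(i+1)*ps, j*ps:(j+1)*ps]
            # Local-global fusion: add the spatially matching global crop.
            patches[i, j] = p + g_up[i*ps:(i+1)*ps, j*ps:(j+1)*ps]
    # Local feature fusion: blend each patch with the mean of its grid
    # neighbours, so a patch sees context just outside its own crop.
    fused = patches.copy()
    for i in range(gy):
        for j in range(gx):
            nbrs = [patches[a, b]
                    for a, b in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                    if 0 <= a < gy and 0 <= b < gx]
            fused[i, j] += np.mean(nbrs, axis=0)
    return fused

img = np.random.rand(256, 256)
out = fuse_patches(img)
assert out.shape == (4, 4, 64, 64)
```

Only the small downsampled image and one patch (plus its neighbours) need to reside in memory at a time, which is what makes this patch-plus-global scheme attractive for UHR inputs that cannot fit on a GPU whole.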