Nakano Teppei

Papers from this author

Crowdsourced Verification for Operating Calving Surveillance Systems at an Early Stage

Yusuke Okimoto, Soshi Kawata, Susumu Saito, Nakano Teppei, Tetsuji Ogawa

Auto-TLDR; Crowdsourcing for Data-Driven Video Surveillance

This study attempts to use crowdsourcing to facilitate the operation of pattern-recognition-based video surveillance systems at an early stage. Target events (i.e., events to be detected during surveillance) are observed only rarely in recorded video, so achieving reliable surveillance on the basis of machine learning requires a sufficient amount of target data. Acquiring such data is time-consuming, and operating an unreliable surveillance system in the meantime can induce many false alarms. Crowdsourcing is introduced to address this problem by verifying the unreliable detection results in data-driven surveillance. Experimental simulations conducted using monitoring video of Japanese black beef cattle demonstrate that crowdsourced verification successfully reduced false alarms in a calving detection system.
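
A minimal sketch of the verification loop described above, assuming a detector that emits per-frame confidences and a crowd-query callback; the Detection type, thresholds, and simulated worker are all hypothetical, since the abstract specifies neither the detector nor the crowdsourcing platform:

```python
# Hypothetical verification gate: confident detections raise alarms directly,
# borderline ones are confirmed by a majority vote of crowd workers.
import random
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    confidence: float  # detector's belief that a calving event is occurring

def verified_alarms(detections, ask_worker, confident=0.9, n_workers=5):
    """Yield detections to be raised as alarms; thresholds are assumptions."""
    for det in detections:
        if det.confidence >= confident:
            yield det  # trust the model outright
        else:
            votes = sum(ask_worker(det.frame_id) for _ in range(n_workers))
            if votes > n_workers // 2:
                yield det  # crowd majority confirms the event
            # otherwise the detection is suppressed as a likely false alarm

# Toy run with a simulated worker that answers "yes" 30% of the time.
detections = [Detection(1, 0.95), Detection(2, 0.40), Detection(3, 0.55)]
for alarm in verified_alarms(detections, lambda _frame: random.random() < 0.3):
    print("alarm at frame", alarm.frame_id)
```

Gating only the low-confidence detections keeps the crowdsourcing cost bounded while suppressing the false alarms that an immature detector would otherwise raise.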

Feature Representation Learning for Calving Detection of Cows Using Video Frames

Ryosuke Hyodo, Nakano Teppei, Tetsuji Ogawa

Auto-TLDR; Data-driven Feature Extraction for Calving Sign Detection Using Surveillance Video

Data-driven feature extraction is examined to realize accurate and robust calving detection. Automatic calving sign detection systems can support farmers' decision making. In this paper, neural networks are designed to extract information relevant to calving signs that can be observed from video, such as the frequency of pre-calving postures and statistics of movement and rotation. Experimental comparisons using surveillance video demonstrate that the proposed feature extraction methods reduce false positives and help explain the basis of predictions, compared with an end-to-end calving detection system.
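
A hedged PyTorch sketch of branch-wise feature extraction in this spirit; the input descriptors, dimensions, and layer sizes are assumptions for illustration, since the abstract names only the feature types (posture frequency, movement statistics, rotation statistics), not the architecture:

```python
# Illustrative branch-per-feature network for calving sign classification.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small MLP mapping one per-window descriptor to a feature vector."""
    def __init__(self, in_dim, out_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))
    def forward(self, x):
        return self.net(x)

class CalvingSignNet(nn.Module):
    """Concatenate posture, movement, and rotation features, then classify."""
    def __init__(self):
        super().__init__()
        self.posture = Branch(in_dim=8)   # e.g. posture histogram per window
        self.movement = Branch(in_dim=4)  # e.g. mean/var of displacement
        self.rotation = Branch(in_dim=4)  # e.g. mean/var of turning angle
        self.head = nn.Linear(16 * 3, 2)  # calving sign vs. no sign
    def forward(self, posture, movement, rotation):
        f = torch.cat([self.posture(posture), self.movement(movement),
                       self.rotation(rotation)], dim=-1)
        return self.head(f)

# Toy forward pass on a batch of two analysis windows.
net = CalvingSignNet()
logits = net(torch.randn(2, 8), torch.randn(2, 4), torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 2])
```

Keeping the branches separate means each intermediate feature retains a physical interpretation, which is consistent with the stated goal of explaining the basis of the prediction.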

Toward Building a Data-Driven System for Detecting Mounting Actions of Black Beef Cattle

Yuriko Kawano, Susumu Saito, Nakano Teppei, Ikumi Kondo, Ryota Yamazaki, Hiromi Kusaka, Minoru Sakaguchi, Tetsuji Ogawa

Auto-TLDR; Cattle Mounting Action Detection Using Crowdsourcing and Pattern Recognition

This paper tackles building a pattern recognition system that detects whether a pair of Japanese black beef cattle captured in a given image region is in a “mounting” action, a sign of estrus that is critically important for cattle farmers to detect before artificial insemination. The “mounting” action refers to an action in which one cow bends over another, usually when either cow is in estrus. Although a pattern-recognition-based approach to detecting such an action would be appreciated as low-cost and robust, it has not been discussed much, owing to the complexity of the system architecture, the unavailability of datasets, etc. This study presents i) an image dataset construction technique that exploits both an object detection algorithm and crowdsourcing to collect cattle pair images labeled as either “mounting” or not; and ii) a system, developed on the basis of this dataset, that detects the mounting action in any given image of a cattle pair. Starting with an algorithm that extracts cattle pair regions from a video frame based on the intersection of single-cattle regions, we designed a crowdsourcing microtask in which crowd workers, given simple guidelines, annotated the extracted regions with mounting-action-relevant labels, finally yielding a dataset. We also introduce a tandem-layered pattern recognition system trained on this dataset. The system comprises two serially connected machine learning components and detects mounting actions more robustly than a standard end-to-end neural network, even with a small amount of training data. Experimental comparisons demonstrated that our detection system detected estrus with a precision of 80% and a recall of 76%.
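
A small sketch of the pair-region step, assuming axis-aligned single-cattle boxes (x1, y1, x2, y2) from an off-the-shelf detector; proposing the union box of every intersecting pair is an illustrative reading of the intersection-based extraction the abstract describes, not the authors' exact algorithm:

```python
# Propose a candidate pair region for every intersecting pair of boxes.
from itertools import combinations

def intersects(a, b):
    """True if two axis-aligned boxes (x1, y1, x2, y2) overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def pair_regions(boxes):
    """Yield the union box of every intersecting pair of single-cattle boxes."""
    for a, b in combinations(boxes, 2):
        if intersects(a, b):
            yield (min(a[0], b[0]), min(a[1], b[1]),
                   max(a[2], b[2]), max(a[3], b[3]))

boxes = [(10, 10, 60, 50), (40, 20, 90, 70), (200, 200, 240, 230)]
print(list(pair_regions(boxes)))  # one region from the two overlapping boxes
```

Each proposed region would then be cropped and passed through the two serially connected components of the tandem-layered system, with the first producing an intermediate representation and the second making the final mounting/non-mounting decision.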