Recently, image-based object detection has made great progress with the introduction of convolutional neural networks (CNNs). Many methods, such as Region-based CNN (R-CNN), Fast R-CNN, and Faster R-CNN, have been proposed to achieve better object detection performance. Among them, YOLO has shown the best performance when both accuracy and computational complexity are considered. However, these data-driven detection methods, including YOLO, share a fundamental problem: they cannot guarantee good performance without a large amount of training data. In this paper, we propose a data sampling method based on CycleGAN, which can convert the style of a given input image while retaining its content, to address this problem. We generate the missing data samples needed to train a more robust object detector without the effort of collecting additional data. We conduct extensive experiments on day-time and night-time road images and validate that the proposed method improves night-time object detection accuracy without training on real night-time object databases: we convert the day-time training images into synthesized night-time images and train the detection model on both the real day-time images and the synthesized night-time images.
Keywords
CycleGAN; Data Sampling; Image-to-Image Translation
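
To illustrate the data sampling idea described above, the following is a minimal sketch, assuming a PyTorch environment, of how day-time training images could be run through a day-to-night CycleGAN generator to synthesize night-time samples. The generator here is a simplified stand-in for the full CycleGAN generator, and the file paths, folder names, and checkpoint name (`day2night_G.pth`) are hypothetical placeholders, not artifacts from the paper.

```python
# Sketch: translate day-time images into synthetic night-time images with a
# simplified CycleGAN-style generator, so a detector can later be trained on
# real day-time images plus the synthesized night-time images.
import os
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms


class ResnetBlock(nn.Module):
    """Residual block of the kind used in the CycleGAN generator."""
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim),
        )

    def forward(self, x):
        return x + self.block(x)


class Day2NightGenerator(nn.Module):
    """Simplified encoder / residual / decoder generator (stand-in only)."""
    def __init__(self, ngf=64, n_blocks=6):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, ngf, 7),
                  nn.InstanceNorm2d(ngf), nn.ReLU(inplace=True)]
        # Downsampling stages
        layers += [nn.Conv2d(ngf, ngf * 2, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(ngf * 2), nn.ReLU(inplace=True),
                   nn.Conv2d(ngf * 2, ngf * 4, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(ngf * 4), nn.ReLU(inplace=True)]
        # Residual bottleneck
        layers += [ResnetBlock(ngf * 4) for _ in range(n_blocks)]
        # Upsampling stages back to the input resolution
        layers += [nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(ngf * 2), nn.ReLU(inplace=True),
                   nn.ConvTranspose2d(ngf * 2, ngf, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(ngf), nn.ReLU(inplace=True),
                   nn.ReflectionPad2d(3), nn.Conv2d(ngf, 3, 7), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)


def synthesize_night_images(day_dir="day_images", out_dir="synth_night_images",
                            weights="day2night_G.pth", size=256):
    """Run every day-time image through the generator and save the result.

    Output files keep the original names; because the translation is
    pixel-aligned, the day-time bounding-box labels can in principle be
    reused for the synthesized night-time images (rescaled if needed).
    """
    os.makedirs(out_dir, exist_ok=True)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    G = Day2NightGenerator().to(device).eval()
    G.load_state_dict(torch.load(weights, map_location=device))

    to_tensor = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),  # map pixels to [-1, 1]
    ])

    with torch.no_grad():
        for name in os.listdir(day_dir):
            img = Image.open(os.path.join(day_dir, name)).convert("RGB")
            fake_night = G(to_tensor(img).unsqueeze(0).to(device))
            fake_night = (fake_night.squeeze(0).cpu() * 0.5 + 0.5).clamp(0, 1)
            transforms.ToPILImage()(fake_night).save(os.path.join(out_dir, name))


if __name__ == "__main__":
    synthesize_night_images()
```

The synthesized images would then simply be added to the detector's training set alongside the real day-time images; no change to the detection model itself is implied by this sketch.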