Social Video Advertisement Replacement and its Evaluation in Convolutional Neural Networks


Cheng Yang
Xiang Yu
Arun Kumar
G.G. Md. Nawaz Ali
Peter Han Joo Chong
Patrick Lam
This paper introduces a method that uses deep convolutional neural networks (CNNs) to automatically replace advertisement (AD) photos in social (or self-media) videos, and provides a suitable evaluation method for comparing different CNNs. An AD photo can replace a picture inside a video. However, if a human being occludes the replaced picture in the original video, the newly pasted AD photo will cover the occluding human. A deep learning algorithm is applied to segment the human being from the video; the segmented human pixels are then pasted back over the occluded area, so that the AD photo replacement appears natural in the video. This process requires the predicted occlusion edge to be close to the ground-truth occlusion edge, so that the AD photo is occluded naturally. Therefore, this research introduces a curve-fitting method to measure the error of the predicted occlusion edge. Using this measure, three CNN methods are applied and compared for AD replacement: the mask region-based convolutional neural network (Mask RCNN), a recurrent network for video object segmentation (ROVS), and DeeplabV3. The experimental results compare the segmentation accuracy of the different models, with DeeplabV3 showing the best performance.
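The paste-back step described in the abstract (restoring the segmented human pixels over the pasted AD photo so the person appears in front of the new advertisement) can be sketched as a simple mask-based composite. This is an illustrative sketch only, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def composite_ad(frame, ad_frame, human_mask):
    """Restore human pixels from the original frame over the AD photo.

    frame      : (H, W, 3) original video frame
    ad_frame   : (H, W, 3) frame with the AD photo already pasted in
    human_mask : (H, W) boolean mask from the CNN, True where a human
                 occludes the replaced region
    """
    out = ad_frame.copy()
    # Wherever the segmentation says "human", keep the original pixels,
    # so the person is drawn in front of the new advertisement.
    out[human_mask] = frame[human_mask]
    return out
```

In practice the quality of the result depends on how closely the predicted mask boundary follows the true occlusion edge, which is exactly what the paper's curve-fitting error measure evaluates.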
Keywords
Deep Learning, Image Processing, Image Segmentation, Video Advertisement Replacement

Article Details

How to cite
Yang, Cheng et al. «Social Video Advertisement Replacement and its Evaluation in Convolutional Neural Networks». ELCVIA: electronic letters on computer vision and image analysis, 2021, vol. 20, núm. 1, p. 117-136, doi:10.5565/rev/elcvia.1347.
Author biography

Cheng Yang, Department of Electrical and Electronic Engineering, Auckland University of Technology, New Zealand

Lecturer in the School of Engineering, Computer and Mathematical Sciences, Faculty of Design and Creative Technologies.