
Betway必威 Guangzhou Research Institute (广研院) Frontier Academic Lecture No. 59: Video style transfer based on auto-encoder and gradient rank preservation

Published: 2020-12-02 17:19:58    Source: 育人中心

Speaker: 牛毅, Associate Professor    Time: December 3, 16:30
Venue: B5-204    Talk time: 2020-12-03 16:30:00


Title: Video style transfer based on auto-encoder and gradient rank preservation

Speaker: 牛毅, Associate Professor

Time: December 3, 2020, 16:30

Venue: B5-204


About the speaker:

牛毅, male, born in December 1982, is a winner of a Shaanxi Province Outstanding Doctoral Dissertation Award and a specially-appointed professor at Peng Cheng Laboratory, Shenzhen. He is currently an associate professor in the School of Artificial Intelligence, 必威BETWAY官网, and the drummer of the AI Master band.

His main research directions include image processing, computational imaging, and gene detection and compression.

His educational background and main work experience are listed below.

2005/09-2012/12: Ph.D., School of Electronic Engineering, 必威BETWAY官网

2001/09-2005/07: B.S., School of Mechano-Electronic Engineering, 必威BETWAY官网

Research and work experience:

2012/01-2015/05: Lecturer, School of Electronic Engineering, 必威BETWAY官网

2009/09-2012/10: Research Assistant, Department of Electrical and Computer Engineering, McMaster University, Canada

2013/05-2014/05: Postdoctoral Fellow, Department of Electrical and Computer Engineering, McMaster University, Canada

2015/05-2017/12: Associate Professor, School of Electronic Engineering, 必威BETWAY官网

2017/12-present: Associate Professor, School of Artificial Intelligence, 必威BETWAY官网


Abstract:

The main challenge of video style transfer, compared with image style transfer, is the preservation of temporal consistency. Traditional video stylization techniques estimate the optical flow from the content video to define a pixel-wise temporal loss between adjacent stylized frames. This temporal loss definition has two drawbacks: 1) since stylization changes the subtle local texture of the content video, directly applying the motion vectors of the content video to the stylized video introduces artifacts such as edge blurring and texture flattening; 2) to eliminate these artifacts, existing video style transfer techniques adopt a simple masking strategy that excludes pixels in boundary regions from the temporal loss calculation. This creates a new problem: the total loss function differs between the inner and outer boundary regions, which causes significant halo artifacts. To resolve this dilemma, we propose to use an auto-encoder to restore the content video from the stylized video, so that the temporal loss is calculated on the restored frames instead of the stylized frames. Since both the restored frames and the input frames are "content images", the estimated optical flow can be applied directly without masking. In addition, we propose a novel gradient rank loss that forces the edges and textures of the stylized video to preserve the gradient rank of the content video, eliminating the potential halo artifacts caused by inaccurate optical flow estimation. Experimental results show that, with the collaboration of the auto-encoder and the gradient rank loss, the proposed video style transfer technique outperforms existing techniques in producing smooth and halo-free stylized videos.
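For readers who want a concrete picture of the two losses described above, the sketch below shows one plausible PyTorch wiring of a temporal loss computed on restored (content-like) frames together with a gradient-rank penalty. The module names (stylizer, restorer), the warp helper, the hinge-on-sign formulation of the gradient-rank term, and the equal loss weights are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the two losses from the abstract.
# The network names (stylizer, restorer), the warp() helper, and the
# sign-based gradient-rank formulation are assumptions for illustration.
import torch
import torch.nn.functional as F

def spatial_gradients(x):
    """Horizontal and vertical finite-difference gradients of a B x C x H x W batch."""
    gx = x[:, :, :, 1:] - x[:, :, :, :-1]
    gy = x[:, :, 1:, :] - x[:, :, :-1, :]
    return gx, gy

def gradient_rank_loss(content, stylized, margin=0.0):
    """Penalize stylized gradients whose sign (local ordering of neighboring pixels)
    disagrees with the content gradients; one way to read 'preserve the gradient rank'."""
    cgx, cgy = spatial_gradients(content)
    sgx, sgy = spatial_gradients(stylized)
    # A rank violation occurs when content and stylized gradients point in opposite directions.
    loss_x = F.relu(margin - torch.sign(cgx) * sgx).mean()
    loss_y = F.relu(margin - torch.sign(cgy) * sgy).mean()
    return loss_x + loss_y

def temporal_loss(restored_t, restored_prev_warped):
    """Pixel-wise temporal loss on the *restored* (content-like) frames, so the optical
    flow estimated on the content video can be reused directly, without masking."""
    return ((restored_t - restored_prev_warped) ** 2).mean()

def training_step(stylizer, restorer, frame_t, frame_prev, warp, flow):
    """One hypothetical training step combining the two terms.
    warp(img, flow) is assumed to backward-warp frame t-1 toward frame t using
    optical flow estimated on the content video."""
    stylized_t = stylizer(frame_t)
    stylized_prev = stylizer(frame_prev)
    # The auto-encoder restores content-like frames from the stylized ones.
    restored_t = restorer(stylized_t)
    restored_prev = restorer(stylized_prev)

    l_temporal = temporal_loss(restored_t, warp(restored_prev, flow))
    l_rank = gradient_rank_loss(frame_t, stylized_t)
    l_restore = F.mse_loss(restored_t, frame_t)  # keep the restorer faithful to the content
    return l_temporal + l_rank + l_restore
```

In this reading, "preserving the gradient rank" is approximated by keeping the sign of local pixel differences rather than their magnitudes, which leaves the style free to change texture strength while still discouraging the edge reversals that show up as halos.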

