Yu-Lun (Alex) Liu | 劉育綸

I am an Assistant Professor in the Department of Computer Science at National Yang Ming Chiao Tung University. I work on image/video processing, computer vision, and computational photography, particularly on essential problems that require machine learning combined with insights from geometry and domain-specific knowledge.

I am looking for undergraduate / master's / Ph.D. / postdoc students to join my group. If you are interested in working with me and want to conduct research in image processing, computer vision, and machine learning, don't hesitate to contact me directly with your CV and transcripts.

Email  /  CV  /  Google Scholar  /  Facebook  /  Instagram  /  Github  /  YouTube

For those who know me personally, you might be thinking: who is this guy?
Hover over the photo to see how I usually look before a paper submission deadline.
Timeline
2023 -
Assistant Professor at NYCU
2022
Research Scientist Intern at Meta
Computational Photography Group
Seattle, WA, USA
2017 - 2022
Senior Software Engineer at MediaTek Inc.
Multimedia Technology Development (MTD) Division
Intelligent Vision Processing (IVP) Department
2014 - 2017
Software Engineer at MediaTek Inc.
Multimedia Technology Development (MTD) Division
Intelligent Vision Processing (IVP) Department
2012 - 2014
M.S. at NCTU
CommLab, Institute of Electronics
2008 - 2012
B.S. at NCTU
Department of Electronics Engineering
Research Group

PhD Students

黃怡川
Institute of Computer Science

Research Assistants

葉長瀚
BS, NYCU CE & CS

MS Students

林晉暘
Institute of Data Science
(Co-advised w/ Wei-Chen Chiu)

吳中赫
Institute of Multimedia

嚴士函
Institute of Multimedia

陳捷文
Institute of Computer Science

許皓翔
Institute of Computer Science
(Co-advised w/ Wen-Chieh Lin)

鄭伯俞
Institute of Computer Science
(Co-advised w/ Wei-Chen Chiu)

BS Students

蘇智海
NYCU CS

胡智堯
NTU MED

陳俊瑋
NYCU MATH

李明謙
NYCU ARETEHP

林奕杰
NTHU SCIDM

孫揚喆
NYCU MED

羅宇呈
NYCU MATH

郭玠甫
NYCU MATH

葉柔昀
NYCU EP

陳士弘
NYCU MATH & CS

鄭又豪
NYCU EP

陳昱佑
NYCU CS

丁祐承
NYCU MATH

李宗諺
NYCU CS

楊宗儒
NYCU CS

陳凱昕
NYCU CS

劉珆睿
NYCU CS

吳俊宏
NYCU CS

蔡師睿
NYCU CS

張維程
NYCU CS

李杰穎
NYCU CS

陳映寰
NTHU EE

謝明翰
NTHU EE

陳楊融
NYCU CS

施惟智
NTHU IEEM

端木竣偉
NYCU CS

Research
Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction
Su-Kai Chen, Hung-Lin Yen, Yu-Lun Liu, Min-Hung Chen, Hou-Ning Hu, Wen-Hsiao Peng, Yen-Yu Lin
ICCV, 2023  
project page / paper

Our flexible approach generates a continuous exposure stack containing more images with diverse EVs, significantly improving HDR reconstruction.

ImGeoNet: Image-induced Geometry-aware Voxel Representation for Multi-view 3D Object Detection
Tao Tu, Shun-Po Chuang, Yu-Lun Liu, Cheng Sun, Ke Zhang, Donna Roy, Cheng-Hao Kuo, Min Sun
ICCV, 2023  
project page / paper

In contrast to prior works that disregard the underlying geometry by directly averaging the feature volume across multiple views, our proposed method successfully preserves the geometric structure with respect to the ground truth while effectively reducing the number of voxels in free space.

DisCO: Portrait Distortion Correction with Perspective-Aware 3D GANs
Zhixiang Wang, Yu-Lun Liu, Jia-Bin Huang, Shin'ichi Satoh, Sizhuo Ma, Guru Krishnan, Jian Wang
arXiv, 2023  
project page / arXiv

We build a pipeline to use pre-trained 3D GANs for correcting face perspective distortion in close-up portraits.

Progressively Optimized Local Radiance Fields for Robust View Synthesis
Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H. Kim, Johannes Kopf
CVPR, 2023  
project page / paper / code / video

For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
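A minimal conceptual sketch (Python, not the authors' released code) of the allocation scheme described above: a new local radiance field is spawned whenever the camera moves beyond the spatial extent of the active one, and each field is supervised only by the frames observed while it is active. All names and thresholds below are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class LocalField:
    """Placeholder for one local radiance field anchored at `center`."""
    center: tuple
    frame_ids: list = field(default_factory=list)

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def allocate_local_fields(camera_positions, radius=1.0):
    fields, current = [], None
    for t, pos in enumerate(camera_positions):
        # Allocate a new field when none exists or the camera has left
        # the bounded region of the current one.
        if current is None or dist(pos, current.center) > radius:
            current = LocalField(center=tuple(pos))
            fields.append(current)
        # The frames collected here form the temporal window that
        # supervises the active field.
        current.frame_ids.append(t)
    return fields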

Robust Dynamic Radiance Fields
Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang
CVPR, 2023  
project page / arXiv / code / video

RoDynRF tackles the robustness problem of conventional SfM systems such as COLMAP and showcases high-fidelity dynamic view synthesis results on a wide variety of videos.




Denoising Likelihood Score Matching for Conditional Score-based Data Generation
Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang, Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, Chun-Yi Lee
ICLR, 2022  
arXiv / OpenReview

We theoretically formulate a novel training objective, called Denoising Likelihood Score Matching (DLSM) loss, for the classifier to match the gradients of the true log likelihood density.
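Schematically, and only as my paraphrase of the abstract (see the paper / OpenReview for the exact objective), the classifier p_phi(y | x~) is trained so that its input gradient, added to a pre-trained unconditional score model s_theta, matches the denoising score-matching target defined by the perturbation kernel q_sigma:

\mathcal{L}_{\mathrm{DLSM}}
  = \mathbb{E}_{x,\, y,\, \tilde{x} \sim q_\sigma(\tilde{x} \mid x)}
    \Big[ \tfrac{1}{2}\,
      \big\| \nabla_{\tilde{x}} \log p_\phi(y \mid \tilde{x})
           + s_\theta(\tilde{x})
           - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \big\|_2^2
    \Big]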

Learning to See Through Obstructions with Layered Decomposition
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
TPAMI, 2021  
project page / arXiv / code / demo / video

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or adherent raindrops, from a short sequence of images captured by a moving camera.




Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision
Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, Kevin Jou
ICCV, 2021  
project page / arXiv / code

In this paper, we propose a method to estimate not only a depth map but also an all-in-focus (AiF) image from a set of images with different focus positions (known as a focal stack).
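A rough sketch (PyTorch, not the released code) of the shared-weighting idea suggested by the abstract: per-pixel weights over the focal stack yield both a depth estimate (weighted focus positions) and an AiF image (weighted input slices), which is what lets AiF supervision drive depth learning. The tensor shapes and names are assumptions for illustration.

import torch

def depth_and_aif(stack, focus_dists, logits):
    # stack:       (N, 3, H, W) focal-stack images
    # focus_dists: (N,) focus distance of each slice
    # logits:      (N, H, W) per-pixel scores predicted by some network
    attn = torch.softmax(logits, dim=0)                      # (N, H, W)
    depth = (attn * focus_dists.view(-1, 1, 1)).sum(dim=0)   # (H, W)
    aif = (attn.unsqueeze(1) * stack).sum(dim=0)             # (3, H, W)
    return depth, aif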

Hybrid Neural Fusion for Full-frame Video Stabilization
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
ICCV, 2021  
project page / arXiv / poster / slides / code / demo / video / two-minute video

In this work, we present a frame synthesis algorithm to achieve full-frame video stabilization.



Explorable Tone Mapping Operators
Chien-Chuan Su, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Soo-Chang Pei
ICPR, 2020  
arXiv

In this paper, we propose a learning-based multimodal tone-mapping method that not only achieves excellent visual quality but also explores style diversity.


Learning Camera-Aware Noise Models
Ke-Chi Chang, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Hwann-Tzong Chen
ECCV, 2020  
project page / arXiv / code

We propose a data-driven approach, where a generative noise model is learned from real-world noise.

Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
Yu-Lun Liu*, Wei-Sheng Lai*, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
CVPR, 2020  
project page / arXiv / poster / slides / code / demo / 1-minute video

In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model.
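A high-level sketch (Python, illustrative only) of the reverse-pipeline idea: the LDR formation steps (dynamic range clipping, a nonlinear camera response, and quantization) are undone one by one by dedicated sub-networks. The three network arguments are hypothetical placeholders, not the released models.

def ldr_to_hdr(ldr, dequant_net, linearization_net, hallucination_net):
    x = dequant_net(ldr)        # undo quantization artifacts
    x = linearization_net(x)    # invert the nonlinear camera response function
    hdr = hallucination_net(x)  # recover content lost in clipped, over-exposed regions
    return hdr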

Learning to See Through Obstructions
Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang
CVPR, 2020  
project page / arXiv / poster / slides / code / demo / 1-minute video / video / New Scientists

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions or raindrops, from a short sequence of images captured by a moving camera.


Attention-based View Selection Networks for Light-field Disparity Estimation
Yu-Ju Tsai, Yu-Lun Liu, Yung-Yu Chuang, Ming Ouhyoung
AAAI, 2020  
paper / code / benchmark

To utilize the views more effectively and reduce redundancy among them, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation.
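A minimal sketch (PyTorch, illustrative rather than the paper's architecture) of the view-selection idea: predict one attention weight per light-field view and use it to re-weight the per-view features before they are aggregated for disparity estimation.

import torch
import torch.nn as nn

class ViewAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # small scoring network: pooled per-view features -> scalar importance
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, 1))

    def forward(self, view_feats):                 # (B, V, C, H, W)
        pooled = view_feats.mean(dim=(-1, -2))     # (B, V, C)
        attn = torch.softmax(self.score(pooled).squeeze(-1), dim=1)  # (B, V)
        weighted = view_feats * attn[:, :, None, None, None]
        return weighted.sum(dim=1), attn           # aggregated features, per-view weights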

Deep Video Frame Interpolation using Cyclic Frame Generation
Yu-Lun Liu, Yi-Tung Liao, Yen-Yu Lin, Yung-Yu Chuang
AAAI, 2019   (Oral Presentation)
project page / paper / poster / slides / code / video

The cycle consistency loss makes better use of the training data, not only enhancing the interpolation results but also maintaining performance with less training data.
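A small sketch (PyTorch, the loss form is illustrative) of the cycle-consistency idea: frames interpolated by the model are fed back into it, and re-interpolating between them should reproduce an original input frame; interp(a, b) stands for the frame interpolation network predicting the midpoint of a and b.

import torch.nn.functional as F

def cycle_consistency_loss(interp, i0, i1, i2):
    mid_01 = interp(i0, i1)           # predicted frame between i0 and i1
    mid_12 = interp(i1, i2)           # predicted frame between i1 and i2
    recon_1 = interp(mid_01, mid_12)  # re-interpolating should recover i1
    return F.l1_loss(recon_1, i1)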

Background modeling using depth information
Yu-Lun Liu, Hsueh-Ming Hang
APSIPA, 2014  
paper

This paper focuses on creating a global background model of a video sequence using depth maps together with the RGB frames.

Virtual view synthesis using backward depth warping algorithm
Du-Hsiu Li, Hsueh-Ming Hang, Yu-Lun Liu
PCS, 2013  
paper

In this study, we propose a backward warping process to replace forward warping, which significantly reduces artifacts (particularly those produced by quantization).
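A rough sketch (NumPy, projection details omitted) contrasting the two directions: forward warping scatters source pixels into the target view and leaves cracks caused by coordinate rounding, whereas backward warping loops over target pixels and, given a target-view depth, fetches the corresponding source pixel, so every target pixel receives a value. project_to_source is a hypothetical placeholder for the camera and depth projection.

import numpy as np

def backward_warp(src_img, tgt_depth, project_to_source):
    h, w = tgt_depth.shape
    out = np.zeros_like(src_img)
    for y in range(h):
        for x in range(w):
            # map the target pixel (with its depth) into source-image coordinates
            xs, ys = project_to_source(x, y, tgt_depth[y, x])
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= yi < src_img.shape[0] and 0 <= xi < src_img.shape[1]:
                out[y, x] = src_img[yi, xi]
    return out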

Teaching
CSIC30107: Video Compression
NYCU - Fall 2023 (Instructor)
CSIC30107: Video Compression
NYCU - Spring 2023 (Instructor)
DEE1315: Probability and Statistics
NCTU - Spring 2013 (Teaching Assistant)
Sponsors

My research is made possible by the generous support of the following organizations.


Stolen from Jon Barron's website.
Last updated September 2023.