Chenyang Lei (雷晨阳)
Chenyang Lei is an assistant professor at the Centre for Artificial Intelligence and Robotics (CAIR), Chinese Academy of Sciences. He is also a visiting scholar at Princeton University, working with Felix Heide. He received his Ph.D. in computer science from the Hong Kong University of Science and Technology (HKUST), supervised by Qifeng Chen, and obtained his Bachelor's degree from Zhejiang University in 2018. He was a research intern at MSRA, Nvidia, and SenseTime.
I am looking for students, postdocs, and research assistants. If you are a student interested in doing research with me, please send me an email.
Email  / 
CV  / 
Google Scholar  / 
LinkedIn  / 
Github  / 
Twitter
- News (Dec 2024): One paper is accepted to AAAI 2025.
- News (Oct 2024): Two papers are accepted to NeurIPS 2024.
- News (Feb 2024): Four papers are accepted to CVPR 2024.
- News (Dec 2023): A paper is accepted to AAAI 2024.
- News (Aug 2023): A paper is accepted to TPAMI 2023.
- News (Aug 2023): A paper is accepted to SIGGRAPH Asia 2023.
- News (Jul 2023): Three papers are accepted to ICCV 2023.
- News (Feb 2023): Two papers are accepted to CVPR 2023.
- News (Sep 2022): I joined Princeton University as a research scholar, working with Felix Heide.
- News (Sep 2022): I joined CAIR, HKISI-CAS as an assistant professor.
- News (Aug 2022): I passed my Ph.D. defense!
- News (Mar 2022): One paper is accepted to CVPR 2022.
- News (Dec 2021): One paper is accepted to TPAMI 2022.
- News (Mar 2021): Two papers are accepted to CVPR 2021.
Research
I am interested in exploring the system design of visual computing pipelines, from advanced sensing technologies to more capable artificial intelligence.
My current research topics include:
- Computational imaging and photography
- Video processing, editing, and generation
- 3D from sensors and multiviews
- Multimodal large language models
Publications
SimCMF: A Simple Cross-modal Fine-tuning Strategy from Vision Foundation Models to Any Imaging Modality
Chenyang Lei*,
Liyi Chen*,
Jun Cen, Xiao Chen, Zhen Lei,
Felix Heide, Qifeng Chen, Zhaoxiang Zhang
arXiv, 2024
paper /
project website /
code
This work presents SimCMF, a simple and effective framework for studying an open problem: the transferability of vision foundation models trained on natural RGB images to other imaging modalities with different physical properties (e.g., polarization).
Vision-Language-Camera: Introducing Vision Language Models for Unleashing the Power of Camera Manual Mode
Zian Qian, Zhili Chen, Mengxi Sun, Zhaoxiang Zhang, Qifeng Chen, Chenyang Lei
In submission, 2024
paper /
project website /
code
We propose the Vision-Language-Camera Model (VLC), which introduces vision-language models to camera manual mode and offers guidance on adjusting camera parameters tailored to the user's specific needs.
FIRM: Flexible Interactive Reflection ReMoval
Xiao Chen, Xudong Jiang, Yunkang Tao, Zhen Lei, Qing Li, Chenyang Lei†, Zhaoxiang Zhang†
AAAI, 2024
paper /
project website /
code
This paper presents a novel framework for Flexible Interactive Reflection Removal with various forms of guidance, where users can provide sparse visual guidance (e.g., points, boxes, or strokes) or text descriptions for better reflection removal.
Polarization Wavefront Lidars: Learning to Recover Large-Scale Scene Information from Polarized Wavefronts
Dominik Scheuble*,
Chenyang Lei*,
Mario Bijelic, Seung-Hwan Baek,
Felix Heide
CVPR, 2024
paper /
project website /
code
In this work, we introduce a novel long-range polarization wavefront lidar sensor (PolLidar) that modulates the polarization of the emitted and received light.
Robust Depth Enhancement via Polarization Prompt Fusion Tuning
Kei Ikemura*, Yiming Huang*, Felix Heide, Zhaoxiang Zhang, Qifeng Chen, Chenyang Lei†
CVPR, 2024
paper /
project website /
code
In this work, we present a general framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Automatic Controllable Colorization by Imagination
Xiaoyan Cong, Yue Wu,
Qifeng Chen,
Chenyang Lei†
CVPR, 2024
paper /
project website /
code
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Neural Spline Fields for Burst Image Fusion and Layer Separation
Ilya Chugunov, David Shustin, Ruyu Yan,
Chenyang Lei, Felix Heide
CVPR, 2024
paper /
project website /
code
In this work, we use burst image stacks for layer separation. We represent a burst of images with a two-layer alpha-composited image plus flow model constructed with neural spline fields networks trained to map input coordinates to spline control points.
Thin On-Sensor Nanophotonic Array Cameras
Praneeth Chakravarthula, Jipeng Sun, Xiao Li,
Chenyang Lei,
Gene Chou, Mario Bijelic, Johannes Froech, Arka Majumdar, Felix Heide
SIGGRAPH Asia, 2023
paper /
project website
We propose a thin nanophotonic imager that employs a learned array of metalenses to capture a scene in the wild.
Randomized Quantization: A Generic Augmentation for Data Agnostic Self-supervised Learning
Huimin Wu*,
Chenyang Lei*,
Xiao Sun, Peng-Shuai Wang,
Qifeng Chen,
Kwang-Ting Cheng, Stephen Lin, Zhirong Wu
ICCV, 2023
arXiv /
project website /
code
A modality-agnostic augmentation for contrastive learning that can be applied to images, audio, point clouds, sensor data, and other modalities.
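As a loose illustration of the idea (a simplified sketch, not the paper's exact implementation: the bin placement and value mapping below are my own assumptions), a randomized quantization augmentation can be sketched as quantizing each input with randomly placed bin edges:

```python
import numpy as np

def randomized_quantization(x, num_bins=8, rng=None):
    """Quantize values using randomly sampled bin boundaries, mapping each
    value to the mean of its bin. Modality-agnostic: the same routine
    applies to image pixels, audio samples, or point-cloud features."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = float(x.min()), float(x.max())
    edges = np.sort(rng.uniform(lo, hi, size=num_bins - 1))  # random bin boundaries
    idx = np.digitize(x, edges)                              # bin index per element
    out = x.astype(np.float64).copy()
    for b in range(num_bins):
        mask = idx == b
        if mask.any():
            out[mask] = x[mask].mean()                       # collapse bin to its mean
    return out
```

Because the bin edges are resampled on every call, two augmented views of the same input differ, which is what a contrastive objective needs.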
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
Chenyang Qi,
Xiaodong Cun,
Yong Zhang,
Chenyang Lei,
Xintao Wang,
Ying Shan,
Qifeng Chen
ICCV, 2023
arXiv /
project website /
code
A zero-shot text-driven video style and local attribute editing model.
Blind Video Deflickering by Neural Filtering with a Flawed Atlas
Chenyang Lei*,
Xuanchi Ren*,
Zhaoxiang Zhang,
Qifeng Chen
CVPR, 2023
arXiv /
project website /
code
We present a general postprocessing framework that can remove different types of flicker from various videos, including videos from video capturing, processing, and generation.
High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization
Jiaxin Xie*,
Hao Ouyang*,
Jingtan Piao,
Chenyang Lei,
Qifeng Chen
CVPR, 2023
arXiv /
project website /
code
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image.
Deep Video Prior for Video Consistency and Propagation
Chenyang Lei,
Yazhou Xing,
Hao Ouyang,
Qifeng Chen
TPAMI, 2022
code
We extend deep video prior (NeurIPS 2020) to video propagation and improve its training efficiency.
Shape from Polarization for Complex Scenes in the Wild
Chenyang Lei*,
Chenyang Qi*,
Jiaxin Xie*,
Na Fan,
Vladlen Koltun,
Qifeng Chen
CVPR, 2022
arXiv /
project website /
code
We present a new data-driven approach with physics-based priors to scene-level normal estimation from a single polarization image.
Robust Reflection Removal with Reflection-free Flash-only Cues
Chenyang Lei,
Qifeng Chen
CVPR, 2021
arXiv /
code /
project website
We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images.
The reflection-free cue exploits a flash-only image obtained by subtracting the ambient image from the corresponding flash image in raw data space.
The flash-only image is equivalent to an image taken in a dark environment with only a flash on.
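As a minimal sketch of this cue (a hypothetical function of my own naming, assuming linear, exposure-matched raw images of a static scene), the flash-only image is a per-pixel subtraction clipped at zero:

```python
import numpy as np

def flash_only_image(flash_raw, ambient_raw):
    """Subtract the ambient (no-flash) raw image from the flash raw image.

    In linear raw space the difference keeps only the light contributed by
    the flash; small negative values from noise are clipped to zero.
    """
    diff = flash_raw.astype(np.float64) - ambient_raw.astype(np.float64)
    return np.clip(diff, 0.0, None)
```

The subtraction must happen in raw (linear) space: after tone mapping, pixel values are no longer additive in scene radiance, so the difference would not isolate the flash contribution.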
Neural Camera Simulators
Hao Ouyang*,
Zifan Shi*,
Chenyang Lei,
Ka Lung Law,
Qifeng Chen
CVPR, 2021
paper /
code
We present a controllable camera simulator based on deep neural networks to synthesize raw image data under different camera settings, including exposure time, ISO, and aperture.
Blind Video Temporal Consistency via Deep Video Prior
Chenyang Lei*,
Yazhou Xing*,
Qifeng Chen
NeurIPS, 2020
arXiv /
code /
project website
Applying image processing algorithms independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue,
we present a novel and general approach for blind temporal video consistency.
Video Depth Estimation by Fusing Flow-to-Depth Proposals
Jiaxin Xie,
Chenyang Lei,
Zhuwen Li,
Li Erran Li,
Qifeng Chen
IROS, 2020
arXiv /
code /
project website
We present an approach with a differentiable flow-to-depth layer for video depth estimation.
Polarized Reflection Removal with Perfect Alignment in the Wild
Chenyang Lei,
Xuhua Huang,
Mengdi Zhang,
Qiong Yan,
Wenxiu Sun,
Qifeng Chen
CVPR, 2020
arXiv /
code /
project website
We exploit polarization information and perfectly aligned image pairs to remove reflections accurately.
Fully Automatic Video Colorization with Self Regularization and Diversity
Chenyang Lei,
Qifeng Chen
CVPR, 2019
project page /
video /
code
The first dedicated video colorization method without any user input.
Services
- Program Committee/Reviewers: CVPR, ICCV, TPAMI, IJCV, AAAI, TIP, IJCAI, IROS, TVCG
Honors and Awards
- RedBird PhD Scholarship, HKUST, 2021
- SENG Academic Award for Continuing PhD students, HKUST, 2020
- Outstanding Graduate (Zhejiang University), 2018
- National Scholarship, 2017
- Texas Instruments Scholarship, 2017
- First-Class Scholarship for Outstanding Merits, 2017
- Excellent Student Award, 2016, 2017
Teaching Assistant
- COMP 4901J: Deep Learning in Computer Vision (Spring 2019)
- COMP 3031: Principles of Programming Languages (Fall 2019)
- COMP 2011: Programming with C++ (Spring 2021)
Thanks to Dr. Jon Barron for sharing the source code of his personal page.