Research Vision

Imaging enables us to perceive details of objects or scenes that are normally invisible to the human eye due to their size or distance. Computational imaging takes this further by embedding computational techniques directly into the imaging workflow, transcending traditional limitations to achieve ultra-high resolution, multi-dimensional, hyperspectral, ultra-fast, and high dynamic range imaging. This fusion of optics and computation fundamentally expands our ability to observe and understand the world.

At its core, computational imaging operates through two key components: an optical system that encodes light radiation from the target, and computational algorithms that decode this information to reconstruct the target. While powerful, this approach faces two critical challenges that currently limit practical applications: variance sensitivity and computational complexity.
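To make this encode/decode structure concrete, here is a minimal, purely illustrative sketch in Python; the convolutional blur model, noise level, and function names are assumptions chosen for brevity rather than a description of any particular system:

```python
import numpy as np

def encode(obj, psf):
    """Forward model: the optics encode the object, modeled here as a circular
    convolution with a normalized point spread function plus sensor noise."""
    meas = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
    return meas + 0.01 * np.random.randn(*meas.shape)

def decode(meas, psf, n_iter=200, step=0.5, tau=1e-3):
    """Inverse solver: gradient descent on (1/2)||A x - y||^2 + (tau/2)||x||^2,
    where A is the convolution operator defined by the PSF."""
    H = np.fft.fft2(psf)
    x = np.zeros_like(meas)
    for _ in range(n_iter):
        residual = np.real(np.fft.ifft2(np.fft.fft2(x) * H)) - meas
        grad = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(H))) + tau * x
        x -= step * grad
    return x

# Example: recover a 64x64 scene blurred by a normalized 5x5 box kernel.
obj = np.zeros((64, 64)); obj[28:36, 28:36] = 1.0
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0
recon = decode(encode(obj, psf), psf)
```

Even this toy example exhibits both challenges discussed below: the decoder assumes the point spread function is known exactly, and reconstruction requires hundreds of iterations rather than a single readout.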

The first challenge stems from sensitivity to physical variations. Computational imaging requires precise harmony between the physical system and its mathematical model. However, real-world factors—optical aberrations, mechanical misalignments, sensor noise, and unpredictable physical variations—can disrupt this harmony. Such system-computation mismatches can significantly degrade imaging performance or cause complete failure.

The second challenge involves computational inverse solving. Unlike conventional imaging, which produces images almost instantaneously, computational imaging typically relies on multiple measurements or complex iterative algorithms to reconstruct high-quality images. These processes result in significantly longer imaging times, making the technology impractical for real-time applications.

These fundamental challenges—variance sensitivity and computational complexity—currently represent the primary bottlenecks in computational imaging, constraining both performance capabilities and broader applications.

My research addresses these challenges through an innovative approach that reimagines computational imaging from the ground up. Instead of treating physical systems and numerical models as separate entities, I propose creating a seamless bridge between them through a unified imaging pipeline that inherently resolves system-computation mismatches.

This research strategy encompasses four interconnected facets:

  • Fundamental Theory: Establishing the theoretical foundations that govern underlying physical phenomena
  • Numerical System Models: Developing precise digital representations that bridge theoretical formulations with actual optical measurements
  • Computational Inverse Solving: Implementing sophisticated algorithms to extract object states and system parameters from optical measurements by inverting the forward model
  • Interdisciplinary Applications: Exploring practical implementations of this unified pipeline across various fields

These elements work together dynamically, with advances in one area informing and improving the others. This interconnected approach converges into a new paradigm: Differentiable Imaging.

Differentiable Imaging employs a two-pronged approach to handle physical uncertainties: it constructs numerical models that explicitly account for deterministic uncertainties (such as known optical aberrations and system misalignments), while developing optimization strategies to handle stochastic uncertainties (such as sensor noise and environmental variations) through numerical methods or neural networks.
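As a minimal sketch of the first prong, consider a free-space propagation forward model with an uncertain sample-to-sensor distance; the PyTorch code below treats that distance as just another differentiable parameter and estimates it jointly with the object. All names, values, and the specific propagation model are illustrative assumptions, not a description of a particular instrument.

```python
import torch

def propagate(field, z, wavelength=633e-9, dx=4e-6):
    """Angular-spectrum propagation over distance z; differentiable in both the
    complex field and the scalar distance z."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(1.0 / wavelength**2 - FX**2 - FY**2, min=0.0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * z))

# Simulated intensity measurement from an unknown object at an imprecisely known distance.
torch.manual_seed(0)
true_obj, true_z = torch.rand(64, 64), torch.tensor(2.3e-3)
meas = propagate(torch.complex(true_obj, torch.zeros_like(true_obj)), true_z).abs() ** 2

# Jointly optimize the object estimate and the uncertain distance via automatic differentiation.
obj = torch.zeros(64, 64, requires_grad=True)
z = torch.tensor(2.0e-3, requires_grad=True)          # deliberately misspecified start
opt = torch.optim.Adam([{"params": [obj], "lr": 1e-2},
                        {"params": [z],   "lr": 1e-6}])
for _ in range(500):
    opt.zero_grad()
    pred = propagate(torch.complex(obj, torch.zeros_like(obj)), z).abs() ** 2
    loss = torch.mean((pred - meas) ** 2)
    loss.backward()
    opt.step()
```

Stochastic uncertainties such as sensor noise enter through the loss and the optimization strategy rather than the forward model, which is also where learned priors or neural networks can be attached.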

This comprehensive approach to uncertainty management offers two key advantages:

  • Flexibility: The automatic differentiation principles underlying this approach provide unprecedented freedom in solving inverse imaging problems, enabling flexible design of both imaging systems and computational algorithms. This allows co-design of optical systems and computational algorithms in an end-to-end manner.

  • Deep Learning Integration: By sharing common ground with neural network backpropagation, differentiable imaging seamlessly integrates with deep learning techniques while maintaining explicit physical constraints. This combination makes differentiable imaging particularly powerful for addressing both core challenges of computational imaging.

By bridging the gap between physical systems and computational models while leveraging modern machine learning techniques, we can create imaging systems that are both more robust to real-world variations and capable of faster image reconstruction.

In essence, differentiable imaging represents an advanced machine learning framework specifically designed for imaging systems—one that embeds physical models into the learning architecture, enabling end-to-end optimization while maintaining physical interpretability and theoretical guarantees that conventional deep learning approaches cannot provide.
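One way to picture this is an unrolled reconstruction network in which an explicit forward operator supplies the physics and small learned modules supply the data-driven refinement. The following PyTorch sketch is schematic, with all module names, sizes, and hyperparameters chosen for illustration:

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """Alternates a physics-based data-consistency step on ||A x - y||^2 with a small
    learned residual refinement, so the explicit forward model A constrains every stage."""

    def __init__(self, forward_op, adjoint_op, n_stages=5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op          # operators on (N, 1, H, W) tensors
        self.step = nn.Parameter(torch.full((n_stages,), 0.5))
        self.refine = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_stages)
        ])

    def forward(self, y):
        x = self.At(y)                                    # physics-based initialization
        for k, net in enumerate(self.refine):
            x = x - self.step[k] * self.At(self.A(x) - y)  # gradient / data-consistency step
            x = x + net(x)                                # learned residual refinement
        return x
```

Because the forward operator appears explicitly in every stage, end-to-end training tunes only the step sizes and refinement modules, keeping reconstructions tied to the measurement physics in a way a generic black-box network is not.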


Past Research

Building upon computational imaging’s foundation in encoding light radiation fields, my past research has focused on scalar wave field imaging and its applications. To address key challenges in wave field imaging, I have developed several complementary research strategies: digital image-processing techniques that provide algorithmic solutions for identifying and retrieving key system uncertainties; physics-informed methods that incorporate domain-specific physics priors and construct physics-aware neural networks; and differentiable imaging approaches that effectively manage physical uncertainties.

These strategic investigations have resulted in several significant contributions:

Uncertainty-Aware Computing

Advances algorithmic solutions to identify and compensate for key system uncertainties, particularly misalignment. Notable achievements include misalignment correction in Fourier Ptychographic Microscopy and alignment-free multi-plane phase retrieval. These works demonstrate that while precise optical systems are critical for achieving optimal performance in computational imaging, system uncertainties can be compensated for by exploiting data redundancy.

Physics-Aware Computing

Develops computational decoding algorithms that leverage physical priors to enhance imaging performance. Key contributions include:

  • Joint Space-Time Framework: Utilizes spatial-temporal physical priors for image reconstruction, enabling precise 3D particle tracking and flow velocity measurements crucial for fluid science, aerosol science, and medical applications.

  • Model-Based Neural Networks: Addresses the challenge of limited experimental training data through physics-informed deep learning architectures for 3D imaging systems, providing innovative solutions to data scarcity in optical imaging applications.

Differentiable Imaging

Pioneers novel methodologies to manage system-computation mismatches and improve computational efficiency. Significant advances include:

  • Single-Shot Complex Field Imaging: Successfully models unpredictable sample-sensor distance variations
  • Pixel Super-Resolution Lensless Imaging: Addresses errors in multiple measurements
  • High-Density Imaging: Resolves multi-scattering light-object interactions
  • Uncertainty-Aware Fourier Ptychography: Tackles various system-level challenges including light source variations, optical element aberrations, and sensor-induced data quality issues

Broader Contributions

Beyond my primary research focus, my work has extended into multiple domains of optics and imaging. I have contributed to fundamental theory development, advanced light field photography, and developed novel holographic display techniques. My exploration of diverse imaging modalities has included significant work in phase imaging and ptychography.

I have also engaged extensively with industry and government through impactful collaborations. Notable partnerships include work with Samsung on “Coded Aperture Photography” and “Wearable Display: Head-Mounted Devices.” Additionally, I have made substantial contributions to specialized areas including ray-tracing engines for self-calibrated metrology, end-to-end lens design, and scattering imaging.

This breadth of research and diverse experience has provided me with comprehensive expertise in computational imaging.

Key Insights

My research framework employs a problem-driven strategy that connects fundamental science with practical implementation. This approach systematically tackles computational imaging challenges through rigorous methodology development, targeting key applications in system design, sensing/metrology, biomedical imaging, and fluid dynamics.

Through my research journey, three crucial insights have emerged:

  • The foundation of effective computational imaging lies in precise optical systems and accurate modeling
  • Computational algorithms must be deeply rooted in physical principles to achieve optimal performance
  • Managing the interplay between physical systems and computational methods is essential for both imaging quality and true co-design implementation


Selected Open-Source Code

Open-source code repositories are released under the GPL v2 and MIT licenses.