Imaging enables us to perceive details of objects or scenes that are normally invisible to the human eye due to their size or distance. Computational imaging takes this further by embedding computational techniques directly into the imaging workflow, transcending traditional limitations to achieve ultra-high resolution, multi-dimensional, hyperspectral, ultra-fast, and high dynamic range imaging. This fusion of optics and computation fundamentally expands our ability to observe and understand the world.
At its core, computational imaging operates through two key components: an optical system that encodes light radiation from the target, and computational algorithms that decode this information to reconstruct the target. While powerful, this approach faces two critical challenges that currently limit practical applications: variance sensitivity and computational complexity.
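The encode/decode structure can be sketched concretely. The following is a minimal, hypothetical one-dimensional example (not any specific system): a known two-tap blur plays the role of the optical encoder, and gradient descent on a least-squares data-fit term plays the role of the computational decoder.

```python
# Minimal sketch of the two-stage computational-imaging pipeline.
# All values are illustrative; real systems are 2-D and far larger.

def forward(x, psf):
    """Encode: blur the scene x with a point-spread function psf."""
    n, k = len(x), len(psf)
    return [sum(psf[j] * x[i - j] for j in range(k) if 0 <= i - j < n)
            for i in range(n)]

def reconstruct(y, psf, steps=2000, lr=0.5):
    """Decode: gradient descent on the data-fit term 0.5*||Ax - y||^2."""
    x = [0.0] * len(y)
    for _ in range(steps):
        r = [a - b for a, b in zip(forward(x, psf), y)]   # residual Ax - y
        # gradient is A^T r, i.e. correlation of the residual with the psf
        g = [sum(psf[j] * r[i + j] for j in range(len(psf)) if i + j < len(r))
             for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

scene = [0.0, 1.0, 0.0, 2.0, 0.0]   # hypothetical ground-truth scene
psf = [0.5, 0.5]                    # simple two-tap blur (the "optics")
y = forward(scene, psf)             # encoded measurement
x_hat = reconstruct(y, psf)         # computational decoding
```

The iterative decoder already hints at the second challenge below: recovering the scene takes thousands of forward-model evaluations, whereas a conventional camera needs none.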
The first challenge stems from sensitivity to physical variations. Computational imaging requires precise harmony between the physical system and its mathematical model. However, real-world factors—optical aberrations, mechanical misalignments, sensor noise, and unpredictable physical variations—can disrupt this harmony. Such system-computation mismatches can significantly degrade imaging performance or cause complete failure.
The second challenge involves computational inverse solving. Unlike conventional imaging, which produces images almost instantaneously, computational imaging typically relies on multiple measurements or complex iterative algorithms to reconstruct high-quality images. These processes result in significantly longer imaging times, making the technology impractical for real-time applications.
These fundamental challenges—variance sensitivity and computational complexity—currently represent the primary bottlenecks in computational imaging, constraining both performance capabilities and broader applications.
My research addresses these challenges through an innovative approach that reimagines computational imaging from the ground up. Instead of treating physical systems and numerical models as separate entities, I propose creating a seamless bridge between them through a unified imaging pipeline that inherently resolves system-computation mismatches.
This research strategy encompasses four interconnected facets that work together dynamically, with advances in one area informing and improving the others. Their convergence gives rise to a new paradigm: Differentiable Imaging.
Differentiable Imaging employs a two-pronged approach to handle physical uncertainties: it constructs numerical models that explicitly account for deterministic uncertainties (such as known optical aberrations and system misalignments), while developing optimization strategies to handle stochastic uncertainties (such as sensor noise and environmental variations) through numerical methods or neural networks.
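As a toy illustration of the deterministic side of this strategy (all names, values, and the model itself are hypothetical), the forward model below carries an explicit misalignment parameter `theta`. A calibration shot of a known target recovers `theta` by gradient descent, and the calibrated model is then inverted to reconstruct an unknown scene.

```python
# Toy sketch: a forward model with an explicit uncertainty parameter
# theta (a sub-pixel shift between two taps). Illustrative only.

def model(x, theta):
    """Forward model A(theta): out[i] = (1-theta)*x[i] + theta*x[i-1]."""
    return [(1 - theta) * x[i] + (theta * x[i - 1] if i > 0 else 0.0)
            for i in range(len(x))]

# --- Calibration: recover theta from a measurement of a known target ---
target = [1.0, 0.0, 1.0, 0.0, 1.0]     # known calibration pattern
true_theta = 0.3                        # the system's actual misalignment
calib = model(target, true_theta)       # measured calibration frame

theta = 0.0
for _ in range(200):                    # gradient descent on theta alone
    r = [a - b for a, b in zip(model(target, theta), calib)]
    # d(model[i])/d(theta) = target[i-1] - target[i]
    grad = sum(ri * ((target[i - 1] if i > 0 else 0.0) - target[i])
               for i, ri in enumerate(r))
    theta -= 0.1 * grad

# --- Reconstruction: invert the *calibrated* model on an unknown scene ---
scene = [0.0, 2.0, 0.0, 1.0, 0.0]
y = model(scene, true_theta)
x = [0.0] * len(y)
for _ in range(500):                    # gradient descent on the scene
    r = [a - b for a, b in zip(model(x, theta), y)]
    g = [(1 - theta) * r[i] + (theta * r[i + 1] if i + 1 < len(r) else 0.0)
         for i in range(len(x))]
    x = [xi - 0.5 * gi for xi, gi in zip(x, g)]
```

Because the misalignment appears as an ordinary differentiable parameter of the model, calibrating it is the same kind of optimization as reconstructing the image; in practice both gradients would come from automatic differentiation rather than being derived by hand as here.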
This comprehensive approach to uncertainty management offers two key advantages:
Flexibility: The automatic differentiation principles underlying this approach provide unprecedented freedom in solving inverse imaging problems, enabling flexible design of both imaging systems and computational algorithms. This allows co-design of optical systems and computational algorithms across the full physical pipeline.
Deep Learning Integration: By sharing common ground with neural network backpropagation, differentiable imaging seamlessly integrates with deep learning techniques while maintaining explicit physical constraints. This combination makes differentiable imaging particularly powerful for addressing both core challenges of computational imaging.
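A minimal sketch of this shared machinery (a hypothetical example; central finite differences stand in below for the automatic differentiation a framework such as PyTorch or JAX would provide): the reconstruction is a small "network" of unrolled gradient steps through the physical model, and its step-size parameter is itself trained by descending the final reconstruction error.

```python
# Toy sketch of "learning through the physics": unrolled optimization
# with a trainable hyperparameter. All names and values are illustrative.

def forward(x, psf):
    """Known physical model: 1-D blur of the scene x."""
    return [sum(psf[j] * x[i - j] for j in range(len(psf)) if 0 <= i - j)
            for i in range(len(x))]

def unrolled(y, psf, alpha, K=20):
    """Reconstruction 'network': K unrolled gradient steps of size alpha."""
    x = [0.0] * len(y)
    for _ in range(K):
        r = [a - b for a, b in zip(forward(x, psf), y)]
        g = [sum(psf[j] * r[i + j] for j in range(len(psf)) if i + j < len(r))
             for i in range(len(x))]
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

def train_loss(alpha, x_true, y, psf):
    """Outer objective: error of the unrolled reconstruction vs. truth."""
    x_hat = unrolled(y, psf, alpha)
    return sum((a - b) ** 2 for a, b in zip(x_hat, x_true))

psf = [0.5, 0.5]
x_true = [0.0, 1.0, 0.0, 2.0, 0.0]      # one hypothetical training pair
y = forward(x_true, psf)

alpha, eps = 0.1, 1e-4
for _ in range(30):                      # "train" alpha by gradient descent
    d = (train_loss(alpha + eps, x_true, y, psf)
         - train_loss(alpha - eps, x_true, y, psf)) / (2 * eps)
    alpha -= 0.02 * d
```

The outer loop is exactly backpropagation through the imaging physics; replacing the scalar `alpha` with the weights of a learned prior, or with lens parameters, gives the joint physics-plus-network training that differentiable imaging enables.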
By bridging the gap between physical systems and computational models while leveraging modern machine learning techniques, we can create imaging systems that are both more robust to real-world variations and capable of faster image reconstruction.
In essence, differentiable imaging represents an advanced machine learning framework specifically designed for imaging systems—one that embeds physical models into the learning architecture, enabling joint optimization across physics and computation while maintaining physical interpretability and theoretical guarantees that conventional deep learning approaches cannot provide.
My research in computational imaging has advanced along three complementary tracks, each addressing the core challenges outlined above:
The first track develops algorithmic solutions that identify and compensate for key system uncertainties, particularly optical misalignment. Notable contributions include misalignment correction in Fourier Ptychographic Microscopy and alignment-free multi-plane phase retrieval, demonstrating that system uncertainties can be compensated through data redundancy without sacrificing imaging performance.
The second track integrates physical priors into computational reconstruction to push beyond the limits of purely data-driven approaches. Key contributions include:
Joint Space-Time Framework: Utilizes spatial-temporal physical priors for image reconstruction, enabling precise 3D particle tracking and flow velocity measurements crucial for fluid science, aerosol science, and medical applications.
Model-Based Neural Networks: Addresses the challenge of limited experimental training data through physics-informed deep learning architectures for 3D imaging systems, providing innovative solutions to data scarcity in optical imaging applications.
The third track develops principled methods to manage system-computation mismatches and improve computational efficiency.
Beyond these tracks, my work extends into fundamental theory, light field photography, holographic displays, phase imaging, and ptychography. Industry collaborations include projects with Samsung on coded aperture photography and wearable displays, as well as contributions to ray-tracing-based metrology, differentiable lens design, and scattering imaging.
Across these research threads, three core insights have emerged: