This post was originally published in Nature Research Communities.
After completing my BS in computer science, I transitioned to optical engineering for graduate school—but my body seemed to rebel against the very environment I’d chosen. Every time I entered the lab, waves of nausea, headaches, and dizziness would overwhelm me. After months of this mysterious suffering, I reached a dramatic conclusion: I must be allergic to lasers.
The truth was less exotic but equally challenging. I simply lacked the foundational optics knowledge my peers took for granted. Recognizing my struggles, my advisor tried to shield me with computational tasks while minimizing experimental work. But in optical engineering, experiments are inevitable, and I had to face them despite my fears and physical discomfort.
The challenges compounded relentlessly. SNU’s safety protocols prohibited solo experiments, the lab was far from our office, and as the only woman on the team, I struggled to find the three people typically required for my experiments. One weekend stands out among many difficult moments. Desperate to prove myself, I slipped into the lab alone. My plan was simple: complete some measurements and demonstrate that I could handle this work. Instead, disaster struck. I inserted the wrong screw into the optical table—directly into the optical axis. The metric and inch screws had been mixed up, and I couldn’t remove it. Hours of frantic effort yielded nothing. Surrounded by the lab’s black curtains, I broke down completely, crying alone in that dark room without collecting a single data point.
This was just one example of countless struggles during those years. The cruel irony wasn’t lost on me: I had chosen computational imaging as my research focus—a field that barely had a name at the time—which demanded not just computational expertise but exceptional experimental skills. I possessed neither. Those tears in the dark lab seemed to confirm what I feared most: I had chosen the wrong path.
My struggles extended far beyond the optical bench. For years, I found myself caught between two worlds, neither of which seemed to fully accept me. In the optics community, I was the computer scientist who couldn’t align a laser properly. In the computational community, I was the one obsessed with physics that others preferred to abstract away.
The search for appropriate computational algorithms to perform inverse imaging became its own odyssey. Traditional optimization methods felt disconnected from the physical reality of light. Machine learning approaches treated imaging as just another dataset, ignoring the rich physics that governs how images form. I spent countless hours implementing algorithms that worked beautifully in simulation but failed catastrophically when faced with real experimental data.
Each failure taught me something crucial: the gap between our computational models and physical reality wasn’t just a technical challenge—it was a fundamental philosophical problem. We were trying to force the messy, uncertain physical world into neat computational boxes, and the world was refusing to cooperate.
The shift began quietly. When I first explored differentiable holography [1] in late 2020, my grasp of uncertainties was far shallower than it is today. Back then, the field appeared to me as little more than a clever technical trick: an approach to optimizing holographic reconstructions via gradient descent. I drew inspiration from Yann LeCun’s 2018 declaration, “Deep Learning est mort. Vive Differentiable Programming!” (“Deep learning is dead. Long live differentiable programming!”). His vision of differentiable programming as a unifying framework resonated deeply, suggesting a way to bridge the computational and physical worlds I had been struggling to connect.
*Figure: Yann LeCun’s 2018 post: “Deep Learning est mort. Vive Differentiable Programming!” [2]*
This concept of differentiable programming planted the seed for differentiable imaging [3]. If we could make neural networks differentiable, why not entire imaging systems? Why not propagate gradients through optical elements, through light propagation, through the entire image formation process? This vision drove my initial work on differentiable holography, though I didn’t yet fully grasp its implications.
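To make this concrete, here is a minimal sketch in JAX of what “gradients through light propagation” means: a toy angular-spectrum forward model maps a phase object to an in-line hologram intensity, and automatic differentiation carries gradients back through that physics to the object estimate. All parameter values are arbitrary illustrations; this toy conveys the principle and is not the published ∂H implementation [1].

```python
import jax
import jax.numpy as jnp

# Toy in-line holography: a phase object propagates to the sensor via an
# angular-spectrum transfer function, and only intensity is recorded.
# Wavelength, pixel pitch, and distance are arbitrary example values.
N, wl, pitch, z = 256, 532e-9, 4e-6, 5e-3

fx = jnp.fft.fftfreq(N, d=pitch)
FX, FY = jnp.meshgrid(fx, fx)
# k_z, clamped at zero so evanescent frequencies do not produce NaNs
KZ = 2 * jnp.pi * jnp.sqrt(jnp.maximum(1.0 / wl**2 - FX**2 - FY**2, 0.0))
H = jnp.exp(1j * KZ * z)

def forward(phase):
    """Physical forward model: phase object -> propagated field -> intensity."""
    field = jnp.exp(1j * phase)
    sensor = jnp.fft.ifft2(jnp.fft.fft2(field) * H)
    return jnp.abs(sensor) ** 2

def loss(phase, measured):
    return jnp.mean((forward(phase) - measured) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # gradients flow through the optics model

true_phase = 0.5 * jax.random.normal(jax.random.PRNGKey(0), (N, N))
measured = forward(true_phase)     # stand-in for a captured hologram
phase = jnp.zeros((N, N))          # initial guess
for _ in range(200):               # plain gradient descent
    phase = phase - 1e-1 * grad_fn(phase, measured)
```

Nothing in this loop is specific to holography; any imaging system whose forward model can be written in a differentiable language can, in principle, be inverted the same way.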
Only between 2020 and 2022, as I delved deeper into making holographic systems differentiable, did the role of uncertainties begin to reveal itself. Each attempt to create end-to-end differentiable systems forced me to confront the same uncertainties that had plagued my experimental work years earlier. But now, instead of seeing them as obstacles, I began to recognize them as fundamental features of reality.
The revelation that transformed my perspective was both simple and profound: uncertainties are the essence of the universe. From quantum mechanics to thermodynamics, from biological systems to cosmic evolution, uncertainty isn’t a flaw in our measurements or models—it’s a fundamental property of reality itself.
This understanding completely reframed my approach to computational imaging. Traditional methods treat uncertainties as noise to be minimized, errors to be corrected, or nuisances to be eliminated. But what if we’re fighting against the very nature of the universe? What if, instead of trying to eliminate uncertainty, we designed systems that expected it, embraced it, and even leveraged it?
When an editor from Advanced Physics Research invited me to contribute to their newly launched journal, I saw an opportunity to articulate this evolving understanding. What emerged was a perspective that transformed my years of failure and frustration into the foundation for a new approach to computational imaging.
As my understanding deepened, I noticed a troubling trend in our field. Too many researchers were falling into what I call the reductionist trap: believing that computational imaging equals code. This dangerous equation,

\[\text{CODE} \overset{?}{\equiv} \text{COMPUTATIONAL IMAGING},\]

has consequences that extend far beyond academic debates.
Let me illustrate this problem through a scenario I’ve witnessed too many times. A talented researcher develops an imaging algorithm that works beautifully in their lab. They share their code openly. But when others try to use it, the code fails catastrophically. The accusations begin: the original researcher must be incompetent, or worse, they must have deliberately hidden bugs in their code. Careers are questioned, reputations damaged, contributions dismissed.
What the accusers fail to understand is that computational imaging code is intimately tied to the specific physical system it was designed for. Consider this: when a misaligned mirror shifts the point spread function by just a few micrometers, when thermal expansion changes the optical path length as the lab heats up during the day, when a different camera model introduces its own unique noise fingerprint—the code cannot magically adapt to these changes. A perfectly functioning algorithm for one setup becomes useless in another, not because of coding errors, but because the physical world has changed while the code remains frozen in its original assumptions.
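To see how little physical drift it takes, consider a toy experiment, assuming nothing more exotic than a Gaussian blur and textbook Wiener deconvolution (no specific published pipeline is implied): shift the assumed PSF by a single pixel relative to the one that actually formed the image, roughly what a few micrometers of misalignment amounts to on a typical sensor, and the reconstruction error grows sharply.

```python
import jax.numpy as jnp

# Model-mismatch toy: Wiener-deconvolve with a PSF shifted by one pixel
# relative to the PSF that actually blurred the scene. Values are arbitrary.
N = 128
scene = jnp.zeros((N, N)).at[40:90, 40:90].set(1.0)   # simple test scene
g = jnp.exp(-((jnp.arange(N) - N // 2) ** 2) / 18.0)
psf = jnp.outer(g, g)
psf = psf / psf.sum()                                 # Gaussian blur PSF
psf_shifted = jnp.roll(psf, 1, axis=0)                # "misaligned" by 1 px

def conv(img, kernel):
    K = jnp.fft.fft2(jnp.fft.ifftshift(kernel))
    return jnp.real(jnp.fft.ifft2(jnp.fft.fft2(img) * K))

def wiener(y, kernel, eps=1e-3):
    K = jnp.fft.fft2(jnp.fft.ifftshift(kernel))
    return jnp.real(jnp.fft.ifft2(jnp.fft.fft2(y) * jnp.conj(K) / (jnp.abs(K) ** 2 + eps)))

blurred = conv(scene, psf)
err_matched = jnp.mean((wiener(blurred, psf) - scene) ** 2)
err_shifted = jnp.mean((wiener(blurred, psf_shifted) - scene) ** 2)  # substantially larger
print(float(err_matched), float(err_shifted))
```

The code contains no bug in either case; only the assumption baked into `psf` has drifted away from the physical reality that produced `blurred`.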
Traditional microscopists learned this lesson long ago through a simple principle: when you change your sample, you must refocus your microscope. The physical world demands constant adaptation. Yet we expect static code to somehow transcend this fundamental reality. The painful truth is that it can’t. True computational imaging requires a deep understanding of both domains—not just how to write elegant algorithms, but how light actually propagates through imperfect optics, how real sensors deviate from ideal models, and how uncertainties cascade through the entire system.
Here’s where the story takes an unexpected turn. The very complexity that causes so much frustration might also point toward its own solution. The inseparable nature of system and code in computational imaging actually provides a natural protection for intellectual property. Unlike pure software that can be easily copied and deployed anywhere, computational imaging solutions remain tied to their birthplace. The most elegant algorithm becomes merely decorative without the specific optical design it was created for, the precise calibration procedures that make it sing, the deep understanding of physical constraints that guided every line of code. In a way, this protects years of hard-won knowledge—you cannot simply steal the code and expect it to work.
But what if we could change this fundamental limitation? Imagine those same researchers who once hurled accusations being able to deploy imaging systems that automatically adapt to their specific setup. Picture code that isn’t static instructions but a living system that teaches itself how to work in each unique physical context. This transformation would turn computational imaging’s apparent weakness—its dependence on physical specifics—into its greatest strength. Instead of requiring every user to master both optics and computation, imaging systems could observe their own behavior, quantify their own uncertainties, and optimize themselves for whatever reality they encounter. The same algorithm that fails catastrophically in traditional approaches could discover how to succeed in any reasonable physical setup. It would democratize access to advanced imaging techniques, allowing researchers to focus on their scientific questions rather than wrestling with system-specific implementations.
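Here is one hedged sketch of what such self-calibration could look like, extending the earlier holography toy: the uncertain propagation distance z becomes just another differentiable parameter, refined jointly with the object directly from the measurement. The step sizes and values are illustrative assumptions, not a recipe from the cited papers.

```python
import jax
import jax.numpy as jnp

# Self-calibration toy: the propagation distance z is only approximately
# known, so it is optimized jointly with the object phase.
N, wl, pitch = 256, 532e-9, 4e-6
fx = jnp.fft.fftfreq(N, d=pitch)
FX, FY = jnp.meshgrid(fx, fx)
KZ = 2 * jnp.pi * jnp.sqrt(jnp.maximum(1.0 / wl**2 - FX**2 - FY**2, 0.0))

def forward(phase, z):
    field = jnp.exp(1j * phase)
    sensor = jnp.fft.ifft2(jnp.fft.fft2(field) * jnp.exp(1j * KZ * z))
    return jnp.abs(sensor) ** 2

def loss(params, measured):
    return jnp.mean((forward(params["phase"], params["z"]) - measured) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # gradients for the whole parameter pytree

measured = forward(0.5 * jax.random.normal(jax.random.PRNGKey(0), (N, N)), 5.0e-3)
params = {"phase": jnp.zeros((N, N)), "z": jnp.array(4.5e-3)}  # z mis-guessed by 0.5 mm
lrs = {"phase": 1e-1, "z": 1e-12}  # hand-tuned toy step sizes
for _ in range(500):
    grads = grad_fn(params, measured)
    params = {k: params[k] - lrs[k] * grads[k] for k in params}
```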
My years of struggle had taught me this lesson viscerally. Every misaligned component that ruined a day’s work, every measurement that defied simulation, every algorithm that worked perfectly in theory but failed in practice—each was teaching me that the physical world refuses to be contained in our neat computational models. The uncertainties I encountered weren’t bugs in the system but features of reality itself. Now I see that differentiable imaging could transform this hard-won understanding into systems that spare others from enduring the same painful learning process, creating a future where computational imaging truly serves all who need it.
The path forward crystallized as I continued developing differentiable holography and began to see its broader implications. Differentiable programming wasn’t just about optimizing neural networks—it was about creating a language that could describe both physical and computational processes in a unified framework. This realization sparked the broader vision of differentiable imaging.
Since 2020, this journey has taken me across three countries and four regions, where I’ve had the privilege of meeting wonderful scientists from diverse communities. Each encounter brought unique inspirations. These cross-pollinations of ideas from different cultures and disciplines enriched my understanding immeasurably, showing me how the same fundamental principles could be viewed through wonderfully different lenses.
*Figure: The 2024 Nobel Prize in Physics announcement.*
When the 2024 Nobel Physics Prize was announced recognizing Geoffrey Hinton’s work, I experienced a moment of delightful recognition. As I watched the news coverage, Hinton’s face seemed strangely familiar. Then it hit me: I had been using his picture in my presentation slides since 2023! There it was, in my slide explaining how deep learning was merely a special case of differentiable programming. The recognition felt like a cosmic validation—here was the work I had been citing to explain our foundational concepts, now honored with science’s highest award.
*Figure: One of the slides from my iCANX Youth Talk in April 2023 [4].*
This moment brought everything full circle. Differentiable imaging represents the natural evolution of these ideas—not just applying gradients to optimize parameters, but creating systems where information flows seamlessly between physical and computational domains. The Nobel recognition validated what many of us had long believed: that the fusion of computation and physics wasn’t just a technical trick, but a fundamental new way of understanding and interacting with the world.
Differentiable imaging represents more than just another optimization technique. It creates a closed loop between physical reality and computational models, enabling systems that learn from every photon, every measurement, every uncertainty. Instead of treating noise and variations as enemies to be defeated, these systems incorporate them as information to be processed and understood [1, 5–7].
This approach transforms imaging from a one-way capture process into a dynamic conversation between the system and its environment. A differentiable imaging system can optimize its optical elements based on computational feedback, adjust its algorithms based on physical constraints, and quantify its uncertainties to make informed decisions about what it can and cannot resolve.
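One deliberately crude way to sketch that kind of self-knowledge is a bootstrap-style estimate: re-run the reconstruction under several simulated noise draws and read the per-pixel spread as a map of what the system cannot resolve reliably. This is a generic statistical device assumed here purely for illustration, not a method from the cited papers.

```python
import jax
import jax.numpy as jnp

# Bootstrap-style uncertainty sketch: reconstruct under several noise draws;
# large per-pixel spread flags unreliable regions. Toy values throughout.
N = 64
fx = jnp.fft.fftfreq(N)
FX, FY = jnp.meshgrid(fx, fx)
H = jnp.exp(-1j * 50.0 * jnp.pi * (FX**2 + FY**2))  # toy Fresnel-like chirp

def forward(phase):
    return jnp.abs(jnp.fft.ifft2(jnp.fft.fft2(jnp.exp(1j * phase)) * H)) ** 2

def reconstruct(measured, steps=200, lr=1e-1):
    grad_fn = jax.jit(jax.grad(lambda p: jnp.mean((forward(p) - measured) ** 2)))
    phase = jnp.zeros((N, N))
    for _ in range(steps):
        phase = phase - lr * grad_fn(phase)
    return phase

clean = forward(0.3 * jax.random.normal(jax.random.PRNGKey(0), (N, N)))
recs = [reconstruct(clean + 0.01 * jax.random.normal(jax.random.PRNGKey(i), clean.shape))
        for i in range(8)]
sigma = jnp.std(jnp.stack(recs), axis=0)  # per-pixel uncertainty map
```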
I must emphasize that differentiable imaging research is still in its early stages. We’re at the beginning of what I believe will be a transformative journey for the entire field. The frameworks are still being developed, the tools are still being refined, and the full potential remains largely unexplored. Yet even in this nascent stage, I’m convinced that differentiable imaging offers solutions to many of the fundamental issues that have plagued computational imaging research for decades. It addresses the rigid separation between optics and computation, the inability to adapt to changing conditions, the challenges of uncertainty quantification, and the barriers between different imaging modalities.
The beauty of this framework lies in its philosophical alignment with the nature of reality itself. By acknowledging that uncertainty is fundamental, not incidental, we stop fighting against the universe and start working with it. Whether we’re designing microscopes that adapt to biological samples, cameras that compensate for atmospheric turbulence, or medical imaging systems that account for patient movement, the principles remain the same: embrace uncertainty, make everything differentiable, and let the system learn.
Looking forward, our recent papers explore how this approach [8], combined with concepts like digital twins, could create truly autonomous imaging ecosystems. These systems wouldn’t just capture images—they would understand their own limitations, predict potential problems, and optimize themselves for each unique scenario. While we’re only scratching the surface of what’s possible, the early results suggest a future where imaging systems are as adaptive and intelligent as the researchers who use them.
As I write this, I can’t help but think back to all those moments of frustration—not just that weekend with the stuck screw, but the years of feeling caught between communities, of algorithms that wouldn’t converge, of experiments that wouldn’t align. Each failure contained a lesson about the fundamental nature of uncertainty in our universe.
My “laser allergy” was real, just not in the way I imagined. It was an allergy to the rigid, deterministic view of imaging systems that pretends uncertainties don’t exist. The journey from those early struggles to differentiable imaging has been one of learning to embrace what I once tried to avoid.
By making systems differentiable and adaptive, we’re not just solving technical problems. We’re aligning our technology with the fundamental nature of reality—a reality where uncertainty isn’t a bug but a feature, where adaptation isn’t optional but essential, and where the most profound insights often emerge from our deepest struggles.
The tears in that dark lab have transformed into a vision for systems that expect the unexpected, that learn from chaos, and that find clarity not by eliminating uncertainty but by embracing it as the essence of the universe itself.
[1] Ni Chen, Congli Wang, Wolfgang Heidrich, “∂H: Differentiable Holography,” Laser & Photonics Reviews, 2023.
[2] Yann LeCun’s Facebook post: https://www.facebook.com/yann.lecun/posts/10155003011462143
[3] Ni Chen, Liangcai Cao, Ting-Chung Poon, Byoungho Lee, Edmund Y. Lam, “Differentiable Imaging: A New Tool for Computational Optical Imaging,” Advanced Physics Research, 2023.
[4] iCANX Youth Talks Vol. 8: https://ueurmavzb.vzan.com/v3/course/alive/754913272
[5] Ni Chen, Yang Wu, Chao Tan, Liangcai Cao, Jun Wang, Edmund Y. Lam, “Uncertainty-Aware Fourier Ptychography,” Light: Science & Applications, 2025.
[6] Ni Chen, Edmund Y. Lam, “Differentiable Pixel-Super-Resolution Lensless Imaging,” Optics Letters, 2025.
[7] Yang Wu, Jun Wang, Sigurdur Thoroddsen, Ni Chen, “Single-Shot High-Density Volumetric Particle Imaging Enabled by Differentiable Holography,” IEEE Transactions on Industrial Informatics, 2024.
[8] Ni Chen, David J. Brady, Edmund Y. Lam, “Differentiable Imaging: Progress, Challenges, and Outlook,” Advanced Devices & Instrumentation, 2025.