Light: Science & Applications 14, Article number: 139 (2025)
Abstract
Fringe projection profilometry, a powerful technique for three-dimensional (3D) imaging and measurement, has been revolutionized by deep learning, achieving speeds of up to 100,000 frames per second (fps) while preserving high resolution. This advancement extends the technique to high-speed transient scenarios, opening new possibilities for ultrafast 3D measurements.
Fringe projection profilometry (FPP) is a widely adopted three-dimensional (3D) imaging technique with extensive applications in industrial processes such as additive manufacturing and semiconductor inspection1,2,3,4. It is known for its non-contact nature, high precision, flexibility, and ability to perform full-field measurements. The fundamental principle of FPP is straightforward, relying on a well-defined pinhole model of the imaging system and utilizing triangulation to achieve 3D measurements5. This principle has been extensively employed in various 3D measurement techniques6,7,8. What sets FPP apart is its ability to provide high-precision, full-field 3D measurements by effectively utilizing the phase information embedded in periodic grayscale fringe patterns (e.g., sinusoidal fringes). By analyzing the deformation of the fringes on the object’s surface, the phase distribution can be accurately extracted, and the object’s topography can then be reconstructed through proper calibration.
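To make the phase-extraction step concrete, the following is a minimal sketch of the classical N-step phase-shifting algorithm commonly used in FPP (not the specific pipeline of any one paper): given N fringe images with phase shifts of 2π/N, the wrapped phase is recovered from sine and cosine projections of the intensity signal. The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def wrapped_phase_n_step(images):
    """Recover the wrapped phase from N phase-shifted sinusoidal
    fringe images I_k = A + B*cos(phi + 2*pi*k/N), N >= 3.
    `images` has shape (N, H, W); returns phase in (-pi, pi]."""
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n
    # Sine/cosine projections: sum_k I_k*sin(d_k) = -(N*B/2)*sin(phi),
    #                          sum_k I_k*cos(d_k) =  (N*B/2)*cos(phi)
    num = np.tensordot(np.sin(shifts), images, axes=1)
    den = np.tensordot(np.cos(shifts), images, axes=1)
    return np.arctan2(-num, den)
```

Because the arctangent is evaluated per pixel, the result is inherently wrapped into a 2π range, which is exactly why FPP pipelines need a subsequent phase-unwrapping stage.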
Although the principle of FPP is relatively simple, its practical performance is constrained by limitations in physical devices and algorithms. These constraints lead to one of FPP’s fundamental challenges: the trade-off between resolution and speed. For example, achieving higher measurement accuracy often requires capturing multiple images with varying frequencies and phase shifts, which inherently restricts the applicability of FPP in dynamic scenarios. While advancements such as faster projectors and high-speed cameras with higher frame rates can mitigate this issue, increasing the camera’s frame rate typically results in shorter exposure times, thereby lowering the signal-to-noise ratio (SNR) and ultimately compromising the accuracy and resolution of 3D measurements. In recent years, continuous breakthroughs in deep learning methods have enabled numerous successful applications in the field of 3D imaging, while also injecting fresh momentum into the development of FPP9,10,11. These advancements hold great potential for overcoming the longstanding trade-off between spatial resolution and temporal performance in current FPP methods.
A recent study published in Light: Science & Applications by the Smart Computational Imaging Laboratory (SCILab) at Nanjing University of Science and Technology introduced a novel ultrafast single-shot super-resolved FPP (SSSR-FPP) approach enabled by deep learning12. As shown in Fig. 1, this method leverages two high-speed cameras set at different angles of view to provide the absolute phase information of the measured surface, while maintaining adjustable focal lengths, frame rates, and region-of-interest sizes. By employing convolutional neural networks trained on experimental data, SSSR-FPP effectively maps low-resolution (LR, 160 × 160 pixels) and low-SNR raw fringe images to high-resolution (HR, 480 × 480 pixels) and high-SNR absolute phase maps. This innovative approach enables the generation of high-quality 3D images even at ultra-high frame rates of up to 100,000 frames per second (fps), marking a significant step forward for ultrafast 3D imaging.
Fig. 1: Schematic diagram and workflow of the SSSR-FPP system.
a The SSSR-FPP system employs two high-speed cameras with adjustable focal lengths, frame rates, and region-of-interest sizes to generate paired LR-to-HR training datasets. b The workflow of the proposed SSSR-FPP method consists of two main steps, each utilizing a convolutional neural network (CNN1 and CNN2) with identical architectures but different input and output configurations. Overall, the workflow is model-constrained, maintaining ultrafast acquisition and super-resolution while providing the 3D reconstruction algorithm with good adaptability and generalization
The entire process of SSSR-FPP is divided into two steps, each utilizing a CNN with identical architectures but distinct input and output configurations. In the first step, the initial neural network (CNN1) generates HR numerator (sine) and denominator (cosine) terms of the wrapped phase function from the captured LR raw images. These two terms are then used to calculate the HR phase maps through an arctangent function. Since the range of the arctangent function is limited to (−π, π), the resulting phase map is naturally wrapped within a 2π range. To overcome this limitation, SSSR-FPP utilizes a second neural network (CNN2) in the second step, specifically designed to unwrap the phase and accurately map the wrapped phase to the absolute phase by leveraging the reference surface information. To validate the effectiveness of SSSR-FPP, the researchers conducted experiments on various dynamic targets, demonstrating that the SSSR-FPP technique is capable of achieving HR and ultrafast 3D imaging.
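The two-step structure described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the CNN outputs are stand-ins here, and the fringe-order map (the integer k in Φ = φ + 2πk) stands in for the unwrapping information that CNN2 infers from the reference surface.

```python
import numpy as np

def wrapped_phase(numerator, denominator):
    """Step 1 (after CNN1): combine the predicted sine (numerator) and
    cosine (denominator) terms into a wrapped phase map in (-pi, pi]."""
    return np.arctan2(numerator, denominator)

def unwrap_with_fringe_order(phi_wrapped, fringe_order):
    """Step 2 (after CNN2): lift the wrapped phase to the absolute
    phase using an integer fringe-order map k: Phi = phi + 2*pi*k."""
    return phi_wrapped + 2.0 * np.pi * fringe_order
```

Splitting the problem this way keeps each network's task close to a well-defined step of classical FPP, which is the "model-constrained" property the paper emphasizes.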
For a typical high-speed imaging system designed for macro scenarios, spatial resolution is primarily constrained by the Nyquist sampling criterion, determined by the pixel size. Recognizing this limitation, the SSSR-FPP framework employs CNN1 trained on experimental data to establish a more realistic mapping relationship from LR to HR. Unlike simulated data or data generated through direct down-sampling methods, the experimental data-driven neural network enables the SSSR-FPP framework to extract richer information with “physically meaningful” prior knowledge of the image formation process. This is crucial for achieving highly accurate phase reconstruction in subsequent stages. Moreover, instead of using a single end-to-end CNN to directly output the phase map, SSSR-FPP divides the process into two distinct steps that incorporate the fundamental principles of FPP. This approach effectively addresses the “domain mismatch” issue and the “black box” problem inherent in deep learning, enhancing the stability and robustness of the reconstruction algorithm under challenging conditions.
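For contrast with the experimental pairing strategy described above, here is a minimal sketch of the naive alternative: simulating LR data by block-averaging HR pixels (e.g., 480 × 480 down to 160 × 160). The function name and factor are illustrative; the point is that such synthetic pairs omit real sensor effects (noise, optical blur, pixel fill factor), which is the gap experimentally captured LR/HR pairs close.

```python
import numpy as np

def naive_downsample(hr, factor=3):
    """Simulate an LR image by averaging non-overlapping factor x factor
    blocks of an HR image. Purely geometric: no sensor noise or blur."""
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```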
Looking ahead, while SSSR-FPP marks a significant breakthrough by pushing FPP’s speed into the 100k fps range, it does not imply that FPP has reached its ultimate potential. Emerging industries, such as real-time 3D modeling for virtual and augmented reality, and online process monitoring in the rapidly evolving field of ultrafast manufacturing, are placing higher demands on 3D imaging. To meet the growing need for faster and more robust detection, future advancements in FPP can be pursued from two perspectives: hardware and methodologies, as illustrated in Fig. 2. In terms of high-speed hardware, novel detection devices—such as event cameras—show great promise in overcoming the limitations of current imaging technologies. With a temporal resolution in the microsecond range, event cameras outperform conventional grayscale cameras by several orders of magnitude. In recent years, these devices have been explored for 3D dynamic measurement applications13,14,15,16. Methodologies, such as computational imaging techniques based on compressive sensing, have already achieved significant breakthroughs in recording speeds for transient imaging, surpassing 10⁸ fps17,18. Furthermore, artificial intelligence technologies leveraging deep neural networks are demonstrating increasingly powerful data processing capabilities, opening new avenues for addressing the inherent spatial-temporal trade-off problem and enabling single-shot reconstructions19,20.
Fig. 2: Breakthroughs in ultrafast imaging across methodologies and hardware, and their potential application trends in dynamic and transient 3D scenarios
These advancements provide valuable insights and inspiration for the development of ultrafast 3D imaging technologies. We are confident that, in the near future, the integration of more advanced technologies and innovative methodologies will further push the speed boundaries of FPP measurement, enabling even greater performance and applicability across a wider range of transient scenarios requiring higher frame rates.
References
1. Lv, S. Z. & Qian, K. M. Modeling the measurement precision of fringe projection profilometry. Light Sci. Appl. 12, 257 (2023).
2. Zhu, S. J. et al. Superfast and large-depth-range sinusoidal fringe generation for multi-dimensional information sensing. Photonics Res. 10, 2590–2598 (2022).
3. Juarez-Salazar, R. et al. Three-dimensional spatial point computation in fringe projection profilometry. Opt. Lasers Eng. 164, 107482 (2023).
4. Liu, X. J. et al. 3-D structured light scanning with phase domain-modulated fringe patterns. IEEE Trans. Ind. Electron. 70, 5245–5254 (2023).
5. Hartley, R. I. & Sturm, P. Triangulation. Comput. Vis. Image Underst. 68, 146–157 (1997).
6. Jing, X. L. et al. Single-shot 3D imaging with point cloud projection based on metadevice. Nat. Commun. 13, 7842 (2022).
7. Geng, J. Structured-light 3D surface imaging: a tutorial. Adv. Opt. Photonics 3, 128–160 (2011).
8. Yang, T. & Gu, F. F. Overview of modulation techniques for spatially structured-light 3D imaging. Opt. Laser Technol. 169, 110037 (2024).
9. Wang, F. Z., Wang, C. X. & Guan, Q. Z. Single-shot fringe projection profilometry based on deep learning and computer graphics. Opt. Express 29, 8024–8040 (2021).
10. Li, Y. X. et al. Deep-learning-enabled dual-frequency composite fringe projection profilometry for single-shot absolute 3D shape measurement. Opto-Electron. Adv. 5, 210021 (2022).
11. Trusiak, M. & Kujawinska, M. Deep learning enabled single-shot absolute phase recovery in high-speed composite fringe pattern profilometry of separated objects. Opto-Electron. Adv. 6, 230172 (2023).
12. Wang, B. W. et al. Single-shot super-resolved fringe projection profilometry (SSSR-FPP): 100,000 frames-per-second 3D imaging with deep learning. Light Sci. Appl. 14, 70 (2025).
13. Gallego, G. et al. Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2022).
14. Liu, X. et al. Event-based monocular depth estimation with recurrent transformers. IEEE Trans. Circuits Syst. Video Technol. 34, 7417–7429 (2024).
15. Mangalore, A. R., Seelamantula, C. S. & Thakur, C. S. Neuromorphic fringe projection profilometry. IEEE Signal Process. Lett. 27, 1510–1514 (2020).
16. Li, Y. H. et al. Event-driven fringe projection structured light 3-D reconstruction based on time-frequency analysis. IEEE Sens. J. 24, 5097–5106 (2024).
17. Yao, J. L. et al. Discrete illumination-based compressed ultrafast photography for high-fidelity dynamic imaging. Adv. Sci. 11, 2403854 (2024).
18. Gao, L. et al. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014).
19. Liu, H. Y. et al. Deep learning in fringe projection: a review. Neurocomputing 581, 127493 (2024).
20. Zuo, C. et al. Deep learning in optical metrology: a review. Light Sci. Appl. 11, 39 (2022).
Author information
Authors and Affiliations
College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, 518060, China
Jie Xu & Jindong Tian
Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen, 518107, China
Jindong Tian
Corresponding author
Correspondence to Jindong Tian.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Xu, J., Tian, J. Accelerating fringe projection profilometry to 100k fps at high-resolution using deep learning. Light Sci Appl 14, 139 (2025). https://doi.org/10.1038/s41377-025-01802-4
Published: 27 March 2025