Article

Fast 3D Analytical Affine Transformation for Polygon-Based Computer-Generated Holograms

Houxin Fan, Bing Zhang, Yaping Zhang, Fan Wang, Wenlong Qin, Qingyang Fu and Ting-Chung Poon

1 Yunnan Provincial Key Laboratory of Modern Information Optics, Kunming University of Science and Technology, Kunming 650500, China
2 Graduate School of Engineering, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522, Japan
3 Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA
* Authors to whom correspondence should be addressed.
Submission received: 7 June 2022 / Revised: 29 June 2022 / Accepted: 5 July 2022 / Published: 7 July 2022
(This article belongs to the Special Issue Holography, 3D Imaging and 3D Display Volume II)

Abstract

We present a fast 3D analytical affine transformation (F3DAAT) method for computing polygon-based computer-generated holograms (CGHs). CGHs consisting of tens of thousands of triangles from 3D objects are obtained by this method. The method revises previous 3D affine transformation methods: in order to improve computational efficiency, we derive and analyze the proposed affine transformation matrix algebraically. We show that the computational efficiency is further increased compared with previous affine methods. We also add flat shading to improve the quality of the reconstructed images. A 3D object captured by a 3D camera is reconstructed holographically in numerical and optical experiments.

1. Introduction

With the rapid development of display and computer technology, 3D display technology has made great progress [1,2]. Current 3D display technologies generally include binocular parallax displays [3], volumetric 3D displays [4], light field displays [5] and holographic 3D displays. Holographic 3D display is a technology that reconstructs the 3D wave-front by using light wave diffraction based on the principles of wave optics. Because the wave-front information of the 3D scene is reconstructed, it can provide all the depth cues required by the human eye [6,7,8]. The wave-front recording process of holography can be simulated by a computer to generate so-called computer-generated holograms (CGHs). The use of CGHs avoids the complicated optical setups of optical holography [9,10]: as long as a mathematical description of the 3D scene is obtained, it can be transformed into the wave-front distribution in the hologram plane by an algorithm. Therefore, the algorithm for calculating CGHs is key, because it directly determines the computational efficiency of the hologram and the quality of the reconstructed image [11,12].
Indeed, the mathematical description of a 3D scene can take many forms. According to the geometric information of its surface, the scene can be discretely expressed as a collection of point sources, giving the so-called point-based method [13]. The wavefront of the whole 3D object is obtained by calculating and then adding the light field distributions of the spherical waves emitted by each point light source in the hologram plane. However, the required number of point sources is usually as high as millions, leading to computational bottlenecks. By using look-up tables (LUTs) [14,15,16] and graphics processing units (GPUs) [17,18], the calculations can be accelerated greatly. Another popular CGH calculation method is the use of polygons (usually represented by triangle meshes) to approximate the surface of a 3D scene, giving rise to the polygon-based method [19,20,21,22,23,24,25,26,27,28,29,30,31,32]. Compared with the point-based method, the number of polygons can be greatly reduced, and the mature theory of computer graphics can be used to make the reconstructed scene more realistic. There are other methods to decompose 3D objects, such as the layer-based [33,34] or line-segment-based methods [35].
In polygon-based methods, the propagation of the light field from each polygon can be calculated by using the angular spectrum theory if the polygon is parallel to the hologram plane [6]. The essence of polygon-based methods is thus the diffraction calculation between non-parallel planes. Polygon-based methods can be divided into two categories. One is the traditional method based on sampling [8]. This method needs to sample each tilted polygon that is not parallel to the hologram in both the spatial and frequency domains. Due to the polygon rotation, sampling in the spatial and frequency domains creates uneven sampling intervals between the two domains, and time-consuming interpolation is needed to alleviate the sampling distortion. The other category is the analytical method [8]. Unlike the traditional method, sampling in the spatial domain to obtain the spectrum of a tilted polygon is not needed. Instead, the spectrum of a tilted polygon can be expressed in terms of the spectrum of a unit triangle (also called the primitive triangle), which is known analytically, through the 2D affine transform [23]. Therefore, sampling is only needed in the frequency domain for each polygon, bypassing the interpolation required in the traditional method. Compared with the traditional method, the analytical method effectively reduces the amount of calculation. However, shading and texture mapping are not as easily included in the analytical methods as in the traditional methods [8].
Pan et al. [25,26] have developed an analytical method utilizing the 3D affine transformation. They have defined a pseudo inverse matrix to map the spatial triangle to a three-dimensional right primitive triangle (which has an analytical spectrum expression). The major issue of the technique is the inaccuracy caused by the inversion of the pseudo matrix. Zhang et al. [28,29] have derived a correct analytical expression in the context of the 2D affine transformation [23] and proposed a method to achieve the 3D transformation from an arbitrary triangle to a primitive triangle through a 3D rotation of the arbitrary triangle and the use of a 2D affine matrix [29]. Although this method avoids the time-consuming calculation of the pseudo inverse matrix, the process is rather complex. Zhang et al. [8] also have proposed a method called the fast 3D affine transformation (F3DAT) method, which translates the primitive triangle to avoid the use of the pseudo inverse matrix and thus improves the computational efficiency.
The wave-optics-based approach can provide accurate depth cues. However, rendering view-dependent properties requires additional calculations. Rendering technology from computer graphics makes the reconstructed 3D scene more realistic [36,37,38]. Matsushima et al. have discussed methods of shading and texturing [20,21,22] and have successfully created a large-scale full-color CGH [39]. Subsequently, many methods have been proposed to improve the quality of reconstructed images through texturing [36,37], shading [22,38] and resolution improvement [40]. Additionally, the use of the silhouette method for hidden surface removal has been described [41].
In this study, based on the three-dimensional affine theory [25,29], we present a fast 3D analytical affine transformation (F3DAAT) method to obtain a full-analytical spectrum of a spatial triangle. We obtain the analytical expression of a 3D affine matrix algebraically, and the spectrum of tilted triangles can be obtained directly. Compared with previous methods, we show improved computational efficiency. In addition, in order to improve the image quality, we add flat shading to make the reconstructed image more realistic. We also demonstrate reconstructed 3D objects composed of tens of thousands of polygons numerically as well as the use of a spatial light modulator (SLM) for optical reconstruction.
In Section 2, we briefly introduce the basic principle of the polygon-based method. In Section 3, we present the theory of F3DAAT. In Section 4, we demonstrate the reconstruction of 3D objects numerically and optically. The computational efficiency is also compared with previous methods. The results of adding flat shading are also illustrated.

2. Conventional Polygon-Based Method

Figure 1 shows the basic principle of the polygon-based method. The surface of a 3D object is discretized into many polygons (usually triangles). The hologram is assumed to be on the plane $z = 0$. The total complex field distribution on the hologram plane, $f_{HOLO}(x, y, z = 0)$, can be expressed as the superposition of the polygon fields from each polygon, $f_{HOLO,i}(x, y)$:
$$f_{HOLO}(x, y) = \sum_{i=1}^{N} f_{HOLO,i}(x, y), \tag{1}$$
where $N$ is the number of polygons and $f_{HOLO,i}(x, y)$ is the complex field on the hologram plane from the $i$-th polygon. The complex field distribution of each triangle on the hologram can be expressed as the inverse Fourier transform of its spectrum:
$$f_{HOLO,i}(x, y) = \mathcal{F}^{-1}\{F_{HOLO,i}(u, v)\}, \tag{2}$$
where $\mathcal{F}^{-1}\{\cdot\}$ represents the inverse Fourier transform and $F_{HOLO,i}(u, v)$ represents the spectrum of the $i$-th polygon on the hologram. Therefore, the key to the polygon-based method is how to obtain the spectrum of each triangle on the hologram plane.
As shown in Figure 2, a polygon in the source coordinate system $(x_s, y_s, z_s)$ is not necessarily parallel to the hologram plane $(x, y)$; one needs to rotate the polygon into the parallel local coordinate system $(x_p, y_p, z_p)$, which is parallel to the hologram plane, in order to calculate the diffracted field toward the hologram through standard diffraction theory [6]. The spatial frequencies $(u_s, v_s)$ can be expressed as:
$$u_s = a_1 u_p + a_2 v_p + a_3 w_p, \tag{3a}$$
$$v_s = a_4 u_p + a_5 v_p + a_6 w_p, \tag{3b}$$
where $a_i$ are the elements of the rotation matrix. $(u_s, v_s)$ (corresponding to $(k_{sx}, k_{sy})$ in Equation (10a,b) in Ref. [8]) are the spatial frequencies corresponding to the source coordinates $(x_s, y_s, z_s)$. $(u_p, v_p, w_p)$ (corresponding to $(k_{px}, k_{py}, k_{pz})$ in Equation (10a,b) in Ref. [8]) are the spatial frequencies corresponding to the parallel local coordinates $(x_p, y_p, z_p)$. Taking the differential of Equation (3a), we have:
$$\Delta u_s = a_1 \Delta u_p + a_2 \Delta v_p + a_3 \Delta w_p = a_1 \Delta u_p + a_2 \Delta v_p - a_3 \cdot \frac{u_p \Delta u_p + v_p \Delta v_p}{\sqrt{1/\lambda^2 - u_p^2 - v_p^2}}, \tag{4}$$
where $w_p = \sqrt{1/\lambda^2 - u_p^2 - v_p^2}$, with $\lambda$ being the wavelength of the light source. Since the spatial frequencies $(u_p, v_p)$ are uniformly distributed, we have $\Delta u_p = \mathrm{constant}$ and $\Delta v_p = \mathrm{constant}$. For simplicity, let us assume $\Delta u_p = 1$ and $\Delta v_p = 1$; Equation (4) can then be rewritten as follows:
$$\Delta u_s = a_1 + a_2 - a_3 \cdot \frac{u_p + v_p}{\sqrt{1/\lambda^2 - u_p^2 - v_p^2}} \neq \mathrm{constant}. \tag{5}$$
Since $\Delta u_s$ (the increment of $u_s$) is not constant, Equation (5) indicates that for a uniformly sampled set of spatial frequencies $(u_p, v_p)$ corresponding to the parallel local coordinates, the spatial frequencies $(u_s, v_s)$ after rotation are highly nonuniform. Because uniform sampling is necessary for the FFT to work correctly in the traditional method, an interpolation process is required for the spatial frequencies $(u_s, v_s)$ after rotation. This procedure adds substantially to the computational time [30]. Compared with the traditional method, the analytical method avoids the use of the FFT in obtaining the spectrum of the polygon in the source coordinates, and the interpolation process is therefore not required. The analytical method obtains the analytical expression of the tilted triangle spectrum directly by using the affine transformation and a given spectrum expression of a primitive triangle. In this paper, we solve the affine matrix analytically, which avoids the most time-consuming steps in the previous methods [8,25,29] and further improves the computational efficiency over those methods.
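To make the nonuniformity of Equations (3a)-(5) concrete, the following minimal NumPy sketch (the paper's own implementation is in MATLAB; the tilt angle, rotation elements $a_1$-$a_3$ and sampling interval below are assumed example values, while the 532 nm wavelength is the one quoted in Section 4.3) maps a uniformly sampled set of $u_p$ values to $u_s$ and prints the resulting, clearly non-constant, spacing:

```python
# Sketch of Eqs. (3a)-(5): uniformly sampled (u_p, v_p) become non-uniform u_s after
# rotation, which is why the traditional sampling-based method needs interpolation.
import numpy as np

wavelength = 532e-9                      # wavelength (m), from the optical experiment
theta = np.deg2rad(30.0)                 # assumed tilt of the polygon about the y-axis
a1, a2, a3 = np.cos(theta), 0.0, np.sin(theta)   # example rotation-matrix elements

du_p = 1.0e3                             # uniform sampling interval in u_p (1/m), assumed
u_p = np.arange(-200, 201) * du_p        # uniform grid along u_p
v_p = np.zeros_like(u_p)                 # keep v_p = 0 for a 1D illustration

w_p = np.sqrt(1.0 / wavelength**2 - u_p**2 - v_p**2)   # expression below Eq. (4)
u_s = a1 * u_p + a2 * v_p + a3 * w_p                   # Eq. (3a)

du_s = np.diff(u_s)                      # spacing of the mapped frequencies
print("min/max spacing of u_s:", du_s.min(), du_s.max())   # not constant -> non-uniform
```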

3. Theory

The aim of the polygon-based method is the calculation of the polygon field in the plane of the hologram. However, we cannot obtain the polygon field or its spectrum directly by using standard diffraction theory, which is valid only between parallel planes. One of the conventional approaches to solve this problem is to map the desired light field or its spectrum using affine relations. The traditional affine transformation method is based on procedures such as rotation and translation to establish the relationship between the input and output coordinates, with the output represented by a set of known inputs and affine relations. Hence, the affine transformation method is in essence a mapping method, and the core problem of the affine transformation is to find the affine matrix. In our proposed theory, we find the affine matrix algebraically in a universal way.
In the traditional 3D affine transformation algorithm, there is a global coordinate system $(x, y, z)$ as the output coordinate system, as shown in Figure 3. The hologram plane $(x, y)$ is located at $z = 0$, and the tilted triangle $\Pi$ with vertices $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ is located in the global coordinate system.
We now define the affine coordinates $(s, t, p)$ as our input coordinates, as shown in Figure 4. A primitive triangle $\Delta$ with vertices $(s_1, t_1, p_1)$, $(s_2, t_2, p_2)$ and $(s_3, t_3, p_3)$ lies in the affine coordinate system, and the analytical expression of the primitive triangle spectrum is obtained by using the two-dimensional Fourier transform on the plane $p = 0$. The spectrum of the tilted triangle is finally obtained from the affine relationship between the primitive triangle $\Delta$ and the tilted triangle $\Pi$, together with the analytical expression of the primitive triangle spectrum derived from the Fourier transform.
Through the affine transformation $\mathbf{r}_g^t = \mathbf{R}\mathbf{r}_a^t + \mathbf{C}$, we can map the affine coordinates $(s, t, p)$ into the global (output) coordinates $(x, y, z)$, where $\mathbf{r}_g^t = (x, y, z)^t$ is the coordinate vector of the tilted triangle $\Pi$ in the global coordinate system, $\mathbf{r}_a^t = (s, t, p)^t$ is the coordinate vector of the primitive triangle $\Delta$ in the affine coordinates and the superscript $t$ denotes the transpose operation. $\mathbf{R}$ is a $3 \times 3$ matrix and $\mathbf{C}$ is a $3 \times 1$ vector. We let $(s_1, t_1, p_1) = (0, 0, 0)$, $(s_2, t_2, p_2) = (1, 0, 0)$ and $(s_3, t_3, p_3) = (0, 1, 0)$. Therefore, we can write the mapping in terms of matrix multiplication as shown in Equation (6), where $\mathbf{T}$ is the affine transformation matrix and $\mathbf{G}$ and $\mathbf{A}$ represent matrices consisting of the vertices of the tilted triangle $\Pi$ and the primitive triangle $\Delta$, respectively. By calculating the twelve parameters of the affine matrix $\mathbf{T}$, we can uniquely determine the 3D affine transformation:
$$\mathbf{G} = \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \\ 1 & 1 & 1 \end{bmatrix} = \mathbf{T}\mathbf{A} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & c_1 \\ r_{21} & r_{22} & r_{23} & c_2 \\ r_{31} & r_{32} & r_{33} & c_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} s_1 & s_2 & s_3 \\ t_1 & t_2 & t_3 \\ p_1 & p_2 & p_3 \\ 1 & 1 & 1 \end{bmatrix}. \tag{6}$$
The key to the affine transformation algorithm is to find the elements of the affine matrix $\mathbf{T}$. There is no inverse for the $4 \times 3$ matrix $\mathbf{A}$, because an inverse only exists for square matrices. The affine matrix $\mathbf{T}$ has previously been found by using the pseudo inverse matrix of $\mathbf{A}$ [25]. The accurate approach is to avoid the use of pseudo matrices and to find the affine transformation matrix $\mathbf{T}$ through a direct calculation of the form $\mathbf{T} = \mathbf{G}\mathbf{A}^{-1}$. There are twelve unknown elements in the affine matrix $\mathbf{T}$, and so we have to solve twelve equations to obtain these twelve elements. However, the matrices $\mathbf{G}$ and $\mathbf{A}$ only contain three vertices each in Equation (6). From this, we can only get nine equations to solve for nine elements; the remaining elements of the affine matrix $\mathbf{T}$ cannot be determined. In light of this, we introduce a new vertex $(s_4, t_4, p_4)^t$ in matrix $\mathbf{A}$ and a $4 \times 1$ vector $[d_1, d_2, d_3, d_4]^t$ in matrix $\mathbf{G}$ to determine all twelve unknown elements of the affine matrix $\mathbf{T}$. Therefore, we extend $\mathbf{G}$ and $\mathbf{A}$ to $4 \times 4$ matrices $\mathbf{G}_1$ and $\mathbf{A}_1$, respectively, and Equation (6) becomes:
$$\mathbf{G}_1 = \begin{bmatrix} x_1 & x_2 & x_3 & d_1 \\ y_1 & y_2 & y_3 & d_2 \\ z_1 & z_2 & z_3 & d_3 \\ 1 & 1 & 1 & d_4 \end{bmatrix} = \mathbf{T}\mathbf{A}_1 = \begin{bmatrix} r_{11} & r_{12} & r_{13} & c_1 \\ r_{21} & r_{22} & r_{23} & c_2 \\ r_{31} & r_{32} & r_{33} & c_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} s_1 & s_2 & s_3 & s_4 \\ t_1 & t_2 & t_3 & t_4 \\ p_1 & p_2 & p_3 & p_4 \\ 1 & 1 & 1 & d_4 \end{bmatrix}. \tag{7}$$
In order to simplify the calculations and improve the calculation efficiency, the best choice is to let $d_1 = d_2 = d_3 = d_4 = 0$ and $(s_4, t_4, p_4) = (0, 0, 1)$. Then, $\mathbf{A}_1$ becomes
$$\mathbf{A}_1 = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix}. \tag{8}$$
Through a simple calculation, we obtain $\det(\mathbf{A}_1) = -1 \neq 0$, indicating the existence of the inverse matrix of $\mathbf{A}_1$. Then, we can find the affine transformation matrix $\mathbf{T}$ by $\mathbf{T} = \mathbf{G}_1\mathbf{A}_1^{-1}$. Note that the vertex $(s_4, t_4, p_4)^t$ only represents a point located in the affine coordinate system, independent of the primitive triangle, and the values of $(s_4, t_4, p_4)^t$ and $d_4$ must be chosen such that $\det(\mathbf{A}_1) \neq 0$ to make sure the inverse of $\mathbf{A}_1$ exists. $d_{1\sim 3}$ can take any value; different values of $d_{1\sim 3}$ give a different affine transformation matrix $\mathbf{T}$, and the Jacobian determinant changes at the same time. Again, we have chosen Equation (8) for ease of calculation, and this choice has no effect on the final results. According to the above discussion, the affine transformation matrix $\mathbf{T}$ can be obtained from Equation (7):
$$\mathbf{T} = \mathbf{G}_1\mathbf{A}_1^{-1} = \begin{bmatrix} x_2 - x_1 & x_3 - x_1 & 0 & x_1 \\ y_2 - y_1 & y_3 - y_1 & 0 & y_1 \\ z_2 - z_1 & z_3 - z_1 & 0 & z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \tag{9}$$
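As a sanity check of Equations (6)-(9), the following NumPy sketch (assumed example vertices; the paper's implementation used MATLAB) builds $\mathbf{G}_1$ and $\mathbf{A}_1$, computes $\mathbf{T} = \mathbf{G}_1\mathbf{A}_1^{-1}$ numerically and verifies that it maps the primitive vertices to the triangle vertices; in practice the closed form of Equation (9) can of course be used directly without any matrix inversion:

```python
# Sketch of Eqs. (6)-(9) for one tilted triangle with arbitrary example vertices.
import numpy as np

# Vertices of a tilted triangle in the global coordinate system (example values, metres)
v1 = np.array([1.0e-3, 2.0e-3, 5.0e-3])
v2 = np.array([3.0e-3, 1.0e-3, 6.0e-3])
v3 = np.array([2.0e-3, 4.0e-3, 7.0e-3])

# Extended matrices of Eq. (7) with d1 = d2 = d3 = d4 = 0 and (s4, t4, p4) = (0, 0, 1)
G1 = np.column_stack([np.append(v1, 1.0),
                      np.append(v2, 1.0),
                      np.append(v3, 1.0),
                      np.array([0.0, 0.0, 0.0, 0.0])])
A1 = np.array([[0., 1., 0., 0.],
               [0., 0., 1., 0.],
               [0., 0., 0., 1.],
               [1., 1., 1., 0.]])          # Eq. (8); det(A1) = -1, so it is invertible

T = G1 @ np.linalg.inv(A1)                 # Eq. (9); closed form T = [v2-v1, v3-v1, 0, v1]

# Verify the mapping of the primitive vertices (homogeneous coordinates)
prim = np.array([[0., 1., 0.],
                 [0., 0., 1.],
                 [0., 0., 0.],
                 [1., 1., 1.]])
mapped = T @ prim
assert np.allclose(mapped[:3].T, np.vstack([v1, v2, v3]))

R, C = T[:3, :3], T[:3, 3]                 # linear part and translation
J = R[0, 0] * R[1, 1] - R[0, 1] * R[1, 0]  # Jacobian determinant used in Eq. (11)
print("Jacobian J =", J)
```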
For the primitive triangle $\Delta$, assuming that its surface function has a strength of one within the triangle and zero outside, its analytical spectrum expression $F_\Delta(u_s, v_t)$ can be obtained (see Appendix A):
$$F_\Delta(u_s, v_t) = \iint_\Delta 1 \cdot e^{j2\pi(u_s s + v_t t)}\,ds\,dt = \frac{1}{4\pi^2}\left[\frac{e^{j2\pi v_t}}{(u_s - v_t)v_t} - \frac{e^{j2\pi u_s}}{(u_s - v_t)u_s} - \frac{1}{u_s v_t}\right], \tag{10}$$
where $u_s$, $v_t$ are the spatial frequencies corresponding to the affine coordinates $s$ and $t$. The singular cases $u_s = 0$, $v_t = 0$ or $u_s = v_t$ are discussed in Appendix A. Through the affine transformation, the spectrum distribution of the tilted triangle $\Pi$ on the hologram plane is (see the derivation of this equation in Appendix B):
$$F_{HOLO}(u, v) = \iint_\Pi f(\mathbf{r}_g) \cdot e^{j2\pi(ux + vy + wz)}\,dx\,dy = \iint_\Delta f(\mathbf{R}\mathbf{r}_a + \mathbf{C}) \cdot e^{j2\pi(ux + vy + wz)} \cdot J\,ds\,dt, \tag{11}$$
where $u$, $v$, $w$ again are the spatial frequencies of the global coordinate system, $f(\mathbf{r}_g) = f(x, y, z)$ is the surface function of the tilted triangle with $\mathbf{r}_g = (x, y, z)$, $\mathbf{r}_a = (s, t, p)$, and the Jacobian determinant is $J = r_{11}r_{22} - r_{12}r_{21}$. Then, the spectrum distribution on the hologram, $F_{HOLO}$, is:
$$F_{HOLO}(u, v) = J \cdot e^{j2\pi(ux_1 + vy_1 + wz_1)} \cdot F_\Delta(u_s, v_t), \tag{12}$$
which is derived in Appendix C. Note that for an arbitrary tilted polygon, we can use an arbitrary vertex of the polygon instead of the point $(x_1, y_1, z_1)$. A different choice changes the affine transformation matrix, but the Jacobian determinant in Equation (12) changes at the same time, giving the same result in Equation (12).
As shown in Figure 5a, the zero frequency in the global coordinate system is $(u_0, v_0, w_0)^t = (0, 0, 1/\lambda)^t$. According to the spatial frequency relationship given by Equation (A10) in Appendix C, as shown in Figure 5b, the zero frequency in the affine coordinate system is $(u_{s0}, v_{t0}, w_{p0})^t = \mathbf{R}^t(u_0, v_0, w_0)^t$:
$$\begin{pmatrix} u_{s0} \\ v_{t0} \\ w_{p0} \end{pmatrix} = \begin{pmatrix} r_{11}u_0 + r_{21}v_0 + r_{31}w_0 \\ r_{12}u_0 + r_{22}v_0 + r_{32}w_0 \\ r_{13}u_0 + r_{23}v_0 + r_{33}w_0 \end{pmatrix} = \begin{pmatrix} r_{31}/\lambda \\ r_{32}/\lambda \\ r_{33}/\lambda \end{pmatrix}. \tag{13}$$
From the above equation, we can see that the zero frequency in the affine coordinate system has a frequency offset, and this offset appears in the reconstructed image as a phase factor. To eliminate this offset, as shown in Figure 5c, we subtract the frequency offset $(\Delta u_s, \Delta v_t, \Delta w_p)^t$ according to
$$\begin{pmatrix} u_{s0}' \\ v_{t0}' \\ w_{p0}' \end{pmatrix} = \begin{pmatrix} u_{s0} \\ v_{t0} \\ w_{p0} \end{pmatrix} - \begin{pmatrix} \Delta u_s \\ \Delta v_t \\ \Delta w_p \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1/\lambda \end{pmatrix}, \tag{14}$$
where $\Delta u_s = r_{31}/\lambda$, $\Delta v_t = r_{32}/\lambda$ and $\Delta w_p = r_{33}/\lambda - 1/\lambda$. Therefore, the spatial frequencies in the "offset" affine coordinate system can be rewritten as
$$u_s = r_{11}u + r_{21}v + r_{31}w - \frac{r_{31}}{\lambda}, \quad v_t = r_{12}u + r_{22}v + r_{32}w - \frac{r_{32}}{\lambda}, \quad w_p = r_{13}u + r_{23}v + r_{33}w - \frac{1}{\lambda}(r_{33} - 1). \tag{15}$$
The spectrum of a tilted triangle on the hologram plane has now been obtained completely analytically, and we can directly compute the spectrum distribution from the initial parameters. For a single tilted triangle, we have $F_{HOLO}(u, v) = F_{HOLO,i}(u, v)$ as in Equation (2), and for a 3D object we use Equation (1). The complex field distribution of the 3D object reconstructed by the hologram can be expressed as the superposition of the tilted-triangle complex fields on the reconstructed image plane $z = z_r$:
$$f_{re}(x, y, z_r) = \mathcal{F}^{-1}\{e^{-j2\pi w z_r} \cdot F_{HOLO}(u, v)\}, \tag{16}$$
where $z_r$ is the distance between the reconstructed image plane $z = z_r$ and the hologram plane $z = 0$. Through the above steps, we can obtain the reconstructed complex field distribution of the 3D object through only one Fourier transform. In the next section, we verify our proposed method through numerical simulations and optical experiments.
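The following NumPy sketch strings the pieces together for a single tilted triangle: the affine matrix of Equation (9), the offset frequencies of Equation (15), the primitive spectrum of Equation (10), the hologram spectrum of Equation (12) and the reconstruction of Equation (16). The hologram sampling (1024 samples, 8 μm pitch) and the 532 nm wavelength follow the values quoted in Section 4, while the triangle vertices are arbitrary example values; the exponent signs follow NumPy's FFT convention (the complex conjugate of the equations as written above), which leaves the reconstructed intensity unchanged.

```python
# A minimal sketch of the F3DAAT pipeline for one tilted triangle (assumed geometry).
import numpy as np

wavelength = 532e-9
N, pitch = 1024, 8e-6                       # hologram samples and pixel pitch
u = np.fft.fftfreq(N, d=pitch)              # spatial-frequency grids of the hologram
U, V = np.meshgrid(u, u, indexing="xy")
W = np.sqrt(np.maximum(1.0 / wavelength**2 - U**2 - V**2, 0.0))

# One tilted triangle (example vertices, metres) and its affine matrix, Eq. (9)
v1, v2, v3 = (np.array([0.5e-3, 0.2e-3, 20e-3]),
              np.array([1.5e-3, 0.4e-3, 21e-3]),
              np.array([0.8e-3, 1.6e-3, 22e-3]))
R = np.column_stack([v2 - v1, v3 - v1, np.zeros(3)])
J = R[0, 0] * R[1, 1] - R[0, 1] * R[1, 0]   # Jacobian determinant

# "Offset" affine-domain frequencies, Eq. (15)
u_s = R[0, 0] * U + R[1, 0] * V + R[2, 0] * (W - 1.0 / wavelength)
v_t = R[0, 1] * U + R[1, 1] * V + R[2, 1] * (W - 1.0 / wavelength)

def primitive_spectrum(us, vt, eps=1e-12):
    """Closed-form spectrum of the unit right triangle (cf. Eq. (10), conjugate sign
    convention); the singular cases of Appendix A are handled crudely by nudging."""
    us = np.where(np.abs(us) < eps, eps, us)
    vt = np.where(np.abs(vt) < eps, eps, vt)
    d = np.where(np.abs(us - vt) < eps, eps, us - vt)
    return (np.exp(-2j * np.pi * vt) / (d * vt)
            - np.exp(-2j * np.pi * us) / (d * us)
            - 1.0 / (us * vt)) / (4.0 * np.pi**2)

# Spectrum of the tilted triangle on the hologram plane, Eq. (12)
F_holo = J * np.exp(-2j * np.pi * (U * v1[0] + V * v1[1] + W * v1[2])) \
           * primitive_spectrum(u_s, v_t)

# Propagation back to the plane of the triangle and reconstruction, Eq. (16)
z_r = v1[2]
field = np.fft.ifft2(F_holo * np.exp(2j * np.pi * W * z_r))
intensity = np.abs(field)**2                # |f_re|^2 at z = z_r
```

For a full object, the per-triangle spectra $F_{HOLO,i}$ would simply be accumulated, as in Equation (1), before the single inverse FFT.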

4. Simulations and Optical Experiment

4.1. Numerical Reconstruction

The Stanford bunny, shown as a 3D mesh in Figure 1, consists of 59,996 polygons, and we have reconstructed it using our proposed method. The actual size of the bunny is 3.11 × 3.08 × 2.41 mm³. We have enlarged it to 6.56 × 6.49 × 5.08 mm³ before generating the hologram. In order to improve the computational efficiency and image quality, we have implemented back-face culling by checking the normal vectors. The normal vector of the hologram is $\mathbf{n}_h$, and $\mathbf{n}_u$ is the normal vector of a tilted triangle. If $\mathbf{n}_h \cdot \mathbf{n}_u \geq 0$, the tilted triangle is calculated; tilted triangles that do not meet this condition are discarded. A minimal sketch of this step is given below.
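In the following NumPy sketch, the mesh arrays, the hologram normal $\mathbf{n}_h = (0, 0, 1)$ and the sign of the test are assumptions and may need to be adapted to the orientation convention of a particular mesh:

```python
# Back-face culling by checking the sign of n_h . n_u for each triangle.
import numpy as np

def cull_back_faces(vertices, faces, n_h=np.array([0.0, 0.0, 1.0])):
    """Keep only triangles whose unit normal n_u satisfies n_h . n_u >= 0.
    vertices: (P, 3) float array; faces: (M, 3) integer index array."""
    tri = vertices[faces]                         # (M, 3, 3) triangle vertex coordinates
    n_u = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    n_u /= np.linalg.norm(n_u, axis=1, keepdims=True)
    keep = (n_u @ n_h) >= 0.0
    return faces[keep], n_u[keep]
```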
After back-face culling, the bunny contains only 31,724 polygons. However, the result of Equation (12) in Section 3 is based on the assumption that the amplitude distribution of each triangle is a unit constant, so the reconstructed results lack realism. As shown in Figure 6, in order to reproduce the details of the 3D object, we assign the surface function of the tilted triangle, $I(x, y)$, as follows:
$$I(x, y) = [n_{ux}, n_{uy}, n_{uz}] \cdot [\cos\alpha, \cos\beta, \cos\gamma]^t + bias = \mathrm{constant}, \tag{17}$$
where $n_{ux}$, $n_{uy}$, $n_{uz}$ are the components of the unit normal vector $\mathbf{n}_u$ along the $x$-, $y$- and $z$-axes of the global coordinate system, and $bias$ represents the ambient reflected light. $\cos\alpha$, $\cos\beta$ and $\cos\gamma$ are the direction cosines of the illumination direction. In our case, $bias = 0.2$, $\alpha = 60°$, $\beta = 60°$ and $\gamma = 5°$.
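A minimal sketch of this flat-shading weight, reusing the unit normals from the culling sketch above, is shown below; the illumination angles and bias are the values quoted in the text, while the clamping of negative values is an added assumption:

```python
# Per-triangle flat-shading amplitude of Eq. (17).
import numpy as np

def flat_shading(n_u, alpha=60.0, beta=60.0, gamma=5.0, bias=0.2):
    """I = n_u . [cos(alpha), cos(beta), cos(gamma)] + bias for each triangle normal."""
    light = np.cos(np.deg2rad([alpha, beta, gamma]))
    I = n_u @ light + bias
    return np.clip(I, 0.0, None)      # clamp negative values (assumed safeguard)

# Each triangle's spectrum is then scaled by its constant I, as in Equation (18).
```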
The surface function usually refers to the strength (intensity) information, and here the amplitude is expressed as $I(x, y)$. According to Equation (17), the surface function $I(x, y)$ of a tilted triangle is a constant for each triangle, so we can let $I = I(x, y)$ ($I$ is a constant here). By using the property of the Fourier transform $I \cdot f(x, y) = \mathcal{F}^{-1}\{I \cdot F(u, v)\}$, the spectrum on the hologram of a tilted triangle with added flat shading and with the "offset" eliminated is obtained from Equation (12), together with Equation (15):
$$F_{HOLO}(u, v) = I \cdot J \cdot e^{j2\pi(ux_1 + vy_1 + wz_1)} \cdot F_\Delta(u_s, v_t). \tag{18}$$
Figure 7 shows the numerical reconstructions of the bunny based on our proposed F3DAAT method. Figure 7a is the result of calculating Equation (12); because the amplitude of each mesh is the same constant, the reconstruction lacks realism. Additionally, the bunny has self-occlusion, so back-face culling by checking the normals misjudges some polygons. Due to these wrong judgments, visible and invisible polygons are superimposed, and the affected part of the reconstructed image is shown in the red box in Figure 7a. Figure 7b,c are the reconstruction results based on Equation (18), and from these two reconstructed images we can clearly see the details of various parts of the bunny. Figure 7b focuses on the bunny's leg, shown in the yellow box, and Figure 7c focuses on the bunny's ear, shown in the yellow box.
We have also scanned a human face called “Alex” with a 3D camera, and the 3D mesh of “Alex” is shown in Figure 8a, which consists of 49,272 meshes. In this case, there is no need to use back-face culling, because the data is the result from actual image scanning. We have calculated the CGH of “Alex”, and its holographic reconstructed image is shown in Figure 8b.

4.2. Comparison with Previous Methods

The Pseudo Inverse Matrix Method by Pan et al. [25,26] sets up an affine transformation matrix that contains all the information of the transformation. However, they have defined the primitive triangle located at $z_s = 0$, i.e., in the plane of the source local coordinate system, leading to the concept of a pseudo inverse matrix to perform the matrix inversion. The introduction of the pseudo inverse matrix produces calculation errors and slows down the calculation. Zhang et al. [29] have introduced a Full Analytical 3D Affine Method to avoid the use of the pseudo inverse matrix. The method includes three core steps: a rotation transformation of the tilted triangle until it is parallel to the hologram plane, a 2D affine transform of the rotated triangle and finally the computation of the field distribution on the hologram by using the angular spectrum (AS) method for diffraction. Zhang et al. [8] also have proposed a Fast 3D Affine Transformation (F3DAT) method based on the 3D affine transformation by Pan et al. to improve the computational efficiency. In that method, they have defined the primitive triangle located at $z_s \neq 0$, allowing the affine matrix to be fully inverted. The F3DAT method provides a faster calculation time compared with the pseudo inverse matrix method and the full analytical 3D affine method.
In the presently proposed fast 3D analytical affine transformation (F3DAAT) method, we obtain the affine matrix directly and derive an analytical expression for the spectrum of the primitive triangle. The more meshes that are calculated, the more time F3DAAT saves. We have generated holograms with a resolution of 1024 × 1024; the hardware is an Intel Core i7-11700 CPU @ 4.8 GHz with 16 GB of RAM, running MATLAB 2018b.
The Stanford bunny, consisting of 31,724 meshes (after back-face culling), takes 893 s to calculate, and the calculation of "Alex", with 49,272 meshes, takes about 1288 s (see Figure 9). Additionally, as shown in Figure 9, where we have used "Alex", we can see that the computational efficiency of F3DAAT has increased by almost two times compared with the previous methods. The calculations for the four methods are based on the same hardware, and only the CPU is used. In one of the most recent studies, Wang et al. [43] proposed a polygon-based method using look-up tables (LUTs) with principal component analysis to speed up the calculation of CGHs. However, that method still needs to solve a pseudo inverse matrix when pre-computing the affine matrices, whereas our proposed method is more general and efficient for solving the mapping relationship between the two coordinate systems, the global and the affine coordinate systems.

4.3. Optical Experiment

The optical experiment is shown in Figure 10. We have reconstructed the 3D objects by loading the phase of the CGHs from the computer onto a spatial light modulator (SLM). The SLM in our experiment is a HOLOEYE PLUTO2 (NIR-011) phase-only SLM with a resolution of 1920 × 1080 (Full HD 1080p), a pixel pitch of 8 μm and an active area of 15.36 mm × 8.64 mm. The laser is a green laser with a wavelength of 532 nm. The spatial filter is used to generate collimated light for the illumination of the hologram. The polarizer adjusts the polarization state of the light to work with the phase-only SLM. A camera (MMRY UC900C charge-coupled device) is used to record the reconstructed image in the image plane of the imaging lens (focal length of 150 mm). Optical reconstructions of the bunny and the face of "Alex" are shown in Figure 11a,b, respectively.

5. Conclusions

In conclusion, we have proposed an improved algorithm to obtain a full-analytical spectrum of a tilted triangle based on 3D affine transformation. Our method avoids the time-consuming steps such as the need to solve for the pseudo inverse matrix or the complex process of 3D rotation and transformation. We have verified our method by calculating complex 3D objects composed of tens of thousands of meshes. In addition, we have added flat shading for realistic image presentation. We have successfully obtained the reconstructed images by numerical and optical reconstructions. Through comparison, it is found that our method improves the computational efficiency by about two times compared with the previous affine methods.

Author Contributions

Conceptualization, B.Z. and Y.Z.; formal analysis, Y.Z.; methodology, software, H.F., F.W.; investigation and validation, H.F., W.Q. and Q.F.; writing—original draft, H.F.; writing—review and editing, Y.Z. and T.-C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61865007) and Yunnan Provincial Science and Technology Department (2019FA025).

Data Availability Statement

Original data are available in Ref. [43].

Acknowledgments

We appreciate the use of the SLM, which is on loan from Yanfei Lv, Department of Physics and Astronomy, Yunnan University. We would like to thank Pin Wang for her help with the first draft.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Full Results of the Analytical Spectrum Expression of Primitive Triangle

Case 1: when $u_s \neq 0$, $v_t \neq 0$ and $u_s \neq v_t$, the integration result of Equation (10) is:
$$F_\Delta(u_s, v_t) = \iint_\Delta 1 \cdot e^{j2\pi(u_s s + v_t t)}\,ds\,dt = \frac{1}{4\pi^2}\left[\frac{e^{j2\pi v_t}}{(u_s - v_t)v_t} - \frac{e^{j2\pi u_s}}{(u_s - v_t)u_s} - \frac{1}{u_s v_t}\right]. \tag{A1}$$
Case 2: when $u_s = 0$ and $v_t \neq 0$, the integration result of Equation (10) is:
$$F_\Delta(u_s, v_t) = \iint_\Delta 1 \cdot e^{j2\pi v_t t}\,ds\,dt = \frac{1 + j2\pi v_t - e^{j2\pi v_t}}{4\pi^2 v_t^2}. \tag{A2}$$
Case 3: when $v_t = 0$ and $u_s \neq 0$, the integration result of Equation (10) is:
$$F_\Delta(u_s, v_t) = \iint_\Delta 1 \cdot e^{j2\pi u_s s}\,ds\,dt = \frac{1 + j2\pi u_s - e^{j2\pi u_s}}{4\pi^2 u_s^2}. \tag{A3}$$
Case 4: when $u_s - v_t = 0$ and $u_s \neq 0$, the integration result of Equation (10) is:
$$F_\Delta(u_s, v_t) = \iint_\Delta 1 \cdot e^{j2\pi u_s(s + t)}\,ds\,dt = \frac{(1 - j2\pi u_s)e^{j2\pi u_s} - 1}{4\pi^2 u_s^2}. \tag{A4}$$
Case 5: when $u_s = 0$ and $v_t = 0$, the integration result of Equation (10) is:
$$F_\Delta(u_s, v_t) = \iint_\Delta 1\,ds\,dt = 1/2. \tag{A5}$$
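As an assumption-level sanity check (not part of the paper), the following NumPy snippet compares the closed-form Case 1 expression with a brute-force numerical integration of the primitive-triangle spectrum; the signs follow the $e^{j2\pi(\cdot)}$ convention used in Equations (A1)-(A5):

```python
# Compare the closed-form Case 1 spectrum with direct numerical integration.
import numpy as np

def F_case1(us, vt):
    return (np.exp(2j * np.pi * vt) / ((us - vt) * vt)
            - np.exp(2j * np.pi * us) / ((us - vt) * us)
            - 1.0 / (us * vt)) / (4.0 * np.pi**2)

def F_numeric(us, vt, n=2000):
    s, ds = np.linspace(0, 1, n, endpoint=False, retstep=True)
    s += ds / 2                                   # midpoint rule along s
    total = 0.0 + 0.0j
    for si in s:                                  # integrate t over [0, 1 - s]
        t, dt = np.linspace(0, 1 - si, n, endpoint=False, retstep=True)
        t += dt / 2
        total += np.sum(np.exp(2j * np.pi * (us * si + vt * t))) * dt * ds
    return total

print(F_case1(0.7, -1.3), F_numeric(0.7, -1.3))   # should agree to several digits
```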

Appendix B. Derivation of the Spectrum Distribution of Tilted Triangle Π on the Hologram Plane

In Figure A1a, we show the tilted triangle $\Pi$ in the global coordinate system. In Figure A1b, we represent $\Pi$ as a collection of discrete parallel planes $\Pi_i$, where each $\Pi_i$ has a length of $\Delta y$. The spectrum of $\Pi_i$ can be expressed as $F_{\Pi_i}(u, v) = \mathcal{F}\{\Pi_i\}$. Therefore, the spectrum of $\Pi_i$ on the hologram plane is $F_{HOLO,\Pi_i}(u, v) = F_{\Pi_i}(u, v) \cdot e^{j2\pi w z_i}$. Hence, the spectrum of $\Pi$ on the hologram plane can be written as:
$$F_{HOLO,\Pi}(u, v) = \sum_i F_{HOLO,\Pi_i}(u, v) = \sum_i F_{\Pi_i}(u, v) \cdot e^{j2\pi w z_i}. \tag{A6}$$
Now
$$F_{\Pi_i}(u, v) = \iint_{\Pi_i} f_i(x, y; z_i) \cdot e^{j2\pi(ux + vy)}\,dx\,dy, \tag{A7}$$
where $f_i(x, y; z_i)$ is the surface function of $\Pi_i$. Substituting Equation (A7) into Equation (A6), we obtain:
$$F_{HOLO,\Pi}(u, v) = \sum_i F_{HOLO,\Pi_i}(u, v) = \sum_i \left[\iint_{\Pi_i} f_i(x, y; z_i) \cdot e^{j2\pi(ux + vy)}\,dx\,dy\right] e^{j2\pi w z_i} \rightarrow \iint_\Pi f(x, y, z) \cdot e^{j2\pi(ux + vy + wz)}\,dx\,dy, \tag{A8}$$
where $f(x, y, z)$ is the surface function of $\Pi$. Equation (A8) is presented in Equation (11).

Appendix C. Derivation of the Analytical Spectrum Expression of Tilted Triangle on the Hologram

According to the affine transformation, $\mathbf{r}_g^t = \mathbf{R}\mathbf{r}_a^t + \mathbf{C}$. Therefore, the coordinate relationships between the global coordinate system $(x, y, z)$ and the affine coordinate system $(s, t, p)$ are:
$$x = r_{11}s + r_{12}t + r_{13}p + c_1, \quad y = r_{21}s + r_{22}t + r_{23}p + c_2, \quad z = r_{31}s + r_{32}t + r_{33}p + c_3. \tag{A9}$$
Additionally, the relationships between the spatial frequencies of the global coordinate system $(u, v, w)$ and those of the affine coordinate system $(u_s, v_t, w_p)$ are $(u_s, v_t, w_p)^t = \mathbf{R}^t(u, v, w)^t$, where the superscript $t$ again denotes the transpose. We can write them as follows:
$$u_s = r_{11}u + r_{21}v + r_{31}w, \quad v_t = r_{12}u + r_{22}v + r_{32}w, \quad w_p = r_{13}u + r_{23}v + r_{33}w. \tag{A10}$$
Note that we have chosen $p = 0$. By using Equations (A9) and (A10), the exponential term in Equation (11) can be rewritten as:
$$\begin{aligned} e^{j2\pi(ux + vy + wz)} &= e^{j2\pi[u(r_{11}s + r_{12}t + r_{13}p + c_1) + v(r_{21}s + r_{22}t + r_{23}p + c_2) + w(r_{31}s + r_{32}t + r_{33}p + c_3)]} \\ &= e^{j2\pi(uc_1 + vc_2 + wc_3)} \cdot e^{j2\pi[u(r_{11}s + r_{12}t) + v(r_{21}s + r_{22}t) + w(r_{31}s + r_{32}t)]} \\ &= e^{j2\pi(uc_1 + vc_2 + wc_3)} \cdot e^{j2\pi[s(r_{11}u + r_{21}v + r_{31}w) + t(r_{12}u + r_{22}v + r_{32}w)]} \\ &= e^{j2\pi(ux_1 + vy_1 + wz_1)} \cdot e^{j2\pi(su_s + tv_t)}. \end{aligned} \tag{A11}$$
In Equation (A11), we have used the elements of the affine matrix $\mathbf{T}$ already solved in Equation (9), i.e., $c_1 = x_1$, $c_2 = y_1$, $c_3 = z_1$. Then, according to Equations (11) and (A11), the analytical spectrum of the tilted triangle can be rewritten as:
$$F_{HOLO}(u, v) = \iint_\Delta f(\mathbf{R}\mathbf{r}_a + \mathbf{C}) \cdot e^{j2\pi(ux + vy + wz)} \cdot J\,ds\,dt = J \cdot e^{j2\pi(ux_1 + vy_1 + wz_1)} \cdot \iint_\Delta 1 \cdot e^{j2\pi(su_s + tv_t)}\,ds\,dt = J \cdot e^{j2\pi(ux_1 + vy_1 + wz_1)} \cdot F_\Delta(u_s, v_t), \tag{A12}$$
which is presented in Equation (12).

References

1. Geng, J. Three-dimensional display technologies. Adv. Opt. Photonics 2013, 5, 456–535.
2. Liu, L.; Pang, L.; Teng, D. Super multi-view three-dimensional display technique for portable devices. Opt. Express 2016, 24, 4421–4430.
3. Hoffman, D.; Girshick, A.; Akeley, K.; Banks, M. Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. J. Vis. 2008, 8, 1–30.
4. Smalley, D.; Nygaard, E.; Squire, K.; Wagoner, J.; Rasmussen, J.; Gneiting, S.; Qaderi, K.; Goodsell, J.; Rogers, W.; Lindsey, M. A photophoretic-trap volumetric display. Nature 2018, 553, 486–490.
5. Jung, J.; Kim, J.; Lee, B. Solution of pseudoscopic problem in integral imaging for real-time processing. Opt. Lett. 2013, 38, 76–78.
6. Poon, T.-C.; Liu, J.-P. Introduction to Modern Digital Holography with MATLAB; Cambridge University Press: Cambridge, UK, 2014.
7. Matsushima, K. Introduction to Computer Holography; Springer: New York, NY, USA, 2020; pp. 54–88.
8. Zhang, Y.; Fan, H.-X.; Wang, F.; Gu, F.; Qian, X.; Poon, T.-C. Polygon-based computer-generated holography: A review of fundamentals and recent progress [Invited]. Appl. Opt. 2021, 61, B363–B374.
9. Wang, Z.; Lv, G.; Feng, Q.; Wang, A.; Ming, G. Resolution Priority Holographic Stereogram Based on Integral Imaging with Enhanced Depth Range. Opt. Express 2019, 27, 2689–2702.
10. Zhang, X.; Lv, G.; Wang, Z.; Hu, Z.; Ding, S.; Feng, Q. Resolution-Enhanced Holographic Stereogram Based on Integral Imaging Using an Intermediate-View Synthesis Technique. Opt. Commun. 2020, 457, 124656.
11. Nishitsuji, T.; Yamamoto, Y.; Takashige, S.; Takanori, A.; Hirayama, R.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T. Special-purpose computer HORN-8 for phase-type electro-holography. Opt. Express 2018, 26, 26722–26733.
12. Zhao, Y.; Cao, L.; Zhang, H.; Kong, D.; Jin, G. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method. Opt. Express 2015, 23, 25440–25449.
13. Nishitsuji, T.; Shimobaba, T.; Kakue, T.; Ito, T. Review of fast calculation techniques for computer-generated holograms with the point-light-source-based model. IEEE Trans. Ind. Inform. 2017, 13, 2447–2454.
14. Lucente, M. Interactive computation of holograms using a look-up table. J. Electron. Imaging 1993, 2, 28–34.
15. Kim, S.; Kim, E. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods. Appl. Opt. 2009, 48, 1030–1041.
16. Pan, Y.; Xu, X.; Solanki, S.; Liang, X.; Tanjung, R.; Tan, C.; Chong, T. Fast CGH computation using S-LUT on GPU. Opt. Express 2009, 17, 18543–18555.
17. Shimobaba, T.; Ito, T.; Masuda, N.; Ichihashi, Y.; Takada, T. Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL. Opt. Express 2010, 18, 9955–9960.
18. Ichihashi, Y.; Oi, R.; Senoh, T.; Yamamoto, K.; Kurita, T. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms. Opt. Express 2012, 20, 21645–21655.
19. Leseberg, D.; Frère, C. Computer-generated holograms of 3-D objects composed of tilted planar segments. Appl. Opt. 1988, 27, 3020–3024.
20. Matsushima, K. Formulation of the rotational transformation of wave fields and their application to digital holography. Appl. Opt. 2008, 47, D110–D116.
21. Matsushima, K.; Schimmel, H.; Wyrowski, F. Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves. J. Opt. Soc. Am. A 2003, 20, 1755–1762.
22. Matsushima, K. Computer-generated holograms for three-dimensional surface objects with shade and texture. Appl. Opt. 2005, 44, 4607–4614.
23. Ahrenberg, L.; Benzie, P.; Magnor, M.; Watson, M. Computer generated holograms from three dimensional meshes using an analytic light transport model. Appl. Opt. 2008, 47, 1567–1574.
24. Kim, H.; Hahn, J.; Lee, B. Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography. Appl. Opt. 2008, 47, D117–D127.
25. Pan, Y.; Wang, Y.; Liu, J.; Li, X.; Jia, J. Fast polygon-based method for calculating computer-generated holograms in three-dimensional display. Appl. Opt. 2013, 52, A290–A299.
26. Pan, Y.; Wang, Y.; Liu, J.; Li, X.; Jia, J. Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation. Appl. Opt. 2014, 53, 1354–1362.
27. Zhang, Y.; Zhang, J.; Chen, W.; Zhang, J.; Wang, P.; Xu, W. Research on three-dimensional computer-generated holographic algorithm based on conformal geometry theory. Opt. Commun. 2013, 309, 196–200.
28. Zhang, Y.; Zhang, J.; Chen, W.; Wang, P.; Wu, S.; Li, J. Fast Computer-generated Hologram Algorithm of Triangle Mesh Models. Chin. J. Lasers 2013, 40, 0709001.
29. Zhang, Y.-P.; Wang, F.; Poon, T.-C.; Fan, S.; Xu, W. Fast generation of full analytical polygon-based computer-generated holograms. Opt. Express 2018, 26, 19206–19224.
30. Matsushima, K. Performance of the Polygon-Source Method for Creating Computer-Generated Holograms of Surface Objects. In Proceedings of the ICO Topical Meeting on Optoinformatics/Information Photonics, Petersburg, Russia, 4–7 September 2006.
31. Bracewell, R.; Chang, K.; Jha, A.; Wang, Y. Affine theorem for two-dimensional Fourier transform. Electron. Lett. 1993, 29, 304.
32. Underkoffler, J. Occlusion Processing and Smooth Surface Shading for Fully Computed Synthetic Holography. In Proceedings of the Practical Holography XI and Holographic Materials III, San Jose, CA, USA, 8 February 1997.
33. Gilles, A.; Gioia, P. Real-time layer-based computer-generated hologram calculation for the Fourier transform optical system. Appl. Opt. 2018, 57, 8508–8517.
34. Zhang, H.; Cao, L.; Jin, G. Computer-generated hologram with occlusion effect using layer-based processing. Appl. Opt. 2017, 56, F138–F143.
35. Frère, C.; Leseberg, D.; Bryngdahl, O. Computer-generated holograms of three-dimensional objects composed of line segments. J. Opt. Soc. Am. A 1986, 3, 726–730.
36. Lee, W.; Im, D.; Paek, J.; Hahn, J.; Kim, H. Semi-analytic texturing algorithm for polygon computer-generated holograms. Opt. Express 2014, 22, 31180–31191.
37. Ji, Y.; Yeom, H.; Park, J. Efficient texture mapping by adaptive mesh division in mesh-based computer generated hologram. Opt. Express 2016, 24, 28154–28169.
38. Yamaguchi, K.; Ichikawa, T.; Sakamoto, Y. Calculation method for CGH considering smooth shading with polygon models. Proc. SPIE 2011, 7957, 795706.
39. Matsushima, K.; Sonobe, N. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects. Appl. Opt. 2018, 57, A150–A156.
40. Matsushima, K.; Nakahara, S. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method. Appl. Opt. 2009, 48, H54–H63.
41. Matsushima, K.; Nakamura, M.; Nakahara, S. Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique. Opt. Express 2014, 22, 24450–24465.
42. Stony Brook University 3D Scanning Laboratory. Available online: https://www3.cs.stonybrook.edu/~gu/software/holoimage/index.html (accessed on 4 July 2022).
43. Wang, F.; Shimobaba, T.; Zhang, Y.; Kakue, T.; Ito, T. Acceleration of polygon-based computer-generated holograms using look-up tables and reduction of the table size via principal component analysis. Opt. Express 2021, 29, 35442–35455.
Figure 1. Principle of the polygon-based method.
Figure 2. Coordinate systems of the traditional polygon-based method: source coordinate system $(x_s, y_s, z_s)$, parallel local coordinate system $(x_p, y_p, z_p)$ and hologram plane $(x, y)$. Adapted from Zhang et al. [8].
Figure 3. Tilted triangle $\Pi$ in the global coordinate system $(x, y, z)$ and hologram plane ($z = 0$).
Figure 4. Primitive triangle $\Delta$ in the affine coordinate system $(s, t, p)$.
Figure 5. Spectrum shift problem. (a) Spectrum in the global coordinate system, (b) spectrum in the affine coordinate system and (c) spectrum with frequency offsetting.
Figure 6. Principle of flat shading.
Figure 7. Numerical reconstruction of the bunny. (a) Focus on leg without shading, (b) focus on leg with shading, (c) focus on ear with shading.
Figure 8. Numerical reconstruction of "Alex": (a) 3D mesh of Alex and (b) numerical reconstruction. The geometric surface with the texture image in (a) is from a publicly accessible geometric archive from the 3D Scanning Laboratory at Stony Brook University [42].
Figure 9. Computation time required for different numbers of meshes for different methods for "Alex".
Figure 10. Optical reconstruction experiment.
Figure 11. Results of optical reconstruction: (a) bunny and (b) Alex.
Figure A1. (a) Tilted triangle $\Pi$ in the global coordinate system, (b) discrete version of $\Pi$.