Jing Liu, Guoxian Zhang, Kai Zhao, and Xiaoyu Jiang, "Compressive holography algorithm for the objects composed of point sources," Appl. Opt. 56, 530-542 (2017)

A compressive holography algorithm is proposed in this work for objects composed of point sources. The proposed algorithm is based on Gabor holography, a remarkably simple and effective encoder for compressed sensing. Since the virtual object may appear anywhere in the sampling space, the three-dimensional sampling space is uniformly divided into a number of grids. All the grids are mapped into an indication vector, which is sparse in nature because the number of grids occupied by the virtual object is far smaller than the total number of grids in the sampling space. Consequently, the point source model can be represented in a compressed sensing framework. As the number of grids in the sampling space increases, the coherence of the sensing matrix grows, so a perfect reconstruction of the sparse vector can no longer be guaranteed with high probability. In this paper, a new algorithm, the fast compact sensing matrix pursuit (FCSMP) algorithm, is proposed to cope with the high coherence as well as the unknown sparsity. A similar compact sensing matrix with low coherence is constructed from the original sensing matrix using similarity analysis. To tackle the unknown sparsity, the orthogonal matching pursuit (OMP) algorithm is used to compute a rough estimate of the true support set from the similar compact sensing matrix and the measurement vector. Simulation and experimental results show that the proposed algorithm can efficiently reconstruct a sequence of 3D objects, including a Stanford Bunny with a complex shape.
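The compressed-sensing formulation sketched in the abstract can be illustrated with a minimal 1-D analogue: each grid position contributes one column (a Fresnel chirp pattern) to the sensing matrix, and the hologram is a linear measurement of a sparse indication vector. All parameter values and the 1-D simplification below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Hypothetical parameters for a small 1-D analogue of the point-source model.
wavelength = 633e-9          # He-Ne laser wavelength (m), assumed
M = 128                      # hologram samples
N = 256                      # grid positions (M < N: a compressed measurement)
pitch = 8e-6                 # detector pixel pitch (m), assumed
depths = np.linspace(0.05, 0.10, N)   # grid depths along the optical axis (m)

u = (np.arange(M) - M // 2) * pitch   # hologram-plane coordinates
k = 2 * np.pi / wavelength

# Each column of the sensing matrix holds the (real part of the) Fresnel
# diffraction pattern a unit point source at that grid depth would produce.
Phi = np.stack([np.cos(k * u**2 / (2 * z)) for z in depths], axis=1)
Phi /= np.linalg.norm(Phi, axis=0)    # unit-norm columns

# A K-sparse indication vector: only a few grids hold point sources.
x = np.zeros(N)
x[[30, 120, 200]] = [1.0, 0.7, 0.5]

y = Phi @ x                           # simulated hologram measurement
print(Phi.shape, y.shape)             # matrix is M x N, measurement is length M
```

Because neighboring depths produce nearly identical chirps, the columns of $\mathbf{\Phi}$ become highly coherent as $N$ grows, which is exactly the difficulty the FCSMP algorithm addresses.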



1. Constructing the similar compact sensing matrix $\mathbf{\Psi}$ (described in Section 4.B.1).

2. Obtaining an initial estimate of the correct subspace. The OMP algorithm is used to find an estimate of the support set based on the measurement vector $\mathbf{y}$ and the similar compact sensing matrix $\mathbf{\Psi}$. The estimated support set is represented as $\widehat{\mathbf{a}}=\{{\widehat{a}}^{1}({\mathit{\gamma}}_{C}^{i}),{\widehat{a}}^{2}({\mathit{\gamma}}_{C}^{j}),\dots ,{\widehat{a}}^{{K}^{\prime}}({\mathit{\gamma}}_{C}^{l})\}$, ${K}^{\prime}\le K$, where ${\widehat{a}}^{1}({\mathit{\gamma}}_{C}^{i})$ indicates that the first element of $\widehat{\mathbf{a}}$ corresponds to the $i$th condensed column ${\mathit{\gamma}}_{C}^{i}$. We obtain an initial estimate of the correct subspace, ${\widehat{\mathbf{S}}}_{ini}$, spanned by the columns $\{{\mathit{\gamma}}_{C}^{i},{\mathit{\gamma}}_{C}^{j},\dots ,{\mathit{\gamma}}_{C}^{l}\}$, defined as ${\widehat{\mathbf{S}}}_{ini}=\mathrm{SPAN}({\mathit{\gamma}}_{C}^{i},{\mathit{\gamma}}_{C}^{j},\dots ,{\mathit{\gamma}}_{C}^{l})$.

For clarity of representation, the columns spanning the initial estimate of the correct subspace, $\{{\mathit{\gamma}}_{C}^{i},{\mathit{\gamma}}_{C}^{j},\dots ,{\mathit{\gamma}}_{C}^{l}\}$, are defined as the contributing condensed columns, and their corresponding similar column groups, $\{{\mathbf{\Gamma}}_{i},{\mathbf{\Gamma}}_{j},\dots ,{\mathbf{\Gamma}}_{l}\}$, are defined as the contributing similar column groups. Each contributing similar column group contains a set of columns from the original sensing matrix, e.g., ${\mathbf{\Gamma}}_{i}=\{{\mathit{\gamma}}_{1}^{i},\dots ,{\mathit{\gamma}}_{{N}_{{\mathbf{\Gamma}}_{i}}}^{i}\}$, where ${N}_{{\mathbf{\Gamma}}_{i}}$ indicates the number of columns in ${\mathbf{\Gamma}}_{i}$.

3. Testing each contributing similar column group one by one to find the true vectors spanning the correct subspace.

For the contributing similar column group ${\mathbf{\Gamma}}_{i}$,

(a) Calculating the candidate subspaces corresponding to ${\mathbf{\Gamma}}_{i}$. List all the combination sets of ${\mathbf{\Gamma}}_{i}$ as ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}=\{\{{\mathit{\gamma}}_{1}^{i}\},\dots ,\{{\mathit{\gamma}}_{{N}_{{\mathbf{\Gamma}}_{i}}}^{i}\},\{{\mathit{\gamma}}_{1}^{i},{\mathit{\gamma}}_{2}^{i}\},\{{\mathit{\gamma}}_{2}^{i},{\mathit{\gamma}}_{3}^{i}\},\dots ,\{{\mathit{\gamma}}_{1}^{i},{\mathit{\gamma}}_{2}^{i},{\mathit{\gamma}}_{3}^{i}\},\dots \}$. The $k$th combination set of ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}$ is denoted as ${\mathbf{f}}_{{\mathbf{\Gamma}}_{i}}^{k},k=1,\dots ,{R}_{{\mathbf{\Gamma}}_{i}}$, where ${R}_{{\mathbf{\Gamma}}_{i}}$ represents the number of combination sets in ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}$. In the initial estimate of the correct subspace, ${\widehat{\mathbf{S}}}_{ini}$, replace the contributing condensed column ${\mathit{\gamma}}_{C}^{i}$ with a combination set from ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}$. As a result, each combination set from ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}$, e.g., ${\mathbf{f}}_{{\mathbf{\Gamma}}_{i}}^{k}$, together with the ${K}^{\prime}-1$ fixed contributing condensed columns from ${\widehat{\mathbf{S}}}_{ini}$ (except ${\mathit{\gamma}}_{C}^{i}$), forms a candidate subspace, e.g., the $k$th candidate subspace is represented as ${\mathbf{\Upsilon}}^{k}=\mathrm{SPAN}({\mathbf{f}}_{{\mathbf{\Gamma}}_{i}}^{k},{\mathit{\gamma}}_{C}^{j},\dots ,{\mathit{\gamma}}_{C}^{l}),k=1,2,\dots ,{R}_{{\mathbf{\Gamma}}_{i}}$.

(b) Finding the true combination set in ${\mathbf{\Gamma}}_{i}$. For each candidate subspace, ${\mathbf{\Upsilon}}^{k},k=1,2,\dots ,{R}_{{\mathbf{\Gamma}}_{i}}$, the proposed algorithm solves a least-squares problem to approximate the nonzero entries of the original sparse vector $\mathbf{x}$ [Eq. (13)], and sets the other entries to zero [Eq. (14)], resulting in a candidate estimate ${\widehat{\mathbf{x}}}^{\mathit{k}},k=1,2,\dots ,{R}_{{\mathbf{\Gamma}}_{i}}$, as $${\widehat{\mathbf{x}}}_{{\mathbf{\Upsilon}}^{k}}^{k}={({\mathbf{\Upsilon}}^{k})}^{\dagger}\mathbf{y},$$$${\widehat{\mathbf{x}}}_{\{1,2,\dots ,N\}-{\mathbf{\Upsilon}}^{k}}^{k}=\mathbf{0},$$where $\dagger$ indicates the pseudo-inverse operation, ${\widehat{\mathbf{x}}}_{{\mathbf{\Upsilon}}^{k}}^{k}$ is composed of the entries of ${\widehat{\mathbf{x}}}^{k}$ indexed by $i\in {\mathbf{\Upsilon}}^{k}$, and ${\widehat{\mathbf{x}}}_{\{1,2,\dots ,N\}-{\mathbf{\Upsilon}}^{k}}^{k}$ is composed of the entries of ${\widehat{\mathbf{x}}}^{k}$ indexed by $i\in \{1,2,\dots ,N\}-{\mathbf{\Upsilon}}^{k}$. For each candidate estimate ${\widehat{\mathbf{x}}}^{k}(k=1,\dots ,{R}_{{\mathbf{\Gamma}}_{i}})$, calculate the residual ${\mathit{r}}^{k}(k=1,\dots ,{R}_{{\mathbf{\Gamma}}_{i}})$ via Eq. (15): $${\mathit{r}}^{k}=\mathbf{y}-\mathbf{\Phi}{\widehat{\mathbf{x}}}^{k}.$$The ${l}_{2}$ norm of ${\mathit{r}}^{k}$ is denoted ${\Vert {\mathit{r}}^{k}\Vert}_{2}$. Among the ${R}_{{\mathbf{\Gamma}}_{i}}$ candidate subspaces, find the one whose residual has the least ${l}_{2}$ norm, and choose its associated combination set (from ${\mathit{F}}_{{\mathbf{\Gamma}}_{i}}$) as the true combination set, i.e., the true vectors from ${\mathbf{\Gamma}}_{i}$ that span the correct subspace. We denote the true combination set from ${\mathbf{\Gamma}}_{i}$ as ${\mathbf{SC}}_{{\mathbf{\Gamma}}_{i}}$.

4. Repeating Step 3 for the ${K}^{\prime}-1$ remaining contributing similar column groups, $\{{\mathbf{\Gamma}}_{j},\dots ,{\mathbf{\Gamma}}_{l}\}$. We can obtain a number of true combination sets, which are denoted as $\{{\mathbf{SC}}_{{\mathbf{\Gamma}}_{j}},\dots ,{\mathbf{SC}}_{{\mathbf{\Gamma}}_{l}}\}$.

5. Obtaining a final estimate of the correct subspace. We can obtain the final estimate of the correct subspace by combining the true combination sets from all the contributing similar column groups, as $\widehat{\mathbf{S}}=\mathrm{SPAN}({\mathbf{SC}}_{{\mathbf{\Gamma}}_{i}},{\mathbf{SC}}_{{\mathbf{\Gamma}}_{j}},\dots ,{\mathbf{SC}}_{{\mathbf{\Gamma}}_{l}})$.

6. Calculating the estimated signal $\tilde{\mathbf{x}}$. Based on the estimated subspace $\widehat{\mathbf{S}}$, the proposed algorithm solves a least-squares problem to approximate the nonzero entries of the original sparse vector $\mathbf{x}$ [Eq. (16)], and sets the other entries to zero [Eq. (17)], resulting in a final estimate $\tilde{\mathbf{x}}$, as $${\tilde{\mathbf{x}}}_{\widehat{\mathbf{S}}}={(\widehat{\mathbf{S}})}^{\dagger}\mathbf{y},$$$${\tilde{\mathbf{x}}}_{\{1,2,\dots ,N\}-\widehat{\mathbf{S}}}=\mathbf{0}.$$
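The steps above can be sketched compactly in code. This is a minimal illustrative sketch, not the authors' implementation: the similar compact sensing matrix `Psi` and the column grouping `groups` are taken as given (their construction is described in Section 4.B.1), candidate combination sets are enumerated exhaustively per group, and the synthetic demo data are invented for the example.

```python
import numpy as np
from itertools import combinations

def omp(Psi, y, max_iter):
    """Step 2: plain OMP on the condensed matrix for a rough support estimate."""
    residual, support = y.copy(), []
    for _ in range(max_iter):
        idx = int(np.argmax(np.abs(Psi.T @ residual)))
        if idx in support:
            break
        support.append(idx)
        coef, *_ = np.linalg.lstsq(Psi[:, support], y, rcond=None)
        residual = y - Psi[:, support] @ coef
    return support

def fcsmp(Phi, Psi, groups, y, K):
    # Phi: original M x N sensing matrix; Psi: similar compact sensing matrix
    # (one condensed column per group); groups[g]: Phi-column indices merged
    # into the g-th condensed column.  All names here are illustrative.
    rough = omp(Psi, y, K)                       # step 2: rough support estimate
    chosen = {g: [groups[g][0]] for g in rough}  # provisional column per group
    for g in rough:                              # steps 3-4: test each group
        others = [c for h in rough if h != g for c in chosen[h]]
        best, best_res = chosen[g], np.inf
        for r in range(1, len(groups[g]) + 1):   # all combination sets of group g
            for combo in combinations(groups[g], r):
                cols = list(combo) + others      # candidate subspace columns
                xh, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
                res = np.linalg.norm(y - Phi[:, cols] @ xh)
                if res < best_res:               # keep least-residual combo
                    best, best_res = list(combo), res
        chosen[g] = best
    final = sorted(c for g in rough for c in chosen[g])  # step 5: final subspace
    x_hat = np.zeros(Phi.shape[1])               # step 6: final least squares
    x_hat[final], *_ = np.linalg.lstsq(Phi[:, final], y, rcond=None)
    return x_hat

# Tiny synthetic demo (invented data): 4 groups of 2 similar columns each.
rng = np.random.default_rng(0)
M, groups = 32, {g: [2 * g, 2 * g + 1] for g in range(4)}
Phi = np.zeros((M, 8))
for g, cols in groups.items():
    base = rng.standard_normal(M)
    for c in cols:                               # near-duplicate (coherent) columns
        col = base + 0.05 * rng.standard_normal(M)
        Phi[:, c] = col / np.linalg.norm(col)
Psi = np.stack([Phi[:, cols].mean(axis=1) for cols in groups.values()], axis=1)
Psi /= np.linalg.norm(Psi, axis=0)
x = np.zeros(8)
x[0], x[4] = 1.0, 0.8                            # 2-sparse ground truth
y = Phi @ x
x_hat = fcsmp(Phi, Psi, groups, y, K=2)
print(np.linalg.norm(y - Phi @ x_hat))           # small when the support is recovered
```

The design point the sketch makes concrete is that OMP only has to distinguish between low-coherence condensed columns, while the exhaustive least-squares testing is confined to the few small contributing groups, rather than to all $N$ columns of $\mathbf{\Phi}$.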

Table 2. Computational Complexity of the Proposed FCSMP Algorithm

| Off-Line Processing | Rough Estimation | Refined Estimation |
|---|---|---|
| $O(\frac{(N-1)N}{2}M)$ | $O(MV)$ | ${N}_{\mathrm{cs}}O(KM)$ |

Table 3. Computational Complexity of the SP, BP, GSSMP, and FCSMP Algorithms

| Algorithm | SP | BP | GSSMP | FCSMP |
|---|---|---|---|---|
| Computational complexity | $O(MN)$ | $O({M}^{2}{N}^{1.5})$ | $O(MV)+{C}_{{H}_{\mathrm{cc}}}^{K}O(KM)$ | $O(MV)+{N}_{\mathrm{cs}}O(KM)$ |

Table 4. Computing Time in Hologram Calculation

| Object | Ray Tracing Method | Proposed Method |
|---|---|---|
| Box | 0.7605 s | 0.0213 s |
| Cube Frame | 0.8717 s | 0.0498 s |
| Stanford Bunny | 1.6287 s | 0.0421 s |

Table 5. Reconstruction Errors and Computing Time in the Noiseless Condition

| Object | Metric | FCSMP | GSSMP | BCS | BP | SP |
|---|---|---|---|---|---|---|
| "$B$," "$O$," "$X$" | ERROR | 0.0020 | 0.0009 | 0.3450 | 0.2440 | 1.2817 |
| "$B$," "$O$," "$X$" | TIME (s) | 49.7432 | 235.4573 | 29.4095 | 57.1620 | 22.9276 |
| Cube Frame | ERROR | 0.0018 | 0.0006 | 0.8166 | 0.6202 | 1.3229 |
| Cube Frame | TIME (s) | 51.5301 | 247.9542 | 31.2426 | 58.3590 | 29.0700 |
| Stanford Bunny | ERROR | 0.4585 | 0.3038 | 0.5266 | 0.9625 | 1.3476 |
| Stanford Bunny | TIME (s) | 120.7584 | 397.5426 | 107.0405 | 132.5899 | 101.7001 |


