Article

A Model-Based Optimization Framework for Iterative Digital Breast Tomosynthesis Image Reconstruction

by Elena Loli Piccolomini *,† and Elena Morotti
Department of Computer Science and Engineering, University of Bologna, Via Mura Anteo Zamboni 7, 40126 Bologna, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 2 January 2021 / Revised: 4 February 2021 / Accepted: 8 February 2021 / Published: 13 February 2021
(This article belongs to the Special Issue Inverse Problems and Imaging)

Abstract: Digital Breast Tomosynthesis is an X-ray imaging technique that allows a volumetric reconstruction of the breast from a small number of low-dose two-dimensional projections. Although it is already used in the clinical setting, enhancing the quality of the recovered images is still a subject of research. The aim of this paper is to propose and compare, within a general optimization framework, three slightly different models and the corresponding accurate iterative algorithms for Digital Breast Tomosynthesis image reconstruction, all characterized by a convergent behavior. The suggested model-based implementations are specifically tailored to the clinical requirements of Digital Breast Tomosynthesis and take advantage of a Total Variation regularizer. We also tune a fully automatic strategy to set a proper regularization parameter. We assess our proposals on real data, acquired from a breast accreditation phantom and a clinical case. The results confirm the effectiveness of the presented framework in reconstructing breast volumes, with particular focus on masses and microcalcifications, in a few iterations, and in enhancing the image quality when the execution is prolonged.

1. Introduction

Digital Breast Tomosynthesis (DBT) is a 3D X-ray cone-beam Computed Tomography (CT) technique for the early detection of breast tumors [1,2].
Whereas traditional digital mammography provides a single 2D breast image, DBT reconstructs the breast as a stack of 2D images using a comparable radiation dose. Hence, DBT is also used in screening programs, because the volumetric reconstruction reduces tissue overlap, allowing for a better visibility of malignant structures. DBT is characterized by a limited-angle geometry: since the object is scanned only from a narrow angular range, the DBT projection data are incomplete compared to classical CT.
The reconstruction algorithm plays an important role, influencing the accuracy of the recovered breast images. It is well known that traditional fast analytic reconstruction methods, such as the Feldkamp algorithm [3], produce poor, noisy images in limited-angle tomography; hence, they have been abandoned in favor of Iterative Reconstruction (IR) algorithms [4,5,6]. IR solvers provide a sequence of solutions, computing an improved reconstructed volume at each iteration. Iterative approaches have been known since the early years of CT, but they were not used for a long time due to their high computational cost. Recently, IR methods have attracted renewed interest in the scientific community and among the major vendors, thanks to the advent of more powerful processors [7]. As a consequence, a wide range of IR methods has been proposed to reconstruct tomographic images; an exhaustive analysis can be found in [8], whereas an overview of IR methods for CT image reconstruction is presented in Section 1.1.
In this work, we consider three IR algorithms as solvers of three slightly different model-based formulations. They all solve optimization problems whose objective function both describes the CT process (by modeling the physics of the system and including the presence of noise on the projection data) and introduces some image priors. Such a mathematical approach is quite uncommon in 3D tomographic imaging, where a constrained formulation is usually preferred [9,10,11,12]. In particular, we mainly consider the objective function as the sum of the Least Squares (LS) data-fitting term and the Total Variation (TV) regularization function. The TV regularizer is chosen by many authors because of its excellent shape-recovering and denoising properties, even if it is known that TV can produce staircase effects when the regularization parameter is too high [8,9,10,11,12,13]; hence, the choice of the regularization parameter plays a fundamental role in the considered framework.
Figure 1 shows the workflow of the model-based approach, from the numerical modeling of the projection step during the breast scan, through the iterative solver of the minimization problem, to the inspection of the reconstructed volume in search of breast cancer objects.

1.1. A Short Review on Iterative Methods for CT

The worldwide increasing interest in Compressive Sensing (CS) [14] has promoted the novel model-based iterative approach (which uses an optimization framework to exploit the CS theory). Among the wide class of model-based IR procedures, the so-called Sparsity-Exploiting Image Reconstruction (SEIR) methods have significantly improved the image quality in all the low-dose CT applications (see [8,13] and references therein). In particular, many authors have introduced the TV regularization function to take advantage of the sparsity in the image gradient domain for edge detection [4,5,15,16,17,18,19,20,21,22]. In practice, this property translates into a noise-smoothing effect and into a reliable detection of the shape and size of anatomical objects.
Within the class of SEIR methods, it is possible to distinguish two main categories of algorithms: approximate solvers and accurate solvers. The first category contains algorithms that, at each step, apply an algebraic update (such as SART or SIRT) and then decrease the TV of the just-computed solution. Examples are the well-known POCS algorithm and its developments [16,23,24]. They provide reliable reconstructions in a few iterations, but the quality of the recovered images strongly depends on the tuning of many inner parameters, and the convergence of the algorithms is not guaranteed. On the other hand, accurate solvers are optimization methods minimizing an objective function defined as the sum of a data-fitting term and a regularization function, typically weighted by a regularization parameter. This class is represented by classical optimization methods adapted to the huge-size 3D tomographic reconstruction problems. Their iterates provably converge to the exact solution of the minimization problem.
So far, only preliminary investigations on simulations or phantoms have analyzed the results of accurate solvers for few-view CT applications [25,26,27].
Concerning existing rules for the choice of the regularization parameter in tomography, in [28] the authors propose a strategy based on multiresolution and apply it to 2D reconstructions. The proposed rule is very promising, but it is too expensive for a very large 3D application such as DBT image reconstruction. An exhaustive list of existing rules for the selection of the regularization parameter is also reported in [28].

1.2. Aim and Contribution of the Paper

The aim of this paper is to propose a new TV-based optimization framework for the reconstruction of DBT images. The framework, described in a rigorous numerical setting, includes both constrained and unconstrained models; thus, it is a flexible tool that easily enables the use of different data-fitting or regularization terms, as well as the addition of further box constraints, to reconstruct reliable volumes from subsampled noisy data. We are also interested in finding an automatic strategy to set the regularization parameter, which strongly affects the quality of the reconstructions, in order to avoid manual tuning (which is infeasible in a clinical setting).
The contribution of this work can be summarized as follows.
  • We propose three different models inside a unique optimization framework, combining a data-fitting function (the least squares function) and a TV-like regularization term. We solve the minimization problems with efficient iterative algorithms, converging to the global minimum of the problem, and we analyze and compare the results achieved on real, noisy, large-size data sets. Two of these models were presented in our previous works on low-sampled CT image reconstruction [26,27], but they were applied to small simulated data sets with different (not limited-angle) geometries. This optimization approach is new for DBT applications.
  • We propose a user-independent and computationally inexpensive rule to set and adapt the regularization parameter at each iteration of the algorithms.
  • In order to assess our proposals, we implement the methods and test them on real, large-size, noisy projection data from both a breast accreditation phantom and a human patient. We analyze the algorithms' performance in recovering the breast tumor objects of interest (i.e., masses and microcalcification clusters) by means of measures of merit and visual inspection, at different stages of the Iterative Reconstruction process. We analyze the volume via its recovered slices, both on planes perpendicular to the Z direction and along the Z direction (see Figure 1).
We remark that clinical routine requires almost real-time reconstructions. However, we underline the importance of improving the image quality with further iterations and longer execution times, for two main reasons: first, more reliable images can be crucial in difficult-to-diagnose cases to avoid false interpretations; second, the fast evolution of multiprocessor boards, such as GPUs, is drastically reducing the time per iteration of the methods, hence we can expect that more iterations will be affordable in clinical reconstructions in the near future.
The paper is organized as follows. In Section 2, we state the optimization framework for the image reconstruction; we then illustrate the three proposed model-based approaches and the corresponding IR solvers in Section 3. Section 4 and Section 5 present the data sets and the experimental results, respectively. Finally, Section 6 contains some conclusions.

2. The Optimization Framework in a Model-Based Formulation

Mathematically, tomographic image reconstruction is an ill-posed inverse problem whose solution can be obtained by minimizing a suitable objective function related to the physical process. To define the model describing image reconstruction, a deep understanding of the acquisition steps characterizing the DBT technique is therefore crucial. A schematic example of a DBT system is shown on the left of Figure 1. In the DBT routine, the breast is first compressed along the Z-axis, over the flat detector plane. The source moves along an arc trajectory and emits low-dose radiation from a discrete number of angles. Once the X-ray cone-beam has passed through the body, the detector records its attenuation: the set of the resulting projection images constitutes the raw tomographic data set. The breast volume to be recovered is composed of a stack of high-resolution images, parallel to the detector plane, along the vertical Z direction.
In order to define the numerical model of tomographic image formation, we recall the physical law underlying the acquisition process. Supposing the 2D detector panel is made of $N_p$ recording units, for each fixed projection angle $\theta$ and $i$th recording unit, the Lambert–Beer law relates the projection $P_i^\theta$, along a ray $R_\theta$, to the attenuation coefficient function $\mu(w)$ of a point $w$ [29] as:
$$\int_{R_\theta} \mu(w)\,\mathrm{d}R = -\ln\left(\frac{P_i^\theta}{P_0}\right), \qquad i = 1, \ldots, N_p, \tag{1}$$
where $P_0$ represents the intensity of the energy emitted by the X-ray source. By discretizing the 3D object into $N_v$ voxels and by considering all the $N_\theta$ scanning angles, the integral in (1) yields the following linear system:
$$M x = b. \tag{2}$$
In Equation (2), $x$ denotes the $N_v$-dimensional vector stacking the attenuation coefficients of all the voxels, $b$ is the vector of size $N_d = N_p \cdot N_\theta$ storing all the projections (i.e., the right-hand sides of (1)), and $M$ is the matrix of size $N_d \times N_v$, built according to the DBT device geometry and representing the projection process onto the detector.
Some issues arise when solving the linear system (2) as an inverse problem, such as the existence of infinitely many solutions (since $N_v > N_d$) and the presence of strong noise in the reconstructed images (due to the ill-posedness of the problem). The model-based approach is introduced to overcome these numerical difficulties by adding some a priori information. The resulting formulation can be stated as an unconstrained or constrained minimization problem [8], involving a data-fitting function $J(x)$ and a prior operator $R(x)$ (acting here as a regularizer).
In particular, we set $J(x)$ as the Least Squares (LS) function
$$LS(x) = \| Mx - b \|_2^2 \tag{3}$$
and $R(x)$ as the Total Variation (TV) operator, defined as [30]:
$$TV(x) = \sum_{j=1}^{N_v} \| \nabla x_j \|_2. \tag{4}$$
The TV operator is not differentiable at the origin; thus, when a solver requires the differentiability of the objective function, we consider the smoothed version
$$TV_\beta(x) = \sum_{j=1}^{N_v} \sqrt{\| \nabla x_j \|_2^2 + \beta^2}, \tag{5}$$
where $\beta$ is a small positive parameter [30].
We remark that both the TV and the $TV_\beta$ quantities can be computed with forward differences, with periodic boundary conditions on the volumes. In addition, if we reshape the vector $x$ of length $N_v$ as a three-dimensional object $\mathbf{x}$ of size $N_x \times N_y \times N_z$, we can introduce the notation $j_x, j_y, j_z$ to indicate the indices of the $j$th voxel with respect to the three Cartesian axes ($j_x = 1, \ldots, N_x$, $j_y = 1, \ldots, N_y$ and $j_z = 1, \ldots, N_z$). Hence, for all $j = 1, \ldots, N_v$, the terms $\|\nabla x_j\|_2^2$ in Equations (4) and (5) can be read as $\|\nabla x_j\|_2^2 = \|\nabla \mathbf{x}_{j_x, j_y, j_z}\|_2^2$ and thus expressed as:
$$\|\nabla \mathbf{x}_{j_x,j_y,j_z}\|_2^2 = (\mathbf{x}_{j_x+1,j_y,j_z} - \mathbf{x}_{j_x,j_y,j_z})^2 + (\mathbf{x}_{j_x,j_y+1,j_z} - \mathbf{x}_{j_x,j_y,j_z})^2 + (\mathbf{x}_{j_x,j_y,j_z+1} - \mathbf{x}_{j_x,j_y,j_z})^2. \tag{6}$$
Since the model-based problem statement is a flexible framework, in this work we consider three different LS-TV formulations, namely:
$$\arg\min_{x \ge 0}\; f(x) = LS(x) + \lambda\, TV_\beta(x), \tag{7}$$
$$\arg\min_{x}\; f(x) = LS(x) + \lambda\, TV_\beta(x), \tag{8}$$
$$\arg\min_{x}\; TV(x) \quad \mathrm{s.t.} \quad LS(x) \le \epsilon^2, \; x \ge 0, \tag{9}$$
where $\lambda \ge 0$ is a regularization parameter and $\epsilon \ge 0$ is an estimate of a noise measure. The introduction of the box constraint $x \ge 0$ reflects the non-negativity of the linear attenuation coefficient $\mu$ in (1); hence, it is consistent with the DBT numerical formulation.
We remark that, since the objective function $f(x)$ in (7) and (8) is the sum of two terms, its gradient $\nabla f(x) = \nabla LS(x) + \lambda \nabla TV_\beta(x)$ can be evaluated by separately computing $\nabla LS(x)$ as:
$$\nabla LS(x) = 2(M^T M x - M^T b) \tag{10}$$
and $\nabla TV_\beta(x)$ through finite forward differences (see [27] for its detailed derivation).
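As a concrete illustration of Equations (5), (6) and (10), the following Python/NumPy sketch (our own simplified example, not the authors' implementation) evaluates the smoothed TV of a volume with forward differences and periodic boundary conditions, and the LS gradient through generic matrix-free projector callables M and Mt (hypothetical names):

```python
import numpy as np

def smoothed_tv(vol, beta=1e-3):
    """Smoothed 3D Total Variation (Eq. (5)) with forward differences
    and periodic boundary conditions, per-voxel norm as in Eq. (6)."""
    dx = np.roll(vol, -1, axis=0) - vol   # forward difference along X
    dy = np.roll(vol, -1, axis=1) - vol   # forward difference along Y
    dz = np.roll(vol, -1, axis=2) - vol   # forward difference along Z
    grad_norm2 = dx**2 + dy**2 + dz**2    # per-voxel squared gradient norm
    return np.sum(np.sqrt(grad_norm2 + beta**2))

def ls_gradient(x, M, Mt, b):
    """Gradient of the least-squares term (Eq. (10)): 2 (M^T M x - M^T b).
    M and Mt are matrix-free callables for the projector and backprojector."""
    return 2.0 * Mt(M(x) - b)
```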

3. Iterative Optimization Methods

For each considered model formulation (7)–(9), we propose an accurate solver whose convergence to the global minimum is demonstrated. We thus apply and compare the Scaled Gradient Projection (SGP), the Fixed Point (FP) and the Chambolle–Pock (CP) methods to solve the optimization problems (7), (8) and (9), respectively.
Among the wide class of optimization methods, these three have been chosen because they satisfy the requirements needed to be usable on DBT devices:
  • a fast error decrease in the early iterations, in order to obtain a good image in a few iterations;
  • a low computational cost per iteration (mainly determined by the number of matrix-vector products), to run the solver efficiently in a short time;
  • a limited memory requirement, to solve real-size problems on commercially affordable hardware.
A challenging issue, common to the implementation of the three algorithms, is the handling of the projection matrix M: since it cannot be stored due to its huge dimensions, it must be recomputed at each call. This section therefore ends with a focus on the algorithm we use to generate M.
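In practice, the solvers never form M explicitly: they only need the actions $x \mapsto Mx$ and $y \mapsto M^T y$, recomputed on the fly. A minimal sketch of this matrix-free pattern in Python (our own illustration; `project` and `backproject` are hypothetical routines implementing the Distance Driven steps of Section 3.5):

```python
from scipy.sparse.linalg import LinearOperator

def make_system_operator(project, backproject, n_data, n_voxels):
    """Wrap on-the-fly projection/backprojection routines as a matrix-free
    linear operator, so a solver can use M without ever storing it."""
    return LinearOperator(
        (n_data, n_voxels),
        matvec=project,        # computes M x   (forward projection)
        rmatvec=backproject,   # computes M^T y (backprojection)
    )
```

Any solver that only performs matrix-vector products (such as the three methods above) can then treat this operator as if M were an explicit matrix, at the cost of recomputing the footprints at every product.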

3.1. Scaled Gradient Projection Algorithm

The SGP algorithm is a first-order accelerated method. We apply it to solve the non-negatively constrained, differentiable optimization problem (7). Algorithm 1 reports the main steps of the SGP method.
At each iteration $k$, the new iterate is computed by moving along a descent direction $d^{(k)}$ with a step size $\eta_k > 0$, as:
$$x^{(k+1)} = x^{(k)} + \eta_k d^{(k)}. \tag{11}$$
The direction $d^{(k)}$ is obtained by projecting the scaled gradient step onto the non-negative orthant through the projector $P_+$:
$$d^{(k)} = P_+\left( x^{(k)} - \alpha_k S_k \nabla f(x^{(k)}) \right) - x^{(k)}, \tag{12}$$
where $\alpha_k$ is the step length and $S_k$ is the scaling matrix (step 7 in Algorithm 1).
Essentially, the method follows a Gradient Projection approach, accelerated by choosing the step length $\alpha_k$ with Barzilai–Borwein techniques as proposed in [27] and by introducing a suitable scaling matrix that improves the conditioning. In particular, the scaling matrix $S_k$ is a diagonal matrix with entries in a limited interval. To update $S_k$ (line 5 of Algorithm 1), we compute a splitting of the objective function gradient into its positive and negative parts, as:
$$\nabla f(x) = V(x) - U(x), \tag{13}$$
where $V(x) > 0$ and $U(x) \ge 0$. The diagonal elements $s_{j,j}^{(k)}$ of $S_k$ are updated, for $j = 1, \ldots, N_v$, as:
$$s_{j,j}^{(k)} = \min\left\{ \rho_k,\; \max\left\{ \frac{1}{\rho_k},\; \frac{x_j^{(k)}}{V_j(x^{(k)})} \right\} \right\}, \tag{14}$$
where $\{\rho_k\}_k$ is a decreasing, bounded, positive sequence ensuring the convergence of the procedure [31]. We finally denote by $\mathcal{S}_{\rho_k}$ the set of diagonal matrices with entries in the interval $[\frac{1}{\rho_k}, \rho_k]$.
Regarding the convergence, it is proved in [32] that the SGP algorithm converges to the unique minimum of (7) without any further restriction on the step length $\alpha_k$ or on the scaling matrix $S_k$. In [31], the authors proved that the theoretical convergence rate of the SGP method is $\mathcal{O}(1/k)$.
Algorithm 1 Scaled Gradient Projection algorithm (SGP)
Input: $M$, $b$, $\lambda$
1: Initialize: $x^{(0)} \ge 0$, $\gamma, \sigma \in (0, 1)$, $0 < \alpha_{min} \le \alpha_{max}$
2: $k = 0$
3: while not convergence do
4:    Compute $g^{(k)} = 2(M^T M x^{(k)} - M^T b) + \lambda \nabla TV_\beta(x^{(k)})$
5:    Compute $S_k \in \mathcal{S}_{\rho_k}$ as in (14)
6:    Define $\alpha_k \in [\alpha_{min}, \alpha_{max}]$ with alternate BB rules
7:    $d^{(k)} = P_+\left( x^{(k)} - \alpha_k S_k g^{(k)} \right) - x^{(k)}$
8:    $\eta_k = 1$
9:    while $f(x^{(k)} + \eta_k d^{(k)}) > f(x^{(k)}) + \sigma \eta_k (g^{(k)})^T d^{(k)}$ do
10:       $\eta_k = \gamma \eta_k$
11:    $x^{(k+1)} = x^{(k)} + \eta_k d^{(k)}$
12:    $k = k + 1$
Output: $x^{(k)}$
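The following Python sketch mirrors the structure of Algorithm 1 under simplifying assumptions (a single safeguarded Barzilai–Borwein step length instead of the alternate BB rules, a generic positive splitting of the gradient, and a fixed ρ); it is only meant to illustrate the flow of the iteration, not the authors' CUDA implementation. The callables M, Mt, f_tv and g_tv are assumed to be provided (e.g., as sketched in Section 2):

```python
import numpy as np

def sgp(M, Mt, b, lam, f_tv, g_tv, x0, n_iter=30,
        rho=1e3, alpha_min=1e-5, alpha_max=1e5, gamma=0.5, sigma=1e-4):
    """Simplified Scaled Gradient Projection (cf. Algorithm 1).
    M, Mt      : callables computing M x and M^T y (matrix-free projector)
    f_tv, g_tv : smoothed-TV value and gradient of the current iterate
    Note: uses a generic positive gradient splitting and one safeguarded BB
    rule; the paper uses the natural splitting and alternate BB rules."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    f = lambda z: np.sum((M(z) - b) ** 2) + lam * f_tv(z)
    grad = lambda z: 2.0 * Mt(M(z) - b) + lam * g_tv(z)
    g = grad(x)
    alpha = 1.0
    for _ in range(n_iter):
        # Diagonal scaling with entries clipped to [1/rho, rho]  (cf. Eq. (14))
        V = np.maximum(g, 0.0) + 1e-10
        s_diag = np.clip(x / V, 1.0 / rho, rho)
        # Projected scaled gradient direction  (cf. Eq. (12))
        d = np.maximum(x - alpha * s_diag * g, 0.0) - x
        # Backtracking (Armijo) line search on the step size eta (lines 8-10)
        eta, fx, gtd = 1.0, f(x), float(g.ravel() @ d.ravel())
        while f(x + eta * d) > fx + sigma * eta * gtd:
            eta *= gamma
        x_new = x + eta * d
        g_new = grad(x_new)
        # Safeguarded Barzilai-Borwein step length for the next iteration (line 6)
        s, y = (x_new - x).ravel(), (g_new - g).ravel()
        if s @ y > 0:
            alpha = float(np.clip((s @ s) / (s @ y), alpha_min, alpha_max))
        x, g = x_new, g_new
    return x
```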

3.2. The Fixed Point Algorithm

The FP algorithm solves the unconstrained problem (8); it was originally proposed for image deblurring in [33]. Starting from this approach, we derived the lagged diffusivity FP Algorithm 2 for 3D tomographic image reconstruction.
Algorithm 2 Lagged diffusivity Fixed Point algorithm (FP)
Input: $M$, $b$, $\lambda$, $maxiter$
1: Initialize: $x^{(0)} \ge 0$
2: for $k = 0$ to $maxiter - 1$ do
3:    Compute $g^{(k)} = 2(M^T M x^{(k)} - M^T b) + \lambda \nabla TV_\beta(x^{(k)})$
4:    Solve the linear system $H_k d^{(k)} = -g^{(k)}$, where $H_k = M^T M + \lambda L(x^{(k)})$, with the Conjugate Gradient method
5:    $x^{(k+1)} = x^{(k)} + d^{(k)}$
Output: $P_+(x^{(k+1)})$
At each iteration $k$, the FP algorithm updates the solution with the rule:
$$x^{(k+1)} = x^{(k)} + d^{(k)}, \tag{15}$$
where the descent direction $d^{(k)}$ is computed by solving the linear system $H_k d^{(k)} = -\nabla f(x^{(k)})$ (line 4 of Algorithm 2). The matrix $H_k = M^T M + \lambda L(x^{(k)})$ in line 4 approximates the Hessian matrix; it contains the seven-diagonal banded matrix $L(x^{(k)})$, which is the discretization of the diffusion operator $L(x)$ such that $L(x)x = \nabla TV(x)$ [30]. We solve the linear system with very few iterations of the Conjugate Gradient (CG) algorithm [34]. We stop it far before convergence, both to limit the computational time and to prevent noise from affecting the solution. We remark that each CG iteration requires a matrix-vector product involving $H_k$ and that, to save memory, we perform it without storing $H_k$: we only store $L(x^{(k)})$ and recompute $M$ and $M^T$ at run time. At the end, we project the last computed solution onto the non-negative orthant. For more details on the FP method applied to tomographic image reconstruction and on its convergence, see [26] and [35], respectively.
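A minimal sketch of one FP outer step (our own simplified illustration): the inner system is solved with a hand-rolled Conjugate Gradient stopped after a handful of iterations, and apply_H is an assumed callable computing the matrix-free product $H_k v = M^T M v + \lambda L(x^{(k)}) v$ without storing $H_k$:

```python
import numpy as np

def cg(apply_A, rhs, n_iter=4):
    """A few Conjugate Gradient iterations for A d = rhs (A symmetric positive
    definite), stopped far before convergence as done in the paper."""
    d = np.zeros_like(rhs)
    r = rhs.copy()               # residual rhs - A d (d = 0 initially)
    p = r.copy()
    rs = float(r.ravel() @ r.ravel())
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / float(p.ravel() @ Ap.ravel())
        d += alpha * p
        r -= alpha * Ap
        rs_new = float(r.ravel() @ r.ravel())
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def fp_outer_step(x, grad_f, apply_H, n_cg=4):
    """One lagged-diffusivity Fixed Point step (Algorithm 2, lines 3-5):
    solve H_k d = -grad f(x) approximately, then update x."""
    d = cg(apply_H, -grad_f(x), n_iter=n_cg)
    return x + d
```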

3.3. The Chambolle–Pock Algorithm

The CP algorithm (originally proposed in [36]) solves the constrained minimization problem stated in (9) by considering the equivalent unconstrained formulation:
$$\arg\min_x\; F(Kx) + G(x), \tag{16}$$
where
$$F(Kx) = \delta_{B_2(\epsilon)}(Mx - b) + \lambda\, TV(x), \quad \text{with } \lambda > 0, \tag{17}$$
the ball $B_2(\epsilon)$ is the set $B_2(\epsilon) = \{ y \in \mathbb{R}^{N_d} \ \text{such that}\ \|y\|_2 \le \epsilon \}$ and its indicator function $\delta_{B_2(\epsilon)}$ is defined as
$$\delta_{B_2(\epsilon)}(y) = \begin{cases} 0 & y \in B_2(\epsilon), \\ +\infty & y \notin B_2(\epsilon). \end{cases} \tag{18}$$
In addition, we set:
$$G(x) = \delta_\Omega(x), \tag{19}$$
i.e., the indicator function of the convex set $\Omega = \{ x \in \mathbb{R}^{N_v} \ \text{such that}\ x \ge 0 \}$, to embed the non-negativity constraint in the unconstrained formulation. Equation (17) allows us to fully exploit the linearity of the operator $K$, which combines both the X-ray projector $M$ and the discrete gradient operator, if we build $K$ as the following four-block matrix:
$$K = \begin{pmatrix} M \\ \nabla_x \\ \nabla_y \\ \nabla_z \end{pmatrix}, \tag{20}$$
where $\nabla_x$, $\nabla_y$ and $\nabla_z$ are the forward difference operators acting along the $X$, $Y$ and $Z$ axes, respectively.
Considering the convex conjugate $F^*$ of $F$, defined as $F^*(y) = \max_x \{ x^T y - F(x) \}$, and the proximal mappings of $F^*$ and $G$, i.e.,
$$\mathrm{prox}_\sigma[F^*](y) = \arg\min_{\bar{y}} \left\{ F^*(\bar{y}) + \frac{1}{2\sigma} \| y - \bar{y} \|_2^2 \right\}, \qquad \mathrm{prox}_\tau[G](x) = \arg\min_{\bar{x}} \left\{ G(\bar{x}) + \frac{1}{2\tau} \| x - \bar{x} \|_2^2 \right\}, \tag{21}$$
the $k$th CP iteration can be summarized in the following three steps:
  • compute $y^{(k+1)}$ as $\mathrm{prox}_\sigma[F^*]\big(y^{(k)} + \sigma K \bar{x}^{(k)}\big)$;
  • compute $x^{(k+1)}$ as $\mathrm{prox}_\tau[G]\big(x^{(k)} - \tau K^T y^{(k+1)}\big)$;
  • define $\bar{x}^{(k+1)}$ with an extrapolation step: $\bar{x}^{(k+1)} = x^{(k+1)} + \theta (x^{(k+1)} - x^{(k)})$, with $\theta > 0$.
We denote by $(\nabla_x; \nabla_y; \nabla_z)$ the matrix of size $3N_v \times N_v$ stacking the three difference operators, and we use Matlab-like notation to indicate element-by-element matrix multiplication and division. In particular, the proximal mapping $\mathrm{prox}_\sigma[F^*]$ can be computed as the sum of two independent blocks, as in lines 5–6 and 7–9 of Algorithm 3; its detailed derivation can be found in [37] for 2D imaging applications, and we have extended the method here to 3D imaging. For the sake of clarity, we remark that the variables $w^{(k)}$, $\bar{w}^{(k)}$ and $\bar{\bar{w}}^{(k)}$ in lines 7–9 are stored as $3N_v$-dimensional vectors $w = (w_1, w_2, w_3)$, with $w_i \in \mathbb{R}^{N_v}$ for $i = 1, 2, 3$, whereas $|\bar{w}^{(k)}|$ is a vector of size $N_v$ with components
$$|\bar{w}^{(k)}|_j = \sqrt{ \big(\bar{w}_1^{(k)}\big)_j^2 + \big(\bar{w}_2^{(k)}\big)_j^2 + \big(\bar{w}_3^{(k)}\big)_j^2 } \qquad \text{for all } j = 1, \ldots, N_v.$$
Algorithm 3 Chambolle–Pock algorithm (CP)
Input: $M$, $b$, $\epsilon$, $\lambda$, $maxiter$
1: Compute $\Gamma$ as an approximation of $\|K\|_2$
2: Initialize: $\tau = \sigma = \frac{1}{\Gamma} > 0$, $\theta \in [0, 1]$
3: Initialize: $x^{(0)} \in \mathbb{R}^{N_v}$ ($x^{(0)} \ge 0$), $\bar{x}^{(0)} \in \mathbb{R}^{N_v}$, $y^{(0)} \in \mathbb{R}^{N_d}$ and $w^{(0)} \in \mathbb{R}^{3 N_v}$ as zero vectors
4: for $k = 0$ to $maxiter - 1$ do
5:    $\bar{y}^{(k)} = y^{(k)} + \sigma (M \bar{x}^{(k)} - b)$
6:    $y^{(k+1)} = \max(\|\bar{y}^{(k)}\|_2 - \sigma \epsilon,\, 0) \cdot \bar{y}^{(k)} / \|\bar{y}^{(k)}\|_2$
7:    $\bar{w}^{(k)} = w^{(k)} + \sigma (\nabla_x; \nabla_y; \nabla_z)\, \bar{x}^{(k)}$
8:    $\bar{\bar{w}}^{(k)} = (|\bar{w}^{(k)}|; |\bar{w}^{(k)}|; |\bar{w}^{(k)}|)$
9:    $w^{(k+1)} = \bar{w}^{(k)} \,.\!*\, \big(\lambda \,./ \max(\lambda, \bar{\bar{w}}^{(k)})\big)$
10:   $x^{(k+1)} = x^{(k)} - \tau \big(M^T y^{(k+1)} + (\nabla_x; \nabla_y; \nabla_z)^T w^{(k+1)}\big)$
11:   $x^{(k+1)} = P_+(x^{(k+1)})$
12:   $\bar{x}^{(k+1)} = x^{(k+1)} + \theta (x^{(k+1)} - x^{(k)})$
Output: $x^{(k+1)}$
The proximal mapping of $G$ is defined as:
$$\mathrm{prox}_\tau[G](x) = \arg\min_{\bar{x}} \left\{ \delta_\Omega(\bar{x}) + \frac{1}{2\tau} \| x - \bar{x} \|_2^2 \right\} = \arg\min_{\bar{x} \in \Omega} \frac{1}{2\tau} \| x - \bar{x} \|_2^2 = P_+(x);$$
hence, it is exactly the projection $P_+(x)$ of $x$ onto the feasible set $\Omega$ (line 11 of Algorithm 3). The extrapolated iterate $\bar{x}^{(k+1)}$ is then computed as in line 12 of Algorithm 3. The convergence of the algorithm is demonstrated in [36].
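For illustration, one CP iteration (lines 5–12 of Algorithm 3) can be sketched in Python as follows; M, Mt, grad3 and div3 are assumed matrix-free callables for the projector, its transpose, the stacked forward differences and their transpose (our own naming), and the sketch is a simplified transcription rather than the authors' implementation:

```python
import numpy as np

def cp_iteration(x, x_bar, y, w, M, Mt, grad3, div3, b,
                 sigma, tau, lam, eps, theta=1.0):
    """One Chambolle-Pock iteration (lines 5-12 of Algorithm 3).
    grad3 returns an array of shape (3,) + x.shape (the three forward
    differences); div3 applies its transpose and returns an x-shaped array."""
    # Dual update for the data term: shrink toward the epsilon-ball (lines 5-6)
    y_bar = y + sigma * (M(x_bar) - b)
    norm_y = np.linalg.norm(y_bar)
    y_new = max(norm_y - sigma * eps, 0.0) * y_bar / max(norm_y, 1e-12)
    # Dual update for the TV term (lines 7-9)
    w_bar = w + sigma * grad3(x_bar)
    mag = np.sqrt(np.sum(w_bar ** 2, axis=0))        # |w_bar| per voxel
    w_new = w_bar * (lam / np.maximum(lam, mag))     # broadcast over the 3 components
    # Primal update and projection onto x >= 0 (lines 10-11)
    x_new = np.maximum(x - tau * (Mt(y_new) + div3(w_new)), 0.0)
    # Extrapolation (line 12)
    x_bar_new = x_new + theta * (x_new - x)
    return x_new, x_bar_new, y_new, w_new
```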
We finally observe that the algorithm needs the value $\Gamma$ (line 1 of Algorithm 3): to estimate the matrix 2-norm $\Gamma \approx \|K\|_2 = \sqrt{\rho(K^T K)}$ (where $\rho$ denotes the spectral radius of a matrix), we perform two iterations of the power method for the computation of the largest eigenvalue in modulus [38].
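A sketch of this norm estimate, assuming matrix-free callables K and Kt for $K$ and $K^T$ (hypothetical names):

```python
import numpy as np

def estimate_operator_norm(K, Kt, x_shape, n_iter=2, seed=0):
    """Estimate ||K||_2 = sqrt(rho(K^T K)) with a few power-method iterations
    on K^T K, as done for Gamma in line 1 of Algorithm 3."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x_shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        u = Kt(K(v))                 # apply K^T K
        lam = np.linalg.norm(u)      # estimate of the largest eigenvalue
        v = u / max(lam, 1e-12)
    return np.sqrt(lam)
```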

3.4. User-Independent Choice of the Regularization Parameter

In model-based optimization approaches, the choice of the regularization parameter λ plays a key role in the quality of the reconstruction, and it represents a crucial challenge in a clinical setting, where a trial-and-error approach is not doable for each reconstruction. Hence, we propose to reduce the regularization weight along the iterations, by choosing the λ values with a decreasing updating rule. Interestingly, state-of-the-art studies have already proposed semi-automatic rules for the selection of a decreasing sequence $\{\lambda_k\}_k$, $k = 1, \ldots$, of regularization parameters defining a sequence of minimization problems, whose solutions converge to a good reconstructed image. See [39] for more details and the convergence proof.
We propose the following fully automatic strategy to compute a decreasing sequence $\{\lambda_k\}_k$. At the beginning of our algorithm, we leave out the regularization by setting the first parameter $\lambda_0 = 0$: we are in fact interested in a very good data fitting, to recover as many image features as possible. Next, the starting value $\lambda_1$ is set to balance the residual norm and the amount of TV of the first iterate. Afterward, we propose to scale $\lambda_1$ by a constant factor $1/k$ at the $k$th iteration, since we need a very simple and computationally cheap rule that reduces the regularization weight gradually. The resulting strategy is summarized in the following scheme (a short code sketch follows the list) and it can be introduced in each of the previously considered algorithms.
  • Set $\lambda_0 = 0$ to initialize the algorithm and run the first iteration (labeled with $k = 0$) to compute $x^{(1)}$;
  • Set $\lambda_1 = \dfrac{LS(x^{(1)})}{TV(x^{(1)})}$ and use it to compute $x^{(2)}$;
  • For each $k \ge 2$, set
$$\lambda_k = \frac{1}{k}\, \lambda_1 \tag{23}$$
    and use it to compute $x^{(k+1)}$.
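The schedule can be sketched as a small helper (our own illustration), taking the LS and TV values of the first iterate:

```python
def regularization_weight(k, ls_x1, tv_x1):
    """Fully automatic decreasing schedule for lambda (Eq. (23)).
    k == 0 : no regularization (pure data fitting);
    k == 1 : balance the residual norm and the TV of the first iterate;
    k >= 2 : decay the initial weight as lambda_1 / k."""
    if k == 0:
        return 0.0
    lam1 = ls_x1 / tv_x1          # LS(x^(1)) / TV(x^(1))
    return lam1 if k == 1 else lam1 / k
```

Inside any of the three solvers, the weight for iteration k would simply be recomputed with this helper before evaluating the objective and its gradient.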

3.5. The Projection Matrix Algorithm

In addition to the choice of the model parameters and of the solver, a key point in the optimization approach is the numerical modeling, through a matrix, of the geometric projection process, schematically displayed from a frontal view in Figure 1.
The coefficient matrix M of the linear system (2) is commonly called the projection operator in tomography, since it represents the action of the tomographic system in projecting an object onto the detector, whereas the matrix modeling the backprojection of the tomographic data onto a volume is called the backprojection operator. In the proposed optimization algorithms, the backprojection coincides with the transpose matrix $M^T$.
Different algorithms have been proposed in the literature for the computation of the matrix M. We have adopted the Distance Driven (DD) approach, which accurately models the discretization of the Lambert–Beer law (1) for cone-beam projections [40]. In DD, the matrix M, of size $(N_p \cdot N_\theta) \times N_v$, is composed of $N_\theta$ submatrices $M^\theta$ of size $N_p \times N_v$. Each element $M_{i,j}^\theta$ represents the contribution of the $j$th voxel (for $j = 1, \ldots, N_v$) to the projection onto the $i$th detector pixel (for $i = 1, \ldots, N_p$), for a projection angle θ. The images in Figure 2 help in understanding the DD procedure. In Figure 2a, for a scanning angle, we consider the X-ray cone-beam projecting onto the $i$th blue pixel and intersecting the voxels with bold contours (in the magenta colored area). Only these voxels contribute to the value of the projection in the considered pixel. In Figure 2b, we highlight the $i$th cell of the detector (the blue area) and its backward footprint on a plane parallel to the detector (the magenta area). The ratio between the magenta area inside the $j$th voxel and the whole magenta extension is proportional to the value $M_{i,j}^\theta$ of the matrix. For all the voxels $j$ not contributing to the $i$th projection, the corresponding matrix element $M_{i,j}^\theta = 0$; hence, M is extremely sparse. However, owing to its very large size, the number of nonzero elements is still huge, so in real applications M cannot be stored and must be recomputed whenever a matrix-vector product is needed.
We finally remark that we have modified the general approach presented in [40] by efficiently exploiting the characteristics of our specific mammographic setting. Indeed, since the DBT detector is a stationary flat panel parallel to the compression plane of the breast, the footprints can be directly projected onto the detector plane, thus avoiding the use of an intermediate projection plane and further computational costs.
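As a highly simplified, one-dimensional illustration of the overlap weighting idea behind DD (our own sketch; the actual algorithm projects 2D voxel and detector-cell footprints in cone-beam geometry), the weight of a voxel is the fraction of the detector-cell footprint covered by the projected voxel footprint:

```python
def overlap_weight(voxel_lo, voxel_hi, cell_lo, cell_hi):
    """1D Distance-Driven-style weight: fraction of the detector cell
    [cell_lo, cell_hi] covered by the projected voxel footprint
    [voxel_lo, voxel_hi]. Zero when the footprints do not overlap."""
    overlap = min(voxel_hi, cell_hi) - max(voxel_lo, cell_lo)
    return max(overlap, 0.0) / (cell_hi - cell_lo)
```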

4. Materials

4.1. DBT System Configuration

Our tests are performed on the Giotto Class digital system by the Italian company I.M.S. Giotto Spa, Bologna [41]. The source executes $N_\theta = 11$ scans from equally spaced angles within a range of approximately 30 degrees; in its highest vertical position, the source is about 70 cm above the detector. The stationary digital detector has a sensitive area of 24 cm × 30 cm and a square pixel pitch of 0.085 mm; the reconstructed voxel dimensions along the three Cartesian axes are $\Delta x = \Delta y = 0.090$ mm and $\Delta z = 1$ mm, respectively.
The system uses a polychromatic beam with energies in a narrow range around 20 keV to avoid photon scattering. As is common in CT reconstruction algorithms, we approximate the polychromatic beam with a monochromatic one.

4.2. Data Sets

We consider two data sets in our experiments: a breast 3D phantom and a clinical acquisition from a human subject. Both volumes contain the objects of interest for breast cancer detection, i.e., small high-contrast microcalcifications and larger but lower-contrast masses.
The phantom is the model 020 BR3D breast imaging phantom, produced by the CIRS Tissue Simulation and Phantom company [42]. It is characterized by a heterogeneous background, where adipose-like and gland-like tissues are mixed in an approximately 50/50 ratio, and it is made of six slabs that can be arranged to create multiple anatomical backgrounds. Each slab has a semicircular shape and measures 10 cm × 18 cm. Inside one of them, there are acrylic spheres simulating breast masses (MSs), 1 cm long fibers and several clusters of calcium carbonate specks simulating microcalcifications (MCs). Table 1 reports the diameters of all the MSs and of the spheres in each MC cluster. In particular, we reconstruct a volume of 50 slices of 11.4 cm × 21 cm and we analyze, in the reconstructed images, objects with different diameters, such as the microcalcifications in clusters 3, 5 and 6 (with 230, 165 and 130 μm diameters, respectively) and the second and fourth masses (with diameters of 4.7 and 3.1 mm, respectively). All these objects lie on the same slice.
As the human DBT data set, we have chosen a case containing microcalcifications and both circular and spiculated masses. The clinical volume consists of 55 slices of 10.5 cm × 20.7 cm.

4.3. Measure and Graphics of Merits

In order to quantitatively evaluate the reconstructed objects of interest in the volumes, we compute two widely used measures of merit: the Contrast-to-Noise Ratio (CNR) and the Full-Width at Half Maximum (FWHM).
The CNR measure on a mass is calculated as:
$$CNR_{MS} = \frac{\mu_{MS} - \mu_{BG}}{\sigma_{MS} + \sigma_{BG}}, \tag{24}$$
where $\mu$ and $\sigma$ denote the mean and standard deviation computed on the reconstructed volume, in small regions located inside the mass (MS) or in the background (BG). Similarly, we define the CNR measure on a microcalcification as:
$$CNR_{MC} = \frac{M_{MC} - \mu_{BG}}{\sigma_{BG}}, \tag{25}$$
where $M_{MC}$ is the maximum intensity inside the considered microcalcification (MC). Higher values of the CNR indices reflect a better detection of an object from the background.
To compute the FWHM parameter, we consider the transverse slice (parallel to the $XY$ plane) where the microcalcification lies and we extract the Plane Profile (PP) along the $Y$ axis. The FWHM index is computed as:
$$FWHM = 2\sqrt{2 \ln(2)}\, d, \tag{26}$$
where $d$ is the standard deviation of the Gaussian curve fitting the PP. We remark that
$$w = FWHM \cdot \Delta y \tag{27}$$
approximates the width of the examined microcalcification. The Plane Profiles are also useful tools to evaluate the reconstruction accuracy on the transverse plane.
To assess the solver effectiveness along the Z direction, which is the most challenging task in DBT imaging, we plot the Artifact Spread Function (ASF) vector, whose components are computed on a microcalcification as:
$$ASF(z) = \frac{|\mu_{MC}(z) - \mu_{BG}(z)|}{|\mu_{MC}(\bar{z}) - \mu_{BG}(\bar{z})|}, \qquad z = 1, \ldots, N_z,$$
where $\mu(z)$ denotes the mean of the reconstructed values inside a circular region of three-pixel diameter, inside the considered MC and in the background, $\bar{z}$ is the slice where the object is in focus and $N_z$ is the total number of slices. We compute the ASF for the masses in a similar way.
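As an illustration, the measures of merit could be computed from a reconstructed slice as in the following sketch (our own simplified code: region selection is left to boolean masks and the Gaussian fit of the profile is replaced by an intensity-weighted standard deviation):

```python
import numpy as np

def cnr_mass(slice_img, mask_ms, mask_bg):
    """Contrast-to-Noise Ratio for a mass, denominator as in Eq. (24),
    from boolean masks selecting the mass (MS) and background (BG) regions."""
    mu_ms, mu_bg = slice_img[mask_ms].mean(), slice_img[mask_bg].mean()
    s_ms, s_bg = slice_img[mask_ms].std(), slice_img[mask_bg].std()
    return (mu_ms - mu_bg) / (s_ms + s_bg)

def cnr_microcalc(slice_img, mask_mc, mask_bg):
    """CNR for a microcalcification (Eq. (25)): peak value inside the MC
    against background mean and standard deviation."""
    return (slice_img[mask_mc].max() - slice_img[mask_bg].mean()) / slice_img[mask_bg].std()

def fwhm_from_profile(profile, dy):
    """FWHM of a plane profile (Eqs. (26)-(27)); the standard deviation of the
    fitted Gaussian is approximated by the intensity-weighted spread."""
    y = np.arange(len(profile), dtype=float)
    p = profile - profile.min()
    p = p / p.sum()
    mean = (y * p).sum()
    d = np.sqrt(((y - mean) ** 2 * p).sum())
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * d
    return fwhm, fwhm * dy        # FWHM in pixel units and physical width w
```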

5. Numerical Results and Discussion

In this section, we present the results obtained with the proposed optimization approach and the accurate solvers described in Section 3. First, we compare the BR3D phantom reconstructions produced by the SGP, FP and CP solvers within a similar computational time. Then, in order to analyze the best achievable image quality, we run the SGP up to convergence both on the phantom and on the clinical data set. Finally, we test the automatic rule proposed in Section 3.4 to decrease the λ values along the iterations.
When a constant value of λ is required, we fixed it by trial and error. The sequence $\{\rho_k\}_k$ for the SGP scheme has been defined as:
$$\rho_k = \sqrt{1 + \frac{10^{15}}{k^{2.1}}},$$
as suggested in [31].

5.1. Methods Comparison for Early Reconstructions

The aim of this paragraph is to show the behavior of the proposed solvers at different stages of their execution. We fixed 5 and 15 iterations: the workload of 5 iterations is compatible with the execution of a reconstruction on commercial hardware in a clinical setting, whereas in 15 iterations we obtain fairly accurate results with all three methods, reflecting that they are sufficiently close to the computed convergence solution. From a computational perspective, the SGP and CP iterations are almost equally expensive; hence we stopped them exactly at 5 or 15 iterations. In the special case of the FP solver, the number of allowed iterations corresponds to the sum of the outer and CG iterations: since we always perform 4 CG iterations, the FP execution is stopped after one or three outer iterations (loop on k in Algorithm 2), respectively.
The value of β in (5) has been fixed as β = 0.001 . Since the three considered algorithms solve different optimization problems, the three λ parameters have been chosen independently for each method to achieve the best reconstruction in 5 iterations: we have set λ = 0.005 for both SGP and CP methods and λ = 0.001 for the FP algorithm.
In the following analysis, we focus on the reconstruction of MC cluster number 3 in the BR3D phantom. For each solver, Figure 3 reports a 125 × 125 pixel crop taken from the fifth slice of the reconstructions at 5 and 15 iterations. The images are displayed with an automatic gray-level contrast enhancement computed on the considered region. In Figure 4, we compare the PP and ASF curves taken on one MC.
Looking at Figure 3, we observe that the detection of the MC cluster is comparable at equal iterations, whereas the background appears slightly different for the three methods: for example, the CP reconstruction at 15 iterations looks smoother and more blurred. Focusing on the objects of interest, we notice that at 5 iterations the MCs are perfectly visible; moreover, at 15 iterations the MC edges are sharper, as confirmed by plots (a) and (c) in Figure 4. From plots (b) and (d) of Figure 4, we observe that in all three reconstructions the object is placed in the correct slice and is not diffused into the adjacent layers. Hence, we can conclude that the proposed model-based optimization framework yields good quality images in early reconstructions, regardless of the considered model formulation.
We highlight that the above results have been computed by a C-language scalar implementation of each solver, with which it is not possible to reconstruct the whole volume of the phantom because of the excessive memory demand. Working on one HPE ProLiant DL560 Gen10 node with 4 Intel(R) Xeon(R) Gold 6140 CPUs @ 2.30 GHz and 512 GB of RAM, we considered cropped projections (corresponding to approximately $4.5 \cdot 10^6$ measurements) and we recovered a partial volume ($N_v \approx 5 \cdot 10^6$ voxels). The execution times of the SGP, FP and CP functions were 262, 272 and 241 s, respectively, to perform 15 iterations. We remark that the reported time for the CP solver does not include the execution of step 1, since the approximation of $\Gamma$ has been computed externally. As already mentioned, a parallel implementation is required to reconstruct a whole phantom volume or a real breast. The results reported in the following have been obtained by a CUDA-based SGP software, exploiting one Nvidia Tesla V100 GPU board.

5.2. SGP Algorithm Insights

In the following, we take the SGP as the representative solver of the proposed optimization framework. We explore the performance of the model-based approach on several different objects of the BR3D phantom (such as microcalcifications of very small diameter and masses with a low contrast against the background tissue) and we analyze the quality of the reconstructions also beyond 15 iterations, i.e., approaching convergence.
In particular, we run the SGP solver on the BR3D phantom until the stopping condition
$$\frac{|f(x^{(k)}) - f(x^{(k-1)})|}{f(x^{(k)})} < 10^{-6} \tag{29}$$
is satisfied, which occurs after 44 iterations. In Figure 5, we plot the objective function values vs. the number of iterations: we observe that the objective function decreases fast in the first 5 iterations, whereas it exhibits a very flat trend from 10 iterations onward, as confirmed by the values labeled in red. We have verified, in fact, that the reconstructed images are visually almost indistinguishable after 30 iterations.
In Figure 6, we show the reconstructions of the 165 μm MCs of cluster 5 and of the 4.7 mm mass (MS 2), obtained by the SGP algorithm after 5, 15 and 30 iterations. In Figure 7, we report the corresponding PP and ASF plots. Figure 6a shows that the MC of cluster 5 is clearly visible after only 5 iterations, and the PP plot of Figure 7a confirms that it gets more and more enhanced with respect to the background. The ASF plot in Figure 7b shows an improvement in the object detection along the Z direction. As visible in Figure 6 and from the PP plot of Figure 7c, MS 2 is out of focus at 5 iterations, but its contours become better and better defined as the algorithm approaches convergence. The previous plots confirm that the proposed model with TV regularization is more effective in recovering high-contrast objects, such as microcalcifications, than low-absorbing structures, such as masses.
In Table 2, we report the values of the CNR parameter for the examined reconstructed microcalcifications and masses. In particular, recalling the CNR definition (24) for the masses, the background area is a circle with a diameter of 80 voxels, whereas we considered circles of diameter 40 and 25 voxels inside masses 2 and 4, respectively. When evaluating the CNR index on an MC with Equation (25), we take as background area a circle of 20 voxels diameter and we compute $M_{MC}$ on a small circle of diameter 5 voxels containing the microcalcification.
Table 3 shows the values of the FWHM index defined in (26), computed on one of the reconstructed microcalcifications in each cluster. The corresponding MC widths w, computed in micrometers as in (27), are reported for comparison with the actual diameters shown in Table 1. Both tables demonstrate that we obtain improved and more accurate reconstructions as the SGP approaches convergence: the increasing CNR indices reflect good denoising effects, whereas the object enhancement is confirmed by the decreasing FWHM values. We remark that the MCs of cluster 6 are not discernible from the background at only 5 iterations (the FWHM is not measurable on the sixth MC cluster), because they are 130 μm wide and should fill only about two voxels. However, they are well recovered with more iterations, with a good approximation of their real size.

5.3. Experiments on a Human Data Set

We now illustrate the results obtained by reconstructing a real breast volume with the SGP solver at different iterative stages. In Figure 8a–c, we report a crop of a reconstructed slice, where we can distinguish objects of interest, i.e., a spherical mass and a small microcalcification. The plots in Figure 8d,e represent the PP calculated on the mass and on the microcalcification, respectively. The mass is well distinguishable since the earliest reconstruction and its shape and gray-level intensity do not change remarkably; however, the regular, thin blue profile in Figure 8d points out the denoising effect of the TV function in the last iterations. In addition, the microcalcification is detected in a few iterations, even if a longer SGP execution enhances the contrast of the object with respect to the background. Table 4, reporting the CNR and FWHM values computed on the objects in Figure 8, gives more insight into the quality of the reconstructions. In particular, it confirms that the noise progressively decreases and the microcalcification gets more and more defined, from 5 to 30 iterations.
In Figure 9, we report the reconstruction of two spiculated masses, which can occur in clinical cases. For such breast objects, the previous measures of merit are not applicable. However, we can observe that both are well recognizable in the earliest reconstruction and that their edges become sharper with increasing iterations.

5.4. Experiments with a Variable Regularization Parameter

In all the above experiments, we set a constant value of the regularization parameter along the iterations, to let the solvers perform at their best on the model determined by the chosen λ. In this paragraph, we show the results achieved with the automatic rule (23) for the choice of a decreasing sequence $\{\lambda_k\}_k$, applied to the SGP solver to reconstruct the BR3D phantom.
Figure 10 plots the sequence $\{\lambda_k\}_k$ as a blue line, while the red line represents the constant λ value used in the SGP implementation in the previous experiments. We observe that the proposed strategy computes values greater than the heuristically fixed one (λ = 0.005) up to the fifth iteration. In Figure 11, we compare the PP of one microcalcification from cluster 3, reconstructed at 5 and 15 iterations using both a fixed value and the proposed strategy for the regularization parameter. We can infer from Figure 11a that the larger TV weights in the first iterations produce more accurate results. However, in advanced reconstructions the differences are negligible. We conclude that the proposed automatic strategy is very effective.

6. Conclusions

In this paper, we have presented three model-based problem statements within a unique optimization framework for DBT image reconstruction. Each minimization problem has been solved by a convergent and efficient algorithm, suitable for very large-size image reconstruction. We have also proposed a user-independent rule for selecting suitable values of the regularization parameter.
The results obtained with the three solvers are encouraging. In early reconstructions, objects of interest larger than 150 μm are visible and correctly located in the volume, whereas the object detection quality improves and the noise drastically decreases if more iterations are allowed. When extending the computation from 5 to 30 iterations, the increase of the CNR value ranges from +150% to +280%. Finally, we have shown that varying the regularization parameter along the iterations produces better results, especially in the early stage of the algorithm execution, when compared to the use of a fixed, heuristically chosen value.
Since the three considered solvers produce comparably high-quality reconstructions, we conclude that the proposed model-based approach can be successfully used to detect the objects of greatest interest for an early diagnosis of breast tumors.

Author Contributions

Conceptualization, E.L.P.; methodology, E.L.P. and E.M.; software, E.M.; data curation, E.M.; validation, E.L.P. and E.M.; writing, original draft preparation, E.L.P. and E.M.; writing, review and editing, E.L.P. and E.M.; funding acquisition, E.L.P. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the Gruppo Nazionale per il Calcolo Scientifico, 2020 grant entitled "Ottimizzazione per l'apprendimento automatico e apprendimento automatico per l'ottimizzazione".

Institutional Review Board Statement

Ethical review and approval were waived for this study, due to an external agreement with IMS Giotto covering this research study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

3rd Party Data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, T.; Stewart, A.; Stanton, M.; McCauley, T.; Phillips, W.; Kopans, D.B.; Moore, R.H.; Eberhard, J.W.; Opsahl-Ong, B.; Niklason, L.; et al. Tomographic mammography using a limited number of low-dose cone-beam projection images. Med. Phys. 2003, 30, 365–380.
  2. Andersson, I.; Ikeda, D.M.; Zackrisson, S.; Ruschin, M.; Svahn, T.; Timberg, P.; Tingberg, A. Breast tomosynthesis and digital mammography: A comparison of breast cancer visibility and BIRADS classification in a population of cancers with subtle mammographic findings. Eur. Radiol. 2008, 18, 2817–2825.
  3. Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical cone-beam algorithm. J. Opt. Soc. Am. A 1984, 1, 612–619.
  4. Wu, T.; Moore, R.H.; Rafferty, E.A.; Kopans, D.B. A comparison of reconstruction algorithms for breast tomosynthesis. Med. Phys. 2004, 31, 2636.
  5. Bliznakova, K.; Bliznakov, Z.; Buliev, I. Comparison of algorithms for out-of-plane artifacts removal in digital tomosynthesis reconstructions. Comput. Methods Progr. Biomed. 2012, 107, 75–83.
  6. Reiser, I.; Bian, J.; Nishikawa, R.M.; Sidky, E.Y.; Pan, X. Comparison of reconstruction algorithms for digital breast tomosynthesis. arXiv 2009, arXiv:0908.2610.
  7. Beister, M.; Kolditz, D.; Kalender, W.A. Iterative reconstruction methods in X-ray CT. Phys. Med. 2012, 28, 94–108.
  8. Graff, C.; Sidky, E. Compressive sensing in medical imaging. Appl. Opt. 2015, 54, C23–C44.
  9. Sidky, E.Y.; Kao, C.M.; Pan, X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J. X Ray Sci. Technol. 2009, 14, 119–139.
  10. Choi, K.; Wang, J.; Zhu, L.; Suh, T.S.; Boyd, S.P.; Xing, L. Compressed sensing based cone-beam computed tomography reconstruction with a first-order method. Med. Phys. 2010, 37, 5113–5125.
  11. Ritschl, L.; Bergner, F.; Fleischmann, C.; Kachelrieß, M. Improved total variation-based CT image reconstruction applied to clinical data. Phys. Med. Biol. 2011, 56, 1545–1561.
  12. Luo, X.; Yu, W.; Wang, C. An image reconstruction method based on total variation and wavelet tight frame for limited-angle CT. IEEE Access 2017.
  13. Liu, L. Model-based Iterative Reconstruction: A Promising Algorithm for Today's Computed Tomography Imaging. J. Med. Imaging Radiat. Sci. 2014, 45, 131–136.
  14. Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  15. Sidky, E.Y.; Kao, C.M.; Pan, X. Effect of the data constraint on few-view, fan-beam CT image reconstruction by TV minimization. In Proceedings of the 2006 IEEE Nuclear Science Symposium Conference Record, San Diego, CA, USA, 29 October–4 November 2006; pp. 2296–2298.
  16. Sidky, E.Y.; Pan, X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys. Med. Biol. 2008, 53, 4777–4807.
  17. Sidky, E.Y.; Reiser, I.S.; Nishikawa, R.; Pan, X. Practical iterative image reconstruction in digital breast tomosynthesis by non-convex TpV optimization. In Proceedings of the SPIE Medical Imaging, San Diego, CA, USA, 16–21 February 2008; pp. 2–7.
  18. Sidky, E.Y.; Pan, X.; Reiser, I.S.; Nishikawa, R.M.; Moore, R.H.; Kopans, D.B. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms. Med. Phys. 2009, 36, 4920.
  19. Lu, Y.; Chan, H.P.; Wei, J.; Hadjiiski, L.M. Selective-diffusion regularization for enhancement of microcalcifications in digital breast tomosynthesis reconstruction. Med. Phys. 2010, 37, 6003.
  20. Chen, Z.; Jin, X.; Li, L.; Wang, G. A limited-angle CT reconstruction method based on anisotropic TV minimization. Phys. Med. Biol. 2013, 58, 2119–2141.
  21. Ertas, M.; Yildirim, I.; Kamasak, M.; Akan, A. Digital breast tomosynthesis image reconstruction using 2D and 3D total variation minimization. BioMed. Eng. OnLine 2013, 12, 112.
  22. Rose, S.; Andersen, M.S.; Sidky, E.Y.; Pan, X. Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization. Med. Phys. 2015, 42, 2690–2698.
  23. Chen, G.H.; Tang, J.; Leng, S. Prior image constrained compressed sensing (PICCS). In Photons Plus Ultrasound: Imaging and Sensing 2008: The Ninth Conference on Biomedical Thermoacoustics, Optoacoustics, and Acousto-Optics; International Society for Optics and Photonics: San Jose, CA, USA, 2008; Volume 6856, p. 685618.
  24. Ramirez-Giraldo, J.; Trzasko, J.; Leng, S.; Yu, L.; Manduca, A.; McCollough, C.H. Nonconvex prior image constrained compressed sensing (NCPICCS): Theory and simulations on perfusion CT. Med. Phys. 2011, 38, 2157–2167.
  25. Jensen, T.L.; Jørgensen, J.H.; Hansen, P.C.; Jensen, S.H. Implementation of an optimal first-order method for strongly convex total variation regularization. BIT Numer. Math. 2012, 52, 329–356.
  26. Loli Piccolomini, E.; Morotti, E. A fast TV-based iterative algorithm for digital breast tomosynthesis image reconstruction. J. Algor. Computat. Technol. 2016, 10, 277–289.
  27. Loli Piccolomini, E.; Coli, V.; Morotti, E.; Zanni, L. Reconstruction of 3D X-ray CT images from reduced sampling by a scaled gradient projection algorithm. Comput. Optim. Appl. 2018, 71, 171–191.
  28. Niinimäki, K.; Lassas, M.; Hämäläinen, K.; Kallonen, A.; Kolehmainen, V.; Niemi, E.; Siltanen, S. Multiresolution Parameter Choice Method for Total Variation Regularized Tomography. SIAM J. Imaging Sci. 2016, 9, 938–974.
  29. Epstein, C.L. Introduction to the Mathematics of Medical Imaging; SIAM: Philadelphia, PA, USA, 2007.
  30. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
  31. Bonettini, S.; Porta, F.; Ruggiero, V. A variable metric inertial method for convex optimization. SIAM J. Sci. Comput. 2016, 31, A2558–A2584.
  32. Bonettini, S.; Prato, M. New convergence results for the scaled gradient projection method. Inverse Probl. 2015, 31, 1196–1211.
  33. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824.
  34. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436.
  35. Chan, T.F.; Mulet, P. On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM J. Numer. Anal. 1999, 36, 354–367.
  36. Chambolle, A.; Pock, T. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
  37. Sidky, E.; Chartrand, R.; Boone, J.; Pan, X. Constrained TpV-minimization for enhanced exploitation of gradient sparsity: Application to CT image reconstruction. IEEE J. Transl. Eng. Health Med. 2014, 2.
  38. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2012; Volume 3.
  39. Lazzaro, D.; Piccolomini, E.L.; Zama, F. A nonconvex penalization algorithm with automatic choice of the regularization parameter in sparse imaging. Inverse Probl. 2019.
  40. De Man, B.; Basu, S. Distance-driven projection and backprojection in three dimensions. Phys. Med. Biol. 2004, 49, 2463.
  41. IMS Giotto Class. Available online: http://www.imsgiotto.com/ (accessed on 25 May 2020).
  42. Computerized Imaging Reference Systems. BR3D Breast Imaging Phantom, Model 020. Available online: https://www.cirsinc.com/products/a11/51/br3d-breast-imaging-phantom/ (accessed on 29 January 2021).
Figure 1. Scheme of the Digital Breast Tomosynthesis (DBT) reconstruction process. On the left, a draft of the frontal (coronal) section of a DBT system acquiring projection images of the breast; in the center, a chart representing the kth iteration of the algorithm computing the sequence $\{x^{(k)}\}_k$ of approximate solutions by solving the model-based minimization problem; on the right, the evaluation of reconstructed volumes by inspection of cancer objects of interest.
Figure 2. Schematic drawing representing the Distance Driven approach to compute the system matrix. (a) View on the $YZ$ plane of an X-ray projection onto a single pixel, from a fixed angle. The intersection of the X-ray beam with the volume is highlighted in magenta. (b) The magenta area represents the backward projection of the blue recording unit onto a volume slice parallel to the $XY$ plane.
Figure 3. Reconstructions of microcalcification cluster number 3 in BR3D phantom obtained with Scaled Gradient Projection (SGP), Fixed Point (FP) and Chambolle–Pock (CP) methods. In the upper row, reconstructions in 5 iterations; in the bottom row, reconstructions in 15 iterations.
Figure 4. Plots of the Plane Profile (left) and of the Artifact Spread Function (ASF) vectors (right), taken over one microcalcification of cluster number 3 in the BR3D phantom. In all the plots, the red line corresponds to the SGP method, the blue line to the FP method and the green line to the CP method.
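The ASF curves in Figure 4 quantify how the contrast of an in-focus object leaks into neighbouring slices. The paper's exact formula is not reproduced in this back matter, so the following sketch uses the definition commonly adopted in the DBT literature: contrast of the feature region against a nearby background region in slice z, normalized by the same contrast in the in-focus slice.

```python
import numpy as np

def asf(volume, feature_mask, bg_mask, z_focus):
    # volume: 3D array indexed as volume[z, :, :]; feature_mask and bg_mask are
    # 2D boolean masks selecting the object and a nearby background region.
    contrast = np.array([volume[z][feature_mask].mean() - volume[z][bg_mask].mean()
                         for z in range(volume.shape[0])])
    return np.abs(contrast) / np.abs(contrast[z_focus])
```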
Figure 5. Objective function values vs. the iteration number for the SGP execution on the phantom test. Convergence is reached after 44 iterations, when condition (29) is satisfied. The red labels mark the function values at 5, 15 and 30 iterations.
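Condition (29) is not reproduced here; a typical convergence test of this kind stops the iterations once the relative decrease of the objective function falls below a tolerance, as in the following sketch (the tolerance value and the exact form are assumptions, not the paper's criterion).

```python
def has_converged(f_prev, f_curr, tol=1e-4):
    # Stop when the objective function decreases by less than tol, in relative terms.
    return abs(f_prev - f_curr) <= tol * abs(f_prev)
```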
Figure 6. SGP results on the BR3D phantom. (a–c) Reconstructions of MC cluster number 5 obtained after 5, 15 and 30 iterations. (d–f) Reconstructions of mass number 2 obtained after 5, 15 and 30 iterations.
Figure 7. SGP results on the BR3D phantom. Plots of the Plane Profile (left) and of the ASF vectors (right), taken over one microcalcification of cluster 5 (upper row) and over mass number 2 (bottom row). In all the plots, the black line corresponds to 5 iterations, the red line to 15 iterations and the blue line to 30 iterations.
Figure 8. Results obtained after 5, 15 and 30 SGP iterations on a human breast data set. (a–c) Reconstructions of a 440 × 400 pixel region presenting both a spherical mass (pointed by the arrow) and a microcalcification (identified by the circle). (d,e) Plane profiles on the mass and on the microcalcification. In the plots, the black line corresponds to 5 iterations and the blue line to 30 iterations.
Figure 9. Results obtained after 5, 15 and 30 SGP iterations on a human breast data set. The reported 558 × 480 pixel crops present two spiculated masses.
Figure 10. Plot of the values λ k versus the iteration number k (blue line) in the SGP execution on the phantom test. The red straight line represents the constant value λ = 0.005 used in SGP for the experiments presented in the previous sections.
Figure 11. Plane Profiles on one microcalcification of cluster number 3 of the phantom, obtained with SGP using different regularization parameters, after 5 and 15 iterations. In all the plots, the red line corresponds to the fixed parameter and the blue line to the adaptive choice of λ.
Table 1. Diameters of the microcalcifications (MC) and of the masses (MS) in the BR3D phantom as reported in [42]. Measures are in micrometers (µm).
            1       2       3       4       5       6
MC        400     290     230     196     165     130
MS       6300    4700    3900    3100    2300    1800
Table 2. Values of the Contrast-to-Noise Ratio (CNR) index computed after 5, 15 and 30 SGP iterations. The CNR value computed on microcalcifications is defined as in (25), whereas the CNR value computed on the masses is defined as in (24).
                 5 it.   15 it.   30 it.
MC cluster 3     24.21   33.34    38.00
MC cluster 5     10.03   19.00    28.00
MC cluster 6      7.27   11.02    17.00
MS 2              0.82    1.07     1.66
MS 4              0.87    1.00     1.33
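The CNR values of Table 2 are computed from an object region of interest and a background region of interest. Equations (24) and (25) are not reproduced in this back matter, so the sketch below uses the common form |mu_obj − mu_bg| / sigma_bg as a stand-in; the paper's two definitions for masses and microcalcifications may differ in the noise term.

```python
import numpy as np

def cnr(image, obj_mask, bg_mask):
    # Contrast-to-Noise Ratio of the object region against the background region.
    mu_obj, mu_bg = image[obj_mask].mean(), image[bg_mask].mean()
    return np.abs(mu_obj - mu_bg) / image[bg_mask].std()
```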
Table 3. Full-Width at Half Maximum (FWHM) index (26) and w measures (27) computed on the reconstructed MCs of the BR3D phantom, after 5, 15 and 30 SGP iterations.
                          FWHM                       w (µm)
                 5 it.   15 it.   30 it.     5 it.   15 it.   30 it.
MC cluster 3     4.77    3.32     2.70       430     299      243
MC cluster 5     3.52    2.65     2.32       317     238      209
MC cluster 6     -       2.05     1.52       -       185      137
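The FWHM values of Table 3 are read off a plane profile through the reconstructed microcalcification; the sketch below measures it in pixels with linear interpolation at the half-maximum crossings. The w measure of Equation (27) is assumed here to be the FWHM rescaled to micrometers by the in-plane pixel size, which is an interpretation rather than the paper's exact formula.

```python
import numpy as np

def fwhm(profile):
    # Full-Width at Half Maximum (in pixels) of a single-peaked 1D plane profile,
    # using linear interpolation around the half-maximum level.
    p = np.asarray(profile, dtype=float)
    half = p.min() + 0.5 * (p.max() - p.min())
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    x_left = left - (p[left] - half) / (p[left] - p[left - 1]) if left > 0 else float(left)
    x_right = right + (p[right] - half) / (p[right] - p[right + 1]) if right < p.size - 1 else float(right)
    return x_right - x_left

def w_micrometers(profile, pixel_size_um):
    # Assumed interpretation of the w measure: FWHM converted to micrometers.
    return fwhm(profile) * pixel_size_um
```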
Table 4. Values of the CNR and FWHM indices computed on the mass and the microcalcification observable in Figure 8a–c.
              CNR                          FWHM
     5 it.   15 it.   30 it.      5 it.   15 it.   30 it.
MS   0.239   0.381    0.558       -       -        -
MC   8.78    16.59    16.49       8.57    7.81     7.29