Article

Optimal Noise Enhanced Signal Detection in a Unified Framework

College of Communication Engineering, Chongqing University, Chongqing 400044, China
*
Author to whom correspondence should be addressed.
Submission received: 22 February 2016 / Revised: 18 May 2016 / Accepted: 26 May 2016 / Published: 17 June 2016
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)

Abstract

In this paper, a new framework for variable detectors is formulated to solve different noise enhanced signal detection optimization problems, where six disjoint sets of detector and discrete vector pairs are defined according to two inequality constraints on the detection and false-alarm probabilities. Theorems and algorithms built on this framework are then presented to search the optimal noise enhanced solutions that maximize the relative improvements of the detection and false-alarm probabilities, respectively. Further, the optimal noise enhanced solution for the maximum overall improvement is obtained within the same framework, and the relationship among the three maxima is presented. In addition, sufficient conditions for improvability or non-improvability under the two constraints are given. Finally, numerous examples are presented to illustrate the theoretical results, and the proofs of the main theorems are given in the Appendix.


1. Introduction

Noise commonly accompanies the useful signal, and more noise in a system usually means less channel capacity and worse detectability. Therefore, a series of filters and algorithms is usually employed to remove the unwanted noise, and understanding the distribution and characteristics of noise is an essential topic in traditional signal detection theory. Nevertheless, although it may seem counterintuitive, noise can play an active role in many signal processing problems, and the performance of some nonlinear systems can be enhanced by adding noise under certain conditions [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. The phenomenon in which noise benefits a system is the so-called stochastic resonance (SR), which was first proposed by Benzi et al. [1] in 1981 when they studied the periodic recurrence of ice ages. The positive effects of noise have drawn the attention of researchers in various fields. For example, the effect of SR on the global stability of complex networks is investigated in [26], and Kohar and Sinha demonstrated in [27] how noise can make a bistable system behave as a memory device.
In signal detection problems, researchers commonly care about how to increase the output signal-to-noise ratio (SNR) [7,8,9,10,11], the mutual information [12,13], or the detection probability under a constant false-alarm rate [14,15,16,17,18,19,20], or how to decrease the Bayes risk [21,22] or the probability of error [23], by adding noise to the system input or by changing the background noise level. As presented in [8], the output SNR obtained by adding suitable noise to the input of a system can be higher than the input SNR. The results in [14] indicate that the detection probability of the sign detector can be increased by adding an appropriate amount of white Gaussian noise.
Depending on the detection metric, the SR phenomenon in hypothesis testing or detection problems is usually investigated under the Bayesian, Minimax or Neyman–Pearson criteria. In [23], the additive noise that optimizes the performance of a suboptimal detector is explored under the Bayesian criterion with uniform cost assignment. It is demonstrated that the optimal noise to minimize the probability of error is a constant, and that the same improvement can also be achieved by shifting the decision region without adding any noise. The probability distribution of the optimal additive noise to minimize the Bayes risk is investigated in [21] under the restricted Bayesian criterion, which can easily be extended to the Bayesian and Minimax criteria. For an M-ary hypothesis testing problem, the optimal noise is a randomization of at most M mass points.
In [15], a mathematical framework is established to search the optimal noise that maximizes the detection probability under the Neyman–Pearson criterion. This leads to the significant conclusion that the optimal noise is a randomization of at most two discrete vectors. In addition, sufficient conditions under which the detection probability can or cannot be increased are given. However, as pointed out by Patel [18], the proof of the optimal noise presented in [15] contains a minor flaw. Moreover, the same noise enhanced problem for a fixed detector is studied in [18] through a different mathematical framework, where the optimal noise to maximize the detection probability is again shown to be a random signal consisting of no more than two discrete points with corresponding probabilities. The authors of [16] studied the noise enhanced detection performance for variable detectors under the Neyman–Pearson criterion based on the results in [15]; the optimal noise enhanced solution to maximize the detection probability is a randomization of no more than two detector and constant vector pairs.
From the comparison and analysis above, it is clear that most researchers have focused on maximizing the detection probability via additive noise, while few studies address the minimization of the false-alarm probability. We cannot exclude the possibility that the false-alarm probability can be decreased by adding noise without deteriorating the original detectability, especially when the detection probability cannot be increased by adding any noise. For example, a noise enhanced model that can increase the detection probability and decrease the false-alarm probability simultaneously by adding noise was first formulated in [25] for a fixed detector. The model is solved there by a convex combination of the optimal noises for two limiting cases, i.e., the minimization of the false-alarm probability and the maximization of the detection probability. Nevertheless, such a convex combination is usually not the solution that maximizes the overall improvement. In this paper, we explore the optimal solution that maximizes the overall improvement of the detection and false-alarm probabilities directly, instead of the convex combination. In practical applications, although the structure of the detector often cannot be replaced, some of its parameters can be varied to obtain better performance. Moreover, the noise enhanced detection problem for a fixed detector can generally be recovered by simplifying the results for variable detectors. Thus, it is worthwhile to discuss noise enhanced detection problems under the premise of variable detectors.
In order to obtain the optimal noise enhanced solution that maximizes the overall improvement of the detection and false-alarm probabilities for variable detectors under two inequality constraints, we formulate a new framework that defines six disjoint sets of detector and discrete vector pairs based on the signs of the relative improvements of the detection and false-alarm probabilities. We then derive the optimal noise enhanced solutions for the maximum detection probability and the minimum false-alarm probability and give the corresponding algorithms within this framework. Further, the optimal noise enhanced solution for the maximum overall improvement of the detection and false-alarm probabilities is proved to be a randomization of at most two detector and discrete vector pairs from two different sets, and the relationship among the three maxima is presented. In addition, the theoretical results for the case where randomization between detectors is allowed can be applied directly to the case where it is not: the optimization problem for variable detectors then reduces to choosing a fixed detector and searching for the optimal additive noise. Actually, the maximization of the detection probability in this paper is equivalent to the noise enhanced detection problem for variable detectors studied in [16], which also requires all the detection and false-alarm probabilities obtained by every detector and discrete vector pair. The new framework subdivides the single set used in [16] into six subsets. Based on the definitions of the six sets, it is clear whether each detector and discrete vector pair, as a component of the additive noise, is available, partially available, or unavailable for meeting the two constraints. The available and partially available pairs can then be utilized to construct optimal noise enhanced solutions that satisfy different requirements. In other words, the division into six sets provides the foundation for maximizing the relative improvements of the detection and false-alarm probabilities, as well as their sum.
The main contributions of this paper can be summarized as follows:
  • Formulation of a new framework in which six disjoint sets of detector and discrete vector pairs are defined according to two inequality constraints.
  • Algorithms, based on the new framework, for the noise enhanced solutions that maximize the relative improvements of the detection and false-alarm probabilities.
  • A noise enhanced solution for the maximum overall improvement, provided for the first time based on the new framework, together with the relationship among the three maxima.
  • Determination of sufficient conditions for improvability and non-improvability under the two constraints.
The remainder of this paper is organized as follows: in Section 2, three optimization problems for a binary hypothesis testing problem with a variable detector are proposed and the six disjoint sets of detector and discrete vector pairs are defined. In Section 3, the optimal noise enhanced solutions for the three optimization problems are discussed for the cases where randomization between detectors is and is not allowed, and the corresponding algorithms are given. Numerical results are presented in Section 4 and conclusions are provided in Section 5.

2. Problem Formulation

Consider a binary hypothesis testing problem as follows:
$$ H_i: p_i(x), \quad i = 0, 1 \qquad (1) $$
where $p_i(x)$ is the probability density function (pdf) of the observation $x$ under $H_i$, $i = 0, 1$, and $x \in \mathbb{R}^K$. For any $x$, the probability of choosing $H_1$ is characterized by $\phi(x)$ with $0 \le \phi(x) \le 1$. Generally, $\phi(x)$ is treated as the decision function of the detector. In order to investigate the possible enhancement of detectability, a new noise modified observation $y$ is obtained by adding an independent noise $v$ to the original observation $x$, i.e., $y = x + v$. Correspondingly, the pdf of $y$ under $H_i$ can be expressed as the convolution of $p_i(\cdot)$ and $p_v(\cdot)$:
$$ p_y(y; H_i) = (p_i * p_v)(y) = \int_{\mathbb{R}^K} p_i(y - v)\, p_v(v)\, dv \qquad (2) $$
where $*$ denotes convolution.
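For a scalar observation ($K = 1$), the convolution in Equation (2) can be approximated numerically on a grid; the sketch below is a minimal Python illustration with a hypothetical Gaussian $p_i$ and a uniform noise pdf (neither is taken from the paper).

```python
import numpy as np

# Hypothetical example for K = 1: p_i is a standard Gaussian pdf and the
# additive noise v is uniform on [-1, 1]; p_y(y; H_i) = (p_i * p_v)(y).
dx = 0.01
x = np.arange(-10.0, 10.0, dx)

p_i = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # pdf of the observation under H_i
p_v = np.where(np.abs(x) <= 1.0, 0.5, 0.0)     # pdf of the additive noise v

# Discrete approximation of the convolution integral in Equation (2).
p_y = np.convolve(p_i, p_v, mode="same") * dx

print(p_y.sum() * dx)   # close to 1, as a pdf should integrate to
```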
For the same detector, the decision function for y is the same as that for x . Then the detection and false-alarm probabilities based on the new noise modified observation y are calculated by:
$$ P_D^y = \int_{\mathbb{R}^K} \phi(y)\, p_y(y; H_1)\, dy \qquad (3) $$
$$ P_{FA}^y = \int_{\mathbb{R}^K} \phi(y)\, p_y(y; H_0)\, dy \qquad (4) $$
According to the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$, where $\beta$ and $\alpha$ represent the lower limit on the detection probability and the upper limit on the false-alarm probability, respectively, the following three important quantities are defined:
$$ z^d = P_D^y - \beta \qquad (5) $$
$$ z^{fa} = \alpha - P_{FA}^y \qquad (6) $$
$$ z = z^d + z^{fa} \qquad (7) $$
where $z^d$ and $z^{fa}$ can be regarded as the relative improvements of the detection and false-alarm probabilities obtained by adding the additive noise, respectively, and $z$ is their sum.
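As a purely illustrative example (the numbers are not from the paper): with $\alpha = 0.3$ and $\beta = 0.7$, a noise modified pair $(P_D^y, P_{FA}^y) = (0.85, 0.25)$ gives $z^d = 0.15$, $z^{fa} = 0.05$ and $z = 0.2$, so both constraints are met, whereas $(0.75, 0.35)$ gives $z^d = 0.05$ but $z^{fa} = -0.05$, so the false-alarm constraint is violated.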
In many cases, although the structure of the detector cannot be replaced, some of its parameters, such as the decision threshold, can be varied; in some special cases the whole detector can also be replaced. Instead of a fixed decision function $\phi(\cdot)$, we suppose that there exists a set of candidate decision functions, written as $\Phi$, and any $\phi_\ell \in \Phi$ can be utilized. Then for any decision function $\phi_\ell \in \Phi$, $\ell = 1, \ldots, L$, the detection and false-alarm probabilities based on $y$ are obtained by replacing $\phi$ with $\phi_\ell$, i.e.:
$$ P_{D,\phi_\ell}^y = \int_{\mathbb{R}^K} \phi_\ell(y)\, p_y(y; H_1)\, dy \qquad (8) $$
$$ P_{FA,\phi_\ell}^y = \int_{\mathbb{R}^K} \phi_\ell(y)\, p_y(y; H_0)\, dy \qquad (9) $$
When the additive noise is a fixed discrete vector $v$, i.e., its pdf is a Dirac delta function located at $v$, we have $p_y(y; H_i) = p_i(y - v)$. The corresponding noise modified detection and false-alarm probabilities can be rewritten as:
$$ P_{D,\phi_\ell}^y(v) = \int_{\mathbb{R}^K} \phi_\ell(y)\, p_1(y - v)\, dy \qquad (10) $$
$$ P_{FA,\phi_\ell}^y(v) = \int_{\mathbb{R}^K} \phi_\ell(y)\, p_0(y - v)\, dy \qquad (11) $$
Accordingly, under the constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$, the relative improvements of the detection and false-alarm probabilities corresponding to the additive discrete vector $v$ can be written as:
$$ z_{\phi_\ell}^d(v) = P_{D,\phi_\ell}^y(v) - \beta \qquad (12) $$
$$ z_{\phi_\ell}^{fa}(v) = \alpha - P_{FA,\phi_\ell}^y(v) \qquad (13) $$
Correspondingly:
$$ z_{\phi_\ell}(v) = z_{\phi_\ell}^d(v) + z_{\phi_\ell}^{fa}(v) \qquad (14) $$
In order to make full use of the information provided by each discrete vector $v$, we define the following six mutually disjoint sets for each $\phi_\ell \in \Phi$ according to the values of $z_{\phi_\ell}^d(v)$ and $z_{\phi_\ell}^{fa}(v)$:
$M_{1,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) > 0,\ z_{\phi_\ell}^{fa}(v) > 0\}$,
$M_{2,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) > 0,\ z_{\phi_\ell}^{fa}(v) = 0\}$,
$M_{3,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) = 0,\ z_{\phi_\ell}^{fa}(v) > 0\}$,
$M_{4,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) > 0,\ z_{\phi_\ell}^{fa}(v) < 0\}$,
$M_{5,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) < 0,\ z_{\phi_\ell}^{fa}(v) > 0\}$,
$M_{6,\phi_\ell} = \{(\phi_\ell, v) \mid z_{\phi_\ell}^d(v) \le 0,\ z_{\phi_\ell}^{fa}(v) \le 0\}$.
Further, define $M_{\phi_\ell} = \bigcup_{j=1}^{6} M_{j,\phi_\ell}$ and $M_j = \bigcup_{\ell=1}^{L} M_{j,\phi_\ell}$; then $M = \bigcup_{\ell=1}^{L} M_{\phi_\ell} = \bigcup_{j=1}^{6} M_j$, where $j = 1, \ldots, 6$ and $\ell = 1, \ldots, L$.
A framework is thus formulated by defining these six sets. The purpose of this paper is to investigate the optimal noise enhanced solutions for the maximum $z^d$, $z^{fa}$ and $z$, respectively, under the two inequality constraints, based on this framework. Whether a pair $(\phi_\ell, v)$ is useful, partially useful or useless for noise enhancement can be determined directly from the definitions of the six sets. For instance, any detector and discrete vector pair in $M_1$, $M_2$ or $M_3$ meets the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$; the maximum $z^d$ may be obtained by a suitable pair in $M_1$ or $M_2$; the maximum $z^{fa}$ may be achieved by a pair in $M_1$ or $M_3$; and the maximum $z$ may be reached by a suitable pair in $M_1$, $M_2$ or $M_3$. In the following sections, the corresponding theorems and algorithms are provided.
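Since membership in the six sets depends only on the signs of $z_{\phi_\ell}^d(v)$ and $z_{\phi_\ell}^{fa}(v)$, the classification can be expressed compactly; the following minimal Python sketch (the function name classify_pair is ours, not from the paper) illustrates the partition.

```python
def classify_pair(zd: float, zfa: float) -> int:
    """Return the index j of the set M_j containing a pair with relative
    improvements zd = P_D^y(v) - beta and zfa = alpha - P_FA^y(v)."""
    if zd > 0 and zfa > 0:
        return 1
    if zd > 0 and zfa == 0:
        return 2
    if zd == 0 and zfa > 0:
        return 3
    if zd > 0 and zfa < 0:
        return 4
    if zd < 0 and zfa > 0:
        return 5
    return 6   # remaining cases: zd <= 0 and zfa <= 0

# Example: a pair that raises P_D above beta but pushes P_FA above alpha.
print(classify_pair(0.08, -0.02))   # -> 4
```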

3. The Noise Enhanced Solutions

Let $z_m^d$, $z_m^{fa}$ and $z_m$ be the maximum achievable $z^d$, $z^{fa}$ and $z$, respectively, obtained by adding a discrete vector as additive noise when randomization between detectors is allowed, i.e., $z_m^d = \max_{(\phi_\ell, v) \in M} z_{\phi_\ell}^d(v)$, $z_m^{fa} = \max_{(\phi_\ell, v) \in M} z_{\phi_\ell}^{fa}(v)$ and $z_m = \max_{(\phi_\ell, v) \in M} z_{\phi_\ell}(v)$. If any one of $z_m^d$, $z_m^{fa}$ and $z_m$ is less than zero, $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ cannot be achieved by adding any noise. Therefore, this paper assumes that $z_m^d$, $z_m^{fa}$ and $z_m$ are all greater than zero.
In general, when randomization between different detectors is allowed, the noise enhanced solution can be viewed as a randomization of one or more detector and noise pairs with corresponding weights. Suppose that the additive noise pdf is $p_{v,\phi_\ell}(v)$ for detector $\phi_\ell \in \Phi$, $\ell = 1, \ldots, L$, and that $\phi_\ell$ is selected with probability $\lambda_\ell$; then $z^d$, $z^{fa}$ and $z$ can be expressed as:
$$ z^d = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} P_{D,\phi_\ell}^y(v)\, p_{v,\phi_\ell}(v)\, dv - \beta = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} z_{\phi_\ell}^d(v)\, p_{v,\phi_\ell}(v)\, dv \qquad (15) $$
$$ z^{fa} = \alpha - \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} P_{FA,\phi_\ell}^y(v)\, p_{v,\phi_\ell}(v)\, dv = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} z_{\phi_\ell}^{fa}(v)\, p_{v,\phi_\ell}(v)\, dv \qquad (16) $$
$$ z = z^d + z^{fa} = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} \left[ z_{\phi_\ell}^d(v) + z_{\phi_\ell}^{fa}(v) \right] p_{v,\phi_\ell}(v)\, dv = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} z_{\phi_\ell}(v)\, p_{v,\phi_\ell}(v)\, dv \qquad (17) $$
where $0 \le \lambda_\ell \le 1$ and $\sum_{\ell=1}^{L} \lambda_\ell = 1$. Generally, the additive noise for any $\phi_\ell \in \Phi$ can be viewed as a randomization of finitely or infinitely many discrete vectors, i.e., $p_{v,\phi_\ell}(v) = \sum_{\kappa=1}^{N} \eta_{\kappa,\phi_\ell}\, \delta(v - v_{\kappa,\phi_\ell})$, where $0 \le \eta_{\kappa,\phi_\ell} \le 1$, $\sum_{\kappa=1}^{N} \eta_{\kappa,\phi_\ell} = 1$, and $N$ is a finite or infinite positive integer.
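Given this discrete representation, $z^d$ and $z^{fa}$ of a randomized solution reduce to weighted sums over the mass points, as in Equations (15)–(18); a minimal Python sketch of this bookkeeping (the data layout and function name are our own illustration):

```python
def mixture_improvements(lambdas, components):
    """Evaluate z^d, z^fa and z for a randomized noise enhanced solution.

    lambdas:    detector selection probabilities lambda_l (summing to 1).
    components: for each detector, a list of (eta, zd, zfa) triples, where eta
                is the probability of the mass point v_{kappa,l} and (zd, zfa)
                are the relative improvements achieved by that pair.
    """
    zd = sum(lam * sum(eta * z for eta, z, _ in comps)
             for lam, comps in zip(lambdas, components))
    zfa = sum(lam * sum(eta * z for eta, _, z in comps)
              for lam, comps in zip(lambdas, components))
    return zd, zfa, zd + zfa

# Hypothetical two-detector solution: phi_1 uses one mass point, phi_2 two.
print(mixture_improvements([0.4, 0.6],
                           [[(1.0, 0.12, -0.03)],
                            [(0.5, 0.05, 0.04), (0.5, -0.01, 0.06)]]))
```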

3.1. The Optimal Noise Enhanced Solution to Maximize $z^d$

From Equation (15), $z^d$ can be rewritten as:
$$ z^d = \sum_{\ell=1}^{L} \lambda_\ell \int_{\mathbb{R}^K} z_{\phi_\ell}^d(v) \sum_{\kappa=1}^{N} \eta_{\kappa,\phi_\ell}\, \delta(v - v_{\kappa,\phi_\ell})\, dv = \sum_{\ell=1}^{L} \sum_{\kappa=1}^{N} \lambda_\ell\, \eta_{\kappa,\phi_\ell} \int_{\mathbb{R}^K} z_{\phi_\ell}^d(v)\, \delta(v - v_{\kappa,\phi_\ell})\, dv = \sum_{\ell=1}^{L} \sum_{\kappa=1}^{N} \lambda_\ell\, \eta_{\kappa,\phi_\ell}\, z_{\phi_\ell}^d(v_{\kappa,\phi_\ell}) \qquad (18) $$
Further, $z^d$ can also be expressed as:
$$ z^d = \sum_{\tau=1}^{6} \sum_{j=1}^{6} \sum_{\kappa=1}^{N_{\tau j}} \tau_{\tau j \kappa} \left[ \xi_{\tau j \kappa}\, z_{\phi_h}^d(v_{\tau\kappa,\phi_h}) + (1 - \xi_{\tau j \kappa})\, z_{\phi_l}^d(v_{j\kappa,\phi_l}) \right] \qquad (19) $$
where $(\phi_h, v_{\tau\kappa,\phi_h}) \in M_\tau$, $(\phi_l, v_{j\kappa,\phi_l}) \in M_j$, $\tau \ne j$, $\sum_{\tau=1}^{6} \sum_{j=1}^{6} \sum_{\kappa=1}^{N_{\tau j}} \tau_{\tau j \kappa} = 1$, $0 \le \tau_{\tau j \kappa} \le 1$, $0 \le \xi_{\tau j \kappa} \le 1$, and $h, l = 1, \ldots, L$. Let $z_{\tau j \kappa}^d = \xi_{\tau j \kappa}\, z_{\phi_h}^d(v_{\tau\kappa,\phi_h}) + (1 - \xi_{\tau j \kappa})\, z_{\phi_l}^d(v_{j\kappa,\phi_l})$. In other words, $z_{\tau j \kappa}^d$ is obtained by the randomization of two detector and discrete vector pairs from two different sets, i.e., $(\phi_h, v_{\tau\kappa,\phi_h}) \in M_\tau$ and $(\phi_l, v_{j\kappa,\phi_l}) \in M_j$. Then $z^d = \sum_{\tau=1}^{6} \sum_{j=1}^{6} \sum_{\kappa=1}^{N_{\tau j}} \tau_{\tau j \kappa}\, z_{\tau j \kappa}^d$ is a convex combination of multiple $z_{\tau j \kappa}^d$, which means that $z^d$ can be obtained as a randomization of multiple randomizations, each consisting of two detector and discrete vector pairs $(\phi_h, v_{\phi_h})$, $(\phi_l, v_{\phi_l})$ from two different sets, with the corresponding probabilities. From Equation (18), if there exists at least one detector and discrete vector pair belonging to $M_1$, the constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ can be satisfied by choosing the corresponding detector and adding the corresponding discrete vector. Otherwise, according to Equation (19) and the definitions of $M_1 \sim M_6$, a randomization of two detector and discrete vector pairs from two different sets may satisfy the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$.
Let the maximum achievable $z^d$ obtained by any noise solution under the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ be denoted by $z_{opt}^d$. Define $Q^d = \{(\phi_\ell, v) \mid (\phi_\ell, v) = \arg\max_{(\phi_\ell, v) \in M} z_{\phi_\ell}^d(v)\}$ as the set of all detector and discrete vector pairs achieving $z_m^d$. Then the following theorem and corollary hold, and the corresponding proofs are presented in Appendix A.
Theorem 1. 
$z_{opt}^d$ can be achieved by the randomization of at most two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets, i.e., $(\phi_1, v_1) \in M_\tau$, $(\phi_2, v_2) \in M_j$, $\tau, j = 1, \ldots, 6$ and $\tau \ne j$.
Corollary 1. 
(a) If there exists at least one pair $(\phi_o, v_o)$ of $Q^d$ which belongs to $M_1 \cup M_2$, $z_{opt}^d$ can be obtained by selecting $(\phi_o, v_o)$ and $z_{opt}^d = z_m^d$. (b) When $Q^d \subseteq M_4$, the $z^{fa}$ corresponding to $z_{opt}^d$ is zero. (c) When $Q^d \subseteq M_4$, $z_{opt}^d$ is obtained either by the randomization of $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2)$ from $M_1$, $M_3$ or $M_5$ with the respective probabilities $\xi = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$ and $1 - \xi$, or by the single detector and discrete vector pair $(\phi_o, v_o) = \arg\max_{(\phi_\ell, v) \in M_2} z_{\phi_\ell}^d(v)$.
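As a purely hypothetical numerical illustration of part (c): if $(\phi_1, v_1) \in M_4$ has $z_{\phi_1}^{fa}(v_1) = -0.04$ and $(\phi_2, v_2) \in M_3$ has $z_{\phi_2}^{fa}(v_2) = 0.06$, then $\xi = 0.06 / (0.06 - (-0.04)) = 0.6$, and the randomization yields $z^{fa} = 0.6 \times (-0.04) + 0.4 \times 0.06 = 0$, so the false-alarm constraint holds with equality while $z^d = 0.6\, z_{\phi_1}^d(v_1) + 0.4\, z_{\phi_2}^d(v_2)$.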
Next, we search for the maximum achievable $z^d$ obtained by the randomization of $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2)$ from $M_1$, $M_3$ or $M_5$ when $Q^d \subseteq M_4$. Generally, the corresponding $z^d$ and $z^{fa}$ can be expressed as:
$$ z^d = \xi\, z_{\phi_1}^d(v_1) + (1 - \xi)\, z_{\phi_2}^d(v_2) = \sum_{i=1}^{2} \xi_i\, z_{\phi_i}^d(v_i) \qquad (20) $$
$$ z^{fa} = \xi\, z_{\phi_1}^{fa}(v_1) + (1 - \xi)\, z_{\phi_2}^{fa}(v_2) = \sum_{i=1}^{2} \xi_i\, z_{\phi_i}^{fa}(v_i) \qquad (21) $$
where $\xi_1 = \xi$ and $\xi_2 = 1 - \xi$. Under the constraint $P_{FA}^y \le \alpha$, the Lagrangian of the problem of maximizing $z^d$ can be formulated as:
$$ L(p_{\phi,v}, k) = z^d + k\, z^{fa} = \sum_{i=1}^{2} \xi_i \left[ z_{\phi_i}^d(v_i) + k\, z_{\phi_i}^{fa}(v_i) \right] \qquad (22) $$
where $p_{\phi,v}$ denotes the distribution over $(\phi_1, v_1)$ and $(\phi_2, v_2)$. According to Lagrange duality, we have:
$$ \max_{p_{\phi,v}} z^d = \min_{k \ge 0} \max_{p_{\phi,v}} L(p_{\phi,v}, k) \qquad (23) $$
Thus, solving for the optimal solution that maximizes $z^d$ is equivalent to finding $k > 0$ and $p_{\phi,v}$ that make Equation (23) hold. Let us define an auxiliary function $d_{\phi_\ell}(v, k)$ such that:
$$ d_{\phi_\ell}(v, k) = z_{\phi_\ell}^d(v) + k\, z_{\phi_\ell}^{fa}(v) \qquad (24) $$
Let $d_1(k)$ and $d_2(k)$ be the respective suprema of $d_{\phi_\ell}(v, k)$ over the sets $M_4$ and $M_1 \cup M_3 \cup M_5$, i.e.,
$$ d_1(k) = \sup_{(\phi_\ell, v)} \{ d_{\phi_\ell}(v, k) : (\phi_\ell, v) \in M_4 \} \qquad (25) $$
$$ d_2(k) = \sup_{(\phi_\ell, v)} \{ d_{\phi_\ell}(v, k) : (\phi_\ell, v) \in M_1 \cup M_3 \cup M_5 \} \qquad (26) $$
Since $z_{\phi_\ell}^{fa}(v) < 0$ when $(\phi_\ell, v) \in M_4$, $d_1(k)$ is a decreasing function of $k$. When $(\phi_\ell, v) \in M_1 \cup M_3 \cup M_5$, $z_{\phi_\ell}^{fa}(v) > 0$, which means that $d_2(k)$ increases with $k$. Thus, there exists a $k^* > 0$ such that $d_1(k^*) = d_2(k^*) = d^*$, i.e., there are $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2) \in M_1 \cup M_3 \cup M_5$ such that $d_{\phi_1}(v_1, k^*) = d_1(k^*) = d_2(k^*) = d_{\phi_2}(v_2, k^*) = d^*$. The $z^{fa}$ and $z^d$ obtained by the randomization between $(\phi_1, v_1)$ and $(\phi_2, v_2)$ with the respective probabilities $\xi = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$ and $1 - \xi$ can be calculated as:
$$ z^{fa} = \xi\, z_{\phi_1}^{fa}(v_1) + (1 - \xi)\, z_{\phi_2}^{fa}(v_2) = \frac{z_{\phi_2}^{fa}(v_2)\, z_{\phi_1}^{fa}(v_1)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)} + \frac{z_{\phi_1}^{fa}(v_1)\, z_{\phi_2}^{fa}(v_2)}{z_{\phi_1}^{fa}(v_1) - z_{\phi_2}^{fa}(v_2)} = 0 \qquad (27) $$
$$ z^d = \xi\, z_{\phi_1}^d(v_1) + (1 - \xi)\, z_{\phi_2}^d(v_2) = \xi \left( d^* - k^* z_{\phi_1}^{fa}(v_1) \right) + (1 - \xi) \left( d^* - k^* z_{\phi_2}^{fa}(v_2) \right) = d^* - k^* \left( \xi\, z_{\phi_1}^{fa}(v_1) + (1 - \xi)\, z_{\phi_2}^{fa}(v_2) \right) = d^* \qquad (28) $$
Combining Equations (27) and (28), $k^*$ together with the randomization of $(\phi_1, v_1)$ and $(\phi_2, v_2)$ is the solution of Equation (23), i.e., $d^*$ is the maximum achievable $z^d$ obtained by the randomization of $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2)$ from $M_1$, $M_3$ or $M_5$ when $Q^d \subseteq M_4$. Based on the analysis above, Algorithm B1, provided in Appendix B, searches for the two detector and discrete vector pairs.
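As a hypothetical numerical illustration of this construction: suppose $M_4$ contains a single pair with $(z^d, z^{fa}) = (0.20, -0.05)$ and $M_1 \cup M_3 \cup M_5$ a single pair with $(-0.02, 0.08)$. Then $d_1(k) = 0.20 - 0.05k$ and $d_2(k) = -0.02 + 0.08k$, which intersect at $k^* = 0.22/0.13 \approx 1.69$ with $d^* \approx 0.115$; the randomization weight above is $\xi = 0.08/(0.08 + 0.05) \approx 0.615$, and indeed $\xi \cdot 0.20 + (1 - \xi) \cdot (-0.02) \approx 0.115 = d^*$ while $z^{fa} = 0$.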

3.2. The Optimal Noise Enhanced Solution to Maximize $z^{fa}$

In this subsection, the optimal noise enhanced solution to maximize $z^{fa}$ is considered. Let the maximum achievable $z^{fa}$ obtained by any noise solution under the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ be denoted by $z_{opt}^{fa}$. Define $Q^{fa} = \{(\phi_\ell, v) \mid (\phi_\ell, v) = \arg\max_{(\phi_\ell, v) \in M} z_{\phi_\ell}^{fa}(v)\}$. Then the following theorem and corollary hold; the corresponding proofs are similar to those of Theorem 1 and Corollary 1, respectively, and are omitted here.
Theorem 2. 
$z_{opt}^{fa}$ can be obtained by the randomization of at most two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets, i.e., $(\phi_1, v_1) \in M_\tau$, $(\phi_2, v_2) \in M_j$, $\tau, j = 1, \ldots, 6$ and $\tau \ne j$.
Corollary 2. 
(a) If there exists at least one pair $(\phi_o, v_o)$ of $Q^{fa}$ which also belongs to $M_1 \cup M_3$, $z_{opt}^{fa}$ can be achieved by selecting $(\phi_o, v_o)$ and $z_{opt}^{fa} = z_m^{fa}$. (b) When $Q^{fa} \subseteq M_5$, the $z^d$ corresponding to $z_{opt}^{fa}$ is zero. (c) When $Q^{fa} \subseteq M_5$, $z_{opt}^{fa}$ is obtained either by the randomization of $(\phi_1, v_1)$ from $M_1$, $M_2$ or $M_4$ and $(\phi_2, v_2) \in M_5$ with the respective probabilities $\eta = \frac{z_{\phi_2}^{d}(v_2)}{z_{\phi_2}^{d}(v_2) - z_{\phi_1}^{d}(v_1)}$ and $1 - \eta$, or by the single detector and discrete vector pair $(\phi_o, v_o) = \arg\max_{(\phi_\ell, v) \in M_3} z_{\phi_\ell}^{fa}(v)$.
We now focus on the maximum $z^{fa}$ obtained by the randomization of $(\phi_1, v_1)$ from $M_1$, $M_2$ or $M_4$ and $(\phi_2, v_2) \in M_5$ with respective probabilities $\eta$ and $1 - \eta$ when $Q^{fa} \subseteq M_5$. The corresponding $z^d$ and $z^{fa}$ can be expressed as:
$$ z^d = \eta\, z_{\phi_1}^d(v_1) + (1 - \eta)\, z_{\phi_2}^d(v_2) = \sum_{i=1}^{2} \eta_i\, z_{\phi_i}^d(v_i) \qquad (29) $$
$$ z^{fa} = \eta\, z_{\phi_1}^{fa}(v_1) + (1 - \eta)\, z_{\phi_2}^{fa}(v_2) = \sum_{i=1}^{2} \eta_i\, z_{\phi_i}^{fa}(v_i) \qquad (30) $$
where $\eta_1 = \eta$ and $\eta_2 = 1 - \eta$. Under the constraint $P_D^y \ge \beta$, the Lagrangian of the maximization of $z^{fa}$ is:
$$ L(p_{\phi,v}, t) = z^{fa} + t\, z^d = \sum_{i=1}^{2} \eta_i \left[ z_{\phi_i}^{fa}(v_i) + t\, z_{\phi_i}^d(v_i) \right] \qquad (31) $$
The Lagrange duality suggests that:
$$ \max_{p_{\phi,v}} z^{fa} = \min_{t \ge 0} \max_{p_{\phi,v}} L(p_{\phi,v}, t) \qquad (32) $$
In order to solve Equation (32), let us define an auxiliary function $f_{\phi_\ell}(v, t)$ such that:
$$ f_{\phi_\ell}(v, t) = z_{\phi_\ell}^{fa}(v) + t\, z_{\phi_\ell}^d(v) \qquad (33) $$
Suppose that $f_1(t)$ and $f_2(t)$ are the respective suprema of $f_{\phi_\ell}(v, t)$ over the sets $M_1 \cup M_2 \cup M_4$ and $M_5$, i.e.,
$$ f_1(t) = \sup_{(\phi_\ell, v)} \{ f_{\phi_\ell}(v, t) : (\phi_\ell, v) \in M_1 \cup M_2 \cup M_4 \} \qquad (34) $$
$$ f_2(t) = \sup_{(\phi_\ell, v)} \{ f_{\phi_\ell}(v, t) : (\phi_\ell, v) \in M_5 \} \qquad (35) $$
When $(\phi_\ell, v) \in M_1 \cup M_2 \cup M_4$, $z_{\phi_\ell}^d(v) > 0$ and hence $f_1(t)$ increases with $t$. Since $z_{\phi_\ell}^d(v) < 0$ when $(\phi_\ell, v) \in M_5$, $f_2(t)$ decreases with $t$. Therefore, there exists a $t^* > 0$ such that $f_1(t^*) = f_2(t^*) = f^*$. Namely, there exist $(\phi_1, v_1) \in M_1 \cup M_2 \cup M_4$ and $(\phi_2, v_2) \in M_5$ such that $f_{\phi_1}(v_1, t^*) = f_1(t^*) = f_2(t^*) = f_{\phi_2}(v_2, t^*) = f^*$. The $z^d$ and $z^{fa}$ obtained by the randomization between $(\phi_1, v_1)$ and $(\phi_2, v_2)$ with the respective probabilities $\eta = \frac{z_{\phi_2}^{d}(v_2)}{z_{\phi_2}^{d}(v_2) - z_{\phi_1}^{d}(v_1)}$ and $1 - \eta$ can be calculated as:
$$ z^d = \eta\, z_{\phi_1}^d(v_1) + (1 - \eta)\, z_{\phi_2}^d(v_2) = \frac{z_{\phi_2}^{d}(v_2)\, z_{\phi_1}^{d}(v_1)}{z_{\phi_2}^{d}(v_2) - z_{\phi_1}^{d}(v_1)} + \frac{z_{\phi_1}^{d}(v_1)\, z_{\phi_2}^{d}(v_2)}{z_{\phi_1}^{d}(v_1) - z_{\phi_2}^{d}(v_2)} = 0 \qquad (36) $$
$$ z^{fa} = \eta\, z_{\phi_1}^{fa}(v_1) + (1 - \eta)\, z_{\phi_2}^{fa}(v_2) = \eta \left( f^* - t^* z_{\phi_1}^{d}(v_1) \right) + (1 - \eta) \left( f^* - t^* z_{\phi_2}^{d}(v_2) \right) = f^* - t^* \left( \eta\, z_{\phi_1}^{d}(v_1) + (1 - \eta)\, z_{\phi_2}^{d}(v_2) \right) = f^* \qquad (37) $$
From Equations (36) and (37), $t^*$ together with the randomization of $(\phi_1, v_1)$ and $(\phi_2, v_2)$ is the solution of Equation (32), i.e., $f^*$ is the maximum achievable $z^{fa}$ obtained by the randomization of $(\phi_1, v_1)$ from $M_1$, $M_2$ or $M_4$ and $(\phi_2, v_2) \in M_5$ when $Q^{fa} \subseteq M_5$. Based on the derivation above, Algorithm B2, presented in Appendix B, can be utilized to search for the corresponding detector and discrete vector pairs.

3.3. The Optimal Noise Enhanced Solution to Maximize z

Let $z_{opt}$ represent the maximum achievable $z$ under the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$. Define $Q^z = \{(\phi_\ell, v) \mid (\phi_\ell, v) = \arg\max_{(\phi_\ell, v) \in M} z_{\phi_\ell}(v)\}$. The optimal noise enhanced solution to maximize $z$ is explored in this subsection; the related theorem and corollary are provided below.
Theorem 3. 
$z_{opt}$ can be obtained by the randomization of at most two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets such that $(\phi_1, v_1) \in M_\tau$, $(\phi_2, v_2) \in M_j$ and $\tau \ne j$. The proof of Theorem 3 is similar to that of Theorem 1 and is omitted here.
Corollary 3. 
(a) If there exists at least one pair $(\phi_o, v_o)$ of $Q^z$ that belongs to $M_1 \cup M_2 \cup M_3$, the maximum $z$ can be realized by choosing $(\phi_o, v_o)$ and $z_{opt} = z_m$. (b) If $Q^z \subseteq M_4 \cup M_5$, $z_{opt} = \max(z_{opt}^d, z_{opt}^{fa})$. (c) If $Q^z \cap M_4 \ne \emptyset$ and $Q^z \cap M_5 \ne \emptyset$, we have $z_{opt}^d = z_{opt}^{fa} = z_{opt} = z_m$. The proofs are provided in Appendix A.
In particular, when $Q^z \subseteq M_4 \cup M_5$, according to the analysis above and the properties of $M_1 \sim M_5$, we can select the two pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ directly to form a feasible noise enhanced solution that makes the value of $z$ as great as possible.
If $Q^z \subseteq M_4$, we can let $(\phi_1, v_1) \in Q^z$ and $(\phi_2, v_2) \in M_1 \cup M_3 \cup M_5$. Since $z_{\phi_1}(v_1) \ge z_{\phi_2}(v_2)$, the maximum $z$ is achieved when $\lambda = \lambda_2 = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$: the greater the value of $\lambda$, the greater the value of $z$. Hence $(\phi_1, v_1)$ and $(\phi_2, v_2)$ can be selected as $(\phi_1, v_1) = \arg\max_{(\phi_\ell, v) \in Q^z} z_{\phi_\ell}^{fa}(v)$ and $(\phi_2, v_2) = \arg\max_{(\phi_\ell, v) \in T_1} z_{\phi_\ell}^{fa}(v)$, where $T_1 = \{(\phi_\ell, v) \mid (\phi_\ell, v) = \arg\max_{(\phi_\ell, v) \in M_1 \cup M_3 \cup M_5} z_{\phi_\ell}(v)\}$.
Similarly, when $Q^z \subseteq M_5$, let $(\phi_2, v_2) = \arg\max_{(\phi_\ell, v) \in Q^z} z_{\phi_\ell}^{d}(v)$ and $(\phi_1, v_1) = \arg\max_{(\phi_\ell, v) \in T_2} z_{\phi_\ell}^{d}(v)$, where $T_2 = \{(\phi_\ell, v) \mid (\phi_\ell, v) = \arg\max_{(\phi_\ell, v) \in M_1 \cup M_2 \cup M_4} z_{\phi_\ell}(v)\}$.

3.4. Sufficient Conditions for $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$

In this subsection, based on the analysis in Section 3.1, Section 3.2 and Section 3.3 and the properties of the six sets, sufficient conditions under which the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ can or cannot be satisfied are determined as follows.
Theorem 4. 
(a) If $M_1 \cup M_2 \cup M_3 \ne \emptyset$, any pair $(\phi_\ell, v) \in M_1 \cup M_2 \cup M_3$ meets $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$; (b) when $M_1 \cup M_2 \cup M_3 = \emptyset$, if there exist $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2) \in M_5$ such that:
$$ \left| z_{\phi_1}^{fa}(v_1) \right| \left| z_{\phi_2}^{d}(v_2) \right| < \left| z_{\phi_1}^{d}(v_1) \right| \left| z_{\phi_2}^{fa}(v_2) \right| \qquad (38) $$
then $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ can be realized by the randomization of $(\phi_1, v_1)$ and $(\phi_2, v_2)$; otherwise there exists no noise enhanced solution that makes $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ hold. The proofs are presented in Appendix A.
When randomization between detectors is not allowed, only one detector can be selected. Suppose that the selected detector is $\phi_o$; then the conclusions for the case of allowing randomization between detectors apply directly to the nonrandomization case by replacing $M_j$ with $M_{j,\phi_o}$ and setting $\phi_1 = \phi_2 = \phi_o$, where $j = 1, \ldots, 6$.
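Theorem 4 reduces the feasibility question to a simple scan of the classified pairs; the following minimal Python sketch (function and variable names are ours, not from the paper) illustrates the check.

```python
from itertools import product

def constraints_attainable(pairs):
    """pairs: list of (zd, zfa) values, one per detector/vector pair.
    Returns True if P_D^y >= beta and P_FA^y <= alpha can be met (Theorem 4)."""
    # (a) Any pair with zd >= 0 and zfa >= 0 suffices (a pair in M1, M2 or M3,
    #     or the boundary case zd = zfa = 0, which meets both with equality).
    if any(zd >= 0 and zfa >= 0 for zd, zfa in pairs):
        return True
    # (b) Otherwise look for (phi_1, v_1) in M4 and (phi_2, v_2) in M5 with
    #     |z1_fa| * |z2_d| < |z1_d| * |z2_fa|.
    m4 = [(zd, zfa) for zd, zfa in pairs if zd > 0 and zfa < 0]
    m5 = [(zd, zfa) for zd, zfa in pairs if zd < 0 and zfa > 0]
    return any(abs(z1fa) * abs(z2d) < abs(z1d) * abs(z2fa)
               for (z1d, z1fa), (z2d, z2fa) in product(m4, m5))

print(constraints_attainable([(0.1, -0.02), (-0.03, 0.05)]))  # True here
```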

4. Numerical Results

In this section, a binary hypothesis-testing problem is considered to verify the theoretical results explored in the previous sections. The binary hypotheses are given by:
$$ \begin{cases} H_0: x[i] = \omega[i] \\ H_1: x[i] = A + \omega[i] \end{cases} \qquad (39) $$
where $i = 0, \ldots, K-1$, $A > 0$ is a known constant, and $\omega[i]$ are independent and identically distributed (i.i.d.) symmetric Gaussian mixture noise samples such that:
$$ p_\omega(\omega) = \tfrac{1}{2}\, \gamma(\omega; -\mu, \sigma^2) + \tfrac{1}{2}\, \gamma(\omega; \mu, \sigma^2) \qquad (40) $$
where $\gamma(\omega; \mu, \sigma^2) = \left( 1 / \sqrt{2\pi\sigma^2} \right) \exp\!\left( -(\omega - \mu)^2 / (2\sigma^2) \right)$. The test statistic of a suboptimal detector is given by:
$$ T(x) = \frac{1}{K} \sum_{i=0}^{K-1} \left( \frac{1}{2} + \frac{1}{2}\,\mathrm{sgn}(x[i]) \right) = \frac{1}{K} \sum_{i=0}^{K-1} S(x[i]) \; \underset{H_0}{\overset{H_1}{\gtrless}} \; \gamma \qquad (41) $$
where $S(t) = \frac{1}{2} + \frac{1}{2}\,\mathrm{sgn}(t)$ equals $1$ for $t > 0$ and $0$ for $t < 0$. From Equation (41), the test result is obtained through two decision stages. First, the sign detector $S(\cdot)$ determines the sign of each observation component $x[i]$. Second, the proportion of positive components in the observation vector is computed and compared with the decision threshold $\gamma$ to obtain the final result.
Let $K = 2$. The detector in Equation (41) can then be specified by two decision thresholds, $\gamma = 1$ and $\gamma = 0.5$, with corresponding decision functions $\phi_1$ and $\phi_2$, respectively. When $\gamma = 1$, the detector chooses $H_1$ only if $x[0] > 0$ and $x[1] > 0$ simultaneously. When $\gamma = 0.5$, the detector chooses $H_1$ if $x[0] > 0$ or $x[1] > 0$. Let $v = (v_1, v_2)^T$ be a discrete vector without any constraints. The probabilities that the sign detector $S(\cdot)$ declares the noise modified observation component $y[i] = x[i] + v[i]$ positive under $H_1$ and $H_0$, $i = 0, 1$, are $P_{1,s}(v[i]) = \int S(y[i])\, p_1(y[i] - v[i])\, dy[i] = \frac{1}{2} Q\!\left(\frac{-v[i] - \mu - A}{\sigma}\right) + \frac{1}{2} Q\!\left(\frac{-v[i] + \mu - A}{\sigma}\right)$ and $P_{0,s}(v[i]) = \int S(y[i])\, p_0(y[i] - v[i])\, dy[i] = \frac{1}{2} Q\!\left(\frac{-v[i] - \mu}{\sigma}\right) + \frac{1}{2} Q\!\left(\frac{-v[i] + \mu}{\sigma}\right)$, where $v[0] = v_1$, $v[1] = v_2$ and $Q(x) = \int_x^{+\infty} (1/\sqrt{2\pi}) \exp(-t^2/2)\, dt$. Further, the detection and false-alarm probabilities of $y$ for $\phi_1$ are $P_{D,\phi_1}^y(v) = P_{1,s}(v_1)\, P_{1,s}(v_2)$ and $P_{FA,\phi_1}^y(v) = P_{0,s}(v_1)\, P_{0,s}(v_2)$, respectively, while those for $\phi_2$ are $P_{D,\phi_2}^y(v) = 1 - (1 - P_{1,s}(v_1))(1 - P_{1,s}(v_2))$ and $P_{FA,\phi_2}^y(v) = 1 - (1 - P_{0,s}(v_1))(1 - P_{0,s}(v_2))$. Correspondingly, $z_{\phi_i}^d(v) = P_{D,\phi_i}^y(v) - \beta$ and $z_{\phi_i}^{fa}(v) = \alpha - P_{FA,\phi_i}^y(v)$, $i = 1, 2$.
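A minimal Python sketch of these probability computations (our own illustration; function names are not from the paper, and the default parameters are only example values):

```python
from scipy.stats import norm

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return norm.sf(x)

def per_sample_probs(v, mu=3.0, A=1.0, sigma=1.0):
    """P(y[i] > 0) under H1 and H0 for one noise-shifted component y[i] = x[i] + v."""
    p1 = 0.5 * Q((-v - mu - A) / sigma) + 0.5 * Q((-v + mu - A) / sigma)
    p0 = 0.5 * Q((-v - mu) / sigma) + 0.5 * Q((-v + mu) / sigma)
    return p1, p0

def detector_probs(v, mu=3.0, A=1.0, sigma=1.0):
    """Noise-modified (P_D, P_FA) for phi_1 (gamma = 1) and phi_2 (gamma = 0.5)
    when the additive noise vector is v = (v1, v2)."""
    p1a, p0a = per_sample_probs(v[0], mu, A, sigma)
    p1b, p0b = per_sample_probs(v[1], mu, A, sigma)
    pd1, pfa1 = p1a * p1b, p0a * p0b                                  # both components positive
    pd2, pfa2 = 1 - (1 - p1a) * (1 - p1b), 1 - (1 - p0a) * (1 - p0b)  # at least one positive
    return {"phi1": (pd1, pfa1), "phi2": (pd2, pfa2)}

# Relative improvements for a trial noise vector (here alpha = 0.3, beta = 0.7).
for name, (pd, pfa) in detector_probs((2.5, 2.5), sigma=0.01).items():
    print(name, "z_d =", round(pd - 0.7, 4), "z_fa =", round(0.3 - pfa, 4))
```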
Let $\mu = 3$ and $A = 1$. Under the two constraints $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$, for any $\sigma$ we can determine the six sets $M_{j,\phi_i}$, $j = 1, \ldots, 6$, for $\phi_1$ and $\phi_2$ according to the definitions of the six sets and the values of $z_{\phi_i}^d(v)$ and $z_{\phi_i}^{fa}(v)$, respectively. The six sets obtained by allowing randomization between $\phi_1$ and $\phi_2$ are then $M_j = M_{j,\phi_1} \cup M_{j,\phi_2}$, $j = 1, \ldots, 6$. Afterwards, the maximum $z^d$, $z^{fa}$, $z$ and the corresponding noise enhanced solutions can be found according to the algorithms provided in Section 3.
Figure 1 illustrates the maximum achievable $z^d$, $z^{fa}$ and $z$ for $\phi_1$, $\phi_2$ and the case of allowing randomization between $\phi_1$ and $\phi_2$, for different values of $\sigma$ when $\alpha = 0.3$ and $\beta = 0.7$. The $z^d$ plotted in Figure 1a is the relative improvement of the maximum achievable detection probability $P_{D,opt}^y$ over $\beta = 0.7$ under the constraint $P_{FA}^y \le \alpha = 0.3$, i.e., $z^d = P_{D,opt}^y - \beta$. Hence, $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$ can be realized only when $z^d \ge 0$. As shown in Figure 1a, $z^d < 0$ once $\sigma$ increases beyond a certain extent, which means the feasible range of $\sigma$ for the noise enhanced phenomenon is limited. When $\sigma$ is close to 0, the maximum achievable $z^d$ for $\phi_1$ is 0.3, which equals that for the case of allowing randomization between detectors, and the corresponding $P_{D,opt}^y$ is close to 1, while the maximum $P_D^y$ for $\phi_2$ can only reach 0.9. With the increase of $\sigma$, the $z^d$ for $\phi_1$, $\phi_2$ and the randomization case gradually decrease. When $\sigma > 0.34$, the maximum achievable $z^d$ for $\phi_1$ is lower than that for the randomization case. The maximum achievable $z^d$ reaches 0 for $\phi_1$ and $\phi_2$ when $\sigma = 0.63$, and the maximum $P_D^y$ for $\phi_2$ becomes greater than that for $\phi_1$ when $\sigma > 0.63$. In particular, for the case where randomization between detectors is allowed, the maximum achievable $z^d$ decreases to 0 when $\sigma = 0.71$. Consequently, for $\sigma \in [0.63, 0.71]$, the noise enhanced phenomenon under the constraints $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$ can still occur by allowing randomization between $\phi_1$ and $\phi_2$, because the fusion of $M_{j,\phi_1}$ and $M_{j,\phi_2}$, $j = 1, \ldots, 6$, provides more available noise enhanced solutions.
The $z^{fa}$ depicted in Figure 1b is the relative improvement of the minimum achievable false-alarm probability $P_{FA,opt}^y$ over $\alpha = 0.3$ under the constraint $P_D^y \ge \beta = 0.7$, i.e., $z^{fa} = \alpha - P_{FA,opt}^y$. Similarly, a noise enhanced solution meeting the two constraints $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$ exists only if $z^{fa} \ge 0$. When $\sigma$ approaches 0, the maximum $z^{fa}$ for $\phi_2$ is equal to that for the case of allowing randomization between $\phi_1$ and $\phi_2$, and the corresponding minimum $P_{FA}^y$ is close to 0, while the minimum $P_{FA}^y$ for $\phi_1$ can only reach 0.1. From Figure 1b, as $\sigma$ increases, the maximum achievable $z^{fa}$ for $\phi_1$, $\phi_2$ and the randomization case gradually decrease. The maximum achievable $z^{fa}$ for $\phi_1$ and $\phi_2$ drops below zero when $\sigma > 0.63$, while the $z^{fa}$ obtained by allowing randomization between $\phi_1$ and $\phi_2$ is still greater than zero for $0.63 \le \sigma \le 0.71$. In other words, for $0.63 \le \sigma \le 0.71$, compared to the nonrandomization case where the noise enhanced phenomenon cannot occur, $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$ can still be realized by allowing randomization between $\phi_1$ and $\phi_2$.
From Figure 1a,b, it is clear that under the constraints $P_{FA}^y \le 0.3$ and $P_D^y \ge 0.7$, the maximum achievable $P_D^y$ for $\phi_1$ is greater than that for $\phi_2$ and the minimum achievable $P_{FA}^y$ for $\phi_2$ is smaller than that for $\phi_1$ when $\sigma \le 0.63$. In such cases, when randomization between detectors is not allowed, we can choose $\phi_1$ for a greater $P_D^y$ or select $\phi_2$ for a smaller $P_{FA}^y$. As illustrated in Figure 1c, the maximum $z$ for $\phi_1$ is equal to that for $\phi_2$. When $\sigma$ is close to 0, the maximum $z$ for the randomization case can reach 0.35, which is greater than the corresponding maximum $z^d$ and $z^{fa}$; in this case there exists $(\phi_\ell, v) \in M_1$ that attains the maximum $z_{\phi_\ell}(v)$ over the whole set $M$. As $\sigma$ increases, the number of elements in the set $M_1$ decreases. When $\sigma > 0.33$, $Q^z \subseteq M_4 \cup M_5$, and the maximum $z$ obtained in the randomization case is then equal to the maximum $z^d$ or $z^{fa}$ according to Corollary 3, i.e., $z_{opt} = \max(z_{opt}^d, z_{opt}^{fa})$.
As a comparison, Figure 2 and Figure 3 show the maximum achievable $z^d$, $z^{fa}$ and $z$ for $\phi_1$, $\phi_2$ and the randomization case for different values of $\sigma$ when $\alpha = 0.3$, $\beta = 0.6$ and when $\alpha = 0.2$, $\beta = 0.7$, respectively. Comparing Figure 1a and Figure 2a, both plot the $z^d$ corresponding to the maximum $P_D^y$ under the constraint $P_{FA}^y \le 0.3$, so the $z^d$ in the two figures show the same trend. In Figure 2b, the maximum $z^{fa}$ obtained for the randomization case equals that for $\phi_2$ when $\sigma < 0.59$. Compared to Figure 1b, when $\sigma$ is close to 0, the minimum $P_{FA}^y$ in Figure 2b for $\phi_2$ still remains zero, while the minimum $P_{FA}^y$ for $\phi_1$ decreases from 0.1 to 0.05 as the corresponding $z^{fa}$ increases from 0.2 to 0.25. Further, comparing the minimum $P_{FA}^y$ for $\phi_2$ when $\beta = 0.7$ and $\beta = 0.6$, they are equal when $\sigma < 0.20$, and the latter then becomes gradually greater than the former as $\sigma$ increases, which is consistent with Figure 1b and Figure 2b. From the definition of $z_{\phi_\ell}^d(v)$, i.e., $z_{\phi_\ell}^d(v) = P_{D,\phi_\ell}^y(v) - \beta$, a decrease of $\beta$ increases the value of $z_{\phi_\ell}^d(v)$, and some $z_{\phi_\ell}^d(v)$ may change from negative to positive. In other words, the decrease of $\beta$ changes the distribution of the detector and discrete vector pairs $(\phi_\ell, v)$ among $M_1 \sim M_6$. For any $\sigma$, some pairs $(\phi_\ell, v)$ that belonged to $M_6$ for $\beta = 0.7$ are reallocated to $M_2$ or $M_4$ when $\beta$ decreases to 0.6; in addition, some $(\phi_\ell, v) \in M_5$ for $\beta = 0.7$ may belong to $M_1$ or $M_3$ when $\beta = 0.6$. These new elements of $M_1 \sim M_5$ can be utilized to construct more available noise enhanced solutions and obtain a lower false-alarm probability. However, it should be noted that behind the improvement of $P_{FA}^y$ lies a decrease of the corresponding $P_D^y$.
Comparing Figure 3 and Figure 1, as $\alpha$ decreases from 0.3 to 0.2, some $z_{\phi_\ell}^{fa}(v)$ change from positive to negative, i.e., the distribution of the pairs $(\phi_\ell, v)$ changes. Consequently, for any $\sigma$, some pairs in $M_1$ may move to $M_2$ or $M_4$, pairs in $M_2$ may move to $M_4$, and pairs in $M_3 \cup M_5$ may move to $M_6$. As shown in Figure 3a, when $\sigma$ is close to 0, the maximum achievable $z^d$ for $\phi_1$, $\phi_2$ and the randomization case are 0.2, 0.15 and 0.25, and the corresponding maximum $P_D^y$ can reach 0.9, 0.85 and 0.95, respectively. As $\sigma$ increases, the maximum $z^d$ for $\phi_1$, $\phi_2$ and the randomization case decrease, with the maximum $z^d$ decreasing fastest for $\phi_1$ and slowest for $\phi_2$. Further, the maximum achievable $P_D^y$ for $\phi_2$ is greater than that for $\phi_1$ when $\sigma > 0.32$, and the difference between the maximum $z^d$ for $\phi_2$ and that for the randomization case becomes smaller and smaller as $\sigma$ increases. Comparing Figure 3b and Figure 1b, both plot the $z^{fa}$ corresponding to the minimum $P_{FA}^y$ under the constraint $P_D^y \ge 0.7$. In particular, $Q^z \subseteq M_4 \cup M_5$ for any $\sigma$, i.e., $z_{opt} = \max(z_{opt}^d, z_{opt}^{fa})$ according to Corollary 3. In addition, compared with Figure 1, under the two constraints $P_{FA}^y \le \alpha$ and $P_D^y \ge \beta$, the feasible ranges of $\sigma$ for $\phi_1$, $\phi_2$ and the randomization case become smaller.
In conclusion, as $\sigma$ decreases, the values of $z^d$, $z^{fa}$ and $z$ increase. This is mainly because the noise enhanced phenomenon generally occurs when the observation has a multimodal pdf, and the multimodal structure is more pronounced for a smaller $\sigma$ [21]. In order to investigate the simulation results of Figure 1, Figure 2 and Figure 3 further, Table 1, Table 2 and Table 3 present the optimal noise enhanced solutions that maximize $z^d$, $z^{fa}$ and $z$ for $\phi_1$, $\phi_2$ and the randomization case, respectively, for different $\sigma$ when $\alpha = 0.3$ and $\beta = 0.7$.
It is worth noting that the optimal noise enhanced solutions that maximize $z^d$, $z^{fa}$ and $z$, respectively, are not unique in many cases. Moreover, due to the symmetry of the detector, the noise modified detectability for $\phi_i$, $i = 1, 2$, obtained by adding $v = (v_1, v_2)^T$ is the same as that achieved by adding $v = (v_2, v_1)^T$. For this reason, only one noise enhanced solution is listed for the maximum $z^d$, as well as for the maximum $z^{fa}$ and $z$, for each $\sigma$. As shown in Table 1, Table 2 and Table 3, the optimal noise enhanced solutions that maximize $z^d$, $z^{fa}$ and $z$, respectively, are randomizations of at most two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets, which is consistent with Theorems 1–3.
Next, the noise enhanced solution for $\phi_1$ is taken as an example. When $\sigma = 0.01$, the maximum $z_{\phi_1}^d(v)$ obtained over $(\phi_1, v) \in M_{1,\phi_1}$ is equal to the maximum $z_{\phi_1}^d(v)$ obtained over $(\phi_1, v) \in M_{\phi_1}$. Through some calculations, $v = [2.5, 2.5]$ with $(\phi_1, v) \in M_{1,\phi_1}$ is one of the discrete vectors that maximize $z_{\phi_1}^d(v)$, so $v = [2.5, 2.5]$ is an optimal noise to maximize $z^d$ for $\phi_1$ when $\alpha = 0.3$ and $\beta = 0.7$. At the same time, the $z_{\phi_1}(v)$ obtained by $v = [2.5, 2.5]$ is also the maximum $z_{\phi_1}(v)$ for $\phi_1$; thus $v = [2.5, 2.5]$ is also an optimal noise to maximize $z$ for $\phi_1$. Besides, the maximum $z_{\phi_1}^{fa}(v)$ obtained from $M_{1,\phi_1}$ and $M_{3,\phi_1}$ is smaller than the maximum $z_{\phi_1}^{fa}(v)$ for $\phi_1$, so the maximum $z^{fa}$ under the two constraints $P_{FA}^y \le 0.3$ and $P_D^y \ge 0.7$ is obtained by the randomization of $v_1 = [2.5, 2.5]$ from $M_{1,\phi_1}$ and $v_2 = [-3.75, 2.5]$ from $M_{5,\phi_1}$ with probabilities 0.4 and 0.6, respectively. The case of $\sigma = 0.2$ is similar to that of $\sigma = 0.01$.
When $\sigma = 0.4$, both $M_{2,\phi_1}$ and $M_{3,\phi_1}$ are empty, and the maximum $z_{\phi_1}^d(v)$, $z_{\phi_1}^{fa}(v)$ and $z_{\phi_1}(v)$ for $\phi_1$ cannot be obtained by a discrete vector from $M_{1,\phi_1}$. Based on Theorems 1–3, the maximum $z^d$, $z^{fa}$ and $z$ can be achieved by the randomization of two detector and discrete vector pairs from $M_{4,\phi_1}$ and $M_{5,\phi_1}$. Further, the noise enhanced solutions for the maximum $z^d$ and $z^{fa}$ have the same additive noise components, i.e., $v_1$ and $v_2$, but with different probabilities. Moreover, according to Corollary 3(b), $z_{opt} = \max(z_{opt}^d, z_{opt}^{fa})$. When randomization between $\phi_1$ and $\phi_2$ is allowed, the $z_{\phi_1}(v_1)$ obtained by $(\phi_1, v_1 = [2.6, 2.5]) \in M_4$ is equal to the $z_{\phi_2}(v_2)$ obtained by $(\phi_2, v_2 = [-3.6, -3.5]) \in M_5$, and it is the maximum $z_{\phi_\ell}(v)$ over $M$. According to Corollary 3(c), the maximum $z^d$, $z^{fa}$ and $z$ can then all be obtained by the randomization of $(\phi_1, v_1)$ and $(\phi_2, v_2)$ with different probabilities. In particular, the probability $\lambda$ of $(\phi_1, v_1)$ for the maximum $z^d$ or $z^{fa}$ is unique, while the probability $\lambda$ of $(\phi_1, v_1)$ for the maximum $z$ can be chosen within a certain interval. When $\sigma = 0.65$, no noise enhanced solution exists to meet the two constraints in the nonrandomization cases of Table 1 and Table 2, while noise enhanced solutions that improve the detectability under the same constraints still exist when randomization between $\phi_1$ and $\phi_2$ is allowed, as shown in Table 3; the corresponding solutions are also obtained according to Corollary 3(c).
In order to discuss the effect of the decision threshold $\gamma$ on the detection and false-alarm probabilities, the proposed noise enhanced method is applied for different values of $\gamma$. Further, the relationships between the maximum achievable detection probability and $\beta$, and between the minimum achievable false-alarm probability and $\alpha$, are explored for different $\gamma$. The results of the original detector and the noise enhanced detector for different $\gamma$ are given in Figure 4, Figure 5 and Figure 6.
Figure 4 gives the original detection and false-alarm probabilities for different $\gamma$ when $K = 20$. From Figure 4, both the original detection and false-alarm probabilities decrease with the increase of $\gamma$, and the value of the original detection probability is close to that of the original false-alarm probability for any $\gamma$. In other words, a better detection probability is obtained for a smaller $\gamma$ and a lower false-alarm probability is achieved for a greater $\gamma$. Figure 5 plots the maximum achievable $P_D^y$ obtained by adding noise as a function of $\alpha$ for $\gamma = 0.05, 0.25, 0.5, 0.75$ and $0.95$, and for the case of allowing randomization between thresholds, when $\mu = 3$, $A = 1$, $\sigma = 1$ and $K = 20$. Comparing Figure 5 with Figure 4, the detection probabilities for $\gamma = 0.95$ and $0.75$ can be increased significantly by adding suitable noise while keeping a lower false-alarm probability. Figure 6 presents the minimum achievable $P_{FA}^y$ obtained by adding noise as a function of $\beta$ for $\gamma = 0.05, 0.25, 0.5, 0.75$ and $0.95$, and for the case of allowing randomization between thresholds, with the same parameters. Comparing Figure 6 with Figure 4, the false-alarm probabilities for $\gamma = 0.05, 0.25$ and $0.5$ can be decreased significantly by adding suitable noise while keeping a higher detection probability. As shown in Figure 5 and Figure 6, for $\alpha \in (0.055, 0.725)$ and $\beta \in (0.405, 0.975)$, the detector with $\gamma = 0.5$ shows the worst performance compared to the others; thus, in such cases, $\gamma = 0.5$ is not a suitable threshold. From Figure 5 and Figure 6, different detection performance can be realized by adding noise; that is, various noise enhanced solutions can be provided by our method to satisfy different performance requirements. As a result, for any decision threshold $\gamma$, we can determine whether the performance of the detector can be improved or not, and search for a noise enhanced solution that realizes the improvement according to the method proposed in this paper.
It is worth noting that the method proposed in this paper places no restriction on the detector; whether the detection performance can be improved by adding noise depends only on the detector itself. The algorithms proposed in this paper not only provide ways to verify improvability or nonimprovability, but also show how to search for the optimal noise enhanced solutions. For any detector, whether an optimal Neyman–Pearson (Bayesian, Minimax) detector or another suboptimal detector, we first calculate all the values of $(P_{D,\phi_\ell}(v), P_{FA,\phi_\ell}(v))$ obtained by every detector and discrete vector pair $(\phi_\ell, v)$. Then each pair $(\phi_\ell, v)$ is assigned to one of the six sets according to the values of $z_{\phi_\ell}^d(v)$ and $z_{\phi_\ell}^{fa}(v)$, where $z_{\phi_\ell}^d(v) = P_{D,\phi_\ell}^y(v) - \beta$ and $z_{\phi_\ell}^{fa}(v) = \alpha - P_{FA,\phi_\ell}^y(v)$. If there exist detector and discrete vector pairs satisfying the sufficient conditions given in Section 3.4, noise enhanced solutions to maximize $z^d$, $z^{fa}$ and $z$ can be obtained according to Section 3.1, Section 3.2 and Section 3.3, respectively, on the premise that $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$. Otherwise, no noise enhanced solution exists that satisfies $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ simultaneously.

5. Conclusions

In this paper, a framework consisting of six mutually disjoint sets is established according to two inequality constraints on the detection and false-alarm probabilities, and the maximization of $z^d$, $z^{fa}$ and $z$ is carried out based on this framework. Theorems 1–3 give the forms of the optimal noise enhanced solutions that maximize $z^d$, $z^{fa}$ and $z$, and the calculations of the maximum $z^d$, $z^{fa}$ and $z$ are presented in Corollaries 1, 2 and 3, respectively. In particular, the maximum $z$ equals the maximum $z^d$ or $z^{fa}$ under certain conditions according to Corollary 3. Furthermore, sufficient conditions for $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ are given in Theorem 4.
Compared with the method proposed in [16], which focuses only on the maximization of $z^d$ with a constant false-alarm rate (CFAR), both methods require all the values of $(P_{D,\phi_\ell}(v), P_{FA,\phi_\ell}(v))$ obtained by every detector and discrete vector pair $(\phi_\ell, v)$, but our method may use this information more effectively, thanks to the division into six sets, to realize an overall performance improvement or to decrease the false-alarm probability.
Furthermore, the theoretical results in this paper can be extended to a broad class of noise enhanced optimization problems subject to two inequality constraints, such as the minimization of the Bayes risk under a constraint on the conditional risk in a binary hypothesis testing problem, or the minimization of linear combinations of error probabilities under constraints on the type I and type II error probabilities, as discussed in [28].

Acknowledgments

This research is supported by the Fundamental Research Funds for the Central Universities (Grant No. CDJZR11160003 and No. 106112015CDJZR165510) and the National Natural Science Foundation of China (Grant No. 41404027, No. 61471072, No. 61301224, No. 61103212, No. 61471073 and No. 61108086).

Author Contributions

Shujun Liu raised the idea of the new framework to solve different noise enhanced signal detection optimal problems. Ting Yang and Shujun Liu contributed to the drafting of the manuscript, interpretation of the results, some experimental design and checked the manuscript. Mingchun Tang and Kui Zhang designed the experiment of the maximum achievable P D y and P F A y for different cases. Ting Yang and Xinzheng Zhang contributed to the proofs of the theories developed in this paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

A.1. Proofs of Theorems and Corollaries

Theorem A1. 
$z_{opt}^d$ can be achieved by the randomization of at most two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets, i.e., $(\phi_1, v_1) \in M_\tau$, $(\phi_2, v_2) \in M_j$, $\tau, j = 1, \ldots, 6$ and $\tau \ne j$.
Proof. 
Combining Equation (19) and the definitions of $M_1 \sim M_6$, if there exists a randomization of two detector and discrete vector pairs from two different sets that makes $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$ hold, then the two constraints can also be satisfied by a randomization consisting of one or more such two-pair randomizations with corresponding probabilities. Otherwise, no noise enhanced solution exists that meets $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$.
Obviously, under the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$, the $z^d$ obtained by any randomization of detector and discrete vector pairs cannot be greater than the maximum $z^d$ obtained by a randomization of two detector and discrete vector pairs $(\phi_1, v_1)$ and $(\phi_2, v_2)$ from two different sets with probabilities $\xi$ and $1 - \xi$, where $0 \le \xi \le 1$. In particular, $\xi = 0$ and $\xi = 1$ represent the cases where the maximum $z^d$ under the two constraints is obtained by a single detector and discrete vector pair. Namely, under the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$, the maximum $z^d$ can be obtained by the randomization of at most two detector and discrete vector pairs from two different sets. □
Corollary A1. 
(a) If there exists at least one pair $(\phi_o, v_o)$ of $Q^d$ which belongs to $M_1 \cup M_2$, $z_{opt}^d$ can be obtained by selecting $(\phi_o, v_o)$ and $z_{opt}^d = z_m^d$. (b) When $Q^d \subseteq M_4$, the $z^{fa}$ corresponding to $z_{opt}^d$ is zero. (c) When $Q^d \subseteq M_4$, $z_{opt}^d$ is obtained either by the randomization of $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2)$ from $M_1$, $M_3$ or $M_5$ with the respective probabilities $\xi = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$ and $1 - \xi$, or by the single detector and discrete vector pair $(\phi_o, v_o) = \arg\max_{(\phi_\ell, v) \in M_2} z_{\phi_\ell}^d(v)$.
Proof. 
Part (a): According to the definition of $Q^d$, the $z^d$ obtained by each pair $(\phi_\ell, v) \in Q^d$ is equal to $z_m^d$. Since $z_m^d > 0$, the pairs of $Q^d$ can only come from $M_1$, $M_2$ or $M_4$. If there exists at least one pair $(\phi_o, v_o)$ of $Q^d$ which belongs to $M_1 \cup M_2$, it is obvious that $z_{opt}^d = z_m^d$ can be obtained by choosing $(\phi_o, v_o)$.
Part (b): A proof by contradiction is used here. When $Q^d \subseteq M_4$, suppose that the $z^{fa}$ corresponding to $z_{opt}^d$ is greater than zero and denote it by $z_1^{fa} > 0$. From the definition of $Q^d$, for any pair $(\phi_d, v_d) \in Q^d$ we have $z_{\phi_d}^{fa}(v_d) < 0$ and $z_{\phi_d}^{d}(v_d) = z_m^d > z_{opt}^d$, since $z_m^d$ is defined without the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$. Then the $z^{fa}$ and $z^d$ obtained by the randomization of $(\phi_d, v_d)$ and the optimal noise enhanced solution for $z_{opt}^d$, with respective probabilities $\xi = \frac{z_1^{fa}}{z_1^{fa} - z_{\phi_d}^{fa}(v_d)}$ and $1 - \xi$, can be calculated as follows:
$$ z^{fa} = \xi\, z_{\phi_d}^{fa}(v_d) + (1 - \xi)\, z_1^{fa} = \frac{z_1^{fa}\, z_{\phi_d}^{fa}(v_d)}{z_1^{fa} - z_{\phi_d}^{fa}(v_d)} + \frac{z_{\phi_d}^{fa}(v_d)\, z_1^{fa}}{z_{\phi_d}^{fa}(v_d) - z_1^{fa}} = 0 \qquad (A1) $$
$$ z^d = \xi\, z_{\phi_d}^{d}(v_d) + (1 - \xi)\, z_{opt}^d = \frac{z_1^{fa}}{z_1^{fa} - z_{\phi_d}^{fa}(v_d)}\, z_m^d + \frac{z_{\phi_d}^{fa}(v_d)}{z_{\phi_d}^{fa}(v_d) - z_1^{fa}}\, z_{opt}^d > z_{opt}^d \qquad (A2) $$
because $z_m^d > z_{opt}^d$. Obviously, $z^d > z_{opt}^d$ conflicts with the definition of $z_{opt}^d$, so the assumption $z_1^{fa} > 0$ cannot hold. Since the constraint $P_{FA}^y \le \alpha$ requires $z^{fa} \ge 0$, the $z^{fa}$ corresponding to $z_{opt}^d$ is zero when $Q^d \subseteq M_4$.
Part (c): First, according to Theorem 1, $z_{opt}^d$ is obtained by the randomization of at most two detector and discrete vector pairs from two different sets. Second, according to conclusion (b) of Corollary 1, the $z^{fa}$ corresponding to $z_{opt}^d$ is zero. Combined with the definitions of $M_1 \sim M_6$, $z^{fa} = 0$ can only be realized by a detector and discrete vector pair from $M_2$, or by a randomization of $(\phi_1, v_1)$ from $M_4$ and $(\phi_2, v_2)$ from $M_1$, $M_3$ or $M_5$ with respective probabilities $\xi$ and $1 - \xi$, and vice versa. Further, $z^{fa} = 0$ means that $z^{fa} = \xi\, z_{\phi_1}^{fa}(v_1) + (1 - \xi)\, z_{\phi_2}^{fa}(v_2) = 0$, i.e., $\xi = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$. If the maximum $z^d$ obtained by such a randomization is greater than the maximum $z^d$ obtained by a pair from $M_2$, the optimal noise enhanced solution to maximize $z^d$ is the corresponding randomization; otherwise, it is the detector and discrete vector pair $(\phi_o, v_o) = \arg\max_{(\phi_\ell, v) \in M_2} z_{\phi_\ell}^d(v)$. □
Corollary A3. 
(a) If there exists at least one pair $(\phi_o, v_o)$ of $Q^z$ that belongs to $M_1 \cup M_2 \cup M_3$, the maximum $z$ can be realized by choosing $(\phi_o, v_o)$ and $z_{opt} = z_m$. (b) If $Q^z \subseteq M_4 \cup M_5$, $z_{opt} = \max(z_{opt}^d, z_{opt}^{fa})$. (c) If $Q^z \cap M_4 \ne \emptyset$ and $Q^z \cap M_5 \ne \emptyset$, we have $z_{opt}^d = z_{opt}^{fa} = z_{opt} = z_m$.
Proof. 
Part (a): According to the definition of $Q^z$, the $z$ obtained by each pair $(\phi_\ell, v) \in Q^z$ is equal to $z_m$. Since $z_m > 0$, the pairs of $Q^z$ can only come from $M_1 \sim M_5$. If there exists at least one pair $(\phi_o, v_o)$ of $Q^z$ which belongs to $M_1 \cup M_2 \cup M_3$, it is obvious that $z_{opt} = z_m$ can be achieved by choosing $(\phi_o, v_o)$.
Part (b): When $Q^z \subseteq M_4 \cup M_5$, the maximum $z$ is obtained by the randomization of $(\phi_1, v_1)$ and $(\phi_2, v_2)$ with probabilities $\lambda$ and $1 - \lambda$, so:
$$ z = \lambda\, z_{\phi_1}(v_1) + (1 - \lambda)\, z_{\phi_2}(v_2) \qquad (A3) $$
In order to satisfy the two constraints $P_D^y \ge \beta$ and $P_{FA}^y \le \alpha$, $(\phi_1, v_1)$ and $(\phi_2, v_2)$ need to meet the following inequalities:
$$ z^d = \lambda\, z_{\phi_1}^{d}(v_1) + (1 - \lambda)\, z_{\phi_2}^{d}(v_2) \ge 0 \qquad (A4) $$
$$ z^{fa} = \lambda\, z_{\phi_1}^{fa}(v_1) + (1 - \lambda)\, z_{\phi_2}^{fa}(v_2) \ge 0 \qquad (A5) $$
Accordingly, $\lambda = \lambda_1 = \frac{z_{\phi_2}^{d}(v_2)}{z_{\phi_2}^{d}(v_2) - z_{\phi_1}^{d}(v_1)}$ when $z^d = 0$, and $\lambda = \lambda_2 = \frac{z_{\phi_2}^{fa}(v_2)}{z_{\phi_2}^{fa}(v_2) - z_{\phi_1}^{fa}(v_1)}$ when $z^{fa} = 0$.
Since $Q^z \subseteq M_4 \cup M_5$, at least one of $(\phi_1, v_1)$ and $(\phi_2, v_2)$ belongs to $M_4 \cup M_5$. If $(\phi_1, v_1) \in M_4$, then $(\phi_2, v_2)$ can be selected only from $M_1$, $M_3$ or $M_5$.
When $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2) \in M_1 \cup M_3$, the randomization is beneficial if and only if $z_{\phi_1}(v_1) \ge z_{\phi_2}(v_2)$. From the definitions of $M_1$ and $M_3$, Equation (A4) holds for any $0 \le \lambda \le 1$, and Equation (A5) holds only for $0 \le \lambda \le \lambda_2$. So the maximum $z$ is obtained when $\lambda = \lambda_2$, since $z_{\phi_1}(v_1) \ge z_{\phi_2}(v_2)$.
When $(\phi_1, v_1) \in M_4$ and $(\phi_2, v_2) \in M_5$, Equations (A4) and (A5) can hold at the same time if and only if $\lambda_1 < \lambda_2$. If $z_{\phi_1}(v_1) \ge z_{\phi_2}(v_2)$, $z$ attains its maximum when $\lambda = \lambda_2$; otherwise $z$ attains its maximum when $\lambda = \lambda_1$.
When $(\phi_1, v_1) \in M_1 \cup M_2$ and $(\phi_2, v_2) \in M_5$, suppose $z_{\phi_1}(v_1) \le z_{\phi_2}(v_2)$; based on the definitions of $M_1$ and $M_2$, Equation (A4) holds for $\lambda_1 \le \lambda \le 1$ and Equation (A5) holds for any $0 \le \lambda \le 1$. So the maximum $z$ is reached when $\lambda = \lambda_1$.
If Q z ∩ M 4 ≠ ∅ , then z m < z m d and Q d ⊆ M 4 . According to Corollary 1, z o p t d is obtained when λ = λ 2 and the corresponding z f a = 0 . If Q z ∩ M 5 ≠ ∅ , then z m < z m f a and Q f a ⊆ M 5 . According to Corollary 2, z o p t f a is obtained when λ = λ 1 and the corresponding z d = 0 . As a result, the maximum z is equal to the maximum z d or the maximum z f a , i.e., z o p t = max ( z o p t d , z o p t f a ) .
Part (c): When Q z ∩ M 4 ≠ ∅ and Q z ∩ M 5 ≠ ∅ , let ( ϕ 1 , v 1 ) ∈ Q z ∩ M 4 and ( ϕ 2 , v 2 ) ∈ Q z ∩ M 5 . Then we have z ϕ 1 ( v 1 ) = z ϕ 2 ( v 2 ) = z m . As z m > 0 , | z ϕ 1 d ( v 1 ) | > | z ϕ 1 f a ( v 1 ) | and | z ϕ 2 d ( v 2 ) | < | z ϕ 2 f a ( v 2 ) | . Some simple calculations show that λ 1 < λ 2 , which means that P D y ≥ β and P F A y ≤ α can be satisfied by the randomization of ( ϕ 1 , v 1 ) and ( ϕ 2 , v 2 ) with the respective probabilities λ and 1 − λ for any λ ∈ [ λ 1 , λ 2 ] . Accordingly, we always have z o p t = λ z ϕ 1 ( v 1 ) + ( 1 − λ ) z ϕ 2 ( v 2 ) = z m for any λ ∈ [ λ 1 , λ 2 ] . Furthermore, z d and z f a reach their maximums when λ = λ 2 and λ = λ 1 , respectively, and z o p t d = z o p t f a = z o p t = z m . □
Theorem 4. 
(a) If M 1 ∪ M 2 ∪ M 3 ≠ ∅ , any pair ( ϕ , v ) ∈ M 1 ∪ M 2 ∪ M 3 can meet P D y ≥ β and P F A y ≤ α ; (b) When M 1 ∪ M 2 ∪ M 3 = ∅ , if there exist ( ϕ 1 , v 1 ) ∈ M 4 and ( ϕ 2 , v 2 ) ∈ M 5 such that
| z ϕ 1 f a ( v 1 ) | · | z ϕ 2 d ( v 2 ) | < | z ϕ 1 d ( v 1 ) | · | z ϕ 2 f a ( v 2 ) |
then P D y ≥ β and P F A y ≤ α can be realized by the randomization of ( ϕ 1 , v 1 ) and ( ϕ 2 , v 2 ) ; otherwise, there exists no noise enhanced solution that makes P D y ≥ β and P F A y ≤ α hold.
Proof. 
Part (a): If M 1 ∪ M 2 ∪ M 3 ≠ ∅ , then from the definitions of M 1 – M 3 , z ϕ d ( v ) ≥ 0 and z ϕ f a ( v ) ≥ 0 for any ( ϕ , v ) ∈ M 1 ∪ M 2 ∪ M 3 . Accordingly, the corresponding P D y = β + z ϕ d ( v ) ≥ β and P F A y = α − z ϕ f a ( v ) ≤ α .
Part (b): When M 1 ∪ M 2 ∪ M 3 = ∅ , if the P D y and P F A y obtained by the randomization of ( ϕ 1 , v 1 ) ∈ M 4 and ( ϕ 2 , v 2 ) ∈ M 5 are to meet the two constraints P D y ≥ β and P F A y ≤ α , the two inequalities (A3) and (A4) should be satisfied simultaneously. Further, Equations (A3) and (A4) hold at the same time if and only if λ 1 < λ 2 . Since z ϕ 1 d ( v 1 ) > 0 and z ϕ 1 f a ( v 1 ) < 0 for any ( ϕ 1 , v 1 ) ∈ M 4 , and z ϕ 2 d ( v 2 ) < 0 and z ϕ 2 f a ( v 2 ) > 0 for any ( ϕ 2 , v 2 ) ∈ M 5 , then:
λ 1 = | z ϕ 2 d ( v 2 ) | / ( | z ϕ 1 d ( v 1 ) | + | z ϕ 2 d ( v 2 ) | ) < λ 2 = | z ϕ 2 f a ( v 2 ) | / ( | z ϕ 1 f a ( v 1 ) | + | z ϕ 2 f a ( v 2 ) | )
which is equivalent to
| z ϕ 2 d ( v 2 ) | ( | z ϕ 1 f a ( v 1 ) | + | z ϕ 2 f a ( v 2 ) | ) < | z ϕ 2 f a ( v 2 ) | ( | z ϕ 1 d ( v 1 ) | + | z ϕ 2 d ( v 2 ) | )
i.e.,
| z ϕ 1 f a ( v 1 ) | · | z ϕ 2 d ( v 2 ) | < | z ϕ 1 d ( v 1 ) | · | z ϕ 2 f a ( v 2 ) |
which is exactly the condition in part (b). □
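As a quick numerical illustration of part (b), the following Python sketch plugs in hypothetical values of z ϕ d and z ϕ f a for one pair in M 4 and one pair in M 5 (all numbers are assumptions for illustration, not values from this paper), checks the condition above, and verifies that every λ in [ λ 1 , λ 2 ] keeps both constraints satisfied.

```python
# Hypothetical improvements for (phi_1, v_1) in M_4 (z^d > 0, z^fa < 0)
# and (phi_2, v_2) in M_5 (z^d < 0, z^fa > 0); values are illustrative only.
z1_d, z1_fa = 0.10, -0.04
z2_d, z2_fa = -0.02, 0.06

# Feasibility condition of Theorem 4(b): |z1_fa|*|z2_d| < |z1_d|*|z2_fa|
feasible = abs(z1_fa) * abs(z2_d) < abs(z1_d) * abs(z2_fa)

# Randomization weights at which z^d = 0 (lambda_1) and z^fa = 0 (lambda_2)
lam1 = abs(z2_d) / (abs(z1_d) + abs(z2_d))
lam2 = abs(z2_fa) / (abs(z1_fa) + abs(z2_fa))
print(f"feasible: {feasible}, lambda_1 = {lam1:.4f}, lambda_2 = {lam2:.4f}")

# Any lambda in [lambda_1, lambda_2] satisfies both constraints:
for lam in (lam1, 0.5 * (lam1 + lam2), lam2):
    z_d = lam * z1_d + (1 - lam) * z2_d     # >= 0  <=>  P_D^y  >= beta
    z_fa = lam * z1_fa + (1 - lam) * z2_fa  # >= 0  <=>  P_FA^y <= alpha
    print(f"lambda = {lam:.4f}: z_d = {z_d:+.4f}, z_fa = {z_fa:+.4f}")
```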

Appendix B

Algorithm B1. Optimal noise solution to maximize zd.
   d 1 = max { z ϕ d ( v ) + z ϕ f a ( v ) : ( ϕ , v ) ∈ M 4 } ;  d t = d 1 ;
   d 2 = max { z ϕ d ( v ) + z ϕ f a ( v ) : ( ϕ , v ) ∈ M 1 ∪ M 3 ∪ M 5 } ;  d p = d 2 ;
 while | d 2 − d 1 | > ε
   d c = ( d t + d p ) / 2 ;  d 1 = d c ;
   k = max { ( d c − z ϕ d ( v ) ) / z ϕ f a ( v ) : ( ϕ , v ) ∈ M 4 } ;
   d 2 = max { z ϕ d ( v ) + k · z ϕ f a ( v ) : ( ϕ , v ) ∈ M 1 ∪ M 3 ∪ M 5 } ;
   if d 2 > d 1
     d p = min ( d 2 , max ( d t , d p ) ) ;
   else
     d p = max ( d 2 , min ( d t , d p ) ) ;
   end
   d t = d c ;
 end
 ( ϕ 1 , v 1 ) = arg max ( ϕ , v ) ∈ M 4 ( z ϕ d ( v ) + k · z ϕ f a ( v ) ) ;  ( ϕ 2 , v 2 ) = arg max ( ϕ , v ) ∈ M 1 ∪ M 3 ∪ M 5 ( z ϕ d ( v ) + k · z ϕ f a ( v ) ) .
It is noted that the parameter ε in Algorithms B1 and B2 is a small positive constant, which is used to control the accuracy of the algorithms.
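For readers who wish to experiment with Algorithm B1, a minimal Python sketch is given below, under the assumption that the candidate pairs in M 4 and in M 1 ∪ M 3 ∪ M 5 are available as finite lists of ( z ϕ d ( v ) , z ϕ f a ( v ) ) values; the list contents in the example call are made up purely for illustration.

```python
# Minimal sketch of Algorithm B1: each set is represented as a finite list of
# (z_d, z_fa) values; the returned entries are the two pairs to be randomized.
def algorithm_b1(M4, M135, eps=1e-6):
    d1 = max(zd + zfa for zd, zfa in M4)
    dt = d1
    d2 = max(zd + zfa for zd, zfa in M135)
    dp = d2
    k = 1.0
    while abs(d2 - d1) > eps:
        dc = (dt + dp) / 2.0
        d1 = dc
        k = max((dc - zd) / zfa for zd, zfa in M4)    # k as in Algorithm B1
        d2 = max(zd + k * zfa for zd, zfa in M135)
        if d2 > d1:
            dp = min(d2, max(dt, dp))
        else:
            dp = max(d2, min(dt, dp))
        dt = dc
    pair1 = max(M4, key=lambda p: p[0] + k * p[1])    # plays the role of (phi_1, v_1)
    pair2 = max(M135, key=lambda p: p[0] + k * p[1])  # plays the role of (phi_2, v_2)
    return pair1, pair2, k

# Example call with made-up (z_d, z_fa) values:
M4 = [(0.10, -0.04), (0.08, -0.01)]
M135 = [(0.03, 0.02), (0.01, 0.05)]
print(algorithm_b1(M4, M135))
```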
Algorithm B2. Optimal noise solution to maximize zfa.
   f 1 = max { z ϕ f a ( v ) + z ϕ d ( v ) : ( ϕ , v ) ∈ M 1 ∪ M 2 ∪ M 4 } ;  f t = f 1 ;
   f 2 = max { z ϕ f a ( v ) + z ϕ d ( v ) : ( ϕ , v ) ∈ M 5 } ;  f p = f 2 ;
 while | f 2 − f 1 | > ε
   f c = ( f t + f p ) / 2 ;  f 1 = f c ;
   t = min { ( f c − z ϕ f a ( v ) ) / z ϕ d ( v ) : ( ϕ , v ) ∈ M 1 ∪ M 2 ∪ M 4 } ;
   f 2 = max { z ϕ f a ( v ) + t · z ϕ d ( v ) : ( ϕ , v ) ∈ M 5 } ;
   if f 2 > f 1
     f p = min ( f 2 , max ( f t , f p ) ) ;
   else
     f p = max ( f 2 , min ( f t , f p ) ) ;
   end
   f t = f c ;
 end
 ( ϕ 1 , v 1 ) = arg max ( ϕ , v ) ∈ M 1 ∪ M 2 ∪ M 4 ( z ϕ f a ( v ) + t · z ϕ d ( v ) ) ;  ( ϕ 2 , v 2 ) = arg max ( ϕ , v ) ∈ M 5 ( z ϕ f a ( v ) + t · z ϕ d ( v ) ) .
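Algorithm B2 mirrors Algorithm B1 with the roles of z d and z f a exchanged. A matching Python sketch, under the same finite-list assumption as above (the inputs are again illustrative, not values from this paper), is:

```python
# Minimal sketch of Algorithm B2: M124 lists (z_fa, z_d) values for pairs in
# M_1 ∪ M_2 ∪ M_4, and M5 lists them for pairs in M_5.
def algorithm_b2(M124, M5, eps=1e-6):
    f1 = max(zfa + zd for zfa, zd in M124)
    ft = f1
    f2 = max(zfa + zd for zfa, zd in M5)
    fp = f2
    t = 1.0
    while abs(f2 - f1) > eps:
        fc = (ft + fp) / 2.0
        f1 = fc
        t = min((fc - zfa) / zd for zfa, zd in M124)  # t as in Algorithm B2
        f2 = max(zfa + t * zd for zfa, zd in M5)
        if f2 > f1:
            fp = min(f2, max(ft, fp))
        else:
            fp = max(f2, min(ft, fp))
        ft = fc
    pair1 = max(M124, key=lambda p: p[0] + t * p[1])  # plays the role of (phi_1, v_1)
    pair2 = max(M5, key=lambda p: p[0] + t * p[1])    # plays the role of (phi_2, v_2)
    return pair1, pair2, t
```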

References

1. Benzi, R.; Sutera, A.; Vulpiani, A. The mechanism of stochastic resonance. J. Phys. A Math. Gen. 1981, 14, L453–L457.
2. Gammaitoni, L.; Hanggi, P.; Jung, P.; Marchesoni, F. Stochastic resonance. Rev. Mod. Phys. 1998, 70, 223–287.
3. Xu, B.; Li, J.; Zheng, J. How to tune the system parameters to realize stochastic resonance. J. Phys. A Math. Gen. 2003, 36, 11969–11980.
4. Rousseau, D.; Chapeau-Blondeau, F. Stochastic resonance and improvement by noise in optimal detection strategies. Digit. Signal Process. 2005, 15, 19–32.
5. Patel, A.; Kosko, B. Noise benefits in quantizer-array correlation detection and watermark decoding. IEEE Trans. Signal Process. 2011, 59, 488–505.
6. Chen, H.; Varshney, L.R.; Varshney, P.K. Noise-enhanced information systems. Proc. IEEE 2014, 102, 1607–1621.
7. Chapeau-Blondeau, F. Input-output gains for signal in noise in stochastic resonance. Phys. Lett. A 1997, 232, 41–48.
8. Chapeau-Blondeau, F. Periodic and aperiodic stochastic resonance with output signal-to-noise ratio exceeding that at the input. Int. J. Bifurc. Chaos 1999, 9, 267–272.
9. Gingl, Z.; Makra, P.; Vajtai, R. High signal-to-noise ratio gain by stochastic resonance in a double well. Fluct. Noise Lett. 2001, 1, L181–L188.
10. Makra, P.; Gingl, Z. Signal-to-noise ratio gain in non-dynamical and dynamical bistable stochastic resonators. Fluct. Noise Lett. 2002, 2, L145–L153.
11. Makra, P.; Gingl, Z.; Fulei, T. Signal-to-noise ratio gain in stochastic resonators driven by coloured noises. Phys. Lett. A 2003, 317, 228–232.
12. Stocks, N.G. Suprathreshold stochastic resonance in multilevel threshold systems. Phys. Rev. Lett. 2000, 84, 2310–2313.
13. Mitaim, S.; Kosko, B. Adaptive stochastic resonance in noisy neurons based on mutual information. IEEE Trans. Neural Netw. 2004, 15, 1526–1540.
14. Kay, S. Can detectability be improved by adding noise? IEEE Signal Process. Lett. 2000, 7, 8–10.
15. Chen, H.; Varshney, P.K.; Kay, S.M.; Michels, J.H. Theory of the stochastic resonance effect in signal detection: Part I—Fixed detectors. IEEE Trans. Signal Process. 2007, 55, 3172–3184.
16. Chen, H.; Varshney, P.K. Theory of the stochastic resonance effect in signal detection: Part II—Variable detectors. IEEE Trans. Signal Process. 2008, 56, 5031–5041.
17. Chen, H.; Varshney, P.K.; Kay, S.; Michels, J.H. Noise enhanced nonparametric detection. IEEE Trans. Inf. Theory 2009, 55, 499–506.
18. Patel, A.; Kosko, B. Optimal noise benefits in Neyman–Pearson and inequality-constrained signal detection. IEEE Trans. Signal Process. 2009, 57, 1655–1669.
19. Bayram, S.; Gezici, S. Stochastic resonance in binary composite hypothesis-testing problems in the Neyman–Pearson framework. Digit. Signal Process. 2012, 22, 391–406.
20. Bayram, S.; Gultekin, S.; Gezici, S. Noise enhanced hypothesis-testing according to restricted Neyman–Pearson criterion. Digit. Signal Process. 2014, 25, 17–27.
21. Bayram, S.; Gezici, S.; Poor, H.V. Noise enhanced hypothesis-testing in the restricted Bayesian framework. IEEE Trans. Signal Process. 2010, 58, 3972–3989.
22. Bayram, S.; Gezici, S. Noise enhanced M-ary composite hypothesis-testing in the presence of partial prior information. IEEE Trans. Signal Process. 2011, 59, 1292–1297.
23. Kay, S.; Michels, J.H.; Chen, H.; Varshney, P.K. Reducing probability of decision error using stochastic resonance. IEEE Signal Process. Lett. 2006, 13, 695–698.
24. Bayram, S.; Gezici, S. Noise-enhanced M-ary hypothesis-testing in the minimax framework. In Proceedings of the 3rd International Conference on Signal Processing and Communication Systems (ICSPCS), Omaha, NE, USA, 28–30 September 2009.
25. Liu, S.J.; Yang, T.; Zhang, X.Z.; Hu, X.P.; Xu, L.P. Noise enhanced binary hypothesis-testing in a new framework. Digit. Signal Process. 2015, 41, 22–31.
26. Choudhary, A.; Kohar, V.; Sinha, S. Noise enhanced activity in a complex network. Eur. Phys. J. B 2014, 87, 1–8.
27. Kohar, V.; Sinha, S. Noise-assisted morphing of memory and logic function. Phys. Lett. A 2012, 376, 957–962.
28. Pericchi, L.; Pereira, C. Adaptative significance levels using optimal decision rules: Balancing by weighting the error probabilities. Braz. J. Probab. Stat. 2016, 30, 70–90.
Figure 1. The maximum achievable z d , z f a and z for ϕ 1 , ϕ 2 and the case of allowing the randomization between ϕ 1 and ϕ 2 for different values of σ when α = 0.3 and β = 0.7 . Plotted in (a), (b) and (c), respectively.
Figure 2. The maximum achievable z d , z f a and z for ϕ 1 , ϕ 2 and the case of allowing the randomization between ϕ 1 and ϕ 2 for different values of σ when α = 0.3 and β = 0.6 . Plotted in (a), (b) and (c), respectively.
Figure 3. The maximum achievable z d , z f a and z for ϕ 1 , ϕ 2 and the case of allowing the randomization between ϕ 1 and ϕ 2 for different values of σ when α = 0.2 and β = 0.7 . Plotted in (a), (b) and (c), respectively.
Figure 4. P D x and P F A x for different γ when μ = 3 , A = 1 , σ = 1 and K = 20 .
Figure 5. The maximum achievable P D y as a function of α for γ = 0.05 , 0.25 , 0.5 , 0.75 and 0.95 , and the case of allowing the randomization between thresholds, when μ = 3 , A = 1 , σ = 1 and K = 20 .
Figure 6. The minimum achievable P F A y as a function of β for γ = 0.05 , 0.25 , 0.5 , 0.75 and 0.95 , and the case of allowing the randomization between thresholds, when μ = 3 , A = 1 , σ = 1 and K = 20 .
Table 1. The optimal additive noises to maximize z d , z f a and z for various values of σ for ϕ 1 when α = 0.3 and β = 0.7 . Each entry lists v 1 / v 2 / λ .

| σ | max z d | max z f a | max z |
| 0.01 | [2.5, 2.5]/-/1 | [2.5, 2.5]/[−3.75, 2.5]/0.4 | [2.5, 2.5]/-/1 |
| 0.2 | [2.7, 2.75]/-/1 | [2.5, 2.5]/[2.7, −3.5]/0.4088 | [2.55, 2.5]/-/1 |
| 0.4 | [2.5, 2.5]/[−3.5, 2.75]/0.9794 | [2.5, 2.5]/[−3.5, 2.75]/0.5684 | [2.5, 2.5]/[−3.5, 2.75]/0.9794 |
| 0.65 | - | - | - |
Table 2. The optimal additive noises to maximize z d , z f a and z for various values of σ for ϕ 2 when α = 0.3 and β = 0.7 . Each entry lists v 1 / v 2 / λ .

| σ | max z d | max z f a | max z |
| 0.01 | [−5.75, 2.75]/[−3.5, −3.25]/0.6 | [−3.75, −3.75]/-/1 | [−3.7, −3.75]/-/1 |
| 0.2 | [2.55, 2.5]/[2.7, −3.5]/0.4088 | [−3.7, −3.75]/-/1 | [−3.55, −3.5]/-/1 |
| 0.4 | [−3.75, 2.5]/[−3.5, −3.5]/0.4316 | [−3.75, 2.5]/[−3.5, −3.5]/0.0206 | [−3.75, 2.5]/[−3.5, −3.5]/0.0206 |
| 0.65 | - | - | - |
Table 3. The optimal additive noises to maximize z d , z f a and z for various values of σ for the case of allowing the randomization between ϕ 1 and ϕ 2 when α = 0.3 and β = 0.7 . Each entry lists ( ϕ , v 1 ) / ( ϕ , v 2 ) / λ .

| σ | max z d | max z f a | max z |
| 0.01 | ( ϕ 1 , [2.3, 2.75])/-/1 | -/( ϕ 2 , [−3.35, −3.25])/1 | ( ϕ 1 , [2.3, 2.75])/-/1 |
| 0.2 | ( ϕ 1 , [2.75, 2.75])/( ϕ 1 , [2.7, 2.75])/0.4770 | ( ϕ 2 , [−3.7, −3.75])/( ϕ 2 , [−3.75, −3.75])/0.5230 | -/( ϕ 2 , [−3.55, −3.5])/1 |
| 0.4 | ( ϕ 1 , [2.6, 2.5])/( ϕ 2 , [−3.6, −3.5])/0.9141 | ( ϕ 1 , [2.6, 2.5])/( ϕ 2 , [−3.6, −3.5])/0.0859 | ( ϕ 1 , [2.6, 2.5])/( ϕ 2 , [−3.6, −3.5])/(0.0859, 0.9141) |
| 0.65 | ( ϕ 1 , [2.65, 2.75])/( ϕ 2 , [−3.65, −3.75])/0.5437 | ( ϕ 1 , [2.65, 2.75])/( ϕ 2 , [−3.65, −3.75])/0.4563 | ( ϕ 1 , [2.65, 2.75])/( ϕ 2 , [−3.65, −3.75])/(0.4563, 0.5437) |
