
Inference with the Median of a Prior

by Adel Mohammadpour 1,2 and Ali Mohammad-Djafari 2,*
1 School of Intelligent Systems (IPM) and Amirkabir University of Technology (Dept. of Stat.), Tehran, Iran.
2 LSS (CNRS-Supélec-Univ. Paris 11), Supélec, Plateau de Moulon, 91192 Gif-sur-Yvette, France.
* Author to whom correspondence should be addressed.
Submission received: 14 February 2006 / Accepted: 9 June 2006 / Published: 13 June 2006

Abstract: We consider the problem of inference on one of the two parameters of a probability distribution when we have some prior information on a nuisance parameter. When a prior probability distribution on this nuisance parameter is given, the marginal distribution is the classical tool to account for it. If the prior distribution is not given, but we have partial knowledge such as a fixed number of moments, we can use the maximum entropy principle to assign a prior law and thus go back to the previous case. In this work, we consider the case where we only know the median of the prior and propose a new tool for this case. This new inference tool looks like a marginal distribution. It is obtained by first remarking that the marginal distribution can be considered as the mean value of the original distribution with respect to the prior probability law of the nuisance parameter, and then using the median in place of the mean.

1 Introduction

We consider the problem of inference on a parameter of interest θ of a probability distribution, from a finite number of samples of this distribution, when we have some prior information on a nuisance parameter ν. Assume that we know the expression of either the cumulative distribution function (cdf) FX|ν,θ(x|ν, θ) or its corresponding probability density function (pdf) fX|ν,θ(x|ν, θ), where X = (X1, … , Xn)′ and x = (x1, … , xn)′. Here 𝒱 is a random parameter on which we have a priori information and θ is a fixed unknown parameter. The prior information can either be in the form of a prior cdf F𝒱(ν) (or a pdf f𝒱(ν)) or, for example, only the knowledge of a finite number of its moments. In the first case, the marginal cdf
$F_X(x) = \int F_{X|\mathcal{V}}(x|\nu)\, f_{\mathcal{V}}(\nu)\, d\nu \qquad (1)$
is the classical tool for doing any inference on θ. For example, the Maximum Likelihood (ML) estimate θ̂ML of θ is defined as
$\hat{\theta}_{ML} = \arg\max_{\theta} f_X(x),$
where fX(x) is the pdf corresponding to the cdf FX(x).
In the second case, the Maximum Entropy (ME) principle ([4, 5]) can be used for assigning the probability law f𝒱(ν), and thus we go back to the previous case; see, e.g., [1], page 90.
In this paper we consider the case where we only know the median of the nuisance parameter 𝒱. If we had the complementary knowledge that the pdf of 𝒱 has finite support, then we could again use the ME principle to assign a prior and go back to the previous case, e.g. [3]. But if we are given only the median of 𝒱 and the support is not finite, then, to our knowledge, there is no solution for this case. The main object of this paper is to propose one. To this end, in place of FX(x) in (1), we propose a new inference tool F̃X|θ(x) which can be used to infer on θ (we will show that F̃X|θ(x) is a cdf under a few conditions). For example, we can define
$\tilde{\theta}_{ML} = \arg\max_{\theta} \tilde{f}_{X|\theta}(x),$
where f̃X|θ(x) is the pdf corresponding to the cdf F̃X|θ(x).
This new tool is deduced from the interpretation of FX(x) as the mean value of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱), as given by (1). Now, if in place of the mean value we take the median, we obtain the new inference tool F̃X|θ(x), defined as
$\tilde{F}_{X|\theta}(x) = \operatorname{Med}\big(F_{X|\mathcal{V},\theta}(x|\mathcal{V},\theta)\big),$
and can be used in the same way to infer on θ.
As far as the authors know, there is no prior work on this subject except recently presented conference papers by the authors [9, 8, 7]. In the first article [9] we introduced an alternative inference tool to the total probability formula, which is called the new inference tool in this paper; we calculated this tool directly (as in Example A in Section 2) and suggested a numerical method for its approximation. In the second [8], we used this new tool for parameter estimation. Finally, in the last one [7], we reviewed the content of the two previous papers and mentioned its use for the estimation of a parameter with incomplete knowledge on a nuisance parameter in the one-dimensional case. In this paper we give more details and more results, with proofs under weaker conditions and a new outlook on the problem. We also extend the idea to the multivariate case. In the following, we first give a more precise definition of F̃X|θ(x) and then present some of its properties. For example, we show that under some conditions F̃X|θ(x) has all the properties of a cdf, and that its calculation is very easy and depends only on the median of the prior distribution. We then give a few examples and compare the relative performances of the two tools for inference on θ. Extensions and conclusions are given in the last two sections.

2 A New Inference Tool

Hereafter in this section, to simplify the notation, we omit the parameter θ and assume that the random variables Xi, i = 1, … , n, and the random parameter 𝒱 are continuous and real. We also use increasing and decreasing instead of non-decreasing and non-increasing, respectively.
Definition 1
Let X = (X1, … , Xn)′ have a cdf FX|𝒱(x|ν) depending on a random parameter 𝒱 with pdf f𝒱(ν), and let the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) have a unique median for each fixed x. The new inference tool, F̃X(x), is defined as the median of T:
$\tilde{F}_X(x) = \operatorname{Med}(T) = \operatorname{Med}\big(F_{X|\mathcal{V}}(x|\mathcal{V})\big). \qquad (2)$
To make our point clear we begin with the following simple example, called Example A. Let FX|𝒱(x|ν) = 1 − e−νx, x > 0, be the cdf of an exponential random variable with parameter ν > 0. We assume that the prior pdf of 𝒱 is known and is also exponential, with parameter 1, i.e. f𝒱(ν) = e−ν, ν > 0. We define the random variable T = FX|𝒱(x|𝒱) = 1 − e−𝒱x for any fixed value x > 0. The random variable T, 0 ≤ T ≤ 1, has the following cdf
$F_T(t) = P\big(\mathcal{V} \le -\ln(1-t)/x\big) = 1 - (1-t)^{1/x}, \qquad 0 \le t \le 1.$
Therefore, the pdf of T is fT(t) = (1/x)(1 − t)^(1/x − 1), 0 ≤ t ≤ 1. Now we can calculate the mean of the random variable T as follows:
$E(T) = \int_0^1 t\, f_T(t)\, dt = \frac{x}{1+x}.$
Let Med(T) denote the median of the random variable T; it solves FT(Med(T)) = 1/2, which gives
$\operatorname{Med}(T) = 1 - 2^{-x} = 1 - e^{-x \ln 2}.$
The mean value of the random variable T is a cdf with respect to (wrt) x. This is always true, because E(T) is the marginal cdf of the random variable X, i.e. FX(x). The marginal cdf is well known, well defined, and can be calculated directly by (1). On the other hand, in this example it is obvious that Med(T) is also a cdf wrt x; this is what we called F̃X(x) in Definition 1, see Figure 1. However, we do not have a shortcut for calculating F̃X(x) analogous to (1) for FX(x).
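As a quick numerical check of Example A, the following sketch (ours, not part of the original development; it assumes NumPy is available) samples 𝒱 from its Exp(1) prior, forms T = 1 − e−𝒱x, and compares the empirical mean and median of T with the closed forms x/(1 + x) and 1 − 2^(−x) derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = rng.exponential(scale=1.0, size=100_000)   # prior draws: V ~ Exp(1)

for x in (0.5, 1.0, 2.0):
    t = 1.0 - np.exp(-nu * x)                   # T = F_{X|V}(x|V)
    print(f"x={x}: mean(T)={t.mean():.4f} vs x/(1+x)={x/(1+x):.4f}; "
          f"median(T)={np.median(t):.4f} vs 1-2^(-x)={1-2**(-x):.4f}")
```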
In the following theorem and remark, we first show that, under a few conditions, F̃X(x) has all the properties of a cdf. Then, in Theorem 2, we derive a simple expression for calculating F̃X(x) and show that, in many cases, the expression of F̃X(x) depends only on the median of the prior and can be calculated simply, see Remark 2. In Theorem 3 we state a separability property of F̃X(x), in contrast with the exchangeability of FX(x).
Theorem 1
Let X have a cdf FX|𝒱(x|ν) depending on a random parameter 𝒱 with pdf f𝒱(ν), and let the real random variable T = FX|𝒱(x|𝒱) have a unique median for each fixed x. Then:
1.
F̃X(x) is an increasing function in each of its arguments.
2.
If FX(x) and F𝒱(ν) are continuous cdfs, then F̃X(x) is a continuous function in each of its arguments.
3.
0 ≤ F̃X(x) ≤ 1.
Proof: 
1.
Let y = (y1, … , yn)′, z = (z1, … , zn)′, yj < zj for fixed j and yi = zi for ij, 1 ≤ i, jn and take
$Y = F_{X|\mathcal{V}}(y|\mathcal{V}), \qquad Z = F_{X|\mathcal{V}}(z|\mathcal{V}).$
Then using (2) we have
$\tilde{F}_X(y) = \operatorname{Med}(Y) = k_y, \qquad \tilde{F}_X(z) = \operatorname{Med}(Z) = k_z.$
We also have YZ, because FX|𝒱 is an increasing function in each of its arguments. Therefore,
$P(Y \le k_z) \ge P(Z \le k_z) \ge \tfrac{1}{2}.$
ky is the unique median of Y, and so ky ≤ kz; equivalently, F̃X(x) is increasing in its j-th argument.
2.
For t ∈ ℝ, let t = (x1, … , xj−1, t, xj+1, … , xn)′ denote the vector x with its j-th component replaced by t. By part 1, F̃X(x) is an increasing function in each of its arguments. Therefore, the one-sided limits
$\lim_{t \uparrow x_j} \tilde{F}_X(\mathbf{t}) \quad \text{and} \quad \lim_{t \downarrow x_j} \tilde{F}_X(\mathbf{t})$
exist and are finite, e.g. [11].
Further, FX|𝒱(x|𝒱) is continuous wrt xj, and so
$\lim_{t \to x_j} F_{X|\mathcal{V}}(\mathbf{t}|\mathcal{V}) = F_{X|\mathcal{V}}(\mathbf{x}|\mathcal{V}),$
and by (2) we have
$\lim_{t \to x_j} \operatorname{Med}\big(F_{X|\mathcal{V}}(\mathbf{t}|\mathcal{V})\big) = \operatorname{Med}\big(F_{X|\mathcal{V}}(\mathbf{x}|\mathcal{V})\big). \qquad (3)$
But F̃X(x) is the unique median of FX|𝒱(x|𝒱); therefore, by (3),
$\lim_{t \uparrow x_j} \tilde{F}_X(\mathbf{t}) = \tilde{F}_X(\mathbf{x}) = \lim_{t \downarrow x_j} \tilde{F}_X(\mathbf{t}),$
and thus F̃X(x) is continuous.
3.
F̃X(x) is the median of the random variable T, where T = FX|𝒱(x|𝒱) and 0 ≤ T ≤ 1, and so 0 ≤ F̃X(x) ≤ 1.   ☐
Remark 1
By part 1 of Theorem 1, limxj↑+∞ F̃X(x) and limxj↓−∞ F̃X(x) exist and are finite, [11]. Therefore F̃X(x) is a continuous cdf if the conditions of Theorem 1 hold and
1.
limxj↓−∞ F̃X(x) = 0 for any particular j,
2.
limx1↑+∞, … , xn↑+∞ F̃X(x) = 1,
3.
∆a1b1 ⋯ ∆anbn F̃X(x) ≥ 0, where ai ≤ bi, i = 1, … , n, and ∆ajbj F̃X(x) = F̃X((x1, … , xj−1, bj, xj+1, … , xn)′) − F̃X((x1, … , xj−1, aj, xj+1, … , xn)′).
In this case, we call F̃X(x) the marginal cdf of X based on the median. When F̃X(x) is a one-dimensional cdf, the last condition follows from parts 1 and 3 of Theorem 1.
Theorem 2
If L(ν) = FX|𝒱(x|ν) is a monotone function wrt ν and 𝒱 has a unique median F𝒱⁻¹(1/2), then F̃X(x) = L(F𝒱⁻¹(1/2)).
Proof: 
Let
$L^{-1}(t) = \inf\{\nu : L(\nu) \ge t\} \quad \text{(stated for $L$ increasing; the decreasing case is analogous)}$
be the generalized inverse of L, e.g. [10] page 39. Noting that
$P\big(L(\mathcal{V}) \le t\big) = P\big(\mathcal{V} \le L^{-1}(t)\big) = F_{\mathcal{V}}\big(L^{-1}(t)\big),$
and by (2) we have,
$\tilde{F}_X(x) = \operatorname{Med}\big(L(\mathcal{V})\big) = L\big(F_{\mathcal{V}}^{-1}(\tfrac{1}{2})\big),$
where the last expression follows from
$P\Big(L(\mathcal{V}) \le L\big(F_{\mathcal{V}}^{-1}(\tfrac{1}{2})\big)\Big) = P\big(\mathcal{V} \le F_{\mathcal{V}}^{-1}(\tfrac{1}{2})\big) = \tfrac{1}{2}. \qquad ☐$
Remark 2
If the conditions of Theorem 2 hold, then F̃X(x) belongs to the family of distributions FX|𝒱(x|ν), because F̃X(x) = FX|𝒱(x|F𝒱⁻¹(1/2)). Therefore F̃X(x) is a cdf and the conditions in Remark 1 hold.
Remark 3
F̃X(x) depends only on the median of the prior distribution, F𝒱⁻¹(1/2), while the expression of FX(x) needs perfect knowledge of F𝒱(ν). Therefore, F̃X(x) is robust relative to prior distributions with the same median.
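The robustness claim of Remark 3 is easy to probe numerically. The sketch below (an illustration of ours; the rescaled lognormal prior is an arbitrary choice, not from the paper) uses two priors sharing the median ln 2 in the setting of Example A: the marginal cdf FX(x) changes with the prior, while the median-based tool F̃X(x) does not and matches the Theorem 2 shortcut FX|𝒱(x| ln 2).

```python
import numpy as np

rng = np.random.default_rng(1)
x, med = 1.5, np.log(2.0)                                # evaluation point, common prior median

# Two priors sharing the median ln 2: Exp(1), and a rescaled lognormal
nu_exp = rng.exponential(1.0, size=200_000)              # median of Exp(1) is ln 2
nu_logn = med * rng.lognormal(0.0, 1.0, size=200_000)    # median of lognormal(0, 1) is 1

for name, nu in (("Exp(1) prior", nu_exp), ("lognormal prior", nu_logn)):
    t = 1.0 - np.exp(-nu * x)
    print(f"{name}: F_X(x)={t.mean():.4f}  F~_X(x)={np.median(t):.4f}")
print(f"Theorem 2 shortcut F_X|V(x|ln 2) = {1.0 - np.exp(-med * x):.4f}")
```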
Remark 4
If the median of T is not unique, then F̃X(x) may not be a unique cdf wrt x. For example (called Example B), assume that in Example A the prior cdf F𝒱(ν) of 𝒱 is instead the one plotted in Figure 2-left: a cdf that is flat at level 1/2 over an interval [ν1, ν2], so that the median of 𝒱 is not unique. Then T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x has the cdf shown in Figure 2-right, which is also flat at level 1/2. Therefore, the median of T is an arbitrary point in the following interval (see Figure 2-right):
$\big[\, 1 - e^{-\nu_1 x},\ 1 - e^{-\nu_2 x} \,\big].$
Theorem 3
Let FX|𝒱(x|ν) be the conditional cdf of X = (X1, … , Xn)′ given 𝒱 = ν, and let L(k1,…,kr)(ν) = F(Xk1,…,Xkr)|𝒱(xk1, … , xkr|ν) be a monotone function of ν for each {k1, … , kr} ⊆ {1, … , n}. Let also 𝒱 have a unique median F𝒱⁻¹(1/2). If, for each {k1, … , kr} ⊆ {1, … , n},
$F_{(X_{k_1},\ldots,X_{k_r})|\mathcal{V}}(x_{k_1},\ldots,x_{k_r}|\nu) = \prod_{i=1}^{r} F_{X_{k_i}|\mathcal{V}}(x_{k_i}|\nu),$
i.e. X | 𝒱 = ν has independent components, then
$\tilde{F}_{(X_{k_1},\ldots,X_{k_r})}(x_{k_1},\ldots,x_{k_r}) = \prod_{i=1}^{r} \tilde{F}_{X_{k_i}}(x_{k_i}).$
Proof: 
Conditions of Theorem 2 hold and so, for each {k1, … , kr} ⊆ {1, … , n},
$\tilde{F}_{(X_{k_1},\ldots,X_{k_r})}(x_{k_1},\ldots,x_{k_r}) = L_{(k_1,\ldots,k_r)}\big(F_{\mathcal{V}}^{-1}(\tfrac{1}{2})\big) = \prod_{i=1}^{r} F_{X_{k_i}|\mathcal{V}}\big(x_{k_i}\big|F_{\mathcal{V}}^{-1}(\tfrac{1}{2})\big) = \prod_{i=1}^{r} \tilde{F}_{X_{k_i}}(x_{k_i}). \qquad ☐$
Remark 5
If X|𝒱 = ν has independent components, then the marginal distribution of X does not, in general, have independent components:
$F_X(x) = \int \prod_{i=1}^{n} F_{X_i|\mathcal{V}}(x_i|\nu)\, f_{\mathcal{V}}(\nu)\, d\nu \ \ne\ \prod_{i=1}^{n} \int F_{X_i|\mathcal{V}}(x_i|\nu)\, f_{\mathcal{V}}(\nu)\, d\nu = \prod_{i=1}^{n} F_{X_i}(x_i).$
It can be shown that, if X|𝒱 = ν has independent and identically distributed (iid) components, then the marginal distribution of X is exchangeable, see Example 1. We recall that, for identically distributed random variables, exchangeability is a weaker condition than independence.
In the following we show that some families of distributions (e.g. [6]) have a distribution function monotone wrt their parameters, so that the calculation of F̃X(x) is very easy using Theorem 2.
Lemma 1
Let L(ν) = FX|𝒱(x|ν). If ν is a real location parameter, then L(ν) is decreasing wrt ν.
Proof: 
Let ν1 < ν2 and ν be a location parameter. Then
$L(\nu_2) = F_{X|\mathcal{V}}(x|\nu_2) = F_{X|\mathcal{V}}(x - \nu_2|0) \le F_{X|\mathcal{V}}(x - \nu_1|0) = F_{X|\mathcal{V}}(x|\nu_1) = L(\nu_1). \qquad ☐$
Lemma 2
Let L(ν) = FX|𝒱(x|ν). If ν is a scale parameter then L(ν) is monotone wrt ν.
Proof: 
Let ν1 < ν2. If ν is a scale parameter, ν > 0, then
$L(\nu_2) = F_{X|\mathcal{V}}(x|\nu_2) = F_{X|\mathcal{V}}(x/\nu_2|1) \quad \text{and} \quad L(\nu_1) = F_{X|\mathcal{V}}(x/\nu_1|1).$
Therefore, L(ν) is an increasing function if x < 0 and is a decreasing function if x > 0, i.e. L(ν) is a monotone function wrt ν. ☐
The proof of the following lemma is straightforward.
Lemma 3
Let X1, … , Xn given 𝒱 = ν be iid random variables and X = (X1, … , Xn)′. If L(ν) = FX1|𝒱(x|ν) is an increasing (a decreasing) function, then L*(ν) = FX|𝒱(x|ν) is an increasing (a decreasing) function of ν.
In some cases we can show directly that L(·) is a monotone function. For example, in the exponential family this property can be proved by differentiation. Let X|η be distributed according to an exponential family with pdf
$f_{X|\boldsymbol{\eta}}(x|\boldsymbol{\eta}) = h(x)\, \exp\Big\{\sum_{i=1}^{n} \eta_i T_i(x) - A(\boldsymbol{\eta})\Big\},$
where η = (η1, … , ηn)′ and T = (T1, … , Tn)′. It can be shown that L(η) = FX|η(x|η) is a monotone function wrt each of its arguments in many cases, by the following method. Let Iy≤x = 1 if y1 ≤ x1, … , yn ≤ xn, and 0 elsewhere, and note that differentiation under the integral sign is valid for exponential families. Then
$\frac{\partial}{\partial \eta_j} L(\boldsymbol{\eta}) = \int I_{y \le x}\Big(T_j(y) - \frac{\partial A(\boldsymbol{\eta})}{\partial \eta_j}\Big) f_{X|\boldsymbol{\eta}}(y|\boldsymbol{\eta})\, dy = \operatorname{Cov}\big(I_{X \le x},\, T_j(X)\big).$
The last equality follows from ∂A(η)/∂ηj = E(Tj(X)), e.g. [6] page 27.
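To make the differentiation argument concrete, here is a small check of ours on the exponential distribution with rate η, for which the natural parameter is η1 = −η and T(x) = x; the covariance Cov(I{X≤x}, X) estimated by Monte Carlo should match the analytic derivative ∂L/∂η1 = −x e^(−ηx). The example and variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
eta, x = 1.3, 0.8                            # rate parameter and evaluation point
X = rng.exponential(1.0 / eta, size=500_000)

# For Exp(eta) the natural parameter is eta1 = -eta and T(x) = x, so the
# differentiation argument gives dL/d(eta1) = Cov(I_{X<=x}, X).
ind = (X <= x).astype(float)
mc_cov = np.mean(ind * X) - ind.mean() * X.mean()
analytic = -x * np.exp(-eta * x)             # d/d(eta1) [1 - exp(eta1*x)] at eta1 = -eta
print(f"Monte Carlo Cov = {mc_cov:.4f}, analytic derivative = {analytic:.4f}")
```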
On the other hand, we can use the stochastic ordering property of a family of distributions to show that L(·) is a monotone function. A family of cdfs
$\big\{F_{X|\mathcal{V}}(x|\nu),\ \nu \in V\big\}, \qquad (4)$
where V is an interval on the real line, is said to have the Monotone Likelihood Ratio (MLR) property if, for every ν1 < ν2 in V, the likelihood ratio
$\frac{f_{X|\mathcal{V}}(x|\nu_2)}{f_{X|\mathcal{V}}(x|\nu_1)}$
is a monotone function of x. The property of MLR defines a very strong ordering of a family of distributions.
Lemma 4
If the family (4) is an MLR family wrt x, then FX|𝒱(x|ν) is an increasing (or a decreasing) function of ν for all x.
Proof: 
See e.g. [12] page 124. ☐
A family of cdfs as in (4) is said to be stochastically increasing (SI) if ν1 < ν2 implies FX|𝒱(x|ν1) ≥ FX|𝒱(x|ν2) for all x. For stochastically decreasing (SD) the inequality is reversed. This property is weaker than the MLR property (by Lemma 4) but stronger than monotonicity of L(ν) = FX|𝒱(x|ν), because the latter only requires L(ν) to be monotone for each fixed x, with a direction that may depend on x. Therefore, we have
MLR ⇒ SI or SD ⇒ L(ν) is monotone
It can be shown that the converses of the above implications are not true.
Remark 6
In Theorem 1 we proved that F̃X(x) is an increasing function, and in the proof we did not use the monotonicity of L(ν) = FX|𝒱(x|ν) wrt ν. For example (called Example C), let FX|𝒱(x|ν) be the mixture cdf of an exponential cdf and a Cauchy cdf, both with parameter ν > 0. Figure 3-left shows the graphs of L(ν) = FX|𝒱(x|ν) for different x; L(ν) is not monotone for some of these x values. If we assume that the prior pdf of 𝒱 is known and is also exponential with parameter 1, then the median of the random variable T is still a cdf, see Figure 3-right.

3 Examples

In what follows, we use the following notations and expressions from [2]:
$\mathcal{N}(x; \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\big\{-(x-\mu)^2/(2\sigma^2)\big\}$ (normal),
$\mathcal{C}(x; \mu, \sigma) = \big[\pi\sigma\big(1 + ((x-\mu)/\sigma)^2\big)\big]^{-1}$ (Cauchy),
$\mathcal{G}(x; \alpha, \beta) = x^{\alpha-1} e^{-x/\beta} / \big(\Gamma(\alpha)\beta^{\alpha}\big),\ x > 0$ (gamma),
$\mathcal{D}(x; \mu) = (2\mu)^{-1} e^{-|x|/\mu}$ (double exponential).
Exchangeable Normal: The random vector X = (X1, … , Xn)′ is said to have an exchangeable normal distribution, 𝒩 (x; µ, σ2, ρ), if its distribution is multivariate normal with the following mean vector and variance-covariance matrix
$E(X) = \mu \mathbf{1}_n, \qquad \operatorname{Var}(X) = \sigma^2\big[(1-\rho)I_n + \rho\, \mathbf{1}_n \mathbf{1}_n'\big],$
where $\mathbf{1}_n$ denotes the vector of ones and $I_n$ the identity matrix.
It can be shown that
$\mathcal{N}(x; \mu, \sigma^2, \rho) = \int \prod_{i=1}^{n} \mathcal{N}\big(x_i; z, \sigma^2(1-\rho)\big)\, \mathcal{N}(z; \mu, \sigma^2\rho)\, dz,$
i.e. an exchangeable normal vector can be represented as iid normal variables given a common normally distributed mean.

3.1 Example 1

The first example we consider is
$f_{X|\nu,\theta}(x|\nu, \theta) = \mathcal{N}(x; \nu, \theta),$
where we assume that the mean value ν is the nuisance parameter. Let X1, … , Xn be iid copies of X given (𝒱 = ν, θ), and X = (X1, … , Xn)′; then:
  • Prior pdf case f𝒱(ν) = 𝒩 (ν; ν0, θ0):
    Then we have
    $f_{X|\theta}(x) = \int \prod_{i=1}^{n} \mathcal{N}(x_i; \nu, \theta)\, \mathcal{N}(\nu; \nu_0, \theta_0)\, d\nu$
    and
    $f_{X|\theta}(x) = \mathcal{N}\big(x;\ \nu_0,\ \theta + \theta_0,\ \theta_0/(\theta + \theta_0)\big).$
  • Unique median knowledge case Median {𝒱} = ν0:
    Then, by using Lemma 1 and Theorem 2, we have
    $\tilde{F}_{X_i|\theta}(x_i) = F_{X_i|\nu,\theta}(x_i|\nu_0, \theta),$
    or equivalently,
    $\tilde{f}_{X_i|\theta}(x_i) = \mathcal{N}(x_i; \nu_0, \theta).$
    Now we can use Theorem 3 for calculating F̃X|θ(x) (FX|ν,θ(x|ν, θ) is a decreasing function wrt ν by Lemma 1); therefore,
    $\tilde{f}_{X|\theta}(x) = \prod_{i=1}^{n} \mathcal{N}(x_i; \nu_0, \theta). \qquad (5)$
    Note that if f𝒱(ν) = 𝒩(ν; ν0, θ0) or f𝒱(ν) = 𝒞(ν; ν0, θ0), then f̃X|θ(x) is again given by (5), because the medians of these two distributions are both equal to ν0 (see Remark 3 and the numerical sketch following this example).
  • Moments knowledge case E(|𝒱|) = ν0:
    Then the ME pdf is given by 𝒟(ν; ν0). In this case we cannot obtain an analytical expression for
    $f_{X|\theta}(x) = \int \prod_{i=1}^{n} \mathcal{N}(x_i; \nu, \theta)\, \mathcal{D}(\nu; \nu_0)\, d\nu.$
    We recall that if we know only E(𝒱) = ν0 or Median{𝒱} = ν0 and the support of 𝒱 is ℝ, then the ME pdf does not exist.
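To illustrate Example 1 numerically for n = 1, the following sketch (ours; the parameter values are arbitrary illustrative choices, and it assumes SciPy) evaluates the two tools side by side: the marginal pdf 𝒩(x; ν0, θ + θ0) obtained from the full normal prior, and the median-based pseudo pdf 𝒩(x; ν0, θ) of (5).

```python
import numpy as np
from scipy.stats import norm

nu0, theta, theta0 = 0.0, 2.0, 1.0     # assumed illustrative values
xs = np.linspace(-4.0, 4.0, 5)

f_marginal = norm.pdf(xs, loc=nu0, scale=np.sqrt(theta + theta0))  # full normal prior
f_median = norm.pdf(xs, loc=nu0, scale=np.sqrt(theta))             # median-only tool (5)
for x, fm, ft in zip(xs, f_marginal, f_median):
    print(f"x={x:+.1f}:  f_X(x)={fm:.4f}   f~_X|theta(x)={ft:.4f}")
```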

3.2 Example 2

The second example we consider is
$f_{X|\nu,\theta}(x|\nu, \theta) = \mathcal{N}(x; \theta, \nu),$
where, this time, we assume that ν is the variance and the nuisance parameter. Then:
  • Prior pdf case f𝒱(ν) = 𝒢 (ν; α, β):
    Then each component has the marginal pdf
    $f_{X_i|\theta}(x_i) = \int_0^{\infty} \mathcal{N}(x_i; \theta, \nu)\, \mathcal{G}(\nu; \alpha, \beta)\, d\nu,$
    but the joint marginal pdf fX(x) cannot be calculated analytically.
  • Unique median knowledge case Median {𝒱} = ν0:
    Then, by using Lemma 2 and Theorem 2, we have
    $\tilde{f}_{X_i|\theta}(x_i) = \mathcal{N}(x_i; \theta, \nu_0).$
    It can be shown (by using the derivative) that FX|ν,θ(x|ν, θ) is a monotone function wrt ν, and by Theorem 3 we have
    $\tilde{f}_{X|\theta}(x) = \prod_{i=1}^{n} \mathcal{N}(x_i; \theta, \nu_0).$
  • Moments knowledge case E(1/𝒱) = 1/ν0:
    Then, knowing that the variance is a positive quantity, the ME pdf f𝒱(ν) is 𝒢(ν; 1, ν0). In this case we have
    $f_{X|\theta}(x) = \int_0^{\infty} \prod_{i=1}^{n} \mathcal{N}(x_i; \theta, \nu)\, \mathcal{G}(\nu; 1, \nu_0)\, d\nu,$
    and fX(x) cannot be calculated analytically.

3.3 Example 3

In this example we consider 𝒩(x; ν, σ², ρ), where ν is the nuisance parameter. Noting that we can write 𝒩(x; ν, σ², ρ) in exponential family form as follows,
$\mathcal{N}(x; \nu, \sigma^2, \rho) = h(x; \theta_1, \theta_2)\, \exp\Big\{\theta_3 \sum_{i=1}^{n} x_i\Big\},$
where θ1, θ2 and θ3 can be determined as functions of (ν, σ², ρ), with θ3 an increasing function of ν. This pdf is a monotone function wrt θ3, and so L(ν) is a monotone function.
Let θ = (σ², ρ) and let the median of the prior pdf be ν0; then
$\tilde{f}_{X|\theta}(x) = \mathcal{N}(x; \nu_0, \sigma^2, \rho).$
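As a sketch of this multivariate case (ours; the dimension and parameter values are arbitrary illustrative choices), F̃X|θ(x) is simply the exchangeable-normal cdf evaluated with the prior median ν0 plugged in for ν, which SciPy can compute directly:

```python
import numpy as np
from scipy.stats import multivariate_normal

n, nu0, sigma2, rho = 3, 1.0, 2.0, 0.4   # assumed illustrative values
cov = sigma2 * ((1.0 - rho) * np.eye(n) + rho * np.ones((n, n)))

# F~_{X|theta}(x): exchangeable-normal cdf with the prior median plugged in for nu
tool = multivariate_normal(mean=nu0 * np.ones(n), cov=cov)
print(tool.cdf(np.array([0.5, 1.0, 1.5])))
```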

3.4 Comparison of Estimators in Example 1

Suppose we are interested in estimating θ in Example 1. In the case n = 1,
$f_{X|\theta}(x) = \mathcal{N}(x; \nu_0, \theta + \theta_0) \quad \text{and} \quad \tilde{f}_{X|\theta}(x) = \mathcal{N}(x; \nu_0, \theta),$
and so the ML estimators (MLEs) of θ based on these two pdfs are equal to
$\hat{\theta} = \max\{(x - \nu_0)^2 - \theta_0,\ 0\} \quad \text{and} \quad \tilde{\theta} = (x - \nu_0)^2,$
respectively. For n > 1, the MLE of θ based on
$f_{X|\theta}(x) = \mathcal{N}\big(x;\ \nu_0,\ \theta + \theta_0,\ \theta_0/(\theta + \theta_0)\big)$
can be calculated numerically by maximizing the following simplified log-likelihood,
$l(\theta) = -\frac{n-1}{2}\ln\theta - \frac{1}{2}\ln(\theta + n\theta_0) - \frac{1}{2\theta}\Big[\sum_{i=1}^{n}(x_i - \nu_0)^2 - \frac{\theta_0\big(\sum_{i=1}^{n}(x_i - \nu_0)\big)^2}{\theta + n\theta_0}\Big],$
where we assume that θ0 = 1. The MLE of θ based on f̃X|θ(x) in (5) is equal to θ̃ = (1/n) Σⁿᵢ₌₁ (xᵢ − ν0)².
Before comparing these two estimators (considering a normal prior for ν), one might predict that θ̂ is better than θ̃, because θ̂ uses more information (the full normal prior) than θ̃, which uses only the median of the prior distribution. We also recall that fX(x) is the true pdf of the observations, obtained using the full prior knowledge on the nuisance parameter, while f̃X|θ(x) is a pseudo pdf which encodes only the median of the prior.
The empirical Mean Square Errors (MSE) of the 4 estimators are plotted in Figure 4 for different sample sizes n. We denote by T the MLE of θ when ν = ν0 is known, and by TMaxEnt the MLE of θ when the prior mean and variance are known.
In Figure 4-left we plot the MSEs of θ̂, θ̃, T and TMaxEnt. In Table 1 we classify these 4 estimators and the corresponding assumptions for n = 1. We see in Figure 4-left that θ̂ is better than θ̃, especially for large sample size n, and that T is the best.
In Figure 4-right we plot the MSEs wrt the median ν0. This is useful for checking the robustness of the estimators wrt false prior information. We see that θ̂ is more robust than θ̃ relative to ν0, but both of them are dominated by T. In this case, samples are generated from a normal distribution with a random normal mean (with median ν0) and θ = 2, while for estimation we assume that ν has a standard normal prior distribution.
The simulations confirm the following logic: the more information we have, the better the estimation. Indeed, for calculating T there is no nuisance parameter; for θ̂ we use the full prior distribution; for TMaxEnt we use the prior mean and variance; and for θ̃ we use only the median of the prior distribution.
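The n = 1 comparison can be reproduced with a few lines. The sketch below (ours; the sample size and the value of θ are arbitrary choices) draws ν from its normal prior, draws one observation per trial, and compares the empirical MSEs of θ̃ and θ̂ with the closed-form value 2(θ + 1)² + 1 given for θ̃ in Table 1.

```python
import numpy as np

rng = np.random.default_rng(3)
nu0, theta0, theta, trials = 0.0, 1.0, 2.0, 200_000

nu = rng.normal(nu0, np.sqrt(theta0), size=trials)    # nuisance mean ~ N(nu0, theta0)
X = rng.normal(nu, np.sqrt(theta), size=trials)       # one observation per trial

theta_tilde = (X - nu0) ** 2                          # median-of-prior estimator
theta_hat = np.maximum((X - nu0) ** 2 - theta0, 0.0)  # full-prior estimator
for name, est in (("theta~", theta_tilde), ("theta^", theta_hat)):
    print(f"{name}: empirical MSE = {np.mean((est - theta) ** 2):.3f}")
print("Table 1 value for theta~: 2*(theta+1)**2 + 1 =", 2 * (theta + 1) ** 2 + 1)
```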

4 Extensions

In this section, we show that the suggested new tool can be extended to other functionals such as quantiles instead of the median, but not to functionals such as the mode. For example, the mode of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) in Definition 1, i.e.,
$\operatorname{Mod}(T) = \arg\max_{t} f_T(t), \qquad (7)$
is not a cdf in Example A. Indeed, the mode of T is (see Figure 1-top)
$\operatorname{Mod}(T) = \begin{cases} 0, & x < 1, \\ k \in [0, 1] \ \text{(arbitrary)}, & x = 1, \\ 1, & x > 1, \end{cases}$
which is not a distribution function. If we take k = 1, then Mod(T) is a degenerate cdf. In Figure 5 we plot the mean, median and mode of the random variable T; we see that they are then cdfs, but the cdf based on the mode is the extreme case of the other two.
As noted by one of the referees, the mode of the prior pdf can also be used to introduce a pseudo cdf similar to our new inference tool F̃X(x): instead of using the result of Theorem 2, F̃X|θ(x) = FX|ν,θ(x|Med(𝒱), θ), one uses F̃X|θMod(x) = FX|ν,θ(x|Mod(𝒱), θ). This method has been used for eliminating the nuisance parameter ν. In this case Theorem 3, i.e. the separability property of the pseudo marginal distribution, also holds for F̃X|θMod(x). Note that the mode of the random variable T defined in (7) is not equal to F̃X|θMod(x) and may not be a cdf, as illustrated above. However, it may be a cdf, as in the following example pointed out by the referee. In Example A, let 𝒱 − 1 have a binomial distribution with parameters (2, 3/4), i.e. 𝒱 is a discrete random variable with support {1, 2, 3}. Then E(T) = 1 − (e−x + 6e−2x + 9e−3x)/16 and Mod(T) = 1 − e−3x are cdfs, see Figure 6.
On the other hand, we may extend the method presented in this paper to the class of quantiles (e.g., quartiles or percentiles). To make our point clear, we consider the first and third quartiles of the random variable T in Example A (instead of the median, which is the second quartile). We denote the new inference tools based on the first and third quartiles by F̃X|θQ1(x) and F̃X|θQ3(x), respectively.
They can be calculated, analogously to (2), by
$\tilde{F}_X^{Q_1}(x) = Q_1\big(F_{X|\mathcal{V}}(x|\mathcal{V})\big), \qquad \tilde{F}_X^{Q_3}(x) = Q_3\big(F_{X|\mathcal{V}}(x|\mathcal{V})\big),$
where Q1 and Q3 denote the first and third quartiles.
It can be shown that, in Example A, F̃XQ1(x) = 1 − e^{x ln 0.75} and F̃XQ3(x) = 1 − e^{x ln 0.25}; we plot them in Figure 7.
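A small sketch of ours, evaluating the three quantile-based tools of Example A on a grid, confirms the ordering F̃XQ1(x) ≤ F̃X(x) ≤ F̃XQ3(x) for all x > 0:

```python
import numpy as np

xs = np.linspace(0.1, 5.0, 6)
q1 = 1.0 - np.exp(xs * np.log(0.75))    # F~^{Q1}_X(x) = 1 - 0.75**x
med = 1.0 - np.exp(-xs * np.log(2.0))   # median-based tool: 1 - 2**(-x)
q3 = 1.0 - np.exp(xs * np.log(0.25))    # F~^{Q3}_X(x) = 1 - 0.25**x
for x, a, b, c in zip(xs, q1, med, q3):
    print(f"x={x:.2f}:  Q1={a:.3f}  Med={b:.3f}  Q3={c:.3f}")
```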
In conclusion, it seems that the method can be extended to any quantile in place of the median, but its extension to other functionals may need more care.

5 Conclusion

In this paper we considered the problem of inference on one set of parameters of a continuous probability distribution when we have some partial information on a nuisance parameter. We considered the particular case where this partial information is only the knowledge of the median of the prior, and proposed a new inference tool which looks like the marginal cdf (or pdf) but whose expression needs only the median of the prior. We gave a precise definition of this new tool, studied some of its main properties, compared its application with the classical marginal likelihood in a few examples, and finally gave an example of its usefulness in parameter estimation.

Acknowledgments

The authors would like to thank the referees for their helpful comments and suggestions. The first author is grateful to the School of Intelligent Systems (IPM, Tehran) and the Laboratoire des signaux et systèmes (CNRS-Supélec-Univ. Paris 11) for their support.

References

  1. Berger, J. O. Statistical Decision Theory: Foundations, Concepts, and Methods; Springer: New York, 1980.
  2. Bernardo, J. M.; Smith, A. F. M. Bayesian Theory; Wiley: Chichester, UK, 1994.
  3. Hernández Bastida, A.; Martel Escobar, M. C.; Vázquez Polo, F. J. On maximum entropy priors and a most likely likelihood in auditing. Qüestiió 1998, 22(2), 231–242.
  4. Jaynes, E. T. Information theory and statistical mechanics I, II. Physical Review 1957, 106, 620–630 and 108, 171–190.
  5. Jaynes, E. T. Prior probabilities. IEEE Transactions on Systems Science and Cybernetics 1968, SSC-4(3), 227–241.
  6. Lehmann, E. L.; Casella, G. Theory of Point Estimation, 2nd ed.; Springer: New York, 1998.
  7. Mohammad-Djafari, A.; Mohammadpour, A. On the estimation of a parameter with incomplete knowledge on a nuisance parameter. AIP Conference Proceedings 2004, Vol. 735, pp. 533–540.
  8. Mohammadpour, A.; Mohammad-Djafari, A. An alternative criterion to likelihood for parameter estimation accounting for prior information on nuisance parameter. In Soft Methodology and Random Information Systems; Springer: Berlin, 2004; pp. 575–580.
  9. Mohammadpour, A.; Mohammad-Djafari, A. An alternative inference tool to total probability formula and its applications. AIP Conference Proceedings 2004, Vol. 735, pp. 227–236.
  10. Robert, C. P.; Casella, G. Monte Carlo Statistical Methods, 2nd ed.; Springer: New York, 2004.
  11. Rohatgi, V. K. An Introduction to Probability Theory and Mathematical Statistics; Wiley: New York, 1976.
  12. Zacks, S. Parametric Statistical Inference; Pergamon: Oxford, 1981.
Figure 1. Top: pdf of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x; Middle: cdf of T; Bottom: mean and median of T in Example A.
Figure 2. Left: cdf of the random variable 𝒱 in Example B and its corresponding pdf. Right: cdf of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x in Example B.
Figure 3. Left: graphs of L(ν) = FX|𝒱(x|ν) for different x in Example C. Right: mean and median of the random variable T in Example C.
Figure 4. Empirical MSEs of θ̃, TMaxEnt, θ̂, and T wrt θ (left) and wrt ν0 (right, for θ = 2) for different sample sizes n.
Figure 5. Mean, median and mode of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x wrt x.
Figure 6. Mean and mode of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x wrt x, for the discrete prior of Section 4.
Figure 7. Q1, median and Q3 of the random variable T = T(𝒱; x) = FX|𝒱(x|𝒱) = 1 − e−𝒱x wrt x.
Table 1. Comparing estimators of the variance θ in four different situations.

| Assumptions | pdf of X given θ, based on prior information | MLE of θ | Simulated data pdf | MSE(θ) = E(MLE − θ)² |
| --- | --- | --- | --- | --- |
| Known parameter ν = ν0 | 𝒩(x; ν0, θ) | T = (X − ν0)² | 𝒩(x; 0, θ) | 2θ² |
| Known prior f𝒱(ν) = 𝒩(ν; ν0, θ0) | 𝒩(x; ν0, θ + θ0) | θ̂ = max{(X − ν0)² − θ0, 0} | 𝒩(x; 0, θ + 1) | E(θ̂ − θ)² |
| Known moments E(𝒱) = ν0, V(𝒱) = θ0² | 𝒩(x; ν0, θ + θ0²) | TMaxEnt = max{(X − ν0)² − θ0², 0} | 𝒩(x; 0, θ + 1) | E(TMaxEnt − θ)² |
| Known unique median Median(𝒱) = ν0 | 𝒩(x; ν0, θ) | θ̃ = (X − ν0)² | 𝒩(x; 0, θ + 1) | 2(θ + 1)² + 1 |
