Article

Information Generating Function of Ranked Set Samples

by Omid Kharazmi 1, Mostafa Tamandi 1 and Narayanaswamy Balakrishnan 2,*

1 Department of Statistics, Faculty of Mathematical Sciences, Vali-e-Asr University of Rafsanjan, Rafsanjan P.O. Box 518, Iran
2 Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4L8, Canada
* Author to whom correspondence should be addressed.
Submission received: 9 September 2021 / Revised: 15 October 2021 / Accepted: 18 October 2021 / Published: 21 October 2021
(This article belongs to the Special Issue Entropies, Divergences, Information, Identities and Inequalities)

Abstract

In the present paper, we study the information generating (IG) function and relative information generating (RIG) function measures associated with maximum and minimum ranked set sampling (RSS) schemes with unequal sizes. We also examine the IG measures for simple random sampling (SRS) and provide some comparison results between SRS and RSS procedures in terms of dispersive stochastic ordering. Finally, we discuss the RIG divergence measure between SRS and RSS frameworks.

1. Introduction

The moment generating function (MGF) plays an important role in statistical distribution theory: its derivatives, evaluated at zero, yield the moments of the distribution under consideration. Information generating (IG) functions have similarly been used in information theory, in addition to the moment generating function, to generate some well-known information measures such as Shannon entropy and Kullback–Leibler divergence.
The IG function of a probability model f was first introduced by Golomb [1]; its first derivative, evaluated at one, yields the negative of the Shannon entropy of that probability model.
Suppose the variable X has an absolutely continuous probability density function (PDF) f. Then, the IG function of the density f, for any $\alpha > 0$, is defined as

$$G_\alpha(X) = \int_{\mathcal{X}} f^{\alpha}(x)\,dx, \tag{1}$$
when the integral is finite; here $\mathcal{X}$ denotes the support of X. In order to simplify the notation, we omit the region of integration in integrals with respect to $dx$ throughout the article, unless a distinction needs to be made. The following properties of $G_\alpha(X)$ in (1) have been stated in Golomb [1]:

$$(i)\ G_1(X) = 1; \qquad (ii)\ \frac{\partial}{\partial\alpha} G_\alpha(X)\Big|_{\alpha=1} = -H(X), \tag{2}$$

where $H(X)$ is the Shannon entropy defined as $H(X) = -\int f(x)\log f(x)\,dx$. In particular, when $\alpha = 2$, the IG measure is simply $\int f^{2}(x)\,dx$, known as the informational energy (IE) function. The IG function and its extensions have been used extensively in chemistry and physics to discuss the atomic structure of a given phenomenon or system; for more details, one may see López-Ruiz et al. [2]. In addition, the IG function, known as the entropic moment in the chemistry and physics literature, plays a key role in chaos theory and non-extensive thermodynamics. Note that the IG function is closely linked to Tsallis and Rényi entropies. The entropic moment measure, as well as the information entropy, reflects the degree of spread of a probabilistic model; see Bercher [3].
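As a concrete check of these properties, the following minimal Python sketch (our own illustration, not part of the original paper) evaluates $G_\alpha$ numerically for an $Exp(\lambda)$ density, for which $H(X) = 1 - \log\lambda$ and the informational energy is $G_2(X) = \lambda/2$:

```python
# Minimal numerical sketch (not from the paper): properties (i) and (ii) of the
# IG function, checked for an Exp(lambda) density via SciPy quadrature.
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)            # Exp(lambda) PDF

def G(alpha):
    # IG function: integral of f^alpha over the support (0, inf)
    return quad(lambda x: f(x) ** alpha, 0, np.inf)[0]

print(G(1.0))                # property (i): G_1(X) = 1
print(G(2.0), lam / 2)       # informational energy: G_2(X) = lambda/2

# property (ii): dG/d(alpha) at alpha = 1 equals -H(X) = log(lambda) - 1
h = 1e-5
print((G(1 + h) - G(1 - h)) / (2 * h), np.log(lam) - 1)
```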
Recently, Clark [4] has presented an analogous IG function for stochastic processes to assist in the derivation of information measures for point processes.
Guiasu and Reischer [5] proposed the relative information generating (RIG) function between two density functions, whose first derivative, evaluated at 1, yields the Kullback–Leibler (KL) divergence (Kullback and Leibler [6]) measure.
Suppose the variables X and Y have absolutely continuous density functions f and g, respectively. Then, the RIG function, for any $\alpha > 0$, is defined as

$$R_\alpha(X, Y) = \int g(x)\left(\frac{f(x)}{g(x)}\right)^{\alpha} dx, \tag{3}$$

when the integral is finite. The KL divergence is then obtained, from its first derivative, as

$$KL(X, Y) = \frac{\partial}{\partial\alpha} R_\alpha(X, Y)\Big|_{\alpha=1} = \int f(x)\log\frac{f(x)}{g(x)}\,dx. \tag{4}$$
One may refer to Clark [4] and Mares et al. [7] for some discussions on the usefulness and applications of the RIG function.
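As with the IG function, the defining relation (4) is easy to verify numerically. The sketch below (our own, not from the paper) differentiates $R_\alpha$ at $\alpha = 1$ for two exponential densities, for which the KL divergence has the closed form $\log(\lambda_1/\lambda_2) + \lambda_2/\lambda_1 - 1$:

```python
# Minimal sketch (not from the paper): the derivative of R_alpha at alpha = 1
# recovers the KL divergence, illustrated for Exp(l1) vs. Exp(l2).
import numpy as np
from scipy.integrate import quad

l1, l2 = 2.0, 1.0
f = lambda x: l1 * np.exp(-l1 * x)
g = lambda x: l2 * np.exp(-l2 * x)

def R(alpha):
    # RIG function: integral of g * (f/g)^alpha
    return quad(lambda x: g(x) * (f(x) / g(x)) ** alpha, 0, np.inf)[0]

h = 1e-5
dR = (R(1 + h) - R(1 - h)) / (2 * h)            # central difference at alpha = 1
kl = np.log(l1 / l2) + l2 / l1 - 1              # closed-form KL(Exp(l1) || Exp(l2))
print(dR, kl)                                   # both ~ 0.1931
```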
The main objective of this paper is to study the IG and RIG information measures associated with ranked set sampling (RSS) schemes. The analysis of information content in various sampling strategies is of great importance in sampling theory. In this regard, information theory provides a specific framework for the quantification of the information content of a given source with a probabilistic structure under different sampling strategies. Among the various strategies discussed in sampling theory, we focus here on some well-known strategies that are known to be efficient. A cost-effective survey sampling method, known as ranked set sampling (RSS), was first introduced by McIntyre [8]. He specifically introduced RSS to estimate the mean of a population based on a given simple random sample (SRS) of size n, and observed that the estimator based on RSS is an unbiased estimator with a smaller variance than the mean of a SRS. The RSS and some of its generalizations have been discussed rather extensively in the literature. For example, Frey [9]; Park and Lim [10]; and Chen, Bai, and Sinha [11] have all discussed the information content of RSS based on Fisher information, while Tahmasebi et al. [12] have studied the Tsallis entropy based on the maximum RSS scheme. Therefore, considering the importance of this issue and the connection between information theory and ranked set sampling theory, a systematic study of the IG function, as a generator function of some well-known information measures, in the framework of the RSS strategy seems to be necessary. This forms the primary motivation for the present study.
We now briefly introduce the SRS and RSS strategies that will be used in the sequel. Let X be an absolutely continuous random variable with PDF f. Then, a SRS of size n, derived from the random variable X, is denoted by $\mathbf{X}_{SRS} = \{X_i,\ i = 1, \dots, n\}$. Further, suppose a random sample of size $n^2$ is selected and is randomly divided into n groups of equal size n. Then, a one-cycle RSS is observed in the following manner:
$$\begin{array}{lcccccl}
1: & \underline{X_{(1:n)1}} & X_{(2:n)1} & \cdots & X_{(n:n)1} & \longrightarrow & X_{(1:n)} = X_{(1:n)1} \\
2: & X_{(1:n)2} & \underline{X_{(2:n)2}} & \cdots & X_{(n:n)2} & \longrightarrow & X_{(2:n)} = X_{(2:n)2} \\
\vdots & & & & & & \vdots \\
n: & X_{(1:n)n} & X_{(2:n)n} & \cdots & \underline{X_{(n:n)n}} & \longrightarrow & X_{(n:n)} = X_{(n:n)n}.
\end{array}$$
As seen from the above representation, the observation recorded from the ith group is the ith order statistic of that group. Thus, the RSS vector of observations is given by $\mathbf{X}_{RSS}(n) = \{X_{i:n},\ i = 1, \dots, n\}$, where $X_{i:n}$ is the ith order statistic based on a given SRS of size n with PDF f and cumulative distribution function (CDF) F. Then, the PDF of $X_{i:n}$ is known to be

$$f_{i:n}(x) = \frac{n!}{(i-1)!\,(n-i)!}\, f(x)\, F^{i-1}(x)\,\big(1-F(x)\big)^{n-i}. \tag{5}$$

Here, $X_{i:n}$ corresponds to the ith order statistic and, with it taking the value x, there will be $i - 1$ observations less than x, each with probability $F(x)$, and $n - i$ observations greater than x, each with probability $1 - F(x)$. For pertinent details, one may refer to the authoritative book on this subject by Arnold et al. [13].
Maximum and minimum ranked set sampling schemes are two useful modifications of the ranked set sampling procedure. A maximum RSS is given by $\mathbf{X}_{MRSS}(n) = \{X_{(i)i},\ i = 1, \dots, n\}$, where $X_{(i)i}$ is the largest order statistic based on a SRS of size i from f. Similarly, a minimum RSS is given by $\mathbf{X}_{mRSS}(n) = \{X_{(1)i},\ i = 1, \dots, n\}$, where $X_{(1)i}$ is the smallest order statistic based on a SRS of size i from f. From (5), the PDF of $X_{(1)i}$ is given by

$$f_{(1)i}(x) = i\,\bar{F}^{\,i-1}(x)\, f(x), \quad i = 1, \dots, n, \tag{6}$$

where $\bar{F} = 1 - F$ is the survival function of X. Similarly, the PDF of $X_{(i)i}$ is given by

$$f_{(i)i}(x) = i\,F^{\,i-1}(x)\, f(x), \quad i = 1, \dots, n. \tag{7}$$

The CDFs corresponding to (6) and (7) are given by $1 - \bar{F}^{\,i}(x)$ and $F^{\,i}(x)$, respectively.
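For readers who wish to experiment with these schemes, the following sketch (our own helper functions, not from the paper) draws a one-cycle RSS, a minimum RSS and a maximum RSS from an arbitrary sampler:

```python
# Sketch (not from the paper) of the three sampling schemes described above.
import numpy as np

rng = np.random.default_rng(0)

def one_cycle_rss(draw, n):
    # n groups of size n; from group i, record the i-th order statistic
    groups = np.sort(draw((n, n)), axis=1)
    return groups[np.arange(n), np.arange(n)]          # diagonal: X_{(i:n)i}

def min_rss(draw, n):
    # group i has size i; record its minimum X_{(1)i}
    return np.array([draw(i).min() for i in range(1, n + 1)])

def max_rss(draw, n):
    # group i has size i; record its maximum X_{(i)i}
    return np.array([draw(i).max() for i in range(1, n + 1)])

draw = lambda size: rng.exponential(scale=1.0, size=size)   # f: Exp(1)
print(one_cycle_rss(draw, 4))
print(min_rss(draw, 4))
print(max_rss(draw, 4))
```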
The purpose of this work is twofold. The first part derives IG measures for SRS and RSS, especially in the maximum and minimum RSS frameworks, and provides some comparison results for the IG measures of these observations based on dispersive stochastic ordering. The second part studies the RIG divergence measure between SRS and RSS, and specifically the RIG divergence measure between the minimum and maximum RSS procedures.
The rest of this paper is organized as follows. In Section 2, we consider the information generating function and establish some results for SRS and RSS procedures. We show that the IG measures of SRS and RSS can be expressed in terms of fractional Shannon entropies of different orders. Moreover, we examine the monotonicity properties of the IG measure for the vectors $\mathbf{X}_{MRSS}(n)$ and $\mathbf{X}_{mRSS}(n)$ based on a sample of size n, under a mild condition. In Section 3, we discuss the comparison of information generating functions for SRS and RSS frameworks in terms of dispersive stochastic ordering. Next, in Section 4, we study the RIG measures for the vectors $\mathbf{X}_{SRS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$. Finally, we make some concluding remarks in Section 5.

2. IG Measures Based on SRS and RSS Schemes

In this section, we first consider the IG measure for SRS and then for RSS schemes. Specifically, we discuss the IG measure for the maximum and minimum RSS schemes.

2.1. IG Measure Based on SRS Scheme

Let $\mathbf{X}_{SRS}(n) = (X_1, \dots, X_n)$ be a SRS of size n obtained from the PDF f. Then, the IG measure of the vector $\mathbf{X}_{SRS}(n)$ is given by

$$G_\alpha(\mathbf{X}_{SRS}(n)) = \int\!\cdots\!\int f^{\alpha}(x_1)\cdots f^{\alpha}(x_n)\,dx_1\cdots dx_n = \prod_{i=1}^{n}\int f^{\alpha}(x_i)\,dx_i = \left(\int f^{\alpha}(x)\,dx\right)^{n} = \big[G_\alpha(X)\big]^{n}. \tag{8}$$
Lemma 1.
Suppose the random variable X has density function f. Then, we have

$$G_\alpha(\mathbf{X}_{SRS}(n)) = \left(\sum_{j=0}^{\infty} \frac{(1-\alpha)^{j}}{j!}\, H_j(f)\right)^{n},$$

where $H_j(f)$ is the extended fractional Shannon entropy of order j, defined as $H_j(f) = \int \big(-\log f(x)\big)^{j} f(x)\,dx$. For more details about fractional Shannon entropy, one may refer to Xiong et al. [14].
Proof. 
From the definition of the IG measure of $\mathbf{X}_{SRS}(n)$ in (8) and using Lemma 1 of Kharazmi and Balakrishnan [15], we have

$$\big[G_\alpha(\mathbf{X}_{SRS}(n))\big]^{1/n} = E\Big[e^{(\alpha-1)\log f(X)}\Big] = \sum_{j=0}^{\infty} \frac{(1-\alpha)^{j}}{j!} \int \big(-\log f(x)\big)^{j} f(x)\,dx = \sum_{j=0}^{\infty} \frac{(1-\alpha)^{j}}{j!}\, H_j(f),$$
as required. □
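Lemma 1 can also be confirmed numerically: the truncated series converges quickly for moderate α. A sketch (ours, assuming an $Exp(\lambda)$ parent):

```python
# Sketch (not from the paper): Lemma 1 for Exp(lambda), comparing [G_alpha(X)]^n
# with the truncated fractional-Shannon-entropy series.
import numpy as np
from math import factorial
from scipy.integrate import quad

lam, alpha, n = 2.0, 1.5, 3
f = lambda x: lam * np.exp(-lam * x)

G = quad(lambda x: f(x) ** alpha, 0, np.inf)[0] ** n    # [G_alpha(X)]^n

def H(j):
    # fractional Shannon entropy of order j: integral of (-log f)^j * f
    return quad(lambda x: (-np.log(f(x))) ** j * f(x), 0, np.inf)[0]

series = sum((1 - alpha) ** j / factorial(j) * H(j) for j in range(30)) ** n
print(G, series)        # the two agree to quadrature accuracy
```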

2.2. IG Measure Based on RSS Scheme

Suppose $X_1, \dots, X_n$ are independent and identically distributed (iid) variables from an absolutely continuous CDF F and PDF f, and $X_{1:n}, \dots, X_{n:n}$ are the corresponding order statistics. We then present the IG measure of the vector $\mathbf{X}_{RSS}(n) = \{X_{i:n},\ i = 1, \dots, n\}$ in the following theorem.
Theorem 1.
Let $\mathbf{X}_{RSS}(n)$ denote a RSS from density function f. Then, the IG measure of the vector $\mathbf{X}_{RSS}(n)$, for $\alpha > 0$, is given by

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} G_\alpha(X_{i:n}) = \psi(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(V_i)\big)\Big], \tag{9}$$

where $\psi(\alpha, n) = \prod_{i=1}^{n} \frac{B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)}{B^{\alpha}(i,\ n-i+1)}$, and $V_i$ has a $Beta\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)$ distribution with PDF

$$f_{V_i}(v) = \frac{1}{B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)}\, v^{\alpha(i-1)} (1-v)^{\alpha(n-i)}, \quad 0 < v < 1.$$
Proof. 
From the definition of the IG measure in (1) for the vector $\mathbf{X}_{RSS}(n)$ and setting $v = F(x)$, we have

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} G_\alpha(X_{i:n}) = \prod_{i=1}^{n} \int f_{i:n}^{\alpha}(x)\,dx = \prod_{i=1}^{n} \frac{1}{B^{\alpha}(i,\ n-i+1)} \int f^{\alpha}(x)\, F(x)^{\alpha(i-1)} \big(1-F(x)\big)^{\alpha(n-i)}\,dx = \psi(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(V_i)\big)\Big],$$
as required. □
Based on the definition of fractional Shannon entropy and Lemma 1 of Kharazmi and Balakrishnan [15], we can present an alternative representation of $G_\alpha(\mathbf{X}_{RSS}(n))$ as

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} \sum_{j=0}^{\infty} \frac{(1-\alpha)^{j}}{j!}\, H_j(f_{i:n}),$$

where $H_j$ is the fractional Shannon entropy of order j and $f_{i:n}$ is the PDF of $X_{i:n}$ as given in (5).
Example 1.
Let X be an exponential variable with PDF $f(x) = \lambda e^{-\lambda x}$, $\lambda > 0$, $x > 0$. From (1) and (8), we then find $G_\alpha(\mathbf{X}_{SRS}(n)) = \lambda^{n(\alpha-1)} \alpha^{-n}$. On the other hand, as $f(F^{-1}(u)) = \lambda(1-u)$, $0 < u < 1$, from (9), we find

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \lambda^{n(\alpha-1)} \prod_{i=1}^{n} \frac{B\big(\alpha(i-1)+1,\ \alpha(n-i+1)\big)}{B^{\alpha}(i,\ n-i+1)}.$$
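The closed forms of Example 1 can be cross-checked against direct numerical integration; a sketch (ours):

```python
# Sketch (not from the paper): Example 1 for Exp(lambda) -- SRS and RSS IG measures,
# closed forms vs. direct quadrature.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import beta

lam, alpha, n = 2.0, 1.7, 3
f = lambda x: lam * np.exp(-lam * x)
F = lambda x: 1 - np.exp(-lam * x)

def f_order(i, x):
    # PDF of the i-th order statistic from a sample of size n, Equation (5)
    c = factorial(n) / (factorial(i - 1) * factorial(n - i))
    return c * f(x) * F(x) ** (i - 1) * (1 - F(x)) ** (n - i)

# SRS: [G_alpha(X)]^n vs. lambda^{n(alpha-1)} / alpha^n
print(quad(lambda x: f(x) ** alpha, 0, np.inf)[0] ** n,
      lam ** (n * (alpha - 1)) / alpha ** n)

# RSS: product of the integrals of f_{i:n}^alpha vs. the Beta-function product
direct = np.prod([quad(lambda x: f_order(i, x) ** alpha, 0, np.inf)[0]
                  for i in range(1, n + 1)])
closed = lam ** (n * (alpha - 1)) * np.prod(
    [beta(alpha * (i - 1) + 1, alpha * (n - i + 1)) / beta(i, n - i + 1) ** alpha
     for i in range(1, n + 1)])
print(direct, closed)
```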
Next, we discuss the IG measure for the maximum and minimum RSS schemes with vectors $\mathbf{X}_{MRSS}(n) = \{X_{(i)i},\ i = 1, \dots, n\}$ and $\mathbf{X}_{mRSS}(n) = \{X_{(1)i},\ i = 1, \dots, n\}$, respectively.
Theorem 2.
Let $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ denote the minimum and maximum RSS schemes from density function f, respectively. Then, the IG measures of the vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$, for $\alpha > 0$, are given by

$$G_\alpha(\mathbf{X}_{mRSS}(n)) = \prod_{i=1}^{n} G_\alpha(X_{(1)i}) = c(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(U_i)\big)\Big] \tag{10}$$

and

$$G_\alpha(\mathbf{X}_{MRSS}(n)) = \prod_{i=1}^{n} G_\alpha(X_{(i)i}) = c(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(V_i)\big)\Big], \tag{11}$$

respectively, where $U_i$ has a $Beta\big(1,\ \alpha(i-1)+1\big)$ and $V_i$ has a $Beta\big(\alpha(i-1)+1,\ 1\big)$ distribution, with $c(\alpha, n) = \frac{(n!)^{\alpha}}{\prod_{i=1}^{n} \big(\alpha(i-1)+1\big)}$.
Proof. 
From the definition of the IG measure in (1) and using the PDF of $X_{(1)i}$ in (6), upon setting $u = F(x)$, we get

$$G_\alpha(\mathbf{X}_{mRSS}(n)) = \prod_{i=1}^{n} \int f_{(1)i}^{\alpha}(x)\,dx = \prod_{i=1}^{n} i^{\alpha} \int \bar{F}(x)^{\alpha(i-1)} f^{\alpha}(x)\,dx = (n!)^{\alpha} \prod_{i=1}^{n} \int_0^1 (1-u)^{\alpha(i-1)} f^{\alpha-1}\big(F^{-1}(u)\big)\,du = c(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(U_i)\big)\Big],$$
as required. The proof of (11) is similar, and is therefore omitted for the sake of brevity. □
Example 2.
For the exponential PDF considered in Example 1, by using (10) and (11), we find
 (i) $G_\alpha(\mathbf{X}_{mRSS}(n)) = (n!)^{\alpha-1}\, \lambda^{n(\alpha-1)}\, \alpha^{-n}$;
 (ii) $G_\alpha(\mathbf{X}_{MRSS}(n)) = G_\alpha(\mathbf{X}_{mRSS}(n))\, \big((\alpha-1)!\big)^{n} \prod_{i=1}^{n} \frac{\Gamma\big(\alpha(i-1)+1\big)}{\Gamma(\alpha i)}$, where $(\alpha-1)! = \Gamma(\alpha)$.
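Both parts of Example 2 can likewise be verified numerically; a sketch (ours):

```python
# Sketch (not from the paper): Example 2 for Exp(lambda) -- min/max RSS IG measures,
# closed forms vs. direct quadrature.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma

lam, alpha, n = 2.0, 1.7, 3
f  = lambda x: lam * np.exp(-lam * x)
F  = lambda x: 1 - np.exp(-lam * x)
Fb = lambda x: np.exp(-lam * x)                 # survival function

g_min = np.prod([quad(lambda x: (i * Fb(x) ** (i - 1) * f(x)) ** alpha, 0, np.inf)[0]
                 for i in range(1, n + 1)])
g_max = np.prod([quad(lambda x: (i * F(x) ** (i - 1) * f(x)) ** alpha, 0, np.inf)[0]
                 for i in range(1, n + 1)])

closed_min = factorial(n) ** (alpha - 1) * lam ** (n * (alpha - 1)) / alpha ** n
ratio = gamma(alpha) ** n * np.prod([gamma(alpha * (i - 1) + 1) / gamma(alpha * i)
                                     for i in range(1, n + 1)])
print(g_min, closed_min)              # part (i)
print(g_max, closed_min * ratio)      # part (ii), with (alpha-1)! = Gamma(alpha)
```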
Figure 1 shows the differences between the IG measures of the vectors $\mathbf{X}_{SRS}(n)$, $\mathbf{X}_{RSS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ in Examples 1 and 2, for different values of $\alpha > 0$ and $n = 2$. From Figure 1, it is easy to observe that for $\alpha \in (0, 1]$, the IG differences are negative and increasing (Panel (a)), while for $\alpha \in [1, \infty)$, they are positive and increasing (Panel (b)).
Suppose X has CDF F and PDF f, and the vectors $\mathbf{X}_{MRSS}(n)$ and $\mathbf{X}_{mRSS}(n)$ are the associated maximum and minimum RSS schemes based on a sample of size n. Then, the following results present the monotonicity properties of the IG measures for the vectors $\mathbf{X}_{MRSS}(n)$ and $\mathbf{X}_{mRSS}(n)$.
Theorem 3.
Consider the IG measure of the vector $\mathbf{X}_{MRSS}(n)$. If $f(F^{-1}(u)) \ge 1$ for all $0 < u < 1$, then:
 (i) If $\alpha \ge 1$, $G_\alpha(\mathbf{X}_{MRSS}(n))$ is increasing in n;
 (ii) If $\alpha \le 1$, $G_\alpha(\mathbf{X}_{MRSS}(n))$ is decreasing in n.
Proof. 
By using the assumption and the definition of the IG measure for the vector $\mathbf{X}_{MRSS}(n)$ in (11), we have

$$\frac{G_\alpha(\mathbf{X}_{MRSS}(n+1))}{G_\alpha(\mathbf{X}_{MRSS}(n))} = \frac{\prod_{i=1}^{n+1} \int f_{(i)i}^{\alpha}(x)\,dx}{\prod_{i=1}^{n} \int f_{(i)i}^{\alpha}(x)\,dx} = \int f_{(n+1)\,n+1}^{\alpha}(x)\,dx = (n+1)^{\alpha} \int_0^1 u^{\alpha n} f^{\alpha-1}\big(F^{-1}(u)\big)\,du \ge (n+1)^{\alpha} \int_0^1 u^{\alpha n}\,du = \frac{(n+1)^{\alpha}}{\alpha n + 1} \ge 1, \quad \text{for } \alpha \ge 1,$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
Theorem 4.
Consider the IG measure of the vector $\mathbf{X}_{mRSS}(n)$. If $f(F^{-1}(u)) \ge 1$ for all $0 < u < 1$, then:
 (i) If $\alpha \ge 1$, $G_\alpha(\mathbf{X}_{mRSS}(n))$ is increasing in n;
 (ii) If $\alpha \le 1$, $G_\alpha(\mathbf{X}_{mRSS}(n))$ is decreasing in n.
Proof. 
By using the assumptions and the definition of the IG measure for the vector $\mathbf{X}_{mRSS}(n)$ in (10), we have

$$\frac{G_\alpha(\mathbf{X}_{mRSS}(n+1))}{G_\alpha(\mathbf{X}_{mRSS}(n))} = \frac{\prod_{i=1}^{n+1} \int f_{(1)i}^{\alpha}(x)\,dx}{\prod_{i=1}^{n} \int f_{(1)i}^{\alpha}(x)\,dx} = \int f_{(1)\,n+1}^{\alpha}(x)\,dx = (n+1)^{\alpha} \int \bar{F}(x)^{\alpha n} f^{\alpha}(x)\,dx = (n+1)^{\alpha} \int_0^1 (1-u)^{\alpha n} f^{\alpha-1}\big(F^{-1}(u)\big)\,du \ge (n+1)^{\alpha} \int_0^1 (1-u)^{\alpha n}\,du = \frac{(n+1)^{\alpha}}{\alpha n + 1} \ge 1, \quad \text{for } \alpha \ge 1,$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
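Theorems 3 and 4 are easy to visualize for a density satisfying the condition; the sketch below (ours) uses $U(0, 1/2)$, for which $f(F^{-1}(u)) = 2 \ge 1$:

```python
# Sketch (not from the paper): monotonicity of G_alpha(X_MRSS(n)) in n for U(0, 1/2),
# where f(F^{-1}(u)) = 2 >= 1 satisfies the condition of Theorems 3 and 4.
import numpy as np
from scipy.integrate import quad

f = lambda x: 2.0          # U(0, 1/2) density
F = lambda x: 2.0 * x      # U(0, 1/2) CDF

def G_max_rss(alpha, n):
    # product over i of the integral of f_{(i)i}^alpha on (0, 1/2)
    return np.prod([quad(lambda x: (i * F(x) ** (i - 1) * f(x)) ** alpha, 0, 0.5)[0]
                    for i in range(1, n + 1)])

for n in range(1, 6):
    print(n, G_max_rss(2.0, n), G_max_rss(0.5, n))
# alpha = 2.0: increasing in n (Theorem 3(i)); alpha = 0.5: decreasing (Theorem 3(ii))
```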
Next, we compare the IG measure of the vector $\mathbf{X}_{SRS}(n)$ with those of $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$.
Theorem 5.
Consider the IG measures $G_\alpha(\mathbf{X}_{SRS}(n))$, $G_\alpha(\mathbf{X}_{mRSS}(n))$ and $G_\alpha(\mathbf{X}_{MRSS}(n))$. Then:
 (i) If $\alpha \ge 1$, $G_\alpha(\mathbf{X}_{mRSS}(n)) \le (n!)^{\alpha}\, G_\alpha(\mathbf{X}_{SRS}(n))$;
 (ii) If $\alpha \le 1$, $G_\alpha(\mathbf{X}_{MRSS}(n)) \le (n!)^{\alpha}\, G_\alpha(\mathbf{X}_{SRS}(n))$.
Proof. 
By the definition of the IG measures of the vectors $\mathbf{X}_{SRS}(n)$ and $\mathbf{X}_{mRSS}(n)$, we find

$$G_\alpha(\mathbf{X}_{mRSS}(n)) = (n!)^{\alpha} \prod_{i=1}^{n} \int_0^1 (1-u)^{\alpha(i-1)} f^{\alpha-1}\big(F^{-1}(u)\big)\,du \le (n!)^{\alpha} \prod_{i=1}^{n} \int_0^1 f^{\alpha-1}\big(F^{-1}(u)\big)\,du = (n!)^{\alpha} \left(\int_0^1 f^{\alpha-1}\big(F^{-1}(u)\big)\,du\right)^{n} = (n!)^{\alpha}\, G_\alpha(\mathbf{X}_{SRS}(n)),$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
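For the exponential distribution, the bound of Theorem 5(i) can be read off directly from the closed forms of Examples 1 and 2, since the ratio of the two sides is exactly $1/n!$; a sketch (ours):

```python
# Sketch (not from the paper): Theorem 5(i) for Exp(lambda) -- the ratio of
# G_alpha(X_mRSS(n)) to (n!)^alpha * G_alpha(X_SRS(n)) is 1/n!.
from math import factorial

lam, alpha = 2.0, 1.5
for n in range(1, 6):
    g_mrss = factorial(n) ** (alpha - 1) * lam ** (n * (alpha - 1)) / alpha ** n
    bound  = factorial(n) ** alpha * lam ** (n * (alpha - 1)) / alpha ** n
    print(n, g_mrss <= bound, g_mrss / bound)     # ratio = 1/n!
```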

3. IG Ordering Results Based on the RSS Scheme

An important criterion for comparing the dispersions (or variabilities) of two variables (or distributions) is dispersive ordering. Let the variables X and Y have CDFs F and G and PDFs f and g, respectively. Then, X is said to be less dispersed than Y (denoted by $X \le_{disp} Y$) if $g(G^{-1}(u)) \le f(F^{-1}(u))$ for all $u \in (0, 1)$; see, for instance, Shaked and Shanthikumar [16] for relevant details.
Definition 1.
Let X and Y be two variables with IG measures $G_\alpha(f)$ and $G_\alpha(g)$, respectively. Then, X is said to be less than Y in the sense of the information generating function, denoted by $X \le_{IG} Y$, if $G_\alpha(f) \le G_\alpha(g)$.
Lemma 2.
Suppose $X \le_{disp} Y$. Then:
 (i) If $\alpha \le 1$, $X \le_{IG} Y$;
 (ii) If $\alpha \ge 1$, $Y \le_{IG} X$.
Proof. 
See Kharazmi and Balakrishnan [15] for a detailed proof. □
Now, we present the following theorem about the IG ordering for RSS schemes.
Theorem 6.
Let $\{X_i\}_{i \ge 1}$ be a sequence of i.i.d. variables from a decreasing failure rate (DFR) distribution. Then:
 (i) If $\alpha \le 1$, $\mathbf{X}_{mRSS}(n) \le_{IG} \mathbf{X}_{RSS}(n) \le_{IG} \mathbf{X}_{MRSS}(n)$;
 (ii) If $\alpha \ge 1$, $\mathbf{X}_{mRSS}(n) \ge_{IG} \mathbf{X}_{RSS}(n) \ge_{IG} \mathbf{X}_{MRSS}(n)$.
Proof. 
From the DFR assumption on the underlying distribution, it is known that

$$X_{1:i} \le_{disp} X_{i:n} \le_{disp} X_{(i)i}, \quad i = 1, \dots, n;$$

see Shaked and Shanthikumar [16]. Therefore, from Lemma 2, for $\alpha \le 1$, we get

$$G_\alpha(X_{1:i}) \le G_\alpha(X_{i:n}) \le G_\alpha(X_{(i)i}), \quad i = 1, \dots, n,$$

and consequently,

$$\prod_{i=1}^{n} G_\alpha(X_{1:i}) \le \prod_{i=1}^{n} G_\alpha(X_{i:n}) \le \prod_{i=1}^{n} G_\alpha(X_{(i)i}).$$

Now, from the above inequality and the definitions of the IG measures for the vectors $\mathbf{X}_{mRSS}(n)$, $\mathbf{X}_{RSS}(n)$ and $\mathbf{X}_{MRSS}(n)$, we immediately obtain

$$G_\alpha(\mathbf{X}_{mRSS}(n)) \le G_\alpha(\mathbf{X}_{RSS}(n)) \le G_\alpha(\mathbf{X}_{MRSS}(n)),$$

which is equivalent to

$$\mathbf{X}_{mRSS}(n) \le_{IG} \mathbf{X}_{RSS}(n) \le_{IG} \mathbf{X}_{MRSS}(n),$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
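The orderings of Theorem 6 can be observed numerically for the exponential distribution (a constant failure rate, the boundary case of DFR), using the closed-form IG measures of Examples 1 and 2 with $\lambda = 1$; a sketch (ours):

```python
# Sketch (not from the paper): Theorem 6 for Exp(1), using the closed-form
# IG measures of Examples 1 and 2.
import numpy as np
from math import factorial
from scipy.special import beta, gamma

def g_rss(alpha, n):
    return np.prod([beta(alpha * (i - 1) + 1, alpha * (n - i + 1))
                    / beta(i, n - i + 1) ** alpha for i in range(1, n + 1)])

def g_mrss(alpha, n):
    return factorial(n) ** (alpha - 1) / alpha ** n

def g_Mrss(alpha, n):
    ratio = gamma(alpha) ** n * np.prod([gamma(alpha * (i - 1) + 1) / gamma(alpha * i)
                                         for i in range(1, n + 1)])
    return g_mrss(alpha, n) * ratio

for alpha in (0.5, 2.0):
    print(alpha, g_mrss(alpha, 3), g_rss(alpha, 3), g_Mrss(alpha, 3))
# alpha = 0.5: mRSS <= RSS <= MRSS; alpha = 2.0: the ordering is reversed
```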
Theorem 7.
Let X and Y be independent random variables with densities f and g, respectively, and $X \le_{disp} Y$. Then:
 (i) If $\alpha \le 1$, $\mathbf{X}_{RSS}(n) \le_{IG} \mathbf{Y}_{RSS}(n)$;
 (ii) If $\alpha \ge 1$, $\mathbf{Y}_{RSS}(n) \le_{IG} \mathbf{X}_{RSS}(n)$.
Proof. 
By the definition of the IG measure for RSS in (9), we have

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} G_\alpha(X_{i:n}) = \psi(\alpha, n) \prod_{i=1}^{n} E\Big[f^{\alpha-1}\big(F^{-1}(V_i)\big)\Big].$$

Because $X \le_{disp} Y$, we have $f(F^{-1}(u)) \ge g(G^{-1}(u))$ for all $u \in (0, 1)$, and so for $\alpha \le 1$, we get $f^{\alpha-1}(F^{-1}(u)) \le g^{\alpha-1}(G^{-1}(u))$. Now, making use of this inequality, we obtain

$$G_\alpha(\mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} \frac{1}{B^{\alpha}(i,\ n-i+1)} \int_0^1 u^{\alpha(i-1)} (1-u)^{\alpha(n-i)} f^{\alpha-1}\big(F^{-1}(u)\big)\,du \le \prod_{i=1}^{n} \frac{1}{B^{\alpha}(i,\ n-i+1)} \int_0^1 u^{\alpha(i-1)} (1-u)^{\alpha(n-i)} g^{\alpha-1}\big(G^{-1}(u)\big)\,du = G_\alpha(\mathbf{Y}_{RSS}(n)),$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
Corollary 1.
Let X and Y be independent random variables with densities f and g, respectively, and $X \le_{disp} Y$. Then:
 (i) If $\alpha \le 1$, $\mathbf{X}_{mRSS}(n) \le_{IG} \mathbf{Y}_{mRSS}(n)$;
 (ii) If $\alpha \ge 1$, $\mathbf{Y}_{mRSS}(n) \le_{IG} \mathbf{X}_{mRSS}(n)$;
 (iii) If $\alpha \le 1$, $\mathbf{X}_{MRSS}(n) \le_{IG} \mathbf{Y}_{MRSS}(n)$;
 (iv) If $\alpha \ge 1$, $\mathbf{Y}_{MRSS}(n) \le_{IG} \mathbf{X}_{MRSS}(n)$.

4. RIG Divergence Measure Based on RSS Scheme

Let $\mathbf{X}_{SRS} = \{X_i,\ i = 1, \dots, n\}$ denote a SRS of size n from density function (PDF) f and cumulative distribution function F. Further, let $\mathbf{X}_{RSS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ be the corresponding RSS, minimum RSS and maximum RSS vectors, respectively. We now consider the RIG measure between the variable X and each of the components of the vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$. From the definition of the RIG measure in (3), the RIG divergence between $X_{(1)i}$, with density in (6), and X is given by

$$R_\alpha(X_{(1)i}, X) = \int f_{(1)i}^{\alpha}(x)\, f^{1-\alpha}(x)\,dx = i^{\alpha} \int_0^1 (1-u)^{\alpha(i-1)}\,du = \frac{i^{\alpha}}{\alpha(i-1)+1}.$$

Similarly, the RIG divergence between $X_{(i)i}$, with density in (7), and X is given by

$$R_\alpha(X_{(i)i}, X) = \int f_{(i)i}^{\alpha}(x)\, f^{1-\alpha}(x)\,dx = i^{\alpha} \int_0^1 u^{\alpha(i-1)}\,du = \frac{i^{\alpha}}{\alpha(i-1)+1}.$$
It is evident from the above results that $R_\alpha(X_{(1)i}, X) = R_\alpha(X_{(i)i}, X)$, which is free of the underlying distribution F.
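This distribution-free property is easy to confirm numerically: the same value arises under two quite different parents; a sketch (ours):

```python
# Sketch (not from the paper): R_alpha(X_{(1)i}, X) is free of F -- the same value
# arises for an Exp(2) parent and a U(0, 1) parent.
import numpy as np
from scipy.integrate import quad

alpha, i = 1.5, 4

def rig_min(f, F, a, b):
    # RIG divergence between X_{(1)i}, with density i*(1-F)^{i-1}*f, and X on (a, b)
    fi = lambda x: i * (1 - F(x)) ** (i - 1) * f(x)
    return quad(lambda x: fi(x) ** alpha * f(x) ** (1 - alpha), a, b)[0]

print(rig_min(lambda x: 2 * np.exp(-2 * x), lambda x: 1 - np.exp(-2 * x), 0, np.inf))
print(rig_min(lambda x: 1.0, lambda x: x, 0, 1))
print(i ** alpha / (alpha * (i - 1) + 1))      # closed form: 4^1.5 / 5.5
```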
Theorem 8.
Consider the vectors $\mathbf{X}_{SRS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ from density function f. Then, we have:
 (i) $R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{SRS}(n)) = \prod_{i=1}^{n} R_\alpha(X_{(1)i}, X) = c(\alpha, n)$;
 (ii) $R_\alpha(\mathbf{X}_{MRSS}(n), \mathbf{X}_{SRS}(n)) = \prod_{i=1}^{n} R_\alpha(X_{(i)i}, X) = c(\alpha, n)$,
where $c(\alpha, n) = \frac{(n!)^{\alpha}}{\prod_{i=1}^{n} \big(\alpha(i-1)+1\big)}$.
Proof. 
From the definition of the RIG divergence between the vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{SRS}(n)$, we find

$$R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{SRS}(n)) = \int\!\cdots\!\int f_{(1)1}^{\alpha}(x_1)\cdots f_{(1)n}^{\alpha}(x_n)\, f^{1-\alpha}(x_1)\cdots f^{1-\alpha}(x_n)\,dx_1\cdots dx_n = \prod_{i=1}^{n} \int f_{(1)i}^{\alpha}(x)\, f^{1-\alpha}(x)\,dx = \prod_{i=1}^{n} R_\alpha(X_{(1)i}, X) = c(\alpha, n),$$
which proves Part (i). Part (ii) can be proved in an analogous manner. □
With the result that $R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{SRS}(n)) = R_\alpha(\mathbf{X}_{MRSS}(n), \mathbf{X}_{SRS}(n)) = \frac{(n!)^{\alpha}}{\prod_{i=1}^{n}(\alpha(i-1)+1)}$ from Theorem 8, we have plotted the RIG measure between the vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{SRS}(n)$, for some selected choices of α and sample size n, in Figure 2. From Figure 2, it is easy to observe that for $\alpha \in (0, 1]$, the RIG divergence measure between $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{SRS}(n)$ is decreasing in the sample size n (Panels (a) and (b)), while for $\alpha \in [1, \infty)$, it is increasing in n (Panels (c) and (d)). Therefore, for $\alpha \in (0, 1]$, the similarity between the density functions of the sampling vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{SRS}(n)$ increases with n; for $\alpha \in [1, \infty)$, the result is the opposite, i.e., the similarity between the two sampling vectors decreases.
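The quantity $c(\alpha, n)$ is elementary to tabulate, and doing so reproduces the trend seen in Figure 2; a sketch (ours):

```python
# Sketch (not from the paper): c(alpha, n) of Theorem 8 -- decreasing in n for
# alpha < 1, increasing in n for alpha > 1, matching Figure 2.
from math import factorial

def c(alpha, n):
    prod = 1.0
    for i in range(1, n + 1):
        prod *= alpha * (i - 1) + 1
    return factorial(n) ** alpha / prod

for alpha in (0.5, 2.0):
    print(alpha, [round(c(alpha, n), 4) for n in range(1, 8)])
```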
Theorem 9.
Consider the vectors $\mathbf{X}_{RSS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ from density function f. Then, we have:
 (i) $R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} R_\alpha(X_{(1)i}, X_{i:n}) = c^{*}(\alpha, n)$;
 (ii) $R_\alpha(\mathbf{X}_{MRSS}(n), \mathbf{X}_{mRSS}(n)) = \prod_{i=1}^{n} R_\alpha(X_{(i)i}, X_{(1)i}) = n! \prod_{i=1}^{n} \frac{\Gamma\big(\alpha(i-1)+1\big)\,\Gamma\big((1-\alpha)(i-1)+1\big)}{\Gamma(i+1)}$,
where $c^{*}(\alpha, n) = (n!)^{\alpha}\, n^{n(1-\alpha)} \prod_{i=1}^{n} \binom{n-1}{i-1}^{1-\alpha} \frac{\Gamma\big((1-\alpha)(i-1)+1\big)\,\Gamma\big(\alpha(2i-n-1)+n-i+1\big)}{\Gamma\big(\alpha(i-n)+n+1\big)}$.
Proof. 
From the definition of the RIG measure between the vectors $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{RSS}(n)$, we have

$$R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{RSS}(n)) = \prod_{i=1}^{n} \int f_{(1)i}^{\alpha}(x)\, f_{i:n}^{1-\alpha}(x)\,dx = (n!)^{\alpha}\, n^{n(1-\alpha)} \prod_{i=1}^{n} \binom{n-1}{i-1}^{1-\alpha} \int_0^1 u^{(1-\alpha)(i-1)} (1-u)^{\alpha(2i-n-1)+n-i}\,du = c^{*}(\alpha, n),$$
which proves Part (i). Part (ii) can be proved in a similar manner. □
We have plotted the results of Theorem 9 in Figure 3 and Figure 4 for some choices of α. From these figures, we observe that for $\alpha \in (0, 1]$, both RIG measures in Theorem 9 are decreasing in the sample size n. Therefore, the similarity between the density functions of the corresponding sampling vectors increases with increasing sample size n.
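Both expressions of Theorem 9 are straightforward to evaluate with gamma functions; the sketch below (ours) tabulates them for α = 0.5 and confirms the decreasing behaviour just described:

```python
# Sketch (not from the paper): the two RIG expressions of Theorem 9 for alpha = 0.5,
# both decreasing in n, matching Figures 3 and 4.
from math import comb, factorial
from scipy.special import gamma

def c_star(alpha, n):
    # part (i): R_alpha(X_mRSS(n), X_RSS(n))
    val = factorial(n) ** alpha * n ** (n * (1 - alpha))
    for i in range(1, n + 1):
        val *= (comb(n - 1, i - 1) ** (1 - alpha)
                * gamma((1 - alpha) * (i - 1) + 1)
                * gamma(alpha * (2 * i - n - 1) + n - i + 1)
                / gamma(alpha * (i - n) + n + 1))
    return val

def r_max_min(alpha, n):
    # part (ii): R_alpha(X_MRSS(n), X_mRSS(n))
    val = factorial(n)
    for i in range(1, n + 1):
        val *= gamma(alpha * (i - 1) + 1) * gamma((1 - alpha) * (i - 1) + 1) / gamma(i + 1)
    return val

print([round(c_star(0.5, n), 4) for n in range(1, 7)])
print([round(r_max_min(0.5, n), 4) for n in range(1, 7)])
```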

5. Concluding Remarks

In this paper, we have studied the information generating (IG) function and relative information generating (RIG) function measures associated with SRS and RSS strategies. Specifically, we have examined the IG function for the maximum and minimum RSS schemes. We have shown that, under a mild condition on the density function f, for $\alpha \ge 1$, the IG function associated with the sampling vector $\mathbf{X}_{MRSS}(n)$ is increasing in the sample size n, while for $\alpha \le 1$ it is decreasing. Similar results have been established for the IG function of the sampling vector $\mathbf{X}_{mRSS}(n)$, depending on the values of α and n. We have also shown that upper bounds for $G_\alpha(\mathbf{X}_{mRSS}(n))$ and $G_\alpha(\mathbf{X}_{MRSS}(n))$ can be given in terms of $G_\alpha(\mathbf{X}_{SRS}(n))$. In addition, we have provided some comparative results for RSS schemes in terms of dispersive stochastic ordering, and based on this stochastic ordering, we have established ordering results among the IG functions of the sampling vectors $\mathbf{X}_{RSS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$ according as $\alpha \le 1$ or $\alpha \ge 1$. Finally, we have examined the RIG measure between the vectors $\mathbf{X}_{SRS}(n)$, $\mathbf{X}_{RSS}(n)$, $\mathbf{X}_{mRSS}(n)$ and $\mathbf{X}_{MRSS}(n)$; the corresponding results associated with the RIG divergence have been plotted in Figures 2–4. For example, Figures 3 and 4 present the two RIG measures of Theorem 9 for some choices of α, demonstrating that the similarity between the density functions of the corresponding sampling vectors increases as the sample size n increases.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Golomb, S. The information generating function of a probability distribution (corresp.). IEEE Trans. Inf. Theory 1966, 12, 75–77. [Google Scholar] [CrossRef]
  2. López-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef] [Green Version]
  3. Bercher, J.F. Some properties of generalized Fisher information in the context of non-extensive thermostatistics. Phys. A Stat. Mech. Appl. 2013, 392, 3140–3154. [Google Scholar] [CrossRef] [Green Version]
  4. Clark, D.E. Local entropy statistics for point processes. IEEE Trans. Inf. Theory 2019, 66, 1155–1163. [Google Scholar] [CrossRef]
  5. Guiasu, S.; Reischer, C. The relative information generating function. Inf. Sci. 1985, 35, 235–241. [Google Scholar] [CrossRef]
  6. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  7. Mares, C.; Mares, I.; Dobrica, V.; Demetrescu, C. Quantification of the direct solar impact on some components of the hydro-climatic system. Entropy 2021, 23, 691. [Google Scholar] [CrossRef] [PubMed]
  8. McIntyre, G.A. A method for unbiased selective sampling, using ranked sets. Aust. J. Agric. Res. 1952, 3, 385–390. [Google Scholar] [CrossRef]
  9. Frey, J. A note on Fisher information and imperfect ranked-set sampling. Commun. Stat.-Theory Methods 2014, 43, 2726–2733. [Google Scholar] [CrossRef]
  10. Park, S.; Lim, J. On the effect of imperfect ranking on the amount of Fisher information in ranked set samples. Commun. Stat.-Theory Methods 2012, 413, 3608–3620. [Google Scholar] [CrossRef]
  11. Chen, Z.; Bai, Z.; Sinha, B. Ranked Set Sampling: Theory and Applications; Springer: New York, NY, USA, 2013. [Google Scholar]
  12. Tahmasebi, S.; Longobardi, M.; Kazemi, M.R.; Alizadeh, M. Cumulative Tsallis entropy for maximum ranked set sampling with unequal samples. Phys. A Stat. Mech. Appl. 2020, 556, 124763. [Google Scholar] [CrossRef]
  13. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; John Wiley & Sons: New York, NY, USA, 1992. [Google Scholar]
  14. Xiong, H.; Shang, P.; Zhang, Y. Fractional cumulative residual entropy. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104879. [Google Scholar] [CrossRef]
  15. Kharazmi, O.; Balakrishnan, N. Jensen-information generating function and its connections to some well-known information measures. Stat. Probab. Lett. 2020, 170, 108995. [Google Scholar] [CrossRef]
  16. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007. [Google Scholar]
Figure 1. The differences between IG measures for the exponential distribution with λ = 2 and n = 2, when 0 < α < 1 (a) and α > 1 (b).
Figure 2. $R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{SRS}(n))$ for some selected choices of the parameter α and sample size n.
Figure 3. $R_\alpha(\mathbf{X}_{mRSS}(n), \mathbf{X}_{RSS}(n))$ for some choices of the parameter α and sample size n.
Figure 4. $R_\alpha(\mathbf{X}_{MRSS}(n), \mathbf{X}_{mRSS}(n))$ for some choices of the parameter α and sample size n.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
