
Fisher’s exact approach for post hoc analysis of a chi-squared test

  • Guogen Shan ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing

    guogen.shan@unlv.edu

    Affiliation School of Community Health Sciences, University of Nevada Las Vegas, Las Vegas, NV 89154, United States of America

  • Shawn Gerstenberger

    Roles Conceptualization, Data curation, Project administration, Software, Writing – review & editing

    Affiliation School of Community Health Sciences, University of Nevada Las Vegas, Las Vegas, NV 89154, United States of America

Abstract

This research is motivated by one of our survey studies to assess the potential influence of introducing zebra mussels to the Lake Mead National Recreation Area, Nevada. One research question in this study is to investigate the association between the boating activity type and the awareness of zebra mussels. A chi-squared test is often used for testing independence between two factors with nominal levels. When the null hypothesis of independence between two factors is rejected, we are often left wondering where the significance comes from. Cell residuals, including standardized residuals and adjusted residuals, are traditionally used in testing for cell significance, which is often known as a post hoc test after a statistically significant chi-squared test. In practice, the limiting distributions of these residuals are utilized for statistical inference. However, they may lead to different conclusions based on the calculated p-values, and their p-values could be over- or under-estimated due to the unsatisfactory performance of asymptotic approaches with regard to type I error control. In this article, we propose new exact p-values by using Fisher’s approach based on three commonly used test statistics to order the sample space. We theoretically prove that the proposed exact p-values based on these test statistics are the same. Based on our extensive simulation studies, we show that the existing asymptotic approach based on the adjusted residual is more likely than the exact approach to reject the null hypothesis, due to the inflated family-wise error rates observed. We recommend the proposed exact p-value for use in practice as a valuable post hoc analysis technique following a chi-squared test.

1 Background

This research is motivated by a survey study conducted by Gerstenberger et al. [1] to assess the potential influence of introducing zebra mussels to the Lake Mead National Recreation Area (LMNRA), Nevada, USA. Zebra mussels are relatively small (adults are roughly fingernail-sized). Their extremely high reproductive rates raise the concern that they could clog water intakes in the LMNRA, which is the main water resource for the city [2]. They can be easily moved from an affected lake to an unaffected one by attaching to boats, nets, docks, and so on. In this study, surveys approved by the United States Fish and Wildlife Service were used to collect data at six different sites in the LMNRA between 2002 and 2003 [1]. All 274 participants were asked in person about their boating activity type (Pleasure, Angler, Jet Ski, and Other) and their awareness of zebra mussels (Yes/No); see Table 1 for the data from this study. The chi-squared test was used to test independence between boater activity type and awareness of zebra mussels, and a very small p-value indicated a strong association between the two factors.

Table 1. Awareness of zebra mussels of boaters from Lake Mead National Recreation Area, Nevada, USA.

https://doi.org/10.1371/journal.pone.0188709.t001

Researchers are often interested in identifying significant cells/relationships after a statistically significant chi-squared test [3, 4]. Two test statistics are commonly used to test the significance of each cell. The first is the standardized residual, calculated as the raw residual divided by the square root of the expected value, where the raw residual is defined as the difference between the observed and expected values. The second is the adjusted residual: the raw residual divided by its standard error. Both statistics follow the standard normal distribution asymptotically. These two tests reach different conclusions when applied to the cells of the data in Table 1. In addition, statistical inference based on these two tests relies on how close the limiting distribution is to the true distribution. For a cell with a relatively small count, asymptotic approaches are often not reliable. Recently, Sharpe [5] reviewed several approaches to conduct a post hoc test after a statistically significant chi-squared test: residual comparison, ransacking, and partitioning. The goal of a post hoc test is to find the source of the overall significance.

To overcome the unsatisfactory performance of the existing asymptotic approaches for testing each individual cell in a contingency table after a significant chi-squared test, we propose using Fisher’s approach to compute an exact p-value by enumerating all possible tables with the same marginal row and column totals as the observed data. The two aforementioned test statistics, as well as the raw residual, can be used to order the sample space. Enumerating all possible tables can be very computationally intensive because the size of the sample space grows rapidly, even with efficient numerical search algorithms [6]. For this particular problem, we find that the complete sample space can be reduced to a set of 2 × 2 tables, instead of all possible R × C tables, to test the significance of each cell. In addition, we show theoretically that the exact p-values based on the three test statistics are the same, and thus they lead to the same conclusion.

The rest of the article is organized as follows. In Section 2, we review the commonly used approaches to test the significance of each cell after a statistically significant chi-squared test and propose the exact p-value calculation using Fisher’s approach. We theoretically prove the relationship between the exact p-values based on the different test statistics considered in this article. In Section 3, we illustrate the application of the proposed exact p-value by using two real examples, including the motivating example from our survey study. We then conduct extensive Monte Carlo simulation studies to compare the performance of the proposed exact approach and the existing asymptotic approaches. Finally, we conclude with some remarks in Section 4.

2 Methods

When the overall chi-squared test is significant, the next step is to perform a post hoc test to find out which cells of the contingency table differ from their expected values. Without any prior knowledge about individual cells, we are interested in testing all cells in a contingency table at once. Three test statistics are often calculated for each cell: the Raw Residual (RawR), the Standardized Residual (StdR), and the Adjusted Residual (AdjR). The larger these residuals are, the greater their contribution to the overall chi-squared statistic.

2.1 Residuals

The raw residual is computed as the difference between the observed value and the expected value,

TRawR = xij − eij,

where eij = mi nj/N is the expected value of the ij-th cell under the independence hypothesis, and mi, nj, and N are the row marginal total, the column marginal total, and the total sample size, respectively. It has been pointed out that TRawR is insufficient for hypothesis testing since its value tends to be large when the value in that cell is large [5]. For this reason, the following two test statistics are traditionally used for testing independence in the ij-th cell. The standardized residual is the component from the chi-squared test,

TStdR = (xij − eij) / √eij,

and the adjusted residual uses the standard error of xij − eij in the test statistic [7, 8],

TAdjR = (xij − eij) / √(eij(1 − mi/N)(1 − nj/N)).

Both TStdR and TAdjR asymptotically follow the standard normal distribution [9].

Both TStdR and TAdjR can be used to test the independence hypothesis for each cell by comparing the calculated test statistic to the critical value from the standard normal distribution. It should be noted that they can reach different conclusions based on their asymptotic p-values. It is easy to see that the p-value based on TAdjR is always less than that based on TStdR, because |TAdjR| is always larger than |TStdR| for any observed data set. For this reason, TAdjR is often recommended over TStdR in practice, as the latter test can be too conservative [10].
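For readers who wish to reproduce these quantities, the following R sketch computes the three residuals and the two asymptotic two-sided p-values directly from an observed count matrix; the function name cell_residuals and the argument x are illustrative and not part of the authors' released software.

```r
## Cell residuals and asymptotic p-values for an observed R x C count matrix x.
cell_residuals <- function(x) {
  N <- sum(x); m <- rowSums(x); n <- colSums(x)
  e   <- outer(m, n) / N                               # expected counts under independence
  raw <- x - e                                         # raw residuals, T_RawR
  std <- raw / sqrt(e)                                 # standardized residuals, T_StdR
  adj <- raw / sqrt(e * outer(1 - m / N, 1 - n / N))   # adjusted residuals, T_AdjR
  list(raw = raw, std = std, adj = adj,
       p_std = 2 * pnorm(-abs(std)),                   # two-sided asymptotic p-value from T_StdR
       p_adj = 2 * pnorm(-abs(adj)))                   # two-sided asymptotic p-value from T_AdjR
}
```

As a cross-check, R’s built-in chisq.test returns the standardized residuals in its residuals component and the adjusted residuals in its stdres component.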

2.2 Exact post hoc p-value

The accuracy of the limiting distribution for p-value calculation depends on multiple factors, including the marginal row and column totals and whether the observed value in that cell is small. In addition, the type I error control by using the limiting distribution is often unsatisfactory [11–17]. To overcome these limitations of asymptotic approaches for statistical inference, we propose using Fisher’s exact approach to test the independence. All possible tables with the same marginal row and column totals as the observed data are enumerated and used in the p-value calculation, and the rejection region is determined by using any of the three test statistics: TRawR, TStdR, and TAdjR. Suppose that the marginal row and column totals are m1, m2, ⋯, mR, and n1, n2, ⋯, nC in an R × C contingency table. The probability of observing a table with values X = {xij, i = 1, ⋯, R, and j = 1, ⋯, C} is computed as

P(X) = (∏i mi! × ∏j nj!) / (N! × ∏i,j xij!),     (1)

which is often known as the hypergeometric probability. Let T be the test statistic to order the sample space, and X* be the observed data. Then, the exact p-value based on Fisher’s approach is calculated as

p = ∑X∈Ω(X*) P(X),

where Ω(X*) = {X ∶ |T(X)| ≥ |T(X*)|} is the rejection region, and P(X) is the probability of data X as given in Eq (1).
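As a concrete illustration of Eq (1), the short R sketch below computes this probability for a given table; the calculation is done on the log scale to avoid overflow of the factorials, and the function name table_prob is illustrative.

```r
## Probability of a table X under fixed margins, Eq (1), computed on the log scale.
table_prob <- function(X) {
  m <- rowSums(X); n <- colSums(X); N <- sum(X)
  exp(sum(lgamma(m + 1)) + sum(lgamma(n + 1)) - lgamma(N + 1) - sum(lgamma(X + 1)))
}
```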

Calculating exact p-values can be computationally prohibitive without network search algorithms to identify the rejection region efficiently. The network algorithm developed by Mehta and Patel [6] has been adopted by many statistical software packages to compute Fisher’s exact p-value for categorical data organized in a contingency table. This algorithm finds the rejection region much faster than a direct, naive full enumeration, which quickly becomes infeasible as the table size and the total sample size increase. For this particular problem, we can simplify the exact p-value calculation because two tables having the same xij have the same test statistic. In other words, if one table is in the rejection region, then all tables with the same xij are also in the rejection region. For this reason, the sample space in the exact p-value calculation reduces to the collection of 2 × 2 tables of the form given in Table 2.

Table 2. Reorganized data for testing the independence from the ij-th cell.

https://doi.org/10.1371/journal.pone.0188709.t002

This new sample space is the collection of tables Y = (xij, mi − xij, nj − xij, N − mi − nj + xij), and the probability of a table Y follows the hypergeometric distribution,

P(Y) = (mi choose xij) × ((N − mi) choose (nj − xij)) / (N choose nj),

where (a choose b) denotes the binomial coefficient. For a 2 × 2 table as in Table 2, it is much easier to enumerate all possible tables without resorting to efficient network search algorithms. Suppose Y* is the observed data. The new exact p-value based on Fisher’s exact approach is computed as

p = ∑Y P(Y) I(|T(Y)| ≥ |T(Y*)|),     (2)

where the sum is over all tables Y with the same margins as Y*, and I(a) is an indicator function with I(a) = 1 when a is true, and zero otherwise.
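Under this 2 × 2 reduction, the exact p-value in Eq (2) can be computed in a few lines of R using the hypergeometric distribution. The sketch below is illustrative rather than the authors' released program, and exact_cell_pvalue is an assumed name. Because each of the three residuals is, for fixed margins, a monotone function of xij (see Theorem 2.1 below), ordering tables by the distance |xij − eij| gives the same rejection region as any of the three statistics.

```r
## Exact p-value of Eq (2) for the (i, j) cell of an observed count matrix x.
exact_cell_pvalue <- function(x, i, j) {
  N <- sum(x); m <- rowSums(x)[i]; n <- colSums(x)[j]
  e <- m * n / N                                   # expected count of the (i, j) cell
  support <- max(0, m + n - N):min(m, n)           # feasible values of x_ij given the fixed margins
  probs   <- dhyper(support, m, N - m, n)          # hypergeometric probability of each reduced 2 x 2 table
  ## Tables at least as far from the expected count as the observed one form the rejection region;
  ## a small tolerance guards against floating-point ties.
  sum(probs[abs(support - e) >= abs(x[i, j] - e) - 1e-12])
}
```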

Theorem 2.1 Exact p-value calculations based on the three test statistics are the same.

Proof. The proposed exact p-value based on Fisher’s approach depends on the test statistic T used to order the sample space. The rejection region is defined as

Ω(Y*) = {Y ∶ |T(Y)| ≥ |T(Y*)|}.

In the new exact p-value calculation, the row and column marginal totals in Table 2 are considered fixed. It follows that the denominators of TStdR and TAdjR, namely √eij and √(eij(1 − mi/N)(1 − nj/N)), are constant. Thus, TStdR and TAdjR are proportional to TRawR, and it follows that

ΩRawR(Y*) = ΩStdR(Y*) = ΩAdjR(Y*).

By the definition of the exact p-value in Eq (2), the exact p-values based on these three test statistics are the same for a given data set.

This theorem shows that the three test statistics lead to the same exact p-value, so they agree with each other for testing independence in each individual cell. For simplicity, we use TAdjR to order the sample space when computing the exact p-value with Fisher’s approach.

The classic approach to adjust the significance level for multiple comparisons is the Bonferroni method, which uses α/W, where W is the number of comparisons. This correction is widely used when the multiple comparisons are independent. However, the cell-level tests in a contingency table are correlated, and the Holm-Bonferroni method can be used in this setting. In that method, all W p-values are sorted from the smallest to the largest, and the k-th smallest p-value is compared with α/(W + 1 − k). This method is uniformly more powerful than the traditional Bonferroni method. Later, Simes proposed an improved method for multiple comparisons with an adjusted significance level of αk/W for the k-th smallest p-value [18]. Simes’ method is often more powerful than the two aforementioned methods. For this reason, we use Simes’ method for both the asymptotic approach and the proposed exact approach.
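A minimal sketch of the Simes rule as described above: each of the W sorted p-values is compared with its threshold αk/W, and a cell is flagged when its p-value meets the threshold for its rank. The function name simes_significant is illustrative.

```r
## Flag significant cells using the Simes-adjusted thresholds alpha * k / W.
simes_significant <- function(p, alpha = 0.05) {
  W <- length(p)
  ord <- order(p)                                  # ranks of the p-values from smallest to largest
  sig <- logical(W)
  sig[ord] <- p[ord] <= alpha * seq_len(W) / W     # compare the k-th smallest p-value with alpha * k / W
  sig                                              # logical vector in the original cell order
}
```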

3 Results

We first use two real examples to illustrate the application of the proposed exact p-value calculation for a post hoc test after a chi-squared test, then we conduct extensive numerical studies to compare the proposed exact approach with the existing approaches.

3.1 Real data application

The first example is a cross-sectional study of malignant melanoma [19, 20]. In this study, 408 cases were randomly selected from all patients from New South Wales, Australia who were diagnosed with malignant melanoma. Tumor type (4 categories: Hutchinson’s melanotic freckle (H), Indeterminate (I), Nodular (N), and Superficial spreading melanoma (S)) and tumor site (3 categories: Head and neck, Trunk, and Extremities) were recorded for each case. The data from this study are presented in a 4 × 3 contingency table (Table 3). The chi-squared test statistic is 65.81, with a p-value of 2.9×10−12, which is much less than 0.05. Since the overall chi-squared test is significant, we reject the null hypothesis that tumor type and tumor site are independent.

Table 3. Data from the malignant melanoma example for testing independence between tumor type and tumor site.

https://doi.org/10.1371/journal.pone.0188709.t003

We compute p-values for each cell of the contingency table in this example. First, we use the limiting distributions of the test statistics TStdR and TAdjR for the p-value calculation, see Table 4. This table is sorted by the TAdjR test statistic from the largest to the smallest. As can be seen from the table, TStdR is relatively conservative as compared to TAdjR, since TAdjR has three cells with significant results as compared to one based on TStdR. Suppose TAdjR is used for statistical inference. We can conclude that the expected count is significantly different from the observed count for tumor type H at all three tumor sites, and for type S when head and neck is the tumor site. In addition to these results from the asymptotic approaches, the last column of Table 4 gives the proposed exact p-value, with TAdjR used to order the sample space. We proved in Theorem 2.1 that the exact p-values based on the three test statistics are identical. For this particular example, four cells have significant exact p-values, and most of them involve tumor type H at the three different tumor sites.

Table 4. P-value calculation for each cell of data from the malignant melanoma example.

The calculated p-value for each cell is compared against the adjusted significance threshold from the Simes multiple comparison correction [18]. The cells with significant p-values are shown in bold.

https://doi.org/10.1371/journal.pone.0188709.t004

We revisit the awareness survey from Section 1 as the second example. The data from this in-person interview survey are presented in Table 1, and the overall p-value for testing independence between boater activity type and awareness of zebra mussels in Lake Mead is 1.4 × 10−5, which indicates a significant association between the two factors. Following the significant chi-squared test, we compute the three test statistics, the asymptotic p-values based on TStdR and TAdjR, and the proposed exact p-value, see Table 5. No significant cell is found by using TStdR, while the cells for boaters for pleasure, jet ski, or other are significant by using either TAdjR or the exact approach. In this example, a few cells have the same p-values. For such cases, we use the largest adjusted p-value for those having the same p-value. In this example, TAdjR and the proposed exact approach reach the same conclusion. It should be noted that when a factor has only 2 levels (awareness in this example, j = 1, 2), |TAdjR| is the same within each level of the other factor (|TAdjR(xi1)| = |TAdjR(xi2)|) [9]. This leads to the same exact p-values for these two cells, as observed in the table.

Table 5. P-value calculation for each cell of data from the survey for the awareness of zebra mussels.

The calculated p-value for each cell is compared against the adjusted significance threshold from the Simes multiple comparison correction [18]. The cells with significant p-values are shown in bold.

https://doi.org/10.1371/journal.pone.0188709.t005

3.2 Simulation study

We conduct an extensive simulation study to further compare the existing asymptotic approach based on TAdjR and the proposed exact approach. It has been observed that the asymptotic approach based on TStdR is relatively conservative as compared to that based on TAdjR. For this reason, we exclude TStdR from the comparison.

For a given total sample size (N) and table size (R × C), we first simulate the row and column marginal totals, (m1, ⋯, mR) and (n1, ⋯, nC). We simulate 1,000 sets of marginal totals. For each simulated set of marginal totals, we then use the R function r2dtable to randomly generate 2,000 R × C contingency tables by using Patefield’s algorithm [21, 22]. For each of these 2,000 simulated tables, we compute the asymptotic p-value based on the limiting distribution of TAdjR and the exact p-value. We compute the family-wise error rate (FWER) for each approach when performing the R × C hypothesis tests simultaneously for each simulated table. The FWER is calculated as the proportion of tables for which the null hypothesis is rejected in at least one cell. The significance level for the k-th smallest p-value is set as 0.05k/(R × C), k = 1, 2, ⋯, R × C, by using the Simes correction for multiple comparisons.
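The simulation loop for one set of margins can be sketched in R as follows, reusing the exact_cell_pvalue and simes_significant functions sketched earlier; the margins and sample size in the example call are illustrative, not values taken from the study.

```r
## Family-wise error rate under independence for one set of fixed margins.
fwer_one_margin <- function(m, n, B = 2000, alpha = 0.05) {
  tabs <- r2dtable(B, m, n)                        # B random tables with the given margins (Patefield's algorithm)
  rejected <- vapply(tabs, function(x) {
    p <- outer(seq_along(m), seq_along(n),
               Vectorize(function(i, j) exact_cell_pvalue(x, i, j)))
    any(simes_significant(as.vector(p), alpha))    # at least one cell rejected after the Simes correction
  }, logical(1))
  mean(rejected)                                   # proportion of tables with at least one rejection
}

## Example: a 3 x 3 table with N = 90 and balanced margins (illustrative values).
set.seed(1)
fwer_one_margin(c(30, 30, 30), c(30, 30, 30))
```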

Fig 1 shows the FWERs of both the asymptotic and exact approaches for contingency tables of sizes 3 × 3, 5 × 5, and 8 × 8, and sample sizes from 50 to 500. It can be seen that the asymptotic approach does not control the type I error rate in the majority of cases, and its FWER is almost 5 times the nominal level in one case. The performance of the asymptotic approach gets worse as the table size increases. This may be because the chance of rejecting at least one of the null hypotheses increases when more hypotheses are tested simultaneously. The proposed exact approach guarantees the type I error rate.

Fig 1. Actual family-wise error rates of the proposed exact approach and the existing asymptotic approach based on the adjusted residual at the nominal level of 0.05.

https://doi.org/10.1371/journal.pone.0188709.g001

Suppose ΓAsy and ΓExact are the numbers of cells with significant p-values by using the asymptotic approach and the exact approach, respectively. We include the cases that have at least one significant cell based on either of the two approaches, that is, max(ΓAsy, ΓExact) > 0. In other words, the cases with ΓAsy = 0 and ΓExact = 0 are excluded from the performance comparison.

In Table 6, we compare the existing asymptotic approach based on TAdjR and the proposed exact approach by using all cases with max(ΓAsy, ΓExact) > 0 for a given N and table size (R = 3 and C = 3). The last row of this table shows the total number of such cases out of the 1,000 × 2,000 = 2,000,000 simulated data sets. We find that the proportion of cases in which the two approaches reach the same conclusion (ΓAsy = ΓExact) increases as the total sample size goes up, and the proportion with ΓAsy > ΓExact (the asymptotic approach rejects more cells than the exact approach) is a decreasing function of N. Among the cases with ΓAsy > ΓExact, the majority are those in which the exact approach finds no significant cell. The number of cases in which the exact approach rejects more cells than the asymptotic approach is relatively low, less than 0.15% for the cases studied. When N is small, such as N = 50, the asymptotic approach always rejects at least as many cells as the exact approach, ΓAsy ≥ ΓExact.

Table 6. For a 3 × 3 contingency table, frequency (Freq) and proportion (Prop) of simulated data sets having at least one significant cell based on either TAdjR or the exact p-value, from a total of 2 million simulations.

ΓAsy and ΓExact are the number of cells with significant p-values by using the asymptotic approach and the exact approach, respectively.

https://doi.org/10.1371/journal.pone.0188709.t006

We present the frequency and proportion of simulated data from a 3 × 5 contingency table in Table 7 and a 5 × 5 contingency table in Table 8. When the total sample size is small, the proportion of ΓAsy = ΓExact is less than that of ΓAsy > ΓExact, and this trend is reversed as the sample size increases. As the table size increases, the proportion of cases in which the two approaches reject different numbers of cells (ΓAsy ≠ ΓExact) goes up. Similar to Table 6, these two tables show that the proportion of ΓAsy > ΓExact is relatively large as compared to that of ΓAsy < ΓExact.

Table 7. For a 3 × 5 contingency table, frequency (Freq) and proportion (Prop) of simulated data sets having at least one significant cell based on either TAdjR or the exact p-value, from a total of 2 million simulations.

ΓAsy and ΓExact are the number of cells with significant p-values by using the asymptotic approach and the exact approach, respectively.

https://doi.org/10.1371/journal.pone.0188709.t007

Table 8. For a 5 × 5 contingency table, frequency (Freq) and proportion (Prop) of simulated data sets having at least one significant cell based on either TAdjR or the exact p-value, from a total of 2 million simulations.

ΓAsy and ΓExact are the number of cells with significant p-values by using the asymptotic approach and the exact approach, respectively.

https://doi.org/10.1371/journal.pone.0188709.t008

When we compare Tables 6, 7, and 8, which have different table sizes, we find that the proportion of cases with max(ΓAsy, ΓExact) > 0 among the total of 2 million simulations increases as the table size increases. For the 3 × 5 and 5 × 5 contingency tables, the proportion of max(ΓAsy, ΓExact) > 0 is a decreasing function of N, while in Table 6, for a 3 × 3 contingency table, this proportion is almost constant across the different total sample sizes.

4 Discussion

It is well known that asymptotic approaches can lead to different conclusions depending on the limiting distributions used for the p-value calculation. In this article, we theoretically prove that the exact p-values are identical regardless of which of the three commonly used test statistics is used. For this reason, we recommend the proposed exact p-value for use in practice. We have developed a program to compute the exact p-value using the statistical software R [23], and it is available from the first author’s website at https://faculty.unlv.edu/gshan/ under the Software development section. In addition, we provide a website for researchers who do not use R: http://gshan.i2.unlv.edu/ZPostHoc. We would appreciate any comments from users to further improve the R function and the website.

We have not found an alternative approach based on an exact framework; the existing approaches are generally based on asymptotic limiting distributions or simulation. A simulation-based approach can only generate a limited number of cases and may discard some of them (e.g., tables with one or more zero cells). Nevertheless, simulation remains a useful option when it is difficult to enumerate all possible samples, especially for a study with only the total sum fixed [24–27].

In addition to the three test statistics considered for testing individual cells, several other approaches have been developed for post hoc analysis after a significant chi-squared test. Partitioning is one of them; this approach divides a contingency table into a set of 2 × 2 tables. The total possible number of such 2 × 2 tables is (R choose 2) × (C choose 2). Due to this large number of partitions, a set of orthogonal partitions was proposed [28] to avoid having too many unnecessary comparisons [5, 29]. Alternatively, Jin and Wang [30] suggested implementing multiple comparisons on one factor: when that factor has R levels, each data set for a post hoc test is a 2 × C contingency table formed from a pair of levels, and the total number of comparisons is (R choose 2). They compute a p-value for each 2 × C contingency table by using the chi-squared test (a brief sketch of this pairwise procedure is given below). One could also consider exact approaches for the p-value calculation for such data [13]; we consider this as future work.
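As a rough illustration of the pairwise comparisons suggested by Jin and Wang [30], the following R sketch forms every 2 × C sub-table from a pair of row levels and computes an asymptotic chi-squared p-value for each; an exact p-value calculation could be substituted here. The function name pairwise_row_tests is illustrative.

```r
## Chi-squared p-values for all choose(R, 2) pairwise 2 x C comparisons of the row factor.
pairwise_row_tests <- function(x) {
  pairs <- combn(nrow(x), 2)                       # all pairs of row levels
  apply(pairs, 2, function(idx) {
    sub <- x[idx, , drop = FALSE]                  # 2 x C sub-table for one pair of levels
    suppressWarnings(chisq.test(sub)$p.value)      # asymptotic chi-squared p-value for this pair
  })
}
```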

5 Conclusions

In this article, we propose using Fisher’s approach to compute an exact p-value for each cell in a contingency table after a significant overall chi-squared test [31–35]. The existing approaches are often based on the asymptotic limiting distributions of their associated test statistics. From the extensive simulation studies conducted in this article, we find that the FWER of the asymptotic approach based on TAdjR can be much larger than the nominal level, while the proposed exact approach guarantees the FWER. Due to the lack of an existing approach with the FWER guaranteed, we do not have another approach to include in a power comparison with the proposed exact approach.

Acknowledgments

The authors are very grateful to the Editor and two reviewers for their insightful comments that helped improve the manuscript. Shan’s research is partially supported by grants from the National Institute of General Medical Sciences of the National Institutes of Health: P20GM109025, P20GM103440, and 5U54GM104944.

References

  1. Gerstenberger S, Powell S, McCoy M. The 100th Meridian Initiative in Nevada: Assessing the Potential Movement of the Zebra Mussel to the Lake Mead National Recreation Area, Nevada, USA. University of Nevada Las Vegas. 2003.
  2. Hebert PDN, Muncaster BW, Mackie GL. Ecological and Genetic Studies on Dreissena polymorpha (Pallas): a New Mollusc in the Great Lakes. Can J Fish Aquat Sci. 1989;46(9):1587–1591.
  3. Cox MK, Key CH. Post Hoc Pair-Wise Comparisons for the Chi-Square Test of Homogeneity of Proportions. Educational and Psychological Measurement. 1993;53(4):951–962.
  4. Freeman GH, Halton JH. Note on an Exact Treatment of Contingency, Goodness of Fit and Other Problems of Significance. Biometrika. 1951;38(1–2):141–149. pmid:14848119
  5. Sharpe D. Your Chi-Square Test Is Statistically Significant: Now What? Practical Assessment, Research & Evaluation. 2015;20(8).
  6. Mehta CR, Patel NR. A Network Algorithm for Performing Fisher’s Exact Test in r by c Contingency Tables. Journal of the American Statistical Association. 1983;78(382):427–434.
  7. Haberman SJ. The Analysis of Residuals in Cross-Classified Tables. Biometrics. 1973;29(1):205–220.
  8. Everitt BS. The Analysis of Contingency Tables. New York; 1992.
  9. Agresti A. Categorical Data Analysis. 3rd ed. Hoboken, New Jersey: Wiley; 2012. Available from: http://www.worldcat.org/isbn/0470463635.
  10. MacDonald PL, Gardner RC. Type I Error Rate Comparisons of Post Hoc Procedures for I × J Chi-Square Tables. Educational and Psychological Measurement. 2000;60(5):735–754.
  11. Shan G, Ma C. Unconditional tests for comparing two ordered multinomials. Statistical Methods in Medical Research. 2016;25(1):241–254. pmid:22700600
  12. Shan G, Ma C, Hutson AD, Wilding GE. An efficient and exact approach for detecting trends with binary endpoints. Statistics in Medicine. 2012;31(2):155–164. pmid:22162106
  13. Shan G. Exact Statistical Inference for Categorical Data. 1st ed. San Diego, CA: Academic Press; 2015. Available from: http://www.worldcat.org/isbn/0081006810.
  14. Shan G. A Note on Exact Conditional and Unconditional Tests for Hardy-Weinberg Equilibrium. Human Heredity. 2013;76(1):10–17. pmid:23921792
  15. Shan G. Exact sample size determination for the ratio of two incidence rates under the Poisson distribution. Computational Statistics. 2016;31(4):1633–1644.
  16. Shan G, Wilding GE, Hutson AD, Gerstenberger S. Optimal adaptive two-stage designs for early phase II clinical trials. Statistics in Medicine. 2016;35(8):1257–1266. pmid:26526165
  17. Wang W, Shan G. Exact confidence intervals for the relative risk and the odds ratio. Biometrics. 2015;71(4):985–995. pmid:26228945
  18. Simes RJ. An improved Bonferroni procedure for multiple tests of significance. Biometrika. 1986;73(3):751–754.
  19. Roberts G, Martyn AL, Dobson AJ, McCarthy WH. Tumour thickness and histological type in malignant melanoma in New South Wales, Australia, 1970–76. Pathology. 1981;13(4):763–770. pmid:7335383
  20. Dobson AJ, Barnett A. An Introduction to Generalized Linear Models (Chapman & Hall/CRC Texts in Statistical Science). 3rd ed. Chapman and Hall/CRC; 2008. Available from: http://www.worldcat.org/isbn/1584889500.
  21. Patefield M. Algorithm AS 159: An efficient method of generating r × c tables with given row and column totals. Applied Statistics. 1981;30:91–97.
  22. Demirhan H. rTableICC: An R Package for Random Generation of 2 × 2 × K and R × C Contingency Tables. The R Journal. 2016;8(1):48–63.
  23. Shan G, Wang W. ExactCIdiff: An R Package for Computing Exact Confidence Intervals for the Difference of Two Proportions. The R Journal. 2013;5(2):62–71.
  24. Shan G, Wilding GE. Powerful Exact Unconditional Tests for Agreement between Two Raters with Binary Endpoints. PLoS ONE. 2014;9(5):e97386. pmid:24837970
  25. Shan G, Zhang H. Exact unconditional sample size determination for paired binary data (letter commenting: J Clin Epidemiol. 2015;68:733–739). Journal of Clinical Epidemiology. 2017;84:188–190. pmid:28063912
  26. Shan G, Wang W. Exact one-sided confidence limits for Cohen’s kappa as a measurement of agreement. Statistical Methods in Medical Research. 2017;26(2):615–632. pmid:25288510
  27. Shan G. Comments on ‘Two-sample binary phase 2 trials with low type I error and low sample size’. Statistics in Medicine. 2017;36(21):3437–3438. pmid:28776726
  28. Fisher RA. The Design of Experiments. 9th ed. Edinburgh, UK: Macmillan Pub Co; 1935. Available from: http://www.worldcat.org/isbn/0028446909.
  29. Lancaster HO. The derivation and partition of χ2 in certain discrete distributions. Biometrika. 1949;36(Pt. 1–2):117–129. pmid:18146218
  30. Jin M, Wang B. Implementing Multiple Comparisons on Pearson Chi-square Test for an R × C Contingency Table in SAS. SAS. 2014;1544.
  31. Shan G. More efficient unconditional tests for exchangeable binary data with equal cluster sizes. Statistics & Probability Letters. 2013;83(2):644–649.
  32. Shan G. Exact confidence intervals for randomized response strategies. Journal of Applied Statistics. 2016;43(7):1279–1290.
  33. Shan G, Zhang H, Jiang T, Peterson H, Young D, Ma C. Exact p-Values for Simon’s Two-Stage Designs in Clinical Trials. 2016;8(2):351–357.
  34. Shan G, Zhang H, Jiang T. Minimax and admissible adaptive two-stage designs in phase II clinical trials. BMC Medical Research Methodology. 2016;16(1):90. pmid:27485595
  35. Shan G, Bernick C, Banks S. Sample size determination for a matched-pairs study with incomplete data using exact approach. The British Journal of Mathematical and Statistical Psychology. 2017. pmid:28664985