
The Influence of Feature Selection Methods on Accuracy, Stability and Interpretability of Molecular Signatures

  • Anne-Claire Haury,

    anne-claire.haury@mines-paristech.fr

    Affiliations Mines ParisTech, Centre for Computational Biology, Fontainebleau, France, Institut Curie, Paris, France, Institut National de la Santé et de la Recherche Médicale, Paris, France

  • Pierre Gestraud,

    Affiliations Mines ParisTech, Centre for Computational Biology, Fontainebleau, France, Institut Curie, Paris, France, Institut National de la Santé et de la Recherche Médicale, Paris, France

  • Jean-Philippe Vert

    Affiliations Mines ParisTech, Centre for Computational Biology, Fontainebleau, France, Institut Curie, Paris, France, Institut National de la Santé et de la Recherche Médicale, Paris, France

Abstract

Biomarker discovery from high-dimensional data is a crucial problem with enormous applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, but surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. In this study we compare feature selection methods on public gene expression datasets for breast cancer prognosis, in terms of predictive performance, stability and functional interpretability of the signatures they produce. We observe that the feature selection method has a significant influence on the accuracy, stability and interpretability of signatures. Surprisingly, complex wrapper and embedded methods generally do not outperform simple univariate feature selection methods, and ensemble feature selection generally has no positive effect. Overall, a simple Student's t-test seems to provide the best results.

Introduction

Biomarker discovery from high-dimensional data, such as transcriptomic or SNP profiles, is a crucial problem with enormous applications in biology and medicine, such as diagnosis, prognosis, patient stratification in clinical trials or prediction of the response to a given treatment. Numerous studies have for example investigated so-called molecular signatures, i.e., predictive models based on the expression of a small number of genes, for the stratification of early breast cancer patients into low-risk or high-risk of relapse, in order to guide the need for adjuvant therapy [1].

While predictive models could be based on the expression of more than a few tens of genes, several reasons motivate the search for short lists of predictive genes. First, from a statistical and machine learning perspective, restricting the number of variables is often a way to reduce over-fitting when we learn in high dimension from few samples and can thus lead to better predictions on new samples. Second, from a biological viewpoint, inspecting the genes selected in the signature may shed light on biological processes involved in the disease and suggest novel targets. Third, and to a lesser extent, a small list of predictive genes allows the design of cheap dedicated prognostic chips.

Published signatures share, however, very few genes in common, raising questions about their biological significance [2]. Independently of differences in cohorts or technologies, [3] and [4] demonstrate that a major cause for the lack of overlap between signatures is that many different signatures lead to similar predictive accuracies, and that the process of estimating a signature is very sensitive to the samples used in the gene selection phase. Specifically, [5] suggests that many more samples than currently available would be required to reach a decent level of signature stability, meaning in particular that no biological insight should be expected from the analysis of current signatures. On the positive side, some authors noticed that the biological functions captured by different signatures are similar, in spite of the little overlap between them at the gene level [6]–[8].

From a machine learning point of view, estimating a signature from a set of expression data is a problem of feature selection, an active field of research in particular in the high-dimensional setting [9]. While the limits of some basic methods for feature selection have been highlighted in the context of molecular signatures, such as gene selection by Pearson correlation with the output [5], there are surprisingly few, and only partial, investigations of the influence of the feature selection method on the performance and stability of the signature. [10] compared various feature selection methods in terms of predictive performance only, and [11] suggest that ensemble feature selection improves both stability and accuracy of SVM recursive feature elimination (RFE), without comparing it with other methods. However, it remains largely unclear how “modern” feature selection methods such as the elastic net [12], SVM RFE or stability selection [13] behave in these regards and how they compare to more basic univariate techniques.

Here we propose an empirical comparison of a panel of feature selection techniques in terms of accuracy and stability, both at the gene and at the functional level. Using four breast cancer datasets, we observe significant differences between the methods. Surprisingly, we find that ensemble feature selection, i.e., combining multiple signatures estimated on random subsamples, generally has no positive impact, and that simple filters can outperform more complex wrapper or embedded methods.

Materials and Methods

Feature selection methods

We compare eight common feature selection methods to estimate molecular signatures. All methods take as input a matrix of gene expression data for a set of samples from two categories (good and bad prognosis in our case), and return a set of genes of a user-defined size. These genes can then be used to estimate a classifier that predicts the class of any sample from the expression values of these genes only. Feature selection methods are usually classified into three categories [9], [14]: filter methods select subsets of variables as a pre-processing step, independently of the chosen predictor; wrapper methods use the learning machine of interest as a black box to score subsets of variables according to their predictive power; finally, embedded methods perform variable selection in the process of training and are usually specific to given learning machines. We have selected popular methods representing these three classes, as described below.

Filter methods

Univariate filter methods rank all variables in terms of relevance, as measured by a score which depends on the method. They are simple to implement and fast to run. To obtain a signature of a given size, one simply takes the top-ranked genes according to the score. We consider the following four scoring functions to rank the genes: the Student's t-test and the Wilcoxon rank-sum test, which evaluate whether each feature is differentially expressed between the two classes; and the Bhattacharyya distance and the relative entropy, which measure a distance between the distributions of the two groups. We used the MATLAB Bioinformatics toolbox to compute these scoring functions.
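The sketch below illustrates how such a univariate filter can be implemented in Python; it is not the authors' MATLAB code, and the function name, the two scores shown and the random data at the end are illustrative placeholders only.

```python
# Minimal sketch of a univariate filter: rank genes by a score, keep the top ones.
# X is an (n_samples, n_genes) expression matrix, y a binary label vector.
import numpy as np
from scipy import stats

def filter_signature(X, y, size, score="t-test"):
    """Rank genes by a univariate score and return the indices of the top `size` genes."""
    good, bad = X[y == 0], X[y == 1]
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        if score == "t-test":
            # Student's t-test: is gene j differentially expressed between the classes?
            scores[j] = abs(stats.ttest_ind(good[:, j], bad[:, j]).statistic)
        elif score == "wilcoxon":
            # Wilcoxon rank-sum statistic: non-parametric alternative to the t-test
            scores[j] = abs(stats.ranksums(good[:, j], bad[:, j]).statistic)
    # The signature is simply the `size` genes with the largest score.
    return np.argsort(scores)[::-1][:size]

# Example usage on random placeholder data (not a real expression dataset):
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 500)), rng.integers(0, 2, size=80)
signature = filter_signature(X, y, size=100)
```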

Wrapper methods

Wrapper methods attempt to jointly select sets of variables with good predictive power for a predictor. Since testing all combinations of variables is computationally intractable, wrapper methods usually perform a greedy search in the space of feature sets. We test SVM recursive feature elimination (RFE) [15], which starts with all variables and iteratively removes the variables which contribute least to a linear SVM classifier trained on the current set of variables. We remove a fixed proportion of the features at each iteration until only a small number remain, and then remove these one by one in order to rigorously rank them. Following [11], we fixed the SVM regularization parameter and checked afterwards that other values did not have a significant influence on the results. Alternatively, we test a Greedy Forward Selection (GFS) strategy for least squares regression, also termed Orthogonal Matching Pursuit, where we start with no variable and add them one by one, selecting each time the one which minimizes the residual sum of squares in an internal cross-validation setting. This algorithm is implemented in the SPAMS toolbox for MATLAB, initially published along with [16].
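As an illustration, a minimal SVM-RFE sketch based on scikit-learn is shown below; the elimination step size, the value of C and the use of scikit-learn itself are assumptions for illustration, not a reproduction of the setup used in the paper.

```python
# Sketch of SVM recursive feature elimination: repeatedly train a linear SVM and
# drop the features with the smallest absolute weights until `size` remain.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def svm_rfe_signature(X, y, size, step=0.2, C=1.0):
    """Return the indices of the `size` genes kept by SVM-RFE (C and step are placeholders)."""
    svm = SVC(kernel="linear", C=C)
    # step=0.2 removes 20% of the remaining features at each iteration;
    # the exact elimination schedule of the paper is not reproduced here.
    rfe = RFE(estimator=svm, n_features_to_select=size, step=step)
    rfe.fit(X, y)
    return np.where(rfe.support_)[0]
```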

Embedded methods

Embedded methods are learning algorithms which perform feature selection in the process of training. We test the popular Lasso regression [17], where a sparse linear predictor is estimated by minimizing the sum of the mean square error on the training set (encoding the two categories as numerical values) and an ℓ1 penalty on the weight vector. The regularization parameter controls the degree of sparsity of the solution, i.e., the number of features selected; we fix it as the smallest value which gives a signature of the desired size. Alternatively, we tested the elastic net [12], which is similar to the Lasso but replaces the ℓ1 norm of the weights by a combination of the ℓ1 and ℓ2 norms. By allowing the selection of correlated predictive variables, the elastic net is supposed to be more robust than the Lasso while still selecting predictive variables. Again, we tune the regularization parameter to achieve a user-defined level of sparsity. For both algorithms, we used the code implemented in the SPAMS toolbox.
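The following sketch shows one way to tune the Lasso penalty to a target signature size using the scikit-learn regularization path rather than the SPAMS toolbox used in the paper; the choice of path resolution and the fallback rule are our own simplifications.

```python
# Sketch of sparsity-targeted Lasso selection: walk the regularization path and
# stop at the first penalty whose solution reaches the desired signature size.
import numpy as np
from sklearn.linear_model import lasso_path

def lasso_signature(X, y, size):
    """Return the indices of genes selected by the Lasso at the first point of the
    regularization path where at least `size` coefficients are non-zero."""
    # lasso_path returns coefficients for a decreasing sequence of penalties.
    alphas, coefs, _ = lasso_path(X, y.astype(float), n_alphas=200)
    n_selected = (coefs != 0).sum(axis=0)            # signature size per penalty value
    candidates = np.where(n_selected >= size)[0]
    # Fall back to the weakest penalty if the desired size is never reached.
    idx = candidates[0] if len(candidates) else len(alphas) - 1
    return np.where(coefs[:, idx] != 0)[0][:size]
```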

Ensemble feature selection

Many feature selection methods are known to be sensitive to small perturbations of the training data, resulting in unstable signatures. In order to “stabilize” variable selection, several authors have proposed to use ensemble feature selection on bootstrap samples: the variable selection method is run on several random subsamples of the training data, and the different lists of variables selected are merged into a hopefully more stable subset [11], [13], [18].

For each feature selection method described above, we additionally tested the following three aggregation strategies for ensemble feature selection. We first bootstrap the training samples several times (i.e., we repeatedly draw, with replacement, a sample of the same size as the training set) and obtain one ranking of all features per bootstrap sample by applying the feature selection method to it. For filter methods, the ranking of features is naturally obtained by decreasing score. For RFE and GFS, the ranking is the order in which the features are removed or added in the iterative process. For the Lasso and the elastic net, the ranking is the order in which the variables become selected as the regularization parameter decreases. We then aggregate the rankings by computing a score for each gene as the average of a function of its rank across the bootstrap experiments. We test the following functions of the rank for aggregation:

  • Ensemble-mean [11]: we simply average the rank of a gene over the bootstrap experiments.
  • Ensemble-stability selection [13]: we measure the fraction of bootstrap samples in which the gene ranks among the top genes, i.e., its selection frequency.
  • Ensemble-exponential: we propose a soft version of stability selection, where we average an exponentially decreasing function of the rank over the bootstrap experiments.

Finally, for each rank aggregation strategy, the aggregated list is the set of genes with the largest score.
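A minimal sketch of this bootstrap-and-aggregate procedure is given below. It assumes a generic, user-supplied `rank_genes(X, y)` function returning genes ordered from best to worst (e.g., one of the methods above); the number of bootstrap rounds and the exponential decay rate are illustrative choices, not the values used in the paper.

```python
# Sketch of ensemble feature selection by rank aggregation over bootstrap samples.
import numpy as np

def ensemble_signature(X, y, size, rank_genes, n_bootstrap=50, aggregation="stability"):
    n_samples, n_genes = X.shape
    rng = np.random.default_rng(0)
    scores = np.zeros(n_genes)
    for _ in range(n_bootstrap):
        # Draw a bootstrap sample (same size as the training set, with replacement).
        idx = rng.integers(0, n_samples, size=n_samples)
        order = rank_genes(X[idx], y[idx])        # gene indices, best to worst
        rank = np.empty(n_genes)
        rank[order] = np.arange(n_genes)          # rank[g] = position of gene g
        if aggregation == "mean":
            scores -= rank                        # ensemble-mean: low average rank is good
        elif aggregation == "stability":
            scores += (rank < size)               # ensemble-stability: counted if in the top `size`
        elif aggregation == "exponential":
            scores += np.exp(-rank / size)        # ensemble-exponential: soft version (illustrative decay)
    # The aggregated signature is the set of genes with the largest score.
    return np.argsort(scores)[::-1][:size]
```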

Accuracy of a signature

In order to measure the predictive accuracy of a feature selection method, we assess the performance of various supervised classification algorithms trained on the data restricted to the selected signature. More precisely, we test five classification algorithms: nearest centroids (NC), k-nearest neighbors (KNN), linear SVM, linear discriminant analysis (LDA) and naive Bayes (BAYES). The parameters of the KNN and SVM methods are fixed to arbitrary default values, and we checked, by testing a few other values, that no significantly better results could be obtained. We assess the performance of a classifier by the area under the ROC curve (AUC), in two different settings. First, on each dataset, we perform a 10-fold cross-validation (CV) experiment, where both feature selection and training of the classifier are performed on nine tenths of the data, and the AUC is computed on the remaining tenth. This is a classical way to assess the relevance of feature selection on a given dataset. Second, to assess the performance of a signature across datasets, we estimate a signature on one dataset, and assess its accuracy on other datasets by again running a 10-fold CV experiment where only the classifier (restricted to the genes in the signature) is retrained on each training set. In both cases, we report the mean AUC across the folds and datasets, and assess the significance of differences between methods with a paired ANOVA test.
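The sketch below illustrates the first protocol with a nearest-centroid classifier: gene selection and classifier training are both confined to the training fold, so the held-out samples never influence the selection. The `select(X, y, size)` interface and the distance-based decision score used for the AUC are our own conveniences, not necessarily the authors' choices.

```python
# Sketch of 10-fold CV accuracy with in-fold feature selection (no selection bias).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import roc_auc_score

def cv_auc(X, y, select, size, n_folds=10):
    aucs = []
    for train, test in StratifiedKFold(n_splits=n_folds, shuffle=True,
                                       random_state=0).split(X, y):
        genes = select(X[train], y[train], size)   # selection on the training fold only
        clf = NearestCentroid().fit(X[train][:, genes], y[train])
        # NearestCentroid has no probability output; use the difference of distances
        # to the two class centroids as a decision score for the AUC.
        d = np.linalg.norm(X[test][:, genes][:, None, :] - clf.centroids_[None], axis=2)
        aucs.append(roc_auc_score(y[test], d[:, 0] - d[:, 1]))
    return float(np.mean(aucs))
```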

Stability of a signature

To assess the stability of feature selection methods, we compare signatures estimated on different samples in various settings. First, to evaluate stability with respect to small perturbations of the training set, we randomly subsample each dataset into pairs of partially overlapping subsets of samples, estimate a signature on each subset, and compute the overlap between the two signatures in a pair as the fraction of shared genes. Note that this corresponds to the figure of merit defined by [5]. The random sampling of subsets is repeated several times on each dataset, and the stability values are averaged over all repetitions. We refer to this procedure as the soft-perturbation setting in the remainder of this paper. Second, to assess stability with respect to strong perturbations within a dataset, we repeat the same procedure but this time with no overlap between the two subsets of samples. In practice, to ensure that they have no overlap, each subset can contain at most half of the samples of the dataset. Again, we measure the overlap between the signatures estimated on training sets with no sample in common. We call this procedure the hard-perturbation setting. Finally, to assess the stability across datasets, we estimate signatures on each dataset independently, using all samples of each dataset, and measure their overlap. We call this procedure the between-datasets setting below.
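A minimal sketch of the soft- and hard-perturbation stability measures follows; the `select(X, y, size)` interface, the number of repetitions and the particular overlapping split used for the soft setting are illustrative assumptions.

```python
# Sketch of perturbation-based stability: estimate two signatures on two subsamples
# (overlapping or disjoint) and report the fraction of shared genes, as in [5].
import numpy as np

def signature_overlap(sig_a, sig_b):
    """Fraction of shared genes between two signatures of the same size."""
    sig_a, sig_b = set(sig_a), set(sig_b)
    return len(sig_a & sig_b) / len(sig_a)

def perturbation_stability(X, y, select, size, n_repeats=20, disjoint=False, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    overlaps = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        if disjoint:                              # hard perturbation: no sample in common
            a, b = perm[: n // 2], perm[n // 2 :]
        else:                                     # soft perturbation: two overlapping subsets
            a, b = perm[: int(0.8 * n)], perm[int(0.2 * n) :]   # (80% split is illustrative)
        overlaps.append(signature_overlap(select(X[a], y[a], size),
                                          select(X[b], y[b], size)))
    return float(np.mean(overlaps))
```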

Functional interpretability and stability of a signature

To interpret a signature in terms of biological functions, we perform functional enrichment analysis by inspecting the signature for over-represented Gene Ontology (GO) terms. This may hint at biological hypotheses underlying the classification [6], [7]. We perform a hypergeometric test on each of the GO biological process (BP) terms that are associated with at least one gene in our dataset, and correct the resulting p-values for multiple testing with the procedure of [19]. To assess the interpretability of a signature, i.e., how easily one can extract a biological interpretation from it, we compute the number of GO terms over-represented at a fixed FDR threshold. To compare two signatures in functional terms, we first extract from each signature the list of 10 GO terms with the smallest p-values, and compare the two lists of GO terms with the similarity measure of [20], which takes into account not only the overlap between the lists but also the relationships between GO BP terms. Finally, to assess the functional stability of a selection method, we follow a procedure similar to the one presented in the previous section and measure the mean functional similarity of signatures in the soft-perturbation, hard-perturbation and between-datasets settings.
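The sketch below outlines the enrichment step, assuming a hypothetical `go_terms` mapping from each GO BP term to the set of annotated gene ids; the FDR threshold is a parameter here because the exact value used in the paper is not reproduced.

```python
# Sketch of GO BP enrichment: hypergeometric test per term, Benjamini-Hochberg
# correction [19], and the number of significant terms as an interpretability score.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def go_enrichment(signature, background, go_terms, fdr=0.05):
    signature, background = set(signature), set(background)
    terms, pvals = [], []
    for term, genes in go_terms.items():
        genes = genes & background
        if not genes:
            continue
        k = len(genes & signature)       # annotated genes falling inside the signature
        # P(X >= k) with len(background) genes in total, len(genes) annotated,
        # and len(signature) genes drawn.
        p = hypergeom.sf(k - 1, len(background), len(genes), len(signature))
        terms.append(term)
        pvals.append(p)
    rejected, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    # Interpretability: the GO terms over-represented at the chosen FDR.
    return [(t, q) for t, q, r in zip(terms, qvals, rejected) if r]
```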

Data

We collected four breast cancer datasets from Gene Expression Omnibus [21], as described in Table 1. The four datasets address the same problem of predicting metastatic relapse in breast cancer on different cohorts, and were all obtained with the Affymetrix HG-U133A technology. We used a custom CDF file with EntrezGene ids as identifiers [22] to estimate gene-level expression values on each array, and normalized all arrays with the Robust Multi-array Average procedure [23].

Results

Accuracy

We first assess the accuracy of signatures obtained by different feature selection methods. Intuitively, the accuracy refers to the performance that a classifier trained on the genes in the signature can reach in prediction. Although some feature selection methods (wrapper and embedded) jointly estimate a predictor, we dissociate here the process of selecting a set of genes and training a predictor on these genes, in order to perform a fair comparison common to all feature selection methods. We test the accuracy of 100-gene signatures obtained by each feature selection method, combined with 5 classifiers to build a predictor as explained in the Methods section. Table 2 shows the mean accuracies (in AUC) over the datasets as reached by the different combinations in 10-fold cross-validation.

Globally, we observe only limited differences between the feature selection methods for a given classification method. In particular, the selection of a random signature reaches a baseline AUC comparable to that of the other methods, confirming results already observed by [3]. Second, we observe that, among all classification algorithms, the simple NC classifier consistently gives good results compared to the other classifiers. We therefore choose it as the default classification algorithm for the further assessment of signature performance below. Figure 1 graphically depicts the AUC reached by each feature selection method with NC as a classifier, reproducing the first three lines of Table 2. Although the t-test has the best average AUC, the results vary widely across datasets, which explains the large error bars. In fact, a paired ANOVA test detects no method significantly better than the random selection strategy; the only significant differences are observed between the t-test, on the one hand, and entropy and GFS, which have the lowest performances without aggregation, on the other hand. In particular, we observe that ensemble methods for feature selection do not bring any significant improvement in accuracy.

Figure 1. Area under the ROC curve.

Signature of size 100 in a 10-fold CV setting, averaged over the four datasets.

https://doi.org/10.1371/journal.pone.0028210.g001

In order to assess how a signature estimated on one dataset behaves on another dataset, we report the results of between-datasets experiments in Table 3. For each training dataset, we highlight the method with the best results, and report the average results (over the folds) in the last row. In this setting, we barely notice any difference compared with the cross-validation setting (Table 2) and essentially reach the same conclusions, namely that no significant result stands out, except that the t-test performs overall better than entropy.

In order to check how these results depend on the size of the signature, we plot in Figure 2 the AUC of the feature selection methods, with or without ensemble averaging, combined with an NC classifier, as a function of the size of the signature. Interestingly, we observe that in some cases the AUC increases early, implying that fewer than 100 genes may be sufficient to obtain the maximal performance. Indeed, while 100-gene signatures perform significantly better than very short gene lists regardless of the method or the setting, intermediate signature sizes do not lead to significantly worse performances in general. It is worth noting that some algorithms have an increasing AUC curve in this range of sizes, and we observe no overfitting that would lead to a decreasing AUC when the number of features increases. Random selection was previously shown to give an AUC equivalent to other methods for a large signature, but, as this figure shows, the fewer the genes, the larger the gap in AUC.

Figure 2. Area under the ROC Curve.

NC classifier trained as a function of the size of the signature, for different feature selection methods, in a 10-fold CV setting averaged over the four datasets.

https://doi.org/10.1371/journal.pone.0028210.g002

In order to assess the influence of the number of samples used to estimate the signature, we computed the 10-fold cross-validation AUC (repeated 50 times) reached with an NC classifier as a function of the number of samples in the training set. Figure 3 shows the AUC averaged over the four datasets, for each feature selection method, while Figure 4 shows the same AUC on each dataset separately. Unsurprisingly, we observe that the average accuracy clearly increases with the number of samples in the training set, for all methods, and that the relative order of the different methods does not strongly depend on the number of samples. While the curves cannot be reliably extrapolated, it is plausible that they would continue to increase up to a certain point. On this plot, the t-test clearly outperforms the rest of the methods. However, looking at the behavior of the methods with respect to the size of the training set on each dataset separately, we note that not only the level of performance but also the relative order between methods strongly depends on the dataset. For example, while the t-test outperforms all methods on the GSE4922 dataset, the Lasso and the elastic net seem to be the best choices on GSE2034. On the other hand, we observe that the best methods on each dataset have not reached their asymptote yet, suggesting by extrapolation that better accuracies could be reached with larger cohorts.

Figure 3. Area under the ROC Curve.

NC classifier trained as a function of the number of samples in a 10-fold CV setting. We show here the accuracy for 100-gene signatures, averaged over the datasets. Note that the maximum value of the x-axis is constrained by the smallest dataset, namely GSE2990.

https://doi.org/10.1371/journal.pone.0028210.g003

Figure 4. Area under the ROC Curve.

NC classifier trained as a function of the number of samples in a 10-fold CV setting, for each of the four datasets. We show here the accuracy for 100-gene signatures.

https://doi.org/10.1371/journal.pone.0028210.g004

Stability of gene lists

We now assess the stability of signatures produced by the different feature selection methods at the gene level. Figure 5 compares the stability of 100-gene signatures estimated by all feature selection methods tested in this benchmark, in the three experimental settings: soft-perturbation, hard-perturbation and between-datasets. The results are averaged over the repetitions and the four datasets. It appears very clearly and significantly that filter methods provide more stable lists than wrapper and embedded methods. It also appears that ensemble-exponential and ensemble-stability selection yield much more stable signatures than ensemble-mean. It is worth noting that a significant gain in robustness through bootstrapping is only observable for relative entropy and the Bhattacharyya distance. Interestingly, SVM-RFE seems to benefit from ensemble aggregation in the soft-perturbation setting, as observed by [11], but this effect vanishes in the more relevant hard-perturbation and between-datasets settings.

Figure 5. Stability for a signature of size 100.

Average and standard errors are obtained over the four datasets. a) Soft-perturbation setting. b) Hard-perturbation setting. c) Between-datasets setting.

https://doi.org/10.1371/journal.pone.0028210.g005

Subfigures 5B and 5C are strikingly similar, while subfigure 5A stands apart. This confirms that the hard-perturbation setting is the best way to estimate the behavior of the algorithms across different studies. The larger stability observed in the between-datasets setting compared to the hard-perturbation setting for some methods (e.g., the t-test) is essentially due to the fact that signatures are trained on more samples in the between-datasets setting, since no split is required within a dataset. Figure 6 illustrates this difference for one feature selection method. It shows the stability of the t-test in both settings as a function of the number of samples used to estimate the signatures. While both curves remain low, we observe, like [5], a very strong effect of the number of samples. Interestingly, for very small sample sizes the stability in the hard-perturbation setting is a good proxy for the stability in the between-datasets setting. However, the slope of the hard-perturbation stability curve seems sharper, suggesting that the gap would widen for larger sample sizes if the curves were extrapolated. These results suggest that i) the main reason for the low stability values is really the sample size, and ii) the uniformity of the cohort still plays a role for larger training sets.

Figure 6. Evolution of the stability of t-test signatures with respect to the size of the training set, in the hard-perturbation and between-datasets settings, from GSE2034 and GSE4922.

https://doi.org/10.1371/journal.pone.0028210.g006

We also observe in Figure 7 that the relative stability of the different methods does not depend on the size of the signature over a wide range of values, confirming that the differences observed for signatures of size 100 reveal robust differences between the methods.

Figure 7. Stability of different methods in the between-dataset setting, as a function of the size of the signature.

https://doi.org/10.1371/journal.pone.0028210.g007

Interpretability and functional stability

Even when different signatures share little or no overlap in terms of genes, it is possible that they encode the same biological processes and remain useful, provided we can extract information about these processes from the gene lists in a robust manner. In the case of breast cancer prognostic signatures, for example, several recent studies have shown that functional analysis of the signatures can highlight coherent biological processes [6], [7], [24]–[26]. Just like stability at the gene level, it is therefore important to assess the stability of the biological interpretation that one can extract from signatures.

First, we evaluate the interpretability of signatures of size 100, i.e., the ability of functional analysis to bring out a biological interpretation for a signature.

As shown in Figure 8, the four filter methods appear to be much more interpretable than the wrapper and embedded methods. However, it should be pointed out that the number of significant GO terms is often zero regardless of the algorithm, leading to large error bars. Ensemble methods do not seem to enhance the interpretability of signatures.

Figure 8. GO interpretability for a signature of size 100.

Average number of GO BP terms significantly over-represented.

https://doi.org/10.1371/journal.pone.0028210.g008

Second, we assess in Figure 9 the functional stability of all methods in the three settings. While the baseline stability obtained by random signatures is approximately the same regardless of the setting, we observe that, as for stability at the gene level, the soft- and hard-perturbation settings can lead to very different conclusions. This again suggests that the high functional stability obtained by several methods in the soft-perturbation setting is mainly due to the overlap in samples. Hence the hard-perturbation setting seems to be a much better proxy for the between-datasets framework.

Figure 9. GO stability for a signature of size 100.

Average and standard errors are obtained over the four datasets. A) Soft-perturbation setting. B) Hard-perturbation setting. C) Between-datasets setting.

https://doi.org/10.1371/journal.pone.0028210.g009

Stability results at the functional level are very similar to the results at the gene level: univariate filters are overall the most stable methods, and the hard-perturbation setting returns a trustworthy estimate of the between-datasets stability. In particular, an ANOVA procedure reveals that in the single-run settings, only signatures obtained from filters are significantly more stable than random ones. We also note that ensemble-mean never improves the functional stability, and that ensemble-exponential and ensemble-stability selection return more stable signatures than single-run selection for entropy and the Bhattacharyya distance, as well as, though less significantly, for GFS and the Lasso.

Bias issues in selection with entropy and Bhattacharyya distance

Gene selection by relative entropy and Bhattacharyya distance is more stable but less accurate than random selection, which suggests a bias whereby these methods preferentially and consistently select particular genes that are not necessarily predictive. To elucidate this behavior, we investigated the genes selected by these two methods. We noticed that they tend to be systematically expressed at low levels, as shown in Figure 10, and that they barely depend on the labels, which explains the high stability but low accuracy. In fact, the frequently selected genes systematically show a multimodal yet imbalanced distribution due to the presence of outliers, as illustrated in Figure 11. As soon as, by chance, one class of samples contains one or more outliers while the other class does not, this type of distribution produces a very high variance ratio between the two classes, and thus a very high value of the entropy and Bhattacharyya statistics. It is therefore likely that, although stable and interpretable, the molecular signatures generated by these two methods lead to erroneous interpretations.
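A toy numeric illustration of this effect, assuming the standard Gaussian form of the Bhattacharyya distance and arbitrary values, shows how a single outlier in one class inflates the score even when the two classes have the same mean expression.

```python
# Toy illustration: one outlier in one class inflates the variance ratio and hence
# the (Gaussian) Bhattacharyya distance, despite identical class means.
import numpy as np

def bhattacharyya_gaussian(a, b):
    """Bhattacharyya distance between two samples under a univariate Gaussian assumption."""
    ma, mb, va, vb = a.mean(), b.mean(), a.var(), b.var()
    return 0.25 * (ma - mb) ** 2 / (va + vb) + 0.5 * np.log((va + vb) / (2 * np.sqrt(va * vb)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=50)        # class 1: low, tightly distributed expression
b = rng.normal(0.0, 0.1, size=50)        # class 2: same distribution
print(bhattacharyya_gaussian(a, b))      # close to 0: nothing worth selecting
b[0] = 5.0                               # a single outlier in class 2 only
print(bhattacharyya_gaussian(a, b))      # much larger, driven by the variance ratio
```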

Figure 10. Bias in the selection through entropy and Bhattacharyya distance.

Estimated cumulative distribution functions (ECDF) of the first ten genes selected by four methods on GSE1456. They are compared to the ECDF of randomly chosen background genes.

https://doi.org/10.1371/journal.pone.0028210.g010

Figure 11. Estimated distribution of the first gene selected by entropy and Bhattacharyya distance.

https://doi.org/10.1371/journal.pone.0028210.g011

Discussion

We compared a panel of 32 feature selection methods in light of two important criteria: accuracy and stability, both at the gene and at the functional level. Figure 12 summarizes the relative performance of all methods, and deserves several comments.

Figure 12. Accuracy/stability trade-off.

Accuracy versus stability for each method in the between-datasets setting. We show here the average results over the four datasets.

https://doi.org/10.1371/journal.pone.0028210.g012

Taking random feature selection as a baseline, we first notice the strange behavior of gene selection by Bhattacharyya distance and relative entropy: both are more stable but less accurate than random selection. A careful investigation of the genes they select revealed that they tend to pick genes with low expression levels, independently of the sample labels. This unwanted behavior can easily be fixed by pre-filtering genes with small variations, but it highlights the danger of blindly trusting a feature selection method which, in this case, gives very stable and interpretable yet poorly predictive signatures.

Second, we observe that among the other methods, only the elastic net, the Lasso and the t-test clearly outperform random selection in terms of accuracy, and only the t-test outperforms it in terms of stability. Overall, the t-test gives both the best performance and the best stability. The fact that the Lasso is not stable is not surprising since, like most multivariate methods, it tries to avoid redundant genes in a signature and should therefore not be stable on data where many genes typically encode functionally related proteins. What was less expected is that neither the elastic net, which was designed precisely to counter this detrimental property of the Lasso by allowing the selection of groups of correlated genes, nor stability selection, which is supposed to stabilize the features selected by the Lasso, was significantly more stable than the Lasso. In addition, we also found very unstable behavior at the functional level. This raises questions about the relevance of these methods for gene expression data. Similarly, the behavior of wrapper methods was overall disappointing. SVM RFE and Greedy Forward Selection are neither more accurate, nor more stable or interpretable than other methods, while their computational cost is much higher. Although we observed, like [11], that SVM RFE can benefit from ensemble feature selection, it remains below the t-test both in accuracy and stability.

Overall, we observed that ensemble methods, which select features by aggregating signatures estimated on different bootstrap samples, increased the stability of some methods in some cases, but did not clearly improve the best methods. Regarding the aggregation step itself, we advise against the use of ensemble-mean, i.e., averaging the ranks of each gene over the bootstrapped lists, regardless of the selection method. Ensemble-stability selection and ensemble-exponential gave consistently better results. The superiority of the latter two can be explained by the high instability of the rankings, as discussed in [27].

Regarding the choice of method to train a classifier once features are selected, we observed that the best accuracy was achieved by the simplest one, namely the nearest centroids classifier, used e.g. by [10], [25]. An advantage of this classifier is that it requires no parameter tuning, making the computations fast and the classifier less prone to overfitting.

The performance evaluation of gene selection methods must be done carefully to prevent any selection bias, which could lead to underestimated error rates, as discussed in [28], [29]. This happens when, for example, a set of genes is selected on a set of samples, and its performance as a signature is then estimated by cross-validation on the same set. In our experimental protocol, we avoid this issue by ensuring that gene selection is never influenced by the test samples on which the accuracy is measured. In the 10-fold cross-validation setting, this means that genes are selected and the classification model is trained 10 times, once on each of the 10 training sets. Alternatively, we also tested the performance of prognostic signatures across datasets, where selection bias is clearly absent. We barely observed any difference between the 10-fold cross-validation setting and the across-datasets setting in terms of average accuracy, confirming that cross-validation without selection bias is a good way to estimate the generalization performance.

We noticed that evaluating the stability and the interpretability in a soft-perturbation setting may lead to untrustworthy results; the best estimates seem to be obtained in the hard-perturbation experiments. The lack of stability between datasets has been explained by four arguments. First, data may come from different technological platforms, which is not the case here. Second and third, there are differences in experimental protocols and in patient cohorts, which is indeed the case between our datasets. Fourth, the small number of samples leads to statistical instability. We nevertheless obtained very similar stability in the hard-perturbation setting (within each dataset) and in the between-datasets setting. This suggests that the main source of instability is not the difference in cohorts or experimental protocols, but really the statistical issue of working in high dimension with few samples.

Acknowledgments

We thank Simon Fourquet, Alexandre Hamburger, Laurent Jacob and Fabien Reyal for fruitful discussions.

Author Contributions

Conceived and designed the experiments: A-CH PG J-PV. Performed the experiments: A-CH PG. Analyzed the data: A-CH PG J-PV. Wrote the paper: A-CH PG J-PV.

References

  1. Sotiriou C, Pusztai L (2009) Gene-expression signatures in breast cancer. N Engl J Med 360: 790–800.
  2. Ioannidis JPA (2005) Microarrays and molecular research: noise discovery? Lancet 365: 454.
  3. Ein-Dor L, Kela I, Getz G, Givol D, Domany E (2005) Outcome signature genes in breast cancer: is there a unique set? Bioinformatics 21: 171–178.
  4. Michiels S, Koscielny S, Hill C (2005) Prediction of cancer outcome with microarrays: a multiple random validation strategy. Lancet 365: 488–492.
  5. Ein-Dor L, Zuk O, Domany E (2006) Thousands of samples are needed to generate a robust gene list for predicting outcome in cancer. Proc Natl Acad Sci USA 103: 5923–5928.
  6. Shen R, Chinnaiyan A, Ghosh D (2008) Pathway analysis reveals functional convergence of gene expression profiles in breast cancer. BMC Med Genomics 1: 28.
  7. Reyal F, Van Vliet M, Armstrong N, Horlings H, De Visser K, et al. (2008) A comprehensive analysis of prognostic signatures reveals the high predictive capacity of the proliferation, immune response and RNA splicing modules in breast cancer. Breast Cancer Res 10: R93.
  8. Wirapati P, Sotiriou C, Kunkel S, Farmer P, Pradervand S, et al. (2008) Meta-analysis of gene expression profiles in breast cancer: toward a unified understanding of breast cancer subtyping and prognosis signatures. Breast Cancer Res 10: R65.
  9. Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3: 1157–1182.
  10. Lai C, Reinders M, Van't Veer L, Wessels L, et al. (2006) A comparison of univariate and multivariate gene selection techniques for classification of cancer datasets. BMC Bioinformatics 7: 235.
  11. Abeel T, Helleputte T, Van de Peer Y, Dupont P, Saeys Y (2010) Robust biomarker identification for cancer diagnosis with ensemble feature selection methods. Bioinformatics 26: 392–398.
  12. Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B 67: 301–320.
  13. Meinshausen N, Bühlmann P (2010) Stability selection. J R Stat Soc Ser B 72: 417–473.
  14. Kohavi R, John G (1997) Wrappers for feature selection. Artif Intell 97: 273–324.
  15. Guyon I, Weston J, Barnhill S, Vapnik V (2002) Gene selection for cancer classification using support vector machines. Mach Learn 46: 389–422.
  16. Mairal J, Bach F, Ponce J, Sapiro G (2010) Online learning for matrix factorization and sparse coding. J Mach Learn Res 11: 19–60.
  17. Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B 58: 267–288.
  18. Bi J, Bennett K, Embrechts M, Breneman C, Song M (2003) Dimensionality reduction via sparse support vector machines. J Mach Learn Res 3: 1229–1243.
  19. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B 57: 289–300.
  20. Wang J, Du Z, Payattakool R, Yu P, Chen C (2007) A new method to measure the semantic similarity of GO terms. Bioinformatics 23: 1274.
  21. Barrett T, Troup D, Wilhite S, Ledoux P, Rudnev D, et al. (2009) NCBI GEO: archive for high-throughput functional genomic data. Nucleic Acids Res 37: D885.
  22. Dai M, Wang P, Boyd A, Kostov G, Athey B, et al. (2005) Evolving gene/transcript definitions significantly alter the interpretation of GeneChip data. Nucleic Acids Res 33: e175.
  23. Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, et al. (2003) Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics 4: 249–264.
  24. Fan C, Oh D, Wessels L, Weigelt B, Nuyten D, et al. (2006) Concordance among gene-expression-based predictors for breast cancer. N Engl J Med 355: 560.
  25. Abraham G, Kowalczyk A, Loi S, Haviv I, Zobel J (2010) Prediction of breast cancer prognosis using gene set statistics provides signature stability and biological context. BMC Bioinformatics 11: 277.
  26. Shi W, Bessarabova M, Dosymbekov D, Dezso Z, Nikolskaya T, et al. (2010) Functional analysis of multiple genomic signatures demonstrates that classification algorithms choose phenotype-related genes. Pharmacogenomics J 10: 310–323.
  27. Iwamoto T, Pusztai L (2010) Predicting prognosis of breast cancer with gene signatures: are we lost in a sea of data? Genome Med 2: 81.
  28. Ambroise C, McLachlan G (2002) Selection bias in gene extraction on the basis of microarray gene-expression data. Proc Natl Acad Sci USA 99: 6562–6566.
  29. Simon R, Radmacher M, Dobbin K, McShane L (2003) Pitfalls in the use of DNA microarray data for diagnostic and prognostic classification. J Natl Cancer Inst 95: 14.
  30. Pawitan Y, Bjoehle J, Amler L, Borg A, Egyhazi S, et al. (2005) Gene expression profiling spares early breast cancer patients from adjuvant therapy: derived and validated in two population-based cohorts. Breast Cancer Res 7: R953–R964.
  31. Wang Y, Klijn J, Zhang Y, Sieuwerts A, Look M, et al. (2005) Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancers. Lancet 365: 671–679.
  32. Sotiriou C, Wirapati P, Loi S, Harris A, Fox S, et al. (2006) Gene expression profiling in breast cancer: understanding the molecular basis of histologic grade to improve prognosis. J Natl Cancer Inst 98: 262.
  33. Ivshina A, George J, Senko O, Mow B, Putti T, et al. (2006) Genetic reclassification of histologic grade delineates new clinical subtypes of breast cancer. Cancer Res 66: 10292.