
"Are you gonna publish that?" Peer-reviewed publication outcomes of doctoral dissertations in psychology

  • Spencer C. Evans,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    scevans@fas.harvard.edu

    Affiliations Clinical Child Psychology Program, University of Kansas, Lawrence, KS, United States of America, Department of Psychology, Harvard University, Cambridge, MA, United States of America

  • Christina M. Amaro,

    Roles Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision, Writing – review & editing

    Affiliation Clinical Child Psychology Program, University of Kansas, Lawrence, KS, United States of America

  • Robyn Herbert,

    Roles Investigation, Writing – review & editing

    Affiliations Clinical Child Psychology Program, University of Kansas, Lawrence, KS, United States of America, Department of Psychology, Washington State University, Pullman, WA, United States of America

  • Jennifer B. Blossom,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliations Clinical Child Psychology Program, University of Kansas, Lawrence, KS, United States of America, Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, United States of America

  • Michael C. Roberts

    Roles Conceptualization, Methodology, Resources, Supervision

    Affiliation Clinical Child Psychology Program, University of Kansas, Lawrence, KS, United States of America

Abstract

If a doctoral dissertation represents an original investigation that makes a contribution to one’s field, then dissertation research could, and arguably should, be disseminated into the scientific literature. However, the extent and nature of dissertation publication remains largely unknown within psychology. The present study investigated the peer-reviewed publication outcomes of psychology dissertation research in the United States. Additionally, we examined publication lag, scientific impact, and variations across subfields. To investigate these questions, we first drew a stratified random cohort sample of 910 psychology Ph.D. dissertations from ProQuest Dissertations & Theses. Next, we conducted comprehensive literature searches for peer-reviewed journal articles derived from these dissertations published 0–7 years thereafter. Published dissertation articles were coded for their bibliographic details, citation rates, and journal impact metrics. Results showed that only one-quarter (25.6% [95% CI: 23.0, 28.4]) of dissertations were ultimately published in peer-reviewed journals, with significant variations across subfields (range: 10.1 to 59.4%). Rates of dissertation publication were lower in professional/applied subfields (e.g., clinical, counseling) compared to research/academic subfields (e.g., experimental, cognitive). When dissertations were published, however, they often appeared in influential journals (e.g., Thomson Reuters Impact Factor M = 2.84 [2.45, 3.23], 5-year Impact Factor M = 3.49 [3.07, 3.90]) and were cited numerous times (Web of Science citations per year M = 3.65 [2.88, 4.42]). Publication typically occurred within 2–3 years after the dissertation year. Overall, these results indicate that the large majority of Ph.D. dissertation research in psychology does not get disseminated into the peer-reviewed literature. The non-publication of dissertation research appears to be a systemic problem affecting both research and training in psychology. Efforts to improve the quality and “publishability” of doctoral dissertation research could benefit psychological science on multiple fronts.

Introduction

The doctoral dissertation—a defining component of the Doctor of Philosophy (Ph.D.) degree—is an original research study that meets the scientific, professional, and ethical standards of its discipline and advances a body of knowledge [1]. From this definition it follows that most dissertations could, and arguably should, be published in the peer-reviewed scientific literature [1,2]. For example, research participants typically volunteer their time and effort for the purposes of generating new knowledge of potential benefit; to breach this implicit contract by not attempting to disseminate one’s findings is therefore to violate the ethical standards of psychology [3] and of human subjects research [2,4]. The nonpublication of dissertation research can be detrimental to the advancement of scientific knowledge in other ways as well. Researchers may unwittingly and unnecessarily duplicate doctoral research when conducting empirical studies, or draw biased conclusions in meta-analytic and systematic reviews, which often deliberately exclude dissertations. Many dissertations go unpublished because of nonsignificant or complicated results, exacerbating the “file drawer” problem [5,6]. Indeed, unpublished dissertations are rarely if ever cited [7,8].

The problem of dissertation non-publication is of critical importance in psychology. Some evidence [9] suggests that unpublished dissertations can play a key role in alleviating file drawer bias and reproducibility concerns in psychological science [10]. More broadly, the field of psychology—given its unique strengths, breadth, and diversity—offers a useful case study for examining dissertation nonpublication in the social, behavioral, and health sciences. As in other scientific disciplines, many Ph.D. graduates in psychology may be motivated to revise and submit their dissertations for publication for the usual reasons offered by academic and research careers. However, other new psychologists might not pursue this goal for a variety of reasons. Those in professional and applied subfields may commit most or all of their working time to non-research activities (e.g., professional practice, clinical training) and have little incentive to seek publication. Even those in more research-oriented subfields increasingly take non-research positions (e.g., industry, consultation, teaching, policy work) or other career paths that do not incentivize publication. Negative graduate school experiences, alternative career pursuits, and personal or family matters can further decrease the likelihood of publication. Moreover, revising a lengthy document for submission as one or more journal articles is typically a challenging and time-consuming task. Still, all individuals holding a Ph.D. in psychology have (in theory) produced an original research study of scientific value, which should (again, in theory) be shared with the scientific community. Thus, for scientific, ethical, and training reasons, it is important to understand the frequency and quality of dissertation publication in psychology.

There is an abundance of literature relevant to this topic, including student and faculty perspectives (e.g., [11–13]) and studies of general research productivity during doctoral training and early career periods (e.g., [14–19]). However, evidence specifically regarding dissertation publication is remarkably sparse and inconsistent [8,20–24]. This literature is limited by non-representative samples, biased response patterns, and disciplinary scopes that are either too narrow or too broad to offer insights that are useful and generalizable for psychological science. For example, in the only psychology-specific study to our knowledge, Porter and Wolfle [23] mailed surveys to a random sample of individuals who had earned their psychology doctorates. Of 128 respondents, 59% reported that their dissertation research had led to at least one published article. Unfortunately, this study [23] and others (e.g., [8]) are now over 40 years old, offering little relevance to the present state of training and research in psychology. A much more recent and rigorous example comes from the field of social work. Using a literature-searching methodology and a random sample of 593 doctoral dissertations in social work, Maynard et al. [22] found that 28.8% had led to peer-reviewed publications. However, this estimate likely does not generalize to psychology and its myriad subfields. Thus, more comprehensive, rigorous, and recent data are needed to better understand dissertation publication in psychology.

Accordingly, the present study investigated the extent and nature of dissertation publication in psychology, specifically examining the following questions: (a) How many dissertations in psychology are eventually published in peer-reviewed journals? (b) How long does it take from dissertation approval to article publication? (c) What is the scientific impact of published dissertations (PDs)? and (d) Are there differences across subfields of psychology? Based on the literature and our own observations, we hypothesized that (a) a majority of dissertations in psychology would go unpublished; (b) dissertation publication would occur primarily during the first few years after Ph.D. approval, diminishing thereafter; (c) PDs would show evidence of at least moderate scientific influence via citation rates and journal metrics; and (d) professional/applied subfields (clinical, counseling, school/educational, industrial-organizational, behavioral) would yield fewer PDs than research-oriented subfields (social/personality, experimental, cognitive, neuroscience, developmental, quantitative).

Materials and methods

Sample

The dataset of psychology dissertations was obtained directly from ProQuest UMI’s Dissertations and Theses Database (PQDT), which is characterized as “the world’s most comprehensive collection of dissertations and theses. . . [including] full text for most of the dissertations added since 1997. . . . More than 70,000 new full text dissertations and theses are added to the database each year through dissertations publishing partnerships with 700 leading academic institutions worldwide” [25]. While international coverage varies across countries, PQDT’s repository is estimated to include approximately 97% of all U.S. doctoral dissertations [26], across all disciplines, institutions, and training models.

Upon request, PQDT provided a database of all dissertations indexed with the term “psychology” in the subject field during the year 2007. This resulted in a total population of 6,580 dissertations, which were then screened and sampled according to pre-defined criteria. The number of dissertations included at each stage in the sampling process is summarized in a PRISMA-style [27] flow diagram for the overall sample in Fig 1, and broken down by subfield in Table 1. Dissertations were excluded if written in a language other than English, for any degree other than Ph.D. (e.g., Psy.D., Ed.D.), or in any country other than U.S. The remaining dissertations were recoded for subfields based on the subject term classification in PQDT, with a few modifications (e.g., combining “neuroscience” and “biological psychology”). This left a remaining sample of 3,866 relevant dissertations, representing our population. This figure is approximately in line with the U.S. National Science Foundation’s Survey of Earned Doctorates [28] estimate that 3,276 research doctorates in psychology were granted during the year 2007, suggesting that PQDT could be slightly broader or more comprehensive in scope.

Fig 1. Flow diagram reflecting the numbers of dissertations included at each stage of the sampling process.

Note. PQDT = ProQuest Dissertations and Theses. a Categories of excluded dissertations are mutually exclusive, summing to 100%. b PQDT exclusion criteria were applied sequentially in the order presented; thus, the number associated with each exclusion criterion reflects how many were excluded from the sample that remained after the previous criterion was applied. Adapted from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram [27].

https://doi.org/10.1371/journal.pone.0192219.g001

Table 1. Stratified study sample by subfield of psychology.

https://doi.org/10.1371/journal.pone.0192219.t001

From this relevant population of 3,866, we drew a stratified random sample of 1,000 dissertations. This number was selected because it represented over 25% of the population and offered sufficient power to obtain 95% CIs less than ±3% for the overall proportion estimates (i.e., the primary research question). As shown in Table 1, the sampling procedure was stratified by subfield using a formula that sought to balance (a) power for between-group comparisons, aiming to include ≥50 dissertations from each subfield; and (b) representativeness to the population, aiming to include ≥10% of the dissertations from each subfield. This resulted in subfield sample sizes ranging from 59 for general/miscellaneous (75.6% of relevant subfield population) to 179 for clinical (12.5% of relevant subfield population). Ninety (9.0%) dissertations were later found to be ineligible during the full-text review because the approval date was before or after the year 2007. This incongruence was partly explained by copyright or graduation dates differing from the dissertation year, and was not significantly different across subfields. The resulting final sample consisted of 910 dissertations, with subfield samples ranging from 52 (general/miscellaneous) to 159 (clinical). Because this study did not meet the definition of human subjects research, institutional review board approval was not required.
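
To make the allocation logic concrete, the following R sketch shows one plausible reading of the stratification rule described above. It assumes a data frame pop with one row per dissertation and a subfield column; this is an illustration under those assumptions, not the authors' code, and the exact formula used to balance the total at 1,000 is not reproduced here.

    # Stratified random sampling: aim for >= 50 dissertations per subfield
    # and >= 10% of each subfield's population, capped at the stratum size.
    set.seed(2007)
    strata <- split(pop, pop$subfield)
    sample_stratum <- function(s) {
      n <- min(nrow(s), max(50, ceiling(0.10 * nrow(s))))
      s[sample(nrow(s), n), , drop = FALSE]
    }
    samp <- do.call(rbind, lapply(strata, sample_stratum))

    # Design weights for population-level estimates:
    # stratum population size divided by stratum sample size.
    pop_n  <- table(pop$subfield)
    samp_n <- table(samp$subfield)
    samp$w <- as.numeric(pop_n[as.character(samp$subfield)] /
                           samp_n[as.character(samp$subfield)])

The design weight corrects for the deliberate oversampling of small subfields (e.g., general/miscellaneous) relative to large ones (e.g., clinical) when computing population-level estimates.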

Search timeframe

We aimed to conduct prospective follow-up searches for PDs within a timeframe that was both (a) long enough to capture nearly 100% of PDs and (b) short enough for the results to retain their relevance to the current state of psychological science. Because the literature does not offer dissertation-publication “lag time” statistics for reference, we used the “half-life” of knowledge—i.e., the average time it takes for half of a body of knowledge to become disproven or obsolete [29,30]. Across methodologies, the half-life of knowledge in psychology has been estimated at 7–9 years [31–33]. Accordingly, we selected a prospective search window allowing 0–7 years for dissertations to be published. Because the doctoral dissertations were sampled from the year 2007, follow-up searches were restricted to articles published between 2007 and 2014. We elected to exclude candidate publications from years prior to 2007 for several reasons. First, most U.S. psychology Ph.D. programs follow a traditional dissertation model (and this would have been even more ubiquitous in 2007), in which the dissertation must be completed before it can be published in a peer-reviewed journal. Second, even for the minority of programs that might follow less conventional models such as dissertation-by-publication [34], the lag time to publication would likely still result in at least one PD appearing in print concurrently with or after the dissertation, and it would therefore be captured by our search strategy. Finally, any potential benefits of searching retrospectively were outweighed by the risk of introducing unreliability into the data, such as identifying false positives from student publications, master’s theses, pilot studies, or other analyses of the same sample. On the other end of our search window, candidate publications that appeared in print during or after 2015 were also not considered. Post hoc analyses (see Results) suggested that this 0–7 year timeframe was adequate.

Publication search and coding procedures

Searches for PDs were conducted in two rounds, utilizing scholarly databases in a manner consistent with the evidence regarding their specificity, sensitivity, and quality. Specifically, searches were conducted first in PsycINFO, which has high specificity for the psychological, social, and health sciences [35,36]; and second, in Google Scholar, casting a much broader net while still targeting peer-reviewed scholarly journal articles [35,37–40]. The objective of these searches was to locate the PDs or to determine that the dissertation had not been published in the indexed peer-reviewed journals. Although it is never possible to definitively ascertain non-existence, we added steps and redundancies to make our searches as exhaustive as possible. First, when no PD was found in either scholarly database, we conducted Google searches for the dissertation author and title as a final step, then reviewed the search results (e.g., CVs posted online, faculty web pages) for possible PDs. Second, all searching/coding procedures were performed at least twice by trained research assistants. If two coders disagreed on whether a PD was found or which article it was, or if either coder was uncertain, these dissertations were coded by consensus among three or more members of the research team, including master’s-level researchers (SCE and CMA).

In all literature searches, the following queries were entered for each dissertation: (a) the title of the dissertation, without punctuation or Boolean operators; (b) the author/student’s name; and (c) the chair/advisor’s name. Search results were assessed for characteristics of authorship (student and chair names), content (title, abstract, acknowledgments, methods), and publication type (specifically targeting peer-reviewed journal articles) by which a PD could be positively identified. Determination of PD status was made and later validated based on global judgments of these criteria. Identified PDs were then coded for their bibliographic characteristics. Results were excluded if published in a non-English journal, outside of the 0–7 year (2007–2014) window, or in a non-refereed or non-journal outlet (e.g., book chapters). Because dissertations can contain multiple studies and be published as multiple articles, searches aimed to identify the single article that was most representative of the dissertation, based on the criteria outlined above and consensus agreement among coders. All searches and coding were completed between January 2015 and May 2017.

Variables

Dissertation, publication, and year.

Although the structure and content of doctoral dissertations vary across institutions, countries, and disciplines, the common unifying factor is that the dissertation represents an original research document produced by the student, approved by faculty, and for which a degree is conferred. Accordingly, in using PQDT as our population of U.S. Ph.D. psychology dissertations, we adhere to this broad but essential definition of a dissertation. This definition includes all models of dissertations (e.g., ranging from traditional monographs to more recent models, such as briefer publication-ready dissertations and dissertation-by-publication [34]), but does not differentiate among them.

In this paper, consistent with common scientific usage, “publication” refers to the dissemination of a written work to a broad audience, typically through a journal article. Accordingly, we do not consider indexing in digital theses and dissertations databases such as PQDT to constitute publication, even though the company may describe it as “publishing.” Rather, we define “dissertation publication” as the dissemination of at least part of one’s Ph.D. dissertation research in the form of an article published in a peer-reviewed journal. The peer-reviewed status of the journal was among the variables coded twice, with discrepancies resolved by consensus. Lastly, year of publication (2007, 2008, 2009 … 2014) and years since approval (0, 1, 2 … 7) were coded from when the print/final version of the article appeared, given that advance online access varies and is not available for all journals.

Subfield.

The PQDT subject terms were used as a proxy indicator of the subfield of psychology from which the dissertation was generated. As described above, twelve categories were derived (Table 1). We considered five categories to be professional/applied subfields (clinical, counseling, educational/school, industrial-organizational, and behavioral), given that graduates in these fields are trained for careers that often include professional licensure or applied activities (e.g., consultation, program evaluation). In contrast, the remaining seven categories were considered research/academic subfields (cognitive, developmental, experimental, neuroscience, quantitative, social/personality, and general/miscellaneous), given that these subfields train primarily in a substantive or methodological research area. Note that Ph.D. programs in all of these subfields train their students to conduct research; when professional/applied training components are present, they are in addition to, not instead of, research training.

Article citations.

The influence of PDs was estimated using article- and journal-level variables. At the article level, we used Web of Science to code the number of citations to the PD occurring each year since publication, tracking from 2007 through 2016. Importantly, Web of Science has been found to yield the lowest citation counts, but its citations are drawn from a more rigorously controlled, higher-quality collection of scholarly publications than alternatives such as Google Scholar, PubMed, and Scopus [35,37,38,40,41]. Citations were coded and analyzed primarily as the mean number of citations per year in order to account for time since publication. Total citations and citations in each year were also calculated.
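
As a simple illustration of this outcome variable (not the authors' code), mean citations per year could be computed along the following lines in R, assuming a data frame pds of published dissertations with hypothetical columns total_cites and pub_year:

    # Citations per year, adjusting for time since publication; citation
    # tracking ended in 2016, and all PDs appeared in print 2007-2014,
    # so every article has at least two years of citation exposure.
    pds$years_tracked  <- 2016 - pds$pub_year
    pds$cites_per_year <- pds$total_cites / pds$years_tracked
    mean(pds$cites_per_year, na.rm = TRUE)  # cf. the reported M = 3.65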

Journal-level metrics.

The following journal impact metrics were recorded for the year in which the PD was published: (a) Impact Factor (IF) and (b) 5-Year IF [42]; (c) Article Influence Score (AIS) [43]; (d) Source Normalized Impact per Paper (SNIP) [44]; and (e) SCImago Journal Rank indicator (SJR) [45]. Each of these indices overlaps with and differs from the others in particular ways, and each provides different information about how researchers cite articles in a given journal. While each has its limitations, together these five indicators offer a broad characterization of a journal’s influence without over-relying on any single metric. As a frame of reference, the population-level descriptive statistics for each of these journal metrics (2007–2014) are as follows: IF (M = 1.8, SD = 2.9), 5-year IF (M = 2.2, SD = 3.0), SNIP (M = 0.9, SD = 1.0), SJR (M = 0.6, SD = 1.1), and AIS (M = 0.8, SD = 1.4).

As described above, all of the dissertation, literature searching, and outcome data used in the present study were obtained from a variety of online sources available freely or by institutional subscription. Links to these sources can be found in the supplementary materials (S1 File).

Analytic plan

Overall descriptive analyses were conducted to examine the univariate and bivariate characteristics of the data, including the frequency and temporal distribution of PDs in psychology. Similar descriptive statistics were used to characterize the nature and scholarly influence of the PDs via article citations and journal impact metrics. Group-based analyses were conducted using chi-square tests and ANOVAs to assess whether dissertation publication rates and scientific influence differed across subfields of psychology. The 95% CIs surrounding the total weighted estimate were used as an index of whether subfield estimates were significantly above or below average.
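
In R, a design-based analysis of this kind could be sketched with the survey package, as a stand-in for the SPSS Complex Samples procedure actually used; the data frame samp, with columns published (0/1), subfield, and design weight w, is carried over from the sampling sketch above and its column names are assumptions:

    library(survey)

    des <- svydesign(ids = ~1, strata = ~subfield, weights = ~w, data = samp)

    # Weighted overall proportion of dissertations published, with 95% CI:
    svyciprop(~published, des)

    # Rao-Scott adjusted chi-square test of subfield differences in
    # publication rates (reported with an F reference distribution):
    svychisq(~published + subfield, des)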

Time-to-publication analyses were conducted in three different ways. First, we used weighted Cox regression and Kaplan-Meier survival analyses to model dissertation publication as a time-to-event outcome, both for the overall sample and separately by subfields. Second, because the large majority of dissertations “survived” the publication outcome past our observation window (i.e., most cases were right-censored), we also conducted between-group comparisons regarding subfield publication times for only those whose dissertations were published. Finally, in order to ensure the adequacy of our 0–7 year search window, we fit a distribution to our observed data and projected this trend several years into the future.
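
The time-to-event models might look roughly as follows with R's survival package; the variable names (time = years from dissertation to publication, censored at 7; event = 1 if published within the window, 0 otherwise) are assumptions for illustration, not the authors' code:

    library(survival)

    # Overall weighted cumulative publication curve (one minus survival):
    fit <- survfit(Surv(time, event) ~ 1, weights = w, data = samp)
    plot(fit, fun = "event", conf.int = TRUE,
         xlab = "Years since dissertation",
         ylab = "Cumulative proportion published")

    # Unweighted subfield curves plus an omnibus log-rank comparison:
    fit_sub <- survfit(Surv(time, event) ~ subfield, data = samp)
    survdiff(Surv(time, event) ~ subfield, data = samp)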

Full-sample analyses were conducted using the complex samples option in SPSS Version 24, which yields weighted estimates that are less biased by sample proportions and more generalizable to the population. Distribution model-fitting and projections were estimated in R. For analyses related to dissertation publication outcomes, there were no missing data because all values could be coded from the obtained dissertations. Data availability for journal- and article-level variables is reported in the corresponding results tables.

Results

Frequency of and time to publication

The overall weighted estimate showed that 25.6% (95% CI: 23.0, 28.4) of psychology dissertations were published in peer-reviewed journals within the period of 0–7 years following their completion. The unweighted estimate was similar (27.5% [24.6, 30.4]), but reflected sampling bias due to differences between subfields. Thus, weighted estimates are used in all subsequent results. Significant variations were found across subfields (Rao-Scott adjusted χ2(df = 9.65) = 65.28, F(9.65, 8869.62) = 8.28, p < .001). As shown in Table 2, greater proportions of PDs were found in neuroscience (59.4% [47.8, 70.1]), experimental (50.0% [37.7, 62.3]), and cognitive (41.0% [31.1, 51.8]), whereas much lower rates were found for industrial-organizational (10.1% [5.0, 19.5]) and general/miscellaneous (13.5% [6.5, 25.7]). All other subfields fell between 19.0 and 29.0%. Quantitative and social/personality fell within the 95% CIs for the weighted total, suggesting no difference; however, most other subfields fell above or below this average. Of note, three core professional subfields (clinical, counseling, and school/educational) were all between 19.0 and 20.8%—below average and not different from one another.

The overall time-to-publication results are presented in Table 3 and the left panel of Fig 2. As shown, over half (56.0% of those ultimately published; 14.3% of the total sample) of PDs appeared in print within 2 years following the year of completion, and the large majority (89.7% of those ultimately published; 23.0% of the total) were published within 5 years. Among dissertations that were ultimately published, the time to publication averaged about 2–3 years (M = 2.58 [2.34, 2.83]), with a median of 2 years and a mode of 1 year. Omnibus comparisons from the Kaplan-Meier survival model revealed significant variations across subfields, χ2(df = 1) = 4.24, p = .039, as plotted in the right panel of Fig 2. These results generally mirrored the pattern found for the overall binary publication outcomes across subfields. Among only those dissertations that were published, the subfield differences in time-to-publication were marginal overall, F(11, 238) = 5.99, p = .064, but still shed some additional light beyond the binary publication outcomes. Specifically, neuroscience (M = 1.61 [1.09, 2.13]), counseling (M = 1.92 [1.04, 2.79]), and experimental (M = 1.93 [1.45, 2.41]) averaged less than two years to publication, shorter than the weighted average. In contrast, clinical (M = 2.88 [2.20, 3.56]), social/personality (M = 2.90 [1.92, 3.89]), school/educational (M = 2.94 [1.78, 4.10]), industrial-organizational (M = 3.00 [0.73, 5.27]), and quantitative (M = 3.06 [2.35, 3.78]) all took longer, approximately three years.

Fig 2. Estimated cumulative rates of dissertation publication over time.

Overall estimates and 95% confidence intervals (left panel) are derived from the weighted Cox regression model (see Table 3). Subfield estimates (right panel) are derived from the unweighted Kaplan-Meier regression model. In both plots, cumulative publication estimate = one minus survival function.

https://doi.org/10.1371/journal.pone.0192219.g002

Table 3. Temporal lag from dissertation completion to publication by year.

https://doi.org/10.1371/journal.pone.0192219.t003

Lastly, as a methodological check, we modeled our time-to-publication data and projected this trend into the future to estimate what percentage of PDs we might have missed by stopping after 7 years. More specifically, these models used the weighted estimates of how many dissertations were published each year as the outcome and time (years 0 to 7) as the predictor. A Poisson model containing quadratic and linear effects for time fit the data best. When projected into the future, this model estimated that an additional 7 dissertations would be published at 8–10 years post-dissertation (4, 2, and 1 PDs, respectively). From 11 years onward, estimates asymptotically approached and rounded down to zero, even cumulatively. Thus, our sampling frame appears to have captured virtually all (97.3%) of the dissertations that ultimately would be published. In other words, had the study been implemented for as long as necessary to capture all PDs, the data suggest that our primary result, the estimated percentage of dissertations published, would increase only modestly from 25.6% to 26.4%.
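
A minimal sketch of this projection in R, assuming a data frame pubs with one row per lag year and hypothetical columns lag (0–7) and count (the weighted number of dissertations published at each lag, as in Table 3):

    # Poisson regression of publication counts on linear and quadratic
    # time terms, then projected beyond the 7-year observation window.
    m <- glm(count ~ lag + I(lag^2), family = poisson, data = pubs)
    predict(m, newdata = data.frame(lag = 8:15), type = "response")
    # Predicted counts shrink toward zero at longer lags, consistent
    # with the 8-10 year estimates reported above.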

Scientific impact

As shown in Table 4, PDs were cited an average of 3.65 times per year since publication, totaling 15.95 citations on average during the years captured by the study. There were significant variations by subfield in terms of both total and per-year citations. Specifically, PDs in cognitive (M = 5.08 [1.33, 8.83]) and industrial-organizational (M = 5.18 [0.80, 9.56]) were more highly cited, with over 5 citations/year. On the other end, fields that exhibited relatively lower (but still nontrivial) rates of citations/year included quantitative (M = 1.42, [0.87, 1.97]), general/miscellaneous (M = 1.46 [0.15, 2.78]), counseling (M = 1.64 [0.63, 2.64]), developmental (M = 2.82 [1.31, 4.32]), and social/personality (M = 2.86 [2.05, 3.67]).

Table 4. Cumulative article citations in Web of Science per year after publication.

https://doi.org/10.1371/journal.pone.0192219.t004

The 250 PDs in our sample appeared in 186 different peer-reviewed outlets, including top-tier journals in general science (e.g., Nature, Science) and in psychological science (e.g., Psychological Science, Journal of Consulting and Clinical Psychology). Notably, several PDs appeared in journals predominantly representing professions or disciplines outside psychology (e.g., Public Health Nursing, Endocrinology). The most common journal titles were all in relatively specialized areas of psychology (e.g., Applied Psychological Measurement, Brain Research), tending to draw from the experimental, social/personality, neuroscience, behavioral, and cognitive subfields. Overall, however, dissertations were disseminated broadly, with no single journal “catching” more than five (2.0%) dissertations from our overall sample, and most journals publishing only one (0.4%).

As shown in Table 5, PDs appeared in journals of moderate-to-high influence according to all five metrics used. Subfield differences were found for the IF, SNIP, and SJR (ps < .01), but not for the 5-year IF or the AIS (ps > .09). Specifically, neuroscience and cognitive PDs appeared in higher-IF journals (Ms = 4.47 [3.17, 5.78] and 3.86 [1.87, 5.86], respectively), while most others fell below the average IF, including those still within the 2+ range (clinical, social/personality, general/miscellaneous, developmental, and behavioral; Ms = 2.14 to 2.45) and those in the 1–2 range (quantitative, school/educational, counseling, and industrial-organizational; Ms = 1.27 to 1.71). Similarly, neuroscience (M = 2.17 [1.68, 2.66]), cognitive (M = 1.97 [1.22, 2.72]), and social/personality (M = 1.65 [1.09, 2.21]) PDs appeared in higher-SJR journals, whereas behavioral, clinical, general/miscellaneous, quantitative, school/educational, industrial-organizational, and counseling PDs had lower SJRs (Ms = 0.51–1.21). Lastly, cognitive (M = 1.61 [1.17, 2.05]) and social/personality (M = 1.55 [1.18, 1.92]) PDs were published in higher-SNIP journals, while clinical (M = 1.28 [1.08, 1.47]), general/miscellaneous (M = 1.19 [0.23, 2.15]), and counseling (M = 0.57 [0.26, 0.89]) PDs appeared in journals with lower SNIPs.

Discussion

The primary finding of this study was that only about one in four psychology Ph.D. dissertations in the U.S. was published in a peer-reviewed journal. Typically this occurred within 2–3 years after completing the dissertation. Despite variations across subfields, dissertation publication appears to be the exception, not the rule. When dissertations were published, however, they were often highly cited and appeared in influential journals. The relatively high impact of published dissertations may reflect a gatekeeping effect, whereby only the highest-quality or most significant contributions get published, or a refining effect, whereby the dissertation development and committee review process helps strengthen the contribution [1,46], increasing the likelihood and impact of publication. In other words, the dissertation process may add some value to doctoral research, and some doctoral research appears to add value to psychological science. A larger and more important question is why the vast majority of psychology dissertation research does not contribute to the peer-reviewed literature.

Our estimated rate of dissertation publication in psychology (25.6%) is similar to, or slightly below, a corresponding estimate for social work (28.8%) [22], the only field in which a similarly rigorous and comparable design had been used. To our knowledge, the present study is the first to offer a reliable estimate of publication rates specific to the dissertation and specific to psychology. Further, the present study advances the literature by demonstrating the impact that these published dissertations have on the scientific literature. Although this occurred in only a minority of cases, published dissertations in psychology were disseminated in moderate- to high-impact journals across a wide spectrum of disciplines and specialty interests. Whereas published dissertation articles were cited several times per year, anecdotally we saw very few citations to the actual dissertation documents in PQDT. These observations are consistent with evidence showing that the impact of dissertations themselves has declined markedly in recent decades [7,8]. In contrast, peer-reviewed journal articles are much more likely to be read, cited, and included in systematic and meta-analytic reviews.

Subfield differences were broadly consistent with hypotheses. Dissertations from professional/applied fields were less often published, whereas the more research/academic-oriented subfields published at rates much higher than average. These findings likely reflect differences in the nature of training and motivation in professional and scientific subfields, and they also align with evidence about student research productivity in professional/applied subfields. For example, annual results from the Association of Psychology Postdoctoral and Internship Centers applicant survey indicate that only about 50% of advanced doctoral students in professional psychology have authored or co-authored any peer-reviewed publications, and only 10% have published 5 or more [18]. Given this relatively low baseline rate of productivity during graduate school for this population, the average likelihood of post-graduation publication seems low. On the other hand, individuals in more research-oriented subfields are often training specifically for academic positions that incentivize publication. Further, lab-based dissertations often include multiple experiments, which may create more publishable units (this may also explain the relatively higher rate of publication in behavioral psychology). The low publication rate for industrial-organizational dissertations (10%) is also interesting, and may reflect an applied focus, organizations’ proprietary control of data, greater non-academic incentives (e.g., higher salaries in industry), or the limited generalizability of market or consultative research. When these and other types of applied/professional dissertations were published, however, they were often cited several times per year.

The time from dissertation completion to publication appears to be a critical consideration. From our main results and longitudinal projections, we can generalize that by two calendar years post-dissertation, over 50% of ultimately-published dissertations will have appeared in print. After five years, this number increases to nearly 90%, leaving roughly a 10% chance of subsequent publication. After 7–10 years, the dissertation findings are likely to have become outdated, irrelevant, or overturned [30–32], and the probability of publication approaches 0%. Thus, if students wish to publish their dissertation, we recommend that they proactively develop a plan for adapting the full document into a manuscript (or multiple manuscripts) for publication [1]. As one example, we are aware of some universities that have begun requiring that approved dissertations be accompanied by a form outlining an agreed-upon plan for publication and authorship.

The present findings raise questions about the reasons for nonpublication. Possible explanations include the burden of revising and submitting a lengthy document, or limited career incentives for pursuing publications in non-academic careers. Alternatively, unpublished dissertations may lack methodological rigor or contain “fatal flaws,” or may fail to make a novel and substantive contribution; such dissertations might simply not pass the bar of peer review. The present results only show how many dissertations were actually published; they cannot speak to how many students attempted to publish their dissertations, or how many dissertations were of publishable quality. Similarly, these results do not provide direct evidence of the mechanisms underlying publication vs. nonpublication, but the apparently high quality of the published dissertation articles is consistent with the file drawer hypothesis. Interestingly, one recent study found that, in the path from dissertation to journal article in management research, studies appear to get “beautified,” such that the ratio of supported to unsupported hypotheses more than doubles [47]. Such questionable research practices may provide one explanation for how dissertations selectively get published, but they are clearly not an appropriate solution. Whatever the underlying explanations may be, the widespread non-publication of dissertation research is a problem in psychology. To the extent that this non-publication continues, it exacerbates the file drawer problem [5,6,9], biases systematic reviews and meta-analyses, and contributes to the replication problem in psychology [10]. It also amounts to an inefficient use of time and resources, raising ethical questions about violating agreements with participants and funding agencies and about the consequences of not disseminating research findings [2,4].

The present study was designed so that results could be generalized to the population of dissertations produced in U.S. psychology Ph.D. programs. However, some limitations should be noted. First, our stratified random sample was drawn from an archival data source (PQDT), which is an approximation of the population of dissertations in psychology (although a very comprehensive one) and a proxy of the boundaries delineating subfields in psychology. Our outcome variables were likewise drawn from various databases (e.g., PsycINFO, Google Scholar, Web of Science, Thomson Reuters) which are necessarily restricted in different ways. As noted in the Methods section, these databases were selected as the most comprehensive and appropriate sources available for the purposes for which they were used, and their strengths and weaknesses were considered in developing the study protocol.

A second constraint lies in the selection of a single cohort year (2007) and 7-year follow-up period, raising the possibilities of missed cases and of cohort/historical effects. The changing landscape of doctoral training in psychology (e.g., more competitive admissions, increasing emphasis on research productivity, nontraditional dissertation requirements) may limit generalizability to past and future populations. Of note, these results may not generalize broadly to other countries (e.g., Northern/Western Europe, Australia, and New Zealand) and fields (e.g., biomedical, natural, and physical sciences) that are increasingly using a dissertation-by-publication model [34]. From our read of the literature and our assessment of the present sample, this model has not been widely adopted in U.S. psychology, where more traditional dissertation documents are still the norm. Accordingly, our sampling and search strategies were designed to work reasonably well for all U.S. psychology dissertations, including a suspected minority of nontraditional models; however, we could not differentiate types of dissertations. For all of these reasons, periodic replication of these results would be useful. Nonetheless, the large sample size, stratified sampling method, comprehensive dataset, and thorough multi-stage search protocol help mitigate bias. Narrow 95% confidence intervals support the precision of the overall estimate, and post hoc analyses suggested that the results are unlikely to change substantially given a longer sampling frame.

Finally, we note that this study should not be interpreted as any sort of evaluation of students, advisors, or programs for dissertations that were or were not published. Nor are we advocating that all dissertations be published, regardless of quality. Rather, these findings shed light on what appears to be a systemic problem affecting research and training in all areas of psychology. Efforts aimed at increasing the quality and “publishability” of doctoral dissertation research may have broad benefits for both training and research in psychology. On the training side, these efforts may benefit students and graduates in terms of providing a higher standard of scientific training, more research/publishing experience, and greater early-career productivity. On the research side, such efforts can help promote a higher level of rigor in doctoral research and increase the likelihood that the findings will be shared with the scientific community.

Supporting information

Acknowledgments

A portion of this research was presented at the 28th Annual Convention of the Association for Psychological Science, Chicago, IL, May 2016, where it received the APS Student Research Award. We thank Austin McLean and the staff of ProQuest Dissertations and Theses for providing the population dataset of dissertations, Patrick Edmonds for offering statistical consultation, Oliver Blossom for reviewing the manuscript, and the following members of our research team for their extensive coding efforts: Maggie Biberstein, Jamie Eschrich, Andrea Garcia, Mackenzie Klaver, Alexa Mallow, Alexandra Monzon, and Emma Rogers.

References

  1. Cone JD, Foster SL. Dissertations and theses from start to finish: Psychology and related fields. American Psychological Association; 1993.
  2. Roberts MC, Beals-Erickson SE, Evans SC, Odar C, Canter KS. Resolving ethical lapses in the nonpublication of dissertations. In: Sternberg RJ, Fiske ST, editors. Ethical challenges in the behavioral and brain sciences. Cambridge University Press; 2015. pp. 59–62.
  3. American Psychological Association. Standard 8: Research and publication. Ethical principles of psychologists and code of conduct including 2010 amendments. Washington, DC: Author; 2010.
  4. Dowd MD. Breaching the contract: the ethics of nonpublication of research studies. Arch Pediatr Adolesc Med. 2004;158: 1014–1015. pmid:15466692
  5. Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: Unlocking the file drawer. Science. 2014;345: 1502–1505. pmid:25170047
  6. Pautasso M. Worsening file-drawer problem in the abstracts of natural, medical and social science databases. Scientometrics. 2010;85: 193–202.
  7. Larivière V, Zuccala A, Archambault É. The declining scientific impact of theses: Implications for electronic thesis and dissertation repositories and graduate studies. Scientometrics. 2008;74: 109–121.
  8. Yoels WC. On the fate of the Ph.D. dissertation: A comparative examination of the physical and social sciences. Sociol Focus. 1974;7: 1–3.
  9. McLeod BD, Weisz JR. Using dissertations to examine potential bias in child and adolescent clinical trials. J Consult Clin Psychol. 2004;72: 235–251. pmid:15065958
  10. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349: aac4716-1–aac4716-8.
  11. Bowen GA. From qualitative dissertation to quality articles: Seven lessons learned. Qual Rep. 2010;15: 864–879.
  12. Kamler B. Rethinking doctoral publication practices: Writing from and beyond the thesis. Stud High Educ. 2008;33: 283–294.
  13. Knox S, Burkard AW, Janecek J, Pruitt NT, Fuller SL, Hill CE. Positive and problematic dissertation experiences: The faculty perspective. Couns Psychol Q. 2011;24: 55–69.
  14. Hilmer MJ, Hilmer CE. Is it where you go or who you know? On the relationship between students, Ph.D. program quality, dissertation advisor prominence, and early career publishing success. Econ Educ Rev. 2011;30: 991–996.
  15. Kahn JH, Scott NA. Predictors of research productivity and science-related career goals among counseling psychology doctoral students. Couns Psychol. 1997;25: 38–67.
  16. Larivière V. On the shoulders of students? The contribution of PhD students to the advancement of knowledge. Scientometrics. 2012;90: 463–481.
  17. Lee WM. Publication trends of doctoral students in three fields from 1965–1995. J Assoc Inf Sci Technol. 2000;51: 139–144.
  18. Lund EM, Bouchard LM, Thomas KB. Publication productivity of professional psychology internship applicants: An in-depth analysis of APPIC survey data. Training and Education in Professional Psychology. 2016: 54.
  19. Williamson IO, Cable DM. Predicting early career research productivity: The case of management faculty. J Organ Behav. 2003;24: 25–44.
  20. Anwar MA. From doctoral dissertation to publication: a study of 1995 American graduates in library and information sciences. J Librariansh Inf Sci. 2004;36: 151–157.
  21. Dinham S, Scott C. The experience of disseminating the results of doctoral research. J Furth High Educ. 2001;25: 45–55.
  22. Maynard BR, Vaughn MG, Sarteschi CM, Berglund AH. Social work dissertation research: Contributing to scholarly discourse or the file drawer? Br J Soc Work. 2012;44: 1045–1062.
  23. Porter AL, Wolfle D. Utility of the doctoral dissertation. Am Psychol. 1975;30: 1054–1061.
  24. Santos M, Willett P, Wood FE. Research degrees in librarianship and information science: a survey of master’s and doctoral students from the Department of Information Studies, University of Sheffield. J Librariansh Inf Sci. 1998;30: 49–56.
  25. ProQuest. Dissertation Abstracts International. 2017. Retrieved from: http://www.proquest.com/products-services/dissertations/find-a-dissertation.html
  26. McLean A. Making dissertations & theses accessible and discoverable. Presentation given to the Summer Workshop of the Council of Graduate Schools, Quebec, Canada. 2015, July 13. Retrieved from: http://cgsnet.org/ckfinder/userfiles/files/2015CGSSmrMtg_McLean.pdf
  27. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6: e1000097. pmid:19621072
  28. National Science Foundation. Survey of Earned Doctorates: Doctorate recipients from U.S. universities. 2017. Retrieved from: https://www.nsf.gov/statistics/2018/nsf18304/data/tab13.pdf
  29. Arbesman S. The half-life of facts: Why everything we know has an expiration date. Penguin; 2013.
  30. de Solla Price DJ. Little science, big science. New York: Columbia University Press; 1963.
  31. Davis PM, Cochran A. Cited half-life of the journal literature. arXiv:1504.07479 [preprint]. 2015 Apr 28.
  32. Neimeyer GJ, Taylor JM, Rozensky RH. The diminishing durability of knowledge in professional psychology: A Delphi poll of specialties and proficiencies. Prof Psychol Res Pr. 2012;43: 364–371.
  33. Tang R. Citation characteristics and intellectual acceptance of scholarly monographs. Coll Res Libr. 2008;69: 356–369.
  34. Gould J. Future of the thesis. PhD courses are slowly being modernized. Now the thesis and viva need to catch up. Nature. 2016;535: 26–28. pmid:27383968
  35. García‐Pérez MA. Accuracy and completeness of publication and citation records in the Web of Science, PsycINFO, and Google Scholar: A case study for the computation of h indices in psychology. J Assoc Inf Sci Technol. 2010;61: 2070–2085.
  36. Wu YP, Aylward BS, Roberts MC, Evans SC. Searching the scientific literature: Implications for quantitative and qualitative reviews. Clin Psychol Rev. 2012;32: 553–557. pmid:22819996
  37. Aguillo IF. Is Google Scholar useful for bibliometrics? A webometric analysis. Scientometrics. 2012;91: 343–351.
  38. Bergman EM. Finding citations to social work literature: The relative benefits of using Web of Science, Scopus, or Google Scholar. J Acad Librariansh. 2012;38: 370–379.
  39. Bramer WM, Giustini D, Kramer BM, Anderson PF. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews. Syst Rev. 2013;2: 115. pmid:24360284
  40. Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22: 338–342. pmid:17884971
  41. Mongeon P, Paul-Hus A. The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics. 2016;106: 213–228.
  42. Thomson Reuters. InCites Journal Citation Reports. 2017. Retrieved from: http://ipscience-help.thomsonreuters.com/incitesLiveJCR/overviewGroup/overviewJCR.html
  43. Eigenfactor. About the Eigenfactor Project. 2017. Retrieved from: http://www.eigenfactor.org/about.php
  44. Moed HF. Measuring contextual citation impact of scientific journals. J Informetr. 2010;4: 265–277.
  45. González-Pereira B, Guerrero-Bote VP, Moya-Anegón F. A new approach to the metric of journals’ scientific prestige: The SJR indicator. J Informetr. 2010;4: 379–391.
  46. Boote DN, Beile P. Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educ Res. 2005;34: 3–15.
  47. O’Boyle EH Jr, Banks GC, Gonzalez-Mulé E. The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. J Manag. 2017;43: 376–399.