Abstract

As research collaborations grow in size, scope, and time horizon, they increasingly resemble organizations in and of themselves. The traditional institutional structure of science, however, is fundamentally focused on individual scientists. Reconciling these novel research organizations with traditional structures has proven a difficult challenge for the high energy physics community, which has a longstanding tradition of large collaborations. In this paper I draw on interview data gathered in this community to explore the issues of authorship and credit attribution, with an eye toward extrapolating lessons for those in other disciplines. Results suggest that authorship practices in physics are fundamentally problematic in several respects, and that this stems in part from a need to recognize multiple types of contributions.

Introduction

Current interest in cyberinfrastructure, a shared set of advanced computational and network resources to support scholarship, reflects both the advances that have been achieved and those that are likely to be possible as a result of the continued application of advanced computing technologies to fundamental questions in science and engineering. An important dimension of this effort has been the need for collaboration in addressing the scope and complexity of science and engineering research questions (Galison & Hevly, 1992). Increasingly, these collaborations transcend laboratory, disciplinary, institutional, and national boundaries (Birnholtz, 2005; Finholt, 2003; Nentwich, 2003). This is due to three primary factors. First, many important research problems, such as understanding the relationship between public health policy and disease transmission/prevention, require expertise from multiple disciplines in order to reach creative and effective solutions. Second, even within disciplines it is increasingly necessary to share data and methods across levels of analysis and scale. And third, research apparatuses have increased substantially in size and cost, meaning that equipment must be shared among individual researchers or located in specialized facilities that may be independent of traditional university laboratories.

While the capacity for sharing facilities and working across boundaries was historically supported via project-specific collaboratories that enabled researchers from a single field to share resources and instruments across geographic boundaries (Finholt & Olson, 1997; Newell & Sproull, 1982; NRC, 1993), the notion of supporting research via shared, interdisciplinary computing resources is more recent. These resources are referred to as cyberinfrastructure in the United States (Atkins et al., 2003), while those in Europe or the United Kingdom tend to use the terms e-science or cyberscience (Nentwich, 2003).

In addition to facilitating advanced computation (Foster & Kesselman, 1999) and the sharing of instruments (Kouzes & Wulf, 1996), all of these technologies enable people to communicate and work together in virtual organizations that transcend traditional boundaries (DeSanctis & Monge, 1998). How these virtual organizations work, however, and how they are reconciled with the traditional institutional structures of science, remain very much open and important questions (Finholt, 2003). In particular, the advanced research and higher education community places significant value on individual contribution and reputation. How then should systems of credit attribution balance the contribution of the individual with the aggregate contribution of an entire research community that may have organized around a single experiment or cluster of experiments in this environment?

The high energy physics (HEP) community may offer some lessons, since it has been grappling with such questions for many years. These issues date back to earlier shared instruments, such as the particle accelerators and colliders at the Stanford Linear Accelerator Center (SLAC) described in Traweek’s (1988) work. The current HEP experiments are massive. Each involves over 2,000 researchers from over 100 institutions around the world. Physicists involved in these projects, as I will describe below, have set up a fascinating array of formal and informal management, communication, and credit-attribution structures. In this article, I will address this issue by exploring the formal and informal structures in two HEP collaborations and discussing their impacts and implications.

To explore these questions, I will focus on authorship and attribution of credit. Authorship serves as a useful avenue for exploring these issues in two closely related respects. On the one hand, authorship is the traditional currency of science. It is at the root of being identified with one’s research contributions, and the recognition for these contributions that accrues in the form of positive reputation, career advancement, and awards (Whitley, 2000). On the other hand, however, the traditional model of authorship is fundamentally at odds with contemporary collaboration (Birnholtz, 2006; Cronin, 2001). Listing multiple authors can make it difficult to identify the roles of individual contributors, and there may also be questions about what constitutes a “real” research contribution that merits inclusion on the author list (Rennie, Yank, & Emanuel, 1997). Different disciplines have addressed these challenges in different ways, but the HEP community has a particularly interesting strategy.

In HEP, all members of a collaboration are traditionally listed as authors on any paper published by any member of that collaboration, a tradition that persists even as the number of names creeps into the thousands (Cronin, 2001; Galison, 1997). Moreover, all members are listed alphabetically on all papers. In other words, there is no distinction made for the “first author” on a paper, as is done in many disciplines to indicate the most significant contributor. The intent of this tradition is to render the individual contributor subservient to the overarching collaboration (Knorr Cetina, 1999), to ensure equitable distribution of formal credit even to those who did not perform the most significant analyses, and to ensure that everyone is motivated to contribute (Galison, 1997). As I will argue below, however, the effect is quite different. Individual collaborators report primarily to their home institutes outside of the collaboration structure, and these institutes hire, evaluate, and promote individuals—not collaborations. A formalized credit structure that does not recognize individual contributions is of little use in making these decisions. Thus, the effect of this ostensibly equitable system is to force evaluators to turn to informal channels of recognition. This is fundamentally problematic for the physicists, as I will illustrate below, but also emblematic of the sort of problems that are increasingly likely to arise as large collaborations become more common.

Background: Collaboration and Authorship

What does authorship do?

As was indicated above, authorship has multiple functions in the sciences. We can describe these as follows: 1) attributing credit for discoveries to a person or group of people; 2) assigning ownership to this person or persons; and 3) enabling the accrual of reputation.

Attribution of Credit

A paper’s authorship attributes credit for particular discoveries to individuals or groups of individuals. Authorship is the generally accepted method for recognizing researchers’ contributions to their fields. Having one’s name appear on a conference presentation or journal article is intended to signal some form of significant contribution toward that discovery (Claxton, 2005). As Cronin (1995) points out, however, there are many types of contributions to scientific effort, and authorship is not the only way to recognize them. In many of the cases he examines, relatively minor contributions to intellectual work were credited with formal acknowledgements in published articles, though exact practices and traditions differ somewhat by field.

Historically, authors were individuals and it was relatively easy to use authorship to gauge the value and extent of an individual’s contributions to the literature (Shapin, 1989). The recent rise of co-authorship in some fields, however, has made this substantially more difficult, as having multiple authors can render individual contributions ambiguous (Rennie et al., 1997). Moreover, this ambiguity is further confounded in the case of contributions by paid technicians or consultants. Staff laboratory technicians, for example, have historically not been included as authors on papers (Shapin, 1989). Statisticians, on the other hand, may be considered authors in some fields if they have contributed substantially to multiple phases of the research project (Parker & Berman, 1998). Thus, there is an important, though blurry, distinction between those who deserve formal recognition as legitimate contributors to research and those who do not.

This becomes particularly interesting in the case of the current HEP collaborations, in which thousands of contributors are listed alphabetically as authors on each paper published by any member of the collaboration. The rationale for this practice is a pervasive acknowledgement that no research could be done without the contributions of all of these individuals. At the same time, however, specific analyses are not carried out by thousands of people—they are done by individuals or in small groups of people well known to each other. Papers are typically written by small groups as well. Thus, we can see that where author lists are long there appears to be a fundamental disconnect between the attribution of credit and the actual effort involved in making discoveries and writing papers.

Ownership

There are two senses of ownership that pertain to any discussion of authorship. Outside of the sciences, ownership is frequently considered with an eye toward copyright (Rose, 1993; Woodmansee & Jaszi, 1994). This aspect of ownership, however, tends to be less important in the sciences, where journal authors typically sign copyright away to publishers in exchange for the reputation and career benefits that will accrue from the broad circulation of their work.[1] On the other hand, the sense of the word “ownership” that is critical to the present discussion, but rarely mentioned in the copyright debates, is that of taking responsibility for one’s work, including error or controversy that might lie within it.

As has been argued repeatedly in the literature, this connection between authorship and responsibility becomes problematic on collaborative projects (Kennedy, 2003; Paneth, Hemenway, Fortney, & Jung, 1998; Rennie et al., 1997). When there is more than one author, it is unclear where liability rests. This was quite apparent, for example, in the late-twentieth-century controversy surrounding Nobel Prize winner David Baltimore and his colleagues (Kevles, 1998). In that case, one of Baltimore’s co-authors was accused of data fabrication, but Baltimore refused to withdraw the paper. The accusations were aggressively pursued, eventually by the US Congress, and Baltimore was forced to resign from the presidency of Rockefeller University, even though an expert panel later cleared the co-author of the misconduct charges. The important point here is that ownership of the claims in the paper—and hence the responsibility for them—was ambiguous due to the presence of multiple authors. While it is clear that Baltimore took ownership of the paper and stood behind his colleague’s work, the ambiguity here stems from the fact that Baltimore had to do this in the first place. If all co-authors’ contributions had been made explicit in the paper, his colleague’s alleged behavior might not have resulted in Baltimore’s own integrity being questioned.

Reputation

Science operates on what has been referred to as an economy of reputation (Whitley, 2000). Starting from the time they are graduate students choosing the niche in which they will work, researchers are expected to aim to be the world’s expert in their particular areas. Reputation is, in this sense, analogous to Bourdieu’s (1984) notion of “symbolic capital” in that status in science is not determined by possession of economic capital (i.e., how much money one has), but rather by the perceived quality of one’s work. Historically this has been accomplished by publishing papers in high-visibility venues (Franck, 1999) and by winning high-profile awards, of which the Nobel Prize is a particularly well-known (but rare) example.

These publications and other accomplishments are carefully scrutinized by hiring committees, promotion and tenure committees, and others looking to assess individual accomplishment. It should also be noted that even as collaborative work has become more common in recent years, many fields continue to place a premium on single-authored or first-authored publications. Indeed, some junior researchers in the field of neuroscience maintain two independent research programs (Birnholtz, 2005). One of these is typically more complex, involves human specimens, and requires multiple collaborators. The other is individually conducted and geared toward the single-author publications that are so valuable for reputation purposes.

Given the value of reputation and the ambiguity of specific contributions when there are multiple authors, it is perhaps not surprising that some researchers listed as “authors” on papers did not make significant contributions to the work. Both Tarnow (1999) and Claxton (2005), for example, discuss instances of “gift” authorships to maintain social ties, or to acknowledge senior researchers who provided laboratory space or financial support.

Research Context

Data described and analyzed here were collected in the HEP community. Investigators in this field explore questions that address the fundamental nature of matter and the origins of the universe. Their experimental investigations utilize high-energy accelerators to recreate the atmospheric and energy conditions in the moments after the phenomenon commonly known as the “Big Bang.” In so doing, physicists are able to generate specific particles of interest that do not occur naturally under current, more stable atmospheric conditions, and then track their behavior with detectors that capture evidence of the energy trails left behind. Detecting, analyzing, and understanding the existence and behavior of these particles are crucial to validation of what physicists call the “Standard Model,” on which our understanding of matter rests (Kane, 1987).

Today’s accelerators and detectors dwarf all other scientific instruments in size and complexity. The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN), the world’s most advanced physics research facility, is an underground tunnel 27 kilometers in circumference. The ATLAS (A Toroidal LHC ApparatuS) detector, one of two that now sit in the LHC, is 20 meters in diameter and weighs 7,000 tons (Close, Marten, & Sutton, 2002). The human and organizational scale of this work is similarly large. The ATLAS experiment, for example, involves over 2,500 physicists based at 140 institutes in 34 countries around the world.

Moreover, the global nature of the work means that support for the research is no longer funneled through a single source (or small number of sources), as was the case historically, for example, with Department of Energy funding for the Fermilab facility in Illinois. Rather, CERN is a global facility in unprecedented ways, since individual countries (or economic and political blocs) no longer have their own cutting-edge accelerator facilities. Institutes that wish to be involved with this work must align themselves with one of the LHC experiments. Moreover, in the CERN experiments, only funding from CERN’s 20 European member states is actually routed through or controlled by CERN and experiment leadership. The rest of the funding is controlled by participating institutes outside of Europe, which voluntarily use resources from their home countries to build components for and provide services to the LHC in accordance with the experiment’s Memorandum of Understanding. Because any institute is essentially free to withdraw its voluntary contribution to ATLAS at any time,[2] the elected leaders of the ATLAS experiment have no real power beyond gentle persuasion and what one project team leader describes as “managing by coffee.” In other words, leadership becomes an exercise in continuous consensus-building through informal meetings (usually held over coffee in one of the ubiquitous cafes at CERN), formal presentations, and peer review panels.

For the present discussion, the important implication of this arrangement is that there are multiple types of contributions to research efforts for which people may wish to receive credit. Some are intellectual, as has traditionally been the case, but others are financial or technical in nature. Biagioli (2003) referred to this as a “labor” mentality of contribution, as contrasted with an “originality” mentality found in, for example, biomedical journals.

Method

Qualitative methods were used to gather data for this study, which was conducted as part of a broader investigation of scientists’ collaboration behavior (Birnholtz, 2005). Data were collected during a nine-week visit to CERN from June 8 to August 10, 2004. Semi-structured, 30- to 60-minute interviews were conducted with 32 individuals affiliated in various capacities with ATLAS and CMS, the two major LHC experiments. Interviews were recorded and later transcribed by the author. Participants were selected using snowball sampling techniques, and deliberate efforts were made to speak with individuals at multiple levels of the experiment hierarchy, from first-year graduate students to members of the experiment leadership teams. A uniform protocol was used to conduct interviews, but the order and selection of items were periodically changed to accommodate conversational flow and respondent experience.

Analysis

Inductive qualitative techniques were used to analyze the data (Huberman & Miles, 1994). Data analysis consisted of careful reading and re-reading of interview transcripts and field notes, and examination of photographs and other artifacts collected while at CERN. During this process, it became clear that authorship in HEP was a complicated and nuanced topic on which the physicists interviewed held a range of opinions. Most discussion, though, centered on three themes: 1) balancing the attribution of credit to a large group with the need of individuals to attain recognition and advance their careers; 2) whether there is a significant difference (that merits recognition) between what will below be called infrastructural contributions and discovery-oriented contributions to research endeavors; and 3) pragmatic strategies for “survival” in HEP given the nature of authorship. These themes guided re-examination and further analysis of the data, and provide the basic framework for the presentation of results in the next section.

Results

This section begins with a detailed description of how authorship works in HEP, followed by a more analytical treatment of its implications.

How does authorship work in HEP?

As noted earlier, HEP has a longstanding tradition of extremely inclusive author lists. Since the growth of collaborative research in the 1940s, this has meant alphabetically listing all members of a collaboration as authors on any paper, written by any member of that collaboration, that is based on data from that project. Given that every member is to be listed as an author, there are also many opportunities for all members of the collaboration to provide feedback to the actual authors during the writing and revision process. The remainder of this section provides more detail on this process.

Becoming an author

Knorr Cetina (1999) suggests that, among physicists, the collectivist orientation of HEP experiments means that the individual is largely erased as “an epistemic participant.” It was therefore surprising to find the composition of the author list to be a highly contentious topic among the physicists I interviewed. When newcomers register to be part of the ATLAS experiment, for example, they are not automatically included in the author list. Rather, this is an option on the registration form that requires additional signatures from appropriate experiment leadership and project leaders at each newcomer’s home institute. These requirements result in part from the experiment’s need to certify that some amount of “service work” to the collaboration, which is required by ATLAS of all members, is completed prior to becoming a part of the author list. This requirement prevents people from bypassing the hard work of the design and construction phases of the experiment and joining just in time to participate in the more glamorous physics analysis tasks.

In some cases, however, participants said that the true threshold for a physicist’s inclusion in the list is service contributions by anybody from that physicist’s home institute. In other words, somebody from University X could become an author at a late phase of the experiment, as long as others from University X had made service contributions to the collaboration. Many senior researchers, for example, noted that they were spending the final stage of their careers working on ATLAS so that the junior faculty and graduate students at their home institutes, who are not currently working on ATLAS, could become involved when the detector comes online and the experiment has a data stream to generate publications. This is an important difference from prior experiments, such as the CDF experiment observed by Biagioli (2003), in which author list eligibility was based on individual contribution.

This difference is likely due to the fact that the 15–20-year time horizon of the current generation of experiments, unlike that of smaller-scale projects in the past, is far longer than the tenure clock at most US universities. In the early phases of any HEP experiment, much of the work consists of designing and constructing the detector, and analyzing data from computational simulations of expected detector performance. These design and simulation activities do not involve the collection and analysis of novel experimental data, however, and the results therefore cannot be published or considered as research contributions in the traditional sense. In the case of the LHC experiments, which began in 1995 but will not generate novel data until 2008, the time span required by the experiment effectively disadvantages junior faculty members whose tenure aspirations are out of sync with the time frame of the experiments. Thus, many junior faculty are currently working on the analysis of novel data from older experiments, and will join the LHC experiments as the detectors begin to be used.

Despite these measures for restricting who gets credit, many still question whether certain colleagues really belong on the author list:

People have views that vary all over the field. So an engineer who did some work on a special part of the apparatus, should he be in the author list? Or even a physicist who’s in a group, but never even set foot in the experiment. Should his name be there? (CERN05).

This example alludes to the fact that researchers on these projects come from and work primarily in vastly different institutional contexts that have an important impact on their ability to contribute to the project. Teaching loads and requirements, travel funding, levels of graduate student and research scientist or postdoc support, along with many other issues, help determine the amount of time individuals have to devote to the project. These contextual factors are blurred significantly within the projects, however, and this can affect colleagues’ perceptions of actual effort contributed. In other words, it is nearly impossible to tell during a meeting at CERN who has a significant teaching load and may be unable to contribute to the collaboration during a particular term. Nonetheless, researchers are judged by their contributions to the experiments and their ability to “get noticed,” as described below.

Publishing a co-authored paper

While the exact procedures for writing, soliciting feedback, and publication vary somewhat from experiment to experiment, interview participants generally described processes that involve the following steps: 1) the main contributors carry out specific analyses and write these up as scientific papers, possibly also presenting the work at meetings internal to the collaboration; 2) a draft of the paper is circulated via e-mail to all members of the collaboration for comments and feedback; 3) the paper is submitted for approval (some groups refer to this as having the results “blessed”) by the publication committee within the collaboration; and 4) once approved, the paper can be formally submitted for conference or journal publication and released outside the collaboration.

It is important to note that the physicists interviewed take these procedures very seriously. Several indicated that the premature release of results could constitute grounds for ejection from the collaboration. In addition, the head of the secretariat for one of the large LHC experiments indicated that her office is responsible for the submission of all publications from the collaboration. In other words, individuals affiliated with this experiment are not allowed to submit their own work for publication even when the results have been approved.

What does this mean for HEP?

So far I have illustrated that authorship, and especially inclusion in the author list on papers, is a significant issue in the HEP community. This makes some sense in that the author list is the formal record of responsibility for a discovery. To further explore authorship in HEP, though, consider its three functions described earlier. In terms of credit attribution, the author list is the formal means by which credit for discoveries is attributed. It is considered very important in the HEP community that all contributors to a research project receive formal credit for their efforts in this way. Many believe it is not fair to place a premium on the analysis tasks that lead directly to high-profile discoveries. As one interview participant said:

Every piece which is there has somebody who has thought about, has given a year of his life to make sure that a bolt is in the right place and has the right effect. Not that guy at the end [doing the analysis] who does not know that the bolt is absorbing part of the noise. . . . So I think that it is important that everybody who has worked there, even left or even died, every year people die on these collaborations. It is very bad if this memory is gone. . . . I like the idea of authorship extended (CERN24).

At the same time, all contributions are not equal in impact. There is a clear tension between a desire to recognize all contributions to a large collaborative project and a desire to give special credit to those who put forth particularly valuable effort:

In a lot of ways it sort of doesn’t work. You put everyone on, it sort of demoralizes some people. If there’s a real creative person, you want to somehow let him get the rewards for being creative, and that’s difficult because one person can do something creative but he’s using the data and the work of a few thousand others (CERN03).

Thus, the individual role ambiguity inherent in multiple authorship leaves open the questions of ownership and reputation.

With regard to ownership, my interview participants had less to say. The LHC experiment teams have not yet published physics results, so there have been few opportunities for controversy over formal notions of ownership. Some participants did, though, mention instances in the past where they were hesitant to take ownership of particular results in papers on which they were co-authors. One participant, who was atypically diligent among those interviewed, indicated that he had read all but two of the 250 papers on which he is listed as an author. The two he had not read were published in Russian, which he does not understand. He went on to describe making a point of reading every paper, and removing his name when he does not feel he fully understands or agrees with the results:

There are papers where you say to yourself ‘Do I really want to be associated with this? Maybe I don’t.’ One in particular was high profile and I think it was wrong. And the real reason I took my name off it is I was here [at CERN] when the paper came out, and I said, you know, if somebody calls me and says ‘Gee, this is interesting. You’re an author. Why don’t you come give a seminar on this?’ I didn’t feel like I could defend what was on there at least as well as the proponents could. So I said ‘No, I don’t really want to sign my name to that.’ (CERN20).

The interesting aspect of this example is the participant’s unusual regard for notions of ownership and ability to defend the claims in what others might consider to be his work. This is similar to Rennie et al.’s (1997) notion of guarantorship.

Moreover, formal fraud and misconduct did not come up in interview discussions of credit attribution and authorship. In part, this is likely because HEP has extensive structures for internal review, such as the “blessing” of results and extensive circulation of preprints. In this vein, Kling and McKim (2000) argue that knowledge certification occurs earlier in physics than in other fields.

At the same time, however, participants were quite cognizant of the possibility that other individuals might seek to take ownership of the entire collective endeavor if it is successful. Many mentioned the story of Carlo Rubbia as a cautionary tale. Rubbia was the controversial winner of the 1984 Nobel Prize in Physics for his leadership of the UA1 experiment at CERN, which also involved substantial effort by approximately 200 other collaborators (Taubes, 1986).

Physicists interviewed in the present study, particularly those at early stages in their careers, were significantly concerned about how to get adequate credit for their efforts and establish their reputations in HEP. Many said that even though the LHC projects are very large, “the Nobel Prize won’t be given to 1,500 physicists.” In other words, individual reputation remains the coin of the realm. At the same time, though, there is widespread recognition of the fact that nobody gains anything without the efforts of all their collaborators. Thus, respondents indicated a significant need to remain alert and competitive both as individuals in need of a strong reputation within their collaboration, and as a collaborative group in fierce head-to-head competition with other experiments to be the first to make specific discoveries. As shall be illustrated below, most participants indicated that ambiguity renders the formal record of contribution meaningless in hiring, promotion, and evaluation decisions. They described a system of informal recognition that instead relies heavily on word-of-mouth recommendations and individuals’ ability to get noticed within the collaboration. The remainder of this section will focus on these issues of reputation.

The importance of reputation

Interview participants indicated that reputation has been particularly important in recent years due to a scarcity of jobs resulting from declining funding levels for physics research and ever-increasing experiment costs (Seife, 2005). It is not uncommon for junior researchers in physics to hold two or three postdoc positions before moving into faculty positions, if they are able to secure faculty positions at all. One physicist described this as follows:

I didn’t have great choices. You know, you look around where you get a job. . . . ATLAS is finally a post where I have permanent contract, but before I was on three different postdoc positions, if you like. And then when that runs out, you had to see what next. So you’re not free to say “Now I want to go and work there” because you have to find some payment for what you want to do (CERN14).

One key reason for the importance of reputation is that, in evaluating individuals, the community’s authorship practices become problematic. Because any individual may be listed as an author on hundreds of papers, it is difficult or impossible to tell from the formal record what specific contributions he or she is responsible for. A search on the names of 20 randomly selected interview participants from this study in the SPIRES physics publication database (http://slac.stanford.edu/spires) illustrates this point. The median number of papers on which each participant was listed as an author is 105.5 (SD=185.3), with a wide range from 0 (for a first-year graduate student) to 603. Though most were clustered around 100, six individuals were listed on 200 or more papers. It is therefore not surprising that most interview participants reported that they appear as authors on papers they have not read. The important point here is that when there is no expectation that a job applicant has even read all of the papers listed on his or her CV, assessment by traditional means is a challenge.

Getting Noticed

In the face of ambiguity on both sides of their publication practices—from both the long list of names on any given paper and the long list of papers associated with any given name—the HEP community has largely turned away from formal records of contribution and taken to using informal means of assessment and evaluation. One participant, a senior researcher, described the experience of a talented postdoc:

One of the postdocs has made quite a lot of progress in [a technical area of a large experiment]. He did pretty much all the work by himself along with one of the associate scientists, who actually happens to work for me so I know a bit about this. He gets credit, I guess, because he gets to give the seminars about that, but any publications will be strictly alphabetical. Is that fair? Probably not. But how else do you do it? (CERN04).

The important part of this example is the admission that actual credit for the research discovery does not seem to come from the publications, but rather from the informal seminars and talks the postdoc will give within the collaboration. This is just one of many informal means of “getting noticed” that participants discussed.

Students and junior researchers must ensure that they establish a solid reputation within their workgroups. It was surprising, however, how many graduate students at CERN reported having minimal contact with their advisors at their home institute. Instead, many reported close involvement with a CERN workgroup or even a workgroup based primarily at another institute. The influential reference letters for these students will come from these workgroup colleagues and supervisors, and not necessarily from their advisors. This serves to increase the imperative for researchers to distinguish themselves via individual reputation, which does not sit well with everybody in the community:

I think that’s one of the problems that younger people face. At certain points it becomes quite political somehow. It is very difficult to get credit and I find that . . . it’s also a very sociological thing, you have to give presentations, you have to show up yourself, so I think as much it’s the quality of your work as it is the publicity that you do, which honestly I don’t like as much.

. . .

So, very honestly, I’m thinking about whether I should stay in ATLAS, and whether I should stay in the field or not. Because although I find the physics they’re trying to do very, very interesting, and very, very challenging, it’s tough, you know, and I don’t like to work with all these many people (CERN16).

Additionally, there is a sort of catch-22 regarding getting noticed. As shall be described below, one excellent way to get noticed within the collaboration is to secure a high-profile task, such as representing the collaboration by giving a talk at a conference or doing a particularly glamorous bit of data analysis. At the same time, though, these tasks are generally assigned by the central leaders of the experiment, so it is difficult to be assigned to one without having been noticed already. Thus, the second important component of “getting noticed” is being noticed not only by one’s colleagues, but also by those in positions of authority who can assign high-profile tasks and will be aware of job openings, conference presentations, and other opportunities that may become available. This potential for broad exposure was described by one participant as a significant advantage to graduate students working on a very large collaboration like ATLAS:

I actually think these collaborations have for young people a particular advantage in that they can situate themselves in a real international context, where . . . cleverness and such values can get through. Whereas, for example, if you just have a little experiment only at your university, you are completely locked in perhaps to the hierarchy of your small group of five people or something like that. There is maybe much less room really to show yourself off, or it’s only your supervisor or your professor who has an opinion about whether you’re good or bad (CERN19).

While it may be true that large collaborations provide stars with the opportunity to shine more brightly, as this participant suggested, it is also true that large collaborations place a much more significant burden on junior researchers, who need to ensure that their efforts are broadly recognized. As will be illustrated below, this can prove problematic.

How to get noticed without really trying

Interview participants described several means of getting noticed and distinguishing themselves within a large collaboration. First, it has been pointed out elsewhere (e.g., Traweek, 1988) that physicists prize a willingness to work hard in order to achieve high-quality results. Being known as somebody who is dependable, diligent, responsible, and willing to work long hours is likely to yield positive letters of recommendation from immediate supervisors and colleagues, though this is not always true.

In addition, some physicists described a need to be known as somebody who can come up with novel solutions to difficult problems. It is interesting and important to note that these problems need not be discovery-oriented. To be sure, there is significant value in solving an analysis problem that leads to a major discovery. At the same time, however, many participants indicated that a novel solution to a difficult detector design or construction problem could also carry significant weight.

Moreover, many participants indicated that talks and presentations are another way to achieve visibility. These can be given either internally, as part of “collaboration weeks” where hundreds of collaborators convene at CERN, or externally when the collaboration presents work at a field-wide HEP conference. Both are important in getting noticed. While Knorr-Cetina (1999) described the assignment of these presentations as a collectively oriented activity that considered which students or junior researchers “needed” visibility at the time, participants here described a somewhat more individually focused process. While they certainly described some effort to be “fair,” it was also widely acknowledged that individuals must be proactive about getting credit for their contributions to the project. For example, the spokesman of one of the LHC experiments indicated that more than 100 talks per year are given at conferences by members of the collaboration, and that he tries very hard to be sure “they are given to people who really deserve it, are competent” (CERN19). Another participant described her experience in trying to give talks and be otherwise visible, a process that could easily “go bad”:

You know, you have to take care on your own that all of the credit is given to you. It’s difficult . . . but it’s definitely based on personal effort (CERN09).

The final class of methods that participants described for achieving visibility involves providing exemplary or exceptional service to the collaboration, generally by taking some sort of leadership role in the overall collaboration or one of its components. These roles merit mention because they can be nontrivial to secure, in two ways. First, many of the high-level leadership positions are elected, and therefore require some prior exposure. Even leadership positions within subgroups, however, are difficult to obtain in that they require substantial presence at CERN. One must thus be affiliated with CERN or with an institute that can support frequent travel to CERN and cope with long absences from the home institute. Indeed, several informants reported needing to secure permission from their home institutions (which is not uncommonly denied) to take on additional responsibilities within the collaboration. In one particularly interesting case, a participant indicated that in the ATLAS hierarchy he is a superior to the lab director to whom he reports at his home institute. When asked how he balances this, he noted that:

Basically it depends on the participants we’re discussing, you know, on who takes the lead. If it’s an ATLAS point, then it’s me who says ‘Well, we need to do this, I need to do that.’ If it’s a [home institute] point, it is usually one of my colleagues. Now I, of course, have to have permission to have this job from my lab director at [my home institute], so he could have said that he didn’t want me to stand for re-election this period. He could have said ‘I want you to go back to [home institute] and do something else.’ If he had said that, then I have little to discuss with him. But he would have been fully in his right (CERN11).

The interesting point here is that this hybrid system of seniority can confuse political relations and influence people’s ability to get noticed within a collaboration.

No record to fall back on

It is also quite easy to get lost or even crushed in the crowd of a large HEP collaboration. Breakdowns in informal systems of recognition, of course, are not a novel result on their own. What distinguishes the present discussion is the almost complete absence of a formal record to fall back on.

First, there are situations in which individuals work diligently and provide exemplary service to the collaboration, but these efforts are, for some reason or other, not noticed or properly credited by their supervisor. For example, one participant described her situation as follows:

It’s been a very frustrating experience because I do know that I have been one of the few who has performed exceptionally well. We have done it on time and whenever there was a problem I was able to re-arrange, re-steer, and adjust the problem and all that. . . Despite that, upper management insists on assigning someone else the responsibility for being in charge officially in the [org chart]. . . . He was never in the lab. He doesn’t know what we are doing. The only time he came to the lab was to borrow screwdrivers, which he did not return. So it’s been very, very frustrating, and he gets the credit officially for the work, so this has been very tough (CERN21).

The problem she describes is that her supervisor wishes to take credit for her efforts, and she describes herself as having few options for official recourse.

Several participants, particularly women,[3] felt that the informal system of recommendation and credit attribution can be particularly difficult when one is faced with a supervisor who does not take one’s contributions seriously. For example, one participant described her perceived level of influence on the overall experiment as follows:

I am represented by somebody, you know? I am in a group which is 80 people and we have a project leader that represents all of us. So I have no idea what he takes of my opinion when he goes to the big board and makes decisions. So I don’t think there I have any influence at all. Really, definitely. I mean, we are like little ants. That is what we are (CERN09).

Here, we see again that supervisors and leaders in HEP collaborations wield power over their subordinates that differs, in some ways, from that found in other fields, because there is no effective formal record of authorship that at least denotes some contribution to a research effort. Rather, because credit and contribution are tracked informally, individuals must be particularly vigilant and are, at some level, at the mercy of their supervisors.

In another example of this, one junior researcher who held a CERN fellowship described his own experience as a mixture of institutional politics and the nature of the positions that he held:

In [the CDF experiment at Fermilab], I was at Chicago, that was the most powerful institution and so they gave me a lot of responsibility. Here at CERN [there’s] also, I guess, a . . . tradition [of influence and responsibility]. [As a fellow, however,] since you’re only here two years they know you’re only here two years. And they give you less responsibility because they know they cannot rely on you because you’re going to leave (CERN16).

In each of these situations, we see that the informal system of attribution in HEP relies heavily on the good faith of supervisors and leaders, who are themselves trying to get noticed in many cases.

Discussion: What does this mean for science?

While this examination of authorship and credit attribution practices in HEP has clear implications for traditional notions of authorship itself, I began this article with the larger question of conflict between contemporary collaborative practice and the traditional institutional structures of science. In this section I will discuss the implications of these findings for addressing this critical question.

Individuals remain the unit of organizations

One key theme in these results is that, despite an increased focus on collaborative entities in HEP, individuals remain the fundamental unit of both collaborations and researchers’ home institutions. Even though the entire collaboration may wish to take collective credit for the overall research contribution of their work, this result can come about only through the efforts of individuals. And these individuals have needs, goals, and career trajectories that require them to distinguish themselves from others—even when those others are also their collaborators.

We have seen here that in the HEP community a focus on collective entities in the formal credit attribution scheme has resulted not in a system in which all people get equal credit, but one in which credit is assigned to individuals through informal and sometimes unreliable channels. These data suggest that such a system places an even-greater-than-normal burden on graduate students and junior faculty, since they must get noticed and establish a reputation within a system that is largely informal and based on conversations and reference letters. These results do not provide conclusive evidence about whether or not such a system is functional, but it is undeniably different from the way research contributions are established in other disciplines. In other words, physicists are being hired, promoted, and evaluated according to different standards than their peers on campus. Moreover, we saw above that junior faculty in HEP were not able to contribute significantly to the current generation of experiments, because the time horizon for these projects is so long that data will not be generated in time for them to publish papers before tenure.

These observations have many possible implications. One could provocatively ask, for example, whether the notion of tenure for individuals will become obsolete as research becomes more collaborative, or if the five-year time horizon should be adjusted to account for project duration in different fields. If individuals remain the fundamental unit of research organizations, the former scenario is unlikely. A potentially more useful question is whether the formal record of one’s accomplishments should incorporate a broader range of contribution types.

Larger collaborations mean a larger range of contributions

A second clear theme in these data is that the HEP community is struggling to fit multiple styles of contribution into a single style of attributing credit. On the one hand, this makes intuitive sense. Regardless of the contribution they are making, virtually all of the physicists I spoke with are involved with the LHC collaborations because they are interested in the physics. If doing great physics requires researchers to spend time as programmers or even engineers, they are willing to do that for the sake of the physics that will follow. However, those researchers want to get credit for their contributions to the results that follow from these efforts.

While individual contributions are formally blurred in authorship, they are informally made known via seminars and word of mouth. This raises the question of whether a model more akin to “contributorship” (Rennie et al., 1997) is called for in HEP, in which individual contributions to a project are explicitly noted in papers or detailed online. On the one hand, this would be useful in that it would formally report and track contributions in ways that could be useful to outsiders. On the other hand, it only solves part of the problem. The real issue in HEP is not just that contributions are not acknowledged formally, but that there is also a serious concern that these contributions do not matter much to colleagues outside the collaboration because they do not constitute traditionally defined physics research. Formally acknowledging contributions on the current HEP collaborations would essentially create a very long list of senior faculty who have made substantial contributions so that junior faculty can someday do “real” physics. If these contributions mattered to outsiders, junior faculty would be on that list as well.

I began by seeking lessons from HEP that could be applied to other fields. While I did not find solutions to this problem, one key lesson is a warning about issues likely to become increasingly salient as other disciplines adopt “big science” practices. Can individual researchers build a reputation by contributing to the infrastructure and instruments that enable science? Can universities adapt to these very different circumstances for contributing to and getting credit for science? And if the construction and purpose of infrastructure is inherently collaborative, whose reputation will be advanced by contributing to it? Will we build structures to support science that punish many with years of painstaking tasks only to reward a select few for reasons that may not fit with the strong meritocratic traditions of science?

These are key questions that raise a host of others. Among these is mapping the space of possible contributions and their value. Some contributions to infrastructure (such as the noise-absorbing bolt example quoted in the data above) require intellectual energy and creativity, and these contributions might be judged to be rather significant if they solve an important problem. At the same time, some contributions to research involve minimal intellectual effort—such as the NASA clickworkers program in which volunteers label craters on Mars via a Web interface (Kanefsky et al., 2001). There is a vast space of contribution types in between these that must be considered, and we must determine how to assess their merits.


Acknowledgements

I wish to thank Amy Friedlander, Matthew Bietz, Tom Finholt, Steven Jackson, Judith Turner, Ann Zimmerman, and the anonymous reviewers for their thoughtful feedback on earlier drafts of this work. I also acknowledge financial support from the Horace H. Rackham School of Graduate Studies at the University of Michigan, and the University of Michigan Department of Physics.


An earlier version of this work was published as: Birnholtz, J. (2006) “What Does It Mean To Be An Author? The Intersection of Credit, Contribution and Collaboration in Science,” Journal of the American Society for Information Science and Technology (JASIST), 57 (13), pp. 1758-1770.


References

Atkins, D. E., Droegemeier, K. K., Feldman, S. I., Garcia-Molina, H., Klein, M. L., & Messina, P. (2003). Revolutionizing science and engineering through cyberinfrastructure: Report of the National Science Foundation blue-ribbon advisory panel on cyberinfrastructure. Washington, DC: National Science Foundation.

Biagioli, M. (2003). Rights or rewards. In M. Biagioli & P. Galison (Eds.), Scientific authorship: Credit and intellectual property in science (pp. 253–280). New York: Routledge.

Birnholtz, J. P. (2005). When do researchers collaborate? Toward a model of collaboration propensity in science and engineering research. Doctoral dissertation, School of Information, University of Michigan, Ann Arbor, MI.

Birnholtz, J. P. (2006). What does it mean to be an author? The intersection of credit, contribution and collaboration in science. Journal of the American Society for Information Science and Technology, 57(13), 1758–1770. [doi: 10.1002/asi.20380]

Bourdieu, P. (1984). Distinction: A social critique of the judgement of taste. Cambridge, Mass.: Harvard University Press.

Claxton, L. D. (2005). Scientific authorship: Part 2. History, recurring issues, practices and guidelines. Mutation Research, 589(1), 31–45. [doi: 10.1016/j.mrrev.2004.07.002]

Close, F., Marten, M., & Sutton, C. (2002). The particle odyssey: A journey to the heart of matter. New York: Oxford University Press.

Cronin, B. (1995). The scholar's courtesy: The role of acknowledgement in the primary communication process. London: Taylor Graham.

Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569. [doi: 10.1002/asi.1097]

DeSanctis, G., & Monge, P. (1998). Communication processes for virtual organizations. Journal of Computer Mediated Communication, 3(4).

Finholt, T. A. (2003). Collaboratories as a new form of scientific organization. Economics of Innovation and New Technologies, 12(1), 5–25. [doi: 10.1080/10438590303119]

Finholt, T. A., & Olson, G. M. (1997). From laboratories to collaboratories: A new organizational form for scientific collaboration. Psychological Science, 8(1), 28–36. [doi: 10.1111/j.1467-9280.1997.tb00540.x]

Foster, I., & Kesselman, C. (1999). The grid: Blueprint for a new computing infrastructure. San Francisco: Morgan Kaufmann.

Franck, G. (1999). Scientific communication—a vanity fair? Science, 286(5437), 53–55. [doi: 10.1126/science.286.5437.53]

Galison, P. (1997). Image and logic: A material culture of microphysics. Chicago: University of Chicago Press.

Galison, P., & Hevly, B. (1992). Big science: The growth of large-scale research. Stanford, CA: Stanford University Press.

Huberman, A. M., & Miles, M. B. (1994). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research. Thousand Oaks, CA: Sage.

Kane, G. L. (1987). Modern elementary particle physics. New York: Perseus Books.

Kanefsky, B., Barlow, N. G., & Gulick, V. C. (2001). Can distributed volunteers accomplish a massive data analysis task? Proceedings of Lunar and Planetary Science XXXII, 1272.

Kennedy, D. (2003). Multiple authors, multiple problems. Science, 301, 733. [doi: 10.1126/science.301.5634.733]

Kevles, D. J. (1998). The Baltimore case: A trial of politics, science, and character. New York: Norton.

Kling, R., & McKim, G. (2000). Not just a matter of time: Field differences and the shaping of electronic media in supporting scientific communication. Journal of the American Society for Information Science, 51(14), 1306–1320. [doi: 10.1002/1097-4571(2000)9999:9999<::AID-ASI1047>3.0.CO;2-T]

Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.

Kouzes, R. T., Myers, J. D., & Wulf, W. A. (1996). Collaboratories: Doing science on the internet. IEEE Computer, 29(8), 40–46.

National Research Council. (1993). National collaboratories: Applying information technology to scientific research. Washington, DC: National Academy Press.

Nentwich, M. (2003). Cyberscience: Research in the age of the internet. Vienna: Austrian Academy of Sciences.

Newell, A., & Sproull, R. F. (1982). Computer networks: Prospects for scientists. Science, 215(4534), 843–852. [doi: 10.1126/science.215.4534.843]

Paneth, N., Hemenway, D., Fortney, J. A., & Jung, B. C. (1998). Authorship: Readers and editors respond. American Journal of Public Health, 88(5), 824–831.

Parker, R. A., & Berman, N. A. (1998). Criteria for authorship for statisticians in medical papers. Statistics in Medicine, 17(20), 2289–2299. [doi: 10.1002/(SICI)1097-0258(19981030)17:20<2289::AID-SIM931>3.0.CO;2-L]

Rennie, D., Yank, V., & Emanuel, L. (1997). When authorship fails: A proposal to make contributors accountable. Journal of the American Medical Association, 278(7), 579–585. [doi: 10.1001/jama.278.7.579]

Rose, M. (1993). Authors and owners: The invention of copyright. Cambridge, MA: Harvard University Press.

Seife, C. (2005). High-energy physics: Exit America? Science, 308(5718), 38–40. [doi: 10.1126/science.308.5718.38]

Shapin, S. (1989). The invisible technician. American Scientist, 77, 554–563.

Tarnow, E. (1999). The authorship list in science: Junior physicists' perceptions of who appears and why. Science and Engineering Ethics, 5(1), 73–88. [doi: 10.1007/s11948-999-0061-2]

Taubes, G. (1986). Nobel dreams: Power, deceit and the ultimate experiment. New York: Random House.

Traweek, S. (1988). Beamtimes and lifetimes: The world of high energy physicists. Cambridge, MA: Harvard University Press.

Whitley, R. (2000). The intellectual and social organization of the sciences. Oxford: Oxford University Press.

Woodmansee, M., & Jaszi, P. (1994). The construction of authorship: Textual appropriation in law and literature. Durham, NC: Duke University Press.


Notes

    1. In this regard journal articles stand in contrast to books, and textbooks in particular, that are frequently written to generate royalties.

    2. Withdrawal would, of course, carry substantial intangible costs to the institute, but it is not unprecedented.

    3. It should be noted that women remain a significant minority in HEP, comprising around 10% of the field.