Research Ideas and Outcomes : Policy Brief
Policy Brief
A New Research Economy: Socio-technical framework to open up lines of credit in the academic community
Laurel Haak‡,§, Sarah Greene|, Kristen Ratan¶
‡ Ronin Institute, Montclair, United States of America
§ Mighty Red Barn, Townsend, United States of America
| Rapid Science, Brooklyn, United States of America
¶ Stratos, San Francisco, United States of America

Abstract

Journal articles have been the gold standard for research and scholarly communication. Specifically, measurements of publication and citation, particularly in high-impact journals, have long been the key means of accruing credit for researchers. In turn, these credits become the currency through which researchers acquire funding and achieve professional success. But, as in global trade, tying value to a fixed standard limits wealth distribution and innovation. It is time for the research community to attribute credit for contributions that reflect and drive collaborative innovation, rewarding behaviors that produce better research outcomes.

Keywords

attribution, collaboration, credit, evaluation, metadata, metrics, persistent identifier, research lifecycle, research policy, rigor and reproducibility, scholarly communication

Introduction

Our goal as researchers is to better understand the world around us. To this end, we observe, form hypotheses, gather information, compare notes, and accept, toss out, or reframe our hypotheses, then continue the cycle. Every researcher relies on collaboration in some form or another, whether that is by participating on a research team, connecting in conference venues, or through the peer review process. For the past several decades, researchers have received reputational credit in the form of research papers, which they use to make progress in their careers and secure funding for their work (Cline et al. 2020). This reliance on publishing as the main source of credit favors competition over collaboration and slows down research progress overall (Anderson et al. 2007). Aligning credit with the collaborative nature of research is a key research policy challenge that funders, governments, and institutions must address.

We benefit from rapid and effective communication of research findings.  However, given that research is an iterative and highly collaborative enterprise, research findings as reported in peer-reviewed articles represent but a small component of the research process. Alone, they do not support rigor and reproducibility. The singular credit they generate discourages collaboration. They are designed as a way to showcase work and not to fuel dialog and debate that would allow other researchers to build on the work. So long as journal articles are the gold standard for receiving credit – and therefore researcher participation – we will continue to have perverse incentives that skew the research process, hinder diversity and inclusiveness, and ultimately limit innovative capacity.

To break the dependence on traditional publishing as the “gold standard” measure of progress, we need to apply metrics, identifiers, and infrastructures to all stages of the research lifecycle: ideation, experimentation, analysis, validation, review, and impact.  This means attributing contributions throughout the research lifecycle; connecting components using persistent identifiers; and re-designing the static, print-based article to be a dynamic and evolving research report of project progress. And because measurements are fundamental to formulating rewards, digitizing contributions through each step of the life cycle will enable the necessary tracking and rewards. In this way, credit can be distributed more equitably and collaborative behaviors – a known stimulus of innovation – can be remunerated (Wuchty et al. 2007).  In this article, we propose a blueprint for this new credit economy for the research community, illustrated with practical examples and proofs of concept.

Designing Effective Solutions

Let’s begin by examining research process stakeholders: researchers, funders, organizations, community groups, and policy makers.  Each of these stakeholders has different motivations for participating in the research process. Researchers are driven by curiosity and career progression, and want credit for their contributions.  Community groups are motivated to drive the development of new processes and products, and endeavor to be included in the design process.  Funders want to drive progress in their mission area(s) and measure progress toward their goals.  Research organizations want to recruit and retain talent and benchmark individual and organizational performance.  Policy makers are interested in developing research capacity and want to be able to use evidence to support policy development and program evaluation. Each of these stakeholder groups has something to gain from opening new lines of credit and incentivizing cooperative behaviors.

The research process has four broad stages: Ideation, Experiment, Analysis and Validation, and Review and Impact. These are as likely to loop back or skip ahead as to follow one another, but for the purposes of argument we will take a linear approach. Each stage is associated with a set of activities and artifacts, a non-exhaustive sample of which is shown for each stage in Fig. 1. In addition, there are “glue” activities that enable coordination within and between stages, including project management, team facilitation and development, collaboration, presentation, annotation, and curation.

Figure 1. Activities and artifacts at each stage of the research process.

Clearly, there are many important activities that researchers engage in, fundamental to research progress, above and beyond writing an article.  The CRediT taxonomy (Brand et al. 2015), developed by the research community, creates a broader framework for identifying contributions that drive the research process.  Its use in the publication process is shifting attribution from “authorship” to “contributorship” (Allen et al. 2019) and is an important step toward a more representative allocation of credit. However, it is still focused on the research article and publication process.

We propose that attribution be expanded further, to contributions and artifacts across the research lifecycle: from team development plans to methods, data management plans, annotated data sets, and collaborative activities.
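
To make this concrete, the sketch below shows, in Python, what such an expanded attribution record might look like: a CRediT-style role attached to a lifecycle artifact via persistent identifiers. The field names, role labels, and identifier values are illustrative assumptions on our part, not an existing standard or schema.

  # Illustrative sketch only: a contribution record that extends CRediT-style
  # roles to artifacts across the research lifecycle. Field names, role labels,
  # and identifiers are hypothetical placeholders.
  from dataclasses import dataclass

  @dataclass
  class Contribution:
      contributor_orcid: str    # persistent identifier for the person
      organization_ror: str     # persistent identifier for the organization
      artifact_pid: str         # DOI or other identifier for the artifact
      lifecycle_stage: str      # e.g. "ideation", "experiment", "analysis", "review"
      role: str                 # e.g. a CRediT role or an extended label
      description: str = ""

  # Example: crediting a data management plan produced during ideation.
  example = Contribution(
      contributor_orcid="https://orcid.org/0000-0000-0000-0000",  # placeholder iD
      organization_ror="https://ror.org/00example0",              # placeholder ROR ID
      artifact_pid="https://doi.org/10.0000/example-dmp",         # placeholder DOI
      lifecycle_stage="ideation",
      role="data curation",
      description="Drafted the project data management plan",
  )
  print(example)

A record like this could accompany any artifact, not just a manuscript, which is the point of moving from authorship to contributorship.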

We envision a research process in which hypotheses are shared as they are posited; where teams create data and output management plans and shared spaces for project plans, methods, and resources; and where methods and findings (including null results) are shared in an accessible database for analysis. As findings, methods, and hypotheses coalesce and evolve, status reports are published at regular intervals to capture a snapshot of progress in the area of focus. And all along, collaborative activities are captured, analyzed, and disseminated for discussion and iteration.

This vision is built upon an open infrastructure that captures research outputs and embeds persistent identifiers for people, organizations, and objects in each of these stages, ensuring that researchers and materials sources get credit for contributions made throughout the project process – even if there is not a journal paper output – and that findings are retrievable, discoverable, and lasting.  

How can we get to this future, where collaborative activities are recognized and incentivized through research credit structures?  We propose a redesign of research process systems centered on core principles of attribution, communication, and measurement:

  1. Attribute. Embed the attribution of open and collaborative activities across the research process.

  2. Communicate. Encode activities with transparent provenance:  granular and open sharing with persistent identifiers for people, places, things, and projects, and transparent and trusted metadata.

  3. Reward. Engage with researchers to incentivize and recognize adoption of open and collaborative practices and define new metrics to measure change and fuel new reward structures.  

These actions, taken together, will lead to a deeper acknowledgement of and alignment with collaborative activities through a broader apportionment of credit.  We anticipate that this process redesign will also bring needed improvements in diversity and inclusiveness, and result in more rigorous research processes and reproducible results.  Realization of these goals must be tested and adjusted using embedded metrics enabled by persistent identifier infrastructures.

Trusted Attribution

For new forms of credit to become adopted by the research community, they need to be trusted.  This trust emerges from a shared understanding of how information is created and shared, and comes from intentional community governance of research information, application of ethical standards, and implementation of transparent information provenance.

Governance

Open infrastructure governance, sustainability, and insurance principles (Bilder et al. 2015) are critical for building trust in new lines of credit by ensuring the transparency and availability of data that supports research claims.  The FAIR principles (Wilkinson et al. 2016) build upon these principles and focus on the ability of machines to automatically find and use research data, and support its reuse by individuals. 

In addition to the technical and services component, we also need to ensure that research communities are integral to the change process.  There are many examples of community engagement in this space: development of the CRediT taxonomy (Allen et al. 2019), the DORA Initiative, adoption of ORCID at the national level (e.g., Simons 2015), Metadata 2020 working groups, and organizations such as the Research Data Alliance, to mention but a few.  It is not necessary, nor is it advisable, to have one organization solely responsible for driving new credit models. Coordination efforts across stakeholder groups to spur the iterative development of the expanded credit model are an essential design element.

Ethics

While promoting trust in findability, FAIR principles do not fully meet the credit needs of researchers and communities.  This is illustrated, for example, by the general lack of source- and person-credit fields in many data repositories (e.g., see Krznarich 2019).  Source metadata is particularly important for Indigenous Peoples, who must be able to assert control over the application and use of Indigenous data and Indigenous knowledge for collective benefit (United Nations General Assembly 2007). 

To address these needs, the CARE principles (Global Indigenous Data Alliance 2019) have been developed. CARE principles reflect the crucial role of people and purpose in building community trust and participation in the new research credit economy and provide a template for participation by other communities such as research facilities (ORCID 2017) and collection curators.  The Traditional Knowledge and Biocultural labels developed by Local Contexts, coupled with personas developed in the Metadata 2020 project and the Educopia Values and Principles Checklist (Skinner and Lippincott 2020), provide additional bridges between researchers, data, creators, communities, and curators.

Provenance

Assurance standards are a component of trust building.  FAIR and CARE get at findability and appropriate use.  We also need transparency in the design principles of the new credit economy.  To instantiate trust, we need to know more than a node or edge on a graph.   ORCID has done some work in this area, examining how assertions (connections between an ORCID ID, a work item, and the organization(s) hosting/funding/resourcing that work) are made into the ORCID registry and classifying assurance standards based on source transparency and traceability (Peters 2018).  
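
As a rough illustration of that idea, the sketch below assigns a simple assurance level to an assertion depending on who made it and whether it carries a traceable source identifier. The levels and rules are our own assumptions for illustration; they are not ORCID's actual classification scheme.

  # Illustrative sketch: rank the assurance of an assertion (a link between a
  # person, a work, and an organization) by the transparency of its source.
  # The levels and decision rules are assumptions, not ORCID's classification.
  def assurance_level(asserted_by: str, source_has_pid: bool, self_asserted: bool) -> str:
      if self_asserted and not source_has_pid:
          return "low: self-asserted, no traceable source identifier"
      if self_asserted and source_has_pid:
          return "medium: self-asserted, but linked to an identified source"
      if source_has_pid:
          return f"high: asserted by {asserted_by} with a traceable source identifier"
      return f"medium: asserted by {asserted_by}, source not independently traceable"

  print(assurance_level("a funder's grant system", source_has_pid=True, self_asserted=False))
  print(assurance_level("the researcher", source_has_pid=False, self_asserted=True))

The point is not the particular levels but that the basis for each assertion is explicit and machine-readable, so downstream users of the credit data can judge how much weight to give it.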

In addition, stakeholders need to be involved in developing the metrics of contributions, sharing and collaborating, and in the analysis of the data.  Data models, inputs, pre-processing steps, attribution, and de-identification methods must be transparent, while also respecting privacy (Lane et al. 2014). Credit units applied to diverse project goals and disciplines must be normalized if they are to serve in assessing researchers’ subsequent funding and career advancement. This is no easy task, but there are examples of successful measurement frameworks (e.g., Basner et al. 2013).  Critically, at these early stages of reinventing the research process, stakeholders can use their sticks (policy mandates) and carrots (resources and rewards) and must ensure that researchers are integral partners in the measurement system, helping to design and test tools and platforms that capture open and collaborative behaviors. 

Bringing together governance, ethics, and provenance, we can develop transparent and trusted methods to track the use of collaboration technologies and drive adoption of a new credit economy; to be truly effective, these methods must become part of the assessment frameworks used by funders and by tenure and promotion committees. We are already part-way there: CRediT roles, persistent identifiers, and the CARE and FAIR principles are already in use and, if used in concert, provide an effective means to tie together components of the research lifecycle and to measure, at least in a first iteration, what is working and what is not.

Rapid and Holistic Communication of Research

Research communication focused on journal article submissions is a slow and incomplete process. As we are learning in the time of COVID-19, rapid data sharing and preprint posting are accelerating our understanding of the virus and its impact on human life (Kupferschmidt 2020). Early sharing and open review of research methods and findings offer a more fertile ground for collaboration. This section describes the fundamental building blocks, such as assigning persistent identifiers to research outputs and logging them in appropriate repositories, as well as innovations that push research communication into a more dynamic era.

Identifier Infrastructure and PID Graphs

An open identifier infrastructure has been developing over the last 20 years (Haak et al. 2012), providing the underlayment for the credit revolution.  Infrastructure services have enabled clear identification of the people and, increasingly, the organizations involved in driving research, as well as the papers, datasets, and resources associated with research activities.  Embedding persistent identifiers into standard research workflows is making it possible not only to identify but also to connect components within and across the research lifecycle (e.g., Fenner 2020).  From these connections, graphs (Fenner and Aryani 2020) can be derived that show associations between research activity components.  Grouping activities using project identifiers adds context for evaluating research impact (Haak et al. 2018) and innovation drivers (Glennon et al. 2018), as well as for tracking the efforts to improve research processes that we propose in this article.
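
As a minimal illustration, the sketch below builds a toy PID graph in Python (using the networkx library) that links placeholder ORCID iDs, DOIs, and a ROR identifier, then reads off the associations around one dataset. The identifiers and relation labels are invented for the example; they do not come from any production PID graph.

  # Toy PID graph: nodes are persistent identifiers (people, organizations,
  # outputs); edges are typed relations. All identifiers are placeholders.
  import networkx as nx

  g = nx.MultiDiGraph()
  g.add_edge("orcid:0000-0000-0000-0001", "doi:10.0000/dataset-1", relation="creator_of")
  g.add_edge("orcid:0000-0000-0000-0002", "doi:10.0000/dataset-1", relation="curator_of")
  g.add_edge("doi:10.0000/article-1", "doi:10.0000/dataset-1", relation="cites")
  g.add_edge("orcid:0000-0000-0000-0001", "ror:00example0", relation="affiliated_with")

  # Walk the graph to see everything connected to the dataset.
  for src, dst, data in g.in_edges("doi:10.0000/dataset-1", data=True):
      print(f"{src} --{data['relation']}--> {dst}")

Grouping such edges under a project identifier is what adds the evaluative context described above: the same graph query can then answer "what did this project produce, and who contributed?"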

Research Output Management: ROMS

Before research is communicated publicly, its component parts need to be logged, shared with trusted colleagues, and stored in a way that makes them discoverable and persistent. There are many repositories, tools, and sites for sharing and storing datasets and other research outputs and, while use of these has been slowly increasing, most outputs remain scattered across various local drives or isolated cloud storage. Researchers don't typically use the available, reliable third-party repositories for data, code, and other research outputs such as protocols and resources. While some funders and institutions have policies on open data (and other outputs), many have voiced frustration that it is difficult to track compliance. There are no clear pathways to using these third-party repositories, and no way for either funders or institutions to check and monitor their use.

Because there is no persistent record of when and where datasets and other outputs have been shared or reused by others, credit cannot be given for all of the work done by researchers, and more nuanced measures of impact are not possible. These outputs are generally not tied to preprints, journal articles, or future funding proposals, so they do not contribute to the complete communication of scholarship, the reproducibility of the work, or the reputation of those who worked hard to produce them. If the code used to analyze a dataset is not shared alongside the dataset, for example, that analysis cannot be verified and the person who designed the software is not given credit for the work.

The Research Output Management System (ROMS), a project initiated by Stratos and undertaken by Aligning Science Across Parkinson’s (ASAP), is a demonstration of our proposed design principles: extending the attribution of credit throughout the research lifecycle, with services for storing, preserving, and monitoring research outputs.  As a living, dynamic tool with automation built in, the ROMS operationalizes connections between interrelated open source components to support a living representation of research workflows, beginning at the start of a funded project and carrying through to publication and beyond. Through the use of identifiers and open APIs, information sharing and metrics collection can be semi-automated, and sharing permissions managed as a component of the project. 

The ROMS is currently being built by ASAP as an open source tool that can be adopted by others, including funders and institutions. Because it logs all research outputs, with persistent identifiers and accurate metadata, it can serve multiple functions, including helping researchers share their work in a consistent, discoverable, and minable way and offering funders insight into the full impact of their funding programs. 
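
The sketch below is not the ROMS implementation or its data model; it is a minimal, assumption-laden illustration of the kind of output log such a system might keep, in which each entry carries a persistent identifier, descriptive metadata, contributor identifiers, and a trace of where and when the output was shared.

  # Hypothetical sketch of a research-output log entry (not the actual ROMS
  # data model): each output gets a persistent identifier, descriptive
  # metadata, and a record of where and when it was deposited or shared.
  import json
  from datetime import date

  output_log = [
      {
          "pid": "doi:10.0000/example-code",          # placeholder identifier
          "type": "software",
          "title": "Analysis pipeline for assay X",
          "contributors": ["orcid:0000-0000-0000-0001"],
          "repository": "a community code archive",   # e.g. an institutional or third-party repository
          "shared_on": str(date(2021, 6, 1)),
          "license": "MIT",
          "linked_outputs": ["doi:10.0000/example-dataset"],
      },
  ]

  # Funders or institutions could monitor sharing and compliance by querying such a log.
  print(json.dumps(output_log, indent=2))

Because every entry is identified and linked, the same log that helps a researcher keep track of their own outputs can be aggregated into the programme-level insight described above.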

Executable Preprints and Articles

Journals can drive research reproducibility with tools such as Stencila, a platform for embedding live code and datasets into a manuscript.  Stencila was used to create eLife’s recently announced Executable Research Article (ERA) format, with which preprints and journal articles can be ‘born reproducible’: authors demonstrate how their data and code work through the preprint or article itself. Recent implementation of executable research articles at scale (Tsang and Maciocci 2020) demonstrates the feasibility of this concept.  Beyond linking related resources to a published article, ERA functionality allows authors to easily incorporate datasets, code, and protocols into their manuscript, without possessing coding knowledge.
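
To give a flavor of what ‘born reproducible’ can mean in practice, the short Python chunk below is the kind of live code an executable article might embed so that a figure is regenerated from the underlying data whenever the article is rendered. The data values and plot are invented for illustration and are not drawn from any eLife article or from Stencila's documentation.

  # Illustrative live code chunk of the kind an executable article might embed:
  # the figure is regenerated from the (here invented) data each time the
  # article is rendered, rather than shipped as a static image.
  import matplotlib.pyplot as plt

  days = [0, 1, 2, 3, 4, 5]
  signal = [1.0, 1.4, 2.1, 2.9, 4.2, 6.0]   # placeholder measurements

  plt.plot(days, signal, marker="o")
  plt.xlabel("Day")
  plt.ylabel("Measured signal (arbitrary units)")
  plt.title("Figure regenerated from embedded data")
  plt.savefig("figure1.png")

A reader of the executable version can inspect and rerun exactly this chunk, which is what ties the published claim to the data and code behind it.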

Facilitated Living Reviews

The Facilitated Living Review (FLR) is a process developed by Rapid Science to encourage and enable researchers from different disciplines to encounter each other’s ideas and latest findings in a setting not unlike a journal club.  The FLR incorporates a curatorial service that interprets these insights and findings as they relate to the latest topically relevant published evidence.  

This process addresses impediments to collaborative and open research.  First, there are few opportunities in team-based initiatives for researchers to gather and informally discuss their work, as occurs at conferences or departmental journal clubs when new evidence is published.  Second, early, incremental, and null findings are rarely posted openly to the research community because of time constraints, lack of context, fear of being wrong or being scooped, and the absence of incentives and rewards. And, finally, incremental findings are generally not subjected to peer review or oversight by peers, and yet entire projects and subsequent publications are built upon them.

The FLR addresses each of these. The process is managed by an Editorial Facilitator (EF), a subject matter expert with editing expertise, who writes and maintains the review, updating it continually when a report of new evidence is published. Team members who are expert on that topic are called in to debate/annotate/revise the positioning of the evidence, based on how their work supports or challenges the findings. Incremental findings such as a dataset or null results, organically peer reviewed by the team members, can be cited in the FLR and shared external to the team simultaneously. 

The FLR attributes credit to team members who were involved in producing the review and to investigators whose early findings are incorporated in the review. This leverages an existing credit standard – that of citation – with new reward metrics for the collaborative behaviors leading up to open dissemination of the FLR.  Once shared, ongoing feedback keeps the FLR alive, incorporates new findings, and informs subsequent versions, as shown in Fig. 2.

Figure 2. Collaborative Workflow of an Incremental Dataset in a Consortium Setting. The workflow of an investigator’s incremental dataset is shown as it is incorporated into the Facilitated Living Review (FLR), moving along a continuum from closed to open review. (1) After ideation and hypothesis formation by the team, early experimentation creates an incremental dataset. (2) The dataset is shared and discussed with the Project X research consortium, then iterated. (3) The v.3 dataset is shared with the consortium for further analysis, positioning, and citation in the context of the latest published evidence in the FLR. (4) The FLR with its cited, organically peer reviewed dataset is posted on a preprint server under the authorship of the FLR consortium. (5) Feedback from the broader community may lead to further iteration of the dataset at the discretion of the project team in the next round of FLR revisions.
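
The numbered steps in Fig. 2 can be read as a simple progression from closed to open review. The sketch below only makes that continuum explicit in code; the stage labels paraphrase the figure and are not Rapid Science's implementation.

  # Sketch of the closed-to-open continuum described in Fig. 2; the stage
  # labels paraphrase the caption and are not an actual FLR implementation.
  FLR_STAGES = [
      "1. incremental dataset created by the investigator (closed)",
      "2. shared and discussed within the project consortium, then iterated",
      "3. v.3 dataset positioned and cited in the Facilitated Living Review",
      "4. FLR, with the organically peer reviewed dataset, posted as a preprint",
      "5. community feedback informs the next round of FLR revisions (open)",
  ]

  for stage in FLR_STAGES:
      print(stage)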

Reward Paradigms that Drive Research Citizenship

An exhaustive review on “The Science of Team Science” warns against reliance on publication metrics as a means of studying team science outcomes (Hall et al. 2018). Yet, a study of users on 25 online platforms – including ResearchGate, Academia.edu, Impactstory, Mendeley, and Kudos – revealed that, while 95% of scholars consider research the most important reputation determinant, the highest ratings on the importance of research activities were conferred on dissemination in journals and citations (Nicholas et al. 2015). Similar tendencies are uncovered in high-level studies of interdisciplinary and team science (The National Academies 2004, The National Academies 2015), all of which demonstrate the community’s tenacious grip on publication metrics as the primary method for measuring research progress and success.

The beginnings of culture change are evident. The Research on Research Institute (RoRI), fostered by a collaborative effort of research funders and institutions, is focusing a priori on research culture and is charged with developing and testing alternatives to the long-standing focus on what is achieved, shifting attention to how it is achieved (Editor 2019).  When research is intentionally collaborative, it achieves better outcomes (e.g., Hall et al. 2012).  If a broad range of contributions is captured and attributed, the community can begin to measure and incentivize collaborative activities.

We cannot underline strongly enough that a successful design process must be embedded in – not applied to – the research community.  We need to collaborate and iterate through cycles of adoption and failure, and collect data to measure effectiveness at changing culture.  To move to a new credit paradigm, we must embrace the concept of research citizenship (Porter 2016), and prioritize community-level governance principles, individual and community control requirements, and information transparency. 

Metrics to incentivize, track, and reward collaborative behaviors

Change requires actively rewarding the behaviors that we want to see. Attribution extended to the full range of outputs across the research lifecycle must lead to more granular tracking of activities and outputs, large and small. Sophistication in digital technology and design, combined with increasing familiarity and use of these technologies by researchers, makes it possible to track participants’ contributions both qualitatively and quantitatively. Activities such as sharing findings and insights can be logged for each individual, as can peer reviewing, replicating, co-authoring, data curation and analysis, and posting incremental and negative results to open access repositories and journals.  

Application of these principles is demonstrated in the ResCognito platform, which utilizes an extended attribution label taxonomy, persistent identifiers, and associated open digital infrastructure, to uniquely identify researchers and their contributions and enable community acknowledgement and curation of contributions.  The platform can also incorporate checklists, which can help researchers better understand and act on research citizenship standards. 

Taking this a step further is the C-Score, a combined metric proposed by Rapid Science that captures collaborative activities on a project platform and aggregates them into a composite score.  Weighting of activities can be determined by the team, the project sponsor, and/or other relevant stakeholders. For instance, collaborative activities may include sharing results widely; robust discussions; reviewing work; starting and moderating special-topic groups; or downloading, liking, bookmarking, and other social media actions, with quantification levels determined in the project team plan.  In the course of the project, a team member could access their accumulating score directly and within the context of group contributions through leaderboards.
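
To make the proposal concrete, the sketch below computes a toy composite score from logged activity counts using team-defined weights. The activity names, the weights, and the simple weighted sum are illustrative assumptions, not Rapid Science's actual C-Score formula.

  # Toy composite collaboration score: team-defined weights applied to counts
  # of logged activities. Activity names, weights, and the weighted-sum formula
  # are illustrative assumptions, not the actual C-Score specification.
  TEAM_WEIGHTS = {
      "shared_result": 3.0,        # posting a dataset, method, or null result
      "review_of_peer_work": 2.0,  # reviewing or annotating a teammate's output
      "discussion_post": 1.0,      # substantive comment in a project discussion
      "moderated_topic_group": 2.5,
  }

  def collaboration_score(activity_counts: dict) -> float:
      """Weighted sum of a member's logged collaborative activities."""
      return sum(TEAM_WEIGHTS.get(activity, 0.0) * count
                 for activity, count in activity_counts.items())

  member_a = {"shared_result": 4, "review_of_peer_work": 3, "discussion_post": 10}
  member_b = {"shared_result": 1, "discussion_post": 25, "moderated_topic_group": 1}

  # A leaderboard view of accumulating scores, as a team member might see it.
  for name, counts in [("member_a", member_a), ("member_b", member_b)]:
      print(name, collaboration_score(counts))

Making the weights explicit at project launch is what lets the team, the sponsor, and other stakeholders signal up front which collaborative behaviors they value most.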

The C-score is not intended to serve as a measure of the team’s output or impact – that is, it would not displace quantitative or qualitative measures of disseminated results.  Rather, it offers funders and other adjudicators of large projects a transparent means of assessing research citizenship in equal measure with output.  Accordingly, it will be critical for research stakeholders to define and prioritize activities to be scored at project launch, and signal which behaviors are highly regarded, prioritized, and factored into future rounds of funding.  The National Institutes of Health have been experimenting with such “defined up-front” metrics models for large team science projects with some success (Basner et al. 2013).

Motivation for Culture Change

Given that the publish-or-perish mentality and the accompanying reward system of publishing metrics are responsible for intense competition and a lack of sharing of research results, why introduce yet another metric?  Is competing to collaborate a solution to a problem, or will it amplify the drawbacks that currently plague the scientific enterprise?  Does the accompanying transparency of the collaboration metric system described above improve the validity of the investigator’s work?  Why not simply track and archive contributions without introducing yet another form of competition via a metric?

Identifying goals for team development and project success at the outset leads to a new paradigm for scientific investigation: competing to contribute to the team goal rather than restricting aims to individual or lab goals. This approach amplifies collaboration, advancing the team’s objectives while building reputation based on one’s effectiveness in sharing data and insights. Thinking of competition in this context turns it from a negative to a positive. 

This paradigm converts “research excellence” from a zero-sum game based on one gold standard metric, to an inclusive game that fosters diversity, participation, and sharing. The best competitors respect their opponents because they perceive at some level that cooperation and competitiveness are not zero-sum operations. For example, sports such as baseball and soccer are popular because of the duality of competition and cooperation occurring among participants.  Rewards, or performance evaluations, are based not only on individual output but on how well that person improved their group’s performance.  Robert Merton described “competitive cooperation” in 1942, referring to scientists as “compeers” (Merton 1942), emphasizing that the interplay of these conflicting modes of interaction “in pursuit of knowledge and other rewards” can elicit highly effective results (Nickelsen and Krämer 2016).

There are many examples of successful programs that involve teams competing to solve scientific problems, such as those sponsored by the XPrize Foundation and Sage Bionetworks’ DREAM challenges (Boutros et al. 2014).  Similarly, a workshop on rescuing biomedical research highlighted how “competition strengthens research, but hypercompetition weakens it” (Kimble et al. 2015).  Competition can be used not only to spur innovation but also to promote collaboration, rendering the traditional meaning of competition obsolete. 

Design – Test – Iterate: Vision for the Future

We are now living in a world where we can collaborate online, instantly discuss findings, and share components of research from lab notebook entries to data to narrative.  Our adherence to the journal gold standard diminishes incentives to utilize these amazing research tools, by restricting how contributions are recorded and credited.  There is growing interest in looking beyond the current credit proxies for a more nuanced view of research strengths, weaknesses, and networks of opportunity (Bryant et al. 2020, Belluz et al. 2016).

What is a gold standard? It is a commodity-based approach to trade. Tied to a stable, widely recognized anchor, it provides an automatic way to exchange value across a variety of goods.  Journal articles have been the gold standard for research and scholarly communication since the 1700s.  They were a useful means of communicating research findings in a time when travel was time consuming and telecommunications did not exist.

Just as the gold standard restricted the flow of credit by concentrating wealth in countries that had massive gold reserves, the near-total focus on journal articles as the sine qua non of academic credit limits how researchers interact with each other, with research findings, and with communities more broadly.  It de-incentivizes research sharing and exacerbates problems with research reproducibility.

Journal articles provide but one useful mode of research sharing.  Data sets, presentations, reagents, facilities, workforce training, collaborative activities, and association leadership are all drivers of research capacity, but at present these are not part of research community credit networks.  We advocate for adoption of a new global research credit economy that derives value from intentional collaboration (for parallels in corporate research, see Price et al. 2020).  In this new economy, credit can be distributed more equitably across a variety of contribution types and modalities and drive a diverse approach to research and innovation.

To get to this vision, we need to produce and test ideas and implementations, learn from those experiments, and pave the way for widespread adoption. It is not enough to propose a new taxonomy for attributing contributions at a more granular level, for example; this must be able to evolve, extend, and be adapted by different communities. It must also be implemented in the reputation systems currently in place and fuel ideation on new reputation systems. 

A design-oriented approach to reinvention of research communication will ensure that the system is considered in its entirety, that new approaches and innovations are given a chance to be tried and tested, and that the path toward a new vision is taken in a practical, stepwise fashion.

Author contributions

The three authors contributed equally to researching, writing, and editing this report. 

References
