Small Grant Proposal
Can video improve grant review quality and lead to more reliable ranking?
Michael R Doran‡,§, Adrian Barnett|, Joan Leach¶, William B Lott#, Katie Page|, Will Grant¶
‡ Queensland University of Technology at the Translational Research Institute (TRI), Brisbane, Australia
§ Mater Medical Research - University of Queensland, Brisbane, Australia
| School of Public Health and Social Work & Institute of Health and Biomedical Innovation (IHBI), Queensland University of Technology (QUT), Brisbane, Australia
¶ Australian National Centre for the Public Awareness of Science, Australian National University (ANU), Canberra, Australia
# School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology (QUT), Brisbane, Australia

Abstract

Multimedia video is rapidly becoming mainstream, and many studies indicate that it is a more effective communication medium than text. In this project we AIM to test if videos can be used, in place of text-based grant proposals, to improve communication and increase the reliability of grant ranking. We will test if video improves reviewer comprehension (AIM 1), if external reviewer grant scores are more consistent with video (AIM 2), and if mock Australian Research Council (ARC) panels award more consistent scores when grants are presented as videos (AIM 3). This will be the first study to evaluate the use of video in this application.

The ARC reviewed over 3,500 Discovery Project applications in 2015, awarding 635 Projects. Selecting the “best” projects is extremely challenging. This project will improve the selection process by facilitating the transition from text-based to video-based proposals. The impact could be profound: improved video communication should streamline the grant preparation and review processes, enable more reliable ranking of applications, and allow more accurate identification of the “next big innovations”.

Keywords

research funding, grant writing, video, grant review, peer review, meta-research, research policy

Aims and Background

In the 21st century, multimedia video has revolutionised the way that we communicate throughout all aspects of life, and we reason that its incorporation into grant applications is an inevitable evolutionary next step. Annually, the Australian Research Council (ARC) and the National Health and Medical Research Council of Australia (NHMRC) review thousands of Project grant applications. Applicants invest weeks into drafting text-based proposals designed to communicate the future of cutting-edge science. The subsequent grant review processes are very challenging and time consuming, and prone to the influence of reviewer variability. This results in a large “grey zone” where the success or failure of a grant can be significantly influenced by chance.

Our previous research (focused on the NHMRC Project Grant review process) revealed that the top ~40% of awarded NHMRC Project Grant Applications were reliably identified as “fundable” when the variability in the current review processes was considered (Graves et al. 2011). However, the selection of the remaining ~60% of funded grants changed depending on the selected reviewers. Internationally, “luck of the draw” in the selection of peer reviewers has been repeatedly identified as a major factor in funding and publication outcomes (Mayo et al. 2006, Osmond 1983, Smith 2006). As the review processes used by the NHMRC and ARC are similar, it is reasonable to assume chance also plays a significant role in the allocation of ARC Discovery Project funding. If chance played a role in ~60% of funding outcomes in 2015, then chance influenced the award of ~$458 million worth of NHMRC Project Grants (Australian Government, NHMRC 2015) and ~$147 million of ARC Discovery Grants (Australian Research Council 2015).

While it is not possible to eliminate the role of chance in grant outcomes, it should be possible to improve decision-making processes by improving the quality of communication between applicants and reviewers. For very large grants, with few applicants, interviews are conducted to ensure that the best funding decisions are made (e.g., NHMRC Centres of Research Excellence). Interviews are not feasible with the thousands of applications considered annually for ARC Discovery Projects and NHMRC Project Grants.

In 2014, we argued in Nature (Doran et al. 2014c), in Trends in Biochemical Sciences (Doran et al. 2014b), and in a CellPress video (Doran et al. 2014a) that the most effective and efficient mechanism to enhance communication between grant applicants and reviewers is video. The combination of video and audio already communicates concepts to students and consumers more effectively than text alone (Kamin et al. 2003, Gaudin and Chaliès 2015), and video is increasingly used by scientific journals.

We HYPOTHESISE that research proposals presented as recorded 10-minute PowerPoint videos, rather than traditional ARC text-based 10-page Project Descriptions, will lead to (1) a reduction in time required for applicants to prepare complex grant proposals, (2) more effective conveyance of grant concepts to reviewers, (3) more efficient reviews, enabling the engagement of more reviewers per application, and a reduction of the influence of reviewer bias, and (4) greater confidence in the system's capacity to appropriately allocate research funding.

The OVERALL AIM of this 2-year project is to test the capacity of recorded 10-minute PowerPoint videos, relative to text-based 10-page Project Descriptions, to enable more effective communication of ARC Discovery Project grant proposals, and more reliable rank-ordering of projects. Specifically, we AIM to:

  • AIM 1: Use randomised control trials to test the capacity of recorded 10-minute PowerPoint presentations, relative to text-based 10-page Project proposals in ARC Discovery Project Grant applications, to significantly enhance reviewer comprehension and recall of grant concepts.
  • AIM 2: Use randomised control trials to test the capacity of recorded 10-minute PowerPoint presentations, in place of text-based 10-page Project proposals in ARC Discovery Project Grant applications, to yield more reliable and efficient rank-ordering of grant applications by external reviewers.
  • AIM 3: Use mock ARC College of Experts grant review panels to assess the capacity of recorded 10-minute PowerPoint presentations, in place of text-based 10-page Project proposals in ARC Discovery Project Grant applications, to yield more reliable rank-ordering of grant applications in panel scenarios.

Reliable rank-ordering of grant applications is a challenge faced by grant agencies around the world. Ironically, despite the impact on academic progression/career development and the progression of cutting edge science, little research has been directed towards improving the funding peer review process. A 2007 Cochrane review concluded, “There is little empirical evidence on the effects of grant giving peer review”, and that, “Experimental studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research are urgently needed” (Smith 2006). We hypothesise that video will enhance grant communication, and that this will facilitate significant improvements in review methods. While novel, our proposal is feasible: recording PowerPoint presentations exploits an inexpensive platform that is already widely used in academia. This project will contribute significantly to the evidence-based evolution of the grant preparation and peer-review process, potentially enabling more robust allocation of limited research dollars by funding agencies. Through this project, Australia will be an international leader; improvements in grant communication efficacy will benefit researchers and grant agencies around the world.

The following sections discuss the grant peer review process and the use of multimedia video in scientific communication. Our team has significant expertise and a growing track record in each of these subject areas.

The grant peer review process

Grant agencies strive to fund the most promising and beneficial scientific research. Peer review is the principal hurdle for all scientific research grant applications. In 2015, the ARC reviewed 3,584 Discovery Project applications and awarded funding to 635 Projects (17.7% success rate) (Australian Research Council 2015). In 2015, the NHMRC reviewed 3,758 Project Grant applications and awarded funding to 516 Projects (13.6% success rate) (Australian Government, NHMRC 2015). In both the ARC and NHMRC systems, expert opinions are sought from external reviewers, and these reviews are considered by either the ARC College of Experts or an NHMRC Panel to rank the applications. Applications are then funded in rank order until the budget is exhausted.

A successful application requires complex and abstract ideas to be carefully and effectively communicated to persuade reviewers that the knowledge quest described in the proposal is valuable. However, to be ranked highly among all submitted applications, it is also essential for proposals to convey the work’s relevance and potential socioeconomic impact, and to contextualise the work within current national funding priorities. The bulk of this argument must be conveyed in a 10-page proposal for ARC Discovery Projects or in a 9-page proposal for NHMRC Project Grants. As every grant author and reviewer appreciates, articulating preliminary data, insight and vision within a 9 or 10-page written proposal is challenging.

The role chance currently plays in ARC and NHMRC funding outcomes

In previous research, we investigated the current NHMRC Project Grant review process. We found that in 2009, approximately 9% of all submitted NHMRC Project Grant proposals were reliably ranked in the “fund” category, and that these top-ranked grants were funded regardless of the make-up of the review panel (Graves et al. 2011). Similarly, 61% of the submitted applications were reliably ranked as “do not fund”. The success of the remaining 29%, the “grey zone”, depended on chance events such as the make-up of the review panel (Graves et al. 2011). Of the 620 applications funded, only 255 (41%) had been reliably ranked as “fund”. The majority of funded proposals (365, 59%) were in the uncertain “grey zone”. This implies that approximately $250 million per year (2009 dollars) in research funding is allocated with uncertainty. This analysis excluded 278 applications submitted under special initiatives.

Dr Karen Mow (University of Canberra) studied research grant funding and peer review in Australian research councils (Mow 2009). She discussed the load on panel members in the two systems, and the role of chance in outcomes. The review load on ARC Panel members is huge. Individual members are responsible for reviewing 40–80 Discovery Project applications (Mow 2009), and panels may assess up to 800 applications over a week. It is unreasonable to expect that all reviewers will equally understand and value written applications containing a 10-page proposal, in addition to ~80 pages related to CVs and budgets, over such a compressed time frame. Dr Mow found that track record largely determined the top 30% of ranked applications (Mow 2009). When reviewer comprehension is incomplete (in this case due to time constraints), reviewers often focus on merits largely unrelated to the scientific proposal, such as applicant track record or political associations, which are often more tangible (Graves et al. 2011, Smith 2006).

Dr Mow’s interviews with ARC Panel members found that they all felt that the ARC review quality was high given the allocated time and resources. However, she suggested that there is evidence that the ARC is overworking its committee members and that the goodwill of the sector is under stress (Mow 2009). ARC panel members commented:

I was drained at the end of the week – and I am one of those who does take it seriously, it is quite a burden… It’s hard to get inside other people’s minds given the time you’ve got. The 600-plus applications make it torrid, the pace is furious. (Interview 10, Mow 2009)

It’s stamina, who on the committee is still going and able to say something at the end of the process. (Interview 1, Mow 2009)

Improvements in grant communication could take a “physical load” off panel members, assist the review of scientific merit, and lead to more reliable project ranking and funding of the “best” research.

The increasing role of video in scientific communication

In Nature (Doran et al. 2014c), in a Trends in Biochemical Sciences opinion article (Doran et al. 2014b), and in a CellPress video (Doran et al. 2014a), we discussed the merits of incorporating multimedia video into grant applications. The same strengths that make video and audio effective for communicating concepts to students and consumers make video a more effective medium than paper for communicating novel and complex scientific concepts to grant review panels. Videos have already proven invaluable in scientific communication applications including:

  1. Technical brochures for biotech companies (e.g., STEMCELL Technologies and Miltenyi Biotec provide videos and webinars to explain the use of their products).
  2. Video journals – e.g., the Journal of Visualized Experiments (JoVE) is entirely dedicated to publishing experimental methods videos online, while many other journals now use video abstracts.
  3. Video for science teaching/lectures (e.g., the Massachusetts Institute of Technology (MIT) offers lectures online through its OpenCourseWare initiative). All undergraduate lectures at QUT are now provided in video format.

The use of video in grant applications and grant review processes

The use of video to communicate complex methods/techniques in grant applications is already endorsed by the US National Institutes of Health (NIH), the world’s largest public funder of health and medical research (Viergever and Hendriks 2016). Specifically, the NIH allows the inclusion of videos that augment the applicants’ methods description. In addition, the NIH uses videos to instruct applicants how to prepare grant applications and how the review process works.

Video was recently evaluated as a tool to improve the consistency of reviewer scoring and ranking of grant applications (Sattler et al. 2015). In 2015, Sattler et al demonstrated that the instructions provided in an 11-minute training video significantly improved the reliability and accuracy of NIH research grant proposal scoring and funding recommendations (compared to traditional text-based instructions); this demonstrates the merit of video in this field.

Do researchers embrace the concept of using video in grant applications?

Yes – our survey data suggest that they do (see Figs 1, 2). We recently conducted a preliminary anonymous survey of researchers from QUT, UQ, and the Mater Medical Research Institute who were submitting major grant proposals in 2016 (n = 15), to determine their opinions regarding the inclusion of video in grant applications. The collection of this survey data was approved by the Queensland University of Technology (QUT) University Human Research Ethics Committee (UHREC; approval number 1600000074), and the results are summarised in Figs 1, 2. We found that ~70% of researchers estimated that it would take them over 28 workdays to prepare their text-based project proposal (researchers were asked to exclude the time required for non-proposal components, such as CVs, from this estimate). By contrast, the majority of researchers believed that they could generate a 10-minute recorded PowerPoint video with similar content in 1 to 2 weeks. If true, the use of recorded PowerPoint presentations should substantially reduce the time required to prepare grant applications.

Figure 1.

Applicants' estimated time required to either write a text-based or video project proposal.

Figure 2.

Applicants' estimation of how a PowerPoint video would assist the grant preparation and review process.

Approximately 80% of researchers felt that video would likely improve their capacity to communicate grant concepts to reviewers (Fig. 2). Approximately 80% of researchers felt that their capacity to understand and review more grant applications per year could be enhanced by video. These results suggest that video may contribute to better quality review processes, and support the feasibility of engaging more reviewers per application.

Video has the potential to improve applicant-reviewer communication, while also reducing the time to prepare and review applications.

Summary of Background and preliminary data

  1. Our research (and international work) reveals that chance plays a major role in the allocation of grant funding.
  2. Chance occurs because of the limited number of reviewers, the varying opinions of reviewers, and potentially poor comprehension in time-limited review situations (such as peer review panels).
  3. Increasing the number of reviewers and reducing Panel member workload requires improvements in communication efficiency.
  4. Video is a more effective communication platform than text only.
  5. Researchers routinely produce PowerPoint presentations, and these can be easily recorded as videos.
  6. 80% of surveyed researchers felt they could generate grant project proposals more efficiently by recording PowerPoint presentations, and that videos would enhance their communication of complex grant concepts.
  7. 80% of surveyed researchers felt that they could review grants more efficiently in the video format, and that this would allow them to review more applications annually.

Research project

Our team has specific expertise in scientific communication, video production, bioengineering, chemistry, statistics, health economics, and psychology/survey design. Building on our significant preliminary data and publications in the area (Graves et al. 2011, Doran et al. 2014c, Doran et al. 2014b, Doran et al. 2014a, Herbert et al. 2013), we AIM to test the capacity of recorded 10-minute PowerPoint videos, relative to text-based 10-page Project Descriptions, to enable more effective communication of ARC Discovery Project grant proposals, and more reliable rank-ordering of projects. We will achieve this OVERALL AIM through the following three linked AIMs:

AIM 1: Use randomised control trials to test the capacity of recorded 10-minute PowerPoint presentations, relative to text-based 10-page Project Descriptions in ARC Discovery Project Grant Applications, to significantly enhance reviewer comprehension and recall of grant concepts.

Comprehension is the first step in providing a high-quality grant review. An apposite comment from an ARC Panel member (Mow 2009):

I would ask myself a few common questions each time. Do I understand what the key objectives of the project were? How did that fit internationally, and did they have a methodology that seemed reasonable to do it, and could I articulate those? I’d actually try to summarise them. It was a very clarifying point, for all 120 applications I was assigned… (Interview 8, Mow 2009)

As video is a superior learning platform relative to text-only (Kamin et al. 2003, Gaudin and Chaliès 2015), we reason that video will also enable an improvement in reviewer/panel member comprehension. Through this AIM we will derive data that quantifies the relative efficiency with which grant proposal information can be conveyed from the author to the reviewer in text-only or video format. This study will provide a relative efficiency measure of the text versus video grant format; reviewers will not conduct a rigorous review of the proposals in this AIM. We will use the study outlined below.

Research Design: We will assess reviewer comprehension following review of 10-page text-based ARC Discovery Project Grant proposals versus 10-minute PowerPoint videos.

What scientific proposals will be used? We will use 10 ARC Discovery Project Grant applications that have been previously submitted to the ARC and for which scores and funding outcomes are known. The ARC uses the following scoring system:

  • Scoring band A: Outstanding – approximately 10% of Proposals should receive ratings in this band.
  • Scoring band B: Excellent – approximately 15% of Proposals should receive ratings in this band.
  • Scoring band C: Very good – approximately 20% of Proposals should receive ratings in this band.
  • Scoring band D: Good – approximately 35% of Proposals are likely to fall into this band.
  • Scoring band E: Uncompetitive – approximately 20% of Proposals are likely to fall into this band.

The 10 ARC applications we will use will be selected as follows: 5 applications awarded score band B, and 5 applications awarded score band C. Our previous work suggests that the top 10% (score band A) are reliably funded, but that current review processes have difficulty resolving the relative ranking of applications with scores in bands B and C – the grey zone (Graves et al. 2011). We anticipate that the improved communication offered by video will lead to more reliable ranking of applications in this grey zone, and thus more confidence in the allocation of research funds.

To facilitate the comparison of different grant applications in the studies outlined in AIMs 2 & 3, and to best exploit CIs Doran and Lott’s technical expertise, we will select grants previously submitted to the ARC Biological Sciences and Biotechnology (BSB) Panel. Recruitment will be facilitated through our broad national networks in the field.

The original authors will provide written consent for their applications to be used, agree to assist with the generation of a corresponding 10-minute PowerPoint video, and assist with the development of the associated survey questions. We will compensate the lead CI with a $2500 bursary (total of $25,000 across the project) for their contribution in addition to their normal workload; it is likely that authors will have prepared much of the required PowerPoint presentation for other conferences/meetings.

What information will be included in the 10-minute PowerPoint videos? Each video will include the same headings as required in current text-based 10-page ARC Discovery Project proposals. The grant authors will prepare PowerPoint presentations as is routine in academia. Videos will be recorded, including voice-over, using PowerPoint 2016. This recording feature is now integrated directly into Microsoft PowerPoint (including a laser-pointer mimic driven by mouse movements).

Who will review the grant applications and/or video summaries? 100 national and international researchers with appropriate expertise in Biological Sciences and Biotechnology will each review 5 randomly selected applications. Reviewers will receive either the text-based 10-page project proposals or the 10-minute equivalent PowerPoint videos. Reviewers will be given only 10 minutes to read each text-based proposal or watch each video. Readers generally focus on specific grant sections and skim others, so considerable information can be acquired in 10 minutes.

Reviewers will be randomly allocated to one of two groups, and each randomly allocated 5 proposals from the 10 text- or video-based proposals described above (a minimal allocation sketch follows the group list):

Group A (50 reviewers): Randomly allocated 5 text-based proposals (total time allocated to all grants is ~2 hours).

Group B (50 reviewers): Randomly allocated 5 video-based proposals (total time allocated to all grants is ~2 hours).
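A minimal sketch of this two-stage randomisation (the reviewer identifiers, seed, and unconstrained sampling are illustrative assumptions, not the trial protocol):

```python
import random

random.seed(2017)  # fixed seed so the allocation is reproducible

reviewers = [f"reviewer_{i:03d}" for i in range(100)]  # hypothetical IDs
proposals = list(range(1, 11))                         # proposals 1-10

# Stage 1: randomly split the 100 reviewers into the two arms
random.shuffle(reviewers)
group_a = set(reviewers[:50])  # text arm; the rest form the video arm

# Stage 2: independently draw 5 of the 10 proposals for each reviewer
allocation = {
    r: {"format": "text" if r in group_a else "video",
        "proposals": sorted(random.sample(proposals, 5))}
    for r in reviewers
}
```

In practice the second stage would likely be constrained (e.g., a balanced incomplete block design) so that each proposal receives a similar number of reviews in each arm; unconstrained sampling is shown only to illustrate the two randomisation stages.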

Comprehension tests: Reviewers will complete comprehension tests immediately following reading/watching the grant proposal. They will have 15 minutes to complete these tests. Comprehension tests will include 10 multiple-choice questions and 3 “short answer” questions. Multiple-choice and short answer questions will be designed to both quantify recall and test comprehension. The questions will be similar for all applications, but the question content will be developed through one-on-one consultation with the proposal’s authors.

The multiple-choice section will include questions like the following:

1. The addition of compound […] to the haematopoietic stem cell (HSC) cultures described in the preliminary data section resulted in […]?

  • Increased homing of the HSC to the bone marrow niche in recipient animals.
  • Increased survival of the HSC in vitro.
  • Increased self-renewal of HSC in vitro.
  • Compound […] caused apoptosis of HSC.

Short answer questions will qualitatively evaluate reviewer comprehension, and take the following form:

  1. Please summarise the hypotheses and aims of the grant.
  2. Please indicate how the project would progress over the planned time period.
  3. Please indicate what the applicants suggested to be their key innovation.

Scores will be assigned to the reviewers’ responses to the above questions using two independent scorers who are blind to whether or not the reviewer watched the video. Any comments from peer reviewers that mention the video or text will be edited prior to scoring to keep the meaning but remove the reference to the randomised group.

Reviewer expertise in a specific subject area will likely influence outcomes, so the survey will also include a small subset of questions asking about the reviewer’s publication record and self-assessment of expertise in the grant subject area. This will allow us to investigate the influence of specific expertise in the comprehension test results.

How will the grant applications and tests be administered? Reviewers will have 10 minutes to read the 10-page text-based application or watch the 10-minute video, and an additional 15 minutes to answer the associated review questions. The proposals will be supplied to the reviewers in the first 10-minute interval, and the questions in the following 15-minute interval (total time of ~2 hours per reviewer for all 5 grants). It will not be possible to go back to the proposals once the questions are released to the reviewers. The process will be administered through the QUT Blackboard website in the same manner as online exams are delivered to QUT students. The strict time limits will simulate the pressure on time-poor reviewers (especially at panels) and allow us to quantify the relative efficiency of reviewer comprehension and information retention over a standardised time frame.

Outcome measure: Quantitative outcomes will be generated for the multiple choice questionnaires, and scores will be assigned to the responses to written questions to enable further comparison between groups. CI Page is a psychologist with relevant expertise in qualitative and quantitative research methods and survey design, and analysis of decision-making processes (Graves et al. 2013, Page 2012).

Statistical analysis: The statistical model will be a generalised linear mixed model using the count of correct responses as the dependent variable and assuming a Poisson distribution. The independent variables will be: (1) the application (1 to 10), to control for applications that were more or less challenging; (2) the reviewer, to control for the varying ability of reviewers (using a random intercept); and (3) the randomised group, to show the impact of the video intervention on the overall score.
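A minimal sketch of how this model could be fitted, assuming a long-format file comprehension_scores.csv with hypothetical column names (one row per reviewer-application test) and using the Bayesian mixed GLM in statsmodels as one possible implementation:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

# Hypothetical long-format data: 'correct' (count of correct answers),
# 'application' (1-10), 'reviewer' (ID), 'group' ('text' or 'video').
df = pd.read_csv("comprehension_scores.csv")

model = PoissonBayesMixedGLM.from_formula(
    "correct ~ C(application) + C(group)",  # fixed effects (1) and (3)
    {"reviewer": "0 + C(reviewer)"},        # random reviewer intercept (2)
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())  # the C(group) coefficient estimates the video effect
```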

Gwet’s statistic will be used to assess the level of agreement between raters (Gwet 2008). We will identify reviews where the between-rater agreement is very poor (under 0.5) and will use a third rater in consultation with the original two raters to resolve differences.
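Gwet's AC1 is straightforward to compute from its definition; a sketch for two raters follows (the example ratings and the 0–4 scale are hypothetical):

```python
import numpy as np

def gwet_ac1(ratings_1, ratings_2, categories):
    """Gwet's AC1 chance-corrected agreement between two raters (Gwet 2008)."""
    r1, r2 = np.asarray(ratings_1), np.asarray(ratings_2)
    categories = list(categories)
    pa = np.mean(r1 == r2)  # observed agreement
    # pi_k: mean proportion of all ratings falling in category k
    pi = np.array([((r1 == k).mean() + (r2 == k).mean()) / 2
                   for k in categories])
    pe = np.sum(pi * (1 - pi)) / (len(categories) - 1)  # chance agreement
    return (pa - pe) / (1 - pe)

# e.g. two raters scoring the same short answers on a 0-4 scale
ac1 = gwet_ac1([3, 4, 2, 3, 1, 0], [3, 4, 3, 3, 1, 0], categories=range(5))
print(f"AC1 = {ac1:.2f}")  # reviews under 0.5 would go to a third rater
```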

We anticipate video will substantially improve reviewer comprehension and recall. Nevertheless, a sample size of 100 reviewers gives us 90% power to detect even a modest 10% increase in the number of correct answers using videos. This calculation uses a generalised linear mixed model and assumes that the text-only group will answer half of the questions correctly on average. Recruiting 100 reviewers is feasible as we will access both national and international reviewers.
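Our power estimate comes from simulation; the sketch below illustrates the general approach under deliberately simplified assumptions (a logit-scale reviewer effect, 10 questions per proposal, and a Poisson GLM with cluster-robust errors standing in for the full mixed model; all numbers are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2017)

def one_trial(n_per_group=50, n_apps=5, n_q=10,
              p_text=0.5, lift=0.10, reviewer_sd=0.3):
    """Simulate one trial; return True if the video effect is significant."""
    rows = []
    for group, p in [("text", p_text), ("video", p_text * (1 + lift))]:
        for r in range(n_per_group):
            ability = rng.normal(0, reviewer_sd)  # per-reviewer effect
            p_r = 1 / (1 + np.exp(-(np.log(p / (1 - p)) + ability)))
            for _ in range(n_apps):               # 5 proposals per reviewer
                rows.append({"group": group, "reviewer": f"{group}{r}",
                             "correct": rng.binomial(n_q, p_r)})
    df = pd.DataFrame(rows)
    fit = smf.glm("correct ~ group", data=df,
                  family=sm.families.Poisson()).fit(
        cov_type="cluster",
        cov_kwds={"groups": pd.factorize(df["reviewer"])[0]})
    return fit.pvalues["group[T.video]"] < 0.05

power = np.mean([one_trial() for _ in range(500)])
print(f"simulated power: {power:.2f}")
```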

EXPECTED OUTCOMES: AIM 1 will provide the first quantitative data on the capacity for the use of video to improve reviewer recall and comprehension of content in ARC Discovery grant proposals. Enhanced communication will ultimately ease pressure on reviewers/panel members, and allow them to focus on evaluating and ranking the relative scientific merit of each application.

AIM 2: Use randomised control trials to test the capacity of recorded 10-minute PowerPoint presentations, in place of text-based 10-page Project Descriptions in ARC Discovery Project Grant Applications, to yield more reliable and efficient rank-ordering of grant proposals by external reviewers.

Chance has been repeatedly identified as a problem in funding and publication peer review (Mayo et al. 2006, Osmond 1983). Richard Smith, editor of the BMJ from 1992 to 2004, called peer review “largely a lottery” (Smith 2006). A major driver of chance is the “luck of the draw” in the selection of peer reviewers. Peer reviewers, who are experts in the specific field, have individual opinions of what defines quality research and what research in that field should be funded (Lamont 2009). To reduce the impact of reviewer variability, we must increase the number of reviewers. Current review processes are laborious, and thus only a limited number (2 to 3) of external reviewers are typically engaged per application. In this AIM we will test if 10-minute PowerPoint presentations enable efficient and robust rank-ordering of grant applications. Improvements in the efficiency of the external review process will allow a greater number of reviewers to be engaged in future grant rounds. We will conduct the following study:

Research Design: Using the 10 applications and videos from AIM 1, we will compare reviewers’ capacity to rank-order the 10 proposals, evaluate reviewer confidence in their assigned rank-order, and compare the relative time to review completion. We are testing for consistency and confidence in the awarded rank-ordering.

Who will review the grant applications or video equivalents? 50 volunteer national and international researchers with expertise relevant to the Biological Sciences and Biotechnology (BSB) Panel. The 50 reviewers (different reviewers from those used in AIM 1) will be randomly allocated to one of two groups:

Group 1 (25 reviewers): Text-only versions of Proposals 1–10 plus the ARC RMS sections.

Group 2 (25 reviewers): Video-only versions of Proposals 1–10 plus the ARC RMS sections.

How will the test be administered? All grants will be presented via QUT’s online Blackboard interface. Reviewers will be asked to review the grants and CVs (full ARC RMS pdf output less the grant proposal), using the same criteria as provided by the ARC for Discovery Project Review. In addition to the standard assessment, reviewers will be asked the following questions at the end of the review of all 10 applications:

  1. Please provide your best estimate of the ranking of each application (top 10%, top 25%, top 50%, etc.).
  2. Please indicate the highest ranking that you would give each application.
  3. Please indicate the lowest ranking that you would give each application.
  4. Please indicate how confident you are that you would rank the ten applications similarly if you had to rank them again in 2 months (scale of 1–10).
  5. Please indicate your confidence that other reviewers would rank the applications similarly (scale of 1–10).
  6. For video reviews: please indicate the impact video had on efficiency and comprehension (scale of 1–10).
  7. How long did it take to review each application?

Statistical analysis: We will repeat the bootstrap analysis from our BMJ paper that estimated the size of the “grey zone” (Graves et al. 2011), which comprises those grants that fall between the reliably funded and the reliably not funded grants. If the videos improve comprehension and create a more reliable ranking, then the grey zone will be smaller in the group given the videos. In our previous data (Graves et al. 2011), the grey zone ranged between panels from 16% to 47%, with a mean of 29% and a standard deviation of 7%. A 7% (one standard deviation) reduction will be detectable, and this would equate to ~$17 million worth of ARC Discovery Projects (Australian Research Council 2015) and ~$29 million worth of NHMRC Project Grants (Australian Government, NHMRC 2015). With a sample size of 25 per group, we will have 83% power to detect a 7% reduction, assuming that the top 2 of 10 applications would be funded (20% success rate). This sample size was calculated by simulating data based on our previous study (Graves et al. 2011).
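A sketch of the bootstrap grey-zone estimate, following the logic of our earlier analysis (Graves et al. 2011); the score matrix, the top-2 funding line, and the 5%/95% reliability thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2017)

def grey_zone(scores, n_funded=2, n_boot=1000):
    """scores: (n_reviewers, n_applications) matrix of reviewer scores.
    Returns the proportion of applications that are neither reliably
    funded nor reliably unfunded across resampled reviewer panels."""
    n_rev, n_app = scores.shape
    funded = np.zeros((n_boot, n_app))
    for b in range(n_boot):
        panel = scores[rng.integers(0, n_rev, n_rev)]  # resample reviewers
        top = np.argsort(panel.mean(axis=0))[-n_funded:]
        funded[b, top] = 1  # fund the top-ranked applications
    frac = funded.mean(axis=0)  # per-application funding probability
    return np.mean((frac > 0.05) & (frac < 0.95))

# Comparing the arms: a smaller grey zone in the video arm would indicate
# more reliable ranking, e.g.
# grey_text, grey_video = grey_zone(scores_text), grey_zone(scores_video)
```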

Questions 2 and 3 on the lowest and highest ranking will estimate reviewer uncertainty, and the width of the interval (low minus high) can be used as a summary measure. We will compare this statistic between groups using summary statistics and a statistical test.
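The test is left unspecified above; a nonparametric comparison such as the Mann-Whitney U test would be one reasonable choice. A sketch with hypothetical interval widths:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical widths (lowest minus highest ranking band) per review;
# narrower intervals indicate more certain reviewers.
width_text = np.array([3, 4, 2, 5, 3, 4, 4, 2, 5, 3])
width_video = np.array([2, 2, 3, 1, 2, 3, 2, 1, 2, 3])

stat, p = mannwhitneyu(width_video, width_text, alternative="less")
print(f"median width: text={np.median(width_text)}, "
      f"video={np.median(width_video)}, p={p:.3f}")
```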

Unfortunately, we do not have a gold standard ranking with which to compare the results from our experiment, because we cannot create 10 applications that can be ranked with absolute certainty. Kaplan calculated that such an exercise would require thousands of reviewers (Kaplan et al. 2008). However, assuming that the current process is unbiased (meaning the estimated ranks are centred on the true ranks with some error), reducing the grey zone improves the process by bringing the estimated ranks closer to the true ranks.

EXPECTED OUTCOMES: AIM 2 will quantitatively assess the capacity of video to enable more reliable rank-ordering of ARC Discovery Project Grant applications. More reliable ranking will ensure that the “right” science is funded. Significant enhancements in communication may enable reviewers to identify brilliant science that is challenging to convey in a text-based document.

AIM 3: Use mock ARC College of Experts grant review panels to assess the capacity of recorded 10-minute PowerPoint presentations, in place of text-based 10-page Project Descriptions, to yield more reliable rank-ordering of grant applications in panel scenarios.

In previous work we conducted mock NHMRC Project Grant review panels to test if the review of streamlined grants would yield a similar rank-ordering of applications as the formal NHMRC review process (Herbert et al. 2013). In this study, we will conduct mock ARC Panels, and test if the efficiency and reliability of the ranking process is improved when proposals are presented as pre-recorded 10-minute PowerPoint videos.

Research Design: Six panels (3 × Panel A and 3 × Panel B) will be assembled. Panels will have 5 members each (total of 30 panel members, each provided with hotel accommodation and paid return flights to Brisbane). Panels are replicated 3 times for statistical purposes, with membership limited to 5 per panel to contain the overall cost and the challenge of assembling 30 panel members. Panels will review applications as follows:

Panel A (replicated 3 times): Text version of Proposals 1–10 plus the ARC RMS sections.

Panel B (replicated 3 times): Video version of Proposals 1–10 plus the ARC RMS sections.

The design is such that Panels review either text-based or video-based proposals as follows:

  1. Each panel member will receive all grants (including RMS CV sections provided to us by the consenting researchers). Each panel member will take primary responsibility for 2 of the 10 applications.
  2. The primary panel member will receive 3 randomly selected external assessor reports from AIM 2. The same external reviews will be used for all 6 Panels.
  3. At the panel meeting all members will score and rank applications.
  4. For text-based proposals, the responsible panel member will introduce and summarise the grant, and provide a summary of the external reviewers’ aggregated scores/comments for discussion.
  5. For video-based proposals, the video will be played for the panel, and then the responsible panel member will provide a summary of the external reviewers’ aggregated scores/comments for discussion.
  6. After a total of 25 minutes, final scores will be collected from each panel member by secret ballot.

Survey and interviews: Panel members will be given a survey as part of an exit interview. The survey will include questions about members’ confidence in their awarded scoring, ranking of grants, and indications of the time required to review text- and video-based applications (similar to the survey in AIM 2). Immediately following the survey, all Panel members will be brought together for a group discussion about the process. Similar questions will be asked, to gain insight into Panel members’ experiences with the video versus text-based approaches.

Statistical analysis: We will repeat the bootstrap analysis in our BMJ paper that estimated the size of the “grey zone” (Graves et al. 2011). We will determine if a video proposal improves the reliability of relative ranking of applications in a panel scenario, if the Panel members feel their comprehension of the applications was greater with video, and if the Panel members have greater confidence in the awarded ranking with video.

EXPECTED OUTCOMES: We expect that video will enhance the reliability of application rank-ordering in a panel scenario. We expect that video will enhance the confidence Panel members have in their awarded scores, “refresh their memory”, and offer an excellent introduction from which to initiate discussion. These will be the first data on the use of video to communicate grant proposals to review panels.

Important considerations

There are many studies concerning funding peer review which could be undertaken, and which would be useful. We have selected a modest scope that will enable an informative evaluation of this new concept, and which can be completed on a reasonable budget within 2 years.

Of course, we also expect that the addition of video comes with substantial challenges for both those who create the videos and those who evaluate them. These challenges are likely to be non-negligible, adding complexity as well as potential new biases. This project will have ample opportunity to document these issues. A few potential issues that we can identify from the pilot work are: (1) Teams with access to good communication support (video production, contexts for higher production value, etc.) are likely to produce videos of higher quality, which could create a perception of a higher quality proposal. This issue, of course, exists with text-based communication and research support, but as communication technology and modes of communication change, these issues will frame the research differently. (2) Aspects of speaker presentation will come to the fore, and issues of gender, language, and presentation will be visible and highlighted. Aspects of this are already visible in the text-based system. (3) ‘Reading’ video is different to ‘reading’ text. Given the rapid advancement of communication technology, we expect to find medium-dependent issues in how reviewers carry out their evaluations. Considering and describing these issues will be of crucial importance and of value across disciplinary fields.

The proposed project timeline is outlined below in Table 1.

Table 1. Project timeline (2017–2018).

AIM 1: Test if video enhances grant reviewer comprehension.

AIM 2: Test if video enables superior grant communication/ranking with external reviewers.

AIM 3: Test if video enables superior grant communication/ranking in a panel scenario.

Significance, innovation, national benefit and feasibility

Significance

For research commencing in 2016, the ARC awarded ~$245 million to Discovery Projects, and the NHMRC awarded ~$420 million to Project Grants (total ~$665 million). Despite this enormous expenditure, there is surprisingly little research on how the allocation process could be optimised. The role of chance in funding outcomes is well known to researchers, and increasingly well described in the literature (Graves et al. 2011, Mayo et al. 2006, Osmond 1983, Herbert et al. 2013, Graves et al. 2013). Our studies indicated that ~60% of awarded grants are selected from a “grey zone” where chance plays a significant role in the award allocation (Graves et al. 2011). To increase reliability and reduce error we need to engage more reviewers, while simultaneously improving the quality of communication with existing reviewers and panel members.

Innovation

We have taken an innovative approach, by proposing to use video instead of text-based proposals to enhance communication, thereby improving review quality and review efficiency. We propose to exploit a mainstream academic tool, recorded PowerPoint presentations, to enable an “easy” transition into this new communication space.

National benefit

This project has the potential to greatly streamline the grant preparation and review process. It should save time for both applicants and reviewers, and it should result in more reliable allocation of funding. Critically, more reliable rank-ordering of applications will ensure the “best” science is reliably funded.

Feasibility

We have carefully assembled a team that contains leaders in all of the multidisciplinary areas required to execute this project. We have previously conducted similar studies, and all aspects of this project build on our previous work and expertise. Our existing networks will allow us to recruit researchers and peer reviewers, and our past experience is that researchers are keen to take part in research that could potentially improve the system. Each team member is very productive, and as a team we are cohesive.

Role of personnel

This team of 6 CIs from QUT and ANU has been carefully assembled from experts with specific expertise in scientific communication, grant review processes, statistics, survey design, and decision-making processes. Collectively, the CIs will oversee all aspects of the research and collaborate with the Postdoctoral Fellow to perform the described studies, analyse the data, and prepare manuscripts and conference papers. The Postdoctoral Fellow will be based at QUT. The team is cohesive and productive: Doran and Lott have 10 co-authored outputs, including 3 outputs directly related to this application (Doran et al. 2014a, Doran et al. 2014b, Doran et al. 2014c). Barnett and Page hold a joint NHMRC Partnership Project, and have 6 co-authored publications, including papers on decision-making associated with grant review processes. Prof Joan Leach and Dr Will Grant are co-located at the Australian National University, and are pivotal members of the Australian National Centre for the Public Awareness of Science.

CI Dr Mike Doran (Queensland University of Technology – 20% time commitment)

Dr Mike Doran's formal training includes a BSc (genetics), BEng (chemical engineering) and a PhD (Biomedical Engineering, 2006). He was the lead author on two publications and one video discussing the incorporation of video into grant applications (Doran et al. 2014a, Doran et al. 2014b, Doran et al. 2014c). Doran will take primary responsibility for the intellectual leadership, budget, project direction, management and coordination, data analysis, manuscript preparation, and communication of the work.

CI Associate Professor Adrian Barnett (Queensland University of Technology – 10% time commitment)

A/Prof Barnett is a statistician with 21 years’ experience in clinical and public health trials. Barnett won a Project Grant as CIA in 2011 to study evidence-based methods for funding evidence-based medicine, work directly related to this project. This work led to 6 published papers (1 more under review) and 2 letters in Nature. Barnett also made submissions based on his research to the McKeon review of Health and Medical Research, especially concerning ways to reduce the complexity of the Project Grant process. In this project Barnett will help with study design, oversee the data collection and lead the statistical analyses.

CI Professor Joan Leach (Australian National University – 15% time commitment)

Dr Leach’s formal training includes a BSc (biophysics), BA (Literature), MA (Communication), and PhD (History and Philosophy of Science). Leach’s research is focused on science and health communication and public engagement with science, technology and medicine. Leach is President of the Australian Science Communicators and Director of the Australian National Centre for the Public Awareness of Science. In this project she will contribute communication expertise, and provide direction on the communication research questions and on our on-going efforts to improve communication platforms for researchers and grant agencies.

CI Dr William Lott (Queensland University of Technology – 20% time commitment)

Dr William Lott holds a BSc (chemistry) and a PhD (organic chemistry). Lott and Doran published articles in Nature and Trends in Biochemical Sciences suggesting a video summary to improve research grant communication (Doran et al. 2014a, Doran et al. 2014b, Doran et al. 2014c). In this project, Lott will help manage and coordinate the video production and survey data collection, analyse data, prepare manuscripts, and communicate the work.

CI Dr Katie Page (Queensland University of Technology – 10% time commitment)

Dr Katie Page is a psychologist (PhD 2008) who specialises in the study of individual judgment and decision-making, and behavioural economics. Her contributions to this project will be in the overall research design, the development of the qualitative questions, and the survey design and analysis.

CI Dr William Grant (Australian National University – 10% time commitment)

Dr William Grant holds a BA, and a PhD in Political Science (2007). His research interests include a focus on the relationship between science and public policy, and an examination of the spatial practices in modern science communication. He will contribute to evaluation of video communication, and our on-going efforts to improve this process, for end users (grant agencies and researchers).

Project research environment

The project team is located at three excellent QUT or ANU-supported research centres across Australia.

  1. Doran and Lott are located at the Translational Research Institute (TRI/QUT). The TRI is the newest and most comprehensive medical research and biopharmaceutical facility in Queensland, located at the Princess Alexandra Hospital (Queensland’s second largest hospital) campus in Brisbane, Australia. The TRI is a >$350 million collaborative venture that brings together researchers from QUT, UQ, Mater Medical Research, and the PA Hospital. The new TRI (2013) is already recognised for its research excellence. This rich environment contributes to the team’s network and capacity to recruit the many academics required to contribute to the studies outlined in this project, and from whom feedback is essential.
  2. Barnett and Page are located at the Institute for Health and Biomedical Innovation (IHBI/QUT). IHBI is a multidisciplinary medical and health research institute with over 900 researchers conducting cutting edge research with the aim of improving the health of individuals and communities through research innovation. Through IHBI, CI Barnett is a founding member of the Australian Centre for Health Services Innovation (AusHSI), which is focused on the movement to economically viable evidenced-based health practices; this established community is very open and supportive of the types of studies proposed in this application. CI Barnett held an NHMRC Project Grant targeting optimisation of the grant submission and review process; this productive project was hosted by QUT/IHBI. QUT/IHBI is an excellent host for this new proposed research.
  3. Leach and Grant are located at the Australian National University (ANU), the top-rated university in Australia (QS World University Rankings). ANU hosts the Australian National Centre for the Public Awareness of Science, of which Leach is the Director and Grant is a Senior Lecturer. This centre is an internationally recognised hub of expertise in science communication education and research, and is a Centre for the Australian National Commission for UNESCO. Leach is President of the Australian Science Communicators, a key organisation in the field of science communication, allowing us to expand our network and tap into other strong national/international research environments.

Communication of results

Our CI team has an excellent publication and engagement record; our results will be communicated through academic journals, conferences, and more “accessible” platforms like The Conversation. QUT and ANU share publications through free-access repositories (such as QUT ePrints). CI Leach is the President of the Australian Science Communicators, providing a purpose-suited communications network that extends beyond QUT, ANU and Australia. Our multidisciplinary team gives us access to a broad cross-section of the academic community, and this will enhance dissemination of our results. We will share our results with the ARC/NHMRC and will offer to present to internal ARC/NHMRC committees.

Management of data

A data management plan will be developed and managed according to QUT’s policy on Management of Research Data (MoPP D/2.8). Data management practices will follow the Australian Code for the Responsible Conduct of Research, specifically Section 2, “Management of research data and primary materials.” QUT and ANU provide storage for research data that is appropriate to that data and relevant to that point in the research data life cycle. Surveys will all be delivered using QUT Blackboard, which is specifically designed for robust data capture, storage, and retrieval. De-identified data will be freely shared with our primary publication in order to enhance research reproducibility and openness.

Funding program

This grant application was submitted to the Australian Research Council (ARC) for funding in 2016. The project was not funded, and remains unfunded as of January 2017. We intend to reapply for further funding. In an effort to improve the proposal, we invite readers to contact us if they are interested in contributing to the proposed project as “text vs video grant reviewers” or if they have general recommendations as to how to improve aspects of the project design.

References
