Research Ideas and Outcomes : Small Grant Proposal
Corresponding author: Michael R Doran (michael.doran@qut.edu.au)
Received: 25 Jan 2017 | Published: 01 Feb 2017
© 2017 Michael Doran, Adrian Barnett, Joan Leach, William Lott, Katie Page, Will Grant
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation: Doran M, Barnett A, Leach J, Lott W, Page K, Grant W (2017) Can video improve grant review quality and lead to more reliable ranking? Research Ideas and Outcomes 3: e11931. https://doi.org/10.3897/rio.3.e11931
Multimedia video is rapidly becoming mainstream, and many studies indicate that it is a more effective communication medium than text. In this project we AIM to test if videos can be used, in place of text-based grant proposals, to improve communication and increase the reliability of grant ranking. We will test if video improves reviewer comprehension (AIM 1), if external reviewer grant scores are more consistent with video (AIM 2), and if mock Australian Research Council (ARC) panels award more consistent scores when grants are presented as videos (AIM 3). This will be the first study to evaluate the use of video in this application.
The ARC reviewed over 3500 Discovery Project applications in 2015, awarding 635 Projects. Selecting the “best” projects is extremely challenging. This project will improve the selection process by facilitating the transition from text-based to video-based proposals. The impact could be profound: improved video communication should streamline the grant preparation and review processes, enable more reliable ranking of applications, and allow more accurate identification of the “next big innovations”.
research funding, grant writing, video, grant review, peer review, meta-research, research policy
In the 21st century, multimedia video has revolutionised the way that we communicate throughout all aspects of life, and we reason that its incorporation into grant applications is an inevitable evolutionary next step. Annually, the Australian Research Council (ARC) and the National Health and Medical Research Council of Australia (NHMRC) review thousands of Project grant applications. Applicants invest weeks into drafting text-based proposals designed to communicate the future of cutting-edge science. The subsequent grant review processes are very challenging and time-consuming, and prone to the influence of reviewer variability. This results in a large “grey zone” where the success or failure of a grant can be significantly influenced by chance.
Our previous research (focused on the NHMRC Project Grant review process) revealed that the top ~40% of awarded NHMRC Project Grant Applications were reliably identified as “fundable” when the variability in the current review processes was considered (
While it is not possible to eliminate the role of chance in grant outcomes, it should be possible to improve decision-making processes by improving the quality of communication between applicants and reviewers. For very large grants, with few applicants, interviews are conducted to ensure that the best funding decisions are made (e.g., NHMRC Centres of Research Excellence). Interviews are not feasible with the thousands of applications considered annually for ARC Discovery Projects and NHMRC Project Grants.
In 2014, we argued in Nature (
We HYPOTHESISE that research proposals presented as recorded 10-minute PowerPoint videos, rather than traditional ARC text-based 10-page Project Descriptions, will lead to (1) a reduction in time required for applicants to prepare complex grant proposals, (2) more effective conveyance of grant concepts to reviewers, (3) more efficient reviews, enabling the engagement of more reviewers per application, and a reduction of the influence of reviewer bias, and (4) greater confidence in the system's capacity to appropriately allocate research funding.
The OVERALL AIM of this 2-year project is to test the capacity of recorded 10-minute PowerPoint videos, relative to text-based 10-page Project Descriptions, to enable more effective communication of ARC Discovery Project grant proposals, and more reliable rank-ordering of projects. Specifically, we AIM to:
Reliable rank-ordering of grant applications is a challenge faced by grant agencies around the world. Ironically, despite the impact on academic progression/career development and the progression of cutting edge science, little research has been directed towards improving the funding peer review process. A 2007 Cochrane review concluded, “There is little empirical evidence on the effects of grant giving peer review”, and that, “Experimental studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research are urgently needed” (
The following sections discuss the grant peer review process and the use of multimedia video in scientific communication. Our team has significant expertise and a growing track record in each of these subject areas.
Grant agencies strive to fund the most promising and beneficial scientific research. Peer review is the principal hurdle for all scientific research grant applications. In 2015, the ARC reviewed 3,584 Discovery Project applications, and awarded funding to 635 Projects (17.7% success rate); (
A successful application requires complex and abstract ideas to be carefully and effectively communicated to persuade reviewers that the knowledge quest described in the proposal is valuable. However, to be ranked highly among all submitted applications, it is also essential for proposals to convey the work’s relevance and potential socioeconomic impact, and to contextualise the work within current national funding priorities. The bulk of this argument must be conveyed in a 10-page proposal for ARC Discovery Projects or in a 9-page proposal for NHMRC Project Grants. As every grant author and reviewer appreciates, articulating preliminary data, insight and vision within a 9 or 10-page written proposal is challenging.
In previous research, we investigated the current NHMRC Project Grant review process. We found that in 2009, approximately 9% of all submitted NHMRC Project Grant proposals were reliably ranked in the “fund” category, and that these top-ranked grants were funded regardless of the make-up of the review panel (
Dr Karen Mow (University of Canberra) studied the Research Grant Funding and Peer Review in Australian Research Councils (
Dr Mow’s interviews with ARC Panel members found that they all felt that the ARC review quality was high given the allocated time and resources. However, she suggested that there is evidence that the ARC is overworking its committee members and that the goodwill of the sector is under stress (
I was drained at the end of the week – and I am one of those who does take it seriously, it is quite a burden… It’s hard to get inside other people’s minds given the time you’ve got. The 600 plus applications make it torrid, the pace is furious. (Interview 10,
Improvements in grant communication could take a “physical load” off panel members, assist the review of scientific merit, and lead to more reliable project ranking and funding of the “best” research.
In a 2014 Nature Communication (
The use of video to communicate complex methods/techniques in grant applications is already endorsed by the US National Institutes of Health (NIH), the world’s largest public funder of health and medical research (
Video was recently evaluated as a tool to improve the consistency of reviewer scoring and ranking of grant applications (
Yes – our survey data suggests that they do (see Figs
Applicants' estimated time required to either write a text-based or video project proposal.
Applicants' estimation of how a PowerPoint video would assist the grant preparation and review process.
Approximately 80% of researchers felt that video would likely improve their capacity to communicate grant concepts to reviewers (Figure 2). Approximately 80% of researchers felt that their capacity to understand and review more grant applications per year could be enhanced by video. These results suggest that video may contribute to better quality review processes, and to the feasibility of engaging more reviewers per application.
Video has the potential to improve applicant-reviewer communication, while also reducing the time to prepare and review applications.
Our team has specific expertise in scientific communication, video production, bioengineering, chemistry, statistics, health economics, and psychology/survey design. Building on our significant preliminary data and publications in the area (
Comprehension is the first step in providing a high quality grant review. An apposite comment from an ARC Panel member was
I would ask myself a few common questions each time. Do I understand what the key objectives of the project were? How did that fit internationally, and did they have a methodology that seemed reasonable to do it, and could I articulate those? I’d actually try to summarise them. It was a very clarifying point, for all 120 applications I was assigned… (Interview 8,
As video is a superior learning platform relative to text-only (
Research Design: We will assess reviewer comprehension following review of 10-page text-based ARC Discovery Project Grant proposals versus 10-minute PowerPoint videos.
What scientific proposals will be used? We will use 10 ARC Discovery Project Grant applications that have been previously submitted to the ARC and for which scores and funding outcomes are known. The ARC uses the following scoring system:
The 10 ARC applications we will use will be selected as follows: 5 applications awarded score band B, and 5 applications awarded score band C. Our previous work suggests that the top 10% (Score band A) are reliably funded, but that current review processes have difficulty resolving the relative ranking of applications with Scores in band B and C – the grey zone (
To facilitate the comparison of different grant applications in the studies outlined in AIMs 2 & 3, and to best exploit CI Doran and Lott’s technical expertise, we will select grants previously submitted to the ARC Biological Sciences and Biotechnology (BSB) Panel. Recruitment will be facilitated through our broad national networks in the field.
The original authors will provide written consent for their applications to be used, agree to assist with the generation of a corresponding 10-minute PowerPoint video, and assist with the development of the associated survey questions. We will compensate the lead CI with a $2500 bursary (total of $25,000 across the project) for their contribution in addition to their normal workload; it is likely that authors will have prepared much of the required PowerPoint presentation for other conferences/meetings.
What information will be included in the 10-minute PowerPoint videos? Each video will include the same headings as required in current text-based 10-page ARC Discovery Project proposals. The grant authors will prepare PowerPoint presentations as is routine in academia. Videos, including voice-over, will be recorded using PowerPoint 2016; this recording feature is now integrated directly into Microsoft PowerPoint (including recording of a laser-pointer mimic via mouse movements).
Who will review the grant applications and/or video summaries? 100 national and international researchers with appropriate expertise in Biological Sciences and Biotechnology will each review 5 randomly selected applications. Reviewers will receive either the text-based 10-page project proposal or the 10-minute equivalent PowerPoint video. Reviewers will be given only 10 minutes to read the text-based proposal or watch the video. Readers generally focus on specific grant sections and skim through other sections, acquiring considerable information in 10 minutes.
Reviewers will be randomly allocated to one of two groups, and randomly allocated 5 proposals from the 10 text or video-based proposals described above:
Group A (50 reviewers): Randomly allocated 5 text-based proposals (total time allocated to all grants is ~2 hours).
Group B (50 reviewers): Randomly allocated 5 video-based proposals (total time allocated to all grants is ~2 hours).
Comprehension tests: Reviewers will complete comprehension tests immediately following reading/watching the grant proposal. They will have 15 minutes to complete these tests. Comprehension tests will include 10 multiple-choice questions, and 3 “short answer” questions. Multiple-choice and short answer questions will be designed to both quantify recall and test comprehension. The questions will be similar for all applications, but the question content will be developed through one-on-one consultation with the proposal’s authors.
The multiple-choice section will include questions like the following:
1. The addition of compound […] to the haematopoietic stem cell (HSC) cultures described in the preliminary data section resulted in […]?
Short answer questions will qualitatively evaluate reviewer comprehension, and take the following form:
Scores will be assigned to the reviewers’ responses to the above questions using two independent scorers who are blind to whether or not the reviewer watched the video. Any comments from peer reviewers that mention the video or text will be edited prior to scoring to keep the meaning but remove the reference to the randomised group.
Reviewer expertise in a specific subject area will likely influence outcomes, so the survey will also include a small subset of questions asking about the reviewer’s publication record and self-assessment of expertise in the grant subject area. This will allow us to investigate the influence of specific expertise in the comprehension test results.
How will the grant applications and test be administered? Reviewers will have 10 minutes to read the 10-page text-based application or watch the 10-minute videos, and an additional 15 minutes to answer the associated review questions. The proposals will be supplied to the reviewers in the first interval, and the questions in the second 15-minute interval (total time of ~2 hours per reviewer for all 5 grants). It will not be possible to go back to the proposals once the questions are released to the reviewers. The process will be administered through the QUT Blackboard website in the same manner as online exams are delivered to QUT students. The strict time limits will simulate the pressure on time-poor reviewers (especially at panels) and allow us to quantify the relative efficiency of reviewer comprehension and information retention over a standardised time-frame.
Outcome measure: Quantitative outcomes will be generated for the multiple choice questionnaires, and scores will be assigned to the responses to written questions to enable further comparison between groups. CI Page is a psychologist with relevant expertise in qualitative and quantitative research methods and survey design, and analysis of decision-making processes (
Statistical analysis: The statistical model will be a generalised linear mixed model using the count of correct responses as the dependent variable and assuming a Poisson distribution. The independent variables will be: (1) the application (1 to 10), to control for applications that were more or less challenging; (2) the reviewer, to control for the varying ability in reviewers (using a random intercept); and (3) the randomised group, to show the impact of the video intervention on the overall score.
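This model can be sketched on simulated data. The sketch below (with hypothetical variable names, and an assumed ~10% video effect) fits the application and group fixed effects with a Poisson regression in statsmodels; the real analysis would additionally fit the per-reviewer random intercept with a mixed-model routine.

```python
# Illustrative sketch of the planned Poisson model on simulated data.
# Variable names (application, reviewer, video) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_reviewers, n_apps_each = 100, 5

rows = []
for r in range(n_reviewers):
    video = r % 2                      # half the reviewers see videos
    for a in rng.choice(10, n_apps_each, replace=False):
        # assumed truth: video raises the expected count of correct answers by ~10%
        lam = 5.0 * (1.10 if video else 1.0)
        rows.append({"reviewer": r, "application": a, "video": video,
                     "correct": rng.poisson(lam)})
df = pd.DataFrame(rows)

# Poisson regression with application and group effects; the full analysis
# would also include a random intercept per reviewer.
fit = smf.poisson("correct ~ C(application) + video", data=df).fit(disp=0)
print(fit.params["video"])  # estimated log rate ratio for the video group
```

The coefficient on `video` is the log rate ratio of correct answers, so exponentiating it recovers the percentage improvement attributable to video.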
Gwet’s statistic will be used to assess the level of agreement between raters (
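For two raters, Gwet's AC1 statistic can be computed directly from its definition; a minimal sketch (the example ratings are hypothetical score bands):

```python
def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement between two raters."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    # observed agreement
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement: pe = sum_k pi_k * (1 - pi_k) / (q - 1),
    # where pi_k is the mean proportion of ratings in category k
    pi = {c: (ratings_a.count(c) + ratings_b.count(c)) / (2 * n)
          for c in categories}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)

# two raters placing 4 grants into score bands "B" and "C"
print(gwet_ac1(["B", "B", "C", "C"], ["B", "B", "B", "C"]))  # → ~0.53
```

Unlike Cohen's kappa, AC1 remains stable when one category dominates, which is why it is preferred for agreement on skewed grant-score distributions.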
We anticipate video will substantially improve reviewer comprehension and recall. Nevertheless, a sample size of 100 reviewers gives us 90% power to detect even a modest 10% increase in the number of correct answers using videos. This calculation uses a generalised linear mixed model and assumes that the text-only group will answer half of the questions correctly on average. Recruiting 100 reviewers is feasible as we will access both national and international reviewers.
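The power claim can be checked by Monte-Carlo simulation. The sketch below is a deliberate simplification (binomial counts over 5 proposals × 10 questions per reviewer, and a plain two-sample t-test rather than the full mixed model), so it is only indicative:

```python
# Monte-Carlo power check under simplified assumptions:
# 50 reviewers per arm, 50 questions each (5 proposals x 10 questions),
# text arm answers 50% correctly, video arm 10% more (55%).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, n_per_arm, n_questions = 1000, 50, 50

hits = 0
for _ in range(n_sims):
    text = rng.binomial(n_questions, 0.50, n_per_arm)
    video = rng.binomial(n_questions, 0.55, n_per_arm)
    if ttest_ind(video, text).pvalue < 0.05:  # two-sided test at alpha = 0.05
        hits += 1

power = hits / n_sims
print(f"estimated power: {power:.2f}")
```

Under these simplified assumptions the estimated power is around 0.9, consistent with the stated figure; modelling application and reviewer effects, as in the planned analysis, changes the exact number.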
EXPECTED OUTCOMES: AIM 1 will provide the first quantitative data on the capacity of video to improve reviewer recall and comprehension of content in ARC Discovery grant proposals. Enhanced communication will ultimately ease pressure on reviewers/panel members, and allow them to focus on evaluating and ranking the relative scientific merit of each application.
Chance has been repeatedly identified as a problem in funding and publication peer review (
Research Design: Using the 10 applications and videos from AIM 1, we will compare reviewers’ capacity to rank-order the 10 proposals, evaluate reviewer confidence in their assigned rank-order, and compare the relative time to review completion. We are testing for consistency and confidence in the awarded rank-ordering.
Who will review the grant applications or video equivalents? 50 volunteer national and international researchers with expertise relevant to the Biological Sciences and Biotechnology (BSB) Panel. The 50 reviewers (different reviewers than those used in AIM 1) will be randomly allocated to one of two groups:
Group 1 (25 reviewers): Text only versions of Proposals 1–10 plus the ARC RMS sections.
Group 2 (25 reviewers): Video only versions of Proposals 1–10 plus the ARC RMS sections.
How will the test be administered? All grants will be presented via QUT’s online Blackboard interface. Reviewers will be asked to review the grants and CVs (full ARC RMS pdf output less the grant proposal), using the same criteria as provided by the ARC for Discovery Project Review. In addition to the standard assessment, reviewers will be asked the following questions at the end of the review of all 10 applications:
Statistical analysis: We will repeat the bootstrap analysis from our BMJ paper that estimated the size of the “grey zone” (
Questions 2 and 3 on the lowest and highest ranking will estimate reviewer uncertainty, and the width of the interval (low minus high) can be used as a summary measure. We will compare this statistic between groups using summary statistics and a statistical test.
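The bootstrap logic can be sketched as follows: resample the reviewers with replacement, re-rank the 10 proposals by mean score each time, and inspect how much each proposal's rank varies across replicates. The scores below are simulated for illustration; in the study they come from the actual reviews.

```python
# Bootstrap sketch of ranking stability across resampled reviewer panels.
import numpy as np

rng = np.random.default_rng(7)
n_reviewers, n_props, n_boot = 25, 10, 2000

# simulated score matrix: rows = reviewers, columns = proposals;
# proposal quality declines from column 0 to 9, plus reviewer noise
quality = np.linspace(7, 3, n_props)
scores = quality + rng.normal(0, 1.5, size=(n_reviewers, n_props))

ranks = np.empty((n_boot, n_props), dtype=int)
for b in range(n_boot):
    # resample reviewers with replacement and re-rank by mean score
    sample = scores[rng.integers(0, n_reviewers, n_reviewers)]
    order = np.argsort(-sample.mean(axis=0))     # best proposal first
    ranks[b, order] = np.arange(1, n_props + 1)

# proposals whose 95% rank interval is wide sit in the "grey zone"
low, high = np.percentile(ranks, [2.5, 97.5], axis=0)
for p in range(n_props):
    print(f"proposal {p}: rank 95% interval [{int(low[p])}, {int(high[p])}]")
```

Narrower rank intervals in the video arm than the text arm would indicate more reliable rank-ordering.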
Unfortunately, we do not have a gold standard ranking with which to compare the results from our experiment, because we cannot create 10 applications that can be ranked with absolute certainty. Kaplan calculated that such an exercise would require thousands of reviewers (
EXPECTED OUTCOMES: AIM 2 will quantitatively assess the capacity of video to enable more reliable rank-ordering of ARC Discovery Project Grant applications. More reliable ranking will help ensure that the “right” science is funded. Significant enhancements in communication may enable reviewers to identify brilliant science that is challenging to convey in a text-based document.
In previous work we conducted mock NHMRC Project Grant review panels to test if the review of streamlined grants would yield a similar rank-ordering of applications as the formal NHMRC review process (
Research Design: Six panels (3 × Panel A, and 3 × Panel B) will be assembled. Panels will have 5 members each (total 30 panel members, each provided with hotel accommodation and paid return flights to Brisbane). Panels are replicated 3 times for statistical purposes, but with the number of members reduced to 5 per Panel to limit the overall cost and the challenge of assembling 30 panel members. Panels will review applications as follows:
Panel A (replicated 3 times): Text version of Proposals 1–10 plus the ARC RMS sections.
Panel B (replicated 3 times): Video version of Proposals 1–10 plus the ARC RMS sections.
The design is such that Panels review either text-based or video-based proposals as follows:
Survey and interviews: Panel members will be given a survey as part of an exit interview. The survey will include questions about members’ confidence in their awarded scoring, ranking of grants, and indications of the time required to review text and video-based applications (similar to the survey in AIM 2). Immediately following the survey, all Panel members will be reunited for a group discussion about the process. Similar questions will be asked, to gain insight regarding Panel members’ experience with the video versus text-based approaches.
Statistical analysis: We will repeat the bootstrap analysis in our BMJ paper that estimated the size of the “grey zone” (
EXPECTED OUTCOMES: We expect that video will enhance the reliability of application rank-ordering in a panel scenario. We also expect that video will enhance the confidence Panel members have in their awarded scores: video will “refresh their memory” and offer an excellent introduction from which to initiate discussion. These will be the first data on the use of video to communicate grant proposals to review panels.
There are many studies concerning funding peer review which could be undertaken, and which would be useful. We have selected a modest scope that will enable an informative evaluation of this new concept, and which can be completed on a reasonable budget within 2 years.
Of course, we also expect that the addition of video comes with substantial challenges for both those who create the videos and those who evaluate them. These challenges are likely to be non-negligible and will add complexity as well as other potentially negative biases. This project would have ample opportunity to document these issues. A few potential issues that we can identify from the pilot work are: (1) Teams with access to good communication support (video production, contexts for higher production value, etc.) are likely to produce videos of higher quality, which could create a perception of a higher quality proposal. This issue, of course, exists with text-based communication and research support, but as communication technology and modes of communication change, these issues will frame the research differently. (2) Aspects of speaker presentation will come to the fore, and issues of gender, language, and presentation will be visible and highlighted. Aspects of this are already visible in the text-based system. (3) “Reading” video is different to “reading” text. Given the rapid advancement of communication technology, we expect to find some issues about how evaluation is carried out by reviewers that are medium-dependent. Considering and describing these issues will be of crucial importance and of value across disciplinary fields.
The proposed project timeline is outlined in the table below.
Project timeline (2017–2018)
AIM 1. Test if video enhances grant reviewer comprehension.
AIM 2. Test if video enables superior grant communication/ranking with external reviewers.
AIM 3. Test if video enables superior grant communication/ranking in a panel scenario.
For research commencing in 2016, the ARC awarded ~$245 million to Discovery Projects, and the NHMRC awarded ~$420 million to Project Grants (total ~$665 million). Despite this enormous expenditure, there is surprisingly little research on how the selection process could be optimised. The role of chance in funding outcomes is well known to researchers, and increasingly well described in the literature (
We have taken an innovative approach, by proposing to use video instead of text-based proposals to enhance communication, thereby improving review quality and review efficiency. We propose to exploit a mainstream academic tool, recorded PowerPoint presentations, to enable an “easy” transition into this new communication space.
This project has the potential to greatly streamline the grant preparation and review process. It should save time for both applicants and reviewers, and it should result in the more reliable allocation of funding. Critically, more reliable rank-ordering of applications will ensure the “best” science is reliably funded.
We have carefully assembled a team that contains leaders in all of the multidisciplinary areas required to execute this project. We have previously conducted similar studies, and all aspects of this project build off our previous work and expertise. Our existing networks will allow us to recruit researchers and peer reviewers, and our past experience is that researchers are keen to take part in research that could potentially improve the system. Each team member is very productive and as a team we are cohesive.
This team of 6 CIs from QUT and ANU has been carefully assembled from experts with specific expertise in scientific communication, grant review processes, statistics, and survey design and decision-making processes. Collectively, the CIs will oversee all aspects of the research and collaborate with the Postdoctoral Fellow to perform the described studies, analyse the data and prepare manuscripts and conference papers. The Postdoctoral Fellow will be based at QUT. The team is cohesive and productive: Doran and Lott have 10 co-authored outputs, including 3 outputs directly related to this application (
Dr Mike Doran's formal training includes a BSc (genetics), BEng (chemical engineering) and a PhD (Biomedical Engineering, 2006). He was the lead author on two publications and one video discussing the incorporation of video into grant applications (
A/Prof Barnett is a statistician with 21 years’ experience in clinical and public health trials. Barnett won a CIA Project Grant in 2011 to study evidence-based methods for funding evidence-based medicine; work directly related to this project. This work led to 6 published papers (1 more under review) and 2 letters in Nature. Barnett also made submissions based on his research to the McKeon review of Health and Medical Research especially concerning ways to reduce the complexity of the Project Grant process. In this project Barnett will help with study design, oversee the data collection and lead the statistical analyses.
Dr Leach’s formal training includes a BSc (biophysics), BA (Literature), MA (Communication), and PhD (History and Philosophy of Science). Leach’s research is focused on science and health communication and public engagement with science, technology and medicine. Leach is President of the Australian Science Communicators and Director of the Australian National Centre for the Public Awareness of Science. In this project she will bring communication expertise, provide direction on the communication research questions, and support on-going efforts to improve our communication platforms for researchers and grant agencies.
Dr William Lott holds a BSc (chemistry) and a PhD (organic chemistry). Lott and Doran published in Nature and Trends in Biochemical Sciences suggesting a video summary to improve research grant communication (
Dr Katie Page is a psychologist (PhD 2008) who specialises in the study of individual judgment and decision-making, and behavioural economics. Her contributions to this project will be in the overall research design, the development of the qualitative questions, and the survey design and analysis.
Dr William Grant holds a BA, and a PhD in Political Science (2007). His research interests include a focus on the relationship between science and public policy, and an examination of the spatial practices in modern science communication. He will contribute to evaluation of video communication, and our on-going efforts to improve this process, for end users (grant agencies and researchers).
The project team is located at three excellent QUT- or ANU-supported research centres across Australia.
Our CI team has an excellent publication and engagement record; our results will be communicated through academic journals, conferences and in more “accessible” platforms like The Conversation. QUT and ANU share publications through free access repositories (such as QUT ePrints). CI Leach is the President of the Australian Science Communicators, providing a purpose-suited communications network that extends beyond QUT, ANU and Australia. Our multidisciplinary team gives us access to a broad cross-section of the academic community, and this will enhance dissemination of our results. We will share our results with the ARC/NHMRC and will offer to present to internal ARC/NHMRC committees.
A data management plan will be developed and managed according to QUT’s policy on Management of Research Data (MoPP D/2.8). Data management practices will follow the Australian Code for the Responsible Conduct of Research, specifically Section 2, “Management of research data and primary materials.” QUT and ANU provide storage for research data that is appropriate to that data and relevant to that point in the research data life cycle. Surveys will all be delivered using QUT Blackboard, which is specifically designed for robust data capture, storage, and retrieval. De-identified data will be freely shared with our primary publication in order to enhance research reproducibility and openness.
This grant application was submitted to the Australian Research Council (ARC) for funding in 2016. The project was not funded, and remains unfunded as of January 2017. We intend to reapply for further funding. In an effort to improve the proposal, we invite readers to contact us if they are interested in contributing to the proposed project as “text vs video grant reviewers” or if they have general recommendations as to how to improve aspects of the project design.