Research Ideas and Outcomes : Workshop Report
Report on the Marine Imaging Workshop 2022
Catherine Borremans, Jennifer M Durden§,|, Timm Schoening, Emma J. Curtis#, Luther A Adams¤, Alexandra Branzan Albu«, Aurélien Arnaubec», Sakina-Dorothée Ayata˄,˅, Reshma Baburaj¦, Corinne Bassinˀ, Miriam Beckˁ, Katharine T. Bigham₵,, Rachel E. Boschen-Rose, Chad Collett, Matteo Contini, Paulo V.F. Correa, Carlos Dominguez-Carrió, Gautier Dreyfus, Graeme Duncan, Maxime Ferrera», Valentin Foulon, Ariell Friedman, Santosh Gaikwad, Chloe Game₩,, Adriana Gaytán-Caballero‡‡, Fanny Girard§§, Michela Giusti||, Mélissa Hanafi-Portier, Kerry L Howell¶¶, Iryna Hulevata##, Kiamuke Itiowe¤¤, Chris Jackett««, Jan Jansen»», Clarissa Karthäuser˄˄, Kakani Katija§§, Maxime Kernec˅˅, Gabriel Kim, Marcelo Visentini Kitahara¦¦, Daniel Langenkämperˀˀ, Tim Langloisˁˁ, Nadine Lanteri₵₵, Claude Jianping Liℓℓ, Qi-Ran Li₰₰, Pierre-Olivier Liabot₱₱, Dhugal Lindsay₳₳, Ali Loulidi₴₴, Yann Marcon₣₣, Simone Marini₮₮, Ashley Marranzino₦₦, Miquel Massot-Campos₭₭, Marjolaine Matabos, Lenaick Menot, Bernabé Moreno₲₲, Marcus Morrissey, David Nakath‽‽,, Tim Nattkemperˀˀ, Monika Neufeld₩₩, Matthias Obst₸₸, Karine Olu, Alexa Parimbelli‡‡‡, Francesca Pasotti§§§, Dominique Pelletier|||, Margaux Perhirin¶¶¶, Nils Piechaud###, Oscar Pizarro¤¤¤,«««, Autun Purser»»», Clara F. Rodrigues˄˄˄, Elena Ceballos Romero˅˅˅,¦¦¦, Brian Schlining§§, Yifan Song, Heidi M. Sosik˄˄, Marc Sourisseauˀˀˀ, Bastien Taormina###, Jan Taucher, Blair Thornton#, Loïc Van Audenhaege§,, Charles von der Medenˁˁˁ, Guillaume Wacquet₵₵₵, Jack Williams#, Kea Witting, Martin Zurowietzˀˀ
‡ Univ Brest, CNRS, Ifremer, UMR6197 Biologie et Ecologie des Ecosystèmes marins Profonds, Plouzané, France
§ National Oceanography Centre, Southampton, United Kingdom
| Norwegian University of Science and Technology, Trondheim, Norway
¶ GEOMAR Helmholtz Centre for Ocean Research, Kiel, Germany
# University of Southampton, Southampton, United Kingdom
¤ South African National Biodiversity Institute (SANBI), Cape Town, South Africa
« University of Victoria, Victoria, Canada
» Ifremer, DFO, La Seyne-sur-Mer, France
˄ Sorbonne Université, CNRS, IRD, MNHN, Laboratoire d’Océanographie et du Climat: Expérimentation et Analyses Numériques, LOCEAN-IPSL, Paris, France
˅ Institut universitaire de France (IUF), Paris, France
¦ Division of Electronics, School of Engineering, Cochin University of Science and Technology, Ernakulam, Kerala, India
ˀ Schmidt Ocean Institute, Palo Alto, CA, United States of America
ˁ Sorbonne Université, CNRS, Laboratoire d'Océanographie de Villefranche, LOV, Villefranche-sur-Mer, France
₵ National Institute of Water and Atmospheric Research (NIWA), Wellington, New Zealand
ℓ School of Biological Sciences, Victoria University of Wellington, Wellington, New Zealand
₰ Marine Directorate of the Scottish Government, Aberdeen, United Kingdom
₱ SubC Imaging, Newfoundland, Canada
₳ Ifremer, DOI, La Réunion, France
₴ Ocean Networks Canada, Victoria, BC, Canada
₣ Instituto de Investigação em Ciências do Mar - Okeanos, Universidade dos Açores, Horta, Portugal
₮ FORSSEA Robotics, Paris, France
₦ Joint Nature Conservation Committee, Peterborough, United Kingdom
₭ Lab-STICC, IA & OCEAN, ENIB - Technopôle Brest-Iroise, Plouzané, France
₲ Greybits Engineering, Sydney, Australia
‽ Laboratory of Benthic Ecological Trait Analysis (L-BETA), CSIR- National Institute of Oceanography, Mumbai, India
₩ University of East Anglia, Norwich, United Kingdom
₸ Gardline Ltd., Great Yarmouth, United Kingdom
‡‡ Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico
§§ Monterey Bay Aquarium Research Institute, Moss Landing, CA, United States of America
|| Istituto Superiore per la Protezione e la Ricerca Ambientale (ISPRA), Roma, Italy
¶¶ University of Plymouth, Plymouth, United Kingdom
## Faculty of Physical, Mathematical and Natural Sciences, University of Szczecin, Szczecin, Poland
¤¤ Federal University of Petroleum Resources Effurun, Delta State, Nigeria
«« CSIRO, Hobart, Australia
»» Australian Centre for Excellence in Antarctic Science, University of Tasmania, Hobart, Australia
˄˄ Woods Hole Oceanographic Institution, Woods Hole, MA, United States of America
˅˅ Institut Universitaire Européen de la Mer (IUEM) - Technopôle Brest Iroise, Plouzané, France
¦¦ Universidade de São Paulo, São Sebastião, Brazil
ˀˀ Bielefeld University, Bielefeld, Germany
ˁˁ University of Western Australia, Perth, Australia
₵₵ Ifremer, RDT, Plouzané, France
ℓℓ Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
₰₰ INNOVICES SARL, Boulogne Billancourt, France
₱₱ Ifremer, DYNECO/LEBCO, Plouzané, France
₳₳ Japan Agency for Marine-Earth Science and Technology, Yokosuka, Japan
₴₴ Geoscience Laboratory, Faculty of Sciences Ain Chock, University Hassan II, Casablanca, Morocco
₣₣ MARUM – Center for Marine Environmental Sciences, University of Bremen, Bremen, Germany
₮₮ CNR, ISTITUTO DI SCIENZE MARINE, Lerici, Italy
₦₦ University Corporation of Atmospheric Research, NOAA Ocean Exploration, Fort Lauderdale, FL, United States of America
₭₭ Department of Civil, Maritime and Environmental Engineering Faculty of Engineering and Physical Science, University of Southampton, Southampton, United Kingdom
₲₲ Marine Ecology Department, Institute of Oceanology Polish Academy of Sciences, Sopot, Poland
‽‽ Kiel University, Kiel, Germany
₩₩ Dalhousie University, Department of Oceanography, Halifax, NS, Canada
₸₸ Göteborg University, Gothenburg, Sweden
‡‡‡ Ryan Institute & School of Natural Sciences, University of Galway, Galway, Ireland
§§§ Marine Biology Laboratory, Ghent University, Gent, Belgium
||| Ifremer, UMR DECOD, HALGO, LTBH, Lorient, France
¶¶¶ LOCEAN, UMR 7159, SU-CNRS-IRD-MNHN ; Sorbonne Université, Paris, France
### Institute of Marine Research, Bergen, Norway
¤¤¤ Norwegian University of Science and Technology (NTNU), Trondheim, Norway
««« University of Sydney, Sydney, Australia
»»» Alfred Wegener Institute, Bremerhaven, Germany
˄˄˄ Centre for Environmental and Marine Studies (CESAM), Department of Biology, University of Aveiro, Aveiro, Portugal
˅˅˅ Department of Marine Chemistry and Geochemistry, Woods Hole Oceanographic Institution, Woods Hole, MA, United States of America
¦¦¦ Department of Applied Physics II, University of Sevilla, ETSIE, Seville, Spain
ˀˀˀ Ifremer, DYNECO, Plouzané, France
ˁˁˁ University of Kwazulu-Natal, Westville, South Africa
₵₵₵ Ifremer, Unité Littoral, Laboratoire Environnement et Ressources, Boulogne-sur-mer, France

Abstract

Imaging is increasingly used to capture information on the marine environment, thanks to recent improvements in imaging equipment, camera platforms and data storage. In this context, biologists, geologists, computer scientists and end-users must come together to discuss methods and procedures for optimising the quality and quantity of data collected from images. The fourth Marine Imaging Workshop was held in a hybrid format from 3-6 October 2022 in Brest, France. More than a hundred participants attended in person and about 80 joined the online sessions. The workshop was organised as a single plenary series of presentations followed by discussion sessions, which used live polls and open questions to record the imaging community’s current and future ideas. In addition, a full day was dedicated to practical sessions on image analysis, data standardisation and communication tools. This format widened participation, including attendees from lower-income countries and early career scientists, working across laboratory, benthic and pelagic imaging.

This article summarises the topics addressed during the workshop, particularly the outcomes of the discussion sessions, both for future reference and to make the workshop results publicly available.

Keywords

photography, method development, underwater, pelagic, benthic, optical imaging, video, ocean observation, remote sensing, computer vision, image, scientific community

Date and place

The Marine Imaging Workshop 2022 was held at Océanopolis in Brest, France, from 3 to 6 October 2022.

List of participants

The participants who wished to contribute to this workshop report appear in the author list. Please see Fig. 1 for a group photo of the in-person participants.

Figure 1.  

In-person participants of the Marine Imaging Workshop 2022 (photo credit: Olivier Dugornay, Ifremer).

Introduction

Marine imaging continues to grow in popularity as a method for investigating and monitoring marine environments. The Marine Imaging Workshop 2022 was the fourth edition of the workshop, previously held in Southampton, UK (2014; Durden et al. (2016)), Kiel, Germany (2017; Schoening et al. (2017)) and Victoria, Canada (2019). Since the last Marine Imaging Workshop, several national and international initiatives related to subsets of marine imaging methods have begun, with overlapping participants. The Big Picture (Golding et al. 2021) is a UK consortium focused on seabed community monitoring using photography, spanning research, government agencies and consultancies. The Quatre A workshop (unpubl. data) was a partnership between French and Australian academic and government researchers that also focused on methods for seabed habitat monitoring. Digital twins of environmental systems (Bauer et al. 2021) are also likely to incorporate marine imaging soon.

Aims of the workshop

The Marine Imaging Workshop aims to bring together a multidisciplinary group of marine scientists, engineers, computer scientists and users to present and discuss the latest developments in marine imaging methodology. These developments were presented by participants through talks, posters and hands-on sessions. Keynote talks were given by Dr. Edie Widder (“Imaging Deep-Sea Bioluminescence”) and Prof. Chris Lintott (“Past, present and Future of Online Citizen Science. A View from the Zooniverse”). The workshop featured discussion sessions to foster new ideas and collaborations and to challenge participants to consider the trends in marine imaging and identify gaps.

In 2022, the workshop was delivered in a hybrid format for the first time, with both in-person and online participation. Online participants could watch talks on YouTube and present either live via Zoom or asynchronously, by submitting a recorded talk. They could ask questions by typing in the chat box of the YouTube channel; the onsite co-chair relayed online questions to the presenter at the end of each talk, so presenters could also answer online questions live. Soon after the live talks, recordings were made available for replay on the same channel until 31 October, to accommodate different time zones. All posters (from both online and onsite presenters) were broadcast as videos on the same channel during coffee and lunch breaks, and remained available for replay until 31 October.

As the fourth edition of the series, the Marine Imaging Workshop 2022 aimed to increase the diversity of participants over previous workshops in two important ways. The first was to increase the global reach of the workshop, particularly to include participants from lower- and middle-income countries. This aim was supported by the availability of online participation and bursaries offered for attendance. The second was to broaden the discussion to include wider applications of marine imaging. Previous Marine Imaging Workshops focused on benthic environments, although many aspects of marine imaging are broadly applicable. The 2022 workshop aimed to include more participants focused on pelagic applications of marine imaging.

Key outcomes and discussions

Attendance and scope

The workshop consisted of 188 participants: 110 in person and 78 online. More than 25% of the participants (48 individuals) identified as early career researchers, 13 of whom received bursaries to attend. Participants worked primarily in academia/research (167); the remainder comprised 17 participants from industry, three from government/policy and one from consultancy. In-person participants included 50 from the host country, 35 from the rest of Europe, 17 from the Americas, four from Asia and four from Africa. Online participants numbered 44 from Europe, 22 from the Americas, four from Asia, six from Oceania and two from Africa.

A total of 84 abstracts prepared by 358 authors were accepted, with 51 talks and 33 posters presented. Contributions were grouped into the following five themed sessions:

  • Session 1: Platforms, optical sensors and (underwater) image acquisition, calibration and preprocessing (8 talks, 7 posters);
  • Session 2: Tools for image annotation (8 talks, 5 posters);
  • Session 3: Automated image processing (9 talks, 9 posters);
  • Session 4: Quality control, standardisation and sharing of imagery data (12 talks, 5 posters);
  • Session 5: Scientific advancements in biology and geology using (underwater) imagery data (14 talks, 7 posters).

Discussion session themes broadly mirrored these presentation session themes. Hands-on (interactive) sessions were conducted on the annotation platforms SQUIDLE+ (Williams and Friedman 2015) and BIIGLE 2.0 (Langenkämper et al. 2017), organism identification with SMarTaR-ID (Howell et al. 2019), FAIR data management for image metadata (Schoening et al. 2022), production of 3D image mosaics (Arnaubec et al. 2023) and the ImmerSea Lab virtual reality project (run by Maxime Kernec and Loïc Van Audenhaege). All of these sessions were available to in-person participants, with a subset also run in hybrid mode to include online participants.
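As an illustration of the FAIR metadata topic covered in the hands-on sessions, the sketch below builds a minimal image metadata record in the spirit of iFDO (Schoening et al. 2022) and writes it to YAML. The field names are paraphrased from the iFDO concept and all values are invented; the published specification should be consulted for the authoritative schema.

    # Minimal sketch of an iFDO-style image metadata record; field names are
    # paraphrased from the iFDO concept and values are invented for illustration.
    # Requires PyYAML (pip install pyyaml).
    import yaml

    ifdo = {
        "image-set-header": {
            "image-set-name": "example-dive-042",  # hypothetical dataset name
            "image-set-uuid": "00000000-0000-0000-0000-000000000000",
            "image-set-handle": "https://example.org/handle/42",  # hypothetical persistent handle
        },
        "image-set-items": {
            "IMG_0001.jpg": {
                "image-datetime": "2022-10-03T09:15:00.000000",
                "image-latitude": 48.358,   # decimal degrees
                "image-longitude": -4.570,
                "image-depth": 1520.0,      # metres below the sea surface
            },
        },
    }

    with open("ifdo.yaml", "w") as f:
        yaml.safe_dump(ifdo, f, sort_keys=False)

Capturing such a record alongside the imagery is what makes datasets findable and reusable long after acquisition.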

New discussion session format and analysis of voting

The discussion sessions are a popular component of the Marine Imaging Workshop, according to past participant feedback. At previous workshops, discussions were held in small groups but, owing to the venue space and online participation, discussions were held as a single group in 2022. An electronic platform (Slido) was employed to conduct polling, record discussion ideas and facilitate voting to quantify support for those ideas. Each session posed several electronic questions: the first were polls with multiple-choice answers and the later ones were open-ended. In response to an open-ended question, participants could suggest ideas and vote for others' ideas while polling was open; ideas suggested early in the session were therefore likely to receive more votes than those suggested later. The electronic questions and votes were used as prompts for verbal discussion by in-person participants. This new format enabled online participants to contribute through the polls, but meant that verbal discussion by in-person participants was substantially reduced compared to previous workshops.

For each discussion theme, the results of polls are presented first, followed by summaries of the recorded responses (ideas and votes) to each discussion/open-ended question. To summarise the voting data, spurious comments (e.g. ‘testing online platform’, ‘Hello’) were removed and the remaining responses were assessed as follows. The total numbers of suggested ideas and votes were reported, and suggested ideas were grouped into common themes with their votes aggregated. The most popular individual suggestions (e.g. >10 votes) were noted where noteworthy or not captured by the common themes. Raw data are presented in Suppl. material 1.
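This summarisation procedure can be reproduced in a few lines of pandas. The sketch below assumes a CSV export with 'idea' and 'votes' columns and a hand-assigned 'theme' column; the file layout and column names are our assumptions, not the actual Slido export format.

    # Minimal sketch of the vote-aggregation step described above; the CSV
    # layout and column names are assumptions, not the actual Slido export.
    import pandas as pd

    responses = pd.read_csv("session1_q1.csv")  # columns: idea, votes, theme

    # Drop spurious comments (e.g. 'testing online platform', 'Hello')
    responses = responses[responses["theme"] != "spurious"]

    # Total ideas and votes, as reported for each question
    print(f"{len(responses)} ideas, {responses['votes'].sum()} votes")

    # Aggregate votes within each thematic group, most popular first
    themes = (responses.groupby("theme")
              .agg(n_ideas=("idea", "count"), votes=("votes", "sum"))
              .sort_values("votes", ascending=False))
    print(themes)

    # Flag standout individual ideas (> 10 votes)
    print(responses.loc[responses["votes"] > 10, ["idea", "votes"]])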

A total of 11 questions were asked in the polls, with an average of five multiple-choice responses per question. The number of votes per question ranged from 52 to 88 (arithmetic mean 71 or 0.38 per participant). A total of 14 open-ended questions were asked across the five discussion sessions, prompting 411 ideas submitted and 1903 votes. The number of ideas per question ranged from 13 to 45 (arithmetic mean 29), while the number of votes per question ranged from 71 to 194 (arithmetic mean 126 or 0.67 per participant).

This electronic voting facilitated the integration of online participants into the discussion along with in-person attendees and the quantification of strength of opinion/feeling about particular ideas. It may also have made the discussion more equitable, by reducing the dominance of some voices. However, it reduced the back-and-forth discussion that characterised previous workshops, which meant that ideas were not developed beyond short points.

Session 1: Extent of imaging

Polling

See Fig. 2 for polling results of question 1: "What are the barriers to increasing the extent of imaging the oceans?". See Fig. 3 for polling results of question 2: "Have you used crowd-sourced marine image data?". See Fig. 4 for polling results of question 3: "Which area/volume does all image data you have ever worked with cover?".

Figure 2.  

Polling results of discussion session 1 "Extent of imaging", question 1: "What are the barriers to increasing the extent of imaging the oceans?".

Figure 3.  

Polling results of discussion session 1 "Extent of imaging", question 2: "Have you used crowd-sourced marine image data?".

Figure 4.  

Polling results of discussion session 1 "Extent of imaging", question 3: "Which area/volume does all image data you have ever worked with cover?".

How can we increase the spatial/temporal/resolution extent of imaging in the oceans?

A total of 37 ideas were submitted, with 184 votes. The most popular group of ideas was for data sharing, with 10 ideas and 87 votes. These ideas involved facilitating shared datasets supported by software, tools for combining multiple data sources and data aggregators or brokers between repositories. Three of the four most popular individual ideas (> 10 votes each) were related to sharing: “shared datasets” (34 votes), “supporting a permanent network to facilitate data, tech and expert sharing” (16 votes) and “share collected data” (15 votes). Ideas related to equipment improvements were also popular (10 ideas with 39 votes), including new technology development (e.g. ARGOs for seafloor imaging, technology to reduce costs), rental of imaging equipment, more use of autonomous platforms and making information about imaging systems more widely available. Collaboration and partnership were themes of 8 ideas with 24 votes, including increased collaboration between scientists and with industry and centralising understanding of knowledge gaps. The concept of data standards was the theme of 3 ideas with 21 votes, including the fourth most popular individual idea, “Standards (e.g. ifdo!)” (12 votes). Other ideas outside these groupings (6 ideas with 13 votes) focused on user-friendly automated image analysis software, benchmarks and ground-truthing/validation, increased use of acoustic data for photo survey design and capacity building.

What role can capacity building and citizen science have in increasing the extent?

A total of 30 ideas were submitted, with 161 votes. Using capacity building and citizen science to generate annotations was by far the most popular theme, with 17 ideas and 110 votes. These ideas (largely for citizen science) included generating annotations from more images, from existing and new datasets and for tasks not requiring specialist knowledge (e.g. finding major events, litter or high-level taxonomic groups). Capacity building and citizen science were also seen as a way to build public interest in, and government funding for, marine science (6 ideas, 36 votes). Other ideas outside these two main groups (7 ideas, 16 votes) included linking to television shows for citizen scientists, engaging the computer science community and increasing the global extent of observations.

Where should we go and image (more)?

A total of 45 ideas were submitted, with 184 votes. “Everywhere and anywhere” was the most popular group of ideas, expressed as 6 ideas with 68 votes, including the need for repeated measurements/monitoring. Specific areas were suggested in 13 ideas with 40 votes, including the Arctic/Antarctic, the Indian/Atlantic/Southern Oceans, the Mekong River, African continental waters and the seafloor beyond national jurisdiction. The pelagic realm was the subject of 4 suggested ideas that garnered 24 votes, including the coastal, midwater and deep pelagic. Selecting locations based on pressing environmental needs was the theme of 6 ideas with 18 votes, including relevance to climate change, biodiversity and seabed mining. Five ideas, with 11 votes, related to ocean features such as seamounts, trenches and canyons, deep-sea vent peripheries and oxygen minimum zones. A wide variety of further ideas defied grouping (11 ideas, 27 votes), including shallow and deep water, lakes, artificial structures and even the extraterrestrial oceans of icy moons!

Session 2: Interoperability in marine imaging

Polling

See Fig. 5 for polling results of question 1: "Which improvements to interoperability would be most useful to you?". See Fig. 6 for polling results of question 2: "Who has a mandate to define interfaces for interoperability between marine imaging software tools?".

Figure 5.  

Polling results of discussion session 2 "Interoperability in marine imaging", question 1: "Which improvements to interoperability would be most useful to you?".

Figure 6.  

Polling results of discussion session 2 "Interoperability in marine imaging", question 2: "Who has a mandate to define interfaces for interoperability between marine imaging software tools?".

What specific improvements do you want most?

This question prompted relatively few ideas (28), but strong opinions (180 votes). The most popular group of improvements to interoperability (10 ideas, 80 votes) related to the development of best practices, standards and ontologies. Ideas included best practices for common processes; ontologies for annotation tool features (21 votes); standards for labels (18 votes) and for describing annotation geometry (17 votes); ontologies for annotation data and metadata; and shared taxonomic catalogues for biota in images. Another group of ideas focused on the components between which interoperability was desired (9 ideas, 45 votes), including interoperability of software/outputs/metadata from different instruments/systems and the suitability of annotation software output formats for machine learning. Other ideas to improve interoperability (9 ideas, 55 votes) included making software user-friendly (e.g. point-and-click interfaces, online video tutorials) and providing clear documentation for data interfaces.
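As a concrete example of the geometry-standards idea, the sketch below writes an annotation in the COCO convention, a format already exchanged widely by in-air computer-vision tools and one that marine annotation software could map to. The image, category and coordinate values are invented for illustration.

    # Illustrative COCO-style annotation export; one existing convention for
    # describing annotation geometry. All values are invented for the example.
    import json

    annotation_export = {
        "images": [
            {"id": 1, "file_name": "IMG_0001.jpg", "width": 1920, "height": 1080},
        ],
        "categories": [
            # A shared taxonomic catalogue would standardise these labels
            {"id": 7, "name": "Actiniaria"},
        ],
        "annotations": [
            {
                "id": 101,
                "image_id": 1,
                "category_id": 7,
                "bbox": [640.0, 360.0, 128.0, 96.0],  # [x, y, width, height] in pixels
            },
        ],
    }

    with open("annotations.json", "w") as f:
        json.dump(annotation_export, f, indent=2)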

How can we engage with industry towards interoperability?

A total of 24 ideas were submitted, garnering 89 votes. By far the most popular group of ideas (15 ideas, 81 votes) related to collaboration with industry, principally paid collaboration. Ideas included asking industry how best to engage, communicating the benefits of partnership and the needs of the research community and marketing the value of the collaboration. Other ideas (9 ideas, 8 votes) included establishing standards for work that extended beyond research, making our data accessible and identifying applications outside marine imaging.

How do we future proof our interoperability?

Of all of the open-ended questions, participants responded least to this one; a total of 13 ideas were submitted, with 74 votes. The most popular group of ideas centred on building a network of collaboration to communicate developments (3 ideas, 36 votes). A second popular group focused on building standards and best practices (6 ideas, 18 votes), including adopting standards from outside the marine imaging community and developing new standards and best practices. Uncategorisable ideas (4 ideas, 20 votes) included the most popular individual idea: “one ring to rule them all”.

Session 3: Automation

Polling

See Fig. 7 for polling results of question 1: "Which AI Frameworks are your lab using/planning to use?". See Fig. 8 for polling results of question 2: "Is AI for underwater images fundamentally different to existing "in-air" methods?". See Fig. 9 for polling results of question 3: "What do you think is the biggest barrier to making AI systems applicable across datasets?".

Figure 7.  

Polling results of discussion session 3 "Automation", question 1: "Which AI Frameworks are your lab using/planning to use?".

Figure 8.  

Polling results of discussion session 3 "Automation", question 2: "Is AI for underwater images fundamentally different to existing "in-air" methods?".

Figure 9.  

Polling results of discussion session 3 "Automation", question 3: "What do you think is the biggest barrier to making AI systems applicable across datasets?".

What impact would interactive GUI tools for AI non-specialists have?

A total of 24 ideas were submitted, with 71 votes. Many responses suggested that the impact of interactive GUI tools for AI non-specialists would be positive (14 ideas, 18 votes), stating that such tools would be “essential”, would “make it possible” or “make it easier”, could reduce the annotation bottleneck, could circumvent having to learn programming (or new languages) and would assist in having a full workflow in one place. Three ideas, with 19 votes, suggested that the impact would be negative, citing that such tools could be detrimental if users did not understand the underlying theory and complexity or if programming was not also taught. Six ideas, with 34 votes, took neither a positive nor a negative outlook; these indicated that the impact could be significant, pointing to the “difference between no-use and use-at-all” and the lowered bar for experimentation and iteration, but noted a potential need for increased flexibility.

What can we learn from any overlap to other imaging domains?

This question garnered 32 ideas in response, with 128 votes. A group of 4 ideas with 49 votes, containing the most popular individual ideas, related to large datasets (e.g. “big public datasets”, “aggregating datasets”, “integration of all data”). The largest group of ideas related to acquiring technical tools from other imaging domains (16 ideas, 54 votes); these covered the design and development of workflows/pipelines (18 votes), standardisation (16 votes), current best practice, cross-platform tools, best neural network designs, gamification and borrowing code. A smaller group of ideas (7 ideas, 12 votes) suggested that we could learn aspects of research community building and partnership from other domains, including how best to cooperate and collaborate, how to involve industry, planning more workshops, copyright and use issues and learning strategies from other fields (particularly medical imaging). Miscellaneous ideas (5 ideas, 13 votes) included the potential to acquire synthetic data, identifying where cost and time savings could be made and taking inspiration from ideas in other domains.

How do we advance from tuning AI systems for one dataset towards cross-dataset systems?

This question prompted the most enthusiasm of the session, with 35 ideas and 152 votes. One group of responses (6 ideas, 50 votes) suggested training models on multiple datasets and using online platforms to facilitate this. Another group (3 ideas, 26 votes) suggested that sharing (trained) models (potentially via GitHub) was important, while a third group (3 ideas, 12 votes) suggested incorporating additional information, such as camera settings, platform settings and locational information, into models. Other technical suggestions included building localised models for particular areas (10 votes), style transfer (8 votes), learning on 3D models, self-supervised learning, examining features used in clustering for clues, human-in-the-loop systems and sharing experiences across users/developers. A final group of ideas (6 ideas, 25 votes) was broader, suggesting that the problem was a wider issue in computer vision, that expectations should be set appropriate to the (large) size of this task or expressing the need for assistance with getting started (10 votes).
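One of the suggestions above, training a single model on multiple datasets, is mechanically straightforward once label vocabularies have been reconciled. The PyTorch sketch below combines several image folders into one training set and fine-tunes an in-air pretrained network; the dataset paths are hypothetical and a shared label mapping is assumed to have been done beforehand.

    # Minimal sketch of cross-dataset training, assuming all sources use the
    # same class folder names (i.e. a reconciled label vocabulary).
    # Requires torch and torchvision.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # One ImageFolder per source survey; directory names are hypothetical
    sources = ["data/benthic_survey_A", "data/benthic_survey_B"]
    combined = ConcatDataset([datasets.ImageFolder(s, transform) for s in sources])
    loader = DataLoader(combined, batch_size=32, shuffle=True)

    n_classes = len(datasets.ImageFolder(sources[0]).classes)
    model = models.resnet18(weights="IMAGENET1K_V1")  # start from in-air pretraining
    model.fc = torch.nn.Linear(model.fc.in_features, n_classes)

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for images, labels in loader:  # one epoch, for brevity
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

Whether a model trained this way generalises to a new camera or site remains the open question raised in the discussion; the mechanics of pooling data are the easy part.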

Session 4: Reproducibility

Polling

See Fig. 10 for polling results of question 1: "Which part of Findable/Accessible/Interoperable/Reusable is most important to improve reproducibility?". See Fig. 11 for polling results of question 2: "Can we reproduce studies at all?".

Figure 10.  

Polling results of discussion session 4 "Reproducibility", question 1: "Which part of Findable/Accessible/Interoperable/Reusable is most important to improve reproducibility?".

Figure 11.  

Polling results of discussion session 4 "Reproducibility", question 2: "Can we reproduce studies at all?".

Which aspects of marine imaging are most important to make reproducible?

This question prompted 26 ideas with 116 votes. Eleven ideas, garnering 82 votes, related to documentation and metadata describing acquisition, processing and decisions. Making annotations reproducible, including classification, taxonomic precision and output formats, was suggested in 7 ideas with 22 votes. Eight miscellaneous ideas (12 votes) were largely related to concerns about reproducibility, including reproducing studies of large issues, the availability of raw imagery to test reproducibility and whether we should question existing findings.

How can we increase the number of observers per dataset? (Should we?)

This question prompted slightly more ideas (29) than the previous one and 132 votes. Many responses suggested that we should increase the number of observers; 2 ideas indicated so, but without suggestions on how (9 votes). The use of citizen science was popular (2 ideas, 25 votes). Other suggestions for capacity building (5 ideas, 12 votes) included providing training, engaging students/interns/volunteers and operating an accreditation system for annotators. These suggestions were complemented by other strong suggestions to pay annotators (3 ideas, 36 votes). The use of games and competitions, potentially with awards or prizes, garnered 5 ideas and 13 votes. Two ideas (10 votes) suggested collaboration facilitated by swapping images. Miscellaneous suggestions to increase the number of observers included offering paper authorship, use of AI alongside humans and support to aggregate existing data. A final group of responses (6 ideas, 22 votes) expressed concerns with increasing the number of observers, including how to address annotator variability and quality control.
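One way to address the annotator-variability concern raised above is to quantify agreement between observers before aggregating their annotations. The sketch below uses Cohen's kappa, a standard chance-corrected agreement statistic; the label lists are invented for illustration.

    # Minimal sketch quantifying agreement between two annotators of the same
    # images with Cohen's kappa; label lists are invented for illustration.
    from sklearn.metrics import cohen_kappa_score

    annotator_a = ["coral", "sponge", "coral", "fish", "coral", "sponge"]
    annotator_b = ["coral", "sponge", "fish", "fish", "coral", "coral"]

    kappa = cohen_kappa_score(annotator_a, annotator_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1 = perfect agreement, 0 = chance level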

Session 5: Future of marine imaging

Polling

See Fig. 12 for polling results of question 1: "Where have you seen the biggest progress of marine imaging during this workshop?".

Figure 12.  

Polling results of discussion session 5 "Future of marine imaging", question 1: "Where have you seen the biggest progress of marine imaging during this workshop?".

What role can our community play for marine imaging over the next ten years?

This question prompted high engagement, with the most ideas of the session (33) and the most votes of any open-ended question (194). By far the most popular group of ideas (14 ideas, 118 votes) related to collaboration, including sharing ideas and expertise and sharing/validating each other’s data and workflows. A second group (8 ideas, 48 votes) suggested that the community could contribute to the technical aspects of method development, including standardisation (44 votes). Continued hosting of workshops, including both the Marine Imaging Workshop and technical workshops for software tools, formed another group (2 ideas, 17 votes). A further group (7 ideas, 4 votes) related to engagement with others, including with other scientific fields and data types and with other groups of people (e.g. those local to study sites, school pupils, general audiences). A miscellaneous idea was to start a journal, or journal theme, on marine imaging techniques.

Where are decisions required that block our progress in marine imaging?

This question prompted the least engagement of the session: 21 ideas and 90 votes. The most popular group of ideas (9 ideas, 64 votes) focused on the need for data unification and standards covering imagery, metadata, data provenance and models. A related group (2 ideas, 9 votes) suggested unification or standardisation of software and workflows. A small group (4 ideas, 8 votes) identified funding and costs as a block. Miscellaneous ideas included the need for institutional buy-in and leadership, developing larger teams and increased interoperability/testing of processes to find blocks.

Where do you see marine imaging ten years from now?

This question garnered 32 ideas and 144 votes. Three ideas (29 votes) suggested that we would still be working on some current challenges in marine imaging, including still talking about standardisation (24 votes), struggling with image storage and multiscale imaging. Many ideas anticipated significant progress using AI (7 ideas, 27 votes), including its ubiquitous use for annotation, increased accuracy and even the suggestion of full sentience. Another group of ideas focused on increased imaging (4 ideas, 10 votes), including increased use of autonomous underwater vehicles, increased image resolution and having captured > 100 billion images. Enthusiasm for the further development of the marine imaging community was evident from the 6 ideas and 41 votes cast, including more connection between computer scientists/engineers/biologists (22 votes) and increasing community diversity more generally (19 votes). Miscellaneous ideas included developing low-cost imaging solutions, diversifying funding sources, increasing connection with the public and pursuing other applications of marine imaging (moving beyond biodiversity applications).

Conclusions

The Marine Imaging Workshop 2022 was a successful and fruitful edition, gathering experts in the marine imaging field while including a broad audience (e.g. students and early career researchers). The discussions and hands-on sessions were highly appreciated and benefited both online and on-site participants. The topics of the workshop covered methods and technologies for all types of underwater optical imaging in pelagic and benthic environments, providing knowledge and monitoring solutions for marine ecosystems. The event boosted exchanges between international scientists in the field of marine imaging, and the collaborations initiated or strengthened will undoubtedly give rise to more ambitious projects in the near future.

We look forward to reconvening the research and engineering community at the next imaging workshop.

Acknowledgements

We thank IFREMER for hosting the workshop and the local and scientific organising committees for their efforts in preparing the event. We thank the event sponsors: FORSSEA ROBOTICS, Schmidt Ocean Institute, ISblue, SubC Imaging, International Seabed Authority, Département du Finistère, Brest Metropole and ZEISS. We thank all participants who contributed to the event and to this article through their active participation during the workshop discussion sessions. We thank Océanopolis and Guy Bescond's team for their high-quality logistical and technical services.

Hosting institution

IFREMER

Author contributions

All authors contributed to workshop discussions and writing this report.

Conflicts of interest

The authors have declared that no competing interests exist.

References

Supplementary material

Suppl. material 1: Discussion question response data 
Authors:  Marine Imaging Workshop participants
Data type:  Ideas and polls results
Brief description: 

Exports of Slido files after polling and Q&A sessions.
