Workshop Report
Report on the Marine Imaging Workshop 2017
Timm Schoening‡, Jennifer M Durden§,|,¶, Inken Preuss‡, Alexandra Branzan Albu#, Autun Purser¤, Bart De Smet«, Carlos Dominguez-Carrió», Chris Yesson˄, Daniëlle de Jonge˅, Dhugal Lindsay¦, Jan Schulzˀ, Klas Ove Möllerˁ, Kolja Beisiegel₵, Linda Kuhnzℓ, Maia Hoeberechts₰, Nils Piechaud₱, Stephanie Sharuga₳, Tali Treibitz₴
‡ GEOMAR Helmholtz Center for Ocean Research Kiel, Kiel, Germany
§ Norwegian University of Science and Technology, Trondheim, Norway
| National Oceanography Centre, Southampton, United Kingdom
¶ Ocean and Earth Science, University of Southampton, United Kingdom
# University of Victoria, Victoria, Canada
¤ Alfred Wegener Institute, Bremerhaven, Germany
« Marine Biology Research Group, Ghent University, Ghent, Belgium
» Institut de Ciències del Mar (CSIC), Barcelona, Spain
˄ Institute of Zoology, Zoological Society of London, London, United Kingdom
˅ NIOZ Royal Netherlands Institute for Sea Research, Texel, Netherlands
¦ Japan Agency for Marine-Earth Science and Technology, Yokosuka City, Japan
ˀ Institute for Chemistry and Biology of the Marine Environment, University of Oldenburg, Oldenburg, Germany
ˁ Helmholtz Centre Geesthacht, Institute of Coastal Research / Operational Systems, Geesthacht, Germany
₵ Leibniz Institute for Baltic Sea Research Warnemuende, Rostock, Germany
ℓ Monterey Bay Aquarium Research Institute, Moss Landing, United States of America
₰ Ocean Networks Canada, Victoria, Canada
₱ Plymouth University, Plymouth, United Kingdom
₳ The National Academies of Sciences, Engineering, and Medicine, Washington DC, United States of America
₴ University of Haifa, Haifa, Israel

Abstract

Marine optical imaging has become a major assessment tool in science, policy and public understanding of our seas and oceans. Methodology in this field is developing rapidly, including hardware, software and the ways in which they are applied. The aim of the Marine Imaging Workshop (MIW) is to bring together academics, research scientists and engineers, as well as industrial partners, to discuss these developments, along with applications, challenges and future directions. The first MIW was held in Southampton, UK in April 2014.

The second MIW, held in Kiel, Germany, in 2017 involved more than 100 attendees, who shared the latest developments in marine imaging through a combination of traditional oral and poster presentations, interactive sessions and focused discussion sessions. This article summarises the topics addressed during the workshop, particularly the outcomes of its discussion sessions, for future reference and to make the workshop results openly available.

Keywords

Marine imaging, underwater image acquisition, underwater image analysis, computer vision, optical imaging, ocean observation

Date and place

The Marine Imaging Workshop 2017 was held at the GEOMAR Helmholtz Center for Ocean Research Kiel, Germany, on 20-24 February 2017.

Introduction

Imaging has a long history in marine science and is a rapidly growing field in terms of technology and methodology. The Marine Imaging Workshop (MIW) is the first international meeting to bring together members of this interdisciplinary community, from foundational researchers and early adopters to computer scientists and industrial users. The first MIW was held in 2014 at the National Oceanography Centre in Southampton, UK. The aims were to bring together users of marine imaging tools from the fields of research and industry to discuss present and future practical, methodological and technological challenges. A multi-disciplinary review article of the state-of-the-art methods in marine imaging was published by a group of committed participants following their presentations at the workshop (Durden et al. 2016).

Since 2014, technology for marine imaging has continued to evolve rapidly. Recent developments were showcased at the second MIW, which was held in February 2017 at GEOMAR. Technological developments include improvements to camera platforms, lighting, and high-resolution still and video cameras. Software for image and metadata management, data exchange, annotation of images and automated image processing has also improved. Together, these developments have made it possible to address many research questions and have pushed the frontiers of marine imaging towards new challenges. Emerging strategies to address these challenges were discussed at the workshop. These aim to increase the robustness of marine imaging as a tool for marine surveying from the littoral zone to the deep sea, with applications in both science and industry.

Aims of the workshop

The workshop aimed to link participants across diverse disciplines, to present cutting-edge research and to discuss community-wide progress on all facets of marine imaging. This was accomplished through a combination of traditional oral and poster presentations, interactive sessions and focused discussion sessions. Invited keynote talks were given by Jules Jaffe (Scripps Institution of Oceanography - “From the Titanic to the tiny: 30 years of inventing underwater imaging systems”), Yoav Schechner (Technion - “Opportunities in distributed imaging through scatter”) and Sönke Johnsen (Duke University - “Seeing the underwater world through the eyes of animals”). Technical presentations and posters were primarily focused on methodology, procedures and technology for marine imaging, with a context of application to scientific aims and ecosystem management. Contributions addressed various themes, such as: Strategy, Sensors, Annotation, Machine Learning, 3-D Imaging, Data Management and Applications. Many of the presentations are available through the workshop website (www.marine-imaging-workshop.com).

An entire day of the workshop focused on interactive sessions that allowed participants to be involved with hands-on trials of a variety of imagery manipulation, annotation and data management software. These sessions included:

  • Online annotation with Squidle+ (squidle.acfr.usyd.edu.au)
  • Video mining with Ocean Networks Canada Data Preview (dmas.uvic.ca)
  • 3-D image calibration and scene reconstruction (Agisoft PhotoScan)
  • Annotation and data management with the Video Annotation and Reference System (www.mbari.org/products/research-software/video-annotation-and-reference-system-vars/)
  • Shape morphometry with SHERPA

The workshop’s focussed discussion sessions aimed to capture insights on current state-of-the-art marine imaging work, as well as promote interactive discussions of the future directions of six specific, yet arbitrarily chosen, areas of marine imaging. These topics were:

  • Planning to use marine imagery for scientific aims
  • Multi-disciplinary expertise for marine imaging
  • Semantic image annotation
  • Data management in this data-rich field
  • Access to and development of marine imagery tools
  • The future of marine imaging

This article presents the results of these discussions including open questions, best-practice suggestions and fields for future development.

Key outcomes and discussions

Methods of capturing discussion outputs

One discussion session was held each day, with two themes addressed during each session and two groups of participants addressing each theme. Attendees were assigned to one of four discussion groups, with each group composed of participants from different technical backgrounds, institutions and geographic locations. Introductory questions were provided by the workshop organizers for each theme, as initial starting points to promote discussion; these questions are presented at the beginning of the following sections for the reader’s reference. A chairperson was assigned to facilitate discussion and encourage contributions from all members of each discussion group. At the end of each session, the chair provided a short summary of the contributions to all workshop attendees. The outcomes of these discussions are described below.

Planning to use marine imagery for scientific aims

The following questions were posed to incite discussion on planning:

  • How do you plan your marine imaging projects (any/all aspects)? What are the priorities? How much planning is done in advance?
  • How do you estimate the time and monetary costs of this type of work? Do some phases of marine imagery have more uncertainty in planning than others?
  • What strategies do you employ to use marine imaging effectively? Are they project-specific or transplantable to other projects?
  • What challenges remain in planning to use marine imagery?

Participants agreed that planning occurs at many different stages of a project, and that different types of planning (with appropriate detail) occur at different stages. Many attendees reported having experience confined to particular stages of the marine imagery analysis chain, but highlighted the value of understanding the complete workflow (from survey design all the way through to data analysis). Practical details and the overarching scientific aims both require sufficient planning. Planning of imagery acquisition prompted much discussion, particularly given the logistical complexity of capturing marine imagery in certain environments and conditions (e.g., using ocean-going vessels, in difficult environments such as under ice or in areas of high relief).

The value of experience in good planning was raised. To that end, attendees discussed the importance of thorough documentation of all steps of the imaging process in order to provide a robust reference (rather than anecdotal impressions) on which aspects of an imaging project were successful, and/or which needed improvement. Such documentation would also facilitate the sharing of experience between researchers and research groups. Experienced attendees mentioned that documenting even minuscule changes between subsequent iterations of imaging projects is very valuable for reducing uncertainty and improving methodologies, provided that lessons learnt on previous campaigns are implemented in subsequent iterations. It was noted that mentioning such iterative improvement can be useful in funding applications.

Planning of particular phases of the imaging workflow was discussed in detail. The high costs of annotation were identified, with the strategy and method planning for this stage often left to students in research settings. As the technology for image acquisition matures and ever more imagery is captured, the volume (and cost) of annotation and analysis of this imagery increases – thus the biggest compromises in planning a project can be in this phase. Attendees noted that some institutions have standardised methods for data management, metadata acquisition, annotation and image processing steps, and suggested that budgeting for data management should provide for its maintenance and open accessibility far into the future. Since the data collected (or the method) may be used by other persons and institutions for other research questions in the future, records of methods, assumptions and actions associated with the data must be kept so that future users can understand its genesis. Key concepts identified for successful planning and execution were (1) good communication amongst members of the project team (including between engineers and scientists), and (2) good record keeping.

Challenges in planning imagery projects were also identified, along with some solutions. Challenges in collaboration on imaging projects, accounting for different timelines of user availability and varied expertise, as well as interdisciplinary differences in scope and communication, were highlighted. Planning and executing image capture can be difficult because of uncertainty in natural phenomena (e.g. storms or environmental conditions affecting lighting, flow conditions or behaviour of animals), and contingency planning must account for this (e.g. adapting survey design on-site). The use of different ships may require an adaptation of equipment or procedures, so the use of consumer-based equipment that is easily transferrable may be a solution if this is anticipated to occur often. Planning the consistent use of equipment and methods to detect changes over time can be challenging, as technology is evolving rapidly. Post-acquisition annotation presents significant challenges in the time and effort involved in analysis; species catalogues are an example of a high-effort task here, even before annotation begins. Attendees requested automated detection/classification methods that would reduce annotation time. The challenge is how manual classifications can most efficiently be used in a standardised manner to feed into automated annotation methods.

Multi-disciplinary expertise for marine imaging

Attendees were asked to consider the following questions in their discussion on multi-disciplinary expertise:

  • What do you want to discuss with experts in other disciplines?
  • How do you access expertise from other disciplines?
  • What types of expertise are you most in need of, or for what tasks do you require support? What expertise is the community most in need of?
  • What barriers are there to sharing or gaining expertise?
  • How would you like to access the expertise you need, and what partnerships would you like to see?

Attendees identified several areas where expertise other than their own, as well as multidisciplinary expertise, would be useful. Scientists expressed a desire for more information on engineering and technical requirements or limitations for achieving scientific goals (e.g. How small can a camera be? What are the light limitations of particular optical systems?), and the costs related to their development. Engineers and industry professionals, on the other hand, are interested in obtaining more information on what would be most useful to develop as technologies and tools for the science community to utilize. There was consensus that both groups are interested in working together to further both the science and technology-development components of marine imaging research. Better communication and more effective sharing of information between fields is needed to foster collaborative development of more effective technologies, tools and processes. More discussion is warranted on how best to accomplish this, especially when faced with challenges related to the current lack of standardisation and issues related to proprietary information and technology.

Collaborative work across disciplinary and international boundaries could help facilitate the development of standardisation that will benefit all disciplines in the future. In general, attendees noted the lack of standards for imaging data, data exchange, and interfaces for data sharing. Concerns were raised over the challenges resulting from the wide variety of imagery types, vast differences in processing and analysis approaches, and lack of formal community-wide protocols and standards for all stages of imagery-based research. There was a suggestion that it could be valuable to develop an international working group within the marine imaging community that would focus on facilitating standardisation across the field.

Overall, science is becoming more interdisciplinary and must go beyond merely promoting collaboration between scientists and engineers/industry. Biologists commented that a solid foundation in computer science is now needed or may be needed in the future. As a result, study in this field should be encouraged for all those engaging in imagery-related research. Such a background would be helpful in understanding some multidisciplinary topics (e.g. automated classification). Industry is often heavily involved in computer-science related work and has the potential to be a valuable resource for scientists in this regard. It is also valuable for computer scientists, engineers, and those involved in technology development (including industry) to have some knowledge of the science behind imagery-related research in order to better evaluate the technological needs for completing that research. A final point on this topic was the perceived lack of industry perspective in the community.

Many attendees expressed that they are not connected with the broader imaging community and are not aware of other ongoing projects and research. This poses a challenge for multidisciplinary research in marine imaging and for obtaining necessary expertise from other fields. Attendees wanted to seek out experts from different fields; conference attendance, word of mouth, interdisciplinary organisations, and engaging technology vendors were their main methods of accessing other expertise. The MIW workshop, in particular, was identified as being very useful for sharing of multidisciplinary expertise. It was suggested that this community could expand to include or learn from experts in additional fields, such as aquatic imaging and those working in freshwater environments. In some areas an interdisciplinary information exchange is developing, such as the “Aquatic Optical Technologies” working group in Germany. A suggestion was made for a scientific match-making process or tool, possibly in the form of an application for matching expertise online, to assist researchers in finding expertise and provide opportunities for multidisciplinary collaboration.

Another issue discussed was the lack of journals specifically for multidisciplinary marine and aquatic imaging research, which was identified as a challenge for accessing expertise and keeping abreast of current research. Further, attendees have often found it difficult to publish multidisciplinary research crossing traditional boundaries, particularly where the interdisciplinarity is the novel aspect of the work. It was suggested that the workshop could publish proceedings in the future, with submissions undergoing internal review; however, the popularity and perceived practicality of having such workshop proceedings varied with discipline. Other suggested methods for improving communication and sharing of information include a formal website or email mailing list that spans the multidisciplinary marine and aquatic imaging community as a whole.

Annotation – the good, the bad and the ugly

Questions posed for attendees to spur discussion on image annotation were:

  • What challenges are you experiencing with manual annotation, crowd sourced annotation or automated detection and classification?
  • How can we incorporate good quality control into our annotation? (e.g. qualitative, quantitative, human factors)
  • What are the risks and benefits to annotation/data management standardization?

Attendees identified the need for consistent standards for annotation to ensure repeatability, since annotation is practised differently even within single institutes. The need to document annotation processes was pointed out; however, attendees were unsure about how to establish such a protocol for training and for quality assurance/quality control (QA/QC) of annotations. Some participants questioned whether standard protocols could be prepared at all if, for example, no taxonomic experts exist for the taxonomy of a particular region or if new data are examined.

It was undecided whether research lags behind industry in compiling annotation protocols. Attendees advised the use of QA/QC protocols that characterize the performance of annotators and consider examples available from industry. It was discussed whether a framework determining the type of QA/QC protocol would be necessary. Such a framework for protocols could help increase accuracy and consistency. The community could learn from other imaging spheres, such as medical domains, which document best practices. It was seen as crucial to assess and report the classification performance. An easy check for classification quality could be to show image patches of commonly labeled objects next to each other. In such displays, any discrepancies are then rendered readily apparent.
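
One way such a side-by-side check could be implemented is sketched below: image patches sharing a label are tiled into a single montage for visual review, so discrepancies stand out. This is a minimal sketch assuming patches have already been cropped and stored in one directory per label; the directory layout, patch size and function name are illustrative, not tied to any particular annotation tool.

```python
from pathlib import Path
from PIL import Image

def label_montage(patch_dir, label, thumb=128, cols=8):
    """Tile all patches annotated with `label` into one image for visual QC.

    Assumes patches are stored as <patch_dir>/<label>/*.png (illustrative layout).
    """
    paths = sorted(Path(patch_dir, label).glob("*.png"))
    if not paths:
        raise FileNotFoundError(f"no patches found for label '{label}'")
    rows = -(-len(paths) // cols)  # ceiling division
    sheet = Image.new("RGB", (cols * thumb, rows * thumb), "white")
    for i, p in enumerate(paths):
        patch = Image.open(p).convert("RGB").resize((thumb, thumb))
        sheet.paste(patch, ((i % cols) * thumb, (i // cols) * thumb))
    return sheet

# Example: review every patch labelled 'Ophiuroidea' in one glance.
# label_montage("annotation_patches", "Ophiuroidea").save("qc_Ophiuroidea.png")
```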

Limits on quality control were discussed. In the field of marine biology, where natural variation can be extreme, organism abundances are often low and taxonomic discrimination in imagery is commonly disputable, it was seen as important to adhere strictly to a specific annotation protocol. A potential solution is to assign confidence levels to annotations. Such confidence levels could later be used to split annotations by credibility (e.g. use high-confidence annotations for publication-grade analysis and use low-confidence annotations to highlight images in need of reassessment by field experts).
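
A minimal sketch of how such confidence levels might be used downstream is given below; the table layout, column names and three-level scale are assumptions for illustration rather than a community standard.

```python
import pandas as pd

# Illustrative annotation table; in practice this would be exported from an
# annotation tool. Column names and the 1-3 confidence scale are assumptions.
annotations = pd.DataFrame({
    "image":      ["img_001.jpg", "img_001.jpg", "img_002.jpg", "img_003.jpg"],
    "label":      ["Actiniaria", "Porifera", "Ophiuroidea", "Holothuroidea"],
    "confidence": [3, 1, 2, 1],   # 3 = certain ... 1 = uncertain
})

# High-confidence annotations feed publication-grade analyses ...
publication_set = annotations[annotations.confidence >= 3]

# ... while low-confidence ones flag images for expert reassessment.
for_review = annotations.loc[annotations.confidence == 1, "image"].unique()
print(publication_set)
print("Images needing expert review:", list(for_review))
```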

The attendees agreed that consistent annotation systems are necessary for any type of annotation. Here, the incorporation of good quality control related to taxonomy in classification can be a starting point. Attendees identified the need for an explicit understanding of the cues for identification used by different annotators. Some cues can be codified, as has been shown in reference libraries. While some cues can be rather obvious from imagery (e.g. morphology, number of legs etc.), others are based on further, ex-situ analyses of samples (e.g. texture) which may not be available to all annotators. Taxonomic keys should facilitate high-quality classification by including this information, as well as potential variations such as changes in taxon appearance across different depth ranges. Attendees suggested that open access tutorials and training sets could be helpful in improving taxonomic understanding; it was suggested that it would be valuable to have a common platform for sharing and distributing this information (e.g. a webpage providing links to different sources).

Attendees expressed concerns about how to combine datasets labelled under different standards. In terms of taxonomy, a hierarchical classification system was deemed practical, enabling annotations from different sources to be merged at higher levels in the hierarchy. It was stressed that classification schemes might be focused on a specific task or question rather than being generally applicable.
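
The sketch below illustrates how annotations made at different taxonomic resolutions could be merged at the most specific rank they share; the simplified lineage table is a toy example and would in practice come from an authoritative register.

```python
# Toy lineage table mapping fine-grained labels to higher ranks (illustrative only).
LINEAGE = {
    "Bathynomus giganteus": ["Animalia", "Arthropoda", "Isopoda"],
    "Isopoda indet.":       ["Animalia", "Arthropoda", "Isopoda"],
    "Munidopsis sp.":       ["Animalia", "Arthropoda", "Decapoda"],
}

def common_rank(label_a, label_b):
    """Return the most specific rank shared by two labels, or None."""
    shared = None
    for a, b in zip(LINEAGE[label_a], LINEAGE[label_b]):
        if a != b:
            break
        shared = a
    return shared

# Two datasets annotated to different resolutions can still be combined at 'Isopoda'.
print(common_rank("Bathynomus giganteus", "Isopoda indet."))  # -> Isopoda
print(common_rank("Bathynomus giganteus", "Munidopsis sp."))  # -> Arthropoda
```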

Attendees explored other aspects of QA/QC of annotation data, and how annotation could be designed to facilitate such QA/QC – a topic of some recent publications. Cognitive bias was recognised as an important challenge in annotation. Strategies to avoid bias were discussed; different views on this bias may be formed when annotating oneself and when training others to do so. Attendees agreed that imagery should generally not be analysed/annotated in serial succession, but randomised to reduce bias in the resulting data. Randomisation might introduce an additional cost for untrained annotators, who might need to consult taxonomic references repeatedly. Randomisation is easy with still images, but videos should be cut into clips to facilitate it. In addition, a certain percentage of each annotator’s imagery should be given to someone else for cross-evaluation. Attendees suggested that a few ‘trick’ photos from other surveys could be added, such as photos from other regions, to keep annotators observant. The concern that overlapping human annotation is not cost-effective was raised; however, it was recognised that quality control always comes at some cost. Furthermore, updating annotations still requires (often repeated) input from others, which is another time-consuming aspect of quality control.
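
As a concrete illustration of the randomisation and cross-evaluation ideas above, the sketch below shuffles the presentation order of an image set and reserves a fixed fraction for a second annotator; the 10% overlap and file names are arbitrary choices for the example, not recommendations from the workshop.

```python
import random

def build_annotation_plan(images, overlap_fraction=0.10, seed=42):
    """Shuffle presentation order and pick a subset for cross-evaluation.

    `overlap_fraction` (here 10%) is an illustrative value; the workshop did
    not agree on a specific figure.
    """
    rng = random.Random(seed)          # fixed seed so the plan is reproducible
    order = list(images)
    rng.shuffle(order)                 # randomised order reduces sequence bias
    n_overlap = max(1, round(len(order) * overlap_fraction))
    cross_check = rng.sample(order, n_overlap)  # also shown to a second annotator
    return order, cross_check

images = [f"transect_A_{i:04d}.jpg" for i in range(1, 201)]
order, cross_check = build_annotation_plan(images)
print(order[:5])
print(f"{len(cross_check)} images reserved for cross-evaluation")
```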

It was noted that few groups publish quality information on annotations along with their results (e.g. confidence, quantification of annotation success), which is another point raised in recent publications. Also, it was pointed out that annotations are often student- or project-based and rarely looked at afterwards, not even for quality assurance. The community is aware that bias exists in annotation, and has implemented responses to some extent. However, many of these biases and responses have not been quantified in terms of impact on the data. For instance, human factors such as learning experience or loss of concentration over time are largely unquantified. It is unclear how objectivity can be maintained throughout an annotation process, and which ‘sensible’ responses to identified biases should be prioritised. In order to produce appropriate guards against bias, the factors causing the bias must be explored, and their impact on the results quantified.

It has been posited that annotation performance could be improved by familiarizing annotators with the specimens via videos before handling the stills. Looking at specimens from physical sampling, on video footage or on multiple images could thus improve annotation quality by creating an annotator’s mental model of the specimen. However, it remains unclear how to sustain quality control in the annotation of objects that have no verification or have never been identified before.

The community uses different annotation systems, resulting in inconsistencies in annotation data. Some attendees have not used annotation software at all. Currently, annotation is predominantly pursued with custom systems. In some cases, special training is needed to install or operate these systems. Attendees identified open access tutorials as very helpful in learning how to use annotation software. One annotation system can be applied in several different ways to different imagery or for different project aims (e.g. video/stills, for different organisms of interest). In computer science it is common practice to write custom-made annotation tools. To increase the efficiency of annotation software, tools should be easy to use for both beginner annotators and experts. It was noted that some annotation tools embed annotations in media. This method can be superior to time code-based annotation because it assists in the quality control of annotations, but it can complicate future changes to the annotation data.

There is a need for standardised naming schemes (or ontologies) to enable coordination or transfer of annotation data between different annotation tools. One suggested scheme is the AphiaID from the World Register of Marine Species, but it was noted that this scheme lacks non-biological categories. Another is the CATAMI scheme developed in Australia, which does include abiotic categories.
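
As an illustration of tying free-text labels to such a scheme, the sketch below queries the public WoRMS REST service (its documented AphiaIDByName endpoint) for AphiaIDs; error handling and caching are simplified, network access is assumed, and abiotic labels would still need a complementary scheme such as CATAMI.

```python
from urllib.parse import quote
import requests

WORMS = "https://www.marinespecies.org/rest"

def aphia_id(scientific_name):
    """Look up the AphiaID for a scientific name via the WoRMS REST service.

    Uses the AphiaIDByName endpoint of the public WoRMS API; returns None if
    no unique match is found (network access and service availability assumed).
    """
    url = f"{WORMS}/AphiaIDByName/{quote(scientific_name)}"
    resp = requests.get(url, params={"marine_only": "true"}, timeout=10)
    if resp.status_code != 200:
        return None
    aphia = resp.json()
    return aphia if isinstance(aphia, int) and aphia > 0 else None

# Harmonise free-text labels from an annotation export against the register.
for label in ["Paragorgia arborea", "Lophelia pertusa"]:
    print(label, "->", aphia_id(label))
```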

Many users of marine imagery for science desire tools for ‘automated annotation’, including detection and/or classification of specimens. Automated detection is advancing rapidly, but automated classification remains difficult. The attendees saw an urgent need for more automated analysis, even though expectations of this technology should be kept at a realistic level. Manual annotation, in turn, provides the training data that such machine learning approaches require.

Attendees also discussed the information needed to replicate or compare annotation data, and also to merge datasets or perform meta-analyses. Image capture, study design and annotation workflow information – collectively termed ‘operating procedures’ – should be recorded to enable this data interpretation and reuse. It was suggested that, as a best practice, the community’s published papers should include metadata about the annotations to enhance the understanding of the annotations and their meaning. Despite this desire, a lack of knowledge on what to include in such metadata exists, and it was suggested that standards should be defined across institutions. Suggestions were made to include the following information in publications and publicly-available datasets (a minimal machine-readable sketch of such a record follows the list):

  • Type of imagery (e.g. still, video, holographic, stereo imagery)
  • Camera system and other acquisition technology
  • Distance and angle from subject and field of view
  • Image resolution
  • Frame rate
  • Zoom level of images for annotation
  • Strategy for annotating large images (e.g. left-to-right, what overlap in view, etc.)
  • Taxonomic trees and resolution
  • Level of confidence in annotations
  • Strategy applied to deal with partial animals
  • Standard operating protocol or annotation guide used
  • Metadata for automated classification system
  • Minimum image quality standard (image exclusion criteria)

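A minimal sketch of how the items listed above could be captured as a machine-readable metadata record alongside a published dataset is given below; the field names and example values are illustrative assumptions, not a proposed standard.

```python
import json

# Illustrative annotation-metadata record covering the items listed above;
# field names and example values are assumptions, not an agreed standard.
annotation_metadata = {
    "imagery_type": "still",                      # still / video / holographic / stereo
    "camera_system": "towed camera frame, example configuration",
    "distance_to_subject_m": 1.5,
    "field_of_view_deg": 60,
    "image_resolution_px": [4112, 3006],
    "frame_rate_hz": None,                        # not applicable to stills
    "annotation_zoom_level": "100 %",
    "large_image_strategy": "left-to-right sweep, no overlap between views",
    "taxonomic_scheme": "WoRMS, resolved to family where possible",
    "confidence_scale": "3-level (certain / probable / uncertain)",
    "partial_animal_rule": "counted if >50 % of body visible",
    "annotation_protocol": "internal SOP v1.2 (hypothetical reference)",
    "classifier_metadata": None,                  # filled in when automated methods are used
    "min_image_quality": "images with >25 % backscatter haze excluded",
}

with open("annotation_metadata.json", "w") as fh:
    json.dump(annotation_metadata, fh, indent=2)
```
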
A core topic of discussion was the purpose of collecting the annotation data. As annotation is generally seen as a costly task (in terms of time required by taxonomic experts), it is often targeted at one specific research question. ‘General purpose annotation’ was seen as a potential improvement to enrich the derived data and make it appealing for other users. Despite this opportunity, investment in additional annotation not directly related to the research question is rarely made, largely because of budgetary and time constraints. In addition, ‘general purpose annotation’ has not been formally and consistently defined.

Data management in a data-rich field

The following questions were addressed during this discussion session, which focused on the technical aspects and protocol implementation in data management:

  • What are the challenges for successful image data management?
  • How do we tackle the risks of sharing data (images, annotations, metadata)?

One technical aspect is the long-standing controversy over file formats. With many digital standards now available for media files, it is still unclear what the long-term options are to enable future access to data. Existing efforts to digitize film images have shown that some reading devices and decoding mechanisms are extremely difficult or impossible to source, and the data has therefore effectively been lost. Digitizing old data was seen by attendees as important for providing historical context, and for enabling long-term observations in a changing ocean. It is also important for preserving some types of media; for example, magnetic tapes tend to be damaged if not kept in stable, cool conditions. Dedicated funds for digitizing films are generally not available, so investments for this purpose are often taken from other funding sources. Attendees suggested that future projects should consider incorporating funds to digitize old data assets and that funding agencies should encourage digitizing of, and provision of open access to, such assets. With the first digital data archives ageing, the long-term challenge of keeping data accessible is becoming visible, as links to other archives are often no longer valid. Sustained maintenance and infrastructure of imagery data archives is needed. Attendees debated whether specific institutions should be focussing on such tasks for the benefit of the entire community (such as the PANGAEA data centre).

The community is currently facing bottlenecks in making their imagery available online. Existing databases for marine science are not able to deal with the enormous size of imagery data sets compared to oceanographic and bathymetric data. With increasing camera resolution and higher frame rates, terabytes of footage are acquired per survey, resulting in petabytes of data each year. Such volumes are expensive to store online. Attendees discussed whether parts of the data could be discarded, but most agreed that all data need to be permanently accessible to enable future research based on these data. This includes data that may currently not be usable (or used), but which might be of use in the future. Despite the costs of long-term online storage, it was agreed that more data should be made available to broaden our view of the oceans and to enable re-use of data whenever possible.

Alongside the imagery data, metadata plays an important role in enriching the usefulness of available datasets. Metadata archives are easier to construct due to lower data volumes, although the link between metadata and imagery is often hard to maintain. Attendees acknowledged that synchronising time stamps obtained from different acquisition devices, which are often used to link imagery and metadata, remains an open challenge. Fusion of metadata with the imagery by embedding the data in the media files was discussed as an option, but was not yet seen as a best-practice solution.
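
A common practical approach to this synchronisation problem is a nearest-time join between the image log and the sensor log after correcting known clock offsets; the sketch below is a minimal example using pandas, with the 2-second offset and 5-second matching tolerance chosen purely for illustration.

```python
import pandas as pd

# Illustrative logs: image capture times and navigation/CTD records from a
# separate device whose clock runs 2 s ahead (offset value is an assumption).
images = pd.DataFrame({
    "image": ["img_0001.jpg", "img_0002.jpg"],
    "time": pd.to_datetime(["2017-02-21 10:00:05", "2017-02-21 10:00:15"]),
})
sensors = pd.DataFrame({
    "time": pd.to_datetime(["2017-02-21 10:00:06", "2017-02-21 10:00:16"]),
    "depth_m": [2104.2, 2104.5],
    "latitude": [54.3301, 54.3302],
})

# Correct the known clock offset, then join each image to the nearest sensor
# record within a 5-second tolerance (both values are illustrative).
sensors["time"] = sensors["time"] - pd.Timedelta(seconds=2)
merged = pd.merge_asof(images.sort_values("time"), sensors.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta(seconds=5))
print(merged)
```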

Annotations play an important role in the data management pipeline. As standards for raw data storage and annotation have not been developed or accepted across the community, it was pointed out that archiving well-documented annotation results is not feasible at present. Publishing raw data with DOIs and linking derived annotation data to those datasets is one way to achieve repeatable results. As publication requirements vary globally and by sector, this strategy might not be available to all marine imaging stakeholders. It was agreed though that future marine imaging should be conducted with the aim of making data openly accessible.

A key observation, which may indicate an imminent shift in consensus, was that most attendees work with data from their own institutions. Attendees were sceptical about the cost-benefit ratio between making data available and the actual usage of such open data by external partners.

Marine imaging tools – Access and development

Discussion in this session was prompted with these questions:

  • How do you access the tools you need to work with marine imagery? (e.g. how do you choose/get cameras/equipment/annotation software?)
  • What new tools do you need?
  • Are tools being developed in parallel, which have enough overlap to be combined (e.g. an HD-DVD vs Blu-ray moment)?

Hardware is incorporated into the community in three ways, with increasing flexibility and effort on the part of the user: (1) off-the-shelf hardware is relatively easy to use yet often hard to incorporate into existing setups, (2) interdisciplinary hardware development, often with collaborating partners, provides more specialised solutions for specific demands, and (3) to achieve the highest application flexibility, most of the discussion participants find themselves developing tailored hardware for their needs.

The current core camera hardware is generally good enough to meet the community’s demands for seafloor imaging. Current steps in sensor improvement involve enriching image data with environmental data. This includes attitude information for camera pose estimation, and camera calibration using external measurements such as laser markers or altimeters for real-world scaling of the raw pixel information. Commercial systems often lack these options, making individual developments necessary. For pelagic imaging it is desirable to include state-of-the-art camera systems, allowing for more detailed images at higher resolutions and ultra-short shutter times (< 15 µs).
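
As a worked example of such real-world scaling, the sketch below converts an in-image measurement to centimetres using parallel laser markers of known spacing; the 10 cm spacing, the dot coordinates and the assumption of a camera viewing the seafloor roughly perpendicularly are all illustrative.

```python
import math

def pixel_scale_from_lasers(p1, p2, laser_spacing_cm=10.0):
    """Return cm per pixel, given the image coordinates of two parallel laser dots.

    Assumes the lasers are mounted parallel and the seafloor is roughly
    perpendicular to the optical axis; 10 cm spacing is an illustrative value.
    """
    dist_px = math.dist(p1, p2)
    return laser_spacing_cm / dist_px

# Laser dots detected at these (x, y) pixel positions (hypothetical values).
scale = pixel_scale_from_lasers((1510, 980), (1835, 987))
animal_length_px = 412
print(f"scale: {scale:.4f} cm/px -> animal length ~ {animal_length_px * scale:.1f} cm")
```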

Attendees identified high-resolution navigation data as indispensable to make the high-resolution image data most useful. With limited angular resolution from sea-surface based systems, seafloor navigation beacons could be an alternative. These systems come at an economic and temporal cost as hardware is expensive and needs to be triangulated by the vessel before being useful.

Efforts to reduce imaging cost have led to many experimental imaging systems that often incorporate commercial action cameras. Attendees acknowledged that such setups can provide valuable scientific data, but when aiming for quantitative analysis, more robust hardware is needed that can be efficiently incorporated into data acquisition and analysis workflows. In such cases, commercial systems can often be more cost-efficient than re-developing the complete stack of tasks.

In terms of software, no clear trend towards community-wide standards was found amongst attendees. A multitude of image annotation software tools has been implemented across the community; most attendees use custom annotation tools that are not necessarily available to other partners. These tools are developed either in-house or in collaboration with external partners. Commercial solutions for image annotation are generally scarce. Some scientists still rely on basic software like Microsoft Excel for image annotation, without harnessing the features of more advanced image annotation tools.

While many tools are available for image annotation, it is still rarely possible to transfer data between them. Attendees acknowledged that the variety of tools has been beneficial in that solutions exist for many aspects of image annotation. However, the lack of data communication may become a barrier to data analysis and to sharing annotation data. Some of the most recent annotation tools are now being developed to incorporate external data sources using well-documented application programming interfaces, which will alleviate problems of communication between annotation tools.

Specific image analysis software designed for tasks other than annotation was less commonly used by attendees. Attendees used several different tools for tasks such as 3D reconstruction and mosaicking, and there was a demand for standardised and easy-to-use software with this focus. This was identified as a potential market for commercial solutions, as the marine imaging community cannot currently provide such tools itself.

Attendees agreed that hardware and software solutions evolve out of the needs of application methodologies. As standards for the application of imagery are lacking, standardised tools cannot be developed. Thus, the community should decide which methodologies to promote and advance, in order to select and develop suitable hardware and software. As with many current technical developments, cameras and algorithms are being developed faster than standardisation processes can follow, presenting a serious challenge for the imaging community.

Think Big!

This discussion session challenged attendees to think of the future of marine imaging, guided by these questions:

  • What is driving the development of marine imaging – technology, science or application? How is this affecting the development of the field?
  • What are the limitations to the development and application of marine imaging? How could these be overcome?
  • Think of the future – what technological developments can you imagine, what scientific questions could be answered using imagery, how could imaging be applied to policy or governance of the marine realm?
  • How can we connect with those outside marine imaging – regulatory bodies, industry, researchers in other image-heavy sectors, and the public?

Attendees noted a variety of drivers for the development of marine imaging. A reduction in sea time means that humans are becoming removed from data collection, so remote technologies are generally experiencing development booms (marine imaging being one example). The improved efficiency of imaging over trawling for evaluating epibenthic communities (as well as its minimal habitat damage) was also cited as a driver. It was also noted that images and maps are useful in applying for grants. Human interest stories were also cited as a driver, and imaging is particularly useful in monitoring and showing ecosystem change to the public; public interest was seen as a key driver for imaging development into the future.

The limitations for imaging development and application that were raised included the storage and maintenance of video data, the need for more technical staff, the lack of general taxonomic knowledge, the quality control or validation of annotations (including identifications), and problems with scaling up data collection (both in terms of image acquisition and annotation). The costs of deploying cameras and platforms are high, and need to be reduced. Researchers noted that they can envision technological solutions and developments, but these are not necessarily being implemented by industry. As a community, we need to evaluate why this may be the case, and must consider that these solutions may not be commercially attractive. More public awareness is needed; it was noted that a public culture exists around space research, but a similar culture has not yet developed to the same extent for ocean research. One suggestion to generate public excitement was to release an intriguing image and challenge the world to find its origin.

In the future, we will increase our presence in the marine environment at many scales using imaging. Attendees envisioned cameras and platforms that are smaller (like a mobile phone), remotely deployed, and deployed in groups. Cameras could follow individual fauna to observe behaviour. Funding would be available to analyse historical imaging data (with appropriate drivers in place to enable this funding), and data will be shared within the community, possibly using a community-wide storage and maintenance platform. We will have modular, integrated systems on-ship and on-shore, and problems related to internet connectivity and syncing will be overcome. Automated long-term moorings with benthic and pelagic imaging capabilities should become more widespread. Biologists will gain more computer science expertise, with topic shifts in many jobs. Annotation time will be reduced (possibly by harnessing more crowd-sourced annotation), and quality will be improved. A strategy for standardisation of hardware and software, while maintaining technological creativity and development, will be fostered.

Attendees expressed interest in connecting to other experts and groups using and developing imaging, and cited personal connections as being very valuable. One challenge is the potential for cross-disciplinary cultural clashes, particularly related to methods of publishing, disseminating and protecting information. Two groups were explicitly mentioned for additional engagement: the Research Data Alliance (addressing Big Data problems), and those researching aquatic/freshwater imaging. Communication with policy-makers was also noted as being important. Finally, attendees reflected on the current political atmosphere in many countries, where anti-expert sentiment has made scientists more reluctant to share data and results, and more wary of misattribution and misinterpretation.

Conclusions

As the marine imaging community has only recently begun to coordinate their efforts and developments, the MIWs are still evolving with regard to the workshop format and the demographic structure of the participants. The 2017 MIW brought together more technology-focused experts and fewer natural scientists and government partners in comparison to the first workshop. This change in demographic structure indicates a diversification of the community and broadening of its reach. There are undoubtedly further interested parties with which to connect.

Major achievements of this workshop included showcasing state-of-the-art technologies from around the world, identifying tasks for the community for the upcoming years, and establishing collaborations between individuals representing different sectors, institutes and international partners. A great indication of its success is the suggestion by attendees for the creation of a permanent community, with online resource exchange facilities, a formal communication channel and potentially a focussed marine imaging journal.

The next Marine Imaging Workshop is scheduled to take place at Ocean Networks Canada in Victoria, Canada in 2019. We are looking forward to reconvening with the community to discuss developments and future steps.

List of participants

See the author list for participants who wished to contribute to this workshop report. Fig. 1 shows a group picture of all workshop participants.

Figure 1.

Participants of the Marine Imaging Workshop 2017 in the Lithothek of GEOMAR (Photo: Jan Steffen, GEOMAR).

Acknowledgements

We thank GEOMAR for hosting the workshop, and the local and scientific organising committees for their work in preparing the event. We thank the sponsors of the workshop: Schmidt Ocean Institute, SubC Imaging, Develogic Subsea Systems, CathX Ocean, Sulis Subsea, Oktopus GmbH and KUM. We thank all participants of the workshop who contributed to this article by their active participation during the discussion sessions.

We congratulate the participants Sergey Galkin, Cherisse Du Preez and Noelie Benoist for winning the Photo Competition Awards and Carlos Dominguez-Carrió for being awarded the Best Poster Prize.

This is publication 027 of the DeepSea Monitoring Group at GEOMAR.

Hosting institution

GEOMAR Helmholtz Center for Ocean Research Kiel, Germany

References

Durden JM, Schoening T, Althaus F, Friedman A, Garcia R, et al. (2016) Perspectives in visual imaging for marine biology and ecology: From acquisition to understanding. Oceanography and Marine Biology: An Annual Review 54: 1-72.
