Research Ideas and Outcomes :
Conference Abstract
Corresponding author: Vincent Emonet (vincent.emonet@maastrichtuniversity.nl)
Received: 16 Sep 2022 | Published: 12 Oct 2022
© 2022 Vincent Emonet, Remzi Çelebi, Jinzhou Yang, Michel Dumontier
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Emonet V, Çelebi R, Yang J, Dumontier M (2022) Towards an extensible FAIRness assessment of FAIR Digital Objects. Research Ideas and Outcomes 8: e94988. https://doi.org/10.3897/rio.8.e94988
The objective of the FAIR Digital Objects Framework (FDOF) is for objects published in a digital environment to comply with a set of requirements, such as identifiability and the use of a rich metadata record.
Without a dedicated framework, communities will develop isolated assessment systems from the ground up.
Previous work from the FAIR Metrics working group defined a framework for deploying individual FAIR metrics tests as separate service endpoints.
To address this problem, we published the fair-test library for Python, along with its documentation, to help developers build and deploy individual FAIRness assessments. With this library, developers define their metric tests as custom Python objects, which guide them to provide all required metadata for their test as attributes, and to implement the test evaluation logic as a function. The library also provides helper functions for common tasks, such as retrieving metadata from a URL or testing a metric test.
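The pattern described above can be sketched in plain Python. This is an illustrative mock, not the actual fair-test API: the class, attribute, and method names below are hypothetical, chosen only to show how metadata attributes and an evaluation function fit together.

```python
# Illustrative sketch of the metric-test pattern: metadata as class
# attributes, evaluation logic as a method. NOT the real fair-test API.
from dataclasses import dataclass, field


@dataclass
class MetricTestResult:
    score: int  # 1 = pass, 0 = fail
    comments: list = field(default_factory=list)


class MetricTest:
    """Base class: subclasses declare required metadata as class
    attributes and implement the test logic in evaluate()."""
    metric_id: str = ""
    title: str = ""
    description: str = ""

    def evaluate(self, subject_uri: str) -> MetricTestResult:
        raise NotImplementedError


class PersistentIdentifierTest(MetricTest):
    metric_id = "f1-identifier-persistence"
    title = "Check if the identifier is persistent"
    description = "Passes when the subject URI uses a known persistent-identifier scheme."

    PERSISTENT_PREFIXES = ("https://doi.org/", "https://w3id.org/", "https://purl.org/")

    def evaluate(self, subject_uri: str) -> MetricTestResult:
        result = MetricTestResult(score=0)
        if subject_uri.startswith(self.PERSISTENT_PREFIXES):
            result.score = 1
            result.comments.append(f"{subject_uri} uses a persistent-identifier scheme")
        else:
            result.comments.append(f"{subject_uri} does not use a known persistent scheme")
        return result


test = PersistentIdentifierTest()
print(test.evaluate("https://doi.org/10.3897/rio.8.e94988").score)  # 1
print(test.evaluate("http://example.org/my-dataset").score)         # 0
```

The key design idea is that the base class carries the metadata contract, so a developer writing a new test only fills in attributes and one function.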
These tests can then be deployed as a web API and registered in a central FAIR evaluation service supporting the FAIR Metrics working group framework, such as FAIR Enough or the FAIR Evaluator. Finally, users of the evaluation services will be able to group the registered metric tests into collections used to assess the quality of publicly available digital objects.
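Conceptually, grouping registered tests into a collection and running an assessment could look like the following minimal sketch. The registry and collection names here are hypothetical and do not reflect the API of FAIR Enough, the FAIR Evaluator, or fair-test itself.

```python
# Minimal conceptual sketch: a registry of metric tests grouped into
# named collections. Hypothetical names; not any real service's API.

# Each registered test is a callable returning 1 (pass) or 0 (fail).
REGISTRY = {
    "f1-identifier-persistence": lambda uri: int(uri.startswith("https://doi.org/")),
    "a1-https-access": lambda uri: int(uri.startswith("https://")),
}

# Users group registered tests into collections for their community.
COLLECTIONS = {
    "fdo-minimal": ["f1-identifier-persistence", "a1-https-access"],
}

def assess(subject_uri: str, collection: str) -> dict:
    """Run every test in a collection against a digital object's URI."""
    return {test_id: REGISTRY[test_id](subject_uri)
            for test_id in COLLECTIONS[collection]}

print(assess("https://doi.org/10.3897/rio.8.e94988", "fdo-minimal"))
# {'f1-identifier-persistence': 1, 'a1-https-access': 1}
```

Keeping tests and collections decoupled means a community can reuse tests registered by others while defining its own assessment profile.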
There are currently 47 tests defined to assess compliance with various FAIR metrics, of which 25 have been implemented using the fair-test library, including tests that assess whether the identifier used is persistent, or whether the metadata record attached to a digital object complies with a specific schema.
This presentation introduces a user-friendly and extensible tool that can assess whether specific requirements are met for a digital resource. Our contributions are:
We aim to engage with the FDO community to explore potential use cases for an extensible tool to evaluate FDOs, and to discuss their expectations for the evaluation of digital objects.
Insights and guidelines from the FDO community would help further improve the fair-test ecosystem. Improvements currently under consideration include making metadata extraction more collaborative and adding new metadata to be returned by the tests.
Keywords: FAIR evaluations, library, validation
Presenter: Vincent Emonet
Conference: First International Conference on FAIR Digital Objects, presentation