Research Ideas and Outcomes: Conference Abstract

Challenges for FAIR Digital Object Assessment
Corresponding author: Esteban Gonzalez (esteban.gonzalez@upm.es), Daniel Garijo (daniel.garijo@upm.es), Oscar Corcho (ocorcho@fi.upm.es)
Received: 03 Oct 2022 | Published: 12 Oct 2022
© 2022 Esteban Gonzalez, Daniel Garijo, Oscar Corcho
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Gonzalez E, Garijo D, Corcho O (2022) Challenges for FAIR Digital Object Assessment. Research Ideas and Outcomes 8: e95943. https://doi.org/10.3897/rio.8.e95943
A Digital Object (DO) "is a sequence of bits, incorporating a work or portion of a work or other information in which a party has rights or interests, or in which there is value". DOs should have persistent identifiers and metadata, and be readable by both humans and machines. A FAIR Digital Object (FDO) is a DO able to interact with automated data processing systems.
Although FAIR was originally targeted towards data artifacts, new initiatives have emerged to adapt the principles to other digital research resources, such as software.
FAIR assessment tools
A growing number of tools are used to assess the FAIRness of DOs. Community groups like FAIRassist.org have compiled lists of guidelines and tools for assessing the FAIRness of digital resources. These range from self-assessment tools, such as questionnaires and checklists, to semi-automated validators.
When it comes to assessing FDOs, we find two main challenges:
The first challenge is defining which tests to run against each type of resource. The FAIR indicators proposed in prior work may be a starting point for this mapping, as sketched below.
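As a minimal sketch of this idea (the test names and the type-to-test mapping below are our illustrative assumptions, not an established indicator set), different DO types could be mapped to different FAIR test suites:

# Illustrative sketch: test names and the type-to-test mapping are
# hypothetical, not an established FAIR indicator set.
TESTS_BY_TYPE = {
    "dataset":  ["has_pid", "has_license", "uses_standard_vocabulary"],
    "software": ["has_pid", "has_license", "has_archived_release"],
}

def tests_for(resource_type: str) -> list:
    # Fall back to a generic core applicable to any DO.
    return TESTS_BY_TYPE.get(resource_type, ["has_pid", "has_license"])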
Aggregation of FAIR metrics
Another challenge is how best to produce an assessment score for a FDO, independently of the tests run to assess it. For example, each of the four dimensions of FAIR (Findable, Accessible, Interoperable and Reusable) usually has a different number of associated assessment tests. If the final score is based on the raw number of passed tests, some dimensions will by default weigh more than others. Similarly, not all tests may have the same importance for specific resources (e.g., in some cases having a license may be considered more important than having full documentation).
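To make this imbalance concrete, the following sketch (with hypothetical test outcomes) contrasts a naive score, where every test counts equally, with a score normalized per dimension:

# Hypothetical test outcomes per FAIR dimension (True = passed).
results = {
    "F": [True, True, False],          # 3 Findability tests
    "A": [True],                       # 1 Accessibility test
    "I": [True, False, False, False],  # 4 Interoperability tests
    "R": [True, True],                 # 2 Reusability tests
}

# Naive score: dimensions with more tests (here I) dominate.
naive = 100 * sum(sum(t) for t in results.values()) / sum(len(t) for t in results.values())

# Normalized score: average within each dimension first, so all
# four dimensions contribute equally to the final value.
normalized = 100 * sum(sum(t) / len(t) for t in results.values()) / len(results)

print(f"naive: {naive:.1f}, normalized: {normalized:.1f}")  # naive: 60.0, normalized: 72.9

The same DO thus receives two noticeably different scores depending only on how its tests are grouped.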
In our work we consider a FDO as an aggregation of resources, and therefore we face the additional challenge of creating an aggregated FAIRness score for the whole FDO. We consider two aggregation scores. Both metrics are agnostic to the kind of resource analyzed, and the score they produce lies in the range [0, 100].
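Since the concrete definitions of the two scores are not reproduced here, the sketch below shows two generic, resource-agnostic aggregation strategies over a FDO's constituent resources (the resource names, scores and weights are illustrative assumptions, not the paper's metrics):

# Hypothetical FAIRness scores (0-100) for the resources composing one FDO.
resources = {"dataset": 80.0, "software": 55.0, "documentation": 40.0}

# Strategy 1: unweighted mean, every resource counts equally.
mean_score = sum(resources.values()) / len(resources)

# Strategy 2: weighted mean; weights sum to 1 so the result stays in [0, 100].
weights = {"dataset": 0.5, "software": 0.3, "documentation": 0.2}
weighted_score = sum(resources[r] * w for r, w in weights.items())

print(f"mean: {mean_score:.1f}, weighted: {weighted_score:.1f}")  # mean: 58.3, weighted: 64.5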
Discussion
A FDO has metadata records that describe it. Some records are common to all DOs, while others are specific to a given DO. This makes it difficult to assess FAIR principles like F2 ("data are described with rich metadata"). We therefore believe the community should discuss and agree on a minimal set of FAIR metadata, as sketched below.
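As one possible starting point for that discussion, a minimal set could be operationalized as a simple completeness check (the field names below are our assumption, not a community standard):

# Candidate minimal metadata set (field names are assumptions).
MINIMAL_FIELDS = {"identifier", "title", "description", "license"}

def missing_fields(metadata: dict) -> set:
    """Return the candidate minimal fields absent from a metadata record."""
    return MINIMAL_FIELDS - metadata.keys()

record = {"identifier": "https://doi.org/10.1234/xyz", "title": "Example DO"}
print(missing_fields(record))  # {'description', 'license'} (order may vary)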
In addition, a FAIR assessment score can change significantly depending on the formula used to aggregate all metrics. Therefore, it is key to explain to users the method and provenance used to produce such a score. Different communities should agree on the best scoring mechanism for their FDOs, e.g., by assigning a weight to each principle and deciding on the right number of tests per principle; otherwise, principles with more tests may implicitly gain more importance.
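The sketch below illustrates this sensitivity: the same hypothetical test results yield different values under an unweighted and a per-principle weighted formula, and the reported result carries provenance describing how it was computed (test names and weights are illustrative):

# Same hypothetical test results, scored with two formulas.
results = {"F1": True, "F2": False, "A1": True, "I1": True, "R1.1": False}
weights = {"F1": 2.0, "F2": 1.0, "A1": 1.0, "I1": 1.0, "R1.1": 3.0}  # R1.1 (license) weighted highest

unweighted = 100 * sum(results.values()) / len(results)
weighted = 100 * sum(w for t, w in weights.items() if results[t]) / sum(weights.values())

# Report the score together with its provenance, so users can see
# which formula and weights produced it.
report = {"score": round(weighted, 1), "method": "weighted mean", "weights": weights}
print(f"unweighted: {unweighted:.1f}, weighted: {weighted:.1f}")  # unweighted: 60.0, weighted: 50.0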
We believe that the objective of a FAIR scoring system should not be to produce a ranking, but to provide a mechanism to improve the FAIRness of a FDO.
Keywords: FAIR assessment, FDO, Digital Objects
Presenting author: Esteban González
Presented at: First International Conference on FAIR Digital Objects (presentation)
Funding: European Commission - Project FAIR IMPACT