Since the introduction of the FAIR principles in 2016, there has never been a question that their implementation would help drive innovation and accelerate the R&D lifecycle. The effort to implement FAIR principles has led many organizations to adopt a data-centric culture that values collaboration and encourages the sharing of data. Although many successes have been reported and much progress made, after seven years only a very small fraction of data is truly FAIR. That fraction is greater if one considers only the data of a single organization that has invested in implementing the FAIR principles; it is significantly smaller for organizations that have not; and it drops precipitously for data shared across organizations, domains, and regulatory jurisdictions.
A key characteristic of truly FAIR data is that they are machine-readable and machine-interpretable. It is therefore reasonable to expect that an organization can maximize the benefits of recent advances in artificial intelligence (AI) if the inputs to its AI systems consist of FAIR data. It is equally important to acknowledge that no single AI system or algorithm can “magically” understand what data mean if that meaning is not explicitly expressed. AI systems can help us extract ever more value from our data, but only if there is value in the data to begin with! With the explosion of new AI services, adoption of the FAIR data principles is paramount.
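As a minimal sketch of what “explicitly expressed” meaning can look like in practice, the Python snippet below contrasts a bare numeric value with the same value described as RDF triples using the rdflib library. The http://example.org vocabulary terms are hypothetical placeholders, standing in for the community ontologies (such as the IDMP ontology mentioned below) that a real FAIR implementation would reference.

```python
# A bare value versus machine-interpretable metadata (illustrative sketch only).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# A bare value like 37.2 tells a machine nothing about what it measures,
# in what unit, or how it relates to anything else.
bare_value = 37.2

# Hypothetical vocabulary namespace; a real implementation would point to
# shared, resolvable ontology terms rather than example.org placeholders.
EX = Namespace("http://example.org/vocab/")

g = Graph()
observation = URIRef("http://example.org/observation/001")

# The same value with its meaning explicitly expressed: what kind of
# measurement it is, its numeric value, and its unit.
g.add((observation, RDF.type, EX.BodyTemperatureMeasurement))
g.add((observation, EX.hasValue, Literal(bare_value, datatype=XSD.decimal)))
g.add((observation, EX.hasUnit, EX.DegreeCelsius))

# Serialize the explicit, machine-interpretable description.
print(g.serialize(format="turtle"))
```

Nothing about this example is specific to any one AI system; the point is simply that the semantics travel with the data, so any downstream tool can interpret them without guesswork.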
R&D teams today collaborate across the life sciences to overcome hurdles to innovation and advance the discovery and delivery of treatments, and this collaboration includes the implementation of the FAIR principles. For example, numerous Pistoia Alliance projects make use of the FAIR data principles and aim to facilitate their implementation (e.g., the IDMP ontology project). In addition to such self-governing collaborative efforts, international policymakers such as the G20/G7 and the OECD Committee for Scientific and Technological Policy have publicly endorsed the implementation of the FAIR principles.
According to a recent article by Sansone et al. (FAIR: Making Data AI-Ready): “Turning FAIR into reality requires new technological and social infrastructure, as well as cultural and policy changes, supported by educational and training elements that target not just researchers but all stakeholders involved in the data life cycle: from developers, service providers, librarians, journal publishers, funders, societies in the academic as well as in the commercial and governmental setting.”
CCC, in partnership with the GO FAIR Foundation, will host an in-person FAIR Forum on “The Evolving Role of Data in the AI Era” on 18 September at Poortgebouw, the University of Leiden, the Netherlands. This one-day forum will provide leaders in research-intensive businesses with expert insights on the importance of FAIR data to successful AI initiatives and best practices for FAIR data implementations. Speakers will address critical topics such as:
- The vital importance of FAIR data in strategic AI initiatives.
- Using FAIR data to improve efficiency and drive innovation.
- Evangelizing the FAIR data principles and identifying practical steps toward implementation.
- The important role that information management professionals can play in making the FAIR data principles a reality.