Mandated by funders and governments and implemented at universities and research-intensive organizations worldwide, the FAIR data principles, which ensure that data is findable, accessible, interoperable, and reusable, are expected to drive innovation in science in the years ahead.

On Monday, September 18, at the Poortgebouw at the University of Leiden in the Netherlands, CCC, in partnership with the GO FAIR Foundation, will host its inaugural FAIR Forum on “The Evolving Role of Data in the AI Era.” Registration is complimentary.

With new AI services being introduced almost daily, the adoption of the FAIR Data Principles is more important than ever. This one-day, in-person forum will provide leaders in research-intensive businesses with expert insights on the importance of FAIR data to successful AI initiatives and best practices for FAIR implementation.

At this special program, CCC and the GO FAIR Foundation will welcome global FAIR and AI experts including Barend Mons, President, CODATA; Erik Schultes, FAIR Implementation Lead, GO FAIR Foundation; Lars Juhl Jensen, Professor, NNF Center for Protein Research; Jane Lomax, Head of Ontologies, SciBite; and Martin Romacker, Product Manager, Roche Data Marketplace.

Tracey Armstrong, President & CEO, CCC, and Babis Marmanis, Executive Vice President & CTO, CCC, will also share insights.

Molecular biologist Barend Mons is responsible for groundbreaking research on malaria parasites. He became involved with creating the FAIR data principles in 2014, and today is a leading advocate for their adoption. In a CCC Town Hall panel program this spring, he explained why he had committed himself to this ambitious effort.

“There’s no way to work without machines, and that’s why I need FAIR data for my own research,” Mons told me. “But I also became an advocate because I think it can change the face of science and the whole way we do open science fundamentally, which is needed.”

GO FAIR Foundation’s Erik Schultes acknowledged the challenge posed by the ever-growing volume of data in all fields – a challenge exacerbated by inadequate metadata.

“All the data we have today, six months from now, it’s going to be doubled. A year from now, that’s going to be quadrupled,” Schultes noted. “If that data is not made accessible online and if there isn’t sufficient metadata that can allow search engines or machine agents to locate it in some meaningful way, then indeed that data can be lost in plain sight.”

Author: Christopher Kenneally

Christopher Kenneally hosts CCC's Velocity of Content podcast series, which debuted in 2006 and is the longest continuously running podcast covering the publishing industry. As CCC's Senior Director, Marketing, he is responsible for organizing and hosting programs that address the business needs of all stakeholders in publishing and research. His reporting has appeared in the New York Times, the Boston Globe, the Los Angeles Times, and The Independent (London), and on WBUR-FM, NPR, and WGBH-TV.