Language interfaces are going to be a big deal. That’s how Sam Altman, CEO of OpenAI, put it when the company launched ChatGPT last November.

Going to be a big deal? Definitely a big deal.

At the 2023 Frankfurt Book Fair last week, the halls were alive with the sounds of AI. In the Frankfurt Studio, I moderated a panel discussion, “Trained With Your Content,” considering what limits should be placed on training Large Language Models (LLMs) and how to address concerns over equity, transparency, and authenticity.


“Right now, the situation is that AI governance is far behind AI capabilities, which is dangerous,” noted Dr. Hong Zhou, Director of Intelligent Services & Head of AI R&D, Wiley. “This has impacted research and also publishing, because it’s very hard for people to manage all these AI capabilities.

“That’s why we need to create a legal framework that catches up to these technologies,” he explained. “I do have several concerns about this. The first concern, as everyone knows, is copyright infringement. Today, generative AI generates content that infringes copyright without permission. This is a problem. Another concern is that AI can generate content that is similar to the original content but not similar enough to be considered copyright infringement. That is one scenario. Another scenario is that it generates content that does infringe copyright but is hard to detect. In both cases, it’s very difficult for copyright holders to enforce their rights.”

According to Dr. Namrata Singh, Founder and Director, Turacoz Group, the ICMJE has developed guidelines on the responsibility of scientific authors when using AI in their work.

“If you have used an AI tool, then you mention that in your methods section. You mention the name of the tool. You mention the version if it is there or the whole technology part behind it. This is where, I guess, the transparency works. But ultimately, the responsibility is on the author. But guidelines and recommendations do help us just to know what is right and what is wrong and what we can do and what we cannot do.”

The demand for AI tools in research and scholarly publishing raises copyright-related questions about the use of published materials that feed the tools. Carlo Scollo Lavizzari described how licensing solutions might meet that demand.

“These licenses can either come from segments of publishing, perhaps, that have large bodies of content they can license, or they could be voluntary collective licenses, linking many-to-many situations. For example, you have many writers and many publishers on one side, and you have many pieces of content on the other side used by different AI tools. So that is one such mechanism – voluntary collective licensing.”

2023 Frankfurt Book Fair Panel

Author: Christopher Kenneally

Christopher Kenneally hosts CCC's Velocity of Content podcast series, which debuted in 2006 and is the longest continuously running podcast covering the publishing industry. As CCC's Senior Director, Marketing, he is responsible for organizing and hosting programs that address the business needs of all stakeholders in publishing and research. His reporting has appeared in the New York Times, Boston Globe, Los Angeles Times, The Independent (London), WBUR-FM, NPR, and WGBH-TV.