The world of scholarly publishing is rapidly changing in response to pressure from research funders and policymakers, such as the 2018 Plan S mandates and the recent memorandum from the White House Office of Science and Technology Policy, to accelerate the move to open access (OA). As a result, publishers are working to transform their business models in the short term to remain viable over the long term.

As publishers transition from traditional subscription business models to new OA models, including transformative agreements such as Read and Publish, they face many challenges in collaborating with their institutional, consortium, and funder stakeholders and then operationalizing the deals they have created. Together with their partners, publishers are working to craft agreements that are fair and sustainable throughout this transition to OA.

In this blog series, we will examine the challenges publishers face when crafting OA agreement proposals, along with the keys to accelerating the process and creating agreements that are equitable and sustainable for all stakeholders. In this first post, we focus on the data challenges publishers face in their agreement journey and how data accessibility and quality can create obstacles from the start.

Key Data Challenges

Crafting an open access agreement comes down to efficiently analyzing data from disparate systems. Here is a quick rundown of the key data challenges publishers face as they begin to explore a new agreement offer:

  • Data access—publishers face accessibility challenges with the data sets needed to begin to conceptualize what an agreement offer might look like. They are often forced to manually pull and manipulate data across multiple systems and tools. This manual process holds up agreement modeling activities and creates a slow collaboration cycle that can jeopardize deals.
  • Data quality—for agreements to be sustainable for all stakeholders, publishers need access to accurate and complete OA data sets, including historical APC transaction data. This is the only way publishers can properly make informed renewal and new agreement decisions and execute an informed agreement strategy.
  • Error-prone analysis and modeling—for publishers that run data modeling and scenario planning in spreadsheets and pivot tables, these time-consuming and error-prone processes make it hard to innovate and experiment to find the deals that work best for all stakeholders. This can result in inconsistent agreement terms, missed opportunities, and inaccurate projections.
  • Ingesting data and calculating fees—after bringing all the data sources together for analysis, publishers need a data model schema to match entities—such as authors to funding institutions. The model must also rely on accurate data to calculate fees properly, including author and journal value as well as the number of articles produced.
  • Data transparency—providing data transparency is critical for building and maintaining trust with institutional, funding, and consortium partners. These stakeholders look to publishers to maintain an equitable pricing structure. With clean and sharable data, publishers can efficiently communicate the benefits of open access agreements.
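The entity-matching and fee-calculation step described above can be sketched in code. The toy below is purely illustrative (the institution names, affiliation mappings, and APC figures are invented, and real affiliation matching involves messy free-text data rather than a simple lookup table); it shows why incomplete mappings surface directly in fee totals:

```python
from dataclasses import dataclass

@dataclass
class Article:
    doi: str
    author_affiliation: str  # affiliation string as reported at submission
    apc_fee: float           # list-price APC in USD

# Hypothetical affiliation-to-institution mapping. In practice this matching
# is the hard part: affiliations arrive as inconsistent free text.
AFFILIATION_MAP = {
    "Univ. of Example": "University of Example",
    "University of Example": "University of Example",
    "Example Univ Med School": "University of Example",
}

def fees_by_institution(articles):
    """Aggregate APC fees per matched institution.

    Unmatched affiliations are grouped under 'UNMATCHED' so that
    data-quality gaps stay visible instead of silently vanishing."""
    totals = {}
    for a in articles:
        inst = AFFILIATION_MAP.get(a.author_affiliation, "UNMATCHED")
        totals[inst] = totals.get(inst, 0.0) + a.apc_fee
    return totals

articles = [
    Article("10.1000/a1", "Univ. of Example", 2500.0),
    Article("10.1000/a2", "University of Example", 1800.0),
    Article("10.1000/a3", "Unknown College", 2000.0),
]
print(fees_by_institution(articles))
# {'University of Example': 4300.0, 'UNMATCHED': 2000.0}
```

The size of the `UNMATCHED` bucket is a rough proxy for the data-quality problem: every dollar in it is an article that cannot be attributed to an agreement partner.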

One central theme runs through the challenges listed above: manual, lengthy processes that hold up agreement modeling activities. Publishers have noted that current processes, together with a lack of clean, standardized, and transparent data sets, hinder their efforts to communicate the shared benefits of agreement terms with institutions, resulting in misalignment and a lack of shared understanding of the value of potential deals for all stakeholders.

If data is the key to moving forward with OA agreements in a scalable way, publishers must explore tools and solutions that help them streamline their internal processes, especially those concerning their data. In the coming weeks, I’ll explore agreement modeling challenges and how publishers can address them.

Author: Herman Mentink

Herman Mentink is an Econometrics MSc from the Erasmus University of Rotterdam in the Netherlands. Throughout his career, he has worked in marketing intelligence & strategy, specializing in pricing since 2008. As a pricing professional, Herman has worked in various B2C and B2B environments, focusing on scientific publishing since 2014. He has worked with some of the leading scientific publishers in a variety of areas, including transformative agreements. Currently, he works as an interim CPO (Chief Pricing Officer) in the publishing and entertainment industry.