CORE update for April to June 2019

CORE releases CORE Discovery tool

CORE has released a beta version of the CORE Discovery tool, which offers one-click access to free copies of research papers whenever you hit a paywall.

CORE Discovery

Our free CORE Discovery service provides you with:

  • Highest coverage of freely available content. Our tests show that CORE Discovery finds more free content than any other discovery system.
  • Free service for researchers by researchers. CORE Discovery is the only free content discovery extension developed by researchers for researchers. There is no major publisher or enterprise controlling and profiting from your usage data.
  • Best grip on open repository content. Because CORE is a leader in harvesting open access literature, CORE Discovery has the best grip on content from open repositories, unlike other services that focus disproportionately on content indexed in major commercial databases.
  • Repository integration and discovery of documents without a DOI. CORE Discovery is the only service offering seamless and free integration into repositories, and the only discovery system that can locate scientific content even for items with an unknown DOI or no DOI at all.

The tool is available as:

  • A browser extension for researchers and anyone interested in reading scientific documents
  • A plugin for repositories, enriching metadata-only pages with links to freely available copies of the paper
  • An API for developers and third-party services (a hypothetical request is sketched below)

If you are interested in the CORE Discovery plugin, do get in touch.
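For developers, here is a minimal sketch of what a Discovery lookup might look like from code. The endpoint URL, payload shape and response field are assumptions made purely for illustration, not the documented API, so please check the CORE Discovery documentation before relying on any of them.

    import requests

    # Hypothetical sketch: the endpoint, payload and response field below are
    # assumed for illustration only; consult the CORE Discovery documentation
    # for the actual API contract.
    DISCOVERY_ENDPOINT = "https://discovery.core.ac.uk/discover"  # assumed URL

    def find_free_copy(doi):
        """Ask CORE Discovery for a freely available copy of the paper with this DOI."""
        response = requests.post(DISCOVERY_ENDPOINT, json={"doi": doi}, timeout=30)
        response.raise_for_status()
        # Assumed response field holding the URL of a free full-text copy, if any.
        return response.json().get("fullTextLink")

    if __name__ == "__main__":
        print(find_free_copy("10.1234/example-doi"))  # placeholder DOI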

CORE receives Vannevar Bush Best Paper Award

The CORE team has also won the Vannevar Bush Best Paper Award at JCDL 2019, one of the most highly recognised digital library conferences in the world, for our work, driven by CORE data, on analysing how soon authors deposit their papers into repositories. A blog post about this is already available.
read more...

CORE highly visible at Open Repositories 2019 conference

CORE participated in the Open Repositories conference (10–13 June 2019), which took place in Hamburg, Germany. This year’s conference theme was “All the user needs”; CORE received much attention and contributed actively with five presentations.

Assessing Compliance with the UK REF 2021 Open Access Policy

The recent increase in Open Access (OA) policies has brought forth important questions concerning the effect these policies have on the practice of publishing Open Access. In particular, is there evidence that mandating OA increases the proportion of OA outputs (in other words, do authors comply with the relevant policies)? Furthermore, does mandating OA reduce the time from acceptance to the public availability of research outputs, and can compliance with OA mandates be effectively tracked? This work studies compliance with the UK REF 2021 Open Access policy. We use data from Crossref and from CORE to create a dataset containing 1.6 million publications. We show that after the introduction of the UK OA policy, the proportion of OA research outputs in the UK has increased significantly, and the time lag between the acceptance of a publication and its Open Access availability has decreased, although there are significant differences in compliance between repositories. We have developed a tool that can be used to assess publications’ compliance with the policy based on a list of DOIs. read more...
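As a rough illustration of what such a DOI-based check might involve, the sketch below queries Crossref for each DOI and then looks for a matching record in CORE. The Crossref endpoint is the public REST API; the CORE search endpoint and its response fields are assumptions for illustration, and the real tool's logic (acceptance dates, deposit deadlines, policy exceptions) is considerably more involved.

    import requests
    from urllib.parse import quote

    CROSSREF = "https://api.crossref.org/works/{doi}"
    CORE_SEARCH = "https://core.ac.uk/api-v2/articles/search/{query}"  # assumed endpoint

    def check_doi(doi, core_api_key):
        """Return basic publication details plus whether CORE knows of an OA copy."""
        work = requests.get(CROSSREF.format(doi=doi), timeout=30).json()["message"]
        hits = requests.get(
            CORE_SEARCH.format(query=quote(f'doi:"{doi}"')),
            params={"apiKey": core_api_key},
            timeout=30,
        ).json()
        return {
            "doi": doi,
            "title": (work.get("title") or [""])[0],
            "issued": work.get("issued", {}).get("date-parts", [[None]])[0],
            "open_access_copy_found": bool(hits.get("data")),  # assumed response field
        }

    if __name__ == "__main__":
        print(check_doi("10.1234/example-doi", core_api_key="YOUR_API_KEY"))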

CORE becomes the world’s largest open access aggregator (or how about them stats 2018 edition)

This was another productive year for the CORE team; our content providers have increased, along with our metadata and full text records. This makes CORE the world’s largest open access aggregator. More specifically, over the last 3 months CORE had more than 25 million users, tripling our usage compared to 2017. read more...

Increasing the Speed of Harvesting with On Demand Resource Dumps


I am currently working with Martin Klein, Matteo Cancellieri and Herbert Van de Sompel on a project funded by the European Open Science Cloud Pilot that aims to test and benchmark ResourceSync against OAI-PMH in a range of scenarios. The objective is to perform a quantitative evaluation that could then be used as evidence to convince data providers to adopt ResourceSync. During this work, we have encountered a problem related to the scalability of ResourceSync and developed a solution to it in the form of an On Demand Resource Dump. The aim of this blog post is to explain the problem, how we arrived at the solution and how the solution works.

The problem

One of the scenarios we have been exploring deals with a situation where the resources to be synchronised are metadata files of a small data size (typically from a few bytes to several kilobytes). Coincidentally, this scenario is very common for metadata in repositories of academic manuscripts, research data (e.g. descriptions of images), cultural heritage, etc.

The problem is that while most OAI-PMH implementations deliver 100–1,000 records per HTTP request, ResourceSync is designed in a way that requires resolving each resource individually. We have identified, and confirmed by testing, that for repositories with large numbers of metadata items this can have a very significant impact on harvesting performance, as the overhead of each HTTP request is considerable compared to the size of a metadata record.

More specifically, we have run tests over a sample of 357 repositories. The results show that while the speed of OAI-PMH harvesting ranges from 30 to 520 metadata records per second, depending largely on the repository platform, the speed of ResourceSync harvesting is only around 4 metadata records per second for the same content, using existing ResourceSync client/server implementations and a sequential downloading strategy. We are preparing a paper on this, so I am not going to disclose the exact details of the analysis at this stage.
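To make the source of that gap concrete, the minimal sketch below contrasts the two access patterns for the same set of records: one batched ListRecords request per page under OAI-PMH versus one GET per resource for a naive sequential ResourceSync client. The endpoint URLs are placeholders, and a real harvester would also parse the XML, follow resumption tokens, retry failures and keep incremental state.

    import requests

    # Placeholder endpoints; a real repository exposes its own base URLs.
    OAI_BASE = "https://repository.example.org/oai"
    RESOURCE_URLS = [f"https://repository.example.org/records/{i}" for i in range(1000)]

    def harvest_oai_pmh():
        """One HTTP request returns a whole page of metadata records (typically 100-1,000)."""
        return requests.get(
            OAI_BASE,
            params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
            timeout=30,
        ).text

    def harvest_resourcesync_sequential():
        """Naive sequential ResourceSync client: one HTTP request per resource."""
        return [requests.get(url, timeout=30).text for url in RESOURCE_URLS]

For small records, the fixed cost of every request (connection set-up, headers, latency) dominates the second pattern, which is exactly what the benchmark numbers above reflect.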

As ResourceSync has been created to overcome many of the problems of OAI-PMH, such as:

  • being too flexible in terms of support for incremental harvesting, resulting in inconsistent implementations of this feature across data providers,
  • some of its implementations being unstable and less suitable for exchanging large quantities of metadata and
  • being only designed for metadata transfer, omitting the much needed support for content exchange

it is important that ResourceSync performs well under all common scenarios, including the one we are dealing with.

Can Resource Dumps be the solution?

An obvious option for solving the problem, already offered by ResourceSync, is the Resource Dump. While a Resource Dump can speed up harvesting to levels far exceeding those of OAI-PMH, it creates considerable extra complexity on the server side. The key problem is the need to periodically package the data as a Resource Dump, which basically means running a batch process to produce a compressed (zip) file containing the resources.
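For a sense of what that batch process involves, here is a minimal sketch of packaging a directory of metadata files into a dump-style zip. It is illustrative only: a spec-compliant Resource Dump also needs a Resource Dump Manifest describing each packaged resource, which is omitted here.

    import zipfile
    from pathlib import Path

    def build_dump(metadata_dir, dump_path):
        """Package all metadata files under metadata_dir into a single zip archive."""
        with zipfile.ZipFile(dump_path, "w", compression=zipfile.ZIP_DEFLATED) as dump:
            for path in Path(metadata_dir).rglob("*.xml"):
                dump.write(path, arcname=path.relative_to(metadata_dir))

    build_dump("metadata/", "resource-dump-2019-06-01.zip")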

The number of Resource Dumps a source needs to maintain is equal to the number of Capability Lists it maintains times the size of the Resource Dump Index. The minimum practical operational size of a Resource Dump Index is 2; this ensures we don’t remove a dump that is currently being downloaded by a client while a new dump is being created. As we have observed that a typical repository may contain about 250 OAI-PMH sets (Capability Lists in ResourceSync terminology), a source choosing this route would need to maintain on the order of 500 dumps, implying significant data duplication and a requirement to periodically recreate Resource Dumps as part of the harvesting process.

On Demand Resource Dumps

To deal with the problem, we suggest an extension of ResourceSync that supports the concept of an On Demand Resource Dump. An On Demand Resource Dump is a Resource Dump which is created, as the name suggests, whenever a client asks for it. More specifically, a client can scan through the list of resources presented in a Resource List or a Change List (without resolving them individually) and ask the source to package any set of those resources as a Resource Dump. This approach speeds up harvesting and saves processing on the side of both the source and the client. Our initial tests show that this enables ResourceSync to perform as well as OAI-PMH in the metadata-only harvesting scenario when requests are sent sequentially (the most extreme scenario for ResourceSync). However, since ResourceSync requests can be parallelised, whereas OAI-PMH requests cannot (due to OAI-PMH’s reliance on the resumption token), this makes ResourceSync a clear winner.

In the rest of this post, I will explain how this works and how it could be integrated with the ResourceSync specification.

There are basically 3 steps:

  1. defining that the server supports an on-demand Resource Dump,
  2. sending a POST request to the on-demand dump endpoint and
  3. receiving a response from the server that 100% conforms to the Resource Dump specification.

I will first introduce steps 2 and 3 and then I will come back to step 1.

Step 2: sending a POST request to the On Demand dump endpoint

We have defined an endpoint at https://core.ac.uk/datadump. You can POST it a list of resource identifiers (which can be discovered in a Resource List). In the example below, I am using curl to send it a list of resource identifiers, in JSON, which I want to get resolved. Obviously, the approach is not limited to JSON; it can be used for any resource listed in a Resource List, regardless of its type. Try it by executing the code below in your terminal.

curl -d '["https://core.ac.uk/api-v2/articles/get/42138752","https://core.ac.uk/api-v2/articles/get/32050"]' -H "Content-Type: application/json" https://core.ac.uk/datadump -X POST > on-demand-resource-dump.zip

read more...
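The same request can, of course, be issued from code. The short sketch below does the equivalent POST and unpacks the returned zip, which, per step 3, is expected to conform to the Resource Dump specification; the endpoint is the one named above, and everything else is just illustrative client code.

    import io
    import zipfile
    import requests

    # Same request as the curl example above, issued from Python.
    identifiers = [
        "https://core.ac.uk/api-v2/articles/get/42138752",
        "https://core.ac.uk/api-v2/articles/get/32050",
    ]
    response = requests.post("https://core.ac.uk/datadump", json=identifiers, timeout=60)
    response.raise_for_status()

    # The body is an on-demand Resource Dump: a zip whose entries are the requested resources.
    with zipfile.ZipFile(io.BytesIO(response.content)) as dump:
        for name in dump.namelist():
            print(name, len(dump.read(name)), "bytes")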

CORE’s Open Access content has reached the Moon! (or how about them stats 2017 edition)

For yet another year (see previous years 2016, 2015) CORE has been really productive; the number of our content providers has increased and we now have more open access full text and metadata records than ever.

Our services are also growing steadily and we would like to thank the community for using the CORE API and CORE Datasets.

We also offer other services, such as the CORE Repositories Dashboard, CORE Publisher Connector and the CORE Recommender. We received great feedback with regards to the CORE Recommender, including from George Macgregor, Institutional Repository Manager at Strathclyde University.

We are thrilled that this year CORE made it to the moon. Our next destination is Venus.

The CORE Team wishes you Merry Christmas and a Prosperous New Year!

* Note: Special thanks to Matteo Cancellieri for creating the graphics in this blog post.

CORE enhances library discovery services

The CORE service is working in partnership with ProQuest to deliver more content within their library discovery services (Ex Libris Primo and Ex Libris Summon). What does this mean for the end user? Search results will return more relevant content from OA repositories worldwide in addition to the existing library collection records, and the user will not have to go to a separate search interface to run the same search query.

Read more…

CORE visits Ethiopia and participates in an Open Science training session


In June 2017, EIFL invited the global open access full text aggregator CORE to take part in an Open Science train-the-trainer course for universities and research institutions in EIFL partner countries.

Watch the videos recorded during the workshop and read more:

Solomon Mekonnen – Open Access Ethiopia 

Zaituni Kaijage – Open Access Tanzania

Dr Roshan Karn – Open Access Nepal

Dr Manisha Dhakal – Open Access Nepal

Simon Osei – Open Access Ghana

Gloria Kadyamatimba – Open Access Zimbabwe

It was a great experience travelling to Addis Ababa, and a big thank you to the workshop host, the Library of the University of Addis Ababa (Mesfin Gezehagn, Solomon Mekonnen and Girma Aweke), for their hospitality. It was also great to meet the trainers participating in the workshop, from Ghana (Lucy Adjoa Dzandu, Simon Kwame Osei, Benjamin Yao Folitse), Nepal (Dr Manisha Dhakal and Dr Roshan Kumar Karn), Tanzania (Zaituni Kokujona Kaijage, Paul Samwel Muneja, Bwire Wilson Bwire) and Zimbabwe (Gloria Kadyamatimba).


Implementing the CORE Recommender in Strathprints: a “whitehat” improvement to promote user interaction

by George Macgregor, Institutional Repository Coordinator, University of Strathclyde

This guest blog post briefly reviews why the CORE Recommender was quickly adopted on Strathprints and how it has become a central part of our quest to improve the interactive qualities of repositories.

Back in October 2016, my colleagues in the CORE Team released their Recommender plugin, which can be installed on repositories and journal systems to recommend similar scholarly content. On this very blog, Nancy Pontika, Lucas Anastasiou and Petr Knoth announced the release of the Recommender as a:

…great opportunity to improve the functionality of repositories by unleashing the power of recommendation over a huge collection of open-access documents, currently 37 million metadata records and more than 4 million full-text, available in CORE*.
(* Note from the CORE Team: the up-to-date numbers are 80,097,014 metadata and 8,586,179 full-text records.)

When the CORE Recommender is deployed, a repository user viewing an article or abstract page within the repository is presented with recommendations for other related research outputs, all mined from CORE. The Recommender sends data about the item the user is visiting to CORE; such data include any identifiers and, where possible, accompanying metadata. CORE’s response then delivers its content recommendations, and a list of suggested related outputs is presented to the user in the repository user interface. The algorithm used to compute these recommendations is described in the original CORE Recommender blog post, but is ultimately based on content-based filtering, citation graph analysis and analysis of the semantic relatedness between the articles in the CORE aggregation. It is therefore unlike most standard recommender engines and is an innovative application of open science in repositories.
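As a rough illustration of that round trip, the sketch below sends an item's identifiers and basic metadata to a recommendation endpoint and reads back the suggested outputs. The endpoint URL and the payload and response field names are assumptions made for illustration only, not the documented Recommender API, which in practice is installed as a plugin inside the repository's own pages.

    import requests

    # Hypothetical endpoint and field names, for illustration only; the real
    # Recommender is deployed as a repository/journal-system plugin.
    RECOMMENDER_ENDPOINT = "https://core.ac.uk/recommender/suggest"  # assumed URL

    def fetch_recommendations(item, api_key):
        """Send the currently viewed item's identifiers and metadata; get related outputs back."""
        payload = {
            "identifiers": item.get("identifiers", []),  # e.g. OAI identifier, DOI, repository URL
            "title": item.get("title"),
            "abstract": item.get("abstract"),
            "authors": item.get("authors", []),
        }
        response = requests.post(
            RECOMMENDER_ENDPOINT, json=payload, params={"apiKey": api_key}, timeout=30
        )
        response.raise_for_status()
        return response.json().get("recommendations", [])  # assumed response field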

Needless to say, we were among the first institutions to proudly implement the CORE Recommender on our EPrints repository. The implementation was on Strathprints, the University of Strathclyde’s institutional repository, and was rolled out as part of some wider work to improve repository visibility and web impact. The detail of this other work can be found in a poster presented at the 2017 Repository Fringe Conference and a recent blog post. read more...

CORE reaches a new milestone: 75 million metadata and 6 million full text

CORE is continuously growing. This month we have reached 75 million metadata and 6 million full text scientific research articles harvested from both open access journals and repositories. This past February we reported 66 million metadata and 5 million full text articles, while at the end of December 2016 we had just over 4 million full text. This shows our continuous commitment to bringing our users the widest possible range of Open Access articles.

To celebrate this milestone, we gathered the knowledge of our data scientists, programmers, researchers, and designers to illustrate our portion of metadata and full text with a less traditional (sour apple) “pie chart”. read more...