CORE releases a new website version

A couple of days ago we released a new version of our website, and if you visit our main page you will see that it now looks slightly different.

Image: blickpixel @ pixabay https://pixabay.com/en/lego-legomaennchen-males-workers-568039/

One of our aims was to showcase the CORE testimonials more clearly, i.e. what others think of the project and how the community uses our products, mainly our API and Datasets. To give credit to the universities and companies that use our services, such as our Recommender and API, we now display their logos on our main page. The final new item is our research partners; CORE could not offer some of its services without co-operating with other projects, such as IRUS-UK, RIOXX and more. read more...

CORE’s open access and text mining services – 2016 growth (or, how about them stats – 2016 edition)

The past year has been productive for the CORE team; the number of harvested repositories has grown and our open access content, both metadata and full text, has increased massively. (You can see last year’s blog post with our 2015 achievements in numbers here.)

There was also progress with regard to our services: the number of our API users almost doubled in 2016, we now have about 200 registered CORE Dashboard users, and this past October we released a new version of our recommender and updated our dataset. read more...

CORE Recommender

* This post was authored by Nancy Pontika, Lucas Anastasiou and Petr Knoth.

The CORE team is thrilled to announce the release of a new version of our recommender, a plugin that can be installed in repositories and journal systems to suggest similar articles. This is a great opportunity to improve the functionality of repositories by unleashing the power of recommendation over a huge collection of open access documents available in CORE, currently 37 million metadata records and more than 4 million full-text documents. read more...
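For readers who want to experiment with similar-article suggestions before installing the plugin, the sketch below queries CORE's article search with a seed title. It is a minimal illustration only, not the recommender itself: the API v2 search URL, the apiKey parameter and the response field names are assumptions to be checked against the current CORE API documentation.

```python
# Minimal sketch (not the recommender plugin): use a seed article's title
# as a free-text query against the CORE article search.
# Assumptions (verify against the API docs): the endpoint
#   https://core.ac.uk/api-v2/articles/search/{query}
# an `apiKey` query parameter, and a JSON response whose `data` list holds
# article records with a `title` field.
import requests

API_KEY = "YOUR-CORE-API-KEY"  # placeholder; register at core.ac.uk for a key

def related_titles(seed_title, page_size=5):
    url = "https://core.ac.uk/api-v2/articles/search/" + requests.utils.quote(seed_title)
    resp = requests.get(url, params={"apiKey": API_KEY, "pageSize": page_size}, timeout=30)
    resp.raise_for_status()
    records = resp.json().get("data") or []
    return [r.get("title") for r in records if isinstance(r, dict)]

if __name__ == "__main__":
    for title in related_titles("open access repository aggregation"):
        print(title)
```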

‘Measuring’ and managing mandates

An investigation by Research Support staff at Brunel University London considers the role CORE might play in supporting funder compliance and the wider transition to open scholarship…

By David Walters (Open Access officer at Brunel) and Dr Christopher Daley (Research Publications Officer at Brunel)

In 2001, the Budapest Open Access Initiative (BOAI) brilliantly and simply encapsulated the aspirational qualities of ‘openness’ that funders, scholars, institutions, services and publishers have since driven forward. This simplicity has been lost in the detail of implementing funder mandates over copyright restrictions, resulting in significant administrative overheads for support staff whose primary role is to smoothly progress a cultural change. Although the momentum is undeniable, the transition to open scholarship is now fraught with complexity. read more...

CORE wins Best Poster Award at the Open Repositories Conference #OR2016

Last week, the CORE team attended the 11th Annual Conference on Open Repositories, an international conference aimed mainly at subject and institutional repository managers and focusing on open access, open data and open science tools, projects and services.

At the conference the team had six submissions:

  1. A workshop presentation on “How can repositories support the text-mining of their content and why?”, where Nancy Pontika explained how repository managers can support text-mining practices and Petr Knoth described the technical requirements that enable the text mining of repositories. In addition, the CORE team organised the workshop as part of its involvement with the OpenMinTeD project, an EU-funded project on text and data mining. The workshop has been described in two blog posts, one hosted on the OpenMinTeD blog (which includes all workshop presentations), and another composed by Rebecca Sutton Koeser, a workshop participant.
  2. A full presentation on “Exploring Semantometrics: full text-based research evaluation for open repositories” by Petr Knoth. The presentation explored semantometrics, a new class of research evaluation metrics, which builds on the premise that full text is needed to assess the value of a publication. (Presentation available here.)
  3. A 24×7 presentation on the “Implementation of the RIOXX metadata guidelines in the UK’s repositories through a harvesting service”, where Matteo Cancellieri and Nancy Pontika described how the RIOXX metadata guidelines are now an embedded feature in the CORE Repositories Dashboard. (Presentation slides here; a short sketch of checking a repository’s RIOXX support follows this list.)
  4. & 5. Two demo presentations during the Developer Track sessions. The first was on “Mining Open Access Publications in CORE”, where Matteo Cancellieri demonstrated the new CORE API, and the second was entitled “Oxford vs Cambridge Contest: Collecting Open Research Evaluation Metrics for University Ranking”, where Petr Knoth used the traditional Oxford University vs Cambridge University contest to show how to freely gather and compare the research performance of universities. (The code for both demo presentations is on Github.)
  6. A poster on the “Integration of the IRUS-UK Statistics in the CORE Repositories Dashboard”, by Samuel Pearce and Nancy Pontika, which showed the process of embedding the existing IRUS-UK statistics service into the CORE Repositories Dashboard. We were delighted that our poster won the best poster award (yay!). We would like to thank all the conference participants who stopped by our poster, picked up the CORE freebies and voted for us! (You can access the poster here.)
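As a rough companion to the RIOXX item above, the snippet below shows one way a harvesting service might check whether a repository advertises RIOXX over OAI-PMH. This is a hedged sketch, not CORE's implementation: the ListMetadataFormats verb is standard OAI-PMH, but the endpoint URL is a placeholder and the "rioxx" metadata prefix is an assumption that individual repositories may configure differently.

```python
# Hedged sketch: ask a repository's OAI-PMH endpoint which metadata formats
# it exposes and look for a RIOXX prefix. Not CORE's actual harvester code.
import xml.etree.ElementTree as ET
import requests

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def advertised_prefixes(base_url):
    """Return the metadataPrefix values announced by ListMetadataFormats."""
    resp = requests.get(base_url, params={"verb": "ListMetadataFormats"}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return [el.text for el in root.iter(OAI_NS + "metadataPrefix")]

if __name__ == "__main__":
    # Placeholder endpoint; substitute a real repository's OAI-PMH base URL.
    prefixes = advertised_prefixes("https://repository.example.ac.uk/oai")
    print("RIOXX advertised" if any("rioxx" in (p or "").lower() for p in prefixes)
          else "No RIOXX prefix found; formats: " + ", ".join(prefixes))
```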

Since this conference has a clear focus on repository services, and the CORE service both uses and is used by such services, we were also mentioned extensively in other presentations. For example: Richard Jones mentioned in his presentation on Lantern that the project is using the CORE API; Paul Walk described how CORE is using the RIOXX metadata application profile; the Repositories of the Future panel, organised by COAR, stressed the importance of aggregators in the repository environment, specifically naming CORE; and the “Ideas Challenge”, a thought-provoking brainstorming group exercise in which programmers and repository managers focused on how to make the lives of academics easier, selected CORE as a runner-up for the development of a cross-repository journal and topic browse interface. Finally, CORE was also presented in the Jisc poster on “Jisc’s Open Access Services”. read more...

How about them stats?

Every month Samuel Pearce, one of the CORE developers, collects the CORE statistics – perhaps a boring task, but useful for us to know where we stand as a service. A very brief report of the cumulative statistics for all the years CORE has operated as a project, 2011 – 2015, follows.
Users can retrieve from CORE,

  • 25,363,829 metadata records and
  • 2,954,141 open access full-text records, 

from 689 repositories (institutional and subject) and 5,488 open access journals. In addition, 122 users have access to the CORE API.

In a playful Christmas spirit, this time we attempted to have some fun with the statistics. read more...

7 tips for successful harvesting

The CORE (COnnecting REpositories) project aims to aggregate open access research outputs from open repositories and open journals, and make them available for dissemination via its search engine. The project indexes metadata records and harvests the full-text of the outputs, provided that they are stored in PDF format and are openly available. Currently CORE hosts around 24 million open access articles from 5,488 open access journals and 679 repositories.

As in any partnership, the harvesting process is a two-way relationship, where the content provider and the aggregator need to be able to communicate and have a mutual understanding. For successful harvesting it is recommended that content providers apply the following best practices (some of the recommendations relate to harvesting in general, while others are CORE specific): read more...
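To make the aggregator side of this relationship concrete, here is a minimal sketch of the kind of OAI-PMH request an aggregator issues when harvesting metadata: the standard ListRecords verb with the common oai_dc format, following resumption tokens page by page. The endpoint URL is a placeholder, and this illustrates the protocol rather than CORE's own harvester.

```python
# Illustrative OAI-PMH harvesting loop (standard protocol, not CORE's code):
# request ListRecords in the oai_dc format and follow resumptionTokens
# until the repository reports no further pages.
import xml.etree.ElementTree as ET
import requests

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(base_url, metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params, timeout=60).content)
        for record in root.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            if title is not None and title.text:
                yield title.text
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Per the OAI-PMH spec, a resumption request carries only the token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

if __name__ == "__main__":
    # Placeholder endpoint; replace with a real repository's OAI-PMH base URL.
    for i, t in enumerate(harvest_titles("https://repository.example.ac.uk/oai")):
        print(t)
        if i >= 9:  # stop after a handful of titles for the demo
            break
```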

CORE Repositories Dashboard: An infrastructure to increase collaboration of Aggregators with Open Repositories

In an effort to improve the quality and transparency of the harvesting of open access content and to create a two-way collaboration between the CORE project and the providers of this content, CORE is introducing the Repositories Dashboard. The aim of the Dashboard is to provide an online interface for repository providers and to offer, through it, valuable information to content providers about:

  • the content harvested from the repository, enabling its management, for example by requesting metadata updates or handling take-down requests,
  • the times and frequency of content harvesting, including all detected technical issues and suggestions for improving the efficiency of harvesting and the quality of metadata, including compliance with existing metadata guidelines,
  • statistics regarding the repository content, such as the distribution of content according to subject fields and types of research outputs, and the comparison of these with the national average.

In the CORE Dashboard there is a designated page for every institution, where repository managers will be able to add all the information that corresponds to their own repository, such as the institution’s logo, the repository name and email address. read more...