GYA member and co-chair of the Open Science Working Group, Koen Vermeir, attended a conference on Open Access and the Evaluation of Research: Towards a New Ecosystem, which took place on 13 and 14 October 2016 in Toulouse, France. The conference started off under favorable auspices: a few days before, on 8 October, a new law promoting open access had taken effect in France. This law grants inalienable rights to authors whose work has been fully or partially funded by the French state, allowing them to make their work available in OA repositories (after a short embargo period). (http://openaccess.couperin.org/publication-de-la-loi-pour-une-republique-numerique/)
That this kind of top-down legislation remains important for Open Science became very clear during the conference, where the many problems of present-day Open Science, and Open Access in particular, were critically discussed. The close intertwinement of research evaluation and publication was perceived as especially problematic, not only because current metrics (impact factor, h-index, …) do not measure scientific quality and are misused by science managers, but also because the interconnection of evaluation and publication holds back innovative models of publication, including good versions of Open Access, and spawns negative side effects such as predatory publishing.
Speakers at the conference presented successful examples of Open Access (e.g. the Liège model) as well as new technical solutions and fixes. (DOAI, for instance, the Digital Open Access Identifier, is an alternative to the common DOI, or Digital Object Identifier, and takes you to a free version of the requested article when one is available.) Initiatives like PLOS ONE propose alternative kinds of evaluation, in which impact is not determined by journal ratings but by readers after publication of the article. Researchers are also experimenting with new publishing models and platforms, such as epi-revue, Self-Journal of Science, OpenAire, or the creation of open access science magazines.
Although the general atmosphere at the conference was upbeat, participants were aware that they did not yet have a solution for a “new research ecosystem”, not by a long shot. The conference revealed considerable animosity among the heads of established evaluation institutions as well as with the public, and fair and credible procedures of research evaluation remain a hotly contested topic. Furthermore, participants explained that new challenges are looming on the horizon, including new ways of monetizing research practices and the continuing consolidation of the scientific publishing sector, which is now also buying social platforms in order to monetize data (not the data of the scientists but data generated about the scientists). Dedicated private companies are inventing new metrics of evaluation while keeping their proprietary algorithms secret, and, more generally, developments in big data, AI and robotics are likely to fundamentally change research management and, as a result, research practices. It remains to be seen how healthy the new scientific ecosystem will turn out to be.