The focus of the workshop is the future of scholarly assessment approaches, including organizational, infrastructural, and community issues. The overall goal is to:
"identify requirements for novel assessment approaches, several of which have been proposed in recent years, to become acceptable to community stakeholders including scholars, academic and research institutions, and funding agencies."

Panelists include Oren Beit-Arie (Ex Libris), Peter Binfield (PLoS ONE), Johan Bollen (Indiana University), Lorcan Dempsey (OCLC), Tony Hey (Microsoft), Jorge E. Hirsch (UCSD), Julia Lane (NSF), Michael Kurtz (Astrophysics Data Service), Don Waters (Andrew W. Mellon Foundation), Jevin West (UW/eigenfactor.org), and Jan Velterop (Concept Web Alliance).
A summary of the goal of the workshop:
The quantitative evaluation of scholarly impact and value has historically been conducted on the basis of metrics derived from citation data. For example, the well-known journal Impact Factor is defined as a mean two-year citation rate for the articles published in a particular journal. Although well established and productive, this approach is not always well suited to the fast-paced, open, and interdisciplinary nature of today's digital scholarship. A consensus also seems to be emerging that it would be constructive to have multiple metrics, not just one.
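The two-year Impact Factor mentioned above reduces to a simple ratio. A minimal sketch (the numbers in the usage example are hypothetical, chosen only to illustrate the arithmetic):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year journal Impact Factor for year Y:
    citations received in Y to items the journal published in Y-1 and Y-2,
    divided by the number of citable items it published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 260 citations in year Y to 200 articles
# published in the two preceding years.
print(impact_factor(260, 200))  # → 1.3
```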
In recent years, significant advances have been made in this realm. First, we have seen a rapid expansion of proposed metrics to evaluate scientific impact. This expansion has been driven by interdisciplinary work in web, network, and social network science, e.g. citation PageRank, the h-index, and various other social network metrics. Second, new data sets such as usage and query data, which represent aspects of scholarly dynamics other than citation, have been investigated as the basis for novel metrics. The COUNTER and MESUR projects are examples in this realm. And, third, an interest in applying Web reputation concepts in the realm of scholarly evaluation has emerged, generally referred to as Webometrics.
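Of the metrics named above, the h-index is easy to state precisely: a scholar has index h if h of their papers have each received at least h citations. A short sketch of that definition (the citation counts in the example are made up):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times:
# four papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```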
A plethora of proposals, both concrete and speculative, has thus emerged to expand the toolkit available for evaluating scholarly impact, to the degree that it has become difficult to see the forest for the trees. Which of these new metrics and underlying data sets best approximate a common-sense understanding of scholarly impact? Which can best be applied to assess a particular facet of scholarly impact? Which are fit to be used in a future, fully electronic and open science environment? Which make the most sense from the perspective of those involved in the practice of evaluating scientific impact? Which are regarded as fair by scholars? Under which conditions can novel metrics become an accepted and well-understood part of the evaluation toolkit that is, for example, used in promotion and tenure decisions?
I look forward to the Twitter stream.