On Wednesday I attended an event laid on by the JIBS user group (users of bibliographic software). The title was somewhat provocative, but the aim of the day was to look at librarians' role in research evaluation. There were some interesting presentations, ranging from an evaluation of Google Scholar as a source of bibliometric data to experiences of implementing research information systems like PURE or Symplectic.
The event was introduced by Fiona Bowtell (chair of JIBS) and the day was chaired by Kara Jones. Here are my notes on some of the presentations:
Andria McGrath - King's College London - Keynote
Andria made the point that research evaluation is a "brave new world", and I guess that this is an emerging specialism for librarians. In the past research support was likely to be done by subject librarians, but it was interesting to see quite a high number of specialist Research Support Librarians at this event. Andria went on to discuss the decreased importance that REF 2014 places on bibliometric data: impact (in REF terms) is a more general concept. Demonstrating some societal change is (quite rightly in my opinion) more important than citations. But there are areas where bibliometrics will inform panels. Andria described some initiatives that should make getting at bibliometric data easier - in particular the CERIF data model (Common European Research Information Format), a European standard for CRIS (Current Research Information Systems). This is an XML-based model that goes beyond metadata, allowing deeper description of the information needed to manage research. A CERIF4REF schema has been produced for EPrints that will allow data in repositories to be ported easily into REF submissions.
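I haven't dug into the CERIF specification itself, but to make "beyond metadata" concrete: rather than one flat record, a CERIF-style model stores publications, people and organisations as separate entities with typed links between them. A rough Python sketch (the element names are simplified placeholders, not the real CERIF schema):

```python
import xml.etree.ElementTree as ET

# Illustrative only: simplified placeholder elements, not the actual CERIF schema.
cerif = ET.Element("CERIF")

pub = ET.SubElement(cerif, "ResultPublication", id="pub-1")
ET.SubElement(pub, "Title").text = "An Example Paper"

person = ET.SubElement(cerif, "Person", id="pers-1")
ET.SubElement(person, "Name").text = "A. Researcher"

org = ET.SubElement(cerif, "OrganisationUnit", id="org-1")
ET.SubElement(org, "Name").text = "Example University"

# The linking entities are what go "beyond metadata": typed relationships
# between people, publications and organisations.
link = ET.SubElement(cerif, "Person_ResultPublication")
ET.SubElement(link, "PersonId").text = "pers-1"
ET.SubElement(link, "PublicationId").text = "pub-1"
ET.SubElement(link, "Role").text = "author"

print(ET.tostring(cerif, encoding="unicode"))
```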
Andria was the first of many speakers to mention that librarians are not always involved by their institutions and that we need to push ourselves forward. She mentioned a RIN report, "The value of libraries for research and researchers", that looks useful.
So CRIS, CERIF, new ways of using bibliometric citation APIs, discovering other systems (like grants systems), and bibliometrics products and principles are all new areas for librarians. Some institutions have engaged with this, and there was discussion in the room of an institution (Surrey?) that has employed a bibliometrician on the library staff. There is some evidence that academics are starting to get interested in bibliometrics (I've certainly seen some interest around generating h-index numbers!) and that some are using advanced techniques like Essential Science Indicators or InCites to normalize results.
Andria also listed some of the things people want from this area: they do want a research information system; they want a dashboard or simple interface that brings together all the information they need for grant bids; they want to capture data easily (pulling in bid data from databases and only entering data ONCE!); and they want integrated bibliometrics. All of these are things the library can help with!
Jeremy Upton - St Andrews - Librarians' involvement with CRIS
Jeremy made it clear from the outset that his involvement with CRIS has been positive and that it may not match everyone's experience. One of his first slides was about how CRIS represents the convergence of a number of areas that libraries support: OA, impact, research profiles, and digital communication. An important point (I think) was that although Jeremy is positive about these things, developments have come from a genuine desire within the academic community (rather than evangelising: it is based on dialogue and recognition of a need). St Andrews was one of the first institutions to use the publications database as the front end for getting data into the repository (talking to people over lunch and coffee, this now seems to be what a lot of folks are working towards). I think the main point Jeremy was making, though, was that partnerships are very important. Previous involvement with the research community at St Andrews (e.g. on digital theses) meant that they came to him when they realised they needed support implementing a CRIS, and as a result of the work, library staff have been transferred to the research office to support the REF (as they realise that librarians have skills that are needed). Along these lines, academic liaison staff are seen as an important part of the process of rolling out and supporting these systems with academics - one of the hooks for this is showing academics stats on how their material has been accessed via the system - the thought of increased citations is a powerful motivator.
There followed some brief demos:
Sue Cumberpatch - York - PURE
This is not quite live at York but has been released to academics, who are currently adding publications. Academic liaison librarians are supporting academics who have been asked to put material on at very short notice. PURE includes publications, media outputs like radio programmes, students supervised, etc. It's been populated from the repository and Web of Science (they bought the InCites API to support this), and has also brought together many sources (e.g. academics' records), as York didn't have a publications database before this. Librarians are also supporting the input of data from reference management software (e.g. EndNote). Sue had a slide showing the whole system and identifying the areas where librarians have contributed and supported - a very useful slide! Unfortunately there are not many databases that can populate PURE (just arXiv, WoS and PubMed), and the social sciences are not well covered by these. PURE can import RIS files, although the import filters can be a bit tricksy.
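As an aside on why RIS imports can be fiddly: RIS is a simple tagged text format, so everything hangs on two-letter tags and loose conventions. A minimal parser sketch in Python (the record is invented) shows the shape of the format:

```python
def parse_ris(text):
    """Parse RIS records: lines of 'TAG  - value', records end with 'ER  -'."""
    records, current = [], {}
    for line in text.splitlines():
        if len(line) < 5 or line[2:5] != "  -":
            continue  # skip blank or malformed lines
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":          # end of record
            records.append(current)
            current = {}
        else:                    # tags can repeat (e.g. multiple AU lines)
            current.setdefault(tag, []).append(value)
    return records

sample = """TY  - JOUR
AU  - Smith, J.
TI  - An Example Article
PY  - 2011
ER  -"""
print(parse_ris(sample))
```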
Andria McGrath - InCites demo
Andria demoed Thomson Reuters' InCites product, which extends what you can do with the citation data in WoS. You can create your own data sets within InCites. King's have set one up based on author affiliation in WoS, but also a more bespoke one based on documents in their CRIS. This uses the UT number of the record in the CRIS to link to the citation data. They've found that the tool acts as a real incentive for people to get their work into the CRIS (because they can then instantly access the citation data). It's useful for the REF, as you can create subject area schemes that provide data for specific UoAs, but it's also useful in other ways - you can compare your outputs to other institutions, for example, which can be useful in future bids.
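If I've understood it correctly, the linking is conceptually just a join on the UT number (the WoS accession number): each CRIS record carries a UT, and the citation data is looked up by it. A toy sketch with invented data:

```python
# Invented data: CRIS records carrying WoS UT numbers, plus a UT -> citations lookup.
cris_records = [
    {"title": "Paper A", "ut": "000123456700001"},
    {"title": "Paper B", "ut": "000123456700002"},
]
wos_citations = {"000123456700001": 17, "000123456700002": 4}

for rec in cris_records:
    rec["citations"] = wos_citations.get(rec["ut"])  # None if no WoS match
    print(rec["title"], "->", rec["citations"])
```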
Nadine Edwards - Greenwich - E-resources and Repository Manager - Survey on librarians and research
Nadine presented the results of a JIBS survey. There were no responses from dedicated repository managers - which suggests that this role is often tacked on to other roles (e.g. subject librarian). Responses showed that librarians had been consulted in fewer than 50% of CRIS acquisitions.
Half of the librarians surveyed support bibliometrics, most commonly through 1:1 support with academics. The majority of people don't have InCites. I later found this surprising, as it emerged that InCites can be used to normalize bibliometric data. Without a product like InCites it is very difficult to do this, and as bibliometric data varies greatly between fields and even from year to year, it is meaningless without the context that normalizing provides.
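The idea behind normalizing is straightforward: divide a paper's citation count by the average for its field and publication year, so that 1.0 means "cited exactly as much as expected". The hard part is obtaining reliable baselines, which is essentially what a product like InCites supplies. A sketch with invented baseline figures:

```python
# Invented baselines: average citations per paper by (field, year).
baselines = {("Neuroscience", 2008): 12.4, ("History", 2008): 1.1}

def normalized_impact(citations, field, year):
    """Citations relative to the field/year average; 1.0 = exactly average."""
    return citations / baselines[(field, year)]

# Six citations means very different things in different fields:
print(normalized_impact(6, "Neuroscience", 2008))  # ~0.48, below average
print(normalized_impact(6, "History", 2008))       # ~5.45, well above average
```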
Kate Bradbury - Cardiff - Research support
Kate listed some drivers for research evaluation: external ones like REF 2014, impact/value for money, performance/league tables, and grant applications/awards; and internal factors like promotion/pay progression, evaluating initiatives, departmental performance, and funding allocations. (I'd also add recruitment to this list.) Helping to find evidence of impact might involve searches for researchers, verifying outputs for the REF, liaison, training, and advocacy. As a skillset it helps to have knowledge of the research landscape (e.g. keeping on top of the many HEFCE REF reports!). Kate made the point that it is hard to provide normalized bibliometrics without a product like InCites.
Afternoon session
 
Jenny Delasalle - Warwick - Supporting bibliometrics
Jenny explained her organisational context and made an interesting point about how their support for the publications database had been communicated to academics: they were introducing a publications service to support academics with the university's publications database. Very clever marketing.
Jenny made the point that there are lots of routes to researchers and recommended the Web of Knowledge bibliometrics sessions run through Mimas for developing skills in this area. There was a useful slide on how librarians can support the publication process. The bit I found really valuable about this presentation was Jenny's explanation of the h-index and in particular the assertion that context is everything! Basically, bibliometrics depend so much on context that you often can't give a yes or no answer to academics' questions. The example given was: is it better to have 1 paper cited 100 times or 10 papers cited 10 times? It all depends on context.
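For reference, the h-index is easy to compute: a researcher has index h if h of their papers have at least h citations each. A few lines of Python (with made-up citation counts) show why Jenny's example has no single answer:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Jenny's example: the same "impact" gives wildly different h-indices.
print(h_index([100]))      # 1 paper cited 100 times  -> h = 1
print(h_index([10] * 10))  # 10 papers cited 10 times -> h = 10
```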
Isobel Stark and Michael Whitton - Southampton - Google Scholar: Can it be used for bibliometrics?
This was an interesting talk. First was a discussion of the pros and cons of using Google Scholar for bibliometrics:
Pros:
Easy to use and free. Wide range of material, e.g. book chapters and conference proceedings, so it's especially useful for law, the humanities and the social sciences. Metrics tend to be higher because more material is indexed, but also because there is some duplication.
Cons:
Data can be poor quality and it is not transparent where it comes from; there's a lack of de-duplication; there are big gaps in subject coverage; the indexing (esp. subject indexing) is naff, which makes it difficult to narrow results to a particular author's work; and citation matching can be flaky (it relies on algorithms).
Isobel listed some services that work well with Scholar: Quadsearch; the Scholar H-Index Calculator (a Firefox add-on); Scholarometer; and Publish or Perish.
There were also some useful references to the literature on the subject:
Bar-Ilan (2008) - Israeli highly cited researchers - differences between WoS and Scopus.
Jacsó (2008) - problems with the data, issues around transparency.
Franceschet (2009) - computer science metrics are higher, as computer scientists publish in conference proceedings.
Lee (2009) - neurosurgery - Google Scholar and Scopus very close.
Mingers (2010) - business and management - Google Scholar metrics higher.
Isobel made the point that you need to know the body of work well, as Scholar doesn't deduplicate and doesn't record institutional affiliation.
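This means any numbers you pull out of Scholar need tidying by hand. As a rough illustration, assuming you've extracted records as title/citation pairs (the data here is invented), even a crude merge on normalized titles changes the counts:

```python
import re

def norm_title(title):
    """Crude normalization: lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def dedupe(records):
    """Merge records sharing a normalized title, keeping the highest citation count."""
    merged = {}
    for rec in records:
        key = norm_title(rec["title"])
        if key not in merged or rec["citations"] > merged[key]["citations"]:
            merged[key] = rec
    return list(merged.values())

records = [
    {"title": "Bibliometrics: An Overview", "citations": 42},
    {"title": "Bibliometrics - an overview.", "citations": 40},  # duplicate entry
]
print(dedupe(records))  # one record survives, with 42 citations
```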
Southampton's bibliometrics guides are openly available at www.soton.ac.uk/library/research/bibliometrics - interestingly, the Google Scholar guide has been accessed far more than the others!
The discussion on whether to use Google Scholar was useful. I thought it was an interesting point that if someone asks for an h-index without specifying where it should come from, they probably don't understand what they are asking for - is it then fair game to give them the highest figure?
As an interesting aside, the Thomson Reuters rep in the room disclosed that they are releasing a book chapter citation database in November.
All in all an interesting day and I learned a lot - but my brain was a bit frazzled by the end :-)
Saturday, 2 July 2011