Dario Taraborelli has a reflective post on the potential impact of tagging and social software on scientific communication. His particular interest is how such approaches might “challenge traditional evaluation processes”.
> Online reference managers are extraordinary productivity tools, but it would be a mistake to take this as their primary interest for the academic community. As it is often the case for social software services, online reference managers are becoming powerful and costless solutions to collect large sets of metadata, in this case collaborative metadata on scientific literature. Taken at the individual level, such metadata (i.e. tags and ratings added by individual users) are hardly of interest, but on a large scale I suspect they will provide information capable of outperforming more traditional evaluation processes in terms of coverage, speed and efficiency. Collaborative metadata cannot offer the same guarantees as standard selection processes (insofar as they do not rely on experts’ reviews and are less immune to biases and manipulations). However, they are an interesting solution for producing evaluative representations of scientific content on a large scale. [Academic Productivity » Soft peer review? Social software and distributed scientific evaluation]
Here is a reductive summary of the ways in which he sees social software affecting evaluation.
- Semantic metadata. Aggregate tagging behaviors indicate relevance, and tags can be mined for automatic clustering. He notes the benefit of ranking tags by the number of times they have been applied.
- Popularity. The number of times an item has been bookmarked is a sign of popularity. This is a case of what I have called intentional data: data that aggregates choices people have made about things. It has emerged as a major factor in Internet services. Google’s PageRank, citation counts, and download counts are other examples. (We – OCLC that is – often rank using holdings data, for example, recognizing that the aggregate purchase choices of libraries are a useful indicator.)
- Hotness. This looks at change in popularity over time and is a way of spotting trends. Rapid change indicates ‘hotness’, as when a social bookmarking site highlights the most heavily bookmarked items of the last week or similar. (I sketch a toy example of these counts after this list.)
- Collaborative annotation. Users add reviews, ratings and other comments. However, we have no clues here about the expertise of the annotator, and Taraborelli discusses various ways in which some evaluation of these annotations might be introduced.
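To make the popularity and hotness measures a little more concrete, here is a minimal sketch in Python. The bookmark data, item identifiers and field names are all invented for illustration; this is not any particular service’s API, just the counting logic the list above describes: total bookmarks as popularity, bookmarks in the last week as hotness, and tags ranked by how often they have been applied.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical bookmark events: (item_id, tag, timestamp).
bookmarks = [
    ("paper-1", "folksonomy", datetime(2007, 5, 1)),
    ("paper-1", "evaluation", datetime(2007, 5, 28)),
    ("paper-2", "peer-review", datetime(2007, 5, 27)),
    ("paper-2", "evaluation", datetime(2007, 5, 29)),
    ("paper-2", "evaluation", datetime(2007, 5, 30)),
]

now = datetime(2007, 5, 31)
recent_window = timedelta(days=7)

# Popularity: total number of times each item has been bookmarked.
popularity = Counter(item for item, _, _ in bookmarks)

# Hotness: only bookmarks added within the recent window count.
hotness = Counter(
    item for item, _, when in bookmarks if now - when <= recent_window
)

# Semantic metadata: rank tags by how often they have been applied.
tag_counts = Counter(tag for _, tag, _ in bookmarks)

print(popularity.most_common())  # [('paper-2', 3), ('paper-1', 2)]
print(hotness.most_common())     # paper-2 leads on recent activity
print(tag_counts.most_common())  # 'evaluation' is the most applied tag
```

A real service would of course keep these counts incrementally rather than recomputing them over the whole bookmark history, but the ranking principle is the same.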
Of course, at the moment these benefits will depend on the scope and scale of an individual service, as the data does not tend to flow between services. It will be interesting to see whether services emerge that aggregate this data across providers.
I came across this interesting post in an Ariadne article about CiteULike, which the authors, Kevin Emamy and Richard Cameron, describe as a “fusion of Web-based social bookmarking services and traditional bibliographic management tools”. This intersection is an interesting one, as one wonders about the different ways in which people want to manage their personal metadata collections. It would be interesting to know more about what influences behaviors here, between managing private collections and participating in the benefits of shared collections. It is another example of the growing intersection between what once might have been personal or institutional and a shared network space.