Here is one worth considering.
The New Media Department at the University of Maine amended its promotion and tenure guidelines (all the way) back in 2007, redefining its criteria around alternative measures of recognition. Its documents identify nine alternatives to the standard 'article in a peer-reviewed journal' model. I think these measures can be applied to library science, since many areas of LS have similar accessibility and timeliness requirements for research and scholarship.
The following measures of recognition are prioritized at U of Maine in the evaluation of candidates:
-------
1. Invited / edited publications
Invitations to publish in edited electronic journals or printed magazines and books should be recognized as the kind of peer influence that in other fields would be signaled by acceptance in peer-reviewed journals.
2. Live conferences
The 2003 National Academies study concludes that conferences on new media, both face-to-face and virtual, offer a more useful and in some cases more prestigious venue for exposition than academic journals:
[The sluggishness of journal publications] is offset somewhat by a flourishing array of conferences and other forums, in both virtual and real space, that provide a sense of community and an outlet as well as feedback[11]....The prestige associated with presentations at major conferences actually makes some of them more selective than journals.[12]
New forms of conference archiving--such as archived Webcasts--add value and exposure to the research presented at conferences.
3. Citations
Citations are a valuable and versatile measure of peer influence because they may come from or point to a variety of genres, from Web sites to databases to books in print. Examples include citations in:
a. Electronic archives and recognition networks, such as publicly accessible databases.
b. Books, printed journals, and newspapers. These are easier to find now, thanks to Google Scholar, Google Print, and Amazon's "look inside the book" feature.
c. Syllabi and other pedagogical contexts. Google searches on .edu domains and citations of the author's work in syllabi from outside universities can measure the academic currency of an individual researcher or her ideas. In the sciences, readings or projects cited on a syllabus are likely to be popular textbooks, but in an emerging field like new media, such recognition is a more valid marker of relevance.
4. Download / visitor counts
Downloads and other traffic-related statistics represent a measure of influence that has gained importance in the online community recently. As a 2005 open access study[13] concludes:
Whereas the significance of citation impact is well established, access of research literature via the Web provides a new metric for measuring the impact of articles – Web download impact. Download impact is useful for at least two reasons: (1) The portion of download variance that is correlated with citation counts provides an early-days estimate of probable citation impact that can begin to be tracked from the instant an article is made Open Access and that already attains its maximum predictive power after 6 months. (2) The portion of download variance that is uncorrelated with citation counts provides a second, partly independent estimate of the impact of an article, sensitive to another form of research usage that is not reflected in citations (Kurtz 2004).
5. Impact in online discussions
Email discussion lists are the proving grounds of new media discourse. They vary greatly in tone and substance, but even the least moderated of such lists can subject their authors to rigorous--and at times withering--scrutiny.[14] Measures such as the number of list subscribers, geographic scope, the presence or absence of moderation, and the number of replies triggered by a given contribution can give a sense of the importance of each discussion list.[15]
6. Impact in the real world
While magazine columns and newspaper editorials may have little standing in traditional academic subjects, one of the strengths of new media is their relevance to a daily life that is increasingly inflected by the relentless proliferation of technologies. Even counting Google search returns on the author's name or statistically improbable phrases can be a measure of real-world impact[16]. By privileging new media research with direct effect on local or global communities, the university can remain relevant in an age where much research takes place outside the ivory tower.
8. Net-native recognition metrics
Peer-evaluated online communities may invent their own measures of member evaluation, in which case they may be relevant to a researcher who participates in those communities. Examples of such self-policing communities include Slashdot, The Pool, Open Theory, and the Distributed Learning Project. The MLA pins the responsibility for learning these new metrics on reviewers rather than the reviewed.[17] Given the mutability of such metrics, however, promotion and tenure candidates may be called upon to explain and give context to these metrics for their reviewers. Again, efforts to educate a scholar's colleagues about new media should be considered part of that scholar's research, not supplemental to it.
9. Reference letters
Letters of recommendation from outside referees are an important compensation for the irrelevance of traditional recognition venues. Nevertheless, it is insufficient merely to solicit such letters from professors tenured in new media at other universities, since so few exist. More valuable is to use the measures outlined in this document to identify pre-eminent figures in new media, or to require new media promotion and tenure candidates to identify such figures and supply evidence that they qualify according to the criteria above.
-------
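As an aside, the early-warning idea in measure 4 — that the part of download counts correlated with citations predicts eventual citation impact — is easy to sketch numerically. Here is a minimal Python illustration; the per-article figures are invented for demonstration, not real bibliometric data, and a real analysis would pull counts from a repository's logs and a citation index.

```python
# Sketch of measure 4: early download counts as a leading indicator
# of citation impact. All numbers below are invented illustrations.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-article counts: downloads in the first 6 months
# after open-access posting, and citations accumulated after 2 years.
downloads_6mo = [120, 45, 300, 80, 15, 210, 60]
citations_2yr = [10, 3, 22, 7, 1, 15, 6]

r = pearson(downloads_6mo, citations_2yr)
print(f"download/citation correlation: r = {r:.2f}")
```

The correlated portion of download variance is what Kurtz's study treats as an early estimate of citation impact; whatever variance remains uncorrelated is the second, independent usage signal the quote describes.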
I will work on modifying these and publish them in a future post.