Monday, December 14, 2009

Capturing Employee Ideas: The CDC IdeaLab

The CDC (Centers for Disease Control and Prevention) is a large government agency with 14,000 full-time, part-time, and contract employees. While headquartered in Atlanta, the CDC has a large, geographically dispersed workforce working in 19 facilities across the United States and 54 countries around the globe. Since many of the CDC's employees are geographically isolated, collaborating, communicating, and sharing information efficiently and effectively is challenging.

The solution to their challenge is what they call the IdeaLab. IdeaLab is a web-based application which CDC employees are encouraged to use to post their ideas, comment on others’ posts, and vote on the quality of the posts and comments. Submissions are attributed and authenticated in real time. Ideas are categorized according to CDC organizational goals, and related ideas are affinity-grouped using tag clouds.

A weekly “Bright Idea” highlights a submission that has broad agency interest across multiple national centers and offices. All communications are stored in a searchable archive that anyone at CDC can review at any time. IdeaLab enables CDC employees to "use their insights and experiences to help colleagues build and implement high-impact solutions to important public health challenges."

The CDC hopes that the IdeaLab will:
  • Increase connectivity of CDC employees who support multidisciplinary, evidence-based solutions
  • Promote scientific crowd-sourcing and peer-to-peer networking to build ideas, enable virtual piloting and refinement of ideas, and foster rapid implementation and adoption of the best ideas
  • Foster retention and sharing of institutional memory
  • Improve interactions among networks of knowledge
  • Improve orientation for and assimilation by new employees
  • Accelerate health impacts by increasing employee-driven innovation and improving organizational efficiency

Monday, December 07, 2009

Harvard: Computers in Hospitals Do Not Reduce Administrative or Overall Costs

Harvard researchers recently released the study Hospital Computing and the Costs and Quality of Care: A National Study, which examined computerization’s cost and quality impacts at 4,000 hospitals in the U.S. over a five-year period.

The researchers concluded that the immense cost of installing and running hospital IT systems is greater than any expected cost savings. Much of the software being written for use in clinics is aimed at administrators, not doctors, nurses and lab workers. Additionally, as currently implemented, hospital computing might modestly improve process measures of quality but does not reduce administrative or overall costs.

The researchers also found no reliable data supporting claims of cost savings or dramatic quality improvement from electronic medical records.

The researchers did acknowledge that the modest quality advantages associated with computerization were difficult to interpret since the quality scores reflect processes of care rather than outcomes. Access to more information technology may merely improve scores without actually improving care by facilitating documentation of allowable exceptions.

From the paper:
"We used a variety of analytic strategies to search for evidence that computerization might be cost-saving. In cross-sectional analyses, we examined whether more computerized hospitals had lower costs or more efficient administration in any of the 5 years. We also looked for lagged effects, that is, whether cost-savings might emerge after the implementation of computerized systems. We looked for subgroups of computer applications, as well as individual applications, that might result in savings. None of these hypotheses were borne out. Even the select group of hospitals at the cutting edge of computerization showed neither cost nor efficiency advantages. Our longitudinal analysis suggests that computerization may actually increase administrative costs, at least in the near term."
Himmelstein, D., Wright, A., & Woolhandler, S. (2009). Hospital Computing and the Costs and Quality of Care: A National Study. The American Journal of Medicine. DOI: 10.1016/j.amjmed.2009.09.004

Wednesday, December 02, 2009

What Technology? Reflections on Evolving Services EDUCAUSE Report

I just finished reading What Technology? Reflections on Evolving Services, a report from Sharon Collins and the 2009 EDUCAUSE Evolving Technologies Committee. For the first time, information resource management technologies for libraries were featured as an evolving technology.

From the report:
"The increasing primacy of highly distributed digital resources has brought disruptive change to the way libraries must approach their work to remain relevant to their parent organizations and constituencies."

"Organizing content to support research and learning is at the heart of the library's institutional role. Once limited to applying subject terms, co-locating physical materials, and producing research guides, this role has been changed by the volume and variety of online resources, which require new tools to more effectively meet the needs of users. A growing collection of technologies and tools can be used to more granularly organize, customize, and personalize the online information environment to fit professional, learning, and research activities."

"These technologies are evolving away from being strictly stand-alone tools and resources and are converging into a more interoperable, collaborative, enterprise-level information management environment — one more closely integrated with teaching, learning, research, and administrative systems. Underlying system architectures are focusing more on providing discrete services (service-oriented architecture) rather than monolithic systems, enabling more interoperable and customizable workflows."

"By combining discrete services with cloud storage and cloud-enabled applications, institutions can build collaborative work environments between libraries as well as between libraries and non-library units, both on and off their home campuses, for discovering, acquiring, describing, and managing all types of resources. Layered over this enterprise-level resource management environment, information discovery and management tools are providing individuals and workgroups with much more intuitive and productive ways to discover, manipulate, incorporate, and share information for teaching, learning, and research, allowing users to shift time from the mechanics of managing specific resources to a focus on analyzing the information itself."

Monday, November 16, 2009

NSF Funded Workshop on Scholarly Evaluation Metrics

A one-day NSF-funded workshop entitled "Scholarly Evaluation Metrics: Opportunities and Challenges" will take place at the Renaissance Washington DC Hotel on Wednesday, December 16, 2009. The 50 available seats were filled the day the workshop was announced. I would have loved to attend, given my role as a P&T chair, but I heard about it four days after the announcement.

The focus of the workshop is the future of scholarly assessment approaches, including organizational, infrastructural, and community issues. The overall goal is to:
"identify requirements for novel assessment approaches, several of which have been proposed in recent years, to become acceptable to community stakeholders including scholars, academic and research institutions, and funding agencies."
Panelists include Oren Beit-Arie (Ex Libris), Peter Binfield (PLoS ONE), Johan Bollen (Indiana University), Lorcan Dempsey (OCLC), Tony Hey (Microsoft), Jorge E. Hirsch (UCSD), Julia Lane (NSF), Michael Kurtz (Astrophysics Data Service), Don Waters (Andrew W. Mellon Foundation), Jevin West (UW/eigenfactor.org), and Jan Velterop (Concept Web Alliance).

A summary of the goal of the workshop:

The quantitative evaluation of scholarly impact and value has historically been conducted on the basis of metrics derived from citation data. For example, the well-known journal Impact Factor is defined as a mean two-year citation rate for the articles published in a particular journal. Although well established and productive, this approach is not always well suited to the fast-paced, open, and interdisciplinary nature of today's digital scholarship. Also, a consensus seems to be emerging that it would be constructive to have multiple metrics, not just one.
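To make the definition concrete, the two-year Impact Factor for a given year is the citations received that year to articles published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with made-up numbers rather than any real journal's data:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Two-year Impact Factor: mean citation rate for a journal.

    citations_this_year: citations received this year to articles
        published in the previous two years.
    items_prev_two_years: number of citable items published in the
        previous two years.
    """
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 420 citations in 2009 to articles from
# 2007-2008, a span in which it published 150 + 130 = 280 citable items.
print(round(impact_factor(420, 150 + 130), 2))  # 1.5
```

The division itself is trivial; the debates the workshop takes up are about what gets counted in the numerator and denominator, and whether a single mean is a fair summary at all.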

In recent years, significant advances have been made in this realm. First, we have seen a rapid expansion of proposed metrics to evaluate scientific impact. This expansion has been driven by interdisciplinary work in web, network and social network science, e.g. citation PageRank, the h-index, and various other social network metrics. Second, new data sets such as usage and query data, which represent aspects of scholarly dynamics other than citation, have been investigated as the basis for novel metrics. The COUNTER and MESUR projects are examples in this realm. And, third, an interest in applying Web reputation concepts in the realm of scholarly evaluation has emerged and is generally referred to as Webometrics.

A plethora of proposals, both concrete and speculative, has thus emerged to expand the toolkit available for evaluating scholarly impact, to the degree that it has become difficult to see the forest for the trees. Which of these new metrics and underlying data sets best approximate a common-sense understanding of scholarly impact? Which can best be applied to assess a particular facet of scholarly impact? Which are fit to be used in a future, fully electronic and open science environment? Which make the most sense from the perspective of those involved with the practice of evaluating scientific impact? Which are regarded as fair by scholars? Under which conditions can novel metrics become an accepted and well-understood part of the evaluation toolkit that is, for example, used in promotion and tenure decisions?
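Some of the metrics in that expanding toolkit are at least precisely defined. The h-index, for instance, is the largest h such that h of a scholar's papers have at least h citations each. A minimal sketch, using a hypothetical citation record:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk papers from most- to least-cited; the i-th paper (1-based)
    # extends the index only if it has at least i citations.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical scholar with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

A precise definition is exactly what makes such metrics gameable, of course, which is why the workshop's questions about fairness and fitness for purpose matter more than the arithmetic.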

I look forward to the Twitter stream.


Monday, November 02, 2009

Have Life Science Researchers Removed Themselves from the Mainstream Library User Population?

A report entitled Patterns of Information Use and Exchange: Case Studies of Researchers in the Life Sciences has been released by the British Library and the Research Information Network.

The report was developed by capturing the day-to-day patterns of information use in seven research teams from a wide range of disciplines. The study, undertaken over 11 months and involving 56 participants, concluded that ‘one-size-fits-all’ information and data sharing policies are not achieving "scientifically productive and cost-efficient information use in life sciences".

Skip past all of that and jump to page 47 of the report. There, they state (I'll let the report speak for itself):
"Conventional university library facilities rank low as a vehicle for accessing published information. The traditional role of professional information intermediaries has been largely replaced by direct access to online resources, with heavy reliance upon Google to identify them. Given the limitations of generic search engines such as Google, measures to reconnect researchers with IIS professionals could bring improvements in information retrieval, and benefits to the research process.

"Researchers also tend to use services that have been ‘proven’ by colleagues, or to interrogate websites they regard as authoritative and comprehensive in their field. When they use such services, researchers tend to take the results on trust: the specificity and the breadth of the information retrieved do not appear to require further enquiry.

"The result of all these developments is that many life science researchers have removed themselves from the mainstream library user population. They do not even use the library catalogue. Library-based services can replace the services researchers do use only by demonstrating that they can improve retrieval capability, and deliver results within a timeframe that corresponds to researchers’ own patterns of work. This is a significant challenge when researchers are driven by a desire for immediate online access to specific resources of interest, at a time convenient to them, and from a known and trusted source."
Overall, they found that the groups they studied use a narrow range of search engines and bibliographic resources, for three reasons:
• lack of awareness and time to achieve or build a broader suite
• the ‘comfort’ that comes from relying on a small set of familiar resources, usually endorsed by peers and colleagues, and
• the cost in time and effort needed to identify other resources, and to learn to use them effectively.

They detail what would appear to be emerging roles of the library in a researcher's information seeking patterns:
"The challenge for institutional information services is thus to develop and provide online services geared to the needs of their research groups and thereby to add value to the research process, facilitating the use of new tools, providing individuated professional support, as well as advice, training and documentation on a subject or discipline basis. Any such strategy would have to be proactive: as noted by our regenerative medicine group, researchers are reluctant to adopt new tools and services unless they know a colleague who can recommend or share knowledge about them."

"Library and information service providers in the higher education sector need to come to a clearer view of their structures and roles ... some of our groups expressed a desire for better portals and tools to identify the information resources relevant to researchers working in their domain. Some of the specialised repositories that are emerging (e.g. in neurophysiology) may help to develop such services."

"Re-establishing a lively and sustained dialogue with their research communities is a key challenge for the library and information services in many universities. Such dialogue is essential if libraries are to provide the publications, other information resources and services that their researchers need."

"Better engagement between information professionals and researchers could add to the efficiency and effectiveness of research, with specialist support facilitating the use of new tools, and providing individuated professional advice, training and documentation on a subject or discipline basis."

"Such a strategy would have to be proactive, for researchers are reluctant to adopt new tools and services unless they know a colleague who can recommend or share knowledge about them. And it would have to meet the challenge of delivering results that correspond to researchers’ patterns and timetables of work."


Wednesday, October 28, 2009

A Need for University Branded URL Shortening Services?

Twitter users are quite familiar with URL shortening tools as a way to include web links within their 140-character limit. URL shortening is the process of taking a long URL and turning it into, well, a short one. For example, instead of using the long URL http://library.osu.edu/blogs/techtips/2009/09/21/techtips-augmented-reality/ one can use the shortened URL http://tinyurl.com/ykdkmss.

Shortened URLs are extremely useful in Internet conversations such as forum threads, IM chats, etc. They are also essential in communication channels that limit messages to a specific number of characters, such as Twitter. Shortened URLs can also be useful when reading long URLs aloud to customers over the phone, when adding URLs to print materials, and when showing them on video displays or during presentations. Shortened URLs are also easier to enter into a mobile device.

There are many services that create shortened URLs, most notably TinyURL.com. OCLC was ahead of this game way back in 1995 with its PURL (Persistent Uniform Resource Locator) service. While the goal of PURL was to allow content providers to redirect users as content moves from site to site, it did so using shorter URLs.

The mechanism for resolving a shortened URL is simple: the browser is directed to the shortened-URL site, that site performs an HTTP redirect, and the browser is sent to the registered long URL. The URL shortening service maintains the master table of redirects.
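The redirect table at the heart of such a service can be sketched in a few lines. This is a toy illustration only; the sho.rt domain and the hash-based code scheme are invented for the example, not how any particular service works:

```python
import hashlib

class Shortener:
    """Toy URL shortener: holds the master table of redirects in memory."""

    def __init__(self, domain="https://sho.rt/"):
        self.domain = domain
        self.table = {}  # short code -> registered long URL

    def shorten(self, long_url):
        # Derive a stable 7-character code from the long URL.
        code = hashlib.sha1(long_url.encode()).hexdigest()[:7]
        self.table[code] = long_url
        return self.domain + code

    def resolve(self, short_url):
        # In a real service, this lookup would back an HTTP 301/302
        # redirect rather than returning the URL directly.
        code = short_url.rsplit("/", 1)[-1]
        return self.table[code]

s = Shortener()
short = s.shorten(
    "http://library.osu.edu/blogs/techtips/2009/09/21/techtips-augmented-reality/")
print(s.resolve(short))
```

The sketch also makes the fragility discussed below obvious: everything depends on that one in-memory table, so whoever operates the table controls the fate of every link that points through it.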

One problem is that all the shortened links die when such free services die, as tr.im almost did in August '09. As a result, members of the academic community who rely upon such services will eventually lose access to their shortened links, requiring them to re-enter the URLs into another service, which might also die.

Another concern with existing shortening services is that the URL domain plays an important role in identifying the authority of a Web resource. Shortened URLs lose their link and organizational information. All brand/name recognition - the authority of an organization - goes away, since the domain is hidden within the shortened URL. One needs to click on the shortened URL and visit the redirected site before discovering the domain's authority.

An example of where short-URL branding works is Flickr. Each photo page also gets a shortened Flickr URL. The domain flic.kr is owned and operated by flickr.com, so the shortening service will be as reliable as the Flickr service itself. When someone follows a flic.kr link, they know they will get to a Flickr photo page, not a redirect to a site containing malware.

It therefore makes a lot of sense for academic institutions to consider building their own URL shortening services as a way to brand and create authority with their shortened URLs. One university that has done just that is the University of Nebraska-Lincoln. Wayne State University also appears to have such a service.

I would love to see a local url.osu.edu shortening service. If I had the programming chops, I would write it over the next weekend. I know. It's easier to start a shortening service than it is to maintain it in perpetuity.

Yet creating an in-house URL shortening service not only helps to promote and support the institutional brand, it also lessens the chance that the institution's carefully crafted custom links will die if a third-party service goes down or goes out of business.

Thursday, October 08, 2009

Ohio State President Calls for Tenure Changes

Thank You President Gee!

In his annual presidential address yesterday afternoon, Ohio State University President E. Gordon Gee said it's time for faculty members to be evaluated on the quality and impact of their work.

New faculty members at Ohio State University Libraries enter as assistant professors and have six years to build up their record of scholarship, teaching and service. They receive performance evaluations every year, a fourth-year comprehensive review, and in their sixth year undergo a more rigorous examination to see if they measure up to the level of performance required for tenure and promotion to associate professor. Library faculty can then choose to undergo an additional review later in their careers to attain the rank of full professor.

President Gee said in his address that professors should be rewarded for their talents and should be encouraged to work with academic departments outside their own. Instead of using an arbitrary formula for evaluation, he would like OSU to create a system in which faculty members are judged on the quality of their work and their impact on students, their disciplines and the community.

This is exactly the position I have been advocating not only on this blog, but in discussions with my library faculty colleagues. Even though I have articulated to colleagues all the points that Gee highlighted, inertia has indeed won out.

From his prepared remarks:

Let me state this directly: We must change our recognition and reward criteria.

Since I returned to Ohio State two years ago, I have made this point a number of times. Changing the way we define scholarship, appreciate new forms of engagement, and properly reward superb teaching can be this University’s signal differential.

If we do not properly and tangibly value those activities, our efforts to extend our resources more fully into our communities will be stymied. We must take it upon ourselves to revise the centuries-old equations for promotion and tenure and develop new reward structures.

Without a doubt, this is a nettlesome issue. And I am not the first person to raise it. Ernie Boyer articulated the case nearly 20 years ago in a speech here on campus. And of course he did so very persuasively in his 1990 book, “Scholarship Reconsidered,” in which he called for “recognition that knowledge is acquired through research, through synthesis, through practice, and through teaching.”

At Ohio State, and at colleges and universities across the country, we have long had faculty committees devoted to looking at revising promotion and tenure standards. And yet, the status quo remains. Inertia is winning.

I believe we must finally speak aloud the truth: that some arbitrary volume of published papers, on some narrowly defined points of debate, is not necessarily more worthy than other activities.

Ladies and gentlemen, this University is big and strong enough to be bold enough to judge by a different standard.

We can dare to say, “No more,” to quantity over quality.

We can stop looking at the length of a vita and start measuring its true heft.

This University, finally, can be the first to say, “We judge by a different standard.” And let others follow our lead, if they wish.


I sit here thinking: what if OSU Libraries HAD acted a year ago and begun to change our criteria? Would we have been included in the President's speech as leaders of where the University should be heading? Would that have raised our visibility on campus? As a profession?

So, thank you, President Gee, for validating my position, and that of several of my colleagues. Maybe NOW we will be able to break that inertia and finally move ahead.

Wednesday, October 07, 2009

Process of Tenure and Promotion a Monster That Eats Its Young?

The approach that Kathleen Fitzpatrick has taken with her new book manuscript might be one possible path that the future of scholarly communications will take.

Ms. Fitzpatrick has made the manuscript of Planned Obsolescence: Publishing, Technology, and the Future of the Academy available online for open peer review. The 'book' is part of Media Commons Press, whose tagline is "open scholarship in open formats."

While the manuscript will still go through the traditional blind peer-review process, and the book is forthcoming from NYU Press, Fitzpatrick plans to incorporate reader comments on the online manuscript into her revisions. She asserts:
"One of the points that this text argues hardest about is the need to reform peer review for the digital age, insisting that peer review will be a more productive, more helpful, more transparent, and more effective process if conducted in the open. And so here’s the text, practicing what it preaches, available online for open review."
Not only is the process being used to write the manuscript exciting, the manuscript itself is as well. A couple of passages from the text relate to the academic rewards system:
"our institutional misunderstanding of peer review as a necessary prior indicator of “quality,” rather than as one means among many of assessing quality, dooms us to misunderstand the ways that scholars establish and maintain their reputations within the field."
"we need to remind ourselves, as Cathy Davidson has pointed out, that the materials used in a tenure review are meant in some sense to be metonymic, standing in for the “promise” of all the future work that a scholar will do (“Research”). We currently reduce such “promise” to the existence of a certain quantity of texts; we need instead to shift our focus to active scholarly engagement"
"Until institutional assumptions about how scholarly work should be assessed are changed — but moreover, until we come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion — the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works."
"I want to suggest that the time has come for us to consider whether, really, we might all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, and by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place."

Thursday, October 01, 2009

Peer Reviewers Get Worse, Not Better, Over Time

Almost all peer reviewers get worse, not better, over time.

So suggests a study presented at the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver, Canada, and reported by Nicola Jones in the October 2009 issue of Nature. In his paper "The Natural History of Peer Reviewers: The Decay of Quality," Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, reported his analysis of the scores that 84 editors at the journal had given to nearly 1,500 reviewers between 1994 and 2008.

The journal routinely has its editors rate reviews on a scale of one (unsatisfactory) to five (exceptional). The average score stayed at roughly 3.6 throughout the entire period. The surprising result, however, was how individual reviewers' scores changed over time: 93% of them went down, which was balanced by fresh reviewers who kept the average score up. The average decline was 0.04 points per year.

As quoted by Jones, Callaham said "I was hoping some would get better, and I could home in on them. But there weren't enough to study." According to Callaham, less than 1% improved at any significant rate, and even then it would take 25 years for the improvement to become valuable to the journal.

Jones also notes that Callaham agrees that a select few senior advisers are always very useful. But from his own observation, older reviewers do tend to cut corners. Young reviewers assigned a mentor also typically scored half a point better than non-mentored colleagues, but when the mentor's eye disappeared after a year or so, the advantage evaporated.

Tuesday, September 29, 2009

Faculty Rewards Systems Discourage Alternative Scholarly Communications

"As a young scholar, with a family to support and without a secured position, my main selection criteria is in practice how the chosen journal would look in my CV."
The above is a comment by Jan Kunnas in reaction to an article by Zoë Corbyn entitled A Threat to Scientific Communication that appeared in the British Times Higher Education Supplement. In fact, Kunnas' reaction is typical in most academic disciplines. One reason why junior faculty continue to focus on using traditional print publications for their scholarly communication can be summed up in Corbyn's quote of Richard Smith, former editor of the British Medical Journal:
"We have an industry in which most journals exist to perpetuate an inward-looking academic-reward system, and there is no clear purpose that has anything to do with science."
As Michael Nielsen observes in Doing Science in the Open, the continued reliance upon traditional journals is not only slowing the flow of information but also inhibiting the move towards alternative scholarly communication methods:
"The adoption of the journal system was achieved by subsidizing scientists who published their discoveries in journals. This same subsidy now inhibits the adoption of more effective technologies, because it continues to incentivize scientists to share their work in conventional journals and not in more modern media."
A University of California, Berkeley report The Influence of Academic Values on Scholarly Publication and Communication Practices indicates that faculty realize the value of experimenting and using alternative methods of scholarly communication:
"There are clear advantages to newer forms of publication that are recognized by a wider circle of scholars than those who have actually used them for publishing their own work. These include the ability to reach a larger audience, ease of access by readers, more rapid publication even when peer reviewed, the ability to search within and across texts, and the opportunity to make use of hyperlinks."
The report then concludes:
"There is presently a somewhat dichotomous situation in which electronic forms of print publications are used heavily, even nearly exclusively, by performers of research in many fields, but perceptions and realities of the reward system keep a strong adherence to conventional, high-stature print publications as the means of record for reporting research and having it evaluated institutionally."
Why do scholars continue to have a strong adherence to conventional print publications and avoid experimenting with modern methods? It comes back around to the academic-rewards system, as highlighted in Digital Scholarship in the University Tenure and Promotion Process: A Report on the Sixth Scholarly Communication Symposium at Georgetown University Library. The report quotes Stephen Nichols, professor of medieval French literature at Johns Hopkins University:
"the operative concepts here are fear and snobbery, and the disincentives are so powerful as to discourage experimentation. Young scholars are counselled that they need solid print dossiers before they attempt digital scholarship and that, even then, they are still at some risk."
Yet there can be significant fallout from perpetuating an inward-looking academic-reward system that continues to rely upon the journal while discouraging the use of alternative scholarly communication methods.

An article in the New York Times discusses the possibility that this reliance may inhibit the world’s ability to respond to the sudden emergence of a widespread illness, such as H1N1. The reason? Researchers are waiting to report their findings until they are published in traditional journals:

"Officials and experts say they have learned a lot about human swine influenza. But relatively little of that information, including periodic summaries of what has been learned since the beginning of the pandemic, has been reported and published. Some experts said researchers were waiting to publish in journals, which can take months or longer."

However, the Internet has afforded great opportunity for experimentation in alternative forms of scholarly communications, as Joseph Heller observes:
"The integration of digital technology into nearly every aspect of the daily workflow of scholars and researchers has begun to produce new channels of communication that do not fit neatly into the category of ‘journal’ or ‘pre-print’ or even ‘email communication’. These new mechanisms include blogs and wikis that spring up organically around a topic or an experiment and collaborative annotations on a web page. These advances are the natural result of scholars using digital technology in ways that they independently determine best serve their immediate needs and the needs of their community."
When compared to other disciplines, academic librarianship has more liberty to be experimental with its scholarly communication. Advancing the nature of scholarship in academic librarianship is less dependent on adhering to traditional norms. Yet the major hurdle remains faculty rewards systems that contend that only those communications that go through pre-publication anonymous peer review have any value.

Instead, academic libraries need to retool their faculty rewards systems so they more closely resemble the vision of Michael Jensen:
"For universities, the challenge will be ensuring that scholars who are making more and more of their material available online will be fairly judged in hiring and promotion decisions. It will mean being open to the widening context in which scholarship is published, and it will mean that faculty members will have to take the time to learn about — and give credit for — the new authority metrics, instead of relying on scholarly publishers to establish the importance of material for them."


Friday, September 18, 2009

The Uncertain Future of QuickDoc: UPDATE

Earlier this month, I wrote about the Uncertain Future of QuickDoc in light of the spring passing of Jay Daly. A colleague passed on the following message, which appeared on the DOCLINE-L list on Weds Sept 16, 2009:
Dear Colleagues,

I am happy to report that we have a few possibilities for the take-over of QuickDoc. Jay's daughter and son-in-law (Eowyn and Tommy Griffin) are endeavoring to find the best fit. An RFP went out to several vendors and independent programmers and the Griffins are pursuing the most promising offers. It is very important to them (and to all of us!) that whoever takes over QD will have the same dedication to the user base that Jay did. Be assured that Eowyn and Tommy are well aware of the time frame they have to work with and they are trying to come to an agreement as soon as possible. In the meantime, feel free to contact me with any questions or concerns. I will do my best to help!

Margo

Margo Coletti, AMLS, AHIP
Director
Knowledge Services
Beth Israel Deaconess Medical Center
One Deaconess Road
Boston, MA 02215

First off, few, if any, traditional commercial vendors could EVER provide the same level of personalized support Jay provided. Nothing else needs to be said.

Second, I remain perplexed that NLM hasn't stepped up (publicly, at least) and offered to take over the development of the system. What role, if any, does NLM plan to play in the future of QuickDoc? Perhaps there are legal issues that prevent NLM from taking it over. Perhaps NLM is one of the possibilities to which Margo refers. Perhaps they will be one of the vendors that responds to the RFP. Perhaps NLM simply does not have the development and support resources.

The reality is that Jay was able to manage all this by himself PLUS perform his functions at Beth Israel PLUS have a life outside of work. If Jay could do it, I'm certain NLM could.


Tuesday, September 15, 2009

The Futurity.org Research News Channel

There has been some concern expressed in science communities that coverage of science news has been reduced as newspaper publishers have had to shrink their issues to adapt to the changing economic climate and information-seeking patterns.

In an effort to keep news about new research discoveries flowing, about three dozen North American research universities belonging to the Association of American Universities (AAU) launched a research news channel named Futurity in March 2009. The channel includes news and discoveries in the categories of earth & environment, health & medicine, science & design, and society & culture.

From a University of Rochester press release:
"Futurity is a direct link to the research pipeline. If you want a glimpse of where research is today and where it's headed tomorrow, Futurity offers that in a very accessible way," said Lisa Lapin, one of Futurity's cofounders and assistant vice president for communications at Stanford University. "Today's online environment is perfectly suited for this type of direct communication. There's something very natural about universities working together to share knowledge."
"In light of the shifting news landscape, universities are looking for new ways to share important breakthroughs with the public. Futurity gives our partners an opportunity to communicate in a novel and direct way—and to remind the public why research matters," Murphy said.
"It's not often you see high-powered universities working to communicate together in such a collaborative way," says Schoenfeld, a Futurity cofounder. "That fact alone indicates the project's significance. Universities are the world's laboratories. They host the brightest minds working to answer some of today's most urgent questions. The breadth and caliber—and the collective force—of the research featured on Futurity is truly extraordinary."

Hmmm. Why didn't academic libraries think of this? The release could have / should have read:
"XXXX is a direct link to academic librarianship. If you want a glimpse of where academic libraries are today and where they're headed tomorrow, XXXX offers that in a very accessible way.....Today's online environment is perfectly suited for this type of direct communication. There's something very natural about academic libraries working together to share knowledge."

"In light of the shifting landscape, academic libraries are looking for new ways to share important breakthroughs with the public. XXXX gives our partners an opportunity to communicate in a novel and direct way—and to remind the public why academic libraries matter."
"It's not often you see high-powered academic libraries working to communicate together in such a collaborative way. That fact alone indicates the project's significance. Academic Libraries are the world's laboratories. They host the brightest minds working to answer some of today's most urgent questions. The breadth and caliber—and the collective force—of the activities featured on XXXX is truly extraordinary."

Wednesday, September 02, 2009

The Uncertain Future of QuickDoc

The library community lost a colleague this past spring with the passing of Jay Daly. Among his accomplishments was the conception and development of QuickDoc, an ILL-management system designed to interface with the National Library of Medicine's (NLM) DOCLINE system; it was used by around 1,500 medical libraries at the time of his passing. Jay was very personable and would call libraries himself to work through any unresolved problems. He will be missed.

QuickDoc filled an important gap in the service offered by NLM: a more user-friendly interface and management module. NLM was likely happy that Jay built the system, since it saved them development time. Since Jay's death, NLM has posted a note on its site that the future of QuickDoc is uncertain. In a note to MEDLIB-L, Margo Coletti, Director of Knowledge Services at Beth Israel Deaconess Medical Center, indicated that it has fallen to her to figure out what to do with QuickDoc:
I don't know what plans Jay had, if any, for the continuation of QD should he be unable to continue. I'm guessing that he did not have any idea that this would happen. His death was not at all predictable or expected. I am still trying to access Jay's files and his program. I cannot promise anything at all to you, Jay's customers. I'm not a programmer and I haven't been able to access the program, anyway. If I can access it, I'll ask someone to look at it and figure out if they can take it on. Please be patient. This will take some time.
I don't monitor the QuickDoc email list and am not up to date on current discussions. Unless QuickDoc was written as a work for hire and Beth Israel owns it, or it was willed to someone, it will likely be tied up in probate. The truth is that NLM should have taken over the development of the system long ago, since so many of its DOCLINE customers were using it.

As a community, libraries should not have to rely on innovative people like Jay to develop systems that bridge the functionality gaps in the systems we use. We continue to see such development occurring since many of us are tired of getting responses from vendors like this (a real vendor response):
"...our development folks have talked about...I'll let them know of your interest in such functionality and we'll consider it as potential enhancement to the system"
When such solutions have to be built out of necessity, we as a community need to jump in to help support them and make sure they remain viable in the event of the unforeseen.

The future of QuickDoc is indeed uncertain. Many hospital libraries will probably continue to use the QuickDoc application until problems occur that cannot be fixed locally. When an install does fail, there will be few alternative solutions. Some may end up paying substantial licensing fees for solutions that are really too sophisticated for their needs.

The thought that some will have no choice but to go back to the manual processing methods used decades ago is hard to comprehend in this day and age. It could soon be the reality.

Wednesday, August 26, 2009

Digital Dossiers Are Here!

Earlier this year, I discussed the need for the promotion and tenure process to transition to the digital dossier. I argued that with all the connectivity to online content that now exists, academia really needs to rethink the dossier paradigm and move from analog to digital. I speculated that this transition WILL happen in the next few years anyway, and suggested that perhaps academic librarians should be the first!

Well, too late. According to an article in the Chronicle of Higher Education, it HAS already happened. Beginning this fall, Kent State University faculty members have the option of submitting their dossiers electronically; digital dossiers will very likely become the only way to go within a year.
A big attraction of digital dossiers, some professors note, is that it's easier to include elements of scholarship and research that couldn't be captured as well in a binder. "You can post video and audio of your teaching. You can take pictures of art and include it," says David W. Dalton, an associate professor of instructional technology at Kent State. "You can hyperlink to things. You can really tell your story in new ways."
Kent State is not alone in this transition. The article reports that Virginia Tech and St. John's University have also gone digital. In a note to deans and department heads in May 2009, Virginia Tech's Senior Vice President and Provost Mark McNamee wrote:
the university committee is asking that dossiers now be submitted in Adobe pdf format rather than as paper. Several colleges are already using electronic dossiers and have found these to be easy to work with while saving many reams of paper. Staff involved in managing the dossier submissions will work together to develop the details in the coming months, including standardization of bookmarks and other changes that take advantage of the electronic format.
The Chronicle article reported that St. John's has saved 225,000 pieces of paper a year since its process went online in 2008. Yet, their document PAF, Years 1-2 indicates that three paper copies are still required.

The true value of the digital dossier is not in simply creating an electronic copy of the core dossier. Instead, as Provost McNamee points out, it is in taking advantage of the electronic format. With a digital dossier:
  • The only information that would be sent to an external reviewer would be a single URL. The reviewers could generate paper versions, if they prefer.
  • Network-based content (e.g. web sites, blogs) would be hyperlinked through the digital dossier. This would allow scholarly communications to be viewed and interacted with in their native formats.
  • Traditional content would be made accessible through the use of any combination of OpenURL / Link Resolvers / DOI. Most academic institutions have online access to a growing amount of published literature.
  • Content stored in institutional repositories could also be accessed.
  • Content shared on cloud services such as YouTube and SlideShare could also be linked.
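As a rough illustration of the OpenURL point above, here is a minimal sketch of how a digital dossier might generate a link-resolver URL for a traditionally published article. The resolver base URL and the citation values are hypothetical; the key names follow the OpenURL (Z39.88-2004) KEV journal format:

```python
from urllib.parse import urlencode

def openurl_for_article(resolver_base, citation):
    """Build an OpenURL (KEV) query so a reader's institutional
    link resolver can route them to the full text of a cited article."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.atitle": citation["title"],
        "rft.jtitle": citation["journal"],
        "rft.volume": citation["volume"],
        "rft.spage": citation["start_page"],
        "rft.date": citation["year"],
    }
    return resolver_base + "?" + urlencode(params)

# Hypothetical resolver and citation:
link = openurl_for_article(
    "https://resolver.example.edu/openurl",
    {"title": "The Value of Innovation", "journal": "Library Journal",
     "volume": "134", "start_page": "1", "year": "2009"},
)
print(link)
```

A dossier built this way would carry one such link per traditional citation, letting each external reviewer's own institution supply the full text.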

Friday, August 14, 2009

Start Thinking Infostreams, Not Web Pages

Ever since the first time I encountered the Web it has been all about 'pages.' Thinking of the Web in terms other than in pages is quite difficult. Even before the Web I created content using HyperCard, which was based on the concept of a stack of virtual cards.

It's time to begin thinking beyond the page.

The instantaneous and conversational discovery and delivery of newly added content is emerging as the new phase of evolution of the Web. In a post entitled Distribution...Now, John Borthwick discusses how information is increasingly being distributed and presented in real-time streams instead of dedicated Web pages.
Today there seems to be a new distribution model that is emerging. One that is based on people’s ability to publicly syndicate and distribute messages — aka content — in an open manner...what emerges out of this is a new metaphor — think streams vs. pages.

This seems like an abstract difference but I think its very important... In the initial design of the web reading and writing (editing) were given equal consideration- for fifteen years the primary metaphor of the web has been pages and reading. The metaphors we used to circumscribe this possibility set were mostly drawn from books and architecture (pages, browser, sites etc.).

Most of these metaphors were static and one way. The stream metaphor is fundamentally different. It’s dynamic, it doesn’t live very well within a page and still very much evolving.
When I talk to people about information streams, they generally state that they don't want more information; they want less. I don't blame them. All these streams (Twitter, Facebook, FriendFeed, etc.) are independent of each other. We need to constantly flip from one to another.
So, what I am really hearing from them is that they do want access to more information; they just want to be able to winnow and aggregate the streams. Again, from Borthwick:
The streams of data that constitute this now web are open, distributed, often appropriated, sometimes filtered, sometimes curated but often raw...Weeding out context out of this stream of data is vital... I believe search gets redefined in this world, as it collides with navigation... filtering becomes a critical part of this puzzle. Friendfeed is doing fascinating things with filters — allowing you to navigate and search in ways that a year ago could never have been imagined.

This is not to say that streams will replace Web pages or Web search, but they will certainly transform them. As a result, libraries need to begin thinking in terms of streams, not pages, when (a) redesigning their Web sites and (b) rethinking information literacy/education programs.
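To make "winnowing and aggregating the streams" a little less abstract, here is a minimal sketch (the item structure and the filter rule are hypothetical) that merges several independent streams into one reverse-chronological flow and applies a simple filter, such as Roy's "only those tweets with a URL":

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Item:
    source: str       # e.g. "twitter", "rss", "friendfeed"
    timestamp: int    # seconds since epoch
    text: str
    url: Optional[str] = None

def aggregate(streams: List[List[Item]],
              keep: Callable[[Item], bool] = lambda i: True) -> List[Item]:
    """Merge independent streams into one flow, newest first,
    keeping only items that pass the filter."""
    merged = [item for stream in streams for item in stream if keep(item)]
    return sorted(merged, key=lambda i: i.timestamp, reverse=True)

twitter = [Item("twitter", 100, "new post", "http://example.org/a"),
           Item("twitter", 200, "lunch!")]
rss = [Item("rss", 150, "journal TOC", "http://example.org/b")]

# One combined flow, filtered to items carrying a URL:
flow = aggregate([twitter, rss], keep=lambda i: i.url is not None)
print([i.source for i in flow])  # newest first: ['rss', 'twitter']
```

The design choice worth noting is that the filter is a function passed in by the reader, not baked into the aggregator; that is the "winnowing" half of the puzzle.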


Thursday, August 13, 2009

The Value of Innovation: New Criteria for Library Scholarship: Part Two

Part two of the two-part article I wrote for Library Journal Academic Newswire entitled The Value of Innovation: New Criteria for Library Scholarship was published today.

In this part, I provide examples of the types of activities that need to carry increased weight within the academic librarianship rewards model. I adapted the criteria from those used by the University of Maine New Media Department and from activities contained in the criteria of Trinity University Libraries.

It is important to note that the broadening of scope I am suggesting is not in any way meant to devalue traditional scholarly models.

Instead, the criteria used to evaluate the activities of academic librarians need to be better balanced so that alternative forms of scholarly communication - and scholarly activities in general - are supported and rewarded as scholarship, but not at the expense of traditional scholarship. Librarians exploring and implementing new types of services, new forms of scholarship, and alternative instructional techniques need to be properly rewarded.

Once again, the disclaimer: while I retain copyright to the work, Reed Business has a six-month exclusive license to publish the work in print or online, so I'm unable to publish it on this blog until February 2010.

Thursday, August 06, 2009

The Value of Innovation: New Criteria for Library Scholarship

Part one of a two-part article I wrote entitled The Value of Innovation: New Criteria for Library Scholarship appears in today's Library Journal Academic Newswire.

In the article, I discuss how it's time for academic libraries to embrace a new faculty rewards model that properly rewards librarians for exploring and implementing new types of services, new forms of scholarship, and alternative instructional techniques. In part two, I adapt the criteria used by the University of Maine New Media Department to propose updated review criteria for academic librarians.

While I retain copyright to the work, Reed Business has a 6-month exclusive license to publish the work in print or online. So, I'm unable to publish it on this blog until February '10.

At least it's available online - today.

Wednesday, July 29, 2009

TEDx Columbus

One of the more interesting annual technology events over the past 25 years has been the TED Conference, held annually in Long Beach, CA. The conference started out as a way to bring together people from three worlds: Technology, Entertainment, and Design. Since then its scope has broadened. Over four days, 50 speakers, including musicians, performers, and comedians, each take an 18-minute slot. There are no breakout groups.

TEDx is a program that enables schools, businesses, or libraries to enjoy a TED-like experience through events that they organize, design, and host. Events can be held in homes, workplaces, universities, or public spaces. A TEDx event may last just an hour or a full day, and can consist of a dozen people or hundreds. Some TEDx events focus solely on recorded TEDTalks, while others include short talks from live speakers.

TED supports organizers by offering a free toolset that includes advice, the right to use recorded TEDTalks, promotion on the TED site, connection to other organizers, and access to the TED brand.

Over 200 TEDx events are planned, including one in Columbus on October 20, 2009.


Wednesday, July 22, 2009

Six Soon To Be Obsolete Technologies

At my recent high school reunion I got into a conversation about technologies that we grew up with that are now obsolete. One of my favorite down time sites is devoted to obsolete computer equipment (ah, the Trash-80!). Here is a short list of six technologies that are fading, some fast:
Photographic Film. My wife's 92-year-old grandmother may have the only film camera in the family (well, I still have an analog Nikon body). The sound of the thumb wheel clicking while advancing the film and the click/whir of the autowinder are fading. A sign that the end is coming soon: Kodak's June 2009 announcement that it would stop producing the venerable Kodachrome. (See also: The Music CD.)

Stamp Vending Machines. The USPS is in the process of removing all stamp vending machines from local post offices by 2010. If one needs stamps after the post office closes, one will need to go to a pharmacy or grocery store, or make use of an online service or one of the Automated Postal Centers.

The Music CD. The current CD format will go away fairly soon as the shift from physical media to downloadable content continues. Emergent technologies such as flash drives and the CD-DVD provide alternative physical formats. Then again, we have yet to shovel all the dirt on top of analog vinyl records. (See also: Photographic Film.)

LCD Displays. Organic light-emitting diode (OLED) displays will take over from LCDs since they draw less power and can operate longer on a battery charge. OLED technology is already in use in small-screen applications such as mobile phones, MP3 players, car radios, and digital cameras.
Wrist Watches. Who really needs a watch for timekeeping when practically every electronic device has a clock? Watches are more about fashion and less about function these days.

Antivirus Software. Having antivirus software installed on individual computers is becoming increasingly ineffective. The University of Michigan has announced a cloud-computing approach called CloudAV that can identify malicious software at the network level.

What technologies do you see fading away that were once a part of your everyday life?

Tuesday, July 07, 2009

Broadcatching: Capturing "The Flow"

In his Library Journal online article entitled "The Flow" Revisited: The Personal Angle, Roy Tennant observes:
some of today's communication methods are like an undammed river -- if you're not there when it flows by, it's gone. Email, on the other hand, is like a dammed river -- it flows in, but it doesn't go anywhere until you do something with it....I wonder what tools will rise up to help cope with this -- perhaps your own little Twitter dam, with filters that allow you to choose to see what you missed from particular people while you were away? Or a filter to show only those tweets with a URL? Who knows? It's early days yet for the flow, and I'm curious to see what it brings.
This post made me reach over to my bookshelf and pull out Stewart Brand's 1987 book The Media Lab: Inventing the Future at MIT to re-read his vision of the concept of broadcatching. (Note: I often see things that are the end result of the research done during the '80s at the Media Lab and documented in this book. Lego Mindstorms, custom portals, the personalized Internet, and virtual reality games were all developed, envisioned, conceptualized, or influenced by Media Lab research.)

Brand describes broadcatching as an application to assist content selection (hunting for specific information) and viewing (grazing a single unfiltered flow or browsing multiple flows with no particular content in mind). Station-selector buttons on a car radio are a kind of broadcatch device. While they are customizable, they can only catch a specific source and not specific content.

While I was familiar with Brand's use of the term, a little research reveals that Fen Labalme is credited with coining 'broadcatch' back in 1983, referring to an automated agent that aggregates and filters content from multiple sources for presentation to an individual user. His definition:

To understand the concept described by this term, first take a look at traditional broadcast media (such as radio, TV, magazines and newspapers) and note that they generally consist of a one publisher to many consumers flow of information, and as such rely upon common opinions and beliefs, as each published issue is targeted for a mass audience.

On the other hand, Broadcatch connotes a many to one gathering of information, using a network of personalized agents to ideally sift through all available information and return just that which is of possible current interest from trusted, authenticatable sources and in a form and style amenable to the user. Broadcatch is designed to thrive in a diversity of opinions and provide a mechanism that effectively automates word of mouth.

I have to reiterate, this was thought up back in the early 1980's.

An ideal 'broadcatch' agent would grab my RSS, Twitter, and Facebook flows and be smart enough to know which items are important to me right now based on my recent information seeking/gathering patterns. It would allow those items to flow through the dam while holding back the rest until I manually open the gates. But, as Roy comments, "teaching it what is important is likely the hard part."
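As a rough sketch of that dam, hedged heavily since scoring "what is important" is exactly the hard part Roy identifies, a broadcatch agent might score each incoming item against a profile of recent interests and only let high-scoring items through, holding the rest behind the dam. All names, the interest profile, and the scoring rule here are hypothetical:

```python
from typing import Dict, List, Tuple

def score(item_text: str, interests: Dict[str, float]) -> float:
    """Score an item by summing the weights of interest terms it mentions.
    A real agent would learn the profile from recent seeking behavior;
    here it is just a hand-built dict."""
    words = item_text.lower().split()
    return sum(weight for term, weight in interests.items() if term in words)

def broadcatch(items: List[str], interests: Dict[str, float],
               threshold: float = 1.0) -> Tuple[List[str], List[str]]:
    """Let high-scoring items flow through the dam; hold back the rest
    until the user opens the gates manually."""
    through, held = [], []
    for item in items:
        (through if score(item, interests) >= threshold else held).append(item)
    return through, held

interests = {"openurl": 1.5, "tenure": 1.0, "lunch": 0.1}
items = ["new openurl resolver released",
         "what I had for lunch today",
         "rethinking tenure criteria"]
through, held = broadcatch(items, interests)
print(through)  # ['new openurl resolver released', 'rethinking tenure criteria']
print(held)     # ['what I had for lunch today']
```

The keyword-matching here is a stand-in; the interesting open problem is replacing that hand-built dict with a profile inferred automatically from what the user has recently read and searched.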

Friday, June 26, 2009

A Model for Alternative Scholarly Recognition Measures in Academic Librarianship?

One of my soap box topics that regular readers are familiar with is the need for academic library promotion and tenure committees to update their criteria to be more accepting of scholarly contributions that appear in alternative formats, specifically in new media. The challenge to date for our organization, and I suspect most others, has been finding an existing model to build from.

Here is one worth considering.

The New Media Department at The University of Maine amended its promotion and tenure guidelines (all the way) back in 2007 with redefined criteria in the form of alternative recognition measures. The documents identify nine alternatives to the standard 'article in a peer-reviewed journal' model. I think the measures can be applied to library science, since many aspects of LS have similar accessibility and timeliness requirements for research/scholarship.

The following measures of recognition are prioritized at U of Maine in the evaluation of candidates (bolding is mine for emphasis):

-------

1. Invited / edited publications

Invitations to publish in edited electronic journals or printed magazines and books should be recognized as the kind of peer influence that in other fields would be signaled by acceptance in peer-reviewed journals.

2. Live conferences

The 2003 National Academies study concludes that conferences on new media, both face-to-face and virtual, offer a more useful and in some cases more prestigious venue for exposition than academic journals:

[The sluggishness of journal publications] is offset somewhat by a flourishing array of conferences and other forums, in both virtual and real space, that provide a sense of community and an outlet as well as feedback[11]....The prestige associated with presentations at major conferences actually makes some of them more selective than journals.[12]

New forms of conference archiving--such as archived Webcasts--add value and exposure to the research presented at conferences.

3. Citations

Citations are a valuable and versatile measure of peer influence because they may come from or point to a variety of genres, from Web sites to databases to books in print. Examples include citations in:

a. Electronic archives and recognition networks, such as the publicly accessible databases.

b. Books, printed journals, and newspapers. These are easier to find now, thanks to Google Scholar, Google Print, and Amazon's "look inside the book" feature.

c. Syllabi and other pedagogical contexts. Google searches on .edu domains and citations of the author's work in syllabi from outside universities can measure the academic currency of an individual researcher or her ideas. In the sciences, readings or projects cited on a syllabus are likely to be popular textbooks, but in an emerging field like new media, such recognition is a more valid marker of relevance.

4. Download / visitor counts

Downloads and other traffic-related statistics represent a measure of influence that has gained importance in the online community recently. As a 2005 open access study[13] concludes:

Whereas the significance of citation impact is well established, access of research literature via the Web provides a new metric for measuring the impact of articles – Web download impact. Download impact is useful for at least two reasons: (1) The portion of download variance that is correlated with citation counts provides an early-days estimate of probable citation impact that can begin to be tracked from the instant an article is made Open Access and that already attains its maximum predictive power after 6 months. (2) The portion of download variance that is uncorrelated with citation counts provides a second, partly independent estimate of the impact of an article, sensitive to another form of research usage that is not reflected in citations (Kurtz 2004).

5. Impact in online discussions

Email discussion lists are the proving grounds of new media discourse. They vary greatly in tone and substance, but even the least moderated of such lists can subject their authors to rigorous--and at times withering--scrutiny.[14] Measures such as the number of list subscribers, geographic scope, the presence or absence of moderation, and the number of replies triggered by a given contribution can give a sense of the importance of each discussion list.[15]

6. Impact in the real world

While magazine columns and newspaper editorials may have little standing in traditional academic subjects, one of the strengths of new media are their relevance to a daily life that is increasingly inflected by the relentless proliferation of technologies. Even counting Google search returns on the author's name or statistically improbable phrases can be a measure of real-world impact[16]. By privileging new media research with direct effect on local or global communities, the university can remain relevant in an age where much research takes place outside the ivory tower.

8. Net-native recognition metrics

Peer-evaluated online communities may invent their own measures of member evaluation, in which case they may be relevant to a researcher who participates in those communities. Examples of such self-policing communities include Slashdot, The Pool, Open Theory, and the Distributed Learning Project. The MLA pins the responsibility for learning these new metrics on reviewers rather than the reviewed.[17] Given the mutability of such metrics, however, promotion and tenure candidates may be called upon to explain and give context to these metrics for their reviewers. Again, efforts to educate a scholar's colleagues about new media should be considered part of that scholar's research, not supplemental to it.

9. Reference letters

Letters of recommendation from outside referees are an important compensation for the irrelevance of traditional recognition venues. Nevertheless, it is insufficient merely to solicit such letters from professors tenured in new media at other universities, since so few exist. More valuable is to use the measures outlined in this document to identify pre-eminent figures in new media, or to require new media promotion and tenure candidates to identify such figures and supply evidence that they qualify according to the criteria above.

-------

I will work on modifying these and publish them in a future post.


Friday, June 19, 2009

OSUL2013: Agile Organizational Development?

Over the past year I have been involved in a grassroots effort to create a roadmap for organizational change at The Ohio State University Libraries, known as OSUL2013. The purpose of the effort is to help the organization adapt to the changing information, educational, and environmental landscapes.

What has been unique about this process is that while it has been supported by library administration, it has been entirely motivated and guided by Libraries staff. For its part, the administration has been actively encouraging staff to participate. I have been impressed by the growing participation, considering that the timing of the initiative coincided with the Library's move back into its newly renovated building.

The process began with a full-day workshop on April 1, 2008, attended by 35 OSUL faculty and staff, including myself. The primary outcome of that event was the creation of five task forces, which worked through the summer of 2008 to investigate and create reports on their topics.
Each of the reports defines its topic, provides case studies, outlines a Blue Sky vision, and identifies "quick hits." The quick hits were important since they were seen as small-scale projects and efforts that could help lead the organization toward each Blue Sky vision.

To continue the process, the task forces recommended an Implementation Phase. This phase began late last fall and concluded at the end of May. The Implementation Phase focused on a handful of quick hit projects and was managed by an Implementation Team, on which I served. An Implementation Community (made up of nearly 30 staff volunteers) was created to serve as a very important support unit. It acted as a sounding board and a source of new ideas. Community members were also active participants in several brainstorming sessions, and many also served on project teams.

The Implementation Phase final report (found here) summarizes the process and the projects, and recommends that a new group be identified to continue the process. It calls for the creation of a working group starting in August '09 and continuing through the fiscal year. That group would have several roles:
  • serve as a peer source of information and support for staff and faculty interested in pursuing innovative projects;
  • test and promote an appreciative model for group-based work based on facilitation, encouragement, and constructive feedback;
  • assess the progress toward the goals;
  • propose the next phase of the 2013 process
The activities associated with the next phase of OSUL2013 would be built from five high-level goals:
  • Library as Commons: Provide physical and virtual space for collaboration and communication
  • Empowered Staff and Focused Leadership: Encourage staff and faculty to take initiative to assess and to innovate
  • User-Centered Organization
  • One Library System: Facilitate and encourage communication and collaboration between individuals and units
  • Leadership in Scholarly Communications: Lead innovative efforts in the creation, distribution, and management of scholarship in all formats.
So, where does the title of this post come into play? As I was finishing up this summary, I began to realize that the process we have been unknowingly using could be characterized as agile organizational development. We have been moving from phase to phase with only a half-baked notion of what we were going to do next. Each phase was an iteration on the previous one, with mid-process corrections being made based on our experiences and feedback.

While our report recommends that the next phase should last until next June, I have been thinking as I write this that it should last only until the end of the calendar year. We could squeeze two iterative phases in during that time period rather than just one.

Participating in OSUL2013 has been an interesting experience for me, considering that I work in the Health Sciences Library, which is technically not a part of OSUL. Not only has the OSUL organization embraced my involvement, but HSL leadership has been very supportive and encouraging. I will continue to be involved in the process going forward, but I haven't decided at what level of participation.

Friday, June 12, 2009

Academic Mobbing: Dirty Politics or Animal Instincts?

An article entitled "'Mobbing' Can Damage More Than Careers, Professors Are Told at Conference" appeared in the June 12th e-edition of the Chronicle of Higher Education. From the title alone, I thought the article was going to be about flash mobs, and I became very curious how they could affect a professor's career.

In fact, the article was about the phenomenon of 'academic' mobbing. Mobbing in this sense refers to members of a department ganging up to isolate or embarrass a colleague. I was able to uncover web sites, blogs, various articles, and books on the topic. Yes, mobbing also happens in libraries.

The New York Times and the Chronicle of Higher Ed recently reported on how Oxford professor Ruth Padel effectively engineered the mobbing of Nobel laureate Derek Walcott when they were competing for a coveted Oxford poetry professorship. Southern Illinois University Carbondale has been criticized in the past for having a culture where academic mobbing occurs.

From my perspective, academic mobbing is simply intelligent academics playing dirty politics or being reduced down to their animal instincts:
When songbirds perceive some sign of danger — a roosting owl, a hawk, a neighborhood cat — a group of them will often do something bizarre: fly toward the threat. When they reach the enemy, they will swoop down on it again and again, jeering and making a racket, which draws still more birds to the assault. The birds seldom actually touch their target ... The barrage simply continues until the intruder sulks away. Scientists call this behavior "mobbing."
The June 12th Chronicle article highlights the work of Kenneth Westhues, a professor of sociology at the University of Waterloo, who discussed his studies of academic mobbing with The Chronicle in 2006 and created a list of 16 indicators of mobbing.
  • The first stage of a mobbing is a period of increasing social isolation. At this point the 'target' is left off of committees or not invited to certain meetings. Colleagues begin to roll their eyes at them during meetings, and there is a growing sense that more people dislike them than they once thought.
  • The next stage is one of petty harassment. Administrative requests are delayed or misplaced. The target is made to follow the rules and processes while others are able to get around them. A research grant is squelched.
  • The third stage is the "critical incident," when significant accusations are made: a charge of plagiarism, a surprise audit. In the eyes of the mob, the critical incident demands swift administrative action, and it is used to reinforce what they have always suspected.
  • The next stage is adjudication. At this point, the mobbing escalates to the administrative level, where it is either legitimized or stopped short.

And then, Mr. Westhues says, chances are the 'target' leaves. Whether they are dismissed or fully reinstated, whether it is due to exhaustion or illness, they cut their losses and get out.


Thursday, June 04, 2009

Do Conference Bloggers and Tweeters Need to Follow Media Rules?

Science Insider reports that Cold Spring Harbor Laboratory (CSHL) is amending its meetings policy so that participants who plan on blogging and tweeting must also adhere to the rules set for members of the media. The article states that bloggers and tweeters, in addition to the media, will need to notify CSHL ahead of time if they plan to cover the meeting and must receive permission from the speaker or poster author before reporting on what's presented.

The post highlights the case of Daniel MacArthur from the Wellcome Trust Sanger Institute in Cambridge, United Kingdom. MacArthur wrote several on-the-spot blog posts at The Biology of Genomes meeting covering advances being discussed by the participants. The news service Genomeweb complained.

When should someone feel free to blog or tweet? Andrew Maynard posts some guidelines:

In general: Irrespective of the setting, I tend to ask whether the information being presented is confidential, whether it is sensitive in any way, and whether others would benefit from reading about it on Twitter or 2020science. There has been at least one occasion where I decided not to live-tweet from a public meeting because I thought it would embarrass the speakers unnecessarily. There have been other occasions where I have live tweeted to provide people not at the meeting a sense of what someone is saying, as they say it.

This only applies to formal presentations and public comments. Publicly commenting on private conversations is absolutely out as far as I’m concerned, and I will only write about side conversations if the person I’m talking to knows my intentions beforehand.

Invitation-only meetings: Definitely no live tweeting, and no blogging unless express permission is given.

Meetings with clearly stated reporting limitations: Generally, no live tweeting, and abiding by the rules when it comes to blogging.

Expert presentation & discussion of non-peer reviewed data. If the aim of the meeting is to seriously assess and discuss someone’s unpublished research, I would hesitate to live tweet. I might blog - but only if it seemed appropriate given the state and significance of the research.

Open conferences (i.e. anyone who pays can attend) where researchers are reviewing the state of knowledge, presenting published data, or clearly think they are the bees knees and everyone should know it. These I see as fair game for live tweeting and blogging - without the permission of the speaker.

Public meetings, where anyone can attend and there is no entrance fee. Open season as far as tweeting and blogging go.


Tuesday, May 26, 2009

Assistance in Evaluating Digital Scholarship?

As a follow up to my Faculty Review System FAIL post, an article by Scott Jaschik entitled Tenure in a Digital Age appeared in today's Inside HigherEd.

The article highlights a consortium called the Humanities, Arts, Science and Technology Advanced Collaboratory, which has produced a draft guide (on a personal Web site, not the consortium's) that offers guidance for departments on approaches used by various colleges to evaluate digital scholarship, resources available to scholars wanting to get a take on some project, and policies that could be adopted to assure the fair treatment of those coming up for review.
One reason for the new effort is that shifts in publishing may make it impossible for a growing number of academics to submit traditional tenure dossiers. With many university presses in financial trouble and others -- notably the University of Michigan Press -- turning to electronic publishing for monographs, there will be fewer possibilities for someone to be published in the traditional print form that was once the norm for tenure.
The article notes that the shift that is occurring isn't just about digital scholarship, but about tenure committees being forced to learn much more about candidates and how their work is evaluated than has been the norm.
So many tenure decisions have been made on the basis of assuming that a university press has a sound peer review system -- and one that can be relied upon -- that tenure has been outsourced, some say. Now, new models of scholarship are forcing these committees to closely consider how they know a candidate is producing good work.
The article also echoed my comments about how activities that do not fit neatly into teaching, research, or services get categorized as service:
Many tenure review procedures are based on an assumption that a junior professor's work can be divided easily into teaching, research and service. Feal noted that one of the exciting aspects of the new digital projects being created is that they advance scholarship and create teaching tools at the same time. Professors shouldn't be forced to pick between one category and another. Similarly, those involved in this project say that some college departments just categorize anything digital as service, a solution seen as unsatisfactory because many of these projects are in fact focused on scholarship and teaching, and because service typically doesn't count for much in tenure reviews.
I haven't had the time to look over the HASTAC documents at length, since I wanted to communicate their availability quickly. I am very interested in seeing how they could be applied or modified for our library faculty.

Friday, May 22, 2009

Faculty Review System FAIL?

One of the issues that our AP&T committee is trying to get a handle on is how we can make sure that our criteria reflect the current and changing models of scholarship occurring both within the library profession and the academy.

In Talk About Talking About New Models of Scholarly Communication, Karla Hahn, Director of the Office of Scholarly Communications at the Association of Research Libraries (ARL), helps to define scholarly communication. It is:
... knowledge transmission—even if it is simply passing information from one brain to another through speech, e-mail, submission to a database, the display of an image or video, or through a formal writing and printing process. In contrast, scholarly publishing is a subset of communication activities mediated through the use of a durable medium to fix knowledge.
The traditional definition of scholarly communication is the publication of monographs and journals. This has served as a useful model since traditional publication can be clearly distinguished from other communication practices. Our library faculty as a whole conforms to this conventional definition of scholarly communication. Even so, a growing number of our faculty feel that new models of scholarly communication are not only just as valid as the traditional forms, but are critical for real-time knowledge transmission.

The obstacle to pursuing new forms of scholarly communication appears to be a rewards system that still places a major emphasis on traditional publishing models. Yes, we need to be careful not to rebalance priorities in a way that devalues traditional scholarship. We also need to strike a balance so that alternative forms of scholarly communication - scholarly activities in general - are supported and rewarded as scholarship, but not at the expense of traditional scholarship.

The University of California's Office of Scholarly Communication issued a paper entitled Faculty Attitudes and Behaviors Regarding Scholarly Communication. They observed:
The majority’s lack of motivation to alter behavior appears to be connected... to the tradition-bound tenure and review process... the current tenure and promotion system drives them to focus on conventional publishing activities...Assistant Professors show consistently more skepticism about the ability of tenure and promotion processes to keep pace with or foster new forms of scholarly communication.
The groundswell for changes in how scholarly activities are defined and evaluated is growing. The thinking is that scholarly activities must not be judged on traditional review and distribution methods but according to (appropriate) standards for significance, excellence, and impact. The problem is that no one seems able to agree on what those standards should be. At what price?

Boyce D. Watkins, an assistant professor of finance at Syracuse University, is reacting to Syracuse's decision to deny him tenure and let him go by saying he had been led to believe that the university's standards for judging faculty publications had changed, putting less emphasis on refereed journals. Watkins points to SU Chancellor Cantor's efforts to encourage more faculty engagement with the public, and he interpreted Chancellor Cantor's call for "scholarship in action" as giving him the green light to focus on publishing and publicizing his work in the mainstream media.

As a growing number of scholarly activities depart from established academic patterns, review committees simply do not know what to do with them. Since they don't fit neatly into one of the three legs of facultyness - teaching, research, and service - such activities are commonly classified as service. This generic classification is, of course, problematic, since all emerging forms of scholarly communication are being dumped into this category and assessed as service, not as scholarship.

When it comes to review time, as Professor Watkins experienced, innovative scholarly activities are not given the weight they deserve. We simply fall back into a practice of classifying activities under one of the three legs, even as those classifications no longer make sense. We then revert to thinking of traditional communication models as the only ones that really have any value.

As UC observes, committees need to begin treating value as intrinsic in the work and its use, rather than predetermined by how it is classified. Review committees need to remove this presumption from the evaluation process in order not to prejudge what they are evaluating. Scholarship other than publication needs to be assigned a greater value. The teaching and service categories need to be renovated to make contributions in these categories a potential basis for tenure or promotion.

After all, where will the motivation to become more innovative scholars, to be involved in more interdisciplinary endeavors, or to engage in new activities come from if faculty must conform to rewards systems that reflect a bygone era? (see: Copyright Law) Faculty review cultures and processes which have been perfected for traditional scholarship need to be replaced with structures optimized for digital scholarship.

Are we really trying to fit a square peg into a round hole here? Can we really expect to break out of the existing paradigm of how we define and assess scholarly activities if we limit our thinking to fitting into the existing system?