Monday, December 31, 2007

Experiencing Analog-to-Digital Transition Pains?

In 2003, I bought a new GM vehicle. One of the options I added was the OnStar system. The problem is that the technology used on the vehicle relied on the 24-year-old analog cell phone network, established back in the day of 'luggable' cell phones. As of Jan 1st, 2008, the OnStar system in my vehicle will no longer work due to the FCC reallocating the frequency spectrum used by the legacy analog system. All services which rely upon analog cell phone systems must be shut down by Feb '08. Others affected by this changeover are those who subscribe to wireless security systems.

Everyone who relies on over-the-air (terrestrial) signals will also be affected by the FCC-mandated analog-to-digital transition of legacy analog television signals occurring Feb '09. One would think there are few households in this age of cable and satellite TV that still rely upon over-the-air signals. Some estimates place the number of US households still relying upon over-the-air signals at around 15% (I still do as a backup during rain fade periods). I am also surprised by the number of people I talk to who still do not know what this transition may mean to them. Over the holidays, I must have explained the digital TV change a half dozen times to those in the 'older' generation.

No matter how much prior notice is given to the consumer, the transition from analog to digital technologies is painful to many. There are new concepts to be learned and explained. There are new hardware and software investments to be made. Many wonder why the transition is needed to begin with when what they have used for decades seems to work just fine.

If one were to draw a comparison between changes in consumer technology and those in library technology, one begins to understand why the transition from an analog librarianship to a digital librarianship has also been painful to many in the profession. There are new concepts to be learned and explained. There are new hardware and software investments to be made. Many wonder why the transition is needed to begin with when what they have used for decades seems to work just fine.

Wednesday, December 19, 2007

ISI Impact Factor Data Under Fire (again)

An editorial appearing in the Journal of Cell Biology by Mike Rossner, Heather Van Epps, and Emma Hill, entitled Show me the data, reports the authors' inability to verify published impact factors using data provided by ISI. While it is common to read about the quirks of the impact factor, the authors question the underlying validity of the data used to calculate those impact factors and therefore the validity of the metrics that are published using it.

The authors, from The Rockefeller University Press, The Journal of Experimental Medicine, and The Journal of Cell Biology, highlight their unsuccessful efforts to replicate ISI's published impact factors for the journals they serve as directors and editors. They reveal numerous and serious errors in several data sets provided by ISI.
When we requested the database used to calculate the published impact factors (i.e., including the erroneous records), Thomson Scientific sent us a second database. But these data still did not match the published impact factor data. This database appeared to have been assembled in an ad hoc manner to create a facsimile of the published data that might appease us. It did not.

When we examined the data in the Thomson Scientific database, two things quickly became evident: first, there were numerous incorrect article-type designations. Many articles that we consider "front matter" were included in the denominator. This was true for all the journals we examined. Second, the numbers did not add up. The total number of citations for each journal was substantially fewer than the number published on the Thomson Scientific, Journal Citation Reports (JCR) website (http://portal.isiknowledge.com/, subscription required). The difference in citation numbers was as high as 19% for a given journal, and the impact factor rankings of several journals were affected when the calculation was done using the purchased data (data not shown due to restrictions of the license agreement with Thomson Scientific).

It became clear that Thomson Scientific could not or (for some as yet unexplained reason) would not sell us the data used to calculate their published impact factor. If an author is unable to produce original data to verify a figure in one of our papers, we revoke the acceptance of the paper. We hope this account will convince some scientists and funding organizations to revoke their acceptance of impact factors as an accurate representation of the quality—or impact—of a paper published in a given journal.
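
For context, the two-year impact factor that JCR publishes is a simple ratio, which is why both kinds of error the editors describe matter:

```latex
\mathrm{IF}_{2007} =
  \frac{\text{citations received in 2007 by items published in 2005 and 2006}}
       {\text{citable items (articles and reviews) published in 2005 and 2006}}
```

Front matter misclassified as citable inflates the denominator and deflates the impact factor; missing citations shrink the numerator. Either error distorts the published figure.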


If the problems that these editors encountered are indeed real and widespread, the qualitative and evaluative decisions that rely in part on ISI's published impact factors (library purchasing; promotion and tenure; hiring; where to submit manuscripts) could now be considered suspect.

Friday, December 14, 2007

Google's 'knol' Project a New Form of Scholarship?

Earlier this week, Google started to invite a selected group of people to try a new, 'free' tool that they are developing called "Knol."

Knol appears to be a more formalized and authoritative version of Wikipedia. The key idea behind the project is to highlight the authority of content authors. Their idea is not lost on those in libraryland: knowing who wrote what will significantly help individuals make better use of web-published content.

The goal is to create 'knols' covering a wide variety of topics, ranging from scientific concepts to historical events to how-to entries. Readers will be able to submit comments and questions, make edits, and add content. Readers will also be able to rate a knol or write a review of it. Knols will include references and links to additional information. There are also plans to rank knols for quality when they appear in Google search results.

Google will not serve as an editor and all editorial responsibilities and content will rest with the authors. Google will not ask for any exclusivity on any of this content and will make that content available to any other search engine.

Since this project is currently vaporware and only screenshots are available (only those select few have access), I have not had a chance to see it live. (Hey, guys, how about an invite!?) I am also curious to read any of the licensing agreements to see who actually 'owns' the content. Chances are it is not the author.

Thursday, December 13, 2007

Biomedical Digital Libraries Moves to Open Journal Systems

I just received a message from Marcus Banks, editor for Biomedical Digital Libraries (BDL).

In October, the journal amicably ended its relationship with BioMed Central. BMC's author payment model had become untenable for most of the authors wishing to publish in the journal. While the BMC site still exists, it can no longer accept submissions.

BDL is in the process of transferring information about the journal to the Open Journal Systems (OJS) platform, which will enable the journal to accept submissions at no cost to authors. The new site should be available in January.

Open Journal Systems (OJS) is a journal management and publishing system that has been developed by the Public Knowledge Project through its federally funded efforts to expand and improve access to research. It operates through a partnership among the Faculty of Education at the University of British Columbia, the Simon Fraser University Library, the School of Education at Stanford University, and the Canadian Centre for Studies in Publishing at Simon Fraser University. OJS assists with every stage of the refereed publishing process, from submission through to online publication and indexing.

OJS Features

  1. OJS is installed locally and locally controlled.
  2. Editors configure requirements, sections, review process, etc.
  3. Online submission and management of all content.
  4. Subscription module with delayed open access options.
  5. Comprehensive indexing of content part of global system.
  6. Reading Tools for content, based on field and editors' choice.
  7. Email notification and commenting ability for readers.
  8. Complete context-sensitive online Help support.

Wednesday, December 12, 2007

Capturing Blogs Citing Peer-Reviewed Research

About a month ago, I discussed my involvement as a member of the BPR3 (Bloggers for Peer-Reviewed Research Reporting) team. The primary goal of BPR3 is to create a service which allows researchers to discover blog posts about peer-reviewed research. It offers a way to distinguish serious posts from general news and what the family pet did last night.

Dave Munger, the team lead, is in the process of filling out the paperwork to establish the organization as a non-profit. As part of the legal process, he describes the purpose of the organization:
  • To establish standards for online discussion, cataloging, and citation of peer-reviewed research;
  • To improve the visibility and status of weblogs and other sites that thoughtfully discuss peer-reviewed research;
  • To produce and manage a central web site where readers can locate weblog posts, online discussions, journal articles, and other information about academic research, and which other institutions can use to provide other services to the public and the research community;
  • To provide a forum for researchers and the public to discuss and collaborate on research projects;
  • To promote the discussion and dissemination of peer-reviewed research;
  • To educate and inform the public about academic research;
  • To engage in other activities related to the discussion, dissemination, and education about peer-reviewed research.
There are indeed other services out there exploring similar approaches, such as Postgenomic (supported by the Nature Publishing Group). There has also been discussion on how to include citation metadata when creating blog posts. The fact that several groups are looking at ways to capture this information only adds support to the argument that blogging can add value to academic discourse, when done with a scholarly approach.

The BPR3 service is currently in beta and the developers are working hard to make sure it is stable before the formal rollout, scheduled for very early in '08.

Saturday, December 08, 2007

Android and Libraries

Android is an open and free mobile application platform developed by a group of more than 30 technology and mobile companies that make up the Open Handset Alliance.

Android was built to enable developers to create mobile applications that take full advantage of the features of a mobile device. As a result, an application built on the platform can call upon any of the phone's core functionality, such as voice calling, text messaging, or built-in audio/video. A developer can combine information from the web with data on the device, such as contacts, calendar, or GPS location, to provide a customized user experience. Since Android will be open source, it can be extended to incorporate new technologies as they emerge.

I think that Android will bring us into an interesting future of mobile communications. Anyone will be able to develop and install applications and customize their mobile device to do what they want it to do.

The potential impact of these developments on libraries will be interesting to watch. I had high hopes when I saw Robin Ashford's post Google's Android and Libraries in Academic Libraries. While she brings up many good points, they apply to mobile devices in general and are not specific to Android.

As an example of the possibilities, imagine an Android WorldCat client. With GPS built into the device, one could search from wherever one happens to be and have the results mapped to a nearby library, much in the way WorldCat now identifies locations by IP address. Mash those results up with Google Maps and one can create a complete resource discovery experience that navigates the customer directly to the library.
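
A minimal sketch of that client's core logic, assuming a hypothetical location-aware search endpoint (WorldCat exposes no such public API; every URL and field name below is illustrative):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint -- illustrative only, not a real WorldCat service.
SEARCH_URL = "https://worldcat.example.org/search"

def nearest_holdings(isbn, lat, lon, limit=5):
    """Find libraries near (lat, lon) holding the given ISBN."""
    query = urllib.parse.urlencode(
        {"isbn": isbn, "lat": lat, "lon": lon, "limit": limit})
    with urllib.request.urlopen(f"{SEARCH_URL}?{query}") as resp:
        holdings = json.load(resp)  # assumed: [{"library", "lat", "lon"}, ...]
    # Hand each hit to a mapping service for turn-by-turn navigation.
    return [(h["library"],
             f"https://maps.google.com/maps?daddr={h['lat']},{h['lon']}")
            for h in holdings]

for name, directions in nearest_holdings("9780262062794", 41.08, -81.51):
    print(name, directions)
```

The phone supplies the coordinates; the mapping step is what turns a holdings list into door-to-door navigation.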

Developments like Android also mean that the move towards a more service-oriented approach in the design of library systems becomes even more critical. Instead of creating systems which work only through a web interface, we may need to build standalone applications that use a mobile interface on the customer's side but interact with our existing systems and databases on the back end. Each library system could (should) have an Android application that is customized to access local content.

Monday, December 03, 2007

Give an Open Source Gift

The folks over at Make have made their 2007 open source hardware gift guide available.

Open source hardware refers to computer and electronic hardware that is designed in the same fashion as open source software. The hardware is available under a license that permits customers to re-engineer and improve the hardware and then redistribute it in modified or unmodified form. A detailed article is available. The kits and projects include open source 3D printers, TV-turn-off devices, iPod chargers, music players, and a tube-based micro guitar amp.

There are many interesting open hardware projects out there that one should be paying attention to, such as Open OEM, which is trying to create an open computer where all of the specifications are available and there are no restrictions upon its use.

OpenBook is a project which wants to bring tablets to the masses by creating specifications for a tablet PC positioned somewhere between the One Laptop Per Child $100 laptop and consumer Tablet PCs.

The goal of the Simputer project was a low-cost portable device designed to run on Linux and use the XML-based Information Markup Language (IML).

When you're thinking of giving a gift this year to your techie friend or family member, give the gift of open source. For those who support open source software, it may be time to begin thinking about supporting the hardware developers who are out there challenging the way technology is made and distributed.


Sunday, November 25, 2007

Is Your Library Tech Staff Friendly?

The LibrarianInBlack provides a nice summary of Jenny Benevento's presentation at Internet Librarian 2007 discussing the problem of tech-savvy librarians leaving libraries. Jenny cautions that libraries need to be careful about keeping tech-savvy people in the profession. Some of the things that contribute to negative environments for techies in libraries include (my modifications/enhancements/creative liberties in italics):

  • Create an organizational culture where technology and library staff are never put into a position where they can (have to) gain respect for each other's uniqueness
  • Adopt technology just because of the buzzword aspect, and make the implementation a high priority
  • Hop on Internet trends two years after they happened and 18 months after your technology person suggested them; rinse and repeat
  • Include the workplace luddite on all technology projects with the expectation that the individual will change or somehow assimilate the technology
  • Under-resource and underpay your technology staff
  • Don't fund projects, yet expect them to be important services
  • Tell techies that you want new technology, but reject all change that they suggest
  • Don't make an effort to understand emerging technologies, but expect the techie to create, implement, and manage new services based on them
  • Equate all technical knowledge--it's all interchangeable; all techies know everything. For example, desktop support staff can manage firewall issues
  • Expect your techie to keep all staff up-to-date on emerging technologies and still investigate, implement, and maintain them
  • Expect all tech requests to happen immediately

Thursday, November 15, 2007

Is a Librarian Gender Salary Gap Ahead?

A new scorecard report has been released by the National Center for Women & Information Technology (NCWIT) reporting on the status of women in IT. The scorecard indicates that women are falling further behind in information technology and computer science.

The findings showed that girls are not pursuing careers or majors in information sciences. The report suggests that "women are more interested in using computing as a tool for accomplishing a goal than they are in the workings of the machine." Some stats:
  • Girls comprise fewer than 15 percent of all AP computer science exam-takers – the lowest representation of any AP discipline.
  • Between 1983 and 2006, the share of computer science bachelor's degrees awarded to women dropped from 36 to 21 percent.
  • Women hold more than half of professional positions overall, but fewer than 22 percent of software engineering positions.
An article entitled The New Library Professional by Stanley Wilder, which appeared back in the Feb 20, 2007 online issue of the Chronicle of Higher Education, discusses the emerging generation gap and non-traditional backgrounds among library professionals. His observations are based on the Association of Research Libraries 2005 Salary Survey data.

I found the following observation in Wilder's piece interesting in light of the scorecard and past discussions about the library gender gap:

"The computer types in academic libraries are disproportionately young. And perhaps not surprisingly, young computer experts enjoy a substantial advantage in salary (47 percent of them earn $50,000 and up) when compared to other young professionals in non-supervisory library jobs (only 18 percent earn $50,000 or more).

"Finally, most information-technology professionals in our libraries are male (71 percent), which is not the case in other types of library positions (28 percent male)."



Hmmm. I wonder. What will future library salary surveys show when more men are entering the profession in IT positions that generally have higher salaries?

Wednesday, November 14, 2007

Joining the BPR3 Team

Put this one into the category "be careful what you wish for."

After my post A Blog Citation Index?, I was contacted by Dave Munger from the Bloggers for Peer-Reviewed Research Reporting (BPR3) team. In my post, I commented that the project could use some librarian help. Well, within 24 hours Dave reached out.

BPR3 is an effort that "strives to identify serious academic blog posts about peer-reviewed research by developing an icon and an aggregation site where others can look to find the best academic blogging on the Net."

My role is pretty undefined, but I plan on keeping them up to date on library-oriented topics, standards, and efforts they may be unaware of, such as OASIS. I am also planning to get the word out through library publications and conferences, as well as in this space.

Wednesday, November 07, 2007

Revisited: Are OSS4Lib Networks Needed?

Over a year ago, I posted a piece entitled Are OSS4Lib Networks Needed? I argued that libraries have traditionally banded together to pool resources with the common goal of obtaining monographs, serials, and databases as economically as possible. The decision to develop or become involved with a library network is meant to effect a positive change in a library's ability to plan and budget.

Unlike these cooperative efforts of the past, libraries have chosen to work independently on information system solutions. So, I asked this question: with the significant costs involved in purchasing and maintaining commercial information systems, why haven't more libraries banded together to build library systems?

A post by Joe Lucia to NGC4Lib presents a compelling case in support of this concept. He discusses a shift from an investment in commercial software support to a collaborative support environment for open source applications facilitated by regional networks.
"It is frightening for many to contemplate the leap to open source, but if there were a clear process and well-defined path, with technical partners able to provide assistance through the regional networks, I suspect some of the hesitancy to make this move, even among smaller libraries, might dissipate quickly... The success models are there and developing best practice frameworks and implementation support methods that will scale will not be rocket science."

"What if, in the U.S., 50 ARL libraries, 20 large public libraries, 20 medium-sized academic libraries, and 20 Oberlin group libraries anted up one full-time technology position for collaborative open source development. That's 110 developers working on library applications with robust, quickly-implemented current Web technology -- not legacy stuff. There is not a company in the industry that I know of which has put that much technical effort into product development. With such a cohort of developers working in libraries on library technology needs -- and in light of the creativity and thoughtfulness evident on forums like this one -- I think we would quickly see radical change in the library technology arena. Instead of being technology followers, I venture to say that libraries might once again become leaders. Let's add to the pool some talent from beyond the U.S. -- say ! 20 libraries in Canada, 10 in Australia, and 10 in the U.K. put staff into the pool. We've now got 150 developers in this little start-up. Then we begin pouring our current software support funds into regional collaboratives. Within a year or two, we could be re-directing 10s of millions of dollars into regional technology development partnerships sponsored by and housed within the regional consortia, supporting and extending the work of libraries. The potential for innovation and rapid deployment of new tools boggles the mind. The resources at our disposal in this scenario dwarf what any software vendor in our small application space is ever going to support...

I couldn't have said it any better.

The State of Ohio has a great resource in OhioLINK, a consortium of 86 Ohio college and university libraries and the State Library of Ohio that work together to provide Ohio students, faculty, and researchers with the information they need for teaching and research. Serving more than 600,000 students, faculty, and staff at all 87 institutions, OhioLINK's membership includes 16 public/research universities, 23 community/technical colleges, 47 private colleges, and the State Library of Ohio.

I have been in discussions over the years involving concepts like a state-wide electronic document delivery system. They go nowhere. Right now, much of the technical side of the OhioLINK system rests on the shoulders of Peter Murray and Thomas Dowling (there are a couple of others, but they are the ones I know of). The usual response is that they can provide a LAMP box for testing, but have no spare staff for development, let alone ongoing support. So, the question I have been posing for a while now is:

What if each of the 87 institutions provided 1/2 of an IT position to OhioLINK as a condition of membership?

That would result in over 40 FTE developers. Not only would the state be able to build and maintain a vendor-independent catalog system that uses open standards and a service-oriented architecture, it could build other systems and services to benefit the entire state rather than each institution struggling to find resources on its own. Imagine if each provided a full FTE!

The challenge that Joe will find is that libraries are very protective of their IT resources. This issue became apparent to me in the question-and-answer period following my presentation at a Library 2.0 conference earlier this year. A library director asked how libraries can move away from the siloed systems we currently have and adopt SOA. My response was, essentially, that it is up to you! The discussion then turned to the limited number of IT human resources. (In my opinion, this is because too many library directors still treat all technical knowledge as interchangeable and assume that all IT staff know everything. As a result, most libraries do not allocate enough resources to IT yet have high expectations of their limited staff. But, I digress.)

The challenge continues to be that library leaders/directors need to be willing to see the benefits and, more importantly, willing to make the sacrifice. They control the resources and therefore hold all of the cards. So, having more of us out there, like Joe, talking to them is key (assuming, of course, that they are not already in this communication space and part of the dialog).

There have been many of us out there evangelizing for over 10 years now (thanks, Dan Chudnov and Eric Lease Morgan, for helping me see the light!), and we are only now beginning to see some daylight.

Monday, November 05, 2007

Can Blogging 'Cool Down' Scholarly Communication?

"In the mechanical age now receding, many actions could be taken without too much concern. Slow movement insured that the reactions were delayed for considerable periods of time. Today the action and reaction occur almost at the same time. We actually live mythically and integrally, as it were, but we continue to think in the old, fragmented space and time patterns of the pre-electronic age."

Sounds as if this was written today, doesn't it? In fact, it was put into print 43 years ago by Marshall McLuhan in Understanding Media. This passage came to mind as I read Phil Ford's post Anarchy in the AMS, a blog version of a presentation he gave on Nov 1, 2007 at the American Musicological Society Annual Meeting in Quebec City. A couple of excerpts:

"And it is here that the abstract and intransitive power of the blogosphere becomes a very great virtue. No-one can shut anyone else up, ideas can cross-pollinate unpredictably, and since the blogosphere is not a zero-sum enterprise, there is nothing forcing people to "take a side" with this or that school of thought. Those who gather around their shared faith in Academic Theory X might face questions from which they would otherwise be insulated by institutional mechanisms. And I suspect that this is something a few academics secretly resent and fear about blogs. They don't want someone who hasn't been properly housebroken asking cheeky questions, and they don't want to be denied the institutional authority to control the discourse.

"It is "cool," in the McLuhanesque sense: readers can profitably interact with it in a wider variety of ways than they can with more traditional forms of academic communication. Blog writing tends to be "porous," filled with open spaces that readers can fill with their own contributions. This kind of writing doesn't make the "hotter," denser kinds of academic writing obsolete, of course, but I would guess that as academic blogging continues to grow it will "cool down" academic discourse generally. Whether this is a good thing or not is a tough question, and maybe at this point an unanswerable one."


This last observation is what caught my attention. Also from McLuhan:
"Hot media are, therefore, low in participation, and cool media are high in participation or completion by the audience. ... The principle that distinguishes hot and cold media is perfectly embodied in the folk wisdom: "Men seldom make passes at girls who wear glasses." Glasses intensify the outward-going vision, and fill in the feminine image exceedingly, Marion the Librarian notwithstanding. Dark glasses, on the other hand, create the inscrutable and inaccessible image that invites a great deal of participation and completion."


McLuhan defines 'hot' media as those that do not leave much to be filled in or completed by the individual and are therefore low in participation. In contrast, 'cool' media are those that provide a meager amount of information, so much has to be filled in by the individual. The user experience is quite different: cool media are high in participation and offer small amounts of information. Scholarly blogging is all about participation and often consists of half-baked ideas.

Blogging is therefore a cool medium, especially in contrast to the relatively low participation and high information levels of traditional scholarly communication (e.g., the journal article).

Slow movement in our scholarly communication insured that the reactions were delayed for considerable periods of time. Some would say this was to control the discourse. With blogging, the action and reaction occur almost at the same time. We actually live mythically and integrally, as it were, but we continue to think in the old, fragmented space and time patterns of the pre-electronic age.

So, yes. Blogging can cool down scholarly communication. In my opinion, Mr. Ford, that is a good thing for my profession - librarianship.

Since you have read this far, this must be a topic of interest. You may wish to take some time to read and participate in ACRL's Establishing a Research Agenda for Scholarly Communication: A Call for Community Engagement. One of the many challenges:

"When faculty employ and create new forms and techniques, evaluating their work against traditional measures is a particular challenge. Although studies document the conservatism and constraining influence of scholarly promotion and tenure review processes and reward systems, we do not yet have deep insight into how they can evolve to recognize and embrace new forms of scholarship. The problem is acute for the creators of digital scholarship, which rarely enters the formal publishing stream, yet is a creative, scholarly act that can influence and underpin both present and future research. But authorship of these programs is not yet rewarded as a form of scholarly communication of the first order in most disciplines."

World's Smallest Radio

Scientists at the Lawrence Berkeley National Laboratory have created the world's first complete nanoradio. Physicist Alex Zettl led the development team and grad student Kenneth Jensen built the radio.

The nanoradio consists of a single carbon-nanotube molecule that serves simultaneously as all the essential components of a radio -- antenna, tunable band-pass filter, amplifier, and demodulator. A direct-current voltage source, supplied by a battery, powers the radio. The team has successfully demonstrated both music and voice reception using carrier waves in the 40-400 MHz range and both frequency and amplitude modulation techniques.

This innovation opens the possibility of creating radio-controlled interfaces on the subcellular scale, which may have applications in the areas of medical and sensor technology.

Friday, November 02, 2007

A Blog Citation Index?

One of the primary quality indicators for scholarly works is citation analysis. The assumption is that the value of a work is based in part on the number of times it has been cited in other works. In my blog postings, I often include links or references to peer-reviewed papers that I have read or which support my arguments. Many of the library blogs I read also include such references.

If there is a great deal of discussion about an article, how does one easily gain access to all the blog postings that reference a specific (non-blog) scholarly work?

Bloggers for Peer-Reviewed Research Reporting (BPR3) is an effort that "strives to identify serious academic blog posts about peer-reviewed research by developing an icon and an aggregation site where others can look to find the best academic blogging on the Net." The concept grew from a Dave Munger Cognitive Daily post.

The BPR3 primary goal has been to create a recognizable icon for use on any blog when discussing peer-reviewed research. The icon would point to the original primary source material. Posts discussing peer-reviewed research articles could then be distinguished from those containing news and other miscellaneous content. Guidelines for usage are available. (In fact, I am in violation of the guidelines by including it in this post. Sorry guys. It is free promotion.) Their long term vision:
  • BPR3 as it was originally conceived was simply a way for bloggers to denote posts on peer-reviewed research. It will still be that, but it will be much more.

  • There will be a central web site where snippets from these posts will be displayed, along with links back to the original posts.

  • Readers will be able to choose the topics they're interested in and view only those posts.

  • Bloggers will use plugins for WordPress and Movable Type allowing them to enter a DOI or other identifier and automatically generate code to post the icon, link the post to our site and its aggregation tools, and generate a properly formatted research citation which links to the original article.

  • Bloggers will be able to instantly find people blogging on the research they're blogging on. Researchers will find blog posts about their research, too.

  • Readers, bloggers, and researchers can use topic-specific RSS feeds. Forums and other tools will allow researchers to collaborate in real time.

  • Readers can share questions about research, discuss how to use our site, and discuss topics they can't find blog posts on.

Based on their blog, the focus and energy of their early efforts has been on the icon. In my opinion, the much more powerful and useful part of their concept is the aggregated index. When a blogger creates a post with the icon, a link is automatically generated back at the index site. As the number of trackbacks grows, the index becomes a central repository for blog posts about peer-reviewed materials. A researcher could then follow the icon on any blog back to the aggregated index containing all the other blog commentaries on that specific work.

This concept could become the basis of a very powerful research tool. Think Science Citation Index for blogs, or what I call a Blog Citation Index (BCI).

It would seem that having a simple icon and trackback somewhere on the blog post is not good enough to generate a useful index. The trackback would have to be associated with the specific bibliographic information of the primary material; otherwise the index could become dirty real fast. Their site is silent on their plans: DOI is optional, and there has been no discussion about the use of OpenURL. I also wonder if there are plans for a code snippet generator for those who do not use Movable Type or WordPress, to simplify the process of creating the trackbacks and standardize the information required for indexing.
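
To make the idea concrete, here is a sketch of what such a snippet generator could emit. Everything in it (the class name, the rel attribute, the icon path) is invented for illustration, and the DOI is a placeholder:

```python
def bpr3_snippet(doi: str, citation: str) -> str:
    """Return icon-plus-citation markup for a blog post discussing a
    peer-reviewed paper.  The class name, rel attribute, and icon URL
    are invented; the point is that the DOI travels with the trackback."""
    return (
        f'<div class="bpr3-citation" rel="{doi}">\n'
        '  <img src="http://bpr3.org/icon.gif" '
        'alt="Blogging on Peer-Reviewed Research" />\n'
        f'  {citation}\n'
        f'  <a href="http://dx.doi.org/{doi}">http://dx.doi.org/{doi}</a>\n'
        '</div>'
    )

print(bpr3_snippet("10.1234/example.doi",
                   "Author, A. (2007). Title. <em>Journal</em>."))
```

Because the identifier is machine-readable in the markup, the aggregator can group every post citing the same work without any guesswork, which is exactly what a clean index needs.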

I am anxiously awaiting their prototype, which has been promised within a month.

If the BCI were implemented at the publisher level (charging publishers to link to the index could be part of the business model), visitors to a journal's online table of contents could quickly identify which articles have been blogged. One could also see what topics are hot by quickly identifying the most-cited materials over a period of time. Metatags could be used to create tag clouds for tracking keywords and memes.

It is a very interesting concept that could use a librarian's help. It is a concept that our friends across town at OCLC would be (should be?) interested in.

Monday, October 29, 2007

More on Blogging as Scholarly Communication

As the vice-chair (I will chair next year) of our University Library's promotion and tenure committee, I have been gathering evidence to use in a discussion about blogging as scholarly communication. This will be a very interesting challenge since we have a very traditional culture. While it will be a long, uphill effort, I still feel strongly that the topic needs to be brought to the table.

A couple of items I have come upon in building my argument include an April post on Peter Suber's Open Access News blog regarding scholarly communication and blogging. Peter is an independent policy strategist for open access to scientific and scholarly research literature, a Senior Researcher at SPARC, and the Open Access Project Director at Public Knowledge. Based on his blog experience:
  • The process is much faster. A few hours to a few days to create a post, then a few hours of intensive review, then a day or two in which the importance of the reviewed work becomes evident as other blogs link to it. Stuart's comment came 9 hours into a process that accumulated 217 comments in 30 hours. Contrast this with the ponderous pace of traditional academic communication.
  • The process is much more transparent. The entire history of the review is visible to everyone, in a citable and searchable form. Contrast this with the confidentiality-laden process of traditional scholarship.
  • Priority is obvious. All contributions are time-stamped, so disputes can be resolved objectively and quickly....
  • The process is meritocratic. Participation is open to all, not restricted to those chosen by mysterious processes that hide agendas. Participants may or may not be pseudonymous, but their credibility is based on the visible record. Participants put their reputation on the line every time they post. The credibility of the whole blog depends on the credibility and frequency of other blogs linking to it - in other words, the same measures applied to traditional journals, but in real time with transparency.
  • Equally, the process is error-tolerant....Because the penalty for error is lower, participants can afford to take more creative risk.
  • The process is both cooperative and competitive....
  • Review can be both broad and deep. Staniford says "The ability for anyone in the world, with who knows what skill set and knowledge base, to suddenly show up ... is just an amazing thing". And the review is about the written text, not about the formal credentials of the reviewers.
  • Good reviewing is visibly rewarded. Participants make their reputations not just by posting, but by commenting on posts. It's as easy to assess the quality of a participant's reviews as to assess their authorship; both are visible in the public record.

More recently, Kevin Smith at Duke details how two scholars have undertaken to write major pieces of scholarship about scholarly communications issues in blog form. He writes:

      "Not only are these two projects interesting because of their topics, they also represent important experiments in the kind of collaborative scholarship that the digital environment makes possible."

Somehow it just makes sense that librarians should be leading the way in recognizing the role that blogging plays in moving our profession forward. Having P&T committees that see and place some value on blogging is as good a place as any to start.

Thursday, October 25, 2007

One-Hour TV Ad for Second Life

Last night's episode of CSI:NY turned into a one-hour commercial for Second Life. At first, I thought there was going to be a casual mention, but a sizable chunk of the episode took place within SL. A few observations about the episode:
  • Linden Labs was very forthcoming with the first-life identities of several avatars.
  • CSI was able to track down the IP addresses of avatars and map them to physical locations in real time.
  • There was a character named the 'White Rabbit' who knew the current SL location of all avatars. He sold the information for 30,000 Linden dollars ($125).
  • The CSI folks were able to change their avatars' appearance and clothing amazingly fast. They didn't even need to go shopping.
  • Their avatar participated in a gladiator event and moved more like something in an Xbox 360 game than anything I have experienced in SL.
  • One SL avatar was able to use the system to launch a virus attack that penetrated CSI's firewall.
The tie-in was a virtual CSI experience/contest being promoted in Second Life. I am sure Linden Labs developed/bartered the entire site for the TV time. The scenario of the contest:

Tragedy has struck Chefanista's, a neighborhood deli in the Bronx. The door is open, but the crime scene tape says closed. Read Anthony E. Zuiker's description of a murder, and then explore the virtual crime scene in world. Uncover what happened and why, and submit your explanation to Zuiker himself via the CSI: NY message board. Mr. Zuiker will announce the winner on December 1st.

Monday, October 22, 2007

Facebook v. MySpace: A Socio-Economic Divide?

The Sydney Morning Herald reports that Hitwise research indicates that in the 10 weeks to October 13, traffic to MySpace in Australia dropped 5% while Facebook tripled its traffic. The overall number of visits to social networking sites has also doubled in that period.

According to Nielsen/NetRatings, since Facebook's registration was opened to the public last year, the site has seen triple-digit traffic growth, increasing 117 percent from 8.9 million unique visitors in August 2006 to 19.2 million unique visitors in August 2007. Facebook's innovative features, the result of opening their API to developers, are helping to drive the growth. This suggests that the momentum has moved in favor of Facebook as it picks up a larger portion of the new traffic while MySpace growth appears to have become static.

This data would also support the argument that young people are leaving MySpace for Facebook in droves and that MySpace is becoming the latest victim of a hot trend. University of California, Berkeley researcher Danah Boyd indicates that not all teens (in America) are leaving MySpace; rather, they're splitting up along class lines. "Who goes where gets kinda sticky... probably because it seems to primarily have to do with socio-economic class." Boyd studies social networks and youth culture and has made her observations based on formal interviews with 90 teens, informal interviews, and reviews of thousands of teens' profiles.

      The "goodie two shoes, jocks, athletes or other 'good' kids" are now going to Facebook," Boyd writes, and that "Facebook was framed as being about college...Facebook is what the college kids did. Not surprisingly, college-bound high schoolers desperately wanted in."

Boyd also notes that back in May 2007, the military banned MySpace but not Facebook. She saw this as an interesting move because the division in the military reflects the division in high schools. MySpace is the primary way that young soldiers communicate with their peers. Facebook is popular in the military, but it's not the choice of 18-year-old soldiers, a group that comes primarily from poorer, less educated communities. The officers, many of whom have already received college training, are using Facebook. Boyd asserts that the "military ban appears to replicate the class divisions that exist throughout the military."

Tuesday, October 16, 2007

The Library? Have You Heard of the Internet?

There was an interesting scene in last night's episode (The Kindness of Strangers) of Heroes.

In the scene, Claire Bennet is at the dinner table with her family. To get out of a family event and sneak off to see her boyfriend, she comes up with the following excuse:


Claire: "I can't. I have to go to the library for a research paper."

Lyle (her brother, sitting on the other side of the table): "The library? Have you ever heard of the Internet?"

Claire: "Actually, the research paper is on libraries and how in the digital age they are increasingly becoming obsolete for our generation."


What does it say when television shows start to bring up the issue of the relevancy of libraries?


Monday, October 01, 2007

Credit Sputnik for the Development of the Internet, Not Al Gore

October 4, 1957 marks a significant moment in technological history. On this date fifty years ago, the Soviet Union launched Sputnik I. The world's first artificial satellite was 23 inches in diameter, weighed 183 pounds, and took about an hour and a half to orbit the Earth.

As a technical achievement, Sputnik changed everything.

One of the U.S. responses to Sputnik was the establishment of the Advanced Research Projects Agency (ARPA) in February 1958, in a supplemental military authorization for the Air Force (Public Law 85-325, H.R. 9739). One of the initial purposes of ARPA was to research new technologies that may have been considered too risky for private industry to investigate.

In 1969, ARPA created the ARPAnet as a tool to transfer research between computers across systems. ARPAnet was the predecessor to the Internet.

They designed a host-to-host protocol known as the Network Control Program (NCP) that allowed for the exchange of information between geographically separated computers. They also envisioned a hierarchy of protocols, including Telnet and FTP, built on top of NCP. Also established was the Request for Comments (RFC) series of open documentation, which encouraged that "notes may be produced at any site by anybody and included in this series."

In Who's Who in the Internet: Biographies of IAB, IESG, and IRSG Members (RFC 1336), Robert Braden is quoted:
"One important reason it worked, I believe, is that there were a lot of very bright people all working more or less in the same direction, led by some very wise people in the funding agency. The result was to create a community of network researchers who believed strongly that collaboration is more powerful than competition among researchers. I don't think any other model would have gotten us where we are today."
There may be a subtle lesson here for libraries and library system vendors regarding community and collaboration...

Photo from the Smithsonian Institution

Tuesday, September 25, 2007

I'm Tired of Hearing This From Library System Vendors:

"...our development folks have talked about...I'll let them know of your interest in such functionality and we'll consider it as a potential enhancement to the system"

I was at OhioLINK yesterday attending a demo of the OCLC ILLiad system. Since the presentation was by a sales rep, I sent this email to Atlas after the meeting:
      "As you may be well aware, a general library trend is figuring out ways to get into our customers workflow. We can no longer expect our customers to want to come into each of our silo library systems. To that end, we have been doing some pilot testing of various Google and Facebook gadgets. It makes a great deal of sense that one of the systems which we can create gadgets for is the Illiad customer interface. We would love to be able to provide our customers with an easy view of their pending requests from their iGoogle home page or their Facebook profile. We are interested in building such gadgets, which are customizable so that any Illiad licensee could modify them for their local system."

      "Are their any APIs or Web Services available from Illiad/Atlas Systems which would enable us to build such gadgets for the Illiad community?"

      "Exposing such services in those spaces would enable all libraries to raise the profile of the Illiad service."

The top of this post was part of the reply I received (sigh).

Before I continue, I want to say that ILLiad is a very powerful system that can save valuable staff time. The efforts that Jason Glover and the Atlas Systems team have made in developing and promoting an open standard for Internet document delivery (even if, as a community, we continue to support a proprietary closed system as the de facto standard) are commendable.

Based on my past interactions with Atlas, part of me expected (hoped?) that they would have already thought of this. The response is unfortunate. It appears that Atlas has simply grown up into just another library system vendor, waiting until a large enough group requests an enhancement.

The problem is that this is exactly the kind of functionality we desperately need from our systems - today. If we are going to continue to rely upon hosted or licensed solutions, we need vendors to be thinking about integration between products before we say we need it.
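
To be concrete about how small the ask is: the gadget needs little more than a per-user "what is pending?" call. Here is a sketch of consuming such a web service, if one existed (the endpoint, parameters, and response fields are entirely hypothetical; no such ILLiad API is available, which is the point):

```python
import json
import urllib.parse
import urllib.request

# Entirely hypothetical endpoint -- ILLiad exposes no such web service.
STATUS_URL = "https://illiad.example.edu/api/requests"

def pending_requests(user_token):
    """Return a user's open ILL requests for display in a gadget."""
    query = urllib.parse.urlencode({"token": user_token, "status": "pending"})
    with urllib.request.urlopen(f"{STATUS_URL}?{query}") as resp:
        return json.load(resp)  # assumed: [{"title", "status", "updated"}, ...]

def render_gadget(requests):
    """Build the minimal HTML fragment an iGoogle or Facebook gadget shows."""
    rows = "".join(f"<li>{r['title']} ({r['status']}, {r['updated']})</li>"
                   for r in requests)
    return f"<ul class='ill-pending'>{rows}</ul>"
```

Everything else (branding, placement, refresh) is gadget boilerplate the community could share.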

The reality is that by the time a large enough group of ILLiad customers realizes the potential of such functionality and communicates it to Atlas, and it is developed, yet another window of opportunity will have closed behind us.

Tuesday, September 18, 2007

If It's Free, It's For Me (?)

I just read that at midnight tonight (9/18) the NY Times will no longer require a TimesSelect subscription to access much of its online content. They found that they could make more money by opening access and selling advertising. It appears that readers started coming to the site from search engines and links on other sites instead of coming directly to NYTimes.com. These indirect visitors, unable to gain access to articles and less likely to pay subscription fees, were seen as opportunities for more page views and increased advertising revenue.

This is similar to the approach Elsevier is using on a new website aimed at oncologists, which provides registrants free access to articles from 100 of its journals, including The Lancet and Surgical Oncology.

I have also been playing around with a new free music download service called SpiralFrog, which supports itself through advertising.

This is part of a growing trend in which the potential ad revenue from increased traffic at 'free' sites outweighs subscription fees. Of course, nothing is really free. Most 'free' sites require registration, which provides the site with important demographic information that can be used to target advertising, assuming one is truthful when filling in the forms.

Hmm. Knowing what percentage of people are truthful when filling out the forms would be an interesting study - assuming one can count on those responding to be truthful. The marketing folks probably already know this information and have it in their algorithms.

Monday, September 17, 2007

The Future of Libraries is in Web Services

In her post The future of Web Services isn't the Library website, Karen Coombs highlights the challenges that many of us are facing:
  • Most of the library sees the redesign process as being about "fixing" the current website so that it is more usable, up-to-date, and attractive.

  • Trying to make a site that works equally well for everyone has two consequences: huge amounts of resources are devoted to crafting multiple permutations that have to be maintained, and you end up with a mediocre site that no one hates or loves.
She also states:
  • The redesign is about defining the types of content the library has to offer its users and getting that content into pieces that can be reused and repurposed elsewhere.

  • Focusing on content rather than look and feel will allow us to provide these different types of services. It will also allow different types of users to selectively access content.

  • It is these kinds of services that will make or break a library's virtual presence, not the library website.

In my five-part post on Service-Oriented Architecture, I argue that the adoption of SOA models can help libraries aggregate the information they create and manage.

The traditional website represents a siloed information system. The structure of the data/content we create becomes fixed within the website. The costs in real dollars and staff time required to export, convert, and import it into a new site are significant. The question is whether we should be spending our limited resources running on the website gerbil wheel, or on ways to better manage and syndicate our content.

In SOA, all our information systems would be designed to be loosely coupled. Direct connections to each of our data sources would be available from any of our web presences. The user experience is enhanced since customers could gain access to any of the resources from any of the user interfaces. For example, one could search the course management system and receive results from the eJournals collection.
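
A toy illustration of that loose coupling: if each system publishes the same minimal search contract, one front end can fan a query out to whichever sources it likes. The endpoints and response shape below are invented for the sketch:

```python
import json
import urllib.parse
import urllib.request

# Each backend honors the same minimal contract: GET <base>?q=<terms>
# returning a JSON list of {"title", "url"}.  All endpoints are invented.
SERVICES = {
    "catalog":   "https://library.example.edu/svc/catalog",
    "ejournals": "https://library.example.edu/svc/ejournals",
    "cms":       "https://cms.example.edu/svc/search",
}

def federated_search(terms):
    """Fan one query out to every loosely coupled service."""
    results = {}
    for name, base in SERVICES.items():
        url = f"{base}?{urllib.parse.urlencode({'q': terms})}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                results[name] = json.load(resp)
        except OSError:
            results[name] = []  # a down service degrades the search, not breaks it
    return results
```

Because the contract, not the implementation, is shared, swapping out the catalog or adding a new source is a one-line change on the consuming side.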

With SOA, the library could create a single Facebook-like portal aggregating all networked resources. New library 'web services' could be built and simply plugged in by the customer. This approach to the design of library systems represents a radical departure from what we have today. At the same time, it provides libraries with an unprecedented ability to create and maintain systems that can quickly adapt to the changing networked information infrastructure.

SOA has the potential to get our resources out to where our customers are instead of, as Karen puts it, spending "all our time caught up in look and not enough time working to make the library meet users where they are and be a seamless part of their work processes."


Friday, September 14, 2007

2007 ECAR Study of Undergraduate Students and IT

The Educause Center for Applied Research (ECAR) Study of Undergraduate Students and Information Technology is a longitudinal study of students and information technology based on interviews with 27,846 freshman, senior, and community college students at 103 higher education institutions. It focuses on what kinds of information technologies these students use, own, and experience; their technology behaviors, preferences, and skills; how IT impacts their experiences in their courses; and their perceptions of the role of IT in the academic experience.

The 2007 results are now available. Some random key findings:

- 94.7% use their library's web site
- 91.5% have high-speed connectivity
- 84.1% use instant messaging
- 82% have used a course management system
- 81.6% use social networking sites (e.g. Facebook)
- 78.3% play computer games
- 75.8% own a personal laptop; 34.5% of those are less than a year old
- 74.7% own a music/video device (e.g. iPod)
- Engineering majors are online the most, at 21.9 hours per week; life sciences majors are online 16.3 hours.

Wednesday, September 12, 2007

Elsevier Provides Free Journal Access ... With A Catch

An article appearing in the September 10, 2007 New York Times describes how Elsevier has started a new website aimed at oncologists that provides registrants free access to articles from 100 of its journals, including The Lancet and Surgical Oncology.

The site will provide registrants limited access to other publishers' journals too, including summaries of cancer-related articles from 25 other leading journals, like the Journal of the American Medical Association and The New England Journal of Medicine.

Since there are more than 500 cancer drugs in the pipeline, oncologists are a pretty research-oriented group. The primary market will be those oncologists not affiliated with academic medical centers, who actually see a large percentage of all cancer patients. While oncologists affiliated with those centers generally have access to local subscriptions, those who are not rely largely on materials available on the Web for their research.

The catch (and there has to be one, since nothing is truly free) is that Elsevier plans to sell advertisements, especially to pharmaceutical companies with cancer drugs. Another revenue stream is the sale of their anticipated registration list of 150,000 professionals to advertisers. Since more of the general public is performing their own research online, I would not be surprised if they actually made more money by opening up this model beyond professionals. They could create custom banner ads and sell those registrant names to advertisers selling consumer products.

If the model works, one can also expect it to move beyond medical information - Elsevier also owns Lexis-Nexis.

Tuesday, September 11, 2007

LibGuides: Where Are the RSS Feeds?

Boston College Libraries recently unveiled their new subject guide service using the LibGuides application. Several librarians have already commented on LibGuides, and in general the service has been well received.

LibGuides is a hosted service which enables libraries to create locally branded subject guides. It is offered by SpringShare, founded by Slaven Zivkovic, who once worked at the Orradre Library at Santa Clara University and co-founded Docutek Information Systems. According to the FAQ, the cost of an annual license for LibGuides depends on the size of the institution and the number of libraries involved, with fees ranging from $899 to $2,499.

Each subject guide includes a profile of the individual responsible for managing the guide, including a photo and contact information. There is also an IM link! These guides can incorporate all kinds of content: RSS feeds, embedded videos, podcasts, custom search engines, etc. Customers can search or browse for a guide by subject or alphabetically. There is also a Facebook application.

While it appears that one can pull in RSS feeds when building a guide, it does not appear that a library can syndicate a guide. While Slaven has come up with an interesting service in LibGuides, such syndication is essential as libraries move towards a more service-oriented approach to our systems. Syndication would serve several purposes:
• Libraries would allow customers to subscribe to any guide and get a feed of any changes made to it. There is already an email update feature (although it is limited to newly published guides with specific tags or by a specific librarian), so why not RSS? (See the sketch after this list.)
• Libraries could use RSS feeds to export the content of a single guide into other systems, such as a course management system. A librarian could make changes within a specific LibGuide and have them appear in the library resources section of a course. Several different guides could be brought into a single course, or a single guide could be placed into every course shell.
• Librarians spend a great deal of time managing content. We also have a tendency to embed our content into our applications. This continuously creates migration challenges. The reality is that LibGuides has a limited shelf life. As new technologies and ideas come around, we will want to move the content out of the service. RSS would be a relatively painless method of exporting the content.
• Since there is already an option to print an entire guide, it would seem relatively trivial to create an RSS version.
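
To make the idea concrete, here is a rough sketch of what generating such a guide feed could look like. To be clear, this is only my sketch: the GuideFeedWriter class and the update fields are hypothetical, since SpringShare has not said anything about how guide data is stored internally. Any language would do; I have used Java here.

    import java.io.PrintWriter;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.List;
    import java.util.Locale;

    // Hypothetical sketch of an RSS 2.0 feed for a subject guide. Each
    // update is a {title, link, description} triple standing in for
    // whatever internal model LibGuides actually uses.
    public class GuideFeedWriter {

        private static final SimpleDateFormat RFC822 =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.US);

        public static void write(PrintWriter out, String guideTitle,
                                 String guideLink, List<String[]> updates) {
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<rss version=\"2.0\"><channel>");
            out.println("  <title>" + escape(guideTitle) + "</title>");
            out.println("  <link>" + escape(guideLink) + "</link>");
            out.println("  <description>Updates to this guide</description>");
            out.println("  <lastBuildDate>" + RFC822.format(new Date()) + "</lastBuildDate>");
            for (String[] u : updates) {
                out.println("  <item>");
                out.println("    <title>" + escape(u[0]) + "</title>");
                out.println("    <link>" + escape(u[1]) + "</link>");
                out.println("    <description>" + escape(u[2]) + "</description>");
                out.println("  </item>");
            }
            out.println("</channel></rss>");
        }

        // Escape XML special characters so arbitrary guide text is safe.
        private static String escape(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
                    .replace("\"", "&quot;").replace("'", "&apos;");
        }
    }

The point is less the code than the shape of it: a guide is already a titled list of dated items, which is exactly what RSS models.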

Lastly, while everyone has to make a living, LibGuides would make a terrific open/community source project.

      Wednesday, September 05, 2007

      Where are the Library Mashups? Well....

The Krafty Librarian has asked the question, "Mashups: What Happened?":

      "So what happened? Are the mashers too busy working on their latest creation to be discussing it online? Are mashups still too technical for the average person to create to be popular in the library world? Are librarians victims of their closed systems, thus limiting the amount of mashups created and used?


If the discussion is specifically about library resource/service mashups, then unfortunately I feel the last question gets to the crux of the matter.

Libraries rely upon commercial systems from vendors that often use proprietary technologies instead of those that support open standards. These closed systems have made solutions such as mashups very difficult, if not impossible, to build, primarily because our systems cannot talk to one another.

There are many ways that one can build a mashup, with the most common being through RSS and APIs. Yet the core of what makes many of the more powerful mashups work - the API - remains one of the vendors' most tightly guarded pieces of intellectual property and one of their revenue streams. The vendors place a death grip on the very APIs which would allow libraries to create mashups using services provided by disparate systems. I would love to build mashups that pull information out of our III system in a more elegant way than screen scraping.
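
For those who have not had the pleasure, screen scraping looks something like the sketch below. The catalog URL and the markup pattern are made up for illustration, but the fragility is real: the moment the vendor changes the page layout, the pattern breaks and the mashup goes dark. An API contract would not.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch of scraping titles out of a catalog's HTML because no API
    // is exposed. The URL and the <span class="title"> markup are
    // hypothetical; a real OPAC page would need its own pattern.
    public class TitleScraper {
        public static void main(String[] args) throws Exception {
            URL page = new URL("http://opac.example.edu/search?q=digital+libraries");
            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(page.openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append('\n');
                }
            }
            // Pull out anything the page happens to mark up as a title.
            Pattern titlePattern = Pattern.compile("<span class=\"title\">(.*?)</span>");
            Matcher m = titlePattern.matcher(html);
            while (m.find()) {
                System.out.println(m.group(1));
            }
        }
    }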

While libraries have embraced consortial solutions for most large-scale purchases, for some reason we have not come together (projects like PINES being one exception) around open systems development that would result in new library systems based on open standards. Such systems could be built using an open architecture and shared with the library community. Community source, if you will.

At the very least, I believe our library leadership and professional organizations need to start demanding that vendors use open standards and open up their APIs. By continuing to embrace proprietary vendor solutions - as the vendors decide to build them - we may be destined to remain two or three years behind the technology curve.

      Tuesday, September 04, 2007

      Are Faculty Loyal to Current Scholarly Communications Peer-Review System?

      It appears so.

The University of California Office of Scholarly Communications just released “Faculty Attitudes and Behaviors Regarding Scholarly Communication: Survey Findings from the University of California.” The report analyzes over 1,100 survey responses covering a range of scholarly communication issues from faculty in all disciplines and all tenure-track ranks. The report provides summary and detailed evidence about the UC community of scholars. Among the findings:

      • There is limited but significant use of alternative forms of scholarship, with 21% of faculty having published in open-access journals, and 14% having posted peer-reviewed articles in institutional repositories or disciplinary repositories. Such publishing is seen as supplementing rather than substituting for traditional forms of publication.

      • Faculty appear unwilling to undertake activities, such as forcing changes on publishers, that might undermine the viability of the system or threaten their personal success as traditionally evaluated.

      • Many respondents voiced concerns that new forms of scholarly communication, such as open access journals or repositories, might produce a flood of low-quality output. Faculty showed broad and strong loyalty to the current peer-review system as the primary means of ensuring the quality of published works now and in the future, regardless of form or venue.

• On matters of tenure and promotion, assistant professors show consistently more skepticism about the ability of tenure and promotion processes to keep pace with or foster new forms of scholarly communication.

      • The survey results overall suggest that senior faculty may actually be more open to innovation than younger faculty. Senior faculty are free from tenure concerns, and although many are still driven by a desire for promotion, they appear more willing to experiment, more willing to change behavior, and more willing to participate in new initiatives. Therefore, senior faculty may well serve as one starting point for fostering change. Furthermore, because senior faculty are both involved in making academic policy and serving as role models for junior faculty, their efforts at innovation are likely to have broader influence within their departments.

      Friday, August 31, 2007

      Is iPhone Unlocking Legal?

      Today is the day that George Hotz is planning to trade in his unlocked iPhone for a Nissan 350Z sports car and three new iPhones. Hotz figured out a ten-step process to unlock his iPhone so it can be used on other cell-phone networks. Most cell phone service providers electronically 'lock' the phone so that it can only be used with their service.

An article in Business Week suggests that Apple cannot stop individuals from unlocking the iPhone. Individual users are already allowed to unlock their own phones under an exemption to the Digital Millennium Copyright Act (DMCA) that the U.S. Copyright Office issued last November. The exemption, in force until 2009, applies to:

      "computer programs…that enable wireless telephone handsets to connect to a wireless telephone communication network, when circumvention is accomplished for the sole purpose of lawfully connecting to a wireless telephone communication network."
While lawyers for Apple and AT&T have tried to deter hackers from unlocking iPhones to protect the monthly service charges the two companies receive, it does appear that individuals may be within their legal rights to unlock an iPhone, at least until the exemption runs out. The two firms are expected to claim that the DMCA protects the iPhone from being unlocked because it is a copyrighted work:
      'No person shall circumvent a technological measure that effectively controls access to a work protected under this title.'
It could also be argued that the practice of locking cell phones only protects access to a carrier's communications network. While such services may be protected by other intellectual property laws, they are not copyrightable and do not fall under the DMCA.

Individuals who unlock their phones will still need to pay AT&T network charges, or pay the $175 early termination fee if they move to another carrier.

      Wednesday, August 29, 2007

      GooglePhone OS to be Released in Early Sept?

      Word from Engadget is that the much anticipated Google mobile device platform may finally be revealed in early September. Rumors suggest that Google has plans to invest $7-8 billion in its development, although there has been no confirmation from the company.

The word is also that Google isn't necessarily working on a hardware device of its own, but instead is working with OEMs and ODMs to get them to put the Linux-based platform on upcoming devices. Speculation that Google may build its own device increased after it acquired the startup Android Inc., along with founder Andy Rubin. One of Rubin's companies launched the mobile Hiptop device, marketed as the T-Mobile Sidekick. So, they do have the talent to make a device.

This also sheds some light on Google's release of its Short Message Service (SMS) offering, which sends text-based localized information to mobile users, from sports scores and driving directions to weather forecasts.

      Tuesday, August 28, 2007

      CAUTION: Paradigm Shift Ahead

As a librarian inclined to think that libraries are at risk, I am one of those open to many of the more radical ideas about how libraries need to change. Several of the other librarians I work with may gravitate towards ideas that support the traditional core values of librarianship and will reject those that involve redefining reference, circulation, and cataloging services. The resulting discussions are very interesting, if not polarizing.

      I was just rereading a John Blyberg post about how librarians are drifting into two camps – those that believe libraries are in peril and those that don’t.
      "Like two distinct brands of the same religion, librarians are drifting into two camps–those that believe libraries are in peril and those that don’t. Those who find themselves as a member of the former tend to feel that their libraries need to change in a number of fundamental ways in order to remain relevant. Those who identify with the latter group feel that good old-fashion librarianship is still what their users want or need. They’re the purists."
      This brought me back to Thomas Kuhn's 1962 book The Structure of Scientific Revolutions, where he discusses paradigms as they relate to scientific discovery and evolution. It is the work which popularized the term 'paradigm shift.'

A scientific revolution occurs when an older paradigm is replaced, in whole or in part, by an incompatible new one. When a new paradigm is revealed, the supporters of the new and old paradigms naturally argue in defense of their positions. The emergence of a new paradigm affects the structure of the group that practices in a given field.

This is exactly what we are experiencing in library science. The emerging technology-driven/focused definition of what a library is stands in contrast to the existing traditionalist definition, highlighted by reference librarians sitting at desks. These are the two camps that John identifies.

According to Kuhn, scientific paradigms before and after a shift are so different that their theories are incomparable. It is impossible to construct a language that can be used to perform a neutral comparison between conflicting paradigms, because the very terms used belong to one paradigm or the other and therefore differ in meaning. In essence, a new paradigm cannot build on the preceding one; it can only supplant it. Advocates of mutually exclusive paradigms are in an impossible position:

      "Though each may hope to convert the other to his way of seeing science and its problems, neither may hope to prove his case. The competition between paradigms is not the sort of battle that can be resolved by proof."
When an individual or group produces a synthesis that attracts the attention of the next generation of practitioners, the older schools gradually disappear. In part, the disappearance is caused by the members' conversion to the new paradigm:

      "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
The next generation of library scientists graduating from library school will be hardwired to naturally accept the technology-driven/focused definition of a library. If Kuhn is right, as the profession's retirement bubble bursts over the next few years, the next generation should help complete the library science paradigm shift.

That is, until the next paradigm emerges.

      Monday, August 27, 2007

      Rattlesnakes on an Email List?

      I stopped subscribing to email discussions lists years ago. They are so, well, 1992.

Too many email lists generate too many messages. Some individuals still do not know how to use lists (how many private messages or responses such as "me too" or "see you there" are STILL being sent?). I tried to use digest mode, but it made topic threads too choppy to follow. I also tried mailbox filters. I ended up marking most email list messages as read or just deleting them as they piled up.

      In his editorial column in the June 2007 issue of Information Technology and Libraries, John Webb writes about his desire to have more individuals submit manuscripts to the journal. I am a bit perplexed by his decision to compare email list discussions to the peer review process of ITAL:

      "A typical discussion thread on lita-l happens in 'real time' and lasts two days at most. A small number of participants raise and "solve" an issue in less than a half dozen posts. A few times, however, a question asked or comment posted by a LITA member has led to a flurry of irrelevent postings, or, possibly worse, sustained bombing runs from at least two opposing camps that have left some members begging to be removed from the list..."

      "Some days I wish that lita-l responders would referee, honestly, their own responses for their relevance to the questions or issues or so-whatness and to the membership"

      "do you have the "-" to send your ITAL editor a manuscript to be chewed upon not by rattlesnakes, but by skilled professionals who are your editorial Board Members and referees."

I see nothing wrong with a problem being raised and then solved within a few posts. Email lists such as lita-l are meant to be tools for quick questions and responses. Bombing runs and irrelevant postings exist on every email list, unless the list becomes moderated and filtered, which seems to be contrary to some core librarianship values.

I suspect few lita-l subscribers view the list as an alternative to a scholarly communication tool such as ITAL. ALA members who join LITA are generally technology-oriented individuals. I also suspect that most understand the shortcomings of all email lists and take them for what they are worth. Any LITA member that keeps up with the latest in library technology happenings using email lists rather than publications such as ITAL, or even blogs, may be in the wrong professional association.

There are also many questions that librarians have that simply do not warrant the full treatment of a typical ITAL article. Besides, blogging and the gray literature pose much more of a threat to the future of ITAL than the lita-l list ever could.

      Wednesday, August 22, 2007

      A Two Hour Meeting to Plan A One Hour Meeting?

      I just read Alexander Johannesen's resignation from a library in Canberra, Australia. Mr. Johannesen is not a 'librarian' by training and admits that he is leaving libraries due to an incompatibility. So it goes.


      "Every time I see a glimmer of hope or a flash of something exciting going on in the library world, it usually fast fades into a charades of politics and committee-shuffle. I'm too impatient for this, and I seriously think the world is, too ; it will race past us as we decide on who's going to chair what committee, who'll take notes, and how we're reporting progress to what group. Also since these glimmers of hope usually is attached to specific people more than institutions or organizations, whenever that person goes or moves, so does the glimmer. Again, because we're not traditionally in the business of technical development, we're so fragile..."


      It is unfortunate that Mr. Johannesen is so frustrated. Yet, I do think that the perspective he offers as a programmer coming into the library profession is an important one. One could try to argue the culture and problems he encountered are unique to his library, but we all know better.

      This quickly brought to mind Stephen Abram's quote:

      "librarians like to process things to death, and death wasn't the original goal."

      Just yesterday, I had lunch with a colleague who is not a librarian, but whose current and previous positions have allowed her to work closely with libraries and librarians. Our various discussion topics always seemed to loop back around to the culture of libraries, resistance to change, and the need to process EVERYTHING.

Another colleague also told me yesterday that earlier in the day they were in a two-hour library meeting to discuss plans for a one-hour meeting.

      What's the deal with us, folks??!!

What also concerns me is that Mr. Johannesen is a self-admitted "mid-life-crisis-aged" individual. What will happen when the younger generation begins to enter the profession? The next generation of library scientists graduating from library schools will be hardwired to naturally accept the technology-driven/focused definition of a library. Will the culture embedded in the 'legacy' librarians nearing retirement simply be handed off to the next generation, or will the next generation finally force a culture shift?

I am fairly confident that Mr. Johannesen is not alone in his frustration with the library culture, and maybe not even in his unwillingness to wait for that shift to occur.

      Friday, August 17, 2007

      Open Source Scholarly Publishing

It is fairly clear that a growing number of librarians (other than myself!) are frustrated with publication lag time. I also find that many of my ideas can't be stretched into full-length papers. The lag time, and the fact that not all ideas warrant a full treatment, are the primary reasons I began this blog.

There are many librarians out there with interesting and innovative ideas that are important to the profession and should be disseminated. Many of these librarians may not want to develop full-blown manuscripts. Sometimes a concept doesn't immediately lend itself to a manuscript. Perhaps a librarian simply wishes to throw a theory or concept out there so it can be poked at and prodded. Some of those concepts could evolve into a manuscript; most probably won't.

      One possible model I came across is called open source publishing, a concept being proposed by Dr. Eric Mockensturm, Associate Professor of Mechanical Engineering at The Pennsylvania State University.
      "The system will allow authors to submit papers, reviewers to immediately comment on them, and authors to immediately make revisions in a very dynamic way. The ultimate goal is to not only have open access to the papers but also make the reviewing process open (i.e. not anonymous) and the papers open source so that anyone can make revisions."
This publishing model allows anyone (registered) to submit a manuscript or even a concept outline. Access to the manuscript can be closed to all but the submitter, open to a group of users, or open to all. The first option allows the author to submit something not yet ready to be viewed and critiqued by others. The author(s) can continue to work on the manuscript for as long as they wish. The second option allows specified individuals to view and comment on a manuscript. The authors can continue to revise it as needed.

Once the article is ready and opened to all, it is considered published, but not necessarily accepted for publication in a 'traditional' static journal format.

      The model makes use of a rapid review process that creates real-time communication between an author and reviewers. Authors can solicit reviews, respond to comments, and revise their manuscript accordingly. Comments are organized in a hierarchical structure to make it easier to locate discussion topics that may result in multiple threads.

When an author decides the article is ready for official review for the 'traditional' static publication, a request is sent to the editor, who then assigns formal reviewers. The assigned reviewers can post quick, short comments for rapid response from the authors, without waiting to write an extensive review. The rapid communication between the authors and reviewers can significantly accelerate the review process. The manuscript remains open for discussion by the community throughout this process.

Once official reviews are submitted and sufficient discussion has occurred, the editors decide to either accept or reject the manuscript. This process is different from traditional journals because, hopefully, the author has been revising the manuscript during the review process. There would be no need for an 'accepted with revisions' option. Reviewers will be able to post follow-up comments about any revisions that are made. If the author and reviewers cannot agree on changes, an editor can personally review all the comments and make a publication decision.

If the manuscript is rejected, the author can keep it in the dynamic section for further revision, or pull it and submit it elsewhere (all submissions are protected by a Creative Commons License). Should the author decide to leave the manuscript in the system, they can request that the editors reconsider it after additional revisions. If a breakthrough occurs, the author could ask the editors to take another look.

If the author pulls the manuscript, the comments and reviews remain, so a history of the discussion and credit for reviewing the paper are retained in case reviewers would like to cite their reviews. If subsequent work does not validate the idea, the authors could retract the work, or leave it posted indefinitely for further comment and for the education of others who might be considering similar ideas.
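
Because the workflow has a number of moving parts, here is how I would summarize the manuscript lifecycle in code. This is only my reading of Mockensturm's proposal; the state names and transitions below are my own shorthand, not anything taken from his system.

    // Sketch of the manuscript lifecycle described above, with
    // hypothetical state names. A manuscript widens its audience, goes
    // through official review, and either ends up accepted or stays in
    // the dynamic section for further work.
    public class ManuscriptLifecycle {

        enum State { PRIVATE_DRAFT, GROUP_REVIEW, OPEN_PUBLISHED,
                     OFFICIAL_REVIEW, ACCEPTED, REJECTED, WITHDRAWN }

        private State state = State.PRIVATE_DRAFT;

        // Author widens access: private -> group -> open ("published").
        void openToGroup() { require(State.PRIVATE_DRAFT); state = State.GROUP_REVIEW; }
        void openToAll()   { state = State.OPEN_PUBLISHED; }

        // Author asks the editor to begin the formal review for the
        // 'traditional' static publication.
        void requestOfficialReview() { require(State.OPEN_PUBLISHED); state = State.OFFICIAL_REVIEW; }

        // Editor's decision once reviews and discussion have run their course.
        void accept() { require(State.OFFICIAL_REVIEW); state = State.ACCEPTED; }
        void reject() { require(State.OFFICIAL_REVIEW); state = State.REJECTED; }

        // A rejected manuscript can stay dynamic for revision, or be
        // pulled; either way the comments and reviews are kept.
        void revise()   { require(State.REJECTED); state = State.OPEN_PUBLISHED; }
        void withdraw() { state = State.WITHDRAWN; }

        private void require(State expected) {
            if (state != expected)
                throw new IllegalStateException("expected " + expected + ", was " + state);
        }
    }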

      There are several characteristics that I like about this model.

- It maintains the pre-publication peer review process and generates a static final version, as with traditional publications, which could help satisfy promotion and tenure committees. There is no reason the author cannot remain anonymous through the review period to satisfy double-blind review.

      - The dynamic environment allows authors to present their ideas rapidly, even in 'half baked' unfinished form, and then fill in material as it becomes available. The manuscripts can evolve based on the comments and grow only to a length needed to effectively communicate an idea rather than being bloated to meet minimum manuscript length requirements.

- Subsequent study might also show that the original idea was not good from the start. Such submissions could be maintained in the dynamic section for comment, dissemination, or revision. The result is a forum for ideas that did not work out. Partial or failed concepts could even be revisited later by the original authors, or others might try to determine why a seemingly good idea just did not materialize into something useful. This is what makes this model 'open source.'

- The model makes use of a ranking system that allows readers to rate reviews (or manuscripts) based on their intellectual merit. Such reader-based ranking systems already exist in community-based sites such as digg.com. A community ranking system could lead to a more objective rating, with the cumulative rating of a review being considered regardless of whether it is signed or anonymous.


      Tuesday, August 07, 2007

      Where is My Manuscript? Part 2

Last month I detailed the saga of a manuscript I submitted to a traditional publication. In summary, the post described the long, strange trip taken by a manuscript I submitted on December 21, 2005. Yesterday, much to my surprise, I received a package from the publisher containing reprints. Yes, my manuscript has been published!

      The saga deepens.

When I compared the print version to my copy of the manuscript, there were several noticeable and significant typographical errors. The special issue editor indicated that neither she nor the journal editor had received galley proofs. Any changes were likely made by an editor at the publishing house or their 'typesetter.' Unfortunately, readers will likely question my writing skills - not the publisher's editing skills. Such errors pose a bit of a problem for tenure-track librarians going up for promotion, since publications are often critiqued by 'external' reviewers. The reviewers could comment on such errors, which in turn could be interpreted by a local review committee as a weakness.

While I respect the role that traditional publications play in archiving our professional communications, I still can't help but feel that they can no longer be the trusted source for the dialog and communication going on in our profession today. Libraries are largely dependent on, and are competing with, technologies that change every nine months. How are we supposed to progress as a profession in such a changing environment when it still takes a year and a half for an article to move from submission to publication? Waiting until 2009 to read about the wiki approach that Kathryn Greenhill is using to write two conference papers just won't do.

The fact that you are reading this blog means you 'get it.' Perhaps the only way I will reach those that don't 'get it' is to write a peer-reviewed paper about the impact of blogging, to be published near the end of the decade. If I am really lucky, that manuscript may actually appear while blogging is still the valuable communication tool we view it as today. At the very least, I guess, it would make an interesting historical piece.

      Monday, August 06, 2007

      Is There a Portlet I Can Use?

Portlets are pluggable user interface components that are managed and displayed within a traditional web page. They are mini-applications that run inside a larger application yet are completely independent of the rest of it. For example, a travel website can include a weather portlet.

Portlets have actually been around for a while and were once touted as the next big thing. With a conventional portlet, the browser needed to reload the entire page every time any change occurred, but this changed with the advent of Ajax technology. The Java Portlet Specification 1.0, Java Specification Request (JSR) 168, defines a standard that allows for the "plug-n-play" of portlets from disparate sources.
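
For the technically curious, a JSR 168 portlet is simply a Java class that renders a fragment of a page. Here is a minimal sketch: the NewTitlesPortlet name and its placeholder content are my own invention, but GenericPortlet and doView() come from the specification itself.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    // Minimal JSR 168 portlet. The new-titles list it claims to show is
    // a placeholder; a real version would pull data from the catalog.
    public class NewTitlesPortlet extends GenericPortlet {

        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            // The portal renders each portlet as a fragment of the page,
            // so we emit a fragment, not a complete HTML document.
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<h3>New Titles This Week</h3>");
            out.println("<ul><li>(titles would be listed here)</li></ul>");
        }
    }

Drop a class like this into any JSR 168 compliant portal and it should render alongside portlets from entirely different sources, which is the whole point of the standard.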

      Bremner, Naidoo, Sandell, and Vickery offer up the following advantages of using portlet technology in the creation of a portal:

      Personalization – Customers of portals have the ability to select the kinds of information that they require and have it presented in a layout of their own choosing. This allows a level of customisation so customers can maximise their productivity.

Single Sign On – Customers should not have to log in to services after their initial portal login. Credentials for the initial portal login are propagated to the portal services, which can then be used to authenticate a customer to the service in the background.

      Aggregation – Customers can access a multitude of services from a single location. Instead of having to check out multiple pages on a web site, or even multiple web sites, a customer can have the information they seek presented to them on a single page if desired.

Information Management – Content distributed to our customers can be managed more effectively and efficiently, allowing a much better level of reuse without duplication.

Information Targeting – Content can be targeted to specific groups of people, such as academic staff, postgraduate students doing research, or library staff. This gives a much more granular level of control over content distribution, which saves customers from being bombarded with irrelevant information.

Multiple Device Delivery – Portals have the ability to be rendered independently of the data that they contain. This means that using a different ‘style sheet’ for rendering can deliver the same information, and even similar functionality, on devices such as PDAs and mobile phones.

So, has anyone out there built any neat library-oriented portlets they wish to share?