Interaction between systems and services on the network requires intelligence. Intelligence about what is in the environment (search or resolution targets, for example), about how to interact with found entities (addresses or interface specifications, for example), about who is authorized to do what, and so on.
Think of two parallels. One for human users: the yellow pages directory provides intelligence about businesses, the services they provide, and how to access those services (by telephone or at a physical address). One for machines: the DNS allows a domain name to be resolved into an address. In each case, a ‘network-level’ service reduces the need for extensive redundant collection and management of ‘intelligence’ by each potential user of services within the network.
Think of what the network world would be like without the DNS. The burden of data collection and configuration on each organization would be that much greater, and the overall efficiency of the network would be much reduced.
But this is exactly the situation we are in with higher-level network services, where we have no such directories. Increasingly, library applications need to know about a variety of entities. We are used to thinking about information objects (books, journals, maps, etc). What about institutions (suppliers, libraries, etc), policies (e.g. ILL policies), licenses, collections (databases, special collections, summary-level descriptions of archival collections, and so on), and services (addresses and interface details for machine users, and descriptions for human users)? The absence of appropriate directory services for each of these reduces the efficiency of the network. We have an extensive infrastructure to allow us to discover and use information objects, and we are currently figuring out how that needs to be re-engineered for more effective use in a network world. However, we are very poorly equipped in the other areas. This means that there is a lot of local configuration and redundant effort in making certain applications work.
This is partly a discussion about metadata. We are very familiar with the notion of ‘metadata’. I like to think of metadata as data which removes from a user (human or machine) the need to have full advance knowledge of the existence or characteristics of things of potential interest in the environment.
So, a catalog record notes the existence of an item. Additional metadata may provide a location. Additional metadata may say something about terms under which it is available. We are now in an environment where we are really interested in metadata about many more ‘things’. A metasearch engine will need metadata about the targets it can interact with, and that metadata will be of various types. This is a form of metadata about ‘services’. We want this for humans and for machines. And we need metadata for all of the entities mentioned above. In fact, as our networked environment becomes richer so does the need to provide metadata about the entities in that environment.
However, we also need to think about how metadata is made available in useful ways. It needs to be acted upon in the appropriate domain. At the moment, we have metadata about all the entities I mention above, and others. It is scattered across many systems and services. It may be hardwired into particular applications and not be more generally available. Think of metasearch, for example. Each metasearch application will need to be configured with the data – the intelligence – needed to locate and to connect to available targets.
This is where discussions about directories or registries come in. In many cases there are potential advantages in lifting this configuration data out and consolidating it in shared registries. In this way, each application does not have to know in advance what is available to it and how it should interact with it. Of course, we are very familiar with this principle from the directory examples given above. Think of the phone. We may keep a local list of numbers, but we can derive some of it from directories, and we can always look to several directories where we do not have the required number to hand. A shared directory or registry removes a lot of confusion and redundant effort.
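The phone example describes a familiar lookup pattern: check a local list first, then fall back to one or more shared directories. A minimal sketch of that pattern (the names and data structures here are purely illustrative):

```python
def find_number(name, local_list, directories):
    """Look up `name` in the local list first; fall back to shared directories.

    `local_list` is a dict we maintain ourselves; `directories` is a
    sequence of shared dict-like lookups we can consult in turn.
    """
    if name in local_list:
        return local_list[name]
    for directory in directories:
        number = directory.get(name)
        if number is not None:
            local_list[name] = number  # remember it locally for next time
            return number
    return None  # not known anywhere
```

The point of the shared directories is that each caller's local list can stay small: most of the intelligence lives in the shared service, not in every application.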
An example of such a registry in our space is the OCLC OpenURL resolver registry, which is beginning to be used in interesting ways. Here is Dan Chudnov’s description of the registry:
The OCLC OpenURL Resolver Registry comprises records for roughly 1000 OpenURL resolvers at various institutions, mostly but not solely in North America. It also provides a simple web service that takes an IP address as a parameter and returns zero-to-many resolver records for every resolver that serves users coming from that IP address. [A Clean, Well-Linked ‘Base (or, Solving the “Appropriate Resolver” Problem with the OCLC Resolver Registry) | One Big Library.]
And here is what he wants to use it for:
What does that mean? If you’re like me, and you work for a small service like the Canary Database, you used to be essentially unable to provide user-appropriate OpenURL linking without having to configure many many ranges of IP addresses after many many conversations with librarians. “Used to be,” that is. [A Clean, Well-Linked ‘Base (or, Solving the “Appropriate Resolver” Problem with the OCLC Resolver Registry) | One Big Library.]
Dan goes on to describe the approach. You can see it in action by going to the Canary Database. And this is how the functionality is described to users:
The Canary Database now attempts to create links to library full text link servers (known in libraries as “OpenURL resolvers”) for many hundreds of libraries. If you’re using the Canary Database from an academic campus, there’s a good chance you’ll see links from articles in our database back to your own library’s online journals. Follow these links to get to full text just like you would any other time you see the link buttons from your library! [Canary Database Project News » New features: Full text article links]
The registry is used in association with the OpenURL Gateway to connect to appropriate resolvers. Ross Singer also talks about how he is using the registry.
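The interaction Dan describes can be sketched as a small client: ask the registry which resolvers serve a visitor's IP address, and if one exists, build a link from the citation to that resolver. The endpoint URL, parameter name, and JSON response shape below are assumptions for illustration; the real OCLC service's interface may differ.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical registry endpoint -- the real service's URL, parameters,
# and response format may differ from this sketch.
REGISTRY_URL = "https://registry.example.org/resolvers"

def resolvers_for_ip(ip):
    """Ask the registry for zero-to-many resolver records serving `ip`."""
    query = urllib.parse.urlencode({"ip": ip})
    with urllib.request.urlopen(f"{REGISTRY_URL}?{query}") as response:
        return json.load(response)

def build_link(resolver_base, citation):
    """Append the citation, as OpenURL-style key-value pairs, to the
    resolver's base URL."""
    return f"{resolver_base}?{urllib.parse.urlencode(citation)}"

def link_for_visitor(ip, citation):
    """Point a visitor at their institutional resolver, where one exists."""
    records = resolvers_for_ip(ip)
    if not records:
        return None  # no resolver known for this address; omit the link
    return build_link(records[0]["baseURL"], citation)
```

The referring site needs no advance configuration of IP ranges: the registry supplies that intelligence at lookup time.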
What the registry gives you in each case is the ability to direct visitors from citations on one site to their institutional resolver on another site, where it exists. This is quite nice. The registry is used to determine which service to point a user at, and its use avoids the need for local configuration. The referring site does not have to have advance knowledge of all the places from which it will be visited. And the target site does not have to notify all potential referrers of its existence. From the referring site point of view, this adds value to the site by connecting a ‘discovery’ service in one place to the appropriate ‘location’ service in another. From the target site point of view, it means that it can mobilize other people’s discovery environments to bring people back to their services.
The registry provides the intelligence which makes this happen. In this case it associates IP ranges with the metadata required to access the relevant resolver. My development colleague Phil Norman tells me that the Registry API will also accept an OCLC symbol, and in the future may accept other inputs (a geographic code for example).
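The core association the registry maintains can be modelled as a mapping from IP ranges to resolver metadata. A minimal sketch, with invented record fields and example networks (the real registry's schema and data will differ):

```python
import ipaddress

# Toy registry: each entry maps an IP network to resolver metadata.
# Field names and values here are illustrative, not the real schema.
REGISTRY = [
    {"network": ipaddress.ip_network("10.1.0.0/16"),
     "institution": "Example University",
     "baseURL": "https://resolver.example.edu/openurl"},
    {"network": ipaddress.ip_network("10.2.0.0/16"),
     "institution": "Sample College",
     "baseURL": "https://openurl.sample.edu/"},
]

def lookup(ip):
    """Return all resolver records whose IP range contains `ip`."""
    address = ipaddress.ip_address(ip)
    return [record for record in REGISTRY if address in record["network"]]
```

Other inputs, such as an institutional symbol or a geographic code, would simply be further keys into the same records.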
This is a new way of working and it is not without its issues, some of which Dan addresses. Of course, an immediate issue is that not all visitors will be from institutions with OpenURL resolvers. Or, related, not all visitors who are from institutions with resolvers will come in from an IP address associated with their institution. And the referring site does not know in advance if the target site will hold a copy of the item. These and other issues present interesting questions about interaction design in a distributed environment where control is passed between systems.
Incidentally, the Registry is also used by Openly’s OpenURL Referrer, which “is a Firefox browser extension that can take certain kinds of citations on the web and convert them to direct links to the cited resource in one of your local library’s databases”. This works with Google Scholar and with COinS-enabled sites.
These are small examples of how one type of registry can add value. Registry services will need to become more common if we are to have efficient interactions within networks of library providers and consumers.
- Resolution and transaction costs
- Coins in Open WorldCat, Openly’s OpenURL Referrer, and the OCLC Resolver Registry
- From metasearch to distributed information environments
- All that is solid melts into flows – metadata
- Making OpenURLs work hard
- Registries, research and jobs
Update: edited for clarity. Related entries added.