
Showing posts with label rant.

Megascience platforms for biodiversity information: what's wrong with this picture?

The journal Mycokeys has published the following paper:

Triebel, D., Hagedorn, G., & Rambold, G. (2012). An appraisal of megascience platforms for biodiversity information. MycoKeys, 5(0), 45–63. doi:10.3897/mycokeys.5.4302

This paper contains a diagram that seems innocuous enough but which I find worrying:

[Figure 1 from Triebel et al. 2012: biodiversity megascience platforms and their cross-linkages]

The nodes in the graph are "biodiversity megascience platforms", the edges are "cross-linkages and data exchange". What bothers me is that if you view biodiversity informatics through this lens then the relationships among these projects become the focus. Not the data, not the users, not the questions we are trying to tackle. It is all about relationships between projects.

I want a different view of the landscape. For example, below is a very crude graph of the kinds of things I think about, namely kinds of data and their interrelationship:

[Diagram: kinds of biodiversity data and the links between them]

What tends to happen is that this data landscape gets carved up by different projects, so we get separate databases of taxonomic names, images, publications, and specimens (these are the "megascience platforms" such as CoL, EOL, GBIF). This takes care of the nodes, but what about the edges, the links between the data? Typically what happens is lots of energy is expended on what to call these links, in other words, the development of the vocabularies and ontologies such as those curated by TDWG. This is all valuable work, but this doesn't tackle what for me is the real obstacle to progress, which is creating the links themselves. Where are the "megascience platforms" devoted to linking stuff together?

When we do have links between different kinds of data these tend to be within databases. For example, GenBank explicitly links sequences to publications in PubMed, and to taxa in the NCBI taxonomy database. All three (sequence, publication, taxon) have identifiers (accession number, PubMed id, taxon id, respectively) that are widely used outside GenBank (and, indeed, are the de facto identifiers for the bioinformatics community). Part of the reason these identifiers are so widely used is because GenBank is the only real "megascience platform" in the list studied by Triebel et al. It's the only one that we can readily do science with (think BLAST searches, think of the number of databases that have repurposed GenBank data, or that build on NCBI services).
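Those GenBank cross-links are exposed programmatically through NCBI's E-utilities: the elink service takes a record in one database and returns linked records in another. A minimal sketch of building such a request (the UID here is a placeholder, not a real record):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def elink_url(dbfrom, db, uid):
    """Build an NCBI E-utilities elink URL relating a record in one
    database (e.g. a nucleotide sequence) to records in another
    (e.g. PubMed)."""
    return EUTILS + "/elink.fcgi?" + urlencode(
        {"dbfrom": dbfrom, "db": db, "id": uid})

# Which PubMed articles are linked to this (placeholder) nucleotide UID?
print(elink_url("nuccore", "pubmed", "12345"))
```

It is precisely this kind of cheap, identifier-based traversal that the rest of biodiversity informatics lacks.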

[Diagram: GenBank's links between sequences, publications, and taxa]

Many of the questions we might ask can be formulated as paths through a diagram like the one above. For example, if I want to do phylogeography, then I want the path phylogeny -> sequence -> specimen -> locality. If I'm lucky the phylogeny is in a database and all the sequences have been georeferenced, but often the phylogeny isn't readily available digitally, I need to map the OTUs in the tree to sequences, I then need to track down the vouchers for those sequences, and obtain the localities for those sequences from, say, GBIF. Each step involves some degree of pain as we try and map identifiers from one database to those in another.
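Framing questions as paths makes the point concrete: given a graph whose nodes are kinds of data and whose edges are curated links, answering a question means finding a route between two node types. A toy sketch (the link graph below is a simplified assumption, not any real schema):

```python
from collections import deque

# Hypothetical links between kinds of biodiversity data
links = {
    "phylogeny": ["sequence"],
    "sequence": ["specimen", "publication"],
    "specimen": ["locality", "publication"],
    "publication": ["taxon name"],
}

def find_path(start, goal):
    """Breadth-first search for a chain of linkable data types."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in links.get(path[-1], []):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("phylogeny", "locality"))
# → ['phylogeny', 'sequence', 'specimen', 'locality']
```

Every hop in the returned path is a join that somebody has to have curated; today most of those joins are done by hand, paper by paper.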

[Diagram: the phylogeny → sequence → specimen → locality path]

If I want to do classical alpha taxonomy I need information on taxonomic names, concepts, publications, attributes, and specimens. The digital links between these are tenuous at best (where are the links between GBIF specimen records and the publications that cite those specimens, for example?).

[Diagram: the names, concepts, publications, attributes, and specimens needed for alpha taxonomy]

Focussing on so-called "platforms" is unfortunate, in my opinion, because it means that we focus on data and how we carve up responsibility for managing it (never mind what happens to data that lacks an obvious constituency). The platforms aren't what we should be focussing on; it is the relationships between the data (and no, these are not the same as the relationships between the "platforms").

If I'd like to see one thing in biodiversity informatics in 2013 it is the emergence of a "platform" that makes the links the centre of their efforts. Because without the links we are not building "platforms", we are building silos.

Why LSIDs suck

I'll keep this short: LSIDs suck because they are so hard to set up that many LSIDs don't actually work. Because of this there seems to be no shame in publishing "fake" LSIDs (LSIDs that look like LSIDs but which don't resolve using the LSID protocol). Hey, it's hard work, so let's just stick them on a web page but not actually make them resolvable. Hence we have an identifier that people don't recognise (most people have no idea what an LSID is) and which we have no expectations that it will actually work. This devalues the identifier to the point where it becomes effectively worthless.

Now consider URLs. If you publish a URL I expect it to work (i.e., I paste it into a web browser and I get something). If it doesn't work then I can conclude that the URL is wrong, or that you are a numpty and can't run a web site (or don't care enough about your content to keep the URL working). At no point am I going to say "gee, it's OK that this URL doesn't resolve because these things are hard work."

Now you might argue that whether your LSID resolves is an even better way for me to assess your technical ability (because it's hard work to do it right). Fair enough, but the fact that even major resources (such as Catalogue of Life) can't get them to work reliably reduces the value of this test (it's a poor predictor of the quality of the resource). Or, perhaps the LSID is a signal that you get this "globally unique identifier thing" and maybe one day will make the LSIDs work. No, it's a signal you don't care enough about identifiers to make them actually work today.

As soon as people decided it's OK to publish LSIDs that don't work, LSIDs were doomed. The most immediate way for me to determine whether you are providing useful information (resolving the identifier) is gone. And with that goes any sense that I can trust LSIDs.

DNA Barcoding, the Darwin Core Triplet, and failing to learn from past mistakes

Given various discussions about identifiers, dark taxa, and DNA barcoding that have been swirling around the last few weeks, there's one notion that is starting to bug me more and more. It's the "Darwin Core triplet", which creates identifiers for voucher specimens in the form <institution-code>:<OPTIONAL collection-code>:<specimen-id>. For example,

MVZ:Herp:246033

is the identifier for specimen 246033 in the Herpetology collection of the Museum of Vertebrate Zoology (see http://arctos.database.museum/guid/MVZ:Herp:246033).
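Parsing such a triplet looks trivial, which is part of its seductive appeal. A sketch (the optional middle element makes even this simple format ambiguous to handle):

```python
def parse_dwc_triplet(s):
    """Split a Darwin Core triplet into its parts; the collection
    code is optional, so two-part identifiers are also accepted."""
    parts = s.split(":")
    if len(parts) == 3:
        institution, collection, specimen = parts
    elif len(parts) == 2:
        institution, specimen = parts
        collection = None
    else:
        raise ValueError(f"not a Darwin Core triplet: {s!r}")
    return {"institution": institution,
            "collection": collection,
            "specimen": specimen}

print(parse_dwc_triplet("MVZ:Herp:246033"))
```

Note that everything downstream of this parse depends on the institution and collection codes being stable and unambiguous, which, as argued below, they are not.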

On the face of it this seems a perfectly reasonable idea, and goes some way towards addressing the problem of linking GenBank sequences to vouchers (see, for example, http://dx.doi.org/10.1016/j.ympev.2009.04.016, preprint at PubMed Central). But I'd argue that this is a hack, and one which potentially will create the same sort of mess that citation linking was in before the widespread use of DOIs. In other words, it's a fudge to postpone adopting what we really need, namely persistent resolvable identifiers for specimens.

In many ways the Darwin Core triplet is analogous to an article citation of the form <journal>, <volume>:<starting page>. In order to go from this "triplet" to the digital version of the article we've ended up with OpenURL resolvers, which are basically web services that take this triple and (hopefully) return a link. In practice building OpenURL resolvers gets tricky, not least because you have to deal with ambiguities in the <journal> field. Journal names are often abbreviated, and there are various ways those abbreviations can be constructed. This leads to lists of standard abbreviations of journals and/or tools to map these to standard identifiers for journals, such as ISSNs.
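An OpenURL request is essentially this metadata triplet serialised into a query string for some resolver to interpret. A simplified sketch (the resolver URL and the article values are hypothetical, and real OpenURL/Z39.88 requests carry more keys than this):

```python
from urllib.parse import urlencode

def openurl(resolver, journal, volume, spage):
    """Build a (much simplified) OpenURL query from a
    journal/volume/starting-page triplet."""
    return resolver + "?" + urlencode({
        "genre": "article",   # what kind of thing we are citing
        "title": journal,     # the ambiguous part: journal names vary
        "volume": volume,
        "spage": spage,
    })

print(openurl("https://example.org/resolver",
              "J. Exampleology", "12", "345"))
```

The fragility is visible in the code: the `title` value is free text, and every abbreviation or renaming of the journal produces a different query for the same article.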

This should sound familiar to anybody dealing with specimens. Databases such as the Registry of Biological Repositories and the Biodiversity Collections Index have been created to provide standardised lists of collection abbreviations (such as MVZ = Museum of Vertebrate Zoology). Indeed, one could easily argue that what we need is an OpenURL for specimens (and I've done exactly that).

As much as there are advantages to OpenURL (nicely articulated in Eric Hellman's post When shall we link?), ultimately this will end in tears. Linking mechanisms that depend on metadata (such as museum acronyms and specimen codes, or journal names) are prone to break as the metadata changes. In the case of journals, publishers can rename entire back catalogues and change the corresponding metadata (see Orwellian metadata: making journals disappear), journals can be renamed, merged, or moved to new publishers. In the same way, museums can be rebranded, specimens moved to new institutions, etc. By using a metadata-based identifier we are storing up a world of hurt for someone in the future. Why don't we look at the publishing industry and learn from them? By having unique, resolvable, widely adopted identifiers (in this case DOIs) scientific publishers have created an infrastructure we now take for granted. I can read a paper online, and follow the citations by clicking on the DOIs. It's seamless and by and large it works.

One could argue that a big advantage of the Darwin Core triplet is that it can identify a specimen even if it doesn't have a web presence (which is another way of saying that maybe it doesn't have a web presence now, but it might in the future). But for me this is the crux of the matter. Why don't these specimens have a web presence? Why is it the case that biodiversity informatics has failed to tackle this? It seems crazy that in the context of digital data (DNA sequences) and digital databases (GenBank) we are constructing unresolvable text strings as identifiers.

But, of course, much of the specimen data we care about is online, in the form of aggregated records hosted by GBIF. It would be technically trivial for GBIF to assign a decent identifier to these (for example, a DOI) and we could complete the link between sequence and specimen. There are ways this could be done such that these identifiers could be passed on to the home institutions if and when they have the infrastructure to do it (see GBIF and Handles: admitting that "distributed" begets "centralized").

But for now, we seem determined to postpone having resolvable identifiers for specimens. The Darwin Core triplet may seem a pragmatic solution to the lack of specimen identifiers, but it seems to me it's simply postponing the day we actually get serious about this problem.





TreeBASE II makes me pull my hair out

I've been playing a little with TreeBASE II, and the more I do the more I want to pull my hair out.

Broken URLs
The old TreeBASE had a URL API, which databases such as NCBI made use of. For example, the NCBI page for Amphibolurus nobbi has a link to this taxon in TreeBASE. The link is http://www.treebase.org/cgi-bin/treebase.pl?TaxonID=T31183&Submit=Taxon+ID. Now, this is a fragile looking link to a Perl CGI script, and sure enough, it's broken. Click on it and you get a 404. In moving to the new TreeBASE II, all these inward links have been severed. At a stroke TreeBASE has cut itself off from an obvious source of traffic from probably the most important database in biology. Please, please, throw in some mod_rewrite and redirect these CGI calls to TreeBASE II.

New identifiers
All the TreeBASE studies and taxa have new identifiers. Why? Imagine if GenBank decided to trash all the accession numbers and start again from scratch. TreeBASE II does support "legacy" StudyIDs, so you can find a study using the old identifier (you know, the one people have cited in their papers). But there's no support for legacy TaxonIDs (such as T31183 for Amphibolurus nobbi). I have to search by taxon name. Why no support for legacy taxon IDs?

Dumb search
Which brings me to search. The search interface for taxa in TreeBASE is gloriously awful:

[Screenshot: the TreeBASE taxon search form, with separate options for each identifier type]

So, I have to tell the computer what I'm looking for. I have to tell it whether I'm looking for an identifier or doing a text search, then within those categories I need to be more specific: do I want a TreeBASE taxon ID (new ones of course, because the old ones have gone), NCBI id, or uBio? And this is just the "simple" search, because there's an option for "Advanced search" below.

Maybe it's just me, but I get really annoyed when I'm asked to do something that a computer can figure out. I shouldn't have to tell a computer that I'm searching for a number or some text, nor should I tell it what that number or text means. Computers are pretty good at figuring that stuff out. I want one search box, into which I can type "Amphibolurus nobbi", or "Tx1294" or "T31183" or "206552" or "6457215" or "urn:lsid:ubio.org:namebank:6457215" (or a DOI, or a text string, or pretty much anything) and the computer does the rest. I don't ever want to see this:

[Screenshot: the TreeBASE advanced search options]

Computers are dumb, but they're not so dumb that they can't figure out if something is a number or not. What I want is something close to this:

[Screenshot: a single Google-style search box]

Is this really too much to ask? Can we have a search interface that figures out what the user is searching for?

Note to self: Given that TreeBASE has an API, I wonder how hard it would be to knock up a tool that took a search query, ran some regular expressions to figure out what the user might be interested in, then hit the API with that search, and returned the results?
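The regular-expression dispatch in that note-to-self is genuinely cheap to build. A sketch, using the identifier shapes from the examples above (the labels and patterns are my guesses at the relevant formats, first match wins):

```python
import re

# Ordered (pattern, label) pairs; more specific shapes come first.
PATTERNS = [
    (re.compile(r"^urn:lsid:", re.I), "lsid"),
    (re.compile(r"^10\.\d{4,}/"), "doi"),
    (re.compile(r"^Tx\d+$"), "treebase taxon id"),
    (re.compile(r"^T\d+$"), "legacy treebase taxon id"),
    (re.compile(r"^\d+$"), "numeric id (e.g. NCBI or uBio)"),
]

def classify(query):
    """Guess what kind of thing the user typed into a single search box."""
    q = query.strip()
    for pattern, label in PATTERNS:
        if pattern.match(q):
            return label
    return "text search"

print(classify("T31183"))              # legacy treebase taxon id
print(classify("Amphibolurus nobbi"))  # text search
```

With something like this in front of the API, one box covers every case on the form; ambiguous inputs (a bare number could be several things) can simply be searched against all the matching categories at once.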

My concern here is that TreeBASE II is important, very important. Which means it's important to make it usable, which means don't break existing URLs, don't make old identifiers disappear, and don't have a search interface that makes me want to pull my hair out.

Where next for BHL?

You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new. - Steve Jobs

It's Friday, so time for either a folly or a rant. BHL have put another user survey into the field http://www.surveymonkey.com/s/BHLsurvey. I loathe user surveys. They don't ask the questions I would ask, then when you see the results, often the most interesting suggestions are ignored (see the Evaluation of the User Requirement Survey Oct-Nov 2009). And we've been here before, with EDIT (see this TAXACOM message about the moribund Virtual Taxonomic Library). Why go to the trouble of asking users if you aren't going to deliver?

I suspect surveys exist not to genuinely help figure out what to do, but as an internal organisational tool to convince programmers what needs to be done, especially in large, multinational consortia where the programmers might be in a different institution, and don't have any particular vested interest in the project (if they did, they wouldn't need user surveys, they'd be too busy making stuff to change the world).

So, what should BHL be doing? There's lots of things to do, but for me the core challenges are findability and linkage. BHL needs to make its content more findable, both in terms of bibliographic metadata and search terms (e.g., taxa, geographic places). It also needs to be much more strongly linked, both internally (e.g., cross referencing between articles where one BHL article cites another BHL article), and externally (to the non-BHL literature, for example, and to nomenclators), and the external links need to be reciprocal (BHL should link to nomenclators, and nomenclators should point back to BHL).

There are immediate benefits from improved linkage. Users could navigate within BHL content by citation links, for example, in the same way we can in the recent literature. If BHL cleaned up its metadata and had a robust article-level OpenURL resolver it could offer services to publishers to add additional links to their content, driving traffic to BHL itself. Better findability leads to better links.

One major impediment to improving things is the quality of the OCR text extracted from BHL scans. There have been various automated attempts to extract metadata from OCR scans (e.g., "A metadata generation system for scanned scientific volumes" doi:10.1145/1378889.1378918), but these have met with mixed success. There's a lot of scope for improving this, but I suspect a series of grad student theses on this topic may not be the way forward (grad students rarely go all the way and develop something that can be deployed). Which leaves crowd sourcing. Given the tools already available for correcting Internet Archive-derived book scans (e.g., Wikisource discussed in an earlier post), it seems to me the logical next move for BHL is to dump all their content into a Wikisource-style environment, polish the tools and interface a bit, and encourage the community to have at it. Forming and nurturing that community will be a challenge, but providing BHL can demonstrate some clear benefits (e.g., generating clean pages with new taxon names, annotated illustrations, OpenURL tools for publishers to use), then I think the task isn't insurmountable. It just needs some creativity (e.g., why not engage EOL users who land on BHL content to go one step further and clean it up, or link with Wikipedia and Wikispecies to attract users interested in actively contributing?).

I doubt any of this will be in any user survey...