Will history be kind to Open Hypermedia? The other day I gave a presentation on the Evolution of the Web to our new Masters students (part of their pre-sessional programme at Southampton) and I was forced to face an awkward thought. Open Hypermedia was an interesting side road of the Information Superhighway, but it wasn't a proper lane, or even a sliproad. It's something that you look wonderingly at as you cruise past on your full-throttle Web 2.0 browser. A blip. For me it's certainly a fun memory, but faced with explaining how we got from there to here - is it also an irrelevance?
I became involved in Hypertext research in the last year of my undergraduate degree when I undertook a project on the first implementation of the Open Hypermedia Protocol (OHP). Open Hypermedia was a pre-web idea that arose in the early 1990s, and at its heart was the idea that you should separate content and structure. Many hypermedia systems at the time embedded links in the content (as in fact the web does with the href attribute); Open Hypermedia Systems broke with this and managed a separate linkbase, combining it with the content at runtime.
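A toy sketch of that idea (invented data and format, not how any particular OH system actually stored things): the content stays plain, and a separate linkbase is combined with it only when it is presented.

```python
# The content contains no embedded links.
content = "Microcosm was developed at Southampton."

# A separate linkbase maps selections (here, simple keywords) to destinations.
# Keywords and targets are invented for this illustration.
linkbase = {
    "Microcosm":   "docs/microcosm.html",
    "Southampton": "docs/southampton.html",
}

def apply_links(text: str, links: dict) -> str:
    """Combine content and linkbase at runtime instead of embedding hrefs."""
    for keyword, target in links.items():
        text = text.replace(keyword, f'<a href="{target}">{keyword}</a>')
    return text

print(apply_links(content, linkbase))
```

Because the links live outside the documents, the same content can be served with different linkbases, and links can be checked, versioned, or swapped without touching the content itself.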
The result was hypermedia that could work on multiple types of media (video and audio, document files, etc.), that could be managed (so no dead ends), versioned, and which could present different linksets according to the situation - so early personalization. There were many OH systems (Microcosm, Chimera, HOSS, DHM, and Callimachus to name a few) and some were pretty sophisticated. But they were all swept away by the relentless rise of the Web, a process of technological cleansing that led Mark Bernstein to lament the death of strange hypertexts by 2001.
The Open Hypermedia Protocol was a good idea in the spirit of Web standards, but it was also a plaster over a mortal wound. Proposed in 1996, the idea was to get a number of these OH systems to follow the Web model and interoperate through simple open standards. OHP, and the range of fantastic systems still standing by the late 90s, inspired me to write my PhD on different models of hypermedia, but by the time of my doctoral graduation in 2001 it was clear that the game was up, and Open Hypermedia was swept away like everything else.
I've recently been involved in a project to create an online multimedia annotation system (Synote). Because Synote deals with multimedia, the system has to hold its links separately from its content, and I thought it was the perfect opportunity to return to some of those lost OH principles in the age of Web 2.0.
The Synote design team began a process to create a hypertext model to sit behind a YouTube-style front end. At first OH seemed a perfect fit, and we designed a model that supported a variety of traditional OH link structures (typed, n-ary, bi-directional, multi-modal links) that could be used for annotation, subtitling, bookmarking, and a host of other activities.
However, as the weeks wore on it became obvious that there were some serious problems with an OH design: the issue is that OH holds the link model as sacred. The model is pushed up into the User Interface, and down into the Database, but in reality it is not a comfortable fit in either.
At the Database level it introduces run-time complexity. The OH model is very flexible, but contains many internal references (as it is stuffed full of first-class elements). As a result it takes many database queries to resolve links, and an intolerable number of queries to retrieve even small networks. In comparison, a more specific approach could use a single table for each type of structure (so, for example, annotations could be in one database table and bookmarks in another). Resolving a structure then requires retrieving a single table row, which is much faster.
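To make the contrast concrete, here is a minimal sketch (in Python, with dictionaries standing in for database tables; none of the names come from Synote's actual schema). Resolving a link in the generalised model means chasing references through several tables, while the activity-specific model needs a single row lookup.

```python
from dataclasses import dataclass

# --- Generalised OH-style model: every element is first class ---
# These dicts stand in for separate database tables; names are invented.
links     = {1: {"type": "annotation", "endpoints": [10, 11]}}
endpoints = {10: {"anchor": 100}, 11: {"anchor": 101}}
anchors   = {100: {"resource": "video.mp4", "offset": 42.0},
             101: {"resource": "note.txt", "span": (0, 80)}}

def resolve_generic(link_id):
    """Walk link -> endpoints -> anchors: one lookup (query) per first-class element."""
    link = links[link_id]
    return [anchors[endpoints[e]["anchor"]] for e in link["endpoints"]]

# --- Activity-specific model: one table per structure type ---
@dataclass
class Annotation:
    media: str      # resource being annotated
    offset: float   # position in seconds
    note: str       # annotation body

annotations = {1: Annotation("video.mp4", 42.0, "Speaker introduces OHP")}

def resolve_specific(annotation_id):
    """A single row lookup, no joins."""
    return annotations[annotation_id]
```

The generic version scales its query count with the number of first-class elements in every structure; the specific version pays one lookup per structure, whatever it contains.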
You might argue that the run-time overhead is worth it, because you can build a generalized back end, a hypertext engine that could deal with almost any type of structure. However, this only really saves you effort when you have a generalized front end (user interface) as well.
This is a problem, since at the User Interface level the generalized approach is not capable of delivering a quality experience. Users want a different type of interface when creating an annotation than when creating a bookmark, and they want the different types of structures displayed differently as well. In short, the user interface layer needs to know the activity in order to render the structure (or the authoring process) appropriately. Using the raw OH structures in the UI is awkward and clumsy.
Since each new type of linking activity requires specific UI development, there is little advantage in the generalized back end. Modern development tools mean that it is as easy to build a new specific back end as to map the new activity onto an old generalized structure, as the sketch below illustrates.
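Here is a hedged illustration (invented structures, not Synote's code) of why the generic back end buys so little at this layer: whatever the storage model looks like, the UI still ends up with one renderer per activity, and that per-activity work is the real cost of adding a new structure type.

```python
from dataclasses import dataclass

# Invented structures for illustration only.
@dataclass
class Annotation:
    media: str
    offset: float
    note: str

@dataclass
class Bookmark:
    media: str
    offset: float
    label: str

# One renderer per activity: this is the work a generalised hypertext
# engine cannot save you from, because users expect each structure type
# to look and behave differently.
def render_annotation(a: Annotation) -> str:
    return f'<div class="annotation">{a.note} (at {a.offset}s in {a.media})</div>'

def render_bookmark(b: Bookmark) -> str:
    return f'<a class="bookmark" href="{b.media}#t={b.offset}">{b.label}</a>'

RENDERERS = {Annotation: render_annotation, Bookmark: render_bookmark}

def render(structure) -> str:
    return RENDERERS[type(structure)](structure)
```

Once you have written the activity-specific renderer, writing the matching activity-specific storage is a small additional step, which is why the generalized engine underneath stops paying its way.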
So perhaps Open Hypermedia failed because it fell awkwardly between these two layers. It still makes a lot of sense as a conceptual model, and as a point of interoperability, but inside the system itself, or in the user interface it seems overly complex and awkward.
In the end we implemented Synote as a series of specific structures. These were easy to code into the database, quick to manipulate at run time, and looked good when revealed in the user interface.
There's a lesson here for the Semantic Web, another grand idea from the Hypertext and Web community that I have commented on before. The Semantic Web is a set of standards for representing and exchanging knowledge (as sets of RDF triples constrained by ontologies); like Open Hypermedia it is therefore about models, openness and interoperability. But also like Open Hypermedia, many Semantic Web developers have fallen into the trap of forcing their model down into the system implementation and up into the UI.
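The parallel is easy to sketch. The fragment below uses invented example triples (plain Python tuples with a placeholder "ex:" prefix, not a real vocabulary or RDF library) to show the same gap: the triple model is the right shape for storage and exchange, but what the interface actually wants is the assembled domain structure.

```python
# Invented example triples; "ex:" is a placeholder prefix, not a real vocabulary.
triples = [
    ("ex:rec42", "ex:annotates",  "ex:video7"),
    ("ex:rec42", "ex:atTime",     "42.0"),
    ("ex:rec42", "ex:hasComment", "Speaker introduces OHP"),
]

def as_annotation(subject, graph):
    """Assemble the domain structure the UI wants from the raw triples."""
    props = {p: o for s, p, o in graph if s == subject}
    return {
        "media":  props["ex:annotates"],
        "offset": float(props["ex:atTime"]),
        "note":   props["ex:hasComment"],
    }

print(as_annotation("ex:rec42", triples))
# {'media': 'ex:video7', 'offset': 42.0, 'note': 'Speaker introduces OHP'}
```

Exposing the raw triples to users, or walking them one at a time inside the application, repeats the Open Hypermedia mistake of pushing the exchange model into places it does not fit.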
So in the end perhaps Open Hypermedia does offer us a valuable lesson - not about the structures of hypertext - but about the need to abstract implementation and user experience away from the conceptual models that drive them.
This is a hard lesson, because you want users and developers to see your models - otherwise how can you convince them of their value? But it needs to be learned, otherwise the resulting systems will be far from convincing, and the machine-readable Web will continue to exist only as a collection of chaotic mashups.
2 comments:
Trackback didn't seem to work, so I'll drop one manually: http://blog.semantic-web.at/2008/09/17/what-the-semantic-web-can-learn-from-open-hypermedia/
Was interesting to learn about OHP, best wishes from Vienna,
Jana
Dave,
Thank you for sharing this with us. I think we can indeed learn a lot from OHP and related, earlier work. Additionally, I've put a back-link to [1], our linked data community Wiki page where we discuss multimedia issues.
Cheers,
Michael
[1] http://community.linkeddata.org/MediaWiki/index.php?InterlinkingMultimedia