My Blog has moved!

Thanks for visiting, but my blog has now moved to a new home at www.davidmillard.org. If you have JavaScript enabled you should be redirected automatically; if not, please follow the link directly to my new home page.

Sunday, December 14, 2008

Crack Defense

I'm a sucker for a good Tower Defense game (with or without FMV), and this week I've been playing Tap Defense on the iPhone, a great free example of the genre.


Tap Defense is a simple (but very addictive) game, based on three maps and six tower types, all of which work pretty nicely together. What makes it so good though is the fact that you earn interest on any gold that you save (rather than emptying the pot by buying or upgrading towers). This means that unlike other Tower Defense games the optimum strategy is not to build the most powerful defense, but to build the minimum defense that will work on any given level.

This makes for a very satisfying balancing act, as you try to find just the right combination of towers to kill the enemy for the minimum cost. Those saved pennies are needed later on for the harder levels (30-43), and if you get it wrong then you'll be overrun, triggering another cycle of battery-sapping play.

Cracking :-)

Sunday, November 30, 2008

Something Open This Way Comes

At Southampton we have a history of Open Access Research, and a number of my colleagues (in particular Les Carr and Stevan Harnad) are heavily involved with the Open Access movement. At its heart Open Access Research holds the principle that publicly funded research should be available for free. It challenges the existing publishing model, where researchers sign away the rights to their work to publishers of journals and professional proceedings, in exchange for the knowledge that those publishers will act as gatekeepers for quality, and disseminate their findings to other universities and libraries around the world.

The contention of the Open Access movement is that the copyright restrictions placed on academic authors by publishers are a price that is no longer worth paying, and that Universities will get far higher levels of dissemination by simply making their publications available through web-based institutional repositories such as e-prints. Peer review is still a mechanism that works in this world, but the role of professional bodies and publishers changes dramatically.

I recently attended the AACE e-learn 2008 conference in Las Vegas, where for the first time I became aware of the momentum building behind the other side of this movement: Open Access Education. Applied to education, Open Access is about the principle that Universities and Schools should share their teaching materials for the common good.

Richard Baraniuk's opening keynote set the tone. Richard is Professor of Electrical and Computer Engineering at Rice University and he spoke at length about the need for Open Access Education, in order to deal with a very real problem (especially in the USA) where text-books have become prohibitively expensive to buy for many poorer students and smaller colleges. In 1999 Richard founded Connexions, an Open Access website where teachers and lecturers can author and share learning content, or assemble existing content into book-shaped packages for their students.

Connexions is especially interesting to me, both because of my interest in Web Literacy, and because we have a number of our own research projects looking at Teaching and Learning (TL) repositories (Faroes) and Institutional repositories (EdShare).

How do you Solve a Problem Like Copyright?

Faroes is focused on TL repositories for Language Teachers, and has produced a repository called the Language Box that allows teachers to upload and share their resources. The Language Box was a bit of a reaction to the previous crop of TL repositories - many of them based around complex Learning Objects, with large metadata schemas and sophisticated content packaging - and re-imagines TL repositories in a Web 2.0 world. Many TL repositories have failed, and so our approach was not to scatter Web 2.0 glitter over the same old problems, but to go to the core of what the repository is supposed to do.


In Faroes we learnt from the big Web 2.0 sharing sites such as YouTube and Flickr, and have re-imagined TL repositories as an online service for Hosting and Managing (rather than Archiving) digital resources. The Language Box is still in beta, but we're getting good feedback from our early users based around this approach. However Open Access is still an issue, and in our workshops many teachers are still concerned about copyright.

The Language Box

Julie Willems, from Monash University in Australia, presented a paper at e-learn that neatly summarized the problem. Julie talked about the effectiveness of multi-modal learning, and how images and video can be used alongside traditional texts to support different learning styles, but also how copyright laws that allow fair use of materials for teaching in a classroom setting cripple our ability to use those materials online - or to share them via web-based communities.

Both Connexions and the Language Box are attempting to solve the problem by creating an online collection that is, in its entirety, licensed under one of the Creative Commons licenses. The issue is that many existing resources use copyrighted materials (often quite unintentionally, through the assumption that everything online is without copyright), and perhaps more seriously, that teachers are worried that their Universities may in fact have some intellectual property stake in materials that they create in the course of their job.

The Democratization of Education?

The Internet and the World Wide Web are often compared to the printing press in terms of their impact on society. Gutenberg's mechanical printing press, invented in the 15th century, was revolutionary not because it allowed the mass production of text (woodblock printing in China had enabled this centuries earlier), but because it was so cheap to assemble new texts for printing. This simple fact meant that not only did printed text become cheaper, but that more texts could be printed. In a way it was the start of The Long Tail.

Gutenberg and the Mechanical Printing Press

Through the printing press the age of mass publishing was born, a method that was later adopted by radio and TV and reinterpreted as mass broadcasting. The consequence of the printing press was much more than the availability of cheap texts; it was the democratization of reading, and the establishment of institutions (newspapers, publishers, universities, scientific societies) whose job it was to write, edit and produce materials that spread knowledge (both noble and scandalous) throughout the population.

But now the age of mass publishing is passing.

The Internet and the Web are so recent, so close to us, that the immediate effects are the ones that dominate our thinking: the creation of amazing collaborative works such as Wikipedia, the phenomenon of Blogging, and the emergence of online communities and social networks that have reduced distances, increased awareness, and made the world a smaller place.

But these effects should be understood as analogous to newspapers and books, in that they are historically important not because of what they are, but because of what they change. The consequences of the Internet and the Web will be much more than these tools and early applications; they will be the democratization of writing, and the impact that this will have on the institutions and professions that are currently shaped by mass publishing.

And this is why Open Access Education is so important - because if Universities ignore the reality of what is happening to information and knowledge on the Web then they risk being sidelined in the short term, and potentially having their business models undermined in the longer term. The NSF in the USA and the JISC in the UK seem to have realised this, and far more time is being spent looking at Open Access and investing in the tools and mechanisms to make it happen. The challenge is to make the institutions and academics themselves realise that they may need to change their practices in order to ride the wave, rather than drowning beneath it.

It is possible to be a curator of knowledge and learning without being its gaoler.

Wednesday, November 5, 2008

Congratulations America

Americans like to style their president as the Leader of the Free World, and in many ways this is not a bad description. America is the most powerful of the democratic nations that respect freedom of religious, political and intellectual expression. The free world is an informal billion-person family of nations, and as a fully paid-up citizen in one of the oldest countries in that family, I believe that we have a responsibility to lead by example, protect those values, and ultimately expand our freedoms to others who wish to experience them.


But the last eight years have been hard. At an important time when Russia has been reinventing itself, and China has ascended, we have seen America retreat into isolationism and a mindset of distrust, leaving the Free World rudderless and adrift. Like a lot of people in Europe I initially had a bit of a chuckle at the antics of George W. Bush, but his presidency has been so damaging that I have long stopped smiling.

This morning the American people elected Barack Obama to replace Bush in January. It's a momentous day for Americans for many reasons, not least of which is that he will be their first black president. But it's also an important day for those of us outside of America who share those values with Americans.

Barack Obama is a multilateralist, who believes in the family of free nations. He represents the next generation of politicians who understand America's place in the world, and understand the true nature of her importance. Of course he is just one man, and there are a lot of challenges inside and outside of America that he will have to tackle. I'm sure that not everything will go well. But I share the general sense that a corner has been turned, and that for the immediate future at least, the Free World once more has its Leader.

Thursday, October 16, 2008

Does m-learning exist?

I've been involved in mobile and pervasive learning research for around six years, but this year was the first time that I've attended two of the leading events in the field: IADIS m-learning (held back in April), and m-learn (which I attended last week).


I enjoyed both events - not least because of the interesting locations (especially the Cold War museum in Shropshire with its ominous collection of the best nuclear weapons 20th century money could buy) - however this year there has been a dramatic improvement in mobile technology (driven in no small part by the iPhone and Android) that has left me wondering if there is such a thing as m-learning anymore.

In fact some of the most enjoyable presentations at m-learn reinforced this thought. For example, Thomas Cochrane from UniTec, New Zealand, presented a fabulous experiment with multimedia reflective journals: students on a design course were given video phones and told to use them to add content to a variety of Web 2.0 style sites, and to bring them together in a Vox-powered online blog.

The interesting thing is that most of the innovation in this case is in the use of multimedia and the online sharing; the mobile phones are almost incidental - a convenient way of creating content. Their mobility is useful because you can film outside of the classroom and upload from anywhere (for example, design students road-testing their products in the wild), but it's not enabling a new kind of activity (such as merging virtual with physical spaces, location-based services, or synchronously connecting students outside of the classroom). And even if it was, isn't that just what we do with desktop or laptop systems, only on a mobile device?

That may seem like an odd argument - after all, if it's on a mobile device doesn't that make it m-learning?

However, I think that as devices increase in power the fact that learning involves a mobile device becomes less and less interesting. If I upload a photo to Flickr from my laptop is that m-learning? What about if I use my iPhone? What about if all I was doing was accessing Blackboard through Mobile Safari?

It seems to me that the term m-learning will become less and less relevant - even if (perhaps especially if) we start to see it used more and more.

Because if all our e-learning is m-learning, why do we need the term at all?

Monday, October 6, 2008

Death Magnetic: How can I be lost, if I've got nowhere else to go?

Last week I got myself a copy of the latest Metallica album (Death Magnetic). I've been following Metallica since the Autumn of 1991. That was a good year for rock: Nirvana's Nevermind, G'n'R's Use Your Illusions (I and II), and Metallica's Black Album stabbed a flag in the eye of 80's rock, and defined the genre for my generation (and launched Grunge and Thrash into the mainstream as well).

The Black Album is astonishingly good, but Metallica never really recovered from their success and it has remained a high point in their career. Of their albums since then, only S&M really merits a re-listen, and that's basically a greatest hits album (with generous doses of orchestral accompaniment).

Death Magnetic claims to recapture some of the magic of those glory days. The band has supposedly put their arguments behind them, and this is supposed to be a new start. This is supposed to be vintage Metallica.

So is it?

Well, let's get something straight: Death Magnetic is no Black Album. When I first heard it I was a little disappointed; sure, there's some good stuff on it, but nothing that matches the riff on Enter Sandman, nothing that gets under your skin like Nothing Else Matters, and nothing that has the anthemic qualities of Unforgiven. But as I have listened to it more and more, I've realised that actually this is still a really good album.

It's no Black Album - but then why should it be? This is a different band, a different time, and this is a different piece of work.

The band are obviously enjoying themselves. If you were being negative you might say that some of the songs go off the rails slightly and turn into a bit of a jam, but then you could just as easily say they are more complex - the album more textured - and that underneath the layers the quality might, just might, have returned.

Nirvana disbanded after Kurt Cobain's suicide in 1994, G'n'R were riddled with internal arguments and finally disintegrated in 1997, but Metallica carried on. Their work may not have been as fine as it once was, but they kept touring and making music, and I think that it has finally paid off.

Death Magnetic is its own album - and it's a return to a new form.



Metallica are possibly the finest metal band there has ever been. And now, looking back at Load and Reload, I feel slightly St. Guilty :-/ After all, this is the band that produced the Black Album: so how could they be lost, when they had nowhere else to go?

Tuesday, September 16, 2008

How is the Semantic Web like Open Hypermedia?

Will history be kind to Open Hypermedia? The other day I gave a presentation on the Evolution of the Web to our new Masters students (part of their pre-sessional programme at Southampton) and I was forced to face an awkward thought. Open Hypermedia was an interesting side road of the Information Superhighway, but it wasn't a proper lane, or even a sliproad. It's something that you look wonderingly at as you cruise past on your full throttle Web 2.0 browser. A blip. For me it's certainly a fun memory, but faced with explaining how we got from there to here - is it also an irrelevance?


I became involved in Hypertext research in the last year of my undergraduate degree when I undertook a project on the first implementation of the Open Hypermedia Protocol (OHP). Open Hypermedia was a pre-web idea that arose in the early 1990s, and at its heart was the idea that you should separate content and structure. Many hypermedia systems at the time embedded links in content (as in fact the Web does with the href attribute); Open Hypermedia Systems broke with this, and managed a separate linkbase, combining it with the content at runtime.
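
To make that idea concrete, here's a toy sketch (my own illustration in Python, not the API of any particular OH system): the document text contains no links at all, and a separately managed linkbase is merged in only when the page is rendered.

```python
# A toy illustration of the core Open Hypermedia idea: content and links
# live apart, and are only combined when the document is rendered.

content = "Open Hypermedia keeps its links out of the document text."

# The linkbase is managed separately from the content, so links can be
# added, removed or swapped without touching the document itself.
linkbase = {
    "Open Hypermedia": "http://example.org/oh",
    "links": "http://example.org/linking",
}

def render(text, links):
    """Inject anchors at runtime rather than embedding them in the source."""
    for phrase, url in links.items():
        text = text.replace(phrase, f'<a href="{url}">{phrase}</a>')
    return text

print(render(content, linkbase))
```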

The result was hypermedia that could work on multiple types of media (video and audio, document files, etc.), that could be managed (so no dead ends), versioned, and which could present different linksets according to the situation - so early personalization. There were many OH systems (Microcosm, Chimera, HOSS, DHM, and Callimachus to name a few) and some were pretty sophisticated. But they were all swept away by the relentless rise of the Web, a process of technic cleansing that led Mark Bernstein to lament the death of strange hypertexts by 2001.

The Open Hypermedia Protocol was a good idea in the spirit of Web standards, but it was also a plaster over a mortal wound. Proposed in 1996, the idea was to get a number of these OH systems to follow the Web model and interoperate through simple open standards. OHP and the range of fantastic systems still standing by the late 90s inspired me to write my PhD on different models of hypermedia, but by the time of my doctoral graduation in 2001 it was clear that the game was up and Open Hypermedia had been swept away like everything else.

I've recently been involved in a project to create an online Multimedia Annotation system (Synote). Because Synote deals with Multimedia the system has to hold its links separately from its content, and I thought it was the perfect opportunity to return to some of those lost OH principles in the age of Web 2.0.

The Synote design team began a process to create a hypertext model to sit behind a YouTube style front end. At first OH seemed a perfect fit, and we designed a model that supported a variety of traditional OH link structures (typed, n-ary, bi-directional, multi-model links) that could be used for annotation, subtitling, bookmarking, and a host of other activities.

However, as the weeks wore on it became obvious that there were some serious problems with an OH design: the issue is that OH holds the link model as sacred. The model is pushed up into the User Interface, and down into the Database, but in reality it is not a comfortable fit in either.

At the Database level it introduces run-time complexity. The OH model is very flexible, but contains many internal references (as it is stuffed full of first class elements). As a result it takes many database queries to resolve links, and an intolerable number of queries to retrieve even small networks. In comparison a more specific approach could use a single table for each type of structure (so for example, annotations could be in one database table, and bookmarks in another). Resolving a structure then requires retrieving a single table row, which is much faster.
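
To make the difference concrete, here's a rough sketch using Python and SQLite (the table and column names are invented for illustration - this is not the actual Synote schema): resolving a single annotation in the generic model means chaining several queries through its internal references, while the specific model gets the whole structure back in one row.

```python
# A rough sketch of why a generic OH link store needs several queries
# where a task-specific table needs just one. Invented schema, for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Generic OH store: links, endpoints and anchors are all first-class rows.
    CREATE TABLE link     (id INTEGER PRIMARY KEY, type TEXT);
    CREATE TABLE endpoint (link_id INTEGER, anchor_id INTEGER, role TEXT);
    CREATE TABLE anchor   (id INTEGER PRIMARY KEY, resource TEXT, fragment TEXT);

    -- Task-specific store: one row holds a whole annotation.
    CREATE TABLE annotation (id INTEGER PRIMARY KEY,
                             resource TEXT, fragment TEXT, body TEXT);
""")

# The same annotation expressed both ways.
db.execute("INSERT INTO link VALUES (1, 'annotation')")
db.execute("INSERT INTO anchor VALUES (10, 'lecture1.mp4', '00:02:30')")
db.execute("INSERT INTO anchor VALUES (11, 'note.html', NULL)")
db.execute("INSERT INTO endpoint VALUES (1, 10, 'source')")
db.execute("INSERT INTO endpoint VALUES (1, 11, 'destination')")
db.execute("INSERT INTO annotation VALUES (1, 'lecture1.mp4', '00:02:30', 'note.html')")

# Generic model: resolve the link, then each endpoint, then each anchor.
(link_type,) = db.execute("SELECT type FROM link WHERE id = 1").fetchone()
for anchor_id, role in db.execute("SELECT anchor_id, role FROM endpoint WHERE link_id = 1").fetchall():
    anchor = db.execute("SELECT resource, fragment FROM anchor WHERE id = ?", (anchor_id,)).fetchone()
    print(link_type, role, anchor)

# Specific model: a single lookup returns the whole structure.
print(db.execute("SELECT * FROM annotation WHERE id = 1").fetchone())
```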

You might argue that the run-time overhead is worth it, because you can build a generalized back end, a hypertext engine that could deal with almost any type of structure. However this only really saves you effort when you have a generalized front end (user interface) as well.

This is a problem since at the User Interface level the generalized approach is not capable of delivering a quality experience. Users want a different type of interface when creating an annotation than when creating a bookmark, and they want the different types of structures displayed differently as well. In short the user interface layer needs to know the activity in order to render the structure (or the authoring process) appropriately. Using the raw OH structures in the UI is awkward and clumsy.

Since each new type of linking activity requires specific UI development, there is little advantage in the generalized back end. Modern development tools mean that it is as easy to build a new specific back end as to map the new activity to an old generalized structure.

So perhaps Open Hypermedia failed because it fell awkwardly between these two layers. It still makes a lot of sense as a conceptual model, and as a point of interoperability, but inside the system itself, or in the user interface it seems overly complex and awkward.

In the end we implemented Synote as a series of specific structures. These were easy to code into the database, quick to manipulate at run time, and looked good when revealed in the user interface.

There's a lesson here for the Semantic Web, another grand idea from the Hypertext and Web community that I have commented on before. The Semantic Web is a set of standards for representing and exchanging knowledge (as sets of RDF triples constrained by ontologies), like Open Hypermedia it is therefore about models, openness and interoperability. But also like Open Hypermedia many Semantic Web developers have fallen into the trap of forcing their model down into the system implementation and up into the UI.

So in the end perhaps Open Hypermedia does offer us a valuable lesson - not about the structures of hypertext - but about the need to abstract implementation and user experience away from the conceptual models that drive them.

This is a hard lesson - because you want users and developers to see your models, otherwise how can you convince them of their value? But it needs to be learned, otherwise the resulting systems will be far from convincing, and the machine-readable Web will continue to exist only as a collection of chaotic mashups.

Wednesday, September 10, 2008

WikiSym 2008

I'm currently attending WikiSym 2008 in Porto, Portugal. WikiSym is a small conference with a mix of participants: wiki enthusiasts, developers, researchers, consultants, cynics and evangelists. It also has a very open freeform structure, helped along by the fantastic Portuguese sunshine (a weather event not seen in the UK since April 2007), and the welcoming city of Porto, which looks like an intricate sculpture that could fall down at any moment.


There are a number of trends here, many of them thought-provoking, including application wikis, spatial wikis, and social wikis. In some ways these remind me of the hypertext community in pre-web days, where people had a lot of fun with innovative systems (strange hypertexts as Mark Bernstein calls them) - the WikiSym community is repeating a lot of that work, but this time with real users, and without the impending sense of doom that was hanging over all other hypertext systems after 1994.

Two systems that look particularly promising are XWiki (a full blown wiki system for managing structured content) and ShyWiki (a research prototype for doing spatial hypertext in a wiki). In fact I feel quite inspired myself to write a strange wiki - keep an eye on this space.

One trend that worries me a bit is the glut of papers on Wikipedia and how it evolves and is used. In many ways this is all good stuff, and some of the results are interesting, but part of me is getting a bit bored with graphs of contributions and analyses of version histories that pretty much tell you what you would expect. And when somebody else does stand up with an analysis of a different system you can't help but secretly scoff at their quaint scope (only ten thousand users - how sad!).

It's been fun, quite a lot of fun actually, but I think it's time to come up for air. I've only been here 3 days and already everything looks like a nail, and I want to hit it with my favorite wiki.

Monday, August 18, 2008

Starry Starry Second Life

Saw a great video pop up on Digg a little while ago, less Mac-based than the last one, and this time proving that there is a point to Second Life (even if it's not quite the samurai-sword-filled metaverse that we might have hoped for).

This video by Robbie Dingo is based on The Starry Night by Van Gogh, and underlines that not everything in Second Life is real.



Or something :-/

Sunday, July 27, 2008

Duck Taped by Apple in California

Why is the brilliance of the new iPhone 3G so intangible? When I try and explain why I've replaced one 3G, GPS enabled, touch screen smart phone (a TyTN II) with another I get blank stares. When I show people how it works, they nod and smile as if to say that this is just what they expected - all perfectly normal.

So how come I feel like I've made such a technical leap forward - how come this simple device blows me away?

Last week I could browse the internet, I could watch videos, play games, listen to podcasts or my favorite albums, I could read my email and even make phone calls if I had to. Last week I could do all the things that I can do this week, but this week I have a smile on my face as I do them. Why?
That was me that was :-/

It's taken me a week of thinking about it to come up with an answer. I think it's exactly because I had a similar device before that the iPhone seems so incredible. In 2003 I opted out of consumer mobile devices. I replaced my Nokia handset and aging Palm V with a PocketPC PDA (the HTC Magician) and have been using a variant of PocketPC ever since. I was always impressed by the technology crammed into them, but friends of mine weren't. They'd look at the clunky phone interface and compare it to their latest handsets, raise a disappointed eyebrow at the media player and point out their iPod, or shrug in apathy at the simple games that were available before picking up their PSP.

In short the PocketPC was a jack of all trades, but master of none. There was no way it could compare with the single function devices in the consumer electronics market. It was a generic computer doing its best.

And this is why the iPhone blows me away. Because it's the first all-in-one device that genuinely competes with other consumer electronics. It's not a generic computer at all, it's a single box with a phone interface as good as any handset, an mp3 player as good as an iPod, a games engine as good as a PSP (well - as good as a Nintendo DS anyway :-), an organiser as good as a PDA, and its pièce de résistance - a web browser that's actually as good as a desktop client. Imagine five or six consumer gadgets all duck-taped together and you're about there.

That's why when you show it to your friends they will shrug; whatever application you show them will be of the same standard that they are used to on their dedicated devices. And it's also why some techies don't get it, because on paper the iPhone is no better than the devices that have come before - and in their eyes it might even be worse (after all it only runs on one hardware platform, doesn't allow you to multitask properly, and it's all a bit too tightly controlled by Apple).

But after five cold and lonely years I have returned to the consumer electronics fold, and on reflection it's nice to be back.

The iPhone is a jack of some trades, but master of them all.

Wednesday, July 16, 2008

Open Ports for Open Minds

I'm at the JISC Innovation Forum this week. The Forum is a chance for people working for and funded by JISC to get together and discuss the big challenges in HE, FE and e-learning. The event is arranged around a number of discussion sessions, panels and forums (so it's unlike a traditional conference, as it's not about individual work, but the lessons that we can learn as a community).

Yesterday we kicked off with a session about potential future directions for JISC. My suggestion was that JISC should concentrate on helping Universities manage the new wave of technology - not by promoting its adoption, but by encouraging them to simply move aside, and allow their students and staff some flexibility and freedom.

This has been exemplified by the WiFi network available to us at the Keele University campus, where the event is based. Each user needs an individual login and password, and is required to download a mysterious Java app that somehow negotiates access for you. Once connected you are restricted to http and https requests (no imap or vpn for example), and the Java app frequently falls over and gets disconnected, requiring you to kill it (quit doesn't work) and then re-run it (although sometimes this results in you being denied access for a few moments, presumably while you wait for your MAC address to be cleared from some cache somewhere).


This is utter madness, a large neon sign that says "we have been required to offer you this service, but don't trust you and would rather you didn't use it". By making the experience so difficult they put off a great deal of casual use, by locking the firewall down so tightly they force you to use awkward web alternatives to the tools that you may be used to, and by requiring this bizarre Java stage they ensure that only laptops (no phones or PDAs) can access the network.

I've noticed a trend that students are beginning to abandon University e-learning infrastructure because it is too restrictive, and are moving to public offsite facilities (such as Google Groups and Mail), but this is surely a good way of making students opt out of the physical infrastructure too! If I was a student at Keele I would simply buy myself a mobile broadband connection and never use the local system.

JISC's greatest challenge is to get this sort of restrictive practice reversed, so that Universities can start to offer proper IT services to their students in such a way that experimentation and innovation can occur.

I can think of plenty of other examples. We have a colleague working at the University of Portsmouth who is blocked from accessing YouTube on his Uni network, making it impossible for him to access valuable teaching resources (he teaches languages and YouTube is a rich resource of material). At our own University (of Southampton) the central email systems have recently been overhauled, and the ability to set up email forwards to external accounts has been removed. Staff use the University email accounts as points of contact with students (it's impossible to keep track of so many other accounts), and so this now forces students to maintain and check two accounts, rather than the one that they may have used for years.

Networks need to be managed, and damaging or illegal activity needs to be controlled or stopped - but the default policy should be to support openness.

Of course it's as much about open minds as open ports. We have to start respecting the autonomy of our staff and learners. By all means monitor the network and close down services or block ports that develop into a problem (as Napster did a few years ago), but give people the freedom to integrate their existing digital environment and personal gear as they see fit.

The alternative is to see them opt out altogether.

Thursday, July 10, 2008

ICALT 2008 - A Cottage Conference?

Last week I was at IEEE ICALT 2008, held in Santander, Spain. Last year's conference was a bit of a wakeup call for me, partly because of Mark Eisenstadt's wonderful keynote, and partly because of the realisation of just how quickly Web trends were making much of the presented work obsolete.

This year the community seems to have noticed the change of pace, and although no wonderful answers were presented, at least we heard some of the right questions being asked. The location was pretty fine as well - Santander is a well kept Spanish secret - although, as you can tell from this picture of us outside our hotel, we found the weather tough going (and yes, we really did send that many people :-)


(in fact, there are a few people missing from that photo - there were 14 of us from LSL by the 2nd day)

There were presentations of some neat e-learning tools too, including a tool from the people at the MiGen project for allowing students to construct simple algebraic problems using a graphical editor. It occurs to me that this is actually about teaching abstractions, and might be useful for first year CS and IT students, as well as schoolkids struggling with algebra :-)

My happier assessment of the conference may also have something to do with the fact that one of my PhD students, Asma Ounass, won the best paper award - a brilliant achievement given the size of the conference and the number of papers considered. Asma's paper was on using Semantic Web technology to create student groups, and is available on our School e-prints server.

Probably the most interesting session that I attended was a panel that tried to address the question of 'why technology innovations are still a cottage industry in education' (with Madhumita Bhattacharya, Dragan Gasevic, Jon Dron, Tzu-Chien Liu and Vivekananandan Suresh Kumar).

Some of the panelists took the opportunity to explain why their pet technology or approach was going to save e-learning, however I found Jon Dron's position statement the most compelling. Jon questioned the assumption behind the topic, and asked if having a cottage industry was so bad, and whether we really wanted to industrialise e-learning. The point behind his question is that Higher Education is itself a bit of a cottage industry; a craft with personalised products and highly skilled craftsmen. The danger is that if you wish for industrialised e-learning systems, you may end up with industrialised learning and teaching.

This set me wondering what cottage industries actually look like in a post-industrial society, and whether or not the tools we need for e-learning are similar to the technologies that they use. Thinking about it, it seems that they apply mechanisation in the small - administrative tools like MS Office and communication tools like email, web sites, social networks and eBay.

That assessment might be just a result of my own prejudices about e-learning technology, but what the analogy does show is that there may be an assumption driving our e-learning systems: VLEs assume that the industrialisation of learning is a good thing (consistent quality and economies of scale), while PLEs assume that the industrialisation of learning is a bad thing (the loss of ownership of production, and depersonalisation) and rally against it.

What is not yet clear is whether the cloud approach could scale in the same way as a traditional VLE, enabling institutions to support PLEs on a large scale, or whether the diverse set of people, preferences and tools would create unmanageable complexity. I know that this is a concern for our own systems staff, who have to maintain a large number of systems to ensure quality of service, and I've stated before that I feel that institutional involvement is essential, so there's no avoiding the problem by relying on 3rd party systems.

No answers - but at least we've started asking the questions :-)

Tuesday, July 8, 2008

Free, as in Web Designs

This month I discovered the wonders of free web design. I'm not sure why it never occurred to me before, but it turns out that there is a blossoming community of web designers out there who are in it for the glory, and who make their designs available on sites such as OSWD.org. You can browse the libraries and download a zip package of html templates and css files that you are then free to modify for your own use.

I took the opportunity to revise my own tired homepage, currently celebrating its 12th year of vanity and anonymity, using a design from Node Three Thirty Design (not much on their own page, but there are links to DeviantART and Zeroweb as well).

My website started back in 1996 with a piece of coursework for my degree (on a module which I now run - who says life isn't a circle :-/ )

The text is probably too small to read, but it's an on-line technical report on digital video. The last bit reads:

"New advances both in compression and storage have meant that TV quality pictures and even High Definition TV pictures could soon be possible and in lengths that would enable people to watch entire movies on their monitors. These benefits are so great that even the average media consumer could soon see digital television sets in their homes.

Add to this the ability to transmit video across a network (or the internet) and video conferencing and video phones also seem to be a possibility"

Honestly, any more prophesy and I'd be growing a straggly beard, plucking juniper bushes from thin air, and running around following the gourd.

During my PhD I built a homepage that was a kind of 2D bookmark manager; there was some information about me, but mostly it functioned as a personal hypertext, and a place to put public material (such as teaching notes):

At some point it occurred to me that having some handy search forms on my home page would be a good plan, and so when I became a Research Fellow I added a side bar to search a number of common sites (this was before the days when search had been integrated into browsers):

If I'd been paying attention I might have generalised this and called the result Pageflakes or iGoogle, but I was too busy doing worthy things with Open Hypermedia, and by the time I looked up the boat wasn't even on the horizon any more.

When I became a lecturer I needed a homepage that better reflected the work that I was doing, as well as functioning as a homepage for my own browser. I therefore set about a re-design, and incorporated this blog as aggregated content:

Undeterred by my iGoogle experience I followed Ted Nelson's example and proceeded to invent other potentially profitable tools to ignore, including a news aggregation page that I kept until Google Reader came along and showed me better:

This website was altogether better structured, with properly delimited sections, a decent page layout, and a design encoded using CSS (although I did cheat and use tables). This design has worked really well for over a year now, but in the Spring I noticed something unfortunate about it - it looks like it was drawn by a six-year-old :-(

The problem is that I just don't have time to do a better job - that would require evenings spent lovingly drawing slightly curved corners and dabbling in CSS voodoo. I was getting despondent - but then came across the Free Web Design People at OSWD, and within a few hours I had a shiny new website. Ok, it took a bit longer to glue the various parts together, and I didn't escape some minor CSS witchcraft, but nothing worth burning anyone over. The result is my current design:

I'm not entirely happy with the moody black and white picture (and my wife thinks it makes me look like I've had a stroke :-( but I'm proud that, despite the fact that I now have a Mac, my homepage photo doesn't involve a polo neck shirt.

So the lesson here is that, busy geeks of the world, you need toil no more! Friendly graphic designers have rescued you from your tardy prisons and 90's vi-coded HTML.

You are free (as in Web Design)!

Thursday, June 26, 2008

ACM Hypertext 2008 and Web Science Workshop

Update 27-10-08: A more formal version of this blog entry appeared in the SIGWEB Newsletter as the Hypertext 2008 trip report. You can find the full text as a pre-print in the ECS e-prints repository.

I've just got back from this year's ACM Hypertext Conference in Pittsburgh, PA. Hypertext '98 was also in Pittsburgh; that was my first academic conference, and also my first trip to the States, so it was interesting to head back there. Pittsburgh seemed smaller this time around, and the culture shock was missing - either Pittsburgh is more European than it was (certainly there are a lot more European style cars crawling about) or I've just gotten used to America (in the ten years since, I've been to a lot of other states, and some of them, like Texas and Florida, are as different from the East Coast as the East Coast is from the UK).

Hypertext is always a fun event and this year the conference was looking healthier than it has for a long time. Good local organisation and an impressive conference dinner on Mount Washington overlooking the city certainly helped, but the main reason was probably that the CFP was broadened to include Social Linking. This has brought in a whole new side of the community, resulting in a great mix of papers. As I pointed out in a short paper of my own a few years ago, Web 2.0 style interaction was at the heart of Hypertext systems before there was even a Web 1.0, and so it's strange that the conference doesn't already attract more people from the world of Blogs, Wikis, Tagging, Social Networking and so forth. Hopefully this event marks a turning point and the trend will continue in the future.

Defining Disciplines

My main role at this year's conference was as the Workshops Chair; I also stood in for Weigang Wang as Chair for the Web Science Workshop, which was very well attended (20 people in total). Web Science is a new discipline proposed by Tim Berners-Lee, Wendy Hall and others, and is concerned with the study of how the Web interacts with People and Society. I think it's useful to think in terms of Web Science, but I was aware that there are a lot of people already working on topics in this area, and saw the workshop as a chance for them to get involved, and start to take some ownership of the idea.

A variety of work was presented and we spent some time discussing the difficulty of defining a new discipline, with the observation that it is often as useful to think about what it isn't, as well as what it is.


My PhD student Charlie pointed me to the cartoon above that neatly summarises some of the difficulties. Defining disciplines certainly isn't easy; we even got sidetracked into a discussion about whether Computer Science was a subset of Information Science, or if it was the other way around! For me the real challenge for Web Science will be when people start to design undergraduate courses, because at that point it needs to stand separately from other disciplines (and Computer Science in particular). At least in the meantime we know it's a subset of Mathematics :-)

Does Hypertext Work?

Although he was beaten to the best paper award, I was most taken by David Kolb's excellent paper and presentation 'The Revenge of the Page', which examined the viability of complex hypertexts 'in the pitiless gaze of Google Analytics'. David had created a new complex hypertext work, available on the Web, with a mix of sophisticated hypertext patterns (of links) designed to affect the reader's experience (using techniques such as juxtaposition and revisiting). However Google's stats told him that few visitors lasted more than a few seconds, the majority were coming via Google Images looking for photos, and even when these were factored out visitors only stayed for a minute or so. It certainly sparked some interesting conversations about the viability of nuanced hypertexts, the unexpected arrivals that result from search tool indexing, and whether hypertext is fighting a 'quick-fix' media culture that is prevalent on the Web, and may even be spreading into normal media.

It made me reflect on the sad state of Web-based hypertexts. I wonder if the problem is two-pronged: that most readers don't have any accessible examples of readable hypertexts, and that there are no popular tools to create hypertexts (save perhaps Tinderbox).

The hypertext authoring tools that are used by millions of people are mostly Wikis and Blog editors, and these encourage only exit links created around a single article page (exit links are links that take the reader elsewhere, perhaps to supplementary material, rather than to another part of the same work).

For example, this is a pretty long blog entry, so why is it written as a single article, why not as a hypertext?

I suspect that along with the tools it is familiarity that breeds this kind of linear article. After all I spent my childhood writing linear stories, and my adulthood writing linear papers. There is a growing body of classic hypertext fiction, but most of it is challenging, and has never been seen by most readers. Maybe we need more easily accessible hypertext works (along the lines of 253) so that readers get used to seeing hypertexts, and understand what to do with them. In the end that is the only way that they are ever going to actually write them.

Trends

Despite David's gloomy experience the overall feeling of the conference was positive; however, there was no Grand Vision underpinning the presentations - much more analysis of what's already happening than looking forward to the next big thing. Perhaps the community has already been stung (Open Hypermedia for example), or maybe there is just so much activity in the Web 2.0 space already, that it's as much as we can do to monitor and evaluate things - without adding to the madness ourselves.

One topic that was very noticeable in its absence was the Semantic Web. In previous conferences much has been made of it, both as a long-term Web 3.0 candidate, and in a number of practical applications. So why the low profile? Could it be that the Semantic Web has quietly arrived already, or is it that the world has moved onwards, and the Semantic Web is no longer a convincing vision?

My feeling is that the Semantic Web has already arrived, but with a whimper rather than a bang. Its concepts underpin a lot of the work that is happening in the Web 2.0 world (Semantic Wikis and Folksonomy research for example), and the standards are being used in anger for many knowledge-base systems and mash-ups, but it's not common enough for its use to be widely analysed. Perhaps it never will be.

I've been studying this stuff for long enough now to realise that sometimes a technology succeeds, and sometimes it merely inspires. What we have now isn't very Semantic, and it's not really a Web either - but it is certainly in the spirit of the original vision (there's that word again).

So perhaps the relationship between the Semantic Web and Web 3.0 will be similar to the relationship between Hypertext Systems and Web 2.0. That would be interesting, as the genealogy (memealogy?) from Hypertext to Web 2.0 is rather tortuous, and full of painful extinction and reinvention. I suspect that the Semantic Web is rather better placed to be transformed from research darling into popular technology (due to a well defined stack of standards), and that Web Science may actually help the process.

The Hypertext conference is a good place to find out :-)

Thursday, June 12, 2008

Bon Jovi at Southampton

Last night I went to see Bon Jovi play at St. Mary's Stadium in Southampton - it was the first UK date of their Lost Highway Tour.

My wife Jo and I don't really share the same tastes in music, she likes unbearable cat stranglers like Westlife and Enrique Iglesias, and I like bearable but LOUD cat stranglers like Muse, Metallica and Nightwish. Bon Jovi manage to sit in the very small overlap in our tastes, jostling for space on a tiny soft rock ledge with bands like the Kaiser Chiefs and the Hoosiers.

I bought us seated tickets for the event, but this turned out to be a mistake as the organisers seemed to get a bit confused, and when we arrived it turned out that my tickets were for seats that were actually behind the stage. Exactly what sort of spectacle they were hoping to provide when all we could see was the back of a speaker stack was unclear. Luckily the marshals didn't seem too bothered by us moving around, so when the band started playing we moved to stand in a walkway where we could see as well as hear them.

(more pics on the St. Mary's site)

A local band kicked things off (Hours Til Autumn), and they actually did a pretty good job, especially since this was their first gig with more than 100 people (29,000 in the stadium). The big disappointment was with the major support act - The Feeling are supporting Bon Jovi for half of their UK gigs, and we were hoping for a surprise support act in Southampton (Nickelback did the honours for them back in 2006, and Razorlight are doing their Ireland dates, so we had high hopes). Sadly the surprise was that there was no support act at all, and Bon Jovi came on cold at 8pm.

It was a good set, with just the right mix of classic stuff from their big 80's and 90's albums, and the better new songs from the Lost Highway album, but without a big support act out front they started pretty slowly, and it took a good hour for things to really get going.

The last hour was damn fine though and they managed to lift the stadium despite the drizzle. In fact I've caught myself humming a number of embarrassing soft rock classics this morning, so something must have clicked :-)

So a good night, even if the stadium should be ashamed of selling unusable seats (especially when the event wasn't sold out!), and even if Bon Jovi had to be their own warm-up artists. If you're thinking of getting tickets for any of their other gigs - go for the ones where The Feeling are playing. Not only will you get an extra hour of crooning, but you're more likely to have got into the swing of things by the time Bon Jovi start their set.

Monday, May 12, 2008

OSX again and again

I came across this fabulous music video created by Dennis Liu using only Mac OSX - even if I hadn't switched I'd think this was brilliant:



I've been using the Mac now for almost two months and the shine hasn't rubbed off yet. Some of my earlier quibbles have disappeared as I've got used to the Mac way of doing things (most notably the additional Command key, and the missing right mouse button), and I've got to appreciate some other really nice features (like Preview, Spotlight searching and universal spell checking).

I've also come to regard Windows with a bit more tolerance as a low cost, hardware agnostic, public spirited OS. I've been trying to come up with analogies that do the difference between the two systems justice, and the one that best captures my thinking now is that Windows is a bit like social housing - it's definitely something that I think should exist and be made available to those people who need it, but if I was completely honest I'd rather have my own place ;-)

Thursday, May 8, 2008

Thanks to the unique way the BBC is funded...

Over the last few weeks the BBC have been running what at first seems to be a trailer for Panorama. In the trailer a camera pans over dimly lit streets of circuitry, diodes flicker like broken lamps, and sink estates wallow in the shadows of tall capacitors. "Your town; your street; your home..." the narrator threatens, "it's all in our database."



The viewer raises an eyebrow, expecting some shocking revelation about Orwellian government schemes to combat terrorism, or ineptitude over social service data - in short, expecting the pitch for a documentary about sinister forces working to subvert our liberties - but then the punchline comes: "new technology means that it's easy to pay your TV license, and impossible to hide if you don't. It's all in the database."

WHAT? Did I just catch that right? This isn't a documentary trailer warning us about sinister government forces, these ARE sinister government forces. The UK government, in the form of the TV licensing people, have just threatened me, told me that 'they are watching', and implied terrible consequences if I don't pay my license. Perhaps a swift trip on American Air with internment at the end of it?

Online an international audience has expressed horror and disgust, not only at the veiled threats, but also at the whole notion of paying a TV license.

Now I'm a middle-class lefty liberal, so it's a foregone conclusion that I'd love the BBC, but it also means that I have a natural dislike for the government fiddling around with my rights, and it seems to me that we have reached an awkward impasse with the TV license fee in the UK.

On the one hand I completely support publicly funded broadcasting, especially when the system produces such fine quality stuff, for so many different interest groups, and on such a varied medium. There is no doubt that the BBC delivers.

On the other hand the sheer breadth of that delivery makes a bit of a mockery of the TV license. You need a license (one per household) if you own equipment capable of receiving a TV signal. Even twenty years ago this made a lot of sense, but now you can listen to BBC radio, BBC DAB stations, read BBC content online, and even download programmes - all without owning equipment capable of receiving a TV signal. You can also spend all your time watching commercial channels, delivered through a cable, or by commercial satellites, without ever seeing any BBC content, but still be liable for the fee. Or you can watch that content (with adverts) on one of the many channels that buy programmes from the BBC (examples include UK Gold, Dave, and even BBC America which is independently funded).


And then there's the elephant in the room - the fact that TV advertising is slowly failing, because of digital piracy, digital recorders (press "skip" to jump the advert break) and the sheer proliferation of alternative media streams. In this rapidly changing area, a license fee looks like the one iron-clad method of TV funding that can weather the storm; it's basically paying for content up-front, which removes all those worries about making money with it afterwards!

All of which makes me think that fiddling with the license fee may be a good way of cutting off our own nose to spite our face.

Less of the threatening 1984 adverts though please Auntie - somebody at ITV might spot the elephant and panic :-/

Tuesday, April 29, 2008

The Art of Visual Complexity

Tom Franklin pointed me towards this really neat website called VisualComplexity run by an interaction designer called Manuel Lima, which is a catalog/index of interesting data visualisation techniques, examples and tools.

At the moment there are over 500 items in there, and you can filter by what is being visualised (so for example, Internet or Knowledge Networks) to see a subset.

Some are pretty but functional, such as this one by Olivier Zitvogel that shows del.icio.us tags:


... while others, like this one by R. Justin Stewart showing a bus network, are really things of beauty.


Check the site out, it's great for a lazy browse :-)

Wednesday, April 23, 2008

Podcasting

Earlier this month I went to Portugal for the IADIS m-learning conference, a smallish conference on e-learning and mobile devices. The conference was really quite interesting, although it had a curiously ex-pat feel to it because of the number of UK delegates.

One presentation really got me thinking: it was an account from Malcolm Andrew, a lecturer in microbiology at De Montfort University, of how he has been using podcasts to support his teaching. I've been toying with the idea of podcasts for a while but couldn't quite decide how best to use them. If you do a whole lecture, will that mean that no-one turns up on the day? If you do supplementary material will anyone watch it? And if you make it part of a required reading exercise ("watch the podcast before the lecture folks") will they bother?

Malcolm has settled on an easy compromise, one of those things that's obvious once it's been pointed out to you; he produces a 5-10 minute summary of each key idea in a course (normally one summary per lecture) using slides, simple animations and voiceovers. He doesn't ask the students to use the podcast in any particular way, and has observed that some watch it before a lecture, some afterward - some watch it on an iPod, some on the PC - some use it for revision, and a few don't watch it at all. Because a summary format is so flexible, students are able to bring the podcast into their own way of working, and because he keeps them short they are easy to watch, easier to make (they still take a bit of effort), and can be sampled as the students like, depending on their time and interest.

Do you ever get that feeling that you haven't seen the forest for the trees :-/

So, emboldened by Malcolm's example, when I returned to the UK I set about investigating how I could produce a few podcasts of my own. This involves two steps: producing the video and getting it hosted out there somehow.

After a few false starts the method I used to produce the video was:
  1. Prepare slides in PowerPoint. Create a short PowerPoint presentation with an obvious structure, and simple individual slides. There are many ways to get PowerPoint into a video (not least of which is to select 'save as video' on the Mac version); however, I found that the best way was actually to save the presentation as a set of images, and then import those images into a video editor. This means that you can't use animations (but most free video export methods don't support those anyway), but gives you a lot of control over timings and slide transitions.
  2. Prepare a script. I tried a few times without one and it was a disaster. Ums and erms don't sound too bad (in moderation) when you're lecturing, but they sound dreadful on a video. A script makes you sound a bit mechanical, but at least it means that you end up saying what you meant to say, and when you're saving the whole thing for posterity that starts to matter!
  3. Import the images into a video editor. I used iMovie, because it comes with the Mac and it's easy to use. I guess that commercial stuff would be more powerful, but for my needs iMovie was just fine.
  4. Record a section of the script for each slide and cut to size. iMovie has a voice over mode which allowed me to add a voice recording to each clip (every imported image appears as a clip). When importing I make each clip last 30 seconds, which is enough to make the recording, and then I reduce the length of the clip to match my voice over. Doing the voice over is actually quite tough; like most people I hate the sound of my own voice when played back to me (just who is that weird bloke talking - it sure as hell ain't me?) and I tend to sound affected when I'm speaking very consciously, so I had to try hard to relax when I was recording (it's still not perfect, but I got a lot better as I went on).
  5. Add appropriate transitions. I used a fancy in-your-face cube rotation transition for each section (I want it to stand out), and more subtle fade-to-white transitions between slides. I also used cross-fades to allow multiple slides to gradually build up a diagram, which made up slightly for the lack of animation.
  6. Save as an mp4 file (or other format, but mp4 is pretty flexible and allows you to store a master version at a decent resolution). If you'd rather script the assembly than click through iMovie, there's a rough command-line sketch just after this list.
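
For what it's worth, I believe a recent build of ffmpeg can do a crude version of the assembly in one go, stitching the exported slide images and a pre-recorded voiceover into an mp4. This is only a sketch, not the iMovie route I actually used - the file names, the fixed 30 seconds per slide, and the codec options are my assumptions:

  # one slide every 30 seconds, with the voiceover as the audio track;
  # stop when the shorter of the two streams runs out
  # (slide01.png, slide02.png... and voiceover.m4a are placeholder names)
  ffmpeg -framerate 1/30 -i slide%02d.png -i voiceover.m4a \
         -c:v libx264 -r 30 -pix_fmt yuv420p -shortest podcast.mp4
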
Once you have the video the next step is to get it online. The easy solution is to use YouTube or another video hosting site, but I'm a stubborn old GenXer with my own website, and so I need to convert and host it myself.

I used three pieces of software for this:
  • ffmpegX - this is a MacOS front end for the ffmpeg unix tools. It converts the mp4 file into a flv file (flash movie), which is the most popular way of putting video on the web at the moment (thanks in no small part to YouTube). There's a rough sketch of the underlying command just after this list.
  • flv-duration - this is a small tool which adds duration meta-data to the FLV file, which allows FLV players to add a progress bar control that lets viewers move to whatever section of the video they like.
  • JW FLV Media Player - this is a flash video player that can load FLV files and play them within a webpage.

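In case it's useful, I believe the conversion ffmpegX was doing for me boils down to a single ffmpeg call along these lines (the file names are placeholders; the -ar option resamples the audio to a rate the FLV container accepts):

  # convert the mp4 master into a web-friendly flv,
  # forcing 44.1kHz audio which FLV is happy with
  ffmpeg -i podcast.mp4 -ar 44100 podcast.flv
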
At the end of the whole shebang you end up with a hard-earned podcast you can embed like so:
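
What follows is a rough sketch rather than my exact markup - the file names and dimensions are made up (including the player's .swf name), and the JW player's own documentation lists the full set of flashvars:

  <!-- minimal JW FLV Media Player embed (paths and sizes are placeholders) -->
  <embed src="player.swf"
         type="application/x-shockwave-flash"
         width="480" height="300"
         allowfullscreen="true"
         flashvars="file=podcast.flv" />

The player's .swf (and the flv itself) just sit on the web server alongside the page.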

To be honest I'm still astonished at how hard this is (even on the Mac) - perhaps it's because I'm swimming against the flow; if I'd used YouTube to host it, things would have been a lot simpler, but a part of me doesn't want to hand over all my little video darlings to be skewered and critiqued by the slobbering hordes on YouTube :-/

We have a couple of projects within LSL that are looking at how you might host teaching resources in a friendly environment (the Faroes project), and how you might allow managed commenting and annotation of your hosted content (the Synote project). Eating some of our own dogfood has underlined for me how useful these two projects could be to teachers and lecturers.

Both of them hope to have beta versions available by the end of the year, and now that I have some podcasts of my own, I will be eager to try them out.

Sunday, March 30, 2008

In the Beginning there was the Word, and the Word was: Crash!

Arghhh!!! What is wrong with Microsoft? How has one of the biggest software companies in the world got it so wrong?

As you may know I recently escaped to a Mac to avoid the problematic and unappealing MS Vista; unfortunately I still do a lot of document editing, and so one of the first installs on my shiny new MacBook was Office 2008 for Mac. On XP I had already noticed that Outlook 2007 had some nasty interface bugs, but these pale into insignificance compared to the hideous crashfest that is Word 2008 on the Mac.

I'm working on a significant document at the moment (it's a bid for EU funding so it's relatively big - 70 pages or so - and complex - tables, figures, references, etc.) and I needed to do some editing in Compatibility Mode (that's what Word 2008 does when you load up a .doc rather than a .docx). I quickly noticed that Word was a bit unstable, crashing in flames and losing all changes about once every 20 minutes. I adjusted my workflow (repeat after me: save your work frequently) and soldiered on; however the crashes increased in frequency, and by the time I got to one crash per minute I gave up, and used my precious minute of teetering instability to save out sections of the document in smaller chunks.

This seemed to do the trick and I managed to complete the sections; however, on returning to the big document to incorporate my changes, Word upped its game to an effective lifetime of ten seconds, rendering it useless. Honestly, Team Fortress 2 would have been a more effective way to write a collaborative document.

I sought solace in some MS Office forums (Hello everyone, my name's Dave and I'm a Word user) and noticed that there was an update to Office for Mac that was supposed to address some of these problems - however the MS Autoupdater wasn't picking up on it for some reason.

I diligently downloaded the 114MB update by hand - 114MB! Christ on a Bike, what's in that thing?! - however when I tried to install it the updater said:

[screenshot of the updater's error message, refusing to install the update]

It turns out that this is a known problem with the update, and that the problem is that the MS Autoinstaller is buggy. So I downloaded and installed the new version of that - but still no joy. I had a buggy version of Word, and a patch that refused to install. In the end I had to follow the advice on the forums and uninstall Office, reinstall Office, reboot, and install the update.

Only problem was that this didn't actually make any difference to Word's seizures every time it sees some text. Nowhere in those 114MB had an MS developer managed to fix the problem I was having.

In the end I had to take the extraordinary measure of using NeoOffice (a Mac open source office package based on OpenOffice) to read and edit the proprietary document format that the official software could not read without falling on its face.

Now that is an extraordinary state of affairs :-( And what's worse, losing several crucial hours with a deadline looming has made me so angry at the stinking thing that I've wasted another 30 minutes writing this blog post.

Bill - your company needs you back at the helm, and if you can't fix it, then at least you could have the decency to go down with the ship!

Thursday, March 20, 2008

The Switch is Flipped

In my last post I described why I have decided to switch to a Mac. Well, a shiny new MacBook Pro rolled up earlier this week, I've had a few days to acclimatise, and it's probably a good time to report on what life is like this side of the switch.

Before the machine arrived I noticed an interesting change in my attitude to Windows: I don't know if it was the feeling of freedom, or some sort of psychological post-decision justification, but I finally lost all patience with the old girl. Eccentricities that I had put up with for years...

("your update is complete, your machine will restart in 5 minutes without your permission and lose your current working space and very likely the work that you had spent two hours creating before you foolishly turned your back to get a coffee")

...suddenly became unbearable insults, and bugs that I had learned to live with...

 ("your IMAP folder is at 90%, so I have decided to create a modal dialogue box that snatches focus away from you - oh, and because this is Outlook 2008 the OK button will no longer make this dialog disappear, however please click it in futile horror to create even more indestructible dialog boxes - if the despair doesn't finish you off, please laugh at the irony of it all")

...became hot pins under my fingernails. I think it's fair to say that when the time came I was all too happy to leave Windows before I jumped through it.

Nice Surprises:

The MacBook itself was a really nice surprise. I bought a 17" machine, and was a bit worried that it might be too large, but actually it's only slightly heavier (an extra 0.2kg), slightly wider (about 1"), slightly deeper (about 0.5") and a bit thinner (0.5" thinner) than my old laptop. It's also a beautiful machine, with an overall simplicity which is really appealing, and little touches (like the magnetic power cord and pulsing sleep light) that make it feel cared for and considered.

NOW I can get through the day!
The software learning curve has been far easier than I expected; things are different - but natural and consistent. The things that are most different are also the things that are much better. I have a WiFi printer, and I've set it up on a number of Windows machines. This involves Googling through HP's cryptic website, downloading and installing an HP suite of unwanted applications (several hundred MBs) to get the one driver you do want, and finally struggling to get the setup wizard to see the printer on the network. I started this process on the Mac and then caught myself; instead I opened the Printer config bit of System Preferences and there was my printer already listed (the Mac had already found it on the network - I didn't even have to initiate the search), I selected it, clicked "Add Printer" and I was done. The whole process took maybe 20 seconds.

I've also really taken to some of the Mac's visual tricks - such as Exposé, and the virtual desktop system (called Spaces). I've tried virtual desktops on Linux and Windows boxes before, but here it's integrated so well, and the animated transitions are so good, that it just feels like a natural part of the system.

I have had a couple of crashes; an open source FTP client called Cyberduck has died on me a few times (otherwise it's really good), and the Microsoft Office installer keeled over the first time I ran it. The experience is very different than on Windows. On Windows when a process falls over it goes in an explosion of chaos, freezing great tracts of screen estate and pulling down related processes and often the GUI shell itself, all of which is topped off by a loud boing and another of those damn dialog boxes. On the Mac the application kinda twitches, and then it disappears. In fact the clean up is so elegant that I barely noticed the few times that Cyberduck crashed. One moment it was there, the next it was gone.

Oddities:

There are some negatives, but they don't seem so bad. The built in Web Browser, Safari, isn't so great. It renders pages OK, but it looks cramped, and feels quite basic in some intangible way. I've installed Flock (a version of Firefox) instead - and that's great to use.

The keyboard is US, which means that the " and @ keys are in the wrong place (for my jaded fingers anyway) and I seem to have lost the hash key (see - I can't even type it!). Also the Enter and Cursor keys are weirdly small, the functionality of ALT and CTRL is different (and mixed up with the Mac COMMAND key), and given that the Mac uses context (right) clicks all over the place - why oh why oh why doesn't the trackpad have a second mouse button! I know I can two-finger click instead but it's just annoying that it's left out - presumably to make a pointless point about simplicity :-/

Story so far?

Overall I'm very happy with the Mac. This is the first computer since my first computer (and that's a lot of beige boxes) that I've actually got excited about. It's fun to learn to use it - it rewards you at every turn - it's achingly beautiful - and the community actually likes itself and the products that it's based around.

And the killer feature - the thing that it's worth switching for all on its own - is that the Mac wakes from sleep (well, pseudo-sleep) in seconds. I got into my office yesterday, and decided to read my email on my Mac laptop rather than my PC desktop, because the Mac wakes from sleep more quickly than Windows takes to log in.

Three seconds from off to on, and then when it arrives, by god its all so pretty :-)

Friday, February 29, 2008

Switching to Mac

Right - that's it. I've had enough. It's time to say goodbye.

I'm fed up of hourglasses, crappy modal dialog boxes, ugly UI and an OS that reboots itself without bothering to ask first. After 15 years of suffering it's time to move on. I've decided to buy a Mac.


[image: http://www.adblogarabia.com/wp-content/ImaPC..ImaMac.jpg]

Don't get me wrong, while there's a lot wrong with Windows, there's also quite a lot that's right. It's much more stable and secure than it's generally given credit for, and its popularity gives it a steamroller advantage when it comes to choice of applications and the level of third-party support. However, it's time for me to upgrade my laptop, and I have a simple choice to make - do I change to Vista, or do I change to Mac OS?

By a slim margin the Mac wins. There are a number of reasons:
  • The MacBook Pro is the best-designed heavyweight notebook. The 17" model is as light as my current 15" Samsung X60, and is even a bit thinner.
  • I'm no longer worried about leaving applications behind. I can get Office for the Mac, and virtually everything else I use is an internet application (Google Maps, Wikipedia, Digg, Blogger, etc.). Microsoft beware - internet apps may not compete with your applications directly (well, not much), but they remove a huge incentive to stay with Windows. You can now move and still stick with what you know.
  • I'm ready for a different set of quibbles. I'm not expecting the Mac to be perfect, but at least it will be irritating in a different kind of way. I'm red raw with Windows, and want to scratch somewhere else :-/
  • The unknown Vista. Windows Vista looks pretty, but probably sucks. I know that's not a scientific appraisal, but if I'm going to spend the effort learning the ins and outs of a new OS, I might as well change to Mac OS now.
  • The Boot Camp safety net. As a fallback I think that the MacBook Pro running Vista might actually be the best Windows notebook, so in the worst-case scenario I can just switch to that.
  • A wave of techie defections. This is really obvious to me because I work in a Computer Science research lab and do a circuit of computer-oriented conferences, but may be a bit invisible to those working in less geek-centric circles. The fact is that the Mac is gaining a big geek market share. The popular share is thought to be around 7%, but I would say that it is at least triple that with technical folk, maybe higher. There is a mass defection - which seems a weird trend, as technical folk typically shun out-of-the-box solutions, and software that is perceived as being noddy, but it seems like Mac OS has avoided this by being solid and flexible. One of my worries over the years was that the Mac was a simple computer for simple users - but the evidence is overwhelmingly to the contrary. If it's good enough for them, it's good enough for me.
The MacBooks have just received a bump, and although the high-end Sony Vaios are better on paper, I suspect that it might be another case of monkeys driving Ferraris.

The time is right and my first Mac is on order. I promise not to buy any black polo necks, or to get obsessive about white peripherals.

Let's see how I get on...

Wednesday, February 20, 2008

Nativism vs. Literacy

I have read a number of pieces recently attacking the notion of the Digital Native - Prensky's notion that there is a new generation of students who are in some way soaked in technology to the extent that it has changed their behaviour.

Stephen Marshall suggests that the concept of the Digital Native is reaching its autumnal years, and that there has been too much hype around this notion, and not enough evidence.

A recent study commissioned by JISC does a great job of analysing the characteristics that Prensky suggested (such as the ability to multi-task, and a preference for fun-style learning). The study concludes that while some aspects of the Digital Native idea do seem to hold true, others are less clear. In summary:

True:
  • Competent with Technology

  • High Expectations of ICT

  • Prefer Interactive Systems

  • Prefer visual info over text

  • Have a tendency to cut'n'paste

  • They believe intellectual property laws to be unfair (but do understand them)
Open:
  • Prefer Digital communication to Physical communication

  • Are natural multi-taskers

  • Want entertainment and fun in their learning

  • They expect to find everything on the Web, and for free
False:
  • Have no patience for delay (want instant everything)

  • Prefer peers to authority figures

  • Learn by trial and error

  • Are expert searchers
The report also makes the point that some attributes seem to be universally true across all age groups and reflect general changes in society:
  • Want/expect to be connected all the time

  • Prefer small chunks rather than long texts
(this is greatly simplifying the report - it's well worth reading the summary)

This fits in with a presentation that I saw by Emma Place at the recent LLAS e-learning conference. Emma noted that while there seemed to be an increased familiarity with technology, there was not a corresponding increase in the wisdom of how to apply it. For example, students did not understand issues of evidential quality or provenance, and this resulted in a cut'n'paste culture - information foraging that encourages shallow learning and borders on plagiarism.

In my own post advocating a new approach to e-learning tools I picked out the notion of Web Literacy from the general idea of Digital Natives. I suggested that Web Literacy was more than the ability to use the Web (that's more like Plain Old Computer Literacy), but was instead a desire to create an on-line identity, a willingness to forgo some aspects of privacy, and to embrace online relationships as the equal of real ones.

I like the notion of Web Literacy, as it explains a new attitude within the new generation, without making assumptions that the literacy is universal, or completely positive. The phrase Digital Natives seems too broad a brush; discussing literacies means we can talk separately about other factors, such as Computer Literacy and Information Literacy (which Emma might suggest is in danger of decline!), and also talk of the problems of illiteracy.

A colleague of mine, Mike Wald, has made the point that Literacy is a loaded term - that it has positive connotations, and sounds like something that we should aspire to - and perhaps he's right. Maybe Web Literacy is better used to describe some future form of this current behaviour, where issues such as loss of privacy and respect for copyright have been untangled and hammered into some more respectable form - but then again, who gets to say what is respectable? Maybe these ideas should change.

Whatever terms we use I am confident that when designing the next generation of e-learning tools we should be aware that we are no longer creating applications in a virgin space, but are instead trying to build tools that fit into a person's existing digital landscape. We have to work with existing identities, interoperate with familiar toolsets, and provide an experience which matches a student's adventures on the wider Web.

And that sounds pretty positive to me.