Monthly Archives: April 2009

Since upgrading my blog from to a hosted installation, I’m only now realising how good WordPress is and how simple they’ve made the migration.

The free blogs are great for getting started – and that’s not to say they’re only for ‘getting started’; I used their solution for two years without a single problem. I upgraded to have more control with regards to embedding media and handling uploaded files, and because I’m keen to get something on my domain soon – previously it redirected to my by web forwarding.

I obtained my hosting from WebFaction and without too much of a sell, they’re pretty awesome. They offer an unlimited number of sites, databases and email accounts, SSL support, SFTP, SSH and full shell access – you can literally install any applications you want on there.

As well as that they have a good number of applications ready to go via their control panel. One of these is WordPress.

I didn’t want to start this new blog from scratch, and I didn’t want posts duplicated on the new domain while still existing on my free blog – this is a bad thing. I also didn’t want to delete the old posts and leave an ‘I have moved’ notice, because I’m aware of incoming links that would then return HTTP 404 (file not found) errors. I wanted to keep all of my posts, now hosted on my new hosting, and maintain the old links even though the web address would be different – all seamlessly, so no-one would really notice. My solution was in two parts.

Domain Mapping

One of the great things WordPress offer is Domain Mapping, which basically means your blog can be at but look like it is somewhere else – on another domain (support page here). It’s a paid upgrade and requires you to have a domain registered first, of course.

It makes use of the ‘Domains’ feature, accessible from the Dashboard, which can be set to forward all requests for your blog to the equivalent URLs on a new domain. For example, visitors to would be automatically redirected to, just as visiting would take you to

To do this (as the support guides you) you need to first update the nameservers with your domain registrar, which means the domain name will point to WordPress’ nameservers instead of your registrar’s. The WordPress nameservers are as follows:

And if your registrar needs the IP addresses:

Changing the DNS settings can take up to 72 hours to propagate, so don’t expect to see results immediately – but it’s often less.

Once this is done, you should be seeing at your domain name. Then in the Domains section of your blog Dashboard (under Settings), add your new domain. You will be asked to purchase the Domain Mapping upgrade at this point if you haven’t already. Then set that new domain as your primary URL and, from then on, all requests should be forwarded!


For my blog though, I wanted to map to a subdomain rather than to the top-level domain. To do this you don’t change nameservers; instead, you add a DNS CNAME record at your DNS provider (this is on the support page, too).

Doing this manually will look something like this: IN CNAME
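To make that concrete – with placeholder names, since these aren’t my actual domains – the record in a zone file would look something like this:

```
; Illustrative only: 'blog.example.com' and the target are placeholders.
blog.example.com.    IN    CNAME    myblog.wordpress.com.
```

You can usually verify the record has propagated by querying the subdomain with a tool like dig or nslookup.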

In my case, I implemented a DNS override with my WebFaction hosting (a feature they offer) and after 24 hours I could see, so I purchased the upgrade and added my domain name in the same way. That’s the first part – and it meant any old links (and links within posts to other posts) would still find their destination.

Switching to

As I said though, this simply forwards your free blog to your new domain. It doesn’t even require any hosting, just a domain registration. But I wanted the features and control that a full hosted installation offers.

Now that the domain mapping was in place, I removed the DNS override, meaning that the ‘blog.’ subdomain no longer pointed to WordPress. Instead, I installed the full version of WordPress 2.7.1 there.

Then I exported all my blog posts, comments, categories, authors – everything – from my old blog (this is done through the Dashboard, with a single XML file) and imported it into my new blog. I also found my old theme and updated all my links etc. – basically, I rebuilt my old blog on the new platform. This is super-easy and can be done before the DNS has updated.
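For reference, that export file is WordPress’s WXR format – an RSS-based XML document with one item per post. A heavily trimmed sketch of roughly what one exported post looks like (the element names here are from memory and purely illustrative, so don’t treat this as an exact schema):

```xml
<item>
  <title>An example post</title>
  <link>http://myblog.wordpress.com/2009/04/an-example-post/</link>
  <content:encoded><![CDATA[The post body, as HTML…]]></content:encoded>
  <wp:post_id>123</wp:post_id>
  <wp:post_date>2009-04-01 09:00:00</wp:post_date>
  <wp:status>publish</wp:status>
</item>
```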

This means that all requests are still forwarded as before, but instead land at my newly installed blog and the relevant post there!

Doing this does have some dependencies. The posts need to have been imported, not republished, because republishing would give them new unique identifiers and the forwarding wouldn’t find them as a result.

Similarly, the structure of your permalinks must be consistent – requests are forwarded without regard for what is at the destination; the URL string is appended verbatim. This means if you used something like you can’t then use
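To illustrate with made-up addresses (these domains and structures are placeholders, not my actual ones): if the old blog used date-based permalinks, the new install must use the same pattern.

```
# Old permalink on the free blog:
http://myblog.wordpress.com/2009/04/some-post-title/

# The request is forwarded verbatim, so the new install must answer at:
http://blog.example.com/2009/04/some-post-title/

# A different permalink structure here would break every old link:
http://blog.example.com/archives/123
```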

It also means the previous blog is now hidden, done and dusted.

New comments on your new domain won’t be added to your old blog, though that shouldn’t really matter as it’s inaccessible now – so the posts aren’t ever seen to be ‘out-of-date’. The stats will also cease, as no-one is visiting that blog anymore – they’re visiting the new installation – so install a stats package there!

Another handy outcome of the mapping is that RSS subscribers should still receive updates. Even though the blog feed is no longer being updated, the request for is forwarded to your new blog’s feed – and that’s how they’ll be seeing updates, hopefully including this post!

I have now upgraded and moved my blog from to

As far as I’m aware, I’ve successfully mapped my blog to my new domain. This means that any visits to my old blog will redirect you to my new domain automatically. This also means that any linking to those old posts, any bookmarks and (hopefully) RSS subscriptions should all still be OK – they too should ultimately get to their intended destinations.

Having said that, I’m not sure if all RSS readers will be happy with it, so if you do subscribe please update the feed to

See you there!

This month’s London Web Standards group met to discuss Designing for the Social Web, by Joshua Porter. Big thanks to Jeff Van Campen, the group’s organiser, for kindly giving me a copy!

I liked this book, and I don’t know if I had expected to. From the first page it’s immediately clear that the ‘design’ of the title doesn’t refer to graphic or Web design. It’s a 101 in creating successful social Web applications, covering the whole spectrum of ‘design’ and development and how to tackle the issues you’ll come across in doing so.

Quoting names as varied as Darwin, Freud, Berners-Lee and Douglas Adams, Designing for the Social Web offers a great amount of depth in its near-200 pages, more so than I had anticipated. Porter presents detailed studies in all aspects of social media ‘design’: the interface and UX design itself, of course, as well as user behaviour, online identity, social economics and particularly the application life cycle.

The book lays the foundations for constructing the ‘perfect’ social website. It defines important goals and principles, and recommends research methodologies to help identify your audience. From there, it takes you through the process of determining your users’ intentions, their goals and incentives – ultimately, for you to distinguish your core features and the site’s functions.

He looks at popular social sites – the big names, as you’d expect, reflecting upon their legacies, but others too that invite further investigation. Throughout, he refers to good supporting materials – important interviews, blogs and key names in this current social media boom.

Early on, Porter constructs a framework for development, his ‘AOF’ framework, describing a simple prioritisation scheme that he uses for reference thereafter. He expands upon its three building blocks – of audience, objects and features - and moves forward to studying each topic and their importance within the chapters that follow. It’s detailed and well deconstructed.

But the real kick of the book is in the four chapters where Porter concentrates on and dissects four very focused areas of your application’s development: designing for sign-up, for ongoing participation, for collective intelligence and for sharing. These parts make up the bulk of the book, and they’re very well researched and informed.

Without wanting to relate the whole book here, these build upon four ‘hurdles’ that Joshua outlines at the very beginning, describing how to achieve, maintain and build upon a deep level of user engagement.

Porter's User Engagement Hurdles
Ultimately, Porter concludes by offering methods of analysis in order to understand and optimise your application once published. He gives pointers to evaluate performance, measure and act upon usage statistics, describing techniques to gather meaningful metrics and how to react accordingly.

As I say, I enjoyed the book all in all. Throughout, it is contemporary in both its principles and its examples.

It’s needless for me to say how powerful, useful and, lately, almost essential social Web apps have become. With giants like Facebook, MySpace, YouTube et al., and even sites that you might not at first think of as ‘social’ (although they very much are) – Nike+, LibraryThing – it is an incredibly hard market for any new startup to break into.

I think for anyone intending to do just that, this book is both extremely relevant and important reading.

Last year Facebook released Facebook Connect, and at about the same time Google released Friend Connect – two very similar services that allow users to connect with their information and their friends on the respective native platforms from third-party enabled sites. The intention, as I’ve written about before, is to add a layer of social interaction to ‘non-social’ sites – to connect your information and activity on these third-party sites to your information, activity and contacts on the original platforms.

Then in March, Yahoo! announced their sign-on service, called Yahoo! Updates.

Now, this week, Twitter have announced their connection service, called ‘Sign in with Twitter’. It too gives you secure, authenticated access to your information and contacts, in exactly the same way the others do – except this time, it’s Twitter.

Sign in with Twitter

You might ask: if we have three, do we need a fourth? Have you ever used any of the other three?

But don’t dismiss it, or think Twitter are jumping on any kind of bandwagon. Twitter’s implementation is fundamentally different to the others – and it could cause quite a stir.

The problem with the other services (ultimately, the problem with the platforms) is that, more often than not, they are completely closed and non-portable. Although you can sign in to a third-party site and access your data, there are a lot of limitations on what you can retrieve and publish. These popular social networks have grown and amassed huge numbers of members and huge amounts of data, which they hoard and keep to themselves. I’m not talking about privacy; I’m referring to data portability.

The infrastructures are like locked-in silos of information, and each is built differently – because either they never considered that you’d want to make your data portable, or they didn’t want (or see value in) you moving your data anywhere else. The services they’ve created to ‘connect’ to your data are also proprietary methods – custom built to channel in and out of those silos. Each of those services, too, is a singularity; they won’t work with each other.

Twitter though, have come up with a solution that adheres to agreed-upon standards – specifically, by using OAuth to facilitate its connection. Technically, it’s significantly different, but in practice, you can expect it to do everything the others can do.
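‘Sign in with Twitter’ is built on OAuth 1.0a. As a rough illustration of what that standard involves under the hood, here is a stdlib-only sketch of the HMAC-SHA1 request-signing step that OAuth 1.0a specifies (the function name and structure are my own, not Twitter’s code):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_oauth1_request(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for one request.

    `params` holds both the request arguments and the oauth_* protocol
    parameters (consumer key, nonce, timestamp, token, etc.).
    """
    # 1. Percent-encode every name and value, then sort the pairs.
    encoded = sorted(
        (quote(str(k), safe=""), quote(str(v), safe="")) for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)

    # 2. Signature base string: METHOD & encoded-URL & encoded-params.
    base_string = "&".join(
        [method.upper(), quote(url, safe=""), quote(param_string, safe="")]
    )

    # 3. Signing key: consumer secret and token secret, joined by '&'.
    signing_key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")

    # 4. HMAC-SHA1 over the base string, base64-encoded.
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Any consumer, for any OAuth-enabled provider, signs requests in exactly this way – there is no provider-specific ‘special sauce’, which is the whole point.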

The community’s thoughts

Yahoo’s Eran Hammer-Lahav (a frequent contributor to OAuth) has written a good post discussing his thoughts; he says it’s ‘Open done right’ – no proprietary ‘special sauce’ clouds interoperability, as happens with Facebook Connect. I think he’s right.

He looks at what happened when Facebook Connect was introduced: it essentially offered third-party sites two key features – the ability to use existing Facebook accounts for their own needs, and access to Facebook’s social data to enhance the site. The value of Facebook Connect is to save sites the need to build their own social layer. Twitter though, is not about adding yet another layer, but about doing more with what you’ve already got.

Marshall Kirkpatrick also wrote about the announcement, his metaphor for the other ‘connection’ services best describes how they function – ‘it’s letting sites borrow the data – not setting data free’.

But then he talks about Twitter ‘as a platform’, and I think this is where things get interesting. He says:

“Twitter is a fundamentally different beast.

All social networking services these days want to be “a platform” – but it’s really true for Twitter. From desktop apps to social connection analysis programs, to services that will Twitter through your account when a baby monitoring garment feels a kick in utero – there’s countless technologies being built on top of Twitter.”

He’s right. Twitter apps do pretty much anything and everything you can think of on top of Twitter, beyond the primary use of sending and receiving tweets. I love all the OAuth and open standards adoption – but that’s the developer in me. Thinking about Twitter as a platform makes me wonder what kind of effect this will have on the users, and how it could affect the climate, even the landscape, of social media if Twitter, already being great, is given some real power.

People have long questioned Twitter’s future – its business model, how it can be monetised; those things are important – but where else can it go, and how can it expand? Does it need to ‘expand’? Its service is great; it doesn’t need to start spouting needless extras, and I don’t think it will. But widening its connectivity and its adaptability could, I think, change our perception of Twitter – its longevity and road map, the way we use it and think of ourselves using it.

My Thoughts

Regardless of Richard Madeley or Oprah Winfrey’s evangelism, Twitter is an undeniable success.

When Facebook reworked and redesigned their feed and messaging model, I almost couldn’t believe it. What was the ‘status’ update basically is Twitter now, and it’s become Facebook’s backbone. It’s Twitter’s messaging model; it even asks ‘What’s on your mind?’.

I’m probably not the only one who thought this, I’d guess any complaints about this being a bit of a blatant rip-off were bogged down by all the negativity about the interface redesign.

I think Facebook realised that Twitter has become a real rival. I think (and I guess Facebook also think) that as people become more web-savvy and literate with these sociable websites, they want to cleanse.

The great appeal of Twitter for me was that, ingeniously, they took a tiny part of Facebook (this is how I saw it two years ago, anyway) and made it their complete function – simple, short updates. Snippets of personal insight or creative wisdom; it didn’t really matter. What was important was that it ignored the fuss and noise of whatever else Facebook had flying around its own ecology (and this was before Facebook applications came around) and took a bold, single, straight route through the middle of it.

Looking back, a lot of Facebook’s early adoption could be attributed to people growing restless with the noise and fuss of MySpace at the time – Facebook then was a cleaner, more structured option.

I remember Twitter was almost ridiculed for basing its whole premise on such a minute part of Facebook’s huge machine. Now look at the turnaround.

Now people are growing out of the Web 2.0 craze. A lot went on, there was a lot of ‘buzz’, but a lot of progress was made in connecting things. People now are far more connected – but perhaps they’re over-connected, suffering from what Joseph Smarr calls ‘social media fatigue’. People have multiple accounts on a ton of dispersed and unconnected sites around the web – true, each unique and successful for its own achievements – but it can’t go on.

Twitter for me is streamlined, cleansed, publishing. Whether talking about what I’m doing or finding out information from people or about topics that I follow, the 140 character limit constrains these utterances to be concise and straight-to-the-point pieces of information. The ‘@’ replies and hashtags are brilliant mechanisms conceived to create connections between people and objects where there is almost no space to do so.

I use my blog to write longer discourse, I use my Twitter to link to it. Likewise with the music I listen to, I can tweet Spotify URIs. I link to events and anything particularly good I’ve found (and probably bookmarked with Delicious) I’ll tweet that out too.

Twitter for me is like a central nervous system for my online activities. I won’t say ‘backbone’ – it’s not that heavy. It’s a nervous system in the way it intricately connects my online life, spindling and extending out links, almost a lifestream in micro.

Recently, I saw Dave Winer’s ‘Continuous Bootstrap’ which, although admittedly a bit of fun, describes the succession of platforms deemed social media ‘leaders’ (see the full post here).

What I initially noticed is that he aligns successful platforms – blogging, podcasting – with a single application: Twitter. It doesn’t matter whether he is actually suggesting that Twitter alone is as successful as any single publishing form, but it did make me wonder if Twitter, rather than being the current ‘holder of the baton’, will actually be the spawn for whatever kind of Web-wide platform does become popular next.

If the real Data Portability revolution is going to kick in, if it’s on the cusp of starting right now and everything will truly become networked and connected – would you rather it was your Twitter connections and voice that formed that basis for you or your Facebook profile?

I know I’d much rather explore the connections I’ve made through Twitter. The kind of information I’d get back from the type of people who’d connect in this way would be far more relevant coming from my pool of Twitter connections than from the old school friends and family members who’ve (notoriously) added me on Facebook – the kind that just add you for the sake of it.

If Web 3.0 (or whatever you want to call it) is coming soon, I’d rather detox. Twitter is slimmer and still feels fresh to start it out with. For me, Facebook feels far too heavy now, out of date and messy. Maybe I’m being unfair and I feel that way because I’ve fallen out of touch with it and now I visit less frequently, but all the negativity hasn’t done it any favours – and those complaints aren’t unfounded.

If you’re like me and cannot resist watching YouTube videos in High Definition or High Quality whenever the option is available, you might also get a bit disgruntled that no-one ever seems to link directly to these versions – or might not know how to.

I’ve not seen it documented anywhere on YouTube’s site (maybe you’re told when you upload a video – I’ve not tried), but you can link directly to High Quality and High Definition versions of a video by adding or altering a single argument on the URL string.

For example, take a normal YouTube link:

If you add fmt=18 to the end of the URL, you’ll automatically view the High Quality version:

If a High Definition version is available, add fmt=22:


If you want to embed higher quality versions of the videos through the YouTube player though, you’ll have to use two arguments like so:


The HD videos on YouTube’s site play at 854 x 505 pixels by default (including the player chrome), but these 720p videos will support up to 1280 x 720 pixels.

You can edit the dimensions of the player in the embed code you’re provided with. So your final HD embed code will look something like this:

<object width="854" height="505"><param name="movie" value=""></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="854" height="505"></embed></object>

Another thing: you can jump straight to a specific part of a video by adding the #t parameter and specifying a time value, like so:

These arguments can be paired of course, for example, the same in High Quality:
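The #t value appears to use a minutes-and-seconds format like ‘1m30s’ (that format is my reading of these links, so treat it as an assumption). A small Python sketch for building and parsing such fragments:

```python
import re


def time_fragment(minutes=0, seconds=0):
    """Build a '#t=...' fragment for deep-linking into a video."""
    return f"#t={minutes}m{seconds}s"


def fragment_to_seconds(fragment):
    """Parse a '#t=1m30s'-style fragment into a total number of seconds."""
    match = re.search(r"#t=(?:(\d+)m)?(?:(\d+)s)?", fragment)
    minutes = int(match.group(1) or 0)
    seconds = int(match.group(2) or 0)
    return minutes * 60 + seconds
```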

I recently found out you can do this with Spotify URIs, too:


Albeit formatted differently, it does the same job. Note this only works with Spotify URIs, it won’t work with the HTTP links.

It would be nice to see a standard adopted for such features, but it’s really up to the platform developers to decide upon the mechanism.

Unfortunately, too often each one wants to make their own mechanism unique. This is a micro example of a much larger problem I tend to go on about (see Data Portability and Linked Data).

Anyway, I think YouTube should definitely make their quality selection easier. These parameter tweaks feel like code hacks. It would be much nicer if YouTube allowed you to append something like ‘quality=HD’ to any video link.

I know a pretty little place in Southern California down San Diego way.