
Last week I attended a YDN Tuesday, a developer talk hosted by the Yahoo! Developer Network – this one led by Dirk Ginader and discussing Web accessibility.

It looks as if these presentations have been running for a while now and they’ve got a good schedule lined up for the coming months. They cover a decent cross-section of Web development beyond the pure skills – JS, DOM, PHP, OAuth, Web services, Yahoo! technologies – and by the looks of things have AJAX, Flex and Ruby on Rails in the pipeline.

They’re also free, which is great when you’re sitting down to hear Yahoo! experts talk about what they do best!

Dirk Ginader is part of the Accessibility Task Force at Yahoo! and tackled developing fully accessible Web applications at every level – covering basic markup, best practices with CSS and accessible Javascript, and finishing with a discussion of WAI-ARIA, offering some of the insight he has gained from working with the standard.

Most people are familiar with the common three-layer development of Web sites: building a core HTML structure, styling with CSS and enhancing functionality with Javascript. In his talk, though, Dirk introduced a five-layer development method and returned to it throughout the session.

Dirk Ginader's 5-layer Web development

Building on top of the common three-layer method, Dirk spoke of adding a layer of ‘CSS for Javascript’ – extra CSS applied only when Javascript is available, enhancing the interface to take advantage of it – and a final layer of WAI-ARIA, the W3C specification for accessible rich Internet applications.

The core layers – HTML, CSS, JS

First, though, Dirk went into the basics, giving a good exploration of the three shared layers – reiterating the importance of good, clean HTML, appropriate and logical tab ordering and sensible form building, and that a page should, obviously, be usable without CSS and Javascript.

He also reiterated the importance of dividing CSS and Javascript along the line where it should always be drawn: CSS is for styling and Javascript is for interaction. CSS can be used to achieve a lot of the interactive functionality that would otherwise be controlled by Javascript, but those techniques are akin to hacks, says Dirk.

Another accessibility oversight is the assumption that all users have a mouse or pointing device – and, as such, designing everything for mouse control. If your mark-up is good and each ‘level’ of your development has been tested and is robust, your site should be completely navigable with just the Tab and Enter keys. Also, any CSS that uses the mouse-only :hover pseudo-class should use :focus as well, which covers keyboard tabbing too.
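
For example, a minimal pairing (my own sketch, not from the talk) might look like this, so keyboard users tabbing through the page get the same visual cue as mouse users:

a:hover,
a:focus {
/* identical highlight for mouse hover and keyboard focus */
background: #ffc;
text-decoration: underline;
}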

I always feel that approaching Web development with a view to adhering to strict standards and maintaining accessibility helps produce cleaner code and generally minimises errors and cross-browser inconsistencies in the long run anyway.

Dirk spoke about the usefulness of the Javascript focus() method for bringing users’ attention to alerts, changes and altered screen states – especially handy for users with screen readers or screen magnifiers.
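
As a minimal sketch of the idea (my code, not Dirk’s, and the element id is hypothetical): moving focus to an updated message makes screen readers announce it, though elements that aren’t natively focusable need a tabindex of -1 first:

// Move keyboard and screen-reader attention to an updated alert box
var alertBox = document.getElementById('alert'); // hypothetical element
alertBox.setAttribute('tabindex', '-1'); // make it programmatically focusable
alertBox.innerHTML = 'Your changes have been saved.';
alertBox.focus(); // screen readers and magnifiers follow the focus here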

On the subject of screen readers, Dirk spoke about how they really work – how they see a Web page and handle the content – talking about loading systems and the various reading modes. This was great because although I’ve designed for screen readers before, I’ve never seen one being used or had a go myself – and I’m sure I’m not the only one.

CSS for Javascript

The first extra level of Dirk’s five-layer development is adding CSS only when Javascript is available. This means your interface can be altered in the knowledge that Javascript is there to drive it.

You can use Javascript to append an additional class name to page elements so that you can target and style them with CSS. For example, the following line of code adds a class named ‘js’ to the root html element:

document.documentElement.className += " js";

You would then style with the following CSS, where the first declaration is global and the second applies only if Javascript has been found and the ‘js’ class appended:

.module {
/* Both layouts */
}
.js .module {
/* Javascript layout */
}

Enhancing a page in this way isn’t anything new, but it is very cool.

If you’ve heard of the term Progressive Enhancement, then you’ll know why. If you’ve not, you may have heard of Graceful Degradation. Both are methods for handling differences in browser rendering and capability; they’re similar but subtly different.

Graceful degradation, or ‘fault tolerance’, is the controlled handling of single or multiple errors – so that when components are at fault, content is not compromised. In developing for the Web, it means focusing on building for the most advanced and capable browsers first and dealing with the older ones second.

Progressive enhancement turns this idea on its head, focusing on building a functioning core and enhancing the design, where possible, for the browsers capable of it.

There are a good few articles on A List Apart about this that I strongly recommend bookmarking:


In the last of those, ‘Test-Driven Progressive Enhancement’, Scott Jehl discusses enhancement with both CSS and Javascript and has a similar trick of appending class names to page elements once Javascript has been detected. He talks about how to test browser capabilities and offers a testing script, testUserDevice.js, which runs a number of tests and returns a ‘score’ for your interpretation. As well as offering far more detail than the mere detection of Javascript, it even stores the results in a cookie so the tests don’t have to be run on every page load.
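
The gist of the approach (a rough sketch of my own, not Scott’s actual script) is to run a few capability checks, keep a score, cache it in a cookie and only enhance once the browser has proven itself:

// Rough sketch of score-based capability testing (not testUserDevice.js)
function runCapabilityTests() {
  var score = 0;
  if (document.getElementById) score++;   // basic DOM support
  if (window.XMLHttpRequest) score++;     // native XHR available
  if (document.addEventListener) score++; // W3C event model
  return score;
}

// Reuse a cached score if we have one, otherwise test and store it
var cached = document.cookie.match(/deviceScore=(\d+)/);
var score = cached ? parseInt(cached[1], 10) : runCapabilityTests();
if (!cached) document.cookie = 'deviceScore=' + score + '; path=/';

// Only enhance the page if the browser passed every test
if (score >= 3) document.documentElement.className += ' js';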

WAI-ARIA

The final layer of hotness is WAI-ARIA, the W3C specification for making today’s RIAs and heavily client-scripted Web sites accessible.

WAI-ARIA adds semantic metadata to HTML tags, allowing the developer to add descriptions to page elements to define their roles. For example, declaring that an element is a ‘button’ or a ‘menuitem’. In Dirk’s words, it maps existing and well known OS concepts to custom elements in the browser.
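
As a rough illustration (mine, not from the talk, and the element id is hypothetical), a scripted div posing as a button can be given the role and keyboard behaviour of the real thing:

// Teach assistive technology that this custom element acts as a button
var fakeButton = document.getElementById('save'); // hypothetical element
fakeButton.setAttribute('role', 'button'); // announced as a button by screen readers
fakeButton.setAttribute('tabindex', '0');  // joins the keyboard tab order
fakeButton.onclick = function () {
  // perform the save action here
};
fakeButton.onkeydown = function (e) {
  var key = (e || window.event).keyCode;
  if (key === 13 || key === 32) this.onclick(); // Enter and Space activate it, like a native button
};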

As well as page elements, page sections, or ‘landmarks’, can be described too – declaring, for example, a page’s ‘navigation’, ‘banner’ or ‘search’ box. These look very similar to the new structural elements in HTML 5.
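
A sketch of some of the landmark roles from the specification shows the resemblance:

<div id="header" role="banner">…</div>
<ul id="nav" role="navigation">…</ul>
<form id="searchform" role="search">…</form>
<div id="content" role="main">…</div>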

There’s a great introduction on the Opera Developers site.

WAI-ARIA is not something I’ve used before, but looking at the specification there seem to be a lot of definitions you can add into your code – it looks very extensive.

Although it is ready to use, it will apparently invalidate your code unless you specify a specific DTD. Dirk didn’t actually point to where you can find these, though I’ve seen that you can make them yourself – I don’t know if this is what he was suggesting.

Dirk has also uploaded his slides to Slideshare, including the code I’ve used above; you can see them here:

It’s been a good while since Twitter started their OAuth beta. I wanted to write about it when it first came about but never had the chance – same story when they were under fire from phishing attacks in January and the real need for stronger security became so apparent. Anyway, what with the recent ‘Sign in with Twitter’ announcement, which enhances the OAuth beta, I thought I’d use this as my excuse to say what I wanted to say.

If you’re unfamiliar with OAuth, it’s an open protocol for user authorisation. It works by allowing a user of one platform to grant a secondary platform access to their information (stored by the first platform) without sharing their login credentials, exposing only the data they choose.

When a user visits a ‘consuming platform’ (the secondary application, that is), it passes a request to the native platform, the ‘service provider’, which returns a login request for the user to complete. The user logs in to the native platform, tells it to grant the secondary application access to their data when asked, and is then returned to the consuming platform, ‘logged in’ and ready to go.

The crucial problem with Twitter’s API is that, currently, this is not the mechanism used to access password-protected services, for example the ability to publish tweets. The method used instead is seriously flawed, and dangerous.

Right now, a website or desktop application such as TweetDeck or Twitpic simply asks for your login details with a regular login prompt. I think from that point onwards there is a huge amount of misunderstanding about what is actually going on.

Users are not logging in to Twitter at this point, instead they are just telling the third-party application what their password is. Thereafter, the application uses that password as it chooses.
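
To make that concrete, here’s a minimal sketch of the kind of Basic Auth request such an application then makes against Twitter’s REST API (the endpoint reflects the API of the time; a desktop client would send the same request outside the browser):

// The user's raw credentials travel, merely Base64-encoded, with every request
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://twitter.com/statuses/update.json', true);
xhr.setRequestHeader('Authorization', 'Basic ' + btoa('username:password'));
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('status=' + encodeURIComponent('Posted by a third-party app'));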

Instead of telling Twitter that you’d like a certain application to access your data, you are freely handing your password over to the application, hoping, in confidence, that it won’t be stored, sold or misused thereafter.

Incredibly, this has gone on for a very long time. It seems the majority of Twitter users have come to accept handing out their password to completely unknown sources. True, those aware of the dangers or generally security-wary tend to use only a select few services, but there are so many applications built on Twitter’s platform – a lot of them offering very niche, almost ‘throwaway’ services – that I can’t believe the ease, almost disdain, with which so many hand out their login credentials without concern.

OK – it’s not as if you’re giving away your online banking details, but I can’t imagine this happening with any other type of account – social media or otherwise – email, Facebook or any other website, if they offered an open platform for these kinds of applications to be built upon.

It’s become increasingly accepted as the request has become more common. The problem with there being so many applications, especially the ‘disposable’ kind, is that users can forget when and where they have given their details, and to whom.

Say a user tries a new application but it seems not to work: it will be easily forgotten – perhaps put down to teething problems, or maybe it’s just not a very good app and, “..nevermind”, they might not have been that interested anyway. By this point, if it was purely an attempt to capture their details, it’s too late.

Admittedly, and thankfully, I’ve never heard of anything so blatant, and I hope that if anything so obvious came around, the Twitter community would raise awareness and Twitter would respond accordingly.

But of course, the real targets for these vulnerabilities are the users who aren’t aware of the danger and aren’t expecting to have to look out for fishy, or phishy, sites – and the problem is informing those people.

If you’re reading this blog – a Web development blog, and you’ve sat this far through a post about user authentication – chances are you’re Web-savvy and exactly the kind of person I’m not talking about. You’re probably also not the kind of person who reuses passwords, but you know that’s not uncommon.

In a scenario where a password is phished, if the email account you’ve registered with Twitter uses the same password you’ve just lost to the attack, there is no question that an attacker would try that password with every other account you’re receiving email from and are connected to.

Then that becomes a serious breach.

But like I say, I think I’m preaching to the choir – and maybe being a bit harsh about people’s common sense.

Twitter and OAuth

Anyway, I wanted to talk about OAuth. Twitter’s implementation is described on their wiki page for ‘Sign in with Twitter’; it works as follows:

Sign in with Twitter workflow

  • If the user is logged into Twitter.com and has already approved the calling application, they will be immediately authenticated and returned to the callback URL.
  • If the user is not logged into Twitter.com but has already approved the calling application, they will be prompted to log in to Twitter.com and then immediately authenticated and returned to the callback URL.
  • If the user is logged into Twitter.com but has not yet approved the calling application, the OAuth authorisation prompt will be presented; authorising users will then be redirected to the callback URL.
  • If the user is neither logged into Twitter.com nor has approved the calling application, they will be prompted to log in to Twitter.com and then presented with the authorisation prompt before being redirected back to the callback URL.

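For the curious, the developer’s side of that handshake boils down to something like this (a rough sketch, not Twitter’s code – getRequestToken() is a hypothetical stand-in for an OAuth library call, and in a real application the signed requests happen server-side so the consumer secret stays private):

// 1. Obtain an unauthorised request token (hypothetical helper wrapping
// the signed POST to https://twitter.com/oauth/request_token)
var requestToken = getRequestToken('http://example.com/twitter-callback');

// 2. Send the user to Twitter, which applies the four cases above
window.location = 'https://twitter.com/oauth/authenticate?oauth_token=' +
  encodeURIComponent(requestToken);

// 3. Twitter redirects back to the callback URL, where the application
// exchanges the approved request token for an access token and can then
// call the API on the user's behalf – without ever seeing their password.
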
You may have already seen it in action if you’ve used Kevin Rose’s new startup WeFollow, the ‘user powered Twitter directory’. You can see which applications (if any) you’ve granted access to in your account settings at http://twitter.com/account/connections.

Flickr also uses this kind of authorisation flow; you may have seen it there if you’ve tried uploading images with a third-party application.

Aside from being a more secure technical solution, Twitter’s adoption of OAuth could have a very positive domino effect on similar and future applications. In fact, it’s been predicted that it’ll ‘pave the way’ for a whole host of new apps and more mash-ups to come – presumably both using Twitter’s data and building new platforms on top of it. I imagine this prediction sees a point where users are familiar with the authentication process, confident that their data can be accessed securely and within their control.

As I said in my post about ‘Sign in with Twitter’, Twitter is an incredible tool and is becoming ever more powerful and recognised as such. It’s not as if its popularity won’t increase anyway, but if some people’s qualms and easy criticisms of Twitter – among which security always scores highly – are solved by these kinds of platform advances, there will be no denying it as a leader, rather than a contender, in the social Web landscape.

It must be said, though, that OAuth isn’t infallible. Only two weeks ago, Twitter took down their OAuth support in response to the uncovering of a vulnerability, though they weren’t the only ones affected.

And then there’s phishing..

I mentioned the phishing attacks that Twitter suffered in January – thirty-three ‘high-profile’ Twitter accounts were phished and hacked. It saw a good effort on Twitter’s part in reacting quickly and fixing the problems; only two days prior they had released a notification warning users to be aware of such scams.

During this time, Twitter was a great source for debate and argument over how to resolve its own issues.

I follow a lot of developers and platform evangelists including Alex Payne, Twitter’s API Lead, as he battled through the security breach. Another is Aral Balkan and between the two of them they voiced some fair criticisms (1, 2, 3) and argued out a lot of issues (1, 2).

As Alex says, OAuth does not prevent phishing, and Twitter are aware of this. The very premise of phishing – dressing a trap up as a legitimate and trusted source – can be extended to OAuth implementations too. But OAuth does make the problem easier to handle, and using it instead of Basic HTTP Auth builds user trust along the way.

Up until now, Basic Auth has been a large part of Twitter’s API success – OAuth is an additional, high hurdle for new developers. Twitter admit as much: they’ll give at least six months’ lead time before making any policy changes, and they’ve no plans in the near term to require OAuth.

Alex did a good job of pointing out helpful resources and blog posts for those joining the debate. One was Jon Crosby’s post about the phishing attacks which, as he says, is a great explanation of the correlation of OAuth to phishing attacks – which is to say, essentially none. It’s a short post but it clearly outlines the difference between authentication and authorisation – and Alex posting it shows Twitter’s awareness of the problem and understanding of what OAuth is and is not.

Another was Lachlan Hardy’s post about phishing (via), which extends Jeremy Keith’s proposed ‘password anti-pattern’. Keith thinks that accessing data by requesting login credentials is unacceptable, a cheap execution of a bad design practice. But interestingly he goes on to talk about the moral and ethical problems that developers experience – that although users will willingly give out their passwords, and Basic Auth is an easier process to implement as well as being a lower barrier to entry for users (again, look at Twitter’s success with it), we actually have a duty not to deceive them into thinking that it is acceptable behaviour.

Keith also talks about pressure from clients and their need to add value to their applications ‘even when we know that the long-term effect is corrosive’ – but when I read that (posted by Alex, remember), having read his thoughts on security on his own blog, I wonder if Alex is hinting at something about Twitter outside of his control..

He is the only Twitter employee I follow, so I tend to think of him as the company’s representative, but I should probably think of the two separately. Aral’s post about the phishing scam points the blame squarely at ‘Twitter’, and only in the last paragraph does he say ‘stop blaming application developers’ – at which point I realise the devs at Twitter are just trying to do their jobs.

Actually, I’ve just noticed Marshall Kirkpatrick’s article ‘How the OAuth Security Battle Was Won, Open Web Style‘ at ReadWriteWeb, which talks about the OAuth downtime last month. It’s a pretty good read, reporting that the lead developers of the affected providers were all aware of the vulnerability as events unfolded and worked together quickly and effectively to resolve the problem before going public and risking inviting attacks.

As Marshall says, if OAuth were software, a fix could be implemented and pushed out to everyone using it – but it’s not; it’s ‘out in the wild’ and no one party is in charge of it. It’s a real victory that they all cooperated so quickly and so well to neutralise the threat.
