Monthly Archives: February 2009

Yesterday I visited Flash Camp London ’09, an all-day, community-run, Adobe-sponsored event on all things Flash Platform.

Last September I attended Flex Camp ’08 (essentially the same, but obviously focused on Flex), so I expected much the same – cool demos, sneak previews, maybe some insight into what Adobe have in the pipeline – and got pretty much exactly that.

Flash Camp '09

Serge Jespers’ opening keynote held a lot of optimism and promise for the future of the Flash Platform, quoting the huge number of downloads to date and pointing to the constant growth in market share that the Flash Player and AIR are enjoying – throwing in a couple of digs at the various doubters of those sums while he was at it.

He spoke about the Open Screen Project and Adobe’s ongoing aim to achieve a level of open portability across multiple platforms – not only in the browser and onto the desktop, but to mobile devices and television platforms too. On the subject of mobile, he discussed prototype versions of Flash Player 9 (and 10?) running on a few devices he had to hand (though unfortunately no demo) and expressed Adobe’s wish to have those ready for manufacturers by the end of the year, with the intention of having them consumer-ready by the end of 2010.

Seb Lee-Delisle was first up, showing off some of the Papervision work he’d recently completed with his agency. He also had some nice demos of the augmented reality tutorials that have been going around lately. These usually use nice applications of the ARToolKit, but Seb pointed to a Flash port I hadn’t yet come across called FLARToolKit, which presumably gives you full control via Actionscript. The Papervision blog has a pretty cool example of the kind of thing you can achieve with it.

Next up was Michael Chase, Senior Creative Developer at AKQA. He presented his latest work, Nike Football, which involved a lot of work with Pixel Bender – the new video processing and visual manipulation platform available with Flash Player 10.

Pixel Bender is a non-destructive way to manipulate the pixel data of images and videos, by means of developing bespoke plug-ins that function in Flash much as the various visual effects and filters do in Photoshop or Illustrator.
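Conceptually, a filter like this is just a pure function applied to every pixel, with the source left untouched. A rough sketch of that idea in Python (not the actual Pixel Bender kernel language – the brightness filter here is an invented example):

```python
def brightness_kernel(pixel, amount=1.2):
    # A Pixel Bender-style kernel: a pure function evaluated per pixel,
    # independently of its neighbours.
    r, g, b = pixel
    return tuple(min(255, round(c * amount)) for c in (r, g, b))

def apply_filter(image, kernel):
    # Non-destructive: the source image is never modified; a new,
    # filtered copy is returned instead.
    return [[kernel(px) for px in row] for row in image]

source = [[(100, 150, 200), (255, 0, 0)]]
filtered = apply_filter(source, brightness_kernel)
print(filtered)  # [[(120, 180, 240), (255, 0, 0)]]
print(source)    # unchanged
```

Because the kernel never touches the source data, the same unfiltered assets can be reused elsewhere – which is exactly the advantage discussed below.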

He demonstrated the Pixel Bender Toolkit, the GUI software used to create these filters. It’s purposely almost identical to every other program in the Creative Suite. Adobe are really pushing for seamless integration across the whole family of software for creators – the vocabulary, workspace, tool sets – all feel very familiar.

For the Nike site, Michael essentially developed one filter for use across all video and image content. This seems straightforward enough, but it’s a brilliant advancement only made possible by Pixel Bender. This way, there’s no need to render every piece of video with the filter applied – or subsequently re-render when the filter is inevitably tweaked (which, of course, would only be an option if permission was given to manipulate the supplied video footage in the first place). It also means the video filter doesn’t have to be designed by a creator skilled in After Effects or other video editing software – as said, the Toolkit handles very much like Photoshop, which most designers are fluent in. I think Michael said you could actually use Photoshop to create filters anyway.

It also means you can change the single filter once and apply the changes to all the assets rather than having to edit every piece individually – and as he suggested, not having manipulated the source material means the un-filtered source can be reused elsewhere. And of course because it’s just Actionscript before it’s compiled, the whole plug-in script can be manipulated by a Flash developer.

It was good to see this in use; I’d only really seen the default demo ‘swirl’ effect (I’m not sure of its real name), which a lot of others there also seemed only to have seen. That ‘swirl’ is so drastic it seems to have no practical use case, so I’d not really considered Pixel Bender since. Here though its use is subtle, well executed and well placed – I’ll have to give it a go.

Mike Chambers then discussed ‘Scripting with Actionscript 3.0’. Though relatively well-covered territory for the developers present, he set about debunking popular misconceptions of Actionscript 3, going through the benefits of migration and giving some examples.

He started with a little background on the new Actionscript version: how the Flash Player was hitting the limits of the performance AS2 could achieve, and how Actionscript 3 was heavily driven by the need for application development – something a lot of (what are now) RIA developers were by that point forcing into Actionscript 2. They also had Flex in mind.

I agree with him that, ultimately, AS3 isn’t that different to AS2 – it’s just different. It’s not harder, or ‘slower’ per se. On a language level, the syntax is still simple and very much the same – it’s the APIs that might present more difficulty for those migrating. The APIs in Actionscript 2 grew organically, expanding where needed, but unfortunately did so inconsistently. It’s that realignment that’s the larger change to overcome.

Arguably, any developer with OOP experience, where consistency is promoted, wouldn’t struggle. He suggests that learning Actionscript 3 is future-proofing yourself: new languages will be far more digestible now that Actionscript contends as a stronger language.

The Timeline is not Evil!

With that in mind, he did admit that the way Adobe present Actionscript 3 can be somewhat intimidating to those without that kind of basic knowledge. The documentation is very much aimed at developers – the code examples are in class and package structures, assuming programming experience where the previous help documentation never did.

Timeline coding is still possible, easily, but it isn’t documented anywhere near as much as class structured code. With one or two caveats, it actually works in almost exactly the same way.

As well as the ‘future-proofing’ mentioned, Actionscript 3 heralds a whole load of other advantages. It’s more verbose (probably where the argued ‘slower development process’ claim lies), but in return offers better debugging – the compiler can be set to be stricter and to detect errors earlier – and it’s also the language for new libraries and APIs (think Papervision, Alchemy, the many tweening engines), both from Adobe and from community efforts.

Richard Dean presented his work on the EA Spore microsite, specifically his efforts built using the Inverse Kinematics and 3D of Flash CS4 – demonstrating some nice timeline-based animation effects, the use of the new ‘Bone’ tool to build character skeletons (more about this later) – as well as some handy tips and best practices.

James Whittaker’s presentation ‘Your First Custom Chrome AIR App With Flash CS4’ delivered exactly what it said on the tin. He offered a walkthrough on how to build your first AIR application, how to design a custom chrome and the various provisions that must be made in doing so, right up to publishing an AIR application file and customising the various settings in the new CS4 GUI. He also spoke about handling icons and digital signing, then creating a nice installer badge at the end. His presentation files are already up online.

Lee Brimelow had a huge amount to say about the new CS4 version of Flash – apparently trying to cram a whole day session into his 45 minute slot. He spoke about the new animation model in Flash, how it’s more like After Effects now – again, the overlapping of software uses in the Creative Suite – how even the timeline in the standard workspace is at the bottom of the screen, more along the lines of video editing software.

So much more of the animation process is automated now, to great effect. Motion paths are automatically constructed, even for simple tweens. The path can be treated like any other line in Flash thereafter, allowing curvature and adjustment of Bézier angles. Adding a keyframe and point in the middle of a tween no longer creates an awkward corner, but a curve to complement the original motion path.

There’s far more control. The tween itself is handled as a unique object, so moving or resizing or changing the length of an animation is much easier and also independent of the clip being tweened – there’s no more clumsy attempt to select multiple frames to modify a complete tween.

Again there was a demonstration of the native ‘3D’ in Flash Player 10. Lee couldn’t emphasise enough, though, that these are intentionally simple 3D effects for transitions and such – not for full 3D immersive environments, for which he recommends looking to Papervision or similar. When the 3D tools are in use though, it’s seamless. There’s a tool to rotate around the Z-axis as simply as there is one for the 2D axes – in doing this, Flash starts to look like 3D rendering software.

These renders are possible because of the ‘notorious’ inclusion of a constantly-running Flash Player on the stage – it’s how Adobe have addressed the differences seen between author-time and run-time. With a constantly running instance of the Flash Player, there should be far fewer discrepancies – although, as they are fully aware, it is a memory hog.

Lee also pointed out the code snippets panel Flash CS4 offers – something I thought Mike Chambers would have mentioned. These are basically small templates of handy bits of code that anyone unfamiliar with Actionscript (or with Actionscript 3, for migrating developers and designers alike) can use to add common bits of functionality – mouse or frame event handlers, for example.

Again we saw Inverse Kinematics – these are great for character animations and (I think) perfect for mocking up prototypes when realistic proofs are required but perhaps the resource isn’t available to fully code them. They’re very quickly put together but equally very effective. Simple constraints applied to skeleton joints create faux-physics that look very convincing. Have a look here if you’ve not seen these in action.

All of that is possible with zero code. Also, all the drag-drop manipulation possible at author-time can also be translated for the user to play with at run-time with the tick of a box – still, with no coding.

Finally Lee demonstrated the new motion editor, which has given a huge amount of control to the author compared to what was available before. The complexity of a tween (whether an ‘x’ position or alpha value or whatever) can now be broken down into multiple channels of manipulation.

For example, previously the complexity of control over a tween was determined (and limited) by the tweening graph. This remains, but now different types of easing can be applied to the different parameters within that graph. Say a clip was moving diagonally across the stage – the horizontal movement could have an ease out whilst the vertical direction may have an elastic easing (or obviously any combination). All the tiny tweaks and nuances to animations that couldn’t be easily achieved in previous versions of Flash, or even those only achievable by code now look entirely possible on the timeline at author-time. Lee’s tutorial is a must-see.
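The per-parameter idea is easy to sketch outside Flash. A minimal Python illustration, using generic textbook easing curves (not Flash’s own implementations), with a different ease applied to each axis of the same tween:

```python
import math

def ease_out_quad(t):
    # Decelerating ease: fast start, slow finish.
    return 1 - (1 - t) ** 2

def ease_out_elastic(t):
    # Springy overshoot that settles at the target.
    if t in (0.0, 1.0):
        return t
    return 2 ** (-10 * t) * math.sin((t * 10 - 0.75) * (2 * math.pi / 3)) + 1

def position(t, x_range=(0, 400), y_range=(0, 300)):
    # One tween, two channels: the horizontal movement eases out
    # while the vertical movement is elastic.
    x = x_range[0] + (x_range[1] - x_range[0]) * ease_out_quad(t)
    y = y_range[0] + (y_range[1] - y_range[0]) * ease_out_elastic(t)
    return x, y

print(position(0.5))  # halfway through: x is already 75% of the way there
```

The motion editor effectively lets you author curves like these per parameter on the timeline, rather than in code.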

Finally, Serge returned to discuss ‘Flex workflows with Flash CS4’. He demonstrated some good techniques for working across Flash and Flex within single projects – firstly how to use Flex metadata tags in Flash, then how to create classes using the Flex SDK and compile those as Flex Library Projects to use as SWC files within Flash (and the Flash CS4 handling of SWCs is so much better – adding files to the library rather than to the classpath list) – then likewise compiling components in Flash to handle in Flex. The latter also maintains coded methods on the Flash components that can be handled within the Flex projects, easing the workflow between Flash and Flex developers no end.

Similarly, to ease the workflow between developers and designers (and as I thought would get a mention), Serge ended by demonstrating Flash Catalyst (previously ‘Thermo’). He created Flex components from Flash graphics, multi-layered PSD files and Illustrator assets – all of which generated MXML code that a developer can play with later.

All in all, a great session – Chester and the guys were never going to disappoint. ;)

Various content can be found in a number of places online if you look for the ‘flashcamp_uk’ tag – there’s a whole heap of conversation on Twitter, and I expect photos on Flickr and videos on Youtube and Vimeo will surface soon enough. I’ll also put up links to presentation files and source code as and when they’re uploaded.

Update (09.03.09): Serge now has a video tutorial over on his blog demonstrating how to use simple Flex Library Projects in Flash.

It seems I was a little late in finding out about the BBC’s work on integrating and exposing semantic data in their (then) new beta trial of Artist pages a little while ago.

In an interview with Silicon.com, Matthew Shorter, BBC’s interactive editor for music, speaks about establishing data associations with MusicBrainz, an open user-contributed ‘metadatabase’, to roll out across all of their encyclopaedic artist pages on the BBC site.

MusicBrainz has been around for some time now; it’s a huge database of music metadata, storing information such as artists, their releases, song details and biographies. Right now it has information on over 400,000 artists.

As early as 2001, it was described as a ‘Semantic Web service’ (think a Semantic Web web service), in its offering of a massive store of machine-processable, openly available information (mostly public domain or Creative Commons-licensed), available via open protocols – in RDF format no less.

The BBC have adopted this open standard, mapping their data schema to that published by MusicBrainz in order to utilise the unique identifiers it provides. This allows the BBC site to leverage the public domain content, augmenting the profile pages found there.

Take a look at one of the records from MusicBrainz, for example, John Lennon’s information at http://musicbrainz.org/artist/4d5447d7-c61c-4120-ba1b-d7f471d385b9.html.

The unique ID here is the MBID, ‘4d5447d7-c61c-4120-ba1b-d7f471d385b9’.

The BBC, then, have a dynamically generated page at http://www.bbc.co.uk/music/artists/4d5447d7-c61c-4120-ba1b-d7f471d385b9.
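Because both sites key on the same MBID, mapping between them is pure string templating – a trivial Python sketch of the two URL patterns above:

```python
MBID = "4d5447d7-c61c-4120-ba1b-d7f471d385b9"  # John Lennon

def musicbrainz_url(mbid):
    # The MusicBrainz record for an artist.
    return "http://musicbrainz.org/artist/" + mbid + ".html"

def bbc_music_url(mbid):
    # The BBC's dynamically generated page for the same artist.
    return "http://www.bbc.co.uk/music/artists/" + mbid

print(musicbrainz_url(MBID))
print(bbc_music_url(MBID))
```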

Previously, writers at the BBC would have had to write (and keep up to date) interesting and relevant content on every single artist page they publish – which, I’m sure you can imagine, is as unenviable as it is impossible. Now MusicBrainz populates a lot of the information here – see the Releases and Credits – and also provides for the retrieval of the biography from Wikipedia.

At the same time, the BBC radio playout system (reportedly giant iPods in the basement of Broadcasting House) updates the playlist information on the right of the page.

As Matthew Shorter says, automation and dynamic publishing means the pages can be created and maintained with a fraction of the manpower. Check the Foals page for a more recent artist and you’ll see news articles automatically aggregated also.

Gathering resources in this way and adding context around the artists enables machines to process the links between these data sets, establish relationships between the information and interoperate on that basis.

In his article above, Tom Scott (the Technical Project Team Leader) also describes these URIs as ‘web scale identifiers’ and talks about the principles of Linked Data. Whilst in this use case these locators facilitate simple data retrieval, the notion of the absolute, global URI is a far larger idea, and here, could grow to be far more powerful.

The URIs facilitate the mechanisms, but stand to play a far larger role in opening and standardising information on the Web as a whole. The MusicBrainz MBID attempts to standardise the way we reference information about music online; its wide reuse is, in a sense, achieving that goal. But rather than thinking of these alphanumeric strings as pointing to locations of database records, they can also refer to the real-world concepts they identify.

Imagine all online materials that feature a particular artist universally employing their single MBID string. Every semantically linked and annotated document and resource could be unified by an intelligent agent instructed to do so, collecting and amassing the information to describe that real-world concept in its entirety. Ultimately, with the Semantic Web in mind, a machine agent could come to understand that concept in its entirety.

In linking to MusicBrainz, the BBC have equally made their data more portable to third parties wanting to use it elsewhere. By agreeing on these unique IDs to identify resources, these pages can be automatically linked to and accessed on the basis of that consistency.
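A toy Python sketch of that portability, with entirely invented records – the point being that once sources agree on the MBID as the key, merging what each says about an artist needs no site-specific logic:

```python
MBID = "4d5447d7-c61c-4120-ba1b-d7f471d385b9"

# Hypothetical data from three sources, each keyed by the shared MBID.
bbc = {MBID: {"now_playing": ["Imagine"]}}
musicbrainz = {MBID: {"name": "John Lennon"}}
wikipedia = {MBID: {"bio": "English musician"}}

def aggregate(mbid, *sources):
    # Merge whatever each source knows about the artist with that ID.
    merged = {}
    for source in sources:
        merged.update(source.get(mbid, {}))
    return merged

artist = aggregate(MBID, bbc, musicbrainz, wikipedia)
print(artist["name"])  # John Lennon
```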

The site provides a RESTful API: just add .xml, .rdf, .json or .yaml to the end of the artist URL.
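In Python terms, the suffix convention looks something like this (URL construction only – nothing here actually hits the network):

```python
MBID = "4d5447d7-c61c-4120-ba1b-d7f471d385b9"

def artist_resource(mbid, fmt=None):
    # No suffix returns the human-readable page; a format suffix returns
    # a machine-readable representation of the same resource.
    url = "http://www.bbc.co.uk/music/artists/" + mbid
    return url + "." + fmt if fmt else url

for fmt in (None, "xml", "json", "yaml"):
    print(artist_resource(MBID, fmt))
```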

The value of online information isn’t determined by scarcity the way the value of physical goods is in the physical world. For the BBC, reuse, repopulation and increased visibility mean an enriched repository, making information more accessible and useful to the reader (surely the initial goal); but with the link to MusicBrainz now established, the information is also connected out into the Web, thereby enriching the source (and then, exponentially, any other links thereon). Better for the BBC, better for the third party, better for the reader – everything is enriched – so hopefully any later applications can benefit from this network effect.

Anyway, it turns out this has been going on since July last year, so perhaps the Silicon.com article was an attempt to increase visibility – we’re six months down the line now, after all.

If so, it worked – Sarah Perez wrote up an article at ReadWriteWeb, and reports over at MusicBrainz suggest things are hotting up for this year. But if not, they should be applauded for commendable transparency and their open-minded efforts (and should accept the extra influx of users to the service that comes with it!). It’s frustrating when products that are intended to ‘open up the web’ are kept closed and private for commercial purposes.

Thing is, I’m surprised I hadn’t found out about this before now. Shorter also describes this as being part of a general movement at the BBC, “to move away from pages that are built in a variety of legacy content production systems to actually publishing data that we can use in a more dynamic way across the web.” So I went digging for more – thinking that, if this (pretty awesome) beta went online relatively quietly and the BBC aren’t particularly shouting about these new innovations (which I think they should!), perhaps there’s more elsewhere?

Well, I found two presentations over at Slideshare, the first on “BBC Programmes and Music on the Linking Open Data Cloud”, the second titled “Semweb at the BBC”, but unfortunately, without transcripts or videos, I can only really marvel at what might be in the works.

Patrick Sinclair (software engineer at the BBC – see his post on the Music beta) said a video might surface, but I’ve yet to find one.

By the looks of things though, there could be some fully recognised Semantic Web applications coming out of the BBC in the future. They look to discuss a handful of the languages and technologies that make up the Semantic Web stack, refer to constructing their own ontologies, reason through use cases for Linked Data, and look to be applying the techniques of the Music pages to the Programmes section and onward.

Look forward to it!
