Category Archives: Collaboration

This month’s YDN Tuesday was presented by Steve Marshall, who spoke about code generation in ‘Writing the Code that Writes the Code’.

A bit of a disclaimer: Going into this talk I wasn’t sure how much I’d be able to take from it, or whether what Steve would be discussing would be relevant to my own kind of development. I figured he’d be talking about huge projects with (literally) tens of thousands of lines of code, and I don’t think the evening was part of the ‘RIA’ umbrella, which is probably closest to home for me. Anyway, although Steve’s talk did cover those kinds of projects, he also went through some platform-agnostic fundamentals – the kind of transferable ideas that could be applied by any coder – and it’s really those I’m looking at here.

Code Generation in Action

Steve opened with a bit of code generation evangelism, introducing a discussion as to why code generation is something that all developers should want and should do.

Each point here I think speaks for itself, but only having gone over each benefit, staring me in the face, have I questioned: hold on, why don’t I do this already? It’s a convincing argument.

The most immediate is scalability – so often development projects have large amounts of repetition, say in creating views, or sets of view/mediator/component classes, or whatever. Needless to say, as projects get larger, generating this kind of code from templates and base classes reduces or even eliminates the room for error.
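To make that concrete, here’s a minimal sketch of the idea in Python – the `MEDIATOR_TEMPLATE` and class names are entirely my own invention, not anything from Steve’s talk: one template, many consistent classes.

```python
from string import Template

# Hypothetical template for the repetitive mediator classes mentioned above.
# Every generated class has exactly the same shape, so none can be mistyped.
MEDIATOR_TEMPLATE = Template(
    "class ${name}Mediator:\n"
    "    # Auto-generated mediator for the ${name} view.\n"
    "    def __init__(self, view):\n"
    "        self.view = view\n"
)

def generate_mediators(view_names):
    # One substitution per view name; the template is the single source of truth.
    return "\n".join(MEDIATOR_TEMPLATE.substitute(name=n) for n in view_names)

source = generate_mediators(["Login", "Profile", "Settings"])
print(source)
```

Fixing a bug in the template and regenerating rolls the correction out to every class at once.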

Neatly, this was his next point – consistency. Generated code cannot be mistyped or have any part overlooked (as it could be when written by hand), unless of course the source is incorrect. And even if that’s the case, a small change can be rolled out to all generated code once corrected. Which brings us to his third point – generated code is of course far quicker to produce.

Steve also suggests that generated code is ‘predictable’, in a way. Whether looking at code that has been generated or code written for ‘expansion’ (more on this later), on the whole it’s consequently easier to digest. Likewise, source code that is to be used by a generator or to be ‘expanded upon’ is, by its very nature, a smaller volume – and therefore easier to approach.

It’s also a single point of knowledge: only the core source code needs to be understood. A project of tens of thousands of lines of code would be near impossible to traverse or otherwise understand, but knowing that code has been generated from a smaller source that you do understand offers reassurance that the rest of the code is good.

Another advantage is the ease with which the code can be refactored. Obviously only the source code base requires changing; the larger volume is regenerated automatically. And this kind of approach is language independent.

Outside of working with code directly, the kind of abstraction that writing for code generation demands offers its own benefits. Although arguably a desirable trait of good programming anyway, Steve suggests this kind of code is far more decoupled from the design process. Thinking more abstractly about coding means it has less effect on design capability, and vice versa – design reviews or changes should have no bearing on the coding. Being naturally quicker to produce, Steve also says that code generation is a good methodology to pursue when prototyping applications.

Steve also discussed testing, offering that generated code has (or presumably can have) 100% test coverage. Similarly, full documentation is more easily achieved by way of having written a smaller code base.

Overall, cleaner code that’s easier to work with and easier to understand is undeniably a more attractive prospect. Developers are more enthusiastic about working with that kind of code, and so that kind of code is of a higher quality – in the first place, as well as because it has been consistently deployed, tested and documented thereafter.

So how is this achieved?

Models

Steve recognised six principal methods for generating code, each technique seemingly more ‘advanced’ than the previous and, as I said before, each progressively more suited to larger-scale projects.

He saw these as:

  1. Code munging
  2. Inline code expansion
  3. Mixed code generation
  4. Partial class generation
  5. Tier generation
  6. Full domain language

 

I’ll look at each briefly.

Code munging


Code munging essentially takes your code, parses and introspects it, and creates something new that’s not code, though still inherently useful.

Steve’s examples were Javadoc and PHPDoc; it’s the most basic form of generation you can introduce to your workflow.
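In the same spirit as Javadoc/PHPDoc, here’s a toy munger of my own (not from the talk): it parses source, introspects the functions, and emits something that isn’t code – a plain-text doc listing.

```python
import ast

def generate_docs(source):
    # Parse the source and walk it, collecting each function's docstring.
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node) or "(undocumented)"
            lines.append(f"{node.name}(): {doc}")
    return "\n".join(lines)

# A tiny sample module to 'munge'.
sample = '''
def connect(host):
    "Open a connection to host."
def close():
    pass
'''

docs = generate_docs(sample)
print(docs)
```

The output isn’t code at all, but it’s generated from the code – which is the whole point of munging.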

Inline code expansion


This is the ‘expansion’ mentioned above, where source code is annotated with keywords or variables, which are then replaced with generated code that expands upon the original.

Templates are the most rudimentary example. This technique gets repetitive jobs done well, quickly and correctly.

Mixed code generation


This is the first ‘real’ form of generation. It expands upon inline code expansion by allowing the output code to be used as input code once created. Instead of simply replacing special keywords, for example, mixed code generation may use start and end markers, expanding code as before, but allowing the result to be worked upon once completed.

This may include code snippets, or smaller templates injected within classes.
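A minimal sketch of the marker idea (the `# <generated>` markers are invented for this example): everything between the markers is regenerated in place, while hand-written code around them survives each regeneration.

```python
import re

BEGIN, END = "# <generated>", "# </generated>"

def regenerate(source, new_body):
    # Swap out only the region between the markers; keep everything else.
    pattern = re.escape(BEGIN) + r".*?" + re.escape(END)
    return re.sub(pattern, f"{BEGIN}\n{new_body}\n{END}", source, flags=re.DOTALL)

original = f"handwritten = True\n{BEGIN}\nold = 1\n{END}\nmore_handwritten = True"
updated = regenerate(original, "new = 2")
print(updated)
```

Because the markers are preserved, the output file can itself be fed back in as input next time – exactly the output-becomes-input property described above.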

Partial class generation


I guess here’s where things start to get serious. Partial class generation begins to look at modeling languages like UML, creating abstract system models and generating base classes from a definition file.

The base classes are designed to do the majority of the low level work of the completed class, leaving the derived class free to override specific behaviors on a case-by-case basis.
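Here’s a toy version of that split (the definition format and class names are my own, purely illustrative): a base class is generated from an abstract definition, and a hand-written derived class overrides only the behaviour it needs to.

```python
def generate_base(name, fields):
    # Emit a complete base class from an abstract definition.
    assignments = "\n".join(f"        self.{f} = {f}" for f in fields)
    params = ", ".join(fields)
    return (
        f"class {name}Base:\n"
        f"    def __init__(self, {params}):\n{assignments}\n"
        f"    def save(self):\n"
        f"        return 'saving ' + self.__class__.__name__\n"
    )

definition = {"name": "Customer", "fields": ["id", "email"]}
code = generate_base(definition["name"], definition["fields"])

namespace = {}
exec(code, namespace)  # compile the generated base class

# The derived class is written by hand and overrides case by case.
class Customer(namespace["CustomerBase"]):
    def save(self):
        return "custom save for " + self.email

result = Customer(1, "a@b.com").save()
print(result)
```

The generated base does the low-level drudge work (the constructor, the default `save()`); the hand-written subclass stays small.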

Tier generation


This takes partial class generation a level further: a tier generator builds and maintains an entire tier within an application – not only the base classes. Again working from some kind of abstract definition file, the generator this time outputs files that constitute all of the functionality for an entire tier of the application – rather than classes to be extended.
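As a rough sketch (the schema format and generated API are invented): from one abstract definition, emit complete classes for a whole data tier – meant to be used as-is rather than extended.

```python
SCHEMA = {
    "User": ["id", "name"],
    "Order": ["id", "user_id", "total"],
}

def generate_tier(schema):
    # One complete, self-sufficient class per table in the schema.
    modules = {}
    for table, columns in schema.items():
        cols = ", ".join(repr(c) for c in columns)
        modules[table] = (
            f"class {table}:\n"
            f"    COLUMNS = ({cols},)\n"
            f"    def select_all(self):\n"
            f"        return 'SELECT ' + ', '.join(self.COLUMNS) + ' FROM {table.lower()}'\n"
        )
    return modules

tier = generate_tier(SCHEMA)
namespace = {}
exec(tier["Order"], namespace)
sql = namespace["Order"]().select_all()
print(sql)
```

Change the schema, rerun the generator, and the entire tier is rebuilt in step – nothing in it is maintained by hand.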

Full domain language

Steve didn’t really talk about this much; he said it’s very tough – and even if you think this is what you want to do, it probably isn’t (though a hand from the crowd wanted to reassure us that it might not be quite that intimidating).

A domain language is a complete specification dedicated solely to a particular problem – with specific types, syntax and operations that map directly to the concepts of a particular domain.
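To give a flavour of what that means, here’s a toy domain language of my own devising (the syntax is entirely invented) for one narrow problem – describing a price list – where each statement maps directly onto a domain concept:

```python
def run_pricing(program):
    # Interpret a tiny price-list language: 'item NAME costs N'
    # and 'discount NAME by N percent'.
    prices = {}
    for line in program.strip().splitlines():
        words = line.split()
        if words[0] == "item":
            prices[words[1]] = float(words[3])
        elif words[0] == "discount":
            prices[words[1]] *= 1 - float(words[3]) / 100
    return prices

program = """
item coffee costs 3.00
item tea costs 2.00
discount coffee by 50 percent
"""

prices = run_pricing(program)
print(prices)
```

Even this trivial example hints at the cost: you’re now maintaining a parser, a semantics, and documentation for a whole language – hence the ‘it’s probably not what you want’ warning.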

I used the Code Generation Network’s Introduction for reference here, which looks like a good resource.

Also, all the diagrams belong to Steve. He’s put his presentation slides on Slideshare and uploaded PDF and Keynote versions on his website.

The presentation was also recorded as a podcast and is now on the Skills Matter site.

In retrospect

After all these, Steve took us through an example he’d worked on where code generation really proved its worth (a huge project for Yahoo!) – and offered tips and best practices to use once you’ve convinced your team to take on code generation (see the slides linked to above).

Though in my opinion here he seemed to almost contradict some of the things he’d talked about before. He pointed to some tools to use within your workflow, but spoke extensively about writing your own generator – stressing that it can be written in any language (though should be written in a different language to that of the code being produced) and that it’s easier than perhaps you think, though still with a fair few caveats.

But what happened to cutting down the workload? As the talk summary says, developers are fundamentally very lazy – we want to do less and less. Surely this requires more effort?

One of his recommendations is that you document your generator once you’ve written it. But how would you create that documentation – with another generator? Is that meant to be a specially written generator too? And would that need documentation as well?

I remember an episode of Sesame Street from my childhood, where Big Bird was painting a bench. Once he’d finished he made a ‘Wet Paint’ sign with the paint remaining, only to realise that the sign was also wet, so needed a ‘Wet Paint’ sign of its own. Which of course was also wet and needed its own sign. Cut to a scene later and the whole street is full of signs. It was upsetting, to say the least.

I’m being pedantic, I know.

But I guess that’s where I recognise a line for me, or at least for the kind of development I do. Very large-scale projects do often need this kind of approach – Steve is proof of that. For me only the first few techniques are appropriate, though those, undeniably, are incredibly beneficial when you have the opportunity to put them into play.

On a similar note, this month’s LFPUG meeting had a presentation on UML for AS3 by Matthew Press, the recorded video for which is now online – have a look!

I managed to catch half of an Adobe webcast yesterday, previewing Creative Suite 4. It seems their main focus with the release is to improve workflow, easing the integration through the software family, across the whole suite, and with that improve the production process faced by designers and developers alike.

From the outset, their recent press release promises:

“Hundreds of feature innovations… delivering radical workflow breakthroughs that bring down the walls between designers and developers.”

So what were they?

Well, from what I saw, there are more ‘live updates’ – some things I’d seen intended for CS3 that never quite made it. There was a good demo of Dynamic Link, their platform to facilitate these, which moved video clips from Premiere to After Effects and back – in this case, without the need to render a thing. A process that would ‘usually take fifteen minutes’ takes fifteen seconds in comparison.

Illustrator can now handle multiple art boards at once, embedding them into a single workspace – meaning others’ updates are synchronised to your working environment. These could then be imported, for example, into Flash – still in their accumulated state.

A lot of Flash and Flex events I’ve attended recently seem to have presented the same message: attempts to converge the designer and developer, or at least bring them closer together. The new skinning and design options in Flex 4 (Gumbo), for example, or even Thermo as a complete authoring tool, seem intent on doing this.

But I’m undecided – half of me won’t trust the code any ‘WYSIWYG’ editor writes for me. I wonder if designers might soon experience a similar dilemma – Photoshop CS4 has a ‘content-aware scaling’ tool that determines for you which ‘objects’ in a flat image should be resized, or otherwise maintain their ratio. See it in action here.

The other half thinks that Adobe aren’t trying to dictate my working environment to me, or forcing me to change a thing – instead, they’re trying to accommodate others who might struggle and/or are new to the software, or, in my interest, to interactive development.

Colin Moock recently presented the ‘Charges Against ActionScript 3.0’ at InsideRIA, continuing a discussion into whether ActionScript 3’s ‘hard’ reputation is deserved. He criticised CS3 for making ‘simple interactivity hard’ – his example proves his point: the on() and onClipEvent() handlers are no more. But it’s not so bad; it’s just that even the simplest animations require a little more structure now.

But in comes the demonstration of the new animation features of Flash CS4, including tweening along dynamic bezier-like paths with easy and intelligent ways to modify them, ‘scalable’ timelines which automatically reposition keyframes, and even the creation of ‘skeletons’ for MovieClips to quickly animate what would previously have required tedious dissection and some fiddly manipulation.

There are even 3D effects in the authoring tool – effects being the keyword. I can’t help but think this is the direct result of the impact and rising popularity of some powerful open source 3D engines, like Papervision and Sandy. The demonstration didn’t impress at all compared to some of the samples from the aforementioned. I wouldn’t be surprised if, in a similar vein, some ‘light’ physics simulation could soon be introduced.

The repeated message from Adobe: what previously took the time of a developer writing parameter-based code, whether for interaction or animation, can now be done by a designer in half the time – and, they almost suggest, for half the price, because it’s now twice as easy.

I’m sure there’s some more showings today for southern hemisphere timezones, but a whole load of video tutorials are playing over at Adobe TV that are well worth checking out. Everything else can be found at the CS4 homepage.

Well, two, actually.

Did some *amazing* acting for Gareth’s project on Sunday and a little filming when his acting efforts were required.

But in another collaboration I’m working with Barry Pace on his project Greens at Twenty After, an interactive short that explores a new format of film he has conceptualised.

He’s recently been to Cornwall to work with some recording artists on an original score for the piece. Tonight we digitally captured the recordings from an old 4-track mixer, which he can now send on to his sound engineer, who’ll mix down the final audio for the film.

My main contribution though will be developing a platform for the project’s delivery. As important as the film itself are Barry’s ideas regarding the whole philosophy of platforms of interactive narrative, their commercial exploitability and success – this was in part the focus of his dissertation. Essentially he intends that more films could ultimately be created for a platform specific to interactive films such as his, that utilise high-speed bandwidth in a currently unexplored way.

My job then is to connect an offline application seamlessly and unobtrusively to texts primarily stored online, to blur the (presently) very apparent boundaries between film, online texts, even games – creating the opportunity for a convergence of functionality and, hopefully, success.

Gareth’s problem is sorted.

My last post spelled out the issues we were having with calling functions local to Director from embedded .swf cast members using ActionScript. After much trial and error attaching Lingo to various places within the .dir file – to the .swf cast member on the stage, the stage itself, the frame etc. – the problem actually seemed to be with the type of script handling the functions.

My Lingo lingo isn’t perfect, so excuse any wrong terms in this explanation, but essentially it seems scripts assigned to those places become attached sprite behaviours, determining self-contained intrinsic actions. I got the function calling to work by storing very basic actions in a stand-alone cast member, unattached to the stage, frame, .swf etc. The cast then recognises this as a movie script – obviously there’s some difference in the way Director executes them, but it seems to do the trick either way.

Gareth is pretty pleased with the results. I’ve left him with finding the equivalent method of sending asset-specific parameters to the functions, to interpret them in different ways. This means Director can store sets of functions by class – sets for handling buttons, people, variables, movies and so on – hopefully condensing the amount of code transferring both ways and ultimately being performed at runtime.

One of my collaborations is with Gareth Leeding, who’s working on a multi-screen interactive game based on Orwellian totalitarian ideas and voyeurism; he’s called it ‘Start the Surveillance’.

I’ve been called in to do some design and Flash dev, lately trying to recall my Lingo skills from last year’s 3D Shockwave project to establish smooth communication between ActionScript and Director, so as to use Flash-based interactivity with Director’s better video support across a stage large enough to cross the desired five screens.

After much trial and error Gareth found that the basic principle for sending Lingo via ActionScript is to use the getURL action. Usually in Flash it’s used to send Web addresses to the browser. Essentially, instead of talking to the actual browser, it sends the URL parameter to its _parent container – which generally is a browser. When it’s used in a .swf embedded within a Director file, that .dir is the parent, so it receives the data. And this data can be Lingo. For example:

function initDir() {
    getURL("lingo: go to the frame");
}

So we’ve then experimented in different ways to send more actions, finding a few problems with multi-line commands, whether sending multiple URL parameters:

function initDir() {
    getURL("lingo: go to the frame");
    getURL("lingo: myVar = 10");
}

or sending multiple line strings with the special expressions:

function initDir() {
    getURL("lingo: go to the frame \n myVar = 10");
    // the expression "\n" calls a new line
}

Neither worked as intended, so I started experimenting with other methods, then realising that the most efficient and probably most straightforward way is actually to use the embedded ActionScript to call local Director functions. This way Director can handle not only large volumes of functions, each of multiple lines, but the entire script can be written as a single cast member, accessible throughout the whole project – not needing to call variables or functions from Flash at various points, so it can do all the calculations etc. locally as well. Not entirely sure why we didn’t think of that earlier… hmm.

With a little help from Doug, the code is seemingly as easy as:

function initDir() {
    getURL("lingo: myDirFunction");
}

which should call myDirFunction() in the .dir – seemingly that easy – but it’s still not working. :(

We’ve had a word with Mark who thinks it may be some version issues. We’ll keep at it, if this gets nailed Gareth is away.

Maybe we ain’t that young anymore.