Category Archives: Development

With a new venture, comes a new set-up. I’m going to be working with a new start-up company, a small outfit that will require some working from home, some working on site, plenty of time spent in odd Wi-Fi spots around London presumably — all amounting to a good enough excuse to purchase a brand new laptop and establish a new development environment.

For some time I’ve been keen to try working with Linux full-time, particularly Ubuntu, the most popular flavour of Linux. So with a brand new system I have an opportunity to try it.

I usually work with Unix-like systems for Web development, but on the server side or through virtualisation software such as VMware, and infrequently at author time or on the desktop. I figured this is going to be a development machine, prone to be tarnished by experiment (or more likely ignorance) and perhaps in need of frequent resets, so why not give it a go.

Rather than go solo with only an installation of Ubuntu, I decided to dual-boot with Windows 7. I already own a genuine copy of Windows 7 and a few other PC-based applications, particularly Creative Suite for Web, plus this meant I could save some cash on buying a laptop without a pre-installed operating system. The dual-boot also gifts me with a fallback option if I really can’t get on with Linux.

So far as installation goes, there’s really not much to report. Installing Windows came first, a set-up as normal: straightforward but tiresome, probably hitting double figures in the number of reboots required to get going, with around seventy recommended updates along the way.

Ubuntu was downloaded and burned as a Live CD, running a super-simple installer with a couldn’t-be-easier draggable horizontal slider to determine the partition size.

There are many ways to partition drives, recommended schemes such as those on the Ubuntu docs, but I figured a halfway split would be fine. I can always readdress the partitioning if Linux doesn’t work out or if I end up never touching Windows and need the extra space.

After a few days of playing with Ubuntu, so far so good. Needless to say, now at version 10.10, it’s been a fully realised operating system for some time.

The two flavours, for desktop and netbook, are both fully featured, true alternatives to Windows and Mac OS. No longer only options for developers and the computer-savvy.

They both come with a number of applications pre-installed — browsers, mail and chat clients, image editors, a range of software you’d expect. Anything else you need can be found in the Ubuntu Software Centre, an App Store-like directory of open-source and proprietary software from Ubuntu and trusted third-parties and partners.

Firefox and Thunderbird are amongst the familiar names that come packaged with the basic install and a huge number of common day-to-day applications are widely supported — Chrome, Opera, the Flash player, Skype, all AIR-based apps (such as TweetDeck).

There’s support for iPhone and Android devices and cameras. OpenOffice provides your alternative to Word, Excel and PowerPoint.

I’m currently getting to grips with Evolution Mail (email apps are always fun) and Empathy IM, a chat client that integrates Google Chat, MSN, Jabber — all the usual suspects — akin to Pidgin or Adium.

And the icing on the cake — Spotify have a Linux preview. With that installed, it feels like home. Now I just need to get used to a whole new set of shortcuts and hotkeys.

The Ubuntu docs have a lengthy article about the various ways of dual-booting with Windows; on the whole, the Community Docs look like a very helpful resource.

The Pomodoro Technique is a time management method developed in the late 1980s by a man named Francesco Cirillo.

The method is simple: take a 25-minute timer (or anything capable of timing 25 minutes) and work uninterrupted until that time is up. Then take a 5-minute break, then repeat. At the end of every fourth working interval, take a longer break – say 15–20 minutes. Sounds easy.
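The cycle is simple enough to sketch in code. Here’s a minimal version of the schedule logic (the function names and the 15-minute long break are my own choices, not part of Cirillo’s spec):

```php
<?php
// Minimal sketch of the Pomodoro schedule: 25-minute work intervals,
// 5-minute breaks, and a longer break after every fourth interval.
// Names and the 15-minute figure are my own choices for illustration.
function pomodoroBreakMinutes(int $completed): int {
    return ($completed % 4 === 0) ? 15 : 5;
}

function pomodoroSchedule(int $intervals): array {
    $schedule = [];
    for ($i = 1; $i <= $intervals; $i++) {
        $schedule[] = 'work for 25 minutes';
        $schedule[] = 'break for ' . pomodoroBreakMinutes($i) . ' minutes';
    }
    return $schedule;
}
```

A real timer would of course sleep and ring rather than return strings, but the interval arithmetic is the whole of the method.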

The technique has gained a kind of quasi-cult status online, heralded as one of the most effective modern lifehacks – productivity tricks devised to cut through information overload, frequently adopted by programmers.

So what’s the big deal? Skeptically, you may ask: why do you need to employ this or any technique? Why can’t you just work normally? Particularly with one so straightforward.

Whilst the method is simple, such tricks can be hugely effective. The very process of planning, recording and reflecting on progress and work completed creates a sense of motivation and adds to your feelings of accomplishment. The ‘Pomodoro Timer’, a tomato-shaped kitchen timer (pomodoro is the Italian word for tomato) produces an audible ticking during the working interval, triggering some subconscious drive to get things done.

Cirillo adds that the physical act of winding up the timer confirms the user’s determination to start the task – the kick we need to get going, and a way to overcome the all-too-familiar deadline anxiety.

So over the past month I’ve been working with the Pomodoro Technique. Adopted at the beginning of a new one-month contract, in a new office at a new desk: a clean opportunity to try it out.

Firstly I should say I am usually somewhat skeptical of any kind of technique promising some revolutionary effect, on any level, be it relating to money, work or health. I guess I already consider my days to be pretty productive, thanks. I probably only considered the Pomodoro Technique because of the number of positive testimonials I’ve heard, from a wealth of online sources and more recently from a few people whose opinions I respect.

But I don’t particularly suffer from deadline anxiety, motivation isn’t a problem for me when I’m given a heap of work – in fact I’m the opposite, irritably frustrated when I have nothing to do.

I hoped this would improve my time management, rather than boost my productivity. Too often I’m distracted by checking emails, chat clients or Twitter. By “distracted”, I don’t particularly mean going off-task and browsing intermittently, so much as, in a lull of attention, wondering whether an email response has arrived or a message has been returned, so I’ll switch windows and hit refresh.

For me the technique structured my breaks. It offered the reassurance that I would have the chance to check for that email or tweet in no longer than 25 minutes’ time. It meant there was no possibility of getting caught up in a task that might stretch to an hour and cause me to fall behind in everything else.

Ultimately, it meant I could devote my full attention to the current task, and I’d be reminded when it’s time for a break – when my timer tells me so.

An essential aim of the technique is to cut down on interruptions, both internal and external. If asked “Do you have time for a quick chat?”, I could confidently say “Yes, in 5 minutes”, or however much time I had left on the clock. I knew that if that ‘quick chat’ became a 10 minute meeting, my work wouldn’t be any more interrupted.

The timer offers an impartial and impersonal structure, an obedience to which relieves the distraction of continual desktop beeps and alerts whilst sharpening your focus on the task in hand.

There are a number of free unofficial Pomodoro applications for desktop and mobile platforms to save you ordering a tomato timer or testing your colleagues’ patience with a bell ringing every half hour.

I used the Pomodoro Lite iPhone app, which is very straightforward (it’s just a preset interval timer after all), though it eats up the battery over a whole day’s use.

Focus Booster and Pomodairo are cross-platform AIR-based apps. Both look pretty slick and have additional features such as task lists.

ChromoDoro is an extension for Google Chrome, adding a tomato timer to your toolbar with pop-ups to alert you when you’re done.

Whether I’ll continue to use the technique, I’m not sure. I think as a method of concentrating your focus, or retraining your attention span even, it’s very effective. I didn’t find any revelation, but it didn’t disrupt my normal working day either.

Getting the hang of it, completing tasks within 25-minute cycles does result in a great sense of accomplishment, feeding a higher level of motivation. It might also surprise you how much you can achieve in 25 uninterrupted minutes. I found myself attempting tasks I would usually estimate taking longer. Perhaps that’s the competitive part of me, but it felt great beating the clock.

The most positive outcome, as I say, was putting a stop to my tendency to allow distractions to creep into working time. Though it would be nice to think I could achieve that with just a bit more self-discipline instead.

Whilst I’m still somewhat of a newcomer to the world of freelance development, it’s already very apparent that the call for services is heavily affected by online trends, both technological and fashionable.

It’s quite understandable, that being the nature of the beast – to be adaptable and reactive, to be able to rapidly assimilate new skills. It’s much simpler for employers to call for contractors who have already worked with a certain platform than to channel time and resources into training – neither of which is often available to spare.

Recently I’ve taken frequent forays into the Facebook platform, both in the form of Canvas-based applications and Facebook-enabled external sites (what was previously known as Facebook Connect). Both, however, leverage Facebook’s core platform functionality, the Graph API, in strikingly similar ways.

Twice I’ve been tasked with projects that have required building bespoke photo manipulation tools.

Specifically, two (unrelated) applications both offering users the ability to select photos from their Facebook albums, apply some choice or combination of filters, effects or creative imagery, and produce a processed image as a result – leaving them free to download, share, remix or do whatever they please with the final outcome.

The first was Honda’s CR-Z Mode Art, both a Canvas application and third-party site running from the same code base.

This was Flash-based and applied simple run-time blending modes and colour transformations to a user’s profile image, largely relying on a bank of ready-made illustrated assets (of theme selected by the user) to embellish and augment the picture. Snippets of their profile information were also added.

When deciding upon a methodology for the image processing, I initially suggested that it all be handed to a server to make use of a library such as PHP’s GD or ImageMagick.

This would take the weight off the client, both in handling the processing itself and in the delivery of aforementioned asset library. Even with a rudimentary grasp, both offer a much more powerful level of image manipulation than I thought easily achievable with Flash.

But Flash was a must. I recommended Pixel Bender, mainly for selfish reasons but also because it offers a complexity of processing down to the pixel level; in the end that wasn’t an option either, due to client-imposed restrictions on the Flash Player version.

More recently I was offered a second attempt with a very similar application with Samsung’s Now Project, an entirely PHP-based Facebook Canvas app.

This application offered the user a number of presets to apply to their image, very much along the lines of Hipstamatic – or rather its recent successor, Instagram.

Here we made use of GD’s built-in image filter functions, which are really handy.

A lot of the time GD is called upon for the likes of simple image cropping, rotating or resampling. That describes most of my experience, anyway.

The filter options however offer control over a number of image properties as common as you’d expect in any image editing software, brightness and contrast for example. There’s conversion to grayscale and colour inverting, there’s a colorize method to handle RGB processing, also blur and smoothing options.

All can be used in combination to achieve a wide breadth of results.
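For illustration, a preset along these lines might chain a few imagefilter() calls. This is a sketch with made-up filter values, not the presets we actually shipped:

```php
<?php
// Hypothetical 'preset' chaining GD's built-in filters.
// The specific values are illustrative, not production settings.
function applyPreset($im): void {
    imagefilter($im, IMG_FILTER_GRAYSCALE);            // drop to grayscale
    imagefilter($im, IMG_FILTER_BRIGHTNESS, 20);       // lighten a touch
    imagefilter($im, IMG_FILTER_CONTRAST, -10);        // negative values increase contrast
    imagefilter($im, IMG_FILTER_COLORIZE, 30, 20, 0);  // warm, sepia-ish tint
}

// Demo on an in-memory image; in a real app you'd load a user's photo
// with imagecreatefromjpeg() and save the result with imagejpeg().
$im = imagecreatetruecolor(200, 200);
imagefilledrectangle($im, 0, 0, 199, 199, imagecolorallocate($im, 180, 120, 60));
applyPreset($im);
```

The order of the calls matters – colorizing before desaturating, for instance, would throw the tint away.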

In order to create the desired presets, something requiring a more creative eye than mine, I built a custom tool for uploading and processing images on-the-fly so the project designers could tweak the exact filter combinations until they found their ideal results. It provided an option to export the settings so I could implement them later.

I’ve recreated that tool in a simpler form here. This one was thrown together very quickly this evening; there are a few images to choose from and a number of settings – it’s all achieved entirely with the GD library. It’s almost completely untested and not optimised whatsoever, so you’ll have to bear with the waiting time. Have a play!

You get the idea anyway.

GD can be used for watermarking images, often seen applying a transparent PNG to the corner of a picture, say. The same process is used to apply vignettes and overlays of scratches or noise in this example.
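A minimal version of that overlay step might look like this (file handling omitted; the function names are mine):

```php
<?php
// Sketch: stamp an overlay image onto a photo with GD. With alpha
// blending enabled on the destination (the default for truecolor
// images), a transparent PNG's alpha channel is respected.
function applyOverlay($photo, $stamp, int $x = 0, int $y = 0): void {
    imagecopy($photo, $stamp, $x, $y, 0, 0, imagesx($stamp), imagesy($stamp));
}

// Corner watermark: position the stamp bottom-right with a margin.
function applyWatermark($photo, $stamp, int $margin = 10): void {
    applyOverlay(
        $photo, $stamp,
        imagesx($photo) - imagesx($stamp) - $margin,
        imagesy($photo) - imagesy($stamp) - $margin
    );
}
```

A full-frame vignette or scratch overlay is just the same copy with a stamp the size of the photo placed at (0, 0).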

It’s really adaptable and still really useful, even though it hasn’t seen an update since November 2007.

Full source code for the tool is available on GitHub.

I thought with the turn of the new year and whatnot, I’d make a few improvements to my blog.

For some reason when I started writing I thought it’d be a great idea to use rather abstract titles for all of my posts, something short and curiously inviting (hopefully) rather than the straightforward approach of actually describing what the post is about. I wanted to avoid long and boring titles like “Differences between Flash Player 10 and 10.1” or “How to install.. blah blah”.

So although, as far as titles go, they could probably be more boring – they could definitely be more helpful. Most of them are obscure and aren’t really meaningful. They’re fine for human readers, especially after people have read the post, but they’re not so good for search engines or people searching for what I’ve written about.

I decided then to use WordPress’ custom fields to add a descriptive ‘subtitle’ to each post that should shed some light on what I’m actually writing about and hopefully the titles should start making sense.

I could have started anew and used descriptive titles from here on, but that would be inconsistent and make those old titles even more nonsensical. I could have renamed the lot, but the old URLs would then be meaningless (the name is in the permalink), or if I changed those URLs then previously incoming links would be broken.

Adding Custom Fields

Custom fields are essentially metadata for each post and can be pulled in by your theme.

I chose to display the subtitle under each post title so humans, search engines and crawlers alike can read the abstract title and the helpful subtitle description together.

Custom fields are added at the bottom of the ‘Add New Post’ page and can be any number of pairs, each taking a name and a value.

These custom fields won’t automatically show in your post, you’ll need to edit your current theme’s PHP files to retrieve them.

The get_post_custom function will return a multidimensional array with all custom fields of a particular post or page, which you can traverse to find your custom value pair:

<?php get_post_custom($post_id); ?>

Otherwise you can use the get_post_custom_values function and send the field name to get an array of all the values with that particular key:

<?php get_post_custom_values($key); ?>

I’m using a field to add a subtitle, so I append the value onto the end of my title which you can see on the left (if you’re reading this on my site).

Semantically this is part of my title. As I stated above, for SEO purposes, I want this to be considered as a part of my title for all intents and purposes by search engines and spiders as well as humans. I only ever want them considered ‘separately’ on an aesthetic level and in WordPress’ forming of the permalink.

So I’ve styled up the subtitle to look complementary to the ‘actual’ title with CSS, but injected it into the post’s h2 title tag in my mark-up so it stays semantically sound for machine readers and accessibility purposes.

So here’s the combined title mark-up (without style hooks):

<h2 class="entry-title">
<a href="..">Start Making Sense</a>:
<span class="entry-subtitle">Wordpress, SEO and pimping my blog</span>
</h2>

And the styled and un-styled versions look like this:


It’s subject to a conditional statement that checks whether the custom field has been populated, so I don’t have to rush through adding them to all my posts and those without subtitles yet won’t look broken.
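That conditional boils down to something like the following. The ‘subtitle’ field name and the helper are my own naming, and the lookup is sketched outside of WordPress so the logic is testable on its own:

```php
<?php
// Sketch of the subtitle lookup. In the theme this would be fed by
// get_post_custom(get_the_ID()); the 'subtitle' field name is assumed.
// get_post_custom() returns each field's values as an array, hence [0].
function getSubtitle(array $custom): ?string {
    return (isset($custom['subtitle'][0]) && $custom['subtitle'][0] !== '')
        ? $custom['subtitle'][0]
        : null;
}

// In the theme's loop, roughly:
//   $subtitle = getSubtitle(get_post_custom(get_the_ID()));
//   if ($subtitle !== null) {
//       echo '<span class="entry-subtitle">' . esc_html($subtitle) . '</span>';
//   }
```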

This does mean, however, that it’s only implemented by the theme, not by the publishing platform – so it will only be seen by visitors to the blog. It isn’t published to the RSS feed, for example, so syndicated readers won’t see it.

Sitemaps

Another improvement for SEO purposes was to add an XML sitemap.

I recently added an HTML sitemap in the form of my index page, but then saw Google’s Matt Cutts discussing HTML versus XML sitemaps; he concludes that if you can have both, then do so.

It’s very easy to generate a sitemap – according to Matt’s video a simple list of URLs would suffice – but I searched the WordPress Plugin Directory for a ready-to-roll solution and found Google XML Sitemaps, which does the job for me.

It’s fully automatic so it doesn’t need my attention once I’ve installed it. It generates an XML file based on my current posts and pages and automatically updates whenever I publish or modify anything new.

It also sends notifications of updates to the main search engines – Google, Bing, Ask.com and Yahoo! – and has a number of advanced options for prioritising pages and setting how frequently crawlers should re-check certain pages for updates.

The XML file it generates sits at http://blog.marchibbins.com/sitemap.xml (for me) and there’s an option to attach an XSL file for styling, but I chose just to use plain XML.

Social Bookmarking

I got to browsing some of the other popular plugins and temporarily tried out Sociable which adds typical social bookmarking links to the bottom of each post.

It supports nearly 100 different APIs and the drag-and-drop interface is nice and easy to work with.

In the end I chose not to use it; I thought it was a bit over-the-top for my site.

There’s a lot more in the directory though, like Add to Any, SexyBookmarks and ShareThis – though they’re not hard to write anyway, if I eventually change my mind.

Tweaks

I made a few other minor changes here and there, some CSS changes and design tweaks.

I threw an RSS link into the header and dropped the number of posts that show on my front page.

I realised my posts can tend to get pretty lengthy and often have embedded videos or Flash content, so showing ten of those (the default) made the page quite heavy.

I started using the ChaosTheory theme when my blog was freely hosted on WordPress.com and I’ve stuck with it since, making odd modifications to both server-side code and the front-end as and when. To use it on a WordPress.org blog I found a port hosted by Automattic, but it’s never been quite right, nor entirely compliant with mark-up or CSS standards; I’ve only really maintained it for consistency.

Recently though I found the Unicorn ‘Universal Conformance Checker’ from the W3C which combines all the common validation checks and also has a MobileOK Checker which looks at the ‘mobile-friendliness’ of a site.

Shamefully my blog scores an awful 0/100, due to links with “_blank” targets, the number of images embedded, the presence of Flash and JavaScript and a ton of other things.

Over time it’ll be my aim to get all that sorted. Hopefully without having to start my own theme completely from scratch.

This month’s YDN Tuesday was presented by Steve Marshall, who spoke about code generation in ‘Writing the Code that Writes the Code’.

A bit of a disclaimer: Going into this talk I wasn’t sure how much I’d be able to take from it, or whether what Steve would be discussing would be relevant to my own kind of development. I figured he’d be talking about working with huge projects with (literally) tens of thousands of lines of code and I don’t think that the evening was part of the ‘RIA’ umbrella which is probably closest to home for me. Anyway although Steve’s talk did cover those kinds of projects, he also went through some platform-agnostic fundamentals – the kind of transferable ideas that could be applied to any coder – and it’s really those I’m looking at here.

Code Generation in Action

Steve opened with a bit of code generation evangelism, introducing a discussion as to why code generation is something that all developers should want and should do.

Each point here I think speaks for itself, but only with each benefit staring me in the face did I question: hold on, why don’t I do this already? It’s a convincing argument.

The most immediate is scalability – so often development projects have large amounts of repetition, say in creating views, or sets of view/mediator/component classes, or whatever – needless to say as projects get larger, generating this kind of code from templates and base classes reduces or even eliminates the room for error.

Neatly, this was his next point – consistency. Generated code cannot be mistyped or have any part overlooked (as can happen when writing by hand), unless of course the source is incorrect. And even if that’s the case, a small change can be rolled out to all generated code once corrected. Which brings us to his third point – generated code is of course far quicker to produce.

Steve also suggests that generated code is ‘predictable’, in a way. Whether looking at code that has been generated or that written for ‘expansion’ (more on this later), on the whole it’s consequently easier to digest. Likewise, source code that is to be used by a generator or to be ‘expanded upon’ is, by its very nature, a smaller volume – so therefore easier to approach.

It’s also a single point of knowledge: only the core source code needs to be understood. A project of tens of thousands of lines of code would be near impossible to traverse or otherwise understand; knowing that the code has been generated from a smaller source that you do understand offers reassurance that the rest of the code is good.

Another advantage is the ease with which the code can be refactored. Only the source code base requires changing; the larger volume is generated automatically. This kind of approach is also language independent.

Outside of working with code directly, the kind of abstraction that writing for code generation demands offers its own benefits. Although arguably a desirable trait of good programming anyway, Steve suggests this kind of code is far more decoupled from the design process. Thinking more abstractly about coding means it has less effect on design capability, and vice versa – design reviews or changes should have no bearing on the coding. Being naturally quicker to produce, Steve also says code generation is a good methodology to pursue when prototyping applications.

Steve also discussed testing, offering that generated code has (or presumably can have) 100% test coverage. Similarly, full documentation is easier achieved by way of having written a smaller code base.

Overall, cleaner code that’s easier to work with and easier to understand is undeniably a more attractive prospect to work with. Developers are more enthusiastic about working with that kind of code and so that kind of code is of a higher quality – in the first place, as well as because it has been consistently deployed, tested and documented thereafter.

So how is this achieved?

Models

Steve recognised six principal methods for generating code, each technique seemingly more ‘advanced’ than the previous and, as I said before, each progressively more suited to larger-scale projects.

He saw these as:

  1. Code munging
  2. Inline code expansion
  3. Mixed code generation
  4. Partial class generation
  5. Tier generation
  6. Full domain language


I’ll look at each briefly.

Code munging


Code munging essentially takes your code, parses and introspects it, and creates something new that’s not code, though still inherently useful.

Steve’s examples were Javadoc and PHPDoc; it’s the most basic form of generation you can introduce to your workflow.
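As a toy example of munging (my own, far cruder than PHPDoc), this scans source for docblocks and emits a plain list of their summary lines – producing documentation rather than code:

```php
<?php
// Toy munging pass: parse source, pull each docblock's first summary
// line, and return them as a plain index. The output is not code.
function mungeDocSummaries(string $source): array {
    // Match '/**', then the first '* ...' line of each docblock.
    preg_match_all('#/\*\*\s*\n\s*\*\s*(.+)#', $source, $matches);
    return $matches[1];
}
```

A real documentation generator does vastly more, of course, but the shape – read code in, write something that isn’t code out – is the same.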

Inline code expansion


This is the ‘expansion’ mentioned above, where source code is annotated with keywords or variables, whereupon additional code is generated to replace those markers, extrapolating upon the original.

Templates are the most rudimentary example. This technique gets repetitive jobs done well, quickly and correctly.
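A crude sketch of the idea, with an invented marker syntax: a pass that swaps annotated markers in a source template for generated code.

```php
<?php
// Toy inline-expansion pass: replace annotated markers in source with
// generated code. The /*@marker@*/ syntax is invented for this sketch.
function expandMarkers(string $source, array $expansions): string {
    foreach ($expansions as $marker => $code) {
        $source = str_replace('/*@' . $marker . '@*/', $code, $source);
    }
    return $source;
}
```

In practice the replacement code would itself be built from templates and definitions, but the mechanism – markers in, expanded source out – is this simple.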

Mixed code generation


This is the first ‘real’ form of generation; it expands upon inline code expansion by allowing the output code to be used as input once created. Instead of simply replacing special keywords, mixed code generation may use start and end markers, expanding code as before but allowing the result to be worked upon once completed.

This may include code snippets, or smaller templates injected within classes.

Partial class generation


I guess here’s where things start to get serious. Partial class generation begins to look at modeling languages like UML, creating abstract system models and generating base classes from a definition file.

The base classes are designed to do the majority of the low level work of the completed class, leaving the derived class free to override specific behaviors on a case-by-case basis.

Tier generation


This takes partial class generation a level further: a tier generator builds and maintains an entire tier within an application, not only the base classes. Again it works from some kind of abstract definition file, but this time the generator outputs files that constitute all of the functionality for an entire tier of the application, rather than classes to be extended.

Full domain language

Steve didn’t really talk about this much; he said it’s very tough – and even if you think this is what you want to do, it’s probably not (though a hand from the crowd wanted to reassure us that it might not be quite that intimidating).

A domain language is a complete specification dedicated solely to a particular problem – with specific types, syntax and operations that map directly to the concepts of a particular domain.

I used the Code Generation Network’s Introduction for reference here, which looks like a good resource.

Also, all the diagrams belong to Steve. He’s put his presentation slides on Slideshare and uploaded PDF and Keynote versions on his website.

The presentation was also recorded as a podcast and is now on the Skills Matter site.

In retrospect

After all these, Steve took us through an example he’d worked on where code generation really proved its worth (a huge project for Yahoo!) and offered tips and best practices to use once you’ve convinced your team to take on code generation (see the slides linked to above).

Though in my opinion here he seemed almost to contradict some of the things he’d talked about before. He pointed to some tools to use within your workflow, but spoke extensively about writing your own generator – stressing that it can be written in any language (though should be written in a different language to that of the code being produced) and that it’s easier than perhaps you think, though still with a fair few caveats.

But what happened to cutting down the work load? As the talk summary says, developers are fundamentally very lazy – we want to do less and less. Surely this requires more effort?

One of his recommendations is that you document your generator once you’ve written it. But how would you create that documentation, with another generator? Is that meant to be a specially written generator also? And that needing documentation too?

I remember an episode of Sesame Street from my childhood, where Big Bird was painting a bench. Once he’d finished he made a ‘Wet Paint’ sign with the paint remaining, only to realise that the sign also was wet, so needed a ‘Wet Paint’ sign of its own. Which of course was also wet and needed its own sign. Cut back a scene later and the whole street is full of signs. It was upsetting, to say the least.

I’m being pedantic, I know.

But I guess that’s where I recognise a line for me, or at least the kind of development I do. Very large-scale projects do often need this kind of approach, Steve is proof of that. For me only the first few techniques are appropriate. Though those, undeniably, are incredibly beneficial when you have the opportunity to put them into play.

On a similar note, this month’s LFPUG meeting had a presentation on UML for AS3 by Matthew Press, the recorded video for which is now online – have a look!
