In my last post I put together a simple template server with Express running on Node for a current project. I usually only use Node for a few build tools and prefer Python on the back-end, so, just for the sake of it, here’s a Python alternative.

Using Flask

Flask is a light web framework and very easy to get going with. It’s useful for putting together pages and URL routes with minimal set-up.

Using virtualenv, install with pip:

(env) $ pip install flask

Obligatory “Hello World” app, let’s call it myapp.py:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()

Run with the following and visit localhost:5000 in your browser:

(env) $ python myapp.py
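
While developing, pass the debug flag to app.run and the server will reload automatically when the code changes (and show tracebacks in the browser):

app.run(debug=True)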

Templates use Jinja2, which would have been installed as a dependency when you grabbed Flask.

Syntactically it’s very similar to Django’s templating:

<title>{% block title %}User list{% endblock %}</title>
<ul>
{% for user in users %}
    <li><a href="{{ user.url }}">{{ user.username }}</a></li>
{% endfor %}
</ul>

The block tag is used for template inheritance, which is far more useful than including partials everywhere.

For example, a second template can extend the above to reuse the loop logic but update the page title:

{% extends "base.html" %}
{% block title %}Here’s a new title!{% endblock %}

N.B. Since Express matured to 3.x, its view system has been moving that way too (see the migration docs).

Flask provides a render_template method that looks for files in a directory named templates, which will sit alongside our application file:

from flask import render_template

@app.route('/')
def simple():
    return render_template('simple.html', message="Hello World")
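
For completeness, a minimal simple.html only needs to print that value:

<p>{{ message }}</p>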

Working with data

Flask’s render_template method sends a view context (dictionary) to the templates, for example the message above. Jinja isn’t logic-less like Mustache, so we can use this context for conditionals, loops and filters.
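
As a quick sketch, assuming a users list in the context, a conditional plus the built-in length filter looks like this:

{% if users %}
    <p>Found {{ users|length }} users.</p>
{% else %}
    <p>No users yet.</p>
{% endif %}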

Refactoring our Javascript code, we can use regular Python to load some JSON:

import json

def get_json(path):
    with open(path) as file:
        data = json.load(file)
    return data

@app.route('/')
def simple():
    data = get_json('data/simple.json')
    return render_template('simple.html', **data)
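
Assuming the template only expects message, a data/simple.json to match would look something like:

{
    "message": "Hello World"
}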

JSON is far more suited to the other implementation, being entirely Javascript-based.

I can’t say I’ve ever used this approach; for front-end-only builds I’d just work with a few variables to switch in templates, and likely have a shared dictionary for ‘global’ data, e.g. placeholder user info.

On larger projects with back-end work I’d go for Django and a full relational database, though here we’re using static data files.

As for static media files, CSS et al, Flask serves from a static folder which sits alongside our app file and templates directory — this doesn’t have to be specified in the application logic, unlike Express.
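
Within templates, Flask’s url_for helper builds paths to anything in that folder, here assuming a hypothetical style.css:

<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">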

Of course, Flask can be used for fully featured applications too.

The Flask Snippets archive is also a great resource: user-provided pieces of code to bootstrap your application (auth, forms, security, sessions etc).

Source files: simple-flask on GitHub.

I’m currently working on a front-end build for a site whilst a second (external) team simultaneously develop the back-end. Those guys will later integrate the templates once everything is ready. It’s a fairly common scenario.

We’re in a good position with this project specifically, because although the server-side is a work in progress the developers have provided a full API specification that details all the data that the application will provide, in its entirety. Our work isn’t held up by any architectural decisions yet to be made – the spec is finalised, only the platform to serve data and render our views doesn’t exist yet.

Working from spec, we’re able to create accurate (dummy) data objects and render pages with a templating system to build a limited, realistic, navigable version of the site.

Keeping our data objects in line with their schemas, plus agreeing on a templating system similar to their implementation, should minimise the integration period.

For templating we’re using Mustache, which has a number of server-side and client-side implementation options.

The project is also built with Bootstrap, which makes use of LESS CSS.

Working with LESS, Bootstrap or otherwise, I use the command-line compiler with a custom script that monitors file changes and automatically outputs the master CSS file as I go. This runs using Jake, a Javascript build tool for Node.js, which I also use to compile and minify Javascript files, plus a few other tasks.
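
For reference, a one-off compile with the command-line tool writes to stdout, so something like the following does the job (hypothetical paths):

lessc less/bootstrap.less > css/bootstrap.css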

We decided to build on the Node stack with a Javascript implementation of Mustache and a light web framework, Express, to serve our pages and provide URL routing.

Using Express

With Node installed, get Express with npm:

mkdir myapp
cd myapp
npm install express

Create a basic Hello World app in a text file, myapp.js:

var express = require('express'),
    app = express();

app.get('/', function(req, res) {
    res.send('Hello World');
});

app.listen(3000);
console.log('Listening on port 3000');

Run with the following and visit localhost:3000 in your browser:

node myapp.js

To serve a template with data you’ll need to install a templating engine, create the view and update the app response. The default is Jade:

npm install jade

Create the view file, simple.jade, in the default views directory; it simply prints the data:

=message
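
Jade nests elements by indentation rather than closing tags, so wrapping that value in markup is a small step; a minimal sketch:

html
  body
    p= message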

Update and restart the app file:

app.get('/', function(req, res) {
    res.render('simple.jade', {'message': 'Hello World'});
});

Node Toolbox

Restarting the app gets old fast; install nodemon globally to monitor the script for changes and the reload will be automatic:

npm install nodemon -g
nodemon myapp.js

Express will look at the view file’s extension to load the templating engine. To switch to another, e.g. Mustache, install and configure the app to use a different module. Not all engines provide the same method for Express to render templates, so Consolidate.js maps a bunch of popular options automatically.

npm install consolidate mustache

var engines = require('consolidate');
app.engine('mustache', engines.mustache);

app.get('/', function(req, res) {
    res.render('simple.mustache', {'message': 'Hello World'});
});

Update the view with Mustache syntax:

{{ message }}

You can also use the engine method to map the .html extension, so you don’t have to use .mustache files if you prefer. The view engine setting saves you from writing the extension in the render method too:

app.engine('html', engines.mustache);
app.set('view engine', 'html');

app.get('/', function(req, res) {
    res.render('simple', {'message': 'Hello World'});
});

It’s not great having to write JSON data inline, so use fs to read your JSON from separate files with something like the following:

var fs = require('fs');

var getJSON = function(path) {
    var fileContents = fs.readFileSync(path, 'utf8');
    return JSON.parse(fileContents);
};

app.get('/', function(req, res) {
    var data = getJSON('views/simple.json');
    res.render('simple', data);
});

We also wanted support for partials, so we moved to Hogan.js, a Mustache compiler from Twitter.
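
For reference, Mustache pulls partials in with the greater-than tag; assuming a hypothetical header partial, a view might include it like so:

{{> header}}
<p>{{ message }}</p>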

Since our environment runs entirely on Node, we’re also able to make use of some freebie cloud hosting, the likes of Heroku and Nodejitsu, for staging at least.

Dependencies for Node projects can be described in a package manifest, specifying modules and version requirements. Our package.json file, in the directory root, looks like this:

{
    "name": "simple-express",
    "description": "Simple Express template server",
    "version": "0.0.1",
    "dependencies": {
        "express": "3.x",
        "nodemon": "0.6.19",
        "consolidate": "0.4.0",
        "mustache": "0.5.2"
    }
}

With a manifest you can run a single command to grab everything you need:

npm install

Nodejitsu uses package files to deploy applications; they’ve got a handy interactive cheat sheet exploring the various properties.

Source files: simple-express on GitHub.

In addition to my last post on Python development environments, here’s a note on using the PyDev plug-in for Eclipse with our virtualenv set-up.

Assuming everything went swimmingly, you’ll have virtualenv running and Django installed with a project folder (let’s call it ‘myproject’), having done something like this:

$ mkdir dev
$ cd dev
$ virtualenv --no-site-packages env
$ source env/bin/activate
(env) $ pip install django
(env) $ django-admin.py startproject myproject

First install PyDev in Eclipse under “Install New Software” with the URL: http://pydev.org/updates

Once that’s done, configure the Python interpreter. In your Preferences, under PyDev and Interpreter – Python, hit the Auto Config option to find all your libraries and populate the Python Path.

Otherwise, manually add Python and locate your interpreter, mine is under /usr/bin/python2.7.

For now, this only includes your globally-installed system-wide libraries, not those you’ve installed within the virtualenv environment.

To create a new “PyDev Django project” however, you’ll need to have Django installed globally (or otherwise configured in the above Python Path settings) so PyDev can see it. Right now ours isn’t, so we have an extra step.

Instead, we’ll create a “New PyDev project” (non-Django), add our virtualenv location containing the libraries in our local site-packages directory, then convert that to a Django Project once PyDev is satisfied we have the goods.

This method means we don’t have to install Django globally just for the sake of using this IDE.

To do this, from the File menu and New PyDev Project, I un-tick ‘Create src folder and add it to the PYTHONPATH’, instead selecting ‘Don’t configure PYTHONPATH (to be done manually later on)’.

Right-click the project folder, go to Properties and PyDev – PYTHONPATH and add a Source Folder pointing to your virtualenv site-packages. In this instance:

dev/env/lib/python2.7/site-packages

Having found Django, PyDev now lets us convert this to a Django project. Right-click again and under the PyDev menu select Set as Django Project.

Now everything can be performed within Eclipse, rather than by the command line.

For example, to run the server we’ll add a Custom Command. Under the Django menu select Custom Command and add the following:

runserver --noreload

You may be asked to select which manage.py to run from, choose the one within your project, i.e. myproject/manage.py.

Hit Run and test http://localhost:8000/ within your browser.

Note the --noreload option allows Eclipse to maintain control over the process, rather than Django spawning a separate reloader. That reloader usually restarts the server automatically when changes are made to your code; without it, restart from Eclipse at your convenience.

I mentioned in my previous post that I borked my system meddling with Python. Having reset my workspace, I’ve now set up a solid system that makes handling projects and multiple development environments super simple.

The new set up easily handles multiple Python projects, without compatibility or version conflicts. The installation is equally straightforward.

Before switching to a desktop Linux, I used to sing the praises of VMware and developing with virtual machines when dealing with unique environments. By “unique”, I rather mean any odd project out of the ordinary LAMP set-up I usually work with, or something that requires a specific version of a piece of software.

Since then however, I’ve found no need. So long as you think before you leap.

Virtual boxes (as closed, single-piece software) are good and all, you can be as venturous as you wish without risk of damaging your native system. Plus, if you screw one of these you can restore a saved state in a few clicks. However, the VM safety net allows you to proceed without caution, perhaps recklessly, at the expense of fully comprehending the commands you’re executing and tasks you’re running.

In that sense, they’re great for beginners uncertain of how (or if) they should install software, e.g. Apache, PHP, Python etc — appliances and virtual stacks are helpful.

Otherwise they can convolute your workspace, and more often than not won’t be configured exactly how you want or need them. Running software natively is simple and as intended; it also allows you to configure your entire environment without any assumptions made by distributors.

Virtualenv

Virtualenv is quite the revelation. It facilitates multiple isolated Python environments on a single system, dynamically handling your Python Path so packages are installed within an enclosed local directory, rather than in amongst your top-level system packages.

This means you can create project-by-project virtual environments, avoiding compatibility and version conflicts. When an environment is created (and activated) libraries are thereafter installed within discrete directories that aren’t shared with other virtualenv environments.

This means nothing is installed “system-wide”, so libraries don’t accrue over time and there’s no balancing of versions. It also means you can work with different versions of Python simultaneously.
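
The --python option picks the interpreter a new environment is built on, assuming that version is installed on your system; for example:

$ virtualenv --python=/usr/bin/python2.6 env26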

Python packages should be installed with a package manager, the latest of which is pip.

Prior to this, easy_install was the manager du jour (part of Setuptools, both now out-dated), but we’ll only be using that to install pip:

$ sudo easy_install pip

Pip is a direct replacement for easy_install, improving on a few things (a comparison can be found on the installer site). Packages that are available with easy_install should be pip-installable and the installation method is the same — the following installs virtualenv:

$ sudo pip install virtualenv

With virtualenv installed you can create an environment within your workspace; all it needs is the environment directory name, here ‘env’:

$ virtualenv env

You have a few options with this command. In the following example, the --no-site-packages flag means that the new environment will not inherit any system-wide global site packages. The --distribute flag will install Distribute rather than Setuptools:

$ virtualenv --no-site-packages --distribute env

Distribute is to Setuptools as pip is to easy_install. Distribute and pip are the new hotness; Setuptools and easy_install are old and busted, for now.

Anyway, activate your environment:

$ source env/bin/activate

You’ll see from your shell prompt that the environment is activated, with the name prepended.

Then we’ll install something with pip. Yolk is a tool for querying the packages currently installed on your system, so we’ll install that and grab a list:

(env) $ pip install yolk
(env) $ yolk -l

The output then lists everything the environment can see (this will depend on your global site packages and how you created the environment, as above).

Note that you don’t need to sudo whilst in the activated environment.

As a test, we’ll deactivate the environment and run the same command, which gets the following error (unless you have yolk installed globally):

(env) $ deactivate
$ yolk -l
yolk: command not found

If installed within an environment, a package is only available whilst it is activated. This is the means to install whatever you wish, without worrying about cross-project conflicts.

More pip

Another good feature of pip is the ability to generate a list of requirements for your working set of packages. The command is called freeze and generates a text file as follows:

(env) $ pip freeze > requirements.txt

This creates a list of all installed packages, with a specific version for each library, in a custom syntax that looks something like this:

distribute==0.6.19
wsgiref==0.1.2
yolk==0.4.1

This list can then be distributed (e.g. to a team of developers) and used to install those packages on other systems, like so:

(env) $ pip install -r requirements.txt

Note, this isn’t coupled with virtualenv, which actually has its own method of bootstrapping (see “Creating Your Own Bootstrap Scripts”).

Since deciding to work exclusively in a Linux environment at the beginning of the year, I’ve been more than pleasantly surprised not to have needed to reset my system, despite the frequent changes of set-up and the numerous installations and removals of software I’ve had to perform to work on various projects.

The inevitable day, however, came a couple of weeks ago when I royally screwed my system messing around with Python (solution in another blog post. Update: here it is).

Once Ubuntu was reinstalled, I encountered a problem attempting to recreate my workspace having opted to encrypt my home directory during user setup.

Running the normal LAMP-server setup, Apache is unable to access files within the encrypted home.

I was trying to duplicate my previous configuration, using individual VirtualHosts pointing at directories within my user home, for example:

/home/marc/sites/dev/

I’m pretty sure my home directory was encrypted last time too, but this problem was new for me — perhaps something from an update in between?

The permissions problem occurs as only my user, marc, has access to the home and Apache’s user, www-data, does not. This results in an HTTP 403 Forbidden when attempting to serve files.

Having a look around, I found a convoluted method using symlinks and Apache’s UserDir, then a far simpler solution on AskUbuntu, as follows.

It’s unsafe to change your home ownership (to www-data, for example) but Apache needs execute permissions there. So selectively chmod the directory:

sudo chmod 751 /home

This grants execute (traverse) access to others, who can then only read files with correct knowledge of names and locations. It also removes your user’s read access to /home, so you’ll have to sudo for that.

Another precaution, benefiting those on development-only machines, is to restrict IP listening within Apache’s ports.conf so only local connections get any attention:

Listen 127.0.0.1:80

Alternatives

As for alternatives, you could encrypt your whole drive rather than just the home directory. You shouldn’t see any problems then.

Or you could just ignore encryption altogether.

You could, of course, just work out of the traditional /var/www/ location, which is the Apache default. Simply create a directory there and chown it to your user so you don’t always have to sudo changes.

sudo mkdir /var/www/dev/
sudo chown marc /var/www/dev/

If your directories are elsewhere on your system, for example in SVN repositories such as /srv/svn/ or /usr/local/svn/, then you’ll need to chown those to www-data so they’re readable, similar to our method of reading from within /home above.

The Ubuntu docs on Subversion offer the best solution for handling user permissions for SVN over HTTP.

Create a new user group, subversion, add the users marc and www-data to it and chown the repo to www-data:subversion, giving read/write access to the group (granting privileges to marc). Finally chmod with -s so that new files inherit that group ID, like so:

cd /srv/svn/
sudo chown -R www-data:subversion dev/
sudo chmod -R g+rws dev/

The -s flag means that all files created inside that directory will inherit the group of the directory; otherwise files take on the primary group of the user. New subdirectories will also inherit this.

The -R option applies the changes recursively (i.e. existing subdirectories).

I went out for a ride and I never went back.