Tuesday 26 June 2012

Optimising, not hiding

You may remember my posting a while ago about the importance of build scripts in an Interface Developer's workflow (see here).

Lately I've been knocking out some Ant scripts and realised there's a bit of my process that's worth sharing.

Goals of optimisation

The goal of most of the scripts I write at the moment (at least the parts relating to CSS & JavaScript) is to compress the content as much as possible into the smallest number of files. The end result is that fewer files get downloaded and those that do make the most efficient use of their bytes.

This also means the build forms a clear distinction between development code (designed for reading & sharing by humans) and production code (designed for downloading by devices & processing by their browsers).

What we're forgetting

All of this is very good and healthy but it misses one thing we used to take for granted. As explained above, production code is for devices & their browsers but the web is not just a delivery device for programs, it's also designed for reading & sharing by humans, just like our development code.

The source code of http://adactio.com/articles

This is important. One of the beautiful things about the web is that you can dig into sites or apps people have made and see how they tick. It's no coincidence that web development has a large number of self-taught people working in it, and I feel it would be a real shame to lose this just for lack of process.

So, a conundrum. How do we satisfy both sets of requirements?

Erm, quite simply actually

Turns out it's not that difficult. There are a few ways I could think of off the top of my head but they all involve one step: providing a link to the development code in the production code. Here's how I do it:

  1. Concatenate my files.
  2. Create a directory (I call mine full) & copy them into it.
  3. Optimise the original scripts.
  4. Append a standard comment to the bottom of all files explaining how to access the full versions.

Here's a sample comment from a recent project:

/* A full version of this file is stored in a directory called 'full' at the js/ or css/ level. Add 'full/' into the file path, after 'js/' or 'css/' to access. For example, the full version of /assets/js/core-scripts.js is /assets/js/full/core-scripts.js */

The last step is the most important. Providing a URL, or a way to build one, means your code is available in both forms, and I don't personally think the extra characters make enough of a difference to skip this step.

If anyone else is doing something similar but differently I'd be interested to know as this is a first-pass at best.

Wednesday 11 April 2012

Colourful times: Introducing the CSS3 gradient image generator


I've been working on a project with quite a few CSS3 gradients recently. As usual I want the site to degrade nicely back down my browser matrix, with contrast ratios holding up and the general look and feel being as close as possible (which doesn't have to mean identical).

Establishing a workflow

I'm using only linear gradients so the requirement is for an approach that can draw the gradient either vertically or horizontally.

I settled on a workflow shown in this article whereby a background image that repeats along one of the image's axes is served to browsers that don't support gradients.

So a button's default background would be defined in CSS with a gradient, i.e.

linear-gradient(#490091, #8000FF 20%, #8000FF 50%, #B469FF 50%, #B469FF 80%, #7140A3)

The CSS above gives the button this background:

Button

The background would also be defined as an image that repeats along the x-axis, i.e.

background:#490091 url(/img/purple_repeater_1.png) 0 100% repeat-x;

The CSS above gives the button this background:

Button

Cross-browser differences

The most obvious difference is that a gradient is mathematically generated and so can scale to whatever dimension needed, whereas the raster image used in the second example is fixed.

Any changes to the element size (e.g. through text resizing) will mean the image no longer fills the space, but adding a background colour matching one of the gradient stops can help. It's not quite as pretty but it keeps the contrast ratio and preserves some of the look and feel.
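One way to wire the two approaches together in a single rule is to lean on the cascade: browsers that don't understand linear-gradient ignore the second declaration and keep the image, and everyone gets the matching background colour. A sketch using the values from the examples above (the .button selector is hypothetical, and in practice you'd also include the vendor-prefixed variants):

```css
.button {
  /* fallback: solid colour (matches the first gradient stop) plus image */
  background: #490091 url(/img/purple_repeater_1.png) 0 100% repeat-x;
  /* overrides the image in browsers that support gradients */
  background: linear-gradient(#490091, #8000FF 20%, #8000FF 50%,
                              #B469FF 50%, #B469FF 80%, #7140A3);
}
```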

Summary

That leaves quite a lot of tasks to perform just to get a result:

  1. Make the linear gradient in CSS.
  2. Produce all the vendor variants (made a lot easier by Lea Verou's excellent cssgradientsplease tool).
  3. Take a screen-grab of it and cut a background image for older browsers.

The biggest task here seemed to be making the background image (screengrab the web page, open Photoshop, crop & save), so I made an online tool to provide a bit of automation: http://tombye.github.com/CSS3-Gradient-image-generator/.

Using the tool

The tool accepts the same syntax as Lea Verou's parser, which is the closest we'll get to a standard at the moment, so you don't have to change your gradient syntax when generating the image and the CSS variants.

It will always generate an image in the page HTML for you to grab but can also download it if needed. (Note: I can't figure out how to change the download's file name, let me know if you know how.)

There's a set of unit tests in the repo for checking the CSS parsing is correct. They cover everything I can think of that needs testing, but let me know if anything isn't working as expected so I can add tests for any missed requirements.

Wednesday 21 December 2011

jQuery Group Animate

.animate

jQuery has a great method for dealing with animations called animate. While it works for most things, I've been finding its requirement for an element selection to work on a bit restricting.

Animations these days are not always just about elements animating individually. Animations that use the canvas element for example tend to be more about changing values through time and using them to draw the scene onto the canvas each frame.

gibsy.com's parallax-fuelled listing page, from webdesignerwall.com's article on parallax scrolling effects

The most obvious use of this approach is the parallax effect seen on many sites recently. (While this is not technically animating through time it is the same principle of breaking a process down into 'frames'.)

Stephband's events.frame is a nice example of how to approach this (and served as my starting point for building a solution).

My requirements

So, a more thorough view of my requirements:

  1. Animations don't have to just run on elements
  2. Animations are split down into 'frames', which is where the code will run
  3. Code inside these frames has access to information about the animation and its progress
  4. Code that runs in the frames can sit in any place in your codebase

We can solve requirements two & three by using the .animate method's step callback, which runs whenever the timer inside an animation fires & gives us the information we need.

The other two requirements need some more thought and a bit of hacking ;-)

Solving the rest

Using .animate without a selection

The first point made above requires a bit of a hack. Turns out you can create a dummy element to use as the selection.

var $fakeElem = $('<div style="width:0px;"></div>');
...
$fakeElem.animate({'width' : '100px'}, opts);

The element isn't added to the DOM so isn't even rendered by the browser; we're only interested in what goes on inside the animation's step callback.

Freeing up the usage

The fourth requirement, allowing the code that runs on each frame to sit anywhere in your codebase, is solved by triggering a custom event inside the step callback.

opts.step = function (now, fx) {
       $(document).trigger('frame', { 'fx' : fx });
};

This fires a 'beacon' every time the callback runs which any code can attach itself to. Any code bound to it will have full access to all the information it needs about that step of the animation.
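The same beacon idea can be sketched without jQuery as a minimal publish/subscribe (all names here are made up for illustration rather than taken from the demo code):

```javascript
// A framework-free sketch of the 'beacon': the step callback publishes
// each frame and any number of independent handlers subscribe to it.
// (Illustrative names; the demo uses jQuery's trigger & bind instead.)
var frameHandlers = [];

function onFrame(handler) {
  frameHandlers.push(handler);
}

// stand-in for .animate's step callback
function step(now, fx) {
  frameHandlers.forEach(function (handler) {
    handler({ now: now, fx: fx });
  });
}

// elsewhere in the codebase, a handler attaches itself to the beacon
var positions = [];
onFrame(function (data) {
  positions.push(data.now);
});

// simulate three steps of an animation
step(0, {});
step(50, {});
step(100, {});
// positions is now [0, 50, 100]
```

The code firing the steps never needs to know what is listening, which is what frees the per-frame code to live anywhere.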

The code

I've put some code up on GitHub to demo the approach. Bear in mind this isn't a jQuery plugin, just a demonstration of an approach I've found useful.


To see the code in action, have a look at its GitHub page.

Why use .animate?

To conclude I want to answer one of the most obvious questions about the above approach: 'why use .animate at all?'. Most of the code doesn't really need to be written in jQuery and we're not using an element selection so why bother?

  • .animate is really nice to use.
  • It allows users to be pretty sure when their animations will end due to it checking the progress against timestamps at each step.
  • It provides your code with all the information you need when working on a per-frame basis.
  • You can plug different easing routines into it.
  • As long as you use it according to the docs the jQuery team will improve it, you don't have to. (For example, the requestAnimationFrame method used by modern browsers to improve animation was added soon after it became available.)

Ultimately though, you don't need to. All the main bits, from the use of custom events to jQuery itself, can be swapped out for others; I just wanted something with all the benefits above that I could use now.

Monday 10 October 2011

Learning JavaScript: From jQuery up

Being someone who's gone from puzzling over DOM scripting to fairly advanced JavaScript I've a decent amount to write about learning it.

Starting with jQuery

jQuery is amazing. Its syntax is so clear and well thought out that you can write JavaScript by coding something that's pretty close to how you would explain what you want to do. For example:

$('img.thumbnail').mouseover(function () { $(this).css('border', 'solid 2px red'); });

Writing code like this is easily readable and, knowing the quality of jQuery's internals, pretty efficient. The issue I have with it is less to do with using jQuery and more to do with its style.

Early in my learning I wrote a lot of code like you see above but the more I had to write, the less I thought that style was the best approach. When the problem you are trying to solve passes a certain complexity, using the above style can create duplication of resources and repetition of actions.
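To make that concrete, here's a sketch in plain JavaScript where a counter stands in for jQuery's selection work (the function names are invented for illustration):

```javascript
// Illustrative only: query() stands in for a jQuery selection and counts
// how many times the (expensive) DOM lookup happens.
var lookups = 0;
function query(selector) {
  lookups += 1;
  return { selector: selector };
}

// naive style: select inside the handler, so every click re-queries
function naiveClick() {
  var $tabs = query('a.accordionTab');
  return $tabs;
}
naiveClick();
naiveClick();
naiveClick();
var naiveLookups = lookups; // 3 lookups for 3 clicks

// cached style: select once up front and reuse the stored result
lookups = 0;
var $cachedTabs = query('a.accordionTab');
function cachedClick() {
  return $cachedTabs;
}
cachedClick();
cachedClick();
cachedClick();
var cachedLookups = lookups; // still only 1 lookup
```

Three clicks is nothing, but scale this to mouseover or scroll handlers and the difference between one lookup and one per event starts to matter.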

A different approach

The best solution is to start looking at what JavaScript as a language can do to solve these problems.

Let's look at some real-world code as a way to find some solutions. Below is a simple accordion, written using jQuery for most things, with a bit of structure for organisation.

// hide all accordion content divs
$('div.accordionContent').hide();

// when you click a tab
$('a.accordionTab').click(function () {
       // 1. Selection of active element every time you click
       var $currentActive = $('a.accordionTab').filter('.active');
 
       // if this tab is active, close its content div
       if ($currentActive[0] === this) {
              close($(this));
       } else {
            if ($currentActive.length > 0) {
                  close($currentActive);
            }
            open($(this));
       }
       return false;
});

// mark the tab as inactive and hide its content div
close = function ($el) {
       // 2. DOM traversal & element selection every time this function is run
       $el.removeClass('active')
              .parent()
              .find('.accordionContent')
              .slideUp();
};

open = function ($el) {
       // 2. DOM traversal & element selection every time this function is run
       $el.addClass('active')
              .parent()
              .find('.accordionContent')
              .slideDown();
};

The main problems, as numbered above, are:

  1. Finding the currently active tab via element selection every time the click event runs.
  2. Every time open or close runs, it causes DOM traversal and element selection.

The code also runs in the same scope as any other script in the document (which can lead to variables & functions being overwritten or used by accident) when it should really be contained in a single place.

Solutions

  1. Put the whole thing in an object and store that in one variable*.
  2. Do your selections once and store the result in variables. That includes selection by DOM traversal.

* This variable should really be stored in a namespace when we are at the production stage.
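A minimal sketch of what those two solutions look like together (MYSITE is a hypothetical namespace, not something from the demo code):

```javascript
// Solution 1: everything lives in one object, hung off a single
// namespace variable so nothing leaks into the global scope by accident.
var MYSITE = MYSITE || {};

MYSITE.accordion = {
  // Solution 2: selections are done once and cached here, e.g.
  // this.$tabs = $('a.accordionTab'); inside init
  $tabs: null,

  init: function () {
    // bind events & store selections here
  }
};
```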

Pray explain

OK, so in an effort to make this a bit clearer I've stuck the code on GitHub. Download it now (clone it if you know how to use git, or click the Downloads button and select the .zip).

**Update**: Having figured out GitHub Pages, the code is now more easily accessible here

It doesn't need to be accessed via a server, just open the .html files in your browser and we'll work our way through, starting with base_pattern.html.

base_pattern.html

The JavaScript (js/pattern.js) here is a base pattern with this structure:

All code is contained in one object stored in the pattern variable. That object has a single method called init that you call when you have an element you want to add behaviour to.

Inside pattern is a constructor called Constr.

Constr = function (elm) {
    ...

Every time you run pattern's init method it uses Constr to create an object for each matched element, to hold its behaviours.

init : function (context) {
       // 4. Searches are always performed within a context
       if (typeof context === 'undefined') {
           context = document.body;
       }

       // 5. For each matching element, create an object using the Constr constructor 
       $('.accordion', context).each(function () {
           new Constr(this);
       });
}

Notice how searches are always performed inside a context element, even if this is document.body. This means that you can run init not just on a whole document but also on a sub-section of one (if you replace a sub-section via AJAX, for example).

Apart from that the structure we started with is mainly the same. We're still attaching an event to each accordion tab and the logic inside that is using open and close methods to control the accordion content areas.

The main difference is that, thinking a bit more programmatically, we are setting all our variables at the top of Constr, including those that hold element selections.

var $elm = $(elm),
    $tabs = $elm.find('.accordionTab'),
    tabIdx = $tabs.length,
    $contentAreas = $elm.find('.accordionContent'),
    activeIdx = $tabs.index($tabs.filter('.'+ activeClass)),
    that = this,
    onClick;

By wrapping everything in pattern we also create a closed scope that means we can define what we like safely.

If you open your browser's Developer tools (in Chrome, Safari or IE9, Firebug in Firefox or Dragonfly in Opera) and type pattern you'll be able to see and inspect the pattern object.

One last efficiency

The pattern is quite nice now. The structure is a nice mapping of the logic that makes the accordion work, variables are all stored and re-used and changes to DOM elements in open and close are just to properties of their jQuery wrappers; no DOM traversal or selection is needed.

It's a bit personal but the last thing that's bugging me now is that at the top of onClick the idx variable is set each time by jQuery looping through the $tabs object which feels a bit inefficient.

onClick = function () {
            var idx = $tabs.index(this);

We are creating an onClick function for all tabs so it would make more sense to give each of these functions access to the index of that tab in $tabs. It is possible to use closure to do this so let's have a go.

base_pattern_with_closure.html

So in the JavaScript for this page (js/pattern_with_closure.js) let's have a look at the new onClick.

// This function uses closure to create a function with access to the idx at the point it is called
onClick = function (idx) {
       // capture each index using a closure
       return function (eventObj) {
              if(activeIdx !== null) {
                     that.close();
              }
              if (idx === activeIdx) {
                     activeIdx = null;
              } else {
                     activeIdx = idx;
                     that.open();
              }
       };
};

So now, rather than onClick being a variable containing a function to run on the click event, it acts like a factory, returning a function to do this.

This makes more sense if we look at its use.

// for each tab, bind a function to its click event which has access to the tab's index
while (tabIdx--) {
       $tabs.eq(tabIdx).bind('click', onClick(tabIdx));
}

The onClick function is now run at the point we bind the event and the function it returns is what fires on that event, not onClick.

When the function it returns is created (at the event binding stage), onClick sends it a single parameter called idx, which is the index of that tab in $tabs. idx only exists at the point onClick runs but, thanks to closure, the internal function will always have access to it.

Because we use closure we are effectively pushing the effort onto scope resolution rather than looping through an array.
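Stripped of the accordion details, the factory-closure pattern looks something like this in plain JavaScript (the tab indices and log are illustrative):

```javascript
// makeHandler runs once per tab, at binding time, and bakes idx into
// the function it returns via closure (illustrative names throughout).
var log = [];

function makeHandler(idx) {
  return function () {
    // idx is long gone from makeHandler's call, but still reachable here
    log.push('tab ' + idx + ' clicked');
  };
}

// bind a handler per tab, like the while (tabIdx--) loop in the demo
var handlers = [];
for (var i = 0; i < 3; i += 1) {
  handlers.push(makeHandler(i));
}

// simulate clicking the third tab, then the first
handlers[2]();
handlers[0]();
// log is now ['tab 2 clicked', 'tab 0 clicked']
```

Each returned function remembers its own idx, so no lookup is needed when the event actually fires.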

More info

I'm not exaggerating when I say it took me almost a year to 'get' closure after I first came across it. By contrast, I once explained it to a colleague (with a lot of experience of heavy programming) and they got it straight away. Depending on your speed of understanding, here are a few links to help:

What else?

In the rest of the examples I've tried to explore the different options you have when approaching the problem in this way (see index.html). I'd be very interested in any suggested changes to these examples or other options, so if you can think of any, let me know (or just fork the repository :).

Tuesday 4 October 2011

Carl Andre's Multi-channel thinking

Terrible title eh? That aside there's something that's been floating around my head for a bit to do with our current concern of supporting so many platforms for our digital content. I'm starting at a bit of a tangent so bear with me…

I remember going to a retrospective of the artist Carl Andre, best known for his minimalist sculptures, usually based around patterns of geometric shapes. Andre also produces poems & drawings, some of which were also displayed at the show. Here's a few examples:

Carl Andre 3x11 Hollow Rectangle (2008) on flickr

Untitled drawing by Carl Andre

Carl Andre by amycurtiscouture, on Flickr

Looking at all of these works together, you could see that, as much as physical work, Andre is producing a language describing how he sees the world.

To Andre, it seems the poems, drawings and sculpture were all just mediums in which to show this way of seeing.

I think there's a lesson in this for us in digital production. People harp on about One web and then clients & your team all panic about producing 2 million versions of one thing that all have the same 'look & feel'.

To quote from the W3C (see above 'One web' link):


One Web means making, as far as is reasonable, the same information and services available to users irrespective of the device they are using. However, it does not mean that exactly the same information is available in exactly the same representation across all devices. The context of mobile use, device capability variations, bandwidth issues and mobile network capabilities all affect the representation.

Look back at Andre's drawings, poems and sculpture. They have the same feel but their visual form is as much a product of their medium as a representation of a 'look'. They work not because they are all minimal geometric shapes but because they are made with the same language.

I think people get that content will be different when they are looking at it through a different medium. I think they will forgive a hell of a lot more visual difference than we think. What they will spot straight away is one thing trying to be another, which just feels fake.

Saturday 24 September 2011

An Interface Developer's guide to build tools

What are build tools to us?

Interface Developers don't traditionally have much of a relationship with build tools. HTML, CSS and JavaScript require no compilation and the Application Developers on a project will often be in charge of the build.

This is changing. Our JavaScript needs validating (via JSLint) and, along with our CSS, compressing and concatenating. If you use SASS (I'm not quite convinced enough to yet) you will already be generating the final version of your CSS. Even HTML, when split into modules, can need combining into flat templates.

It's becoming more a part of our jobs but it can be a sharper learning curve than for, say, Application Developers. Build tools are written in languages unfamiliar to the average ID and most of their documentation is written for developers who work in those languages, which usually isn't us.

Having hit the above walls I thought it was worth putting my understanding down, based on the tools I've worked with to date.

What is a build tool?

Programs, in a very basic way, are us writing down a process in a way a computer understands. With that in mind, it will always seem obvious to programmers to turn as much of their working process as possible into programs too. That way, you write it once and run it whenever you need to.

Builds are a way to formalise this; you write a build file describing your process and the build tool runs it.

Build scripts are usually centered around a few things:

  • The chaining together of a series of processes
  • The configuration of this being separate from those processes
  • The sequence in which the processes are run is also part of the configuration

One idea of how to do it

There should be a natural workflow for using build tools:

  • Write a plan of your process (not a build script)
  • Turn this into a build script
  • Configure the use of your build tool so you can run it in one step (like clicking a button on a web interface)

The posts

I'm going to write about Apache's Ant and Maven, both Java-based tools. This is because those are the ones used by JSLint4Java (see my post) and Rhino, mentioned in my post on using it to run JSLint.

So far, we have an article on Ant. The Maven article will follow soon when I can do it justice :)

I'm also planning to actually write some build scripts for each tool to complement the articles so keep an eye out in future.

An Interface developer's guide to Ant

Apache Ant

I love Ant, it's kind of like building Java programs out of Lego (and we all like Lego).

Overview

Ant was made by the creators of Tomcat to automate its build in a way that worked across OSes. With heritage like that it's not surprising that it takes the approach of keeping its config in an XML file. As the tags allowed in this file are well-structured XML (tag names, attributes and nesting all in the right order), it's all pretty easy to see what everything is doing once you get the main principles.

Those principles

Inside the build file, processes, called tasks, are grouped into targets. The sequence in which these targets are run is defined by their dependencies on each other. This mechanism is really simple: each target will run any other targets listed as its dependencies before it runs its own tasks.

For example, this is a build file:

<project name="MyProject" default="init" basedir=".">
       <description>
              simple example build file
       </description>
       <property name="build" value="build"/>

       <!-- make the build folder -->
       <target name="setup">
              <echo message="Creating build folder"/>
              <mkdir dir="${build}" />
       </target>

       <!-- copy the files across -->
       <target name="init" depends="setup">
              <echo message="Moving all .js project files to build folder"/>
              <copy todir="${build}">
                     <fileset dir="./">
                            <include name="**/*.js"/>
                     </fileset>
              </copy>
       </target>
</project>

How it works

You will run the above build by entering this into your command-line/terminal:

ant

That's it, no options to set: Ant will default to the assumption that you want to use a file called build.xml located in the folder you are in when you run the command.

So it looks in build.xml and sees the XML above. The default attribute in the project tag specifies the init target as the one to start with, & the basedir attribute specifies that the build will execute in this folder (. means the current directory, in case you were wondering):

<project name="MyProject" default="init" basedir=".">

So Ant looks at the init target and sees that it is dependent on the setup target (see its depends attribute). As mentioned before, this means it executes setup before the contents of init.

So the setup target contains a single mkdir task to make the build folder we need for the main target.

<target name="setup">
       <echo message="Creating build folder"/>
       <mkdir dir="${build}" />
</target>

Now the main init target can run. This has a copy task that operates on files specified by its child fileset tag. This tag includes a child tag of its own that specifies a pattern files need to match to be used.

<target name="init" depends="setup">
       <echo message="Moving all .js project files to build folder"/>
       <copy todir="${build}">
              <fileset dir="./">
                     <include name="**/*.js"/>
              </fileset>
       </copy>
</target>        

A word about properties

You might have noticed the <property> tag being used. Anything in your build file that is referenced many times and stays the same should be kept in a property. This is good practice but also means that if you need to change a value, you do it in one place, not everywhere it is used in the file.

Most properties are simple name:value pairs (with each set via attributes on the property tag) but they can hold a lot more.

Also note that every build file has some properties already available, so have a look before duplicating them.
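For example (a sketch, not taken from the build file above): a property set with the location attribute resolves to an absolute path, and built-in properties such as ant.project.name and ant.version can be used without being declared:

```xml
<!-- 'location' resolves the value to an absolute path -->
<property name="src" location="src"/>

<!-- built-in properties, available in any build file -->
<echo message="Building ${ant.project.name} with Ant ${ant.version}"/>
<echo message="Source folder resolves to ${src}"/>
```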

So, pretty straightforward really. Apache provides Ant tasks for most things you will need to do, with loads of others available from the open-source community.

Good ideas

When you're writing build files it's a good idea to follow a few rules to keep it all nice and clean. To be honest, these apply to the writing of any program but I digress…

  • Keep any values you're using in multiple places as properties (like ${build} in the code above)
  • Use the echo task liberally. It is how you will know what your build is doing at each step
  • Comment your build file for clarity

What else can I do?

Everything really.

How you run it matters

You can run the ant command with a load of different options. To see all of them, type:
ant -help

You can use different build files, set properties that are used inside the build file and lots of other things that can change how the build runs.
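A couple of examples (deploy.xml and the dist value are hypothetical): -f points Ant at a different build file, -D sets a property from the command line and -projecthelp lists the targets a build file offers:

```shell
# use a different build file & override the 'build' property from the command line
ant -f deploy.xml -Dbuild=dist

# list the targets (and their descriptions) defined in build.xml
ant -projecthelp
```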

Reducing duplication even more

Giving a tag an id attribute means you can use it as a reference in other targets. For example, tags like fileset can be given an id:

       <fileset id="jsfiles" dir="./">
              <include name="**/*.js"/>
       </fileset>

…and referenced across your file instead of copying the same thing multiple times:

<fileset refid="jsfiles"/>

And

I'm not the biggest fan of Apache documentation but the Ant manual is worth a read for all the other things you can do with Ant. Try not to get too bogged down with all the Java bits (which is what spun my head out a bit at first).

Julien Lecomte of Yahoo has a nice post on using Ant for processes IDs would commonly use. You may not end up using all of his approach (it's quite tied into Yahoo's approach to development which may not match yours) but the good thing about Ant is you can grab the tasks you do want and just plug them in.