The wonders of buildr and winstone.

January 10, 2009

Just to remind myself how good ruby + rake + buildr can be, I spent the afternoon trying to create a showcase artifact.  I thought I’d see how quickly I could get winstone up and running (it’s the app server for Hudson).

So, I found out that winstone and tomcat disagree on how to handle “AUTHENTICATED_USER”, but otherwise, all I had to do was mask the web.xml from my war artifact and transform it into something else.

The buildr script is below.  You have to assume there are two other projects (“db”, and “web”), and we use liquibase and hsqldb for the showcase environment.  I think the script took me 15 minutes to work out, and the rest was cycling around trying to get winstone to fire up (e.g. working out what should go in the new web.xml).

This is extremely succinct compared to ant, and even rake (without buildr) would be harder to create.

        define 'showcase' do

          resources.from project('web').file('src/main')
          resources.from _('src/main/resources')
          resources.include( '**/webapp/WEB-INF/web.xml' )
          resources.include( '**/*showcase*' )
          resources.filter.using :ant, 'authorisation.xml.fragment' => xml_fragment

          package(:war).merge( project('web').package(:war) ).exclude( 'WEB-INF/web.xml' )
          package(:war).include( _('target/resources/webapp/WEB-INF') )
          package(:war).exclude( _('target/resources/**/changelog.xml') )
          package(:war).exclude( _('target/resources/**/*showcase*') )

          package(:jar).with :manifest => { 'Main-Class' => 'showcase.Launcher' }
          package(:jar).include( package(:war), :as => 'embedded.war' )
          package(:jar).merge( artifact(LIQUIBASE_JAR) )
          package(:jar).merge( artifact(HSQLDB_CLIENT_JAR) )
          package(:jar).include( project('db').file('src/main/resources/changelog.xml') )
        end
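As an aside, the `resources.filter.using :ant, ...` line does Ant-style token replacement on the filtered resources. A minimal sketch of that substitution in plain ruby (the web.xml content and the token value here are invented for illustration):

```ruby
# Sketch of Ant-style @token@ replacement, as done by buildr's :ant filter.
# Unknown tokens are left untouched.
def ant_filter(text, mapping)
  text.gsub(/@([\w.\-]+)@/) { mapping[$1] || $& }
end

web_xml  = "<web-app>@authorisation.xml.fragment@</web-app>"   # invented example
filtered = ant_filter(web_xml, 'authorisation.xml.fragment' => '<security-constraint/>')
```
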



Restlet server for RESTful testing

January 7, 2009

We wanted a stub/mock HTTP REST server to simulate Sun’s SSO server.  Here it is.  If you want to change the current live behaviour, do this in your test/setup code:

// return "string=cookieName" for any request
Restlet restlet = new Restlet() {
    public void handle(Request request, Response response) {
        response.setEntity(new StringRepresentation("string=" + ssoCookieName, MediaType.TEXT_PLAIN));
    }
};
restletServer.setActiveRestlet(restlet);

Here’s the server implementation.  The shutdown code is a bit clunky, but it appears the restlet server takes its time releasing the HTTP port…

package test;

import org.restlet.Restlet;
import org.restlet.Server;
import org.restlet.data.Protocol;
import org.restlet.data.Request;
import org.restlet.data.Response;

public class RestletServer {
    private static final int RESTFUL_SERVER_PORT = 8182;
    private final int restfulServerPort;

    private final Server server;
    private Restlet activeRestlet;

    public RestletServer() {
        this(RESTFUL_SERVER_PORT);
    }

    public RestletServer(int serverPort) {
        restfulServerPort = serverPort;
        Restlet restlet = new DelegatingRestlet();
        server = new Server(Protocol.HTTP, restfulServerPort, restlet);
    }

    private static class ShutdownRestletServer implements Runnable {

        private final Server server;

        public ShutdownRestletServer(Server server) {
            this.server = server;
        }

        public void run() {
            try {
                server.stop();
            } catch (Exception e) {
                // best effort: we're shutting down anyway
            }
        }
    }

    private class DelegatingRestlet extends Restlet {

        // forward to whichever restlet the current test has installed
        public void handle(Request request, Response response) {
            if (activeRestlet != null) {
                activeRestlet.handle(request, response);
            } else {
                super.handle(request, response);
            }
        }
    }

    public void start() throws Exception {
        server.start();
        Runtime.getRuntime().addShutdownHook(new Thread(new ShutdownRestletServer(server)));
    }

    public void setActiveRestlet(Restlet restlet) throws Exception {
        activeRestlet = restlet;
        if (activeRestlet != null) {
            if (!server.isStarted()) {
                server.start();
            }
        }
    }
}

Environmental domain modelling 2

June 26, 2008

In my last post I discussed using a domain model of the environment to give clarity around the architecture of an application, in a way that allows acceptance tests to be written in a technology agnostic way.

Last time I discussed the case where there were two environments – a local in-memory version, and a UAT version.  I created a Java class for each one, to illustrate the example.

In this post, I want to discuss what happens when you put a scripting language on top of this domain model.

When dealing with complex architectures or big teams, there are many shared environments, or at least, common infrastructure. Ideally, the staging and production environments should differ ONLY by configuration.

By using a scripting language (such as ruby) to define your environments, you can quickly customize an environment without having to rebuild your enterprise manifest.

For instance:


  database_host :main_ora_server do
    host ip("")
    version "10.1"
    tools_dir "/opt/ora/10.1/bin"
  end

  database_schema :user_service_schema do
    schema_name "e_users"
    user_name "scott"
    password "tiger"
    host :main_ora_server
    migrations zip("user_schema/migrations")
  end

  webapp :web_ui do
    artifact war( "web-ui.war" )
    environment do
      user_datasource :user_service_schema
    end
  end

  %w( web1 web2 web3 web4 ).each do |w|
    w_ip = # function on w
    webserver w.to_sym do
      host ip( "10.2.3.#{w_ip}" )
      hosts_webapp :web_ui
    end
  end

From this configuration, I’m intending to deploy a web app to 4 servers each with one web app, all pointing at the same database. I’ve also defined a migration script for that database.

This is quite similar to a capistrano configuration, except in this case I’m not actually doing the deployment – I’m defining it, and allowing the same configuration to be used in different contexts. For instance, I may have a monitoring tool which takes the same file and pings each service. I may generate a diagram. Each of these semi-fictional tools would be invoked in a similar way:

deploy prod.config.rb enterprise-manifest-1.2.6
monitor prod.config.rb service-agent@jabberd.myco
visualize prod.config.rb target-dir
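None of these tools exist as written, but to hint at how little machinery the approach needs: here is a minimal, hypothetical reader for such a config file, sketched in plain ruby (keyword names taken from the example above; a real implementation would validate and cross-link the elements):

```ruby
# Collects bare keywords (schema_name, password, ...) inside a block as properties.
class PropertyCollector
  def initialize
    @hash = {}
  end

  def method_missing(name, *args)
    @hash[name] = (args.size == 1 ? args.first : args)
  end

  # helpers used by the config above
  def ip(addr); addr; end
  def war(name); name; end
  def zip(path); path; end

  def to_hash; @hash; end
end

class EnvironmentModel
  attr_reader :items

  def initialize
    @items = {}
  end

  # each top-level keyword registers a named element of that kind
  %w(database_host database_schema webapp webserver).each do |kind|
    define_method(kind) do |name, &block|
      collector = PropertyCollector.new
      collector.instance_eval(&block)
      @items[name] = { :kind => kind, :props => collector.to_hash }
    end
  end

  # evaluate a config file such as prod.config.rb against this model
  def self.load(path)
    model = new
    model.instance_eval(File.read(path), path)
    model
  end
end
```

A deploy, monitor, or visualize tool would then call `EnvironmentModel.load` and walk `items`, each interpreting the same model for its own purpose.
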

This config file can be placed in your favourite version control system, merged, diffed, and generally munged.  It’s clear enough that both devs and sys admins can talk about it, and it encourages differences in environments to be encapsulated.

In general, the power of having a rich domain model means that you can share knowledge across the team AND ensure high fidelity of that information.

Of course, the above domain model won’t be exactly what you need, but if you start off with a simple model, and use other agile infrastructure techniques, then you’ll find you can evolve a domain model which is JUST rich enough for your purposes.


Environmental domain modelling

June 26, 2008

In my last post, I talked about how useful building a domain model of your technical environment can be in agile system integration projects.

The argument goes something like this:
  1. There are architectural boundaries in your project – e.g. a web UI server, a SOAP/REST application server, a mainframe at the end of a message queue.
  2. You are building in quality and reducing defects by implementing automated acceptance tests.
  3. You run these automated acceptance tests locally, but also in different phases of the release process (e.g. after the unit test CI build, running with multiple browsers).
  4. In each set of tests the environment is subtly different – web servers have different urls, some systems are simulated rather than real, etc.
  5. Rather than have arbitrary configuration parameters for each part of the system, instead build a domain model that encapsulates the relationships between deployable units.
So, let’s start with a simple environment:
interface Environment {
  URI getWebUILocation();
  URI getBusinessServiceLocation();
  boolean requireWebUI();
  boolean requireBusinessService();
  void stop();
  void kill();
}

class InMemoryLocalEnvironment implements Environment {
  Server server = ...
  String uiWebApp = "...";
  SimulatedBusinessService simulatedBusinessService = new SimulatedBusinessService();
  int port = 8080;

  public boolean requireWebUI() {
    startServerIfNotStarted();
    return server.addWebApplication( "web-ui-context", uiWebApp );
  }

  protected void startServerIfNotStarted() { server.start(); }

  public boolean requireBusinessService() {
    startServerIfNotStarted();
    return server.addHandler( "serviceName", simulatedBusinessService );
  }

  public URI getWebUILocation() { return new URI( "http://localhost:" + port + "/web-ui-context" ); }
  public URI getBusinessServiceLocation() { return new URI( "http://localhost:" + port + "/serviceName" ); }

  public void setWebUIPath( String uiApplication ) {
    uiWebApp = uiApplication;
  }

  public void stop() { server.stop(); }
  public void kill() { server.stop(); }
}

This may need a bit of explanation. Firstly, you have an interface (Environment) to your environment. At its most general, it provides accessors for locating various services (needed during acceptance testing), and a mechanism for requiring that such services exist. In this case, I’m expecting a browser-based web site which accesses some external REST or SOAP server.

Secondly, there is the first implementation of your environment class – InMemoryLocalEnvironment.  The name here is trying to indicate that you use this environment if you want something quick to instantiate, and don’t want to use network versions of the service.

This InMemoryLocalEnvironment uses Jetty internally to start the web app under development, and to use another web app to host a simulated business service.  The details of the simulator are out of scope here, but you can imagine it has stock responses to requests based on well known input data (eg. deny all applications from “Dr. Evil”).

During an acceptance test run, I may want to point Selenium RC at my web app.  Rather than hardcode the location into the acceptance test itself, I may do the following (using the RSpec story runner under JRuby):

Given( 'an anonymous user on web UI' ) do
  environment.requireWebUI()
  selenium.open( environment.getWebUILocation().to_s )   # drive Selenium RC at the UI
end

Then( 'a new user exists' ) do
  environment.requireBusinessService()
  get "#{environment.getBusinessServiceLocation()}/users/#{new_user_id}"   # e.g. via an HTTP client
end

So far so good – this is simple indirection for run time configuration – you can do the same thing with property files, except this time, your tests don’t have to know how to create the right environment – it’s all decided before the tests run.
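That “decided before the tests run” step can be as small as a factory keyed off an environment variable. A hypothetical sketch in ruby – the two classes here are trivial stand-ins for the Java implementations in this post, and `TEST_ENV` is an invented variable name:

```ruby
# Stand-ins for the real environment implementations.
class InMemoryLocalEnvironment; end
class UatEnvironment; end

ENVIRONMENT_CLASSES = {
  'local' => InMemoryLocalEnvironment,
  'uat'   => UatEnvironment,
}

# Pick the environment once, before the suite runs.
def build_environment(name = ENV['TEST_ENV'] || 'local')
  klass = ENVIRONMENT_CLASSES[name] or raise "unknown environment: #{name}"
  klass.new
end
```

The tests themselves only ever see the Environment interface, never the selection logic.
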

Now let’s create another environment:

class UatEnvironment implements Environment {
  public URI getWebUILocation() { return new URI( "http://uat1.myco/myui" ); }
  public URI getBusinessServiceLocation() { return new URI( "http://uat2.myco/businessService/" ); }
  public boolean requireWebUI() { /* ping UAT environment... */ return true; }
  public boolean requireBusinessService() { /* ping UAT environment... */ return true; }
  public void stop() {}
  public void kill() {}
}

In this case, the requirements are that an existing application has already been uploaded. We have built in checks to make sure that the web ui and business services exist prior to running the tests. All the acceptance tests (that require a particular dependency) will pass.

Again all this can be done with property files and an anaemic domain model. However, I’ve used this model to build clarity around the design of the application, which sometimes gets lost in the thrust to cut code.

Of course, there must be some extra value, over and above the property file mechanism. I’ll talk about that later…

Agile infrastructure – missing pieces

June 22, 2008

My last 5 or 6 Agile projects have involved non-trivial architectures.  By this I mean that they’ve been more than a browser, application server, and a database.  While I would urge all people with architectural responsibility to avoid complexity, sometimes it’s not feasible to simplify the architecture prior to first release. 


For the record, there are several reasons why complex architectures perform poorly, and there are several reasons why the agile approach exposes these deficiencies.  I’m assuming that an agile team will be aiming to allow a single developer pair to implement or fix any story that the business needs.


I’m going to talk about how complex architectures can adversely affect the velocity of the development team, and then suggest some patterns for offsetting that impact.


  • Changing environments – even with simple architectures, if there is a shared dependency (such as shared DB schema, or network service), you can assume that someone will make a change to that dependency, and it won’t be when the developer pair want it to be changed.  Typically shared dependency changes affect the entire development team, not just individual developers, causing a huge loss of either immediate development velocity, or a deferred loss of velocity due to reductions in quality.
  • Waiting for knowledge – complex environments often use a mix of technologies that take time for developers to master.  Such lead times reduce velocity.   In addition, having “experts” means that either the expert is put under huge pressure to deal with issues that exceed their capacity, or alternatively, the expert is under-utilized.
  • Investigation – when something does break in a complex architecture, it is often not immediately apparent why.  Typically there are multiple log files, multiple system accounts, multiple configurations (sometimes data driven), and multiple network servers all collaborating together.  The time taken to determine the cause of a failure reduces velocity.
Suggested Patterns:
  • Sandbox environment – This means giving each developer pair a shared-nothing environment in which to develop software.  It is then the responsibility of the pair to manage their own environment, and to promote standards for this environment. Self-management means that the developer pair may make breaking changes without affecting others, and can also rule out outside interference if their own environment does break.  Providing a predictable self-managed environment forces experts to share knowledge, and to develop tooling that empowers the developer pair.  Conversely, developers will create tooling that facilitates common tasks, and share these with the rest of the team.  Note this shared-nothing environment is not necessarily restricted to a single machine, since it is desirable to be able to develop on a production-similar stack of technologies.  
  • Domain model for environment – This means building software and tooling that represents the development environment.  Using a domain model encourages both a consistent language when referring to architectural pieces, and also allows automated reasoning about a given environment.  By allowing all architectural tooling to understand a common domain model, it becomes possible to automate the setup of monitoring tools, diagrams, profiling.  Avoid IDE and product-specific tools to manage the domain model (although they may be used as appropriate by teams), and focus on a standard of deployment and configuration extrapolated from the environmental domain model.  For example, use programmatic configuration of Spring contexts that is driven from the domain model, rather than using property-file based configuration.  
  • Branching by abstraction – Agile development teams often wish to change software and hardware architecture in response to issues that have been found.  They recognize that while hacks are appropriate in production support branches, such hacks have little place in the main development branch.  Architectural transforms may range from changing a persistence mechanism to switching database vendors.  Given that one team may wish to make a significant architectural change, they should avoid “big bang” introductions.  Once time-boxed spikes have been performed (to assess feasibility), the vision for the change should be shared with the team.  Once committed to the change, work starts by incrementally transforming the architecture.  These changes are distributed across the teams in small slices (through the main branch source control), potentially with two implementations co-existing within the same application, and switched over using configuration.  This allows functional software to be delivered to production at any point in the transformation.
  • Deployment Automation – setting up a sand box environment for a given developer pair is a complex task.  As such it should be an automated task, provided from the main automated build script.  This may mean automating the use of ssh in order to clean and create new database schemas, deploy EJBs or services.  We have found that dynamic programming languages (such as ruby and python) make a great alternative to shell scripts for these tasks.
  • Automated monitoring as acceptance criteria – Identifying failures is made much easier if there is a single place to find information about system availability.  Those responsible for architecture should mandate monitoring of a new service as the success criteria of that service.  It is possible to automate the creation of hosts and services (and groups) for open source monitoring tools such as nagios, and ruby has excellent libraries for basic network service connectivity checking.  The level of monitoring required in the acceptance criteria will depend on the value of the service.  For instance, if a duplicate server is needed for load balancing, the monitoring criteria may ping the load balancer to ensure that it can see the new server.  On the other hand, if the new piece is an ESB, the criteria may eschew basic IP connectivity in favor of firing sample messages and verifying downstream services receive the forwarded message(s).
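To make the last point concrete, a basic connectivity check of the kind mentioned above takes only a few lines of ruby standard library (this is a sketch, not a replacement for a real monitoring tool; the host and port in any real check would come from your environment model):

```ruby
require 'socket'
require 'timeout'

# True if a TCP connection to host:port succeeds within the timeout.
def port_open?(host, port, seconds = 2)
  Timeout.timeout(seconds) do
    TCPSocket.new(host, port).close
    true
  end
rescue StandardError
  false
end
```

A nagios plugin, or a smoke test run after deployment, can be little more than a loop over the services in the environment model calling this.
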

Technology and process innovation

April 27, 2008
In an agile/lean software development team, discussion is invited, but coordinated.  I’ve noticed that people who are passionate about technology or process often feel friction if they don’t get a good hearing for their idea.  Equally, the technical/process leadership have to help everyone on a team achieve consistency.  Regardless, there should be no “status quo”…
I’ve been playing around with a pipeline approach to technology/process, where the team can
  • achieve consensus on the current state, and next steps
  • get rid of things that aren’t working
  • propose blue-sky ideas
  • get support for taking almost-working ideas through to completion
So, every iteration, we can build up a map of our technologies and practices, and then select and highlight the elements that we think are important.
  • Deprecated – was useful once, but we should remove uses of this practice/tech when it is found
  • Definite – should be (and is being) used unless it is seriously mismatched to the problem at hand
  • Sound – this is a really good, proven idea/tech that isn’t being used yet (or only sporadically), but we think it should be adopted
  • Tentative – this would be really good, but it might have some issues that make it unsuitable
  • Radical – as a concept it solves some problems we’re having, but it may raise more issues than it solves
For instance, we might have the following map (example: for a Java based web site)
  • Deprecated – Struts, JDK 1.4
  • Definite – JDK 1.5, Struts2, Spring Dependency Injection, Maven, Hibernate, Continuous Integration, CVS
  • Sound – Test Driven development, Subversion, Freemarker
  • Tentative – BDD and acceptance criteria executed with Rspec and selenium, continuous pairing
  • Radical – Scala
  • Out – consensus is around not using this tech (e.g. .Net on a java project to name a simplistic example)
This is just a snapshot.  You can see a mix of practices and technologies here.
Each iteration (for some value of each), I would like the team to produce the following:
  • System guardians for each definite technology/practice
  • A safety-factor of 1-5 from each team member for every element on the map
  • A vote/score for each sound/tentative/radical idea on the map – this is the priority associated with adoption
  • Actions that can be taken to move prioritized items towards definite (for instance,  a 15 minute “topic of the day”)
In this way, there is a collaborative understanding of what works, what doesn’t, and what we are doing about it.  
Some thoughts around this:
  • I think it’s OK for an initially unpopular idea to stay up on the map, because it allows the proponent to feel included, and it stays in the collective memory for times when the idea is the right one.  
  • No definite process/technology is immune from deprecation – although part of the criteria for moving from sound to definite is to address issues of how to phase out the old process/tech
  • Items can be split – there are situations where a technology provides some benefits, but only if it is used in a particular way.  So, Spring DI may be definite, but Spring MVC may be out.
  • A lot of discussion around these things will happen in the retrospective anyway, but the innovation map should be persistent and displayed on the team room wall.
  • The “system guardian” pattern is great for identifying subject matter experts, and then making them able to move on to other areas.  Essentially, each system guardian is responsible for finding 2/3 other people and pairing with them until they are also system guardians.  Once you have 4/5 names as guardians, the knowledge will propagate quickly.
The DebtStream Guards
In agile delivery, there is often a pattern of reserving a pair or two (depending on team size) for technology and process maintenance – the technical/process “debt” stream.
I’ve used this stream effectively to
  • Simplify the build
  • “Proof of concept” 3rd party software integration
  • Replace crufty code with a big restructuring
  • etc…
This team is rarely made up of the same people.  In fact the idea is often to seed a pair who will then roll into the main streams of development, spreading their experience, and backfill them.

Schott’s Infrastructure Miscellany

May 21, 2007

A few bits of language related to enterprise deployment – someone send me some references, there are probably other, better, names for this:

The differences between “green/blue” and “silver/gold” are subtle, and should probably be generalised.

Green-blue domains

Completely parallel, independent, and symmetric hardware instances at all tiers of an application, only one of which is live at any time. This allows installation/upgrade of applications and data on the non-live partition before switching over the whole partition by just using a load balancer at the front. Releases are alternated between green and blue. The difference between this and having a staging environment is that each of the blue/green environments is production ready, and there are no subtle differences between them that aren’t accounted for.

This pattern allows for frequent, automated releases at the expense of extra hardware.

Note that, in a variation, green and blue can both run the live application. In this case, you need to isolate green (or blue) before upgrading it. There will be a reduction in redundancy and capacity while this occurs.
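The switch itself is simple to model: the load balancer’s live pool points wholesale at one colour at a time. A toy sketch (the class and pool names are invented for illustration; a real switch would be a load balancer reconfiguration, not code like this):

```ruby
# Models a load balancer whose live pool is either the green or the blue set.
class LoadBalancer
  attr_reader :live

  def initialize(pools)
    @pools = pools     # e.g. { :green => [ips...], :blue => [ips...] }
    @live  = :green
  end

  # Flip traffic to the other colour in one step.
  def switch!
    @live = (@live == :green ? :blue : :green)
  end

  def live_servers
    @pools[@live]
  end
end
```

Upgrading then means: deploy to the non-live colour, verify it, and call the equivalent of `switch!` on the real load balancer.
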

Gold-silver data staging

Having data from an upstream process treated as “silver” until verification, and having the current production data treated as “gold”. At some point it is necessary to promote (as close to atomically as possible) silver to gold. At the same time, the services using the previous gold are isolated; soon after, they are updated to the new gold data and rejoined to production.


Isolation

The process by which a service is removed from a pool, and is no longer accessible from the live application.


Joining

Bringing an isolated server back into a live service set.

Virtual IP switching

A load-balancer supported technique where the pool of IPs associated with a virtual IP is added/removed to. This is one way of isolating/joining a service.