Test Persona – Persona-Focussed Testing

September 26, 2011

In my last post I introduced the concept of a Test Persona – a software pattern for building acceptance tests that encapsulate the activities of an individual user of the system.  I showed that acceptance tests could be refactored so that step definitions are defined in terms of actors in the system, rather than imperative bullet points.  In this post, I introduce the pattern of Persona-Focussed Testing.

Test Personas can be composed of strategies

Consider using test personas to trigger conversations around the actions and experiences of users that might not otherwise be easily apparent when implementing the tests.  For instance, we could rewrite the acceptance criteria from the first post:

   Given I am a price-conscious customer
   When I purchase an item that is sourced from my country
   Then I will have chosen the 'free delivery' delivery option
   And I will have paid $0.00 for delivery

The difference here is that we are simulating user behaviour, rather than specifying the appearance or process for purchasing/choosing delivery options.

The implementation of the mapping might be rewritten like this:

   /I am a price-conscious customer/
      me = create_customer :item_selection_policy => :cheapest
   /I purchase an item that is sourced from my country/
      my.desired_item = find_item :stock_location => my.location
      i.purchase_item  # Calling this makes the persona follow a purchase through to completion.
   /I will have chosen the 'free delivery' delivery option/
      my.purchases.last.delivery_option.should.be 'free delivery'
   /I will have paid $0.00 for delivery/
      my.purchases.last.delivery_charge.should.be 0.00

In the above example, the Customer test persona maintains a list of that persona’s own purchases.
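The :item_selection_policy above suggests a strategy object held inside the persona. Here is a minimal Ruby sketch of how that might look; the Customer class, the policy table, and the shape of the delivery-option hashes are all assumptions for illustration, not part of any real framework:

```ruby
# A hypothetical persona carrying a pluggable item-selection strategy.
class Customer
  attr_accessor :location, :desired_item
  attr_reader :purchases

  SELECTION_POLICIES = {
    :cheapest => lambda { |options| options.min_by { |o| o[:price] } },
    :fastest  => lambda { |options| options.min_by { |o| o[:days] } }
  }

  def initialize(item_selection_policy)
    @policy    = SELECTION_POLICIES.fetch(item_selection_policy)
    @purchases = []
  end

  # Follow the purchase through to completion, choosing a delivery
  # option according to this persona's strategy.
  def purchase_item(delivery_options)
    chosen = @policy.call(delivery_options)
    @purchases << { :item            => desired_item,
                    :delivery_option => chosen[:name],
                    :delivery_charge => chosen[:price] }
  end
end
```

A time-conscious persona would then differ only by the symbol passed to the factory, leaving the step definitions untouched.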

Test Personas have agency, and access their own assets

If your Test Personas have email addresses, it might be useful to have the persona provide easy access to the email messages they have received, e.g.:

    my.inbox.should.contain { |msg|
      msg.subject == "Receipt ##{my.purchases.last.receipt_number}"
    }

The idea here is to only expose access to resources that a given persona has.  For instance, I wouldn’t expect a customer to have access to, say, the inventory system, or the shipping manifests.  But emails are definitely things a customer would have access to, along with their fax machine, their delivery address, their credit card, etc.
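One way to get that scoping is for the persona object to filter a shared store down to its own resources, so a test can only ever see what that persona could see. A hypothetical sketch (the Message struct and the store layout are invented for illustration):

```ruby
Message = Struct.new(:to, :subject)

# A hypothetical persona that exposes only its own slice of a shared message store.
class CustomerPersona
  attr_reader :email_address

  def initialize(email_address, message_store)
    @email_address = email_address
    @message_store = message_store
  end

  # Only this persona's messages are visible - mirroring what a real
  # customer could actually look at.
  def inbox
    @message_store.select { |msg| msg.to == email_address }
  end
end
```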

In my next post, I’ll talk about some tentative uses of test personas in defining a good customer experience.

Test Persona: a pattern for acceptance tests

September 25, 2011

Test Persona is a software pattern for acceptance tests. The use of this pattern leads to a de-coupling of test motivation from test implementation. Using test personas in acceptance tests is part of building a rich test domain. This pattern is complementary to other patterns that focus on the test domain, such as the page object pattern.


A test persona has knowledge of the actions and information available to a particular actor in a system under test.  Acceptance tests interact with the test personas to accomplish activities that would normally be performed by that actor.


Here is a typical Gherkin acceptance test:

Given I am a customer
When I am about to purchase an item that can be sourced in my country
Then I should see 'free delivery' as an available delivery option

Most natural-language-based test harnesses will require a developer to implement a mapping into structured code. For instance, using page objects and some implicit web steps:

/I am a customer/
  visit CustomerHomePage
/I am about to purchase an item that can be sourced in my country/
  @item = find_item :stock_location => 'australia'
  visit url_for item :action => :purchase
/I should see 'free delivery' as an available delivery option/
    should.contain :text => 'free delivery'

Once the persona pattern is applied, we see that the actual user is more explicit, but interactions with data are less so.

before_scenario do
  attr_accessor :i
  alias_method :me, :i
  alias_method :my, :i
end

/I am a customer/
  me = create_customer
/I am about to purchase an item that can be sourced from my country/
  my.desired_item = find_item :stock_location => my.location
/I should see 'free delivery' as an available delivery option/
  should.contain :text => 'free delivery'

In this refactored example, the persona (returned by create_customer) holds many of the methods that were previously implicit in the World class.
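To make that concrete, create_customer might return a plain object that owns the state which was previously scattered through the World. This is only a sketch - the class, its default location, and the factory signature are my own assumptions:

```ruby
# A hypothetical persona object returned by create_customer.
class Customer
  attr_accessor :desired_item
  attr_reader :location

  def initialize(location)
    @location = location
  end
end

# World-level factory; step definitions keep the result in i/me/my.
def create_customer(options = {})
  Customer.new(options.fetch(:location, "australia"))
end
```

A step definition can then read naturally, e.g. my.desired_item = find_item :stock_location => my.location.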

In my next post, I’ll discuss some of the patterns of use in persona-focussed acceptance testing.

What else I learned about PowerShell this week

September 16, 2011

Remoting.  Windows Remoting.  Powershell.  And variables.  With functions….

When you use PowerShell to connect to remote machines, you have to worry about where and how variables get defined.

$s = New-PSSession -Computer localhost
Invoke-Command -Session $s -ArgumentList 1 -ScriptBlock { param($number) $number }

Yep. That’s what it takes to define a local variable in a script block

Today, I found that the PowerShell session has its own scope, which lasts as long as the session does.

Invoke-Command -Session $s -ArgumentList "my_value", "value" -ScriptBlock {
    param($name, $value)
    Set-Item -Path Variable:$name -Value $value
}
Invoke-Command -Session $s -ScriptBlock { $my_value }

This will set a variable on a session in one remote invocation, and retrieve it on another. Crazy!  Or rather, just like an SSH session.

So I now want to make use of this to push local variables from my script over to a remote machine. Something like:

$wibble = "hi"
OnRemote -server "localhost" -locals "wibble" -action {
   Write-Host "Hi I'm on remote machine with $wibble == hi"

What the devil should OnRemote look like? Here’s the signature…

function OnRemote() {
    param($server, $locals=@(), [scriptblock]$action)

…and the body…

      $s = New-PSSession -Computer $server
      @($locals) | % {
           Invoke-Command -Session $s -ArgumentList $_, (Get-Variable $_ -ValueOnly) -ScriptBlock {
                param($name, $value)
                New-Item -Path Variable:$name -Value $value
           }
      }
      Invoke-Command -Session $s -ScriptBlock $action
      Remove-PSSession $s

At the moment I haven’t gone any further with this, and I’m sure there’s a way of binding variables into the remote context more concisely. As it is, you have to remember to keep your locals declaration in sync with the values being pushed.

If I were to go to the next step, I’d probably try to remove that need to keep local names in sync. I guess it would look something like this:

function MyFunction(){
   $remote:wibble = 5 # or [remote]$wibble=5
   OnRemote {
       Write-Host "The value $wibble comes from the host"
   }
}

What I’ve learnt about PowerShell this week

September 10, 2011

Much to my surprise, PowerShell holds up as a decent, testable language, with a few idiosyncrasies.

So far, I’ve been able to refactor PowerShell to make use of its pervasive stream handling, and there are some useful tricks that can make PowerShell look like other functional-style languages you know and love (Ruby, Scala, LINQ, etc.).

  • There are some nifty aliases:
    ? { test } |
    % { process; result } |
    % { process_result; finish }

    ? is an alias for Where-Object.
    % is an alias for ForEach-Object.

  • More syntactic strangeness

    @{} creates an empty hashtable, but @() wraps things into a list context. You’ll notice this when the following expression doesn’t work like you’d expect:

    (gci -path $path -filter "*.exe").Count

    but this does:

    @(gci -path $path -filter "*.exe").Count
  • There is no built-in inject/foldl/aggregate function:
    function Fold-Left {
      param (
        [parameter(mandatory, position = 1)]$initial,
        [parameter(mandatory, position = 2)][scriptblock]$action
      )
      begin   { $sum = $initial }
      process { $sum = & $action $sum }
      end     { write-output $sum }
    }

    We can use it to reduce intermediate variables. This will return a map containing symbol counts:

    $counts = "a","a","a","a","b","c","d","e" |
       Fold-Left @{} { param($counts) $counts.$_ = $counts.$_ + 1; $counts }
    # $counts is now @{ "a" = 4; "b" = 1; ... }
  • For-each collects the return values of the for-each block.
    (1,2,3 | % { $_ + 1 })   # gives 2,3,4

    However, if you want to modify items in place, it can be error-prone to remember the return value:

    @{"ting" = "hi"}, @{"thing" = "lo"} |
    % { $_.tong = $_.ting + "there"; $_} #note the return value
  • Tap is a useful function, here it is:
    function Tap {
      param([scriptblock]$action)
      process { & $action; write-output $_ }
    }
    @{"ting" = "hi"}, @{"thing" = "lo"} | Tap { $_.tong = $_.ting + "there" } #note the return value is now ignored.

    This is a kestrel, for introducing side-effects. If your style is towards immutability you might want to use it differently.
    For instance, it’s great for logging:

    "command1", "command2" | Tap { Write-Host $_ } | Invoke-Expression

    And for turning it off, use…

    function Dont {
      param([scriptblock]$action)
      process { write-output $_ }
    }
    "command1", "command2" | Dont { Write-Host $_ } | Invoke-Expression

Silverlight Unit and Integration Testing

July 2, 2009

I’m going to tell you that, with a bit of effort, you can get a really good Test Driven Red/Green cycle with Silverlight…

Assumptions – you have installed VS 2008, and the Silverlight 2 SDK

Firstly – we created 4 projects in Visual Studio:

  • Silverlight.App – holds the production code
  • Silverlight.App.UnitTests – runs tests that require no external dependencies
  • Silverlight.App.IntegrationTests – contains tests that connect to the web or database, are asynchronous, etc.
  • Silverlight.App.IntegrationHost – an ASP.NET project that will run the IntegrationTests

UnitTests and IntegrationTests are created with the Silverlight unit-testing project templates. These templates cause a test page to be created. The test page basically listens to events from the TestRunner (embedded in the test Silverlight applications), and presents them as (fiddly) HTML.

Our UnitTest project hasn’t changed much since we started. We have had issues with the IntegrationTests project:


  • Code transparency – All application logic is marked as “Transparent”, which means it may call, but may not extend or override, “SafeCritical” or “Critical” code. The System.Net.WebClient is marked as SafeCritical. This means that you won’t be able to mock/stub WebClient. At all.
  • Internal event constructors – The constructor for DownloadStringCompletedEventArgs is internal; you won’t be able to simulate a WebClient completing an HTTP request.
  • Silverlight applications loaded from the file system just plain cannot connect to web resources. There is no security policy that will allow it.

All of the above issues drove us to use the IntegrationHost to run the IntegrationTests, but in a web context. This is quite good, especially for within-IDE testing, since you can host your test data as flat files or as ASP driven test data in the same host.

Unit Testing

Good news

  • Rhino Mocks is available for Silverlight.
  • NUnit is available as source for Silverlight (with some shims for older, pre-Silverlight data structures).
  • The Silverlight UT runner works with both MSTest metadata and NUnit-style assertions at the same time


Bad news

  • NUnitLite appears to be at version 0.5, and may be worth looking at instead of a custom build of NUnit
  • Specificity looks like it may work in the Silverlight environment.
  • Asynchronous testing using the [Asynchronous] attribute is potentially very brittle
  • Maintaining the test data for the integration tests in the IntegrationHost web project seems counter intuitive

Continuous Integration

  • Silverlight UT test reports are shockingly bad HTML, making downstream consumption particularly difficult. Would be nice to log semantic html somewhere.
  • For the unit tests, we found a powershell script that will spawn IE and scrape the results into a file
  • For the integration tests, we modified the powershell script to spawn a WebDev.WebServer.exe against the IntegrationHost
  • The IntegrationTest xap file that is linked from the IntegrationHost will not get automatically deployed by msbuild – we use a copy task to pull it before launching IE.
  • Visual Studio wants to put the hosted xap file in ClientBin of the IntegrationHost. This is fine, but it also wants to put it into source control, which is not. Eventually, you’ll get the right combination of check-ins, deletes, etc., and it won’t be an issue. Expect to lose a few hours on this.


When you have to clear out some space for that web server socket:

gwmi win32_process | where { $_.CommandLine -like "*WebDev.WebServer.exe*/port:$port" } | % { Stop-Process -Id $_.ProcessId }


Async Silverlight tests look a bit like this:

public void ShouldHangAroundABitWaiting() {
   EnqueueConditional( () => TrueIfCanStart() );
   EnqueueCallback( () => Specify.That( something, Is.Ready() ) );
}

It is at times like this that one definitely wants a monad.

One day, async code will look like this:

public void ShouldHangAroundABit() {
    Specify.That( something, Is.Ready() );
}

Refactoring to Law of Demeter

April 6, 2009

The Law of Demeter (paraphrased as “Tell, Don’t Ask”) is the developer equivalent of an early morning stretch. If you wake up stiff and sore, running through a few gentle stretches can tease out some of the muscles that have been under too much load, and make you feel limber again.

Recently, I’ve been a bit of a one-trick pony when it comes to writing code – I just say “use the Law of Demeter” and expect people to know where I’m going. I think that’s a bit harsh (especially on my colleagues), so I’d like to mumble a few words about how it can work in practice.

I’ve also had pub-code conversations around this, and for people who use Demeter on a regular basis, the question is where do you stop? I think I’ll show my answer to this, but first:

The scenario:
We have various components in our system, and we want to generate a health check page that can help production support quickly diagnose high severity failures. All we need to do is generate a web page for those components…

The setup:
We have an old-school MVC web app – We use a templated view, our view Model is a hierarchy name/value pair structure called “ViewData”, and we’re using Typed Constructor Dependency Injection.

The first draft of code looks like this

class DatabaseStatus {
  boolean allOk = true;
  public DatabaseStatus( HibernateDatabase database, Session session, DeploymentInfo info ) {...}
  public void renderTo( ViewData data ){
    List<StatusItem> statusItems = new ArrayList<StatusItem>();
    Properties props = addStatusForPropertyFile( statusItems );
    addStatusForDatabaseUsername( database, statusItems );
    addStatusForDatabasePassword( database, statusItems );
    addStatusForDatabaseUrl( database, statusItems );
    addStatusForDatabaseDriver( database, statusItems );

    allOk = true;
    for( StatusItem item : statusItems ){
       ViewData statusView = data.addItemToList( "status" );
       statusView.addEntry( "name", item.getName() );
       statusView.addEntry( "description", item.getDescription() );
       statusView.addEntry( "status", "ok".equalsIgnoreCase( item.getStatus() ) ? "ok" : "issue" );
       allOk = allOk && "ok".equalsIgnoreCase( item.getStatus() );
    }
  }
}

So, it’s ok. It has some tests around it as well, but there are some obvious points of improvement.

    The addStatusForX methods smack of duplication
    This class has feature envy of StatusItem
    There is a magic value for getStatus() – which may be used to turn on some special styling
    It’s difficult to unit test, since all the specific dependencies have to be set up

After a day of refactoring, we ended up with the following:

class DatabaseStatus {
  public DatabaseStatus( Health[] allHealth ) {...}

  public void renderTo( final ViewData viewData ){
    HealthReport report = new HealthReport() {
      boolean allOk = true;
      public void addHealthStatus( String name, String description, HealthStatus status ){
         ViewData statusView = viewData.addItemToList( "status" );
         statusView.addEntry( "name", name );
         statusView.addEntry( "description", description );
         statusView.addEntry( "status", status.isHealthy() ? "ok" : "issue" );
         allOk = allOk && status.isHealthy();
      }
    };
    for( Health health : allHealth ){
       health.addToHealthReport( report );
    }
  }
}

interface Health {
  void addToHealthReport( HealthReport report );
}

interface HealthReport {
  void addHealthStatus( String name, String description, HealthStatus status );
}

enum HealthStatus {
  Healthy, Unhealthy;

  public boolean isHealthy(){
    return this == Healthy;
  }
  static HealthStatus trueIsHealthy( Boolean value ) {
    return value.booleanValue() ? Healthy : Unhealthy;
  }
}

So by following the Law of Demeter:

    The DatabaseStatus class gets told what health is relevant.
    Each health is told what report object to write into
    The inline HealthReport encapsulates the mapping to the View, and tells it what to render

The benefits:

    It’s MUCH easier to unit test – the full range of health behaviour can be simulated now
    Each Health has to only worry about Health, not how that will be represented
    Adding more entries to the HealthReport is straightforward, and done in the object container

The interesting:

    The name of “HealthReport” makes things clearer, and allows view abstraction (e.g. using a JmxHealthReport)
    This seems a mix of object-oriented and functional code, albeit with some mutable state
    Using enums (e.g. HealthStatus) rather than primitives allows more refactoring options, since it is straightforward to push behaviour (the ‘tell’) into the enum

The introduction of HealthReport helps to crystalise where the “action” is. I think there may be a general case here – that in practice, modelling the interaction between objects will often drive out a new object encapsulating that interaction.

    When a Boy kisses a Girl – the kiss itself is important
    When money is transferred from a source account to destination account – the payment itself is important
    When a car and a wall collide – the collision is important
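The money-transfer case can be sketched in the same spirit: the interaction itself becomes an object that is told to execute. (Ruby here for brevity; the Account and Payment names are illustrative, not from the codebase above.)

```ruby
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  def withdraw(amount)
    @balance -= amount
  end

  def deposit(amount)
    @balance += amount
  end
end

# The payment encapsulates the interaction: callers tell it to execute,
# and it tells each account what to do - no asking for balances outside.
class Payment
  def initialize(source, destination, amount)
    @source, @destination, @amount = source, destination, amount
  end

  def execute
    @source.withdraw(@amount)
    @destination.deposit(@amount)
  end
end
```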

Using Hamcrest to assert asynchronous behaviour

January 31, 2009

In order to write a passing acceptance test, I often find the need to poll Selenium for a given situation to be true.  Such polling tests often look like this:

long endTime = getCurrentTime() + duration;
boolean failed = true;
while( failed = hasCheckFailed() ){
  sleep( intervalInMillis );
  if( getCurrentTime() > endTime ) {
    break;
  }
}
return !failed;

This works, but it has 3 pieces of information embedded in it – the check, the duration, and the interval.

This check could be replaced with a hamcrest matcher…

assertThat( thing, poll(duration, interval, delegateMatcher) )

where the poll() method returns a Matcher&lt;Thing&gt;.

However, the implementation of the poll() matcher becomes pretty complex – it’s delegating to a matcher, AND handling the timing concerns.

So, we played with another approach, which is to run the matcher against a finite time series:

assertThat( sample(thing), retry( delegateMatcher ))

When the assertion fails, it generates a message like:

Expected: retrying( delegate matcher description )
Got: sampling every 1 SECONDS for 10 SECONDS ( ... )

The sample method returns an Iterable&lt;Thing&gt; whose iterator repeatedly returns thing until the time runs out. Its next() method sleeps for the sampling interval. This is where the timing concerns are handled (and nothing else).

The retry matcher just iterates through the iterable until the sequence completes, or the test passes. It doesn’t have any time related logic.

The retry matcher is now easy…:

new TypeSafeMatcher<Iterable<T>>(){
  private Matcher<T> delegate;
  public boolean matchesSafely(Iterable<T> ts) {
    for( T t : ts ){
      if( delegate.matches(t) ) return true;
    }
    return false;
  }
}

The sample() method is a bit trickier, but essentially it returns a RealTimeSeries object that has one method: iterator(). This returns a SampleIterator – here is the code:

private class SampleIterator implements Iterator<T> {
   private boolean firstSample = true;
   private long expectedEnd;

   public boolean hasNext() {
     if (firstSample) {
       return true;
     }
     final long endOfNextSample = clock.timeInMillis()
       + interval.toMillis();
     return endOfNextSample <= expectedEnd;
   }

   public T next() {
     if (firstSample) {
       expectedEnd = clock.timeInMillis()
         + max.toMillis();
       firstSample = false;
     } else {
       try {
         Thread.sleep(interval.toMillis());
       } catch (InterruptedException e) {
         return sampledThing;
       }
     }
     return sampledThing;
   }

   public void remove() {
     throw new UnsupportedOperationException();
   }
}

In practice, we can now write code like:

assertThat( sample(selenium).duration(120), retry(elementIsPresent('element-id')))
assertThat( sample(page).duration(10), retry(hasMessageCount(5)))

and then remove the duplication here

assertThatWithin( duration(120, SECONDS), selenium, elementIsPresent('element-id'))
assertThatWithin( every(10, SECONDS), page, hasMessageCount(5))

The wonders of buildr and winstone.

January 10, 2009

Just to remind myself how good ruby + rake + buildr can be, I spent the afternoon trying to create a showcase artifact.  I thought I’d try and see how quickly I could get winstone up and running [it’s the app server for Hudson].

So, I found out that winstone and tomcat disagree on how to handle “AUTHENTICATED_USER”, but otherwise, all I had to do was mask the web.xml from my war artifact and transform it into something else.

The buildr script is below.  You have to assume there are two other projects (“db”, and “web”), and we use liquibase and hsqldb for the showcase environment.  I think the script took me 15 minutes to work out, and the rest was cycling around trying to get winstone to fire up (e.g. working out what should go in the new web.xml).

This is extremely succinct compared to ant, and even rake (without buildr) would be harder to create.

        define "showcase" do

            resources.from project('web').file('src/main')
            resources.from _('src/main/resources')
            resources.include( '**/webapp/WEB-INF/web.xml' )
            resources.include( '**/*showcase*' )
            resources.filter.using :ant, 'authorisation.xml.fragment' => xml_fragment

            package(:war).merge( project('web').package(:war) ).exclude( 'WEB-INF/web.xml' )
            package(:war).include( _('target/resources/webapp/WEB-INF') )
            package(:war).exclude( _('target/resources/**/changelog.xml') )
            package(:war).exclude( _('target/resources/**/*showcase*') )

            package(:jar).with :manifest => { 'Main-Class' => 'showcase.Launcher' }
            package(:jar).include( package(:war), :as => 'embedded.war' )
            package(:jar).merge( artifact(LIQUIBASE_JAR) )
            package(:jar).merge( artifact(HSQLDB_CLIENT_JAR) )
            package(:jar).include( project('db').file('src/main/resources/changelog.xml') )
        end


Restlet server for RESTful testing

January 7, 2009

We wanted a stub/mock HTTP REST server to simulate Sun’s SSO server.  Here it is.  If you want to change the current live behaviour, do this in your test/setup code:

// return "string=cookieName" for any request

Restlet restlet = new Restlet() {
  public void handle(Request request, Response response) {
    response.setEntity(new StringRepresentation("string=" + ssoCookieName, MediaType.TEXT_PLAIN));
  }
};

Here’s the server implementation.  The shutdown code is a bit clunky, but it appears the restlet server takes its time releasing the HTTP port…

package test;

import org.restlet.Restlet;
import org.restlet.Server;
import org.restlet.data.Protocol;
import org.restlet.data.Request;
import org.restlet.data.Response;

public class RestletServer {
  private static final int RESTFUL_SERVER_PORT = 8182;
  private final int restfulServerPort;

  private final Server server;
  private Restlet activeRestlet;

  public RestletServer() {
    this(RESTFUL_SERVER_PORT);
  }

  public RestletServer(int serverPort) {
    restfulServerPort = serverPort;
    Restlet restlet = new DelegatingRestlet();
    server = new Server(Protocol.HTTP, restfulServerPort, restlet);
  }

  private static class ShutdownRestletServer implements Runnable {

    private final Server server;

    public ShutdownRestletServer(Server server) {
      this.server = server;
    }

    public void run() {
      try {
        server.stop();
      } catch (Exception e) {
        // ignore - we are shutting down anyway
      }
    }
  }

  private class DelegatingRestlet extends Restlet {

    public void handle(Request request, Response response) {
      if (activeRestlet != null) {
        activeRestlet.handle(request, response);
      } else {
        super.handle(request, response);
      }
    }
  }

  public void start() throws Exception {
    Runtime.getRuntime().addShutdownHook(new Thread(new ShutdownRestletServer(server)));
    server.start();
  }

  public void setActiveRestlet(Restlet restlet) throws Exception {
    activeRestlet = restlet;
    if (activeRestlet != null && !server.isStarted()) {
      start();
    }
  }
}

Environmental domain modelling 2

June 26, 2008

In my last post I discussed using a domain model of the environment to give clarity around the architecture of an application, in a way that allows acceptance tests to be written in a technology agnostic way.

Last time I discussed the case where there were two environments – a local in-memory version, and a UAT version.  I created a Java class for each one, to illustrate the example.

In this post, I want to discuss what happens when you put a scripting language on top of this domain model.

When dealing with complex architectures or big teams, there are many shared environments, or at least, common infrastructure. Ideally, the staging and production environments should differ ONLY by configuration.

By using a scripting language (such as Ruby) to define your environments, you can quickly customize an environment without having to rebuild your enterprise manifest.

For instance:


  database_host :main_ora_server do
     host ip("")
     version "10.1"
     tools_dir "/opt/ora/10.1/bin"
  end

  database_schema :user_service_schema do
    schema_name "e_users"
    user_name "scott"
    password "tiger"
    host :main_ora_server
    migrations zip("user_schema/migrations")
  end

  webapp :web_ui do
    artifact war( "web-ui.war" )
    environment do
      user_datasource :user_service_schema
    end
  end

  %w( web1 web2 web3 web4 ).each do |w|
    w_ip = # function on w
    webserver w.to_sym do
      host ip( "10.2.3.#{w_ip}" )
      hosts_webapp :web_ui
    end
  end

From this configuration, I’m intending to deploy a web app to 4 servers each with one web app, all pointing at the same database. I’ve also defined a migration script for that database.

This is quite similar to a capistrano configuration, except in this case I’m not actually doing the deployment – I’m defining it, and allowing the same configuration to be used in different contexts. For instance, I may have a monitoring tool which takes the same file and pings each service. I may generate a diagram. Each of these semi-fictional tools would be invoked in a similar way:

deploy prod.config.rb enterprise-manifest-1.2.6
monitor prod.config.rb service-agent@jabberd.myco
visualize prod.config.rb target-dir

This config file can be placed in your favourite version control system, merged, diffed, and generally munged.  It’s clear enough that both devs and sys admins can talk about it, and it encourages differences in environments to be encapsulated.

In general, the power of having a rich domain model means that you can share knowledge across the team AND ensure high fidelity of that information.

Of course, the above domain model won’t be exactly what you need, but if you start off with a simple model, and use other agile infrastructure techniques, then you’ll find you can evolve a domain model which is JUST rich enough for your purposes.
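A DSL like this can be built in very little Ruby using instance_eval and method_missing: each top-level keyword records a named definition into a registry, which the deploy/monitor/visualize tools then read. A minimal sketch - the EnvironmentModel and Recorder names are mine, and a real version would expose database_host etc. as top-level methods:

```ruby
# A registry-backed DSL sketch: each block records attributes via
# method_missing, and tools later read the collected definitions.
class EnvironmentModel
  attr_reader :definitions

  def initialize
    @definitions = {}
  end

  def define(kind, name, &block)
    recorder = Recorder.new
    recorder.instance_eval(&block)
    @definitions[name] = { :kind => kind }.merge(recorder.attributes)
  end

  # Inside a definition block, any bare call becomes an attribute.
  class Recorder
    attr_reader :attributes

    def initialize
      @attributes = {}
    end

    def method_missing(attribute, value)
      @attributes[attribute] = value
    end
  end
end

model = EnvironmentModel.new
model.define(:database_host, :main_ora_server) do
  version "10.1"
  tools_dir "/opt/ora/10.1/bin"
end
```

Because the model is plain data once loaded, the deploy, monitor, and visualize tools can each walk the same definitions hash for their own purposes.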