Test Persona – Persona focused testing

September 26, 2011

In my last post I introduced the concept of a Test Persona – a software pattern for building acceptance tests that encapsulates the activities of an individual user of the system.  I showed that acceptance tests could be refactored so that step definitions are defined in terms of actors in the system, rather than as imperative bullet points.  In this post, I introduce the pattern of Persona Focussed Testing.

Test Personas can be composed of strategies

Consider using test personas to trigger conversations around the actions and experiences of users that might not otherwise be easily apparent when implementing the tests.  For instance, we could rewrite the acceptance criteria from the first post:

   Given I am a price-conscious customer
   When I purchase an item that is sourced from my country
   Then I will have chosen the 'free delivery' delivery option
   And I will have paid $0.00 for delivery

The difference here is that we are simulating user behaviour, rather than specifying the appearance or process for purchasing/choosing delivery options.

The mapping implementation might be rewritten as follows:

   /I am a price-conscious customer/
      me = create_customer :item_selection_policy => :cheapest
   /I purchase an item that is sourced from my country/
      my.desired_item = find_item :stock_location => my.location
      i.purchase_item  # calling this makes the persona follow a purchase through to completion
   /I will have chosen the 'free delivery' delivery option/
      my.purchases.last.delivery_option.should.be 'free delivery'
   /I will have paid $0.00 for delivery/
      my.purchases.last.delivery_charge.should.be 0.00

In the above example, the Customer test persona maintains a list of that persona’s own purchases.
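
As a sketch of that bookkeeping (the class and helper names here are invented; a real persona would drive the UI through page objects):

# A hypothetical Customer persona: it knows its own location, carries its
# item-selection policy, and records every purchase it completes.
class CustomerPersona
  attr_accessor :desired_item
  attr_reader :location, :purchases

  def initialize(options = {})
    @item_selection_policy = options[:item_selection_policy] || :any
    @location = options[:location] || 'australia'
    @purchases = []
  end

  def purchase_item
    purchases << complete_checkout_for(desired_item)
  end

  private

  # Placeholder: in a real suite this would drive the checkout UI (via page
  # objects) while honouring @item_selection_policy, and return the purchase.
  def complete_checkout_for(item)
    raise NotImplementedError, 'drive the real checkout here'
  end
end

def create_customer(options = {})
  CustomerPersona.new(options)
end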

Test Personas have agency, and access their own assets

If your Test Personas have email addresses, it might be useful to have the persona provide easy access to the email messages they have received, e.g.:

    my.inbox.should.contain { |msg|
      msg.subject == "Receipt ##{my.purchases.last.receipt_number}"
    }

The idea here is to only expose access to resources that a given persona has.  For instance, I wouldn’t expect a customer to have access to, say, the inventory system, or the shipping manifests.  But emails are definitely things a customer would have access to, along with their fax machine, their delivery address, their credit card, etc.
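
A sketch of that idea (with a made-up TestMailStore standing in for however your tests capture outgoing mail):

# The persona owns an email address and exposes only the messages delivered
# to that address; the rest of the mail store stays out of reach.
class TestMailStore
  @@messages = []

  def self.deliver(message)
    @@messages << message
  end

  def self.messages_for(address)
    @@messages.select { |msg| msg.to == address }
  end
end

class CustomerPersona
  attr_reader :email_address

  def inbox
    TestMailStore.messages_for(email_address)
  end
end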

In my next post, I’ll talk about some tentative uses of test personas in defining a good customer experience.


Test Persona: a pattern for acceptance tests

September 25, 2011

Test Persona is a software pattern for acceptance tests. The use of this pattern leads to a de-coupling of test motivation from test implementation. Using test personas in acceptance tests is part of building a rich test domain. This pattern is complementary to other patterns that focus on the test domain, such as the page object pattern.

Concept

A test persona has knowledge of the actions and information available to a particular actor in a system under test.  Acceptance tests interact with the test personas to accomplish activities that would normally be performed by that actor.

Example

Here is a typical Gherkin acceptance test:

Given I am a customer
When I am about to purchase an item that can be sourced in my country
Then I should see 'free delivery' as an available delivery option

Most natural-language-based test harnesses require a developer to implement a mapping into structured code. For instance, using page objects and some implicit web steps:

/I am a customer/
  visit CustomerHomePage
/I am about to purchase an item that can be sourced in my country/
  @item = find_item :stock_location => 'australia'
  visit url_for @item, :action => :purchase
/I should see 'free delivery' as an available delivery option/
  on_page(PurchasePage).delivery_options.
    should.contain :text => 'free delivery'

Once the persona pattern is applied, we see that the actual user is more explicit, but interactions with data are less so.

before_scenario do
  attr_accessor :i
  alias_method :me, :i
  alias_method :my, :i
end

/I am a customer/
  me = create_customer
/I am about to purchase an item that can be sourced in my country/
  my.desired_item = find_item :stock_location => my.location
  i.start_to_purchase_item
/I should see 'free delivery' as an available delivery option/
  my.purchase_page.delivery_options.
    should.contain :text => 'free delivery'

In this refactored example, the persona (returned using create_customer) holds many of the methods that were previously implicit in the world class.
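
As a rough illustration (the names are invented for the sketch; the real persona would delegate to your page objects and data helpers), create_customer might return something like:

# A hypothetical minimal persona: it knows where it is, what it wants to buy,
# and how to start buying it, and it keeps hold of the page it lands on.
class CustomerPersona
  attr_accessor :desired_item
  attr_reader :location, :purchase_page

  def initialize(location = 'australia')
    @location = location
  end

  # Assumes the persona shares the same visit/on_page/url_for helpers as the
  # World; PurchasePage is the page object from the earlier mapping.
  def start_to_purchase_item
    visit url_for(desired_item, :action => :purchase)
    @purchase_page = on_page(PurchasePage)
  end
end

def create_customer
  CustomerPersona.new
end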

In my next post, I’ll discuss some of the patterns of use in persona-focussed acceptance testing.


What else I learned about powershell this week

September 16, 2011

Remoting.  Windows Remoting.  Powershell.  And variables.  With functions….

When you use PowerShell to connect to remote machines, you have to worry about where and how variables get defined.

$s = New-PSSession -Computer localhost
Invoke-Command -Session $s -ArgumentList 1 -ScriptBlock { param($number) $number }

Yep. That’s what it takes to get a local value into a script block.

Today, I found that the remote PowerShell session has its own scope that lasts as long as the session.

 Invoke-Command -Session $s -ArgumentList "my_value", "value" -ScriptBlock {
           param($name, $value)
           Set-Variable -Name $name -Value $value
      }
      Invoke-Command -Session $s -ScriptBlock { $my_value }

This will set a variable on a session in one remote invocation, and retrieve it on another. Crazy!  Or rather, just like an SSH session.

So I now want to make use of this to push local variables from my script over to a remote machine. Something like:

$wibble = "hi"
OnRemote -server "localhost" -locals "wibble" -action {
   Write-Host "Hi I'm on remote machine with $wibble == hi"
}

What the devil should OnRemote look like? Here’s the signature…

function OnRemote() {
    param($server, $locals=@(), [scriptblock]$action)
}

…and the body…

      $s = New-PSSession -Computer $server
      @($locals) | % {
           Invoke-Command -Session $s -ArgumentList $_, (Get-Variable $_ -ValueOnly) -ScriptBlock {
                param($name, $value)
                Set-Variable -Name $name -Value $value
           }
      }
      Invoke-Command -Session $s -ScriptBlock $action
      Remove-PSSession $s

At the moment I haven’t gone any further with this, and I’m sure there’s a way of binding variables into the remote context more concisely. As it is, you have to remember to keep your locals declaration in sync with the values being pushed.

If I were to go to the next step, I’d probably try to remove that need to keep local names in sync. I guess it would look something like this:

function MyFunction(){
   $remote:wibble = 5 # or [remote]$wibble=5
   OnRemote {
       Write-Host "The value $wibble comes from the host"
   }
}

What I’ve learnt about Powershell this week

September 10, 2011

Much to my surprise, PowerShell holds up as a decent, testable language, with a few idiosyncrasies.

So far, I’ve been able to refactor PowerShell to make use of its pervasive stream handling, and there are some useful tricks that can make PowerShell look like other functional-style languages you know and love (Ruby, Scala, LINQ, etc.). There’s a combined example at the end of the list below.

  • There are some nifty aliases:
     ? { test } |
     % { process; result } |
     % { process_result; finish }

    ? is an alias for Where-Object.
    % is an alias for ForEach-Object.

  • More syntactic strangeness:
     @{}

    creates an empty hashtable, but

    @()

    wraps things into an array context. You’ll notice this when the following expression doesn’t work as you’d expect:

    (gci -path $path -filter "*.exe").Count

    but this does:

    @(gci -path $path -filter "*.exe").Count
  • There is no built-in inject/foldl/aggregate function, so here’s one:
    function Fold-Left
    {
      # a simple (non-advanced) function, so $_ is available in the process block
      param (
        $initial,
        [scriptblock]$action
      )
      begin { $sum = $initial }
      process { $sum = & $action $sum }
      end { write-output $sum }
    }

    We can use it to reduce intermediate variables. This will return a map containing symbol counts:

    $counts = "a","a","a","a","b","c","d","e" |
       Fold-Left @{} { param($counts) $counts.$_ = $counts.$_ + 1; $counts }

      # $counts is @{ a = 4; b = 1; c = 1; d = 1; e = 1 }
  • For-each collects the return values of the for-each block:
    1,2,3 | % { $_ + 1 }   # gives 2,3,4

    However, if you want to modify items in place, it can be error-prone to remember to return the value:

    @{"ting" = "hi"}, @{"ting" = "lo"} |
    % { $_.tong = $_.ting + "there"; $_ } # note the explicit $_ to return each item
  • Tap is a useful function; here it is:
    function Tap
    {
      # a simple (non-advanced) function, so $_ flows into the process block
      param([scriptblock]$action)
      process { & $action; write-output $_ }
    }

    @{"ting" = "hi"}, @{"ting" = "lo"} | Tap { $_.tong = $_.ting + "there" } # note the return value is now ignored

    This is a kestrel, for introducing side-effects. If your style is towards immutability you might want to use it differently.
    For instance, it’s great for logging:

    "command1", "command2" | Tap { Write-Host $_ } | Invoke-Expression $_

    And for turning it off, use…

    function Dont() {
      # same shape as Tap, but the action is never invoked
      param([scriptblock]$action)
      process { write-output $_ }
    }

    "command1", "command2" | Dont { Write-Host $_ } | Invoke-Expression

Silverlight Unit and Integration Testing

July 2, 2009

I’m going to tell you that, with a bit of effort, you can get a really good Test Driven Red/Green cycle with Silverlight…

Assumptions – you have installed VS 2008, and the Silverlight 2 SDK

Firstly – we created 4 projects in Visual Studio:

  • Silverlight.App – holds the production code
  • Silverlight.App.UnitTests – runs tests that require no external dependencies
  • Silverlight.App.IntegrationTests – contains tests that connect to the web or database, are asynchronous, etc.
  • Silverlight.App.IntegrationHost – an ASP.NET project that will run the IntegrationTests

UnitTests and IntegrationTests are created with the Silverlight unit-testing project templates. These templates cause a test page to be created. The test page basically listens to events from the TestRunner (embedded in the test Silverlight applications), and presents them as (fiddly) HTML.

Our UnitTest project hasn’t changed much since we started. We have had issues with the IntegrationTests project:

Security

  • Code transparency – All application logic is marked as “Transparent”, which means that it may call, but may not extend or override, “SafeCritical” or “Critical” code. The System.Net.WebClient is marked as SafeCritical. This means that you won’t be able to Mock/Stub WebClient. At all.
  • Internal event constructors – The constructor for DownloadStringCompletedEventArgs is internal; you won’t be able to simulate a WebClient completing an HTTP request.
  • Silverlight applications loaded from the file system just plain cannot connect to web resources. There is no security policy that will allow it.

All of the above issues drove us to use the IntegrationHost to run the IntegrationTests, but in a web context. This is quite good, especially for within-IDE testing, since you can host your test data as flat files or as ASP driven test data in the same host.

Unit Testing

Good news

  • Rhino Mocks is available for Silverlight.
  • NUnit is available as source for Silverlight (with some shims for older, pre-Silverlight data structures).
  • The Silverlight UT runner works with both MSTest metadata and NUnit-style assertions at the same time.

Puzzling

  • NUnitLite appears to be at version 0.5, and may be worth looking at instead of a custom build of NUnit
  • Specificity looks like it may work in the Silverlight environment.
  • Asynchronous testing using the [Asynchronous] attribute is potentially very brittle
  • Maintaining the test data for the integration tests in the IntegrationHost web project seems counter intuitive

Continuous Integration

  • Silverlight UT test reports are shockingly bad HTML, making downstream consumption particularly difficult. It would be nice to log semantic HTML somewhere.
  • For the unit tests, we found a PowerShell script that will spawn IE and scrape the results into a file (sketched after this list)
  • For the integration tests, we modified the powershell script to spawn a WebDev.WebServer.exe against the IntegrationHost
  • The IntegrationTest xap file that is linked from the IntegrationHost will not get automatically deployed by msbuild – we use a copy task to pull it before launching IE.
  • Visual Studio wants to put the hosted xap file in ClientBin of the IntegrationHost. This is fine, but it also wants to put it into source control, which is not. Eventually, you’ll get the right combination of check-ins, deletes, and so on, and it won’t be an issue. Expect to lose a few hours on this.
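
For the curious, the IE-scraping approach looks roughly like this (a sketch, not our actual script; the URL and the completion marker it polls for are made up):

# Spawn IE against the Silverlight test page, wait for the run to finish,
# then dump whatever the test page rendered into a results file.
$ie = New-Object -ComObject "InternetExplorer.Application"
$ie.Navigate("http://localhost:8080/UnitTestPage.aspx")   # hypothetical test page URL
while ($ie.Busy -or $ie.ReadyState -ne 4) { Start-Sleep -Seconds 1 }

# Poll the DOM until the runner reports a total (the marker text is an assumption).
while ($ie.Document.body.innerText -notmatch "Total") { Start-Sleep -Seconds 5 }

$ie.Document.body.innerText | Out-File TestResults.txt
$ie.Quit()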

Powershell

When you have to clear out some space for that web server socket:

gwmi win32_process | where { $_.CommandLine -like "*WebDev.WebServer.exe*/port:$port*" } | % { Stop-Process -Id $_.ProcessId }

Asynchrony

Async silverlight tests look a bit like this:

[Asynchronous]
public void ShouldHangAroundABitWaiting() {
   EnqueueConditional( () => TrueIfCanStart() );
   EnqueueCallback( () => Specify.That( something, Is.Ready() ) );
   EnqueueTestComplete();
}

It is at times like this that one definitely wants a monad.

One day, async code will look like this:

[Asynchronous]
public void ShouldHangAroundABit() {
   TrueIfCanStart.Wait();
   Specify.That( something, Is.Ready() );
}

Refactoring to Law of Demeter

April 6, 2009

The Law of Demeter (paraphrased as “Tell, Don’t Ask”) is the developer equivalent of an early morning stretch. If you wake up stiff and sore, running through a few gentle stretches can tease out some of the muscles that have been under too much load, and make you feel limber again.

Recently, I’ve been a bit of a one-trick pony when it comes to writing code – I just say “use the Law of Demeter” and expect people to know where I’m going. I think that’s a bit harsh (especially on my colleagues), so I’d like to mumble a few words about how it can work in practice.

I’ve also had pub-code conversations around this, and for people who use Demeter on a regular basis, the question is where do you stop? I think I’ll show my answer to this, but first:

The scenario:
We have various components in our system, and we want to generate a health check page that can help production support quickly diagnose high severity failures. All we need to do is generate a web page for those components…

The setup:
We have an old-school MVC web app – we use a templated view, our view model is a hierarchical name/value pair structure called “ViewData”, and we’re using Typed Constructor Dependency Injection.

The first draft of code looks like this

class DatabaseStatus {
  boolean allOk = true;
  public DatabaseStatus( HibernateDatabase database, Session session, DeploymentInfo info ) {...}
  public void renderTo( ViewData data ){
    List<StatusItem> statusItems = new ArrayList<StatusItem>();
    Properties props = addStatusForPropertyFile( statusItems );
    addStatusForDatabaseUsername( database, statusItems );
    addStatusForDatabasePassword( database, statusItems );
    addStatusForDatabaseUrl( database, statusItems );
    addStatusForDatabaseDriver( database, statusItems );

    allOk = true;
    for( StatusItem status : statusItems ){
       ViewData statusView = data.addItemToList( "status" );
       statusView.addEntry( "name", status.getName() );
       statusView.addEntry( "description", status.getDescription() );
       statusView.addEntry( "status", "ok".equalsIgnoreCase(status.getStatus()) ? "ok" : "issue" );
       allOk = allOk && "ok".equalsIgnoreCase(status.getStatus());
    }
  }
}

So, it’s ok. It has some tests around it as well, but there are some obvious points of improvement.

  • The addStatusFor… methods smack of duplication
  • This class has feature envy of StatusItem
  • There is a magic value for getStatus() – which may be used to turn on some special styling
  • It’s difficult to unit test, since all the specific dependencies have to be set up

After a day of refactoring, we ended up with the following:

class DatabaseStatus {
  public DatabaseStatus( Health[] allHealth ) {...}

  public void renderTo( final ViewData viewData ){
    final MutableBoolean allOk = new MutableBoolean( true );
    HealthReport report = new HealthReport() {
      public void addHealthStatus( String name, String description, HealthStatus status ){
         ViewData statusView = viewData.addItemToList( "status" );
         statusView.addEntry( "name", name );
         statusView.addEntry( "description", description );
         statusView.addEntry( "status", status.isHealthy() ? "ok" : "issue" );
         allOk.setValue( allOk.booleanValue() && status.isHealthy() );
      }
    };
    for( Health health : allHealth ){
       health.addToHealthReport( report );
    }
  }
}

interface Health {
  void addToHealthReport( HealthReport report );
}

interface HealthReport {
  void addHealthStatus( String name, String description, HealthStatus status );
}

enum HealthStatus {
  Healthy,
  Unhealthy;

  public boolean isHealthy(){
    return this == Healthy;
  }
  static HealthStatus trueIsHealthy( Boolean value ) {
    return value.booleanValue() ? Healthy : Unhealthy;
  }
}
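
To make the ‘tell’ concrete, here is a sketch of what one Health implementation might look like (the class name, the getUrl() accessor and the strings are invented for illustration):

// A hypothetical Health implementation: it knows how to judge one aspect of
// database configuration, and tells the report about it.
class DatabaseUrlHealth implements Health {
  private final HibernateDatabase database;

  DatabaseUrlHealth( HibernateDatabase database ) {
    this.database = database;
  }

  public void addToHealthReport( HealthReport report ) {
    report.addHealthStatus(
        "database.url",
        "The JDBC url the application is configured to use",
        HealthStatus.trueIsHealthy( database.getUrl() != null ) );
  }
}

DatabaseStatus is then wired up with new DatabaseStatus( new Health[]{ new DatabaseUrlHealth( database ), ... } ), and a unit test can hand it whatever stub Health implementations it likes.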

So by following the Law of Demeter:

  • The DatabaseStatus class gets told what health is relevant
  • Each health is told what report object to write into
  • The inline HealthReport encapsulates the mapping to the View, and tells it what to render

The benefits:

  • It’s MUCH easier to unit test – the full range of health behaviour can be simulated now
  • Each Health only has to worry about health, not how that will be represented
  • Adding more entries to the HealthReport is straightforward, and done in the object container

The interesting:

    The name of “HealthReport” makes things clearer, and allows view abstraction (e.g. using a JmxHealthReport)
    This seems a mix of object-oriented and functional code, albeit with some mutable state
    Using enums (e.g. HealthStatus) rather than primitives allows more refactoring options, since it is straightforward to push behaviour (the ‘tell’) into the enum

The introduction of HealthReport helps to crystallise where the “action” is. I think there may be a general case here – that in practice, modelling the interaction between objects will often drive out a new object encapsulating that interaction.

  • When a Boy kisses a Girl – the kiss itself is important
  • When money is transferred from a source account to a destination account – the payment itself is important
  • When a car and a wall collide – the collision is important

Using Hamcrest to assert asynchronous behaviour

January 31, 2009

In order to write a passing acceptance test, I often find the need to poll Selenium until a given condition becomes true.  Such polling tests often look like this:

long endTime = getCurrentTime() + duration;
boolean failed = true;
while( failed = hasCheckFailed() ){
  sleep( intervalInMillis );
  long now = getCurrentTime();
  if( now > endTime ) {
    fail();
  }
}
return !failed;

This works, but it has 3 pieces of information embedded in it – the check, the duration, and the interval.

This check could be replaced with a Hamcrest matcher…

assertThat( thing, poll(duration, interval, delegateMatcher) )

where the poll() method returns a Matcher<Thing>.

However, the implementation of the poll() matcher becomes pretty complex – it’s delegating to a matcher, AND handling the timing concerns.

So, we played with another approach, which is to run the matcher against a finite time series:


assertThat( sample(thing), retry( delegateMatcher ))

When the assertion fails, it generates a message like:

Expected: retrying( delegate matcher description )
Got: sampling every 1 SECONDS for 10 SECONDS(
  thing.toString()
)

The sample() method returns an Iterable<Thing> whose iterator repeatedly returns thing until the time runs out. Its next() method sleeps for the interval. This is where the timing concerns are handled (and nothing else).

The retry matcher just iterates through the iterable until the sequence completes, or the test passes. It doesn’t have any time related logic.

The retry matcher is now easy:


new TypeSafeMatcher<Iterable<T>>(){
  private Matcher<T> delegate;
  public boolean matchesSafely(Iterable<T> ts) {
    for( T t : ts ){
      if( delegate.matches(t) ) return true;
    }
    return false;
  }
  public void describeTo(Description description) {
    description.appendText("retrying( ").appendDescriptionOf(delegate).appendText(" )");
  }
}

The sample() method is a bit trickier, but essentially it returns a RealTimeSeries object that has one method: iterator(). This returns a SampleIterator – here is the code:


private class SampleIterator implements Iterator<T> {
   private boolean firstSample = true;
   private long expectedEnd;
   public boolean hasNext() {
     if (firstSample) {
       return true;
     }
     final long endOfNextSample = clock.timeInMillis()
      + interval.toMillis();
     return endOfNextSample <= expectedEnd;
   }
   public T next() {
     if (firstSample) {
       expectedEnd = clock.timeInMillis()
         + max.toMillis();
       firstSample = false;
     } else {
       try {
         interval.sleep(clock);
       } catch (InterruptedException e) {
         return sampledThing;
       }
     }
     return sampledThing;
   }
   public void remove() {
     throw new UnsupportedOperationException();
   }
}
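
For completeness, here is a sketch of how the enclosing RealTimeSeries might hang together (the field and builder names are guesses from the snippets above; Clock and Duration are the small abstractions the iterator already uses, while Duration.seconds() and SystemClock are invented for the sketch):

// A hypothetical enclosing class: sample(thing) builds a series with default
// timing, and duration()/every() adjust how long and how often to sample.
public class RealTimeSeries<T> implements Iterable<T> {
  private final T sampledThing;
  private final Clock clock;
  private Duration max = Duration.seconds(10);     // matches "for 10 SECONDS" above
  private Duration interval = Duration.seconds(1); // matches "every 1 SECONDS" above

  public static <T> RealTimeSeries<T> sample(T thing) {
    return new RealTimeSeries<T>(thing, new SystemClock());
  }

  private RealTimeSeries(T sampledThing, Clock clock) {
    this.sampledThing = sampledThing;
    this.clock = clock;
  }

  public RealTimeSeries<T> duration(int seconds) { max = Duration.seconds(seconds); return this; }
  public RealTimeSeries<T> every(int seconds)    { interval = Duration.seconds(seconds); return this; }

  public Iterator<T> iterator() {
    return new SampleIterator(); // the SampleIterator above lives here as a private inner class
  }
}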

In practice, we can now write code like:


assertThat( sample(selenium).duration(120), retry(elementIsPresent('element-id')))
assertThat( sample(page).duration(10), retry(hasMessageCount(5)))

and then remove the duplication:

assertThatWithin( duration(120, SECONDS), selenium, elementIsPresent('element-id'))
assertThatWithin( every(10, SECONDS), page, hasMessageCount(5))
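
assertThatWithin itself can then be a thin wrapper over the same two pieces. A minimal sketch, assuming retry() and sample() are statically imported, and that the duration(120, SECONDS) form above is sugar over the same thing:

// A hypothetical helper: the timing argument is applied to the series, and the
// delegate matcher is retried against it.
public static <T> void assertThatWithin(int seconds, T thing, Matcher<T> matcher) {
  assertThat(sample(thing).duration(seconds), retry(matcher));
}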

