Friday, February 5, 2016

Lean Testing in theory and practice


This article was originally published in an earlier form on the Assurity Consulting website

There are many different definitions of software testing, and many views on what responsible testing looks like in our industry.  How you view the role of a tester informs what practices and artifacts you believe are valuable.

Saturday, June 6, 2015

Resources relating to "That's not the map I had in mind"



I expect these diagrams will evolve and grow over time, which is why I have included them here for comment. This list will grow as well.

XMind file for taxonomic hierarchy: XMind file

JPG file for taxonomic hierarchy: Image

 Downloadable examples of various testing models coming soon

Tuesday, December 2, 2014

WeTest Weekend Workshops 2014 theme: Evolve



This last weekend (29/11/2014), we had our second WeTest Weekend Workshops 2014. The theme was "Evolve".







Wednesday, November 5, 2014

Shallow KPIs: A Tale of Two Testers

Once upon a time there was a thread on LinkedIn about KPIs for software testers.  A Test Manager shared the KPIs she uses for her team:

1. Amount of bugs created.
2. Amount of bugs verified
3. Amount of assigned work completed.
4. Confirm to schedule. 


(At the risk of accusing anyone on LinkedIn of being sloppy with their language, I will assume that by 'Amount of bugs created' she means "number of bugs logged in some bug tracking tool").

When challenged, she provided the following 'real life' scenario, as if the sheer power of this example would dazzle us all into submission:

"Tester1 – found 30 defects, verified all assigned issues by deadline.
Tester2- found 0 defects, verified 10% of issues assigned by deadline.
Who performed better Tester1 or Tester2?"

So who performed better?

Tester 2 of course. She didn't log any defects because she had established a strong working relationship with the development team, and as she found an issue, she wrote it on a sticky note, and gave it to the developer. The developer then would rapidly fix and redeploy. The tester would retest, and verify the fix. Because of this, she was able to reduce a lot of administrative overhead, and help the developers produce a high quality product.
Tester 2 was unable to verify all the issues assigned to her by the deadline because she was very thorough, and felt that meeting an arbitrary deadline didn't contribute to the overall health of the project. Instead, she focused on doing great work.

Meanwhile, Tester 1 logged many defects. They were poorly written, and many of them were just different symptoms of the same underlying issue. The developers had to spend a lot of time trying to decipher them, and would often spend many hours chasing down bugs that turned out to be merely configuration errors. Once, he logged 10 'defects' that were immediately 'fixed' when someone came over and updated his Java environment. A lot of time was spent administering the defects in the bug tracking tool, and trying to work out whether Tester 1's defects were legitimate or not.
Tester 1 works very hard to meet the deadline when verifying issues. To do so, he performs a very shallow confirmatory check. His vulnerability to confirmation bias has led him to verify many fixes as "complete" when there were regressive side-effects he didn't pick up on.

Tester 1 meets his KPIs and is up for promotion. In two years he'll be sharing his wisdom on LinkedIn.

Tester 2 has been told she isn't performing as required. She is going home tonight to update her resume. In a year she'll be working at a company that assesses her performance by watching her work and regularly catching up for peer review. In two years she'll be sharing her wisdom at a peer conference.
 

Friday, September 19, 2014

The Responsibilities of a Conference Facilitator

I have just returned from Let's Test Oz 2014, which, like the CAST conferences, operated on a K-Card style facilitation format.

During the three-day conference I saw the power a great facilitator can have. I got to experience first-hand the influence a good facilitator can have on the success of a talk, so I would like to offer my perspective on what makes a good facilitator.

Thursday, August 21, 2014

Very Short Blog Post: A date with test cases.

Here's a test case problem:

The requirement:

"Formatting is automatically applied to all date fields (dd/mm/yy formatted)"
 
Here are my findings after a 15 minute test session:
  • Formatting is automatically applied when entering dates as:
      • 12.12.2014
      • 12th Dec 2014
      • 12 Dec 2014
      • 12 December 2014
      • 12th December 2014
      • 12-12-2014
      • 12-DEC-2014
  • Formatting is not applied to:
      • 12.12.14
      • 12122014
      • 12th Dec 14
      • 12th December 14
      • 12/12/14

a) Did the requirement 'pass'?
b) According to some claims, it is best practice to write one positive test case and one negative test case per requirement. What would I have learned by writing and executing two test cases?
c) Some test management tools would report 100% coverage with 1 test case and if it passed, it would say that the requirement passed.
 
Maybe talking about testing in terms of test cases and of passes and fails isn't useful.
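The variety above can be made concrete with a small sketch. This is not the application under test: `datetime.strptime` merely stands in for a hypothetical date formatter, and the format strings are my assumptions about what each input was meant to express. The point is simply how many distinct representations of one date exist, which a single positive and single negative test case would never reveal.

```python
from datetime import datetime

# Several representations of the same date, 12 December 2014, drawn from
# the session findings. Each is paired with an assumed strptime format.
candidates = {
    "12.12.2014": "%d.%m.%Y",
    "12 Dec 2014": "%d %b %Y",
    "12 December 2014": "%d %B %Y",
    "12-12-2014": "%d-%m-%Y",
    "12-DEC-2014": "%d-%b-%Y",   # strptime matches month names case-insensitively
    "12.12.14": "%d.%m.%y",
    "12/12/14": "%d/%m/%y",
}

for text, fmt in candidates.items():
    try:
        parsed = datetime.strptime(text, fmt)
        result = parsed.strftime("%d/%m/%y")  # normalise to dd/mm/yy
    except ValueError:
        result = "not recognised"
    print(f"{text!r:20} -> {result}")
```

Every entry normalises to the same dd/mm/yy string, yet each one exercised the formatter differently in the session, some accepted, some rejected. That difference is invisible at the granularity of "one test case per requirement".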


Sunday, April 6, 2014

“Anyone Can Be a Tester” - Response

I was pointed to this article on Twitter: http://www.morethanfunctional.co.uk/1/post/2014/04/anyone-can-be-a-tester.html by Jari Laakso, who has already written a response here.

There are some things I agree with, some things I disagree with, and some things I think are just ugly.