All posts by julian

QA Friendly Agile

I wrote this piece because I was having trouble understanding the Agile process that our development team was trying to use from a QA point of view. My QA team was struggling to make sense of when and what they should be testing, and what input they should be having on the process as a whole.

I had five main points in mind when I was trying to develop this updated process:

  1. Better visibility for QA on things that they can test directly (a feature) and things that the developer would be better off testing with some input from QA (a task).
  2. Input from the developer on the test plan and cases that QA would like to use.
  3. Getting QA involved at the beginning and making sure that QA and Dev can work on a feature in parallel, meaning QA will have all their ducks in a row, ready to test the feature as soon as it’s finished being developed.
  4. Pushing as much of the testing as possible down to the unit test level, where automation is much cheaper and more reliable.
  5. Getting a feature into the best possible shape for QA functional tests. By having good unit tests to find issues before a feature reaches the QA functional test stage, we should find fewer issues.

How to do it?

We need a ticket for each feature (these might be user stories in some cases). It is made up of tasks at the component level, and each of these tasks should have a set of unit tests. The tasks can then flow through the process with only a small amount of QA involvement, because the developer will have written the unit tests, and if they all pass the task is QA passed.
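
Just to make point 4 concrete, here is a quick sketch of the sort of task-level unit tests I mean, written with Python's unittest. The apply_discount function is completely made up for illustration; the point is that the developer covers the component's behaviour and edge cases at this level, with QA chipping in on the cases:

import unittest

# Hypothetical component-level function for one task - made up purely for illustration.
def apply_discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100.0, 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

if __name__ == "__main__":
    unittest.main()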

Once all the tasks associated with the feature ticket have passed QA, then we as QA can get involved and test that feature. This will probably be done at the UI level, and we can then create some UI-level automated scripts. We can also create an agile test plan for what we are going to test in that feature and run it by the developer whilst the tasks that make up the feature are being coded and tested. This helps us to work in parallel.

QA will know that they need the full set of tasks (which will be associated with the feature ticket) to have passed QA before they start testing the feature.

One of the benefits of this is that we push much of the testing down to the lowest, unit test level. This should mean that when we come to test the feature we find fewer issues, as the tasks that make up the feature have already been tested with a good set of unit tests. QA should therefore be faster, leaving more time for getting the environment requirements and test data sorted and for automated tests to be created at the UI level to test the feature.

I would see the issue logging process being the same as it is currently once QA is done: create a new bug ticket, pop all the issues found in there and put it in the sprint backlog.

Process

So to sum up, this is how I see this working:

  1. Feature A ticket created (should include a UI element).
  2. Task tickets (which would probably not have a UI element) created for the work required to implement feature A and linked to the feature A ticket.
  3. QA checks the feature makes sense to them, then writes an agile test plan for the feature, starts to write the test cases and works out any test setup / data that is required.
  4. QA reviews the test plan with the developer and updates it as necessary.
  5. The developer works on each task ticket, works with QA to validate the testing they are doing and the unit tests created, and the task tickets move along the board and end up in QA Passed during the course of the sprint. This can happen at the same time as steps 3 and 4.
  6. Once all the linked task tickets have passed QA, QA can start running the test cases they have already written (in step 3).
  7. Once the manual tests have passed, QA can automate a regression test at the UI level.
  8. The feature is then DONE.

Notes:

Post-its for features are placed on the sprint board at the start of the sprint, and as each associated task is worked on a Post-it for that task is placed on the board. The feature ticket remains in Development Doing until all of its subtasks have reached QA Passed on the board. Once this has occurred, the feature ticket moves to QA Doing.

See the process flow diagram below.

[Process flow diagram: QA Friendly agile process]

Why we chose Unfuddle…

Back in a previous life I used a customised version of Bugzilla for bug tracking. It was fine; the UI was a bit basic but it worked. Then I started to notice how much time my manager was spending on its maintenance: making sure the backups were working, standing it back up when it went down, and so on.

At that point I thought to myself that when I got into a position where I was choosing a bug tracker / project management solution, I wanted one that I didn’t have to spend time maintaining. To cut a long story short, that glorious day did arrive and I set to work looking for just such a product.

In order to avoid the maintenance headache I decided a cloud-based solution was in order. There is a wide selection of these, so I ran a small project to get the development team’s input; I wanted the whole team to buy into using such a solution and into the winning contender.

I looked at these 4 solutions:

https://unfuddle.com/


https://lighthouseapp.com/


https://bugrocket.com/


http://www.fogcreek.com/fogbugz/


As you would expect, each of these solutions had pros and cons, but I had a wish list of features that we as a team thought we needed:

1. Had to be easy to use (we wanted no barriers that might stop colleagues from all areas of the business using the tool)

2. Had to allow configuration of bug / new job statuses

3. Had to have a kanban-style view (we use kanban and have a whiteboard that shows our jobs, so we wanted our solution to let us view jobs in the same way)

4. Ability to create ticket reports easily

5. Have some project management functions

6. Allow unlimited users included in the monthly fee

7. Have Git hosting.

How they shaped up

Taking the team’s feedback along with my own thoughts, below is a summary of how the solutions shaped up:

1. Lighthouse had a very nice user interface but fell down due to its lack of project management features.

2. Bug Rocket was super easy to use but again did not have strong project management features.

3. FogBugz was very feature-rich, both for bug / new feature tracking and in the project management area. The downsides were that its user interface was rather old-looking and somewhat difficult for a non-technical user.

4. Unfuddle was easy to use, had a nice user interface, had adequate project management features and had that all-important kanban task board view. It also has built-in Git hosting, although at the time of writing we have not yet migrated from our current GitHub-hosted repos. In short, it hit all of the items on my wish list, rather than just some of them like the other solutions.

To sum up, Unfuddle does a lot of things pretty well, which suited us. You of course may have different needs, so do a little project of your own to find out!

Let me know what you’re using and why in the comments!

Thanks for reading!

Free test management tools

There seem to be plenty of free test tools out there but finding a good free test management tool is not so easy.

I started by using RIACase. It’s pretty simple and easy to use, but I soon found that it had a few features missing that I really needed. It’s a good one to start with as it’s a simple install and it will help you make the initial move from spreadsheets.

I am currently using Tarantula. It’s not quite as straightforward an install as RIACase, but it has more features and is geared towards an agile test environment. It has a nice dashboard and good user administration. It does have a rather unique workflow, so it’s well worth visiting the tour area of their website. Once you get used to it, it’s pretty easy to use.

If you find any more good free test management tools, please let me know in the comments!

Python editor

I have recently started to convert some of my Selenium Builder test scripts into Python from the default JSON format so I can make them a bit more modular and easier to maintain. I was searching for a good Python IDE that I could use and found that PyCharm really fitted the bill.

It has some great features and there is a free version. If you’re using Python as your test script language then you should check it out.
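
To give you an idea of what I mean by modular, here is roughly what a converted script might look like, sketched with the Python Selenium WebDriver bindings. The URL and the checks are placeholders rather than anything from my real scripts:

from selenium import webdriver

BASE_URL = "http://example.com"  # placeholder, not one of my real sites

def open_home_page(driver):
    # Small reusable step - the kind of modularity the JSON scripts don't give you.
    driver.get(BASE_URL)

def check_page_title(driver, expected):
    assert expected in driver.title, "Expected '%s' in the page title" % expected

if __name__ == "__main__":
    driver = webdriver.Firefox()
    try:
        open_home_page(driver)
        check_page_title(driver, "Example")
    finally:
        driver.quit()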

Using Selenium Builder to find and verify content in the page source

I needed to test some elements that were being added to certain pages on a website. I already had some tests that looked for visible page content, but no tests that checked for elements only visible in the page source.

I am using the JSON script format used by Selenium Builder in this example. To do the check I used this step:

{
  "type": "verifyElementAttribute",
  "locator": {
    "type": "xpath",
    "value": "//link"
  },
  "attributeName": "hreflang",
  "value": "blah"
},

The important bit is the xpath value of ‘//link’. I was looking for this string in the page source:

<link rel="alternate" href="awebsite" hreflang="blah" />

The xpath finds the link tag, and once it is found I can test that the hreflang attribute has the value of ‘blah’. Using the same method I can also test the other parts of the link rel tag.
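
If you are doing the same check from Python with the WebDriver bindings of that era rather than the JSON script format, it looks something like this; the URL is a placeholder:

from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("http://example.com")  # placeholder URL
    # Find the link tag in the page source and check its hreflang attribute.
    link = driver.find_element_by_xpath("//link")
    assert link.get_attribute("hreflang") == "blah"
finally:
    driver.quit()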

Hope this helps you if you’re looking to do a similar thing!

dpxdt visual testing tool

A large amount of my time as a tester is spent checking web pages for differences, whether they are due to functionality changes or to layout and styling changes. Also, when a site has some underlying technology changed or has been moved to a new platform, you need to check for changes to the front-end look of the site.

I spent quite a lot of time looking for a way of automating this process, so I could compare web pages from before and after a change and only had to check pages that actually looked different. It’s quite a tricky thing to do, but I finally managed to find a tool that would do it, and the best bit is that it’s free and open source.

https://dpxdt-test.appspot.com

Once the tool is set up (you might need a friendly DevOps person to help, as it’s not super easy), you can point it at a page and it will crawl that page and all the links from the page, to a crawl depth that you can specify, and take screenshots of all of them. This creates the baseline that you will compare against. You then do the same thing against the updated site, and once it has completed it will provide you with a list of all the pages in the crawl that differ from their baseline versions. For each pair of pages it will also identify the parts of the image that are different.
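
dpxdt handles the crawling, the screenshots and the reporting for you, but the core idea of comparing a baseline image against a new one can be sketched in a few lines of Python with the Pillow library. This is just to illustrate the concept (and it assumes the two screenshots are the same size); it is not how dpxdt itself is implemented:

from PIL import Image, ImageChops

def changed_region(baseline_path, candidate_path):
    # Returns the bounding box of any difference between two same-size
    # screenshots, or None if they are pixel for pixel identical.
    baseline = Image.open(baseline_path).convert("RGB")
    candidate = Image.open(candidate_path).convert("RGB")
    return ImageChops.diff(baseline, candidate).getbbox()

box = changed_region("baseline/home.png", "latest/home.png")
if box:
    print("Page changed, differing region:", box)
else:
    print("Page unchanged")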

The tool makes a great addition to a continuous integration setup, and that’s really what it’s aimed at, but we have it set up just as a standalone test tool.

It also makes regression testing a whole bunch easier!

Give it a whirl and let me know how you’re using it!

A very simple automation framework using Selenium Builder

In one of my previous posts I wrote that I was going to explain how I set up a very simple test automation framework using Selenium Builder, so here goes!

These instructions are for windozzzzz by the way!

The first thing you’re going to need is Firefox and the Selenium Builder plug-in. Once you have these installed you can record your first script; instructions on how to do this are here.

Next we need to grab Se Interpreter, which will allow us to run our new script, created using Selenium Builder, via the command line.

Create a new folder and pop the files you downloaded for Se Interpreter into it, then also put the script you created with Selenium Builder in the same folder.

Now we want to run our recorded script from the command line. To do this you’re going to need to open a command prompt in the folder that contains your script file and your Se Interpreter files. The command should look something like this:

java -jar SeInterpreter.jar <path_to_script.json>

If you run this you will see Firefox start up, the script will run and then Firefox will close. You should see the logging in the command window, which will tell you whether the test passed or failed.

The next step is to get a log file created when you run a script; if we are going to be scheduling these tests, we will need to check the log files after the run to see our results.

We are going to need to add something to the command line in order to do this:

java -jar SeInterpreter.jar <path_to_script.json> > "C:\Path to logfile\MyLog.log" 2>&1

If you now run this, a log file will be created in the folder you stipulated in the command above, with the name you have given it.

You can also create a test suite file, which lists the individual test script files. This means that you can run a bunch of scripts while only telling Se Interpreter to run one test suite file. You just have to switch the script name in the command to the test suite file name and let the tool do the rest. In your log file you will get the results of each script that is run (the ones you have stipulated in your test suite file).

Now we have the scripts and our command line sorted, we need to schedule the scripts to run at a certain time. I like my scripts to run overnight against the latest development build so I can check the next morning how the build did. I am running my tests on Windows, so I created a bat file from the command line above and scheduled it to run at 11pm each weekday night. I could have used the Windows scheduler to do this, but I found that it ran the scripts in headless mode (you could not see Firefox opening up on the desktop while the scripts were running). This caused me a problem, as some of my scripts tested pop-up error messages and these need the browser GUI to be open. The free tool I used to do the scheduling instead is here.

You can write more test scripts, add them to your test suite and they will then get run at the time you have scheduled. You can also add more suites to the bat file as new lines, e.g.:

java -jar SeInterpreter.jar <path_to_test_suite1.json> > "C:\Path to logfile\MyLog1.log" 2>&1
java -jar SeInterpreter.jar <path_to_test_suite2.json> > "C:\Path to logfile\MyLog2.log" 2>&1

Each one of these lines will create its own log file for you to use to check the results.
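
If you end up with a pile of log files to wade through each morning, a little Python script can do a first pass over them. The "failed" keyword below is an assumption about what Se Interpreter writes on a failing step, so check your own logs and adjust it to match:

import glob

# Assumption: a failing run writes a line containing "failed" to the log.
# Adjust the keyword and the folder to match your own setup.
FAILURE_KEYWORD = "failed"
LOG_FOLDER = r"C:\Path to logfile"

for log_path in glob.glob(LOG_FOLDER + r"\*.log"):
    with open(log_path) as log_file:
        contents = log_file.read().lower()
    status = "FAIL" if FAILURE_KEYWORD in contents else "PASS"
    print("%s: %s" % (log_path, status))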

There you have it: possibly the easiest automation framework possible. You can get something like this set up in a few hours or less, then extend it as you think of more tests!

Web form validation testing part 1

Most websites have forms: places where a user can add details in order to provoke an action from the website, such as signing up for a customer account or a checkout where you have to add your address and payment details. Most forms will have some sort of validation so that the data entered by the user is usable by the website, and also to help the user avoid mistakes.

The eCommerce site that my company uses has lots of different forms, and I wanted to write some comprehensive automated regression tests to check that the validation was still working after any changes to the site. I was specifically interested in testing the input combinations that should be allowed, and the ones that should provoke an error that is displayed to the user.

I was not looking at how the form should handle an incorrectly formatted input, e.g. a wrong email address format, but rather at the combinations of good data that could be entered into the form.

For example, we have a contact us form on the site; see the screenshot below for the form fields:

[Screenshot: the contact us form fields]

A number of fields on this form are compulsory, and there are different combinations of entries that are allowed and others that are not.

The first thing I needed to do was get all the possible combinations of inputs for this form. Not all of these will be valid, but we can start to narrow down the test cases once we have created them all.

Luckily there are a few tools available that will create a truth table of results when you give them a number of input fields. I used this one:

http://truthtablesolve.sourceforge.net/

For the contact us form, which has 9 input fields, you just have to add nine fields into the truth table solver and hit solve. It will then spit out a matrix that includes every combination of inputs that is possible for the form. Next, pop this into a spreadsheet application; you should end up with something like this:

[Screenshot: the first few rows of the combinations matrix in a spreadsheet]

A ‘1’ in a cell indicates a valid input and a ‘0’ indicates no input.

You can see from the screenshot that the test results for these rows (this is just the first few rows in the sheet; there are 513 rows in the actual sheet!) are N/A, which means that the combination of inputs for that row is not actually possible. You have to go through your results and identify these cases, along with the ones that should allow the form to be submitted and the ones that should provoke an error and not let the user submit the form.
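
As an aside, if you’d rather generate the combinations in code than use the web tool, a few lines of Python with itertools.product will spit out the same matrix. The field names here are made up for illustration; they aren’t the exact fields on our form:

import itertools

# Illustrative field names - swap in the real fields from your form.
fields = ["name", "email", "phone", "order_number", "subject",
          "message", "newsletter_opt_in", "attachment", "callback"]

combinations = list(itertools.product([0, 1], repeat=len(fields)))
print(len(combinations))  # 512 combinations for 9 fields

# Dump them to CSV so they can be reviewed in a spreadsheet.
with open("combinations.csv", "w") as out:
    out.write(",".join(fields) + "\n")
    for row in combinations:
        out.write(",".join(str(value) for value in row) + "\n")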

Now you will have a list of tests that covers all the possible combinations of good data input. The next thing is to create scripts that run each of the identified tests and check whether the form correctly accepts or rejects the input.

Part 2 of this blog will focus on creating those automated scripts using the JSON script format that can be run using Selenium Builder!

Added benefits of bug tracking systems

If you’re involved in software development you need a bug tracker. There, I said it. The reason for this post is to talk a little about the benefits of a good bug tracking system: not the obvious ones, but some benefits that you may not have thought of.

1. They help you to put process into your bug fixing

By using statuses for a bug as it moves through the fix process, you can make sure that it passes through all the different gates before it gets released. Examples of the different statuses that I use in my process are:

1. New = Ticket created but not yet looked at
2. Hard Shoulder = Tickets that cannot be moved due to something blocking development
3. Elaboration Doing = Ticket is in the process of being correctly spec’ed and time required estimated
4. Elaboration Done = Ticket has been spec’ed and time required estimated
5. Development Doing = Development work has started on the ticket
6. Development Done = Development work has been completed on the ticket
7. Peer Code Review Doing = New code is peer reviewed to check for errors (static analysis should also be run on the code)
8. Peer Code Review Done = Review has been completed and any errors fixed
9. QA Doing = Functionality of new development tested by QA on beta
10. QA Done = Planned testing of new development functionality completed
11. UAT Doing = (UAT = User Acceptance Testing) End business users test the software to make sure it can handle required tasks in real-world scenarios, according to specifications. This is done on beta
12. UAT Done = UAT complete
13. Ready to deploy = Developer has prepared the new code to be deployed into the live environment
14. Done Live = DevOps has deployed the new code to the live environment
15. Closed = All parties happy with new live functionality so ticket can be closed

This might seem like a rather long list, but I have found there is definite value in each of these statuses, and when the process is followed the team has reassurance that any problems should be found before the functionality goes live.

2. They become a knowledge base

The first thing I do when a new bug is raised is search the bug tracker for similar bugs. This provides two things:

1. It tells me if the bug has already been raised

2. It tells me if similar bugs have been raised. If this is the case, I can look back on the issues recorded in those bugs, and they may well help the developer diagnose the problem in the new, similar bug. Also, because I log what testing I did to check the bug fix, I can often use a variation of that testing to verify the fix for the new bug, which can be a real time saver.

3. They help you identify troublesome areas of an application

Most bug trackers have reporting functionality. Used in conjunction with some custom bug statuses that indicate the area of the application where the bug was found, this can help to identify weak points in the application. That information can then be used to work out which areas seem fragile and where it would be useful to have a good set of regression tests to pick up problems early. It could also flag up code areas where a refactor should be done to make that area more robust.

4. Identify the devs whose code has the most bugs…

Possibly a bit divisive, this one, but rather than using it as a stick to beat a dev with, it should be used to help the dev get some extra training or mentoring to improve their code.

5. See which dev is overloaded

Which dev has the most bugs assigned to them? Should some of those bugs be spread around to the other devs in the team?
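
Most trackers will let you export tickets to CSV, and a short Python script over that export will give you the numbers for points 3 and 5. The column names here are assumptions, so match them to whatever your tracker actually exports:

import csv
from collections import Counter

area_counts = Counter()
assignee_counts = Counter()

# Assumed column names - adjust to match your tracker's CSV export.
with open("bugs_export.csv") as export:
    for ticket in csv.DictReader(export):
        area_counts[ticket["component"]] += 1
        assignee_counts[ticket["assignee"]] += 1

print("Buggiest areas:", area_counts.most_common(3))
print("Most loaded devs:", assignee_counts.most_common(3))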

That’s a few added benefits of bug tracking systems that you maybe hadn’t thought about. I am sure there are more; if you have any examples, please put them in the comments.

Some thoughts on automated web GUI testing…

I have used quite a number of test automation tools during my career; some were expensive enterprise software packages such as QTP and LoadRunner, whilst others were open source, such as Selenium.

Whether you think open source tools are better than paid-for ones is a matter of what suits your needs, but a powerful driver has always been, and always will be, cost. However, that’s only really half the story, because even though some paid-for tools may have a high up-front cost, their ease of use and ROI may be better than open source tools.

Let me explain that statement. Generally, when using open source tools some coding knowledge is required. What if your test team does not have this knowledge? Sure, they can learn, but this is going to be an investment and therefore a cost. They are going to take time to get up to speed: time that could be spent writing test scripts and automating tests if the testers could just get stuck in right away. The benefit of paid-for tools is that they tend to be easier to get started with and can be used happily with no programming knowledge (QTP, for example), and you should get good customer support if you have any issues.

However, having just said all that, I think there is finally a tool that is open source (free) and also very easy to use… please take a bow, Selenium Builder!

I have been using Selenium Builder for around a year now and it’s pretty great. I use it for regression testing the eCommerce site that my company uses to sell its stuff. The front-end site is a mixture of HTML and JavaScript, and Selenium Builder seems to work pretty well with it.

Selenium Builder is a Firefox plug-in. Before you all shout “what about the other browsers I need to test!”, it may be worth considering that, aside from visual differences, most functional issues will show up in every browser, so just using Firefox means that I am not going to miss many bugs at all. You of course might be testing a complex web app, and in that case you will need good cross-browser regression test coverage. You can also do this with Selenium Builder scripts, but you will need to configure your environment to run a Selenium server.
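
For what it’s worth, pointing the same kind of test at a Selenium server for another browser doesn’t take much code with the Python bindings of that era; the hub URL below is just the default local address, so treat it as a placeholder:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Assumes a Selenium server is already running at this address.
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME,
)
try:
    driver.get("http://example.com")  # placeholder URL
    print(driver.title)
finally:
    driver.quit()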

One of the cool things about Selenium Builder is that its default script format is JSON, so scripts are easily readable and therefore easy to edit. Also, the Firefox plug-in has a nice GUI that you can use for editing scripts without even having to open them in an editor.

It’s one of the few record-and-playback tools I have used that works well. In my experience, recorded scripts play back very consistently, and of course this makes scripts very quick to create.

Thanks for reading!