Tuesday, December 20, 2011

Let’s Write an Automated Test Framework

I am a huge one-time fan of HP’s BPT framework.  Test execution may be slow, but it is the best tool I have found for a growing test team in a whole-team test automation environment.  BPT offers powerful abstraction of test scripts, making it easy for business-focused people and new test team members to build tests.  It enables automators to build tests with simple keyword-driven techniques, and to write more powerful and sophisticated test scripts when the situation demands it.  As soon as I learned to ignore the misguided documentation from HP on how to use BPT, it became a very powerful tool.

Over the past year, I have been struggling with the new (and supposedly improved) version of QC/BPT.  It has been a mess.  What was a promising enterprise solution for long-term use has, in my mind, crumbled into a pile of what-could-have-been.  As things stand now, it is time to write an open-source automated test framework.

There are many others who have gone down this path, most with mixed results (we have looked for other frameworks that meet our needs and haven’t been satisfied with anything yet).  I am still convinced that BPT (as we implemented it) is the best test automation framework out there.  And since so few other people have found BPT (or found it to be a suitable solution for them), I think my team is uniquely suited (though I am sure we are not alone) to drive the development of this project.  We have seen what works well.  We have solved the key problems of the automation Fire Swamp.  Remember the scene in The Princess Bride (1987) when Westley and Buttercup escape the Lightning Sand?

Westley: No, no. We have already succeeded. I mean, what are the three terrors of the Fire Swamp? One, the flame spurt - no problem. There's a popping sound preceding each; we can avoid that. Two, the lightning sand, which you were clever enough to discover what that looks like, so in the future we can avoid that too.

Of course, in the next moment, he learns about the Rodents of Unusual Size (ROUS) -- the third of the three terrors.

In the case of test automation, the three terrors are lack of readability, lack of maintainability, and lack of flexibility.  By using our implementation of BPT, we know what problems to avoid.

Look out, HP -- I think we might write a framework.

Sunday, December 4, 2011

The Role of Reputation in Agile Testing

In Agile development teams, your reputation is more important than ever.

In traditional development teams, your peers certainly had opinions about you.  They might love you, hate you, or have no idea who you are.  The formal process of traditional development protected you, and it limited the impact your reputation had on your professional work product.  For example, people from other groups may have approved the test scripts you wrote.  Or you might have executed test scripts written by someone else.  Your personal responsibility for your work was limited.  As long as you mechanically performed your tasks, you were covered.

All that changes in agile.  In the most agile of shops, teams are self-organizing, and the agile tester has a lot of discretion.  You are no longer protected by process.  You can succeed or fail on your own.

The roles on agile teams are not clearly defined in a traditional sense -- everyone should work together to complete the task, regardless of a person’s formal role.  Agile teams should be self-organizing and figure out the best way to do things on their own.  What is best depends on a lot of things: time, skills, interests, and trust.  Trust is based on reputation, and reputation is based on the sum of your past work.

At my company, we adhere strongly to the principles of agile, and development teams make a lot of their own decisions.  For example,  the amount of coding that happens during a sprint ultimately depends on the feeling of the team.  Of course, our teams are committed to delivering the most functionality they can each sprint, but the code has to be clean and the team has to have confidence in the work product.  

We develop and deliver code in two-week sprints.  In two weeks, there are only ten business days, and all our code is delivered into production at the end of the sprint.  Whether programmers feel comfortable creating new code for seven days or nine days of a sprint depends on things like testing confidence.  From the testing perspective, we need to provide good regression coverage (and share our results regularly with the team), and we must make everyone feel good about the sprint-level work we are doing.  It is, of course, essential to keep up to date with testing (immediate feedback), but the perception of success is greatly influenced by our reputation.  In this sense, a tester’s reputation has a real impact on the output of the development team.

Sunday, November 27, 2011

Test Automation and the Art of Poetry

You never know where life will take you.  In college, I studied literature – and mostly stayed away from the math/computer science buildings on campus.  For some reason, I was interested in the early English Renaissance and Sir Philip Sidney.  In one of Sidney’s works, A Defence of Poesie (also known as The Defence of Poetry and An Apology for Poetry), he writes about the purpose of poetry and his interpretation of art.

College was a long time ago, and I don’t remember a lot of my study of Sidney.  One thing I do remember is that Sidney wrote that the finest art hides its art.  The best art or poetry looks easy.  The finest art, the art that takes the greatest talent and the hardest work, looks simple when done well.  I like to think about this when I think about test automation.

A good test automation framework should look simple.  In fact, it should look stupid simple. It should be something that you can explain to someone without a lot of big words or long manuals.  It should be easy to read without having to think too hard.

With this as a guiding principle, I think a good automation program should rely on abstraction.  You can present complex ideas by breaking them into simple, generalized parts.  The simpler parts are often easy to describe so that people can understand them.  Good test automation should be abstracted to the business level.  By abstracting to the business level, you increase readability and open your work to others.  You don’t have to be a specialist to understand what is going on -- just as you shouldn’t need to be a PhD literary critic to appreciate good poetry.
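As a rough sketch of what business-level abstraction can look like (the class and method names here are hypothetical, not from our actual framework), the test itself should read like a business sentence while the mechanics hide in a supporting library:

```ruby
# Business-facing layer: the steps read like plain language.
class TransferTest
  def initialize(accounts)
    @accounts = accounts  # e.g. { "checking" => 100, "savings" => 0 }
  end

  # A "keyword" a non-specialist can read and reuse.
  def transfer(amount, from:, to:)
    withdraw(from, amount)
    deposit(to, amount)
    self
  end

  def balance_of(account)
    @accounts[account]
  end

  private

  # Technical layer: the detail lives below the business vocabulary.
  def withdraw(account, amount)
    raise "insufficient funds in #{account}" if @accounts[account] < amount
    @accounts[account] -= amount
  end

  def deposit(account, amount)
    @accounts[account] += amount
  end
end

# The test stays readable without a manual:
test = TransferTest.new("checking" => 100, "savings" => 0)
test.transfer(40, from: "checking", to: "savings")
puts test.balance_of("savings")  # prints 40
```

A new team member can read the last three lines without knowing how the library works -- that is the level of abstraction to aim for.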

You may have some clever, even complex, code behind the scenes, but the implementation should look simple.  In an automation framework, the test flows should look simple.  In the code libraries that support your framework, the code should look simple.  If not, refactor.  We expect the application developers to do this with their code; we should do the same.

If it looks complex, it is probably either bad automation or bad poetry.  The next time you are looking for a case for simplicity, look no further than Sir Philip Sidney.  Simplify, simplify, simplify!

Wednesday, November 16, 2011

Great Day for Ruby Programming (and Our Test Team)

While reviewing some Ruby blogs and forums, I found some information on the Ocra Ruby gem, and it is breathing new life into our scripting program.  Using Ocra, we can package our Ruby scripts into executables that we can distribute easily.  Ocra creates .exe files that bundle all the related files and Ruby gems used in a script.  These executables can be run on machines without worrying about dependencies -- you don't even need to install Ruby on the target machine.  This is one of the most liberating things I have seen in a while.
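To make the workflow concrete, here is a sketch (the script name and contents are hypothetical stand-ins for one of our utilities; `ocra` itself is the real gem):

```ruby
# expected_results_util.rb -- a hypothetical stand-in for one of our
# small test-team utilities.
#
# To build a standalone Windows executable, install the gem and point
# Ocra at the script:
#
#   gem install ocra
#   ocra expected_results_util.rb
#
# Ocra runs the script once to discover the files and gems it loads,
# then bundles them with the Ruby interpreter into
# expected_results_util.exe, which runs on a machine with no Ruby
# installation at all.

# Trim noise so expected-result lines compare cleanly.
def normalize_result(line)
  line.strip.squeeze(" ")
end

puts normalize_result("  pass    423   records  ")  # prints "pass 423 records"
```

The payoff is the distribution model: the .exe carries its own interpreter and gems, so teammates can run the tool without any setup.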

Since we started our Ruby study on the test team, we have developed utilities ranging from a services testing framework to a utility for updating expected results for automated tests to another utility for releasing locks in our QC/BPT tests.  These utilities are great, but they suffered because it is a bother to install Ruby, all the associated gems, and the referenced files on each machine.  Because it is a bother (we are sort of lazy), we often didn't do it, and the great work we did gathered dust.

Using Ocra today, we created executables, and this great work from the past year came back to life.  Now that we have verified that the gem works (thanks, Josh), it is hard not to get excited about the possibilities.  We developed libraries in our bigger projects (the services test framework, most notably) that we have referenced in other projects.  Now, I am thinking of other places we can use these resources.

It is all coming together.  The test team with the biggest tool kit wins.  Death to boring work.

Thursday, November 10, 2011

“Whole-team” Test Automation

At my company, we practice the art of “whole-team test automation.”  Very simply, this means that test automation is not a specialized skill on our test team.  For us, the ability to develop automated tests is a required skill for all members of our test team.  At our company, highly skilled automated test engineers are common as dirt.

If you are thinking that this seems like a lot of work, you are right.  It takes an effort.  So, why would we invest in this idea?
  • In agile development, rapid development of automated tests is critical.  While we still struggle with keeping up with automation during sprint work, the option is always there for the testers and the scrum teams to do what is best for the project.  This may mean aggressively developing regression tests throughout the project.  It is not acceptable for one project to have good coverage just because it happens to have a good automator – all projects deserve technically skilled testers.
  • At many companies, automated tests are developed, executed, and owned by a separate group within the test team. This creates a two-tiered test team.  It creates a situation where the most technical members become more technical because they have the most practice, and the least technical members have no opportunities to grow their skills.  That sucks if you are not technical and have no opportunity to grow.
  • The more technical a tester is, the more tools the tester has in his toolkit, and the better the tester will adapt to new situations.  The new and changing situations range from new technologies such as services testing to a fundamental change in how software testing is done.  Adapt or die.
  • Agile testers are developers.  As members of a software development team, everyone should know how to code.  It is good for your self-esteem, and it helps you gain and keep the respect of the other developers on your team.
If you are convinced that we are on to something, how do you go about it?
  • Build or select a suitable test framework and implementation methodology, and develop a transparent approach to testing.  For this, we chose HP’s BPT framework with QTP as the underlying automation engine.  Using BPT and our “Archetype” implementation technique, all members work together and take advantage of previous work and each other’s skills.  With a good framework, new team members can build tests, and experienced automators have opportunities to take on challenging work.
  • Good recruiting.  Regular, run-of-the-mill “QAs” are not the right people for the team.  Be prepared for this to take a long time.  Because of the history of testing (a combination of “QA” process types and non-technical manual script runners), many good people stay away from testing jobs.  My only advice here is to start with high standards, don’t let your standards drop, and keep hope.  Great people are out there.  And in my opinion, most recruiters will not be your friend.  They will pressure you to take conventional candidates.  Be wary of recruiters.
  • Be prepared to train people.  Some of the most obviously experienced candidates will take the most time.  Many experienced automators have not worked in a whole-team automation environment.  As a result, they tend to build tests for themselves and not for the team.  There is often “unlearning” that has to happen.
  • Promote technical skills. Learn new programming languages and new technologies.  This year when we studied Ruby together, we spent some time looking at Rails projects.  By looking at and studying Rails, we accidentally learned about MVC frameworks.  
  • Devote enough time. In addition to the sprint work that we do (with or without other testers), we devote an hour each day to work together on automation.  This way, we share ideas, learn from each other, set high expectations for each other, and continue to practice our art.  There is a bonus with this approach when you are working on projects that are not interesting.  If your project stinks, you can work with other team members on their automation needs.  Everyone wins.
As you live with the “whole-team” automated test philosophy, be prepared for things you do not predict.  Train your team, give them the tools and support, give up control, and let them surprise you.

Tuesday, November 8, 2011

Programming Challenges for the Test Team

This is a follow-up to an earlier post on Ruby programming.  This is part of the continuing effort to raise the technical bar on the test team (and to keep it high).  Here is the latest Ruby Challenge:

A farmer has a 40-pound rock that he uses with a balance-type scale to measure grains and feed.  He lends that rock to a neighbor, and the neighbor accidentally breaks the rock into four pieces.  He returns the pieces to the farmer and is very apologetic.  The farmer, unexpectedly, is pleased.  He says that with these four pieces of rock and the balance scale, he can now measure everything from 1 to 40 pounds.  What is the weight of each piece?

I know that some of the team know the answer (or at least remember the riddle).  But the answer doesn’t really matter.  Here is the challenge:  Using Ruby (or some other programming language), write a script that tests your guess. 

As a bonus, write an algorithm that derives all the possible combinations of stone weights.

Bonus points for elegant code.

Bragging rights for the first good script.
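For anyone who wants a head start without the answer, here is a minimal brute-force checker (the method names are my own invention): it treats each piece as sitting with the grain, staying off the scale, or sitting opposite the grain, and verifies that a guess covers every weight from 1 to 40.

```ruby
# Each piece can sit on the grain's pan (-1), stay off the scale (0),
# or sit on the opposite pan (+1); collect every positive net weight.
def measurable_weights(pieces)
  sums = []
  [-1, 0, 1].repeated_permutation(pieces.size) do |signs|
    total = pieces.zip(signs).sum { |(p, s)| p * s }
    sums << total if total > 0
  end
  sums.uniq.sort
end

# A guess is good if the pieces total 40 pounds and measure 1..40.
def valid_guess?(pieces)
  pieces.sum == 40 && measurable_weights(pieces) == (1..40).to_a
end

puts valid_guess?([10, 10, 10, 10])  # prints "false" -- keep guessing
```

Drop your guess into `valid_guess?` and iterate; the bonus search over all four-piece splits of 40 is left for the bragging rights.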

Wednesday, September 21, 2011

Migration from BPT 10 to 11: Changes to Data Tables

In standard QTP, data tables are the workhorses for managing test data.  They are used for data-driving tests, passing data from one action to another, storing large volumes of test results, and iterating tests.  In BPT, data tables take a back seat to other means of managing test data, most notably test parameter data.  But even in BPT, there is a place for data tables.  For example, test data from associated Excel files can be read in at run time, and table data from the application under test can be stored in the component data table and reviewed in the test logs.

In BPT 10, each component has a local datasheet.  Because each component runs independently (remember the slow execution speed of BPT 10 – one component is loaded, it runs, the component is closed, the next component is loaded, it runs, …), there is no global data sheet.

There is a key benefit in BPT 10 to the way tests are constructed.  In a component, you can create data tables named “Actual” and “Expected.”  If this component is used more than once in a test, you don’t have to worry about name contention -- the scope of a data table name is a single instance of a component.  In standard QTP, all data tables are exposed to the entire test, and you have to make sure that no two data tables share the same name.  In BPT 11, tests have much more in common with standard QTP tests, and data table names must now be unique.

In my migration from BPT 10 to 11, I had to rethink some of my key assumptions about how I use data tables.

  1. Because of the appearance of the global data sheet in the business component, I changed from using the DataTable.Import method to DataTable.ImportSheet.  The Import method loads the Excel sheet into the first data table. In BPT 10, the first data table is a local sheet; in BPT 11, the first data table is the global sheet. 

A surprise for me when I ran components migrated from 10 under 11 was that the imported data was not where I expected it to be.  This caused verifications to fail.  I was further surprised when some of my tests ran extra iterations.  They ran extra iterations because the Import method added rows to the global datasheet, and the global datasheet determines the number of iterations a test runs.

ImportSheet enables you to specify which sheet to import and where to place it.  When using ImportSheet, you have to know the name of the data sheet you want to overwrite – if your test iterates, this must be a unique name.

  2. Because the scope of data sheet names is no longer local to business components, you must manage the names of the sheets and make sure they are unique.  As you do this, consider the analysis of test results.  If your test has 10 iterations, there may be 10 instances of the “Expected_Result” sheet, and ten instances of a sheet with similar names are difficult to analyze.  Getting the iteration value of a component at run time is difficult, so I chose to make the sheet names unique by adding part of a time stamp to the name.  For clarity, I also added a line to the log file identifying which data sheet goes with which iteration.

Here is a summary of the changes I made:
BPT 10:
DataTable.Import GetCurrentTestPath() & "\" & strQCExcelSheet

BPT 11:
' Build a unique sheet name so iterations do not collide
strUniqueSheetName = "Reference_" & Minute(Now) & Second(Now)
' Log which sheet belongs to this iteration
Reporter.ReportEvent micDone, strUniqueSheetName, "Look for reference data at " & strUniqueSheetName
' Import into the named sheet instead of the (now global) first sheet
DataTable.ImportSheet GetCurrentTestPath() & "\" & strQCExcelSheet, 1, strUniqueSheetName

Generally, the migration from BPT 10 to 11 did not require changes to the automation code.  Underlying BPT is QTP, and the QTP code did not change.  The change in how BPT handles data tables may go unnoticed in many implementations, depending on which features of BPT are used.  For those who push the limits of how BPT handles data, this undocumented change to data tables will be a stumbling block.  I suppose there is a price to pay to get BPT to perform at an acceptable speed.  I’ll pay this one.

Wednesday, April 13, 2011

Checklist for Agile Testing

During a recent test team discussion, we reviewed the key features of agile software testing.  As a team, we all have a good idea of what is important with agile testing and how it is different from conventional testing.  We worked on the checklist to give us a tool for keeping our agile test mindset fresh.

Use the checklist to question your own agile test performance.  It is good for a self examination or a discussion with your project team.  For example, you can use it to share your vision of agile testing with a team that is new to agile development.

For most of the questions, "yes" is the best answer.  It is doubtful that you will answer "yes" (or whatever the best answer is) for all the questions -- use the questions and answers to provoke thought.

o       Am I freeing up mental shelf space for the lead programmer, scrum master, and/or business user?
o       Am I the scrum master for the project? If not, could or should I have been?
o       Did I help the scrum team understand risks?
o       Do I organize work for the project?
o       Have I made decisions in the best interest of the project?  Did I give up ownership and did I take on ownership of right things to make the project successful?
o       Did I help new team members learn the project?
o       Am I documenting the project, opening tickets and describing requirements?
o       Am I writing user stories?
o       Did I come up with any of the requirements for the project?  This is an indication that you are actively participating in the project.
o       Did I define and/or develop tests at the beginning of the sprint? Early work is good work.
o       Are other members of the scrum team helping with testing?  In agile, any member of the team should be able to test.
o       Do I feel like I am up to date with programmers and business users throughout the sprint?  A catch-up period later in the sprint is a bad sign.
o       During meetings, do I write on the white board?  This is an indication that you are actively thinking, communicating, and  participating in the project.
o       Have I sat with a programmer while he or she wrote code for the project?
o       Have I sat with the business users? 
o       Have I tested on the programmer’s workstation recently?  This is key to providing really immediate feedback.
o       Is there a lot of scrum team chatter?  Teams that talk to each other are more agile than those that don’t.
o       Am I giving really fast feedback to programmers on the most important things?
o       Am I known as “Fast Feedback [your name]” or “Quick Test [your name]?”
o       Are the programmers comfortable coding for the full sprint? Good agile testing can directly increase the productivity of the programmers.
o       Did I add value as a testing expert?  For example, did I help business users perform good test analysis?
o       Did I build any tools to help verify the project?
o       Did I update existing automated tests that changed because of my project work?
o       Did I use the simplest solutions for the problems I worked on?
o       Did we get user acceptance feedback very early and very often in the sprint?
o       Have I built regression tests for new functionality before or while it is being built?
o       How long are issues sitting with me before I turn them around?  Issues should sit no longer than a day or two.
o       How old are the bugs we are finding on the project?  The only good bug is a dead bug, and the best dead bug is one that didn't live long.
o       Is a lot of testing taking place at the end of the sprint?  Best answer is "very little."  Make an effort to test early and regularly throughout the sprint.  If you are testing a lot at the end, you are probably not being very agile.
o       Was more than one person involved in user acceptance sessions?  Do the users I worked with represent the user community well?

Wednesday, March 2, 2011

Distributed Agile Test Teams -- Making It Work

I just returned from a trip to Krakow, Poland, where I visited with members of my test team.  Our test team is split between two offices.  In one office, we have programmers and testers, and in the other office, we have testers and no programmers.  Because of this, it is not possible for all the scrum teams to be colocated.  This creates some challenges, but so far, everything has been a solvable problem.  In some ways, using distributed agile test teams is not ideal; in other ways, it is a big strategic advantage.

Here are some things we do to make distributed agile testing work:
  • Make the best use of the overlap hours.  In our case, we have at least three overlap hours between Poland and the eastern US.  These hours are precious.  During this time, we have stand-up scrum meetings and we work on test team projects.  
  • Invest in the right tools.  This means buying or building tools that enable everyone on all teams to have the same advantages.  For example, we migrated to Quality Center when we opened the Poland office.  QC is just about as responsive in Poland as it is in the US.
  • Communicate!  We use video conferencing, collaboration software, and instant messaging every day.  Cheap tools can erase the miles and bring your teams together.  We have started using Adobe Connect for video conferencing and collaboration.  One cool feature is the ability to record meetings.  This gives programmers the ability to pass along a quick recorded demo of the day's work to the tester on the other side.  User acceptance test sessions can be recorded -- in fact, a UAT performed during off hours can be reviewed later in another office.
  • Travel between sites.  New hires in our Poland office spend several months in the US working with the development and testing teams.  A big success factor is building strong relationships.  It is difficult to really get to know and trust someone if you only talk on the phone or communicate through email.  The benefits to the people traveling and to the people being visited outweigh the costs of travel.
  • Find the right projects.  Regression test analysis and many test automation-related projects are easily done in Poland.  Most new functionality projects are more easily done in the other office where there is better access to programmers and business users.  
  • Hire the right people.  This is always important, but it is really important when you have to rely on trust and communication with people that you don't see on a regular basis.

What I have found so far is that the issues related to distributed agile testing are common with distributed agile development.  If you are looking for more information on distributed agile development, here is a great resource: