Monday, December 20, 2010

Taking a Step With Ruby

Last month, I questioned what we on a test team would gain from learning Ruby and how we could apply it in our daily work.  We already use QuickTest Pro to automate much of our testing.  QTP is a powerful tool that allows us to interact with our applications and gives us a platform for VBScript utilities.  Given that, what can we expect from Ruby?

A few days ago, we took the first step as a team to see what there is to see.  We bought copies of Brian Marick’s “Everyday Scripting with Ruby: For Teams, Testers, and You.”

One of the things I like about Marick’s approach is that he stays away from GUI testing.  He mentions it and brings up WATIR, the web GUI automated testing framework for Ruby, but he focuses the book on smaller, very practical goals.  For example, instead of teaching how to automate a web page, he uses the example of building a utility that sends you a text message when a long-running process completes.  He shows Ruby as a bionic arm for a tester, not a full-fledged robot tester.  From the perspective of learning (no one on our team has much experience with Ruby or similar object-oriented languages such as Python), this approach seems like a great idea.  It results in small projects that are easy to complete, practical to use, and valuable for building skills.
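To give a flavor of that kind of utility, here is a rough sketch of my own (not code from the book); it runs whatever command you give it and then mails a carrier's email-to-SMS gateway.  The SMTP host and the addresses are placeholders.

    # Rough sketch: run a long command, then text yourself when it finishes.
    # The SMTP host and the addresses below are placeholders; use your own.
    require 'net/smtp'

    command = ARGV.join(' ')
    system(command)                               # run the long process and wait for it
    status = $?.success? ? 'finished' : 'FAILED'

    message = "Subject: #{command} #{status}\n\nYour long-running process is done.\n"

    Net::SMTP.start('smtp.example.com') do |smtp|
      smtp.send_message(message, 'me@example.com', '5555551234@txt.example.com')
    end

Running something like "ruby notify_when_done.rb rake regression_suite" would page you when the suite wraps up.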

Our goal as a team is to work through the book and its exercises over the course of a few months.  This will be one of those voyages where we don't know exactly where we are going or when we will get there.  Wish us luck.

Wednesday, December 1, 2010

Give up the “Need” for Testing

We currently staff our development projects according to a standard ratio of programmers to testers – each project has a dedicated test resource who is expected to perform most or all of the testing.  With the prospect of rapid growth, we may not be able to keep our same ratios, and testing may be stretched thin.  While thinking about this challenge, I played with the idea of changing our test model so that testing is built around a core test team, not around a required ratio on each project team.

Doing this would require the test team to give some things up.  For example, we would have to give up the notion of testing or QA being an independent verification entity.  This concept is a leftover from traditional waterfall development.  In agile, teams hold to a “whole team” philosophy where all members of a team work together to complete a job, no matter their formal roles.  Testing does not always have to be done by testers.  And testers may take on other roles, such as requirements analysis.

You would also have to give up the notion of the guaranteed need for a test resource.  This is an uncomfortable spot to put yourself in, but it is probably helpful to consider yourself expendable from time to time. It can be a great motivator to prove your value.

So, when you give up the “need” for testing resources, what do you have left?  You have to sell yourself and your team on your own merits.

What should the role of the test team be?  Simply, you should do what you do best: 

 ·  Develop, execute, and analyze regression tests.  In a continuous integration environment, good regression tests and great analysis are an essential safety net and the best way to provide immediate feedback on quality.
 ·  Develop test tools and find the most efficient ways to solve testing problems.
 ·  Provide insight into test analysis and help teams find weak spots in their code.
 ·  Ask questions.  Good testers are great question askers.  Keep asking those questions; they are one of the best tools for building quality into products.
 ·  Be a trusted advisor and provide trusted assessments. 
 ·  Finally, know your craft really, really well – this cannot be overstated.  If testing resources are not “needed,” they will be used only when they are respected and valued.

This is one of those times when you have to give up something in order to gain it.  Give up the guaranteed need for testing and give up the limits of conventional testing.  It is an existential leap with some risk, but the potential reward is a lean, efficient test team.

Thursday, November 11, 2010

Learning Ruby (or Python), What Does That Get Us?

In a recent team meeting, we discussed ways to increase our technical capability.  Currently, we have a fairly technical test team.  Many of us are very good (or getting very good) at VBScript, we have passable SQL skills, and we come from a variety of backgrounds that include technical training and programming.  Compared with most test teams, we are very technical, and we have the interest to take it further.  The question is “where?”

One idea that came up was for us to learn another scripting language, such as Ruby or Python.  The next thought, though, was: what would we do with that knowledge?  Hmm.

Using a scripting language with a testing library such as WATIR or Selenium, we can interact with web objects.  Having the ability to interact with web objects and script test logic is a powerful thing.  With it, you can write many utilities to assist with testing.  These utilities do not have to be full-fledged, stand-alone automation tools, but they are great levers for hybrid testing: tool-assisted manual testing.
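As a rough illustration (this assumes the Watir gem is installed, and the URL and element names below are made up), a quick hybrid-testing helper might look something like this:

    # Minimal Watir sketch; the URL and field names are hypothetical.
    require 'watir'

    browser = Watir::Browser.new
    browser.goto('http://example.com/login')

    browser.text_field(:name, 'username').set('test_user')
    browser.text_field(:name, 'password').set('secret')
    browser.button(:name, 'login').click

    # a quick, eyeball-able check while testing by hand
    puts browser.text.include?('Welcome') ? 'login looks good' : 'login failed?'

    browser.close

A dozen lines like these will not replace a regression suite, but they are exactly the kind of lever that makes tool-assisted manual testing practical.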

But we have QTP (and, for now, enough licenses).  So what would Ruby or Python get us that we don’t already have?  Here are a few thoughts, in no particular order:
* Technical prestige.  VBScript is a great tool, but it doesn’t get as much respect as other languages.  Jerks.
* License extender.  If you use Ruby or Python for day-to-day tasks, you do not tie up expensive QTP licenses.
* Professional growth.  Learning and using open-source tools enables you to evaluate different tools and to participate in more industry discussions.
* An open mind.  Learning how to perform a task using different tools opens you to different approaches to other things.
* Interaction with the open-source community.  Becoming proficient with these tools may give you an opportunity to give back.

Now, we need to find the time to learn Ruby (or Python).

Thursday, October 28, 2010

Am I Agile or Mini Waterfall? Do I care?

After our development team migrated to agile, we assumed that we were an agile test team.  After some time and belly-button staring, we came to the conclusion that we were a mini-waterfall test team and not very agile.

We tested and delivered code every four weeks, and we went to scrum meetings.  It looked agile, but our mini-waterfall approach was very traditional.  Programmers and product owners designed and implemented the product enhancements, while testers stood against the wall waiting for code to test at the end of the sprint.  We were second-class members of the development team.

The move to agile testing started with admitting that we were not already there and with trying to articulate what exactly agile testing is.  While not everything changed (we still analyze and test software), we changed our testing philosophy.  We found ways to change our role from tester to “developer.”  We took steps to own the early days of a sprint by taking on more of a business analyst role.  We engaged the product owners earlier and more often, changing from software watchdog to advocate for functionality and usability.  And we actively sought ways to provide immediate feedback throughout the sprint, reducing the end-of-sprint rush.

So, should you care whether you are mini-waterfall or agile?  We do.  Agile testing gave us something that we lacked with mini-waterfall: a way to become full-share members of the development team.

Tuesday, October 26, 2010

How HP Failed BPT

HP failed Business Process Testing (BPT) when it rolled the automated test framework out several years ago, and it has not done much with it since.

HP marketed BPT as a way for non-technical content experts to build automated tests.  The original tutorials showed an implementation where the non-technical content experts designed and built components (the reusable building blocks of tests) by simply stringing together objects and keywords.  Then the content experts dragged the components together to build tests.  That tutorial demonstrated the functionality of BPT, but it failed to give even a hint about how to implement it successfully.

Non-technical content experts are the wrong group to use when building the infrastructure for a test system.  The system is good, but it is not automagic.

Non-technical content experts should never be the ones to decide the scope and purpose of components.  The components are way too important, and the approach is way too random.  There are too many ways to define a component, and if you don’t have a pattern or model in mind, you will end up with a bunch of components that are redundant, inefficient, and hard to maintain.  The strategy for designing components should be treated like a software development activity.  An automator should be involved here.

In addition to confusing the issue of who should design components, HP provided no advice on how to design them, and there is still a fair amount of confusion about this.  For example, should we build components that each perform a complete business process?  That makes sense based on the name, “Business Process Testing.”  But when you have a component called “Register a New Member,” you are locked into the same positive flow.  How would you use that component for negative tests, such as verifying what happens when you attempt to register a new member with an invalid social security number?  Components scoped to whole business processes make sense only if you want to test “happy path” cases.
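To make the scoping problem concrete (BPT components are keyword-driven, not Ruby, but the trade-off translates; the step names below are invented), compare a coarse component with smaller, composable ones:

    # Small, single-purpose "components," stubbed out for illustration.
    def open_registration_form;       puts 'open registration form';             end
    def enter_member_details(member); puts "enter details for #{member[:name]}"; end
    def submit_registration;          puts 'submit registration';                end
    def verify_confirmation_page;     puts 'verify confirmation page';           end
    def verify_error(text);           puts "verify error: #{text}";              end

    # A coarse "Register a New Member" component locks in the happy path...
    def register_new_member(member)
      open_registration_form
      enter_member_details(member)
      submit_registration
      verify_confirmation_page
    end

    # ...while the smaller pieces also compose into a negative test.
    member = { :name => 'Pat Example', :ssn => '000-00-0000' }
    open_registration_form
    enter_member_details(member)
    submit_registration
    verify_error('Invalid social security number')

The same reasoning applies whether the "components" live in Ruby, VBScript, or a BPT component tree: scope them like functions, not like test cases.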

After this confusing rollout, BPT languished without a lot of attention from HP or the testing community.  This has been really frustrating to those of us who use and see the value of BPT.  As I see it, BPT is the perfect tool for agile testing.  It is a great tool for getting the power of automation to all members of the team without being limited by the level of technical skills.

Monday, October 25, 2010

Testers Need Not Apply


The fault may lie with me and the job description I posted, but I am not seeing the candidates I would like to see for my agile testing team.  This is causing me to slow down and question what I really want.

A few minutes of brainstorming, and here is what I came up with (“up with which I came,” if you prefer):
  • A question asker who makes no assumptions about requirements or how to test
  • A project manager who can herd developers
  • A negotiator adept at talking to gunmen in hostage situations and skilled at getting programmers to do his bidding
  • A questioner of authority
  • A nerd repelled by bureaucracy and unnecessary process
  • A get-it-done-no-matter-whose-job-it-is person who can own a project
  • A curious technician who knows how to build testing tools
  • A voracious reader and active learner who takes responsibility for professional skills and knowledge
  • A programmer and an automator (automation skills are not the same as programming skills, but that is another story)
  • A lovable pain in the ass who looks for practical solutions, not the “process” that is always done
  • An optimist who has not been beaten down by bad experiences in conventional “QA” organizations

After reading a pile of resumes and talking with a number of candidates, I am not finding what I am looking for (or “that for which I am looking,” jerk).   I don’t think that my expectations are unreasonably high.  I want great, smart investigators who don’t know “no.”  Too many testers have been taught to limit themselves and to have low expectations.  Too many testers don’t take pride in professional knowledge and growth.

I am beginning to think that maybe I don’t want a “tester.”  Many testers have to unlearn what they know about testing before they can learn to be true agile testers.  They have to unlearn limits. That unlearning may not be worth the effort. 

Hopefully, I will find the right agile tester from the testing community, but I may look for a smart recent graduate – someone I can train and never let have low expectations.  

In testing, we often create our own limits. I hate that.

Sunday, October 24, 2010

The Magic of Automation Hour

We had a problem.  As a test team, we had two conflicting goals.  We needed to develop new automated regression tests, and we needed to work with developers and product owners at the beginning of the sprint.  Before introducing automation hour, we didn’t do either well. 

We develop software in four-week sprints.  In QA, our goal is for everyone to develop automated tests – we don’t want automation to be a specialized skill. 

Before we introduced automation hour, we found ourselves working on our backlog of automation projects after one sprint was completed – during weeks one and two of the following sprint.  When we focused on automation at the beginning of a sprint, we missed the planning and requirements discussions with programmers and product owners.  By the time we did become involved with the sprint work, we were behind and didn’t know the project well.  Weeks three and four were full-on testing efforts, and our automation work stopped.  We were never in sync with the developers.

About a year ago, we started automation hour: one shared hour a day that everyone on the QA team sets aside to work on automation.  We now work on automation during all four weeks of the sprint, even the hectic third and fourth weeks.  During weeks one and two, we limit our automation work to that hour, which frees and encourages us to involve ourselves more fully in the opening days and weeks of the sprint, long before anything is ready to test.  We are able to keep automation hour going during weeks three and four because our automation work is spread evenly throughout the sprint.

Automation hour has freed us to focus on the broader aspects of agile testing while continuing to grow our automated test coverage.

Saturday, October 23, 2010

Introduction

In this blog, I intend to discuss topics related to agile testing: manual testing, automated testing, innovation, and professional growth. 

Some of my perspectives are colored by my frustration with how QA/test teams typically operate: testers are second-class team members with minimal technical and professional skills who have little influence on the projects being developed.  Additionally, most automated test implementations are failures, either too simple or too complex to work.  Among other things, I will share our approach to automated testing using HP’s Business Process Testing (BPT).

I look forward to sharing ideas and perspectives with you and learning from your feedback…Bob