Thursday, July 17, 2014

Tables in your Gherkin Scenarios

When writing gherkin, we describe the expected behavior in business terms.  Sometimes this is plain text; other times, mixing in small tables or lists describes the behavior better.  In the following feature file, there are examples of three types of gherkin tables:
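
Something along these lines -- the scenario names below are the ones discussed in this post, while the steps and table values are only illustrative:

Feature: Taking inventory of the house

  # A simple list of items (no header row)
  Scenario: Check for pets
    When I look around the house
    Then I should find the following pets:
      | dog    |
      | cat    |
      | parrot |

  # A table of items with a header row
  Scenario: Check for valuables
    When I look around the house
    Then I should find the following valuables:
      | item     | location    | value |
      | painting | living room | 5000  |
      | jewelry  | bedroom     | 3000  |

  # A named list of items with the labels in the left column
  Scenario: Check denominations of money in safe
    When I open the safe
    Then the safe should contain:
      | dollar_bills_20 | 5  |
      | dollar_bills_10 | 10 |
      | dollar_bills_1  | 50 |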


Each of these tables requires different implementation steps. Each is simple to use, but it is easy to use the wrong method by mistake (this happened to me).

Simple List of Items

In the first example, Check for pets, there is a simple list of items.  In the Ruby/Cucumber implementation, you use the ‘raw’ method to handle the list of items.  Use raw when you do not have a column header.
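
A step definition along these lines would consume that list (the step text matches the sketch above; check_house_for is a hypothetical helper):

Then(/^I should find the following pets:$/) do |table|
  pets = table.raw.flatten   # raw returns an array of rows; flatten the single column into a simple list
  pets.each { |pet| check_house_for(pet) }  # check_house_for is a hypothetical helper
end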


Table with Headers

In the second scenario, Check for valuables, there is a table of items with a header. In this example, use the hashes method. Hashes returns an array of hashes.  With this collection of hashes, you can then retrieve your values normally.
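
A sketch of the matching step definition (verify_valuable is a hypothetical helper):

Then(/^I should find the following valuables:$/) do |table|
  table.hashes.each do |row|
    # each row is a hash keyed by the header cells: row['item'], row['location'], row['value']
    verify_valuable(row['item'], row['location'], row['value'])  # hypothetical helper
  end
end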


List of Items with Side Header

In the final scenario, Check denominations of money in safe, there is an example of a named list of items with the labels in the left-hand column. With this type of table, you use the rows_hash method, which returns your data as a hash. In this example, you can see that each label (which is a Ruby symbol) follows the format :dollar_bills_[denomination].  This is because Ruby symbols cannot begin with numbers. This impacts the readability a little, but not too much.
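
A sketch of the step definition; note that rows_hash returns string keys, so turning a label into a symbol like :dollar_bills_20 takes an explicit conversion (verify_denomination is a hypothetical helper):

Then(/^the safe should contain:$/) do |table|
  denominations = table.rows_hash
  # => {"dollar_bills_20" => "5", "dollar_bills_10" => "10", "dollar_bills_1" => "50"}
  denominations.each do |label, count|
    verify_denomination(label.to_sym, count.to_i)  # hypothetical helper; label.to_sym gives :dollar_bills_20
  end
end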


Scenario outlines with examples are a way to data-drive scenarios using tables.  I didn't include an example of them; they are generally well understood, and there are more examples of them available than there are of the other tables.

Monday, October 14, 2013

Making Use of Your Fail Pics' Newfound Freedom

The benefit of letting your fail pics go is that it provides special flexibility in observing and analyzing test failures as they occur. As mentioned in the "extra bonus benefit" section of our last post, we have been having some trouble with our test execution lately. Between an unsettling number of illegitimate test failures and mysterious blank snapshots, we have found ourselves in need of a tool to help troubleshoot our test execution.

Real Time? How About Node.js...

We wanted a tool that would allow us to monitor snapshots as they occur in real time by listening to our snapshots directory and pushing newly added photos to a web client. At first, Node.js seemed like the perfect tool for the job. In fact, with a combination of Node.js, Express, Socket.IO and a directory watcher module, we had a simple implementation up and running on my local machine in little time. Unfortunately, we soon discovered that, while it worked well for watching directories on my local machine, the Node.js event-based file watcher does not work with network drives. As a result, we turned back to our good friend Ruby.

Back to Ruby...

Not only can Ruby watch the network drive, it has the added benefit of being the lingua franca of our testing team. This means that all members of the testing team can feel more comfortable reading and contributing to the project's code base in the future. The application was built using Ruby, Sinatra, and a directory watcher gem, making the server-side code simple enough to place in a single server.rb file:
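
A rough sketch of what that server.rb could look like, assuming the directory_watcher gem (the snapshot path comes from the ShadowSnapshot code in our earlier post; the event format and endpoint name are illustrative):

require 'sinatra'
require 'directory_watcher'
require 'json'

events = []  # in-memory list of snapshot events (not thread-safe; fine for a sketch)

# Watch the snapshot directory on the network drive for new .png files
watcher = DirectoryWatcher.new '//somefileserver/Testing/QC/snapshots',
                               :glob => '*.png', :interval => 5
watcher.add_observer do |*args|
  args.each do |event|
    next unless event.type == :added
    events << { 'path' => event.path, 'timestamp' => Time.now.to_f }
  end
end
watcher.start

# Long-poll endpoint: hand over every event newer than the client's time stamp
get '/events' do
  since = params[:since].to_f
  content_type :json
  events.select { |e| e['timestamp'] > since }.to_json
end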

On the client side, we long-poll the server for new events, passing in the time stamp of our latest result. The server then hands over all events with a time stamp later than the one we passed in. The client takes that response and adds each event to the page. Now, we are able to visit the "Quality Center Snapshot Feed" web page and watch the fail pics roll in.

Future Plans

The current implementation works, but it leaves plenty of room for refactoring and improvement. In the future, we plan to use WebSockets via EventMachine rather than long polling. In addition, we would like the application to associate each snapshot with the test machine it occurred on and the team that owns the test.

Wednesday, October 2, 2013

Let My Fail Pics Go

In Free Your QTP Test Results!, we shared how we create a shadow set of test results that we write to our own database and can use however we want. We are freed of the shackles of the QTP test results viewer.  We didn't stop there.  We also freed the snapshots that we take when QTP crashes or hits some wall of pain.

Before using the shadow data strategy, we had QTP take a picture that we could see in the result log.  We did something like this:

setting("SnapshotReportMode") = 0 'take a picture
Some test object.Exist(0)
setting("SnapshotReportMode") = 1 ‘turn off the pic taking

As I said, this did the job, but it also trapped the results in the result viewer.  We did some investigation on how to access the snapshots from Quality Center. There may be an easy way, but we didn't find it.  What we did find would have required us to uncompress the image file before using it, and the file is hard to find anyway.

To set the pics of our failures free, we implemented ShadowSnapshot:

Sub ShadowSnapshot()
    On Error Resume Next
    cycle_id = QCUtil.CurrentTestSet.ID
    run_id = QCUtil.CurrentRun.ID
    strUrl = "\\somefileserver\Testing\QC\snapshots\" & CreateTimeStamp & "_" & run_id & ".png"
    Desktop.CaptureBitmap strUrl 'Write the image to the file server
    'Publish the url to the result viewer
    Reporter.ReportEvent micDone, "View Desktop Snapshot", "See the desktop here: " & strUrl
    'Write to our database
    strSQL = "INSERT INTO cfc.test_snapshots (sn_cycle_id, sn_run_id, sn_snapshot_url, sn_type) " & _
             "VALUES (" & cycle_id & "," & run_id & ",'" & strUrl & "', 0)"
    Set objRecordSet = GetRecordSet(strSQL, "QC")
    On Error GoTo 0
End Sub

We save the snapshot to a file server, and we write the information we need to find the snapshot to a database table. We call the ShadowSnapshot method before we exit out of a test and when the recovery scenario is kicked off.

This has been working out great.  Our results are saved as .png files, which are lightweight and easy to use.  For example, we can easily view our images in web apps that we build to make our lives easier.  Personal happiness with shadow snapshots.

Extra bonus benefit:  We are troubleshooting some problems with our test execution.  Since we call ShadowSnapshot on most test fails, and the method adds a pic file to our file server, we can write a utility to troll the directory for new pics and display them in real time.  It is a Ruby project; we will share more on this soon.

Friday, September 13, 2013

Free Your QTP Test Results!

While building our web site to speed up our analysis of our daily regression tests, we needed to access result data quickly.  Our error messages and snapshots of crashes were trapped inside QTP result logs.  As anyone who has looked into these logs knows, they were not designed for casual deciphering.

Rather than fight the HP logs, file structures, compression methods, and database, we went around them.  We started collecting our own copy of key information -- free and easy for us to use.  With it, we were able to build a site that shows crash snapshots and error messages.



We developed shadow methods for collecting information.  When a test step fails or when something crashes, QTP writes information to its log, and we write a copy of the same information to our database or file server.

To support this, we created a database table, keyed off cycle and run ids, that stores error messages, and we wrote a ShadowFailMessage method for our QTP function library.
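
The column names below come straight from the INSERT statements in our code; the data types and exact definition are an illustrative guess:

CREATE TABLE test_snapshots (
    sn_cycle_id     INT,           -- QC test set (cycle) id
    sn_run_id       INT,           -- QC run id
    sn_snapshot_url VARCHAR(500),  -- snapshot url, or the fail message itself
    sn_type         INT            -- 0 = snapshot, 1 = error message
);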

To make this work, we added a call to the ShadowFailMessage subroutine after each fail (it would have been more elegant to have opened the Reporter class and added the additional functionality, but this is VBScript, not Ruby :( ).  The subroutine takes the error message content and writes it to a database table along with the other information needed to associate the message with the right test.  The pattern looks like this:

Reporter.ReportEvent micFail, title_of_error, "Expected: " & expected_value & " - Actual: " & actual_value
ShadowFailMessage title_of_error, actual_value, expected_value

Here is the code for the subroutine we referenced:

Sub ShadowFailMessage(nameOfDataValue, actual, expected)
    actual = Replace(actual, "'", "''") 'escape single quotes that would break the SQL query
    expected = Replace(expected, "'", "''")
    On Error Resume Next
    cycle_id = QCUtil.CurrentTestSet.ID
    run_id = QCUtil.CurrentRun.ID
    If expected = "" Then
        fail_message = Left((nameOfDataValue & ": " & "Error message: " & actual), 500)
    Else
        fail_message = Left((nameOfDataValue & ": " & "Expected: " & expected & "; " & "Actual: " & actual), 500)
    End If
    'Write to the test_snapshots table. Set sn_type to 1 to indicate an error message
    strSQL = "INSERT INTO test_snapshots (sn_cycle_id, sn_run_id, sn_snapshot_url, sn_type) " & _
             "VALUES (" & cycle_id & "," & run_id & ",'" & fail_message & "', 1)"
    Set objRecordSet = GetRecordSet(strSQL, "QC")
    On Error GoTo 0
End Sub

We also have Shadow subroutines for crash snapshots and actual/expected QTP datatables (for verifying reports and tables).

We no longer complain about the limitations of our existing tools – we own the solutions to our own problems.  In our case, we used open-source tools like Ruby and the Sinatra library to develop our own applications for using our test results.  I have even started to like HP's problems.  They have become excuses for some fun coding -- too bad we are paying for the problems, though.

Tuesday, September 3, 2013

What Have We Been Up To?

It has been a while since we updated this blog.  Fortunately for us, the reason we have not been writing is not because we are out of ideas, but because we have been implementing new ideas!

One problem that plagued us was how long it took to analyze our BPT/QTP test results.  HP's QC does some things well, but it is not exactly nimble.

  • You have to use Internet Explorer (I die a little inside when I use IE, and there is no way that other developers will use IE)
  • Updating results in QC is sooo slow (in this case, I do not die a little – instead, I want to kill a little. Why does it take so long to update things in QC, and why does it hang my browser while it makes updates?)
  • It takes too long to find the important information in the QTP result viewer
  • Licensing requirements just to view results through QC

Anyway, it is clear that we cannot dramatically increase the speed of our daily work by using the QC tools as they come from HP.

We developed our own tool for analyzing BPT/QTP results.  In this view, we can see almost everything we need to understand why a test failed – at one quick glance, maybe even from my iPhone – and we can complete our analysis (without any of the HP “slows”).



There are a couple of cool stories/technical solutions related to this tool:

  • Super-fast updating of QC results – even setting tests to rerun
  • Creation of shadow results – pictures and fail messages
  • Super support for analysis of tabular results
  • Prioritization of results
  • Browser plug-in for the QTP results viewer
  • How good/reliable is this test?
  • Easily sharing test results with others on the development team

Expect detailed postings on each of these topics (for real, this time).

Thursday, May 2, 2013

Our Current GUI Framework – Love and Hate

I have started a series of blog posts on how to write a test framework.  I am excited to share our story (and soon share tools that we developed).  Before getting too far into that, I would like to share some details on a significant influence on how we see things:  the dreaded bad relationship.

I have a passionate love/hate relationship with our current GUI test framework.  We use HP’s Business Process Testing (BPT) framework which uses HP’s QuickTest Pro (QTP) as the execution engine.

It doesn’t take a Googling genius to find complaints about HP tools – I have written some of those complaints.  I also know that many of the complaints are bogus.  I am sure that many are written by people who have never learned how to use the tools well and have never worked with a talented team to implement them (sometime, I will write a proper defense of QTP).

From my perspective, HP tools are the best thing out there for GUI testing.  And I can’t wait to move far away from them (sorry, HP).

What makes BPT so great?
  • Great abstraction.  All test frameworks have some level of abstraction; BPT has several. From lowest to highest: low-level function libraries, high-level function libraries, an object repository (a great xref between the properties of a GUI object and a logical name), the ability to use keywords, functions, and custom scripts, business components that package all the bits into something with business-user meaning, and finally the test, where components and test data are assembled into tests.  As I look back over the list, I know that it is probably not so meaningful to those not on my team, but I think BPT adds better abstraction than other frameworks I have seen.
  • Super easy to read tests.  If you understand our business, you will understand our tests.
  • Clear separation between automation development and test building. Perhaps not unique to BPT, but it does it well.  Depending on your needs, you can build automation without understanding or caring about the use case, build tests without any automation knowledge, or do it all.
  • Ideal for scaling and growing teams. Several years ago, our team was small and we used regular QTP (scripted tests with shared actions – regular stuff).  It was hard to scale beyond a couple automators – we couldn’t agree on anything.  BPT’s structure and our implementation enabled us to grow from 5 automators to over 20, and from one office to four offices in different countries. BTW, the implementation was no thanks to HP.  Much of what we do is not standard HP implementation (because there is no standard HP implementation!)
Where is it suckish? (i.e., how has our experience with HP influenced the design features of our own frameworks?)
  • Sucks for integration with development teams. Between licensing issues and the QC interface, this is not a tool for the full dev team.  No one outside of the test team wants anything to do with the HP tools.  For us, HP tools interfere with our goals of sharing ownership of tests.
  • Execution is slow as hell. We run our full suite every night, and it takes every bit of the night. As an advocate of BPT, it embarrasses me to no end when someone new watches one of these dinosaur runs. As we move to new tools (and look for ways to enhance our current tools), the new goal is to regression test our apps fully in under 20 minutes (we have to trim about 9 hours from the current time).  HP tools are not going to help us there.  Our new approaches are and will be fully scalable and capable of cloud bursts.
  • Support is a nightmare. There was a period after a new HP release that I was sure I worked for HP as a tester.  The release was a nightmare, every issue I reported was a homework assignment for me, and I was helpless to solve my own problems. No more test tools that I cannot fix myself.
  • “A good used Toyota” – this is what you could buy for the price of a single floating QTP license. The core parts of a good framework (for web apps) are open source and there for the taking. Free out of pocket. I know that time is not free, but it is much more satisfying to spend time writing your own tools than burning time with a closed system that you can't directly influence.
Love and hate.  Not a perfect relationship, but we learned a lot from it, and it’s time to move on.  Our HP experience has guided us and will continue to guide us as we move on with services testing and GUI testing frameworks.

Monday, March 25, 2013

Importance of a Common Programming Language on Test Teams


How can a technical team work together on techy stuff if they don't understand the same language?  On our team, the native languages are English, Polish, Spanish, and possibly Jive (<-- funny Airplane reference).  Fortunately, everyone speaks English (even though the native English speakers  don't speak Polish, Spanish, or Jive -- other than enough Polish to get me slapped).  We are able to work together in English on business facing testing issues.  To work as a technical test team, we need a technical lingua franca.

A lingua franca is a common language shared by people from different backgrounds so they can understand each other and they can work together.  At home, different languages are used; at work (or at market or international diplomacy), an agreed upon, common language is used.  The common language is no better than the others, but it enables people to communicate better.

Programming teams are typically formed with a built-in lingua franca.   If the team is developing web apps in Java, then all the programmers hopefully share a working knowledge of Java.   At home or on side projects, these same developers may switch to a language that is more comfortable for them, such as Python, F#, or Zimbu.  Even though these programmers may speak Zimbu at home, they can still communicate with other programmers at work in Java.

Test teams don't often have the same advantage.  Testers often range from non-technical to super technical.  And because there is no industry-standard language for testers (not that I am advocating one), it is more challenging for test teams to work together on technical solutions.  For example, members of our test team have experience with VB (VBA/VB Script), Ruby, Python, Java, C#, and a few other languages.

We are literate, but if we are not literate in the same language, what good is it?  We can build things for ourselves, but we cannot leverage ourselves and our influence.  We cannot grow greater than one.   If we each build tools and testing applications in VB, Ruby, Python, Java, and C#, we cannot take advantage of each other's work, understand each other's work, develop significant tools, or maintain that work.  With a common programming language, we can become a more powerful team.

There are a few considerations in picking a common language. It might make sense to form around the common development language in your shop (C#, in our case), go with the most common language on your team (probably VB), or select from open-source languages commonly used in the test space, such as Ruby or Python.  Any way you go, pick a language and go with it.

We selected Ruby as our lingua franca.  When we started, none of us had any experience with it.  Part of our shared experience was learning to use it.  We worked together before and after work and at lunch to learn the basics of the syntax and understand the personality of the language.

As a team, we work together, challenging and motivating each other.  This common language is helping to pull us together as a team.  We work together on skill-building challenges and building applications to help us with our jobs.  Simple tools are becoming more ambitious as we grow skills and gain confidence.

A side benefit of our experience with Ruby as our shared language is that it has helped many of us on the test team better understand technology discussions related to our application development.  For instance, when our .NET development team talks about MVC frameworks, ASP files, and reconciling NuGet references, we can translate into our own language to understand:  a Rails-type framework, .erb or .haml files, and RubyGems.  We can understand others better, and we now have a vocabulary for talking about these things.