Monday, October 14, 2013

Making Use of Your Fail Pics' Newfound Freedom

The benefit of letting your fail pics go is the new flexibility it gives you to observe and analyze test failures as they occur. As mentioned in the "extra bonus benefit" section of our last post, we have been having some trouble with our test execution lately. Between an unsettling number of illegitimate test failures and mysterious blank snapshots, we have found ourselves in need of a tool to help troubleshoot our test execution.

Real Time? How About Node.js...

We wanted a tool that would allow us to monitor snapshots as they occur in real time by listening to our snapshots directory and pushing newly added photos to a web client. At first, Node.js seemed like the perfect tool for the job. In fact, with a combination of Node.js, Express, Socket.IO, and a directory watcher module, we had a simple implementation up and running on my local machine in little time. Unfortunately, we soon discovered that, while it worked well for watching directories on my local machine, the Node.js event-based file watcher does not work with network drives. As a result, we turned back to our good friend Ruby.

Back to Ruby...

Not only does Ruby let us watch the network drive, it has the added benefit of being the lingua franca of our testing team. This means that all members of the testing team can feel more comfortable reading and contributing to the project's code base in the future. The application was built using Ruby, Sinatra, and a directory watcher gem, making the server-side code simple enough to place in a single server.rb file.
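Here is a stripped-down sketch of the idea behind that server.rb. It is not our production code: a hand-rolled directory poll stands in for the directory watcher gem, and the path, endpoint, and view names are just placeholders.

require 'sinatra'
require 'json'
require 'thread'

# Forward slashes keep Dir.glob happy with the UNC path on Windows.
SNAPSHOT_DIR = '//somefileserver/Testing/QC/snapshots'

EVENTS = []              # each entry: { 'url' => ..., 'timestamp' => ... }
EVENTS_LOCK = Mutex.new

# Background poller: every few seconds, record any .png that has appeared
# since the last sweep of the snapshot directory.
Thread.new do
  last_seen = Time.now
  loop do
    sweep_started = Time.now
    Dir.glob("#{SNAPSHOT_DIR}/*.png").each do |path|
      next unless File.mtime(path) > last_seen
      EVENTS_LOCK.synchronize do
        EVENTS << { 'url' => path, 'timestamp' => File.mtime(path).to_f }
      end
    end
    last_seen = sweep_started
    sleep 5
  end
end

# Long-poll endpoint: the client passes the timestamp of its latest result
# and gets back every event newer than that.
get '/events' do
  since = params['since'].to_f
  content_type :json
  EVENTS_LOCK.synchronize { EVENTS.select { |e| e['timestamp'] > since } }.to_json
end

# The "Quality Center Snapshot Feed" page itself (views/feed.erb not shown).
get '/' do
  erb :feed
end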

On the client side, we long poll the server for new events, passing in the time stamp of our latest result. As you can see, the server then hands over all events with a time stamp later than the one we passed in. The client takes that response and adds each event to the page. Now, we are able to visit the "Quality Center Snapshot Feed" web page and watch the fail pics roll in.

Future Plans

The current implementation works, but leaves plenty of room for refactoring and improvement. In the future, we plan to use WebSockets via EventMachine, rather than long polling. In addition, we would like the application to associate each snapshot with the test machine it occurred on and with the team that owns the test.

Wednesday, October 2, 2013

Let My Fail Pics Go

In Free Your QTP Test Results, we shared how we create a shadow set of test results that we write to our own database and can use however we want. We are freed of the shackles of the QTP test results viewer.  We didn’t stop there.  We also freed the snapshots that we take when QTP crashes or hits some wall of pain.

Before using the shadow data strategy, we had QTP take a picture that we could see in the result log.  We did something like this:

setting("SnapshotReportMode") = 0 'take a picture
Some test object.Exist(0)
setting("SnapshotReportMode") = 1 ‘turn off the pic taking

As I said, this did the job, but it also trapped the results in the result viewer.  We did some investigation on how to access the snapshots from Quality Center. There may be an easy way, but we didn’t find it.  What we found would have required us to uncompress the image file before using it, and the file is hard to find anyway.

To set pics of our failures free, we implemented ShadowSnapshot:

Sub ShadowSnapshot()
    On Error Resume Next
    'IDs used to tie the snapshot back to the right test set and run
    cycle_id = QCUtil.CurrentTestSet.ID
    run_id = QCUtil.CurrentRun.ID
    strUrl = "\\somefileserver\Testing\QC\snapshots\" & CreateTimeStamp & "_" & run_id & ".png"
    Desktop.CaptureBitmap strUrl 'Write the image to the file server
    'Publish the url to the result viewer
    Reporter.ReportEvent micDone, "View Desktop Snapshot", "See the desktop here: " & strUrl
    'Write the pointer to our database
    strSQL = "INSERT INTO cfc.test_snapshots (sn_cycle_id, sn_run_id, sn_snapshot_url, sn_type) " & _
             "VALUES (" & cycle_id & "," & run_id & ",'" & strUrl & "', 0)"
    Set objRecordSet = GetRecordSet(strSQL, "QC")
    On Error GoTo 0
End Sub

We save the snapshot to a file server and we write the information we need to find our snapshot to a database table. We call the ShadowSnapshot method before we exit a test and when the recovery scenario kicks off.

This has been working out great.  Our snapshots are saved as .png files, which are lightweight and easy to use.  For example, we can easily view our images in web apps that we build to make our lives easier.  Personal happiness with shadow snapshots.

Extra bonus benefit:  We are troubleshooting some problems with our test execution.  Since we use ShadowSnapshot on most test fails, and the method adds a pic file to our file server, we can write a utility to troll for new pics in the directory and display them in real time.  It will be a Ruby project, and we will share more on it soon.

Friday, September 13, 2013

Free Your QTP Test Results!

While building our web site to speed our analysis of our daily regression tests, we needed to access result data quickly.  Our failure messages and snapshots of crashes were trapped inside QTP result logs.  As anyone who has looked into these logs knows, they were not designed for casual deciphering.

Rather than fight the HP logs, file structures, compression methods, and database, we went around them.  We started collecting our own copy of key information -- free and easy for us to use.  We were able to build this site with crash snapshots and error messages:



We developed shadow methods for collecting information.  When a test step fails or when something crashes, QTP writes information to its log, and we write a copy of the same information to our database or file server.

To support this, we created a database table that stores error messages and is keyed off cycle and run IDs, and we wrote a ShadowFailMessage method for our QTP function library.

To make this work, we added a call to the ShadowFailMessage subroutine after each fail (it would have been more elegant to have opened the Reporter class and added the additional functionality, but this is VBScript, not Ruby).  This takes the error message content and writes it to a database table along with the other information needed to associate the message with the right test.  The pattern looks like this:

Reporter.ReportEvent micFail, title_of_error, "Expected: " & expected_value & " – Actual: " & actual_value
ShadowFailMessage title_of_error, actual_value, expected_value

Here is the code for the subroutine we referenced:

Sub ShadowFailMessage(nameOfDataValue, actual, expected)
    'Escape single quotes that would break the SQL query
    actual = Replace(actual, "'", "''")
    expected = Replace(expected, "'", "''")
    On Error Resume Next
    'IDs used to tie the message back to the right test set and run
    cycle_id = QCUtil.CurrentTestSet.ID
    run_id = QCUtil.CurrentRun.ID
    If expected = "" Then
        fail_message = Left((nameOfDataValue & ": " & "Error message: " & actual), 500)
    Else
        fail_message = Left((nameOfDataValue & ": " & "Expected: " & expected & "; " & "Actual: " & actual), 500)
    End If
    'Write to the test_snapshots table; set sn_type to 1 to indicate an error message
    strSQL = "INSERT INTO test_snapshots (sn_cycle_id, sn_run_id, sn_snapshot_url, sn_type) " & _
             "VALUES (" & cycle_id & "," & run_id & ",'" & fail_message & "', 1)"
    Set objRecordSet = GetRecordSet(strSQL, "QC")
    On Error GoTo 0
End Sub

We also have Shadow subroutines for crash snapshots and actual/expected QTP datatables (for verifying reports and tables).

We no longer complain about the limitations of our existing tools – we own the solutions to our own problems.  In our case, we used open-source tools like Ruby and Sinatra to develop our own applications for using our test results.  I have even started to like HP's problems.  They have become excuses for some fun coding -- too bad we are paying for the problems though.

Tuesday, September 3, 2013

What Have We Been Up To?

It has been a while since we updated this blog.  Fortunately for us, the reason we have not been writing is not because we are out of ideas, but because we have been implementing new ideas!

One problem that has plagued us is how long it takes to analyze our BPT/QTP test results.  HP’s QC does some things well, but it is not exactly nimble:

  • You have to use Internet Explorer (I die a little inside when I use IE, and there is no way that other developers will use IE)
  • Updating results in QC is sooo slow (in this case, I do not die a little – instead, I want to kill a little.  Why does it take so long to update things in QC, and why does it hang my browser while it makes updates?)
  • It takes too long to find the important information in the QTP result viewer
  • Licensing requirements to view results through QC

Anyway, it is clear that we cannot dramatically increase the speed of our daily work by using the QC tools as they come from HP.

We developed our own tool for analyzing BPT/QTP results.  In this view, we can see almost everything we need to understand why a test failed – at one quick glance, maybe even from my iPhone – and we can complete our analysis (without any of the HP “slows”).



There are a couple cool stories/technical solutions related to this tool:

  • Super-fast updating of QC results – even setting tests to rerun
  • Creation of shadow results – pictures and fail messages
  • Super support for analysis of tabular results
  • Prioritization of results
  • Browser plug-in for the QTP results viewer
  • How good/reliable a given test is
  • Easily sharing test results with others on the development team

Expect detailed postings on each of these topics (for real, this time).

Thursday, May 2, 2013

Our Current GUI Framework – Love and Hate

I have started a series of blog posts on how to write a test framework.  I am excited to share our story (and soon share tools that we developed).  Before getting too far into that, I would like to share some details on a significant influence on how we see things:  the dreaded bad relationship.

I have a passionate love/hate relationship with our current GUI test framework.  We use HP’s Business Process Testing (BPT) framework which uses HP’s QuickTest Pro (QTP) as the execution engine.

It doesn’t take a Googling genius to find complaints about HP tools – I have written some of those complaints.  I also know that many of the complaints are bogus.  I am sure that many are written by people who have never learned how to use the tools well and have never worked with a talented team to implement them (sometime, I will write a proper defense of QTP).

From my perspective, HP tools are the best thing out there for GUI testing.  And I can’t wait to move far away from them (sorry, HP).

What makes BPT so great?
  • Great abstraction.  All test frameworks have some level of abstraction. BPT has several. From lowest to highest: low-level function libraries, high-level function libraries, the object repository (a great xref between the properties of a GUI object and a logical name), the ability to use keywords, functions, and custom scripts, business components to package all the bits into something with business-user meaning, and finally the test level, where components and test data are assembled into tests.  As I look back over the list, I know that it is probably not so meaningful to those not on my team, but I think BPT adds better abstraction than other frameworks I have seen.
  • Super easy to read tests.  If you understand our business, you will understand our tests.
  • Clear separation between automation development and test building. Perhaps not unique to BPT, but it does it well.  Depending on your needs, you can build automation without understanding or caring about the use case, build tests without any automation knowledge, or do it all.
  • Ideal for scaling and growing teams. Several years ago, our team was small and we used regular QTP (scripted tests with shared actions – regular stuff).  It was hard to scale beyond a couple automators – we couldn’t agree on anything.  BPT’s structure and our implementation enabled us to grow from 5 to over 20 and from one office to four offices in different countries. BTW, the implementation was no thanks to HP.  Much of what we do is not standard HP implementation (because there is no standard HP implementation!)
Where is it suckish? (i.e., what design features for our own frameworks has our experience with HP influenced?)
  • Sucks for integration with development teams. Between licensing issues and the QC interface, this is not a tool for the full dev team.  No one outside of the test team wants anything to do with the HP tools.  For us, HP tools interfere with our goals of sharing ownership of tests.
  • Execution is slow as hell. We run our full suite every night and it takes every bit of the night. As an advocate of BPT, it embarrasses me to no end when someone new watches one of these dinosaur runs. As we move to new tools (and look for ways to enhance our current tools), the new goal is to regression test our apps fully in under 20 minutes (we have to trim about 9 hours from the current run time).  HP tools are not going to help us there.  Our new approaches are and will be fully scalable and capable of cloud bursting.
  • Support is a nightmare. There was a period after a new HP release that I was sure I worked for HP as a tester.  The release was a nightmare, every issue I reported was a homework assignment for me, and I was helpless to solve my own problems. No more test tools that I cannot fix myself.
  • “A good used Toyota” – this is what you could buy for the price of a single floating QTP license. The core parts of a good framework (for web apps) are open source and there for the taking. Free out of pocket. I know that time is not free, but it is much more satisfying to spend time writing your own tools rather than burning time with a closed system that you can't directly influence.
Love and hate.  Not a perfect relationship, but we learned a lot from it and it’s time to move on.  Our HP experience has guided us and will continue to guide us as we move on to services testing and GUI testing frameworks.

Monday, March 25, 2013

Importance of a Common Programming Language on Test Teams


How can a technical team work together on techy stuff if they don't understand the same language?  On our team, the native languages are English, Polish, Spanish, and possibly Jive (<-- funny Airplane reference).  Fortunately, everyone speaks English (even though the native English speakers don't speak Polish, Spanish, or Jive -- other than enough Polish to get me slapped).  We are able to work together in English on business facing testing issues.  To work as a technical test team, we need a technical lingua franca.

A lingua franca is a common language shared by people from different backgrounds so they can understand each other and work together.  At home, different languages are used; at work (or at the market, or in international diplomacy), an agreed-upon common language is used.  The common language is no better than the others, but it enables people to communicate better.

Programming teams are typically formed with a built-in lingua franca.  If the team is developing web apps in Java, then all the programmers hopefully share a working knowledge of Java.  At home or on side projects, these same developers may switch to a language that is more comfortable for them, such as Python, F#, or Zimbu.  Even though these programmers may speak Zimbu at home, they can still communicate with other programmers at work in Java.

Test teams don't often have the same advantage.  Testers often range from non-technical to super technical.  And because there is no industry-standard language for testers (not that I am advocating one), it is more challenging for test teams to work together on technical solutions.  For example, members of our test team have experience with VB (VBA/VBScript), Ruby, Python, Java, C#, and a few other languages.

We are literate, but if we are not literate in the same language, what good is it?  We can build things for ourselves, but we cannot leverage ourselves and our influence.  We cannot grow greater than one.  If we each build tools and testing applications in VB, Ruby, Python, Java, and C#, we cannot take advantage of each other's work, understand each other's work, develop significant tools, or maintain that work.  With a common programming language, we can become a more powerful team.

There are a few considerations in picking a common language. It might make sense to form around the common development language in your shop (C#, in our case), go with the most common language on your team (probably VB), or select from open-source languages commonly used in the test space, such as Ruby or Python.  Whichever way you go, pick a language and go with it.

We selected Ruby as our lingua franca.  When we started, none of us had any experience with it.  Part of our shared experience was learning to use it.  We worked together before and after work and at lunch to learn the basics of the syntax and understand the personality of the language.

As a team, we work together, challenging and motivating each other.  This common language is helping to pull us together as a team.  We work together on skill-building challenges and building applications to help us with our jobs.  Simple tools are becoming more ambitious as we grow skills and gain confidence.

A side benefit of our experience with Ruby as our shared language is that it has helped many of us on the test team better understand technology discussions related to our application development.  For instance, when our .NET development team talks about MVC frameworks, ASP files, and reconciling NuGet references, we can translate to our language to understand: a Rails-type framework, .erb or .haml files, and RubyGems.  We can understand others better, and we now have a vocabulary for talking about these things.

Wednesday, March 13, 2013

How to Write a Testing Framework


Sometimes, test frameworks seem to be a dime a dozen.  There are many commercial and homegrown frameworks.  All have the same general goal – make automated testing easier.  This is an awesome goal.  When done well, a framework enables testers to focus on high-value work and takes care of the low-value stuff (important, but low value for people to do).

Over the course of the next several posts, I will share the story of a services testing framework that several members of my test team and I wrote (with some technical and design help from friends and well-wishers).  We started with a few goals and principles, and we are driving toward an open-source release of the tool.

To give you a sneak peek, here is the current start page of Cham (there is more than a start page -- we have a working system with high levels of abstraction and fast, scalable performance).

Start page of Cham
From this view, you can see some of the things that are important to us when we think about frameworks.
  • Separate the technical stuff from the tests.  This is the difference between the templates section and the build tests section.  Hide technical stuff where you can and build awesome tests using business words (also known as people words).  Tests built using people words are easier to share, to discuss, and to maintain.
  • Focus on making everything understandable by a wide audience.
  • Results are easy to share with people who care.  In Cham, we do this a couple ways.  One is by simply making it easy to pass specific results to people (our pages are RESTful and you can simply give someone a URL). Another way (not seen here) is by making the tests work with the developer workflow.  For example, programmers can use Cham to test services before code is checked in.
  • Make it easily accessible.  You can’t see this here, but everything is done with open-source code (go Ruby) and libraries so there are no licensing issues limiting access.
  • Make it look nice – this gives users more confidence in the tool.  And it makes you feel better about yourself.  We can all use that.



Tuesday, February 12, 2013

Automatic Rerun: QualityCenter Workflow

For those of you who haven't been keeping up with the View into the Automation Lab story, we have testers who are spending way too much time babysitting tests to make sure they run.  Combine that with the fact that we can't see what is actually running or where it's running, and we end up with huge gaps in machine utilization, meaning we are wasting not only people time, but also machine time.

The Automatic Rerun

Our attempt to automate the rerun process through QualityCenter had to start with some backend processing.  First, we needed a database table to store the rerun information.  I'm including the database here since it has to be done and it's pretty simple.  We implemented a single table with 5 columns added to the QualityCenter database.  The columns are id, test_id, test_name, run_position, and host_group, so that we can order the tests and run them on the host machines they are supposed to run on.  Next, we decided to use QualityCenter workflow to drive the work that needed to be done by writing to this table.  Once that is complete, we can process this information the way we want.

Ideal Workflow

  1. Test is identified as "Rerun"
  2. Tester changes the status of the run to "Rerun"
  3. QualityCenter would take over and insert a table row with the test run information
  4. QualityCenter would change the status from "Rerun" to "Queued" to show that the test has been picked up
  5. Separate process would chug through the table and run tests as they were popped onto the table
  6. This process would also be multi-threaded based on the number of available machines
  7. Web page would show current test status (tests in the queue, order of tests in the queue, tests currently running and which machine they were running on)
  8. Web page would allow for reordering tests in the queue
  9. Web page would also show the current machine status
I haven't worked a lot with QualityCenter workflow, and even less with the workflow scripting.  I knew the code I needed to put in...but WHERE?  I started my quest in the script editor.  After reading a few descriptions of what could be done, I decided that TestSetTests and TestSet were the two modules that made the most sense, and that I needed either FieldChange or FieldCanChange.  So I added a few message boxes and started changing test statuses to "Rerun" to see which function was being hit.  After a little back and forth, I was able to identify the correct function as *** Insert the function here ***.  Now, when a test status is set to "Rerun", the workflow updates the status to "Queued" so that we know the test information has been handed off, by way of a row in the database table, to the next step in the process that runs the test (steps 1-4 above).

Heavy Lifting (aka Backend Code)

Most of what we need here is setup and connection to QualityCenter, because we already have the database and simple SQL gets us everything we need.
Step 1: Read the first row of the table to get the Test ID and the Host
Step 2: Check the host to see if it's a group or a specific machine.

  • Specific machine: we have everything we need, just send that machine name to QC
  • Host Group: we need to follow a few extra steps to get an available machine for running the test
    • Get the first machine name from the host group
    • As long as you have something returned, put the data into an array so you can easily loop through each machine and see which one is available to run on (see the sketch after this list)
    • Finally, the magic SQL to tell whether a machine is available to use or not (the query itself is shown in the View Into the Automation Lab post below)

Step 3: Connect to QualityCenter (this is all standard QC API stuff)
Step 4: Return the test set folder object of the test you are trying to run
Step 5: Return the test set object of the test you are trying to run
Step 6: Start the scheduler and run the test (don't forget to remove the row from the table)
Step 7: Finish up by monitoring the execution object so that you can close out all of the objects when done
Step 8: Clean up after yourself and do it all over again
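
To pull the steps above together, here is a rough Ruby sketch of the processor's main loop.  It is only an outline, not our production code: next_rerun_row, specific_machine?, machines_in_group, machine_available?, run_in_qc, and delete_rerun_row are hypothetical helpers standing in for our SQL and QC API plumbing.

# Outline of the rerun processor (steps 1, 2, 6, and 8 above).  The helpers
# referenced here are hypothetical stand-ins for our SQL and QC API code.
loop do
  row = next_rerun_row                 # step 1: first row of the rerun table, or nil
  if row.nil?
    sleep 30                           # nothing queued; check again shortly
    next
  end

  host = row[:host_group]
  unless specific_machine?(host)       # step 2: is this a host group or a single machine?
    # walk the group's machines and grab the first one that is free
    host = machines_in_group(row[:host_group]).find { |m| machine_available?(m) }
    if host.nil?
      sleep 30                         # every machine is busy; try again later
      next
    end
  end

  run_in_qc(row[:test_id], host)       # steps 3-7: connect to QC and start the scheduler
  delete_rerun_row(row[:id])           # step 8: clean up and do it all over again
end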

That's it!

Monday, January 21, 2013

Is a Technical Test Team a Good Thing?


During a recent exchange with another testing blogger, he made the following comment:
“It's fabulous to have at least one testing toolsmith on a project, but a project that's dominated by them will often tend to focus on the tools, rather than the testing.”
This is an interesting comment. And it goes against just about everything we have been working toward over the past few years. We built a technical test team where everyone is an automated tester, and everyone is a programmer/tool builder.

In software testing, there is a risk of spending too much time on the wrong things.  I imagine it is common for testers who are automators to prefer spending their time on automation rather than on manual testing and other non-technical tasks.  It is like a child wanting to skip the vegetables and go straight to dessert. Delicious, but not very healthy.

The other blogger also suggested that a team of technical testers may share a single point of view with the application programmers.
"Usually, there are more than enough programmers on a project, and therefore more than enough people who think like programmers."
His concern is that many programmers think alike and have similar biases. The role of testers is to bring a fresh perspective to the project, and a technically oriented team may not be able to do that.

Interesting points.  I can only respond to these arguments from my own experience.

  • We have a large technical test team. Everyone is a programmer or at least has growing programming skills. Non-technical testers are not considered during the recruiting process.
  • We are an agile shop where we are heavily influenced by the principles of agile. I make this distinction because there are many "agile" shops and yet relatively few truly agile shops. Many places dress traditional command-and-control project management in agile clothes and then think they are agile.
  • In our shop, development teams are largely self-organizing – they own most of their own decisions and figure out how to work most efficiently. One key to making this work is direct feedback. If things are not going well, it is easy to tell (solving problems is another story).
  • Success of our testing team is measured by how well it enables the development process.  If we are doing testing right, we speed programming and application development.  Testers enable development to start effectively very early in the sprint, provide immediate feedback throughout the sprint, and give the team confidence to program and to commit code late in the sprint.  We deliver our code to production at the end of each sprint, so testers have to do things right to keep this process moving.  
So, back to the comments.

Technical testers will “focus on tools, rather than the testing.”  This cannot happen (at least for long) in an agile shop like ours.  A tester who does not support the development process will slow it down and the direct-feedback machine will roar.  In fact, the only way we are able to develop and to test efficiently and deliver to production in short cycles is because we have technical testers who can leverage themselves.

The development team will fall into 'group think' when application programmers and testers share the same skill set.  As I look around the development team floor, I see a diverse cast of characters, and group think does not seem a big concern to me.  Instead of seeing a limitation when programmers and testers share a skill, I see it as a bonus.  If the programmer and the tester don’t speak the same language, they probably won’t talk as much or share as many ideas.  It is much easier for programmers to explain their code to technical testers than it is to those who have no idea what the programmer is saying.

The programmers working on our test team want to be on the test team.  They are not here as a punishment or to bide their time.  They are testers who are also programmers.  Not all programmers have this interest in testing, and that difference may be enough to vaccinate us against group think.



Thursday, January 17, 2013

View Into the Automation Lab


The fun of having technical skills on a test team is that you can fix problems that bother you. One thing that irked us was babysitting automated test execution.  Starting the execution is easy (the build system does it), but managing the run to completion (rerunning failed tests) eats time.

Here are details about our situation:
  • Our automation environment has a limited number of test machines (11-ish, to be exact).  These machines are locked away in a room because the desktop cannot be locked while the automated tests run, and they are shared by teams across three continents (Europe, North America, and South America).
  • Today we have a team of testers who monitor the automated regression run and make sure that all of the tests have run.  When tests are identified for rerun, this team takes a group of tests and manually kicks them off.
  • Once the tests are running, someone on the team monitors that set to make sure the tests complete, then kicks off another set of reruns as they are identified.
  • If anyone from the other offices wants to kick off a test on those machines, he has to coordinate it with the regression team.  There is no good visibility.
  • The end result here is a lot of time spent watching tests run and potentially creating large gaps in machine usage.

Cool Tech to Solve Problems

“Why not build a web-based application that can be accessed by anybody in the company who needs to know the current status of their tests?”  Right away the ideas started to fly around, and the solution, while relatively easy to accomplish, suddenly became further reaching than just automatically rerunning tests.  It would also be nice to have some sort of dashboard that shows the status of the test machines.  Oh, and how about the ability to change which machine a test runs on?  While we’re at it, since certain projects may have a higher priority, let’s add the ability to reorder tests in the queue so those can run first... the ideas were now flowing.

The first step of this solution was to build a function to monitor the test machines.  The rerun app will need to know this, and it would be cool for people to know it too, so we built a web-based dashboard that shows the current status of each machine.  It’s very simple, but it provides a huge amount of information that is helpful when evaluating test runs.


This part is taken care of through Ruby and some SQL queries against the QualityCenter database.  Here's how the QualityCenter database tells us whether a machine is available:

SELECT RN_HOST
  FROM td.RUN
 WHERE RN_STATUS <> 'Not Completed'
   AND RN_HOST IN ('<your machine name here>')
   AND RN_RUN_ID IN
      (SELECT MAX(RN_RUN_ID)
         FROM td.RUN
        GROUP BY RN_HOST)
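
To show how this plugs into the dashboard code, here is a rough Ruby sketch wrapping that query.  It is a sketch only: qc_select is a hypothetical helper that runs SQL against the QualityCenter database and returns an array of row hashes, and plain string substitution is only tolerable here because this is an internal lab tool.

# Availability check for a single machine, built on the query above.
# qc_select is a hypothetical helper for our QC database connection.
AVAILABILITY_SQL = <<-SQL
  SELECT RN_HOST
    FROM td.RUN
   WHERE RN_STATUS <> 'Not Completed'
     AND RN_HOST IN ('%s')
     AND RN_RUN_ID IN
        (SELECT MAX(RN_RUN_ID)
           FROM td.RUN
          GROUP BY RN_HOST)
SQL

# A machine is free when its most recent run is no longer 'Not Completed'.
def machine_available?(machine_name)
  qc_select(AVAILABILITY_SQL % machine_name).any?
end

# The dashboard then just maps each lab machine to a status flag.
def machine_statuses(machines)
  machines.each_with_object({}) do |machine, statuses|
    statuses[machine] = machine_available?(machine) ? 'available' : 'running'
  end
end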


With the dashboard in place, the lion’s share of the work is left up to backend code. The rest is display, database updates, and other code-type stuff to give it some appeal so it does not look like a site that a bunch of QA folks came up with. After all, we are a technical test team!

Wednesday, January 9, 2013

The Technical On-Deck List


There have been times when I have questioned our approach to staffing the test team with only technical people.  What will we do when all the test frameworks and testing-related tools and utilities are built? How do we keep all these programmers in testing happy and productive?

Happily that day never seems to come.  The applications and business needs keep changing and, better yet, the more cool (and productive) stuff we do, the more cool (and productive) stuff we find to do. I call this the escalating awesomeness factor.

This struck me today as I was sitting with a few members of my team troubleshooting some automation code.  As we worked on this, my mind raced to a list of other projects not getting attention.  Here is a sampling from my technical on-deck list:

Automation of Test Maintenance for Existing Tests
We have a large test suite that we run every day.  Running the tests each day enables us to keep up with test maintenance (things shouldn’t pile up until the end of a sprint before production delivery).  Even with proper staffing (the right people and the right number of the right people), some parts of test maintenance are tedious.  On the short list of things to do is to look at all the tasks we do for test maintenance and then program our way out of them.

Automated management for rerunning tests
Our build system kicks off our test suite, and most of the processes are automated.  But each day, we still spend too much valuable time babysitting tests.  One of the biggest time sucks is rerunning tests.  With our current application suite, we cannot send all our tests to the grid or cloud, and we have to work with a limited set of resources – licenses and workstations.  Using smart people to babysit machines is so wrong.  We must make tests and machines take care of themselves. We know what to do (in fact, we have a pretty cool plan for what to build), but the challenge is to find time and do it.

Create custom test portals for each project
Our current test repository is geared for the test team.  It does not present information in a way that a specific development team (product owners, application programmers, and testers) cares about.  Further, we are between test frameworks.  Most tests are in our old GUI framework, some are in our new services framework, and soon tests will be in our new GUI framework – and there are unit and programmer-written integration tests.  Project teams don’t care about which frameworks we use; they just want to know if their code and features are working each day.  We have a prototype that pulls all this together and presents it in a useful way, but it is one more side project that we have to find time and energy to complete.  The product teams will love it, though.

Being better with TDD and unit tests on our test team projects
This is a case of the shoemaker’s son having no shoes.  We on the test team have developed applications with no unit tests and no regression tests.  Eek.  We have some work to do to clean up our development projects.  Even though this work doesn’t add any cool new features to our frameworks, I am looking forward to it as an excuse to work on some new testing tools.  We have been looking at Cucumber and RSpec for a while.  This will be a good opportunity for all of us.

Next generation tools
We are still using commercial tools, and I am anxious to move away from them (and to save the licensing fees for the company).  There are things that I like about the current tools that we have to replicate before we leave.  Taking a relatively raw tool like Watir (WebDriver) and building all the features needed for abstraction, readability, reusability, and maintainability is a big job.  Even though I feel confident that we can use tools that are not complete and still get value, it is a daunting task.  It hurts so good.

I love having a full queue of meaty technical work, and I love even more knowing that when this is done, the next work will be even more challenging (and fun)!

Thursday, January 3, 2013

Hello World


Welcome to our new blog.  Our goal is to build a technical testing community to share experiences as we work to improve our existing automated testing tools, develop new frameworks using open-source libraries, and build new tools to solve other problems as we see them.

We will share technical information and points of view that promote the technicalization (not a real word, but I like it) of testing. We will share code, descriptions of our projects, failures, and hacking stories.  Expect to see references to agile testing -- agile development is the best way we have seen to develop (program and test) applications.  Agile testing (which is fundamentally different from traditional QA) cannot work without technical testing.

What you will not see here is anything that resembles traditional software testing or QA.  If you are working in that environment, I am sorry.  I have been there, and I have no intention of going back.

The core group of this new community works together in the same development team.  As we grow, we hope to gather kindred spirits and hackers who share our interests and technical goals.

In addition to the blog, we will actively tweet while we hack and otherwise goof off together.  We use the hashtag #rubytest.  Watch for it.

Death to software testing.  Long live software testing!

-- Bob Jones