Test Listeners
Copyright © 2005, David A. Epstein.
All Rights Reserved.

The primary idea discussed here is to implement listeners that run automated tests in a software application. Rather than relying solely on serial programmatic calls to exercise product features, APIs, and so on, the automated test cases themselves are implemented in the listeners. Each listener captures the synchronous or asynchronous calls sent to it and, based on its implementation, returns callbacks to the caller to perform specified actions during automation execution. Listeners can be used not only to test end-user product features, but also transition states, internal calls and messaging, database activity, memory management, and a host of other concerns.

This has the following advantages:

  1. Every test case would require at least a minimal implementation; if not, there would be a compiler error in the test application. Registering these listeners ensures test-coverage compliance (see the interface sketch after this list).
  2. Various transition states, messages, and request and response notifications can be captured and posted to log files. This helps with test validation and with debugging problems that arise.
  3. It would allow a greater variety of test scenarios. In addition to serial calls, these scenarios could simulate customer usage, utilize random calls, or leverage asynchronous methods. 
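
As noted in item 1, registration through a shared interface is what enforces coverage. Here is a minimal sketch of such an interface; the IListener and GatewayEventArgs definitions shown are assumptions (both are referenced, but not defined, in the examples later in this document):

using System;

// Assumed event-argument type carrying an error code and a message.
public class GatewayEventArgs : EventArgs
{
    public int errorCode;
    public string Message;
}

// Every test listener must implement this interface, so a listener
// class that omits a handler fails to compile.
public interface IListener
{
    void ReceiveResponse_Listener(object sender, GatewayEventArgs e);
    void PrimaryServerDown_Listener(object sender, GatewayEventArgs e);
    void BackupServerDown_Listener(object sender, GatewayEventArgs e);
    void BothServersDown_Listener(object sender, GatewayEventArgs e);
}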

There is one main disadvantage:

  1. It requires additional event-handling code to support the listeners.

Areas where this testing approach could be particularly effective:

  1. Online services. Here, the traditional request/response model calls for listeners that track the state and progress of requests, the state and progress of responses, and handshaking. Where applicable, it could also include listeners that monitor keep-alive requests, pipelining, database queries and updates, and the farming out of requests to multiple servers.
  2. Database access and update. These would include listening to the creation, updating, and querying of database tables, the states of requests and responses, message handling, and lock states in the application (see the sketch after this list).
  3. Multi-threaded concurrency. The focus would be on capturing the various states of tasks run on separate threads, such as file access or updates, including implementations for verifying file permissions, lock states, firewall access, remote file sharing, and so on. Separate listeners could be implemented for tracking networked file opens, IP addressing, message handling, and the synchronization of results from independent thread operations.
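
As one sketch of item 2, a database lock monitor could raise an event on every lock-state transition, and test listeners would then log and validate the sequence. The type and member names here are assumptions, not part of the original design:

using System;

public class LockEventArgs : EventArgs
{
    public string TableName;
    public string State;    // e.g. "Acquired", "Released", "Contended"
}

public delegate void LockStateHandler(object sender, LockEventArgs e);

public class DbLockMonitor
{
    public event LockStateHandler LockStateChangedEvent;

    // Called by the product code (or a test shim) whenever a lock changes.
    public void NotifyLockState(string table, string state)
    {
        if (LockStateChangedEvent != null)
        {
            LockEventArgs e = new LockEventArgs();
            e.TableName = table;
            e.State = state;
            LockStateChangedEvent(this, e);  // listeners log and validate ordering
        }
    }
}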

Examples of Test Listeners:

These methods can be called either synchronously or asynchronously. Synchronous calls occur in traditional queued test cases, while asynchronous calls occur with randomized calling of test cases, keystroke capture and replay, or multi-user events. In practice, most calls would likely be asynchronous in order to emulate real-world scenarios. Events for file networking, file opens, deadlock handling, and request handling would all be tracked by listeners. A sketch of a randomized asynchronous driver follows.
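
The following sketch shows one way to drive test events asynchronously and in random order. The Simulate* methods on Gateway are hypothetical hooks added for illustration; the thread-pool call is what makes the events arrive asynchronously:

using System;
using System.Threading;

public class RandomizedDriver
{
    private Random rand = new Random();

    public void Run(Gateway gateway, int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            int pick = rand.Next(3);

            // Queue each simulated call on the thread pool so the events
            // arrive asynchronously, as they would from real clients.
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                switch ((int)state)
                {
                    case 0:  gateway.SimulatePrimaryDown(); break;  // hypothetical hook
                    case 1:  gateway.SimulateBackupDown();  break;  // hypothetical hook
                    default: gateway.SimulateResponse();    break;  // hypothetical hook
                }
            }, pick);
        }
    }
}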

Here are some examples of test listener implementations in C# for a generic client/server application. These listeners capture and regulate primary and backup server activity:

public void RegisterListener(IListener listenerObj)
{
    // Wire the listener's implementations to the gateway-level events.
    ReceiveResponseEvent += new SendToEventHandler(listenerObj.ReceiveResponse_Listener);
    PrimaryServerDownEvent += new SendToEventHandler(listenerObj.PrimaryServerDown_Listener);
    BackupServerDownEvent += new SendToEventHandler(listenerObj.BackupServerDown_Listener);
    BothServersDownEvent += new SendToEventHandler(listenerObj.BothServersDown_Listener);

    // Register with the network-message layer as well, so that this class's
    // own listener methods (below) receive the lower-level notifications.
    NetworkMsgs.RegisterListener(listenerObj);
    NetworkMsgs.ReceiveResponseEvent += new Gateway_Messages.FileHandler(ReceiveResponse_Listener);
    NetworkMsgs.PrimaryServDownEvent += new Gateway_Messages.FileHandler(PrimaryServerDown_Listener);
    NetworkMsgs.BackupServDownEvent += new Gateway_Messages.FileHandler(BackupServerDown_Listener);
    NetworkMsgs.BothServDownEvent += new Gateway_Messages.FileHandler(BothServersDown_Listener);
}
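
A minimal listener registered through this method might look like the following. This is a sketch only; ServerFailoverListener and its logging shim are illustrative, not part of the original examples:

using System;

public class ServerFailoverListener : IListener
{
    public void ReceiveResponse_Listener(object sender, GatewayEventArgs e)
    {
        qaLogging("Response received: " + e.Message);
    }

    public void PrimaryServerDown_Listener(object sender, GatewayEventArgs e)
    {
        qaLogging("Primary down (code " + e.errorCode + "); retrying...");
    }

    public void BackupServerDown_Listener(object sender, GatewayEventArgs e)
    {
        qaLogging("Backup down (code " + e.errorCode + "); retrying...");
    }

    public void BothServersDown_Listener(object sender, GatewayEventArgs e)
    {
        qaLogging("Both servers down (code " + e.errorCode + "); failing test.");
    }

    private void qaLogging(string message)
    {
        Console.WriteLine(message);  // stand-in for the harness's qaLogging helper
    }
}

// Usage:  gateway.RegisterListener(new ServerFailoverListener());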


public void PrimaryServerDown_Listener(SendToEventHandler PrimaryServerDownEvent, FileEventArgs fe)
{
    // The network layer passes in the delegate to invoke when the
    // primary server is down.
    GatewayEventArgs e = new GatewayEventArgs();

    if (PrimaryServerDownEvent != null)
    {
        e.errorCode = fe.errCode;
        e.Message = "Primary server is currently unreachable.";
        qaLogging(e.Message);
        PrimaryServerDownEvent(this, e);  // test case to send another request to primary server
    }
    else
    {
        e.Message = "Primary server is running.";
        qaLogging(e.Message);
    }
}

public void BackupServerDown_Listener(SendToEventHandler BackupServerDownEvent, FileEventArgs fe)
{
    GatewayEventArgs e = new GatewayEventArgs();

    if (BackupServerDownEvent != null)
    {
        e.errorCode = fe.errCode;
        e.Message = "Backup server is currently unreachable.";
        qaLogging(e.Message);
        BackupServerDownEvent(this, e);  // test case to send another request to backup server
    }
    else
    {
        e.Message = "Backup server is running.";
        qaLogging(e.Message);
    }
}
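
The qaLogging helper used throughout these examples is not defined in the original. A minimal sketch, assuming a simple timestamped file log, might be:

using System;
using System.IO;

public static class QaLog
{
    // Minimal stand-in for the harness's qaLogging helper.
    public static void qaLogging(string message)
    {
        string line = DateTime.Now.ToString("s") + "  " + message;
        Console.WriteLine(line);                        // echo to the console
        File.AppendAllText("qa_test.log", line + Environment.NewLine);
    }
}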

Events:

class Gateway
{
    // Define delegates for each of the events.
    public delegate void SendToEventHandler(object sender, GatewayEventArgs e);

    // Define events here.
    public event SendToEventHandler ReceiveResponseEvent;
    public event SendToEventHandler PrimaryServerDownEvent;
    public event SendToEventHandler BackupServerDownEvent;
    public event SendToEventHandler BothServersDownEvent;
    // …

    private bool CallServerEvents()
    {
        bool retval = false;

        // GatewayProcessor exposes boolean status flags for each server.
        if (GatewayProcessor.PrimaryServerDownEvent)
        {
            if (PrimaryServerDownEvent != null)
            {
                GatewayEventArgs e = new GatewayEventArgs();
                e.errorCode = 1;
                e.Message = "Primary Gateway server is currently unreachable.";
                qaLogging(e.Message);

                if (!GatewayProcessor.BackupServerDownEvent)
                {
                    e.Message += " Sending to Backup server...";
                    qaLogging(e.Message);
                }

                if (BackupServerDownEvent != null)
                {
                    BackupServerDownEvent(this, e);  // send request to backup server
                    retval = true;                   // a failover event was raised
                }
            }
        }

        return retval;
    }
}

The test implementations can be enhanced with code to handle different states, test different messages, and simulate error conditions (e.g. timeouts, read or write errors).

Some Other Useful Approaches:

  1. Validating internal states. In various parts of the product, there are different states for file updating, requests and responses, message passing, data transfer, and so on. Correct functionality of these features can be validated by checking the notifications and callbacks, especially the order in which they are called, where applicable.
  2. Debugging a data error. If a data value is computed, stored, transmitted, or retrieved incorrectly, a listener implementation can capture the value in its different states. This makes it possible to determine where in the process the value was altered or corrupted.
  3. Testing error conditions. Just as different states can be passed into listeners, so can error codes. These codes can be used to simulate error conditions that may arise internally in the product, such as during transition states or update calls to the database. Such error codes could include FileNotFound, RecordLocked, and Timeout (see the sketch after this list).
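
As a sketch of item 3, error codes can be injected through the same event plumbing, so listeners exercise the product's error handling without a real failure. The QaErrorCode enum and ErrorInjectedEvent are assumptions added for illustration:

public enum QaErrorCode
{
    None = 0,
    FileNotFound = 1,
    RecordLocked = 2,
    Timeout = 3
}

// Hypothetical event, declared inside the Gateway test class.
public event SendToEventHandler ErrorInjectedEvent;

public void SimulateError(QaErrorCode code)
{
    GatewayEventArgs e = new GatewayEventArgs();
    e.errorCode = (int)code;
    e.Message = "Simulated error: " + code;
    qaLogging(e.Message);

    if (ErrorInjectedEvent != null)
    {
        // Listeners receive the injected error exactly as they would a
        // real one, and can verify the product's recovery behavior.
        ErrorInjectedEvent(this, e);
    }
}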

Further Information:

For examples of how we implemented listeners in the open source Mozilla embedding project, see

http://lxr.mozilla.org/seamonkey/source/embedding/qa/testembed/BrowserImplWebPrgrsLstnr.cpp

http://lxr.mozilla.org/seamonkey/source/embedding/qa/testembed/BrowserImpl.cpp#492

http://lxr.mozilla.org/seamonkey/source/embedding/qa/testembed/BrowserImplHistoryLstnr.cpp

Also see Section 6 of Embedding API Testing Approaches.

Document History:

David Epstein.              Created.                                             01/17/05