Sunday, February 10, 2008

Compress SOA Lifecycle through Development & QA Collaboration

Learn six effective ways of bridging the divide between SOA development and QA teams to compress the SOA Lifecycle and maximize resource utilization, improve focus, and keep projects on track.

Web services, the foundation of modern SOA, are being rolled out today in ever increasing numbers across enterprises. The key benefit of web services is reusability of services across applications in a distributed environment. Reuse is especially valuable in exposing monolithic, legacy functionality as self-contained services. With the promise of SOA and web services also come the challenges of successful implementation and testing. These challenges fall directly on the development and QA teams, which must meet deadlines while also deploying a robust, resilient, and reliable SOA solution.

The challenge of building robust and reliable services within a SOA exposes age-old fissures between Development and QA: Who is really responsible for testing across distributed environments? In this article, we will explore the gaps and recommend ways of bridging these fissures to ensure greater efficiency in developing and deploying a web services-based Service Oriented Architecture.

1. The Best Way to Communicate - Speak the Same Language

SOA is a paradigm, not a technology. Thus, it is important to bind the paradigm to the standards-based technologies it builds on. Web services technologies such as WSDL, XML, SOAP, WS-Security, and WS-Addressing are all governed by OASIS or W3C standards that define how each of these technologies is to be used.

The overused term “Governance” is thrown around frequently in the SOA world. Some see it as an audit mechanism to keep things in line. However, governance as it applies to the SOA development lifecycle should be viewed as a guide or reference for enterprise-wide SOA best practices. Like most spoken languages with rules for syntax, semantics, grammar and idioms that form the basis of a conversation, SOA governance provides rules and constructs for the conversation between QA and development teams.

For example, the first tangible handoff from Development to QA involves a Web Services Description Language (WSDL) file. The WSDL defines the contract between the producer and a consumer of a SOA web service, describing the interface of the producing web service. Unfortunately, the WSDL specification is broad and can lead to testing problems, deployment problems, and, most importantly, interoperability issues. To facilitate consistent and robust published WSDLs:

  • Define WSDL Best Practices: Define governance best practices for your organization’s WSDLs so that producers and consumers of the WSDL have a common basis for measuring interface robustness. The WS-I Basic Profile provides guidelines that can serve as an interoperability baseline. Defining and enforcing additional rules for your organization’s WSDLs ensures measurable quality across enterprise-wide SOA teams.
  • Adherence Metrics: Provide a testing framework that allows consistent measurement of adherence to the organization’s governance criteria. Governance should be quantifiable within an enterprise, not a mere buzzword adorning corporate presentations and documents.
Bottom-line: Know your WSDLs. Focusing on the quality of the WSDLs is the first critical step in ensuring a robust and reliable SOA.
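As an illustration, a governance check of this kind can be scripted. The sketch below is a minimal Python example; the two rules it enforces (a declared targetNamespace and lowerCamelCase operation names) are hypothetical stand-ins for an organization’s real governance criteria, and the inline WSDL fragment is invented for the example:

```python
import re
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

# A minimal inline WSDL fragment, invented for illustration.
SAMPLE_WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
    targetNamespace="http://example.com/orders">
  <portType name="OrderService">
    <operation name="getOrderStatus"/>
    <operation name="SubmitOrder"/>
  </portType>
</definitions>"""

def check_wsdl_governance(wsdl_text):
    """Return a list of violations of two hypothetical governance rules:
    1) a targetNamespace must be declared;
    2) operation names must be lowerCamelCase."""
    violations = []
    root = ET.fromstring(wsdl_text)
    if not root.get("targetNamespace"):
        violations.append("missing targetNamespace")
    for op in root.iter("{%s}operation" % WSDL_NS):
        name = op.get("name", "")
        if not re.fullmatch(r"[a-z][A-Za-z0-9]*", name):
            violations.append("operation '%s' is not lowerCamelCase" % name)
    return violations

print(check_wsdl_governance(SAMPLE_WSDL))
# 'SubmitOrder' breaks the naming rule and is flagged
```

Because the check is a script, it can run in a build pipeline and produce the quantifiable adherence metrics described above.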

2. Invite Everyone to the Party - Meet Early and Often

Balancing timelines, deliverables, and tasks is a well-known shuffle in a typical software development lifecycle. The SOA lifecycle is no different. A common mistake in a SOA project is to wait and include teams — such as the QA team — only as required instead of inviting everyone to the party upfront. Typically, a development group is responsible for both the architecture and development of SOA web services. Unfortunately, this sense of sacred responsibility sometimes relegates the QA group to a secondary role in the early phases of the SOA lifecycle, where QA might not be invited to early architecture meetings. The secondary role assigned to QA also exists as a result of a historical and cultural bias against testing by development groups — “I am a developer, not a tester.”

Another reason for delaying QA’s involvement is to control the time and resource expense of different groups on a SOA project. However, this approach has just the opposite impact. Instead of saving time and resources, the project is delayed while the QA team ramps up on project goals and technical requirements.

Involving SOA architects, developers, and testers in all aspects of the planning and implementation process enhances the quality focus of the initiatives and paves the way for parallel efforts on test and implementation strategy. Each member of the team is equally responsible for a resilient, robust solution. For efficient team collaboration on a SOA project, consider the following actions:

  • Share Perspectives: Avoid downstream project pitfalls by sharing perspectives between QA and Development early in the planning process. Consider the impact of implementation decisions as they relate to QA tasks.
  • Define Roles and Responsibilities: Clearly determine and define roles and responsibilities early in the project to reduce redundancy in parallel tasks.
Bottom-line: Early QA involvement in the SOA lifecycle, even as silent observers in architectural meetings, is of utmost importance.

3. Get in the Fast Lane — Share Unit Tests to Jumpstart System Tests

In traditional software development lifecycles, unit testing and system testing are performed in different environments with different tools. With SOA implementations, however, this does not need to be the case. In fact, leveraging the framework and techniques already implemented by the development team provides an excellent opportunity to jumpstart the system testing initiatives.

Where unit tests exercise higher-level testing scenarios, choosing a common framework — such as JUnit — rather than allowing each developer a custom testing solution can facilitate reuse of those scenarios. This is particularly useful in the handoff between implementation and test: if the handoff includes not only the updated code and features but also a base set of test scenarios, the testing team gains a baseline to build from, jumpstarting the testing implementation for those features. The following are actions to consider:

  • Define Reusable Unit Tests: When planning unit testing, be cognizant of the system impact and isolate those tests that can be reused in a system test environment.
  • Use Common Test Framework: For unit testing and system testing, ensure that the same testing framework is used. This allows unit tests written against the framework to be reused as system testing baselines, jumpstarting scenario authoring.

Bottom-line: Dependable, reusable, and automated unit tests are the foundation of a quality SOA Deployment.
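One way to realize this sharing is a single invocation seam that both teams bind differently. The Python `unittest` sketch below is illustrative (the article’s JUnit advice transposed to Python’s xUnit-style framework); `quote_service` and the test names are hypothetical. Developers run the contract tests against local code, while QA could rebind `invoke` to issue real SOAP/HTTP requests and reuse the same assertions:

```python
import unittest

# Hypothetical service logic standing in for a deployed web service.
def quote_service(symbol):
    prices = {"ACME": 42.0}
    return {"symbol": symbol, "price": prices.get(symbol)}

class QuoteContractTest(unittest.TestCase):
    """Contract-level checks written once by developers. The `invoke`
    attribute is the only binding point: unit runs call the code
    directly, while QA can rebind it to call the deployed service
    and reuse the identical assertions as a system-test baseline."""
    invoke = staticmethod(quote_service)

    def test_known_symbol_has_price(self):
        response = self.invoke("ACME")
        self.assertEqual(response["symbol"], "ACME")
        self.assertIsNotNone(response["price"])

    def test_unknown_symbol_has_no_price(self):
        self.assertIsNone(self.invoke("NOPE")["price"])

# Run the suite programmatically (both teams share this entry point).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(QuoteContractTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The design choice here is that the tests assert on the service contract, not on implementation details, which is what makes them reusable across the unit/system boundary.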

4. Devel, QA, and Back Again — Compress the Issue Resolution Cycle

Once the SOA lifecycle has reached the testing phase, the cycle begins where issues are found, reported, and fed back to the development team. This becomes one of the most tangible areas to improve timelines and compress the lifecycle.

The development team must vet each reported issue, and without details about how to reproduce the problem, issue identification remains ambiguous. This is why choosing a testing framework that preserves the test simulation can greatly reduce cyclic bug thrashing. A standard representation of the testing information (inputs, expected results, etc.) and the simulation sequence can be provided directly to the development team, where the issue can be reproduced, resolved, and later easily verified using the same sequence. Key aspects of improving the bug resolution cycle:

  • HTR Template: A “How to Reproduce” template that details all the aspects of what should be included when reporting issues: for example, type of machine, network environment, build number, and a detailed sequence of steps. All issues should be reported using the same template.
  • Preserve Test Scenario: Provide a testing framework that allows preservation of the testing sequence in a file format. This allows sharing of this information among the teams for ease of reproducing the issue, verifying the fixed issue, and re-testing the scenario repeatedly to ensure the issue does not resurface.
Bottom-line: Clearly identify issues within a SOA by reproducing bugs and preserving test scenarios for regression testing.
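A preserved scenario of this kind can be as simple as a structured file. The Python sketch below is a minimal illustration: the field names (`issue_id`, `expected`, etc.) and the stubbed builds are hypothetical. It round-trips a scenario through a JSON file format, reproduces the failure on a buggy build, and verifies the fix on a corrected one:

```python
import json

# A hypothetical "How to Reproduce" record: the preserved request,
# expected result, and environment details travel with the bug report.
scenario = {
    "issue_id": "SOA-101",
    "environment": {"build": "2.3.1", "host_os": "Linux"},
    "request": {"operation": "getOrderStatus", "order_id": "A-7"},
    "expected": {"status": "SHIPPED"},
}

def replay(record, invoke):
    """Re-run the preserved request against `invoke` (a stub here;
    a live endpoint in practice) and report whether the response
    matches the expected result."""
    return invoke(record["request"]) == record["expected"]

# Round-trip through the file format so Devel and QA share one artifact.
restored = json.loads(json.dumps(scenario))

buggy_build = lambda req: {"status": "UNKNOWN"}
fixed_build = lambda req: {"status": "SHIPPED"}

print(replay(restored, buggy_build))  # False: bug reproduced
print(replay(restored, fixed_build))  # True: fix verified
```

Because the scenario survives as a file, the same artifact serves three purposes: reproducing the issue, verifying the fix, and re-testing in regression runs so the issue does not resurface.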

5. Make SOA Security Testing Part of QA and Development

Security often plays a pivotal role in SOA deployments, especially when applications with weak security controls are exposed as web services and those web services are consumed across corporate domains.

Traditionally, security testing has been the domain of security officers in an enterprise where security validation has been performed in a post-deployment scenario. This post-deployment approach has the following shortcomings:

  • Delayed Risk Identification: Statistics clearly show that detecting security bugs early in the life cycle of an application deployment yields dramatic cost savings. Rectifying a security flaw identified after an application has been developed and put into production has a disruptive impact, with severe financial implications in re-architecture costs and application downtime.
  • Inadequate Security Testing: Post-development testing is traditionally restricted to black box testing. Security techniques that have relied on black box testing methodology provide incomplete test coverage.
So how should SOA security testing responsibility be divided between the two groups?

  • Development Group Responsibilities:
    • Devel is responsible for White Box testing of SOA web services for a simple reason: it has the most visibility into and knowledge of the web services source code. Close inspection of exception-handling code and memory allocation/de-allocation techniques are examples that can have an immediate impact on reducing the risk posture of a web service.
    • If WS-Security is part of the deployment to secure web services messages, then it is Devel’s responsibility to ensure the basic provisions of WS-Security are implemented properly and tested for baseline interoperability. However, it is not Devel’s responsibility to test its WS-Security implementation against every vendor’s implementation for interoperability. It is QA’s responsibility to build and maintain an extensive interoperability test matrix.
  • QA Group Responsibilities:
    • QA’s responsibility in the domain of security testing should involve every aspect of security testing methodology except for White Box testing. Black Box and Grey Box testing are two techniques that should be part of QA’s arsenal.
    • Another aspect of SOA security testing that falls into QA’s lap is the detection of sensitive data leakages in web services response messages. The detection of sensitive data leakage is a run-time issue that is highly dependent on creation of parameterized regression suites of web services request messages. If there are data leakage violations, it is QA’s responsibility to provide feedback to Devel on what filters to apply to prevent the leakage of certain data from the web services application.
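The data-leakage check that falls to QA can be sketched as a scan over response messages. The Python example below is illustrative only: the two patterns shown (a credit-card-like number and a US SSN format) are hypothetical stand-ins for an organization’s real data-classification rules:

```python
import re

# Hypothetical leakage patterns a QA regression suite might scan for;
# a real deployment would tune these to its own data classifications.
LEAK_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_response(body):
    """Return the names of sensitive-data patterns found in a
    web service response message body."""
    return [name for name, pat in LEAK_PATTERNS.items() if pat.search(body)]

clean_response = "<status>OK</status>"
leaky_response = "<card>4111 1111 1111 1111</card>"

print(scan_response(clean_response))  # []
print(scan_response(leaky_response))  # ['credit_card']
```

Run against a parameterized regression suite of request messages, a scanner like this gives QA concrete violations to feed back to Devel as filtering requirements.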

SOA Security testing is best addressed by the divide-and-conquer approach between Devel and QA. WS-Security is a fairly new area for both groups and it promises to be the acid test of how far both groups will go to collaborate on detecting SOA security bugs.

Bottom-line: To augment the SOA security validation process, part of the security validation responsibilities should be shifted from typical security teams towards QA and Development Teams.

6. Find the Chain of Command — Understand Service Chain Testing

Service chaining is the invocation of services in a sequence where the output of one service becomes the input of the next. Since in an enterprise deployment a chain of services usually traverses multiple services, possibly across corporate boundaries, it is the responsibility of the QA team to ensure that the whole sequence of services is parameterized and validated.

On the other hand, the development team’s visibility into the service chain is limited to only the services it produces. A developer may produce a service that depends on a number of distributed web services. In such cases, where a service producer consumes other web services, the QA team should be responsible for testing all the external web services invoked by the service that the developers are producing.

It should also be noted that service chaining clearly requires an understanding of the interdependencies between different services and their inputs and outputs. Creating a test suite that encompasses the whole sequence of services requires some baseline knowledge of the SOA architecture. A QA team that is not involved in early architecture discussions will have a tougher time understanding how these operations interact. That is why attending SOA architecture meetings early helps in creating comprehensive test suites. This practice will reduce QA’s dependency on different Devel groups in an enterprise when it is ready to create service chain test suites close to the release date.

Bottom-line: Understanding service dependencies is crucial in developing effective test cases in a highly distributed Service Oriented Architecture.
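The chaining described above can be sketched as a simple executor that pipes each response into the next request. In the Python sketch below, the three stub services and their field names are hypothetical stand-ins for real distributed web services; a QA harness would replace them with live invocations:

```python
# Hypothetical stub services: each consumes the previous response.
def lookup_customer(params):
    return {"customer_id": "C-9", "region": params["region"]}

def fetch_orders(params):
    return {"customer_id": params["customer_id"], "orders": ["O-1", "O-2"]}

def total_orders(params):
    return {"count": len(params["orders"])}

def run_chain(services, initial_input):
    """Invoke each service in sequence, feeding one service's output
    into the next service's input, and return the final response."""
    payload = initial_input
    for service in services:
        payload = service(payload)
    return payload

chain = [lookup_customer, fetch_orders, total_orders]
result = run_chain(chain, {"region": "EMEA"})
print(result)  # {'count': 2}
```

Parameterizing `initial_input` and asserting on the final result gives QA an end-to-end validation of the whole sequence, which is exactly the coverage a single developer’s unit tests cannot provide.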


SOA testing plays a critical role in ensuring a successful deployment in any enterprise. A successful SOA testing strategy greatly hinges on cooperation between QA and Devel, where roles and responsibilities are shared fairly. An early indicator of a successful strategy is the level of QA’s involvement in the early part of the SOA lifecycle, starting with attendance at SOA architecture meetings. An active role played early in the SOA lifecycle ensures QA’s independence in testing during the later stages of deployment, without influence from Devel on how tests are to be conducted. A delicate balance between QA and Devel is achieved by QA taking responsibility for SOA system and integration testing while Devel takes responsibility for tasks such as White Box testing, unit testing, and WSDL validation. This balance is supported by tools that provide demarcation between the roles of Devel and QA.

SOAPSonar and SOAPSimulator from Crosscheck Networks are tools that provide comprehensive functional, performance, interoperability and security testing of web services. SOAPSonar is a unified SOA Testing tool that clearly delineates the roles and responsibilities across the QA and Development Groups within a large enterprise focused on deploying a Services Oriented Architecture.

SOAPSimulator is a Service Simulation product that has a significant impact on the SOA Development Lifecycle by enabling QA professionals to start developing test cases even before the service is coded. SOAPSimulator also enables SOA teams to move quality upstream by requiring developers to meet governance criteria before releasing their services to the QA team. With roles and responsibilities clearly defined, and SOA testing tools such as SOAPSonar and SOAPSimulator that meet the needs of both the QA and developer communities, building and deploying a reliable and robust SOA is no longer a pipe dream.

About Crosscheck Networks

Crosscheck Networks’ mission is to provide products for testing, diagnosing and controlling enterprise Web Services. Crosscheck Networks’ products provide QA professionals, security personnel, and compliance officers with necessary information about the functional completeness, scalability, security and interoperability compliance of their Service Oriented Architecture (SOA). Through SOAPSonar and SOAPSimulator - industry leading comprehensive SOA testing products - IT professionals make more informed decisions that enable their companies to stay within corporate quality and regulatory boundaries. The SOA diagnostics architecture is the first in the industry to encompass security, compliance, performance and functional regression under a unified architectural framework. The architecture enables the product to be deployed from the core to the perimeter of an enterprise.

Visit Crosscheck Networks at