How To Use WSCLim?

  1. WSCLim Graphical User Interface
  2. Case Study
    1. Description of the composite service TravelAgency
    2. Preparation of the specification
    3. Test scenarios
  3. WSCLim Overhead

  1. WSCLim Graphical User Interface

  In order to validate the proposed testing architecture, we have developed a tool for load testing and limitation detection of Web service compositions. The tool is implemented in Java. This section gives a brief description of the main interface of the WSCLim tool.

    WSCLim Tool Initial Interface

    This interface allows the user to specify:
    - The path of the specification (Timed Automata) used as a reference in the test: this specification must be described in XML and generated by the UPPAAL tool.
    - The path of the WSDL specification of the Web service composition under test.
    - The number of concurrent BPEL instances.
    - The delay between two successive invocations of the BPEL process under test.
    Clicking the “Execute” button starts the test. During execution, details of the test are stored in log files. At the end of the test, the analysis of the results is launched by clicking the “Start Analyze” button, and the interface containing the test verdicts is displayed. We illustrate this interface in the next section through a case study.
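The two load parameters above (number of concurrent instances and delay between invocations) can be combined as in this minimal sketch. It is an illustration of the load-generation idea only: `invokeBpelInstance` is a hypothetical stand-in for the SOAP call that the actual tool performs.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class LoadDriver {
    // Hypothetical stand-in for the SOAP invocation of the BPEL process under test.
    static String invokeBpelInstance(int id) {
        return "instance-" + id + " invoked";
    }

    // Launch `instances` concurrent invocations, waiting `delayMillis`
    // between two successive launches.
    static List<String> runLoadTest(int instances, long delayMillis) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        for (int i = 1; i <= instances; i++) {
            final int id = i;
            Thread t = new Thread(() -> log.add(invokeBpelInstance(id)));
            threads.add(t);
            t.start();
            if (i < instances) TimeUnit.MILLISECONDS.sleep(delayMillis);
        }
        for (Thread t : threads) t.join(); // wait for all instances to finish
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> log = runLoadTest(5, 10);
        System.out.println(log.size() + " instances invoked");
    }
}
```

With, for example, 40 instances and a 1000 ms delay, this reproduces the load profile used in the first scenario below.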

  2. Case Study

  In this section, we illustrate how to use the WSCLim tool through a case study of a travel agency, implemented as the BPEL composite service TravelAgency.

    1. Description of the composite service TravelAgency:

    We suppose that the required business process (written in BPEL) composes the services flight search (FS), hotel search (HS), flight book (FB) and hotel book (HB). As described in the next figure, when a client sends a trip request to the travel agency, the travel search process interacts with the information systems of airline companies (resp. hotel chains) to find flights (resp. hotel rooms) that match the client's needs. Both searches are bounded by a waiting time: the process must receive a response from “FS” (resp. “HS”) within at most 30 seconds, otherwise the process execution is stopped. If both a flight search response and a hotel search response arrive before the 30-second limit, the “FB” and “HB” services are invoked successively to perform the booking. Finally, a detailed reply with the final results is sent to the client.
      The TravelAgency process
      The BPEL composite service TravelAgency is implemented and deployed using the Oracle JDeveloper tool. The BPEL code of the TravelAgency process can be downloaded from here.
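The control flow described above (two searches bounded by a 30-second alarm, then sequential booking) can be sketched in plain Java. This is only an illustrative model of the orchestration logic, not the actual BPEL code; the service stubs are placeholders.

```java
import java.util.Optional;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TravelAgencySketch {
    static final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Illustrative stubs for the partner services; the real ones are SOAP calls.
    static String flightSearch() { return "flight-offer"; }
    static String hotelSearch()  { return "room-offer"; }
    static String flightBook(String offer) { return "booked:" + offer; }
    static String hotelBook(String offer)  { return "booked:" + offer; }

    // Wait for both searches; if either exceeds timeoutMillis, the process
    // stops (onAlarm branch), otherwise FB then HB are invoked successively.
    static Optional<String> process(long timeoutMillis) {
        Future<String> fs = pool.submit(TravelAgencySketch::flightSearch);
        Future<String> hs = pool.submit(TravelAgencySketch::hotelSearch);
        try {
            String flight = fs.get(timeoutMillis, TimeUnit.MILLISECONDS); // onMessage FS
            String room   = hs.get(timeoutMillis, TimeUnit.MILLISECONDS); // onMessage HS
            return Optional.of(flightBook(flight) + ", " + hotelBook(room));
        } catch (TimeoutException e) {
            return Optional.empty(); // onAlarm: process execution is stopped
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(process(30_000).orElse("timeout"));
        pool.shutdown();
    }
}
```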

    2. Preparation of the specification:

    Before starting the test of the BPEL composition, the WSCLim tool user has to design the timed automata using UPPAAL; a tutorial is available on the UPPAAL website. UPPAAL is also used to simulate the specification and verify its correctness.
      The TravelAgency process modeled in Timed automata using UPPAAL
      The XML file generated by UPPAAL can be downloaded from this link.
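WSCLim reads the automaton from the XML file that UPPAAL saves. The sketch below shows how such a file could be inspected with the standard DOM API; the inline document is a toy fragment for illustration, not the actual TravelAgency model.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class UppaalModelReader {
    // A toy fragment in UPPAAL's save format (root element <nta>).
    static final String XML =
        "<nta><template><name>TravelAgency</name>"
      + "<location id=\"id0\"><name>Start</name></location>"
      + "<location id=\"id1\"><name>Waiting</name></location>"
      + "<transition><source ref=\"id0\"/><target ref=\"id1\"/></transition>"
      + "</template></nta>";

    // Returns {number of locations, number of transitions} in the model.
    static int[] countElements(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return new int[] {
            doc.getElementsByTagName("location").getLength(),
            doc.getElementsByTagName("transition").getLength()
        };
    }

    public static void main(String[] args) throws Exception {
        int[] counts = countElements(XML);
        System.out.println(counts[0] + " locations, " + counts[1] + " transitions");
    }
}
```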

    3. Test scenarios:

    In order to study the behavior of the TravelAgency composition, we defined several possible scenarios; this section presents two of them. The first scenario illustrates some errors which may occur in the application. The second scenario subjects the composition under test to a higher load in order to identify non-functional problems. In what follows, we assume that the maximum network waiting time is 120 seconds.
      - Scenario 1:
      In this scenario, we assume that the developer has made mistakes while coding the BPEL composition, as shown in red in the next figure. First, the service “FB” is invoked in the BPEL implementation even when the time limit for the flight search “FS” is exceeded. Second, the implemented timeout for the “HS” service response (60 seconds) differs from the one specified in the timed automata (30 seconds). In this scenario, we invoked the TravelAgency process forty times with a one-second delay between two successive invocations.
      Non-compliant BPEL implementation
      - Scenario 1 result:
      The next figure shows the generated analysis interface, according to the first scenario. This interface consists of four blocks:
      1. “Test Verdicts” block:
      This block shows the percentage of each test verdict. In this scenario, the percentage of FAIL is 7.5%, meaning that 3 of the 40 BPEL instances received the verdict FAIL.
      2. “FAIL Natures & Causes” block:
      This block presents the nature and the cause (each cause is distinguished by a color) of each observed FAIL verdict. For this execution, the application (i.e. the composition under test) is the cause of the errors: 66% of them (2 FAILs) are erroneous delays and 33% (1 FAIL) is a non-specified behavior.
      3. “BPEL Instance vs Response Time” block:
      This third block presents the response times of the invoked BPEL instances.
      4. “Performance Monitoring” block:
      The fourth block graphically shows the performance data recorded during the test by the PerfMon tool. In our scenarios, we monitored the CPU occupancy rate (blue) and the exchange between memory and disk (red). These two metrics are sampled every five seconds throughout the load test. We plan to use the performance data for analysis and error identification, and to examine the possibility of taking these observations into account when interpreting FAIL verdicts.
      Analysis interface corresponding to the first scenario
      The generated test report reveals an instance that traverses a path that does not exist in the specification; this instance has the identifier 60303.
      Report corresponding to the instance 60303
      The above figure shows an unexpected behavior: the service “FB” is invoked although the service “FS” did not respond within the specified time. The next figure shows the report of instance 60295, which reveals the incorrectly implemented “HS” response timeout. Indeed, the service “HS” responds within 33 seconds, a delay exceeding the specified upper limit (30 seconds), at which the BPEL process should have been terminated (onAlarm branch); yet, according to the path taken by this instance, the process continues its execution (onMessage branch). This behavior clearly shows an erroneous implementation of the delay at the level of the “HS” service.
      Report corresponding to the instance 60295
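The two FAIL natures observed in this scenario can be expressed as simple checks of an instance's trace against the specification. The method name and parameter encoding below are illustrative, not the tool's actual internals.

```java
public class VerdictSketch {
    enum Verdict { PASS, FAIL_ERRONEOUS_DELAY, FAIL_UNSPECIFIED_BEHAVIOR }

    // specTimeoutSec: bound in the timed automata (30 s for HS).
    // responseSec:    observed partner response time.
    // tookOnMessage:  whether the BPEL instance continued (onMessage) or stopped (onAlarm).
    // pathInSpec:     whether the traversed path exists in the specification.
    static Verdict check(int specTimeoutSec, int responseSec,
                         boolean tookOnMessage, boolean pathInSpec) {
        if (!pathInSpec)
            return Verdict.FAIL_UNSPECIFIED_BEHAVIOR; // e.g. FB invoked although FS timed out
        if (responseSec > specTimeoutSec && tookOnMessage)
            return Verdict.FAIL_ERRONEOUS_DELAY;      // e.g. HS answered after 33 s yet the process went on
        return Verdict.PASS;
    }

    public static void main(String[] args) {
        System.out.println(check(30, 33, true, true));   // like instance 60295
        System.out.println(check(30, 10, true, false));  // like instance 60303
    }
}
```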
      - Scenario 2:
      In the second scenario, we invoked the TravelAgency process a hundred times with a delay of 0.5 second between two successive invocations. In addition, we consider an implementation that complies with the specification, so we do not suspect problems in the application itself.
      - Scenario 2 result:
      Analysis of the results of this execution shows a FAIL percentage of 7% (7 instances out of 100). As shown in the following figure, 57% of the problems (4 instances) are located at the SUT node, and 42% (3 instances) are problems of connection to partner services caused by the execution environment.
      Analysis interface corresponding to the second scenario
      The following figure illustrates one instance (with the identifier 100098) whose error is due to the SUT node. Observing the behavior of this instance, we find that the service “FS” was invoked and answered the composed process in a time (1 second) much shorter than the specified bound (30 seconds); however, the BPEL process follows the “onAlarm” branch. The response sent by “FS” was processed late because of the increased load.
      Report corresponding to the instance 100098
      The figure below shows one of the three instances with problems connecting to partner services. In this instance, with the ID 100204, the two search services (FS and HS) responded to the composition under test before the deadline (30 seconds), so the service “FB” should have been invoked. However, this invocation does not appear in the report, and we conclude that the service “FB” could not be invoked by the BPEL process. This problem of connection to the partner service “FB” under load is caused by the test environment.
      Report corresponding to the instance 100204
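The missing-invocation check behind this diagnosis can be phrased as a comparison between the observed trace and the invocation expected by the specification. The trace encoding below is illustrative only.

```java
import java.util.List;

public class TraceCheck {
    // If both searches answered within the bound, FB must appear in the observed
    // trace; if it does not, the partner connection failed under load.
    static boolean partnerConnectionProblem(List<String> observedTrace,
                                            int fsSec, int hsSec, int boundSec) {
        boolean searchesInTime = fsSec <= boundSec && hsSec <= boundSec;
        return searchesInTime && !observedTrace.contains("FB");
    }

    public static void main(String[] args) {
        // Like instance 100204: FS and HS answered in time, but FB was never observed.
        System.out.println(partnerConnectionProblem(List.of("FS", "HS"), 5, 7, 30));
    }
}
```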

  3. WSCLim Overhead

  In order to determine the overhead of our WSCLim tool, we plotted the curves of the average execution time as a function of the load for two cases. In the first case, the tests are performed with the WSCLim tool; in the second, they are executed directly from the console of the orchestration server, without WSCLim. We used the same TravelAgency process described in the previous section in these experiments.
    Evolution of the response time with and without the WSCLim tool
    As shown in the above figure, the use of WSCLim does not add a significant overhead to the average execution time. Indeed, for a given load, the difference between the two corresponding times is on the order of a few seconds (5 seconds on average). This small overhead is explained by the additional activities (verification of variable types, logging, etc.) carried out by the tool during the test.
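The reported overhead is simply the mean of the pairwise differences between the two curves, as in this sketch; the sample values are made up for illustration, not the actual experimental data.

```java
public class OverheadSketch {
    // Mean of the pairwise differences between average execution times
    // measured with and without the tool, at the same load points.
    static double averageOverhead(double[] withTool, double[] withoutTool) {
        double sum = 0;
        for (int i = 0; i < withTool.length; i++) {
            sum += withTool[i] - withoutTool[i];
        }
        return sum / withTool.length;
    }

    public static void main(String[] args) {
        // Hypothetical average execution times (seconds) at three increasing loads.
        double[] withTool    = {42.0, 55.0, 71.0};
        double[] withoutTool = {38.0, 50.0, 65.0};
        System.out.println(averageOverhead(withTool, withoutTool) + " s"); // 5.0 s
    }
}
```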
Afef Jmal Maâlej -ReDCAD
Last updated: July 2013.