Monthly Archives: June 2013

Performance Testing With Load Runner -4- : Working with LoadRunner (creating & recording scripts)

Working with LoadRunner

When testing or monitoring an environment, you need to emulate the true behavior of users on your system. HP testing tools emulate an environment in which users concurrently work on, or access, your system. To do this, the human user is replaced with a virtual user, or Vuser. The actions that a Vuser performs are described in a Vuser script. The primary tool for creating Vuser scripts is the Virtual User Generator, VuGen.

Vusers emulate the actions of human users by performing typical business processes in your application. The actions that a Vuser performs during the recording session are described in a Vuser script.

Using VuGen, you can run scripts as standalone tests. Running scripts from VuGen is useful for debugging as it enables you to see how a Vuser will behave and which enhancements need to be made.

Steps for Creating Scripts:

VuGen enables you to record a variety of Vuser types, each suited to a particular load testing environment or topology. When you open a new test, VuGen displays a complete list of the supported protocols.

This window opens as soon as you open VuGen. Select the protocol of your application and click OK. For most web applications, this is the Web (HTTP/HTML) protocol.

Open VuGen

To start VuGen, choose Start > Programs > <App_Name> (for example LoadRunner) > Applications > Virtual User Generator from the Start menu.

To open an existing script that is not in the recent list, click Open Existing Script. To create a new script using a recent protocol, click the protocol in the Recently used protocols list.

To create a new script in a protocol that is not listed, click New Vuser Script.

Choose File > Zip Operations > Import From Zip File… to open an existing script from a zip archive.


Now click New Protocol Script and the following window appears. From this window, choose the protocol used by the application you are going to load test.

VuGen provides a variety of Vuser technologies that allow you to emulate your system. Each technology is suited to a particular architecture and results in a specific type of Vuser script. For example, you use Web Vuser Scripts to emulate users operating Web browsers. You use FTP Vusers to emulate an FTP session. The various Vuser technologies can be used alone or together, to create effective load tests or Business Process Monitor profiles.


Now set the General options for VuGen.

For example, to set the environment-related options:
Select Tools > General Options and click the Environment tab.

To save the current script information for auto recovery, select the Save AutoRecover Information option and specify the time in minutes between the saves.

To set the editor font, click Select Font. The Font dialog box opens. Select the desired font, style, and size and click OK. Note that only fixed size fonts (Courier, Lucida Console, FixedSys, and so on) are available.

To use a comparison tool other than WDiff, select Use custom comparison tool and then specify or browse for the desired tool.

Click OK to accept the settings and close the General Options dialog box.

Now Set the Recording Options for recording the user actions of the application under load test.



Now that you are ready for recording, click the Record button.


Enter the URL of the application that needs to be load tested, like the one below.


The following table describes the criteria for determining the business functions or processes to be included while recording.


Now record the transactions into vuser_init, Action, or vuser_end using the recording toolbar.


Once the recording is complete, a recording log is generated.

To view a log of the messages that were issued during recording, click the Recording Log tab. You can set the level of detail for this log in the Advanced tab of the Recording Options.

Now the script is generated for the recorded user actions and is displayed as shown below.



Performance Testing With Load Runner -3- : Load Runner and its components

Load Runner:

HP LoadRunner, a tool for performance testing, stresses your entire application to isolate and identify potential client, network, and server bottlenecks.

HP LoadRunner load tests your application by emulating an environment in which multiple users work concurrently. While the application is under load, LoadRunner accurately measures, monitors, and analyzes a system’s performance and functionality.

How LoadRunner Addresses Performance Testing:

  • LoadRunner reduces personnel requirements by replacing human users with virtual users, or Vusers. These Vusers emulate the behavior of real users operating real applications.
  • Because numerous Vusers can run on a single computer, LoadRunner reduces the amount of hardware required for testing.
  • The HP LoadRunner Controller allows you to easily and effectively control all the Vusers from a single point of control.
  • LoadRunner monitors the application performance online, enabling you to fine-tune your system during test execution.
  • LoadRunner automatically records the performance of the application during a test. You can choose from a wide variety of graphs and reports to view the performance data.
  • LoadRunner checks where performance delays occur: network or client delays, CPU performance, I/O delays, database locking, or other issues at the database server. LoadRunner monitors the network and server resources to help you improve performance.

  • Because LoadRunner tests are fully automated, you can easily repeat them as often as you need.

Various Components of LoadRunner:

Vuser Generator is the script-generation component of LoadRunner. It involves two main concepts, described below:

Vusers: In the scenario, LoadRunner replaces human users with virtual users or Vusers. When you run a scenario, Vusers emulate the actions of human users working with your application. While a workstation accommodates only a single human user, many Vusers can run concurrently on a single workstation. In fact, a scenario can contain tens, hundreds, or even thousands of Vusers.

Vuser Scripts: The actions that a Vuser performs during the scenario are described in a Vuser script. When you run a scenario, each Vuser executes a Vuser script. The Vuser scripts include functions that measure and record the performance of your application’s components.

Controller: You use the HP LoadRunner Controller to manage and maintain your scenarios. Using the Controller, you control all the Vusers in a scenario from a single workstation.

Load Generator: When you execute a scenario, the Controller distributes each Vuser in the scenario to a load generator. The load generator is the machine that executes the Vuser script, enabling the Vuser to emulate the actions of a human user.

Performance analysis: Vuser scripts include functions that measure and record system performance during load-testing sessions. During a scenario run, you can monitor the network and server resources. Following a scenario run, you can view performance analysis data in reports and graphs.
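As a rough illustration of the kind of post-run analysis described above, the sketch below computes the average and 90th-percentile response times from a list of transaction timings. This is not LoadRunner's implementation; the Analysis component computes such statistics from the actual scenario results, and the timings here are synthetic.

```python
# Illustrative sketch only: LoadRunner's Analysis tool derives these
# statistics from real scenario results; the timings here are synthetic.

def percentile(samples, pct):
    """Return the pct-th percentile using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Synthetic response times (seconds) for one transaction
timings = [0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 3.9, 1.2, 1.0]

average = sum(timings) / len(timings)
print(f"avg={average:.2f}s  90th percentile={percentile(timings, 90):.2f}s")
```

The 90th percentile is often a better SLA measure than the average, because a few very slow transactions can hide behind a healthy-looking mean.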

Performance Testing With Load Runner -2- : Load Testing Process & Load Test Terminologies

  • Load Testing Process
    • Identify the performance-critical scenarios.
    • Identify the workload profile for distributing the entire load among the key scenarios.
    • Identify the metrics that you want to collect in order to verify them against your performance objectives.
    • Design tests to simulate the load.
    • Use tools to implement the load according to the designed tests, and capture the metrics.
    • Analyze the metric data captured during the tests.
  • Load Test Terminologies:

Scenarios are anticipated user paths that generally incorporate multiple application activities. Key scenarios are those for which you have specific performance goals, those considered to be high-risk, those that are most commonly used, or those with a significant performance impact.

Metrics are measurements obtained by running performance tests as expressed on a commonly understood scale. Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load.

Response time is a measure of how responsive an application or subsystem is to a client request.

Throughput is the number of units of work that can be handled per unit of time; for instance, requests per second, calls per day, hits per second, reports per year, etc.
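Response time, throughput, and concurrency are linked by Little's Law (N = X × R), a standard queueing result that is not from this article but is useful when sizing a load test. The sketch below estimates how many concurrent Vusers are needed to sustain a target throughput; the numbers are hypothetical.

```python
# Little's Law: concurrency N = throughput X * (response time R + think time Z).
# A quick sizing sketch; the figures below are hypothetical examples.

def required_vusers(throughput_per_sec, avg_response_sec, think_time_sec=0.0):
    """Estimate concurrent Vusers needed to sustain a target throughput."""
    return throughput_per_sec * (avg_response_sec + think_time_sec)

# Target: 50 requests/second, 2s average response time, 8s think time
print(required_vusers(50, 2, 8))  # 50 * (2 + 8) = 500 Vusers
```

The same relation works in reverse: given a fixed Vuser count, it predicts the maximum throughput the scenario can generate.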

Workload is the stimulus applied to a system, application, or component to simulate a usage pattern, in regard to concurrency and/or data inputs. The workload includes the total number of users, concurrent active users, data volumes, and transaction volumes, along with the transaction mix. For performance modeling, you associate a workload with an individual scenario.
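A transaction mix like the one described above is typically expressed as fractions of the total load. The sketch below turns such a mix into per-scenario Vuser counts; the scenario names and percentages are assumed for illustration, not taken from the article.

```python
# Hypothetical sketch: distribute a total Vuser count across scenarios
# according to a transaction mix. Scenario names and shares are assumed.

def distribute_load(total_vusers, mix):
    """Allocate Vusers per scenario; any rounding remainder goes to the
    scenario with the largest share."""
    counts = {name: int(total_vusers * share) for name, share in mix.items()}
    remainder = total_vusers - sum(counts.values())
    largest = max(mix, key=mix.get)
    counts[largest] += remainder
    return counts

mix = {"browse": 0.50, "search": 0.30, "checkout": 0.20}  # assumed profile
print(distribute_load(1000, mix))
```

In LoadRunner this allocation is what the Controller's scenario groups express: each group runs one script with its share of the total Vusers.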

Resource utilization is the cost of the project in terms of system resources. The primary resources are processor, memory, disk I/O, and network I/O.

Scalability refers to an application’s ability to handle additional workload, without adversely affecting performance, by adding resources such as processor, memory, and storage capacity.

Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs. Any performance criterion that will not unquestionably lead to a decision to delay a release until the criterion passes is not absolutely required and therefore, not a requirement.

Performance Testing With Load Runner -1- : What is Load test ? What is the purpose of load test? What functions or business processes should be load tested?

  • Load Test:

Load tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes and to ensure that they are within documented expectations (or Service Level Agreements, SLAs). Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates.
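The pass/fail/error rates mentioned above can be made concrete with a small sketch. The function below classifies transaction timings against an SLA threshold; the timings and the 2-second SLA are synthetic illustrations, not figures from any particular test.

```python
# Hedged sketch: classify transaction timings against an SLA threshold,
# the kind of pass/fail rate a load test reports. Data here is synthetic.

def pass_rate(timings, sla_seconds):
    """Fraction of transactions that completed within the SLA."""
    passed = sum(1 for t in timings if t <= sla_seconds)
    return passed / len(timings)

timings = [1.2, 0.9, 3.1, 1.8, 0.7, 2.6, 1.1, 1.4]  # seconds, synthetic
print(f"{pass_rate(timings, sla_seconds=2.0):.0%} of transactions met a 2s SLA")
```

In a real load test these timings come from the measured transactions, and the error rate (transactions that failed outright) is tracked separately from those that merely exceeded the SLA.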

Load Tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production then logs from the pilot can be used to generate ‘usage profiles’ that can be used as part of the testing process, and can even be used to ‘drive’ large portions of the Load Test.

Load testing must be executed on “today’s” production-size database, and optionally with a “projected” database. If some database tables will be much larger in some months’ time, then load testing should also be conducted against a projected database. It is important that such tests are repeatable, and give the same results for identical runs. They may need to be executed several times in the first year of wide-scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed SLAs.

  • What is the purpose of a Load Test?

The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:

Quantification of risk: Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. This is a traditional Quality Assurance (QA) type test. Note that load testing does not mitigate risk directly, but through identification and quantification of risk, presents tuning opportunities and an impetus for remediation that will mitigate risk.

Determination of minimum configuration: Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders, so that extraneous hardware, software, and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type test.

Assessing release readiness: This type of test enables you to predict or estimate the performance characteristics of an application in production and to evaluate whether or not to address performance concerns based on those predictions. These predictions are also valuable to the stakeholders who decide whether an application is ready for release or capable of handling future growth, or whether it requires a performance improvement or hardware upgrade prior to release.

  • What functions or business processes should be load tested? 


Invoke a Web service with Eclipse BPEL designer and Apache ODE

BPEL’s WS invocation

  1. Create a new BPEL project named “BPEL_WS” by selecting File→New→Others→BPEL 2.0→BPEL Project. Select Next. Type the project name as BPEL_WS and select the Target Runtime as Apache ODE 1.x Runtime. Click Finish.
  2. Create a new BPEL process file named WS_Invocation by right-clicking on the BPEL_WS/bpelContent folder and selecting New→Others→BPEL 2.0→New BPEL Process File. Click Next. Fill in the BPEL Process Name as WS_Invocation, and the Namespace as http://ws.invocation.tps. Select the Template as Synchronous BPEL Process. Click Next.

  3. Modify the Service Address as http://localhost:8080/ode/processes/WS_Invocation. Click Finish.
  4. Click and drag an Invoke action from the Palette to the BPEL process and name it as InvokePingPongService.
  5. Insert two Assign actions, before and after the Invoke action, named AssignInputToSOAPRequest and AssignOutPutToResult respectively.

  6. To invoke a web service, you have to know its description. In our example, we will invoke a PingPong web service; here is its description file. Download it and copy it to your project.
  7. Now, click on InvokePingPongService, in the Properties view, select Details→Partner Link→Create Global Partner Link. Name it as PingPongPL. Click OK.
  8. A pop-up dialog appears. In this step, you have to select the Partner Link Type Structure, which is the PingPong service. Click the Add WSDL button, then select the option From Project. Select the PingPong interface as the Partner Link Type Structure. Click OK.

  9. Type the Partner Link Type Name as PingPongPLT. Click Next.
  10. Type the Role Name as PingPongPLRole and select the PingPong Port Type. Click Next.
  11. Ignore the second Role of the Partner Link. Click Finish.
  12. Continue with the Details of the InvokePingPongService action. Select the Operation echoInput in the Quick Pick area. Two other variables are automatically generated: PingPongPLRequest and PingPongPLResponse.

  13. Click on the AssignInputToSOAPRequest action. In the Properties view, select Details→New and assign the input→payload→input to PingPongPLRequest→Parameters→input. A pop-up dialog appears asking you about variable initialization. Select Yes.

  14. In the same way, select the AssignOutPutToResult action and assign the PingPongPLResponse→parameters→echoInputReturn to the output→payload→result.

  15. Save the files. Right click on the BPEL_WS/bpelContent, select New→Others→BPEL 2.0→Apache ODE Deployment Descriptor. Click Next. Verify the BPEL Project name as /BPEL_WS/bpelContent. Click Finish.
  16. Now you have the file deploy.xml in your project. Right-click on this file and select Open With→ODE Deployment Descriptor Editor. Under Inbound Interfaces (Services), the Partner Link is client; select the Associated Port as WS_InvocationPort and click on another field. Eclipse will automatically fill in the other fields.
  17. Under Outbound Interfaces (Invokes), the Partner Link is PingPongPL; select the Associated Port as PingPong and click on another field to fill it in.

  18. Save the file and open the Servers view. Right-click on the Ode v1.x Server at localhost and select Add and Remove Projects. Select the BPEL_WS/WS_Invocation.bpel in the Available projects box and click Add. Then click Finish.
  19. Start the ODE server to deploy the WS_Invocation BPEL process on the server. We will use Eclipse’s Web Services Explorer to test our application.
  20. Right click on the WS_InvocationArtifacts.wsdl file. Select Web Services→Test with Web Services Explorer.
  21. Select the operation process. Then type Hello PingPong Service in the input textbox. If your deployment was successful, you will get a string like the following in the SOAP response.
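Instead of the Web Services Explorer, the deployed process can be invoked from any SOAP client. The sketch below builds a minimal SOAP 1.1 request for the process endpoint from the tutorial. Note the assumptions: the wrapper element name (WS_InvocationRequest) and the input element name are inferred, not confirmed; verify them against the generated WS_InvocationArtifacts.wsdl before use.

```python
# Hedged sketch of invoking the WS_Invocation process with a plain SOAP
# client. The element names below are ASSUMED from the tutorial's process
# name and namespace; check them against the generated WSDL.
import urllib.request

PROCESS_URL = "http://localhost:8080/ode/processes/WS_Invocation"
NS = "http://ws.invocation.tps"  # namespace entered in step 2

def build_envelope(text):
    """Wrap the process input in a minimal SOAP 1.1 envelope."""
    return (
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
        f'xmlns:tns="{NS}">'
        "<soapenv:Body>"
        "<tns:WS_InvocationRequest>"  # wrapper name assumed, see WSDL
        f"<tns:input>{text}</tns:input>"
        "</tns:WS_InvocationRequest>"
        "</soapenv:Body>"
        "</soapenv:Envelope>"
    )

def invoke(text):
    """POST the envelope to the ODE endpoint and return the raw response."""
    req = urllib.request.Request(
        PROCESS_URL,
        data=build_envelope(text).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# invoke("Hello PingPong Service") would POST to a running ODE server.
print(build_envelope("Hello PingPong Service"))
```

Running invoke() requires the ODE server from step 19 to be up; build_envelope() alone lets you inspect the request that would be sent.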