Saturday, December 1, 2007

Performance Test

Performance testing measures the performance characteristics of an application.

The main objective of performance testing is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes against a production-sized database.

The performance of a computer system is based on human expectations and the ability of the computer system to fulfill these expectations. The objective for performance tuning is to match expectations and fulfillment. The path to achieving this objective is a balance between appropriate expectations and optimizing the available system resources.

Performance analyses are carried out for various purposes, such as:

  • During a design or redesign of a module or a part of the system, more than one alternative presents itself. In such cases, the evaluation of a design alternative is the prime mover for an analysis.

  • Modern applications are not entirely designed from the ground up. Use of third-party tools and components is almost a given. In such scenarios, the comparison of two or more systems must be undertaken.

  • Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system.

  • Identification of bottlenecks in a system is more of a troubleshooting effort. It helps focus replacement and improvement work on the components that most affect overall system response.

  • Capacity planning is usually done with a view to the future. The results are used as evidence for decisions regarding the purchase of additional hardware resources, software tools, etc.

  • As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence and to provide advance warning of potential problems under load conditions, analysis must be done to forecast performance under load.

Typically, to debug applications, developers execute them using different execution streams (i.e., they exercise the application completely) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

In summary, the main objectives of performance tests are to determine:

  • Whether the system is able to handle the required number of users
  • Whether the system is able to give good response times and throughput
  • Whether the system's BATCH processes (flat files, incoming/outgoing files, daily/monthly processes, etc.) complete within a targeted time window
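As a rough illustration of the first two objectives, a load test fires the same transaction from many simulated users at once and records each response time. The sketch below is a minimal, self-contained Python example; `do_transaction` is a hypothetical stand-in for a real call into the application under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def do_transaction(user_id):
    """Hypothetical transaction: a real test would call the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work
    return time.perf_counter() - start

USERS = 50  # number of simulated concurrent users

with ThreadPoolExecutor(max_workers=USERS) as pool:
    wall_start = time.perf_counter()
    response_times = list(pool.map(do_transaction, range(USERS)))
    wall = time.perf_counter() - wall_start

print(f"throughput: {USERS / wall:.1f} tx/s")
print(f"worst response time: {max(response_times):.3f} s")
```

A real harness would replace the sleep with the actual online, interface, or batch workload and ramp up USERS until response times degrade.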

The performance tests should include:

  • Load testing: How many users can the application support? What actual response times will users experience?

  • Scalability testing: How reliable is the system beyond the normal usage point? This includes stress testing: what are the application's limitations and failure behavior?

  • Infrastructure configuration testing: Encompasses setting up and testing various system configurations to assess requirements and investment needs.

  • Spike testing: Checks how the system behaves when there is a sudden surge in transactions at a particular time during the day.

  • Endurance testing: A long-duration load or stress test. Instead of the execution period lasting tens of minutes, tests are executed for hours or even days, mainly to assess slow memory leaks, the accrual of uncommitted transactions in a rollback buffer, the queuing of downstream systems, or gradual impact on system resources.
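Endurance runs are mostly about watching a trend, such as memory use, over many iterations. The sketch below is an illustration only (not a t24/Globus tool): it uses Python's tracemalloc to sample memory while repeating an operation that leaks deliberately, so the upward trend is visible.

```python
import tracemalloc

def do_work(history):
    """Stand-in for one transaction; the append is a deliberate leak."""
    history.append("x" * 100)

tracemalloc.start()
history, samples = [], []
for i in range(1000):  # real endurance runs last hours or days
    do_work(history)
    if i % 250 == 0:
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)

# Memory that climbs sample after sample suggests a slow leak.
print("memory samples (bytes):", samples)
```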

What is required for performance tests?

Before actually doing the performance tests, you need to invest in the tools required for them. There are tools on the market, but be aware that none of them will fit your requirements exactly; you will need to spend time fine-tuning the tools available on the open market before running your tests. Typically these tools should do the following:

  • Simulate online users
  • Simulate the interfaces
  • Simulate the batch process.

At the end of the tests, these simulations should be capable of reporting the following:

  • Throughput.
  • Response Time.

Where,

Throughput is a measure of the amount of work performed over a period of time.

Response time is the time elapsed between when a request is submitted and when the response to that request is returned.
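Given raw measurements, both figures are straightforward to compute. In the sketch below the sample data is invented for illustration; each pair is (submit time, response time in seconds):

```python
# Each sample: (submit_time_seconds, response_time_seconds); data is made up.
samples = [(0.0, 0.12), (0.5, 0.30), (1.0, 0.18), (1.5, 0.25), (2.0, 0.21)]

elapsed = samples[-1][0] - samples[0][0]   # length of the measurement window
throughput = len(samples) / elapsed        # work performed per unit time
resp = sorted(t for _, t in samples)
avg = sum(resp) / len(resp)
p95 = resp[int(0.95 * (len(resp) - 1))]    # simple nearest-rank 95th percentile

print(f"throughput: {throughput:.2f} tx/s")
print(f"avg response: {avg:.3f} s, p95 response: {p95:.2f} s")
```

In practice the 95th (or 99th) percentile response time matters more than the average, because a few very slow transactions can hide behind a good mean.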

Now let's look at a typical real-life situation, taking t24/Globus clients as an example. T24 is used by at least 500 banks around the world. There are at least 10 new banks implementing the system at any time, and at least 30% of the current client base is upgrading the system, with performance as their main concern. Performance tests are required in the following situations:

  • Upgrade of t24/Globus (service pack upgrade or release upgrade)
  • Upgrade of the database (jBASE)
  • Change of database (Universe to jBASE)
  • Hardware change/upgrade
  • Addition of products/more business to the system

These concerns can be addressed, and banks can have a peaceful upgrade/implementation/support experience, by performing the performance tests in advance and addressing the issues raised in those tests.

In the performance tests, you need to:

  1. Identify whether performance bottlenecks lie in the t24/Globus core or in t24/Globus local developments (application level)
  2. Identify whether performance bottlenecks lie in jBASE or in t24/Globus (database level)
  3. Identify whether performance bottlenecks lie in the server or in jBASE (server level)

To resolve performance problems, you need to:

  1. Identify the party responsible for the changes (bank or vendor)
  2. Plan for implementing these changes.

If the issues are due to local developments, then the local developments must be changed and the performance tests repeated until all local-development issues are solved. (Generally, fixing local-development issues also resolves the core performance issues.)

Every test should produce a summary of the results, and the documentation should cover the following.

Example of resulting summary document –

The following are the results of the PERFORMANCE CYCLE NN test. (Where NN is the reference number)

Assessment of existing SW/HW adequacy

1. In the current hardware and software configuration, the t24/Globus system is capable of processing about XX% of the required volume of transactions in the peak half-hour period.

2. The performance of the hard disks does not impact the online processing of users and interfaces.

3. Optimizing the local developments speeds up the processing of transactions received from interfaces. It enables parallel processing, which makes a linear increase in performance with additional CPUs possible. Parallelization happens only if the input data is properly randomized.

4. The COB batch was successful on ~XX% of the daily volume of transactions. It took 30 minutes, with pre- and post-backups of 25 minutes each. It still needs to be tested with the full daily volume of transactions.

5. All of the identified bottlenecks must be solved in order to make the t24/Globus system capable of processing 100% of the required volume of transactions.

Identified Bottlenecks:

  • Identified Issue: Large Customers
    Description: The processing rate for a single Customer (transactions hitting a single account) is too low. The current rate of processing is 1 transaction per second, and additional CPUs would not help in this case. This is the main reason (bottleneck) for the situation.
    Owner: (not specified)

Areas not yet tested:

  • Large Customers in general: Issues with the processing rate of transactions hitting the same account are expected. (Comment: a large record size can be a problem.)

  • Full number of Customers: Tests were performed on a limited number of Customers (14,000); ~50,000 is needed.

  • Scalability with additional CPUs: Tests were performed on 4 CPUs. A test with additional CPUs should prove that performance would indeed increase linearly with more CPUs.

  • Scalability with additional memory (RAM): Tests were performed on 'n' GB of RAM. A test with additional RAM ('nn' GB) is needed.
To maintain performance at an optimal level, the following monitoring activities have to be done:

  • Monitor the throughput and response times every day.
  • Monitor at least the top 10 most time-consuming jobs in the COB/EOD and address them.
  • Check the size of all files every week. You can have a script check the file sizes every Thursday and resize the files every Friday. (The frequency depends on database growth, so decide which frequency is best for you.)
  • Archive the files regularly (again, the frequency is up to you).
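The weekly file-size check mentioned above can be a few lines of script. The sketch below walks a directory tree and reports the largest files; the path and threshold are examples only, not real t24/Globus locations.

```python
import os

def large_files(root, threshold_bytes):
    """Return (path, size) pairs for every file at or above the threshold,
    largest first."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if size >= threshold_bytes:
                found.append((path, size))
    return sorted(found, key=lambda item: item[1], reverse=True)

# Example: flag anything over 100 MB under a (hypothetical) data directory.
for path, size in large_files("/data/bnk", 100 * 1024 * 1024):
    print(f"{size:>12}  {path}")
```

The same report, run on a schedule, feeds the Thursday check/Friday resize routine described above.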


Gp.
