With a growing number of “change requests” coming into your application, it becomes increasingly difficult to ensure that the application is not picking up performance issues. One of the best practices to guard against this is to “baseline” your performance statistics and make that baseline the yardstick for measuring the application’s performance after each major release (one containing many CRs). Across the industry, this practice is generally called “benchmarking”.
Benchmarking is also one of the best strategies to measure the performance of hardware (to learn its capacity), an OS, an application server, or a framework, generally for “capacity modeling” or for “evaluating and choosing the best hardware/software/framework” on the basis of the benchmark numbers.
In the Java world, benchmarking JVMs, GCs (garbage collectors), or JDBC libraries can be used to evaluate and choose the best among the pack.
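As an illustration, below is a minimal microbenchmark sketch using JMH (the OpenJDK Java Microbenchmark Harness). It assumes the jmh-core and jmh-generator-annprocess dependencies are on the build path, and the string-concatenation workload is just a placeholder for whatever you actually want to measure:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Minimal JMH microbenchmark: measures throughput of a placeholder workload.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 3)       // warm up the JIT before measuring
@Measurement(iterations = 5)  // measured iterations per fork
@Fork(2)                      // run in 2 forked JVMs for more reliable numbers
public class ConcatBenchmark {

    @Benchmark
    public String stringBuilderConcat() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            sb.append(i);
        }
        return sb.toString(); // returning the result prevents dead-code elimination
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(ConcatBenchmark.class.getSimpleName())
                .build()).run();
    }
}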
Now, coming back to benchmarking your own custom application, it gives you the following benefits:
• Capacity Modeling – Knowing the capacity of your application stack
• Baseline (Yardstick) – Establishing a yardstick against which all future releases are measured; subsequent releases should improve on these numbers, not regress them.
• Measurability – Quantifying the performance impact of each release instead of relying on guesswork.
• Comparison – Choosing the best alternative on the basis of measured statistics.
Suggested benchmarking parameters for your application (a simple harness capturing the first two is sketched after the list):
• Response Time in Seconds
• Throughput – Requests Processed Per Second
• Memory Usage, CPU Usage, Database Connections Usage, Disk I/O Usage
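To make the first two parameters concrete, here is a rough, hypothetical harness; the endpoint URL and request count are assumptions, and sequential single-threaded load is a simplification. It reports average and 95th-percentile response time plus throughput:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical load sketch: fires N sequential requests at an endpoint and
// reports average/95th-percentile response time and overall throughput.
public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        URL target = new URL("http://localhost:8080/myapp/health"); // assumed endpoint
        int requests = 500;
        List<Long> latenciesMs = new ArrayList<>();

        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            long t0 = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.getResponseCode();      // block until the response arrives
            conn.disconnect();
            latenciesMs.add((System.nanoTime() - t0) / 1_000_000);
        }
        double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;

        Collections.sort(latenciesMs);
        long avg = latenciesMs.stream().mapToLong(Long::longValue).sum() / requests;
        long p95 = latenciesMs.get((int) (requests * 0.95) - 1);

        System.out.printf("avg=%d ms, p95=%d ms, throughput=%.1f req/s%n",
                avg, p95, requests / elapsedSec);
    }
}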
One of the most important steps in establishing benchmark numbers is running multiple “cycles of performance runs” (at least 3 cycles; more is better, as it gives more reliable data).
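As a small illustration of why multiple cycles matter, the sketch below (with made-up sample numbers) aggregates per-cycle throughput into a mean and standard deviation; a low deviation relative to the mean suggests the baseline is stable enough to serve as a yardstick:

// Sketch: aggregating throughput numbers from several benchmark cycles
// (the sample values are fabricated for illustration only).
public class CycleStats {
    public static void main(String[] args) {
        double[] throughputPerCycle = {412.5, 398.7, 405.1}; // req/s from 3 runs

        double sum = 0;
        for (double v : throughputPerCycle) sum += v;
        double mean = sum / throughputPerCycle.length;

        double sq = 0;
        for (double v : throughputPerCycle) sq += (v - mean) * (v - mean);
        double stdDev = Math.sqrt(sq / throughputPerCycle.length);

        System.out.printf("mean=%.1f req/s, stddev=%.1f%n", mean, stdDev);
    }
}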
References:
1. Microsoft article – Benchmarking Web Services using Doculabs
2. IBM article – Benchmarking Method for Comparing Open Source App Servers
3. Benchmarking AOP implementations – http://docs.codehaus.org/display/AW/AOP+Benchmark
4. Benchmarking ESBs – http://esbperformance.org/wiki/ESB_Performance_Test_Framework