A Dip into Performance Testing

There are many sources on the internet describing performance testing, but most are not suited for a newcomer who has a task in hand and is looking for a jump-start point. This post aims to give a glance at the topic with practical hints and to introduce some simple methods and toolkits. (In progress)

Types of Performance Testing

There are two kinds of performance testing most often seen in practice.

Quantitative Measurement

One type seeks a quantitative measurement of the software plus the hardware (cloud/infrastructure environment). The usual practice is to use automation toolkits to simulate concurrent transactions (for OLTP systems) or streams (for transmission systems). The key factors are sorted into categories, and the critical ones are identified during the test.

An example gnuplot chart from a performance test with the Apache ab toolkit:

[benchmark performance test chart](https://imgur.com/4UtYp66)
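ab can dump per-request timings for gnuplot with its `-g` flag (e.g. `ab -n 1000 -c 10 -g results.tsv http://host/`), and that file is easy to post-process before plotting. A minimal sketch, assuming ab's tab-separated gnuplot output columns; the sample rows below are fabricated for illustration:

```python
import csv
import io
import statistics

# A fabricated sample of the tab-separated file written by `ab -g results.tsv`
# (columns: starttime, seconds, ctime, dtime, ttime, wait; times are in ms).
SAMPLE = """starttime\tseconds\tctime\tdtime\tttime\twait
Tue Jul 25 10:00:01 2017\t1500969601\t2\t38\t40\t35
Tue Jul 25 10:00:01 2017\t1500969601\t3\t52\t55\t48
Tue Jul 25 10:00:02 2017\t1500969602\t2\t70\t72\t61
"""

def total_times(tsv_text):
    """Extract the total request time (the ttime column) for each request."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [int(row["ttime"]) for row in reader]

ttimes = total_times(SAMPLE)
print("requests:", len(ttimes))
print("mean total time (ms):", statistics.mean(ttimes))
print("max total time (ms):", max(ttimes))
```

From here the list of ttime values can be binned, percentiled, or written back out for gnuplot.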

Depending on the quantitative measurement perspective, the measurements are given different performance-test names. Among them, Load Test and Capacity Test are often planned from the vertical and horizontal views.

A Load Test is executed against the service-level agreement on temporal and spatial factors. In industry practice, especially in telecommunications, the measurement is reported statistically according to the standard definitions of MTBF (Mean Time Between Failures) and MTTF (Mean Time To Failure), where MTBF is the sum of MTTF and MTTR (Mean Time To Repair).
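The MTBF relation above translates directly into a steady-state availability figure, which is usually what the SLA is written in. A minimal sketch (the 999-hour/1-hour numbers are made up for illustration):

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up."""
    mtbf = mttf_hours + mttr_hours  # MTBF = MTTF + MTTR
    return mttf_hours / mtbf

# A system that runs 999 hours between failures and takes 1 hour to repair
# is up 999 hours out of every 1000-hour MTBF cycle: "three nines".
print(availability(999.0, 1.0))
```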

A Capacity Test figures out which dimension is critical for scaling: horizontal (scaling out) or vertical (scaling up).

Stress Test

Another type of performance test is to stress the software system into a predesigned extreme situation and verify whether availability and the I/O or transaction rate can still satisfy the criteria for that circumstance.

For example, I once built a dynamic library with a cross toolchain and loaded it onto an embedded Linux card to consume a certain portion (read from the CLI) of CPU time on a designated core. If the portion is 70%, the test shows system behavior with one core 70% busy as a background-load effect. The result either satisfies or fails the estimated criteria at the requirement's key checkpoints defined by the agile team.
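I can't share that library, but the duty-cycle idea behind it can be sketched in user space: busy-wait for the requested fraction of each short period and sleep for the rest. A rough Python stand-in (the real tool was a native dynamic library; the 100 ms period and the function name here are illustrative):

```python
import time

def burn_cpu(duty_cycle, duration_s, period_s=0.1):
    """Keep one core roughly `duty_cycle` busy for `duration_s` seconds.

    Each period: busy-wait for duty_cycle * period, then sleep the rest.
    To target a specific core on Linux, pin the process first, e.g. with
    os.sched_setaffinity(0, {core}).
    """
    end = time.monotonic() + duration_s
    busy = duty_cycle * period_s
    while time.monotonic() < end:
        slice_end = time.monotonic() + busy
        while time.monotonic() < slice_end:
            pass  # busy-wait: this spinning is the background load
        time.sleep(period_s - busy)

burn_cpu(0.7, 1.0)  # ~70% load on one core for one second
```

One process like this per core gives a controllable background load while the system under test handles real traffic.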

Another example is to generate 8 TCP streams with the iperf tool and test whether the system can still hold the acceptable success rate.
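On a real network this is a one-liner with iperf3 (`iperf3 -c <server> -P 8` opens 8 parallel TCP streams). As a self-contained illustration of the measurement itself, here is a pure-Python stand-in that pushes 8 concurrent streams at a local sink and reports the success rate; the sink server and the 1 MiB-per-stream figure are made up for the sketch:

```python
import socket
import threading

def run_sink_server(host="127.0.0.1"):
    """Throwaway TCP sink: accepts connections and discards incoming data."""
    srv = socket.socket()
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(16)

    def handle(conn):
        with conn:
            while conn.recv(65536):
                pass

    def accept_loop():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:  # server socket closed: shut down
                return
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, srv.getsockname()[1]

def stream_ok(port, chunk=b"x" * 65536, chunks=16):
    """Push ~1 MiB over one TCP connection; report whether it all went out."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=5) as s:
            for _ in range(chunks):
                s.sendall(chunk)
        return True
    except OSError:
        return False

srv, port = run_sink_server()
results = []
workers = [threading.Thread(target=lambda: results.append(stream_ok(port)))
           for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
srv.close()
print("success rate:", sum(results) / len(results))
```

The acceptance check is then simply whether the measured success rate stays at or above the agreed threshold while the streams run.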

Innovation in testing methodology can save cost and time by introducing new techniques to stress the system. In one case, I set a Linux IP filter to drop 50% of the packets from a given source and verified that the fault handling was still correct.
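On Linux the drop itself can be injected with the iptables statistic match (something along the lines of `iptables -A INPUT -s <source> -m statistic --mode random --probability 0.5 -j DROP`, assuming the statistic module is available). The fault-handling logic under such a lossy link can also be modeled offline; here is a toy retransmission sketch with a simulated 50% drop rate (the channel model and retry limit are invented for illustration):

```python
import random

def lossy_send(deliver, drop_rate, rng):
    """Simulate one send over a link that drops packets at `drop_rate`."""
    if rng.random() < drop_rate:
        return False  # packet dropped by the link
    deliver()
    return True

def send_reliably(deliver, drop_rate, rng, max_retries=50):
    """Naive stop-and-wait fault handling: retry until the send succeeds."""
    for attempt in range(1, max_retries + 1):
        if lossy_send(deliver, drop_rate, rng):
            return attempt
    raise TimeoutError("gave up after %d retries" % max_retries)

rng = random.Random(42)  # seeded so the run is reproducible
received = []
attempts = [send_reliably(lambda: received.append(1), 0.5, rng)
            for _ in range(1000)]
print("delivered:", len(received))
print("mean attempts:", sum(attempts) / len(attempts))  # ~2 at 50% drop
```

The point of the fault-injection test is the same as in the toy model: despite heavy loss, every message should eventually get through and the retry cost should match expectations.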

Toolkits and Tips to Implement Performance Testing

Simulate the Background Load

Background load here refers to the prerequisites for kicking off a stress test. It usually includes one of the items below, or a combination of selected factors.

CPU load

(per core or for specific cores)

Memory Pressure

(Pay attention to libc malloc behavior and memory overcommitment.)

Traffic (w/ or w/o stream/packet drop rate)

(Stream: shape, rate, ToS/QoS, jumbo frames, back-to-back)

(IP Filter based packet drop rate)

(SQL transaction drop rate)

(Load Balance Policy based failure rate)

(Other failure rate)

(TBC)

Persistence status (disk occupation rate, disk I/O pressure)

(iowait measurement and simulation)

(TBC)
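For memory pressure in particular, note that with Linux overcommit an allocation only hurts once its pages are touched: an untouched buffer costs almost nothing. A minimal Python sketch of a resident-memory hog (the sizes and function name are illustrative):

```python
import mmap

def hold_memory(n_mib):
    """Allocate n_mib MiB of anonymous memory and make every page resident.

    Writing one byte per page forces the kernel to actually back the
    allocation; without this, overcommit means no real pressure is created.
    """
    buf = mmap.mmap(-1, n_mib * 1024 * 1024)
    page = 4096
    for off in range(0, len(buf), page):
        buf[off] = 1  # dirty one byte per page
    return buf

buf = hold_memory(64)  # keep the reference, or the pages are freed
print(len(buf) // (1024 * 1024), "MiB held")
```

Run several of these (or one large one) alongside the system under test to simulate the memory-pressure item above.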

Performance Issues Sharing

(Experiences of solving performance issues)

(TBC)

Change Log

Jul 26, 2017: Initial post draft.