Wednesday, September 29, 2021

Differences

Baseline And Benchmark Testing
Baseline testing is the process of running a set of tests to capture performance information. This information can be used as a point of reference when changes are made to the application in the future.
Example: We can run a baseline test of an application, collect and analyze the results, then modify several indexes on a SQL Server database and run the same test again, using the previous results to determine whether the new results are better, worse, or about the same.
Benchmarking is the process of comparing your system performance against an industry standard that is set by some other organization.

Performance Testing And Performance Engineering
1. Performance Testing is a distinct QA process that occurs once a round of development is completed, whereas performance engineering is an ongoing process that occurs through all phases of the development cycle, i.e. from the design phase to development to QA.

2. Performance Testing is conducted by a dedicated performance tester or team with sound knowledge of performance testing concepts, tool operation, result analysis, etc. A Performance Engineer is a person who has deep knowledge of application design, architecture, development, tuning, performance optimization, and bottleneck root-cause investigation and fixing.

3. When a bottleneck is identified during performance testing, the role of the performance tester is to analyse the test result and raise a defect. The job of a performance engineer, on the other hand, is to investigate the root cause and propose a solution to resolve the bottleneck.

4. A Performance Tester does not care much about the design and architecture of the application; he focuses on the application behaviour under load. A Performance Engineer cares about how efficiently each component of the application performs under load.

5. Performance Testing Life Cycle (PTLC) covers all phases of Performance Testing whereas Performance Engineering Life Cycle (PELC) covers all the engineering activities and deliverables.

Difference between Performance Center and Controller?
Performance Center
            1. PC is the web interface of the Controller.
            2. We can book a time slot to execute the test.
            3. We can download the scripts and also the .lrr files from wherever we are.

Controller
            1. The Controller is a standalone machine.
            2. We cannot book a slot to execute the test.
            3. It is not possible to download the .lrr file.

Difference between Web Server and Application Server ?
  1. Web servers are suitable for static content whereas application servers are appropriate for dynamic content.
  2. Web servers support scripting languages like Perl, PHP, ASP, JSP, etc. Application servers, in addition, support application-level services such as connection pooling, transaction support, object pooling, messaging services, etc.
  3. An application server contains web and EJB containers, with a web server as an integrated part; in contrast, a web server contains only a web or servlet container.
  4. A web server does not support distributed transactions, whereas an application server supports multithreading and distributed transactions.
  5. A web server uses HTML and the HTTP protocol. An application server, on the other hand, can use a graphical user interface and protocols like RPC/RMI in addition to HTTP.
  6. Load limit or capacity is higher for an application server than for a web server.
  7. A web server provides an environment to run web applications, with features like caching and scalability. An application server, on the contrary, provides an environment to run web as well as enterprise applications.
Differences between Request and Hit?
Request : A user action generates one or more requests to the server.
Hit : A request that successfully reaches the server is counted as a hit.

web_submit_data and web_submit_form
Ans : web_submit_data :   specific to the page and context-less.
    web_submit_form :   specific to the form and context-based.
    Context-based means the request depends on the previous response (VuGen locates the form in the page returned by the previous step).
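As a minimal sketch (the server name, form fields and values here are hypothetical), the same login step could be scripted either way:

// Context-based: VuGen locates the form in the previous response
web_submit_form("login.pl",
    "Snapshot=t1.inf",
    ITEMDATA,
    "Name=username", "Value=jojo", ENDITEM,
    "Name=password", "Value=bean", ENDITEM,
    LAST);

// Context-less: the full action URL and method are spelled out in the step itself
web_submit_data("login.pl",
    "Action=http://myserver/cgi-bin/login.pl",
    "Method=POST",
    "RecContentType=text/html",
    "Referer=http://myserver/login.htm",
    "Snapshot=t1.inf",
    "Mode=HTML",
    ITEMDATA,
    "Name=username", "Value=jojo", ENDITEM,
    "Name=password", "Value=bean", ENDITEM,
    LAST);

Because web_submit_form depends on the form being present in the previous response, it fails if that earlier step is removed or its response changes; web_submit_data carries everything it needs.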

lr_eval_string and lr_save_string

lr_eval_string : It reads the value from a parameter.

                         It replaces all occurrences of a parameter with its current value.
                        To save a parameter value into another parameter, use lr_eval_string together with lr_save_string (see the one-liner below).
lr_save_string : It assigns a value to an LR parameter.
                          It saves a null-terminated string to a parameter.
    Syntax : lr_save_string("c string", "LR parameter name");
    e.g. : lr_save_string("some string value", "prm_str");
    lr_output_message("value of prm_str : %s", lr_eval_string("{prm_str}"));
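To copy the value of one parameter into another, combine the two functions; a one-line sketch (parameter names are illustrative):

    lr_save_string(lr_eval_string("{prm_str}"), "prm_copy"); // prm_copy now holds the value of prm_str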

Difference between lr_error_message() and lr_output_message()?
Using the lr_error_message() function we can send an error message to the Controller's output (message) area, where it is flagged as an error, whereas lr_output_message() writes a normal log message.
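A quick illustration (the message text and status code are arbitrary):

    lr_output_message("Login page request sent");              // plain log entry
    lr_error_message("Login failed, HTTP status %d", 500);     // flagged as an error in the Controller output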

SOAP and REST

SOAP

  • SOAP is a protocol
  • SOAP stands for Simple Object Access Protocol
  • SOAP supports WS-Security and SSL
  • SOAP requires more resources and bandwidth
  • SOAP works only with the XML format
  • SOAP uses a service interface to expose its functionality to client applications.

REST :

  • REST is an architectural pattern (style)
  • REST stands for Representational State Transfer
  • REST works with plain text, XML, HTML & JSON
  • REST does not need much bandwidth
  • REST uses uniform resource locators (URLs) to access the components/resources on the server
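In VuGen, a REST call is typically scripted with web_custom_request; a minimal sketch (the host, endpoint and JSON body are hypothetical):

web_custom_request("create_user",
    "URL=http://myserver/api/users",
    "Method=POST",
    "Resource=0",
    "EncType=application/json",
    "Body={\"name\":\"jojo\",\"role\":\"tester\"}",
    LAST);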

Difference between GET and POST
GET :
  • GET is not a secure request (the data is visible in the URL)
  • It sends a small amount of information to the server
  • The request can appear in history, cache and bookmarks
  • GET requests are often cacheable
  • It is used to retrieve information
  • This method supports only string data types.
POST :
  • POST is a more secure request (the data travels in the request body)
  • It can send a large amount of information
  • It does not appear in history, cache or bookmarks
  • POST requests are NOT cacheable
  • It is used to submit data to the server
  • This method supports different data types, such as string, numeric, binary, etc.
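A minimal sketch of both methods in a VuGen script (server and field names are hypothetical):

// GET: the query travels in the URL itself
web_url("search",
    "URL=http://myserver/search?q=loadrunner",
    "Resource=0",
    "Mode=HTML",
    LAST);

// POST: the data travels in the request body
web_submit_data("search",
    "Action=http://myserver/search",
    "Method=POST",
    "Mode=HTML",
    ITEMDATA,
    "Name=q", "Value=loadrunner", ENDITEM,
    LAST);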
Difference between HTML mode and  URL mode
Context Dependency: HTML mode performs context-based recording whereas URL mode is free from context i.e. contextless recording. Since the resources are separated by individual request in URL mode, so there is no context dependency between them.
Resources: In HTML mode, the page resources like image, .css etc. are recorded in a single request whereas individual requests are created for each resource in URL mode.
User Action: In HTML mode, each user action corresponds to one request, whereas in URL mode multiple requests are associated with one user action, depending on the number of resources available on the page.
Script Size: The size of an HTML mode script is comparatively smaller than the size of a URL mode script. The reason is that in a URL mode script each resource has a separate request.
Correlation Effort: HTML mode script requires less effort than URL mode script to correlate the dynamic value.
Resource Download during replay: Resources are downloaded during replay in HTML mode. But in URL mode individual request triggers for each resource hence resource downloading is not required.
Execution Speed: HTML mode script is slower than URL mode script due to parsing of code.
Script Maintenance: The maintenance of the script is less in HTML mode. On the other hand, the URL mode script requires more maintenance and re-work due to changes in the application.
Explicit support: HTML mode supports explicit URL whereas URL mode does not support explicit HTML.
Used for: HTML mode is used for browser-based application whereas URL mode is used for non-browser (thick-client) application.
Functions: In HTML mode, web_submit_form is used whereas in URL mode, web_submit_data, web_image etc. are used.

Difference between WinInet and Socket Level
Sockets-based (default)
WinInet based. 
WinInet is the engine used by Internet Explorer and it supports all of the features incorporated into the IE browser. The limitations of the WinInet replay engine are that it is not scalable and does not support Linux. In addition, when working with threads, the WinInet engine does not accurately emulate modem speed and the number of connections.
VuGen’s proprietary sockets-based replay is a lighter engine that is scalable for load testing. It is also accurate when working with threads. The limitation of the sockets-based engine is that it does not support SOCKS proxy. If you are recording in that type of environment, use the WinInet replay engine.

Difference between Entry Criteria and Exit Criteria :
Entry criteria are:
  • Finalized NFRs
  • Script completion
  • Test environment must be ready
  • Deployment of the latest functionally tested code
  • Test Data readiness
Exit criteria are:
  • NFRs must be met
  • No performance bottleneck
  • No open defect
  • Final performance test report submission

“Run Vuser as a Thread” vs “Run Vuser as a Process”

Q. What Is The Advantage Of Running The Vuser As A Thread?
Answer : VuGen provides the facility to use multi-threading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. 
If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

A generic statement: Run Vuser as a thread is used for web-based applications, whereas Run Vuser as a process is used for client-based application testing.

This feature gives an option to enable the multithreading mode of the Vusers as per requirement. To enable it you can go to: 

VuGen -> Runtime Setting -> General -> Miscellaneous -> Multithreading

Run as a Thread vs Run as a Process
In LoadRunner Runtime Settings, you will find 2 radio buttons: “Run Vuser as a thread” and “Run Vuser as a process”. You need to select one of them for your test. What are the criteria for selection?

As per the Java multithreading definition, both processes and threads are independent sequences of execution, so each Vuser executes independently whether it runs as a thread or as a process. However, there are some major differences between the two options, which are covered below:

Run Vuser as a thread vs Run Vuser as a process:
By Definition: A program (i.e. LoadRunner script) is referred to as a process while a thread is a subset of the process, also referred to as a lightweight process.

Program Loading:
In Run Vuser as a process mode, the controller loads the same driver program into the memory again and again for every instance of the Vuser and increases the need for RAM. 
In Run Vuser as a thread mode, the controller loads only one instance of the driver program for all the threads.

Memory Sharing:
Run Vuser as a process mode needs separate memory space for each Vuser (each process), which is not shared with any other process.
In Run Vuser as a thread mode, the allocated memory space is shared between all the threads.

Memory Consumption:
Running Vusers as a thread mode requires less memory due to its memory sharing feature. 
Running Vusers as a process mode needs significantly more memory space.

Vuser Count:
Running Vuser as a thread mode can accommodate more Vusers count than Running Vuser as a process mode.

Execution:
Running Vuser as a process runs each Vuser separately using its own allocated resources; each process has at least one thread, called the primary thread (multithreading basics).
Running Vuser as a thread creates multiple threads of a single process. These threads execute concurrently and share the allocated resources of the process.

Number of Processes:
Running Vusers as a thread initiates only 1 process, i.e. mdrv.exe, for all the threads. On the other hand, Running Vusers as a process launches an individual mdrv.exe process for each Vuser.

Example: If 10 Vusers are running as a thread then there will be only 1 mdrv.exe process in the task manager whereas you can see 10 different mdrv.exe processes while running the test with Run Vuser as a process mode.

Killing mdrv.exe:
On killing the mdrv.exe process, all the threads stop and exit the test in Run Vuser as a thread mode, whereas in Run Vuser as a process mode only the associated Vuser stops and exits the test when its particular mdrv.exe process is killed.

Where to use:
A generic statement is Run Vuser as a thread is used for web-based applications whereas Run Vuser as a process is used for client-based applications testing.
In case of an unhandled exception (in LR code, not the script code) caused by one of the Vusers, the mdrv.exe process is killed, which fails all the running Vusers inside it. In such a case, Run Vuser as a process mode is the better option, because the termination of one mdrv.exe process does not interrupt the functioning of the other processes.
One Key Point:
If the application supports multithreading feature then the Vuser script will also support multithreading and you can choose Run Vuser as a thread mode.

Additional Information on Process and Thread:
Attributes:
A process owns its virtual address space, global variables, open files, child processes, executable code, a security context, a unique process identifier, a priority class, minimum and maximum working set sizes, and at least one thread of execution; all of these are associated with Run Vuser as a process mode.

A thread owns its program counter, registers, stack, thread state, scheduling priority, thread-local storage and a unique thread identifier; all of these are associated with Run Vuser as a thread mode.

Communication:
A thread uses methods like wait(), notify() and notifyAll() to communicate with other threads (of the same process). A process can communicate with other processes only by using inter-process communication.

Starting of New instance:
The creation of a thread is easy and requires less time. The creation of a new process, however, requires duplication of the parent process and needs more time than thread creation.

Control:
Threads have control over other threads of the same process. A process does not have control over sibling processes; it has control only over its child processes.

Context Switching:
Thread context switching is faster than process context switching.

Creation and Termination:
Thread creation and termination are quicker than process creation and termination.
=========================

Hits/sec and Transaction/sec :-

A Transaction is a group of requests which creates multiple Hits on the server. Hence we will see multiple Hits against one Transaction.
A single Transaction can create multiple Hits on the server.
The Hits/sec graph helps to identify the request rate sent by the testing tool.
A high Response Time may cause fewer Hits/sec.
E.g.: A Login operation which involves 10 HTTP requests can be grouped together into a single transaction.
When you see 1 data point in the Transactions/sec graph, there will be 10 corresponding data points in the Hits/sec graph (see the sketch below).
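A sketch of how such a grouping looks in a script (transaction and page names are illustrative):

lr_start_transaction("Login");                              // 1 transaction ...
web_url("login_page", "URL=http://myserver/login", LAST);   // ... but the login page plus its
web_submit_form("login.pl",                                 // resources produce many hits
    ITEMDATA,
    "Name=username", "Value=jojo", ENDITEM,
    "Name=password", "Value=bean", ENDITEM,
    LAST);
lr_end_transaction("Login", LR_AUTO);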

Absolute Graph and Relative Graph :

Absolute Graph : The x-axis is based on the system (clock) time.
Relative Graph : The x-axis is based on the elapsed time since the start of the scenario.

Simultaneous Users and Concurrent Users :

Concurrent users are all the Vusers who are active in the scenario at the same point of time, but they may be performing different operations. Simultaneous users perform the same operation at exactly the same point of time, which is what a rendezvous point enforces.

Difference Between Overlay Graph And Correlate Graph?
Overlay Graph: It overlays the content of two graphs that share a common x-axis. The left y-axis of the merged graph shows the current graph's values and the right y-axis shows the values of the graph that was merged.
Correlate Graph: It plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

Difference between values of Average response time in Summary Report and Average transaction response time graph

Sometimes it is confusing when we observe a difference between the Avg, max and min values of response time in the “Summary report” and the values in the “Average transaction response time” graph.

To understand this we will have to understand how these values are calculated.

The Avg, max and min values of response time in the “Summary report” are calculated using the complete data of all transactions executed during the test duration and are the most precise.

The “Average transaction response time” graph does not use all the data points captured during execution. The values in the graph are averaged out based on the granularity to make the graph more readable. So the Avg, max and min values of response time in the “Average transaction response time” graph are calculated from the averaged values (based on granularity) used as data points to plot the graph. These values will change if the granularity changes.

You may also find that the Avg, max and min values of response time shown during execution in the Controller differ from both of the above. This can happen when some think time is coded within the transaction scope: the think time is added to the response time value shown in the Controller, but it is excluded by default in LoadRunner Analysis.
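A sketch of that situation (transaction name and URL are illustrative):

lr_start_transaction("Search");
lr_think_time(10);   // counted in the Controller's live transaction time, excluded by default in Analysis
web_url("search", "URL=http://myserver/search?q=test", LAST);
lr_end_transaction("Search", LR_AUTO);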

Difference between Summary Data and Complete Data ?
Summary Data : Raw and unprocessed data.
Complete Data : It refers to the result data after it has been processed for use within Analysis.

Difference between Summary Report and Transaction Analysis Report ?
Summary Report : It provides general information about the scenario run.
You can access the Summary Report at any time from the Session Explorer.

Transaction Analysis Report :- It provides a detailed analysis of a specific transaction over a specific time period.
===
Difference Between Performance Testing And Functional Testing?
Functional Testing : Functional testing is done to verify the accuracy of the software with definite inputs against expected outputs.
This testing can be done manually or automated.
One user performs all the operations.
Customer, tester and developer involvement is required.
A production-sized test environment is not necessary, and H/W requirements are minimal.
Performance Testing : Performance testing is done to validate the behaviour of the system under various load conditions.
It gives the best results if automated.
Several users perform the desired operations.
Customer, tester, developer, DBA and N/W management team involvement is required.
It requires a close-to-production test environment and several H/W facilities to generate the load.

Difference between Page Load Time and Page Rendering Time ? 
Page Load Time :- The time the page takes to load all of the content on the page.
Page Rendering Time : The time the browser takes to render (display) the downloaded content of the page.

Difference between Cookie, Cache and Session ?
Cookies keep information such as user preferences, while
cache keeps resource files such as audio, video or flash files.
– Typically, cookies expire after some time, but cache is kept on the client’s machine until it is removed manually by the user.
Cookies are client-side files that contain user information, whereas sessions are server-side files that contain user information.
A cookie is not dependent on a session, but a session is dependent on a cookie.
A cookie expires depending on the lifetime you set for it, while a session ends when the user closes his/her browser.
The maximum cookie size is 4 KB, whereas in a session you can store as much data as you like.
There is no single function to unset a cookie (in PHP you expire it via setcookie()), while for a session you can use session_destroy(), which destroys all registered session data.
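For completeness, in a VuGen script a cookie can be added manually with web_add_cookie (the cookie name, value and domain below are hypothetical):

web_add_cookie("SessionID=abc123; DOMAIN=myserver.com");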

Difference between Soft Parse and Hard Parse? Which one is better?
Soft parse is better.
Before execution, Oracle SQL is parsed and checked for syntax (and parts of the semantic check), and the SQL is loaded into the library cache.
A soft parse does not require a shared-pool reload (and the associated RAM allocation).
Oracle must still perform a syntax check and semantic check, because it is possible that a DDL change has altered one of the target tables or views.
Whenever we run a SQL query and it soft parses, the parsed statement is taken directly from the library cache (shared pool) and executes very fast.

A hard parse is when your SQL must be re-loaded into the shared pool.
A hard parse is worse than a soft parse because of the overhead involved in shared-pool RAM allocation and memory management.
A hard parse takes more time to execute the SQL query.

Difference between Web_add_header And Web_add_auto_header ?
Web_add_header: The web_add_header function adds a header only to the single HTTP request that follows it.
Example:
web_add_header("Cookie", "{Cookie_ID}");
web_add_header("Pragma", "no-cache");

web_url("webmail",
    "URL=http://{URL}/etc/apps/webmail/",
    "Resource=0",
    "RecContentType=text/html",
    "Referer=",
    "Snapshot=t1.inf",
    "Mode=HTML",
    LAST);

web_add_header("Cookie", "{Cookie_ID}");
web_add_header("Pragma", "no-cache");

web_url("webmail_3",
    "URL=http://{URL}/etc/apps/webmail/?_task=mail&_action=getunread&_remote=1&_unlock=0&_={TimeStamp}",
    "Resource=0",
    "RecContentType=text/plain",
    "Referer=http://{URL}/etc/apps/webmail/?_task=mail",
    "Snapshot=t2.inf",
    "Mode=HTML",
    LAST);

web_add_auto_header: The web_add_auto_header function adds a header to all subsequent HTTP requests.

Web scripts usually send the standard request headers automatically with each request. If you need additional headers to be sent, you can use web_add_header or web_add_auto_header.

web_add_header only sends to the HTTP request that follows it, whereas web_add_auto_header sends to all the succeeding requests.

web_add_header and web_add_auto_header are automatically generated in your script if you enable this in Record -> Recording Options -> Advanced -> Headers -> Record headers not in list.
Example:
web_add_auto_header("Cookie", "{Cookie_ID}");
web_add_auto_header("Pragma", "no-cache");

web_url("webmail",
    "URL=http://{URL}/etc/apps/webmail/",
    "Resource=0",
    "RecContentType=text/html",
    "Referer=",
    "Snapshot=t1.inf",
    "Mode=HTML",
    LAST);

web_url("webmail_3",
    "URL=http://{URL}/etc/apps/webmail/?_task=mail&_action=getunread&_remote=1&_unlock=0&_={TimeStamp}",
    "Resource=0",
    "RecContentType=text/plain",
    "Referer=http://{URL}/etc/apps/webmail/?_task=mail",
    "Snapshot=t3.inf",
    "Mode=HTML",
    LAST);

Saturday, September 25, 2021

Rendezvous Point

A rendezvous point is used to force the Vusers to perform a task simultaneously during test execution. It generates an intense user load on the server for a particular functionality/page and instructs LoadRunner to measure the server performance under that situation. A rendezvous point instructs Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task.
Let’s try to understand Rendezvous Point with an example:
Suppose you want to measure how an online shopping portal application performs when five Vusers submit the order of a product simultaneously. To emulate the required user load on the server, you instruct all the Vusers to halt before the ‘Submit Order’ transaction. Once all five Vusers have arrived, LoadRunner releases them simultaneously, i.e. at exactly the same time.
 How to insert Rendezvous point in the LoadRunner script?
There are two ways to add Rendezvous Point in the script:
1. Insert during the script recording:
The recording bar of VuGen has an option to insert a rendezvous point while recording the script. Its icon looks like four arrows pointing at each other. If you hover the mouse over the icon, you can see the ‘Insert Rendezvous’ text. Click the icon and VuGen will insert a rendezvous point between the already recorded step and the step being recorded.

Figure 02: Recording Bar
2. Insert after the script recording: 
In case you forgot to add a rendezvous point during recording, you still have another option to add it later. Right-click in the script where you want to insert the rendezvous point, then hover over ‘Insert’ and click ‘Rendezvous’ (refer to Figure 03 below)

Figure 03: Insertion of Rendezvous Point
or in the VuGen menu Select Design > Insert in Script > Rendezvous (Refer to below Figure 04)
Figure 04: Insertion of Rendezvous Point

Rendezvous policy
The additional part of the rendezvous point in LoadRunner is rendezvous policy. You can set a rendezvous policy according to which the controller releases the Vusers from the rendezvous point either when the required number of Vusers arrives, or when a specified amount of time has passed.

Difference between Rendezvous Point and Rendezvous Policy
Rendezvous Point is defined in the VuGen (Test script) whereas Rendezvous Policy is defined in the controller (Test scenario)
Rendezvous Point instructs LoadRunner ‘Where to stop the Vusers?’. On the other hand, Rendezvous Policy instructs LoadRunner ‘How to release the Vusers?’
Importance of Rendezvous Policy:
Let’s say you have inserted a rendezvous point and, as per the definition, all Vusers have to wait until the last one arrives. Due to some issue, the last Vuser did not reach the rendezvous point and all the Vusers halted for a long duration, waiting for the stuck Vuser. This would spoil the purpose of the test. To avoid such a situation the rendezvous policy is used. The rendezvous policy helps to keep the test running once the given condition is met.
How to add Rendezvous policy in a LoadRunner scenario?
Rendezvous points are only effective for group mode with the manual scenario. It is disabled for a goal-oriented scenario. When you add a Vuser group or script to the scenario, LoadRunner scans the included scripts for rendezvous points and adds them to a central list of rendezvous points. In the controller, you have to
1. Select Scenario (Design) > Rendezvous to view this list.

 Figure 01
2. Click “Rendezvous…”

Figure 02
3. Click “Policy…”

Figure 03
4. Choose any one option as per your requirement and click ‘OK’

Policies:
Release when XXX % of all Vusers arrive at the rendezvous: Let’s say I have 100 Vusers; 20 are in down state, 20 are ready and 60 are in run state, and I want to release them when 40% of all Vusers (= 40 Vusers) have arrived, so I will give 40 as the value in the box.
Release when XXX % of all running Vusers arrive at the rendezvous: With the same 100 Vusers (20 down, 20 ready, 60 running), releasing at 40% of the running Vusers means 24 Vusers, so I will give 40 as the value in the box.
Release when XXX Vusers arrive at the rendezvous: With the same distribution, if I want to release 60 Vusers then I will give 60 as the value in the box.
Timeout between Vusers YY Sec.: The maximum time the Controller waits between two arriving Vusers before releasing the Vusers already waiting at the rendezvous.
Note:
Rendezvous points are only effective for group mode; not for percentage mode
In goal-oriented scenarios, a script’s rendezvous points are disabled.
Do not insert Rendezvous point inside a transaction.
The Vuser count or percentage in the rendezvous policy should be set considerably high so that a proper load is generated on the server and the purpose of the test is met.
The lr_rendezvous_ex function creates a rendezvous point in a Vuser script. When this statement is executed, the Vusers stop and wait for permission to continue.
The default timeout is 30 sec. The timeout is set in the scenario policy in the Controller.
This function can only be used in an Action section, not in vuser_init or vuser_end.

The difference between lr_rendezvous_ex and lr_rendezvous is that
lr_rendezvous does not return a range of values:
the lr_rendezvous function always returns zero, whereas
the lr_rendezvous_ex function returns one of several defined values.
The return values of the lr_rendezvous_ex function are defined in the lrun.h file. On Linux platforms, the function returns 0 for success and -1 for an illegal name. On PC platforms, the function returns one of the defined LR_REND_* values (for example, LR_REND_ALL_ARRIVED, as used below):


int rend_status;

if ((rend_status = lr_rendezvous_ex("Meeting")) != LR_REND_ALL_ARRIVED)
    lr_output_message("rendezvous unsuccessful %d", rend_status);
else
    lr_output_message("rendezvous successful %d", rend_status);

Q22. What Is Rendezvous Point? When The Rendezvous Point Is Insert?
Answer : To emulate peak load on the server.
 When multiple Vusers need to perform a task at exactly the same time, insert a rendezvous point to emulate the peak load on the server.
 You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on a bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
Q313. I want to know the response time for a page with 1000 simultaneous users. What is your approach?
Answer: Insert a rendezvous point in the script just above the targeted request and enable the rendezvous point in the Controller with the targeted number of users. Then execute the test; the test result will give the response time (see the sketch below).
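A minimal sketch of that placement (transaction and page names are illustrative); the rendezvous sits just before the measured request and outside the transaction, so the waiting time is not counted in the response time:

lr_rendezvous("page_load");                  // all Vusers wait here and are released together
lr_start_transaction("Target_Page");
web_url("target", "URL=http://myserver/target", LAST);
lr_end_transaction("Target_Page", LR_AUTO);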

Wednesday, September 15, 2021

strtok

strtok - Capture a substring or data from a string based on delimiters in LoadRunner

There is an inbuilt function in LoadRunner that can be used to capture data from a string by specifying delimiters. The strtok function can be used to do the trick.

In the example below it is shown how all the words can be captured from the string "http://localhost/app/myapp:8080" by using the 2 delimiters / and :

 extern char * strtok(char * string, const char * delimiters); // Explicit declaration

    char String_org[] = "http://localhost/app/myapp:8080"; // original string
    char delimiter[] = "/:";
    char * token;

    token = (char *)strtok(String_org, delimiter); // capture 1st sub string based on defined delimiters

    if (!token) {
        lr_output_message("No tokens found in string!");
        return( -1 );
    }

    while (token != NULL) { // While valid tokens are returned
        lr_output_message("%s", token);
        token = (char *)strtok(NULL, delimiter); // Get the next token
    }

 Output:

Starting iteration 1.

Starting action Action.

Action.c(15): http

Action.c(15): localhost

Action.c(15): app

Action.c(15): myapp

Action.c(15): 8080

Ending action Action.
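One caution: strtok modifies the source string in place (it overwrites each delimiter with a '\0' terminator) and keeps internal state between calls, so copy the original string first (e.g. with strcpy) if you need it again later, and avoid tokenizing two strings at the same time.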


Wednesday, September 8, 2021

Base64 Encode/Decode

Base64 Encode/Decode for LoadRunner
Code:

#include "base64.h"
vuser_init()
{
int res;
// ENCODE
lr_save_string("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789","plain");
b64_encode_string( lr_eval_string("{plain}"), "b64str" );
lr_output_message("Encoded: %s", lr_eval_string("{b64str}") );

// DECODE
b64_decode_string( lr_eval_string("{b64str}"), "plain2" );
lr_output_message("Decoded: %s", lr_eval_string("{plain2}") );

// Verify decoded matches original plain text
res = strcmp( lr_eval_string("{plain}"), lr_eval_string("{plain2}") );
if (res==0) lr_output_message("Decoded matches original plain text");

return 0;
}

 base64.h include file

/*
Base 64 Encode and Decode functions for LoadRunner
==================================================
This include file provides functions to Encode and Decode
LoadRunner variables. It's based on source codes found on the
internet and has been modified to work in LoadRunner.

Created by Kim Sandell / Celarius - www.celarius.com
*/
// Encoding lookup table
char base64encode_lut[] = {
'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q',
'R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f','g','h',
'i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y',
'z','0','1','2','3','4','5','6','7','8','9','+','/','='};

// Decode lookup table
char base64decode_lut[] = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0,62, 0, 0, 0,63,52,53,54,55,56,57,58,59,60,61, 0, 0,
0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14,
15,16,17,18,19,20,21,22,23,24,25, 0, 0, 0, 0, 0, 0,26,27,28,
29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,
49,50,51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, };

void base64encode(char *src, char *dest, int len)
// Encodes a buffer to base64
{
int i=0, slen=strlen(src);
for(i=0;i<slen;i+=3,src+=3)
{ // Enc next 4 characters
*(dest++)=base64encode_lut[(*src&0xFC)>>0x2];
*(dest++)=base64encode_lut[(*src&0x3)<<0x4|(*(src+1)&0xF0)>>0x4];
*(dest++)=((i+1)<slen)?base64encode_lut[(*(src+1)&0xF)<<0x2|(*(src+2)&0xC0)>>0x6]:'=';
*(dest++)=((i+2)<slen)?base64encode_lut[*(src+2)&0x3F]:'=';
}
*dest='\0'; // Append terminator
}

void base64decode(char *src, char *dest, int len)
// Decodes a base64 buffer back to plain text
{
int i=0, slen=strlen(src);
for(i=0;i<slen;i+=4,src+=4)
{ // Store next 4 chars in vars for faster access
char c1=base64decode_lut[*src], c2=base64decode_lut[*(src+1)], c3=base64decode_lut[*(src+2)], c4=base64decode_lut[*(src+3)];
// Decode to 3 chars
*(dest++)=(c1&0x3F)<<0x2|(c2&0x30)>>0x4;
*(dest++)=(c3!=64)?((c2&0xF)<<0x4|(c3&0x3C)>>0x2):'\0';
*(dest++)=(c4!=64)?((c3&0x3)<<0x6|(c4&0x3F)):'\0';
}
*dest='\0'; // Append terminator
}

int b64_encode_string( char *source, char *lrvar )
// ----------------------------------------------------------------------------
// Encodes a string to base64 format
//
// Parameters:
// source Pointer to source string to encode
// lrvar LR variable where base64 encoded string is stored
//
// Example:
//
// b64_encode_string( "Encode Me!", "b64" )
// ----------------------------------------------------------------------------
{
int dest_size;
int res;
char *dest;
// Allocate dest buffer
dest_size = 1 + ((strlen(source)+2)/3*4);
dest = (char *)malloc(dest_size);
memset(dest,0,dest_size);
// Encode & Save
base64encode(source, dest, dest_size);
lr_save_string( dest, lrvar );
// Free dest buffer
res = strlen(dest);
free(dest);
// Return length of dest string
return res;
}

int b64_decode_string( char *source, char *lrvar )
// ----------------------------------------------------------------------------
// Decodes a base64 string to plaintext
//
// Parameters:
// source Pointer to source base64 encoded string
// lrvar LR variable where decoded string is stored
//
// Example:
//
// b64_decode_string( lr_eval_string("{b64}"), "Plain" )
// ----------------------------------------------------------------------------
{
int dest_size;
int res;
char *dest;
// Allocate dest buffer
dest_size = strlen(source);
dest = (char *)malloc(dest_size);
memset(dest,0,dest_size);
// Encode & Save
base64decode(source, dest, dest_size);
lr_save_string( dest, lrvar );
// Free dest buffer
res = strlen(dest);
free(dest);
// Return length of dest string
return res;
}

Dynatrace

Appmon Components :
    Appmon Server
    Memory Analytics Server
    Frontend Server
    Collector
    Application Agents
    Performance Warehouse Database
Diagnostics :
  •     PurePath
  •     CPU Profiling
  •     Memory Dumps
  •     Process Crashes
Split Mode :-
  •     Split by Services
  •     Merge by Services
PurePath Technologies :
    Horizontal
    Vertical
Purepaths are stored in Dynatrace Session Files
Measures are stored in Dynatrace warehouse.
Application Performance Monitoring  :
Analysing slow Transactions
Analysing OutOfMemory Errors
How to retrieve performance metrics with Windows Performance Monitoring
Identify the Problems in your Applications
Memory Analysis.

A Dynatrace ActiveGate acts as a secure proxy between Dynatrace OneAgents and Dynatrace Clusters or between Dynatrace OneAgents and other ActiveGates—those closer to the Dynatrace Cluster
Types of ActiveGates—Environment ActiveGates or Cluster ActiveGates

OneAgent is responsible for collecting all monitoring data within your monitored environment. A single OneAgent per host is required to collect all relevant monitoring data—even if your hosts are deployed within Docker containers, microservices architectures, or cloud-based infrastructure

If I enable memory dump analysis on multiple ActiveGates, which ActiveGate will perform the memory dump?
ActiveGates have a priority assigned automatically. If there is more than one ActiveGate with the same priority, an endpoint will be selected randomly.

What happens if a file transfer to an ActiveGate fails?
OneAgent attempts to send the dump list to all available endpoints until it finds one that works. This process is retried until it's successful or until the dumps are deleted by aging tasks (for example, if there are too many or if they are too old).

What happens if ActiveGate runs out of space for memory dumps?
ActiveGate will first delete outdated dumps. If there are no outdated dumps, ActiveGate will delete the oldest dumps first.

Difference between Elapsed Time, Self Time, Duration :-

Elapsed time shows when the instrumented point in the code started execution (calculated from the beginning of the PurePath, so the first entry has zero time).
Self time shows the duration of the code itself (the instrumented row), excluding subcalls.
Duration shows the duration of the code itself including subcalls.
Synthetic Monitoring :-  you can troubleshoot with accuracy using detailed object-level, page, connection and host data across multiple browsers. It provides detailed diagnostics including DNS, connect time, SSL, first byte and content download times and errors, for every object on every page, and stored for postmortem and routine review.
Types of synthetic monitors: single-URL browser monitors, browser clickpaths, and HTTP monitors.
View garbage collection CPU consumption : View garbage collection CPU consumption | Dynatrace news

Monday, September 6, 2021

Performance Testing Types

 What is a Baseline test?

Ans: We can run a baseline test of an application, collect and analyze the results, then modify several indexes on a SQL Server DB and run the same test again, using the previous results to determine whether the new results are better, worse or the same.


How are benchmark and baseline tests different?

Ans: Benchmark Testing: It is the process of comparing your system performance against an industry standard that is set by some other organization.

Baseline information can be used as a reference point when changes are made to the application in the future. Benchmark tests, on the other hand, compare the performance of your system against industry standards given by other organizations. For example, run a baseline test on the application, analyze the collected results, modify several indexes on the SQL Server database, and then run the identical test once more, using the previous results to find out whether the new results are the same, worse or better.

What is Load Test?

A Load Test is a type of non-functional test which verifies the performance of an application or system under a peak load condition. A load test also validates the resource usage, stability and reliability of the software system under peak load.

What is Peak Load?

Peak Load is the highest load identified during a day, a month or a year, depending upon the production data selection criteria. To understand peak load, refer to the graph below.

Figure 1: Production Data Graph to identify the Peak Load

This graph shows the number of active customers per day in the month of June. The highest number of active customers is 927, on 30th June. Hence 927 is the peak load for the load test.

For a new application: Since a new application does not have any production data, the peak load needs to be predicted. The client or project business analyst (BA) confirms the expected peak load on the application. A performance tester can use the expected peak load to prepare the workload model for the load test.

Purpose of Load Test:

To identify whether the application can handle the peak load

To observe the behaviour of the application in terms of response time

To check that the resources (CPU, Memory and Disk) do not breach the defined performance limits

To identify if there is any bottleneck

Load Test is also conducted in a regression manner to identify the performance issue due to the weekly, fortnightly or monthly code releases.

Approach:

The NFR document has a separate set of NFRs for the load test. These NFRs relate to the peak user load count, response time, transactions per second etc. A performance tester designs the workload model using these NFRs and executes the test. Ideally, the duration of the load test is 1 hour (excluding the ramp-up and ramp-down periods). A typical load test user graph is:

Figure 2: A Typical Load Test Graph

This load test graph has a steady state of 1 hour along with a 10-minute ramp-up and a 10-minute ramp-down period. Therefore the test will run for 1 hour and 20 minutes. After completion of the test, a performance tester verifies the result against the defined load test NFRs.

What is a Soak Test or Endurance Test?

A Soak Test is a type of non-functional test which helps to identify memory leakage in the software system. Another name for the soak test is ‘Endurance Test’. In the soak test, a significant load is applied to the server for an extended period of time. Generally, the duration of a soak test is between 8 and 24 hours, but it may vary as per project requirements. A longer-duration performance test with an average load provides information about the behaviour of the garbage collector and memory management.

Soak testing highlights issues such as:

  • A constant degradation in response time when the system is run over time.
  • Deteriorations in system resources that aren't evident during short runs, but will surface when a test is run for a longer time. For example, memory consumption and performance, free disk space, operating capacity, etc.
  • Any periodical process that may affect the performance of the system, but which can only be detected during a prolonged system run. For example, a backup process that runs once a day, exporting data into a 3rd party system, etc.

How to calculate the Average Load?

The business analyst calculates the average load by referring to historical data. He may consider statistics from past days, weeks, months or years to get the average number, and calculates the figure using some mathematical formulae. To understand the average load calculation, refer to the graph below. It is a typical graph, not a practical one:


Figure 1: A typical historical data for calculating the average load

The above graph shows the historical data of a month. On calculating the average of the given figures, the number that comes out is 100. Hence 100 is the average user load for the soak test.

In some cases, if data is not available for the calculation, then 50%, 60% or 75% of peak load can be considered for the test. But before conducting the test, get confirmation on these volumes from the project team.

For a new application: Since a new application does not have any historical data, the above-suggested % of peak load can be used for soak testing, with proper approval from the project team. The purpose of the soak test does not change between a new and an existing application.

Purpose of Soak Test:

  1. Verify the sustainability of the application
  2. Check if there is any spike in the response time
  3. Identify the Memory Leak
  4. Check the behaviour of Garbage Collector
  5. Identify if there is any bottleneck
  6. Check the type of error due to the prolonged duration of the test and get the error percentage
  7. Check that the resources (CPU, Memory and Disk) do not breach the defined performance limit

Approach:

Many clients are not aware of the advantages of the soak test; hence they skip this test and do not provide any specific requirements for it. The soak test is one of the important performance tests and helps to detect memory leakage. The NFR document must have a separate set of NFRs for the endurance test. These NFRs relate to the average user load count, response time, transactions per second etc. In the absence of an average load, conduct the test at 50%, 60% or 75% of peak load and analyse the behaviour of the application.

A performance tester designs the longer-duration workload model using the defined or calculated NFRs and executes the test. Ideally, the duration of the endurance test is 8 to 24 hours (excluding the ramp-up and ramp-down periods). A typical endurance test user graph is:



Figure 2: A Typical Soak (Endurance)Test Graph

This endurance test graph has a steady state of 23 hours along with a 30-minute ramp-up and a 30-minute ramp-down period. Therefore the test will run for 24 hours. After completion of the test, a performance tester verifies the result against the defined soak test NFRs.

It is recommended to plan at least 2 soak tests in a single performance testing cycle, and only if both tests have consistent results should you jump to the next type of performance test.

What is Spike Test?

A Spike Test refers to a performance test which simulates a sudden high load on the server for a short period of time. This is a type of non-functional test which helps to identify the behaviour of an application or software system when an unexpectedly huge load arrives. The outcome of the spike test shows whether the application is able to handle a sudden load or not, and if it is, how much load.

The average load is considered as a base load. In some special cases, the peak load replaces the average load and becomes the base load.

Type of Spike Test:

Spike tests are of three types:

1. Constant Spike Test: A constant spike load is applied to the server after certain intervals of time. In this type of spike test, all the spikes have the same height, i.e. the same load.


Figure 1: Constant Spike Test Graph

2. Step-up Spike Test: A gradually increasing spike load is applied to the server after certain intervals of time. In this type of spike test, the response time should be measured at each spike to analyse how much it deviates from the base-load response time.

Figure 2: Step-up Spike Test Graph

3. Random Spike Test: A random spike load is applied to the server at random intervals. Such a test is conducted for an application that frequently gets spikes in the production environment.

Figure 3: Random Spike Test Graph

How to calculate Spike Load?

The business analyst analyses the historical data and checks whether any sudden spikes appeared in the past. Based on his analysis, he suggests the number of users for the spike load. He may also predict the numbers by analysing the company’s requirements. For example, if a company plans to conduct a flash sale or a one-minute sale, then he needs to calculate the spike load as per the registered and active user counts. There are multiple methods to calculate the spike load, which a business analyst must be aware of. This is not a task for the performance tester; the performance tester's responsibility is to design the workload model as per the spike test requirement.

For a new application: This is an optional test for a new application. Still, if the business wants to test the application with a spike load, then a performance tester can design a step-up spike test and note down the behaviour of the application.

Purpose:

  1. Verify the sustainability of the application under a sudden huge load
  2. Identify the deviation in the response time during the spike load
  3. Check the failure percentage of the transactions
  4. Identify the types of errors, like 500, 504 etc.
  5. Note the recovery time of the application in case the application goes down during the spike
  6. Identify if there is any bottleneck
  7. Check that the resources (CPU, Memory and Disk) do not breach the defined performance limit even during the spike period

Approach:

The spike test is an optional test in the performance testing world. The spike scenario is rare in the production environment but unavoidable. If any huge spike is identified in production, then it must be investigated. During the investigation, the same scenario is simulated in the performance test environment to identify the root cause.

To handle the same situation in the future, a spike test is performed in the test environment and the application is tuned in case any issue occurs.

A performance tester prepares the workload by referring to the spike test NFRs. The NFRs must define the Base Load and the Spike Load so that an accurate workload can be created. Ideally, the duration of the spike test is 1 hour (excluding the ramp-up and ramp-down periods). Typical spike user graphs are shown above (Figure 1, Figure 2 and Figure 3).

In the result, the application response time during the spike period is an important metric. The performance tester must also pay attention to the break (failure) point of the application (if any), along with the types of errors and the application recovery period. The types of errors give an idea of whether the application is fully down or only some of the weaker functionalities are impacted due to the sudden spike, and what the reason for the failure is. Root cause analysis helps to identify the exact issue.

It is recommended to plan at least 2 identical spike tests in a single performance testing cycle, and only if both tests have consistent results should you jump to the next type of performance test.

Performance Testing Process

Performance Testing Process?
Ans: Following are the major steps:

  1. First, identify the impacted components.
  2. Identify the performance acceptance criteria: It contains constraints and goals for throughput, response times and resource allocation
  3. Figure out the physical test environment before carrying out performance testing, like hardware, software and network configuration
  4. Plan and design performance tests: Define how usage is likely to vary among end users and find key scenarios to test for all possible use cases
  5. Test environment configuration: Before the execution, prepare the testing environment and arrange tools, other resources, etc.
  6. Test design implementation: According to your test design, create a performance test
  7. Performance Test Data: Arrange or prepare a sufficient amount of test data
  8. Run the tests: Execute and monitor the tests
  9. Analyze the test result and raise the defects (if found)
  10. Repeat the test after the tuning of the application
  11. Prepare the test report and conclude the test result

Generic parameters considered for performance testing :
The parameters are:
  1. Memory usage
  2. Processor usage
  3. Bandwidth
  4. Memory pages
  5. Network output queue length
  6. Response time
  7. Query time
  8. CPU interruption per second
  9. Committed memory
  10. Thread counts
  11. Top waits, etc.
The common performance problems are:
  1. Low Throughput
  2. Poor Response Time
  3. Poor Scalability
  4. Server unable to handle the X amount of user load
  5. Heap Issue
  6. Thread Pool Issue
  7. Long-running DB query

Friday, September 3, 2021

Eliminate method

The Eliminate method filters out certain requests, transactions or components from the list of culprits which cause a performance issue. In the Drill Down method, we take an issue and continue the investigation until we get to the root cause. But what will you do when you get more than one critical issue during the test? The drill-down method may take more time than expected. In such a case, you have to pick the most critical issue and park the other issues for a while. Here, you will require the ‘Eliminate Method’.

It has been observed that around 20%-30% of newly built systems/applications have many performance issues. Some are really critical and some have the lowest priority to resolve. This method evolved because developers cannot concentrate on all the defects at the same time and provide proper fixes within short timelines. Some of the fixes also re-introduce old defects.

Why Eliminate Method only?
To narrow down the percentage of bottlenecks with a quick and quality solution, we use the Eliminate method. This is a method where you use your cognitive skills, as innovatively as possible, to decide which bottleneck is critical and which can be parked for some time (until the next release). So you could, for example, eliminate the hardware-resource-related bottlenecks and focus on software-resource bottlenecks only. You can also choose to remove a DB server bottleneck first and then focus on the app server, or you may go for removing the server bottlenecks first and then concentrate on the code; but whatever you eliminate should not be forgotten.

Since this is one of the methods to carry out your performance test analysis, I would never say to ignore any performance bottleneck. Just make a list of the open bottlenecks, or mention them in the sign-off report, or keep those bottlenecks open in QC/PC, so that developers can pay attention to those bugs and provide fixes in the next release.

Example:
Let’s assume a case where we found multiple performance bottlenecks in an application. The first bottleneck is that the application is unable to handle the stress-test load and crashed, while the load test shows a few errors (504/Gateway Timeout) on the payment page.

The second issue is more critical than the first one, because it is related to the payment gateway. Any failure in processing payments in the live environment hampers the user experience index. Since the application is able to handle the peak load (current load) in the load test, the stress-test issue can be parked. The stress-test issue indicates that the application is unable to handle the future load, which could be the figures of 2 to 3 years later. Therefore eliminate the stress-test load issue and concentrate on resolving the Gateway issue first.

Another case: you again find 2 bottlenecks, high CPU utilization in the load test and memory leakage in the soak test. As per the requirement, the peak user load on this application is 500 and the peak hours are from 0900 to 1200 and from 1500 to 1800. In this case, first reduce the CPU utilization and make the application responsive for 500 users. You can park the memory leakage issue for a while.

Conclusion:
The Eliminate method does not say to neglect even a small performance issue. It says to keep the low-priority issues aside until the developer resolves the critical issues. The Eliminate method is effective for getting a quick resolution of high-priority issues within tight timelines. Note down the low-priority issues and include them in the next round of testing.

Graph Correlation

The Micro Focus Analysis tool has a graph merging option. This option merges graphs and provides a single view which is helpful for issue identification.

Graph correlation is about establishing a relationship between performance metrics by comparing the data. A software system is always complex, with a multi-tier architecture, different technologies and interfaces to internal or external systems. So the easy way to do correlation is to compare the end-user performance with the server-side metrics. It is like comparing the trend of one set of metrics, like response time, with other metrics, like web server CPU utilization.

What do we get by correlating the graph?
When you see a relationship between two sets of data, like an increase in response time corresponding to an increase in the number of users and heap size, that uncovers an area for further investigation and analysis. Refer to the graph below:
Figure 01: Graph Correlation
It shows an association between client-side metrics and the server data. The graph shows that when the number of users reaches 40 there is a sudden increase in heap size, and similarly when the number of users reaches 70 and 100. It suggests that the garbage collector needs investigation.

In a similar fashion, you can merge the client and server-side graphs to identify the exact bottleneck.

What are the best graph combinations in Correlate Method?
Case 1: High Response Time without error
Correlate Response time graph with Data Throughput graph, Memory Utilization graph and DB Query Processing Time graph

Case 2: Error
Correlate Error graph with Response Time graph, CPU Utilization graph, Memory Utilization graph and DB Query Processing Time graph

Along with the above correlation of the graph refer Heap Dump, Thread Dump and GC for more clarity on the root cause.

Note: 
  1. In an ideal scenario, if metrics are directly proportional then the y-axis graph lines should follow each other; the opposite trend should be seen when metrics are inversely proportional.   Example: During steady state, the Throughput and Hits/sec graphs follow each other, while the Errors/sec and Average Response Time graphs move in opposite directions.
  2. A linear line in the forward direction indicates good results and the stability of the application whereas step-up, step-down and spike in the correlated graphs lead the investigation.

Compare

As per the ComCorDEEPT technique, ‘Compare’ is the first and foremost method of performance test result analysis. The ‘Compare’ method is a simple way to match the defined NFRs with the actual results. For example, if the defined response time NFR for a Login page is 5 seconds and the actual response time observed during the load test is 4.2 seconds, then you can compare the defined vs actual response times and conclude the result. In this case, the actual response time is better than the expected one. Similarly, Transactions/sec (TPS), throughput, resource utilization etc. can be compared.

How does the ‘Compare’ method work when NFRs are not predefined?
For a New Application: In this approach, start the test with a small user load and increase the load until the application breaks. The pattern of increasing the user load forms steps.

Compare the result of each step with the previous step and find out the deviation of the performance metrics like response time, throughput, number of error etc.

For an Existing Application: First, execute a test on the old (existing) code version and set the baseline metrics, then deploy the latest code and execute the test with the same load, and compare the result with the baseline test result. The ‘Compare’ method is very useful for the baseline-benchmark test approach, where you can make an apples-to-apples comparison. The baseline test result can be compared with the benchmark test result provided that both tests were executed with the same user load and scenario configuration.

Likewise, you can apply the ‘Compare’ method in the absence of predefined NFR of new as well as existing application.

How does the ‘Compare’ method help in analyzing the result?
Let’s take one example. There is an existing application which has a baseline test result from the previous release. During the baseline test execution the following were the NFRs:

                            No. of Users: 100
                            End to End Response Time: 40 seconds
                            TPS: 0.8 TPS
                            CPU Utilization: Below 70%
                            Java Heap Size: <500 MB
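As a sanity check, these NFRs are roughly consistent with Little's Law (N = TPS x iteration time): 100 users / 0.8 TPS = 125 seconds per complete user iteration, i.e. the 40-second end-to-end response time plus roughly 85 seconds of think time and pacing (an assumption about how the remaining iteration time is spent).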

The same NFRs are still applicable for the latest code base, but here we have some additional results (from the baseline test) which will help to perform an apples-to-apples comparison.

Refer to the below graphs, the first column has a Baseline test result, the second column has Benchmark 1 test result and the third column has Benchmark 2 test result. Now, apply the compare method:


The comparison of client and server performance metrics helps to understand the acceptable thresholds for the system and its resources and figure out what is significantly different. The places where the differences appear are the areas to look for bottlenecks.

Hence, by comparing benchmark test results 1 and 2 with either the defined NFRs or the baseline test result, we can conclude that the benchmark 1 test result met the NFRs while the benchmark 2 test result breached the NFRs due to a Java heap size issue, which requires further investigation to find the exact root cause.

How to conclude the result?
After comparing all the important metrics, you will know which of the NFRs/baseline test results were met. Based on your analysis, if the results are within the acceptance limit then you can easily provide a GREEN sign-off. If some of the NFRs are breached (with low or no risk), then provide an AMBER sign-off with proper risks and recommendations. Finally, if very few or none of the NFRs are met, then provide a RED sign-off highlighting the risks.

PT

Client Side Performance : Client Side Performance Testing
                                           Client-Side Performance Testing - DZone

2. Performance Testing – Overview:
  1. Baseline Test
  2. Benchmark Test
  3. Load Test
  4. Stress Test
  5. Soak Test or Endurance Test
  6. Spike Test
  7. Break-Point Test
  8. Step-Up Test
  9. Early Performance Test
  10. Other Non-Functional Tests
  11. Test data
5. Performance Testing Tool
  1. Open Source Tool
  2. Licensed Tool (Performance Center)
  3. Cloud-Based Tool
  •     Insights on Cloud-Based Performance Testing
6.  How to choose the right Performance Testing Tool?
7.  Client-side (UI) Performance Testing Tools
8.  Important Terminologies
9.  Realistic Performance Test
10.  Extension of Little’s Law
11.  Performance Testing Life Cycle:
  • Overview
  • Risk Assessment
  • Non-functional Requirement Gathering
  • Performance Test Planning
  • Performance Test Design (Scripting)
  • Workload Modelling
  • Performance Test Execution
  • Performance Test Reporting
11.  Performance Test Result Analysis
    1. Compare Method
    2. Correlate Method
    3. Drill down Method
    4. Eliminate Method
    5. Extrapolation Method
    6. Pattern Method
    7. Trend Method
    8. Basic Server Monitoring Counters
    9. Performance Testing Document Template:
    10. Risk Assessment Document
    11. Non-Functional Requirement Document
    12. Performance Test Plan Document
    13. Interim Performance Test Report
    • Baseline And Benchmark Testing
    • Performance Testing and Performance Engineering
    • Performance Center and Controller
    • Web Server and Application Server 
    • Request and Hit
    • web_submit_data and web_submit_form.
    • lr_exit and lr_abort.
    • GET and POST.
    • SOAP and REST.
    • HTML mode and URL mode.
    • Socket level and Win-Inet level.
    • Entry and Exit criteria.
    • Save offset and Save length.
    • Vuser run as a Process and Vuser run as a Thread.
    • Throughput and Response Time.
    • Hits/Sec and Transaction/Sec.
    • Absolute Graph and Relative Graph.
    • Simultaneous Users and Concurrent Users.
    • Overlay Graph And Correlate Graph?
    • Avg response time in Summary Report and Avg transaction response time graph
    • Summary Data and Complete Data
    • Summary Report and Transaction Analysis Report
    • Performance Testing And Functional Testing
    • Page Load Time and Page Rendering Time
    • Cookie, Cache and Session
    • Soft Parse and Hard Parse
    • web_add_header and web_add_auto_header
    • Priority and Severity
    DataBase :-

    WebService:-
    Others :
    Links :
                https://www.rapidtables.com/    --- for multiple tools
