Latency (Network Delay):
Latency is the time a request spends travelling over the network: from the client to the server, and from the server back to the client. It is measured in milliseconds, seconds, minutes or hours. Let's say:
A request starts at t=0
Reaches the server in 1 second (at t=1)
The server takes 2 seconds to process it (at t=3)
Reaches back to the client in 1.2 seconds (at t=4.2)
So, the network latency is 2.2 seconds (= 1 + 1.2).
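To make the arithmetic concrete, here is a minimal Python sketch of the timeline above (the timestamps are the ones assumed in the example):

```python
# Timestamps from the example above (all values in seconds)
t_sent = 0.0         # request leaves the client
t_at_server = 1.0    # request reaches the server
processing = 2.0     # server processing time (not part of latency)
t_at_client = 4.2    # response arrives back at the client

client_to_server = t_at_server - t_sent                      # 1.0 s
server_to_client = t_at_client - (t_at_server + processing)  # 1.2 s

latency = client_to_server + server_to_client
print(f"Network latency = {latency:.1f} s")  # 2.2 s
```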
Bandwidth:
Bandwidth is the capacity of the pipe (the communication channel): it indicates the maximum amount of water that can pass through the pipe. In performance testing terms, the maximum amount of data that can be transferred per unit of time through a communication channel is the channel's bandwidth. Let's say an ISDN line has 64 Kbps of bandwidth; we can increase it by adding one more 64 Kbps channel, so the total bandwidth becomes 128 Kbps, and at most 128 Kbps of data can be transferred through the ISDN channel.
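A quick sketch of the ISDN arithmetic above (the 1 MB payload is an assumed figure, purely for illustration):

```python
# Channel capacities from the ISDN example (kilobits per second)
single_channel_kbps = 64
channels = 2
bandwidth_kbps = single_channel_kbps * channels  # 128 Kbps in total

# Assumed payload for illustration: 1 MB = 8 * 1024 kilobits
payload_kilobits = 8 * 1024
best_case_seconds = payload_kilobits / bandwidth_kbps
print(f"Best-case transfer time at {bandwidth_kbps} Kbps: {best_case_seconds} s")  # 64.0 s
```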
Throughput:
The amount of data moved successfully from one place to another in a given time period is called data throughput. The higher the throughput, the more information a server can process, the better it performs, and the more users it can serve. If a website can process 100 hits per second during its first test and 150 hits per second after an update, that means 50 more people can view the website at once without waiting.
It is typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps). Let's say 20 bits of data are transferred during the 4th second; the throughput at t=4 is then 20 bps.
Note: Data throughput can never be more than the network bandwidth.
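A minimal sketch of the throughput calculation, including the note above that throughput is bounded by bandwidth (the 1,000,000-bit transfer is a hypothetical figure):

```python
def throughput_bps(bits_transferred: float, elapsed_seconds: float) -> float:
    """Throughput = data moved successfully per unit of time."""
    return bits_transferred / elapsed_seconds

# The example from above: 20 bits moved in 1 second -> 20 bps
print(throughput_bps(20, 1))  # 20.0

# Hypothetical: 1,000,000 bits moved in 10 s over a 128 Kbps channel
measured = throughput_bps(1_000_000, 10)  # 100,000 bps
assert measured <= 128_000  # throughput can never exceed bandwidth
```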
Response Time:
Response time is the amount of time from the moment a user sends a request until the application indicates that the request has completed and the response reaches back to the user. In the latency example above, the response time is 4.2 seconds (1 + 2 + 1.2).
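In practice, response time is measured by timestamping on the client side, from the moment the request is sent until the full response has been read. A minimal sketch using only Python's standard library (the URL is a placeholder):

```python
import time
import urllib.request

url = "https://example.com/"  # placeholder endpoint

start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    response.read()  # wait until the complete response has arrived
elapsed = time.perf_counter() - start

# Response time = travel time (latency) + server processing + transfer
print(f"Response time: {elapsed:.3f} s")
```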
Some important points for Throughput:
- Fixing a bandwidth problem is easier than fixing a latency problem.
- If throughput is nearly equal to the bandwidth, the full capacity of the network is being utilized, which may lead to a network bandwidth issue.
- An increase in response time with a flat throughput graph indicates a network bandwidth issue. This bottleneck can be rectified by adding extra channels, i.e. by increasing network bandwidth.
- Ideally, consistent throughput indicates the expected capacity of the network bandwidth.
- Some tools express throughput not in units per unit of time but in clock periods. This is incorrect, but commonly used for convenience.
- Ideally, throughput increases in proportion to the load during the user ramp-up period. If throughput decreases while response time increases, it indicates instability of the application/system.
- Ideally, response time and throughput should be constant during the steady state. Low deviation in both indicates a stable application.
- The number of threads is directly proportional to throughput, until the system saturates (see the Little's Law sketch after this list).
- With low latency but small bandwidth, data takes longer to travel from point A to point B than over a connection with low latency and high bandwidth.
- Latency is affected by connection type, distance and network congestion.
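As a rough model (not from the original post), Little's Law ties several of the points above together: throughput ≈ concurrent users / response time. The sketch below shows why throughput grows with the number of threads during ramp-up and flattens once the system saturates; all numbers are assumed for illustration:

```python
def expected_throughput(concurrent_users: int, response_time_s: float) -> float:
    """Little's Law (steady state): throughput = users / response time."""
    return concurrent_users / response_time_s

# Ramp-up: more users, flat response time -> throughput rises proportionally
for users in (10, 20, 40):
    print(users, expected_throughput(users, 0.5))  # 20.0, 40.0, 80.0 req/s

# Saturation: doubling users only doubles response time; throughput is flat
print(expected_throughput(80, 1.0))   # 80.0 req/s
print(expected_throughput(160, 2.0))  # 80.0 req/s
```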
Q: Can you tell a scenario where throughput increases along with response time, i.e. where they are directly proportional?
Answer: Yes, this can happen when the application contains a lot of heavy static content, such as CSS (Cascading Style Sheets), that takes a long time to download and display. In such a situation both throughput (more data moved per second) and response time increase together.