
ThinkMo EDU Share – network 40.TCP congestion control

Tina, 11/07/2022


Congestion Control Principle

In practice, packet loss generally results from router buffer overflow when the network becomes congested. Packet retransmission therefore treats a symptom of network congestion (the loss of a particular transport-layer segment) but does not address its cause: too many sources sending data at too high a rate. To deal with the cause of network congestion, some mechanism is needed to throttle senders in the face of congestion.

1. Causes and costs of congestion

Case 1: Two senders and a router with infinite buffers

Even in this (extremely) idealized case, a cost of congested networks has been found, namely that packets experience huge queuing delays when their arrival rate approaches the link capacity.

Case 2: Two senders and a router with finite buffers

Another cost of network congestion is that the sender must perform retransmissions to compensate for dropped (lost) packets due to buffer overflow.

Another cost of network congestion, unnecessary retransmissions by the sender when large delays are encountered, can cause routers to use their link bandwidth to forward unnecessary copies of packets.

Case 3: 4 senders and multiple routers with finite buffers and multi-hop paths

Another cost of dropping packets due to congestion is that when a packet is dropped along a path, the transmission capacity that each upstream router used to forward that packet to the point where it was dropped is ultimately wasted.

2. Congestion control method

TCP congestion control methods are explained in the next section. Here, the two broad congestion control approaches used in practice are identified, along with specific network architectures and congestion control protocols that embody them.

At the broadest level, congestion control methods can be differentiated according to whether the network layer provides explicit assistance for transport layer congestion control.

End-to-End Congestion Control

  1. In the end-to-end congestion control approach, the network layer does not provide explicit support for transport layer congestion control. Even if there is congestion in the network, end systems must infer it from observations of network behavior such as packet loss and delay. TCP takes an end-to-end approach to congestion control because the IP layer does not provide feedback information about network congestion to end systems.
  2. Loss of TCP segments (detected by a timeout or by 3 redundant acknowledgments) is taken as a sign of network congestion, and TCP reduces its window length accordingly. There are also some recent proposals for TCP congestion control that use an increasing round-trip delay as an indicator of growing network congestion.

Network-Assisted Congestion Control

  1. In network-assisted congestion control, a router provides explicit feedback information to the sender about the state of congestion in the network. This feedback can be as simple as a bit indicating congestion on a link. For example, in ATM Available Bit Rate (ABR) congestion control, the router explicitly informs the sender of the maximum host sending rate that it (the router) can support on the outgoing link.
  2. The default Internet versions of IP and TCP employ an end-to-end congestion control method. Recently, however, IP and TCP have also been able to selectively implement network-assisted congestion control.
  3. For network-assisted congestion control, there are usually two ways for congestion information to be fed back from the network to the sender, as shown in the following figure:

Direct feedback can be sent to the sender by a network router. Notifications of this form usually take the shape of a choke packet (which basically says “I’m congested!”).

A second form of notification, which is more general, is that the router marks or updates a field in packets flowing from the sender to the receiver to indicate the occurrence of congestion. Once a marked packet is received, the receiver notifies the sender of the network congestion indication. Note that at least one full round-trip time is required for the latter form of notification.
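The marking-based form of notification can be sketched as follows. This is an illustrative simplification in Python, not any real router or TCP/IP implementation; the `Packet` fields, the queue threshold, and the function names are all assumptions made for the example.

```python
# Sketch: a router marks a congestion bit in packets flowing toward the
# receiver, and the receiver echoes the mark back to the sender in its ACKs.
# All names and the threshold policy here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    congestion_experienced: bool = False  # the "mark" field

def router_forward(pkt: Packet, queue_len: int, threshold: int) -> Packet:
    """Mark (rather than drop) the packet when the queue is building up."""
    if queue_len > threshold:
        pkt.congestion_experienced = True
    return pkt

def receiver_feedback(pkt: Packet) -> bool:
    """The receiver echoes the congestion mark back toward the sender."""
    return pkt.congestion_experienced

pkt = router_forward(Packet(b"data"), queue_len=12, threshold=8)
# receiver_feedback(pkt) is True: the sender learns of the congestion only
# after the mark has traveled forward and the echo has come back, i.e.,
# at least one full round-trip time later.
```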

TCP congestion control

1. The principle of congestion control

TCP provides reliable data transfer services between two processes running on different hosts.

Another key part of TCP is its congestion control mechanism.

TCP must use end-to-end congestion control rather than network-assisted congestion control because the IP layer does not provide explicit network congestion feedback to end systems.

The approach taken by TCP is for each sender to limit the rate at which it can send traffic to the connection based on the level of network congestion it perceives. If a TCP sender perceives no congestion along the path from it to the destination, the TCP sender increases its sending rate; if the sender perceives congestion along the path, the sender reduces its sending rate.

But this approach raises three questions:

How does a TCP sender limit the rate at which it sends traffic to its connection?

How does a TCP sender perceive congestion on the path from it to the destination?

When the sender perceives end-to-end congestion, what algorithm does it use to change its sending rate?

Let’s start by analyzing how a TCP sender restricts traffic to its connection.

Each end of a TCP connection consists of a receive buffer, a send buffer, and several variables (LastByteRead, rwnd, etc.). The TCP congestion control mechanism running on the sender keeps track of an additional variable, the congestion window. The congestion window, denoted cwnd, places a limit on the rate at which a TCP sender can send traffic into the network.

In particular, the amount of unacknowledged data at a sender may not exceed the minimum of cwnd and rwnd, i.e.:

LastByteSent − LastByteAcked ≤ min{cwnd, rwnd}

The above constraints limit the amount of unacknowledged data in the sender and thus indirectly limit the sender’s sending rate. To understand this, consider a connection with negligible packet loss and transmission delay. So roughly speaking, at the beginning of each round trip time (RTT), the above constraints allow the sender to send cwnd bytes of data to the connection, and at the end of the RTT the sender receives an acknowledgment message for the data.

Therefore, the sender’s sending rate is roughly cwnd/RTT bytes/sec. By adjusting the value of cwnd, the sender can adjust the rate at which it sends data into the connection.
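The constraint and the resulting rate estimate can be expressed as a small sketch (Python; the function names and example values are illustrative, not from the text):

```python
# Sketch: how cwnd and rwnd jointly cap the amount of in-flight data,
# and the resulting approximate sending rate. Names are illustrative.

def usable_window(cwnd: int, rwnd: int) -> int:
    """Unacknowledged bytes must stay <= min(cwnd, rwnd)."""
    return min(cwnd, rwnd)

def approx_send_rate(window: int, rtt: float) -> float:
    """Roughly one window of bytes per RTT, in bytes/second."""
    return window / rtt

# Example (assumed values): cwnd = 14600 B, rwnd = 20000 B, RTT = 100 ms
window = usable_window(14600, 20000)   # at most 14600 bytes in flight
rate = approx_send_rate(window, 0.1)   # about 146000 bytes/s
```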

Next consider how the TCP sender perceives congestion on the path between it and the destination.

  1. Define a “packet loss event” for a TCP sender as: either a timeout occurs, or 3 redundant ACKs are received from the receiver. When excessive congestion occurs, the buffers of one (or more) routers along the path can overflow, causing a datagram (containing a TCP segment) to be dropped. Dropped datagrams in turn cause a packet loss event at the sender (either timeout or receiving 3 redundant ACKs), which the sender considers an indication of congestion on the sender-to-receiver path.
  2. After considering the problem of congestion detection, consider the more optimistic case in which the network is not congested, that is, no packet loss event occurs. In this case, the TCP sender receives acknowledgments for previously unacknowledged segments. TCP takes the arrival of these acknowledgments as an indication that all is well: segments in transit are being successfully delivered to their destination. It therefore uses the acknowledgments to increase its window length (and hence its transmission rate).
  3. Note that if acknowledgments arrive at a fairly slow rate (eg, if the end-to-end path has high latency or contains a segment of low bandwidth links), the congestion window will increase at a fairly slow rate. On the other hand, if acknowledgements arrive at a high rate, the congestion window will increase more rapidly. Because TCP uses acknowledgments to trigger (or time) an increase in its congestion window length, TCP is said to be self-clocking.

How does the TCP sender determine the rate at which it should send?

  1. Given the mechanism for adjusting the cwnd value to control the sending rate, a key question remains: how does the TCP sender determine the rate at which it should send? If many TCP senders collectively send too fast, they can congest the network, leading to congestion collapse. But if TCP senders are too cautious and send too slowly, they cannot fully utilize the bandwidth of the network;
  2. That is, the TCP sender is able to send at a higher rate without congesting the network. So how do TCP senders determine their sending rate so that the network is not congested, while still taking full advantage of all available bandwidth? Are TCP senders explicitly cooperating, or is there a distributed way for TCP senders to set their send rates based only on local information?

TCP answers these questions using the following guiding principles:

  1. A lost segment indicates congestion, so the TCP sender’s rate should be reduced when a segment is lost.

For a given segment, a timeout event or the receipt of four acknowledgments (one original ACK followed by three redundant ACKs) is interpreted as an implicit “loss event” indication for the segment following the quadruply acknowledged segment. From a congestion control point of view, the question is how a TCP sender should reduce its congestion window length, i.e., its sending rate, in response to such an inferred packet loss event.

  2. An acknowledged segment indicates that the network is delivering the sender’s segments to the receiver, so the sender’s rate can be increased when an acknowledgment arrives for a previously unacknowledged segment.

The arrival of an acknowledgment is considered an implicit indication that all is well, that the segment is being successfully delivered from the sender to the receiver, and therefore the network is not congested. The congestion window length can thus be increased.

  3. Bandwidth probing.

Given ACKs indicating a congestion-free source-to-destination path, and packet loss events indicating a congested path, TCP’s strategy for regulating its transmission rate is to increase its rate in response to arriving ACKs until a loss event occurs, at which point the transmission rate is reduced.

So, to probe for the rate at which congestion begins to occur, the TCP sender increases its transmission rate, backs off from that rate when loss occurs, and then begins probing again to see whether the congestion onset rate has changed. The TCP sender behaves like a child who asks for (and gets) more and more candy until finally being told “No!”, backs off a bit, and then begins making requests again a short while later. Note that there is no explicit congestion-state signaling in the network: ACKs and loss events serve as implicit signals, and each TCP sender acts on local information, asynchronously from other TCP senders.

2. TCP congestion control algorithm

The algorithm consists of three main parts: ① slow start; ② congestion avoidance; ③ fast recovery.

Slow start and congestion avoidance are mandatory parts of TCP; the difference between the two is the way the length of cwnd is increased in response to a received ACK. Slow start increases the length of cwnd faster than congestion avoidance does.

Fast recovery is a recommended part, not required for the TCP sender.

① Slow start When a TCP connection starts, the value of cwnd is usually initialized to a small value of 1 MSS, which makes the initial sending rate about MSS/RTT. (For example, with MSS = 500 bytes and RTT = 200 ms, the resulting initial sending rate is only about 20 kbps.)
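The worked example above can be checked directly (a quick arithmetic sketch, assuming one MSS-sized segment is sent per RTT):

```python
# Check of the example: MSS = 500 bytes and RTT = 200 ms give about 20 kbps.
MSS = 500                  # bytes (value from the example above)
RTT = 0.2                  # seconds
rate_bps = MSS * 8 / RTT   # one MSS per RTT, converted to bits/second
# rate_bps == 20000.0, i.e., about 20 kbps as stated in the text
```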

Since the available bandwidth to the TCP sender may be much larger than MSS/RTT, the TCP sender wants to quickly find the amount of available bandwidth.

Thus, in the slow-start state, the value of cwnd starts with 1 MSS and is incremented by 1 MSS each time a transmitted segment is acknowledged for the first time.

Illustration: TCP sends the first segment into the network and waits for an acknowledgment. When this acknowledgment arrives, the TCP sender increases the congestion window by one MSS and sends out two maximum-length segments. If these two segments are acknowledged, the sender increases the congestion window by one MSS per acknowledged segment, making the congestion window 4 MSS, and so on. The sending rate doubles every RTT in this process. Therefore, the TCP sending rate starts slowly but grows exponentially during the slow-start phase.
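The per-RTT doubling described above can be sketched as follows (illustrative Python, assuming every segment in the window is acknowledged within the RTT; not a real TCP implementation):

```python
# Sketch of slow start's growth: cwnd grows by 1 MSS per first-time ACK,
# which doubles cwnd once per RTT while a full window is acknowledged.
MSS = 1460  # bytes (assumed value)

def slow_start_round(cwnd: int) -> int:
    """One RTT of slow start: every segment in the window is ACKed,
    and each ACK adds 1 MSS, so cwnd doubles."""
    acks = cwnd // MSS          # one ACK per segment sent this RTT
    return cwnd + acks * MSS

cwnd = MSS                      # slow start begins at 1 MSS
growth = [cwnd]
for _ in range(4):
    cwnd = slow_start_round(cwnd)
    growth.append(cwnd)
# growth corresponds to 1, 2, 4, 8, 16 segments: exponential per-RTT increase
```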

When will this exponential growth end?

Slow start offers several answers to this question:

First, if a packet loss event (i.e., congestion) is indicated by a timeout, the TCP sender sets cwnd to 1 MSS and restarts the slow-start process. It also sets a second state variable, ssthresh (shorthand for “slow start threshold”), to cwnd/2, i.e., half the value the congestion window had when congestion was detected.

The second way that slow start ends is directly associated with the value of ssthresh. Since ssthresh is set to half the value of cwnd when congestion is detected, it may be reckless to continue doubling cwnd when the value of ssthresh is reached or exceeded. Therefore, when the value of cwnd is equal to ssthresh, slow start ends and TCP transitions to congestion avoidance mode. It will be seen that TCP increases cwnd more cautiously when entering congestion avoidance mode.

The last way slow start can end is if 3 redundant ACKs are detected, in which case TCP performs a fast retransmit and enters the fast-recovery state.

TCP behavior in slow start is summarized in the FSM description of TCP congestion control in Figure 3-51.

② Congestion avoidance Once the congestion-avoidance state is entered, the value of cwnd is about half of what it was when congestion was last encountered, that is, congestion may not be far away! Therefore, TCP does not double the value of cwnd every RTT; instead it adopts a more conservative approach, increasing cwnd by just one MSS per RTT. This can be accomplished in several ways:

A common approach is for the TCP sender to increase cwnd by MSS · (MSS/cwnd) bytes whenever a new acknowledgment arrives. (For example, if MSS is 1460 bytes and cwnd is 14600 bytes, then 10 segments are sent within one RTT. Each arriving ACK (assuming one ACK per segment) increases the congestion window length by 1/10 MSS, so after acknowledgments for all 10 segments have been received, the congestion window has increased by one MSS.)
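This per-ACK increment can be sketched with the example’s values (illustrative Python; real implementations track cwnd with more care):

```python
# Sketch of the common congestion-avoidance increment: each new ACK adds
# MSS * (MSS / cwnd) bytes, so one full window of ACKs adds roughly one
# MSS per RTT. Values below are the example's assumed values.
MSS = 1460

def on_new_ack(cwnd: float) -> float:
    """Additive increase: a small per-ACK increment."""
    return cwnd + MSS * (MSS / cwnd)

cwnd = 14600.0                 # 10 segments in flight this RTT
for _ in range(10):            # one ACK per segment
    cwnd = on_new_ack(cwnd)
# cwnd has grown by roughly one MSS over the RTT (slightly less here,
# because cwnd is re-read after each increment, shrinking later increments)
```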

When should the linear growth of congestion avoidance (1MSS per RTT) be ended?

When a timeout occurs, TCP’s congestion-avoidance algorithm behaves the same as slow start: the value of cwnd is set to 1 MSS and, on the packet loss event, the value of ssthresh is updated to half the value of cwnd.

When the packet loss event is instead triggered by a triple-redundant-ACK event, the network is still delivering segments from the sender to the receiver (as indicated by the receipt of the redundant ACKs). Therefore, TCP’s behavior for this kind of loss event should be less drastic than for a timeout-indicated loss: TCP halves the value of cwnd (adding 3 MSS to account for the 3 redundant ACKs already received) and records ssthresh as half the value that cwnd had when the 3 redundant ACKs were received. TCP then enters the fast-recovery state.
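The reaction to 3 redundant ACKs described above can be sketched as follows (illustrative Python; the function name and example values are assumptions):

```python
# Sketch of the triple-redundant-ACK reaction: ssthresh becomes half of
# cwnd, and cwnd becomes that half plus 3 MSS (one MSS per redundant ACK
# already received). Names and values are illustrative.
MSS = 1460

def on_triple_redundant_ack(cwnd: int) -> tuple[int, int]:
    ssthresh = cwnd // 2
    new_cwnd = ssthresh + 3 * MSS
    return new_cwnd, ssthresh

cwnd, ssthresh = on_triple_redundant_ack(29200)  # 20 segments in flight
# cwnd is now 14600 + 4380 = 18980 bytes, ssthresh is 14600 bytes
```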

③ Fast recovery In fast recovery, the value of cwnd is increased by one MSS for each redundant ACK received for the missing segment that caused TCP to enter the fast-recovery state.

Eventually, when an ACK for the missing segment arrives, TCP enters the congestion avoidance state after lowering cwnd.

If a timeout event occurs, fast recovery transitions to the slow-start state after performing the same actions as in slow start and congestion avoidance: cwnd is set to 1 MSS, and ssthresh is set to half of the value cwnd had when the loss event occurred.

Fast recovery is a recommended but not required component of TCP. Interestingly, an early version of TCP, called TCP Tahoe, unconditionally reduced its congestion window to 1 MSS and entered the slow-start phase regardless of whether the loss event was indicated by a timeout or by 3 redundant ACKs. The newer version of TCP, TCP Reno, incorporates fast recovery.

3. A Complete FSM Description of the TCP Congestion Control Algorithm
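The FSM figure is not reproduced here; as a rough stand-in, the three states and the transitions described above can be sketched in code (a simplified illustration, not a real TCP implementation; the class and method names are assumptions):

```python
# Sketch of the Reno FSM: slow start, congestion avoidance, fast recovery,
# with the transitions described in the text. Illustrative only.
MSS = 1460

class RenoSketch:
    def __init__(self) -> None:
        self.state = "slow_start"
        self.cwnd = MSS            # slow start begins at 1 MSS
        self.ssthresh = 64 * 1024  # assumed initial threshold

    def on_new_ack(self) -> None:
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh          # deflate on new ACK
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += MSS                   # exponential growth
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                  # congestion avoidance
            self.cwnd += MSS * MSS // self.cwnd  # ~1 MSS per RTT

    def on_dup_ack(self) -> None:
        if self.state == "fast_recovery":
            self.cwnd += MSS        # inflate per redundant ACK

    def on_triple_redundant_ack(self) -> None:
        self.ssthresh = self.cwnd // 2
        self.cwnd = self.ssthresh + 3 * MSS
        self.state = "fast_recovery"

    def on_timeout(self) -> None:
        self.ssthresh = self.cwnd // 2
        self.cwnd = MSS
        self.state = "slow_start"
```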

4. TCP congestion control

Recall that, ignoring the initial slow-start phase at the beginning of a connection and assuming that loss is indicated by 3 redundant ACKs rather than by timeouts, TCP’s congestion control consists of a linear (additive) increase of cwnd by 1 MSS per RTT, followed by a halving (multiplicative decrease) of cwnd on a triple-redundant-ACK event. Therefore, TCP congestion control is often referred to as additive-increase, multiplicative-decrease (AIMD) congestion control.

AIMD congestion control gives rise to the “sawtooth” behavior shown in Figure 3-53:

This also nicely illustrates TCP’s bandwidth-probing intuition from earlier: TCP linearly increases its congestion window length (and hence its transmission rate) until a triple-redundant-ACK event occurs. It then halves its congestion window length and begins growing linearly again, probing for additional available bandwidth.
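The sawtooth can be reproduced with a small simulation (illustrative Python; the assumption that a loss occurs every 8 RTTs is made up for the example):

```python
# Sketch of the AIMD "sawtooth": additive increase of 1 MSS per RTT until
# a loss (3 redundant ACKs) halves cwnd, then the climb begins again.
MSS = 1460

def aimd_trace(rtts: int, loss_every: int) -> list[int]:
    cwnd, trace = 10 * MSS, []      # assumed starting window: 10 segments
    for t in range(1, rtts + 1):
        if t % loss_every == 0:
            cwnd //= 2              # multiplicative decrease on loss
        else:
            cwnd += MSS             # additive increase per RTT
        trace.append(cwnd)
    return trace

trace = aimd_trace(16, 8)
# cwnd climbs linearly, halves at RTT 8, climbs again, halves at RTT 16
```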

As mentioned earlier, many TCP implementations use the Reno algorithm, and many variants of Reno have been proposed. The TCP Vegas algorithm attempts to avoid congestion while maintaining good throughput. The basic idea of Vegas is to: ① detect congestion in the routers between source and destination before packet loss occurs; and ② lower the transmission rate linearly when impending packet loss is detected. Impending packet loss is predicted by observing RTTs: the longer the packets’ RTTs, the more severe the congestion in the routers.

As of late 2015, the Ubuntu Linux implementation of TCP provides slow start, congestion avoidance, fast recovery, fast retransmission, and SACK by default, as well as other congestion control algorithms such as TCP Vegas and BIC.
