CS6250 Computer Networks Exam 1 |
100% Correctly Answered and Rated
A+ | Latest 2025/2026
What is transmission control and why do we need to control it?
- Correct Answer - Transmission control is implemented in the transport
layer. It deals with issues of fairness in using the network. Transmission
control has two parts: flow control and congestion control.
What is flow control and why do we need to control it?
- Correct Answer - Flow control is TCP's rate-control mechanism that
matches the sender's rate to the rate at which the receiver reads the
data. The sending host maintains a "receive window", which gives the
sender an idea of how much data the receiver can handle at that
moment.
"TCP provides a flow-control service to its applications to eliminate the
possibility of the sender overflowing the receiver's buffer. Flow control is
thus a speed matching service—matching the rate at which the sender is
sending against the rate at which the receiving application is reading." -
Kurose 3.5.5
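As a rough illustration of this speed-matching idea, here is a minimal Python sketch of the bookkeeping Kurose describes: the receiver advertises rwnd based on its free buffer space, and the sender keeps its unacknowledged data within that window. The class and function names are made up for illustration, not part of any real TCP implementation.

class FlowControlledSender:
    # Illustrative sketch of the flow-control bookkeeping from Kurose 3.5.5.
    # Variable names follow the textbook; the class itself is made up.

    def __init__(self):
        self.last_byte_sent = 0   # highest byte handed to the network so far
        self.last_byte_acked = 0  # highest byte acknowledged by the receiver
        self.rwnd = 0             # receive window last advertised by the receiver

    def on_ack(self, ack_byte, advertised_rwnd):
        # Each ACK acknowledges data and carries a fresh receive window.
        self.last_byte_acked = ack_byte
        self.rwnd = advertised_rwnd

    def can_send(self, nbytes):
        # Flow-control constraint: LastByteSent - LastByteAcked <= rwnd
        in_flight = self.last_byte_sent - self.last_byte_acked
        return in_flight + nbytes <= self.rwnd

def receiver_rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    # The receiver advertises its remaining free buffer space:
    #   rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)
    return rcv_buffer - (last_byte_rcvd - last_byte_read)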
What is congestion control?
- Correct Answer - Congestion control regulates the transmission rate to
protect the network from congestion, avoiding long queues and packet
drops.
What are the goals of congestion control?
- Correct Answer - Efficiency. We should get high throughput; the
utilization of the network should be high. The load should match the
available capacity.
Fairness. Each user should get its fair share of the network bandwidth.
The notion of fairness depends on the network policy. For this context,
we will assume that every flow crossing the same bottleneck link should
get equal bandwidth (a way to quantify this is sketched after this answer).
Low delay. In theory, it is possible to design protocols that have
consistently high throughput assuming infinite buffers. Essentially, we
could just keep sending packets into the network; they would get stored
in the buffers and eventually be delivered. However, this leads to long
queues in the network and therefore to delays, so applications that are
sensitive to network delays, such as video conferencing, would suffer.
Thus, we want network delays to be small.
Fast convergence. The idea here is that a flow should converge to its
fair allocation quickly. This matters because a typical network workload
is composed of many short flows and a few long flows. If convergence to
the fair share is not fast enough, the network will remain unfair to these
short flows.
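The cards do not give a formula for fairness; one common way to quantify the "equal share at the bottleneck" notion is Jain's fairness index, sketched below in Python purely as an illustration (the function name is ours). An index of 1.0 means all flows get exactly equal throughput; it falls toward 1/n as the allocation becomes more skewed.

def jains_fairness_index(throughputs):
    # Jain's fairness index for n flows sharing a bottleneck:
    #   J = (sum x_i)^2 / (n * sum x_i^2)
    # J = 1.0 when every flow gets the same throughput; worst case is 1/n.
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

# Example: three flows splitting a bottleneck equally vs. unequally.
print(jains_fairness_index([10, 10, 10]))  # 1.0 (perfectly fair)
print(jains_fairness_index([25, 4, 1]))    # ~0.47 (one flow dominates)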
What is network-assisted congestion control?
- Correct Answer - Here we rely on the network layer to provide explicit
feedback to the sender about congestion in the network.
For instance, routers could use ICMP source quench to notify the source
that the network is congested.
However, under severe congestion, even the ICMP packets could be
lost, rendering the network feedback ineffective.
What is end-to-end congestion control?
- Correct Answer - With the end-to-end (E2E) approach, the network
provides no explicit feedback about congestion to the end hosts. Instead,
the hosts infer congestion from the network's behavior and adapt their
transmission rate.
Eventually, TCP ended up using the end-to-end approach. This largely
aligns with the end-to-end principle adopted in the design of the Internet:
congestion control is a primitive provided in the transport layer, whereas
routers operate at the network layer, so the feature resides in the end
nodes with no support from the network. Note that this is no longer
strictly true, as certain routers in modern networks can provide explicit
feedback to the end hosts using protocols such as ECN and QCN.
How does a host infer congestion?
- Correct Answer - The host infers congestion from the network's behavior
mainly through two signals:
First is the packet delay. As the network gets congested, the queues in
the router buffers build up. This leads to increased packet delays. Thus,
an increase in the round-trip time, which can be estimated based on
ACKs, can be an indicator of congestion in the network. However, it
turns out that packet delay in a network tends to be variable, making
delay-based congestion inference quite tricky.
Another signal for congestion is packet loss. As the network gets
congested, routers start dropping packets. Note that packets can also be
lost for other reasons, such as routing errors, hardware failures, TTL
expiry, errors on the links, or flow-control problems, although such
losses are relatively rare.
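The cards only say that the RTT "can be estimated based on ACKs". As a hedged illustration of how that is commonly done, the Python sketch below uses the standard exponentially weighted moving average from Kurose 3.5.3 with the typical weight alpha = 0.125; the variable names and sample values are ours.

ALPHA = 0.125  # typical weight for the EWMA RTT estimator (Kurose 3.5.3)

def update_rtt_estimate(estimated_rtt, sample_rtt):
    # EstimatedRTT = (1 - alpha) * EstimatedRTT + alpha * SampleRTT
    # sample_rtt is measured from sending a segment until its ACK arrives.
    return (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt

# A sustained rise in the estimate across ACKs suggests queues are building
# up in router buffers, i.e., the network may be getting congested.
estimated_rtt = 0.100  # seconds, some initial estimate (illustrative)
for sample in (0.102, 0.110, 0.150, 0.210):
    estimated_rtt = update_rtt_estimate(estimated_rtt, sample)
    print(round(estimated_rtt, 4))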
How does a TCP sender limit the sending rate?
- Correct Answer - TCP uses a congestion window, which is similar to the
receive window used for flow control. It represents the maximum amount
of unacknowledged data that a sending host can have in transit (sent but
not yet acknowledged).
TCP uses a probe-and-adapt approach in adapting the congestion
window. Under regular conditions, TCP increases the congestion window
trying to achieve the available throughput. Once it detects congestion
then the congestion window is decreased.
In the end, the amount of unacknowledged data that a sender can have
in flight is the minimum of the congestion window and the receive window.
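Below is a minimal Python sketch of this probe-and-adapt idea, assuming an additive-increase/multiplicative-decrease rule; the cards do not specify exact constants, so the MSS value, growth rule, and halving on congestion are common choices rather than a definitive implementation, and the class is illustrative, not real TCP code. The key point is the last line of can_send: in-flight data is bounded by min(cwnd, rwnd).

class ProbeAndAdaptSender:
    # Illustrative congestion-window bookkeeping (not a real TCP implementation).

    def __init__(self, mss=1460):
        self.mss = mss
        self.cwnd = mss       # congestion window, in bytes
        self.rwnd = 65535     # receive window advertised by the receiver
        self.bytes_in_flight = 0

    def send(self, nbytes):
        # Caller should check can_send() first; track data now in transit.
        self.bytes_in_flight += nbytes

    def on_ack(self, acked_bytes, advertised_rwnd):
        self.bytes_in_flight -= acked_bytes
        self.rwnd = advertised_rwnd
        # Probe: no congestion detected, so grow the window
        # (additive increase of roughly one MSS per RTT's worth of ACKs).
        self.cwnd += self.mss * acked_bytes // self.cwnd

    def on_congestion(self):
        # Adapt: congestion inferred (e.g., from loss), so shrink the window
        # (multiplicative decrease; halving is a common choice).
        self.cwnd = max(self.mss, self.cwnd // 2)

    def can_send(self, nbytes):
        # The sender's unacknowledged data is limited by BOTH windows:
        #   bytes_in_flight <= min(cwnd, rwnd)
        return self.bytes_in_flight + nbytes <= min(self.cwnd, self.rwnd)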