Transport Layer
Transport-layer services
Multiplexing and demultiplexing
Connectionless transport: UDP
Principles of reliable data transfer
Connection-oriented transport: TCP
Principles of congestion control
TCP congestion control
Evolution of transport-layer functionality
COMPSCI 453 Computer Networks
Professor Jim Kurose
College of Information and Computer Sciences
University of Massachusetts
Class textbook:
Computer Networking: A Top-Down Approach (8th ed.)
J.F. Kurose, K.W. Ross
Pearson, 2020
http://gaia.cs.umass.edu/kurose_ross
2.
TCP: overview RFCs: 793, 1122, 2018, 5681, 7323
cumulative ACKs
pipelining:
• TCP congestion and flow control set window size
connection-oriented:
• handshaking (exchange of control
messages) initializes sender,
receiver state before data exchange
flow controlled:
• sender will not overwhelm receiver
point-to-point:
• one sender, one receiver
reliable, in-order byte stream:
• no “message boundaries”
full duplex data:
• bi-directional data flow in
same connection
• MSS: maximum segment size
3.
2. Reliable, In-Order Byte Stream
TCP provides:
1. Reliable delivery:
If a packet is lost or corrupted, TCP retransmits it.
2. In-order delivery:
Even if packets arrive out of order, TCP reorders them
before delivering to the application.
3. Byte stream:
TCP treats all the data as one long stream of bytes, like the HELLO WORLD example in the byte-number table.
Each byte has a sequence number (used for
reliability and ordering).
So TCP is managing this long “stream of bytes,” not
separate packets.
Transport Layer: 3-3
1. Point-to-Point
TCP is point-to-point,
meaning:
One connection has exactly
one sender and one receiver.
Example:
When your browser connects
to a web server
that’s one TCP connection
between your computer and
that server.
(Unlike UDP, TCP doesn’t do
one-to-many or broadcast
communication.)
Byte Number: 1  2  3  4  5  6  7 ...
Data:        H  E  L  L  O     W ...
4.
4. MSS — Maximum Segment Size
MSS means Maximum Segment
Size — the largest chunk of data
(in bytes) that TCP can send in
one segment.
Typical MSS ≈ 1460 bytes (for
Ethernet, since total frame =
1500 bytes).
Example:
If MSS = 1000 bytes → TCP
sends at most 1000 bytes of
user data per segment.
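To make the arithmetic concrete, here is a small Python sketch (hypothetical helper names) of how the MSS follows from the link MTU, assuming minimal 20-byte IP and TCP headers with no options:

```python
# MSS = link MTU minus the IP and TCP header sizes (minimal headers, no options)
def mss_from_mtu(mtu, ip_header=20, tcp_header=20):
    return mtu - ip_header - tcp_header

def segments_needed(data_len, mss):
    # number of TCP segments required to carry data_len bytes of application data
    return -(-data_len // mss)  # ceiling division

print(mss_from_mtu(1500))           # 1460 for Ethernet (1500-byte MTU)
print(segments_needed(3000, 1000))  # 3 segments at MSS = 1000
```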
Transport Layer: 3-4
3. Full Duplex Data
TCP allows full duplex
communication — data
can flow in both directions
simultaneously.
Example:
While your browser is
sending a request to a
web server, the server can
also start sending data
back at the same time.
5.
6. Pipelining
TCP allows multiple packets to
be sent before receiving an
ACK (unlike Stop-and-Wait).
This improves efficiency.
The number of
unacknowledged packets that
can be in flight depends on the
window size (defined by flow
control and congestion control).
Transport Layer: 3-5
5. Cumulative
Acknowledgments (ACKs)
TCP uses cumulative ACKs,
which means:
The ACK number represents
the next byte expected by the
receiver.
Example:
If the receiver has received
bytes 1–500, it will send ACK
= 501,
meaning “I got everything up
to 500, now send me from
501.”
If packet 501–600 is lost, it
keeps sending ACK = 501 until
it gets that data.
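The cumulative-ACK rule above can be sketched as a toy Python receiver (hypothetical class name; real TCP buffering is more involved):

```python
class CumulativeAckReceiver:
    # toy model: the ACK value is always the next byte expected, per the slide's example
    def __init__(self):
        self.next_expected = 1
        self.out_of_order = {}  # seq -> length, buffered out-of-order segments

    def receive(self, seq, length):
        if seq == self.next_expected:
            self.next_expected += length
            # deliver any buffered segments that now continue the stream
            while self.next_expected in self.out_of_order:
                self.next_expected += self.out_of_order.pop(self.next_expected)
        elif seq > self.next_expected:
            self.out_of_order[seq] = length  # gap: buffer, keep same ACK
        return self.next_expected            # cumulative ACK to send

r = CumulativeAckReceiver()
print(r.receive(1, 500))    # ACK = 501
print(r.receive(601, 100))  # gap at 501: still ACK = 501 (duplicate ACK)
print(r.receive(501, 100))  # fills gap, 601-700 already buffered -> ACK = 701
```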
6.
8. Congestion Control (Network Side)
Congestion control prevents the
network (not just the receiver) from
getting overloaded.
TCP automatically slows down when
it detects congestion.
Main algorithms:
• Slow Start
• Congestion Avoidance
• Fast Retransmit
• Fast Recovery
These help maintain fairness and
stability in the Internet.
Transport Layer: 3-6
7. Flow Control (Receiver Side)
Flow control ensures the
sender doesn’t overwhelm
the receiver.
The receiver tells the sender
how much data it can handle
— this is called the Receive
Window (rwnd).
Example:
If rwnd = 4000 bytes → the sender
may have at most 4000
unacknowledged bytes in flight.
This prevents the receiver’s
buffer from overflowing.
7.
10. Flow Controlled
The sender will not overwhelm the
receiver.
TCP adjusts its sending rate
based on:
•Receiver’s buffer capacity
(flow control)
•Network congestion
(congestion control)
That’s why TCP is often called
“reliable and self-adjusting.”
Transport Layer: 3-7
9. Connection-Oriented
(Handshake)
TCP is connection-oriented,
meaning it establishes a
connection before sending
data.
This is done using a 3-way
handshake:
1.SYN: Sender requests
connection
2.SYN-ACK: Receiver agrees
3.ACK: Sender confirms
After this, both sides are ready
to exchange data.
This handshake initializes
sequence numbers, buffers,
and states.
8.
TCP segment structure
(32 bits wide)
• source port #, dest port #
• sequence number — segment seq #: counting bytes of data into byte stream (not segments!)
• acknowledgement number — ACK: seq # of next expected byte; A bit: this is an ACK
• head len — length (of TCP header); not-used bits
• flag bits — C, E: congestion notification; U: urg data pointer; A: ACK; P: push; R, S, F (RST, SYN, FIN): connection management
• receive window — flow control: # bytes receiver willing to accept
• checksum — Internet checksum
• urg data pointer
• options (variable length) — TCP options
• application data (variable length) — data sent by application into TCP socket
9.
TCP sequence numbers, ACKs
Sequence numbers:
• byte stream “number” of first byte in segment’s data
Acknowledgements:
• seq # of next byte expected from other side
• cumulative ACK
Sender sequence number space (window size N):
• sent, ACKed
• sent, not-yet ACKed (“in-flight”)
• usable but not yet sent
• not usable
[outgoing segments from sender and receiver each carry: source port #, dest port #, sequence number, acknowledgement number, A bit, checksum, rwnd, urg pointer]
10.
TCP sequence numbers,ACKs
simple telnet scenario (Host A ↔ Host B):
User types ‘C’
A→B: Seq=42, ACK=79, data = ‘C’
host B ACKs receipt of ‘C’, echoes back ‘C’
B→A: Seq=79, ACK=43, data = ‘C’
host A ACKs receipt of echoed ‘C’
A→B: Seq=43, ACK=80
11.
TCP round trip time, timeout
Q: how to set TCP timeout
value?
longer than RTT, but RTT varies!
too short: premature timeout,
unnecessary retransmissions
too long: slow reaction to
segment loss
Q: how to estimate RTT?
SampleRTT: measured time
from segment transmission until
ACK receipt
• ignore retransmissions
SampleRTT will vary, want
estimated RTT “smoother”
• average several recent
measurements, not just current
SampleRTT
12.
TCP round trip time, timeout
EstimatedRTT = (1-α)*EstimatedRTT + α*SampleRTT
exponential weighted moving average (EWMA)
influence of past sample decreases exponentially fast
typical value: α = 0.125
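As a minimal sketch (hypothetical function name), the EWMA update is one line of Python:

```python
ALPHA = 0.125  # typical value from the slide

def update_estimated_rtt(estimated_rtt, sample_rtt, alpha=ALPHA):
    # EWMA: influence of past samples decays exponentially fast
    return (1 - alpha) * estimated_rtt + alpha * sample_rtt

est = 100.0
for sample in [120, 110, 150]:
    est = update_estimated_rtt(est, sample)
print(round(est, 1))  # 109.3 -- smoother than the raw samples
```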
[graph: RTT (milliseconds) vs. time (seconds), gaia.cs.umass.edu to fantasia.eurecom.fr — SampleRTT and EstimatedRTT]
13.
TCP round trip time, timeout
timeout interval: EstimatedRTT plus “safety margin”
• large variation in EstimatedRTT: want a larger safety margin
TimeoutInterval = EstimatedRTT + 4*DevRTT
(estimated RTT)    (“safety margin”)
DevRTT: EWMA of SampleRTT deviation from EstimatedRTT:
DevRTT = (1-β)*DevRTT + β*|SampleRTT-EstimatedRTT|
(typically, β = 0.25)
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
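Putting both EWMAs together, a sketch of one timer update per RTT sample (hypothetical function name; the update order follows the slide's formulas):

```python
ALPHA, BETA = 0.125, 0.25  # typical values from the slides

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    # EWMA of the RTT deviation, then of the RTT itself,
    # then timeout = estimate + 4x-deviation "safety margin"
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev = 100.0, 10.0
est, dev, timeout = update_rtt(est, dev, 140.0)
print(round(timeout, 2))  # 175.0: a big, variable sample widens the margin
```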
14.
TCP Sender (works based on events)
event: data received from application
•The application (like a browser or email app) gives
data to TCP to send.
•TCP creates a segment (packet) to send that data.
•TCP adds a sequence number (seq#) →
This number shows the byte number of the first
byte of data in that segment.
(So, it helps the receiver keep everything in order.)
•TCP starts a timer — but only if it’s not already
running.
•The timer is for the oldest segment that has been
sent but not yet acknowledged (ACKed).
•The TimeOutInterval is the time TCP waits for an
ACK before it decides to retransmit.
event: timeout
• If the timer runs out (no ACK received
in time), TCP assumes the segment is
lost.
•It retransmits that segment — the
one that caused the timeout.
•Then it restarts the timer again.
event: ACK received
•The receiver sends an ACK to confirm it received
data.
•When the sender gets this ACK, it checks:
•Does this ACK cover (acknowledge) any data that
was still unacknowledged?
If yes → those segments are now considered
successfully delivered.
•TCP then updates its record of which bytes have
been acknowledged.
•If there are still some unacknowledged
segments, TCP keeps the timer running (or
restarts it).
•If everything is acknowledged, it stops the timer.
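The three sender events can be sketched as a toy Python class (hypothetical names; no real timer or network, just the bookkeeping the slide describes):

```python
# toy model of the three TCP sender events from the slide
class TcpSender:
    def __init__(self, init_seq=0):
        self.next_seq = init_seq
        self.send_base = init_seq  # seq # of oldest unACKed byte
        self.unacked = {}          # seq -> data, segments in flight
        self.timer_running = False

    def on_data_from_app(self, data):
        # create segment with seq#; start timer only if not already running
        self.unacked[self.next_seq] = data
        self.next_seq += len(data)
        if not self.timer_running:
            self.timer_running = True

    def on_timeout(self):
        # retransmit the oldest unACKed segment, restart timer
        self.timer_running = True
        return self.unacked[self.send_base]

    def on_ack(self, ack):
        # cumulative ACK: ack acknowledges all bytes below it
        if ack > self.send_base:
            self.send_base = ack
            self.unacked = {s: d for s, d in self.unacked.items() if s >= ack}
            # keep timer running only while segments remain unACKed
            self.timer_running = bool(self.unacked)

s = TcpSender(init_seq=92)
s.on_data_from_app(b"12345678")      # seq 92, 8 bytes
s.on_data_from_app(b"x" * 20)        # seq 100, 20 bytes
s.on_ack(100)                        # first segment ACKed
print(s.send_base, s.timer_running)  # 100 True
```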
15.
TCP Receiver: ACK generation [RFC 5681]
Event at receiver → TCP receiver action:
• arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed → delayed ACK: wait up to 500ms for next segment; if no next segment, send ACK
• arrival of in-order segment with expected seq #; one other segment has ACK pending → immediately send single cumulative ACK, ACKing both in-order segments
• arrival of out-of-order segment with higher-than-expected seq #; gap detected → immediately send duplicate ACK, indicating seq # of next expected byte
• arrival of segment that partially or completely fills gap → immediately send ACK, provided that segment starts at lower end of gap
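The table's four cases can be written down as a simple lookup (hypothetical event and action names; a real receiver would also need the 500ms delayed-ACK timer):

```python
# sketch of the RFC 5681 ACK-generation policy from the table (simplified)
def receiver_action(event):
    actions = {
        "in_order_nothing_pending": "delay ACK up to 500 ms, then send",
        "in_order_ack_pending": "send one cumulative ACK for both segments",
        "out_of_order_gap": "send duplicate ACK for next expected byte",
        "fills_gap_at_lower_end": "send ACK immediately",
    }
    return actions[event]

print(receiver_action("out_of_order_gap"))
```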
16.
TCP: retransmission scenarios
lost ACK scenario (Host A → Host B):
SendBase=92: Seq=92, 8 bytes of data
ACK=100 (X — ACK lost)
timeout: retransmit Seq=92, 8 bytes of data
ACK=100

premature timeout (Host A → Host B):
SendBase=92: Seq=92, 8 bytes of data
Seq=100, 20 bytes of data
ACK=100 (SendBase=100), ACK=120 (SendBase=120)
premature timeout: retransmit Seq=92, 8 bytes of data
receiver sends cumulative ACK for 120
SendBase=120
17.
TCP: retransmission scenarios
cumulative ACK covers for earlier lost ACK (Host A → Host B):
Seq=92, 8 bytes of data
Seq=100, 20 bytes of data
ACK=100 (X — lost)
ACK=120 (cumulative ACK arrives)
Seq=120, 15 bytes of data
18.
TCP fast retransmit
Host A → Host B:
Seq=92, 8 bytes of data
Seq=100, 20 bytes of data (X — lost)
(three more segments sent)
ACK=100
ACK=100, ACK=100, ACK=100 (duplicate ACKs)
fast retransmit (before timeout): Seq=100, 20 bytes of data
Receipt of three duplicate ACKs
indicates 3 segments received
after a missing segment – lost
segment is likely. So retransmit!
if sender receives 3 ACKs for
same data (“triple duplicate
ACKs”), resend unACKed
segment with smallest seq #
likely that unACKed segment
lost, so don’t wait for timeout
TCP fast retransmit
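The triple-duplicate-ACK trigger can be sketched as a small counter (hypothetical class name; a real sender combines this with the timeout timer):

```python
# toy duplicate-ACK counter: retransmit after three duplicate ACKs (fast retransmit)
class FastRetransmit:
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack):
        if ack == self.last_ack:
            self.dup_count += 1
            if self.dup_count == 3:  # triple duplicate ACK
                return f"retransmit segment starting at byte {ack}"
        else:
            self.last_ack = ack      # new ACK: reset the duplicate count
            self.dup_count = 0
        return None

fr = FastRetransmit()
for ack in [100, 100, 100, 100]:  # original ACK + 3 duplicates
    action = fr.on_ack(ack)
print(action)  # retransmit segment starting at byte 100
```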
19.
TCP flow control
Q: What happens if network
layer delivers data faster than
application layer removes
data from socket buffers?
[receiver protocol stack: IP code → TCP code → TCP socket receiver buffers → application process; the network layer delivers IP datagram payloads from the sender into TCP socket buffers, while the application removes data from the TCP socket buffers]
22.
TCP flow control
Q: What happens if network
layer delivers data faster than
application layer removes
data from socket buffers?
flow control: receiver controls sender, so
sender won’t overflow
receiver’s buffer by
transmitting too much, too fast
23.
TCP flow control
TCP receiver “advertises” free buffer
space in rwnd field in TCP header
• RcvBuffer size set via socket
options (typical default is 4096 bytes)
• many operating systems auto-adjust
RcvBuffer
sender limits amount of unACKed
(“in-flight”) data to received rwnd
guarantees receive buffer will not
overflow
[TCP receiver-side buffering: RcvBuffer holds buffered data plus free buffer space (rwnd); TCP segment payloads enter the buffer, and data flows out to the application process]
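The sender-side rule — never let in-flight data exceed the advertised rwnd — can be sketched in Python (hypothetical class name; real TCP also combines this with the congestion window):

```python
# sender-side flow control sketch: in-flight bytes never exceed the advertised rwnd
class FlowControlledSender:
    def __init__(self):
        self.last_byte_sent = 0
        self.last_byte_acked = 0
        self.rwnd = 0  # receiver-advertised window, from the TCP header

    def on_segment_from_receiver(self, ack, rwnd):
        self.last_byte_acked = max(self.last_byte_acked, ack)
        self.rwnd = rwnd

    def can_send(self, nbytes):
        # unACKed ("in-flight") data plus the new data must fit in rwnd
        in_flight = self.last_byte_sent - self.last_byte_acked
        return in_flight + nbytes <= self.rwnd

    def send(self, nbytes):
        if self.can_send(nbytes):
            self.last_byte_sent += nbytes
            return True
        return False  # would overflow the receiver's buffer: wait

s = FlowControlledSender()
s.on_segment_from_receiver(ack=0, rwnd=4000)
print(s.send(3000))  # True: 3000 bytes in flight <= 4000
print(s.send(2000))  # False: would put 5000 bytes in flight
```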
25.
TCP connection management
before exchanging data, sender/receiver “handshake”:
agree to establish connection (each knowing the other willing to establish connection)
agree on connection parameters (e.g., starting seq #s)
[each side then holds — connection state: ESTAB; connection variables: seq # client-to-server, server-to-client; rcvBuffer size at server, client — above the application/network layers]
Socket clientSocket =
new Socket("hostname","port number");
Socket connectionSocket =
welcomeSocket.accept();
26.
Agreeing to establish a connection
Q: will 2-way handshake always work in network?
variable delays
retransmitted messages (e.g. req_conn(x)) due to message loss
message reordering
can’t “see” other side

2-way handshake:
“Let’s talk” / “OK” → both ESTAB
choose x → req_conn(x) → acc_conn(x) → both ESTAB

2-way handshake failure scenario (retransmitted req_conn(x)):
client chooses x, sends req_conn(x); both ESTAB on acc_conn(x)
req_conn(x) retransmitted; client sends data(x+1), server accepts data(x+1)
client terminates; connection x completes; server forgets x
retransmitted req_conn(x) arrives → server enters ESTAB again
retransmitted data(x+1) arrives → accepted
Problem: dup data accepted!
30.
TCP 3-way handshake
Client state / Server state: both start in LISTEN
clientSocket = socket(AF_INET,SOCK_STREAM)
serverSocket = socket(AF_INET,SOCK_STREAM)
serverSocket.bind((‘’,serverPort))
serverSocket.listen(1)
connectionSocket, addr = serverSocket.accept()
clientSocket.connect((serverName,serverPort))
client: choose init seq num x, send TCP SYN msg (SYNbit=1, Seq=x) → SYNSENT
server: choose init seq num y, send TCP SYNACK msg, acking SYN (SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1) → SYN RCVD
client: received SYNACK(x) indicates server is live; send ACK for SYNACK (ACKbit=1, ACKnum=y+1); this segment may contain client-to-server data → ESTAB
server: received ACK(y) indicates client is live → ESTAB
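The slide's socket calls can be made into one runnable sketch (assuming localhost and an OS-chosen port; the kernel performs the SYN / SYNACK / ACK exchange inside connect()/accept()):

```python
import socket
import threading

serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverSocket.bind(('', 0))  # port 0: let the OS pick a free port
serverSocket.listen(1)
serverPort = serverSocket.getsockname()[1]

def serve():
    # accept() returns only after the 3-way handshake completes
    connectionSocket, addr = serverSocket.accept()
    connectionSocket.close()

t = threading.Thread(target=serve)
t.start()

clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientSocket.connect(('localhost', serverPort))  # SYN, SYNACK, ACK happen here
print("connected")
clientSocket.close()
t.join()
serverSocket.close()
```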
31.
A human 3-way handshake protocol
1. On belay?
2. Belay on.
3. Climbing.
32.
Closing a TCP connection
client, server each close their side of connection
• send TCP segment with FIN bit = 1
respond to received FIN with ACK
• on receiving FIN, ACK can be combined with own FIN
simultaneous FIN exchanges can be handled
33.
Video: 2020, J.F. Kurose, All Rights Reserved
Powerpoint: 1996-2020, J.F. Kurose, K.W. Ross, All Rights Reserved
34.
TCP: Transmission Control Protocol
segment structure
reliable data transfer
sequence numbers
ACKs
timers
35.
TCP sender (simplified)
Transport Layer: 3-35
NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum
(wait for event)

event: data received from application above
  create segment, seq. #: NextSeqNum
  pass segment to IP (i.e., “send”)
  NextSeqNum = NextSeqNum + length(data)
  if (timer currently not running)
    start timer

event: timeout
  retransmit not-yet-acked segment with smallest seq. #
  start timer

event: ACK received, with ACK field value y
  if (y > SendBase) {
    SendBase = y
    /* SendBase–1: last cumulatively ACKed byte */
    if (there are currently not-yet-acked segments)
      start timer
    else stop timer
  }
#10 The key thing to note here is that the ACK number (43) on the B-to-A segment is one more than the sequence number (42) on the A-to-B segment that triggered that ACK.
Similarly, the ACK number (80) on the last A-to-B segment is one more than the sequence number (79) on the B-to-A segment that triggered that ACK.
#12
This is how TCP re-computes the estimated RTT each time a new SampleRTT is taken.
The process is known as an exponentially weighted moving average, shown by the equation here.
<say it>
Where alpha reflects the influence of the most recent measurements on the estimated RTT; a typical value of alpha used in implementations is 0.125.
The graph at the bottom shows measured RTTs between a host in Massachusetts and a host in France, as well as the estimated, “smoothed” RTT.
#13 Given this value of the estimated RTT, TCP computes the timeout interval to be the estimated RTT plus a “safety margin”
And the intuition is that if we are seeing a large variation in SampleRTT – the RTT estimates are fluctuating a lot – then we’ll want a larger safety margin.
So TCP computes the Timeout interval to be the Estimated RTT plus 4 times a measure of deviation in the RTT.
The deviation in the RTT is computed as the EWMA of the difference between the most recently measured SampleRTT and the EstimatedRTT.
#14
Given these details of TCP sequence numbers, acks, and timers, we can now describe the big picture view of how the TCP sender and receiver operate
You can check out the FSMs in the book; let’s just give an English text description here, starting with the sender.
#15
Rather than immediately acknowledging this segment, many TCP implementations will wait for half a second for another in-order segment to arrive, and then generate a single cumulative ACK for both segments – thus decreasing the amount of ACK traffic. The arrival of this second in-order segment and the cumulative ACK generation that covers both segments is the second row in this table.
#16 To cement our understanding of TCP reliability, let’s look at a few retransmission scenarios.
In the first case a TCP segment is transmitted and the ACK is lost, and the TCP timeout mechanism results in another copy being transmitted and then re-ACKed at the sender.
In the second example two segments are sent and acknowledged, but there is a premature timeout for the first segment, which is retransmitted. Note that when this retransmitted segment is received, the receiver has already received the first two segments, and so resends a cumulative ACK for both segments received so far, rather than an ACK for just this first segment.
#17 And in this last example, two segments are again transmitted, the first ACK is lost but the second ACK, a cumulative ACK arrives at the sender, which then can transmit a third segment, knowing that the first two have arrived, even though the ACK for the first segment was lost
#18 Let’s wrap up our study of TCP reliability by discussing an optimization to the original TCP known as TCP fast retransmit,
Take a look at this example on the right where 5 segments are transmitted and the second segment is lost. In this case the TCP receiver sends an ACK 100 acknowledging the first received segment.
When the third segment arrives at the receiver, the TCP receiver sends another ACK 100 since the second segment has not arrived. And similarly for the 4th and 5th segments to arrive.
Now what does the sender see? The sender receives the first ACK 100 it has been hoping for, but then three additional duplicate ACK 100s arrive. The sender knows that something’s wrong – it knows the first segment arrived at the receiver, but that three later-arriving segments at the receiver – the ones that generated the three duplicate ACKs – were received correctly but were not in order. That is, there was a missing segment at the receiver when each of the three duplicate ACKs was generated.
With fast retransmit, the arrival of three duplicate ACKs causes the sender to retransmit its oldest unACKed segment, without waiting for a timeout event. This allows TCP to recover more quickly from what is very likely a loss event – specifically, that the second segment has been lost, since three higher-numbered segments were received.
#19 (Presuming an intro)
Before diving into the details of TCP flow control, let’s first get the general context and motivate the need for flow control.
This diagram shows a typical transport-layer implementation.
A segment is brought up the protocol stack to the transport layer, and the segment’s payload is removed from the segment and written INTO socket buffers.
How does data get taken OUT of socket buffers? By applications performing socket reads, as we learned in Chapter 2.
And so the question is “What happens if network layer delivers data faster than an application-layer process removes data from socket buffers?”
Let’s watch a video of what happens when things arrive way too fast to be processed.
<video>. (I love that video). Another human analogy showing the need for flow control is the saying – to use some English slang - “no one can drink from a firehose”
Flow control is a mechanism to avoid the calamity of a receiver being over-run by a sender that is sending too fast – it allows the RECEIVER to explicitly control the SENDER so the sender won’t overflow the receiver’s buffer by transmitting too much, too fast.
#23 Here’s how TCP implements flow control. The basic idea is simple – the receiver informs the sender how much free buffer space there is, and the sender is limited to sending no more than this amount of data. That’s the value of rwnd in the diagram to the right.
This information is carried from the receiver to the sender in the “receiver advertised window” (do a PIP of header) in the TCP header, and the value will change as the amount of free buffer space fluctuates over time.
#25
The other TCP topic we’ll want to consider here is that of “connection management”
The TCP sender and receiver have a number of pieces of shared state that they must establish before actually communicating.
First, they must both agree that they WANT to communicate with each other.
Secondly, there are connection parameters – the initial sequence number and the initial receiver-advertised buffer space – that they’ll want to agree on.
This is done via a so-called handshake protocol – the client reaching out to the server, and the server answering back.
And before diving into the TCP handshake protocol, let’s first consider the problem of handshaking, of establishing shared state.
#26
Here’s an example of a two-way handshake. Alice reaches out to Bob and says “let’s talk” and Bob says OK, and they start their conversation.
For a network protocol, the equivalent protocol would be a client sending a “request connection” message saying ”let’s talk, the initial sequence number is x”
And the server would respond with a message ”I accept your connect x”
And the question we want to ask ourselves is <talk through>
Will this work? Let’s look at a few scenarios…
#30 TCP’s three-way handshake operates as follows.
Let’s say the client and server both create a TCP socket as we learned about in Chapter 2 and enter the LISTEN state.
The client then connects to the server, sending a SYN message with a sequence number x (a SYN message is a TCP segment with the SYN bit set in the header – you might want to go back and review the TCP segment format!).
The server is waiting for a connection, receives the SYN message, enters the SYN RCVD state (NOT the established state), and sends a SYN ACK message back.
Finally the client sends an ACK message to the server, and when the server receives this it enters the ESTABlished state. This is when the application process would see the return from the wait on the socket accept() call.
#31 As usual, there’s a human protocol analogy to the three-way handshake, and I still remember thinking about this while clinging for my life climbing up a rock face.
When you want to start climbing you first say ON BELAY? (meaning ARE YOU READY WITH MY SAFETY ROPE)
The BELAYER (server) responds BELAY ON (that lets you know the belayer is ready for you)
And then you say CLIMBING.
It’s amazing what can pass through your head when you’re clinging for your life on a rock face.
#32 All good things must come to an end, and that’s true for a TCP connection as well.
And of course there’s a protocol for one side to gracefully close its side of a TCP connection using a FIN message, to which the other side sends a FIN ACK message and waits around a bit to respond to any retransmitted FIN messages before timing out.