Brian R. Bondy

Reputation: 347236

In protocol design, why would you ever use 2 ports?

When a TCP server accepts a connection on a port, it gets back a new socket for talking to that client.
The listening socket remains bound to that port and can keep accepting further clients on it.
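For example, a minimal sketch of that accept behaviour in Python (the port number here is an arbitrary example, not anything FTP-specific):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", 2121))            # arbitrary example port
    srv.listen()

    while True:
        conn, addr = srv.accept()   # accept() hands back a *new* socket per client
        conn.sendall(b"hello\n")    # talk to this client on the new socket
        conn.close()                # the listening socket keeps accepting on 2121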

Why did the original FTP specification RFC 959 decide to create both a control port and a data port?

Would there be any reason to do this in a similar custom protocol?

It seems to me that this could have been easily specified on a single port.

Given all the problems that firewalls and NATs cause for FTP, it seems that a single port would have been much better.

For a general protocol design, the only reason I can think of for wanting to do this is so that you can serve the files from a different host than the one the commands go to.

Upvotes: 30

Views: 18053

Answers (13)

Hannes de Jager
Hannes de Jager

Reputation: 2923

In the case of FTP it's not only a different port but can also be a different host, and that is probably because of this passage in RFC 959:

In another situation a user might wish to transfer files between two hosts, neither of which is a local host. The user sets up control connections to the two servers and then arranges for a data connection between them. In this manner, control information is passed to the user-PI but data is transferred between the server data transfer processes. Following is a model of this server-server interaction.

                    Control     ------------   Control 
                    ---------->| User-FTP |<----------- 
                    |          | User-PI  |           | 
                    |          |   "C"    |           | 
                    V          ------------           V 
            --------------                        -------------- 
            | Server-FTP |   Data Connection      | Server-FTP | 
            |    "A"     |<---------------------->|    "B"     | 
            -------------- Port (A)      Port (B) -------------- 
 
 
                                 Figure 2 
The protocol requires that the control connections be open while data transfer is in progress. It is the responsibility of the user to request the closing of the control connections when finished using the FTP service, while it is the server who takes the action. The server may abort data transfer if the control connections are closed without command.

Upvotes: 2

Yash Zade

Reputation: 163


When a user starts an FTP session with a remote host, FTP first sets up a control TCP connection on server port number 21. The client side of FTP sends the user identification and password over this control connection. The client side of FTP also sends, over the control connection, commands to change the remote directory.

When the user requests a file transfer (either to or from the remote host), FTP opens a TCP data connection on server port number 20. FTP sends exactly one file over the data connection and then closes the data connection. If, during the same session, the user wants to transfer another file, FTP opens another TCP data connection. Thus, with FTP, the control connection remains open throughout the duration of the user session, but a new data connection is created for each file transferred within a session (i.e., the data connections are non-persistent).
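As a rough illustration, a typical client session with Python's ftplib looks like this (host name, credentials and file name are placeholders; ftplib uses passive mode by default, but the split between the two connections is the same):

    from ftplib import FTP

    ftp = FTP("ftp.example.com")    # control connection to port 21
    ftp.login("user", "password")   # credentials go over the control connection
    ftp.cwd("/pub")                 # so do directory-change commands

    with open("file.txt", "wb") as f:
        # RETR triggers a separate data connection that carries only the file bytes
        ftp.retrbinary("RETR file.txt", f.write)

    ftp.quit()                      # the control connection stays open until now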

Upvotes: 0

Coincoin

Reputation: 28596

The initial rationale behind this was so that you could:

  • Continue sending and receiving control instructions on the control connection while you are transferring data.
  • Have more than one data connection active at the same time.
  • Let the server decide when it is ready to send you data.

True, they could have achieved the same result by specifying a complicated multiplexing protocol integrated into the FTP protocol, but since NAT was a non-issue at the time, they chose to use what already existed: TCP ports.

Here is an example:

Alice wants two files from Bob. Alice connects to Bob's port 21 and asks for the files. Bob opens connections to Alice's port 20 when he is ready and sends the files there. Meanwhile, Charles needs a file from Alice's server. Charles connects to port 21 on Alice and asks for the file. Alice connects to port 20 on Charles when ready, and sends the file.

As you can see, port 21 is for clients connecting to servers and port 20 is for servers connecting to clients, but those clients could still serve files on 21.

Both ports serve totally different purposes, and again, for the sake of simplicity, they chose to use two different ports instead of implementing a negotiation protocol.
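A rough Python sketch of that exchange (what later came to be called "active mode"), assuming the control connection to port 21 is already open; the address is a placeholder and a real client would normally pick an ephemeral port rather than 20:

    import socket

    # The client listens for the incoming data connection...
    data_listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    data_listener.bind(("192.0.2.10", 20))          # example client address/port
    data_listener.listen(1)

    # ...and tells the server where to connect, using the PORT command syntax
    # from RFC 959: four address bytes and two port bytes, comma separated.
    port = 20
    port_cmd = "PORT 192,0,2,10,%d,%d\r\n" % (port // 256, port % 256)
    # send port_cmd (and then e.g. "RETR file.txt\r\n") over the control connection

    # The server then connects back and pushes the file over the data connection.
    data_conn, server_addr = data_listener.accept()
    payload = data_conn.recv(65536)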

Upvotes: 23

D.Shawley

Reputation: 59563

IIRC, the issue wasn't that FTP uses two (i.e., more than one) ports. The issue is that the control connection is initiated by the client while the data connection is initiated by the server. The largest difference between FTP and HTTP is that in HTTP the client pulls data and in FTP the server pushes it. The NAT issue is related to the server pushing data through a firewall that doesn't know to expect the connection.

FTP's usage of separate ports for control and data is rather elegant IMHO. Think about all of the headaches in HTTP surrounding the multiplexing of control and data - trailing headers, rules surrounding pipelined requests, connection keep-alives, and what not. Much of that is avoided by explicitly separating control and data, not to mention that it is possible to do interesting things like encrypt one and not the other, or give the control channel a higher QoS than the data.

Upvotes: 5

Einstein

Reputation: 4538

The IETF has banned the practice of allocating more than one port for new protocols, so we likely won't be seeing this in the future.

Newer IP protocols such as SCTP are designed to solve some of the shortcomings of TCP that could lead one to use multiple ports. TCP's head-of-line blocking prevents you from having multiple separate requests/streams in flight, which can be a problem for some realtime applications.

Upvotes: 4

shodanex

Reputation: 15406

You should have a look at the RTSP + RTP protocols. It is a similar design: each stream can be sent on a different port, and statistics about jitter, reordering, etc. are sent on yet another port.

Plus, there is no connection, since it is UDP. However, it was developed when firewalls were already commonplace, so a mode was also developed in which all of these streams could be embedded in one TCP connection using HTTP syntax.

Guess what? The multi-port protocol is much simpler (IMO) to implement than the HTTP-multiplexed one, and it has a lot more features. If you are not concerned with firewall problems, why choose a complicated multiplexing scheme when one already exists (TCP ports)?

Upvotes: 2

Paul Dixon

Reputation: 300855

FTP has a very long history, being one of the first ARPANET protocols back in the early seventies (for instance, see RFC114 from 1971). Design decisions which may seem odd now made more sense then. Connections were much slower, and performing the connection control "out of band" probably seemed a good move with the available network technology.

The current RFC959 includes a good section on history, linking to earlier RFCs, if you fancy doing a little archaeology...

Upvotes: 6

MSalters

Reputation: 179867

FTP was designed at a time when the stupidity of a modern firewall was inconceivable. TCP ports are intended for exactly this functionality: multiplexing multiple connections on a single IP address. They are NOT a substitute for Access Control Lists. They are NOT intended to extend IPv4 to 48-bit addresses, either.

Any new non-IPv6 protocol will have to deal with the current mess, so it should stick to a small contiguous range of ports.

Upvotes: 8

SingleNegationElimination

Reputation: 156158

Like many of the older wire protocols, FTP is suited for use by humans; that is, it is quite easy to use FTP from a terminal session. The designers of FTP anticipated that users might want to continue working with the remote host while data was transferring. This would have been difficult if commands and data were going over the same channel.

Upvotes: 4

Roger Lipscombe

Reputation: 91845

Because FTP allows for separate control and data channels. IIRC, as originally designed, you could have 3 hosts: host A could ask host B to send data to host C.

Upvotes: 9

Michael Burr

Reputation: 340218

I think they did this so that while a transfer was occurring you could continue to work with the server and start new transfers easily (if your client supported this).

Note that passive mode solves nearly all firewall/NAT problems.
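For illustration, a sketch of why passive mode sidesteps the NAT problem: the client parses the server's PASV reply and opens the data connection itself, outbound, so nothing has to connect in through the client's firewall (the reply below is an invented example):

    # Example 227 reply from the server (invented address)
    reply = "227 Entering Passive Mode (192,0,2,5,195,149)"

    nums = reply[reply.index("(") + 1 : reply.index(")")].split(",")
    host = ".".join(nums[:4])
    port = int(nums[4]) * 256 + int(nums[5])   # 195*256 + 149 = 50069

    # The client now connects *out* to (host, port) for the data transfer,
    # so no inbound connection needs to traverse the client's NAT/firewall.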

Upvotes: 1

MatthieuP

Reputation: 1126

In my view, it's just a bad design choice in the first place. In the old days when it was invented, firewalls and NAT did not exist... Nowadays, the real question is rather "why do people still want to use FTP?" Everything FTP does can be done using HTTP in a better way.

Upvotes: 0

Paul Tomblin

Reputation: 182782

FTP is an old protocol. That's really the only reason. The designers thought that the amount of data flowing over the data port would make it so that they couldn't send control commands in a timely manner, so they did it as two ports. Firewalls, and especially NAT, came much later.

Upvotes: 2
