Tim

Reputation: 99408

Why does a client not close its socket?

I am reading a socket client example from APUE at https://github.com/hayatoito/apue-2e/blob/master/sockets/ruptime.c. I don't see it close or shut down its socket anywhere. Is it true in general that a client doesn't have to close its socket file descriptor? When does a client need to close its socket file descriptor, and when not?

For comparison, its server closes the socket at the end: https://github.com/hayatoito/apue-2e/blob/master/sockets/ruptimed.c

Client:

#include "apue.h"
#include <netdb.h>
#include <errno.h>
#include <sys/socket.h>

#define MAXADDRLEN  256
#define BUFLEN      128

extern int connect_retry(int, const struct sockaddr *, socklen_t);

void
print_uptime(int sockfd)
{
    int n;
    char buf[BUFLEN];

    while ((n = recv(sockfd, buf, BUFLEN, 0)) > 0)
        write(STDOUT_FILENO, buf, n);
    if (n < 0)
        err_sys("recv error");
}

int
main(int argc, char *argv[])
{
    struct addrinfo *ailist, *aip;
    struct addrinfo hint;
    int sockfd, err;

    if (argc != 2)
        err_quit("usage: ruptime hostname");
    hint.ai_flags = 0;
    hint.ai_family = 0;
    hint.ai_socktype = SOCK_STREAM;
    hint.ai_protocol = 0;
    hint.ai_addrlen = 0;
    hint.ai_canonname = NULL;
    hint.ai_addr = NULL;
    hint.ai_next = NULL;
    if ((err = getaddrinfo(argv[1], "ruptime", &hint, &ailist)) != 0)
        err_quit("getaddrinfo error: %s", gai_strerror(err));
    for (aip = ailist; aip != NULL; aip = aip->ai_next) {
        if ((sockfd = socket(aip->ai_family, SOCK_STREAM, 0)) < 0)
            err = errno;
        if (connect_retry(sockfd, aip->ai_addr, aip->ai_addrlen) < 0) {
            err = errno;
        } else {
            print_uptime(sockfd);
            exit(0);
        }
    }
    fprintf(stderr, "can't connect to %s: %s\n", argv[1], strerror(err));
    exit(1);
}

Server:

#include "apue.h"
#include <netdb.h>
#include <errno.h>
#include <syslog.h>
#include <sys/socket.h>

#define BUFLEN  128
#define QLEN 10

#ifndef HOST_NAME_MAX
#define HOST_NAME_MAX 256
#endif

extern int initserver(int, struct sockaddr *, socklen_t, int);

void
serve(int sockfd)
{
    int clfd;
    FILE *fp;
    char buf[BUFLEN];

    for (;;) {
        clfd = accept(sockfd, NULL, NULL);
        if (clfd < 0) {
            syslog(LOG_ERR, "ruptimed: accept error: %s", strerror(errno));
            exit(1);
        }
        if ((fp = popen("/usr/bin/uptime", "r")) == NULL) {
            sprintf(buf, "error: %s\n", strerror(errno));
            send(clfd, buf, strlen(buf), 0);
        } else {
            while (fgets(buf, BUFLEN, fp) != NULL)
                send(clfd, buf, strlen(buf), 0);
            pclose(fp);
        }
        close(clfd);
    }
}

int
main(int argc, char *argv[])
{
    struct addrinfo *ailist, *aip;
    struct addrinfo hint;
    int sockfd, err, n;
    char *host;

    if (argc != 1)
        err_quit("usage: ruptimed");
#ifdef _SC_HOST_NAME_MAX
    n = sysconf(_SC_HOST_NAME_MAX);
    if (n < 0)                  /* best guess */
#endif
        n = HOST_NAME_MAX;
    host = malloc(n);
    if (host == NULL)
        err_sys("malloc error");
    if (gethostname(host, n) < 0)
        err_sys("gethostname error");
    daemonize("ruptimed");
    hint.ai_flags = AI_CANONNAME;
    hint.ai_family = 0;
    hint.ai_socktype = SOCK_STREAM;
    hint.ai_protocol = 0;
    hint.ai_addrlen = 0;
    hint.ai_canonname = NULL;
    hint.ai_addr = NULL;
    hint.ai_next = NULL;
    if ((err = getaddrinfo(host, "ruptime", &hint, &ailist)) != 0) {
        syslog(LOG_ERR, "ruptimed: getaddrinfo error: %s",
               gai_strerror(err));
        exit(1);
    }
    for (aip = ailist; aip != NULL; aip = aip->ai_next) {
        if ((sockfd = initserver(SOCK_STREAM, aip->ai_addr,
                                 aip->ai_addrlen, QLEN)) >= 0) {
            serve(sockfd);
            exit(0);
        }
    }
    exit(1);
}

Upvotes: 0

Views: 638

Answers (2)

Klaudia Lipóczki

Reputation: 27

You always have to close the socket; otherwise it is a leaked resource.

There is more than one way to close it, but before closing, make sure the socket's data has been drained. So call shutdown() first, and then close() it.

Upvotes: 2

Nominal Animal

Reputation: 39316

Is it true in general that a client doesn't have to close its socket file descriptor?

No, it is not true.

A variant of that belief resulted in a number of keepalive issues in early Microsoft Internet Explorer browsers (versions 1 through 5), which had to be worked around on the server end. (Essentially, the OS did not ensure a proper, full TCP connection termination.)

However, if the process is about to exit, failing to close all sockets is not a bug, because POSIX.1 (the standard that defines this functionality and the C interface used here) says explicitly (in e.g. exit()) that all open streams are closed when the process exits. In theory, it is a situation similar to dynamic memory allocation: a process does not need to free() all dynamically allocated memory before it exits, because all (non-shared) dynamically allocated memory is automatically freed when the process exits.

In practice, it is much more robust to explicitly close all socket descriptors. This is especially true for TCP connections, because the connection termination involves a FIN and an ACK packet exchange. While one could trust the OS to always get it right, the MSIE example shows reality is much less trustworthy, and being thorough makes for a better user experience.

When does a client need to close its socket file descriptor, and when not?

There are two cases in practice:

  1. When the connection is terminated.

    Descriptors are a finite resource, and closing them as soon as they are no longer needed ensures resources are not wasted. There really is no good reason for keeping a socket connection open for any longer than necessary. Certain things, like traversing a filesystem hierarchy using nftw(), are much more efficient when they can use a potentially large number of descriptors, so taking care a process does not run out of them due to programmer laziness is a good idea.
     

  2. When creating a child process via fork(), that should not have access to that socket connection.

    Current Linux, macOS, FreeBSD, and OpenBSD at least support a close-on-exec flag (often via fcntl(sfd, F_SETFD, FD_CLOEXEC)). In Linux, you can create close-on-exec socket descriptors using socket(domain, type | SOCK_CLOEXEC, protocol) and socket pairs using socketpair(domain, type | SOCK_CLOEXEC, protocol, sfd).

    Close-on-exec descriptors are closed when an exec call is successful, replacing that process with whatever else is being executed. Thus, if the fork is followed by an exec or _Exit, and all socket descriptors are close-on-exec, the duplicate sockets are closed "automatically", and you don't need to worry about it.

    Note that if your code uses popen(), you had better have your socket descriptors close-on-exec, or the command you run may have access to the connections. Pity it is completely nonstandard at this point in time (early 2019).

    Do also note that if the child process does not execute another binary, but for example drops privileges (rare for a client), close-on-exec won't do anything. So, closing (in the child process) unneeded duplicates of the socket descriptors, explicitly "by hand", is still important for proper privilege separation. But that is rarely an issue with client applications, more with services and such.

In other words, whenever you wish to terminate the socket connection, or when you have an extraneous duplicate of the socket connection, you close() them.

Upvotes: 3
