Taylor H

Reputation: 161

How to tell if a TCP socket has been closed by the client in Ruby?

I've read some things suggesting that because of the design of TCP this might not be possible (such as: Java socket API: How to tell if a connection has been closed?), but I'm trying to find explicit confirmation. I have a basic TCP server that accepts connections, and a client that initiates a connection, sends a message, and then closes the connection. Is there a way for the server to know that the client closed the connection?

I found some suggestions to check the sockets' file descriptors (source: How to check if a given file descriptor stored in a variable is still valid?), to use the kernel's select call (source: https://bytes.com/topic/c/answers/866296-detecting-if-file-descriptor-closed), and to use recv and check whether it returns 0 (source: http://man7.org/linux/man-pages/man2/recv.2.html#RETURN_VALUE), but none of these seem to work, at least not when called from Ruby. To test this, I wrote a basic server and client:

test_server.rb

require 'socket'
require 'fcntl'

TIMEOUT = 5
server = TCPServer.new('localhost', 8080)

puts "Starting server"
loop do
  client = server.accept
  puts "New client: #{client}"
  puts "** before closed #{Time.now.to_i} closed=#{client.closed?}"
  result = IO.select([client], nil, nil, TIMEOUT)
  puts "select result=#{result}"

  fd = client.fcntl(Fcntl::F_GETFD, 0)
  puts "client fd=#{fd}"

  stuff = client.recv(30)
  puts "received '#{stuff}'"

  begin
    r = client.recv(1)
  rescue => e
  end
  puts "received #{r} nil?=#{r.nil?}"

  sleep 3

  puts "** after closed #{Time.now.to_i} closed=#{client.closed?}"
  result = IO.select([client], nil, nil, TIMEOUT)
  puts "select result=#{result}"

  fd = client.fcntl(Fcntl::F_GETFD, 0)
  puts "client fd=#{fd}"

  begin
    r = client.recv(1)
  rescue => e
  end
  puts "received #{r} nil?=#{r.nil?}"
  puts "done!"
end

test_client.rb

require 'socket'

class Client
  def initialize
    @socket = tcp_socket
  end

  def tcp_socket
    Thread.current[:socket] = TCPSocket.new("localhost", 8080)
  end

  def send(s, args={})
    puts "sending str '#{s}'"
    nbytes = @socket.send(s, 0)
    puts "received #{nbytes} bytes"

    sleep 1
    @socket.close
    puts "done at #{Time.now.to_i}: #{@socket.closed?}"
  end
end

msg = 'hello world this is my message'
server = Client.new
server.send(msg)

The client sends a 30-byte message, waits 1 second, then closes the connection. The server accepts the connection, calls select and fcntl on it to check its status, receives the message, tries to read 1 more byte, sleeps for 3 seconds, then calls select and fcntl again and tries to read 1 more byte. The intent is to see whether anything visible to the server changes between before and after the client closes the connection (hence the 3-second sleep). The result I get from running the server and then the client is:

Starting server
New client: #<TCPSocket:0x00007fa0930f0880>
** before closed 1578005539 closed=false
select result=[[#<TCPSocket:fd 10>], [], []]
client fd=1
received 'hello world this is my message'
received  nil?=false
** after closed 1578005543 closed=false
select result=[[#<TCPSocket:fd 10>], [], []]
client fd=1
received  nil?=false
done!

Both before and after the client closes the connection, select still reports the socket as readable, the underlying file descriptor does not change, and recv returns an empty string (it's possible the kernel call is returning 0 as specified in the man page, but Ruby is swallowing that, and if so I don't know how to see it). So none of these seem to be a reliable indicator of whether the connection was closed from the other side. Is there something I'm missing?
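
For reference, here is the same check isolated into a few lines (a sketch, not part of the test above), comparing recv with IO#read and IO#eof? on the accepted socket:

# Sketch only: per the Ruby docs, BasicSocket#recv returns an empty string
# when the peer has performed an orderly shutdown (the kernel's 0-byte return),
# while IO#read with a positive length returns nil at EOF, and IO#eof? returns true.
data = client.recv(1)
puts "recv(1) => #{data.inspect}"            # => "" once the client has closed
puts "read(1) => #{client.read(1).inspect}"  # => nil at EOF
puts "eof?    => #{client.eof?}"             # => true at end of stream (may block until data or EOF arrives)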

I have seen other suggestions to send a regular heartbeat back to the client, but I'm wondering if there's a way to avoid that. The reason is that I'm trying to accommodate a case where the client may send a message in several pieces separated by delays (e.g. 100 bytes, one byte per second). If the server sends a heartbeat in the middle of that operation and listens for an OK, I presume the client also has to be listening for the heartbeat and send its OK back, separately from the ongoing message send, and in my test case I can't change the client to do that.

Upvotes: 0

Views: 1245

Answers (1)

Myst

Reputation: 19221

I have seen some other suggestions to incorporate a regular heartbeat back to the client, but I'm wondering if there's a way to avoid that.

A heartbeat (ping) is the only viable solution.

There is no way to reliably know if the connection is live except by trying to send data over the wire.

Since TCP/IP doesn't require any traffic when data isn't being sent (or received), there's no way for the TCP stack (not even in the OS kernel) to know if the connection is "live" without attempting to exchange data over the wire.

Some connections will close gracefully, allowing the TCP stack to recognize that the connection was closed - but this isn't always true (you can read more about "half-open" or "half-closed" connections).

For this reason, all servers implement a timeout / ping mechanism to test for lost connectivity.
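
A minimal sketch of what such a timeout/ping loop might look like on the server side in Ruby (the 10-second interval, the "PING" string, and the handle_message helper are placeholders, not part of any standard or existing API):

require 'socket'

PING_INTERVAL = 10 # seconds of silence before probing (placeholder value)

def serve(client)
  loop do
    ready = IO.select([client], nil, nil, PING_INTERVAL)
    if ready
      data = client.recv(1024)
      break if data.empty?      # peer closed its side gracefully
      handle_message(data)      # placeholder for real message handling
    else
      begin
        # No traffic for PING_INTERVAL seconds: probe the peer. Writing to a
        # dead connection eventually raises (e.g. Errno::EPIPE, Errno::ECONNRESET).
        client.write("PING\n")
      rescue SystemCallError
        break                   # connection is gone
      end
    end
  end
ensure
  client.close
end

The key point is the attempted write: only trying to exchange data (and the TCP ACKs or errors that follow) reveals whether the other end is still there.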

I'm trying to accommodate a case where the client may be sending a message in several pieces separated by a delay (e.g. 100 bytes at 1 second each byte)...

Remember that TCP/IP is a stream based protocol, not a message based protocol.

This means that your 100 bytes might arrive fragmented or they might be combined with a previous message.

If you're sending messages (rather than streaming data), you need - by design - to mark message boundaries.

Since these message boundaries must be marked, it becomes relatively easy to add a message type marker (to mark ping/pong messages).

You can observe the WebSocket protocol's message format to learn more about message-based protocol design over a TCP/IP (streamed) connection.
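
As a sketch of what marking boundaries can look like in Ruby, here is a simple frame layout with a 1-byte type marker and a 4-byte big-endian length prefix (the layout and the type values are invented for illustration; they are not the WebSocket format):

FRAME_DATA = 0
FRAME_PING = 1
FRAME_PONG = 2

# "C" = unsigned 8-bit integer, "N" = unsigned 32-bit big-endian integer.
def write_frame(sock, type, payload = "")
  sock.write([type, payload.bytesize].pack("CN") + payload)
end

def read_frame(sock)
  header = sock.read(5)
  return nil if header.nil? || header.bytesize < 5   # EOF or truncated stream
  type, length = header.unpack("CN")
  payload = length.zero? ? "" : sock.read(length)
  [type, payload]
end

With boundaries marked like this, the receiver can reassemble a message no matter how TCP fragments it, and a ping or pong is just another frame type rather than something that interferes with the data stream.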

Upvotes: 1
