Reputation: 499
I'm trying to create a generic TLS over TCP socket in C++, using OpenSSL. The socket would be used in programs running a select loop and utilizing non-blocking I/O.
I'm concerned about the case where the underlying TCP socket becomes readable after the previous SSL_get_error call returned SSL_ERROR_WANT_WRITE. I can think of two situations where this may occur:

1. The local and remote applications call SSL_write simultaneously, and subsequent SSL_get_error calls on both applications return SSL_ERROR_WANT_WRITE. The TCP packets sent from both applications cross on the wire. The local application's TCP socket is now readable after the previous SSL_get_error call returned SSL_ERROR_WANT_WRITE.
2. The remote application initiates a session re-negotiation during the local application's SSL_write call, prior to writing any application data. This simply changes the meaning of the data received on the local application's TCP socket from encrypted application data to session re-negotiation data.

How should the local application handle this data? Should it:
1. Call SSL_write, as it is currently mid-write?
2. Call SSL_read, as would happen if the socket is idle?
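To make the scenario concrete, here is a minimal sketch of the kind of loop I have in mind (simplified; this is illustrative, not my actual code):

    #include <openssl/ssl.h>
    #include <sys/select.h>

    // After SSL_write fails with SSL_ERROR_WANT_WRITE, the loop selects for
    // writability -- but the socket may become readable first.
    void wait_after_want_write(SSL *ssl, int fd, const char *buf, int len)
    {
        int n = SSL_write(ssl, buf, len);
        if (n > 0)
            return;                               // write completed
        if (SSL_get_error(ssl, n) != SSL_ERROR_WANT_WRITE)
            return;                               // not the case in question

        fd_set rfds, wfds;
        FD_ZERO(&rfds); FD_SET(fd, &rfds);        // always watch readability
        FD_ZERO(&wfds); FD_SET(fd, &wfds);        // waiting to retry SSL_write
        if (select(fd + 1, &rfds, &wfds, nullptr, nullptr) < 0)
            return;

        if (FD_ISSET(fd, &rfds)) {
            // Socket readable while mid-write: SSL_write again, or SSL_read?
        }
    }

Upvotes: 0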
Views: 918
Reputation: 123639
SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE can be caused by (re)negotiations or full socket buffers and can occur not only within SSL_read and SSL_write but also within SSL_connect and SSL_accept on non-blocking sockets. All you have to do is wait for the wanted socket state (e.g. readable or writable) and then repeat the same operation. E.g. if you get an SSL_ERROR_WANT_READ from SSL_write, you wait until the socket gets readable (with select, poll or similar) and then call SSL_write again. Same with SSL_read.
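For example, a minimal sketch of that retry rule (blocking for brevity; in a real select loop you would instead update the events you wait for and retry on the next iteration):

    #include <openssl/ssl.h>
    #include <sys/select.h>

    // Repeat SSL_write until it completes, waiting for whatever socket
    // state SSL_get_error asked for.  'fd' is the socket underlying 'ssl'.
    bool ssl_write_retry(SSL *ssl, int fd, const void *buf, int len)
    {
        for (;;) {
            int n = SSL_write(ssl, buf, len);
            if (n > 0)
                return true;                      // write finished
            fd_set rfds, wfds;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            switch (SSL_get_error(ssl, n)) {
            case SSL_ERROR_WANT_READ:             // e.g. a renegotiation
                FD_SET(fd, &rfds);
                break;
            case SSL_ERROR_WANT_WRITE:            // socket buffer full
                FD_SET(fd, &wfds);
                break;
            default:
                return false;                     // fatal error
            }
            // Wait for the wanted state, then repeat the same operation.
            if (select(fd + 1, &rfds, &wfds, nullptr, nullptr) < 0)
                return false;
        }
    }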
It might also be useful to use SSL_CTX_set_mode with SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER | SSL_MODE_ENABLE_PARTIAL_WRITE.
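For instance (assuming ctx is your SSL_CTX):

    // Let SSL_write report partial writes, and allow the retry to pass a
    // buffer that has moved in memory since the failed attempt.
    SSL_CTX_set_mode(ctx, SSL_MODE_ENABLE_PARTIAL_WRITE |
                          SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);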
Upvotes: 1