Steven Lu

Reputation: 43447

Boost Asio sample HTTP Server -- taking this example and making it "production ready"

In my search for a clean, simple, modern, and cross-platform HTTP server, I settled on the Boost.Asio C++11 example HTTP server. You can find it in the Boost documentation and in the source directory boost_1_55_0/doc/html/boost_asio/example/cpp11/http/server.

I have reviewed the code a little, and it looks to be quite clean and very well documented, so it's nearly ideal. I just have a few small questions, which probably only affect performance; for now that is a secondary priority (the primary being stability), as I do intend to use the same portable code on mobile and embedded platforms.

This magic number 512 appears in request_handler::handle_request() in request_handler.cpp:

  // Fill out the reply to be sent to the client.
  rep.status = reply::ok;
  char buf[512];
  while (is.read(buf, sizeof(buf)).gcount() > 0)
    rep.content.append(buf, is.gcount());
  rep.headers.resize(2);
  rep.headers[0].name = "Content-Length";
  rep.headers[0].value = std::to_string(rep.content.size());
  rep.headers[1].name = "Content-Type";
  rep.headers[1].value = mime_types::extension_to_type(extension);

And also in connection.hpp the connection class has this member:

/// Buffer for incoming data.
std::array<char, 8192> buffer_;

I'm not sure why these sizes, 512 bytes and 8 KB, were chosen. It seems pretty clear that the local file to be served is being read and appended to the response's std::string 512 bytes at a time. I do wonder whether 4K or 8K would be a more appropriate chunk size.
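For experimentation, the same loop could use a larger buffer and pre-size the string from the file length. This is only a sketch; is, full_path and rep are the variables from the example's handle_request():

// Hypothetical variation of the loop above (is is the std::ifstream the
// example already opened on full_path, in binary mode): reserve the full
// file size up front and read in 8K chunks instead of 512 bytes.
is.seekg(0, std::ios::end);
rep.content.reserve(static_cast<std::size_t>(is.tellg())); // avoid repeated reallocations
is.seekg(0, std::ios::beg);

char buf[8192];
while (is.read(buf, sizeof(buf)).gcount() > 0)
  rep.content.append(buf, is.gcount());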

As for the 8K buffer_, it seems to be used for holding the data arriving over the network. This is a little harder for me to figure out, since it takes me into Asio's internals. I'm primarily worried about 8K not being enough. A single packet will never exceed this length (I think; the theoretical maximum packet size is 64K, though), but I just don't know why it has to be 8K.
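From reading connection.cpp, it looks like buffer_ is just the per-read scratch space: it is handed to async_read_some and whatever arrived is fed to the incremental request_parser, so a request larger than 8K simply takes more than one read. Condensed (paraphrased from the example, so check connection.cpp for the exact code):

// Condensed sketch of the example's connection::do_read().
void connection::do_read()
{
  auto self(shared_from_this());
  socket_.async_read_some(boost::asio::buffer(buffer_),
      [this, self](boost::system::error_code ec, std::size_t bytes_transferred)
      {
        if (!ec)
        {
          // Feed only the bytes that actually arrived to the incremental parser.
          request_parser::result_type result;
          std::tie(result, std::ignore) = request_parser_.parse(
              request_, buffer_.data(), buffer_.data() + bytes_transferred);

          if (result == request_parser::good)
          {
            request_handler_.handle_request(request_, reply_);
            do_write();
          }
          else if (result == request_parser::bad)
          {
            reply_ = reply::stock_reply(reply::bad_request);
            do_write();
          }
          else
          {
            do_read();  // need more data: reuse the same 8K buffer
          }
        }
      });
}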

Upvotes: 3

Views: 1517

Answers (3)

Darren Cook

Reputation: 28928

Currently Apache and nginx also use an 8 KB limit (nginx used a 4 KB limit until recently). However, both make it configurable (Apache via LimitRequestFieldSize, nginx via large_client_header_buffers).

The increase from 4 KB to 8 KB would have been because cookies are getting larger and larger. But some limit is a good idea, to protect against malicious or broken clients. Make sure you send back a 4xx error if a client does send too big a request.
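A minimal sketch of how such a limit could be bolted onto the Boost example's read handler. header_bytes_seen_ and max_header_bytes_ are hypothetical members I'm adding for illustration; the example's reply enum has no 431, so the stock 400 Bad Request is the nearest fit:

// Hypothetical guard inside connection's read handler: count how many bytes
// have been consumed while the request is still incomplete, and give up once
// a configurable limit is exceeded.
header_bytes_seen_ += bytes_transferred;             // assumed member, starts at 0
if (result == request_parser::indeterminate &&
    header_bytes_seen_ > max_header_bytes_)          // e.g. 8 * 1024, like Apache/nginx
{
  reply_ = reply::stock_reply(reply::bad_request);   // 431 would be ideal; 400 is what the example provides
  do_write();
  return;
}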

Upvotes: 0

kenba

Reputation: 4549

I agree with @Torsten Robitzki: it's just the size of the receive buffer.

Like you, I was looking for a clean, simple, cross-platform HTTP server for an embedded project over a year ago. I tried a few of the Asio-based HTTP libraries around at the time, but got frustrated with them and ended up writing my own. You can find it here: via-httplib.

I'd be glad of any feedback, especially negative, although positive is always welcome. ;)

Upvotes: 1

Torsten Robitzki

Reputation: 2555

The 512-byte buffer is used to read data from the file and append it to the body being constructed. This is certainly an unnecessary copy, but this is just an example; copying a whole file into process-local memory and sending it as one message is certainly not optimal either.

This code is just an example, and before you allow others access to your filesystem (even if it's only read access), you should really make sure that no confidential information can be read this way.

To handle really large files, you would probably want to use chunked transfer encoding and read and send the file chunk by chunk, overlapping the waits on the disk and the network.
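A rough sketch of what that could look like with Asio (not part of the example: socket_, file_ and chunk_ are assumed members of some connection class, and a real implementation would double-buffer so the next disk read overlaps the write that is still in flight):

// Assumes <boost/asio.hpp>, <fstream>, <sstream>, <memory>, <array>;
// socket_ is a connected tcp::socket, file_ an open std::ifstream (binary),
// chunk_ a std::array<char, 8192>, and the class derives from
// std::enable_shared_from_this, like the example's connection. The status
// line and a "Transfer-Encoding: chunked" header are assumed to have been
// written already.
void send_next_chunk()
{
  file_.read(chunk_.data(), static_cast<std::streamsize>(chunk_.size()));
  std::size_t n = static_cast<std::size_t>(file_.gcount());

  // Frame one chunk: "<hex size>\r\n<data>\r\n". A zero-byte read produces
  // "0\r\n\r\n", which is exactly the terminating chunk.
  std::ostringstream frame;
  frame << std::hex << n << "\r\n";
  frame.write(chunk_.data(), static_cast<std::streamsize>(n));
  frame << "\r\n";
  auto payload = std::make_shared<std::string>(frame.str());

  auto self(shared_from_this());  // keep the connection alive while the write is pending
  boost::asio::async_write(socket_, boost::asio::buffer(*payload),
      [this, self, payload, n](boost::system::error_code ec, std::size_t /*written*/)
      {
        if (!ec && n > 0)
          send_next_chunk();  // more of the file may remain: read and send the next chunk
      });
}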

For a typical HTTP request, 8K seems enough to me, provided request bodies are handled separately.

But keep in mind that this is only an example. If you want an HTTP server that aims to serve nearly every possible purpose, you should look elsewhere; just don't expect such an implementation to be as trivial as this Boost example.

Upvotes: 2
