Akash

Reputation: 969

Using hugepages with boost interprocess managed shared memory

I am using https://www.boost.org/doc/libs/1_80_0/doc/html/interprocess/managed_memory_segments.html (boost::interprocess::managed_shared_memory) for a vector:

// Vector, SharedData typedefs
mempool_ = segment_.find_or_construct<Vector<SharedData>>(
    mempool_name_.c_str())(size_, segment_.get_segment_manager());

which works as expected, with size_ = 32K above. When I profile the vector's memory accesses over several iterations, I observe latency jumps exactly every 4 KiB while accessing contiguous elements. The default OS page size is 4 KiB, so this is certainly due to page faults occurring at page boundaries.

To avoid these page faults in my latency-sensitive use case, I am trying to increase the page size of this shared memory segment by using hugepages.

I found a conversation in the Boost Google group that is exactly about this: https://groups.google.com/g/boost-developers-archive/c/bDSd9DOTbp0

A ticket mentioned there appears to have been fixed: https://svn.boost.org/trac10/ticket/8030

But I cannot seem to find documentation on how to use MAP_HUGETLB flag for the managed_memory_segment. Any help would be appreciated!

Upvotes: 2

Views: 498

Answers (1)

sehe

Reputation: 393694

Support for custom map options exists only for mapped_region. I analyze the code below.

You could still have your cake and eat it, by using mapped_region (e.g. on top of shared_memory_object or anonymous_shared_memory). You can then mount a managed_external_buffer on it to get a managed segment inside that custom mapping.

Perhaps considering the level of control you are after, you could just use the mapped_region directly.

Code Dive

The fix lives in git as commit f9c10bd60d8a1ad30ebc7ef86ca5cb9184fbd966.

The release notes for 1.54.0 say

*  Added support for platform-specific flags to mapped_region (ticket #8030)

The following documentation is found at mapped_region:

The map is created using default_map_options. This flag is OS dependant and it should not be changed unless the user needs to specify special options.

In Windows systems map_options is a DWORD value passed as dwDesiredAccess to MapViewOfFileEx. If default_map_options is passed it's initialized to zero. map_options is XORed with FILE_MAP_[COPY|READ|WRITE].

In UNIX systems and POSIX mappings map_options is an int value passed as flags to mmap. If default_map_options is specified it's initialized to MAP_NOSYNC if that option exists and to zero otherwise. map_options is XORed with MAP_PRIVATE or MAP_SHARED.

In UNIX systems and XSI mappings map_options is an int value passed as shmflg to shmat. If default_map_options is specified it's initialized to zero. map_options is XORed with SHM_RDONLY if needed.

DEMO

In response to the comment, here is a simplified example using managed_external_buffer on top of mapped_region on top of shared_memory_object:

Live On Coliru

#include <boost/interprocess/managed_external_buffer.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/shared_memory_object.hpp>

// demo only
#include <boost/interprocess/containers/vector.hpp>
#include <iostream>

namespace bip = boost::interprocess;
using Segment                  = bip::managed_external_buffer;
template <class T> using Alloc = bip::allocator<T, Segment::segment_manager>;
using Data                     = bip::vector<double, Alloc<double>>;

struct MySharedSegment {
    MySharedSegment(bip::create_only_t, char const* name, size_t size)
        : _smo{bip::create_only, name, bip::mode_t::read_write} //
    {
        _smo.truncate(size);

        _buf  = bip::mapped_region(_smo, bip::mode_t::read_write, 0, size);
        _mb   = Segment(bip::create_only, _buf.get_address(), _buf.get_size());
        _data = _mb.construct<Data>("vec")(_mb.get_segment_manager());
        _data->reserve(1000);
    }

    MySharedSegment(bip::open_only_t, char const* filename)
        : _smo{bip::open_only, filename, bip::mode_t::read_write}
        , _buf{_smo, bip::mode_t::read_write}
        , _mb{bip::open_only, _buf.get_address(), _buf.get_size()} //
    {
        assert(_mb.get_size() <= _buf.get_size());

        auto [v, vok] = _mb.find<Data>("vec");

        if (!v || !vok)
            throw std::runtime_error("expected object was not found");

        _data   = v;
    }

    Data& data() {
        assert(_data);
        return *_data;
    }

    Segment::segment_manager* get_segment_manager() {
        return _mb.get_segment_manager();
    }

  private:
    bip::shared_memory_object _smo;
    bip::mapped_region        _buf;
    Segment                   _mb;
    Data*                     _data = nullptr;
};

int main(int argc, char**) {
    if (argc < 2) {
        std::cout << "First run, creating\n";
        bip::shared_memory_object::remove("SHM_NAME");

        MySharedSegment mss(bip::create_only, "SHM_NAME", 10 * 1024 * 1024);

        auto& db  = mss.data();

        db.emplace_back(1.1);
        db.emplace_back(2.2);
        db.emplace_back(3.3);
        std::cout << "\n";
    } else {
        std::cout << "Second run, opening\n";
        MySharedSegment mss(bip::open_only, "SHM_NAME");

        for (auto el : mss.data())
            std::cout << el << " ";

        std::cout << "\n";
    }
}

Local demo:

(screenshot: terminal output of the create and open runs)

Upvotes: 0
