Reputation: 3347
What I did:
Enable huge pages as root (my system supports 1MB huge pages)
$ echo 20 > /proc/sys/vm/nr_hugepages
Mount a hugetlbfs filesystem at /mnt/hugepages
$ mount -t hugetlbfs nodev /mnt/hugepages
Create a file in the huge page filesystem
$ touch /mnt/hugepages/hello
Then map a huge page using mmap at address 0, as shown in the code below:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define FILE_NAME "/mnt/hugepages/hello"
#define PROTECTION (PROT_READ | PROT_WRITE) // page protection flags
#define LENGTH (1024*1024*1024)             // mapping length (1GB)
#define FLAGS (MAP_SHARED)                  // mapping flags
#define ADDR (void *)(0x0UL)                // hint address for the mapping

int main(void)
{
    int fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
    if (fd < 0) {
        perror("Open failed");
        exit(1);
    }
    // allocate a buffer using huge pages
    void *buf = mmap(ADDR, LENGTH, PROTECTION, FLAGS, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        unlink(FILE_NAME);
        exit(1);
    }
    return 0;
}
The program outputs:
mmap: Cannot allocate memory
Upvotes: 3
Views: 8491
Reputation: 136208
Linux only supports huge pages for private anonymous mappings (not backed by a file); i.e. you can only enable huge pages for the stack, data segment, and heap.
Nowadays, there is hugeadm to configure the system huge page pools, so there is no need to fiddle with /proc and mount. And hugectl to use huge pages for code and data.
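For example, a sketch of the two tools (both ship with libhugetlbfs; the pool size and the program name ./myapp are arbitrary placeholders):

```shell
# Reserve a minimum pool of 16 huge pages of 2MB each (run as root)
hugeadm --pool-pages-min 2MB:16

# Show the configured huge page pools
hugeadm --pool-list

# Run a program with its heap backed by huge pages
hugectl --heap ./myapp
```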
Upvotes: 6
Reputation: 2338
It is not clear whether the OP is talking about a 1GB page size or is on ARMv7 and indeed has 1MB pages (the subject does not match the description). This answer is in reference to using 1GB page sizes.
Anyway, if you want 1GB page sizes you must enable them at boot time (unless your memory is exceptionally unfragmented, since huge pages can only be allocated from hugepagesz-sized regions of contiguous free memory). To enable gigabyte huge pages, add hugepagesz=1GB hugepages=n to GRUB_CMDLINE_LINUX, where n is the number of 1GB pages you want.
You can now use 1GB huge pages through interfaces like get_huge_pages() (yay!), but you still can't allocate them with shmget/mmap (boo!). Neither has a mechanism to specify the hugepagesz, and the work-around is to set default_hugepagesz=1GB as an additional parameter on your kernel boot command line.
Once you have set all three parameters, say goodbye to TLB misses and bask in the glory that is 1GB page sizes!... Unless you are on POWER, in which case you should be basking in the glory that is 16GB page sizes ;).
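The get_huge_pages() interface mentioned above comes from libhugetlbfs. A minimal sketch of its use (assumes the library and its hugetlbfs.h header are installed, and that a huge page pool is configured; link with -lhugetlbfs):

```c
#include <stdio.h>
#include <hugetlbfs.h>

int main(void)
{
    /* Allocate one region backed by huge pages of the default size;
     * with default_hugepagesz=1GB that default is 1GB. */
    size_t len = gethugepagesize();
    void *buf = get_huge_pages(len, GHP_DEFAULT);
    if (buf == NULL) {
        perror("get_huge_pages");
        return 1;
    }
    printf("allocated %zu bytes backed by huge pages\n", len);
    free_huge_pages(buf);
    return 0;
}
```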
# Script to create /hugepages mount point and enable 1GB hugepages
# For RHEL (6) Systems!
#
# MAKE SURE YOU KNOW WHAT THIS SCRIPT DOES BEFORE RUNNING!
echo "hugetlbfs /hugepages hugetlbfs rw,mode=0777,pagesize=1G 0 0" \
>> /etc/fstab
mkdir /hugepages
sed 's/rhgb quiet/hugepagesz=1GB default_hugepagesz=1GB hugepages=16 selinux=0/' /etc/default/grub > grub
cp /etc/default/grub grub.old
mv -f grub /etc/default/grub
grub2-mkconfig > /etc/grub2-efi.cfg
# Now reboot
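After rebooting, a quick sketch for verifying that the parameters took effect and the pool was populated:

```shell
# Confirm the huge page parameters made it onto the kernel command line
cat /proc/cmdline

# HugePages_Total should show the pages reserved at boot,
# and Hugepagesize the default huge page size
grep -i huge /proc/meminfo
```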
Upvotes: 2
Reputation: 4752
Note that you will also need to use ftruncate(2) to adjust the size of the file so that it actually holds the amount of memory you use. The mmap(2) will still work for a zero-sized file, but you'll get a SIGBUS when trying to access the memory:

Use of a mapped region can result in these signals:
...
SIGBUS Attempted access to a portion of the buffer that does not correspond to the file (for example, beyond the end of the file, including the case where another process has truncated the file).

(From mmap(2).)
To check that the area is really using huge pages, you can inspect /proc/[pid]/smaps (documented in proc(5) on Linux). Check whether VmFlags contains ht for the memory area.
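For example, a sketch (substitute the process ID you are inspecting for $PID):

```shell
# Huge-page regions carry the "ht" flag in their VmFlags line;
# print each such line together with the region header above it
grep -B 15 'VmFlags:.* ht' /proc/$PID/smaps
```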
Edit:
Have you looked into libhugetlbfs by the way?
Upvotes: 0