Federico Loro

Reputation: 85

Another postgres not starting: "could not map anonymous shared memory"

I noticed there are several questions about Postgres (10) failing to start because of shared memory, but despite reading them I couldn't get mine to run. Now every time I try to start the cluster I keep getting this error:

2021-10-24 10:13:43.269 UTC  [11253] FATAL:  could not map anonymous shared memory: Cannot allocate memory
2021-10-24 10:13:43.269 UTC  [11253] HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 5507432448 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.

I tried to check and modify the kernel shmall parameter, but that didn't work:

~$ cat /proc/sys/kernel/shmall
18446744073692774399
~$ cat /proc/sys/kernel/shmmax
18446744073692774399

I can't reduce the memory settings in the config file because these are the settings I must use, although I tried anyway without success (I even used the minimum values).
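
For reference, the 5507432448 bytes from the HINT is a bit over 5.1 GiB, and that request has to fit into available memory and swap at the moment the postmaster starts. This is the kind of check I can run to compare (plain shell, nothing Postgres-specific):

# the size PostgreSQL asked for, taken from the HINT above
echo $(( 5507432448 / 1024 / 1024 )) MiB    # ~5252 MiB
# what is currently free (RAM and swap)
free -m
# a strict overcommit policy (2) can also make the mmap() fail
cat /proc/sys/vm/overcommit_memory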

Thanks for the help!

EDIT 1:

Ok, I found something strange:

  1. I purged postgresql completely from the system
  2. I reinstalled postgresql
  3. Started the cluster and everything went well
  4. Stopped the cluster and edited the configuration file with the settings I need
  5. Tried to start the cluster again but got the same error
  6. Reverted to the default configuration file, but I'm still getting the same error (see the config check sketched below). Why is this happening?
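
One thing I haven't fully ruled out is a setting surviving outside the main configuration file (for example postgresql.auto.conf in the data directory, or a conf.d include). Something like this should show whatever memory-related settings are still in effect; the paths are the stock Debian/Ubuntu ones for a Postgres 10 cluster named "main", so adjust them if yours differ:

sudo grep -RE 'shared_buffers|max_connections|huge_pages' \
    /etc/postgresql/10/main/postgresql.conf \
    /etc/postgresql/10/main/conf.d/ \
    /var/lib/postgresql/10/main/postgresql.auto.conf 2>/dev/null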

The system has 16 GB of RAM.

EDIT 2: ADDING DETAILS

OK, so basically I had to migrate a PostgreSQL database by moving the data directory; that was the only way it could be done. After moving the data folder completely, I get this error. I noticed there was no swap space, so I created some, but I still get the same error.

The shared memory settings and segments on the two servers are basically identical. When I start the cluster it fails immediately, so maybe that's a clue. The destination server is an EC2 machine.

I don't know what else I can do.

EDIT 3: OTHER DETAILS

This is what the command lsipc shows:

RESOURCE DESCRIPTION                                              LIMIT USED  USE%
MSGMNI   Number of message queues                                 32000    0 0.00%
MSGMAX   Max size of message (bytes)                               8192    -     -
MSGMNB   Default max size of queue (bytes)                        16384    -     -
SHMMNI   Shared memory segments                                    4096    0 0.00%
SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
SEMMNS   Total number of semaphores                          1024000000    0 0.00%
SEMMSL   Max semaphores per semaphore set.                        32000    -     -
SEMOPM   Max number of operations per semop(2)                      500    -     -
SEMVMX   Semaphore max value                                      32767    -     -

These values are the same as on the source machine I'm migrating from. The data directory is identical to the original one, but every time I run this command:

sudo pg_ctlcluster 10 main start

I get the same error. I really need help!

Upvotes: 2

Views: 13325

Answers (5)

ijustlovemath

Reputation: 938

I ran into this error when using podman-compose to set up a Postgres container. The issue only happened on a macOS machine, but it could be more general. The fix was to increase the amount of memory available to the podman VM:

podman machine stop
podman machine set --memory $desired_memory_in_MB
podman machine start

Usually the error message will include some number of bytes it tried to allocate for shared_buffers, so set $desired_memory_in_MB to something larger than that.
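
For example, with the roughly 5.5 GB request from the question above, something like 8192 MB would leave some headroom (the exact figure is only an illustration):

podman machine stop
podman machine set --memory 8192
podman machine start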

Additionally, you can limit the amount of shared memory PostgreSQL requests by passing settings in the command section of your docker-compose.yml:

docker-compose.yml

services:
  db:
    image: "docker.io/postgres:16.1-alpine"
    ...
    command:
      - "postgres"
      - "-c"
      - "shared_buffers=2G"

References

[1] - Increase Podman memory limit

[2] - How to customize the configuration file of the official PostgreSQL Docker image?

Upvotes: 0

Atehe

Reputation: 117

Rebooting my server fixed this for me

sudo reboot

Upvotes: 0

olegabr

Reputation: 545

In my case the culprit was a postgresql.auto.conf file I had created myself with the ALTER SYSTEM command. Clearing out that file's contents reset the settings to the system defaults and fixed the error.
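
The file lives in the data directory; on a stock Debian/Ubuntu Postgres 10 cluster that is typically /var/lib/postgresql/10/main (an assumption - check data_directory for your setup). If the server still starts, ALTER SYSTEM RESET ALL; is the supported way to clear it; otherwise you can inspect and empty the file directly:

# show what ALTER SYSTEM has written
sudo cat /var/lib/postgresql/10/main/postgresql.auto.conf
# empty it while the server is stopped (this is what "clearing out" means here)
sudo truncate -s 0 /var/lib/postgresql/10/main/postgresql.auto.conf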

Upvotes: 0

Laurenz Albe

Reputation: 247765

As the documentation says:

By default, PostgreSQL allocates a very small amount of System V shared memory, as well as a much larger amount of anonymous mmap shared memory.

So unless you changed shared_memory_type to sysv, your kernel configuration is pretty irrelevant.

I can think of two possible reasons for the error:

  • The kernel does not have enough memory available to satisfy the request. This could be because shared_buffers is set too large, or because other processes use too much memory, or because there is already a lot of shared memory mapped for other purposes.

  • You configured huge_pages = on, but there are no huge pages defined.
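
For the second case, a quick way to see whether the kernel has any huge pages reserved at all (plain Linux, nothing Postgres-specific):

grep '^Huge' /proc/meminfo    # HugePages_Total: 0 means none are reserved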

Upvotes: 1

Karol Murawski

Reputation: 384

I had a similar problem. When working with a PostgreSQL DB, you should create/provide swap space that is the same size as, or bigger than, the shared_buffers value.

How to create a swap file (e.g. 8 GB):

sudo fallocate -l 8G /.swapfile; sudo chmod 600 /.swapfile; sudo mkswap /.swapfile; sudo swapon /.swapfile; 

Then add the swap file to /etc/fstab:

/.swapfile swap swap defaults 0 0
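
To confirm the swap space is actually active and large enough, something like:

sudo swapon --show
free -h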

Upvotes: 2
