Reputation: 2055
I set the Docker memory limit correctly (the container should be able to use 50 GB), but it is only using 12.64 GB with Isolation = "process". Where did I make a mistake?
daemon.json
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"storage-opt": [ "dm.basesize=40G" ],
"hosts": ["tcp://10.0.0.32:2376", "npipe://"]
}
The moment the container is killed:
using Docker.DotNet;
using Docker.DotNet.Models;
Setting the memory:
return await client.Containers.CreateContainerAsync(
    new CreateContainerParameters
    {
        Env = environmentVariables,
        Name = containerName,
        Image = imageName,
        ExposedPorts = new Dictionary<string, EmptyStruct>
        {
            { "80", default(EmptyStruct) }
        },
        HostConfig = new HostConfig
        {
            Memory = containerMemory,
            Isolation = "process",
            CPUCount = numberOfCores,
            PortBindings = new Dictionary<string, IList<PortBinding>>
            {
                {
                    "80",
                    new List<PortBinding>
                    {
                        new PortBinding { HostPort = port.ToString(CultureInfo.InvariantCulture) }
                    }
                }
            },
            PublishAllPorts = true
        }
    }).ConfigureAwait(false);
With the current Docker version I cannot set a RAM limit for the container. I think the resources linked in the comments are much older than the current Docker version.
This did not work either; I added it at the beginning.
Upvotes: 1
Views: 836
Reputation: 53411
As stated in the different comments, you may have enough resources, but the application itself, or the .NET Core runtime, may be causing the out-of-memory error.
Please try doing a post-mortem inspection of your Docker container logs and look for related problems; I think it can be valuable.
As the container is killed and not removed, you can still access its logs with:
docker logs <container id>
You can find the container id by running:
docker ps --all
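Beyond the logs, Docker also records whether the kernel killed the container for exceeding its memory limit. A minimal sketch of that check, assuming the killed container still exists (`<container id>` is a placeholder for the ID found with `docker ps --all`):

```shell
# List stopped containers, newest first, to find the killed one
docker ps --all --filter "status=exited"

# Prints "true" if the kernel OOM-killed the container
docker inspect --format '{{.State.OOMKilled}}' <container id>

# Exit code 137 (128 + SIGKILL) is another common sign of an OOM kill
docker inspect --format '{{.State.ExitCode}}' <container id>
```

If `OOMKilled` is `false`, the process most likely died for a different reason, and the logs are the place to look.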
Upvotes: 2
Reputation: 263637
The storage-opt:
"storage-opt": [ "dm.basesize=40G" ],
will have no effect. It is used for device mapper which isn't used by default in any current version of docker and previously only applied to RedHat based systems that didn't have aufs/overlay support. With overlay2, docker will allow the container to use all the storage available in /var/lib/docker unless you set the container filesystem to read-only.
From the rest of the question, it's not clear whether you're trying to limit memory (RAM) or storage (disk). These are not the same thing. It's also not clear whether these are native Windows containers or Linux containers running in the embedded VM.
Assuming you want to limit the memory of a Linux container, simply start the container with the --memory (or -m) option set to your desired limit. E.g.:
docker run -m 30g some_image
This is a limit; it doesn't allocate the memory. It caps the container, and the kernel will kill it with an OOM error if the container attempts to exceed the limit.
When docker is run from within Docker Desktop, you also need to set the memory and/or disk allocated to the embedded VM. Any container or process within that VM is then limited based on the capacity of the VM itself. Setting these varies by how you have installed Docker and details for this are found in Docker's documentation.
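To confirm the limit is actually applied inside a Linux container, you can read the cgroup memory limit from within it. A sketch, assuming a cgroup v2 host (on cgroup v1 the file is /sys/fs/cgroup/memory/memory.limit_in_bytes instead):

```shell
# Start a throwaway container with a 512 MB limit and print the limit
# the kernel actually enforces on it (536870912 bytes = 512 MB)
docker run --rm -m 512m busybox cat /sys/fs/cgroup/memory.max
```

If the printed value is `max` or much larger than expected, the limit was not applied; with Docker Desktop this usually means the embedded VM's own memory allocation is the effective ceiling.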
Upvotes: 2