James Wierzba

Reputation: 17538

Specify memory allocation in JupyterHub?

We deploy JupyterHub to Kubernetes (specifically, AWS's managed Kubernetes service, EKS) via Helm, and we run version 0.8.2 of JupyterHub.

We want to know:

(1) What is the default memory allocation for notebook servers?

(2) Is it possible to increase it? How?

For reference, this is our Helm chart configuration:

auth:
  admin:
    access: true
    users:
      - REDACTED
  type: github
  github:
    clientId: "REDACTED"
    clientSecret: "REDACTED"
    callbackUrl: "REDACTED"
    org_whitelist:
      - "REDACTED"
  scopes:
    - read:org

singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    # name: jupyter/datascience-notebook
    # tag: 177037d09156
    name: REDACTED
    tag: REDACTED
    pullPolicy: Always
  storage:
    capacity: 32Gi

  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "touch ~/.env && chmod 777 ~/.env"]

hub:
  # cookie_max_age_days - determines how long we keep the github
  # cookie in the hub server (in days).
  # cull_idle_servers time out - determines how long it takes before
  # we kick out an inactive user and shut down their user server.
  extraConfig: |
    import sys
    c.JupyterHub.cookie_max_age_days = 2
    c.JupyterHub.services = [
        {
            "name": "cull-idle",
            "admin": True,
            "command": [sys.executable, "/usr/local/bin/cull_idle_servers.py", "--timeout=3600"],
        }
    ]

Upvotes: 1

Views: 1176

Answers (1)

Matt

Reputation: 74680

The JupyterHub v0.8.2 chart's default memory resource request for each singleuser pod/container is 1G in the chart values. Note that this is a resource request, which tells the Kubernetes scheduler how much memory the container should reserve on a node; the container is still free to use additional memory available on the node if needed. Kubernetes should only start evicting pods when the whole node is under memory pressure, which in practice means less than roughly 100MiB of free memory in total.
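
To verify what a running singleuser pod actually requested, you can inspect its resources directly. A minimal sketch, assuming your hub runs in a namespace called jhub and the pod follows the chart's jupyter-<username> naming convention (both assumptions; adjust to your deployment):

# Namespace and pod name are assumptions; adjust to your deployment.
kubectl get pod jupyter-someuser -n jhub \
  -o jsonpath='{.spec.containers[0].resources}'

This prints the requests/limits block that the chart rendered for that container.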

To change this, override the singleuser.memory.guarantee value to set a different request (the chart uses the name "guarantee" for what Kubernetes calls a request).

singleuser:
  memory:
    guarantee: '1024Mi'
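
Assuming the chart is installed from the standard jupyterhub/jupyterhub repo with a release named jhub (both are assumptions; substitute your own), the override can be applied with an ordinary helm upgrade:

# Release name and repo are assumptions; config.yaml holds the override above.
helm upgrade jhub jupyterhub/jupyterhub \
  --version 0.8.2 \
  -f config.yaml

The new request only takes effect for singleuser servers started after the upgrade; already-running pods keep their original spec.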

The other option is to set a hard limit at which the container can be killed. No limit is set in the chart's default values. To enforce one, override the singleuser.memory.limit value when running helm.

singleuser:
  memory:
    limit: '1024Mi'
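
Both settings can be combined, for example to guarantee 1G per user while capping usage at a higher ceiling (the figures here are purely illustrative):

singleuser:
  memory:
    guarantee: '1G'
    limit: '4G'

When a container exceeds its memory limit, Kubernetes OOM-kills it rather than waiting for the node to come under pressure, so the user sees their server restart instead of the whole node being destabilized.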

If you want to manage overall usage, consider resource quotas on the namespace JupyterHub runs in, since all of the settings above apply per user/singleuser instance.
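
A minimal sketch of such a quota, assuming JupyterHub runs in a namespace called jhub (the namespace name and the figures are assumptions):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: singleuser-memory
  namespace: jhub
spec:
  hard:
    requests.memory: 64Gi   # total memory that may be requested in the namespace
    limits.memory: 128Gi    # total memory limit across all pods in the namespace

Note that once a quota constrains limits.memory, every new pod in the namespace must declare a memory limit or it will be rejected, so you would want to set singleuser.memory.limit as well.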

Upvotes: 2
