Reputation: 855
I've tried to run some Docker containers in Azure Container Instances, but they always end up in the 'Waiting' state.
I used the uptime-kuma Docker image, which I pulled locally and pushed to Azure Container Registry.
The container runs locally just fine and deploys without errors from both the command line and the portal. But after deployment or a restart, the instance always ends up in the 'Waiting' state, and there don't seem to be any logs. I then tried the stock 'nginx' image in case there was a problem with this particular one, but it fails just like kuma. What am I doing wrong?
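For reference, the command-line deployment is just the YAML file below fed to the CLI, something like this (the resource group name is a placeholder):

# deploy the container group from the YAML file; 'my-rg' is an example name
az container create --resource-group my-rg --file uptime-kuma.yaml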
Deployment YAML for reference:
api-version: 2021-12-25
location: swedencentral
name: uptime-kuma
properties:
  imageRegistryCredentials:
  - server: acrcustom.azurecr.io
    username: admin
    password: ...
  containers:
  - name: uptime-kuma-app
    properties:
      image: acrcustom.azurecr.io/nginx-app:v1
      ports:
      - port: 3001
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: uptime-kuma
        mountPath: /app/data
  volumes:
  - name: uptime-kuma
    emptyDir: {}
  ipAddress:
    dnsNameLabel: stagingkuma
    ports:
    - port: 3001
      protocol: TCP
    type: Public
  osType: Linux
tags: null
type: Microsoft.ContainerInstance/containerGroups
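I've already tried pulling logs and the instance view with the CLI, roughly like this (resource group name is again a placeholder), but nothing useful comes back, presumably because the container never starts:

# container stdout/stderr; comes back empty while the group sits in 'Waiting'
az container logs --resource-group my-rg --name uptime-kuma

# current state and restart events for the container
az container show --resource-group my-rg --name uptime-kuma --query "containers[0].instanceView"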
Is there a way I can get some debug logs and see what's wrong?
Upvotes: 0
Views: 44
Reputation: 855
Okay, I added a Log Analytics diagnostics section to the YAML like so:
api-version: 2021-12-25
location: swedencentral
name: uptime-kuma
properties:
  diagnostics:
    logAnalytics:
      workspaceId: ed75d545-80ba-41fe-ace7-3b70017eb188
      workspaceKey: ...
  containers:
  ...
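For anyone reproducing this: the workspace ID and key come from the CLI, and once diagnostics are wired up the container's output lands in the ContainerInstanceLog_CL table, which you can also query from the CLI. Workspace and resource group names below are placeholders:

# look up the workspace ID (customerId) and the shared keys
az monitor log-analytics workspace show --resource-group my-rg --workspace-name my-workspace --query customerId
az monitor log-analytics workspace get-shared-keys --resource-group my-rg --workspace-name my-workspace

# read the container's stdout/stderr from Log Analytics
az monitor log-analytics query --workspace ed75d545-80ba-41fe-ace7-3b70017eb188 \
  --analytics-query "ContainerInstanceLog_CL | where ContainerGroup_s == 'uptime-kuma' | project TimeGenerated, Message"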
That showed that deploying my local image resulted in an 'exec format error', which was because I'm on an ARM Mac, so the image I had built and pushed was arm64 rather than the amd64 that ACI runs.
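If you want to keep using your own image, the fix on the build side is to target amd64 explicitly; something like this (image name taken from my YAML above):

# build an amd64 image on an ARM Mac and push it straight to ACR
docker buildx build --platform linux/amd64 -t acrcustom.azurecr.io/nginx-app:v1 --push .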
After that I tried the Docker Hub image from the portal, and the error was a missing directory (the image needs a volume mounted at /app/data, but the portal doesn't let you set one up).
Finally, I changed the YAML to use the Docker Hub image (which is amd64) together with the existing mount config, and it started. Phew.
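For reference, the only change from the YAML in the question was the image line; the official Docker Hub image for kuma is louislam/uptime-kuma, so the container section ends up something like this (the tag is an example):

containers:
- name: uptime-kuma-app
  properties:
    image: louislam/uptime-kuma:1
    # ports, resources and volumeMounts unchanged from the question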
Upvotes: 0