Reputation: 545
When describing a node, a set of status conditions shows up in the output.
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 10 Aug 2021 10:55:23 +0700 Tue, 10 Aug 2021 10:55:23 +0700 CalicoIsUp Calico is running on this node
MemoryPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Aug 2021 12:02:18 +0700 Thu, 12 Aug 2021 14:55:48 +0700 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 16 Aug 2021 12:02:18 +0700 Mon, 16 Aug 2021 11:54:02 +0700 KubeletNotReady PLEG is not healthy: pleg was last seen active 11m17.462332922s ago; threshold is 3m0s
I have 2 questions:
Upvotes: 6
Views: 7603
Reputation: 5145
kubectl get events --field-selector involvedObject.kind=Node,type!=Normal --sort-by='.lastTimestamp'
Note: you can combine multiple field selectors by separating them with commas. I came up with this gem, which is super useful for debugging clusters with hundreds of nodes, as it shows only relevant data: type!=Normal filters out the cruft, and --sort-by orders the events by time.
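As an offline sketch of what that filter does (the sample output below is hypothetical, not from a live cluster), the type!=Normal part can be mimicked with grep -v; on a real cluster the field selector is preferable because it filters server-side:

```shell
# Hypothetical sample of `kubectl get events` output for Node objects
# (LAST SEEN / TYPE / REASON / OBJECT / MESSAGE columns).
events='44m   Normal   NodeHasDiskPressure   node/kworker   Node kworker status is now: NodeHasDiskPressure
3m50s Warning EvictionThresholdMet node/kworker Attempting to reclaim inodes'

# Keep only non-Normal events, mimicking type!=Normal (sketch only).
printf '%s\n' "$events" | grep -v ' Normal '
```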
Upvotes: 0
Reputation: 4614
You're right, the kubectl describe <NODE_NAME> command shows the current condition status (False/True).
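As a quick sketch of checking those condition statuses programmatically (the sample data below is hypothetical, shaped like the Type/Status columns from the question, not read from a live cluster):

```shell
# Hypothetical excerpt of the Type and Status columns from
# `kubectl describe node`; on a real cluster you would parse the
# actual describe output instead.
conditions='NetworkUnavailable  False
MemoryPressure      False
DiskPressure        False
PIDPressure         False
Ready               False'

# Flag anything unhealthy: Ready should be True, every
# pressure/unavailable condition should be False.
printf '%s\n' "$conditions" | awk '
  $1 == "Ready" && $2 != "True"  { print $1 " is " $2 " (problem)" }
  $1 != "Ready" && $2 != "False" { print $1 " is " $2 " (problem)" }'
# prints: Ready is False (problem)
```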
You can monitor Nodes events using the following command:
# kubectl get events --watch --field-selector involvedObject.kind=Node
LAST SEEN TYPE REASON OBJECT MESSAGE
3m50s Warning EvictionThresholdMet node/kworker Attempting to reclaim inodes
44m Normal NodeHasDiskPressure node/kworker Node kworker status is now: NodeHasDiskPressure
To view only status related events, you can use grep
with the previous command:
# kubectl get events --watch --field-selector involvedObject.kind=Node | grep "status is now"
44m Normal NodeHasDiskPressure node/kworker Node kworker status is now: NodeHasDiskPressure
By default, these events are retained for 1 hour. However, you can run the kubectl get events --watch --field-selector involvedObject.kind=Node command from within a Pod and collect the output from that command using a log aggregation system like Loki. I've described this approach with a detailed explanation here.
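A minimal sketch of that collection idea (the function name and sample line are hypothetical): each line is prefixed with a UTC timestamp so the record stays useful after the 1-hour event retention. In the setup described above, the input would come from the kubectl watch command rather than echo.

```shell
# Prefix each incoming event line with a UTC timestamp. In practice this
# would consume the stream from:
#   kubectl get events --watch --field-selector involvedObject.kind=Node
stamp_events() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$line"
  done
}

# Hypothetical sample event line (not from a live cluster):
echo '3m50s Warning EvictionThresholdMet node/kworker Attempting to reclaim inodes' | stamp_events
```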
Upvotes: 9
Reputation: 763
You can use kubectl describe node <nodename> --show-events=true
and kubectl get events
, which will show you the events related to the described object. Events are persisted in etcd and provide high-level information on what is happening in the cluster.
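As a small offline illustration of ordering those events by time (the sample lines are hypothetical; on a real cluster you would use --sort-by=.lastTimestamp instead): with an RFC 3339 timestamp in the first column, a plain lexical sort is also a chronological sort.

```shell
# Hypothetical event lines with an RFC 3339 timestamp in the first column.
events='2021-08-16T11:54:02Z Warning KubeletNotReady node/kworker PLEG is not healthy
2021-08-12T14:55:48Z Normal NodeHasNoDiskPressure node/kworker Node kworker status is now: NodeHasNoDiskPressure'

# Lexical sort equals chronological sort for this timestamp format.
printf '%s\n' "$events" | sort
```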
Upvotes: 1