
Kubeintel

Home / namespaces / kube-system / pods / cilium-g7zrz / investigator
Views: Summary · Metadata · Containers · Spec · Status · All · Events · Logs · Investigator
Pod Details

Name: cilium-g7zrz

Namespace: kube-system

Status: Running

IP: 10.108.0.2

Node: system-0-655pn

Ready: 1/1

Kubectl Commands
  • View
  • Delete
  • Describe
  • Debug
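
These actions presumably map to standard kubectl operations on this pod; a minimal sketch, assuming direct kubectl access to the cluster (the debug image is a placeholder, not something Kubeintel specifies):

# View the live manifest
kubectl get pod cilium-g7zrz -n kube-system -o yaml

# Describe the pod (conditions, mounts, recent events)
kubectl describe pod cilium-g7zrz -n kube-system

# Delete the pod; the owning cilium DaemonSet recreates it
kubectl delete pod cilium-g7zrz -n kube-system

# Attach an ephemeral debug container targeting cilium-agent (placeholder image)
kubectl debug -it cilium-g7zrz -n kube-system --target=cilium-agent --image=busybox -- sh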
Containers

Name          Image                                         Ready   Restarts
cilium-agent  ghcr.io/digitalocean-packages/cilium:v1....   Ready   -
Init Containers

Name                     Image                                         Ready      Restarts
delay-cilium-for-ccm     ghcr.io/digitalocean-packages/cilium:v1....   Completed  -
config                   ghcr.io/digitalocean-packages/cilium:v1....   Completed  -
mount-cgroup             ghcr.io/digitalocean-packages/cilium:v1....   Completed  -
apply-sysctl-overwrites  ghcr.io/digitalocean-packages/cilium:v1....   Completed  -
mount-bpf-fs             ghcr.io/digitalocean-packages/cilium:v1....   Completed  -
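
The list above is only the first page of the UI table; the remaining init containers appear in the full manifest below. The same per-container view can also be pulled straight from the API; a sketch with kubectl (the column layout is illustrative, not Kubeintel's own query):

# Main containers: name, ready flag, restart count
kubectl get pod cilium-g7zrz -n kube-system \
  -o custom-columns='NAME:.status.containerStatuses[*].name,READY:.status.containerStatuses[*].ready,RESTARTS:.status.containerStatuses[*].restartCount'

# Init containers and their terminal state
kubectl get pod cilium-g7zrz -n kube-system \
  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state.terminated.reason}{"\n"}{end}'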
Metadata

Creation Time: 2025-04-17T22:04:46Z

Labels:

  • app.kubernetes.io/name: cilium-agent...
  • app.kubernetes.io/part-of: cilium...
  • controller-revision-hash: 79f45cdb77...
  • doks.digitalocean.com/managed: true...
  • k8s-app: cilium
  • kubernetes.io/cluster-service: true...
  • pod-template-generation: 6...

Annotations:

  • clusterlint.digitalocean.com/disabled-checks: privileged-container...
  • container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined...
  • container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined...
  • container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined...
  • container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined...
  • kubectl.kubernetes.io/default-container: cilium-agent...
  • prometheus.io/port: 9090...
  • prometheus.io/scrape: true...
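
Label and annotation values are truncated for display here; the full values can be pulled directly with kubectl:

# All labels on the pod
kubectl get pod cilium-g7zrz -n kube-system --show-labels

# The full annotations map
kubectl get pod cilium-g7zrz -n kube-system -o jsonpath='{.metadata.annotations}'

The complete manifest, as captured from the All view, follows.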
metadata:
name: cilium-g7zrz
generateName: cilium-
namespace: kube-system
uid: b737ef68-c16e-4507-ac42-c65a5eab8d0a
resourceVersion: '92140142'
creationTimestamp: '2025-04-17T22:04:46Z'
labels:
app.kubernetes.io/name: cilium-agent
app.kubernetes.io/part-of: cilium
controller-revision-hash: 79f45cdb77
doks.digitalocean.com/managed: 'true'
k8s-app: cilium
kubernetes.io/cluster-service: 'true'
pod-template-generation: '6'
annotations:
clusterlint.digitalocean.com/disabled-checks: privileged-containers,non-root-user,resource-requirements,hostpath-volume
container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
kubectl.kubernetes.io/default-container: cilium-agent
prometheus.io/port: '9090'
prometheus.io/scrape: 'true'
ownerReferences:
- apiVersion: apps/v1
kind: DaemonSet
name: cilium
uid: f644a837-ae29-48a0-89c7-2d886e50903e
controller: true
blockOwnerDeletion: true
spec:
volumes:
- name: host-kubectl
hostPath:
path: /usr/bin/kubectl
type: File
- name: tmp
emptyDir: {}
- name: cilium-run
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
- name: bpf-maps
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
- name: hostproc
hostPath:
path: /proc
type: Directory
- name: cilium-cgroup
hostPath:
path: /run/cilium/cgroupv2
type: DirectoryOrCreate
- name: cni-path
hostPath:
path: /opt/cni/bin
type: DirectoryOrCreate
- name: etc-cni-netd
hostPath:
path: /etc/cni/net.d
type: DirectoryOrCreate
- name: lib-modules
hostPath:
path: /lib/modules
type: ''
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: clustermesh-secrets
projected:
sources:
- secret:
name: cilium-clustermesh
optional: true
- secret:
name: clustermesh-apiserver-remote-cert
items:
- key: tls.key
path: common-etcd-client.key
- key: tls.crt
path: common-etcd-client.crt
- key: ca.crt
path: common-etcd-client-ca.crt
optional: true
defaultMode: 256
- name: host-proc-sys-net
hostPath:
path: /proc/sys/net
type: Directory
- name: host-proc-sys-kernel
hostPath:
path: /proc/sys/kernel
type: Directory
- name: hubble-tls
projected:
sources:
- secret:
name: hubble-server-certs
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
- key: ca.crt
path: client-ca.crt
optional: true
defaultMode: 256
- name: kube-api-access-t7zzb
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
initContainers:
- name: delay-cilium-for-ccm
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- bash
- '-e'
- '-c'
- >
# This will get the node object for the local node and search through
# the assigned addresses in the object in order to check whether CCM
# already set the internal AND external IP since cilium needs both
# for a clean startup.
# The grep matches regardless of the order of IPs.
until /host/usr/bin/kubectl get node ${HOSTNAME} -o
jsonpath="{.status.addresses[*].type}" | grep -E
"InternalIP.*ExternalIP|ExternalIP.*InternalIP"; do echo "waiting for
CCM to store internal and external IP addresses in node object:
${HOSTNAME}" && sleep 3; done;
env:
- name: KUBERNETES_SERVICE_HOST
value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
- name: KUBERNETES_SERVICE_PORT
value: '443'
resources:
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: host-kubectl
mountPath: /host/usr/bin/kubectl
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: config
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- cilium
- build-config
- '--source=config-map:cilium-config'
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
- name: KUBERNETES_SERVICE_PORT
value: '443'
resources: {}
volumeMounts:
- name: tmp
mountPath: /tmp
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
- name: mount-cgroup
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- sh
- '-ec'
- >
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt
"${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount
env:
- name: CGROUP_ROOT
value: /run/cilium/cgroupv2
- name: BIN_PATH
value: /opt/cni/bin
resources: {}
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- SYS_ADMIN
- SYS_CHROOT
- SYS_PTRACE
drop:
- ALL
seLinuxOptions:
type: spc_t
level: s0
- name: apply-sysctl-overwrites
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- sh
- '-ec'
- |
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
rm /hostbin/cilium-sysctlfix
env:
- name: BIN_PATH
value: /opt/cni/bin
resources: {}
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- SYS_ADMIN
- SYS_CHROOT
- SYS_PTRACE
drop:
- ALL
seLinuxOptions:
type: spc_t
level: s0
- name: mount-bpf-fs
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- /bin/bash
- '-c'
- '--'
args:
- mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
resources: {}
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
- name: clean-cilium-state
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-state
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-bpf-state
optional: true
- name: KUBERNETES_SERVICE_HOST
value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
- name: KUBERNETES_SERVICE_PORT
value: '443'
resources: {}
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
- name: cilium-cgroup
mountPath: /run/cilium/cgroupv2
mountPropagation: HostToContainer
- name: cilium-run
mountPath: /var/run/cilium
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
- SYS_ADMIN
- SYS_RESOURCE
drop:
- ALL
seLinuxOptions:
type: spc_t
level: s0
- name: install-cni-binaries
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- /install-plugin.sh
resources:
requests:
cpu: 100m
memory: 10Mi
volumeMounts:
- name: cni-path
mountPath: /host/opt/cni/bin
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
drop:
- ALL
seLinuxOptions:
type: spc_t
level: s0
containers:
- name: cilium-agent
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
command:
- cilium-agent
args:
- '--config-dir=/tmp/cilium/config-map'
- >-
--k8s-api-server=https://f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
- '--ipv4-native-routing-cidr=10.244.0.0/16'
ports:
- name: peer-service
hostPort: 4244
containerPort: 4244
protocol: TCP
- name: prometheus
hostPort: 9090
containerPort: 9090
protocol: TCP
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
- name: KUBERNETES_SERVICE_HOST
value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
- name: KUBERNETES_SERVICE_PORT
value: '443'
resources:
requests:
cpu: 300m
memory: 300Mi
volumeMounts:
- name: host-proc-sys-net
mountPath: /host/proc/sys/net
- name: host-proc-sys-kernel
mountPath: /host/proc/sys/kernel
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: HostToContainer
- name: cilium-run
mountPath: /var/run/cilium
- name: etc-cni-netd
mountPath: /host/etc/cni/net.d
- name: clustermesh-secrets
readOnly: true
mountPath: /var/lib/cilium/clustermesh
- name: lib-modules
readOnly: true
mountPath: /lib/modules
- name: xtables-lock
mountPath: /run/xtables.lock
- name: hubble-tls
readOnly: true
mountPath: /var/lib/cilium/tls/hubble
- name: tmp
mountPath: /tmp
- name: kube-api-access-t7zzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
livenessProbe:
httpGet:
path: /healthz
port: 9879
host: 127.0.0.1
scheme: HTTP
httpHeaders:
- name: brief
value: 'true'
initialDelaySeconds: 120
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 10
readinessProbe:
httpGet:
path: /healthz
port: 9879
host: 127.0.0.1
scheme: HTTP
httpHeaders:
- name: brief
value: 'true'
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /healthz
port: 9879
host: 127.0.0.1
scheme: HTTP
httpHeaders:
- name: brief
value: 'true'
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 105
lifecycle:
postStart:
exec:
command:
- bash
- '-c'
- >
set -o errexit
set -o pipefail
set -o nounset
# When running in AWS ENI mode, it's likely that 'aws-node' has
# had a chance to install SNAT iptables rules. These can result
# in dropped traffic, so we should attempt to remove them.
# We do it using a 'postStart' hook since this may need to run
# for nodes which might have already been init'ed but may still
# have dangling rules. This is safe because there are no
# dependencies on anything that is part of the startup script
# itself, and can be safely run multiple times per node (e.g. in
# case of a restart).
if [[ "$(iptables-save | grep -E -c
'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
then
echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
fi
echo 'Done!'
preStop:
exec:
command:
- /cni-uninstall.sh
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- CHOWN
- KILL
- NET_ADMIN
- NET_RAW
- IPC_LOCK
- SYS_MODULE
- SYS_ADMIN
- SYS_RESOURCE
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
drop:
- ALL
seLinuxOptions:
type: spc_t
level: s0
restartPolicy: Always
terminationGracePeriodSeconds: 1
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: cilium
serviceAccount: cilium
automountServiceAccountToken: true
nodeName: system-0-655pn
hostNetwork: true
securityContext: {}
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- system-0-655pn
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
k8s-app: cilium
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
tolerations:
- operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
- key: node.kubernetes.io/disk-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/pid-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/unschedulable
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/network-unavailable
operator: Exists
effect: NoSchedule
priorityClassName: system-node-critical
priority: 2000001000
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority
status:
phase: Running
conditions:
- type: PodReadyToStartContainers
status: 'True'
lastProbeTime: null
lastTransitionTime: '2025-04-17T22:05:06Z'
- type: Initialized
status: 'True'
lastProbeTime: null
lastTransitionTime: '2025-04-17T22:05:13Z'
- type: Ready
status: 'True'
lastProbeTime: null
lastTransitionTime: '2025-04-17T22:05:18Z'
- type: ContainersReady
status: 'True'
lastProbeTime: null
lastTransitionTime: '2025-04-17T22:05:18Z'
- type: PodScheduled
status: 'True'
lastProbeTime: null
lastTransitionTime: '2025-04-17T22:04:46Z'
hostIP: 10.108.0.2
podIP: 10.108.0.2
podIPs:
- ip: 10.108.0.2
startTime: '2025-04-17T22:04:47Z'
initContainerStatuses:
- name: delay-cilium-for-ccm
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:05Z'
finishedAt: '2025-04-17T22:05:05Z'
containerID: >-
containerd://5cad42abc2d5f864fde4735e377461c0630af08b7a56c2e7e91c8de7681105a4
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://5cad42abc2d5f864fde4735e377461c0630af08b7a56c2e7e91c8de7681105a4
started: false
- name: config
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:07Z'
finishedAt: '2025-04-17T22:05:07Z'
containerID: >-
containerd://6956cf408e4724970ee6a52486ed925caa1e779dca23dbb997446f0558de2fe9
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://6956cf408e4724970ee6a52486ed925caa1e779dca23dbb997446f0558de2fe9
started: false
- name: mount-cgroup
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:08Z'
finishedAt: '2025-04-17T22:05:08Z'
containerID: >-
containerd://5d78c27fa86987f0a3fa51a536be849d057c2ddeff25c5382b06498fb0b4b05c
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://5d78c27fa86987f0a3fa51a536be849d057c2ddeff25c5382b06498fb0b4b05c
started: false
- name: apply-sysctl-overwrites
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:09Z'
finishedAt: '2025-04-17T22:05:09Z'
containerID: >-
containerd://50258722f5c9aaaeb030257bdf6d61fce449308930782a15657e3d9dbf420e98
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://50258722f5c9aaaeb030257bdf6d61fce449308930782a15657e3d9dbf420e98
started: false
- name: mount-bpf-fs
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:10Z'
finishedAt: '2025-04-17T22:05:10Z'
containerID: >-
containerd://2fc78efb0180960b0ea71c0a19d39f5b8f3b29a3087b4a32f5ad9ae3039ad418
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://2fc78efb0180960b0ea71c0a19d39f5b8f3b29a3087b4a32f5ad9ae3039ad418
started: false
- name: clean-cilium-state
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:11Z'
finishedAt: '2025-04-17T22:05:11Z'
containerID: >-
containerd://3acd0b1f35a64d1fd8c17f350c1165c7b1a908667734aa0bd2f254cc04525481
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://3acd0b1f35a64d1fd8c17f350c1165c7b1a908667734aa0bd2f254cc04525481
started: false
- name: install-cni-binaries
state:
terminated:
exitCode: 0
reason: Completed
startedAt: '2025-04-17T22:05:12Z'
finishedAt: '2025-04-17T22:05:12Z'
containerID: >-
containerd://126ad01811023bd6393d2541419d645821f6050eb31df73c2ee348d9e5b79291
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://126ad01811023bd6393d2541419d645821f6050eb31df73c2ee348d9e5b79291
started: false
containerStatuses:
- name: cilium-agent
state:
running:
startedAt: '2025-04-17T22:05:13Z'
lastState: {}
ready: true
restartCount: 0
image: ghcr.io/digitalocean-packages/cilium:v1.14.18-conformance-fix
imageID: >-
ghcr.io/digitalocean-packages/cilium@sha256:2466e77785d14d01810bd8d9907893fbd5163460b966912ff1972219fb2a21a2
containerID: >-
containerd://ead05b0607a380bcd9968c83eba8979fc46495fc068594ef988d2c253f1cf132
started: true
qosClass: Burstable
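
Per the ownerReferences block above, this pod is managed by the cilium DaemonSet, so configuration changes belong on that object rather than on the pod itself. A couple of follow-up commands, assuming the same cluster access:

# Owning DaemonSet and its rollout state
kubectl get daemonset cilium -n kube-system -o wide
kubectl rollout status daemonset/cilium -n kube-system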
Investigator

Enrich LLM Context

Select a container

Examples
  • Troubleshoot
  • Describe Pod Command
  • Describe Affinity
  • Explain Event Timeline
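
The example prompts above presumably enrich the Investigator's context with standard pod data; under that assumption, the raw equivalents look like:

# Event timeline for this pod
kubectl get events -n kube-system --field-selector involvedObject.name=cilium-g7zrz --sort-by=.lastTimestamp

# Human-readable pod summary
kubectl describe pod cilium-g7zrz -n kube-system

# Scheduling affinity referenced by "Describe Affinity"
kubectl get pod cilium-g7zrz -n kube-system -o jsonpath='{.spec.affinity}'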
Kubeintel ©2024