DaemonSet Details
Name: cilium
Namespace: kube-system
Pods: 2/2
Selector: k8s-app: cilium, kubernetes.io/cluster-service: true
Kubectl Commands
- View
- Delete
- Describe
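These dashboard actions map to standard kubectl invocations (assuming a kubeconfig pointed at this cluster):

```sh
# View the DaemonSet (summary, then full manifest)
kubectl -n kube-system get daemonset cilium
kubectl -n kube-system get daemonset cilium -o yaml

# Describe it (events, pod counts, rollout state)
kubectl -n kube-system describe daemonset cilium

# Delete it (removes the cilium-agent pod from every node)
kubectl -n kube-system delete daemonset cilium
```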
Containers
Name | Image | Ports | ... |
---|---|---|---|
cilium-agent | docker.io/digitalocean/cilium:v1.14.14-c... | 4244/TCP, 9090/TCP | ... |
Init Containers
Name | Image | Ports | ... |
---|---|---|---|
delay-cilium-for-ccm | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
config | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
mount-cgroup | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
apply-sysctl-overwrites | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
mount-bpf-fs | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
clean-cilium-state | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
install-cni-binaries | docker.io/digitalocean/cilium:v1.14.14-c... | N/A | ... |
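When an agent pod hangs in an `Init:*` state, the logs of the individual init containers are the first place to look. For example, using one of this DaemonSet's pod names (yours will differ):

```sh
# Inspect specific init containers of a cilium pod
kubectl -n kube-system logs cilium-rmgzn -c delay-cilium-for-ccm
kubectl -n kube-system logs cilium-rmgzn -c mount-bpf-fs
```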
Metadata
Creation Time: 2024-07-01T18:52:38Z
Labels:
- app.kubernetes.io/name: cilium-agent
- app.kubernetes.io/part-of: cilium
- c3.doks.digitalocean.com/component: cilium
- c3.doks.digitalocean.com/plane: data
- doks.digitalocean.com/managed: true
- k8s-app: cilium
- kubernetes.io/cluster-service: true
Annotations:
- deprecated.daemonset.template.generation: 4
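Since `k8s-app: cilium` and `kubernetes.io/cluster-service: true` are also the DaemonSet's selector labels, they can be used to list exactly the pods it manages:

```sh
# List the DaemonSet's pods via its selector labels
kubectl -n kube-system get pods -l k8s-app=cilium,kubernetes.io/cluster-service=true -o wide

# Show the DaemonSet's own labels
kubectl -n kube-system get daemonset cilium --show-labels
```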
Manifest

```yaml
metadata:
  name: cilium
  namespace: kube-system
  uid: f644a837-ae29-48a0-89c7-2d886e50903e
  resourceVersion: '45900368'
  generation: 4
  creationTimestamp: '2024-07-01T18:52:38Z'
  labels:
    app.kubernetes.io/name: cilium-agent
    app.kubernetes.io/part-of: cilium
    c3.doks.digitalocean.com/component: cilium
    c3.doks.digitalocean.com/plane: data
    doks.digitalocean.com/managed: 'true'
    k8s-app: cilium
    kubernetes.io/cluster-service: 'true'
  annotations:
    deprecated.daemonset.template.generation: '4'
spec:
  selector:
    matchLabels:
      k8s-app: cilium
      kubernetes.io/cluster-service: 'true'
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: cilium-agent
        app.kubernetes.io/part-of: cilium
        doks.digitalocean.com/managed: 'true'
        k8s-app: cilium
        kubernetes.io/cluster-service: 'true'
      annotations:
        clusterlint.digitalocean.com/disabled-checks: >-
          privileged-containers,non-root-user,resource-requirements,hostpath-volume
        container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
        container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
        container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
        container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
        kubectl.kubernetes.io/default-container: cilium-agent
        prometheus.io/port: '9090'
        prometheus.io/scrape: 'true'
    spec:
      volumes:
        - name: host-kubectl
          hostPath:
            path: /usr/bin/kubectl
            type: File
        - name: tmp
          emptyDir: {}
        - name: cilium-run
          hostPath:
            path: /var/run/cilium
            type: DirectoryOrCreate
        - name: bpf-maps
          hostPath:
            path: /sys/fs/bpf
            type: DirectoryOrCreate
        - name: hostproc
          hostPath:
            path: /proc
            type: Directory
        - name: cilium-cgroup
          hostPath:
            path: /run/cilium/cgroupv2
            type: DirectoryOrCreate
        - name: cni-path
          hostPath:
            path: /opt/cni/bin
            type: DirectoryOrCreate
        - name: etc-cni-netd
          hostPath:
            path: /etc/cni/net.d
            type: DirectoryOrCreate
        - name: lib-modules
          hostPath:
            path: /lib/modules
            type: ''
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: clustermesh-secrets
          projected:
            sources:
              - secret:
                  name: cilium-clustermesh
                  optional: true
              - secret:
                  name: clustermesh-apiserver-remote-cert
                  items:
                    - key: tls.key
                      path: common-etcd-client.key
                    - key: tls.crt
                      path: common-etcd-client.crt
                    - key: ca.crt
                      path: common-etcd-client-ca.crt
                  optional: true
            defaultMode: 256
        - name: host-proc-sys-net
          hostPath:
            path: /proc/sys/net
            type: Directory
        - name: host-proc-sys-kernel
          hostPath:
            path: /proc/sys/kernel
            type: Directory
        - name: hubble-tls
          projected:
            sources:
              - secret:
                  name: hubble-server-certs
                  items:
                    - key: tls.crt
                      path: server.crt
                    - key: tls.key
                      path: server.key
                    - key: ca.crt
                      path: client-ca.crt
                  optional: true
            defaultMode: 256
      initContainers:
        - name: delay-cilium-for-ccm
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - bash
            - '-e'
            - '-c'
            - |
              # This will get the node object for the local node and search through
              # the assigned addresses in the object in order to check whether CCM
              # already set the internal AND external IP since cilium needs both
              # for a clean startup.
              # The grep matches regardless of the order of IPs.
              until /host/usr/bin/kubectl get node ${HOSTNAME} -o jsonpath="{.status.addresses[*].type}" | grep -E "InternalIP.*ExternalIP|ExternalIP.*InternalIP"; do echo "waiting for CCM to store internal and external IP addresses in node object: ${HOSTNAME}" && sleep 3; done;
          env:
            - name: KUBERNETES_SERVICE_HOST
              value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
            - name: KUBERNETES_SERVICE_PORT
              value: '443'
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: host-kubectl
              mountPath: /host/usr/bin/kubectl
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - name: config
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - cilium
            - build-config
            - '--source=config-map:cilium-config'
          env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: CILIUM_K8S_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: KUBERNETES_SERVICE_HOST
              value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
            - name: KUBERNETES_SERVICE_PORT
              value: '443'
          resources: {}
          volumeMounts:
            - name: tmp
              mountPath: /tmp
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
        - name: mount-cgroup
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - sh
            - '-ec'
            - |
              cp /usr/bin/cilium-mount /hostbin/cilium-mount;
              nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
              rm /hostbin/cilium-mount
          env:
            - name: CGROUP_ROOT
              value: /run/cilium/cgroupv2
            - name: BIN_PATH
              value: /opt/cni/bin
          resources: {}
          volumeMounts:
            - name: hostproc
              mountPath: /hostproc
            - name: cni-path
              mountPath: /hostbin
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_CHROOT
                - SYS_PTRACE
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
              level: s0
        - name: apply-sysctl-overwrites
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - sh
            - '-ec'
            - |
              cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
              nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
              rm /hostbin/cilium-sysctlfix
          env:
            - name: BIN_PATH
              value: /opt/cni/bin
          resources: {}
          volumeMounts:
            - name: hostproc
              mountPath: /hostproc
            - name: cni-path
              mountPath: /hostbin
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_CHROOT
                - SYS_PTRACE
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
              level: s0
        - name: mount-bpf-fs
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - /bin/bash
            - '-c'
            - '--'
          args:
            - >-
              mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
          resources: {}
          volumeMounts:
            - name: bpf-maps
              mountPath: /sys/fs/bpf
              mountPropagation: Bidirectional
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
        - name: clean-cilium-state
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - /init-container.sh
          env:
            - name: CILIUM_ALL_STATE
              valueFrom:
                configMapKeyRef:
                  name: cilium-config
                  key: clean-cilium-state
                  optional: true
            - name: CILIUM_BPF_STATE
              valueFrom:
                configMapKeyRef:
                  name: cilium-config
                  key: clean-cilium-bpf-state
                  optional: true
            - name: KUBERNETES_SERVICE_HOST
              value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
            - name: KUBERNETES_SERVICE_PORT
              value: '443'
          resources: {}
          volumeMounts:
            - name: bpf-maps
              mountPath: /sys/fs/bpf
            - name: cilium-cgroup
              mountPath: /run/cilium/cgroupv2
              mountPropagation: HostToContainer
            - name: cilium-run
              mountPath: /var/run/cilium
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - SYS_MODULE
                - SYS_ADMIN
                - SYS_RESOURCE
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
              level: s0
        - name: install-cni-binaries
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - /install-plugin.sh
          resources:
            requests:
              cpu: 100m
              memory: 10Mi
          volumeMounts:
            - name: cni-path
              mountPath: /host/opt/cni/bin
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
              level: s0
      containers:
        - name: cilium-agent
          image: docker.io/digitalocean/cilium:v1.14.14-conformance-fix
          command:
            - cilium-agent
          args:
            - '--config-dir=/tmp/cilium/config-map'
            - >-
              --k8s-api-server=https://f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
            - '--ipv4-native-routing-cidr=10.244.0.0/16'
          ports:
            - name: peer-service
              hostPort: 4244
              containerPort: 4244
              protocol: TCP
            - name: prometheus
              hostPort: 9090
              containerPort: 9090
              protocol: TCP
          env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: CILIUM_K8S_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: CILIUM_CLUSTERMESH_CONFIG
              value: /var/lib/cilium/clustermesh/
            - name: KUBERNETES_SERVICE_HOST
              value: f6ce2907-8531-4ab3-861e-4e2affa620b1.k8s.ondigitalocean.com
            - name: KUBERNETES_SERVICE_PORT
              value: '443'
          resources:
            requests:
              cpu: 300m
              memory: 300Mi
          volumeMounts:
            - name: host-proc-sys-net
              mountPath: /host/proc/sys/net
            - name: host-proc-sys-kernel
              mountPath: /host/proc/sys/kernel
            - name: bpf-maps
              mountPath: /sys/fs/bpf
              mountPropagation: HostToContainer
            - name: cilium-run
              mountPath: /var/run/cilium
            - name: etc-cni-netd
              mountPath: /host/etc/cni/net.d
            - name: clustermesh-secrets
              readOnly: true
              mountPath: /var/lib/cilium/clustermesh
            - name: lib-modules
              readOnly: true
              mountPath: /lib/modules
            - name: xtables-lock
              mountPath: /run/xtables.lock
            - name: hubble-tls
              readOnly: true
              mountPath: /var/lib/cilium/tls/hubble
            - name: tmp
              mountPath: /tmp
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9879
              host: 127.0.0.1
              scheme: HTTP
              httpHeaders:
                - name: brief
                  value: 'true'
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9879
              host: 127.0.0.1
              scheme: HTTP
              httpHeaders:
                - name: brief
                  value: 'true'
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /healthz
              port: 9879
              host: 127.0.0.1
              scheme: HTTP
              httpHeaders:
                - name: brief
                  value: 'true'
            timeoutSeconds: 1
            periodSeconds: 2
            successThreshold: 1
            failureThreshold: 105
          lifecycle:
            postStart:
              exec:
                command:
                  - bash
                  - '-c'
                  - |
                    set -o errexit
                    set -o pipefail
                    set -o nounset

                    # When running in AWS ENI mode, it's likely that 'aws-node' has
                    # had a chance to install SNAT iptables rules. These can result
                    # in dropped traffic, so we should attempt to remove them.
                    # We do it using a 'postStart' hook since this may need to run
                    # for nodes which might have already been init'ed but may still
                    # have dangling rules. This is safe because there are no
                    # dependencies on anything that is part of the startup script
                    # itself, and can be safely run multiple times per node (e.g. in
                    # case of a restart).
                    if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
                    then
                        echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
                        iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
                    fi
                    echo 'Done!'
            preStop:
              exec:
                command:
                  - /cni-uninstall.sh
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: FallbackToLogsOnError
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - CHOWN
                - KILL
                - NET_ADMIN
                - NET_RAW
                - IPC_LOCK
                - SYS_MODULE
                - SYS_ADMIN
                - SYS_RESOURCE
                - DAC_OVERRIDE
                - FOWNER
                - SETGID
                - SETUID
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
              level: s0
      restartPolicy: Always
      terminationGracePeriodSeconds: 1
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: cilium
      serviceAccount: cilium
      automountServiceAccountToken: true
      hostNetwork: true
      securityContext: {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  k8s-app: cilium
              topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
      tolerations:
        - operator: Exists
      priorityClassName: system-node-critical
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 0
  revisionHistoryLimit: 10
status:
  currentNumberScheduled: 2
  numberMisscheduled: 0
  desiredNumberScheduled: 2
  numberReady: 2
  observedGeneration: 4
  updatedNumberScheduled: 2
  numberAvailable: 2
```
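The update strategy above is RollingUpdate with `maxUnavailable: 10%` and `maxSurge: 0`, so on this two-node cluster at most one agent is replaced at a time. Rollouts can be watched and, if needed, reverted with:

```sh
# Watch a rolling update of the DaemonSet
kubectl -n kube-system rollout status daemonset/cilium

# Revision history (revisionHistoryLimit: 10 above) and rollback
kubectl -n kube-system rollout history daemonset/cilium
kubectl -n kube-system rollout undo daemonset/cilium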
Pods
Name | Namespace | Status | Created | Restarts | Node | IP Address |
---|---|---|---|---|---|---|
cilium-rmgzn | kube-system | Running | a month ago | 0 | system-0-bf7s0 | 10.108.0.2 |
cilium-kpszl | kube-system | Running | a month ago | 0 | system-0-r5hb1 | 10.108.0.3 |
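Pod placement and restart counts can be re-checked at any time, and the agent image ships the `cilium` CLI for a health summary (a sketch: `cilium status --brief` is assumed to be available in this image; the pod name comes from the table above and will differ per cluster):

```sh
# Confirm pod status, restarts, node placement, and IPs
kubectl -n kube-system get pods -l k8s-app=cilium -o wide

# Query the agent's own health summary from inside one of the pods
kubectl -n kube-system exec cilium-rmgzn -- cilium status --brief
```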