What is a Manifest?
Manifest
A specification of a Kubernetes API object, expressed in JSON or YAML format.
A manifest describes the desired state of an object that Kubernetes must maintain once the manifest has been applied. A single configuration file may contain multiple manifests.
Source: kubernetes.io docs
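Since both formats describe the same object, any of the YAML manifests below could equally be written as JSON. A minimal sketch (the file name pod.json and the pod name demo are illustrative, not from the exam tasks):

```shell
# A manifest can be JSON instead of YAML; kubectl apply accepts either format.
# Minimal Pod manifest in JSON (name and image are illustrative):
cat > pod.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "demo" },
  "spec": {
    "containers": [
      { "name": "demo", "image": "nginx" }
    ]
  }
}
EOF
python3 -m json.tool pod.json   # validates the JSON syntax locally
# On a cluster you would then run: kubectl apply -f pod.json
```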
01. Create a pod. Condition 1: use a volume mount path; condition 2: the volume must not be persistent.
[solve]
[root@k8s-master ~]# vi 8-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
spec:
  containers:
  - image: redis
    name: non-persistent-redis
    volumeMounts:
    - mountPath: /data/redis
      name: non-persistent-redis
  volumes:
  - name: non-persistent-redis
    emptyDir: {}
[root@k8s-master ~]# kubectl apply -f 8-test.yaml
pod/non-persistent-redis created
: verify the pod after creation
[root@k8s-master ~]# kubectl describe pods non-persistent-redis
Name: non-persistent-redis
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/192.168.56.31
Start Time: Fri, 27 Dec 2024 01:19:09 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 88092e14472079faf0f00596a56c94c3ecbc556ea780eb3b73e14d06c700259e
cni.projectcalico.org/podIP: 20.96.36.73/32
cni.projectcalico.org/podIPs: 20.96.36.73/32
Status: Running
IP: 20.96.36.73
IPs:
IP: 20.96.36.73
Containers:
non-persistent-redis:
Container ID: containerd://06332dc5442db3adced7627cf3c08aa58cdad9f20569cb17563567c219459880
Image: redis
Image ID: docker.io/library/redis@sha256:bb142a9c18ac18a16713c1491d779697b4e107c22a97266616099d288237ef47
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 27 Dec 2024 01:19:17 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data/redis from non-persistent-redis (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4stqj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
non-persistent-redis:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-4stqj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24s default-scheduler Successfully assigned default/non-persistent-redis to k8s-node1
Normal Pulling 7s kubelet Pulling image "redis"
Normal Pulled 0s kubelet Successfully pulled image "redis" in 8.043583999s (8.043589202s including waiting)
Normal Created 0s kubelet Created container non-persistent-redis
Normal Started 0s kubelet Started container non-persistent-redis
[root@k8s-master ~]#
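The describe output confirms the volume type: "EmptyDir (a temporary directory that shares a pod's lifetime)". An emptyDir volume starts empty when the pod is scheduled and is deleted with the pod, so nothing persists. A rough local analogy (not a cluster operation, just an illustration of the lifecycle):

```shell
# emptyDir behaves like a scratch directory created at pod start and removed
# when the pod is deleted; data never outlives the pod.
scratch=$(mktemp -d)             # created empty, like emptyDir at pod start
echo hello > "$scratch/data"     # containers can read/write it while running
cat "$scratch/data"              # prints: hello
rm -rf "$scratch"                # pod deleted -> the data is gone
```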
02. Create a pod that accepts traffic on port 80.
[solve]
[root@k8s-master ~]# vi 9-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
[root@k8s-master ~]# kubectl apply -f 9-test.yaml
pod/nginx created
: inspect the created pod
[root@k8s-master ~]# kubectl get pods nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 11s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl describe pods nginx
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/192.168.56.31
Start Time: Fri, 27 Dec 2024 01:26:23 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: d7e10f6af4d810a0c1696d14cd4ca8e4beb050e38407e1ec1a762c00df99c726
cni.projectcalico.org/podIP: 20.96.36.74/32
cni.projectcalico.org/podIPs: 20.96.36.74/32
Status: Running
IP: 20.96.36.74
IPs:
IP: 20.96.36.74
Containers:
nginx:
Container ID: containerd://b87bcaee8a5d099693f62281c4ca8b72983bc874a921c89d2eafb4baf8e00b1f
Image: nginx:1.14.2
Image ID: docker.io/library/nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 27 Dec 2024 01:26:24 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ghfmt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ghfmt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/nginx to k8s-node1
Normal Pulled 15s kubelet Container image "nginx:1.14.2" already present on machine
Normal Created 15s kubelet Created container nginx
Normal Started 15s kubelet Started container nginx
[root@k8s-master ~]#
03. Create a pod with a single app container for each of the following images: nginx, redis, and memcached (three containers in one pod).
[solve]
[root@k8s-master ~]# vi 12-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
  - name: redis
    image: redis
  - name: memcached
    image: memcached
[root@k8s-master ~]# kubectl apply -f 12-test.yaml
pod/kucc8 created
[root@k8s-master ~]# kubectl describe pods kucc8
Name: kucc8
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/192.168.56.31
Start Time: Fri, 27 Dec 2024 03:16:53 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 87c7f633264271a810c282c3aa98806159c5e8d448fefd25ccdabf1208359a70
cni.projectcalico.org/podIP: 20.96.36.79/32
cni.projectcalico.org/podIPs: 20.96.36.79/32
Status: Running
IP: 20.96.36.79
IPs:
IP: 20.96.36.79
Containers:
nginx:
Container ID: containerd://33733b16a720fcaf74bb33ede74a2abd4d4680241a6631a64e0bb3f1fe738eb1
Image: nginx:1.14.2
Image ID: docker.io/library/nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 27 Dec 2024 03:16:54 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fl5xg (ro)
redis:
Container ID: containerd://55a68baa4603167efaf4a7e806a2026495004e3cda35d0bd6d1b81c2d69f9efe
Image: redis
Image ID: docker.io/library/redis@sha256:bb142a9c18ac18a16713c1491d779697b4e107c22a97266616099d288237ef47
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 27 Dec 2024 03:16:56 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fl5xg (ro)
memcached:
Container ID: containerd://a8a209124e561140675aaef4820b3ed73b80b88a9d5927c8049e3144b4aeafb2
Image: memcached
Image ID: docker.io/library/memcached@sha256:de65617c7bf16c4de35efcbd0340c268f48bab274dea26f003f97ee6542fc483
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 27 Dec 2024 03:17:02 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fl5xg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-fl5xg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned default/kucc8 to k8s-node1
Normal Pulled 1s kubelet Container image "nginx:1.14.2" already present on machine
Normal Created 1s kubelet Created container nginx
Normal Started 1s kubelet Started container nginx
Normal Pulling 1s kubelet Pulling image "redis"
Normal Pulled 0s kubelet Successfully pulled image "redis" in 1.765658861s (1.765664631s including waiting)
Normal Created 0s kubelet Created container redis
Normal Started 0s kubelet Started container redis
Normal Pulling 0s kubelet Pulling image "memcached"
Normal Pulled <invalid> kubelet Successfully pulled image "memcached" in 5.9188971s (5.918903813s including waiting)
Normal Created <invalid> kubelet Created container memcached
Normal Started <invalid> kubelet Started container memcached
[root@k8s-master ~]#
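Each container in a multi-container pod is addressed by name, e.g. `kubectl logs kucc8 -c redis` for per-container logs. The list lives under `.spec.containers`; extracting the names from a saved manifest locally (kucc8.json is a trimmed JSON copy of 12-test.yaml, made up for this sketch):

```shell
# Per-container access on a cluster (shown as comments, not run here):
#   kubectl logs kucc8 -c redis
#   kubectl exec kucc8 -c memcached -- memcached --version
# Locally, pull the container names out of a saved manifest:
cat > kucc8.json <<'EOF'
{"spec":{"containers":[{"name":"nginx"},{"name":"redis"},{"name":"memcached"}]}}
EOF
python3 -c "import json; print([c['name'] for c in json.load(open('kucc8.json'))['spec']['containers']])"
# prints: ['nginx', 'redis', 'memcached']
```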
04. Create a pod that is scheduled onto a specific node by using a node selector.
[precondition]
: label k8s-node1 with "disktype=ssd"
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane 44h v1.27.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1 Ready <none> 44h v1.27.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2 Ready <none> 44h v1.27.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl label nodes k8s-node1 disktype=ssd
node/k8s-node1 labeled
[root@k8s-master ~]#
[solve]
[root@k8s-master ~]# vi 36-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00101
  labels:
    disktype: ssd
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
[root@k8s-master ~]# kubectl apply -f 36-test.yaml
pod/nginx-kusc00101 created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 1 (34m ago) 36h 20.96.169.157 k8s-node2 <none> <none>
env-pod 1/1 Running 1 (34m ago) 43h 20.96.169.164 k8s-node2 <none> <none>
frontend 0/1 CrashLoopBackOff 133 (2m4s ago) 44h 20.96.169.156 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-4k8t8 1/1 Running 1 (34m ago) 39h 20.96.169.155 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-bjtj5 1/1 Running 1 (34m ago) 39h 20.96.169.168 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-d5kxz 1/1 Running 1 (34m ago) 34h 20.96.169.160 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-kz992 1/1 Running 1 (34m ago) 34h 20.96.169.162 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-rvflr 1/1 Running 1 (34m ago) 34h 20.96.169.159 k8s-node2 <none> <none>
nginx-deployment-cbdccf466-wf8gk 1/1 Running 1 (34m ago) 41h 20.96.169.163 k8s-node2 <none> <none>
nginx-kusc00101 1/1 Running 0 16s 20.96.36.85 k8s-node1 <none> <none>
nginx1 1/1 Running 1 (34m ago) 44h 20.96.169.169 k8s-node2 <none> <none>
nginx2 1/1 Running 1 (34m ago) 44h 20.96.169.166 k8s-node2 <none> <none>
[root@k8s-master ~]# kubectl get pods -o wide | grep nginx-kusc
nginx-kusc00101 1/1 Running 0 33s 20.96.36.85 k8s-node1 <none> <none>
[root@k8s-master ~]#
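nodeSelector is a plain exact-match filter on node labels: a pod is only eligible for nodes whose labels contain every key=value pair in `spec.nodeSelector`. A local sketch of that match, using sample data shaped like the `--show-labels` listing above (file name and contents are made up):

```shell
# A pod with nodeSelector {disktype: ssd} can only land on nodes carrying that
# exact label -- the same filter `kubectl get nodes -l disktype=ssd` applies.
cat > nodes.txt <<'EOF'
k8s-node1 disktype=ssd,kubernetes.io/hostname=k8s-node1
k8s-node2 kubernetes.io/hostname=k8s-node2
EOF
grep 'disktype=ssd' nodes.txt | awk '{print $1}'   # prints: k8s-node1
# On a cluster the equivalent check is: kubectl get nodes -l disktype=ssd
```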
05. Create a pod that runs a specific command.
[solve]
[root@k8s-master ~]# vi 38-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ['sh', '-c', 'echo sleep 3600']
    ports:
    - containerPort: 80
[root@k8s-master ~]# kubectl apply -f 38-test.yaml
pod/busybox created
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl describe pods busybox
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/192.168.56.31
Start Time: Sat, 28 Dec 2024 19:56:14 +0900
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 5fb65da15d061fc760f30f16a10a3a96aa0d29f8fc37f49fcd39b6b7a0f6e874
cni.projectcalico.org/podIP: 20.96.36.86/32
cni.projectcalico.org/podIPs: 20.96.36.86/32
Status: Running
IP: 20.96.36.86
IPs:
IP: 20.96.36.86
Containers:
busybox:
Container ID: containerd://0116bc479c3711f84199b65962fcfc8306c290b75af84d1ff269e3ee3a5dc7b1
Image: busybox
Image ID: docker.io/library/busybox@sha256:2919d0172f7524b2d8df9e50066a682669e6d170ac0f6a49676d54358fe970b5
Port: 80/TCP
Host Port: 0/TCP
Command:
sh
-c
echo sleep 3600
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 28 Dec 2024 19:56:22 +0900
Finished: Sat, 28 Dec 2024 19:56:22 +0900
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqftz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-bqftz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14s default-scheduler Successfully assigned default/busybox to k8s-node1
Normal Pulling 8s (x2 over 14s) kubelet Pulling image "busybox"
Normal Pulled 8s kubelet Successfully pulled image "busybox" in 5.867052006s (5.86705789s including waiting)
Normal Created 6s (x2 over 8s) kubelet Created container busybox
Normal Started 6s (x2 over 8s) kubelet Started container busybox
Normal Pulled 6s kubelet Successfully pulled image "busybox" in 1.593856363s (1.593863164s including waiting)
Warning BackOff 5s (x2 over 6s) kubelet Back-off restarting failed container busybox in pod busybox_default(f91328e8-b784-4f2e-922b-012c5abfbfd9)
[root@k8s-master ~]#
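The describe output ends in CrashLoopBackOff because the command merely echoes the string "sleep 3600" and exits with status 0; it never actually sleeps. This is reproducible locally:

```shell
# The container's command prints the text and exits immediately:
sh -c 'echo sleep 3600'          # prints: sleep 3600
# Because the process exits, kubelet restarts it until back-off kicks in
# (Reason: Completed, then CrashLoopBackOff, exactly as described above).
# To keep the pod alive for an hour, drop the echo -- a fix, not what the
# manifest above does:
#   command: ['sh', '-c', 'sleep 3600']
```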
06. Create a secret. Create a pod that mounts the secret, and additionally a pod that exports the password as an environment variable.
[solve]
[root@k8s-master ~]# vi 39-test.yaml
apiVersion: v1
kind: Secret
metadata:
  name: super-secret
type: kubernetes.io/basic-auth
stringData:
  username: super-secret
  password: bob
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: pod-secrets-via-file
    image: redis
    volumeMounts:
    # name must match the volume name below
    - name: super-secret
      mountPath: /secrets
      readOnly: true
  # The secret data is exposed to containers in the pod through a volume.
  volumes:
  - name: super-secret
    secret:
      secretName: super-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: pod-secrets-via-env
    image: redis
    env:
    - name: CONFIDENTIAL
      valueFrom:
        secretKeyRef:
          # must reference the secret defined above; an earlier draft pointed
          # at a non-existent secret (backend-user)
          name: super-secret
          key: password
[root@k8s-master ~]# kubectl apply -f 39-test.yaml
secret/super-secret configured
pod/pod-secrets-via-file created
pod/pod-secrets-via-env unchanged
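The `stringData` field lets the values be written in plain text; the API server stores them base64-encoded under `.data`. base64 is a reversible encoding, not encryption, which is easy to verify locally:

```shell
# stringData values end up base64-encoded in the stored Secret's .data field.
# Anyone with read access to the Secret can decode them:
printf 'bob' | base64            # prints: Ym9i
printf 'Ym9i' | base64 -d        # prints: bob
# On a cluster you could confirm with (shown as a comment, needs the cluster):
#   kubectl get secret super-secret -o jsonpath='{.data.password}' | base64 -d
```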
07. Create a pod that includes an init container.
[solve]
[root@k8s-master ~]# kubectl apply -f /opt/KUCC00108/hungry-bear
pod/hungry-bear created
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods hungry-bear
NAME READY STATUS RESTARTS AGE
hungry-bear 0/1 Init:CrashLoopBackOff 2 (23s ago) 37s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl describe pod hungry-bear
Name: hungry-bear
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/192.168.56.31
Start Time: Sat, 28 Dec 2024 20:34:35 +0900
Labels: app.kubernetes.io/name=MyApp
Annotations: cni.projectcalico.org/containerID: d53bb5d158e7d531c95bbd839b2351f2811b0ee92652068a831162c5c538711a
cni.projectcalico.org/podIP: 20.96.36.89/32
cni.projectcalico.org/podIPs: 20.96.36.89/32
Status: Pending
IP: 20.96.36.89
IPs:
IP: 20.96.36.89
Init Containers:
init-myservice:
Container ID: containerd://76c0563d39278dcd60772e85ae901490ba7682bc81e7a9ec0cb0886f5bb959b3
Image: busybox:1.28
Image ID: docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
Port: <none>
Host Port: <none>
Command:
touch
/workdir/calm.txt If /workdir/calm.txt
State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 28 Dec 2024 20:36:11 +0900
Finished: Sat, 28 Dec 2024 20:36:11 +0900
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 28 Dec 2024 20:35:19 +0900
Finished: Sat, 28 Dec 2024 20:35:19 +0900
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbf7h (ro)
Containers:
myapp-container:
Container ID:
Image: busybox:1.28
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo The app is running! && sleep 3600
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbf7h (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-tbf7h:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 101s default-scheduler Successfully assigned default/hungry-bear to k8s-node1
Normal Pulled 5s (x5 over 100s) kubelet Container image "busybox:1.28" already present on machine
Normal Created 5s (x5 over 100s) kubelet Created container init-myservice
Normal Started 5s (x5 over 100s) kubelet Started container init-myservice
Warning BackOff 4s (x9 over 99s) kubelet Back-off restarting failed container init-myservice in pod hungry-bear_default(a48c1c24-3046-4cd6-9cb2-68b052a82d88)
[root@k8s-master ~]#
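The init container exits with code 1 because its command passes every word (`/workdir/calm.txt`, `If`, `/workdir/calm.txt`) to `touch` as a file name, and `/workdir` does not exist inside the container — no volume is mounted there. The failure mode is reproducible locally:

```shell
# touch treats each argument as a file to create, and it cannot create a
# missing parent directory:
touch /no-such-dir-xyz/calm.txt 2>/dev/null || echo "touch failed: directory missing"
# A likely fix (an assumption about the intended task, not taken from the
# given /opt/KUCC00108/hungry-bear file) is an emptyDir volume mounted at
# /workdir in both the init container and the app container:
#   volumes:
#   - name: workdir
#     emptyDir: {}
#   ...
#     volumeMounts:
#     - name: workdir
#       mountPath: /workdir
```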
08. Create a pod with specific labels in a specific namespace. (The engineering namespace must exist first; it can be created with kubectl create namespace engineering.)
[solve]
[root@k8s-master ~]# vi 42-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  namespace: engineering
  labels:
    env: test
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
[root@k8s-master ~]# kubectl apply -f 42-test.yaml
pod/label-demo created
[root@k8s-master ~]# kubectl get pods -l env=test -n engineering
NAME READY STATUS RESTARTS AGE
label-demo 1/1 Running 0 56s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods -n engineering --show-labels
NAME READY STATUS RESTARTS AGE LABELS
label-demo 1/1 Running 0 7m52s app=nginx,env=test
[root@k8s-master ~]#