Kubernetes on Rocky Linux: improved installation procedure

The procedure was refined after installing on Rocky Linux several times.

This guide starts from the point where the kubeadm CLI is already working.

Run kubeadm init

Running kubeadm init with the options below makes the network configuration explicit and ensures that Pods and the API server can communicate properly.

Option (1) --pod-network-cidr 10.100.0.0/16

Defines the CIDR (Classless Inter-Domain Routing) range for the Pod network. This sets the IP range from which every Pod in the cluster is assigned an address; 10.100.0.0/16 covers 10.100.0.0 through 10.100.255.255.

Option (2) --apiserver-advertise-address 192.168.2.172

Sets the IP address the API server advertises on. This is the address used when reaching the control-plane node's API server from outside.

Option (3) --service-dns-domain 192.168.2.172

Sets the DNS domain assigned to Services inside the cluster. The default is cluster.local, and this option overrides it. Note that the value must be a DNS domain name (for example cluster.local), so passing the node IP as shown here is almost certainly unintended; if in doubt, omit this flag and keep the default.

Example command

kubeadm init --pod-network-cidr 10.100.0.0/16 --apiserver-advertise-address 192.168.2.172 --service-dns-domain 192.168.2.172

After init, give root on the master (control-plane) node access to the cluster

export KUBECONFIG=/etc/kubernetes/admin.conf

Make the admin.conf setting permanent

nano ~/.bashrc        # add the export line below to ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bashrc
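
For a regular (non-root) account, the usual kubeadm post-init step of copying admin.conf into the user's home directory works as well; a minimal sketch of that alternative:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config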

Run kubeadm join

Copy the join command printed at the end of kubeadm init and run it on each worker node. This usually completes without issues; the /etc/containerd error that occasionally appears is handled by the steps below.
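
For reference, the join command printed by init has the shape below; the token and hash placeholders are illustrative and must come from the actual init output. If the token has expired, a fresh join command can be regenerated on the master.

kubeadm join 192.168.2.172:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

kubeadm token create --print-join-command   # run on the master to regenerate the join command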

Steps to resolve the "failed to load plugin io.containerd.grpc.v1.cri" error during join

cd /etc/containerd
sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.orig   # back up the existing config first
sudo rm -f /etc/containerd/config.toml
sudo sh -c 'containerd config default > /etc/containerd/config.toml'   # regenerate the default config

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl status containerd
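
To confirm the fix took effect before retrying the join, checking the regenerated config is enough; if a previous join attempt left partial state on the worker, kubeadm reset clears it (assumption: nothing else on the node needs to be kept):

grep SystemdCgroup /etc/containerd/config.toml   # should print SystemdCgroup = true
sudo kubeadm reset -f                            # only if an earlier join attempt partially completed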

Run Calico from the kubeadm master

To download a local copy of the manifest:

curl -O https://docs.projectcalico.org/manifests/calico.yaml

Command

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
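
A quick way to confirm Calico came up, assuming the manifest's default labels, is to watch the calico-node pods and the node status:

kubectl get pods -n kube-system -l k8s-app=calico-node   # should reach Running on every node
kubectl get nodes                                        # nodes switch from NotReady to Ready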

Pods before and after applying Calico

Before

Pasted image 20241022133333.png

After: Pasted image 20241022135139.png

Install the Kubernetes Dashboard

Download the Dashboard and the Metrics Server

Deploy and Access the Kubernetes Dashboard | Kubernetes

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
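
The dashboard itself requires a bearer token to log in. One common approach, sketched here under the assumption of Kubernetes 1.24+ (where kubectl create token exists) and the default kubernetes-dashboard namespace from recommended.yaml, is a dedicated admin service account:

kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin   # paste this token into the login screen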

Install MetalLB and configure IP address pools

sudo curl -fsSLo metallb.yaml https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml

Write ip-addr-pool.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.180.140-192.168.180.160
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: second-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.180.161-192.168.180.180
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  - second-pool

Apply

kubectl apply -f metallb.yaml
kubectl apply -f ip-addr-pool.yaml
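
To check that MetalLB hands out addresses from the pools above, a throwaway LoadBalancer service works; lb-test is just a hypothetical name and any image will do:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test                   # EXTERNAL-IP should be assigned from first-pool or second-pool
kubectl delete svc,deployment lb-test     # clean up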

Before

Pasted image 20241022142135.png

After

Pasted image 20241022144304.png

Install Fluentd

---
#
# The exact authoring details are not fully understood; only the domain below was changed before deploying
# replace opensearch-board-dev.x2bee.com
# opensearch-board-dev.x2bee.com
# data holds username and password, base64-encoded
# reference URL https://www.base64decode.org/
#
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    k8s-app: fluentd-logging
data:
  fluent.conf: |
    # Prevent fluentd from handling records containing its own logs. Otherwise
    # it can lead to an infinite loop, when error in sending one message generates
    # another message which also fails to be sent and so on.
    <match fluent.**>
      @type null
    </match>
    <source>
      @type tcp
      port 24220
      format json
      tag applog
    </source>
    <match applog>
      @type rewrite_tag_filter
      <rule>
        key project
        pattern ^(.+)$
        tag $1.${tag}
      </rule>
    </match>

    <match **applog**>
      @type copy
      <store>
        @type opensearch
        hosts https://opensearch-dev.x2bee.com
        scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
        ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
        ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
        user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
        password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"        
        reload_connections false
        reconnect_on_error true
        reload_on_failure true
        log_es_400_reason true
        logstash_format true
        logstash_prefix ${tag}
        logstash_dateformat %Y%m%d
        include_tag_key true
        tag_key @log_name
        request_timeout 30000s
        slow_flush_log_threshold 300.0
        flush_mode interval
        <buffer>
          flush_thread_count "8"
          flush_interval "10s"
          chunk_limit_size "5M"
          queue_limit_length "512"
          retry_max_interval "30"
          retry_forever true
        </buffer>
      </store>
    </match>
  config-copy.sh: |
    #!/bin/sh
    cp -a /config-volume/..data/fluent.conf /fluentd/etc/fluent.conf
    tini -- /fluentd/entrypoint.sh
    # cp -a /config-volume/..data/fluent.conf /opt/bitnami/fluentd/conf
    # /opt/bitnami/ruby/bin/ruby /opt/bitnami/fluentd/bin/fluentd --config /opt/bitnami/fluentd/conf/fluentd.conf --plugin /opt/bitnami/fluentd/plugins

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluentd
  namespace: logging
  labels:
    k8s-app: fluentd-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      ##affinity:
      ##  nodeAffinity:
      ##    requiredDuringSchedulingIgnoredDuringExecution:
      ##      nodeSelectorTerms:
      ##        - matchExpressions:
      ##            - key: node-group
      ##              operator: In
      ##              values:
      ##                - mgmt
      containers:
        - name: fluentd
          command: ["sh", "-c", "/config-volume/..data/config-copy.sh"]
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-opensearch-1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "opensearch-dev.x2bee.com"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "443"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            - name: FLUENTD_SYSTEMD_CONF
              value: "disable"
            - name: FLUENT_UID
              value: "0"
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "false"
            # Option to configure elasticsearch plugin with tls
            # ================================================================
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
            # X-Pack Authentication
            # =====================
            - name: FLUENT_ELASTICSEARCH_USER
              valueFrom:
                secretKeyRef:
                  name: opensearch-credentials
                  key: username
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: opensearch-credentials
                  key: password
          resources:
            limits:
              memory: 400Mi
            requests:
              cpu: 100m
              memory: 200Mi
          ports:
            - name: fluentd-source
              containerPort: 24220
              protocol: TCP
          volumeMounts:
            - name: config-volume
              mountPath: /config-volume
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-config
            defaultMode: 0777
---
apiVersion: v1
kind: Service
metadata:
  name: fluentd-svc
  namespace: logging
  labels:
    k8s-app: fluentd-logging
spec:
  type: ClusterIP
  selector:
    k8s-app: fluentd-logging
  ports:
    - name: fluentd-source
      port: 24220
      targetPort: fluentd-source
      protocol: TCP
---
apiVersion: v1
data:
  username: YWRtaW4= # admin
  password: WDJjb21tZXJjZSEx # WDJjb21tZXJjZSEx
kind: Secret
metadata:
  name: opensearch-credentials
  namespace: logging
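
The manifests above assume the logging namespace already exists, so create it before applying; the file name fluentd.yaml and the test pod below are assumptions for illustration. The TCP source on port 24220 expects one JSON record per line with a project field, which the rewrite_tag_filter rule turns into the index prefix.

kubectl create namespace logging
kubectl apply -f fluentd.yaml            # the ConfigMap/Deployment/Service/Secret above saved as fluentd.yaml
kubectl -n logging get pods,svc

# optional smoke test from inside the cluster
kubectl run log-test --rm -it --restart=Never --image=busybox -- \
  sh -c 'echo "{\"project\":\"demo\",\"message\":\"hello\"}" | nc fluentd-svc.logging.svc.cluster.local 24220'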

Install Istio

curl -L https://istio.io/downloadIstio | sh -

export PATH="$PATH:/home/tech/data/k8s/istio-1.21.2/bin"
istioctl x precheck
kubectl create namespace istio-system
istioctl install --set profile=default -y
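
Before moving on, it is worth confirming the control plane and ingress gateway pods are running:

kubectl get pods -n istio-system   # istiod and istio-ingressgateway should be Running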

Label the namespace for Istio to enable automatic sidecar proxy injection

kubectl label namespace default istio-injection=enabled

kubectl get svc istio-ingressgateway -n istio-system

istioctl dashboard kiali

Make the istioctl PATH setting permanent

echo 'export PATH="$PATH:/home/tech/data/k8s/istio-1.23.2/bin"' >> ~/.bashrc
source ~/.bashrc

Pasted image 20241023152634.png

Kiali is Istio's dashboard tool

export ISTIO_HOME=/home/tech/data/istio-1.23.2
cd ${ISTIO_HOME}/samples/addons
curl -O https://raw.githubusercontent.com/istio/istio/release-1.23/samples/addons/kiali.yaml
kubectl apply -f ${ISTIO_HOME}/samples/addons/kiali.yaml

kubectl port-forward svc/kiali 20001:20001 -n istio-system

Install Argo CD

Commands

kubectl create namespace argocd
curl -O https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
sudo mv install.yaml argocd.yaml
kubectl apply -f argocd.yaml

After applying: Pasted image 20241022145040.png

The resources were created in the default namespace, so rerun the apply into the argocd namespace

Corrected command

kubectl apply -f argocd.yaml -n argocd
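
Once the pods in the argocd namespace are up, the initial admin password comes from the argocd-initial-admin-secret, and port-forwarding the API server is the quickest way to reach the UI:

kubectl -n argocd get pods
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
kubectl -n argocd port-forward svc/argocd-server 8080:443   # UI at https://localhost:8080, user admin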

Pasted image 20241022145249.png