
Deploying OpenStack on Kubernetes with OpenStack-Helm

This post describes how to deploy OpenStack-Helm on CentOS with Ceph RBD as the storage backend.

Because the official manual targets Ubuntu and Swift, some parts have been customized, but the overall flow follows the official documentation (https://docs.openstack.org/openstack-helm/latest/).

What is OpenStack-Helm?

As its name suggests, OpenStack-Helm is a project for deploying OpenStack on Kubernetes using Helm charts. Because OpenStack is managed with Kubernetes features such as self-healing, lifecycle tasks such as deployment, upgrades, and scaling become much easier. The project is led mainly by AT&T, together with 99Cloud and SUSE, and SK Telecom is a major contributor in Korea.

https://www.stackalytics.com/?module=openstack-helm-group

Environment Setup

All nodes run the same operating system, CentOS 7, and each node has five NICs configured for their respective purposes, as shown below. Node 001 serves as the Kubernetes master, the Ceph monitor, and the deployment host.

 

kube-cy4-kube001 CentOS Linux release 7.8.2003

eth0 : 10.4.10.21 (API, Deploy )

eth1 : 10.4.20.21 (Ceph Storage Public  )

eth2 : 10.4.30.21 (Ceph Storage Replication )

eth3 : 10.4.40.21 (OpenStack Tenant Network)

eth4 : 192.168.193.21 (OpenStack External Network: Provider Network)

Kubernetes Master

Ceph Monitor

kube-cy4-kube002 CentOS Linux release 7.8.2003

eth0 : 10.4.10.22 (API, Deploy )

eth1 : 10.4.20.22 (Ceph Storage Public  )

eth2 : 10.4.30.22 (Ceph Storage Replication )

eth3 : 10.4.40.22 (OpenStack Tenant Network)

eth4 : 192.168.193.22 (OpenStack External Network: Provider Network)

Kubernetes Worker

Ceph OSD

kube-cy4-kube003 CentOS Linux release 7.8.2003

eth0 : 10.4.10.23 (API, Deploy )

eth1 : 10.4.20.23 (Ceph Storage Public )

eth2 : 10.4.30.23 (Ceph Storage Replication )

eth3 : 10.4.40.23 (OpenStack Tenant Network)

eth4 : 192.168.193.23 (OpenStack External Network: Provider Network)

Kubernetes Worker

Ceph OSD

kube-cy4-kube004 CentOS Linux release 7.8.2003

eth0 : 10.4.10.24 (API, Deploy )

eth1 : 10.4.20.24 (Ceph Storage Public  )

eth2 : 10.4.30.24 (Ceph Storage Replication )

eth3 : 10.4.40.24 (OpenStack Tenant Network)

eth4 : 192.168.193.24 (OpenStack External Network: Provider Network)

Kubernetes Worker

Ceph OSD

 

 

Deploy Ceph 

Ceph is used as RBD storage when volumes are used or instances are booted in OpenStack, and it also provides the PVs for MariaDB and RabbitMQ deployed on Kubernetes by openstack-helm-infra.

The deployment scripts are kept under "/home/deploy", so create that directory. Install git to clone the projects and pip to manage the Python packages needed during deployment.

[root@kube-cy4-kube001 ~]# mkdir  /home/deploy ; cd  /home/deploy
[root@kube-cy4-kube001 deploy]# yum install -y git epel*
[root@kube-cy4-kube001 deploy]# yum install -y python-pip

Clone ceph-ansible and check out v4.0.20, then install the Python packages required for deployment from requirements.txt.

[root@kube-cy4-kube001 deploy]# git clone https://github.com/ceph/ceph-ansible.git
[root@kube-cy4-kube001 deploy]# cd ceph-ansible/
[root@kube-cy4-kube001 ceph-ansible]# git checkout v4.0.20
[root@kube-cy4-kube001 ceph-ansible]# pip install -r requirements.txt


Because Ceph is deployed through Ansible, update "/etc/hosts" with the IPs of the deploy interface.

[root@kube-cy4-kube001 ceph-ansible]# tee /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.4.10.21 kube-cy4-kube001 kube1
10.4.10.22 kube-cy4-kube002 kube2
10.4.10.23 kube-cy4-kube003 kube3
10.4.10.24 kube-cy4-kube004 kube4
EOF

ceph-ansible deploys each role separately, so create a ceph-ansible inventory file divided into groups and verify that the hosts in it are reachable.

[root@kube-cy4-kube001 ceph-ansible]# tee ./ceph-hosts << EOF
[mons]
kube-cy4-kube001
[osds]
kube-cy4-kube002
kube-cy4-kube003
kube-cy4-kube004
[mdss]
[rgws]
[nfss]
[rbdmirrors]
[clients]
kube-cy4-kube001
[mgrs]
kube-cy4-kube001
[iscsigws]
[iscsi-gws]
[grafana-server]
[rgwloadbalancers]
[all:vars]
ansible_become=true
#### Change to the user configured on the nodes
ansible_user=centos
#### Change to that user's password
ansible_ssh_pass=password
EOF
 
[root@kube-cy4-kube001 ceph-ansible]# yum install -y sshpass
[root@kube-cy4-kube001 ceph-ansible]# ansible -i ceph-hosts -m ping  all
kube-cy4-kube003 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
kube-cy4-kube002 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
...

The group_vars/all.yml file sets the overall deployment variables that can be configured when deploying with ceph-ansible.

[root@kube-cy4-kube001 ceph-ansible]# tee  group_vars/all.yml << EOF
osd_scenario: lvm
osd_objectstore: bluestore
 
## Interface used to reach the monitors; RBD clients will use this interface as well
monitor_interface: eth1
## Subnet of the IP configured on monitor_interface
public_network: 10.4.20.0/24
## Subnet of the interface each OSD uses for replication
cluster_network: 10.4.30.0/24
 
ceph_stable_release: nautilus
ceph_origin: repository
ceph_repository: community
ceph_mirror: http://download.ceph.com
ceph_stable_key: https://download.ceph.com/keys/release.asc
 
ntp_service_enabled: true
osd_auto_discovery: false
dashboard_enabled: false
cluster: ceph
ceph_conf_overrides:
  global:
    mon_allow_pool_delete: false
    mon_osd_down_out_subtree_limit: host
    osd_pool_default_size: 2
    osd_pool_default_min_size: 1
  osd:
    osd_min_pg_log_entries: 10
    osd_max_pg_log_entries: 10
    osd_pg_log_dups_tracked: 10
    osd_pg_log_trim_min: 10
EOF

osds.yml specifies the devices to use as OSDs. Check the devices attached to the OSD nodes, then configure osds.yml as shown below.

[root@kube-cy4-kube001 ceph-ansible]# ansible -i ceph-hosts  -ba 'lsblk' osds
kube-cy4-kube004 | CHANGED | rc=0 >>
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  366K  0 rom
vda    253:0    0   68G  0 disk
└─vda1 253:1    0   68G  0 part /
vdb    253:16   0   40G  0 disk
vdc    253:32   0   40G  0 disk
kube-cy4-kube002 | CHANGED | rc=0 >>
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  366K  0 rom
vda    253:0    0   68G  0 disk
└─vda1 253:1    0   68G  0 part /
vdb    253:16   0   40G  0 disk
vdc    253:32   0   40G  0 disk
kube-cy4-kube003 | CHANGED | rc=0 >>
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  366K  0 rom
vda    253:0    0   68G  0 disk
└─vda1 253:1    0   68G  0 part /
vdb    253:16   0   40G  0 disk
vdc    253:32   0   40G  0 disk
 
[root@kube-cy4-kube001 ceph-ansible]# tee  group_vars/osds.yml << EOF
---
devices:
 - /dev/vdb
 - /dev/vdc
EOF

Set clients.yml so that the admin key is copied, which allows ceph client commands to be run on the client hosts.

[root@kube-cy4-kube001 ceph-ansible]#  tee  group_vars/clients.yml << EOF
---
copy_admin_key: true
EOF

Deploy Ceph with the ansible-playbook command using site.yml.sample, which applies the variables set in group_vars.

[root@kube-cy4-kube001 ceph-ansible]# ansible-playbook  -i ceph-hosts  site.yml.sample

An error like the one below may occur and stop the installation.
...

TASK [check for python] *******************************************************************************

Saturday 01 August 2020  15:46:14 +0900 (0:00:00.058)       0:00:00.058 *******

fatal: [kube-cy4-kube001]: FAILED! =>

  msg: The ips_in_ranges filter requires python's netaddr be installed on the ansible controller.

fatal: [kube-cy4-kube002]: FAILED! =>

  msg: The ips_in_ranges filter requires python's netaddr be installed on the ansible controller.

fatal: [kube-cy4-kube003]: FAILED! =>

  msg: The ips_in_ranges filter requires python's netaddr be installed on the ansible controller.

fatal: [kube-cy4-kube004]: FAILED! =>

  msg: The ips_in_ranges filter requires python's netaddr be installed on the ansible controller.

...

Install the python-netaddr package on the deploy host and run the playbook again; the deployment will then proceed normally.
[root@kube-cy4-kube001 ceph-ansible]# yum install python-netaddr.noarch -y

To delete the Ceph cluster, run the purge-cluster.yml playbook as shown below.
[root@kube-cy4-kube001 ceph-ansible]# ansible-playbook  -i ceph-hosts infrastructure-playbooks/purge-cluster.yml

When the deployment is complete, check for the "health: HEALTH_OK" state with the "ceph -s" command.

[root@kube-cy4-kube001 ceph-ansible]# ceph -s
  cluster:
    id:     f9b17cb6-b38c-455b-b10d-5c44d7bcc36b
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum kube-cy4-kube001 (age 3m)
    mgr: kube-cy4-kube001(active, since 2m)
    osd: 6 osds: 6 up (since 85s), 6 in (since 98s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 234 GiB / 240 GiB avail
    pgs:

Install Docker

Before deploying Kubernetes, install Docker, which will be used as the container runtime.

Register the docker-ce repository as shown below, then install docker-ce with yum. Perform this step on every node.

[root@kube-cy4-kube001 ~]# yum install -y yum-utils
 
[root@kube-cy4-kube001 ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
 
[root@kube-cy4-kube001 ~]# yum install docker-ce docker-ce-cli containerd.io
 
[root@kube-cy4-kube001 ~]# systemctl enable --now  docker
 
[root@kube-cy4-kube001 ~]# systemctl  status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-08-01 13:56:13 KST; 2h 34min ago
     Docs: https://docs.docker.com
 Main PID: 1368 (dockerd)
    Tasks: 15
   Memory: 147.2M
   CGroup: /system.slice/docker.service
           └─1368 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
...

Deploy Kubernetes 

kubeadm will be used to install Kubernetes. Register the Kubernetes upstream repository and install kubeadm and kubelet.

Perform this installation on every host.

[root@kube-cy4-kube001 ~]#  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  
[root@kube-cy4-kube001 ~]# yum install kubeadm  -y
[root@kube-cy4-kube001 ~]#  systemctl enable kubelet --now

"kubeadm init " 명령을 이용하여 master노드로 사용할 kube-cy4-kube001 호스트에서 초기화 과정을 진행한다.

Pass the IP of eth0, the interface that will serve the Kubernetes API, to the "--apiserver-advertise-address" option, and use 172.16.0.0/16 as the CIDR of the network used for pod-to-pod communication.

[root@kube-cy4-kube001 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kube-cy4-kube001 ~]# kubeadm init  --apiserver-advertise-address=10.4.10.21  --pod-network-cidr=172.16.0.0/16

If the initialization succeeded, the "initialized successfully" message shown below appears.

Note the join command printed at the end, which contains the token and hash the other nodes will use to join.

...
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.4.10.21:6443 --token 26amyi.687200qzjh5lkkxw \
    --discovery-token-ca-cert-hash sha256:e1a4959da94c40d0d21aaf8fb39878608c0002a4a6be6122bc8fa3d116b5db9f
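
If the token has expired by the time a worker joins, a new join command can be printed on the master node; this is a standard kubeadm command and was not part of the original run:

[root@kube-cy4-kube001 ~]# kubeadm token create --print-join-command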

As the message above instructs, copy the kubeconfig file used by the Kubernetes client on the master node kube-cy4-kube001 and try running the Kubernetes client (kubectl).

[root@kube-cy4-kube001 ~]#   mkdir -p $HOME/.kube
[root@kube-cy4-kube001 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kube-cy4-kube001 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kube-cy4-kube001 ~]# kubectl get nodes
NAME               STATUS     ROLES    AGE     VERSION
kube-cy4-kube001   NotReady   master   2m45s   v1.18.6

Join kube-cy4-kube002 to the master node.

[root@kube-cy4-kube002 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kube-cy4-kube002 ~]# kubeadm join 10.4.10.21:6443 --token 26amyi.687200qzjh5lkkxw \
>     --discovery-token-ca-cert-hash sha256:e1a4959da94c40d0d21aaf8fb39878608c0002a4a6be6122bc8fa3d116b5db9f

Join kube-cy4-kube003 to the master node.

[root@kube-cy4-kube003 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kube-cy4-kube003 ~]# kubeadm join 10.4.10.21:6443 --token 26amyi.687200qzjh5lkkxw \
>     --discovery-token-ca-cert-hash sha256:e1a4959da94c40d0d21aaf8fb39878608c0002a4a6be6122bc8fa3d116b5db9f

Join kube-cy4-kube004 to the master node.

[root@kube-cy4-kube004 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@kube-cy4-kube004 ~]# kubeadm join 10.4.10.21:6443 --token 26amyi.687200qzjh5lkkxw \
>     --discovery-token-ca-cert-hash sha256:e1a4959da94c40d0d21aaf8fb39878608c0002a4a6be6122bc8fa3d116b5db9f

Back on the master node, run "kubectl get nodes -o wide" to see the joined nodes.

However, the STATUS is still NotReady because no CNI (Container Network Interface) has been configured yet.

[root@kube-cy4-kube001 ~]# kubectl get nodes -o wide
NAME               STATUS     ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-cy4-kube001   NotReady   master   6m8s   v1.18.6   10.4.10.21    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube002   NotReady   <none>   40s    v1.18.6   10.4.20.22    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube003   NotReady   <none>   38s    v1.18.6   10.4.20.23    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube004   NotReady   <none>   36s    v1.18.6   10.4.20.24    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12

Deploy Calico

Download the Calico manifest, set the interface to use for the pod network, and change the IP-in-IP mode to Never so that traffic is exchanged over L2 without encapsulation.

[root@kube-cy4-kube001 ~]# mkdir /home/deploy/calico ; cd /home/deploy/calico
[root@kube-cy4-kube001 calico]# yum install -y wget
[root@kube-cy4-kube001 calico]# wget https://docs.projectcalico.org/manifests/calico.yaml
[root@kube-cy4-kube001 calico]# vi calico.yaml
...
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.15.1
          env:
            ### Specify the interface used for the pod network
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0"
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
....
            ### Change Always to Never to disable IP-in-IP mode
            - name: CALICO_IPV4POOL_IPIP
            #  value: "Always"
              value: "Never"
[root@kube-cy4-kube001 calico]# kubectl  create -f calico.yaml

Now the pods in the calico-system namespace are running, and "kubectl get nodes" shows the nodes in the Ready state.

[root@kube-cy4-kube001 calico]#  kubectl get pods -n calico-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP              NODE               NOMINATED NODE   READINESS GATES
calico-kube-controllers-5687f44fd5-nj49z   1/1     Running   0          2m8s   172.16.10.130   kube-cy4-kube004   <none>           <none>
calico-node-48lw6                          1/1     Running   0          2m8s   10.4.20.24      kube-cy4-kube004   <none>           <none>
calico-node-6kt28                          1/1     Running   0          2m8s   10.4.10.21      kube-cy4-kube001   <none>           <none>
calico-node-jpqf5                          1/1     Running   0          2m8s   10.4.20.23      kube-cy4-kube003   <none>           <none>
calico-node-lh7fp                          1/1     Running   0          2m8s   10.4.20.22      kube-cy4-kube002   <none>           <none>
calico-typha-7648bcdddb-4cblz              1/1     Running   0          60s    10.4.20.22      kube-cy4-kube002   <none>           <none>
calico-typha-7648bcdddb-4mjvp              1/1     Running   0          60s    10.4.20.24      kube-cy4-kube004   <none>           <none>
calico-typha-7648bcdddb-5qz8m              1/1     Running   0          2m8s   10.4.20.23      kube-cy4-kube003   <none>           <none>
calico-typha-7648bcdddb-kq5q6              1/1     Running   0          60s    10.4.10.21      kube-cy4-kube001   <none>           <none>
 
[root@kube-cy4-kube001 calico]# kubectl  get nodes -o wide
NAME               STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-cy4-kube001   Ready    master   24m   v1.18.6   10.4.10.21    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube002   Ready    <none>   19m   v1.18.6   10.4.20.22    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube003   Ready    <none>   19m   v1.18.6   10.4.20.23    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12
kube-cy4-kube004   Ready    <none>   18m   v1.18.6   10.4.20.24    <none>        CentOS Linux 7 (Core)   3.10.0-1127.13.1.el7.x86_64   docker://19.3.12

Download the calicoctl binary and check the list of IP pools the Calico CNI manages for pods and the BGP peer information.

[root@kube-cy4-kube001 calico]# curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.15.1/calicoctl
[root@kube-cy4-kube001 calico]# chmod +x calicoctl
[root@kube-cy4-kube001 calico]# sudo mv calicoctl /usr/local/bin/
[root@kube-cy4-kube001 calico]# tee /root/calico.rc << EOF
export KUBECONFIG=/root/.kube/config
export DATASTORE_TYPE=kubernetes
EOF
[root@kube-cy4-kube001 calico]# source  ~/calico.rc
[root@kube-cy4-kube001 calico]# calicoctl get ippools
NAME                  CIDR            SELECTOR
default-ipv4-ippool   172.16.0.0/16   all()
 
[root@kube-cy4-kube001 calico]# calicoctl get nodes -o wide
NAME               ASN       IPV4            IPV6
kube-cy4-kube001   (64512)   10.4.10.21/24
kube-cy4-kube002   (64512)   10.4.10.22/24
kube-cy4-kube003   (64512)   10.4.10.23/24
kube-cy4-kube004   (64512)   10.4.10.24/24

Actual pod traffic in the 172.16.0.0/16 range is routed directly to each node's address block over eth0, as shown below.

[root@kube-cy4-kube001 calico]# route -n | grep 172.16
172.16.10.128   10.4.10.24      255.255.255.192 UG    0      0        0 eth0
172.16.39.128   10.4.10.22      255.255.255.192 UG    0      0        0 eth0
172.16.43.192   10.4.10.23      255.255.255.192 UG    0      0        0 eth0
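
As an optional check that was not part of the original procedure, calicoctl can also report whether the BGP sessions between the nodes are established:

[root@kube-cy4-kube001 calico]# calicoctl node status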

Install Helm 

Because openstack-helm and openstack-helm-infra are deployed with Helm, install Helm (v2, which still uses Tiller).

[root@kube-cy4-kube001 ~ ]#  curl -LO https://git.io/get_helm.sh
[root@kube-cy4-kube001 ~ ]#  chmod 700 get_helm.sh
[root@kube-cy4-kube001 ~ ]#  ./get_helm.sh
[root@kube-cy4-kube001 ~ ]# kubectl create serviceaccount --namespace kube-system tiller
[root@kube-cy4-kube001 ~ ]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
[root@kube-cy4-kube001 ~ ]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
[root@kube-cy4-kube001 ~ ]# helm init --service-account tiller --upgrade
[root@kube-cy4-kube001 ~ ]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}

Register a local Helm repository.

[root@kube-cy4-kube001 ~ ]#  tee /etc/systemd/system/helm-serve.service <<EOF
[Unit]
Description=Helm Server
After=network.target
  
[Service]
User=root
Restart=always
ExecStart=/usr/local/bin/helm serve
  
[Install]
WantedBy=multi-user.target
EOF
 
[root@kube-cy4-kube001 ~ ]#  systemctl daemon-reload ; systemctl enable helm-serve --now
[root@kube-cy4-kube001 calico]# helm repo list
NAME    URL
local   http://localhost:8879/charts

Setting up CSI (Ceph RBD)

Configure CSI so that Kubernetes can create PVs from the previously deployed Ceph cluster through a storage class.

"ceph mon dump" 명령을 이용하여 cluster fsid를 확인한다.

[root@kube-cy4-kube001 ~]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid f9b17cb6-b38c-455b-b10d-5c44d7bcc36b
last_changed 2020-08-01 16:23:11.258185
created 2020-08-01 16:23:11.258185
min_mon_release 14 (nautilus)
0: [v2:10.4.20.21:3300/0,v1:10.4.20.21:6789/0] mon.kube-cy4-kube001
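
If only the fsid is needed, it can also be printed directly with the standard ceph command below (optional shortcut):

[root@kube-cy4-kube001 ~]# ceph fsid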

Create a pool named "kubernetes" to back the storage class, and create a "kubernetes" user for authenticating against that pool.

[root@kube-cy4-kube001 ~]# ceph osd pool create kubernetes 64 64
pool 'kubernetes' created
[root@kube-cy4-kube001 ~]# rbd pool init kubernetes
[root@kube-cy4-kube001 ~]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes'
[client.kubernetes]
    key = AQBMeiVf1CKrHBAAYeIVScZlRiDo6D58xvPM4Q==
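
The key can be retrieved again later with the standard ceph command below, which is handy when filling in the CSI secret (optional):

[root@kube-cy4-kube001 ~]# ceph auth get-key client.kubernetes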

Clone the ceph-csi project.

[root@kube-cy4-kube001 ~]# cd /home/deploy/
[root@kube-cy4-kube001 deploy]# git clone https://github.com/ceph/ceph-csi.git ; cd ceph-csi/

In csi-config-map.yaml, set the fsid checked above and the public-network IP of the node designated as the monitor during the initial Ceph setup, then apply the ConfigMap.

[root@kube-cy4-kube001 ceph-csi]# vi deploy/rbd/kubernetes/csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
##### fsid
        "clusterID": "f9b17cb6-b38c-455b-b10d-5c44d7bcc36b",
        "monitors": [
##### Monitor host IP
          "10.4.20.21:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
[root@kube-cy4-kube001 ceph-csi]# kubectl  create -f deploy/rbd/kubernetes/csi-config-map.yaml

Add the user ID and key used to authenticate against the pool created earlier, then create the secret.

[root@kube-cy4-kube001 ceph-csi]# vi examples/rbd/secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  # Key values correspond to a user name and its key, as defined in the
  # ceph cluster. User ID should have required access to the 'pool'
  # specified in the storage class
  #userID: <plaintext ID>
  userID: kubernetes
  #userKey: <Ceph auth key corresponding to ID above>
  userKey: AQBMeiVf1CKrHBAAYeIVScZlRiDo6D58xvPM4Q==
 
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
 
[root@kube-cy4-kube001 ceph-csi]# kubectl  create -f  examples/rbd/secret.yaml

Set pool to the name of the pool created earlier and clusterID to the Ceph fsid.

[root@kube-cy4-kube001 ceph-csi]# vi  examples/rbd/storageclass.yaml
...
 
   #clusterID: <cluster-id>
   clusterID: f9b17cb6-b38c-455b-b10d-5c44d7bcc36b
   # If you want to use erasure coded pool with RBD, you need to create
   # two pools. one erasure coded and one replicated.
   # You need to specify the replicated pool here in the `pool` parameter, it is
   # used for the metadata of the images.
   # The erasure coded pool must be set as the `dataPool` parameter below.
   # dataPool: ec-data-pool
   #pool: rbd
   pool: kubernetes
...
[root@kube-cy4-kube001 ceph-csi]# kubectl create -f examples/rbd/storageclass.yaml

Because ceph-csi manages encryption keys through Vault, apply the corresponding KMS configuration as well.

[root@kube-cy4-kube001 ceph-csi]# kubectl create -f examples/kms/vault/kms-config.yaml

Now deploy the plugin with the applied settings.

[root@kube-cy4-kube001 ceph-csi]# cd examples/rbd/
[root@kube-cy4-kube001 rbd]# ./plugin-deploy.sh

If the deployment completed successfully, the pods below are visible and a PVC can be created normally using the storage class.

[root@kube-cy4-kube001 rbd]# kubectl  get pod
NAME                                        READY   STATUS    RESTARTS   AGE
csi-rbdplugin-2m68m                         3/3     Running   0          19s
csi-rbdplugin-8xfpd                         3/3     Running   0          19s
csi-rbdplugin-provisioner-b77dfc64c-469b6   6/6     Running   0          20s
csi-rbdplugin-provisioner-b77dfc64c-lwgg9   6/6     Running   0          20s
csi-rbdplugin-provisioner-b77dfc64c-wnxkt   6/6     Running   0          20s
csi-rbdplugin-r9v28                         3/3     Running   0          19s
 
 
[root@kube-cy4-kube001 rbd]# kubectl  get sc
NAME         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-rbd-sc   rbd.csi.ceph.com   Delete          Immediate           true                   79s
 
[root@kube-cy4-kube001 rbd]# cd /home/deploy/ceph-csi/
[root@kube-cy4-kube001 ceph-csi]# kubectl create -f examples/rbd/pvc.yaml
[root@kube-cy4-kube001 ceph-csi]# kubectl  get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc   Bound    pvc-a33aeeec-e51a-463a-a708-f6ede4dbbc8a   1Gi        RWO            csi-rbd-sc     3s

The corresponding image can be seen with the rbd command on the Ceph cluster.

[root@kube-cy4-kube001 ceph-csi]# rbd ls -p kubernetes
csi-vol-91ae5b24-d477-11ea-8fdb-1a270cdb0b8f
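
The test PVC can be removed again by deleting the same manifest; because the storage class reclaim policy is Delete, the backing RBD image should disappear as well. This cleanup is optional and was not part of the original run:

[root@kube-cy4-kube001 ceph-csi]# kubectl delete -f examples/rbd/pvc.yaml
[root@kube-cy4-kube001 ceph-csi]# rbd ls -p kubernetes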

Deploy OpenStack-Helm / OpenStack-Helm-Infra

Before deploying openstack-helm and openstack-helm-infra, install the required packages on the node that performs the deployment.

[root@kube-cy4-kube001 ~]# yum install git jq python2-pip gcc python-devel -y

Clone the openstack-helm and openstack-helm-infra projects.

[root@kube-cy4-kube001 deploy]# git clone https://opendev.org/openstack/openstack-helm.git
[root@kube-cy4-kube001 deploy]# git clone https://opendev.org/openstack/openstack-helm-infra.git

Run the make all command in each project. When it finishes, the charts are uploaded to the local Helm repository created earlier.

[root@kube-cy4-kube001 deploy]# cd openstack-helm ; make all
[root@kube-cy4-kube001 openstack-helm]# cd ../openstack-helm-infra ; make all
 
[root@kube-cy4-kube001 openstack-helm-infra]# helm search
NAME                                    CHART VERSION   APP VERSION DESCRIPTION
local/aodh                              0.1.0                       Openstack-Helm Aodh
local/barbican                          0.1.0                       OpenStack-Helm Barbican
local/ca-issuer                         0.1.0           1.0         Certificate Issuer chart for OSH
local/calico                            0.1.0                       OpenStack-Helm Calico
local/ceilometer                        0.1.0                       OpenStack-Helm Ceilometer
local/ceph-client                       0.1.0                       OpenStack-Helm Ceph Client
...

Before deploying, set the OSH_INFRA_PATH environment variable to the path of the openstack-helm-infra project.

[root@kube-cy4-kube001 openstack-helm-infra]# cd /home/deploy/openstack-helm
[root@kube-cy4-kube001 openstack-helm-infra]# export OSH_INFRA_PATH="/home/deploy/openstack-helm-infra"

Label each node. Nodes 002-004 will act as the control plane.

[root@kube-cy4-kube001 openstack-helm]# kubectl  get nodes
NAME               STATUS   ROLES    AGE   VERSION
kube-cy4-kube001   Ready    master   15h   v1.18.6
kube-cy4-kube002   Ready    <none>   15h   v1.18.6
kube-cy4-kube003   Ready    <none>   15h   v1.18.6
kube-cy4-kube004   Ready    <none>   15h   v1.18.6
 
[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube002 openstack-control-plane=enabled
 
[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube003 openstack-control-plane=enabled
 
[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube004 openstack-control-plane=enabled
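
The labels can be verified with kubectl; the -L flag prints the label value as an extra column (optional check):

[root@kube-cy4-kube001 openstack-helm]# kubectl get nodes -L openstack-control-plane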

 

1. Ingress

Ingress is used for domain-based communication between OpenStack components. To deploy each Helm chart with values overriding the chart defaults, create a values file before deployment that contains only the changes.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/ingress-kube-system.yaml << EOF
pod:
  replicas:
    error_page: 2
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF

Deploy the ingress chart into the kube-system namespace.

[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install ingress-kube-system ${OSH_INFRA_PATH}/ingress --namespace=kube-system --values=/tmp/ingress-kube-system.yaml
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh kube-system

Deploy ingress into the openstack namespace.

[root@kube-cy4-kube001 openstack-helm]#  tee /tmp/ingress-openstack.yaml << EOF
pod:
  replicas:
    ingress: 2
    error_page: 2
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install ingress-openstack ${OSH_INFRA_PATH}/ingress --namespace=openstack --values=/tmp/ingress-openstack.yaml
 
[root@kube-cy4-kube001 openstack-helm]#  ./tools/deployment/common/wait-for-pods.sh openstack

2. MariaDB

Deploy the MariaDB chart. Set the storage class name to csi-rbd-sc, the storage class created earlier.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/mariadb.yaml << EOF
pod:
  replicas:
    server: 3
    ingress: 3
volume:
  class_name: csi-rbd-sc
  size: 5Gi
EOF
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install mariadb ${OSH_INFRA_PATH}/mariadb --namespace=openstack --values=/tmp/mariadb.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack
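
Since MariaDB is the first chart that uses the csi-rbd-sc storage class, it is worth confirming that its PVCs are in the Bound state (optional check, output omitted):

[root@kube-cy4-kube001 openstack-helm]# kubectl get pvc -n openstack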

3. RabbitMQ

Deploy RabbitMQ in the same way as MariaDB, changing the storage class name.

[root@kube-cy4-kube001 openstack-helm]# tee  /tmp/rabbitmq.yaml << EOF
volume:
  size: 10Gi
  class_name: csi-rbd-sc
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install rabbitmq ${OSH_INFRA_PATH}/rabbitmq --namespace=openstack --values=/tmp/rabbitmq.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

4. Memcached

For memcached, create a values file that specifies the pod selectors allowed to access it via a network policy, then deploy it.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/memcached.yaml <<EOF
manifests:
  network_policy: true
network_policy:
  memcached:
    ingress:
      - from:
        - podSelector:
            matchLabels:
              application: keystone
        - podSelector:
            matchLabels:
              application: heat
        - podSelector:
            matchLabels:
              application: glance
        - podSelector:
            matchLabels:
              application: cinder
        - podSelector:
            matchLabels:
              application: horizon
        - podSelector:
            matchLabels:
              application: nova
        - podSelector:
            matchLabels:
              application: neutron
        ports:
        - protocol: TCP
          port: 11211
EOF
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install memcached ${OSH_INFRA_PATH}/memcached --namespace=openstack --values=/tmp/memcached.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

5. Keystone

Deploy Keystone in the same way.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/keystone.yaml << EOF
pod:
  replicas:
    api: 2
EOF
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install keystone ./keystone --namespace=openstack  --values=/tmp/keystone.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

Write the OpenStack credentials to a file and use the openstack client container to check that Keystone works properly.

The client is not installed directly on the host; instead, it is used from a container as shown below. In the hosts file, point the Keystone ingress domain to the IP of a node used as a worker.

[root@kube-cy4-kube001 openstack-helm-infra]# mkdir -p /etc/openstack
[root@kube-cy4-kube001 openstack-helm-infra]# tee /etc/openstack/openrc.env << EOF
OS_AUTH_URL=http://keystone.openstack.svc.cluster.local:80/v3
OS_IDENTITY_API_VERSION=3
OS_IMAGE_API_VERSION=2
OS_PROJECT_DOMAIN_NAME=default
OS_USER_DOMAIN_NAME=default
OS_PROJECT_NAME=admin
OS_USERNAME=admin
OS_PASSWORD=password
EOF
 
[root@kube-cy4-kube001 openstack-helm]# echo "10.4.10.22 keystone.openstack.svc.cluster.local" >> /etc/hosts
 
[root@kube-cy4-kube001 openstack-helm-infra]# docker run -it --network host -v /images:/images --env-file /etc/openstack/openrc.env docker.io/sktdev/openstackclient:stein bash
 
 
openstackclient@kube-cy4-kube001:~$ openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                                     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| 0b33f6a61fdb4860b49ab2278e6ff50c | RegionOne | keystone     | identity     | True    | internal  | http://keystone-api.openstack.svc.cluster.local:5000/v3 |
| 24103bb6eacb403facc31812019e6fbf | RegionOne | keystone     | identity     | True    | public    | http://keystone.openstack.svc.cluster.local/v3          |
| 52edf255656c421f978bea28fd22f023 | RegionOne | keystone     | identity     | True    | admin     | http://keystone.openstack.svc.cluster.local/v3          |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
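
A quick way to confirm that authentication itself works is to request a token from inside the client container (optional check):

openstackclient@kube-cy4-kube001:~$ openstack token issue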

6. Glance

Glance automatically creates and uses an RBD pool in Ceph. For this, Glance needs the ceph.conf file containing the Ceph cluster connection information as a ConfigMap, so create one. Starting with Nautilus, the monitor address is written as a list so that v1 and v2 can be used selectively, but Glance cannot parse that format, so specify only the v1 address as shown below.

[root@kube-cy4-kube001 openstack-helm]# vi /etc/ceph/ceph.conf
...
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 10.4.30.0/24
fsid = f9b17cb6-b38c-455b-b10d-5c44d7bcc36b
#mon host = [v2:10.4.20.21:3300,v1:10.4.20.21:6789]
mon host = 10.4.20.21:6789
...
 
[root@kube-cy4-kube001 openstack-helm]# kubectl create configmap ceph-etc -n openstack --from-file=/etc/ceph/ceph.conf

Because the Ceph admin credentials are needed, add the admin keyring and then deploy Glance.

[root@kube-cy4-kube001 openstack-helm]# ceph auth get client.admin | grep key
exported keyring for client.admin
    key = AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
[root@kube-cy4-kube001 openstack-helm]# tee /tmp/glance.yaml  << EOF
storage: rbd
pod:
  replicas:
    api: 2
    registry: 2
conf:
  ceph:
    enabled: true
    admin_keyring: AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
  glance:
    DEFAULT:
      enable_v1_api: true
      enable_v2_registry: true
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install glance ./glance --namespace=openstack --values=/tmp/glance.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

Check that the pods were deployed successfully.

[root@kube-cy4-kube001 openstack-helm]# kubectl get pod -n openstack | grep glance
 
glance-api-ff94f9577-ph6fx                     1/1     Running     0          2m50s
glance-api-ff94f9577-scs69                     1/1     Running     0          2m50s
glance-bootstrap-csjd4                         0/1     Completed   0          2m49s
glance-db-init-8lfws                           0/1     Completed   0          2m50s
glance-db-sync-24t8f                           0/1     Completed   0          2m50s
glance-ks-endpoints-fjczv                      0/3     Completed   0          2m50s
glance-ks-service-d59gp                        0/1     Completed   0          2m50s
glance-ks-user-q2jv6                           0/1     Completed   0          2m50s
glance-metadefs-load-tgtwn                     0/1     Completed   0          2m50s
glance-rabbit-init-sq4k4                       0/1     Completed   0          2m50s
glance-storage-init-d68nf                      0/1     Completed   0          2m50s

After adding the Glance ingress domain to the hosts file and connecting with the openstack client, the cirros image uploaded during bootstrap can be seen.

[root@kube-cy4-kube001 openstack-helm]# echo "10.4.10.22 glance.openstack.svc.cluster.local" >> /etc/hosts
[root@kube-cy4-kube001 openstack-helm]#  docker run -it --network host -v /images:/images --env-file /etc/openstack/openrc.env docker.io/sktdev/openstackclient:stein bash
openstackclient@kube-cy4-kube001:~$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 8869f634-9f67-4990-9e9a-84c110d816f4 | Cirros 0.3.5 64-bit | active |
+--------------------------------------+---------------------+--------+
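
Additional images can be uploaded the same way from the /images directory mounted into the client container. The image name and file path below are only placeholders and must be replaced with an image that actually exists on the host:

openstackclient@kube-cy4-kube001:~$ openstack image create "Ubuntu 18.04" \
--disk-format qcow2 --container-format bare \
--file /images/bionic-server-cloudimg-amd64.img --public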

7. Cinder

Like Glance, Cinder uses Ceph RBD pools, so add the admin keyring information before deploying.

By default, pools for the cinder volume and cinder volume backup services are each created automatically in Ceph.

[root@kube-cy4-kube001 openstack-helm]# ceph auth get  client.admin | grep key
exported keyring for client.admin
    key = AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
 
[root@kube-cy4-kube001 openstack-helm]# tee  /tmp/cinder.yaml << EOF
pod:
  replicas:
    api: 2
    volume: 1
    scheduler: 1
    backup: 1
conf:
  ceph:
    admin_keyring: AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
    enabled: true
  cinder:
    DEFAULT:
      backup_driver: cinder.backup.drivers.ceph.CephBackupDriver
EOF
 
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install cinder ./cinder --namespace=openstack --values=/tmp/cinder.yaml
 
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

When it finishes, the Cinder pods can be seen.

[root@kube-cy4-kube001 openstack-helm]# kubectl  get pod -n openstack | grep cinder
cinder-api-64f59cbcb-5jjzq                     1/1     Running     0          112s
cinder-api-64f59cbcb-jsjjp                     1/1     Running     0          112s
cinder-backup-6c47fff559-2w2xm                 1/1     Running     0          112s
cinder-backup-storage-init-cjlb4               0/1     Completed   0          112s
cinder-bootstrap-h7bbj                         0/1     Completed   0          112s
cinder-create-internal-tenant-52s8p            0/1     Completed   0          112s
cinder-db-init-6gpws                           0/1     Completed   0          113s
cinder-db-sync-xt9kq                           0/1     Completed   0          113s
cinder-ks-endpoints-mqb9c                      0/9     Completed   0          112s
cinder-ks-service-d4bdf                        0/3     Completed   0          113s
cinder-ks-user-jx8wn                           0/1     Completed   0          113s
cinder-rabbit-init-x7659                       0/1     Completed   0          113s
cinder-scheduler-f8b98c7b4-p42jm               1/1     Running     0          113s
cinder-storage-init-6rz8c                      0/1     Completed   0          113s
cinder-volume-5d67df7bdd-sq2hx                 1/1     Running     0          112s

Add the Cinder ingress domain to the hosts file, then check the volume services with the openstack client and create a test volume.

[root@kube-cy4-kube001 openstack-helm]# echo "10.4.10.22 cinder.openstack.svc.cluster.local" >> /etc/hosts
[root@kube-cy4-kube001 openstack-helm]#  docker run -it --network host -v /images:/images --env-file /etc/openstack/openrc.env docker.io/sktdev/openstackclient:stein bash
 
openstackclient@kube-cy4-kube001:~$ openstack volume service list
+------------------+---------------------------+------+---------+-------+----------------------------+
| Binary           | Host                      | Zone | Status  | State | Updated At                 |
+------------------+---------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | cinder-volume-worker      | nova | enabled | up    | 2020-08-02T05:56:41.000000 |
| cinder-backup    | cinder-volume-worker      | nova | enabled | up    | 2020-08-02T05:56:40.000000 |
| cinder-volume    | cinder-volume-worker@rbd1 | nova | enabled | up    | 2020-08-02T05:56:41.000000 |
+------------------+---------------------------+------+---------+-------+----------------------------+
openstackclient@kube-cy4-kube001:~$ openstack volume create --size 1 test
 
openstackclient@kube-cy4-kube001:~$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| d47b5120-3d57-465f-aeb2-c655aceb565a | available | test | 1    | rbd1        | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
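
The pools that the Glance and Cinder charts created automatically, and the RBD image backing the test volume, can also be checked from the Ceph side. The volume pool name depends on the chart defaults, so check it with the first command and substitute it for the <volume-pool> placeholder:

[root@kube-cy4-kube001 openstack-helm]# ceph osd lspools
[root@kube-cy4-kube001 openstack-helm]# rbd ls -p <volume-pool>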

8. Openvswitch 

The openvswitch chart is a DaemonSet that is deployed only to nodes with the "openvswitch=enabled" label. Set the openvswitch label on the worker nodes.

[root@kube-cy4-kube001 openstack-helm]# kubectl label node kube-cy4-kube002 openvswitch=enabled
node/kube-cy4-kube002 labeled
[root@kube-cy4-kube001 openstack-helm]# kubectl label node kube-cy4-kube003 openvswitch=enabled
node/kube-cy4-kube003 labeled
[root@kube-cy4-kube001 openstack-helm]# kubectl label node kube-cy4-kube004 openvswitch=enabled
node/kube-cy4-kube004 labeled

Deploy the openvswitch chart.

[root@kube-cy4-kube001 openstack-helm]# helm upgrade --install openvswitch ${OSH_INFRA_PATH}/openvswitch --namespace=openstack

Check the deployed openvswitch pods.

[root@kube-cy4-kube001 openstack-helm]# kubectl  get pod -n openstack  | grep openv
openvswitch-db-8llk2                           1/1     Running     0          3m29s
openvswitch-db-gw9w5                           1/1     Running     0          3m33s
openvswitch-db-q86zr                           1/1     Running     0          3m37s
openvswitch-vswitchd-2chg8                     1/1     Running     0          3m37s
openvswitch-vswitchd-lvntw                     1/1     Running     0          3m29s
openvswitch-vswitchd-vdwmx                     1/1     Running     0          3m33s

9. Libvirt, Neutron, Nova

The Libvirt, Neutron, and Nova charts depend on each other, so the chart deployment only completes once all of them are running normally.

Label each node. Nodes 003 and 004 are used as compute nodes.

[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube002 openstack-helm-node-class=primary
node/kube-cy4-kube002 labeled
[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube003 openstack-compute-node=enabled
node/kube-cy4-kube003 labeled
[root@kube-cy4-kube001 openstack-helm]# kubectl label node  kube-cy4-kube004 openstack-compute-node=enabled
node/kube-cy4-kube004 labeled

Add the Ceph admin and cinder user credentials, then deploy the libvirt chart. Its pods remain in the Init state for now because the other components are not up yet.

[root@kube-cy4-kube001 openstack-helm]# ceph auth get client.admin | grep key
exported keyring for client.admin
    key = AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
 
[root@kube-cy4-kube001 openstack-helm]# ceph auth get client.cinder | grep key
exported keyring for client.cinder
    key = AQDHVCZfithVDBAALjJxP9UZob3Y0IC3KhGsrA==
[root@kube-cy4-kube001 openstack-helm]# tee /tmp/libvirt.yaml << EOF
network:
  backend:
    - openvswitch
conf:
  ceph:
    enabled: true
    admin_keyring: AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
    cinder:
      keyring: AQDHVCZfithVDBAALjJxP9UZob3Y0IC3KhGsrA==
      secret_uuid: 582393ff-9a5c-4a2e-ae0d-86ec18c36afc
 
 
EOF
 
[root@kube-cy4-kube001 openstack-helm]# helm upgrade --install libvirt ${OSH_INFRA_PATH}/libvirt --namespace=openstack --values=/tmp/libvirt.yaml
 
[root@kube-cy4-kube001 openstack-helm]# kubectl  get pod -n openstack | grep libvirt
libvirt-libvirt-default-4vxp5                  0/1     Init:0/3    0          27s
libvirt-libvirt-default-5spwb                  0/1     Init:0/3    0          27s

For Nova, add the Ceph admin and cinder credentials to the chart and set the virt type to qemu before deploying.

[root@kube-cy4-kube001 openstack-helm]# ceph auth get client.admin | grep key
exported keyring for client.admin
    key = AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
 
[root@kube-cy4-kube001 openstack-helm]# ceph auth get client.cinder | grep key
exported keyring for client.cinder
    key = AQDHVCZfithVDBAALjJxP9UZob3Y0IC3KhGsrA==
 
[root@kube-cy4-kube001 openstack-helm]# tee /tmp/nova.yaml << EOF
labels:
  api_metadata:
    node_selector_key: openstack-helm-node-class
    node_selector_value: primary
conf:
  ceph:
    enabled: true
    admin_keyring: AQBgGCVfjOayKBAAT4iPx2CSDEMU60aSQtgBXg==
    cinder:
      user: cinder
      keyring: AQDHVCZfithVDBAALjJxP9UZob3Y0IC3KhGsrA==
    nova:
      libvirt:
        images_type: rbd
        rbd_user: cinder
        rbd_secret_uuid: 582393ff-9a5c-4a2e-ae0d-86ec18c36afc
        virt_type: qemu
pod:
  replicas:
    api_metadata: 1
    placement: 2
    osapi: 2
    conductor: 2
    consoleauth: 2
    scheduler: 1
    novncproxy: 1
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install nova ./nova --namespace=openstack --values=/tmp/nova.yaml

As mentioned earlier, eth3 on the compute nodes is used as the tenant network, so specify it, and add eth4 to the auto_bridge_add variable so that an ovs bridge is created automatically. The br-ex bridge built from that interface is used as a flat network named provider for floating IPs.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/neutron.yaml << EOF
network:
  interface:
    tunnel: eth3
pod:
  replicas:
    server: 1
conf:
  auto_bridge_add:
    br-ex: eth4
  neutron:
    DEFAULT:
      l3_ha: False
      max_l3_agents_per_router: 1
      l3_ha_network_type: vxlan
      dhcp_agents_per_network: 1
  plugins:
    ml2_conf:
      ml2_type_flat:
        flat_networks: provider
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
        l2_population: True
        arp_responder: True
      ovs:
        bridge_mappings: provider:br-ex
 
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install neutron ./neutron --namespace=openstack --values=/tmp/neutron.yaml

Wait with the script until all pods and service jobs have completed.

[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

If everything finished normally, connect with the client and check the Nova and Neutron services.

[root@kube-cy4-kube001 openstack-helm]# echo "10.4.10.22 nova.openstack.svc.cluster.local" >> /etc/hosts
[root@kube-cy4-kube001 openstack-helm]# echo "10.4.10.22 neutron.openstack.svc.cluster.local" >> /etc/hosts
 
[root@kube-cy4-kube001 openstack-helm]#  docker run -it --network host -v /images:/images --env-file /etc/openstack/openrc.env docker.io/sktdev/openstackclient:stein bash
 
 
openstackclient@kube-cy4-kube001:~$ openstack compute service list
+----+------------------+-----------------------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                              | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------------------+----------+---------+-------+----------------------------+
| 34 | nova-consoleauth | nova-consoleauth-5468477744-qlr5d | internal | enabled | up    | 2020-08-02T07:10:37.000000 |
| 37 | nova-consoleauth | nova-consoleauth-5468477744-d27wr | internal | enabled | up    | 2020-08-02T07:10:37.000000 |
| 40 | nova-conductor   | nova-conductor-54f649d6bd-nznqv   | internal | enabled | up    | 2020-08-02T07:10:38.000000 |
| 43 | nova-scheduler   | nova-scheduler-c5f45fb88-whbr5    | internal | enabled | up    | 2020-08-02T07:10:29.000000 |
| 58 | nova-conductor   | nova-conductor-54f649d6bd-9w5hg   | internal | enabled | up    | 2020-08-02T07:10:29.000000 |
| 61 | nova-compute     | kube-cy4-kube004                  | nova     | enabled | up    | 2020-08-02T07:10:38.000000 |
| 64 | nova-compute     | kube-cy4-kube003                  | nova     | enabled | up    | 2020-08-02T07:10:37.000000 |
+----+------------------+-----------------------------------+----------+---------+-------+----------------------------+
openstackclient@kube-cy4-kube001:~$ openstack network agent list
+--------------------------------------+--------------------+------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host             | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------------+-------------------+-------+-------+---------------------------+
| 261a37c4-58fc-4512-aafc-81bba3519003 | Metadata agent     | kube-cy4-kube004 | None              | :-)   | UP    | neutron-metadata-agent    |
| 2f015c71-9243-4774-bb2a-5d0d070ef4f3 | Open vSwitch agent | kube-cy4-kube004 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 39f2dcf4-fbf3-46cd-b712-13d808b38dd6 | L3 agent           | kube-cy4-kube002 | nova              | :-)   | UP    | neutron-l3-agent          |
| 4a1266f9-0182-462b-9e8f-3424337483f7 | DHCP agent         | kube-cy4-kube002 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 4e1bac9f-577a-48d2-b0f7-f981cad85440 | DHCP agent         | kube-cy4-kube003 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 675ee208-2f49-4b58-9540-8de865fb3865 | Open vSwitch agent | kube-cy4-kube003 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 7d6056bf-9dbb-4e55-99b4-84a056042449 | Open vSwitch agent | kube-cy4-kube002 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 8ba71881-7367-4874-a41a-46f8d81cd0c2 | Metadata agent     | kube-cy4-kube003 | None              | :-)   | UP    | neutron-metadata-agent    |
| 97c7da9e-1a12-4cef-bbdf-e4c021b1345d | DHCP agent         | kube-cy4-kube004 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| d0a5085e-d3a4-408c-bab8-a458d32d047b | Metadata agent     | kube-cy4-kube002 | None              | :-)   | UP    | neutron-metadata-agent    |
| d856ab20-547e-481f-857f-50a0b7a87e87 | L3 agent           | kube-cy4-kube003 | nova              | :-)   | UP    | neutron-l3-agent          |
| decd265a-9ea0-41a4-9516-c7467f2d7cad | L3 agent           | kube-cy4-kube004 | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+------------------+-------------------+-------+-------+---------------------------+

From the openstack client, create the basic networks and launch an instance.

openstackclient@kube-cy4-kube001:~$ openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
    
openstackclient@kube-cy4-kube001:~$ openstack subnet create --network provider \
--allocation-pool start=192.168.193.210,end=192.168.193.240 \
--dns-nameserver 8.8.4.4 --gateway 192.168.0.1 \
--subnet-range 192.168.0.0/16 provider
      
openstackclient@kube-cy4-kube001:~$ openstack network create selfservice
  
openstackclient@kube-cy4-kube001:~$ openstack subnet create --network selfservice \
--dns-nameserver 8.8.4.4 --gateway 11.11.1.1 \
--subnet-range 11.11.1.0/24 selfservice
  
openstackclient@kube-cy4-kube001:~$ openstack router create  router
  
openstackclient@kube-cy4-kube001:~$ neutron router-interface-add router selfservice
  
openstackclient@kube-cy4-kube001:~$ neutron router-gateway-set router provider
 
openstackclient@kube-cy4-kube001:~$ openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 3e37ecae-fed8-432d-a7ca-0de991623717 | provider    | 360e99c6-5bdc-43e3-8275-3336a0d6ef80 |
| 9364f2bb-58ea-4ce5-a867-308a0115e3ba | selfservice | 69079fee-decb-41d6-9da2-f2cfca4cc9ca |
+--------------------------------------+-------------+--------------------------------------+
 
 
openstackclient@kube-cy4-kube001:~$ openstack flavor list
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| ID                                   | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
| 0a866d33-ad39-45c7-8461-e90b21d37524 | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 17b234fc-ff37-493e-a96a-02df7e4cf574 | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 401af6df-2c9a-4771-803d-f847b4c37d33 | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 7ffcb940-fd02-46e9-9d63-9556210b31d1 | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
| fe9146fa-a62c-41c6-a45c-02931fdedc5a | m1.small  |  2048 |   20 |         0 |     1 | True      |
+--------------------------------------+-----------+-------+------+-----------+-------+-----------+
openstackclient@kube-cy4-kube001:~$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 8c210974-5d4c-4a8a-ac62-669846fb7ded | default | Default security group | d24347196d1a42999290eadba5c51151 | []   |
| ad3441b9-eb4e-475a-a979-517ef556936c | default | Default security group |                                  | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+
openstackclient@kube-cy4-kube001:~$ openstack   security group rule create --ingress --dst-port 22 8c210974-5d4c-4a8a-ac62-669846fb7ded
 
 
openstackclient@kube-cy4-kube001:~$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 8869f634-9f67-4990-9e9a-84c110d816f4 | Cirros 0.3.5 64-bit | active |
+--------------------------------------+---------------------+--------+
 
 
openstackclient@kube-cy4-kube001:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/openstackclient/.ssh/id_rsa):
Created directory '/home/openstackclient/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/openstackclient/.ssh/id_rsa.
Your public key has been saved in /home/openstackclient/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rmKh6dug9CYW0bCIIcuWjSPLhrcD+woRDLYJoPmm8m0 openstackclient@kube-cy4-kube001
The key's randomart image is:
+---[RSA 2048]----+
|=.               |
|Ooo              |
|O*B              |
|=@ o             |
|*.=     S        |
|+B. .  .         |
|==o+ .  .        |
|==*=E  .         |
|++B*o..          |
+----[SHA256]-----+
 
openstackclient@kube-cy4-kube001:~$ openstack server create  --image 8869f634-9f67-4990-9e9a-84c110d816f4 --security-group 8c210974-5d4c-4a8a-ac62-669846fb7ded --flavor m1.tiny --key-name admin_client_key --network 9364f2bb-58ea-4ce5-a867-308a0115e3ba test-cirros-vm

Create a floating IP on the provider network, attach it to the instance created earlier, and connect with ssh.

openstackclient@kube-cy4-kube001:~$ openstack floating ip create  provider
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                                                             |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at          | 2020-08-02T07:19:31Z                                                                                                                                                              |
| description         |                                                                                                                                                                                   |
| dns_domain          | None                                                                                                                                                                              |
| dns_name            | None                                                                                                                                                                              |
| fixed_ip_address    | None                                                                                                                                                                              |
| floating_ip_address | 192.168.193.235                                                                                                                                                                   |
| floating_network_id | 3e37ecae-fed8-432d-a7ca-0de991623717                                                                                                                                              |
| id                  | 7cfd6a27-4bfb-46fa-b32b-2ce5c0c021e5                                                                                                                                              |
| location            | Munch({'cloud': '', 'region_name': '', 'zone': None, 'project': Munch({'id': 'd24347196d1a42999290eadba5c51151', 'name': 'admin', 'domain_id': None, 'domain_name': 'default'})}) |
| name                | 192.168.193.235                                                                                                                                                                   |
| port_details        | None                                                                                                                                                                              |
| port_id             | None                                                                                                                                                                              |
| project_id          | d24347196d1a42999290eadba5c51151                                                                                                                                                  |
| qos_policy_id       | None                                                                                                                                                                              |
| revision_number     | 0                                                                                                                                                                                 |
| router_id           | None                                                                                                                                                                              |
| status              | DOWN                                                                                                                                                                              |
| subnet_id           | None                                                                                                                                                                              |
| tags                | []                                                                                                                                                                                |
| updated_at          | 2020-08-02T07:19:31Z                                                                                                                                                              |
+---------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
openstackclient@kube-cy4-kube001:~$ openstack   server add floating ip test-cirros-vm 192.168.193.235
openstackclient@kube-cy4-kube001:~$ ssh cirros@192.168.193.235
The authenticity of host '192.168.193.235 (192.168.193.235)' can't be established.
RSA key fingerprint is SHA256:45KMfL6+lSzqdN2fLLkd9vvxnfvfUg+h0kZUFF411uY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.193.235' (RSA) to the list of known hosts.
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:8b:86:9f brd ff:ff:ff:ff:ff:ff
    inet 11.11.1.198/24 brd 11.11.1.255 scope global eth0
    inet6 fe80::f816:3eff:fe8b:869f/64 scope link
       valid_lft forever preferred_lft forever

 

10. Horizon

Before deploying Horizon, a NodePort is needed so that it can be reached from a browser. Set the port to 31000, enable only the required features in the local_settings values, and deploy.

[root@kube-cy4-kube001 openstack-helm]# tee /tmp/horizon.yaml << EOF
network:
  node_port:
    enabled: true
    port: 31000
conf:
  horizon:
    local_settings:
      config:
        openstack_neutron_network:
          enable_router: "True"
          enable_quotas: "True"
          enable_ipv6: "False"
          enable_ha_router: "True"
          enable_lb: "True"
          enable_firewall: "False"
          enable_vpn: "False"
          enable_fip_topology_check: "True"
EOF
 
[root@kube-cy4-kube001 openstack-helm]#  helm upgrade --install horizon ./horizon --namespace=openstack --values=/tmp/horizon.yaml
[root@kube-cy4-kube001 openstack-helm]# ./tools/deployment/common/wait-for-pods.sh openstack

When the deployment is complete, open http://{worker node ip}:31000 in a browser to verify access.

The instance created earlier can be seen there.
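
The NodePort can also be confirmed from the service list before opening the browser (optional check):

[root@kube-cy4-kube001 openstack-helm]# kubectl get svc -n openstack | grep horizon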

 


