
Podman-based OpenStack Deployment [1] (Kolla-ansible Bobcat / Cephadm Ceph)

There are several ways to deploy OpenStack: running it on Kubernetes with the Airship project, TripleO (effectively discontinued), Red Hat's Operator-based deployment, and so on.

Deployment with Kolla-ansible is also widely used; being a long-established method, it is one of the options with the most attention paid to stability and extensibility. Moreover, now that deploying with Podman instead of the traditional Docker-based setup has become possible, Podman's advantages can be leveraged as well.

In this test, we walk through deploying with Podman, as changed from the existing Kolla-ansible Docker flow, and we also deploy Ceph with Podman alongside the whole cluster so that everything runs without Docker.

Since Podman support was only recently added to Kolla, a lot may change after this post is written, so treat it as a reference only.

Once the infrastructure deployment is complete, the final goal is to set up SSO by connecting Keystone via OpenID Connect to the KeyCloak instance built on the deploy server.

 

The topology is shown below. ens3 is used as the deployment/API network, and the network labeled as the external internet segment is not actually internet-facing; it is wired up for testing so that it behaves as if it were the external internet.

After an instance is assigned a Floating IP, the actual tests are run from the deploy server through its ens5 interface; in a real environment this network would simply be connected to a public network.

For that reason, the Floating IP range in Neutron is set to match that network's range, 10.113.1.0/24.

For Ceph, the network on ens4 is used as the Ceph public network on all nodes; the cluster is bootstrapped from the deploy server with cephadm and then rolled out to each node, and ens5 is used as the replication (cluster) network.

On the OpenStack nodes, ens5 is used as the tenant (tunnel) network.
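To recap the wiring (subnets as they appear later in this post):

ens3 : deployment / OpenStack API network (172.21.1.0/24)
ens4 : Ceph public network (10.111.1.0/24)
ens5 : Ceph cluster/replication network (10.112.1.0/24); tenant tunnel network on the OpenStack nodes
ens6 : Neutron external (provider) network (10.113.1.0/24)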

 


Pre-deployment Preparation

The target hosts are already reachable from the deploy server: SSH keys were registered in advance, so after installing Ansible they can be managed over SSH.

All hosts run Ubuntu 22.04, and the firewall allows all inbound/outbound traffic.
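The key distribution itself happened beforehand; a typical way to do it (hypothetical commands, not part of the recorded steps) looks like this:

root@cyyoon-c1-deploy-010:~# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
root@cyyoon-c1-deploy-010:~# for h in cyyoon-c1-ceph-01{1..3} cyyoon-c1-openstack-05{1..5}; do ssh-copy-id root@$h; done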

The external domain dev24deploy.cyuucloud.xyz does not point at a public IP, so an entry is added to the deploy server's hosts file to resolve it to a private address.

root@cyyoon-c1-deploy-010:~# cat /etc/hosts
127.0.0.1 localhost
172.21.1.12 cyyoon-c1-ceph-012
172.21.1.11 cyyoon-c1-ceph-011
172.21.1.13 cyyoon-c1-ceph-013
172.21.1.51 cyyoon-c1-openstack-051 # controller
172.21.1.52 cyyoon-c1-openstack-052 # controller 
172.21.1.53 cyyoon-c1-openstack-053 # controller
172.21.1.54 cyyoon-c1-openstack-054 # compute
172.21.1.55 cyyoon-c1-openstack-055 # compute
172.21.1.10 cyyoon-c1-deploy-010  dev24deploy.cyuucloud.xyz ### <----- resolves dev24deploy.cyuucloud.xyz to a private address on the deploy server: used for the registry
172.21.1.99 dev24vip.cyuucloud.xyz  ### <--- OpenStack external VIP endpoint FQDN: used for the OpenStack APIs and Horizon
 
root@cyyoon-c1-deploy-010:~# apt install ansible -y
root@cyyoon-c1-deploy-010:~# cat /etc/ansible/hosts
[all]
cyyoon-c1-ceph-01[1:3]
cyyoon-c1-openstack-05[1:5]
cyyoon-c1-deploy-010  
 
root@cyyoon-c1-deploy-010:~# ansible -m ping all
cyyoon-c1-ceph-011 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
cyyoon-c1-ceph-013 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
//...
 
root@cyyoon-c1-deploy-010:~# ansible -m shell -ba 'lsb_release -a|grep -i desc' -i /etc/ansible/hosts  all
cyyoon-c1-ceph-012 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
cyyoon-c1-ceph-011 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
cyyoon-c1-ceph-013 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
cyyoon-c1-openstack-052 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
cyyoon-c1-openstack-051 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
cyyoon-c1-openstack-055 | CHANGED | rc=0 >>
Description:    Ubuntu 22.04.3 LTSNo LSB modules are available.
//...

For deployment convenience, a registry is set up. To get Podman version 4 or later, the Kubic repository is registered and Podman is installed from it. Then registries.conf is edited, both to use the registry without registering its certificate and to change the short-name mode.

root@cyyoon-c1-deploy-010:~#  mkdir -p /etc/apt/keyrings
root@cyyoon-c1-deploy-010:~#  curl -fsSL https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/Release.key \
  | gpg --dearmor \
  | sudo tee /etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg > /dev/null
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg]\
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list > /dev/null
root@cyyoon-c1-deploy-010:~#  apt update -qq && apt -qq -y install podman
root@cyyoon-c1-deploy-010:~# podman  version
Client:       Podman Engine
Version:      4.6.2
API Version:  4.6.2
Go Version:   go1.18.1
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64
root@cyyoon-c1-deploy-010:~# vi  /etc/containers/registries.conf
//...
unqualified-search-registries = ["cyyoon-c1-deploy-010", "docker.io", "quay.io"]
//...
short-name-mode="permissive"
insecure = true
//...
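For reference, the elided parts likely combine into something like the following (a sketch; the exact location value in the omitted section may differ):

unqualified-search-registries = ["cyyoon-c1-deploy-010", "docker.io", "quay.io"]
short-name-mode = "permissive"

[[registry]]
location = "dev24deploy.cyuucloud.xyz:5000"
insecure = true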

Now run the registry with Podman and test logging in. The domain used here is a personally purchased one, and testing is done with a short-lived free certificate from a provider such as Let's Encrypt or ZeroSSL.

Of course, kolla-ansible itself can generate a private certificate for testing, and one can also be created with an openssl command, so just obtain a certificate in whichever way is convenient.
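For example, a throwaway self-signed certificate could be generated with openssl like this (a sketch; the file names match the paths mounted into the registry container below):

root@cyyoon-c1-deploy-010:~# mkdir -p /root/ssl/dev24deploy.cyuucloud.xyz
root@cyyoon-c1-deploy-010:~# openssl req -x509 -newkey rsa:4096 -nodes -days 90 \
  -subj "/CN=dev24deploy.cyuucloud.xyz" \
  -addext "subjectAltName=DNS:dev24deploy.cyuucloud.xyz" \
  -keyout /root/ssl/dev24deploy.cyuucloud.xyz/private.key \
  -out /root/ssl/dev24deploy.cyuucloud.xyz/certificate.crt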

root@cyyoon-c1-deploy-010:~# mkdir -p /data/registry/data
root@cyyoon-c1-deploy-010:~# mkdir -p /data/registry/config/auth
root@cyyoon-c1-deploy-010:~# podman  run --rm -ti docker.io/xmartlabs/htpasswd:latest cyyoon cyyoon-password >  /data/registry/config/auth/htpasswd
root@cyyoon-c1-deploy-010:~# cat  /data/registry/config/auth/htpasswd
cyyoon:$2y$05$.o7JloR6j.VkYiwfNT617uLF/jzU8ewq6M4gR8fgnJw4ZdclE/hja
root@cyyoon-c1-deploy-010:~# podman  run --name local-docker-registry -d \
--restart=always -p 5000:5000  \
-v  /data/registry/config/auth:/auth  \
-v  /root/ssl/dev24deploy.cyuucloud.xyz:/certs  \
-v /data/registry/data:/var/lib/registry/Docker/registry/v2 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt   \
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key  \
-e REGISTRY_AUTH=htpasswd  \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm"  \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd  \
registry:2.8.2
42030e94f3356d8cc6041394f681478421ba41c39c67bf3545741345bdabad2b
 root@cyyoon-c1-deploy-010:~# podman  ps
CONTAINER ID  IMAGE                             COMMAND               CREATED         STATUS         PORTS                   NAMES
f0ef8476dadd  docker.io/library/registry:2.8.2  /etc/docker/regis...  20 seconds ago  Up 20 seconds  0.0.0.0:5000->5000/tcp  local-docker-registry 
 
 
## To test TLS with curl, first update the CA certificates on this server, then test with curl.
root@cyyoon-c1-deploy-010:~# apt-get upgrade ca-certificates && update-ca-certificates
root@cyyoon-c1-deploy-010:~# curl -u "cyyoon:cyyoon-password" https://dev24deploy.cyuucloud.xyz:5000/v2/_catalog
{"repositories":[]}
 
 root@cyyoon-c1-deploy-010:~# mkdir -p   ~/.config/containers/
 root@cyyoon-c1-deploy-010:~# podman login    --authfile ~/.config/containers/auth.json dev24deploy.cyuucloud.xyz:5000
Username: cyyoon
Password:
Login Succeeded!
root@cyyoon-c1-deploy-010:~/ssl/dev24deploy.cyuucloud.xyz
# cat  ~/.config/containers/auth.json
{
        "auths": {
                "dev24deploy.cyuucloud.xyz:5000": {
                        "auth": "Y3l5b29uOmN5eW9vbi1wYXNzd29yZA=="
                }
        }
}
Using Skopeo, test copying a container image into the newly created registry.

root@cyyoon-c1-deploy-010:~# podman run --rm --security-opt seccomp=unconfined --net host quay.io/skopeo/stable copy --dest-tls-verify=false \
 --dest-creds cyyoon:cyyoon-password \
  docker://quay.io/openstack.kolla/prometheus-libvirt-exporter:2023.2-ubuntu-jammy \
  docker://dev24deploy.cyuucloud.xyz:5000/openstack.kolla/prometheus-libvirt-exporter:2023.2-ubuntu-jammy
 //...
 Getting image source signatures
Copying blob sha256:df2fac849a4581b035132d99e203fd83dc65590ea565435a266cb0e14a508838
Copying blob sha256:4aa22da760be3229029adbd1459d59b17a69d549bef2650317706b428692378a
//...
root@cyyoon-c1-deploy-010:~#  curl -u "cyyoon:cyyoon-password" https://dev24deploy.cyuucloud.xyz:5000/v2/_catalog
{"repositories":["openstack.kolla/prometheus-libvirt-exporter"]}

Since dependency conflicts between various Python packages can occur during deployment, pip is installed and a virtualenv is set up.

root@cyyoon-c1-deploy-010:~/test# apt install python3-pip -y
root@cyyoon-c1-deploy-010:~/test# pip3 install virtualenv
root@cyyoon-c1-deploy-010:~/test# virtualenv  /home/cy-deploy-env/
created virtual environment CPython3.10.12.final.0-64 in 416ms
  creator CPython3Posix(dest=/home/cy-deploy-env, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
    added seed packages: pip==23.3.1, setuptools==69.0.2, wheel==0.42.0
  activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
root@cyyoon-c1-deploy-010:~/test# source  /home/cy-deploy-env/bin/activate
(cy-deploy-env) root@cyyoon-c1-deploy-010:~/test# which python
/home/cy-deploy-env/bin/python

 


Cephadm Ceph Cluster (v18.2.1 Reef)

The container image used for the Ceph deployment is pushed to the registry in advance with Skopeo.

root@cyyoon-c1-deploy-010:~#  podman run --rm --security-opt seccomp=unconfined --net host quay.io/skopeo/stable copy \
 --dest-tls-verify=false --dest-creds cyyoon:cyyoon-password \
  docker://quay.io/ceph/ceph:v18.2.1-20240118 \
  docker://dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118
 
root@cyyoon-c1-deploy-010:~#  curl -u "cyyoon:cyyoon-password" https://dev24deploy.cyuucloud.xyz:5000/v2/_catalog
{"repositories":["ceph/ceph"]}

Now that the ceph-ansible installation tool is no longer provided, Ceph cluster deployment comes down to two options: cephadm or the Rook operator (https://docs.ceph.com/en/reef/install/).

The official documentation suggests installing the cephadm package from the distribution and then updating it to Reef (18.2); cephadm is installed that way here (https://docs.ceph.com/en/reef/cephadm/install/#update-cephadm).

(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# apt install cephadm -y
//...
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# cephadm version
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# cephadm add-repo --release reef
Installing repo GPG key from https://download.ceph.com/keys/release.gpg...
Installing repo file at /etc/apt/sources.list.d/ceph.list...
Updating package list...
Completed adding repo.
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# cephadm install
Installing packages ['cephadm']...
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# which cephadm
/usr/sbin/cephadm
 
root@cyyoon-c1-deploy-010:/home/cephadm# cephadm --image dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118 version
cephadm version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)

 

Before deploying, set up chrony for time synchronization.

root@cyyoon-c1-deploy-010:~# cat /etc/ansible/hosts
[ceph]
cyyoon-c1-ceph-01[1:3]
cyyoon-c1-deploy-010
 
root@cyyoon-c1-deploy-010:~# ansible -m shell -ba 'apt install chrony -y ' ceph
root@cyyoon-c1-deploy-010:~# ansible -m shell -ba 'systemctl enable --now chrony' ceph
root@cyyoon-c1-deploy-010:~# ansible -m shell -ba 'chronyc  sources' ceph
cyyoon-c1-ceph-012 | CHANGED | rc=0 >>
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-3.ntp1.ps5.cano>     2   6    77    64   +532us[ +532us] +/-  127ms
^- prod-ntp-4.ntp1.ps5.cano>     2   6   177     1   -705us[ -705us] +/-  127ms
^- prod-ntp-5.ntp1.ps5.cano>     2   6    77    64  +4124us[+4124us] +/-  134ms
^- alphyn.canonical.com          2   6    77    64  -1744us[-1623us] +/-  120ms
^- 106.247.248.106               2   6   177     0   +482us[ +482us] +/-   26ms
^- ntp-seoul.gombadi.com         2   6   237     3    -49ms[  -49ms] +/-  134ms
^* 193.123.243.2                 2   6   177     5  +2386ns[+2169ns] +/- 4632us
^- 121.174.142.82                3   6   177     7  +1539us[+1539us] +/-   42ms
cyyoon-c1-ceph-011 | CHANGED | rc=0 >>
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- prod-ntp-4.ntp1.ps5.cano>     2   6   177    16    +25us[  +25us] +/-  127ms
^- prod-ntp-5.ntp4.ps5.cano>     2   6   177    15   -164us[ -164us] +/-  137ms
^- prod-ntp-3.ntp4.ps5.cano>     2   6   177    15  +3274us[+3274us] +/-  130ms
^- alphyn.canonical.com          2   6   177    16   +432us[ +432us] +/-  122ms

Copy the hosts file to each node.

root@cyyoon-c1-deploy-010:/home/cephadm# cat /etc/hosts
127.0.0.1 localhost
172.21.1.12 cyyoon-c1-ceph-012
172.21.1.11 cyyoon-c1-ceph-011
172.21.1.13 cyyoon-c1-ceph-013
172.21.1.51 cyyoon-c1-openstack-051
172.21.1.52 cyyoon-c1-openstack-052
172.21.1.53 cyyoon-c1-openstack-053
172.21.1.54 cyyoon-c1-openstack-054
172.21.1.55 cyyoon-c1-openstack-055
//...
root@cyyoon-c1-deploy-010:/home/cephadm# ansible -m copy -ba 'src=/etc/hosts dest=/etc/hosts' ceph

Install Podman on each of the ceph-011 to ceph-013 nodes used for the Ceph deployment.

# ceph-011
root@cyyoon-c1-ceph-011:~# curl -fsSL https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/Release.key \
  | gpg --dearmor \
  | sudo tee /etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg > /dev/null
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg]\
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list > /dev/null
root@cyyoon-c1-ceph-011:~# apt update -qq && apt -qq -y install podman
root@cyyoon-c1-ceph-011:~# podman  version  | grep -i ^version
Version:      4.6.2
root@cyyoon-c1-ceph-011:~# cat /etc/containers/registries.conf
//... add the following:
[[registry]]
insecure = true
location = "dev24deploy.cyuucloud.xyz:5000"
root@cyyoon-c1-ceph-011:~# podman login dev24deploy.cyuucloud.xyz:5000
Username: cyyoon
Password:
Login Succeeded!
root@cyyoon-c1-ceph-011:~# podman pull dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118
Trying to pull dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118...
 
 # ceph-012
root@cyyoon-c1-ceph-012:~#  mkdir -p /etc/apt/keyrings
root@cyyoon-c1-ceph-012:~# curl -fsSL https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/Release.key \
  | gpg --dearmor \
  | sudo tee /etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg > /dev/null
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg]\
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list > /dev/null
root@cyyoon-c1-ceph-012:~#   apt update -qq && apt -qq -y install podman
root@cyyoon-c1-ceph-012:~# podman version | grep -i ^version
Version:      4.6.2
root@cyyoon-c1-ceph-012:~# cat /etc/containers/registries.conf
//... add the following:
[[registry]]
insecure = true
location = "dev24deploy.cyuucloud.xyz:5000"
root@cyyoon-c1-ceph-012:~# podman login dev24deploy.cyuucloud.xyz:5000
Username: cyyoon
Password:
Login Succeeded!
root@cyyoon-c1-ceph-012:~# podman pull dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118
 
 # ceph-013
root@cyyoon-c1-ceph-013:~# mkdir -p /etc/apt/keyrings
root@cyyoon-c1-ceph-013:~# curl -fsSL https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/Release.key \
  | gpg --dearmor \
  | sudo tee /etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg > /dev/null
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg]\
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list > /dev/null
root@cyyoon-c1-ceph-013:~# apt update -qq && apt -qq -y install podman
root@cyyoon-c1-ceph-013:~# podman version | grep -i ^version
Version:      4.6.2
root@cyyoon-c1-ceph-013:~# cat /etc/containers/registries.conf
//... add the following:
[[registry]]
insecure = true
location = "dev24deploy.cyuucloud.xyz:5000"
root@cyyoon-c1-ceph-013:~# podman login dev24deploy.cyuucloud.xyz:5000
Username: cyyoon
Password:
Login Succeeded!
root@cyyoon-c1-ceph-013:~# podman  pull dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118
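Since Ansible is already set up, a quick sanity check across all Ceph nodes can be done in one shot (a convenience check, not one of the recorded steps):

root@cyyoon-c1-deploy-010:~# ansible -m shell -ba 'podman version | grep -i ^Version' ceph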

In a real production environment a lot of Ceph settings go in up front, so create an initial ceph config file as below and have it loaded into the ceph mgr config database automatically.

root@cyyoon-c1-deploy-010:~# cd /home/cephadm/
root@cyyoon-c1-deploy-010:/home/cephadm# cat initial-ceph.conf
[global]
debug asok = 0/0
debug auth = 0/0
debug bdev = 0/0
debug bluefs = 0/0
debug bluestore = 0/0
debug buffer = 0/0
debug civetweb = 0/0
debug client = 0/0
debug compressor = 0/0
debug context = 0/0
debug crush = 0/0
 
[osd]
osd_min_pg_log_entries = 10
osd_max_pg_log_entries = 10

Placement of the OSDs, MONs, and the rest is driven by a service spec.

root@cyyoon-c1-deploy-010:/home/cephadm# cat cluster-spec.yaml
service_type: host
addr: 172.21.1.10
hostname: cyyoon-c1-deploy-010
---
service_type: host
addr: 172.21.1.11
hostname: cyyoon-c1-ceph-011
location:
  root: default
  datacenter: DC1
  rack: rack-a
labels:
  - osd
  - mon
  - mgr
---
service_type: host
addr: 172.21.1.12
hostname: cyyoon-c1-ceph-012
location:
  root: default
  datacenter: DC1
  rack: rack-b
labels:
  - osd
  - mon
  - mgr
---
service_type: host
addr: 172.21.1.13
hostname: cyyoon-c1-ceph-013
location:
  root: default
  datacenter: DC1
  rack: rack-c
labels:
  - osd
  - mon
  - mgr
---
service_type: mon
placement:
  hosts:
    - cyyoon-c1-ceph-011
    - cyyoon-c1-ceph-012
    - cyyoon-c1-ceph-013
---
service_type: mgr
placement:
  hosts:
    - cyyoon-c1-ceph-011
    - cyyoon-c1-ceph-012
    - cyyoon-c1-ceph-013
---
service_type: osd
service_id: cyyoon_osd
placement:
  hosts:
    - cyyoon-c1-ceph-011
    - cyyoon-c1-ceph-012
    - cyyoon-c1-ceph-013
spec:
  data_devices:
    paths:
      - /dev/sdb

The credentials each node uses to access the registry are managed for the deployment via a separate registry.json.

root@cyyoon-c1-deploy-010:/home/cephadm# cat registry.json
{
        "url": "dev24deploy.cyuucloud.xyz:5000",
        "username": "cyyoon",
        "password": "cyyoon-password"
}

Now bootstrap the cluster based on this information. As the command shows, the 10.112.1.0/24 range is used as the cluster network, while the monitor network is derived from the IP of the node running the bootstrap, so the cluster is deployed on that IP range.

root@cyyoon-c1-deploy-010:/home/cephadm# cephadm   --image dev24deploy.cyuucloud.xyz:5000/ceph/ceph:v18.2.1-20240118   \
 bootstrap  --ssh-user=root  --mon-ip=10.111.1.10  --cluster-network=10.112.1.0/24  --registry-json registry.json \
   --config initial-ceph.conf \
   --apply-spec=cluster-spec.yaml  \
   --initial-dashboard-user=admin --initial-dashboard-password=password  --skip-monitoring-stack
//...  
Saving cluster configuration to /var/lib/ceph/94aac626-b80d-11ee-963c-8d6b99fb8b9d/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
 
        sudo /usr/sbin/cephadm shell --fsid 94aac626-b80d-11ee-963c-8d6b99fb8b9d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
 
Or, if you are only running a single cluster on this host:
 
        sudo /usr/sbin/cephadm shell
 
Please consider enabling telemetry to help improve Ceph:
 
        ceph telemetry on
 
For more information see:
 
        https://docs.ceph.com/en/latest/mgr/telemetry/
 
Bootstrap complete.

When the bootstrap command finishes, it prints a command line with the fsid for entering a shell via cephadm. Use it to enter the shell. After a little while, the deployed state can be checked with the ceph orch commands.

root@cyyoon-c1-deploy-010:/home/cephadm#         sudo /usr/sbin/cephadm shell --fsid 94aac626-b80d-11ee-963c-8d6b99fb8b9d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
root@cyyoon-c1-deploy-010:/# ceph -s
  cluster:
    id:     94aac626-b80d-11ee-963c-8d6b99fb8b9d
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum cyyoon-c1-ceph-013,cyyoon-c1-ceph-012,cyyoon-c1-ceph-011 (age 50s)
    mgr: cyyoon-c1-deploy-010.ftnoav(active, since 2m), standbys: cyyoon-c1-ceph-013.hsikld, cyyoon-c1-ceph-011.ntghis, cyyoon-c1-ceph-012.lwfgzn
    osd: 3 osds: 3 up (since 9s), 3 in (since 49s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 257 KiB
    usage:   79 MiB used, 195 GiB / 195 GiB avail
    pgs:     1 active+clean
 
root@cyyoon-c1-deploy-010:/# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META    AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         0.19048         -  195 GiB   81 MiB  2.1 MiB   0 B  78 MiB  195 GiB  0.04  1.00    -          root default
-4         0.19048         -  195 GiB   81 MiB  2.1 MiB   0 B  78 MiB  195 GiB  0.04  1.00    -              datacenter DC1
-3         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.01    -                  rack rack-a
-2         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.01    -                      host cyyoon-c1-ceph-011
 1    hdd  0.06349   1.00000   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.01    1      up                  osd.1
-6         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    -                  rack rack-b
-5         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    -                      host cyyoon-c1-ceph-012
 0    hdd  0.06349   1.00000   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    1      up                  osd.0
-8         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    -                  rack rack-c
-7         0.06349         -   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    -                      host cyyoon-c1-ceph-013
 2    hdd  0.06349   1.00000   65 GiB   27 MiB  732 KiB   0 B  26 MiB   65 GiB  0.04  1.00    1      up                  osd.2
                       TOTAL  195 GiB   81 MiB  2.1 MiB   0 B  78 MiB  195 GiB  0.04
MIN/MAX VAR: 1.00/1.01  STDDEV: 0
root@cyyoon-c1-deploy-010:/# ceph orch ls
NAME             PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
crash                       4/4  39s ago    3m   *
mgr                         4/3  39s ago    2m   cyyoon-c1-ceph-011;cyyoon-c1-ceph-012;cyyoon-c1-ceph-013
mon                         3/3  39s ago    2m   cyyoon-c1-ceph-011;cyyoon-c1-ceph-012;cyyoon-c1-ceph-013
osd.service_osd               3  39s ago    2m   cyyoon-c1-ceph-011;cyyoon-c1-ceph-012;cyyoon-c1-ceph-013
root@cyyoon-c1-deploy-010:/# ceph orch ps
NAME                             HOST                  PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
crash.cyyoon-c1-ceph-011         cyyoon-c1-ceph-011                 running (99s)      2s ago   98s    6656k        -  18.2.1   7f099bcd7014  b1aa816057f1
crash.cyyoon-c1-ceph-012         cyyoon-c1-ceph-012                 running (102s)     2s ago  101s    6656k        -  18.2.1   7f099bcd7014  d527178bf9b8
crash.cyyoon-c1-ceph-013         cyyoon-c1-ceph-013                 running (105s)     2s ago  105s    6665k        -  18.2.1   7f099bcd7014  f05b153ccab7
crash.cyyoon-c1-deploy-010       cyyoon-c1-deploy-010               running (2m)       1s ago    2m    6656k        -  18.2.1   7f099bcd7014  53e526c44daa
mgr.cyyoon-c1-ceph-011.ntghis    cyyoon-c1-ceph-011    *:8443,8765  running (93s)      2s ago   92s     438M        -  18.2.1   7f099bcd7014  328f0cd1a815
mgr.cyyoon-c1-ceph-012.lwfgzn    cyyoon-c1-ceph-012    *:8443,8765  running (90s)      2s ago   90s     438M        -  18.2.1   7f099bcd7014  2102bcd87efd
mgr.cyyoon-c1-ceph-013.hsikld    cyyoon-c1-ceph-013    *:8443,8765  running (96s)      2s ago   96s     437M        -  18.2.1   7f099bcd7014  905a72d8df4f
mgr.cyyoon-c1-deploy-010.ftnoav  cyyoon-c1-deploy-010  *:9283,8765  running (4m)       1s ago    4m     488M        -  18.2.1   7f099bcd7014  7e17dfc4571c
mon.cyyoon-c1-ceph-011           cyyoon-c1-ceph-011                 running (77s)      2s ago   76s    29.0M    2048M  18.2.1   7f099bcd7014  50ff23032031
mon.cyyoon-c1-ceph-012           cyyoon-c1-ceph-012                 running (83s)      2s ago   83s    29.4M    2048M  18.2.1   7f099bcd7014  5e32b2929a38
mon.cyyoon-c1-ceph-013           cyyoon-c1-ceph-013                 running (87s)      2s ago   86s    38.0M    2048M  18.2.1   7f099bcd7014  818083dbffc7
osd.0                            cyyoon-c1-ceph-012                 running (49s)      2s ago   48s    53.5M    4096M  18.2.1   7f099bcd7014  9f59a19cd76e
osd.1                            cyyoon-c1-ceph-011                 running (48s)      2s ago   48s    54.1M    4096M  18.2.1   7f099bcd7014  54d0aee3f52d
osd.2                            cyyoon-c1-ceph-013                 running (49s)      2s ago   48s    52.1M    4096M  18.2.1   7f099bcd7014  78ad78032c07
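You can also confirm that the settings from initial-ceph.conf were assimilated into the mon/mgr configuration database during bootstrap:

root@cyyoon-c1-deploy-010:/# ceph config dump | grep pg_log_entries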

Build Kolla Container Images

As of this test in January 2024, the Bobcat Kolla project has added the option of using Podman for container builds as well.

(https://docs.openstack.org/kolla/latest/admin/image-building.html)

Because Podman runs daemon-less (one of its advantages), the socket Kolla needs for its container API connection during builds must be started separately. That is why "systemctl enable --now podman.socket" is run below: it lets the build and push go through Podman.

(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# pip install git+https://github.com/openstack/kolla.git@stable/2023.2
Collecting git+https://github.com/openstack/kolla.git@stable/2023.2
  Cloning https://github.com/openstack/kolla.git (to revision stable/2023.2) to /tmp/pip-req-build-l0x9yo81
  Running command git clone --filter=blob:none --quiet https://github.com/openstack/kolla.git /tmp/pip-req-build-l0x9yo81
  Running command git checkout -b stable/2023.2 --track origin/stable/2023.2
  Switched to a new branch 'stable/2023.2'
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# python3 -m pip install podman
(cy-deploy-env) root@cyyoon-c1-deploy-010:/home#  systemctl  enable --now podman.socket
Created symlink /etc/systemd/system/sockets.target.wants/podman.socket → /lib/systemd/system/podman.socket.
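A quick way to confirm the socket is answering is to hit the libpod info endpoint over the unix socket (a sketch; the version prefix in the URL is flexible):

root@cyyoon-c1-deploy-010:~# curl -s --unix-socket /run/podman/podman.sock http://d/v4.0.0/libpod/info | head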

 

Profiles come predefined for kolla-build, so if desired you can conveniently choose which Kolla container images get built by selecting a custom profile.

In config.py below, you can check the components that are built with the default profile used in this test.

(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# cat /home/cy-deploy-env/lib/python3.10/site-packages/kolla/common/config.py
//...
    cfg.ListOpt('default',
                default=[
                    'cron',
                    'kolla-toolbox',
                    'fluentd',
                    'glance',
                    'haproxy',
                    'heat',
                    'horizon',
                    'keepalived',
                    'keystone',
                    'mariadb',
                    'memcached',
                    'neutron',
                    'nova-',
                    'placement',
                    'proxysql',
                    'openvswitch',
                    'rabbitmq',
                ],
                help='Default images'),
//...

Build the components listed in the default profile, additionally build Cinder, and have the push to the registry happen automatically. Once everything completes, check the images registered in the registry.
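The invocation for that would look roughly like this (a sketch; flags as documented for kolla-build, using this environment's registry, namespace, and tag):

(cy-deploy-env) root@cyyoon-c1-deploy-010:/home# kolla-build --engine podman \
  --base ubuntu --tag 17.1.1 \
  --registry dev24deploy.cyuucloud.xyz:5000 --namespace kolla \
  --push --profile default cinder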

 

Deploy OpenStack (Kolla-ansible)

- Deployment preparation

As described in https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html, install kolla-ansible and the related packages on the deploy node and copy the default configuration.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# sudo apt install git python3-dev libffi-dev gcc libssl-dev
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# pip install git+https://github.com/openstack/kolla-ansible.git@stable/2023.2
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# pip install 'ansible-core>=2.14,<2.16'
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-ansible --version
17.0.1
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# sudo mkdir -p /etc/kolla && sudo chown $USER:$USER /etc/kolla
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp -r /home/cy-deploy-env/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /home/cy-deploy-env/share/kolla-ansible/ansible/inventory/multinode  /etc/kolla/
 (cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-ansible install-deps
Installing Ansible Galaxy dependencies
//...

Use kolla-genpwd to auto-generate every password except the ones that must be set explicitly. In this test only "docker_registry_password" (the password used when creating the registry earlier) and the Keystone admin password are set by hand. Set those values in the copied file as below, then run kolla-genpwd so that every variable without a value gets a generated password.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /etc/kolla/passwords.yml | grep cyyoon
docker_registry_password: cyyoon-password
keystone_admin_password: cyyoon-password
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-genpwd
WARNING: Passwords file "/etc/kolla/passwords.yml" is world-readable. The permissions will be changed.
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# tail -n 5 /etc/kolla/passwords.yml
vmware_vcenter_host_password: QTvhgMqLzbkN1HTdUzWpE5HVBRJ6DvwJcobmBqCB
watcher_database_password: ZyJm7Z4yjn3W9dyJYILEEfF7v84My6RB3tuuDpoL
watcher_keystone_password: vndmbF9fdZm5idkMRThwAzLGm2ZND6EmeCxZa1XB
zun_database_password: mr4zmkQrnHYP0Q8APwiUZt5eN5VcgGl4NU0YnpQq
zun_keystone_password: hNWVUSQg3TWzPKFaoPYuVPsVqjIQJdTZ63GMdvuw

The inventory used for deployment is the modified multinode file: openstack-051 to 053 act as controller nodes and double as network nodes, while openstack-054 and 055 act as compute nodes.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# head -n 30 /etc/kolla/multinode
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostname must be resolvable from your deployment host
cyyoon-c1-openstack-05[1:3]
# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla
 
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
cyyoon-c1-openstack-05[1:3]
[compute]
cyyoon-c1-openstack-05[4:5]
[monitoring]
cyyoon-c1-openstack-05[1:3]
 
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 tunnel_interface=em1
 
[storage]
cyyoon-c1-openstack-05[1:3]
 
[deployment]
localhost       ansible_connection=local
 
[baremetal:children]
control
//...

- Ceph integration

Create the pools OpenStack needs and set up the keyrings. The Ceph client work here is done directly on the host OS rather than inside the cephadm shell: the cephadm bootstrap already placed the admin keyring and the client ceph.conf under "/etc/ceph/", so installing just the client gives access with the admin account.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ls -al /etc/ceph/
total 24
drwxr-xr-x   2 root root 4096 Jan 21 03:36 .
drwxr-xr-x 105 root root 4096 Jan 22 09:35 ..
-rw-------   1 root root  151 Jan 21 03:36 ceph.client.admin.keyring
-rw-r--r--   1 root root  265 Jan 21 03:36 ceph.conf
-rw-r--r--   1 root root  595 Jan 21 03:32 ceph.pub
-rw-------   1 root root  101 Jan 21 03:36 podman-auth.json
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# apt install ceph-common  -y
Reading package lists... Done
//...
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph -s
  cluster:
    id:     94aac626-b80d-11ee-963c-8d6b99fb8b9d
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum cyyoon-c1-ceph-013,cyyoon-c1-ceph-012,cyyoon-c1-ceph-011 (age 32h)
    mgr: cyyoon-c1-ceph-013.hsikld(active, since 32h), standbys: cyyoon-c1-ceph-011.ntghis, cyyoon-c1-ceph-012.lwfgzn
    osd: 3 osds: 3 up (since 32h), 3 in (since 32h)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   186 MiB used, 195 GiB / 195 GiB avail
    pgs:     1 active+clean
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph osd pool create volumes 32
pool 'volumes' created
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  ceph osd pool create backups 8
pool 'backups' created
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  ceph osd pool create images 8
pool 'images' created
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph osd pool create vms 8
pool 'vms' created
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQDrXa5lDXecMxAAeQqA8ZTIXwyMCN7GO0e85g==
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[client.cinder-backup]
        key = AQDxXa5lZr6KARAAcrbdhRay0+stxYju4KCLNA==
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQDzXa5lotCfNBAA4c9tdRWh9mVedI4PkN4nXw==

Store the created pools' keyrings under subdirectories of "/etc/kolla/config" so that cinder, nova, and glance can each use them.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  mkdir -p /etc/kolla/config/glance/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  mkdir  /etc/kolla/config/cinder/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  mkdir  /etc/kolla/config/nova/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  mkdir  /etc/kolla/config/cinder/cinder-volume/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  mkdir  /etc/kolla/config/cinder/cinder-backup/
 
## Before copying ceph.conf, edit the original file so the lines starting with a tab have no leading whitespace.
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /etc/ceph/ceph.conf
# minimal ceph.conf for 94aac626-b80d-11ee-963c-8d6b99fb8b9d
[global]
        fsid = 94aac626-b80d-11ee-963c-8d6b99fb8b9d
        mon_host = [v2:10.111.1.11:3300/0,v1:10.111.1.11:6789/0] [v2:10.111.1.12:3300/0,v1:10.111.1.12:6789/0] [v2:10.111.1.13:3300/0,v1:10.111.1.13:6789/0]
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# vi /etc/ceph/ceph.conf
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /etc/ceph/ceph.conf
# minimal ceph.conf for 94aac626-b80d-11ee-963c-8d6b99fb8b9d
[global]
fsid = 94aac626-b80d-11ee-963c-8d6b99fb8b9d
mon_host = [v2:10.111.1.11:3300/0,v1:10.111.1.11:6789/0] [v2:10.111.1.12:3300/0,v1:10.111.1.12:6789/0] [v2:10.111.1.13:3300/0,v1:10.111.1.13:6789/0]
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  ceph auth get-or-create client.glance > /etc/kolla/config/glance/ceph.client.glance.keyring
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /etc/ceph/ceph.conf /etc/kolla/config/glance/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph auth get-or-create client.cinder> /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph auth get-or-create client.cinder> /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph auth get-or-create client.cinder-backup > /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/cinder-volume/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/cinder-backup/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ceph auth get-or-create client.cinder> /etc/kolla/config/nova/ceph.client.cinder.keyring
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /etc/ceph/ceph.conf /etc/kolla/config/nova/
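After those copies, the tree under /etc/kolla/config should contain:

/etc/kolla/config/glance/ceph.conf
/etc/kolla/config/glance/ceph.client.glance.keyring
/etc/kolla/config/cinder/cinder-volume/ceph.conf
/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.conf
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
/etc/kolla/config/nova/ceph.conf
/etc/kolla/config/nova/ceph.client.cinder.keyring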

/etc/kolla/globals.yml is configured as below; it holds the settings for the deployment.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# egrep  -v '^#|^$' /etc/kolla/globals.yml
---
workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2023.2"
openstack_tag: "17.1.1"
kolla_internal_vip_address: "172.21.1.100"
kolla_external_vip_address: "172.21.1.99"
kolla_external_fqdn: "dev24vip.cyuucloud.xyz"
kolla_container_engine: podman
docker_registry: "dev24deploy.cyuucloud.xyz:5000"
docker_registry_username: "cyyoon"
docker_namespace: "kolla"
network_interface: "ens3"
api_interface: "{{ network_interface }}"
tunnel_interface: "ens5"
neutron_external_interface: "ens6"
neutron_plugin_agent: "openvswitch"
keepalived_virtual_router_id: "51"
kolla_enable_tls_internal: "no"
kolla_enable_tls_external: "yes"
kolla_certificates_dir: "{{ node_config }}/certificates"
kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/certificate.crt"
enable_openstack_core: "yes"
enable_glance: "{{ enable_openstack_core | bool }}"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_cinder: "yes"
enable_neutron_dvr: "yes"
enable_skyline: "yes"
external_ceph_cephx_enabled: "yes"
ceph_glance_keyring: "client.glance.keyring"
ceph_glance_user: "glance"
ceph_glance_pool_name: "images"
ceph_cinder_keyring: "client.cinder.keyring"
ceph_cinder_user: "cinder"
ceph_cinder_pool_name: "volumes"
ceph_cinder_backup_keyring: "client.cinder-backup.keyring"
ceph_cinder_backup_user: "cinder-backup"
ceph_cinder_backup_pool_name: "backups"
ceph_nova_keyring: "{{ ceph_cinder_keyring }}"
ceph_nova_user: "cinder"
ceph_nova_pool_name: "vms"
glance_backend_ceph: "yes"
glance_backend_file: "no"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
nova_compute_virt_type: "qemu" ## for a production environment this must be kvm or another virt type; qemu is for testing only
nova_console: "novnc"

- Certificate setup

Now register the certificate for TLS on the APIs. The certificate prepared earlier for the "dev24vip.cyuucloud.xyz" domain is used. The PEM file must contain the private key, the certificate, and the CA chain together (https://openmetal.io/docs/manuals/operators-manual/day-4/kolla-ansible/enable-tls#prepare-ssl-file).

Depending on its role, TLS is configured on the internal or the external endpoint; the certificate is presented where HAProxy exposes the load-balanced traffic. In this setup TLS is enabled on the external endpoint, as configured below (https://docs.openstack.org/kolla-ansible/latest/admin/tls.html).

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# mkdir /etc/kolla/certificates/
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cp /root/ssl/dev24vip.cyuucloud.xyz/certificate.crt  /etc/kolla/certificates/haproxy.pem
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# echo  "" >> /etc/kolla/certificates/haproxy.pem
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat  /root/ssl/dev24vip.cyuucloud.xyz/private.key >>  /etc/kolla/certificates/haproxy.pem
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /backups/kolla/globals.yml
//...
#kolla_enable_tls_internal: "no"
kolla_enable_tls_external: "yes"
kolla_certificates_dir: "{{ node_config }}/certificates"
kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem"
//...
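Before deploying, it is worth sanity-checking the assembled bundle (assuming an RSA key, as in the files above):

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openssl x509 -in /etc/kolla/certificates/haproxy.pem -noout -subject -dates
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openssl rsa -in /etc/kolla/certificates/haproxy.pem -check -noout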

 

- Deployment and testing with Kolla-ansible

Run kolla-ansible bootstrap-servers to install Podman and the other packages the target nodes need before the Kolla deployment.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  kolla-ansible -i /etc/kolla/multinode   bootstrap-servers  -e ansible_python_interpreter=/usr/bin/python3
//...
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-ansible -i /etc/kolla/multinode  prechecks -e ansible_python_interpreter=/usr/bin/python3
//...

Run kolla-ansible deploy to carry out the deployment.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-ansible  -i /etc/kolla/multinode  deploy   -e ansible_python_interpreter=/usr/bin/python3  
//...

If the deployment completed successfully, run post-deploy to generate the admin credentials file, install the openstack client, and check the state of the services that came up.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# kolla-ansible -i /etc/kolla/multinode post-deploy  -e ansible_python_interpreter=/usr/bin/python3
 //...
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /etc/kolla/admin-openrc.sh
# Ansible managed
 
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME='Default'
export OS_USER_DOMAIN_NAME='Default'
export OS_PROJECT_NAME='admin'
export OS_TENANT_NAME='admin'
export OS_USERNAME='admin'
export OS_PASSWORD='cyyoon-password'
export OS_AUTH_URL='http://172.21.1.100:5000'
export OS_INTERFACE='internal'
export OS_ENDPOINT_TYPE='internalURL'
export OS_IDENTITY_API_VERSION='3'
export OS_REGION_NAME='RegionOne'
export OS_AUTH_PLUGIN='password'
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# source  /etc/kolla/admin-openrc.sh
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# pip install python-openstackclient==6.5.0
 
## Check Nova service status
 (cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack compute service list
+--------------------------------------+----------------+-------------------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host                    | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-------------------------+----------+---------+-------+----------------------------+
| 9120de96-b55c-4435-81f8-546c9cd85601 | nova-scheduler | cyyoon-c1-openstack-051 | internal | enabled | up    | 2024-02-10T03:56:53.000000 |
| 7f002186-def6-43b0-9ed6-fe61382cc907 | nova-scheduler | cyyoon-c1-openstack-053 | internal | enabled | up    | 2024-02-10T03:56:48.000000 |
| d2639dc2-fd94-4c4d-b105-11689d920179 | nova-scheduler | cyyoon-c1-openstack-052 | internal | enabled | up    | 2024-02-10T03:56:54.000000 |
| 01eed177-d61c-485b-83ad-8138c083d77e | nova-conductor | cyyoon-c1-openstack-051 | internal | enabled | up    | 2024-02-10T03:56:55.000000 |
| 843946bb-2607-4e97-b853-18fc5cdb89dc | nova-conductor | cyyoon-c1-openstack-052 | internal | enabled | up    | 2024-02-10T03:56:54.000000 |
| 61e975d1-9f28-4e49-b98c-7e06325d9e55 | nova-conductor | cyyoon-c1-openstack-053 | internal | enabled | up    | 2024-02-10T03:56:48.000000 |
| 8b6b8633-06eb-46d6-a007-a8314f14ac2c | nova-compute   | cyyoon-c1-openstack-054 | nova     | enabled | up    | 2024-02-10T03:56:50.000000 |
| 4b510123-ae8c-4f40-a7d6-61a663155cf8 | nova-compute   | cyyoon-c1-openstack-055 | nova     | enabled | up    | 2024-02-10T03:56:00.000000 |
+--------------------------------------+----------------+-------------------------+----------+---------+-------+----------------------------+
 
## Check Neutron agent status
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack network agent list
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
| 0548e6ce-6427-4a24-b40a-fe91f262ff91 | Open vSwitch agent | cyyoon-c1-openstack-051 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 1ed7d061-8ae4-4529-bbf0-2ad13b86524d | L3 agent           | cyyoon-c1-openstack-052 | nova              | :-)   | UP    | neutron-l3-agent          |
| 3622af6d-ce26-4818-b22c-995a1b00c48d | Metadata agent     | cyyoon-c1-openstack-053 | None              | :-)   | UP    | neutron-metadata-agent    |
| 376b4601-fd51-4f61-b3a8-fb0c732d83e7 | DHCP agent         | cyyoon-c1-openstack-051 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 444bd114-0ee2-45f2-a5a8-36acf455191d | Open vSwitch agent | cyyoon-c1-openstack-054 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 4709ec0c-bb95-4309-8100-395fdb1accbc | L3 agent           | cyyoon-c1-openstack-051 | nova              | :-)   | UP    | neutron-l3-agent          |
| 57b0c506-5c3c-4501-bc82-98763dd8c687 | DHCP agent         | cyyoon-c1-openstack-053 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 663268e9-31cc-4413-9379-0c8d3278edd6 | L3 agent           | cyyoon-c1-openstack-054 | nova              | :-)   | UP    | neutron-l3-agent          |
| 79c220f3-1842-4a9d-b4c9-6c4ce12eb8cc | L3 agent           | cyyoon-c1-openstack-055 | nova              | :-)   | UP    | neutron-l3-agent          |
| 8581daee-ae70-4fd0-b21b-9964c9981bd1 | L3 agent           | cyyoon-c1-openstack-053 | nova              | :-)   | UP    | neutron-l3-agent          |
| 8e88fc90-3ea9-46cd-8fc9-e86280bf73a8 | Open vSwitch agent | cyyoon-c1-openstack-055 | None              | :-)   | UP    | neutron-openvswitch-agent |
| a4019a8c-fede-4bcc-85c6-81ffdfed8c70 | DHCP agent         | cyyoon-c1-openstack-052 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| aed51df7-f77d-4615-bbb7-43a93de590df | Metadata agent     | cyyoon-c1-openstack-051 | None              | :-)   | UP    | neutron-metadata-agent    |
| b5912c0e-31e5-4b72-bea6-343c24cea0f3 | Metadata agent     | cyyoon-c1-openstack-052 | None              | :-)   | UP    | neutron-metadata-agent    |
| c2c1002d-c3b8-4775-9056-73309ac0b6c1 | Open vSwitch agent | cyyoon-c1-openstack-052 | None              | :-)   | UP    | neutron-openvswitch-agent |
| d2ca06cf-45fa-41ca-bdd8-f57b80ca22d3 | Open vSwitch agent | cyyoon-c1-openstack-053 | None              | :-)   | UP    | neutron-openvswitch-agent |
| df0a7b02-3ae2-4f39-9166-57af658a3f73 | Metadata agent     | cyyoon-c1-openstack-054 | None              | :-)   | UP    | neutron-metadata-agent    |
| dffcea92-d79b-40b5-975a-371dbb403402 | Metadata agent     | cyyoon-c1-openstack-055 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------------------------+-------------------+-------+-------+---------------------------+
 
## Check Cinder service status
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack volume service list
+------------------+-------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                          | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | cyyoon-c1-openstack-052       | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-scheduler | cyyoon-c1-openstack-053       | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-scheduler | cyyoon-c1-openstack-051       | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-volume    | cyyoon-c1-openstack-052@rbd-1 | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-volume    | cyyoon-c1-openstack-053@rbd-1 | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-volume    | cyyoon-c1-openstack-051@rbd-1 | nova | enabled | up    | 2024-02-10T04:06:02.000000 |
| cinder-backup    | cyyoon-c1-openstack-052       | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-backup    | cyyoon-c1-openstack-053       | nova | enabled | up    | 2024-02-10T04:06:09.000000 |
| cinder-backup    | cyyoon-c1-openstack-051       | nova | enabled | up    | 2024-02-10T04:06:06.000000 |
+------------------+-------------------------------+------+---------+-------+----------------------------+

 

Now, to verify the deployed OpenStack works, manually go through registering an image, creating a flavor, creating networks, and so on.

As described in https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html, the init-runonce script could do this in one go, but since it needs some adjustments depending on the environment, it is done manually in this setup.

 

Download the Ubuntu cloud image, convert it to raw format (the Ceph RBD backend stores images as raw, so converting up front avoids conversion on the backend), and register it with Glance.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
//...
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   apt-get install qemu-utils -y
//...
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# qemu-img  info jammy-server-cloudimg-amd64.img| grep format
file format: qcow2
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# qemu-img convert -f qcow2 -O raw jammy-server-cloudimg-amd64.img  jammy-server-cloudimg-amd64.raw
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# qemu-img  info jammy-server-cloudimg-amd64.raw | grep format
file format: raw
## Register the image with Glance
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack image create --container-format bare --disk-format raw --public --file jammy-server-cloudimg-amd64.raw jammy-server-cloudimg-amd64
//...
 
## Check the registered image
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack image show jammy-server-cloudimg-amd64
+------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                         |
+------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | ba8aca11adc5cc96126765d723043c3a                                                                                                              |
| container_format | bare                                                                                                                                          |
| created_at       | 2024-02-10T04:16:18Z                                                                                                                          |
| disk_format      | raw                                                                                                                                           |
| file             | /v2/images/96bdcd5a-319d-412e-ab25-034d12556396/file                                                                                          |
| id               | 96bdcd5a-319d-412e-ab25-034d12556396                                                                                                          |
| min_disk         | 0                                                                                                                                             |
| min_ram          | 0                                                                                                                                             |
| name             | jammy-server-cloudimg-amd64                                                                                                                   |
| owner            | ba817ea71e4f4836bd93dfc915d15c66                                                                                                              |
| properties       | os_hash_algo='sha512', os_hash_value='171a6ae7d85518490769eadf9c479d5e118670616a0d6e7f0cff7ea62a5c2e685c7669fbe82ce86813a5ea6e6150d13332607aa |
|                  | 4a0022843a8be0fdb798e7112', os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/jammy-server-       |
|                  | cloudimg-amd64', owner_specified.openstack.sha256='', stores='rbd'                                                                            |
| protected        | False                                                                                                                                         |
| schema           | /v2/schemas/image                                                                                                                             |
| size             | 2361393152                                                                                                                                    |
| status           | active                                                                                                                                        |
| tags             |                                                                                                                                               |
| updated_at       | 2024-02-10T04:18:48Z                                                                                                                          |
| virtual_size     | 2361393152                                                                                                                                    |
| visibility       | public                                                                                                                                        |
+------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+

Next, register a keypair for SSH access and add a flavor to use when creating instances from the Ubuntu image registered above.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:PR7jEtfO5tXhouApcy88pD1x6Q3Du75bdVYaBa5CA8U root@cyyoon-c1-deploy-010
The key's randomart image is:
+---[RSA 3072]----+
|        .o.   ...|
|         .E  . . |
|          o   o .|
|         o o . o.|
|        S O + .oo|
|         B &  ooo|
|        *.* Oo...|
|       +.Oo=oo.  |
|        +o*B=    |
+----[SHA256]-----+
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack flavor create --id 1 --vcpus 2 --ram 2048 --disk 20  test

Create the networks. Since neutron_external_interface: "ens6" was configured, the subnet range of the outbound external network must be one that is reachable through that interface.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# cat /etc/kolla/globals.yml |grep neutron_external_interface
neutron_external_interface: "ens6"
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  openstack network create --share --external \
--provider-physical-network physnet1 \
--provider-network-type flat provider
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  openstack subnet create --network provider \
--allocation-pool start=10.113.1.210,end=10.113.1.230 \
--dns-nameserver 8.8.4.4 --gateway 10.113.1.1 \
--subnet-range 10.113.1.0/24 provider
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  openstack network create test-network
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack subnet create --network test-network \
--dns-nameserver 8.8.4.4 --gateway 192.168.200.1 \
--subnet-range 192.168.200.0/24 test-subnet
 
## Create a router, attach the private subnet created above, then set the external network as the router's gateway for outbound connectivity.
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack router create test-router
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack router add subnet test-router test-subnet
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack router set --external-gateway provider test-router

Create a security group and rules to use for testing; for the test, allow all ICMP and TCP connections.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~#  openstack security group create test
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack security group rule create --proto icmp test
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack security group rule create --proto icmp --egress test
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack security group rule create --proto tcp --egress test
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#   openstack security group rule create --proto tcp --ingress test

Now create a test instance using the image, security group, and network created above.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# NET_ID=$(openstack network list --name test-network  -f value -c ID)
(cy-deploy-env) root@cyyoon-c1-deploy-010:~#    openstack server create --flavor test  --image jammy-server-cloudimg-amd64 --nic net-id=$NET_ID --security-group test  --key-name mykey    test-instance
## Confirm creation
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack server list
+--------------------------------------+---------------+--------+-----------------------------+-----------------------------+--------+
| ID                                   | Name          | Status | Networks                    | Image                       | Flavor |
+--------------------------------------+---------------+--------+-----------------------------+-----------------------------+--------+
| a52c072a-36c3-41e6-8b95-2e8574fadbb3 | test-instance | ACTIVE | test-network=192.168.200.31 | jammy-server-cloudimg-amd64 | test   |
+--------------------------------------+---------------+--------+-----------------------------+-----------------------------+--------+

Create a Floating IP, attach it to the created instance, and verify SSH connectivity.

(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack floating ip create provider
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2024-02-17T05:59:32Z                 |
| description         |                                      |
| dns_domain          | None                                 |
| dns_name            | None                                 |
| fixed_ip_address    | None                                 |
| floating_ip_address | 10.113.1.212                         |
| floating_network_id | 50fc08fc-d675-4844-bd28-475761d3d885 |
| id                  | a76e0d09-d22b-45be-8bf2-d0005115e523 |
| name                | 10.113.1.212                         | ## <----- note the allocated Floating IP
| port_details        | None                                 |
| port_id             | None                                 |
| project_id          | d399bb8349844656a743d73fae3361e1     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2024-02-17T05:59:32Z                 |
+---------------------+--------------------------------------+
 
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack server add floating ip test-instance 10.113.1.212
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# openstack server list
+--------------------------------------+---------------+--------+--------------------------------------------+-----------------------------+--------+
| ID                                   | Name          | Status | Networks                                   | Image                       | Flavor |
+--------------------------------------+---------------+--------+--------------------------------------------+-----------------------------+--------+
| 1c61a29e-4cb8-4d10-9f0b-37b0bc1e1c3f | test-instance | ACTIVE | test-network=10.113.1.212, 192.168.200.123 | jammy-server-cloudimg-amd64 | test   |
+--------------------------------------+---------------+--------+--------------------------------------------+-----------------------------+--------+
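
Once attached, the Floating IP's status should move from DOWN to ACTIVE. One way to confirm, reusing the ID from the output above (a quick check, not part of the original run):

## status should now be ACTIVE, with fixed_ip_address populated
openstack floating ip show a76e0d09-d22b-45be-8bf2-d0005115e523 -c status -c fixed_ip_address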
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ping 10.113.1.212  -c 1
PING 10.113.1.212 (10.113.1.212) 56(84) bytes of data.
64 bytes from 10.113.1.212: icmp_seq=1 ttl=62 time=3.60 ms
 
--- 10.113.1.212 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.598/3.598/3.598/0.000 ms
 
(cy-deploy-env) root@cyyoon-c1-deploy-010:~# ssh 10.113.1.212 -l ubuntu   -i ~/.ssh/id_rsa
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-92-generic x86_64)
 
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro
 
 System information disabled due to load higher than 2.0
 
 
Expanded Security Maintenance for Applications is not enabled.
 
0 updates can be applied immediately.
 
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
 
 
The list of available updates is more than a week old.
To check for new updates run: sudo apt update
 
Last login: Sat Feb 17 06:06:03 2024 from 10.113.1.10
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
 
ubuntu@test-instance:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:1a:66:d2 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 192.168.200.123/24 metric 100 brd 192.168.200.255 scope global dynamic ens3
       valid_lft 86211sec preferred_lft 86211sec
    inet6 fe80::f816:3eff:fe1a:66d2/64 scope link
       valid_lft forever preferred_lft forever

 

Caution 1)

The SSH connection to the instance in this setup can be drawn as the diagram below. Since this is not a real production environment, the External Network (10.113.1.0/24) was treated as if it were a public network, which is what made the simple SSH connectivity test possible. In a production environment this network would connect to an actual public network and be routed by the upstream L3 device. As a result, in the current setup the instance cannot reach the external internet; to enable internet access from this state, a separate proxy or NAT configuration is required, as sketched below.
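
For reference, a minimal NAT sketch on the deploy server, assuming ens5 faces the 10.113.1.0/24 test network and ens3 is the interface with real internet access (the interface roles here are assumptions for illustration, not part of the original setup):

## Enable IP forwarding on the deploy server
sysctl -w net.ipv4.ip_forward=1
## Masquerade traffic from the Floating IP range out through ens3 (assumed uplink)
iptables -t nat -A POSTROUTING -s 10.113.1.0/24 -o ens3 -j MASQUERADE
iptables -A FORWARD -i ens5 -o ens3 -j ACCEPT
iptables -A FORWARD -i ens3 -o ens5 -m state --state RELATED,ESTABLISHED -j ACCEPT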

 

Caution 2)

If you inspect the network directly for debugging or internal analysis, you may run into the following issue.

Traffic itself works fine, but while checking the network namespaces on the host you will see "Peer netns reference is invalid." errors, as shown below.

Passing commands with ip netns exec is not possible either.

This issue occurs when deploying with Podman: the network namespaces are not mounted on the host OS filesystem, which is why the messages below appear. (https://rodolfo-alonso.com/network-namespaces-and-containers) A workaround is sketched after the failing commands below.

root@cyyoon-c1-openstack-054:~# ip netns
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
qrouter-90ac5963-2490-4f49-9868-165dd78af743
Error: Peer netns reference is invalid.
fip-c3265f9a-128c-41f1-9c5e-a6e8f0dd6ae3
 
root@cyyoon-c1-openstack-054:~# ip netns exec qrouter-90ac5963-2490-4f49-9868-165dd78af743 ip a
setting the network namespace "qrouter-90ac5963-2490-4f49-9868-165dd78af743" failed: Invalid argument
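
As a workaround, the same namespace commands can be run through the neutron_l3_agent container, where the namespaces are properly mounted (a sketch reusing the qrouter ID from above):

## Run the command inside the neutron_l3_agent container instead of on the host
podman exec neutron_l3_agent ip netns exec qrouter-90ac5963-2490-4f49-9868-165dd78af743 ip a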
 
 
## Same configuration deployed with Docker
root@cyyoon-c1-openstack-064:~# findmnt -oTARGET,SOURCE,FSTYPE,PROPAGATION
TARGET                                                      SOURCE           FSTYPE     PROPAGATION
/                                                           /dev/sda1        ext4       shared
├─/sys                                                      sysfs            sysfs      shared
│ ├─/sys/kernel/security                                    securityfs       securityfs shared
│ ├─/sys/fs/cgroup                                          cgroup2          cgroup2    shared
//....
├─/run                                                      tmpfs            tmpfs      shared
│ ├─/run/lock                                               tmpfs            tmpfs      shared
│ ├─/run/credentials/systemd-sysusers.service               none             ramfs      shared
│ ├─/run/snapd/ns                                           tmpfs[/snapd/ns] tmpfs      private
│ │ └─/run/snapd/ns/lxd.mnt                                 nsfs[mnt:[4026532401]]
│ │                                                                          nsfs       private
│ ├─/run/netns/qrouter-e39e0091-666e-40a0-a51c-3ec5e13d0714 nsfs[net:[4026532521]]
│ │                                                                          nsfs       shared
│ ├─/run/docker/netns/default                               nsfs[net:[4026531840]]
│ │                                                                          nsfs       shared
│ ├─/run/netns/fip-28b7787a-d512-43c9-9548-e60aad1fb1cb     nsfs[net:[4026532585]]
│ │                                                                          nsfs       shared
│ └─/run/user/0                                             tmpfs            tmpfs      shared
 
## Same configuration deployed with Podman
root@cyyoon-c1-openstack-054:~# findmnt -oTARGET,SOURCE,FSTYPE,PROPAGATION
TARGET                                                                                                                       SOURCE      FSTYPE        PROPAGATION
/                                                                                                                            /dev/sda1   ext4          shared
├─/sys                                                                                                                       sysfs       sysfs         shared
//...
│ ├─/run/lock                                                                                                                tmpfs       tmpfs         shared
│ ├─/run/credentials/systemd-sysusers.service                                                                                none        ramfs         shared
│ ├─/run/snapd/ns                                                                                                            tmpfs[/snapd/ns]
│ │                                                                                                                                      tmpfs         private
│ │ └─/run/snapd/ns/lxd.mnt                                                                                                  nsfs[mnt:[4026532446]]
│ │                                                                                                                                      nsfs          private
│ └─/run/user/0                                                                                                              tmpfs       tmpfs         shared

The actual filesystem mounts can be checked from inside a container such as neutron_l3_agent.

root@cyyoon-c1-openstack-054:~# podman  exec  neutron_l3_agent ip netns
qrouter-90ac5963-2490-4f49-9868-165dd78af743 (id: 0)
fip-c3265f9a-128c-41f1-9c5e-a6e8f0dd6ae3 (id: 1)
 
root@cyyoon-c1-openstack-054:~# podman  exec  neutron_l3_agent  findmnt -oTARGET,SOURCE,FSTYPE,PROPAGATION
TARGET                                                      SOURCE                                                                                                                                FSTYPE  PROPAGATION
/                                                           overlay                                                                                                                               overlay shared
├─/dev                                                      tmpfs                                                                                                                                 tmpfs   private
│ ├─/dev/pts                                                devpts                                                                                                                                devpts  private
│ ├─/dev/mqueue                                             mqueue                                                                                                                                mqueue  private
│ └─/dev/shm                                                shm                                                                                                                                   tmpfs   private
├─/sys                                                      sysfs                                                                                                                                 sysfs   private
│ └─/sys/fs/cgroup                                          cgroup2                                                                                                                               cgroup2 private
├─/proc                                                     proc                                                                                                                                  proc    private
├─/usr/lib/modules                                          /dev/sda1[/usr/lib/modules]                                                                                                           ext4    private
├─/run/netns                                                tmpfs[/netns]                                                                                                                         tmpfs   shared
│ ├─/run/netns/qrouter-90ac5963-2490-4f49-9868-165dd78af743 nsfs[net:[4026532469]]                                                                                                                nsfs    shared
│ └─/run/netns/fip-c3265f9a-128c-41f1-9c5e-a6e8f0dd6ae3     nsfs[net:[4026532594]]                                                                                                                nsfs    shared

 


Summary

At last, Kolla-ansible deployments can leave Docker behind. Podman's daemonless design should make day-to-day operation more stable. I had expected the deployment to be rootless as well, but for practical and performance reasons the containers still run as root. Given how quickly Podman has been evolving, for example dropping CNI in favor of a replacement network stack, it seems plausible that the deployment will move toward rootless as its stability and performance continue to improve.

 

 

 
