
RGW Multisite Configuration in a Rook Operator Environment

With Rook, Ceph can be deployed and managed through a variety of CRDs. If you deploy separate Ceph clusters with Rook on different Kubernetes clusters and want to run them active-active against each other, you can do so with a multisite configuration. Rook provides the realm, zonegroup, and zone needed for multisite as CRDs, and this post describes how they were used in an actual setup.

 

Master Ceph Cluster Configuration

Deploy MetalLB on the master Kubernetes cluster to provide an external RGW endpoint. If a physical load balancer is available in your environment, MetalLB is not required.

root@cy01-ceph231:~# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system
  
root@cy01-ceph231:~# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
 
 
root@cy01-ceph231:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
root@cy01-ceph231:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
root@cy01-ceph231:~# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
 
 
root@cy01-ceph231:~# tee l2.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.10.2.150-10.10.2.160
EOF
  
root@cy01-ceph231:~#  kubectl create -f l2.yaml
configmap/config created
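
Before moving on, it is worth confirming that the MetalLB controller and speaker pods are running and that the address-pool ConfigMap was accepted; a minimal check could look like this:

kubectl get pods -n metallb-system
kubectl get configmap config -n metallb-system -o yaml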

Assign labels to the nodes for the crush map.

root@cy01-ceph231:~#  kubectl label node cy01-ceph231 topology.rook.io/datacenter=dc1
root@cy01-ceph231:~#  kubectl label node cy01-ceph232 topology.rook.io/datacenter=dc1
root@cy01-ceph231:~#  kubectl label node cy01-ceph233 topology.rook.io/datacenter=dc1
root@cy01-ceph231:~#  kubectl label node cy01-ceph231 topology.rook.io/rack=rack1
root@cy01-ceph231:~#  kubectl label node cy01-ceph232 topology.rook.io/rack=rack2
root@cy01-ceph231:~#  kubectl label node cy01-ceph233 topology.rook.io/rack=rack3
   
   
root@cy01-ceph231:~#  kubectl  get nodes -L topology.rook.io/rack -L topology.rook.io/datacenter
NAME           STATUS   ROLES    AGE     VERSION   RACK    DATACENTER
cy01-ceph231   Ready    master   6m12s   v1.19.7   rack1   dc1
cy01-ceph232   Ready    master   5m31s   v1.19.7   rack2   dc1
cy01-ceph233   Ready    master   5m17s   v1.19.7   rack3   dc1

Download the rook 1.5.6 release.

Versions 1.5.7 and later have a bug that prevents the zonegroup from being created; I filed it as an issue:
https://github.com/rook/rook/issues/7318

root@cy01-ceph231:~# cd /home/
root@cy01-ceph231:/home# wget https://github.com/rook/rook/archive/v1.5.6.tar.gz
root@cy01-ceph231:/home# tar xvzf v1.5.6.tar.gz
root@cy01-ceph231:/home# cd rook-1.5.6/cluster/examples/kubernetes/ceph/

The first cluster, the master cluster, is built with the manifests in the "rook-1.5.6/cluster/examples/kubernetes/ceph/" directory.

"operator.yam" 파일에서 별도의 csi를 사용하지 않기 때문에 해당 변수를 false로 변경 한다.

...
  ROOK_CSI_ENABLE_CEPHFS: "false"
  # Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
  ROOK_CSI_ENABLE_RBD: "false"
..

Deploy the CRDs, common resources, and operator for the master cluster, and deploy the rook-config-override ConfigMap to configure the networks used between the MONs and OSDs.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  cat <<EOF >   rook-config-override.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    public network =  10.10.2.0/24
    cluster network = 10.10.3.0/24
    public addr = ""
    cluster addr = ""
    rgw dns name = rgw15.cyuucloud.xyz
    rgw_ops_log_rados = true
    rgw_enable_ops_log = true
    rgw_enable_usage_log = true
    debug rgw = 20
    rgw_log_http_headers = http_x_forwarded_for
    rgw_resolve_cname = true
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  kubectl  create -f  rook-config-override.yaml
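
Since the override settings are only picked up when the daemons start, creating this ConfigMap before cluster.yaml (as done here) saves restarting them later. It can be verified with a plain kubectl query:

kubectl -n rook-ceph get configmap rook-config-override -o yaml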

In cluster.yaml, disable the pg_autoscaler and change the hostPath directory to a distinct name.

  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook-master
...
  mgr:
    modules:
    # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
    # are already enabled by other settings in the cluster CR.
    - name: pg_autoscaler
      enabled: false
...
  network:
    # enable host networking
    provider: host
...
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
      encryptedDevice: "false"
    nodes:
    - name: "cy01-ceph231"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
    - name: "cy01-ceph232"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
    - name: "cy01-ceph233"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
...

Deploy the CephCluster and the toolbox, then check the pod status and the state of Ceph.

 

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f cluster.yaml
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f toolbox.yaml
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl get pod -n rook-ceph
NAME                                                     READY   STATUS      RESTARTS   AGE
rook-ceph-crashcollector-cy01-ceph231-56f767b5fd-5fg56   1/1     Running     0          8m58s
rook-ceph-crashcollector-cy01-ceph232-7b56599fc5-gxbx4   1/1     Running     0          8m57s
rook-ceph-crashcollector-cy01-ceph233-7d5b45648d-8g24b   1/1     Running     0          8m57s
rook-ceph-crashcollector-cy01-ceph181-5cb8d846d5-dh58c   1/1     Running     0          9m21s
rook-ceph-crashcollector-cy01-ceph182-6b894d5696-wj2f2   1/1     Running     0          9m40s
rook-ceph-crashcollector-cy01-ceph183-ddcdcf95c-fk7fp    1/1     Running     0          9m12s
rook-ceph-mgr-a-7c86568c67-h928b                         1/1     Running     0          9m12s
rook-ceph-mon-a-6996549b7d-nl6h6                         1/1     Running     0          9m49s
rook-ceph-mon-b-6579f5b459-scqhv                         1/1     Running     0          9m31s
rook-ceph-mon-c-678987585f-ftrdv                         1/1     Running     0          9m21s
rook-ceph-operator-6b8b9958c5-b4xwd                      1/1     Running     0          10m
rook-ceph-osd-0-c4996c49-ffvmn                           1/1     Running     0          8m58s
rook-ceph-osd-1-68cff4b597-w2ckg                         1/1     Running     0          8m58s
rook-ceph-osd-2-6fb7c9889-zcp5t                          1/1     Running     0          8m57s
rook-ceph-osd-prepare-cy01-ceph231-hjwtj                 0/1     Completed   0          9m11s
rook-ceph-osd-prepare-cy01-ceph232-75472                 0/1     Completed   0          9m10s
rook-ceph-osd-prepare-cy01-ceph233-kxpjd                 0/1     Completed   0          9m10s
rook-ceph-tools-875444c55-d8wh9                          1/1     Running     0          9m57s
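
To check the state of Ceph itself, the toolbox pod can be used; a quick sanity check is to confirm HEALTH_OK and that the datacenter/rack labels assigned earlier show up as crush buckets:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree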

Slave Ceph Cluster Configuration

On the slave, deploy MetalLB first as well; it provides the endpoint used to connect with the master cluster.

root@cy01-ceph241:~# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system
  
root@cy01-ceph241:~# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
 
 
root@cy01-ceph241:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
root@cy01-ceph241:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
root@cy01-ceph241:~# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
 
 
root@cy01-ceph241:~# tee l2.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.10.2.140-10.10.2.149
EOF
  
root@cy01-ceph241:~#  kubectl create -f l2.yaml
configmap/config created

Download the rook 1.5.6 release.

root@cy01-ceph241:~# cd /home/
root@cy01-ceph241:/home# wget https://github.com/rook/rook/archive/v1.5.6.tar.gz
root@cy01-ceph241:/home# tar xvzf v1.5.6.tar.gz
root@cy01-ceph241:/home# cd rook-1.5.6/cluster/examples/kubernetes/ceph/

The second cluster, the slave cluster, is likewise built with the manifests in the "rook-1.5.6/cluster/examples/kubernetes/ceph/" directory.

"operator.yam" 파일에서 별도의 csi를 사용하지 않기 때문에 해당 변수를 false로 변경 한다.

...
  ROOK_CSI_ENABLE_CEPHFS: "false"
  # Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
  ROOK_CSI_ENABLE_RBD: "false"
..

Deploy the CRDs, common resources, and operator for the slave cluster, and deploy the rook-config-override ConfigMap to configure the networks used between the MONs and OSDs.

root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  cat <<EOF >   rook-config-override.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    public network =  10.10.2.0/24
    cluster network = 10.10.3.0/24
    public addr = ""
    cluster addr = ""
    rgw dns name = rgw-slave.cyuucloud.xyz
    rgw_ops_log_rados = true
    rgw_enable_ops_log = true
    rgw_enable_usage_log = true
    debug rgw = 20
    rgw_log_http_headers = http_x_forwarded_for
    rgw_resolve_cname = true
EOF
root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  kubectl  create -f  rook-config-override.yaml

In cluster.yaml, disable the pg_autoscaler and change the hostPath directory to a distinct name.

  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook-master
...
  mgr:
    modules:
    # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
    # are already enabled by other settings in the cluster CR.
    - name: pg_autoscaler
      enabled: false
...
  network:
    # enable host networking
    provider: host
...
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
      encryptedDevice: "false"
    nodes:
    - name: "cy01-ceph241"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
    - name: "cy01-ceph242"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
    - name: "cy01-ceph243"
      devices:
      - name: "vdd"
        config:
          deviceClass: "hdd"
          storeType: bluestore
...

Deploy the CephCluster and the toolbox, then check the pod status and the state of Ceph.

root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f cluster.yaml
root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f toolbox.yaml
root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl get pod -n rook-ceph
NAME                                                     READY   STATUS      RESTARTS   AGE
rook-ceph-crashcollector-cy01-ceph241-56f767b5fd-5fg56   1/1     Running     0          8m58s
rook-ceph-crashcollector-cy01-ceph242-7b56599fc5-gxbx4   1/1     Running     0          8m57s
rook-ceph-crashcollector-cy01-ceph243-7d5b45648d-8g24b   1/1     Running     0          8m57s
rook-ceph-crashcollector-cy01-ceph181-5cb8d846d5-dh58c   1/1     Running     0          9m21s
rook-ceph-crashcollector-cy01-ceph182-6b894d5696-wj2f2   1/1     Running     0          9m40s
rook-ceph-crashcollector-cy01-ceph183-ddcdcf95c-fk7fp    1/1     Running     0          9m12s
rook-ceph-mgr-a-7c86568c67-h928b                         1/1     Running     0          9m12s
rook-ceph-mon-a-6996549b7d-nl6h6                         1/1     Running     0          9m49s
rook-ceph-mon-b-6579f5b459-scqhv                         1/1     Running     0          9m31s
rook-ceph-mon-c-678987585f-ftrdv                         1/1     Running     0          9m21s
rook-ceph-operator-6b8b9958c5-b4xwd                      1/1     Running     0          10m
rook-ceph-osd-0-c4996c49-ffvmn                           1/1     Running     0          8m58s
rook-ceph-osd-1-68cff4b597-w2ckg                         1/1     Running     0          8m58s
rook-ceph-osd-2-6fb7c9889-zcp5t                          1/1     Running     0          8m57s
rook-ceph-osd-prepare-cy01-ceph241-hjwtj                 0/1     Completed   0          9m11s
rook-ceph-osd-prepare-cy01-ceph242-75472                 0/1     Completed   0          9m10s
rook-ceph-osd-prepare-cy01-ceph243-kxpjd                 0/1     Completed   0          9m10s
rook-ceph-tools-875444c55-d8wh9                          1/1     Running     0          9m57s
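
The same toolbox check used on the master applies here to confirm the slave cluster is healthy:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s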

 

Multisite Configuration

Now, create the realm and zonegroup on the master ceph cluster.

root@cy01-ceph231:~# cd /home/rook-1.5.6/cluster/examples/kubernetes/ceph
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# tee master-m1.yaml << EOF
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: cy-realm
  namespace: rook-ceph
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: cy-zonegroup
  namespace: rook-ceph
spec:
  realm: cy-realm
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f master-m1.yaml
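
Since the realm and zonegroup are plain CRDs, they can be checked with kubectl before continuing (resource names as defined by the Rook CRDs):

kubectl -n rook-ceph get cephobjectrealm,cephobjectzonegroup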

Deploy the master ceph cluster's zone and an RGW that uses that zone.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# tee master-m2.yaml << EOF
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: cy-zone-a
  namespace: rook-ceph
spec:
  zoneGroup: cy-zonegroup
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    replicated:
      size: 3
    parameters:
      compression_mode: none
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: cy-zone-rgw
  namespace: rook-ceph
spec:
  gateway:
    type: s3
    port: 80
    instances: 1
  zone:
    name: cy-zone-a
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl create -f master-m2.yaml

Check the created zone and its pools.

[root@rook-ceph-tools-7bf6744b9c-z8c8j /]# ceph osd pool ls
device_health_metrics
.rgw.root
cy-zone-a.rgw.control
cy-zone-a.rgw.meta
cy-zone-a.rgw.log
cy-zone-a.rgw.buckets.index
cy-zone-a.rgw.buckets.non-ec
cy-zone-a.rgw.buckets.data
[root@rook-ceph-tools-7bf6744b9c-z8c8j /]# radosgw-admin zone list
{
    "default_info": "a42dae9d-c93e-4246-a88a-409ab6ab84cd",
    "zones": [
        "cy-zone-a"
    ]
}

Create a LoadBalancer-type service for external client access.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# tee master-m3.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-cy-store-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: cy-zone-rgw
spec:
  ports:
  - name: rgw2
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: cy-zone-rgw
  sessionAffinity: None
  type: LoadBalancer
EOF
 
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl  create -f master-m3.yaml
 
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# kubectl  get svc -n rook-ceph
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
rook-ceph-mgr                     ClusterIP      10.233.57.162   <none>        9283/TCP       16h
rook-ceph-mgr-dashboard           ClusterIP      10.233.45.222   <none>        8443/TCP       16h
rook-ceph-rgw-cy-store-external   LoadBalancer   10.233.26.223   10.10.2.150   80:31076/TCP   145m
rook-ceph-rgw-cy-zone-rgw         ClusterIP      10.233.58.116   <none>        80/TCP         145m
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# curl 10.10.2.150
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#
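
If the rgw dns name set in rook-config-override (rgw15.cyuucloud.xyz) resolves to the external IP 10.10.2.150, the gateway can also be reached by name, which is what the s3cmd host_base/host_bucket settings used later rely on:

curl http://rgw15.cyuucloud.xyz/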

"radosgw-admin zonegroup get " 을 하게 되면  slave쪽 cluster가 동기화하기 위한 master의 zone endpoint와 zonegroup 의 endpoint 가 기본적으로 kubernetes의 cluster ip 로 바인딩 되기 때문에 다른 kubernetes cluster에 구성된 slave ceph cluster에서는 접근할 수 없다.

해당 부분을 위에서 외부 접속이 가능한 external 아이피로 바인딩을 수정 하여 저장 한다.

[root@rook-ceph-tools-7bf6744b9c-mxgd8 /]# radosgw-admin zonegroup get
{
    "id": "4ec656c7-6947-4559-bdc6-37bc2c309ead",
    "name": "cy-zonegroup",
    "api_name": "cy-zonegroup",
    "is_master": "true",
    "endpoints": [
        "http://10.10.2.150:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "662d26d0-c011-48b8-9726-9eabf4844ac0",
    "zones": [
        {
            "id": "662d26d0-c011-48b8-9726-9eabf4844ac0",
            "name": "cy-zone-a",
            "endpoints": [
                "http://10.233.58.116:80"
            ],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "e084cbbf-d89b-4fa3-b25b-23fe7efc773e",
    "sync_policy": {
        "groups": []
    }
}
 
 
[root@rook-ceph-tools-7bf6744b9c-mxgd8 /]# radosgw-admin zonegroup get > a.json
[root@rook-ceph-tools-7bf6744b9c-mxgd8 /]# vi a.json
...
    "endpoints": [
        "http://10.233.58.116:80" -> "http://10.10.2.150:80" 으로 변경
    ],
...
 
            "name": "cy-zone-a",
            "endpoints": [
                "http://10.233.58.116:80" -> "http://10.10.2.150:80" 으로 변경
            ],
...
[root@rook-ceph-tools-7bf6744b9c-mxgd8 /]# radosgw-admin zonegroup set   --infile=a.json
[root@rook-ceph-tools-7bf6744b9c-mxgd8 /]# radosgw-admin period update --commit
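
A quick way to confirm that the committed period now carries the external endpoints is to read them back from the toolbox pod:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin period get
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin zonegroup get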

Next, a user named after the realm can be seen on the master; this account is what the slave uses to authenticate against the master. It is created by the operator, and its base64-encoded credentials can be found in a secret.

[root@rook-ceph-tools-7bf6744b9c-z8c8j /]# radosgw-admin user list
[
    "rook-ceph-internal-s3-user-checker-5d4e9841-f7ba-4888-91d2-deb4ae1c746a",
    "cy-realm-system-user"
]
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# kubectl  get secret -n rook-ceph cy-realm-keys     -o yaml
apiVersion: v1
data:
  access-key: WEVSWU0xb3hQVFZlVFZ0cmRGST0=
  secret-key: Y2psYVhqWXVLVVkzUUhNOE5TUldMRmRpUGl3L1VHTThVMkZBVGc9PQ==
kind: Secret
...
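
If you want to see the actual keys of the realm system user, for example to cross-check them against "radosgw-admin user info --uid=cy-realm-system-user", they can be decoded directly from the secret:

kubectl -n rook-ceph get secret cy-realm-keys -o jsonpath='{.data.access-key}' | base64 -d
kubectl -n rook-ceph get secret cy-realm-keys -o jsonpath='{.data.secret-key}' | base64 -d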

With the information above, create the authentication credentials as a secret on the slave ceph cluster.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  cd ../ceph-slave
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave#  tee  slave-s0.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: cy-realm-keys
  namespace: rook-ceph
data:
  access-key: WEVSWU0xb3hQVFZlVFZ0cmRGST0=
  secret-key: Y2psYVhqWXVLVVkzUUhNOE5TUldMRmRpUGl3L1VHTThVMkZBVGc9PQ==
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave#   kubectl create -f  slave-s0.yaml
secret/cy-realm-keys created

Set the slave to pull the realm from the master via the endpoint used to fetch the master's realm information, and define a zonegroup with the same name so that the zonegroup can be registered through the CRD.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave#  tee slave-s1.yaml  << EOF
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: cy-realm
  namespace: rook-ceph
spec:
  pull:
    endpoint: http://10.10.2.150:80
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: cy-zonegroup
  namespace: rook-ceph
spec:
  realm: cy-realm
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave#    kubectl  create -f slave-s1.yaml   

Check the zonegroup and realm with ceph commands.

[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin  zonegroup list
{
    "default_info": "ee5c640b-7eb3-4dea-b726-f1e01c70208d",
    "zonegroups": [
        "cy-zonegroup"
    ]
}
[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin  realm list
{
    "default_info": "3a57ae7f-8065-41f0-b71c-6132ab0ac15a",
    "realms": [
        "cy-realm"
    ]
}

Create a zone named cy-zone-b for the slave ceph cluster and deploy an RGW that uses that zone.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# tee slave-s2.yaml  << EOF
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: cy-zone-b
  namespace: rook-ceph
spec:
  zoneGroup: cy-zonegroup
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    replicated:
      size: 3
    parameters:
      compression_mode: none
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: cy-zone-rgw
  namespace: rook-ceph
spec:
  gateway:
    type: s3
    port: 80
    instances: 1
  zone:
    name: cy-zone-b
EOF
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# kubectl create -f slave-s2.yaml

Check the created zone and pools with ceph commands. Checking the sync status shows the synchronization state with the master ceph cluster.

[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin  zone list
{
    "default_info": "7647e90b-b42e-41c5-8e95-729878c0bc0a",
    "zones": [
        "cy-zone-b"
    ]
}
[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# ceph osd pool ls
device_health_metrics
.rgw.root
cy-zone-b.rgw.control
cy-zone-b.rgw.meta
cy-zone-b.rgw.log
cy-zone-b.rgw.buckets.index
cy-zone-b.rgw.buckets.non-ec
cy-zone-b.rgw.buckets.data
 
 
 
[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin   sync status
          realm 3a57ae7f-8065-41f0-b71c-6132ab0ac15a (cy-realm)
      zonegroup ee5c640b-7eb3-4dea-b726-f1e01c70208d (cy-zonegroup)
           zone 7647e90b-b42e-41c5-8e95-729878c0bc0a (cy-zone-b)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: a42dae9d-c93e-4246-a88a-409ab6ab84cd (cy-zone-a)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
 
 
 
[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin user list
[
    "dashboard-admin",
    "cy-realm-system-user",
    "rook-ceph-internal-s3-user-checker-1bc26b63-1a97-49e6-b53e-809f08e9fc2e"
]

On the slave ceph cluster, also create a LoadBalancer-type service for external client access.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# tee slave-s4.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-cy-store-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: cy-zone-rgw
spec:
  ports:
  - name: rgw2
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: cy-zone-rgw
  sessionAffinity: None
  type: LoadBalancer
EOF
 
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# kubectl  create -f slave-s4.yaml
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# kubectl  get svc -n rook-ceph
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
rook-ceph-mgr                     ClusterIP      10.233.58.3     <none>        9283/TCP       16h
rook-ceph-mgr-dashboard           ClusterIP      10.233.32.41    <none>        8443/TCP       16h
rook-ceph-rgw-cy-store-external   LoadBalancer   10.233.38.156   10.10.2.140   80:30168/TCP   136m
rook-ceph-rgw-cy-zone-rgw         ClusterIP      10.233.0.222    <none>        80/TCP         142m
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# curl  10.10.2.140
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave#
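
As on the master, if rgw-slave.cyuucloud.xyz (the rgw dns name in the slave's rook-config-override) resolves to the external IP 10.10.2.140, the slave gateway can be reached by name as well:

curl http://rgw-slave.cyuucloud.xyz/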

On the slave as well, the endpoint of the zone created on the slave defaults to a ClusterIP that the master cannot reach, so change it to the external IP.

[root@rook-ceph-tools-7bf6744b9c-nqzvv /]# radosgw-admin zonegroup get
{
    "id": "4ec656c7-6947-4559-bdc6-37bc2c309ead",
    "name": "cy-zonegroup",
    "api_name": "cy-zonegroup",
    "is_master": "true",
    "endpoints": [
        "http://10.10.2.150:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "662d26d0-c011-48b8-9726-9eabf4844ac0",
    "zones": [
        {
            "id": "662d26d0-c011-48b8-9726-9eabf4844ac0",
            "name": "cy-zone-a",
            "endpoints": [
                "http://10.10.2.150:80"
            ],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "7666792c-851d-4a7e-94be-1e56f4b6ebaa",
            "name": "cy-zone-b",
            "endpoints": [
                "http://10.233.0.222:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "e084cbbf-d89b-4fa3-b25b-23fe7efc773e",
    "sync_policy": {
        "groups": []
    }
}
 
 
[root@rook-ceph-tools-7bf6744b9c-nqzvv /]# radosgw-admin zonegroup get > a.json
[root@rook-ceph-tools-7bf6744b9c-nqzvv /]# vi a.json
...
            "name": "cy-zone-b",
            "endpoints": [
                "http://10.233.0.222:80" -> "http://10.10.2.140:80" 으로 변경
            ],
...
[root@rook-ceph-tools-7bf6744b9c-nqzvv /]# radosgw-admin zonegroup set   --infile=a.json
[root@rook-ceph-tools-7bf6744b9c-nqzvv /]# radosgw-admin period update --commit
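
After committing on the slave side, both zones should list externally reachable endpoints and the sync status should report metadata and data as caught up; this can be re-checked from the toolbox:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin zonegroup get
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin sync status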

For testing, create a test user on the master cluster using a CephObjectStoreUser.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# tee ob-user1.yaml << EOF
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: test-user
  namespace: rook-ceph
spec:
  store: cy-zone-rgw
  displayName: "test user"
EOF
 
 
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# kubectl create -f ob-user1.yaml

The creation of this user can be confirmed on the master ceph cluster.

[root@rook-ceph-tools-7bf6744b9c-z8c8j /]# radosgw-admin user info --uid=test-user
{
    "user_id": "test-user",
    "display_name": "test user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "test-user",
            "access_key": "YN1T944RIUSN5S12FD9G",
            "secret_key": "Bx8mZPyAwZTDBqLuDgWL6EpKgxV16rqrtwCymsBo"

The same user information can also be seen on the slave ceph cluster.

[root@rook-ceph-tools-7bf6744b9c-j7qhz /]# radosgw-admin user list
[
    "dashboard-admin",
    "test-user",
    "cy-realm-system-user",
    "rook-ceph-internal-s3-user-checker-1bc26b63-1a97-49e6-b53e-809f08e9fc2e"
]

Point the s3cmd configuration at the RGW endpoint domain created for the master ceph cluster, then use the client to create a bucket and upload an object.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph#  vi ~/.s3cfg
[default]
...
access_key = YN1T944RIUSN5S12FD9G
...
host_base = rgw15.cyuucloud.xyz
host_bucket = %(bucket)s.rgw15.cyuucloud.xyz
...
secret_key = Bx8mZPyAwZTDBqLuDgWL6EpKgxV16rqrtwCymsBo
...
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# s3cmd  mb  s3://test
Bucket 's3://test/' created
root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# s3cmd  put --acl-public /etc/passwd s3://test/password
upload: '/etc/passwd' -> 's3://test/password'  [1 of 1]
 1866 of 1866   100% in    0s    40.20 kB/s  done
Public URL of the object is: http://test/password

After setting up the .s3cfg file on the slave side as well, you can see that the created bucket and object have been synchronized.

root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# s3cmd ls s3://test
2021-03-02 00:57      1866   s3://test/password

Now create a bucket from the slave side.

root@cy01-ceph241:/home/rook-1.5.6/cluster/examples/kubernetes/ceph-slave# s3cmd mb s3://ar11/
Bucket 's3://ar11/' created

The bucket also appears on the master. In other words, the two sites synchronize to each other's RGW endpoints in an active-active fashion.

root@cy01-ceph231:/home/rook-1.5.6/cluster/examples/kubernetes/ceph# s3cmd ls
2021-03-02 01:11  s3://ar11
2021-03-02 00:53  s3://test-123
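
As a final check, "radosgw-admin sync status" can be run from the toolbox on either cluster; both directions should report that metadata and data are caught up:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin sync status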
