Prerequisites
Download Rook
# Download Rook, using version 1.0.5 as an example
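The download command itself was not captured above. One common way to fetch the 1.0.5 source (an assumption here: the GitHub release archive, which unpacks into the rook-1.0.5 directory used in the next step) is:
$ wget https://github.com/rook/rook/archive/v1.0.5.tar.gz
$ tar -zxvf v1.0.5.tar.gz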
Install Rook
- Install the related components; change into the example manifests directory (the apply step is sketched after this command)
$ cd rook-1.0.5/cluster/examples/kubernetes/ceph/
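Only the directory change is shown above. With Rook 1.0.x the operator and its common resources are normally created from the manifests in this directory (file names as shipped with the Rook 1.0 examples; verify against your copy):
$ kubectl create -f common.yaml
$ kubectl create -f operator.yaml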
- Check that the related pods have started
$ kubectl -n rook-ceph get pod
Create a Ceph cluster with Rook
- Modify the cluster-test.yaml file in this directory (saved here as hq-cluster.yaml) so that it looks like the following:
hq-cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.1-20190430
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  network:
    hostNetwork: true
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    location:
    config:
    nodes:
    - name: "node4"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: filestore
    - name: "node5"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: filestore
    - name: "node6"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: filestore
This configuration means:
- useAllNodes: false, do not act on every node; the nodes are specified manually
- useAllDevices: false, do not pick up every free disk; the devices are specified manually
- nodes: an array that selects the disks sdb, sdc and sdd on node4, node5 and node6
- storeType: filestore, the OSDs use the Ceph filestore backend
Create the cluster from hq-cluster.yaml:
$ kubectl create -f hq-cluster.yaml
Check that the related components have started
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-agent-2zxz4 1/1 Running 1 60d
rook-ceph-agent-hrj58 1/1 Running 1 60d
rook-ceph-agent-m5hns 1/1 Running 0 60d
rook-ceph-mgr-a-84dcffc8f6-hwlcc 1/1 Running 0 12h
rook-ceph-mon-a-967c6dcbd-rtm74 1/1 Running 0 3d15h
rook-ceph-mon-b-b56bcf68c-xkj49 1/1 Running 1 60d
rook-ceph-mon-c-5b9984bccd-jz7fw 1/1 Running 0 57d
rook-ceph-operator-68cb95fc7c-mvczx 1/1 Running 1 3d15h
rook-ceph-osd-0-6cd844b7db-kc7p7 1/1 Running 1 60d
rook-ceph-osd-1-fc784fb6d-nrvpx 1/1 Running 0 3d15h
rook-ceph-osd-2-79cf8d88f6-crcw7 1/1 Running 1 60d
rook-ceph-osd-3-5787545c8-5chm4 1/1 Running 0 3d15h
rook-ceph-osd-4-76b64b8974-ffmnn 1/1 Running 1 60d
rook-ceph-osd-5-99c99748-gk7pn 1/1 Running 0 3d15h
rook-ceph-osd-6-5dbfdf6cb7-sqr2z 1/1 Running 0 4d14h
rook-ceph-osd-7-568f65bf8d-w475v 1/1 Running 0 4d14h
rook-ceph-osd-8-598b9565d5-qldtn 1/1 Running 0 4d14h
rook-ceph-osd-prepare-node4-vqst4 0/2 Completed 0 12h
rook-ceph-osd-prepare-node5-zplb2 0/2 Completed 0 12h
rook-ceph-osd-prepare-node6-nkldv 0/2 Completed 1 12h
rook-discover-4ddcl 1/1 Running 2 60d
rook-discover-gqcht 1/1 Running 0 60d
rook-discover-qvwbn 1/1 Running 1 60d
Create a StorageClass for use in Kubernetes
- The storageclass.yaml in the current directory already contains the related resources, so it can be created directly (a sketch of its contents follows the command below)
$ kubectl apply -f storageclass.yaml
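For reference, the storageclass.yaml shipped with the Rook 1.0.x Ceph examples defines roughly the following pool and StorageClass; treat this as a sketch, since the pool name, replica size and fstype may differ in your copy:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs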
- Check that the StorageClass was created successfully
$ kubectl get sc -n rook-ceph
The newly created StorageClass rook-ceph-block can now be used in Kubernetes, for example from a PVC as sketched below.
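A minimal usage sketch, assuming a hypothetical PVC name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc            # hypothetical name
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # hypothetical size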
Other components
- Install ceph-tools (the toolbox pod) so that ceph commands can be run against the cluster
$ kubectl apply -f toolbox.yaml
$ kubectl exec -ti rook-ceph-tools-6544484c68-m64vz -n rook-ceph bash
[....]$ ceph status
  cluster:
    id:     db4f6f7a-5606-4a7d-9eba-9b4901cd7a38
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum a,b,c (age 3d)
    mgr: a(active, since 12h)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 9 osds: 9 up (since 3d), 9 in (since 3d)
  data:
    pools:   3 pools, 300 pgs
    objects: 5.55k objects, 17 GiB
    usage:   30 GiB used, 366 GiB / 396 GiB avail
    pgs:     300 active+clean
  io:
    client: 1.2 KiB/s rd, 5.7 KiB/s wr, 2 op/s rd, 0 op/s wr
- dashboard: the cluster manifest (cluster-test.yaml, here hq-cluster.yaml) has a dashboard option; when it is set to true, the dashboard is deployed automatically. Look up the Service used to reach the dashboard login:
$ kubectl get svc -n rook-ceph | grep dashboard
rook-ceph-mgr-dashboard LoadBalancer 10.233.9.154 3.1.20.51 8443:32600/TCP 60d
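The dashboard login user is admin; in Rook 1.0 the password is stored in a Kubernetes Secret and can typically be read as follows (secret name as documented by Rook, verify it in your cluster):
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode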