
k8s — Data Management

Published: 2024-01-31 15:19:03


    • Volume:
      • emptyDir
      • hostPath volume:
      • External storage providers
        • Reclaiming the PV (deleting the PVC):
        • MySQL with PV and PVC:

Volume:

A Volume persists data beyond the life of any single container: its lifecycle is independent of the containers, so even if a container in a Pod fails unexpectedly, the volume's data is preserved. All containers in a Pod can share the same Volume, each mounting it at its own path.

emptyDir

An emptyDir volume is an initially empty directory on the host. It is persistent from a container's point of view but not from the Pod's: the lifecycle of an emptyDir volume matches the Pod's, so deleting the Pod also deletes the volume.

apiVersion: v1
kind: Pod
metadata:
  name: empty-pod
spec:
  containers:
  - image: busybox
    name: test
    volumeMounts:
    - mountPath: /aaa
      name: empty
    args:
    - /bin/sh
    - -c
    - echo "it's test" > /aaa/aaa; sleep 300000000000
  - image: busybox
    name: test1
    volumeMounts:
    - mountPath: /hello
      name: empty
    args:
    - /bin/sh
    - -c
    - cat /hello/aaa; sleep 3000000
  volumes:
  - name: empty
    emptyDir: {}

This Pod contains two containers, and both mount the same volume; the mount paths inside the containers differ, but they point to the same volume.

Check the pod:

[root@k8smaster emptydir]# kubectl    get    pod  -o  wide
NAME        READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
empty-pod   2/2     Running   0          2m8s   10.244.1.86   k8snode2   <none>           <none>

Check the logs:

[root@k8smaster emptydir]# kubectl    logs    empty-pod test
[root@k8smaster emptydir]# kubectl    logs    empty-pod test1
it's test

The first container's log is empty because it only writes to the file;
the second container's log shows the data that the first container wrote.

Go to the node and inspect the mounted directory:

[root@k8snode2 ~]# docker  inspect  2agc5
"Mounts": [{"Type": "bind","Source": "//var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumesty-dir/empty
[root@k8snode2 ~]# docker  inspect  hgjq2
"Source": "//var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumesty-dir/empty

They are two different containers, but the source directory they mount is the same.

Check the directory:

[root@k8snode2 7]# cd   /var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumesty-dir/empty
[root@k8snode2 empty]# ls
aaa
[root@k8snode2 empty]# cat  aaa 
it's test

Once the Pod is deleted, this directory no longer exists.

hostPath volume:

A hostPath volume shares a directory on the host into the Pod's containers.

[root@k8smaster emptydir]# kubectl   edit  pod  --namespace=kube-system   kube-apiserver-k8smaster
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs

With this kind of volume the data survives Pod deletion, but if the host itself fails, the data is lost.
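As a minimal sketch (the Pod name and paths here are hypothetical, not taken from the cluster above), a Pod using a hostPath volume looks like this:

```yaml
# Hypothetical example: share the host directory /data/hostpath-demo
# into the container at /data; files written there survive Pod deletion.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod            # hypothetical name
spec:
  containers:
  - image: busybox
    name: test
    args: [/bin/sh, -c, "sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: host-data
  volumes:
  - name: host-data
    hostPath:
      path: /data/hostpath-demo # directory on the node
      type: DirectoryOrCreate   # create it if it does not already exist
```

Because the data lives on one particular node, the Pod only sees the same files again if it is scheduled back onto that node.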

External storage providers

Backends such as Ceph or GlusterFS are independent of the Kubernetes cluster, so even if the cluster goes down, the data is not lost.

PV (PersistentVolume) and PVC (PersistentVolumeClaim):
A PV is a piece of storage in an external storage system; it is persistent, and its lifecycle is independent of any Pod.
A PVC is a request for a PV: when a user creates a PVC, specifying the required size, access mode, and other attributes, Kubernetes finds a PV that satisfies it.
Kubernetes supports many PersistentVolume types: NFS, Ceph, EBS, and so on.

Before creating the PV and PVC, set up an NFS server.

NFS server: 192.168.19.163

Install on all nodes:

yum   -y install  rpcbind  nfs-utils 

On the NFS node:

[root@localhost ~]# vim   /etc/exports
/volume    *(rw,sync,no_root_squash)

This exports /volume to all clients (*), read-write (rw), with synchronous writes (sync), and without mapping root to an unprivileged user (no_root_squash).

Create the directory:

[root@localhost ~]# mkdir   /volume

Start the service on all nodes:

systemctl   start   rpcbind  && systemctl    enable  rpcbind

On the NFS node:

[root@localhost ~]# systemctl start   nfs-server.service   && systemctl enable   nfs-server.service

Disable the firewall and SELinux:

[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop   firewalld

From any node other than the NFS node, test whether the NFS export is reachable:

[root@k8smaster emptydir]# showmount   -e   192.168.19.163
Export list for 192.168.19.163:
/volume *

Create the PV:

[root@k8smaster pv]# cat    pv.yml
apiVersion: v1
kind: PersistentVolume        ## the type is PV
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi              ## the PV's capacity
  accessModes:
  - ReadWriteOnce             ## access mode: read-write, can be mounted by a single node
  persistentVolumeReclaimPolicy: Recycle   ## the PV's reclaim policy
  storageClassName: nfs       ## the PV's class; a PVC can request a PV of this class
  nfs:
    path: /volume/mypv        ## must be created on the NFS node by hand, or it will fail
    server: 192.168.19.163

The other access modes:
ReadWriteMany  ## read-write, can be mounted by multiple nodes
ReadOnlyMany   ## read-only, can be mounted by multiple nodes

The reclaim policies:
Recycle  ## wipe the data in the PV
Retain   ## the data must be cleaned up manually
Delete   ## delete the corresponding storage resource on the storage provider
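The reclaim policy of an existing PV can also be changed in place rather than by editing and re-applying the manifest; this is a sketch using kubectl patch against the mypv object created above (it requires a running cluster):

```shell
# Switch mypv's reclaim policy from Recycle to Retain without re-creating it
kubectl patch pv mypv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```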

Check the PV:

[root@k8smaster pv-pvs]# kubectl    get   pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Available           nfs                     14s

Create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

The PVC specifies the resource type, access mode, requested capacity, and class.

Check the PVC:

[root@k8smaster pv-pvs]# kubectl    get   pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    mypv     1Gi        RWO            nfs            14s

Check the PV again:

[root@k8smaster pv-pvs]# kubectl    get   pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Bound    default/mypvc   nfs                     4m22s

The PVC has been successfully bound to the PV; before the PVC was created, the PV's status was Available.

Create a Pod that uses this storage:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: busybox
    name: test
    args:
    - /bin/sh
    - -c
    - sleep 300000000000
    volumeMounts:
    - mountPath: /aaaa
      name: myvolu
  volumes:
  - name: myvolu
    persistentVolumeClaim:
      claimName: mypvc

Check the Pod:

[root@k8smaster pv-pvs]# kubectl   get pod   
NAME    READY   STATUS    RESTARTS   AGE
mypod   1/1     Running   0          38s

Test it by creating a file:

[root@k8smaster pv-pvs]# kubectl   exec   mypod   touch   /aaaa/test

Check the directory on the NFS node:

[root@localhost ~]# cd   /volume/mypv/
[root@localhost mypv]# ls
test

The file has been saved under /volume/mypv/ on the NFS node.

Reclaiming the PV (deleting the PVC):

First delete the Pod:

[root@k8smaster pv-pvs]# kubectl    delete pod   mypod 
pod "mypod" deleted

Delete the PVC:

[root@k8smaster pv-pvs]# kubectl   delete   pvc  mypvc 
persistentvolumeclaim "mypvc" deleted

Check the PV's status:

[root@k8smaster pv-pvs]# kubectl   get     pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Available           nfs                     16m

The status has changed back to Available, and the data on the NFS node is gone as well.

The data was wiped because the PV's reclaim policy is Recycle.
To keep the data, the policy must be changed to Retain.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /volume/mypv
    server: 192.168.19.163

Apply the file.

Check the updated PV:

[root@k8smaster pv-pvs]# kubectl   get    pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Retain           Available           nfs                     20m

MySQL with PV and PVC:

Create the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /volume/mysql-pv
    server: 192.168.2.200

Create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Check the PV and PVC:

[root@k8smaster mysql]# kubectl    get  pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-pv   1Gi        RWO            Retain           Available           nfs                     45s
[root@k8smaster mysql]# kubectl    get  pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pvc   Bound    mysql-pv   1Gi        RWO            nfs            5s

Deploy MySQL:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-volume
        persistentVolumeClaim:
          claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    app: mysql

Connect to MySQL (the hostname mysql resolves to the Service above through cluster DNS):

[root@k8smaster mysql]# kubectl    run   -it   --rm  --image=mysql:5.7  --restart=Never   mysql-client  -- mysql -h mysql -ppassword

Inside MySQL, create a database and a table, then insert some rows:

mysql> create  database   aaa;
Query OK, 1 row affected (0.00 sec)

mysql> use  aaa;
Database changed
mysql> create  table  test(id  int,name varchar(20));
Query OK, 0 rows affected (0.03 sec)

mysql> insert   into   test values (1,"aa");
Query OK, 1 row affected (0.03 sec)

mysql> insert   into   test values (2,"bb");
Query OK, 1 row affected (0.00 sec)

mysql> insert   into   test values (3,"cc"),(4,"dd");
mysql> select   *  from   test;
+------+------+
| id   | name |
+------+------+
|    1 | aa   |
|    2 | bb   |
|    3 | cc   |
|    4 | dd   |
+------+------+

Check the pod:

[root@k8smaster mysql]# kubectl   get   pod   -o  wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
mysql-7774bd7c76-5sk2l   1/1     Running   0          6m33s   10.244.1.90   k8snode2   <none>           <none>

The pod is running on node2.

Shut down node2 to simulate a failure.

After waiting a while, the MySQL service has been moved to node1:

[root@k8smaster mysql]# kubectl    get   pod   -o wide
NAME                     READY   STATUS        RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mysql-7774bd7c76-5sk2l   1/1     Terminating   0          14m   10.244.1.90   k8snode2   <none>           <none>
mysql-7774bd7c76-6w49h   1/1     Running       0          73s   10.244.2.83   k8snode1   <none>           <none>

Log in to MySQL and verify the data:

[root@k8smaster mysql]# kubectl     run   -it   --rm  --image=mysql:5.7   --restart=Never  mysql-client  -- mysql   -h mysql -ppassword
mysql> show  databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

mysql> use  aaa;
mysql> show   tables;
+---------------+
| Tables_in_aaa |
+---------------+
| test          |
+---------------+
1 row in set (0.00 sec)

mysql> select  *  from   test;
+------+------+
| id   | name |
+------+------+
|    1 | aa   |
|    2 | bb   |
|    3 | cc   |
|    4 | dd   |
+------+------+
4 rows in set (0.00 sec)

The data was not lost.