
# Storage Solutions

## Storage Type Comparison

| Solution   | Best for              | Pros                     | Cons                     |
|------------|-----------------------|--------------------------|--------------------------|
| Local Path | Single node / testing | Simple and efficient     | Not shared across nodes  |
| NFS        | Shared storage        | Cross-node sharing       | Middling performance     |
| Ceph       | Production            | Highly available         | Complex to operate       |
| Longhorn   | K8s-native            | Lightweight, easy to use | Requires K8s             |

## Local Path Provisioner (Default)

### How It Works

Local Path Provisioner watches for PersistentVolumeClaims that reference its StorageClass and dynamically provisions a hostPath-backed PersistentVolume as a directory on a node (by default under `/opt/local-path-provisioner`). Because the data lives on one node's local disk, the StorageClass uses `WaitForFirstConsumer` volume binding: the volume is created only once a Pod is scheduled, and that Pod is then pinned to the node holding the data.

### Deployment

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-provisioner.yaml

# Set local-path as the default StorageClass
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

### Usage Example

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:alpine   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-data
```
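Under local-path, the claim above maps to a plain directory on whichever node the Pod lands on. A minimal sketch of that layout, run against a temp directory standing in for the default root `/opt/local-path-provisioner`; the PVC UID here is made up (the real one comes from the API server), and the directory name mirrors the pattern recent provisioner versions use:

```shell
#!/bin/sh
# Sketch: reproduce the per-PVC directory layout Local Path Provisioner creates.
# A temp dir stands in for the real default root /opt/local-path-provisioner.
ROOT=$(mktemp -d)

# Hypothetical PVC identity (the real UID is assigned by the API server).
PVC_UID="0f8fad5b-d9cb-469f-a165-70867728950e"
NAMESPACE="default"
PVC_NAME="my-data"

# Default volume directory name: pvc-<uid>_<namespace>_<pvc-name>
VOL_DIR="$ROOT/pvc-${PVC_UID}_${NAMESPACE}_${PVC_NAME}"
mkdir -p "$VOL_DIR"

echo "$VOL_DIR"
```

Deleting the PVC (with the default `Delete` reclaim policy) removes this directory, so anything a Pod wrote to `/data` lives and dies with the claim.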

## NFS Shared Storage (Optional)

### Server Setup

```bash
# Install the NFS server
sudo apt-get install -y nfs-kernel-server

# Create the shared directory
sudo mkdir -p /exports/shared
sudo chown nobody:nogroup /exports/shared
sudo chmod 777 /exports/shared

# Configure the export
echo '/exports/shared *(rw,sync,no_subtree_check,no_root_squash)' | sudo tee /etc/exports

# Restart the service
sudo systemctl restart nfs-server
```
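Before restarting the service it is worth checking that the export line carries the options the provisioner relies on. A small sketch of such a check, run against a temp copy rather than the real `/etc/exports` (`check_export` is a helper defined here, not an NFS tool; on a real server you would follow up with `exportfs -v` and `showmount -e`):

```shell
#!/bin/sh
# Sketch: sanity-check an exports line before restarting nfs-server.
# Uses a temp file instead of the real /etc/exports.
EXPORTS=$(mktemp)
echo '/exports/shared *(rw,sync,no_subtree_check,no_root_squash)' > "$EXPORTS"

# Verify the exported path exists in the file and carries a given option.
check_export() {
    grep -q "^$1 " "$EXPORTS" || { echo "missing export: $1"; return 1; }
    grep "^$1 " "$EXPORTS" | grep -q "$2" || { echo "missing option: $2"; return 1; }
}

check_export /exports/shared rw && \
check_export /exports/shared no_root_squash && \
echo "exports line OK"
```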

### Kubernetes Configuration

```bash
# Install the NFS CSI driver
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v4.4.0/deploy/csi-nfs-driver.yaml

# Or use the NFS Subdir External Provisioner
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=nfs.your-domain.com \
  --set nfs.path=/exports/shared
```

### StorageClass Configuration

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-shared
provisioner: nfs.csi.k8s.io   # the NFS CSI driver installed above
parameters:
  server: nfs.your-domain.com
  share: /exports/shared
reclaimPolicy: Retain
```
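The main reason to choose NFS over local-path is that a volume can be mounted read-write from many nodes at once. A PVC sketch against the `nfs-shared` class above (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data   # illustrative name
spec:
  accessModes:
    - ReadWriteMany   # NFS supports concurrent mounts from multiple nodes
  storageClassName: nfs-shared
  resources:
    requests:
      storage: 10Gi
```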

## Storage Quota Management

### ResourceQuota

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: dev
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: "10"
```

### LimitRange

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limits
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 10Gi
      min:
        storage: 1Gi
```
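`requests.storage` is an aggregate: the quota admits a new PVC only if the sum of all storage requests in the namespace stays within the hard limit. A sketch of that accounting with made-up claim sizes:

```shell
#!/bin/sh
# Sketch: ResourceQuota accounting for requests.storage.
# All claim sizes below are hypothetical.
to_bytes() {  # convert a value like "10Gi" to bytes
    echo $(( ${1%Gi} * 1024 * 1024 * 1024 ))
}

QUOTA=$(to_bytes 100Gi)   # the hard limit from the quota above
USED=0
# Storage requests of PVCs already admitted in the namespace.
for req in 40Gi 30Gi; do
    USED=$(( USED + $(to_bytes "$req") ))
done

NEW=$(to_bytes 50Gi)      # the incoming claim
if [ $(( USED + NEW )) -le "$QUOTA" ]; then
    echo "admitted"
else
    echo "rejected"       # 40 + 30 + 50 = 120Gi > 100Gi
fi
```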

## Data Backup

### Velero Backup

```bash
# Install Velero (bucket name and credentials file are placeholders)
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio:9000 \
  --snapshot-location-config region=minio

# Back up a namespace
velero backup create my-backup --include-namespaces default

# Scheduled backups
velero schedule create daily-backup --schedule="@daily"

# Restore
velero restore create --from-backup my-backup
```

### Database Backups

```bash
#!/bin/bash
# backup-databases.sh
set -euo pipefail

BACKUP_DIR=/exports/backups
DATE=$(date +%Y%m%d)

# PostgreSQL
docker exec postgres pg_dump -U postgres -d mydb > "$BACKUP_DIR/mydb_pg_$DATE.sql"

# MySQL ($MYSQL_PASSWORD must be set in the environment; avoids a literal password here)
docker exec mysql mysqldump -u root -p"$MYSQL_PASSWORD" mydb > "$BACKUP_DIR/mydb_mysql_$DATE.sql"

# Remove backups older than 7 days (regular .sql files only)
find "$BACKUP_DIR" -type f -name '*.sql' -mtime +7 -delete
```
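The retention rule at the end of the script is easy to exercise in isolation. A sketch against a temp directory; the `-type f -name '*.sql'` guard in the sketch keeps `find` from deleting anything but backup files:

```shell
#!/bin/sh
# Sketch: 7-day backup retention, exercised against a temp directory
# instead of /exports/backups.
BACKUPS=$(mktemp -d)
touch "$BACKUPS/fresh.sql"                   # a backup made today
touch -d "10 days ago" "$BACKUPS/stale.sql"  # a backup past retention (GNU touch)

# Delete only regular .sql files older than 7 days.
find "$BACKUPS" -type f -name '*.sql' -mtime +7 -delete

ls "$BACKUPS"
```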

## Storage Monitoring

### Storage Usage Alerts

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: storage-alerts
spec:
  groups:
    - name: storage
      rules:
        - alert: HighStorageUsage
          expr: |
            (node_filesystem_size_bytes{mountpoint="/data"} - node_filesystem_avail_bytes{mountpoint="/data"}) /
            node_filesystem_size_bytes{mountpoint="/data"} * 100 > 85
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "High storage usage on {{ $labels.instance }}"
```

## Capacity Expansion

### LVM Expansion

```bash
# List volume groups
vgs

# Extend the volume group with a new physical volume
vgextend data-vg /dev/sdc

# Extend the logical volume
lvextend -L +100G /dev/data-vg/data-lv

# Grow the filesystem (ext4; use xfs_growfs for XFS)
resize2fs /dev/data-vg/data-lv
```

### NFS Expansion

```bash
# On the NFS server: partition the new disk, then grow the LVM stack
sudo fdisk /dev/sdc
sudo pvcreate /dev/sdc1
sudo vgextend nfs-vg /dev/sdc1
sudo lvextend -l +100%FREE /dev/nfs-vg/nfs-lv
sudo resize2fs /dev/nfs-vg/nfs-lv
```
