Preface
This article walks through deploying a Redis cluster on K3s / Rancher (2023).
If you have not deployed K3s and Rancher yet, see this post: 【K3S】01 - 异地集群初始化
If you have not deployed a single-node Redis yet, see this post: 【K3S】02 - Rancher 中间件单节点部署
Environment
| hostname | OS | Specs | IP | Roles | Deployed |
| --- | --- | --- | --- | --- | --- |
| m1 | Ubuntu-Server (20.04) | 2c4g | 192.168.0.67/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server, nginx, Rancher (2.7.1), Helm (3.10.3) |
| n1 | Ubuntu-Server (20.04) | 1c2g | 192.168.0.102/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server |
| m2 | Ubuntu-Server (20.04) | 2c4g | 172.25.4.244/32 | control-plane, etcd, master | k3s (v1.24.6+k3s1) server |
| harbor | Ubuntu-Server (20.04) | 2c4g | 192.168.0.88 | Docker-Hub, Jenkins CI/CD | Harbor (2.7.1), Jenkins (2.3), Docker-Compose |
All nodes are connected over a WireGuard private network; in everything that follows, traffic between nodes goes over these internal IPs.
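As a quick sanity check of that overlay (a sketch, assuming WireGuard is managed with the standard wg tooling and the interface is named wg0), you can confirm the peers have recent handshakes and that an internal IP is reachable:

```bash
# Show WireGuard peers and their latest handshakes (interface name wg0 is an assumption)
sudo wg show wg0

# Confirm another node is reachable over its internal IP
ping -c 3 192.168.0.102
```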
Deployment
First, add a China-hosted mirror of the stable chart repository:
```bash
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
```
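Optionally refresh the local chart index and confirm the chart is visible in the mirror (an extra check, not strictly required):

```bash
# Refresh the local cache of all configured chart repositories
helm repo update

# Confirm the redis-ha chart is available
helm search repo stable/redis-ha
```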
Pull the stable/redis-ha chart:
```bash
helm pull stable/redis-ha
```
Unpack the chart and enter the directory:
```bash
tar -xvf redis-ha*
cd redis-ha
```
Edit values.yaml and change the following keys (enable auth, set the password, set the StorageClass, and enable the HAProxy read-only listener):
```bash
nano values.yaml
```

```yaml
auth: true
redisPassword: 123456

storageClass: "<your NFS StorageClass name>"   # StorageClass used for the Redis data volumes

haproxy:
  enabled: true
  readOnly:
    enabled: true
    port: 6380
```
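If you are unsure which StorageClass name to use, list the classes available in the cluster and pick your NFS-backed one:

```bash
# List StorageClasses; put your NFS-backed class name into values.yaml
kubectl get storageclass
```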
Create a namespace for Redis:
```bash
kubectl create namespace redis
```
Start the installation:
```bash
helm install redis ../redis-ha -n redis
```
Installation complete. Verify the resources:
```
kubectl get all -n redis

NAME                                          READY   STATUS    RESTARTS   AGE
pod/redis-redis-ha-haproxy-6857795996-jxkgk   1/1     Running   0          22m
pod/redis-redis-ha-haproxy-6857795996-ljqgl   1/1     Running   0          22m
pod/redis-redis-ha-haproxy-6857795996-n4blb   1/1     Running   0          22m
pod/redis-redis-ha-server-0                   2/2     Running   0          5m4s
pod/redis-redis-ha-server-1                   2/2     Running   0          5m11s
pod/redis-redis-ha-server-2                   2/2     Running   0          5m15s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
service/redis-redis-ha              ClusterIP   None            <none>        6379/TCP,26379/TCP   3d4h
service/redis-redis-ha-announce-0   ClusterIP   10.43.183.216   <none>        6379/TCP,26379/TCP   3d4h
service/redis-redis-ha-announce-1   ClusterIP   10.43.134.158   <none>        6379/TCP,26379/TCP   3d4h
service/redis-redis-ha-announce-2   ClusterIP   10.43.7.107     <none>        6379/TCP,26379/TCP   3d4h
service/redis-redis-ha-haproxy      ClusterIP   10.43.202.228   <none>        6379/TCP,6380/TCP    22m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/redis-redis-ha-haproxy   3/3     3            3           22m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/redis-redis-ha-haproxy-6857795996   3         3         3       22m

NAME                                     READY   AGE
statefulset.apps/redis-redis-ha-server   3/3     3d4h
```
Testing
We now simulate a master failure to see whether Sentinel elects a new master.
svc master
Here we connect to redis-redis-ha-haproxy to check that the svc works.
Exec into any Redis pod; here we use redis-redis-ha-server-0:
```bash
kubectl exec -it redis-redis-ha-server-0 -n redis -- sh
```
Connect to the master service: the address is redis-redis-ha-haproxy and the port is 6379.
```bash
redis-cli -h redis-redis-ha-haproxy -p 6379
```
Authenticate with the password set in values.yaml (AUTH 123456), then check the replication info:
```
info replication

role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=10.43.183.216,port=6379,state=online,offset=352447,lag=1
slave1:ip=10.43.134.158,port=6379,state=online,offset=352600,lag=1
master_replid:49e62ddd4d5305b62249c87443ccca80ecdcdc98
master_replid2:042f69511bda1f94b935cc92f3750bab9f39e3cf
master_repl_offset:352600
second_repl_offset:193667
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:192108
repl_backlog_histlen:160493
```
The output shows role:master, so we are connected to the master.
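As an extra check (not part of the original steps; the key name demo-key is arbitrary), a write through the HAProxy write port should succeed and be readable back:

```bash
# Write and read a key through the HAProxy master endpoint (port 6379)
redis-cli -h redis-redis-ha-haproxy -p 6379 -a 123456 set demo-key hello
redis-cli -h redis-redis-ha-haproxy -p 6379 -a 123456 get demo-key
```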
svc slave
Here we again connect through redis-redis-ha-haproxy to check that the read-only svc works.
Exec into any Redis pod; again we use redis-redis-ha-server-0:
```bash
kubectl exec -it redis-redis-ha-server-0 -n redis -- sh
```
Connect to the slave (read-only) service: the address is redis-redis-ha-haproxy and the port is 6380.
```bash
redis-cli -h redis-redis-ha-haproxy -p 6380
```
Authenticate, then check the replication info:
```
info replication

role:slave
master_host:10.43.7.107
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:382813
slave_priority:100
slave_read_only:1
connected_slaves:0
min_slaves_good_slaves:0
master_replid:49e62ddd4d5305b62249c87443ccca80ecdcdc98
master_replid2:042f69511bda1f94b935cc92f3750bab9f39e3cf
master_repl_offset:382813
second_repl_offset:193667
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:192827
repl_backlog_histlen:189987
```
The output shows role:slave, so we are connected to a replica.
Master outage
Following the svc master section above, connect to the master and shut it down directly. In my case it took two SHUTDOWN attempts before the failover and pod restart were actually triggered.
```
info replication

role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=10.43.183.216,port=6379,state=online,offset=352447,lag=1
slave1:ip=10.43.134.158,port=6379,state=online,offset=352600,lag=1
master_replid:49e62ddd4d5305b62249c87443ccca80ecdcdc98
master_replid2:042f69511bda1f94b935cc92f3750bab9f39e3cf
master_repl_offset:352600
second_repl_offset:193667
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:192108
repl_backlog_histlen:160493

SHUTDOWN
```
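To watch the failover and the old master pod coming back, you can watch the pods from another terminal (an optional check, not in the original write-up):

```bash
# Watch the redis namespace; the former master's redis container will restart
kubectl get pods -n redis -w
```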
Log in again and check:
```
redis-cli -h redis-redis-ha-haproxy -p 6379
auth 123456
info replication

role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=10.43.183.216,port=6379,state=online,offset=26739,lag=1
slave1:ip=10.43.7.107,port=6379,state=online,offset=26882,lag=1
master_replid:a2b43240f72a49afa15882adf121988b518be146
master_replid2:bb01954025ce094c3e0df86d846f5922abc5acad
master_repl_offset:26882
second_repl_offset:14241
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1269
repl_backlog_histlen:25614
```
Compare the replica lists before and after:
```
# before the shutdown
slave0:ip=10.43.183.216,port=6379,state=online,offset=352447,lag=1
slave1:ip=10.43.134.158,port=6379,state=online,offset=352600,lag=1

# after the failover
slave0:ip=10.43.183.216,port=6379,state=online,offset=26739,lag=1
slave1:ip=10.43.7.107,port=6379,state=online,offset=26882,lag=1
```
It is easy to see that the 158 node (10.43.134.158) no longer appears as a replica: it has been promoted from slave to master. You can connect to 158 directly to confirm; that is not shown here, but you can also ask Sentinel, as in the sketch below.
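A hedged way to confirm the promotion without connecting to the node itself: ask Sentinel for the current master address. This assumes Sentinel listens on 26379 inside the server pods (as the service output above suggests), uses the master name mymaster (the same name used later in application.yaml), and does not require authentication:

```bash
# Ask Sentinel which node it currently considers the master of "mymaster"
kubectl exec -it redis-redis-ha-server-0 -n redis -- \
  redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```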
Note: only the master node accepts writes; replicas cannot be written to (see the quick check below). Even if you set slave-read-only: "no" to give a replica write capability, replication offsets only flow from master to replicas, so anything written on a replica will not be visible from the other nodes. Therefore we only ever need to connect to the master svc for writes.
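A quick way to see this from inside a pod (a sketch; the key name is arbitrary): a write sent to the read-only HAProxy port 6380 is rejected:

```bash
# The write is rejected with a READONLY error, because port 6380 targets replicas
redis-cli -h redis-redis-ha-haproxy -p 6380 -a 123456 set demo-key value
```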
SpringBoot
In this part we wire the Sentinel setup into Spring Boot. It assumes you have already added the Redis dependencies to your project and can read and write successfully.
svc
Create three headless Services, one per Redis pod:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-ha-sen-0
  namespace: redis
  annotations: {}
  labels: {}
spec:
  selector:
    app: redis-ha
    statefulset.kubernetes.io/pod-name: redis-redis-ha-server-0
  clusterIP: None
  ports:
    - name: server
      port: 6379
      protocol: TCP
      targetPort: redis
    - name: sentinel
      port: 26379
      protocol: TCP
      targetPort: sentinel
  sessionAffinity: None
  type: ClusterIP
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-ha-sen-1
  namespace: redis
  annotations: {}
  labels: {}
spec:
  selector:
    app: redis-ha
    statefulset.kubernetes.io/pod-name: redis-redis-ha-server-1
  clusterIP: None
  ports:
    - name: server
      port: 6379
      protocol: TCP
      targetPort: redis
    - name: sentinel
      port: 26379
      protocol: TCP
      targetPort: sentinel
  sessionAffinity: None
  type: ClusterIP
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-ha-sen-2
  namespace: redis
  annotations: {}
  labels: {}
spec:
  selector:
    app: redis-ha
    statefulset.kubernetes.io/pod-name: redis-redis-ha-server-2
  clusterIP: None
  ports:
    - name: server
      port: 6379
      protocol: TCP
      targetPort: redis
    - name: sentinel
      port: 26379
      protocol: TCP
      targetPort: sentinel
  sessionAffinity: None
  type: ClusterIP
```
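To confirm the headless Services resolve to the pod IPs, a throwaway busybox pod can be used (the test pod name and image tag are just examples):

```bash
# Each headless Service should resolve to the IP of its backing pod
kubectl run dns-test -n redis --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup redis-ha-sen-0.redis.svc.cluster.local
```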
application.yaml
```yaml
spring:
  data:
    redis:
      host: ${REDIS_HOST}
      port: ${REDIS_PORT}
      password: ${REDIS_PASSWORD}
      database: 0
      lettuce:
        pool:
          max-active: 200
          max-idle: 20
          min-idle: 5
          max-wait: 2000
      sentinel:
        master: mymaster
        nodes:
          - redis-ha-sen-0.redis.svc.cluster.local:26379
          - redis-ha-sen-1.redis.svc.cluster.local:26379
          - redis-ha-sen-2.redis.svc.cluster.local:26379

logging:
  level:
    root: info
    io.lettuce.core: debug
    org.springframework.data.redis: debug
```
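The ${REDIS_HOST}, ${REDIS_PORT} and ${REDIS_PASSWORD} placeholders still need to be supplied to the application, for example as environment variables on its Deployment. A minimal sketch (the deployment name my-app is an assumption; with the Sentinel settings above, host and port mostly act as a fallback):

```bash
# Hypothetical deployment name "my-app"; adjust to your workload and namespace
kubectl set env deployment/my-app \
  REDIS_HOST=redis-redis-ha-haproxy.redis.svc.cluster.local \
  REDIS_PORT=6379 \
  REDIS_PASSWORD=123456
```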
Configuration
```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.ReadFrom;
import io.lettuce.core.protocol.ProtocolVersion;
import org.springframework.boot.autoconfigure.data.redis.LettuceClientConfigurationBuilderCustomizer;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

import java.util.HashSet;

@Configuration
public class WriteToMasterReadFromReplicaConfiguration implements LettuceClientConfigurationBuilderCustomizer {

    // Sentinel-aware connection factory built from the spring.data.redis.sentinel properties.
    @Bean
    public RedisConnectionFactory lettuceConnectionFactory(RedisProperties redisProperties) {
        RedisSentinelConfiguration redisSentinelConfiguration = new RedisSentinelConfiguration(
                redisProperties.getSentinel().getMaster(),
                new HashSet<>(redisProperties.getSentinel().getNodes()));

        redisSentinelConfiguration.setPassword(redisProperties.getPassword());
        redisSentinelConfiguration.setDatabase(redisProperties.getDatabase());

        // Read from any replica; writes still go to the current master.
        LettucePoolingClientConfiguration lettuceClientConfiguration =
                LettucePoolingClientConfiguration.builder()
                        .readFrom(ReadFrom.ANY_REPLICA)
                        .build();
        return new LettuceConnectionFactory(redisSentinelConfiguration, lettuceClientConfiguration);
    }

    // Use the RESP2 protocol for the Lettuce client.
    @Override
    public void customize(LettuceClientConfiguration.LettuceClientConfigurationBuilder clientConfigurationBuilder) {
        clientConfigurationBuilder.clientOptions(ClientOptions.builder()
                .protocolVersion(ProtocolVersion.RESP2)
                .build());
    }
}
```
Simulated writes
```java
import jakarta.annotation.Resource;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;

@Slf4j
@RequiredArgsConstructor
@Component
public class RedisInit implements ApplicationRunner {

    @Resource
    private final RedisTemplate<String, Object> redisTemplate;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Write and immediately read back ten keys, one per second,
        // to observe which node serves the writes and which serves the reads.
        for (int i = 0; i < 10; i++) {
            try {
                redisTemplate.opsForValue().set("k" + i, "v" + i);
                log.info("set value success: {}", i);

                Object val = redisTemplate.opsForValue().get("k" + i);
                log.info("get value success: {}", val);
                TimeUnit.SECONDS.sleep(1);
            } catch (Exception e) {
                log.error("error: {}", e.getMessage());
            }
        }
        log.info("finished...");
    }
}
```
Log analysis
(Log screenshot) Writes go to the master (216); reads are served by replica 158.
(Log screenshot) Writes go to the master (216); reads are served by replica 107.