Info
Kubernetes 1.24 and later no longer support Docker by default and use containerd instead. Most of us are well practiced with Docker, so moving to containerd is inevitably somewhat inconvenient, and some people use cri-dockerd: Kubernetes removed the dockershim in v1.24, and since Docker Engine does not natively implement the CRI specification, the two can no longer be integrated directly. Mirantis and Docker therefore jointly created the cri-dockerd project, a shim that gives Docker Engine a CRI-compliant interface so that Kubernetes can drive Docker through CRI. The Kubernetes community, however, is clearly pushing containerd, which is lighter-weight and, in their view, more stable and reliable than Docker.
kubeasz is a tool for quickly deploying a highly available Kubernetes cluster. It installs from binaries and automates the work with ansible-playbook, offering both a one-click install script and a step-by-step guide for installing each component. The project has tracked official Kubernetes releases for a long time and is widely used in production. Since deployment is ansible-based, it helps to know a bit of ansible-playbook.
Project address: https://github.com/easzlab/kubeasz/
IP address | Role | OS version |
---|---|---|
10.0.0.200 | LB-VIP | |
10.0.0.201 | master01 | Ubuntu 20.04 |
10.0.0.202 | master02 | Ubuntu 20.04 |
10.0.0.203 | master03 | Ubuntu 20.04 |
10.0.0.204 | LB-1 | Ubuntu 20.04 |
10.0.0.205 | LB-2 | Ubuntu 20.04 |
10.0.0.206 | harbor | Ubuntu 20.04 |
10.0.0.207 | node01 | Ubuntu 20.04 |
10.0.0.208 | node02 | Ubuntu 20.04 |
10.0.0.209 | node03 | Ubuntu 20.04 |
The masters double as the ansible control hosts: install ansible there and set up passwordless SSH login with ssh-copy-id $IP. Installing harbor and the load balancers is not covered here; this site already has articles on harbor, keepalived, and haproxy.
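The passwordless-login step can be scripted. A minimal sketch, assuming root logins and the node IPs from the table above:

```shell
# Sketch: push the controller's SSH key to every host in the table.
# The node list and root login are assumptions from this article's setup.
NODES="10.0.0.201 10.0.0.202 10.0.0.203 10.0.0.204 10.0.0.205 10.0.0.206 10.0.0.207 10.0.0.208 10.0.0.209"
# generate a key pair once if the controller does not have one yet
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in $NODES; do
  # asks for the root password once per node
  ssh-copy-id -o ConnectTimeout=5 "root@${ip}" || echo "WARN: could not reach ${ip}"
done
```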
Download the ezdown tool script; this example uses kubeasz 3.5.0:
export release=3.5.0
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# inside mainland China
./ezdown -D
# outside mainland China
#./ezdown -D -m standard
# download extra components on demand
./ezdown -X flannel
./ezdown -X prometheus
./ezdown -P
Once the script finishes, all files (the kubeasz code, binaries, and offline images) are laid out under /etc/kubeasz.
# run kubeasz itself as a container
./ezdown -S
# create a new cluster named k8s-01
docker exec -it kubeasz ezctl new k8s-01
2021-01-19 10:48:23 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
2021-01-19 10:48:23 DEBUG set version of common plugins
2021-01-19 10:48:23 DEBUG cluster k8s-01: files successfully created.
2021-01-19 10:48:23 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
2021-01-19 10:48:23 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
vim /etc/kubeasz/clusters/k8s-01/config.yml
############################
# prepare
############################
# optional: install system packages offline or online (offline|online)
INSTALL_SOURCE: "online"
# optional: OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false
# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
# k8s version
K8S_VER: "1.24.13"
# set unique 'k8s_nodename' for each node, if not set(default:'') ip add will be used
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character (e.g. 'example.com'),
# regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
{{ k8s_nodename|replace('_', '-')|lower }} \
{%- else -%} \
{{ inventory_hostname }} \
{%- endif -%}"
############################
# role:etcd
############################
# a separate wal directory avoids disk io contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable the container registry mirror
ENABLE_MIRROR_REGISTRY: true
# [containerd] base (pause) container image; changed here to my local harbor registry. By default this cannot be pulled yet, because the local registry is not trusted; making it pullable is covered below
SANDBOX_IMAGE: "harbor.zhang.org/baseimages/pause:3.9"
# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the Restful API
ENABLE_REMOTE_API: false
# [docker] trusted HTTP registries
INSECURE_REG: '["http://easzlab.io.local:5000"]'
############################
# role:kube-master
############################
# master node certificate configuration; extra ips and domains can be added (e.g. a public ip and domain)
MASTER_CERT_HOSTS:
- "10.1.1.1"
- "k8s.easzlab.io"
#- "www.test.com"
# pod subnet mask length per node (determines the maximum pod ips each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign each node its pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# maximum pods per node; adjust to your cluster, usually increased
MAX_PODS: 110
# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"
# upstream k8s advises against enabling system-reserved lightly, unless long-term monitoring has shown you the system's resource usage;
# reservations should also grow with system uptime; see templates/kubelet-config.yaml.j2 for the values
# the system reservation here assumes a 4c/8g VM with a minimal set of system services; increase it on high-performance physical machines
# note that apiserver and friends briefly consume a lot of resources during cluster install; reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel]
flannel_ver: "v0.19.2"
# ------------------------------------------- calico
# [calico] IPIP tunnel mode, one of [Always, CrossSubnet, Never]; across subnets use Always or CrossSubnet
# (on public clouds Always is the least hassle; otherwise each cloud's network configuration must be adjusted, see the provider docs).
# CrossSubnet is a hybrid tunnel + BGP routing mode that can improve network performance; within a single subnet Never is enough.
CALICO_IPV4POOL_IPIP: "Always"
# [calico] host IP used by calico-node; bgp neighbors peer on this address; set it manually or let it be auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"
# [calico] whether to use route reflectors;
# recommended once the cluster exceeds 50 nodes
CALICO_RR_ENABLED: false
# CALICO_RR_NODES: the route reflector nodes; defaults to the cluster masters if unset
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []
# [calico] supported calico upgrade versions: ["3.19", "3.23"]
calico_ver: "v3.24.5"
# [calico] calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.12.4"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false
# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
# [kube-ovn] offline image tarball version
kube_ovn_ver: "v1.5.3"
# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipip always on; in your own environment "subnet" can be used
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: true
# [kube-router] image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "yes"
corednsVer: "1.9.3"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.22.13"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# install metrics-server automatically
metricsserver_install: "yes"
metricsVer: "v0.5.2"
# install dashboard automatically
dashboard_install: "yes"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"
# install prometheus automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "39.11.0"
# install nfs-provisioner automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
# install network-check automatically
network_check_enabled: false
network_check_schedule: "*/5 * * * *"
############################
# role:harbor
############################
# harbor version, full version string
HARBOR_VER: "v2.6.3"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CHARTMUSEUM: true
######################## edit the hosts file ###############################
vim /etc/kubeasz/clusters/k8s-01/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# change to your own etcd nodes; how many depends on cluster size, usually colocated with the masters
[etcd]
10.0.0.201
10.0.0.202
10.0.0.203
# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
# master nodes
[kube_master]
10.0.0.201 k8s_nodename='master-01'
10.0.0.202 k8s_nodename='master-02'
10.0.0.203 k8s_nodename='master-03'
# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
# worker nodes
[kube_node]
10.0.0.207 k8s_nodename='worker-01'
10.0.0.208 k8s_nodename='worker-02'
10.0.0.209 k8s_nodename='worker-03'
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#10.0.0.8 NEW_INSTALL=false
# [optional] loadbalance for accessing k8s from outside
# uncomment this section, but don't use the recommended LB; just set the VIP to the cluster's VIP
[ex_lb]
10.0.0.6 LB_ROLE=backup EX_APISERVER_VIP=10.0.0.200 EX_APISERVER_PORT=8443
10.0.0.7 LB_ROLE=master EX_APISERVER_VIP=10.0.0.200 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#10.0.0.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
# adjust the service CIDR as needed
SERVICE_CIDR="10.200.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
# adjust the pod CIDR as needed
CLUSTER_CIDR="10.100.0.0/16"
# NodePort Range
# adjust the exposed NodePort range as needed
NODE_PORT_RANGE="30000-62767"
# Cluster DNS Domain
# adjust the cluster DNS domain as needed
CLUSTER_DNS_DOMAIN="zhang.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Default 'k8s_nodename' is empty
k8s_nodename=''
# Only the image references need to change, to point at the local harbor address. All three images were pulled in advance and pushed to harbor: docker search finds them; then retag and push.
root@k8s-master1:/data# grep image /etc/kubeasz/roles/calico/templates/calico-v3.24.yaml.j2
image: harbor.zhang.org/baseimages/calico-cni:v3.24.5
imagePullPolicy: IfNotPresent
image: harbor.zhang.org/baseimages/calico-node:v3.24.5
imagePullPolicy: IfNotPresent
image: harbor.zhang.org/baseimages/calico-node:v3.24.5
imagePullPolicy: IfNotPresent
image: harbor.zhang.org/baseimages/calico-kube-controllers:v3.24.5
imagePullPolicy: IfNotPresent
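The pull-retag-push round trip the comment above describes can be sketched like this. This is a sketch assuming a prior docker login to harbor, that harbor.zhang.org/baseimages matches this article's registry layout, and that the upstream names are the standard docker.io/calico ones:

```shell
# Mirror the three calico images into the local harbor project.
REGISTRY="harbor.zhang.org/baseimages"
VER="v3.24.5"
for img in cni node kube-controllers; do
  src="docker.io/calico/${img}:${VER}"
  dst="${REGISTRY}/calico-${img}:${VER}"
  # pull from upstream, retag for the local registry, push
  docker pull "$src" && docker tag "$src" "$dst" && docker push "$dst" \
    || echo "WARN: mirroring ${src} failed"
done
```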
# An alias is recommended; ~/.bashrc should contain: alias dk='docker exec -it kubeasz'
source ~/.bashrc
# one-shot install, equivalent to: docker exec -it kubeasz ezctl setup k8s-01 all
dk ezctl setup k8s-01 all
# or install step by step; run dk ezctl help setup for details on the individual steps
# dk ezctl setup k8s-01 01
# dk ezctl setup k8s-01 02
# dk ezctl setup k8s-01 03
# dk ezctl setup k8s-01 04
The above is the official procedure. To avoid a conflict between docker and containerd I did not install docker on master01; I ran the initialization on another server, copied the hosts and config.yml files over, and then installed directly from the config directory. My installation process follows.
# Read the help first. Read the help first. Read the help first!!! The official defaults may not fit your environment. You can install everything at once with 'all', or go step by step.
root@k8s-master1:/etc/kubeasz# ./ezctl help setup
Usage: ezctl setup <cluster> <step>
available steps:
01 prepare to prepare CA/certs & kubeconfig & other system settings
02 etcd to setup the etcd cluster
03 container-runtime to setup the container runtime(docker or containerd)
04 kube-master to setup the master nodes
05 kube-node to setup the worker nodes
06 network to setup the network plugin
07 cluster-addon to setup other useful plugins
90 all to run 01~07 all at once
10 ex-lb to install external loadbalance for accessing k8s from outside
11 harbor to install a new harbor server or to integrate with an existed one
examples: ./ezctl setup test-k8s 01 (or ./ezctl setup test-k8s prepare)
./ezctl setup test-k8s 02 (or ./ezctl setup test-k8s etcd)
./ezctl setup test-k8s all
./ezctl setup test-k8s 04 -t restart_master
./ezctl setup k8s-01 01
./ezctl setup k8s-01 02
./ezctl setup k8s-01 03
./ezctl setup k8s-01 04
./ezctl setup k8s-01 05
# The full output is long and uninteresting, so it is omitted here.
Tip
If your environment was prepared correctly, steps 01 through 05 complete without issues. Before step 06 some configuration must change: in the calico role's templates we pointed the image addresses at the local registry, and since we are running containerd rather than docker, trusting a registry is configured differently.
# Append the registry at the end of /etc/containerd/config.toml. For a private project you would also add a username and password; mine is public.
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.zhang.org".tls]
insecure_skip_verify = true # skip TLS verification
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.zhang.org"]
endpoint = ["http://harbor.zhang.org"] # registry endpoint
# restart containerd after changing the config
systemctl restart containerd
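If the harbor project were private, credentials would go in the same file. A hypothetical fragment with placeholder values:

```
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.zhang.org".auth]
  username = "admin"        # placeholder account
  password = "Harbor12345"  # placeholder password
```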
Warning
Add a hosts entry for the harbor domain (10.0.0.206 harbor.zhang.org) to /etc/hosts on every server; otherwise the name will not resolve and image pulls will fail.
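A minimal, idempotent way to add that entry (values from this article's environment):

```shell
# Append the harbor name resolution only if it is not already present.
ENTRY="10.0.0.206 harbor.zhang.org"
grep -qF "$ENTRY" /etc/hosts || echo "$ENTRY" >> /etc/hosts
```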
root@k8s-master1:/etc/kubeasz# ./ezctl setup k8s-01 06
ansible-playbook -i clusters/k8s-01/hosts -e @clusters/k8s-01/config.yml playbooks/06.network.yml
2023-06-26 00:04:15 INFO cluster:k8s-01 setup step:06 begins in 5s, press any key to abort:
PLAY [kube_master,kube_node] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************
ok: [10.0.0.201]
ok: [10.0.0.207]
ok: [10.0.0.203]
ok: [10.0.0.208]
ok: [10.0.0.202]
ok: [10.0.0.209]
TASK [calico : create the calico certificate signing request] ****************************************************************
ok: [10.0.0.201]
TASK [calico : create the calico certificate and private key] **************************************************************
changed: [10.0.0.201]
TASK [calico : delete old calico-etcd-secrets] *********************************************************************
changed: [10.0.0.201]
TASK [calico : create calico-etcd-secrets] ***********************************************************************
changed: [10.0.0.201]
TASK [calico : configure the calico DaemonSet yaml] *****************************************************************
changed: [10.0.0.201]
TASK [calico : deploy the calico network] ********************************************************************
changed: [10.0.0.201]
TASK [calico : create the required directories on the nodes] *****************************************************************
ok: [10.0.0.201] => (item=/etc/calico/ssl)
ok: [10.0.0.208] => (item=/etc/calico/ssl)
ok: [10.0.0.207] => (item=/etc/calico/ssl)
ok: [10.0.0.202] => (item=/etc/calico/ssl)
ok: [10.0.0.203] => (item=/etc/calico/ssl)
ok: [10.0.0.209] => (item=/etc/calico/ssl)
TASK [calico : distribute the calico certificates] *****************************************************************
ok: [10.0.0.201] => (item=ca.pem)
ok: [10.0.0.203] => (item=ca.pem)
ok: [10.0.0.207] => (item=ca.pem)
ok: [10.0.0.208] => (item=ca.pem)
changed: [10.0.0.203] => (item=calico.pem)
changed: [10.0.0.201] => (item=calico.pem)
ok: [10.0.0.202] => (item=ca.pem)
changed: [10.0.0.208] => (item=calico.pem)
changed: [10.0.0.207] => (item=calico.pem)
changed: [10.0.0.208] => (item=calico-key.pem)
changed: [10.0.0.201] => (item=calico-key.pem)
changed: [10.0.0.207] => (item=calico-key.pem)
changed: [10.0.0.203] => (item=calico-key.pem)
ok: [10.0.0.209] => (item=ca.pem)
changed: [10.0.0.209] => (item=calico.pem)
changed: [10.0.0.202] => (item=calico.pem)
changed: [10.0.0.209] => (item=calico-key.pem)
changed: [10.0.0.202] => (item=calico-key.pem)
TASK [calico : remove the default cni config] ********************************************************************
ok: [10.0.0.201]
ok: [10.0.0.203]
ok: [10.0.0.207]
ok: [10.0.0.208]
ok: [10.0.0.202]
ok: [10.0.0.209]
TASK [calico : download the calicoctl client] ***************************************************************
ok: [10.0.0.201] => (item=calicoctl)
ok: [10.0.0.208] => (item=calicoctl)
ok: [10.0.0.207] => (item=calicoctl)
ok: [10.0.0.203] => (item=calicoctl)
ok: [10.0.0.202] => (item=calicoctl)
ok: [10.0.0.209] => (item=calicoctl)
TASK [calico : prepare the calicoctl config file] *************************************************************
ok: [10.0.0.201]
ok: [10.0.0.202]
ok: [10.0.0.207]
ok: [10.0.0.203]
ok: [10.0.0.208]
ok: [10.0.0.209]
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (15 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (15 retries left).
FAILED - RETRYING: [10.0.0.202]: wait for calico-node to be running (15 retries left).
TASK [calico : wait for calico-node to be running] ***********************************************************************
changed: [10.0.0.201]
FAILED - RETRYING: [10.0.0.207]: wait for calico-node to be running (15 retries left).
FAILED - RETRYING: [10.0.0.209]: wait for calico-node to be running (15 retries left).
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (14 retries left).
FAILED - RETRYING: [10.0.0.202]: wait for calico-node to be running (14 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (14 retries left).
FAILED - RETRYING: [10.0.0.207]: wait for calico-node to be running (14 retries left).
FAILED - RETRYING: [10.0.0.209]: wait for calico-node to be running (14 retries left).
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (13 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (13 retries left).
FAILED - RETRYING: [10.0.0.209]: wait for calico-node to be running (13 retries left).
FAILED - RETRYING: [10.0.0.207]: wait for calico-node to be running (13 retries left).
FAILED - RETRYING: [10.0.0.202]: wait for calico-node to be running (13 retries left).
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (12 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (12 retries left).
FAILED - RETRYING: [10.0.0.209]: wait for calico-node to be running (12 retries left).
FAILED - RETRYING: [10.0.0.207]: wait for calico-node to be running (12 retries left).
changed: [10.0.0.202]
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (11 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (11 retries left).
FAILED - RETRYING: [10.0.0.209]: wait for calico-node to be running (11 retries left).
FAILED - RETRYING: [10.0.0.207]: wait for calico-node to be running (11 retries left).
FAILED - RETRYING: [10.0.0.203]: wait for calico-node to be running (10 retries left).
FAILED - RETRYING: [10.0.0.208]: wait for calico-node to be running (10 retries left).
changed: [10.0.0.209]
changed: [10.0.0.207]
changed: [10.0.0.203]
changed: [10.0.0.208]
PLAY RECAP *****************************************************************************************************************
10.0.0.201 : ok=13 changed=7 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0
10.0.0.202 : ok=7 changed=2 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
10.0.0.203 : ok=7 changed=2 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
10.0.0.207 : ok=7 changed=2 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
10.0.0.208 : ok=7 changed=2 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
10.0.0.209 : ok=7 changed=2 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
# check that the network plugin is running
# the calico-node pod is already up
root@k8s-master1:/data# crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
7d6019654c8ae About an hour ago Ready calico-node-4fkct kube-system 0 (default)
# the calico network plugin has started as well
root@k8s-master1:/data# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.202 | node-to-node mesh | up | 16:05:51 | Established |
| 10.0.0.207 | node-to-node mesh | up | 16:06:23 | Established |
| 10.0.0.209 | node-to-node mesh | up | 16:06:24 | Established |
| 10.0.0.208 | node-to-node mesh | up | 16:06:24 | Established |
| 10.0.0.203 | node-to-node mesh | up | 16:06:24 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
# all master and node entries show up and are Ready; the initial cluster install has succeeded
root@k8s-master1:/data# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-01 Ready,SchedulingDisabled master 144m v1.24.13
master-02 Ready,SchedulingDisabled master 144m v1.24.13
master-03 Ready,SchedulingDisabled master 144m v1.24.13
worker-01 Ready node 141m v1.24.13
worker-02 Ready node 141m v1.24.13
worker-03 Ready node 141m v1.24.13
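As an optional smoke test (hypothetical pod names; the busybox version matches busybox_ver in config.yml), two pods can be started to verify cross-node pod networking:

```shell
POD_IMAGE="busybox:1.28.4"   # matches busybox_ver in config.yml
# start two long-running pods, ideally scheduled on different workers
kubectl run net-test1 --image="$POD_IMAGE" -- sleep 3600
kubectl run net-test2 --image="$POD_IMAGE" -- sleep 3600
kubectl get pods -o wide
# then ping one pod from the other, e.g.:
# kubectl exec net-test1 -- ping -c 2 <IP of net-test2>
```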
Author: 笑一个吧~
Copyright notice: unless otherwise stated, all articles on this blog are the author's original work, licensed under CC 4.0 BY-SA. Please include a link to the original and this notice when republishing.