
Kubernetes 1.19 Cluster Installation on CentOS 7.8

Installation Method

Build the Kubernetes cluster with kubeadm.

Cluster Node Plan

k8s-master   172.19.153.97   CentOS 7.8   2 CPU / 4 GB
k8s-node1    172.19.153.98   CentOS 7.8   2 CPU / 4 GB
k8s-node2    172.19.153.99   CentOS 7.8   2 CPU / 4 GB

Environment Preparation

1. Set the hostname

Set the hostname on each host.
On the master node:

[root@iZ2zefbuojpotsgnr3i6weZ ~]# hostnamectl set-hostname k8s-master
[root@iZ2zefbuojpotsgnr3i6weZ ~]# hostname
k8s-master
[root@iZ2zefbuojpotsgnr3i6weZ ~]#

On node1:

[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostnamectl set-hostname k8s-node1
[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostname
k8s-node1
[root@iZ2zefbuojpotsgnr3i6wfZ ~]#

On node2:

[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostnamectl set-hostname k8s-node2
[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostname
k8s-node2
[root@iZ2zefbuojpotsgnr3i6wfZ ~]#

2. Add hosts entries

Run on: all nodes

cat >>/etc/hosts<<EOF
172.19.153.97 k8s-master
172.19.153.98 k8s-node1
172.19.153.99 k8s-node2
EOF
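The same entries can also be rendered from a single node list so that every machine appends an identical block. This is a sketch; the `NODES` list format and the `hosts_entries` helper name are made up for illustration:

```shell
# Hypothetical helper: render /etc/hosts entries from one "ip:name" node list,
# so all three machines get exactly the same block.
NODES="172.19.153.97:k8s-master 172.19.153.98:k8s-node1 172.19.153.99:k8s-node2"

hosts_entries() {
    for n in $NODES; do
        printf '%s %s\n' "${n%%:*}" "${n##*:}"   # split "ip:name" into two columns
    done
}

hosts_entries                    # preview the block first
# hosts_entries >> /etc/hosts    # then append it on every node
```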

Test with ping:

[root@k8s-master ~]# 
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (172.19.153.98) 56(84) bytes of data.
64 bytes from k8s-node1 (172.19.153.98): icmp_seq=1 ttl=64 time=0.342 ms
64 bytes from k8s-node1 (172.19.153.98): icmp_seq=2 ttl=64 time=0.221 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.221/0.281/0.342/0.062 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (172.19.153.99) 56(84) bytes of data.
64 bytes from k8s-node2 (172.19.153.99): icmp_seq=1 ttl=64 time=0.357 ms
64 bytes from k8s-node2 (172.19.153.99): icmp_seq=2 ttl=64 time=0.205 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.205/0.281/0.357/0.076 ms
[root@k8s-master ~]#

3. System tuning

Run on: all nodes

Firewall configuration
If there are no security-group restrictions between the nodes, this can be skipped. Otherwise open the following ports:
k8s-master: TCP 6443, 2379, 2380, 60080, 60081; UDP fully open
k8s-node: UDP open

iptables -P FORWARD ACCEPT

Disable SELinux and the firewall

sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent, takes effect after reboot
setenforce 0 # temporary, effective immediately
systemctl stop firewalld && systemctl disable firewalld

Check that SELinux shows Disabled on every node; Alibaba Cloud images disable it by default.

[root@k8s-master ~]# getenforce  # check SELinux status
Disabled

Disable the swap partition

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab # keep swap from mounting at boot
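Because a bad pattern here can corrupt /etc/fstab, the comment-out substitution can be rehearsed on a throwaway copy first. The sample fstab lines below are invented for the demonstration:

```shell
# Dry-run: comment out swap lines on a temporary copy before touching /etc/fstab.
tmp_fstab=$(mktemp)
cat > "$tmp_fstab" <<'EOF'
/dev/vda1 / ext4 defaults 0 1
/dev/vda2 swap swap defaults 0 0
EOF

sed -i 's/.*swap.*/#&/' "$tmp_fstab"   # prefix every line mentioning swap with '#'
cat "$tmp_fstab"                       # only the swap line should now be commented
```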

Adjust kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

4. Configure yum repositories

Configure the yum repositories for Docker, the base OS, and Kubernetes.

curl -o /etc/yum.repos.d/CentOS-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache

5. Install Docker

Run on: all nodes

Use an Alibaba Cloud registry mirror as the image accelerator.

# Install the latest version
yum install -y docker-ce

# Edit the Docker daemon configuration file
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://rsgc4jk0.mirror.aliyuncs.com"]
}
EOF

# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
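The kubeadm preflight check later warns that Docker is using the cgroupfs driver while systemd is recommended. An optional remedy, sketched here, is to add `exec-opts` to the same daemon.json and restart Docker (the mirror URL is the one from above; adjust to your own):

```shell
# Optional: switch Docker to the systemd cgroup driver to silence the
# IsDockerSystemdCheck warning seen during "kubeadm init" / "kubeadm join".
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://rsgc4jk0.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
```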

Check the service status:

[root@k8s-master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-10-20 18:01:20 CST; 1min 13s ago
Docs: https://docs.docker.com
Main PID: 11667 (dockerd)
Tasks: 10
Memory: 38.2M
CGroup: /system.slice/docker.service
└─11667 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767736680+08:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767752808+08:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/co...odule=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767764079+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.796876109+08:00" level=info msg="Loading containers: start."
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.885563057+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...P address"
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.928135025+08:00" level=info msg="Loading containers: done."
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.943819098+08:00" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 v...n=19.03.13
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.943914032+08:00" level=info msg="Daemon has completed initialization"
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.968326432+08:00" level=info msg="API listen on /var/run/docker.sock"
Oct 20 18:01:20 k8s-master systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

Install and Deploy Kubernetes

Check which versions are installable; the latest at the time of writing is 1.19.3:

[root@k8s-master ~]# yum list kubelet --showduplicates | tail 
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
kubelet.x86_64 1.18.4-1 kubernetes
kubelet.x86_64 1.18.5-0 kubernetes
kubelet.x86_64 1.18.6-0 kubernetes
kubelet.x86_64 1.18.8-0 kubernetes
kubelet.x86_64 1.18.9-0 kubernetes
kubelet.x86_64 1.18.10-0 kubernetes
kubelet.x86_64 1.19.0-0 kubernetes
kubelet.x86_64 1.19.1-0 kubernetes
kubelet.x86_64 1.19.2-0 kubernetes
kubelet.x86_64 1.19.3-0 kubernetes
[root@k8s-master ~]#

1. Install kubeadm, kubelet, and kubectl

Run on: all nodes

# Install
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 -y

# Check the version
kubeadm version

# Enable kubelet at boot
systemctl enable kubelet

2. Generate the kubeadm configuration

Run on: master only

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm.yaml

Modify three settings (master IP, image repository, pod subnet):

[root@k8s-master ~]# vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.153.97 # master IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # pod CIDR, matches flannel's default
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3. Pull the images

List the images that will be used:

[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml 
W1020 18:41:31.595354 12031 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

Pull the images:

[root@k8s-master ~]# kubeadm config images pull --config kubeadm.yaml     
W1020 18:47:17.290279 12065 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
[root@k8s-master ~]#

4. Initialize the cluster

Run on: master only

[root@k8s-master ~]# kubeadm init --config kubeadm.yaml
W1020 18:55:45.497737 12312 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster

.......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# run the command below on each worker node to join the cluster
kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[root@k8s-master ~]#

Run the commands from the prompt above:

[root@k8s-master ~]#  mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
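The bootstrap token embedded in the join command expires after the ttl set in kubeadm.yaml (24h). If it has lapsed by the time a node joins, a fresh join command can be printed on the master with a standard kubeadm subcommand:

```shell
# Print a new, complete "kubeadm join ..." line (new token + CA cert hash)
kubeadm token create --print-join-command
```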

5. Join the worker nodes to the cluster

Run on: worker nodes

node1

[root@k8s-node1 ~]# kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]#

node2

[root@k8s-node2 ~]# kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]#

Check on the master: both nodes have joined, but all nodes are NotReady because no pod network add-on is installed yet.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   19m    v1.19.3
k8s-node1    NotReady   <none>   2m7s   v1.19.3
k8s-node2    NotReady   <none>   2m     v1.19.3
[root@k8s-master ~]#

6. Install flannel

Download the flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Apply the manifest to create flannel:

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check that the flannel containers were created:

[root@k8s-master ~]# docker ps -a | grep flannel
b3a28fe53910 e708f4bb69e3 "/opt/bin/flanneld -…" 3 minutes ago Up 3 minutes k8s_kubeflannel_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
f6e5f8546895 quay.io/coreos/flannel "cp -f /etc/kube-fla…" 3 minutes ago Exited (0) 3 minutes ago k8s_install-cni_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
6a3fb130f7b3 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
[root@k8s-master ~]#
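Besides docker ps, the same rollout can be checked through the API; these are standard kubectl queries run on the master:

```shell
# flannel runs as a DaemonSet in kube-system; one pod should land on each node
kubectl -n kube-system get daemonset kube-flannel-ds
kubectl -n kube-system get pods -o wide | grep flannel
```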

7. Verify the cluster status

Wait a minute or two, then check again:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   52m   v1.19.3
k8s-node1    Ready    <none>   35m   v1.19.3
k8s-node2    Ready    <none>   35m   v1.19.3
[root@k8s-master ~]#

8. Deploy the Dashboard

Download the manifest; see https://github.com/kubernetes/dashboard/releases for the latest version.

Mind the Kubernetes version compatibility matrix on that page:

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
--2020-10-20 20:16:00-- https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7552 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[===================================================================================================================================>] 7,552 --.-K/s in 0.1s

2020-10-20 20:16:02 (53.8 KB/s) - ‘recommended.yaml’ saved [7552/7552]

Edit the manifest to expose the Service as a NodePort:

spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort # change the Service type to NodePort

You can also pin a specific nodePort; if it is not set, a port is allocated dynamically.

  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443 # externally exposed port

Apply the manifest and check:

[root@k8s-master ~]# kubectl  apply -f recommended.yaml
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.20.216     <none>        8000/TCP        32m
kubernetes-dashboard        NodePort    10.107.133.135   <none>        443:30078/TCP   32m
[root@k8s-master ~]#

Visit https://x.x.x.x:30078 (any node IP).

The login page offers two authentication methods.

9. Create a login token

[root@k8s-master ~]# kubectl create serviceaccount  dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-cpmgn
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 1d68fc57-c316-4e5a-96bf-e2524053c2b6

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[root@k8s-master ~]#
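Instead of eyeballing the describe output, the token string alone can be extracted and decoded; this convenience one-liner uses only standard kubectl/jsonpath features and is run on the master:

```shell
# Print just the decoded token for the dashboard-admin service account
SECRET=$(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
echo
```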

Enter the token generated above in the Token field.

Login succeeds.
