Installing a k8s Cluster on CentOS

1 Servers

Three Linux CentOS servers

2 Install the Docker Environment

2.1 Update yum

yum update

2.2 Configure the yum repository

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


You can list every Docker version available in the repository and pick a specific one to install:

yum list docker-ce --showduplicates | sort -r

2.3 Install

sudo yum install docker-ce-18.06.0.ce

2.4 Start Docker and enable it at boot

sudo systemctl start docker
sudo systemctl enable docker

Verify the installation (output containing both a Client and a Server section means Docker installed and started successfully):

docker version
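
For a scriptable check (an addition here, not in the original), you can query the server version directly; any output at all means the daemon answered:

docker version --format '{{.Server.Version}}'   # e.g. 18.06.0-ce when the daemon is up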

3 Install the k8s Cluster

3.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

3.2 Disable SELinux

setenforce 0  # temporary
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config  # permanent

3.3 Disable swap

swapoff -a    # temporary; swap is disabled mainly for performance reasons
free          # check whether swap is now off
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
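
A quick sanity check, added here, that both the running system and the boot configuration are swap-free:

free -h | grep -i swap   # the Swap line should show 0 total
grep swap /etc/fstab     # the swap entry should now start with '#'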

3.4 Map hostnames to IP addresses

vim /etc/hosts

Add the following:

192.168.190.128 k8s-master

192.168.190.129 k8s-node1

192.168.190.130 k8s-node2
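
A hedged aside not in the original: each node's own hostname should match its /etc/hosts entry, which you can set and verify like this (run the matching command on each node):

hostnamectl set-hostname k8s-master   # on 192.168.190.128; use k8s-node1 / k8s-node2 on the workers
getent hosts k8s-master               # confirm the name resolves to the expected IP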

3.5 Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
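
If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is likely not loaded yet; a sketch of the fix (assuming a systemd-based host) is:

modprobe br_netfilter                                        # load the module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # reload it automatically at boot
sysctl --system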

3.6 Add the Aliyun YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

3.7 Install kubeadm, kubelet, and kubectl

kubelet  # runs on every cluster node; responsible for starting Pods and containers
kubeadm  # initializes the cluster
kubectl  # the Kubernetes command-line tool: deploy and manage applications, inspect resources, and create, delete, or update components

When deploying Kubernetes, the versions on the master node and the worker nodes must match; otherwise the mismatch can cause strange problems. This article shows how to install a specific version of Kubernetes on CentOS with yum.

To install a specific version, yum accepts the following format:

yum install -y kubelet-<version> kubectl-<version> kubeadm-<version>
yum install kubelet-1.19.6 kubeadm-1.19.6 kubectl-1.19.6 -y

Output:

Installed:
  kubeadm.x86_64 0:1.19.6-0  kubectl.x86_64 0:1.19.6-0
  kubelet.x86_64 0:1.19.6-0

3.8 Enable kubelet at boot

kubelet cannot be started yet, because its configuration does not exist until kubeadm init runs; for now, only enable it to start at boot:

systemctl enable kubelet
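
You can confirm the unit is registered for boot; until kubeadm init runs, the kubelet service itself will keep restarting, which is expected at this stage:

systemctl is-enabled kubelet   # should print "enabled"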

3.9 Deploy the Kubernetes Master

3.9.1 Initialize with kubeadm

kubeadm init \
  --apiserver-advertise-address=192.168.190.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.6 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
# --image-repository:            where to pull images from (added in 1.13); the default is k8s.gcr.io, here replaced with the domestic mirror registry.aliyuncs.com/google_containers
# --kubernetes-version:          the Kubernetes version; the default "stable-1" makes kubeadm fetch the latest version number from https://dl.k8s.io/release/stable-1.txt, so pinning a fixed version (v1.19.6) skips that network request
# --apiserver-advertise-address: which of the master's interfaces to use for communication with the other cluster nodes; if the master has several interfaces, specify one explicitly, otherwise kubeadm picks the interface with the default gateway
# --pod-network-cidr:            the Pod network range; different network plugins have their own requirements for --pod-network-cidr, and 10.244.0.0/16 is required here because we will use the flannel network plugin

Output:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.190.128:6443 --token 09eyru.hv3d4mkq2rxgculb \
    --discovery-token-ca-cert-hash sha256:d0141a8504afef937cc77bcac5a67669f42ff32535356953b5ab217d767f85aa

Run this join command on the other two servers later.
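
One note added here: the bootstrap token above expires after 24 hours by default, so if you join the workers later, regenerate the full join command on the master:

kubeadm token create --print-join-command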

3.9.2 Set up the kubectl tool

Run the following commands as-is (on the master):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl commands can now be used directly (on the master).

3.9.3 Install a Pod network add-on (CNI) (master)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or, from a local copy:

kubectl apply -f kube-flannel.yml

Check the status of the system pods:

kubectl get pods -n kube-system
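
Until flannel is up the nodes report NotReady; once the kube-flannel and coredns pods are Running, node status should flip:

kubectl get nodes   # all three nodes should eventually show STATUS Ready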

4 Set Up NFS Storage

4.1 Install the NFS client on every cluster node

yum install -y nfs-utils
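
The provisioner manifests below assume an NFS server at 192.168.190.81 exporting /gis. The original does not show the server-side setup; a minimal sketch for that machine (export path and options are assumptions, tighten them for production) might be:

mkdir -p /gis
echo '/gis *(rw,sync,no_root_squash)' >> /etc/exports   # export to all hosts
systemctl enable --now nfs-server
exportfs -arv   # re-export and list active exports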

4.2 Install the provisioner

kubectl apply -f deploymentnfs.yaml

deploymentnfs.yaml:
apiVersion: apps/v1

kind: Deployment

metadata:

  name: nfs-client-provisioner

  labels:

    app: nfs-client-provisioner

  # replace with namespace where provisioner is deployed

  namespace: arcgis

spec:

  replicas: 1

  strategy:

    type: Recreate

  selector:

    matchLabels:

      app: nfs-client-provisioner

  template:

    metadata:

      labels:

        app: nfs-client-provisioner

    spec:

      serviceAccountName: nfs-client-provisioner

      containers:

        - name: nfs-client-provisioner

          image: quay.io/external_storage/nfs-client-provisioner:latest

          volumeMounts:

            - name: nfs-client-root

              mountPath: /persistentvolumes

          env:

            - name: PROVISIONER_NAME

              value: mynfs

            - name: NFS_SERVER

              value: 192.168.190.81

            - name: NFS_PATH

              value: /gis

      volumes:

        - name: nfs-client-root

          nfs:

            server: 192.168.190.81

            path: /gis
kubectl apply -f rbacnfs.yaml

rbacnfs.yaml:
apiVersion: v1

kind: ServiceAccount

metadata:

  name: nfs-client-provisioner

  # replace with namespace where provisioner is deployed

  namespace: arcgis

---

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: nfs-client-provisioner-runner

rules:

  - apiGroups: [""]

    resources: ["persistentvolumes"]

    verbs: ["get", "list", "watch", "create", "delete"]

  - apiGroups: [""]

    resources: ["persistentvolumeclaims"]

    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]

    resources: ["storageclasses"]

    verbs: ["get", "list", "watch"]

  - apiGroups: [""]

    resources: ["events"]

    verbs: ["create", "update", "patch"]

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: run-nfs-client-provisioner

subjects:

  - kind: ServiceAccount

    name: nfs-client-provisioner

    # replace with namespace where provisioner is deployed

    namespace: arcgis

roleRef:

  kind: ClusterRole

  name: nfs-client-provisioner-runner

  apiGroup: rbac.authorization.k8s.io

---

kind: Role

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: leader-locking-nfs-client-provisioner

  # replace with namespace where provisioner is deployed

  namespace: arcgis

rules:

  - apiGroups: [""]

    resources: ["endpoints"]

    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: RoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: leader-locking-nfs-client-provisioner

  # replace with namespace where provisioner is deployed

  namespace: arcgis

subjects:

  - kind: ServiceAccount

    name: nfs-client-provisioner

    # replace with namespace where provisioner is deployed

    namespace: arcgis

roleRef:

  kind: Role

  name: leader-locking-nfs-client-provisioner

  apiGroup: rbac.authorization.k8s.io
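
After applying both manifests, verify that the provisioner pod is up in its namespace:

kubectl get pods -n arcgis   # nfs-client-provisioner should reach Running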

4.3 Add a StorageClass

kubectl apply -f classnfs.yaml

classnfs.yaml:
apiVersion: storage.k8s.io/v1

kind: StorageClass

metadata:

  name: storage-class-default

provisioner: mynfs # or choose another name; must match the deployment's PROVISIONER_NAME env value

parameters:

  archiveOnDelete: "false"
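
To exercise dynamic provisioning end to end, you can apply a minimal test PVC (a sketch added here; the claim name and size are arbitrary) that references the class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: arcgis
spec:
  storageClassName: storage-class-default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

kubectl get pvc -n arcgis should report the claim Bound once the provisioner creates the backing PV.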

5 Install Ingress

kubectl apply -f mandatory-ingress.yaml

mandatory-ingress.yaml:
apiVersion: v1

kind: Namespace

metadata:

  name: ingress-nginx

---

apiVersion: apps/v1

kind: Deployment

metadata:

  name: default-http-backend

  labels:

    app.kubernetes.io/name: default-http-backend

    app.kubernetes.io/part-of: ingress-nginx

  namespace: ingress-nginx

spec:

  replicas: 1

  selector:

    matchLabels:

      app.kubernetes.io/name: default-http-backend

      app.kubernetes.io/part-of: ingress-nginx

  template:

    metadata:

      labels:

        app.kubernetes.io/name: default-http-backend

        app.kubernetes.io/part-of: ingress-nginx

    spec:

      terminationGracePeriodSeconds: 60

      containers:

        - name: default-http-backend

          # Any image is permissible as long as:

          # 1. It serves a 404 page at /

          # 2. It serves 200 on a /healthz endpoint

          image: 192.168.190.126:5000/defaultbackend-amd64:1.5

          livenessProbe:

            httpGet:

              path: /healthz

              port: 8080

              scheme: HTTP

            initialDelaySeconds: 30

            timeoutSeconds: 5

          ports:

            - containerPort: 8080

          resources:

            limits:

              cpu: 10m

              memory: 20Mi

            requests:

              cpu: 10m

              memory: 20Mi

---

apiVersion: v1

kind: Service

metadata:

  name: default-http-backend

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: default-http-backend

    app.kubernetes.io/part-of: ingress-nginx

spec:

  ports:

    - port: 80

      targetPort: 8080

  selector:

    app.kubernetes.io/name: default-http-backend

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: nginx-configuration

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: tcp-services

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: udp-services

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: nginx-ingress-serviceaccount

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  name: nginx-ingress-clusterrole

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

rules:

  - apiGroups:

      - ""

    resources:

      - configmaps

      - endpoints

      - nodes

      - pods

      - secrets

    verbs:

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - nodes

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - services

    verbs:

      - get

      - list

      - watch

  - apiGroups:

      - "extensions"

    resources:

      - ingresses

    verbs:

      - get

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - events

    verbs:

      - create

      - patch

  - apiGroups:

      - "extensions"

    resources:

      - ingresses/status

    verbs:

      - update

---

apiVersion: rbac.authorization.k8s.io/v1

kind: Role

metadata:

  name: nginx-ingress-role

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

rules:

  - apiGroups:

      - ""

    resources:

      - configmaps

      - pods

      - secrets

      - namespaces

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - configmaps

    resourceNames:

      # Defaults to "<election-id>-<ingress-class>"

      # Here: "<ingress-controller-leader>-<nginx>"

      # This has to be adapted if you change either parameter

      # when launching the nginx-ingress-controller.

      - "ingress-controller-leader-nginx"

    verbs:

      - get

      - update

  - apiGroups:

      - ""

    resources:

      - configmaps

    verbs:

      - create

  - apiGroups:

      - ""

    resources:

      - endpoints

    verbs:

      - get

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: nginx-ingress-role-nisa-binding

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: nginx-ingress-role

subjects:

  - kind: ServiceAccount

    name: nginx-ingress-serviceaccount

    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: nginx-ingress-clusterrole-nisa-binding

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: nginx-ingress-clusterrole

subjects:

  - kind: ServiceAccount

    name: nginx-ingress-serviceaccount

    namespace: ingress-nginx

---

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-ingress-controller

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

spec:

  replicas: 1

  selector:

    matchLabels:

      app.kubernetes.io/name: ingress-nginx

      app.kubernetes.io/part-of: ingress-nginx

  template:

    metadata:

      labels:

        app.kubernetes.io/name: ingress-nginx

        app.kubernetes.io/part-of: ingress-nginx

      annotations:

        prometheus.io/port: "10254"

        prometheus.io/scrape: "true"

    spec:

      hostNetwork: true

      serviceAccountName: nginx-ingress-serviceaccount

      containers:

        - name: nginx-ingress-controller

          image: 192.168.190.126:5000/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0

          args:

            - /nginx-ingress-controller

            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend

            - --configmap=$(POD_NAMESPACE)/nginx-configuration

            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

            - --publish-service=$(POD_NAMESPACE)/ingress-nginx

            - --annotations-prefix=nginx.ingress.kubernetes.io

          securityContext:

            capabilities:

              drop:

                - ALL

              add:

                - NET_BIND_SERVICE

            # www-data -> 33

            runAsUser: 33

          env:

            - name: POD_NAME

              valueFrom:

                fieldRef:

                  fieldPath: metadata.name

            - name: POD_NAMESPACE

              valueFrom:

                fieldRef:

                  fieldPath: metadata.namespace

          ports:

            - name: http

              containerPort: 80

            - name: https

              containerPort: 443

          livenessProbe:

            failureThreshold: 3

            httpGet:

              path: /healthz

              port: 10254

              scheme: HTTP

            initialDelaySeconds: 10

            periodSeconds: 10

            successThreshold: 1

            timeoutSeconds: 1

          readinessProbe:

            failureThreshold: 3

            httpGet:

              path: /healthz

              port: 10254

              scheme: HTTP

            periodSeconds: 10

            successThreshold: 1

            timeoutSeconds: 1

---
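
Once the controller pod is Running (kubectl get pods -n ingress-nginx), traffic reaches it directly on the node's ports 80 and 443 because the deployment sets hostNetwork: true. As an illustrative sketch only (the host and backend Service are hypothetical), an Ingress resource for this 0.20.0-era controller would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-service   # hypothetical Service in the same namespace
              servicePort: 80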
