
Deployment environment

  • Ubuntu 18.04.5 LTS
  • Python 3.6.9
  • Django 2.2.14
  • nginx 1.14.0
  • gunicorn 20.0.4

1. Create a virtual environment

Create the virtual environment under the root user's home directory. For a production deployment, it is better to keep virtual environments in a dedicated directory under /.

root@Thortest:~# pip3 install virtualenv

root@Thortest:~# virtualenv tweb_env


root@Thortest:~# source tweb_env/bin/activate # activate the virtualenv; use deactivate to exit
(tweb_env) root@Thortest:~# python
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
(tweb_env) root@Thortest:~# cd /data/Twebpool # switch to the project directory
(tweb_env) root@Thortest:/data/Twebpool# pip install -r requirements.txt # install the dependencies in the current virtualenv
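To double-check that the active interpreter really is the virtualenv's, a quick probe from the Python prompt (a minimal sketch; the /root/tweb_env path comes from the setup above):

import sys

# Inside the activated env, sys.prefix points at the virtualenv;
# outside it, it points at the system installation.
print(sys.prefix)        # expected: /root/tweb_env
print(sys.base_prefix)   # the underlying system Python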

2. Write the gunicorn startup script

Create gunicorn.sh in the project folder, in the same directory as manage.py:

├── db.sqlite3
├── gunicorn.sh
├── logs
│   ├── gunicorn.info.log
│   └── web.log
├── manage.py
├── pool

gunicorn.sh

#!/bin/bash
set -e
TIMEOUT=300 # to avoid timeouts when uploading large app packages

# Twebpool is the project name; the gunicorn path must be the absolute
# path inside the virtualenv.
exec /root/tweb_env/bin/gunicorn Twebpool.wsgi:application -w 6 \
-b 0.0.0.0:8000 \
--max-requests 10000 \
--timeout=300 \
--access-logfile=server.access.log \
--error-logfile=server.error.log
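The worker count -w 6 is a per-machine tuning choice. A common rule of thumb from the gunicorn docs (not from this article) is (2 x cores) + 1, which you can compute like this:

import multiprocessing

# Gunicorn's suggested starting point for the -w worker count.
print(multiprocessing.cpu_count() * 2 + 1)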

Make the script executable

root@Thortest:/data/Twebpool# chmod +x gunicorn.sh

3. Create the supervisor configuration

Install supervisor and create a configuration file

root@Thortest:/data/Twebpool# apt-get install supervisor
root@Thortest:/data/Twebpool# vim /etc/supervisor/conf.d/web.conf
[program:web]
directory=/data/Twebpool/
command=/bin/sh gunicorn.sh
stopsignal=QUIT
autostart=true
autorestart=true
stdout_logfile=/data/Twebpool/logs/gunicorn.info.log
stderr_logfile=/data/Twebpool/logs/gunicorn.error.log
redirect_stderr=true

Start the project

root@Thortest:/data/Twebpool# supervisorctl 
supervisor> reread
supervisor> update # update starts the program by default; use stop|start|restart as needed
supervisor> status
web RUNNING pid 305, uptime 0:0:47
supervisor>

4. Configure nginx

x.x.x.x is the server's domain name or public IP

(tweb_env) root@Thortest:~# vim /etc/nginx/conf.d/api.conf 
server {
    listen 80;
    server_name x.x.x.x;
    keepalive_timeout 300;

    location / {
        client_max_body_size 100m;
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:8000;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_send_timeout 15000;

        proxy_buffer_size 1024k;
        proxy_buffers 4 1024k;
        proxy_busy_buffers_size 1024k;
        proxy_temp_file_write_size 1024k;
    }
}

Test the nginx configuration

PS: always run nginx -t after changing the configuration; it is a habit worth keeping.

(tweb_env) root@Thortest:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
(tweb_env) root@Thortest:~#

Restart or reload

(tweb_env) root@Thortest:~# systemctl restart nginx.service # reload applies the new config without restarting; restart restarts the service

5. Access test

Running crontab jobs at second-level intervals

1. Put the command in a script

run.sh

#!/bin/bash
step=10 # interval in seconds; must not exceed 60

for (( i = 0; i < 60; i=(i+step) )); do
    cd /data/Twebpool/script && /root/tweb_env/bin/python blocks_all.py >>/tmp/blocks.txt # run the job
    sleep $step # wait for the interval
done

exit 0
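For comparison, here is the same second-level loop as a Python sketch, assuming the paths used in this article:

import subprocess
import time

STEP = 10  # seconds between runs; must not exceed 60

# One invocation covers a single minute: run the job every STEP seconds,
# then exit so the next cron-launched run takes over.
for _ in range(60 // STEP):
    subprocess.run(
        ["/root/tweb_env/bin/python", "blocks_all.py"],
        cwd="/data/Twebpool/script",
    )
    time.sleep(STEP)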

2. Make the script executable

root@Thortest:/data/Twebpool/script# chmod +x run.sh

3. Set up crontab

*  *  *  *  *   /data/Twebpool/script/run.sh

How it works

Cron fires the script once per minute; inside the script, the command repeats every step seconds, which yields second-level scheduling. With step=10, for example, the job runs at roughly 0, 10, 20, 30, 40 and 50 seconds past each minute.

Problem

Opening a DRF URL in the browser renders an HTML page by default. Since the static assets are not being served, this is how the page looks after deployment.

So how do we make JSON the default?

Solution

Add the following to settings.py:

REST_FRAMEWORK = {
    'DEFAULT_PARSER_CLASSES': (
        'rest_framework.parsers.JSONParser',
    ),
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    ),
}

Restart the project.

Now it returns exactly the result I wanted.
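A quick way to confirm the renderer change from the command line; the endpoint path below is hypothetical:

import requests

# With JSONRenderer as the only default renderer, the response is
# application/json even for a plain browser-style GET.
r = requests.get("http://127.0.0.1:8000/api/blocks/")  # hypothetical URL
print(r.headers.get("Content-Type"))  # expect: application/json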


Ordering in DRF

DRF's default ordering only supports ascending or descending. When fetching the most recent records, the result comes back newest-first. Recording the solution to this problem here.

Requirement: fetch the most recent 100 records (a fixed number of rows) and return them in ascending order.

I referenced the implementation of self.list:

class ListModelMixin:
    """
    List a queryset.
    """
    def list(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())

        page = self.paginate_queryset(queryset)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            return self.get_paginated_response(serializer.data)

        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data)

Simply reverse the list with [::-1] when returning the serialized data; the modified code:

class BlocksDiffView(generics.ListAPIView):
    serializer_class = block_s.BlocksDiffSerializer

    def get_queryset(self):
        queryset = block_m.NetworkStats.objects.order_by('-height')
        return queryset

    # Fetch the most recent records, then return them in ascending order.
    def get(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())

        page = self.paginate_queryset(queryset)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            return self.get_paginated_response(serializer.data[::-1])

        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data)
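Note that how many rows "the most recent" means here is governed by the paginator's page size. An alternative sketch (not the article's code, but using its model and serializer names) slices the queryset to the newest 100 rows and restores ascending order before serialization, so the serialized data needs no reversing:

from rest_framework import generics

class BlocksDiffAscView(generics.ListAPIView):  # hypothetical view name
    serializer_class = block_s.BlocksDiffSerializer

    def get_queryset(self):
        # Newest 100 rows first, then flipped back to ascending order,
        # so the default list() needs no [::-1] on serialized data.
        newest = block_m.NetworkStats.objects.order_by('-height')[:100]
        return sorted(newest, key=lambda row: row.height)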

Problem 1: "Image not found" error

After a long time on Google (this issue is genuinely rare; perhaps not many people use PostgreSQL this way), I finally found the answer in the reference link below.

Traceback (most recent call last):
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 20, in <module>
    import psycopg2 as Database
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import (  # noqa
ImportError: dlopen(/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/psycopg2/_psycopg.cpython-35m-darwin.so, 2): Library not loaded: libssl.1.1.dylib
  Referenced from: /Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/psycopg2/_psycopg.cpython-35m-darwin.so
  Reason: image not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pycharm/django_manage.py", line 43, in <module>
    run_module(manage_file, None, '__main__', True)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/luodi/PycharmProjects/Twebpool/manage.py", line 21, in <module>
    main()
  File "/Users/luodi/PycharmProjects/Twebpool/manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/commands/inspectdb.py", line 34, in handle
    for line in self.handle_inspection(options):
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/core/management/commands/inspectdb.py", line 40, in handle_inspection
    connection = connections[options['database']]
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/db/utils.py", line 201, in __getitem__
    backend = load_backend(db['ENGINE'])
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/db/utils.py", line 110, in load_backend
    return import_module('%s.base' % backend_name)
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 986, in _gcd_import
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 673, in exec_module
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
  File "/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 24, in <module>
    raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: dlopen(/Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/psycopg2/_psycopg.cpython-35m-darwin.so, 2): Library not loaded: libssl.1.1.dylib
  Referenced from: /Users/luodi/project_vir/july_3.5/lib/python3.5/site-packages/psycopg2/_psycopg.cpython-35m-darwin.so
  Reason: image not found

Solution

1. Disable macOS SIP protection

1. Reboot the Mac while holding Command+R until it enters recovery mode.

2. Choose an account, then open the Terminal from the Utilities menu in the top toolbar.

3. Run the command csrutil disable.

4. Reboot normally (not into recovery mode) and run sudo mount -uw /

2. Copy the library files to /usr/lib/

luodi@roddydeMacBook-Pro:~$ sudo cp /Library/PostgreSQL/10/lib/libcrypto.1.1.dylib /usr/lib/libcrypto.1.1.dylib

luodi@roddydeMacBook-Pro:~$ sudo cp /Library/PostgreSQL/10/lib/libssl.1.1.dylib /usr/lib/libssl.1.1.dylib
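To verify the fix, importing psycopg2 from the same environment should now succeed without the dlopen error:

# Run inside the virtualenv that previously failed.
import psycopg2
print(psycopg2.__version__)  # prints the version string once libssl resolves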

Problem 2

Importing psycopg2 raises an error:

Expected in: /usr/lib/libpq.5.dylib

Solution

luodi@roddydeMacBook-Pro:~$ sudo mv /usr/lib/libpq.5.dylib /usr/lib/libpq.5.dylib.old  
Password:
luodi@roddydeMacBook-Pro:~$ sudo cp /Library/PostgreSQL/10/lib/libpq.5.dylib /usr/lib/libpq.5.dylib


YAML

YAML is a language designed for writing configuration files. It is concise and powerful, and far more convenient than the JSON format. It is essentially a general-purpose data serialization format.

It supports three data structures:

  • Objects: collections of key-value pairs, also known as mappings / hashes / dictionaries
  • Arrays: ordered sequences of values, also known as sequences / lists
  • Scalars: single, indivisible values

YAML rules

  • Case sensitive
  • Indentation expresses hierarchy
  • Tabs are not allowed for indentation; only spaces
  • The exact number of spaces does not matter, as long as elements at the same level are aligned

Data structures

Objects

key: value pairs, written with a colon:

username: roddy
password: "ccccccc"

Converted to JSON:


{
  "username": "roddy",
  "password": "ccccccc"
}

Arrays

Lines beginning with a dash "-":

- Python
- Golang
- JAVA

Converted to JSON:

[
  "Python",
  "Golang",
  "JAVA"
]

Composite structures

username: roddy
password: "passwd"
server_ip:
  - 192.168.1.1
  - 192.168.2.2
  - 192.168.2.3
server_type:
  web: "nginx"
  db: "mysql"

Converted to JSON:

{
  "username": "roddy",
  "password": "passwd",
  "server_ip": [
    "192.168.1.1",
    "192.168.2.2",
    "192.168.2.3"
  ],
  "server_type": {
    "web": "nginx",
    "db": "mysql"
  }
}

A fuller YAML sample

A handy online YAML-to-JSON converter: https://www.json2yaml.com/convert-yaml-to-json
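The same conversion also works locally with PyYAML (pip install pyyaml) and the standard json module; a minimal sketch using the composite structure from above:

import json
import yaml

doc = """
username: roddy
password: "passwd"
server_ip:
  - 192.168.1.1
  - 192.168.2.2
server_type:
  web: "nginx"
  db: "mysql"
"""

data = yaml.safe_load(doc)         # YAML -> Python dicts/lists
print(json.dumps(data, indent=2))  # Python -> JSON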

YAML:

name:
  - xiaomi
  - huawei
yaml:
  - slim and flexible
  - better for configuration
object:
  key: value
  array:
    - null_value:
    - boolean: true
    - integer: 1

Converted to JSON:

{
  "name": [
    "xiaomi",
    "huawei"
  ],
  "yaml": [
    "slim and flexible",
    "better for configuration"
  ],
  "object": {
    "key": "value",
    "array": [
      {
        "null_value": null
      },
      {
        "boolean": true
      },
      {
        "integer": 1
      }
    ]
  }
}

Creating a Pod with YAML

Kubernetes resources are created declaratively, which is where YAML files come in. Resources such as Pods, Services and Deployments are created from YAML files, and we will be writing plenty of them to deploy services later on.

Example:

[root@k8s-master ~]# vim pod-busybox.yaml
apiVersion: v1 # API version
kind: Pod # resource type
metadata:
  name: myapp-pod # name
  labels:
    app: myapp # label
spec:
  containers:
  - name: myapp-container # container name
    image: busybox # image name
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

[root@k8s-master ~]# kubectl create -f pod-busybox.yaml
pod/myapp-pod created
[root@k8s-master ~]# kubectl get pod
NAME        READY   STATUS              RESTARTS   AGE
myapp-pod   0/1     ContainerCreating   0          11s   # pod is being created

[root@k8s-master ~]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          21s   # status is now Running
[root@k8s-master ~]#

Configuration file explained

  • apiVersion: the API version; v1 is the stable version. Use the command below to see the available versions.
  • kind: the resource object to create, here Pod. Other options: Pod, ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob, HorizontalPodAutoscaling
  • metadata: metadata; may contain multiple entries
  • spec: the concrete settings of the resource object; containers is the set of containers, and more than one may be defined

List the currently available API versions

Kubernetes 1.19.3

[root@k8s-master ~]# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

For guidance on which apiVersion to use, see: https://www.jianshu.com/p/457cf0835f88

Pod YAML format reference

Use the kubectl explain pods.spec.containers command to look up the available fields

[root@k8s-master ~]# kubectl explain pods.spec.containers
KIND:     Pod
VERSION:  v1

RESOURCE: containers <[]Object>

DESCRIPTION:
     List of containers belonging to the pod. Containers cannot currently be
     added or removed. There must be at least one container in a Pod. Cannot be
     updated.

     A single application container that you want to run within a pod.

FIELDS:
   args <[]string>
     Arguments to the entrypoint. The docker image's CMD is used if this is not
     provided. Variable references $(VAR_NAME) are expanded using the
     container's environment. If a variable cannot be resolved, the reference in
     the input string will be unchanged. The $(VAR_NAME) syntax can be escaped
     with a double $$, ie: $$(VAR_NAME). Escaped references will never be
     expanded, regardless of whether the variable exists or not. Cannot be
     updated. More info:
     https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

   command <[]string>
     Entrypoint array. Not executed within a shell. The docker image's
     ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
     are expanded using the container's environment. If a variable cannot be
     resolved, the reference in the input string will be unchanged. The
     $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
     Escaped references will never be expanded, regardless of whether the
     variable exists or not. Cannot be updated. More info:
     https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

   env <[]Object>
     List of environment variables to set in the container. Cannot be updated.

   envFrom <[]Object>
     List of sources to populate environment variables in the container. The
     keys defined within a source must be a C_IDENTIFIER. All invalid keys will
     be reported as an event when the container is starting. When a key exists
     in multiple sources, the value associated with the last source will take
     precedence. Values defined by an Env with a duplicate key will take
     precedence. Cannot be updated.

   image <string>
     Docker image name. More info:
     https://kubernetes.io/docs/concepts/containers/images This field is
     optional to allow higher level config management to default or override
     container images in workload controllers like Deployments and StatefulSets.
......

Basic operations

List all running pods

[root@k8s-master ~]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          21s

Get a single pod by name; add -w to watch continuously

[root@k8s-master ~]# kubectl get pod myapp-pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          21m

[root@k8s-master ~]# kubectl get pod -w
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          24m

Get details, including which node the pod was scheduled onto

kubectl get pod {pod name} -o wide

[root@k8s-master ~]# kubectl get pod myapp-pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
myapp-pod   1/1     Running   0          21m   10.244.2.6   k8s-node2   <none>           <none>
[root@k8s-master ~]#

View a pod's log output

[root@k8s-master ~]# kubectl logs myapp-pod
Hello Kubernetes!
[root@k8s-master ~]#

Get even more detailed pod data

[root@k8s-master ~]# kubectl describe pods myapp-pod
Name:         myapp-pod
Namespace:    default
Priority:     0
Node:         k8s-node2/172.19.153.99
Start Time:   Fri, 23 Oct 2020 14:45:32 +0800
Labels:       app=myapp
Annotations:  <none>
Status:       Running
IP:           10.244.2.6
IPs:
  IP:  10.244.2.6
Containers:
  myapp-container:
    Container ID:  docker://3075475b43e69240d392eec14f4fa67bfed524fedbb0f505c56d257ff916080f
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:a9286defaba7b3a519d585ba0e37d0b2cbee74ebfe590960b0b1d6a5e97d1e1d
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo Hello Kubernetes! && sleep 3600
    State:          Running
      Started:      Fri, 23 Oct 2020 14:45:48 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4dfgr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4dfgr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4dfgr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27m   default-scheduler  Successfully assigned default/myapp-pod to k8s-node2
  Normal  Pulling    27m   kubelet            Pulling image "busybox"
  Normal  Pulled     27m   kubelet            Successfully pulled image "busybox" in 16.092616692s
  Normal  Created    27m   kubelet            Created container myapp-container
  Normal  Started    27m   kubelet            Started container myapp-container

The get command supports formatted output as json or yaml

[root@k8s-master ~]# kubectl get pod myapp-pod --output yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-10-23T06:45:31Z"
  labels:
    app: myapp
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
..............................
    startedAt: "2020-10-23T06:45:48Z"
  hostIP: 172.19.153.99
  phase: Running
  podIP: 10.244.2.6
  podIPs:
  - ip: 10.244.2.6
  qosClass: BestEffort
  startTime: "2020-10-23T06:45:32Z"
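These queries can also be scripted. A minimal sketch with the official Python client (pip install kubernetes), assuming the same kubeconfig kubectl uses:

from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, just like kubectl
v1 = client.CoreV1Api()

# Rough equivalent of `kubectl get pod` in the default namespace.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase, pod.status.pod_ip)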


Overview

Kubernetes, K8s for short, is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support and tooling are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014; it combines more than 15 years of Google's experience running production workloads at scale with the best ideas and practices from the community.


Kubernetes features

  • Portable: supports public cloud, private cloud, hybrid cloud and multi-cloud
  • Extensible: modular, pluggable, hookable, composable
  • Automated: automatic deployment, restarts, replication and scaling

Official website: https://kubernetes.io

Architecture

Kubernetes defines two roles: Master (the management node) and Node (the worker node).
The Master manages the Nodes and is responsible for controlling the whole cluster, acting as its brain. Nodes run the containers and provide the environment they need, with workloads assigned by the Master.

Master (management node)

The master consists of the following components; for high availability you can run more than one master.

1. kube-apiserver

kube-apiserver exposes the Kubernetes API and is the single entry point for operations on resources. Every operation on a resource goes through the API Server process, which also provides a range of authentication and authorization mechanisms.

The API Server is served by the kube-apiserver process running on the master. By default it serves REST on local port 8080, and it can additionally listen on an HTTPS port (--secure-port=6443) to secure access to the REST API.

2. etcd

etcd is a lightweight distributed key-value store and one of the most important components of the cluster. It stores all of the cluster's network configuration and object state. It is Kubernetes' default storage system, and only the API Server process may access it.

3. kube-scheduler

The scheduler assigns Pod resources to nodes: it watches for newly created Pods that have not yet been assigned a node and places each on a suitable node according to the scheduling algorithm. Think of it as finding a suitable home for the Pod.

Factors the scheduler weighs

  • Resource requirements
  • Hardware/software constraints
  • Current load, and so on

4. kube-controller-manager

kube-controller-manager runs the controllers, the background threads that handle the cluster's routine tasks. Logically each controller is a separate process, but to reduce complexity they are compiled into a single binary and run in a single process.

These controllers include:

  • Node controller: notices and responds when nodes fail.

  • Replication controller: maintains the correct number of pods for every ReplicationController object in the system.

  • Endpoints controller: creates and maintains all Endpoint objects, watching services and their pod replicas for changes.

  • Service Account & Token controllers: create default accounts and API access tokens for new namespaces.

Node (worker node)

1. kubelet

kubelet is the node agent that runs on every node. Each node's kubelet periodically calls the API server on the master to report status. It manages the pods and their containers, images, volumes, etc., making sure containers are running inside their pods.

2. kube-proxy

kube-proxy is, as the name suggests, the node's network proxy, responsible for forwarding requests. It can do simple TCP, UDP and SCTP stream forwarding, or round-robin TCP, UDP and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. An optional addon provides cluster DNS for these cluster IPs. Services must be created through the apiserver API for the proxy to be configured.

3. Container runtime

The container runtime is the software responsible for running the containers. Several runtimes are supported; Docker is the most common and currently the best-proven combination, though better runtimes may well emerge.

Supported container runtimes:

  • Docker
  • CRI-O
  • rktlet
  • containerd
  • any Kubernetes CRI (Container Runtime Interface) implementation

Interaction flow

The flow, step by step (a watch-based sketch follows the list):

  • 1. A user creates a Pod through the REST API
  • 2. The API Server validates the request and persists it to etcd
  • 3. etcd notifies the API Server
  • 4. The scheduler detects the unbound Pod, schedules it, binds it to a node and writes the result back to etcd
  • 5. The kubelet on that node detects the newly scheduled Pod and creates the containers
  • 6. The kubelet reports the Pod's status to the API Server
  • 7. The API Server persists the latest status to etcd
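The list-watch mechanism behind steps 4 to 6 can be observed directly. A hedged sketch with the official Python client (pip install kubernetes), assuming a working kubeconfig:

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream pod events from the API server; this is the same mechanism the
# scheduler and kubelet use to react to newly created Pods.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    print(event["type"], event["object"].metadata.name)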


Installation

Building a Kubernetes cluster with kubeadm

Cluster node plan

k8s-master   172.19.153.97   CentOS 7.8   2 cores / 4 GB
k8s-node1    172.19.153.98   CentOS 7.8   2 cores / 4 GB
k8s-node2    172.19.153.99   CentOS 7.8   2 cores / 4 GB

Environment preparation

1. Set the hostnames

Set the hostname on each machine.
On the master node:

[root@iZ2zefbuojpotsgnr3i6weZ ~]# hostnamectl set-hostname k8s-master
[root@iZ2zefbuojpotsgnr3i6weZ ~]# hostname
k8s-master
[root@iZ2zefbuojpotsgnr3i6weZ ~]#

On node1:

[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostnamectl set-hostname k8s-node1
[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostname
k8s-node1
[root@iZ2zefbuojpotsgnr3i6wfZ ~]#

On node2:

[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostnamectl set-hostname k8s-node2
[root@iZ2zefbuojpotsgnr3i6wfZ ~]# hostname
k8s-node2
[root@iZ2zefbuojpotsgnr3i6wfZ ~]#

2. Add hosts entries

Run on: all nodes

cat >>/etc/hosts<<EOF
172.19.153.97 k8s-master
172.19.153.98 k8s-node1
172.19.153.99 k8s-node2
EOF

Test with ping:

[root@k8s-master ~]# 
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (172.19.153.98) 56(84) bytes of data.
64 bytes from k8s-node1 (172.19.153.98): icmp_seq=1 ttl=64 time=0.342 ms
64 bytes from k8s-node1 (172.19.153.98): icmp_seq=2 ttl=64 time=0.221 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.221/0.281/0.342/0.062 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (172.19.153.99) 56(84) bytes of data.
64 bytes from k8s-node2 (172.19.153.99): icmp_seq=1 ttl=64 time=0.357 ms
64 bytes from k8s-node2 (172.19.153.99): icmp_seq=2 ttl=64 time=0.205 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.205/0.281/0.357/0.076 ms
[root@k8s-master ~]#

3. System tuning

Run on: all nodes

Firewall configuration:
if there are no security-group restrictions between the nodes, you can skip this. Otherwise open the following ports:
k8s-master: tcp 6443, 2379, 2380, 60080, 60081; all udp open
k8s-node: udp open

iptables -P FORWARD ACCEPT

Disable SELinux and the firewall

sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
systemctl stop firewalld && systemctl disable firewalld

Check that every node reports Disabled; Alibaba Cloud images ship with SELinux disabled by default.

[root@k8s-master ~]# getenforce  # check SELinux status
Disabled

Disable the swap partition

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab # prevent swap from mounting at boot

Adjust kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

4. Configure the yum repos

Configure the default yum repos for Docker, the base OS and Kubernetes

curl -o /etc/yum.repos.d/CentOS-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache

5. Install Docker

Run on: all nodes

Get your Aliyun registry accelerator (mirror) address, then:

# install the latest version
yum install docker-ce

# edit the docker daemon config file
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://rsgc4jk0.mirror.aliyuncs.com"]
}
EOF

# start docker
systemctl enable docker && systemctl start docker

Check that it started

[root@k8s-master ~]# systemctl  status docker-ce
Unit docker-ce.service could not be found.
[root@k8s-master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-10-20 18:01:20 CST; 1min 13s ago
Docs: https://docs.docker.com
Main PID: 11667 (dockerd)
Tasks: 10
Memory: 38.2M
CGroup: /system.slice/docker.service
└─11667 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767736680+08:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767752808+08:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/co...odule=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.767764079+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.796876109+08:00" level=info msg="Loading containers: start."
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.885563057+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...P address"
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.928135025+08:00" level=info msg="Loading containers: done."
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.943819098+08:00" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 v...n=19.03.13
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.943914032+08:00" level=info msg="Daemon has completed initialization"
Oct 20 18:01:20 k8s-master dockerd[11667]: time="2020-10-20T18:01:20.968326432+08:00" level=info msg="API listen on /var/run/docker.sock"
Oct 20 18:01:20 k8s-master systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

Installing and deploying Kubernetes

Check the installable versions; the latest at the time of writing is 1.19.3.

[root@k8s-master ~]# yum list kubelet --showduplicates | tail 
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
kubelet.x86_64   1.18.4-1    kubernetes
kubelet.x86_64   1.18.5-0    kubernetes
kubelet.x86_64   1.18.6-0    kubernetes
kubelet.x86_64   1.18.8-0    kubernetes
kubelet.x86_64   1.18.9-0    kubernetes
kubelet.x86_64   1.18.10-0   kubernetes
kubelet.x86_64   1.19.0-0    kubernetes
kubelet.x86_64   1.19.1-0    kubernetes
kubelet.x86_64   1.19.2-0    kubernetes
kubelet.x86_64   1.19.3-0    kubernetes
[root@k8s-master ~]#

1. Install kubeadm, kubelet and kubectl

Run on: all nodes

# install
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 -y

# check the version
kubeadm version

# enable at boot
systemctl enable kubelet

2. Generate the kubeadm configuration

Run on: master only

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm.yaml

Modify three settings:

[root@k8s-master ~]# vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.153.97 # master IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # Aliyun mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # pod subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3. Pull the images

List the images that will be needed

[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml 
W1020 18:41:31.595354 12031 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

Download the images with pull

[root@k8s-master ~]# kubeadm config images pull --config kubeadm.yaml     
W1020 18:47:17.290279 12065 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
[root@k8s-master ~]#

4. Run kubeadm init

Run on: master only

[root@k8s-master ~]# kubeadm init --config 
flag needs an argument: --config
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ~]# kubeadm init --config kubeadm.yaml
W1020 18:55:45.497737 12312 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster

.......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# use the command below to join nodes to the cluster
kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[root@k8s-master ~]#

Run the commands from the output above:

[root@k8s-master ~]#  mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join the nodes to the cluster

Run on: the node machines

node1

[root@k8s-node1 ~]# kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]#

node2

[root@k8s-node2 ~]# kubeadm join 172.19.153.97:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:31a4ae4022fed0410031c099424152ff3bf7a91d95a7e67f66b17fe3e1372e02
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]#

Check on the master: both nodes have joined, but they are still in NotReady state.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   19m    v1.19.3
k8s-node1    NotReady   <none>   2m7s   v1.19.3
k8s-node2    NotReady   <none>   2m     v1.19.3
[root@k8s-master ~]#

6. Install flannel

Download the flannel YAML file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Apply the file to create flannel

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check that flannel was created

[root@k8s-master ~]# docker ps -a | grep flannel
b3a28fe53910 e708f4bb69e3 "/opt/bin/flanneld -…" 3 minutes ago Up 3 minutes k8s_kubeflannel_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
f6e5f8546895 quay.io/coreos/flannel "cp -f /etc/kube-fla…" 3 minutes ago Exited (0) 3 minutes ago k8s_install-cni_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
6a3fb130f7b3 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-flannel-ds-nvrdd_kube-system_d4d504b3-b579-4e90-94cf-3584da4395ec_0
[root@k8s-master ~]#

7. Verify the cluster state

Wait a minute or two, then check again

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   52m   v1.19.3
k8s-node1    Ready    <none>   35m   v1.19.3
k8s-node2    Ready    <none>   35m   v1.19.3
[root@k8s-master ~]#

8. Deploy the Dashboard

Download the YAML config; for the latest dashboard releases see https://github.com/kubernetes/dashboard/releases

Mind version compatibility:

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
--2020-10-20 20:16:00-- https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7552 (7.4K) [text/plain]
Saving to: ‘recommended.yaml’

100%[===================================================================================================================================>] 7,552 --.-K/s in 0.1s

2020-10-20 20:16:02 (53.8 KB/s) - ‘recommended.yaml’ saved [7552/7552]

Edit the YAML to expose the service as NodePort (in the Service section):

spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort # turn it into a NodePort service

You can also pin the port to a fixed value; if you do not, one is allocated dynamically.

type: NodePort
ports:
  - port: 443
    targetPort: 8443
    nodePort: 30443 # externally exposed port

Apply the config and check

[root@k8s-master ~]# kubectl  apply -f recommended.yaml
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.20.216     <none>        8000/TCP        32m
kubernetes-dashboard        NodePort    10.107.133.135   <none>        443:30078/TCP   32m
[root@k8s-master ~]#

Visit: https://x.x.x.x:30078

The login page offers two authentication methods.

9. Create a token

[root@k8s-master ~]# kubectl create serviceaccount  dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-cpmgn
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 1d68fc57-c316-4e5a-96bf-e2524053c2b6

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[root@k8s-master ~]#

Enter the token generated above into the Token field.

Login succeeds.


Chrome privacy error NET::ERR_CERT_AUTHORITY_INVALID: "Attackers might be trying to steal your information from xx (for example, passwords, messages, or credit cards)."

Chrome reports "Your connection is not private" when visiting the https page (a privacy error).


Fix: with the error page focused, type thisisunsafe on the keyboard (not in the address bar); the page refreshes automatically and loads the site.


InfluxDB introduction

InfluxDB is a time-series database designed to handle heavy write and query loads. It is the open-source time-series database component of the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor) and offers a SQL-like query language called InfluxQL for interacting with data. It comes in an open-source edition and an enterprise edition; see the official site for pricing: https://www.influxdata.com/products/influxdb-overview/

Main features:

  • Built-in HTTP API, easy to use
  • Data can be tagged, making queries very flexible
  • SQL-like query language
  • Simple to install and manage, with fast reads and writes
  • Real-time queries: data is indexed as it is written and immediately queryable
  • ...

Installation

The target system is CentOS 7.

1. Download
https://portal.influxdata.com/downloads/

After selecting 1.8.3, the page shows the install commands below.

2. Install

[root@iZ2zecgq3cou36re3sxh4bZ ~]# wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.3.x86_64.rpm # download
--2020-10-10 10:14:53--  https://dl.influxdata.com/influxdb/releases/influxdb-1.8.3.x86_64.rpm
Resolving dl.influxdata.com (dl.influxdata.com)... 13.227.75.119, 13.227.75.14, 13.227.75.22, ...
Connecting to dl.influxdata.com (dl.influxdata.com)|13.227.75.119|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 64097226 (61M) [application/octet-stream]
Saving to: ‘influxdb-1.8.3.x86_64.rpm’

100%[===========================================================================================================================>] 64,097,226 2.62MB/s in 27s

2020-10-10 10:15:20 (2.28 MB/s) - ‘influxdb-1.8.3.x86_64.rpm’ saved [64097226/64097226]

[root@iZ2zecgq3cou36re3sxh4bZ ~]# sudo yum localinstall influxdb-1.8.3.x86_64.rpm # install
[root@iZ2zecgq3cou36re3sxh4bZ ~]#

Network ports

By default, InfluxDB uses the following ports:

  • TCP port 8086: client-server communication over the InfluxDB HTTP API
  • TCP port 8088: the RPC service for backup and restore operations

Installed files

/etc/influxdb/influxdb.conf    # configuration file
/etc/logrotate.d/influxdb      # log rotation config
/usr/bin/influx                # CLI command
/usr/bin/influx_inspect
/usr/bin/influx_stress
/usr/bin/influx_tsm
/usr/bin/influxd               # server binary
/usr/lib/influxdb/scripts/influxdb.service # service unit file
/usr/lib/influxdb/scripts/init.sh
/usr/share/man/man1/influx.1.gz
/usr/share/man/man1/influx_inspect.1.gz
/usr/share/man/man1/influx_stress.1.gz
/usr/share/man/man1/influx_tsm.1.gz
/usr/share/man/man1/influxd-backup.1.gz
/usr/share/man/man1/influxd-config.1.gz
/usr/share/man/man1/influxd-restore.1.gz
/usr/share/man/man1/influxd-run.1.gz
/usr/share/man/man1/influxd-version.1.gz
/usr/share/man/man1/influxd.1.gz
/var/lib/influxdb              # data directory
/var/log/influxdb              # log directory

Configuration file

[meta]
# Where the metadata/raft database is stored
dir = "/var/lib/influxdb/meta"

# Automatically create a default retention policy when creating a database.
# retention-autocreate = true

# If log messages are printed for the meta service
# logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
# The directory where the TSM storage engine stores TSM files.
dir = "/var/lib/influxdb/data" # 数据存储目录

# The directory where the TSM storage engine stores WAL files.
wal-dir = "/var/lib/influxdb/wal" # wal 数据存储目录

# The amount of time that a write will wait before fsyncing. A duration
# greater than 0 can be used to batch up multiple fsync calls. This is useful for slower
# disks or when WAL write contention is seen. A value of 0s fsyncs every write to the WAL.
# Values in the range of 0-100ms are recommended for non-SSD disks.
# wal-fsync-delay = "0s"


# The type of shard index to use for new shards. The default is an in-memory index that is
# recreated at startup. A value of "tsi1" will use a disk based index that supports higher
# cardinality datasets.
# index-version = "inmem"

# Trace logging provides more verbose output around the tsm engine. Turning
# this on can provide more useful output for debugging tsm engine issues.
# trace-logging-enabled = false

# Whether queries should be logged before execution. Very useful for troubleshooting, but will
# log any sensitive data contained within a query.
# query-log-enabled = true

# Validates incoming writes to ensure keys only have valid unicode characters.
# This setting will incur a small overhead because every key must be checked.
# validate-keys = false

# Settings for the TSM engine

# CacheMaxMemorySize is the maximum size a shard's cache can
# reach before it starts rejecting writes.
# Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
# Values without a size suffix are in bytes.
cache-max-memory-size = "1g" # 最大缓存大小,可以设置k,m,g

[coordinator]
# The default time a write request will wait until a "timeout" error is returned to the caller.
write-timeout = "10s"

# The maximum number of concurrent queries allowed to be executing at one time. If a query is
# executed and exceeds this limit, an error is returned to the caller. This limit can be disabled
# by setting it to 0.
#
# max-concurrent-queries caps how many queries may run at once; zero means no limit.
# If you exceed it, the server returns:
# ERR: max concurrent queries reached
#
max-concurrent-queries = 0

# The maximum time a query will is allowed to execute before being killed by the system. This limit
# can help prevent run away queries. Setting the value to 0 disables the limit.
#
# query-timeout sets how long a query may run before InfluxDB kills it and returns:
# ERR: query timeout reached
# With continuous queries configured, it is best not to set query-timeout: as data grows, CQs take longer to run and would fail to complete.
query-timeout = "0"

# The time threshold when a query will be logged as a slow query. This limit can be set to help
# discover slow or resource intensive queries. Setting the value to 0 disables the slow query logging.
#
# log-queries-after marks statements running longer than this as slow queries; 0 disables slow-query logging.
# For example, with "1s", any statement that takes more than one second is flagged as slow and logged.
#
log-queries-after = "10s"

[http]
# Determines whether HTTP endpoint is enabled.
# enabled = true

# The bind address used by the HTTP service.
bind-address = ":8066"

# Determines whether user authentication is enabled over HTTP/HTTPS.
# auth-enabled = false

# The default realm sent back when issuing a basic auth challenge.
# realm = "InfluxDB"

# Determines whether HTTP request logging is enabled.
# Defaults to true and logs every HTTP request; recommended off, otherwise the log grows roughly 1:1 with the volume of writes.
#
log-enabled = false

# The default chunk size for result sets that should be chunked.
# maximum number of rows returned per result chunk
max-row-limit = 10000

[continuous_queries]
# Determines whether the continuous query service is enabled.
# enable continuous queries
#
enabled = true

# Controls whether queries are logged when executed by the CQ service.
# log continuous-query execution; helps surface problems
#
log-enabled = true

# Controls whether queries are logged to the self-monitoring data store.
# query-stats-enabled = false

# interval for how often continuous queries will be checked if they need to run
# run-interval = "1s"

Start it

Both ports are now listening:

[root@iZ2zecgq3cou36re3sxh4bZ ~]# systemctl  start influxdb.service
[root@iZ2zecgq3cou36re3sxh4bZ ~]# netstat -nlpt | grep influxd
tcp 0 0 127.0.0.1:8088 0.0.0.0:* LISTEN 2844/influxd
tcp6 0 0 :::8086 :::* LISTEN 2844/influxd
[root@iZ2zecgq3cou36re3sxh4bZ ~]#

Web UI

The web admin UI was removed as of InfluxDB 1.3+. You would have to install an older InfluxDB, or run the two side by side with the old version's data directory pointed at the 1.8 instance; see https://blog.csdn.net/wsdc0521/article/details/106064914/.

Basic CLI usage

Once installed, run influx to enter the CLI.

1. Enter the CLI

[root@iZ2zecgq3cou36re3sxh4bZ ~]# influx
Connected to http://localhost:8086 version 1.8.3
InfluxDB shell version: 1.8.3
>
>

2. List databases

> show databases; # list databases
name: databases
name
----
_internal # the built-in system database
>

3. Create a database

> create database roddydb;
> show databases;
name: databases
name
----
_internal
roddydb
>

4. Use a database

> use roddydb
Using database roddydb
>

5. Write data

Insert a record into cpu; if the measurement cpu does not exist, it is created. host and IP are tags; value is the field value.

> INSERT cpu,host=WEB,IP=172.16.2.3 value=0.64
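The same point can be written over the HTTP API on port 8086 using the InfluxDB 1.x line protocol; a minimal sketch with the requests library:

import requests

# Line protocol: measurement,tag=...,tag=... field=value
line = "cpu,host=WEB,IP=172.16.2.3 value=0.64"

r = requests.post(
    "http://localhost:8086/write",
    params={"db": "roddydb"},
    data=line,
)
print(r.status_code)  # 204 means the point was accepted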

6. List measurements (tables)

> show MEASUREMENTS;
name: measurements
name
----
cpu
>

7. Query all records

> select * from cpu
name: cpu
time IP host value
---- -- ---- -----
1602299302885589799 172.16.2.3 WEB 0.64
1602299673721103381 172.16.243 WEB 0.95
>

8. Filter conditions

> select * from "cpu" where value > 0.8
name: cpu
time IP host value
---- -- ---- -----
1602299673721103381 172.16.243 WEB 0.95
>

Note: in a WHERE clause, a string field value must be wrapped in single quotes. With no quotes, or with double quotes, no data comes back, and sometimes there is not even an error!

> select * from "cpu" where "IP" = '172.16.2.3' # value要为单引号
name: cpu
time IP host value
---- -- ---- -----
1602299302885589799 172.16.2.3 WEB 0.64
>
> select * from "cpu" where "host" = 'WEB'
name: cpu
time IP host value
---- -- ---- -----
1602299302885589799 172.16.2.3 WEB 0.64
1602299673721103381 172.16.243 WEB 0.95
>
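The same filters work over the HTTP /query endpoint; note the single quotes around the string value inside the InfluxQL statement, per the rule above:

import requests

q = "select * from \"cpu\" where \"IP\" = '172.16.2.3'"
r = requests.get(
    "http://localhost:8086/query",
    params={"db": "roddydb", "q": q},
)
print(r.json())  # results come back as JSON series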

9. Delete a measurement (table)

Method 1: delete the rows; once a measurement holds no data, it disappears.

> show MEASUREMENTs
name: measurements
name
----
cpu
> select * from "cpu"
name: cpu
time IP host value
---- -- ---- -----
1602300774790998139 192.168.2.3 DB 0.12

> delete from cpu where "IP"='192.168.2.3'
> select * from "cpu"
> show MEASUREMENTs
>

Method 2: use drop

> drop MEASUREMENT cpu

For more syntax, see the official docs: https://docs.influxdata.com/influxdb/v1.8/query_language/sample-data/
