Installing Kubernetes 1.13 from Binaries

1. Overview

1.1. What is Kubernetes?

  Kubernetes, abbreviated as k8s ("k", eight letters, "s") or kube, is an open-source platform for automating the operation of Linux containers. It removes many of the manual steps involved in deploying and scaling containerized applications.
  Kubernetes was originally designed and developed by engineers at Google. Google was one of the early contributors to Linux container technology and has spoken publicly about how it runs everything in containers (the technology behind Google's cloud services). Google deploys more than two billion containers per week, all of them supported by its internal platform Borg. Borg was the predecessor of Kubernetes, and the lessons learned from developing Borg over the years shaped much of the technology in Kubernetes.

1.2. What Are the Advantages of Kubernetes?

  With Kubernetes, you can quickly and efficiently meet the following user needs:

  • Deploy applications quickly and predictably
  • Scale applications on the fly
  • Roll out new features seamlessly
  • Limit hardware usage to only the resources you need

  Advantages of Kubernetes

  • Portable: public cloud, private cloud, hybrid cloud, multi-cloud
  • Extensible: modular, pluggable, hookable, composable
  • Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling

  Google started the Kubernetes project in 2014. Kubernetes builds on 15 years of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

2. Environment Preparation

The examples in this article use four machines; their hostnames and IP addresses are as follows:

IP address   Hostname
172.16.128.0 master
172.16.128.1 node1
172.16.128.2 node2
172.16.128.3 node3

The /etc/hosts file is the same on all four machines; master is shown as an example:

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.128.0 master
172.16.128.1 node1
172.16.128.2 node2
172.16.128.3 node3

  

2.1. Network Configuration

  The following uses master as an example.

[root@master ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=172.16.128.0
PREFIX0=16
GATEWAY0=172.16.0.1
DNS1=172.16.0.1
DNS2=114.114.114.114

  Restart the network:

[root@master ~]# service network restart

  Switch the yum repository to the Aliyun (and 163) mirrors:

[root@master ~]# yum install -y wget
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@master yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum makecache

  Install networking and basic utility packages:

[root@master ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y

  

2.2. Set the HOSTNAME

  Set the hostname on each of the four machines in turn; master is shown as an example.

[root@master ~]# hostnamectl --static set-hostname master
[root@master ~]# hostnamectl status
   Static hostname: master
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04node3f6d56e788345859875d9f49bd4bd
           Boot ID: ba02919abe4245aba673aaf5f778ad10
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

  

2.3. Configure Passwordless SSH Login

  Generate a key pair separately on every machine.

[root@master ~]# ssh-keygen
# Press Enter through all the prompts

  Copy the key generated by ssh-keygen to each of the other three machines; the following is run on master (a loop version is sketched after the output below).

[root@master ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (172.16.128.0)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:


[root@master ~]# rm -rf ~/.ssh/known_hosts
[root@master ~]# clear
[root@master ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (172.16.128.0)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.


[root@master ~]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (172.16.128.1)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (172.16.128.2)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.


[root@master ~]# ssh-copy-id node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node3 (172.16.128.3)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.
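
  Instead of running ssh-copy-id once per host by hand, the same result can be had with a small loop (a sketch; it uses the hostnames from /etc/hosts above and still prompts for each root password):

[root@master ~]# for H in master node1 node2 node3; do ssh-copy-id $H; done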

  Test that the keys are configured correctly:

[root@master ~]# ssh master hostname
master

[root@master ~]# for N in $(seq 1 3); do ssh node$N hostname; done;
node1
node2
node3

  

2.4. Disable the Firewall

  Run the following command on every machine; master is shown as an example:

[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

  

2.5. Disable Swap

[root@master ~]# swapoff -a

[root@master ~]# for N in $(seq 1 3); do ssh node$N swapoff -a; done;

Before and after disabling swap, you can check its status with free -h; after disabling, the swap total should be 0.
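
  For example, a quick loop (reusing the SSH setup from 2.3) to check swap on all machines at once:

[root@master ~]# free -h | grep -i swap
[root@master ~]# for N in $(seq 1 3); do ssh node$N "free -h | grep -i swap"; done;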

  On every machine, edit /etc/fstab and comment out the last /dev/mapper/centos-swap swap entry; master is shown as an example:

[root@master ~]# sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
[root@master ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jan 28 11:49:11 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=93572ab6-90da-4cfe-83a4-93be7ad8597c /boot                   xfs     defaults        0 0
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

  

2.6. Disable SELinux

  Disable SELinux on every machine; master is shown as an example.

[root@master ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
[root@master ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELinux is Security-Enhanced Linux.
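
The SELINUX=disabled setting in /etc/selinux/config only takes effect after a reboot. To switch the running system out of enforcing mode immediately (a small optional step; setenforce only works while an SELinux policy is still loaded):

[root@master ~]# setenforce 0
[root@master ~]# getenforce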

  

2.7. Install NTP

  Install the NTP time synchronization tool and start it:

[root@master ~]# yum install ntp -y

[root@master ~]# for N in $(seq 1 3); do ssh node$N yum install ntp -y; done;

  On every machine, enable NTP to start at boot:

[root@master ~]# systemctl enable ntpd && systemctl start ntpd

  Check the time on each machine in turn:

[root@master ~]# for N in $(seq 1 3); do ssh node$N date; done;
Wed Apr 17 11:43:21 CST 2019
Wed Apr 17 11:43:21 CST 2019
Wed Apr 17 11:43:21 CST 2019
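
  Identical timestamps only show that the clocks agree at that moment. To confirm that ntpd has actually selected an upstream time source, you can optionally query the peer list on each machine; the server marked with * is the one currently being synchronized against:

[root@master ~]# ntpq -p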

  

2.8. Install and Configure CFSSL

  CFSSL is used to build a local CA and to generate the certificates needed later.

[root@master ~]# mkdir -p /data/kuber/work/_src
[root@master ~]# cd /data/kuber/work/_src
[root@master _src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master _src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master _src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@master _src]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@master _src]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master _src]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master _src]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
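
  A quick check that the binaries are installed and on the PATH (the reported version depends on the downloaded release):

[root@master _src]# cfssl version
[root@master _src]# which cfssl cfssljson cfssl-certinfo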

  

2.9. Create Installation Directories

  Create the directories that ETCD and Kubernetes will use later:

[root@master _src]# mkdir /data/kuber/work/_app/k8s/etcd/{bin,cfg,ssl} -p
[root@master _src]# mkdir /data/kuber/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p
[root@master _src]# mkdir /data/kuber/work/_data/etcd -p

[root@master _src]# for N in $(seq 1 3); do ssh node$N mkdir /data/kuber/work/_app/k8s/etcd/{bin,cfg,ssl} -p; done;
[root@master _src]# for N in $(seq 1 3); do ssh node$N mkdir /data/kuber/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p; done;
[root@master _src]# for N in $(seq 1 3); do ssh node$N mkdir /data/kuber/work/_data/etcd -p; done;

  

2.10. Upgrade the Kernel

  The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. ip_vs_fo.ko first appeared in kernel 3.19, and that kernel version is not available in the common RPM repositories of the RedHat family of distributions.

[root@master ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@master ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

  After rebooting, manually select the new kernel in the boot menu, then run the following command to check the new kernel:

[root@master ~]# hostnamectl
   Static hostname: master
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04node3f6d56e788345859875d9f49bd4bd
           Boot ID: 40a19388698f4907bd233a8cff76f36e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 4.20.7-1.el7.elrepo.x86_64
      Architecture: x86-64
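
  With the new kernel booted, you can optionally confirm that the IPVS modules kube-proxy needs, including ip_vs_fo, are now available (a sketch; module names as shipped in upstream kernels):

[root@master ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_fo; do modprobe $m; done
[root@master ~]# lsmod | grep ip_vs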

  

3. Install Docker 18.06.1-ce

3.1. Remove Old Versions of Docker

  The removal method given in the official documentation:

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

  Alternatively, to remove an old Docker, first list the Docker packages that are installed:

[root@master ~]# yum list installed | grep docker
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
containerd.io.x86_64            1.2.2-3.el7                    @docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                @docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                @docker-ce-stable

  Remove the installed Docker packages:

[root@master ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64

  Remove Docker images/containers:

[root@master ~]# rm -rf /var/lib/docker

  

3.2. Set Up the Repository

  Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
  Do this on every machine; master is shown as an example.

[root@master ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  

3.3. Install Docker

[root@master ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y

  

3.4. Start Docker

[root@master ~]# systemctl enable docker && systemctl start docker
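
  To confirm the daemon is running and that the pinned 18.06.1-ce release was installed:

[root@master ~]# docker version
[root@master ~]# systemctl status docker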

  

4. Install ETCD 3.3.10

4.1. Create the ETCD Certificates

4.1.1. Generate the JSON Request File Used for the ETCD SERVER Certificate

[root@master ~]# mkdir -p /data/kuber/work/_src/ssl_etcd
[root@master ~]# cd /data/kuber/work/_src/ssl_etcd
[root@master ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

The default policy sets the certificate validity to 10 years (87600h).
The etcd profile specifies what the certificate can be used for:
signing: this certificate may be used to sign other certificates; the generated ca.pem has CA=TRUE.
server auth: a client can use this CA to verify certificates presented by a server.
client auth: a server can use this CA to verify certificates presented by a client.

  

4.1.2. Create the ETCD CA Certificate Configuration File

[root@master ssl_etcd]# cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.3. Create the ETCD SERVER Certificate Configuration File

[root@master ssl_etcd]# cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "172.16.128.0",
    "172.16.128.1",
    "172.16.128.2",
    "172.16.128.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.4. Generate the ETCD CA Certificate and Private Key

[root@master ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/14 18:44:37 [INFO] generating a new CA key and certificate from CSR
2019/02/14 18:44:37 [INFO] generate received request
2019/02/14 18:44:37 [INFO] received CSR
2019/02/14 18:44:37 [INFO] generating key: rsa-2048
2019/02/14 18:44:38 [INFO] encoded CSR
2019/02/14 18:44:38 [INFO] signed certificate with serial number 384346866475232855604658229421854651219342845660
[root@master ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json

  

4.1.5. Generate the ETCD SERVER Certificate and Private Key

[root@master ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/02/09 20:52:57 [INFO] generate received request
2019/02/09 20:52:57 [INFO] received CSR
2019/02/09 20:52:57 [INFO] generating key: rsa-2048
2019/02/09 20:52:57 [INFO] encoded CSR
2019/02/09 20:52:57 [INFO] signed certificate with serial number 373071566605311458179949133441319838683720611466
2019/02/09 20:52:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
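
  Optionally, the generated server certificate can be inspected to confirm that the hosts and usages from server-csr.json and ca-config.json took effect (cfssl-certinfo was installed to /usr/bin in 2.8):

[root@master ssl_etcd]# cfssl-certinfo -cert server.pem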

  Copy the generated certificates to the directory used by etcd:

[root@master ssl_etcd]# cp *.pem /data/kuber/work/_app/k8s/etcd/ssl/

  

4.2. Install ETCD

4.2.1. Download ETCD

[root@master ssl_etcd]# cd /data/kuber/work/_src/
[root@master _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@master _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master _src]# cd etcd-v3.3.10-linux-amd64
[root@master etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /data/kuber/work/_app/k8s/etcd/bin/

  

4.2.2. Create the ETCD systemd Unit File

  Create and save the /usr/lib/systemd/system/etcd.service file with the following content:

[root@master etcd-v3.3.10-linux-amd64]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/data/kuber/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/data/kuber/work/_app/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem \
--peer-key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  

4.2.3. Copy the ETCD Binaries, Certificates, and systemd Unit File to the Other Nodes

[root@master ~]# for N in $(seq 1 3); do scp -r /data/kuber/work/_app/k8s/etcd node$N:/data/kuber/work/_app/k8s/; done;
[root@master ~]# for N in $(seq 1 3); do scp -r /usr/lib/systemd/system/etcd.service node$N:/usr/lib/systemd/system/etcd.service; done;

  

4.2.4. ETCD Main Configuration File

  On master, create the /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@master _src]# cat << EOF | tee /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# Name of this ETCD member
ETCD_NAME="etcd00"
# ETCD data directory
ETCD_DATA_DIR="/data/kuber/work/_data/etcd"
# URLs this member listens on for peer traffic, comma-separated, in the form scheme://IP:PORT where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://172.16.128.0:2380"
# URLs this member listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://172.16.128.0:2379"

#[Clustering]
# This member's peer URLs advertised to the rest of the cluster; cluster data is replicated over these, so they must be reachable by every member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.128.0:2380"
# This member's client URLs advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.128.0:2379"
# All cluster members, in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://172.16.128.0:2380,etcd01=https://172.16.128.1:2380,etcd02=https://172.16.128.2:2380,etcd03=https://172.16.128.3:2380"
# Token used to bootstrap the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; new means bootstrapping a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  On node1, create the /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@node1 _src]# cat << EOF | tee /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data/kuber/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.128.1:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.128.1:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.128.1:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.128.1:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://172.16.128.0:2380,etcd01=https://172.16.128.1:2380,etcd02=https://172.16.128.2:2380,etcd03=https://172.16.128.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  On node2, create the /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@node2 _src]# cat << EOF | tee /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/data/kuber/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.128.2:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.128.2:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.128.2:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.128.2:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://172.16.128.0:2380,etcd01=https://172.16.128.1:2380,etcd02=https://172.16.128.2:2380,etcd03=https://172.16.128.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  On node3, create the /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@node3 _src]# cat << EOF | tee /data/kuber/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/data/kuber/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.128.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.128.3:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.128.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.128.3:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://172.16.128.0:2380,etcd01=https://172.16.128.1:2380,etcd02=https://172.16.128.2:2380,etcd03=https://172.16.128.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/data/kuber/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  

4.2.5. Start the ETCD Service

  Run this on every node machine individually:

[root@master _src]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

  

4.2.6. Check the ETCD Service Status

[root@master _src]# /data/kuber/work/_app/k8s/etcd/bin/etcdctl --ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem --key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 2cba54b8e3ba988a is healthy: got healthy result from https://172.16.128.3:2379
member 7node12135a398849e3 is healthy: got healthy result from https://172.16.128.2:2379
member 99node2fd4fe11e28d9 is healthy: got healthy result from https://172.16.128.0:2379
member f2fd0node12369e0d75 is healthy: got healthy result from https://172.16.128.1:2379
cluster is healthy

  

4.2.7. View the ETCD Cluster Member Information

[root@master _src]# /data/kuber/work/_app/k8s/etcd/bin/etcdctl --ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem --key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem  member list
2cba54b8e3ba988a: name=etcd03 peerURLs=https://172.16.128.3:2380 clientURLs=https://172.16.128.3:2379 isLeader=false
7node12135a398849e3: name=etcd02 peerURLs=https://172.16.128.2:2380 clientURLs=https://172.16.128.2:2379 isLeader=false
99node2fd4fe11e28d9: name=etcd00 peerURLs=https://172.16.128.0:2380 clientURLs=https://172.16.128.0:2379 isLeader=true
f2fd0node12369e0d75: name=etcd01 peerURLs=https://172.16.128.1:2380 clientURLs=https://172.16.128.1:2379 isLeader=false

  

5. Install Flannel v0.11.0

5.1. Flanneld Network Installation

  Flannel is essentially an overlay network: it wraps packets inside another network packet for routing, forwarding, and communication. It currently supports UDP, VxLAN, AWS VPC, GCE routes, and other forwarding backends. In Kubernetes, Flannel is used to set up the layer 3 (network layer) fabric.
  Flannel provides a layer 3 IPv4 network between the nodes of the cluster. It does not control how containers attach to the host network; it is only responsible for how traffic is carried between hosts. Flannel does, however, provide a CNI plugin for Kubernetes and guidance for integrating with Docker.

Without the Flanneld network, Pods on different Nodes cannot communicate with each other; only Pods on the same Node can.
When the Flanneld service starts it mainly does the following: it fetches the Network configuration from ETCD, carves out a Subnet for this node and registers it in ETCD, and records the subnet information in /run/flannel/subnet.env.

  

5.2. Write the Network Range into the ETCD Cluster

[root@master _src]# /data/kuber/work/_app/k8s/etcd/bin/etcdctl --ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem --key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://172.16.128.0:2379,https://172.16.128.1:2379,https://172.16.128.2:2379,https://172.16.128.3:2379"  set /coreos.com/network/config  '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}

The current Flanneld release (v0.11.0) does not support the ETCD v3 API, so the configuration key and the network range are written with the ETCD v2 API.
The Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager.
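
  To confirm the key was written, it can be read back with the same v2 etcdctl client:

[root@master _src]# /data/kuber/work/_app/k8s/etcd/bin/etcdctl --ca-file=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/data/kuber/work/_app/k8s/etcd/ssl/server.pem --key-file=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://172.16.128.0:2379" get /coreos.com/network/config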

  

5.3. Install Flannel

[root@master _src]# pwd
/data/kuber/work/_src
[root@master _src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master _src]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@master _src]# mv flanneld mk-docker-opts.sh /data/kuber/work/_app/k8s/kubernetes/bin/

  

5.4. Configure Flannel

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/flanneld file with the following content:

[root@master _src]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.128.0:2379,https://172.16.128.1:2379,https://172.16.128.2:2379,https://172.16.128.3:2379 -etcd-cafile=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem -etcd-certfile=/data/kuber/work/_app/k8s/etcd/ssl/server.pem -etcd-keyfile=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"

  

5.5. Create the Flannel systemd Unit File

  Create and save the /usr/lib/systemd/system/flanneld.service file with the following content:

[root@master _src]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/data/kuber/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/data/kuber/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

The mk-docker-opts.sh script writes the Pod subnet assigned to Flanneld into the file given by -d (here /run/flannel/subnet.env); when Docker starts later it uses the environment variables in this file to configure the docker0 bridge.
Flanneld talks to the other nodes over the interface that carries the system default route; on nodes with multiple interfaces (e.g. internal and public), the -iface parameter can be used to choose the interface.
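
  A quick way to see which interface Flanneld will pick by default, and where to override it if needed (a sketch; eth0 is only an example name):

[root@master _src]# ip route show default
# If a different interface should be used, append -iface=eth0 (or the appropriate interface name)
# to FLANNEL_OPTIONS in /data/kuber/work/_app/k8s/kubernetes/cfg/flanneld, then restart flanneld.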

  

5.6. Configure Docker to Start on the Flannel-Assigned Subnet

  Edit the /usr/lib/systemd/system/docker.service file so that it reads as follows:

[root@master _src]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# Load the environment file written by Flannel and pass its variables to ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

  

5.7. Copy the Flannel Files to the Other Machines

  This copies the Flannel binaries, the Flannel configuration file, the Flannel systemd unit file, and the Docker systemd unit file.

[root@master _src]# for N in $(seq 1 3); do scp -r /data/kuber/work/_app/k8s/kubernetes/* node$N:/data/kuber/work/_app/k8s/kubernetes/; done;
[root@master _src]# for N in $(seq 1 3); do scp -r /usr/lib/systemd/system/docker.service node$N:/usr/lib/systemd/system/docker.service; done;
[root@master _src]# for N in $(seq 1 3); do scp -r /usr/lib/systemd/system/flanneld.service node$N:/usr/lib/systemd/system/flanneld.service; done;

  

5.8. Start the Services

  Run this on every machine individually; master is shown as an example:

[root@master _src]# systemctl daemon-reload && systemctl stop docker && systemctl enable flanneld && systemctl start flanneld && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Before starting Flannel, stop Docker (and kubelet if it is already running) so that Flannel's settings take effect on the docker0 bridge.

  

5.9. Check the docker0 Bridge Configured by the Flannel Service

[root@master _src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:50:8c:6a brd ff:ff:ff:ff:ff:ff
    inet 172.16.128.0/16 brd 172.16.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::49d:e3e6:c623:9582/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 3e:80:5d:97:53:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3c80:5dff:fe97:53c4/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9e:df:b9:87 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.1/24 brd 10.172.46.255 scope global docker0
       valid_lft forever preferred_lft forever

  

5.10. Verify the Flannel Service

[root@master _src]# for H in master node1 node2 node3; do ssh $H cat /run/flannel/subnet.env ; done;
DOCKER_OPT_BIP="--bip=10.172.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.46.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.90.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.90.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.5.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.72.1/24 --ip-masq=false --mtu=1450"
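
  As a rough cross-node check (a sketch), once flanneld is running everywhere, each node should be able to reach the docker0 addresses reported by the other nodes over the flannel.1 overlay, for example from master:

[root@master _src]# ping -c 2 10.172.90.1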

  

6. Install Kubernetes

6.1. Create the Certificates Kubernetes Needs

6.1.1. Generate the JSON Request File for the Kubernetes Certificates

[root@master ~]# cd /data/kuber/work/_app/k8s/kubernetes/ssl/
[root@master ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

  

6.1.2. Generate the Kubernetes CA Configuration File and Certificate

[root@master ssl]# cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

  Initialize a Kubernetes CA certificate:

[root@master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/15 00:33:49 [INFO] generating a new CA key and certificate from CSR
2019/02/15 00:33:49 [INFO] generate received request
2019/02/15 00:33:49 [INFO] received CSR
2019/02/15 00:33:49 [INFO] generating key: rsa-2048
2019/02/15 00:33:49 [INFO] encoded CSR
2019/02/15 00:33:49 [INFO] signed certificate with serial number 19178419085322799829088564182237651657158569707
[root@master ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

  

6.1.3. Generate the Kube API Server Configuration File and Certificate

  Create the certificate configuration file:

[root@master ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.0.0.1",
      "172.16.128.0",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "API Server"
        }
    ]
}
EOF

  Generate the kube-apiserver certificate:

[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server
2019/02/15 00:40:17 [INFO] generate received request
2019/02/15 00:40:17 [INFO] received CSR
2019/02/15 00:40:17 [INFO] generating key: rsa-2048
2019/02/15 00:40:17 [INFO] encoded CSR
2019/02/15 00:40:17 [INFO] signed certificate with serial number 73791614256163825800646464302566039201359288928
2019/02/15 00:40:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-apiserver-server.csr  kube-apiserver-server-csr.json  kube-apiserver-server-key.pem  kube-apiserver-server.pem

  

6.1.4. Generate the kubelet Client Configuration File and Certificate

  Create the certificate configuration file. Note that the CN set here determines the user that kubelet uses when making requests to the API:

[root@master ssl]# cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet-bootstrap",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF

  Generate the kubelet client certificate:

[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client
2019/02/15 00:44:43 [INFO] generate received request
2019/02/15 00:44:43 [INFO] received CSR
2019/02/15 00:44:43 [INFO] generating key: rsa-2048
2019/02/15 00:44:43 [INFO] encoded CSR
2019/02/15 00:44:43 [INFO] signed certificate with serial number 285651868701760571162897366975202301612567414209
2019/02/15 00:44:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem

  

6.1.5. Generate the Kube-Proxy Configuration File and Certificate

  Create the certificate configuration file:

[root@master ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF

  Generate the Kube-Proxy certificate:

[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client
2019/02/15 01:14:39 [INFO] generate received request
2019/02/15 01:14:39 [INFO] received CSR
2019/02/15 01:14:39 [INFO] generating key: rsa-2048
2019/02/15 01:14:39 [INFO] encoded CSR
2019/02/15 01:14:39 [INFO] signed certificate with serial number 535503934939407075396917222976858989138817338004
2019/02/15 01:14:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem     kube-proxy-client-csr.json  kube-proxy-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem   kube-proxy-client.csr  kube-proxy-client-key.pem

  

6.1.6. Generate the kubectl Administrator Configuration File and Certificate

  Create the kubectl administrator certificate configuration file:

[root@master ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF

  Generate the kubectl administrator certificate:

[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user
2019/02/15 01:23:22 [INFO] generate received request
2019/02/15 01:23:22 [INFO] received CSR
2019/02/15 01:23:22 [INFO] generating key: rsa-2048
2019/02/15 01:23:22 [INFO] encoded CSR
2019/02/15 01:23:22 [INFO] signed certificate with serial number 724413523889121871668676123719532667068182658276
2019/02/15 01:23:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls
ca-config.json  ca-key.pem                 kube-apiserver-server-csr.json  kubelet-client.csr       kubelet-client.pem          kube-proxy-client-key.pem  kubernetes-admin-user.csr.json
ca.csr          ca.pem                     kube-apiserver-server-key.pem   kubelet-client-csr.json  kube-proxy-client.csr       kube-proxy-client.pem      kubernetes-admin-user-key.pem
ca-csr.json     kube-apiserver-server.csr  kube-apiserver-server.pem       kubelet-client-key.pem   kube-proxy-client-csr.json  kubernetes-admin-user.csr  kubernetes-admin-user.pem
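
  The administrator certificate is not used again in this section. As a minimal sketch (file names from above; the admin.kubeconfig path and the context/user names are arbitrary), it could be turned into an administrator kubeconfig once kubectl is installed in 6.2.1:

[root@master ssl]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.16.128.0:6443 --kubeconfig=admin.kubeconfig
[root@master ssl]# kubectl config set-credentials admin --client-certificate=kubernetes-admin-user.pem --client-key=kubernetes-admin-user-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
[root@master ssl]# kubectl config set-context default --cluster=kubernetes --user=admin --kubeconfig=admin.kubeconfig
[root@master ssl]# kubectl config use-context default --kubeconfig=admin.kubeconfig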

  

6.1.7. Copy the Certificates to the Kubernetes Node Machines

[root@master ~]# for N in $(seq 1 3); do scp -r /data/kuber/work/_app/k8s/kubernetes/ssl/*.pem node$N:/data/kuber/work/_app/k8s/kubernetes/ssl/; done;

  

6.2. Deploy the Kubernetes Master Node and Join It to the Cluster

  The Kubernetes Master node runs the following components:

  • APIServer
      The APIServer exposes the RESTful Kubernetes API and is the single entry point for management operations: every create, delete, update, or query of a resource is handled by the APIServer and then persisted to etcd. kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the APIServer.
  • Scheduler
      The scheduler assigns Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with scheduling algorithms but also keeps the interface open, so users can define their own scheduling logic to fit their needs.
  • Controller manager
      If the APIServer does the front-office work, the controller manager takes care of the back office. Each resource has a corresponding controller, and the controller manager is responsible for running them. For example, when a Pod is created through the APIServer, the APIServer's job ends once the object has been created successfully; the controllers handle everything after that.
  • ETCD
      etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, which is what backs the RESTful API.
  • Flannel
      By default there is no flanneld network, so Pods on different Nodes cannot communicate, only Pods on the same Node. Flannel fetches the network configuration from etcd, carves out a subnet, registers it in etcd, and records the subnet information locally.

kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one working process and the other processes stay in standby.

  

6.2.1. Download and Install the Kubernetes Server Binaries

[root@master ~]# cd /data/kuber/work/_src/
[root@master _src]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
[root@master _src]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@master _src]# cd kubernetes/server/bin/
[root@master bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /data/kuber/work/_app/k8s/kubernetes/bin/

  Copy kubelet, kubectl, and kube-proxy from master to the other nodes:

[root@master ~]# cd /data/kuber/work/_src/kubernetes/server/bin/
[root@master bin]# for N in $(seq 1 3); do scp -r kubelet kubectl kube-proxy node$N:/data/kuber/work/_app/k8s/kubernetes/bin/; done;
kubelet                       100%  108MB 120.3MB/s   00:00
kubectl                       100%   37MB 120.0MB/s   00:00
kube-proxy                    100%   33MB 113.7MB/s   00:00
kubelet                       100%  108MB 108.0MB/s   00:00
kubectl                       100%   37MB 108.6MB/s   00:00
kube-proxy                    100%   33MB 106.1MB/s   00:00
kubelet                       100%  108MB 117.8MB/s   00:00
kubectl                       100%   37MB 116.6MB/s   00:00
kube-proxy                    100%   33MB 119.0MB/s   00:00

  

6.2.2. Deploy the Apiserver

  Create a TLS Bootstrapping Token:

[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e9e0cc2a1ea8bc42136e520a3cfb5b0a

The random token generated here is e9e0cc2a1ea8bc42136e520a3cfb5b0a; note it down, it will be needed later.

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/token-auth-file file with the following content:

[root@master ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/token-auth-file
e9e0cc2a1ea8bc42136e520a3cfb5b0a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
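
  If you prefer not to paste the token by hand, the token can be generated and the file written in one go (a sketch that produces the same format as above, just with a fresh random token):

[root@master ~]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@master ~]# echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /data/kuber/work/_app/k8s/kubernetes/cfg/token-auth-file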

  

6.2.2.1. Create the Apiserver Configuration File

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/kube-apiserver file with the following content. The original post did not include the --kubelet-client-certificate and --kubelet-client-key options, but omitting them can cause kubectl exec -it into a Pod to fail:

[root@master ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.128.0:2379,https://172.16.128.1:2379,https://172.16.128.2:2379,https://172.16.128.3:2379 \
--bind-address=172.16.128.0 \
--secure-port=6443 \
--advertise-address=172.16.128.0 \
--allow-privileged=true \
--service-cluster-ip-range=10.244.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/data/kuber/work/_app/k8s/kubernetes/cfg/token-auth-file \
--service-node-port-range=30000-50000 \
--tls-cert-file=/data/kuber/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem  \
--tls-private-key-file=/data/kuber/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem \
--client-ca-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--kubelet-client-certificate=/data/kuber/work/_app/k8s/kubernetes/ssl/kubelet-client.pem \
--kubelet-client-key=/data/kuber/work/_app/k8s/kubernetes/ssl/kubelet-client-key.pem \
--etcd-cafile=/data/kuber/work/_app/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/data/kuber/work/_app/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/data/kuber/work/_app/k8s/etcd/ssl/server-key.pem"

  

6.2.2.2. Create the Apiserver systemd Unit File

  Create and save the /usr/lib/systemd/system/kube-apiserver.service file with the following content:

[root@master ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/data/kuber/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.2.3. Start the Kube Apiserver Service

[root@master ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

  

6.2.2.4. Check Whether the Apiserver Service Is Running

[root@master ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:28:03 CST; 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4708 (kube-apiserver)
    Tasks: 10
   Memory: 370.9M
   CGroup: /system.slice/kube-apiserver.service
           └─4708 /data/kuber/work/_app/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://172.16.128.0:2379,https://172.16.128.1:2379,https://172.16.128.2:2379,https://172.16.128.3:2379 --bind-address=172.16.128.0 ...

Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.510271    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.032168ms) 200 [kube-api...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.513149    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.1516...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.515603    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.88011ms) 200 ...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.518209    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.980109ms) 200 [k...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.520474    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.890751ms) 200 [kub...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.522918    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.80026ms) 200 [kube-...172.16.128.0:59408]
Feb 19 22:28:11 master kube-apiserver[4708]: I0219 22:28:11.525952    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.148966ms) 200 [k...172.16.128.0:59408]
Feb 19 22:28:20 master kube-apiserver[4708]: I0219 22:28:20.403713    4708 wrap.go:47] GET /api/v1/namespaces/default: (2.463889ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 172.16.128.0:59408]
Feb 19 22:28:20 master kube-apiserver[4708]: I0219 22:28:20.406610    4708 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.080766ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 172.16.128.0:59408]
Feb 19 22:28:20 master kube-apiserver[4708]: I0219 22:28:20.417019    4708 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.134397ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 172.16.128.0:59408]

  

6.2.3. Deploy the Scheduler

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/kube-scheduler file with the following content:

[root@master ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

  

6.2.3.1. Create the Kube-scheduler systemd Unit File

  Create and save the /usr/lib/systemd/system/kube-scheduler.service file with the following content:

[root@master ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/data/kuber/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.3.2. Start the Kube-scheduler Service

[root@master ~]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

  

6.2.3.3. Check Whether the Kube-scheduler Service Is Running

[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:07 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4839 (kube-scheduler)
    Tasks: 9
   Memory: 47.0M
   CGroup: /system.slice/kube-scheduler.service
           └─4839 /data/kuber/work/_app/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.679756    4839 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.779894    4839 shared_informer.go:123] caches populated
Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.779928    4839 controller_utils.go:1034] Caches are synced for scheduler controller
Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.779990    4839 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.784100    4839 leaderelection.go:289] lock is held by master_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:09 master kube-scheduler[4839]: I0219 22:29:09.784135    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:12 master kube-scheduler[4839]: I0219 22:29:12.829896    4839 leaderelection.go:289] lock is held by master_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:12 master kube-scheduler[4839]: I0219 22:29:12.829921    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:14 master kube-scheduler[4839]: I0219 22:29:14.941554    4839 leaderelection.go:289] lock is held by master_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:14 master kube-scheduler[4839]: I0219 22:29:14.941573    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
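
  The "lock is held by ... and has not yet expired" lines mean that an earlier scheduler instance still holds the leader-election lease; the new process takes over once the lease expires, so they are harmless. As an optional check (once kubectl is on the PATH as set up in 6.2.5), the current holder can be read from the annotation on the Endpoints object used as the lock:

[root@master ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity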

  

6.2.4. Deploy the Kube-Controller-Manager Component

6.2.4.1. Create the kube-controller-manager Configuration File

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/kube-controller-manager file with the following content:

[root@master ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.244.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca-key.pem"

  

6.2.4.2. Create the kube-controller-manager systemd Unit File

  Create and save the /usr/lib/systemd/system/kube-controller-manager.service file with the following content:

[root@master ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/data/kuber/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.4.3. Start the kube-controller-manager Service

[root@master ~]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

  

6.2.4.4. Check Whether the kube-controller-manager Service Is Running

[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:40 CST; 12s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4933 (kube-controller)
    Tasks: 7
   Memory: 106.7M
   CGroup: /system.slice/kube-controller-manager.service
           └─4933 /data/kuber/work/_app/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.244.0.0/16 --cluster-name=kubernet...

Feb 19 22:29:41 master kube-controller-manager[4933]: I0219 22:29:41.276841    4933 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
Feb 19 22:29:41 master kube-controller-manager[4933]: I0219 22:29:41.278183    4933 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
Feb 19 22:29:41 master kube-controller-manager[4933]: I0219 22:29:41.301326    4933 leaderelection.go:289] lock is held by master_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:41 master kube-controller-manager[4933]: I0219 22:29:41.301451    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:44 master kube-controller-manager[4933]: I0219 22:29:44.679518    4933 leaderelection.go:289] lock is held by master_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:44 master kube-controller-manager[4933]: I0219 22:29:44.679550    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:47 master kube-controller-manager[4933]: I0219 22:29:47.078743    4933 leaderelection.go:289] lock is held by master_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:47 master kube-controller-manager[4933]: I0219 22:29:47.078762    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:49 master kube-controller-manager[4933]: I0219 22:29:49.529247    4933 leaderelection.go:289] lock is held by master_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:49 master kube-controller-manager[4933]: I0219 22:29:49.529266    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager

  

6.2.5. Verify the API Server Service

  Add kubectl to the $PATH variable:

[root@master ~]# echo "PATH=/data/kuber/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@master ~]# source /etc/profile

  Check the component and node status:

[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
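
  No nodes are listed yet because no kubelet has registered with the cluster. You can also confirm which endpoint kubectl is talking to:

[root@master ~]# kubectl cluster-info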

  

6.2.6. Deploy Kubelet

6.2.6.1. Create the bootstrap.kubeconfig and kube-proxy.kubeconfig Files

  Create and save the /data/kuber/work/_app/k8s/kubernetes/cfg/env.sh file with the following content:

[root@master cfg]# pwd
/data/kuber/work/_app/k8s/kubernetes/cfg
[root@master cfg]# cat env.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=e9e0cc2a1ea8bc42136e520a3cfb5b0a
KUBE_APISERVER="https://172.16.128.0:6443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#----------------------
 
# Create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/data/kuber/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem \
  --client-key=/data/kuber/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

BOOTSTRAP_TOKEN uses the value e9e0cc2a1ea8bc42136e520a3cfb5b0a generated earlier when the TLS Bootstrapping Token was created.
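  If you want to double-check the token, you can read it back from the token.csv created earlier (a sanity check only, assuming token.csv keeps the usual token,user,uid,groups layout and lives in the same cfg directory):

# The first field of token.csv is the bootstrap token; it must match
# BOOTSTRAP_TOKEN in env.sh.
awk -F',' '{print $1}' /data/kuber/work/_app/k8s/kubernetes/cfg/token.csv
grep BOOTSTRAP_TOKEN /data/kuber/work/_app/k8s/kubernetes/cfg/env.sh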

  Run the script:

[root@master cfg]# pwd
/data/kuber/work/_app/k8s/kubernetes/cfg
[root@master cfg]# sh env.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master cfg]# ls
bootstrap.kubeconfig  env.sh  flanneld  kube-apiserver  kube-controller-manager  kube-proxy.kubeconfig  kube-scheduler  token.csv

  Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes

[root@master cfg]# for N in $(seq 1 3); do scp -r kube-proxy.kubeconfig bootstrap.kubeconfig node$N:/data/kuber/work/_app/k8s/kubernetes/cfg/; done;
kube-proxy.kubeconfig                                100% 6294    10.2MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     4.2MB/s   00:00
kube-proxy.kubeconfig                                100% 6294    10.8MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     3.3MB/s   00:00
kube-proxy.kubeconfig                                100% 6294     9.6MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     2.3MB/s   00:00

  

6.2.6.2、Create the kubelet Configuration Files on the Master Node

This part may look somewhat repetitive. The original post did not make it clear that the kubelet has to be deployed on all Node nodes, which cost me quite a bit of troubleshooting on my first install, so I have added a section for it here. It is simply the Master kubelet deployment repeated on each node, with the configuration files adjusted accordingly.
Create and save the parameter file /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:

[root@master cfg]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.128.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

  Create and save the startup parameter file /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet with the following content. The original post did not include the --client-ca-file parameter, but leaving it out appears to cause kubectl exec -it connections into Pods to fail:

[root@master cfg]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.128.0 \
--kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config \
--client-ca-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
--cert-dir=/data/kuber/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

When kubelet starts, if the file specified by --kubeconfig does not exist, the bootstrap kubeconfig specified by --bootstrap-kubeconfig is used to request a client certificate from the API server.
Once the kubelet's certificate request is approved, the generated key and certificate are placed in the directory specified by --cert-dir.
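  A small sanity check, to be run after the node's CSR has been approved in the steps below: confirm that the kubelet wrote its client certificate into --cert-dir and generated the kubelet.kubeconfig file:

# On the master, after its CSR has been approved:
ls -l /data/kuber/work/_app/k8s/kubernetes/ssl_cert/
ls -l /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig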

  

6.2.6.3、Bind the kubelet-bootstrap User to a Cluster Role

[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:kubelet-api-admin --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

The original post bound the system:node-bootstrap clusterrole, but in practice that role's permissions are insufficient and cause kubectl exec -it into Pods to fail. Also note that the --user argument here must match the CN field specified when the kubelet client certificate was created; this user name is used in several places, so if you change it, change it everywhere consistently.
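  To confirm the binding looks as intended (a read-only check, not a step from the original post), describe it:

# Shows the bound ClusterRole (system:kubelet-api-admin) and the
# kubelet-bootstrap user it applies to.
kubectl describe clusterrolebinding kubelet-bootstrap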

6.2.6.4、Create the kubelet systemd Unit File

  Create and save /usr/lib/systemd/system/kubelet.service with the following content:

[root@master cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.2.6.5、Start the kubelet Service

[root@master cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.2.6.6、Check the kubelet Service Status

[root@master cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:31:23 CST; 14s ago
 Main PID: 5137 (kubelet)
    Tasks: 13
   Memory: 128.7M
   CGroup: /system.slice/kubelet.service
           └─5137 /data/kuber/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=172.16.128.0 --kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/data/kuber/work/_app/k8s/kub...

Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.488086    5137 eviction_manager.go:226] eviction manager: synchronize housekeeping
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502001    5137 helpers.go:836] eviction manager: observations: signal=imagefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.48876...T m=+10.738964114
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502103    5137 helpers.go:836] eviction manager: observations: signal=pid.available, available: 32554, capacity: 32Ki, time: 2019-02-19 22:31:34.50073593 +0800 CST m=+10.750931769
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502132    5137 helpers.go:836] eviction manager: observations: signal=memory.available, available: 2179016Ki, capacity: 2819280Ki, time: 2019-02-19 22:31:34.4887683...T m=+10.738964114
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502143    5137 helpers.go:836] eviction manager: observations: signal=allocatableMemory.available, available: 2819280Ki, capacity: 2819280Ki, time: 2019-02-19 22:31...T m=+10.751961068
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502151    5137 helpers.go:836] eviction manager: observations: signal=nodefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.4887...T m=+10.738964114
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502161    5137 helpers.go:836] eviction manager: observations: signal=nodefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.488768...T m=+10.738964114
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502170    5137 helpers.go:836] eviction manager: observations: signal=imagefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.488...T m=+10.738964114
Feb 19 22:31:34 master kubelet[5137]: I0219 22:31:34.502191    5137 eviction_manager.go:317] eviction manager: no resources are starved
Feb 19 22:31:36 master kubelet[5137]: I0219 22:31:36.104200    5137 kubelet.go:1995] SyncLoop (housekeeping)

6.2.6.7、Create the kubelet Configuration Files on the Node Nodes

  Run these steps on every node; node1 is used as the example.
  Create and save the parameter file /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:

[root@node1 ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.128.1
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

The address field must be changed to each node's own IP.

  Create and save the startup parameter file /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet with the following content:

[root@node1 ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.128.1 \
--kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.config \
--client-ca-file=/data/kuber/work/_app/k8s/kubernetes/ssl/ca.pem \
--cert-dir=/data/kuber/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

--hostname-override must likewise be changed to each node's own IP; a sketch for pushing both files to all the nodes follows.
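  As mentioned above, here is a minimal sketch for pushing both files from the master to node1~node3 and rewriting the IP in one pass. It assumes root ssh access from the master to the nodes and the 172.16.128.1~172.16.128.3 addressing used in this guide:

# Run on the master.
CFG=/data/kuber/work/_app/k8s/kubernetes/cfg
for N in $(seq 1 3); do
  scp $CFG/kubelet.config $CFG/kubelet node$N:$CFG/
  # Replace the master IP with each node's own IP in both files.
  ssh node$N "sed -i 's/172.16.128.0/172.16.128.$N/g' $CFG/kubelet.config $CFG/kubelet"
done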

6.2.6.8、Create the kubelet systemd Unit File

  Create and save /usr/lib/systemd/system/kubelet.service with the following content:

[root@node1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.2.6.9、Start the kubelet Service

[root@node1 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.2.6.10、Check the kubelet Service Status

[root@node1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:27:54 CST; 6s ago
 Main PID: 19123 (kubelet)
    Tasks: 12
   Memory: 18.3M
   CGroup: /system.slice/kubelet.service
           └─19123 /data/kuber/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=172.16.128.1 --kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-k...

Feb 18 06:27:54 node1 kubelet[19123]: I0218 06:27:54.784286   19123 mount_linux.go:179] Detected OS with systemd
Feb 18 06:27:54 node1 kubelet[19123]: I0218 06:27:54.784416   19123 server.go:407] Version: v1.13.0

6.2.7、Approve the Master Joining the Cluster

  CSRs can be approved manually, outside the built-in approval flow, to let nodes join the cluster.
  An administrator can approve certificate requests by hand with kubectl.
  Use kubectl get csr to list CSR requests and kubectl describe csr <name> to show the details of one.
  Use kubectl certificate approve <name> or kubectl certificate deny <name> to approve or reject a request; a batch-approval sketch follows below.
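  When several nodes bootstrap at the same time, it can be convenient to approve all Pending requests in one go. A small sketch (use with care, since it approves every pending request without inspecting it):

# Approve every CSR that is still Pending.
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve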

6.2.7.1、List the CSRs

[root@master cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   14m   kubelet-bootstrap   Pending

  

6.2.7.2、Approve Joining the Cluster

[root@master cfg]# kubectl certificate approve node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k
certificatesigningrequest.certificates.k8s.io/node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k approved

  

6.2.7.3、Verify the Master Has Joined the Cluster

  List the CSRs again

[root@master cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   15m   kubelet-bootstrap   Approved,Issued

  

6.3、Deploy the kube-proxy Component

  kube-proxy runs on every Node node. It watches the apiserver for changes to service and Endpoint objects and creates routing rules to load-balance traffic to services. The steps below use master as the example.
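  In the default iptables proxy mode (no --proxy-mode is set in the config below), those rules end up as iptables NAT chains. Once kube-proxy is running, a read-only way to see them:

# KUBE-SERVICES is the entry chain kube-proxy programs for ClusterIP traffic.
iptables -t nat -L KUBE-SERVICES -n | head -n 20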

6.3.1、Create the kube-proxy Parameter File

  Create and save the configuration file /data/kuber/work/_app/k8s/kubernetes/cfg/kube-proxy with the following content:

[root@master ~]# cat /data/kuber/work/_app/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.128.0 \
--cluster-cidr=10.244.0.0/16 \
--kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

--hostname-override must be changed to each node's own IP.

  

6.3.2、Create the kube-proxy systemd Unit File

  Create and save /usr/lib/systemd/system/kube-proxy.service with the following content:

[root@master ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/data/kuber/work/_app/k8s/kubernetes/cfg/kube-proxy
ExecStart=/data/kuber/work/_app/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target 

  

6.3.3、Start the kube-proxy Service

[root@master ~]# systemctl daemon-reload && systemctl enable kube-proxy &&  systemctl start kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

  

6.3.4、Check the kube-proxy Service Status

[root@master cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:08:51 CST; 3h 49min ago
 Main PID: 12660 (kube-proxy)
    Tasks: 0
   Memory: 1.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 12660 /data/kuber/work/_app/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.16.128.0 --cluster-cidr=10.244.0.0/16 --kubeconfig=/data/kuber/work/_app/k8s/kubernetes/cfg/...

Feb 18 09:58:38 master kube-proxy[12660]: I0218 09:58:38.205387   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:38 master kube-proxy[12660]: I0218 09:58:38.250931   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 master kube-proxy[12660]: I0218 09:58:40.249487   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 master kube-proxy[12660]: I0218 09:58:40.290336   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 master kube-proxy[12660]: I0218 09:58:42.264320   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 master kube-proxy[12660]: I0218 09:58:42.318954   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 master kube-proxy[12660]: I0218 09:58:44.273290   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 master kube-proxy[12660]: I0218 09:58:44.359236   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 master kube-proxy[12660]: I0218 09:58:46.287980   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 master kube-proxy[12660]: I0218 09:58:46.377475   12660 config.go:141] Calling handler.OnEndpointsUpdate

  

6.4、Verify the Server Services

  Check the Master status

[root@master cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

NAME              STATUS   ROLES    AGE   VERSION
node/172.16.128.0   Ready    <none>   51m   v1.13.0

NOTE: kube-proxy must be deployed on every node; repeat the steps above on each Node node. The components that a Node node must run are listed in the next section. A sketch for pushing the kube-proxy files to the other nodes follows below.
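  As mentioned in the note above, here is a minimal sketch for pushing the kube-proxy config and unit file to the other nodes, fixing --hostname-override, and starting the service. It assumes root ssh access from the master and the 172.16.128.1~172.16.128.3 addressing used here; kube-proxy.kubeconfig was already copied in 6.2.6.1:

# Run on the master.
CFG=/data/kuber/work/_app/k8s/kubernetes/cfg
for N in $(seq 1 3); do
  scp $CFG/kube-proxy node$N:$CFG/
  scp /usr/lib/systemd/system/kube-proxy.service node$N:/usr/lib/systemd/system/
  # Point --hostname-override at the node's own IP, then start the service.
  ssh node$N "sed -i 's/172.16.128.0/172.16.128.$N/' $CFG/kube-proxy && systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"
done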

6.5、Adding Kubernetes Node Nodes to the Cluster

  A Kubernetes Node node runs the following components:

  • Proxy:
      This module implements service discovery and reverse proxying in Kubernetes. kube-proxy supports forwarding TCP and UDP connections and, by default, distributes client traffic across the backend Pods of a service using a Round Robin algorithm. For service discovery, kube-proxy uses the apiserver's watch mechanism to track changes to service and endpoint objects in the cluster and maintains a service-to-endpoint mapping, so changes to backend Pod IPs do not affect callers; it also supports session affinity. (A quick end-to-end check of this forwarding is sketched after this list.)
  • Kubelet
      The kubelet is the Master's agent on each Node node and the most important component there. It maintains and manages all containers on that node, except containers that were not created through Kubernetes. In essence, it keeps the actual state of Pods in line with the desired state.
    At startup, the kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage. For security it only opens the HTTPS port, authenticates and authorizes requests, and rejects unauthorized access (allowing authorized clients such as the apiserver and Heapster).
  • Flannel
      Without a flanneld network, Pods on different Node nodes cannot communicate; only Pods on the same node can. Flannel reads the network configuration from etcd, allocates a subnet for each node, and registers the subnet information back into etcd.
  • ETCD
      etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, which is what its RESTful API is built on.
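  A quick end-to-end check of that forwarding, as referenced in the Proxy item above: create a small Deployment, expose it, and curl its ClusterIP from the master. This is only an illustration, not part of the deployment steps; it assumes the flannel network is up and the nodes can pull the nginx image. The nginx-test name is arbitrary.

# kubectl run still creates a Deployment in v1.13 (with a deprecation warning).
kubectl run nginx-test --image=nginx --replicas=2 --port=80
kubectl expose deployment nginx-test --port=80
# Get the service ClusterIP and curl it; kube-proxy forwards the request
# to one of the backend Pods, so this should print 200.
SVC_IP=$(kubectl get svc nginx-test -o jsonpath='{.spec.clusterIP}')
curl -s -o /dev/null -w '%{http_code}\n' http://$SVC_IP
# Clean up.
kubectl delete svc,deployment nginx-test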

  

6.5.1、Approve a Node Joining the Cluster

  List the CSRs; a Pending request from the node is visible

[root@master cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   84m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   2m45s   kubelet-bootstrap   Pending

  The following command shows the details of the request; you can see it was sent from node1's IP address, 172.16.128.1

[root@master cfg]# kubectl describe csr node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Name:               node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Mon, 18 Feb 2019 06:26:08 +0800
Requesting User:    kubelet-bootstrap
Status:             Pending
Subject:
         Common Name:    system:node:172.16.128.1
         Serial Number:
         Organization:   system:nodes
Events:  <none>

  Approve it to join the cluster

[root@master cfg]# kubectl certificate approve node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
certificatesigningrequest.certificates.k8s.io/node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA approved

  List the CSRs again; the node's join request has now been approved

[root@master cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   88m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   6m57s   kubelet-bootstrap   Approved,Issued

  

6.5.2、Remove a Node from the Cluster

  Before deleting a node, first evict the Pods running on it.
  Then run the following commands to delete the node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
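  For example, to remove node1, which registered under the name 172.16.128.1:

kubectl drain 172.16.128.1 --delete-local-data --force --ignore-daemonsets
kubectl delete node 172.16.128.1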

  For the node to be removed cleanly and to send a new CSR to the cluster the next time kubelet starts, you also need to delete the cached CSR certificate data on the deleted node

[root@node1 ~]# ls /data/kuber/work/_app/k8s/kubernetes/ssl_cert/
kubelet-client-2019-02-19-23-20-05.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@node1 ~]# rm -rf /data/kuber/work/_app/k8s/kubernetes/ssl_cert/*

  After the cached certificate data has been deleted, restart kubelet and a new CSR request will arrive on the Master.
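  A minimal sketch of that re-registration flow, using node1 as the example:

# On the deleted node, after clearing the cached certificates as above:
systemctl restart kubelet
# On the master, a fresh Pending CSR from the node should appear shortly:
kubectl get csr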

6.5.3、Label the Nodes

  Check the status of all nodes

[root@master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
172.16.128.0   Ready    <none>   96m   v1.13.0
172.16.128.1   Ready    <none>   24m   v1.13.0

  masterMaster 打标签

[root@master ~]# kubectl label node 172.16.128.0 node-role.kubernetes.io/master='master'
node/172.16.128.0 labeled

  node1Node 打标签

[root@master ~]# kubectl label node 172.16.128.1 node-role.kubernetes.io/master='node-node1'
node/172.16.128.1 labeled
[root@master ~]# kubectl label node 172.16.128.1 node-role.kubernetes.io/node='node-node1'
node/172.16.128.1 labeled
[root@master ~]# kubectl get node
NAME         STATUS   ROLES         AGE    VERSION
172.16.128.0   Ready    master        106m   v1.13.0
172.16.128.1   Ready    master,node   33m    v1.13.0

  Remove the master label from node1

[root@master ~]# kubectl label node 172.16.128.1 node-role.kubernetes.io/master-
node/172.16.128.1 labeled
[root@master cfg]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
172.16.128.0   Ready    master   108m   v1.13.0
172.16.128.1   Ready    node     35m    v1.13.0

  

7、References

  Linux7/Centos7 Selinux介绍
  Kubernetes网络原理及方案
  Installing a Kubernetes Cluster on CentOS 7
  How to install Kubernetes(k8) in RHEL or Centos in just 7 steps
  docker-kubernetes-tls-guide
  kubernetes1.13.1+etcd3.3.10+flanneld0.10集群部署   

8、FAQ

How do I generate a new NIC UUID for a virtual machine?

  For example, I installed node1 on Parallels and cloned it to node2. The IP address can be changed as described earlier in this document; if the UUID also needs to change, a new one can be generated with the following command:

[root@node2 ~]# uuidgen eth0
6ea1a665-0126-456c-80c7-1f69f32e83b7
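  To actually write a freshly generated UUID into the cloned machine's NIC configuration (assuming the config file is /etc/sysconfig/network-scripts/ifcfg-eth0), a small sketch:

# Replace the UUID line with a newly generated value, then restart networking.
NEW_UUID=$(uuidgen)
sed -i "s/^UUID=.*/UUID=${NEW_UUID}/" /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart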

9、Notes

This article is adapted from: https://www.cnblogs.com/lion.net/p/10408512.html
The content differs slightly, mainly in the IP addresses used for my own setup.

