
K8S - Installing a K8S Cluster with Kubeadm

2022-01-18 11:00 · TSL-LIUYANG · https://my.oschina.net/u/4431924/blog/5399675

K8S Deployment Guide

Preparing the Machine Environment

First, prepare 3 virtual machines. This article uses VMware Workstation 16.2 as the hypervisor and CentOS Linux release 7.9.2009 (Core) as the image.
The installation itself is routine and covered by plenty of tutorials online, but pay attention to the following:
1. Each node must have >= 2 CPU cores, otherwise k8s cannot start.
2. DNS/network: I run the VMs in NAT mode; set the DNS to a server reachable from your local network, otherwise the machines have no connectivity and cannot download images.
3. Linux kernel: the kernel must be version 4 or above, so it has to be upgraded.
    uname -r shows your current kernel version
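A quick pre-flight check of both requirements can save a reboot later. This is a small sketch of my own, not part of the original setup steps:

    # Sketch: warn if this VM does not meet the k8s prerequisites
    [ "$(nproc)" -ge 2 ] || echo "WARNING: k8s needs at least 2 CPU cores"
    case "$(uname -r)" in [123].*) echo "WARNING: kernel older than 4.x, upgrade it (see below)";; esac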

Dependencies

    #1. Set the hostname (run the matching line on each machine)
    hostnamectl set-hostname k8s-master01 
    hostnamectl set-hostname k8s-node01
    hostnamectl set-hostname k8s-node02
    ---------------------------------------------------------------------
    #Check that it took effect
    hostname
    ---------------------------------------------------------------------
    #On every machine, map the IPs to hostnames in /etc/hosts
    Note: the first two lines are already there, ignore them; only the last 3 lines are added
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.183.128 k8s-master01
    192.168.183.129 k8s-node01
    192.168.183.130 k8s-node02
    ---------------------------------------------------------------------
    You can check which DNS server you are using with: nslookup www.baidu.com
    Then open the resolver config with: vi /etc/resolv.conf
    and add the DNS server of your host machine (look up how to find your machine's DNS if you are unsure)
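    A quick sanity check (my own sketch, not one of the numbered steps): after editing /etc/hosts and resolv.conf, every node should reach the others by name, and external DNS should resolve:
    for h in k8s-master01 k8s-node01 k8s-node02; do ping -c1 -W1 $h >/dev/null 2>&1 && echo "$h OK" || echo "$h UNREACHABLE"; done
    nslookup www.baidu.com >/dev/null && echo "external DNS OK"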
    ---------------------------------------------------------------------
    
    #2. Install dependency packages. Note: every machine needs them 
    yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
    ---------------------------------------------------------------------
    The above are basic packages. If yum cannot download some of them, change your yum mirror; the details are not covered here, but for example:
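    #One common option (an example of mine, not prescribed by the original text) is the Aliyun base repo:
    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && yum makecache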
    ---------------------------------------------------------------------
    
    #3. Install iptables, start it, enable it at boot, flush its rules, and save the current rules as the defaults
    # Stop the firewalld service 
    systemctl stop firewalld && systemctl disable firewalld 
    # Flush iptables 
    yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
    ---------------------------------------------------------------------
    
    #4. Disable selinux and swap
    #Turn off the swap partition [virtual memory] and disable it permanently 
    swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
     ---------------------------------------------------------------------
    #Disable selinux 
    setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
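    #Quick verification (a sketch): swap should show 0, and getenforce should print Permissive now (Disabled after a reboot)
    free -h | grep -i swap
    getenforce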
    ---------------------------------------------------------------------
    
    #5. Upgrade the Linux kernel 
    #5.1 Method one:
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm 
    #Install the long-term kernel 
    yum --enablerepo=elrepo-kernel install -y kernel-lt 
    #Set the new kernel as the default boot entry 
    #(the exact title must match what yum actually installed; list the entries with the awk command in method 5.2) 
    grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)' 
    #Note: the new kernel only takes effect after a reboot.
    #reboot
    #Check the kernel: uname -r
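    #Optional check before rebooting (a sketch): confirm which entry grub will boot
    grub2-editenv list    # should print saved_entry=CentOS Linux (...elrepo...) ...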
    ---------------------------------------------------------------------
    #In principle this should just work; if your VM boots the wrong entry, you can switch it with the commands below
    #5.2 Method two
    #List all installed kernel entries
    awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
    These are my entries:
    0 : CentOS Linux (3.10.0-1160.49.1.el7.x86_64) 7 (Core)
    1 : CentOS Linux (5.4.171-1.el7.elrepo.x86_64) 7 (Core)
    2 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
    3 : CentOS Linux (0-rescue-ad98ccd8c2c446b7ae2b0387025b480b) 7 (Core)
    ---------------------------------------------------------------------
    #Edit the grub2 boot configuration
    vim /etc/default/grub
    Set GRUB_DEFAULT=1 (the index from the listing above; we need a kernel above 4.0, which for me is entry 1), then save and exit
    # Regenerate the grub configuration
    grub2-mkconfig -o /boot/grub2/grub.cfg
    #Reboot into the new kernel
    reboot
    #Check the kernel
    uname -r
    ---------------------------------------------------------------------
    
    #6. Tune kernel parameters for k8s 
    cat > kubernetes.conf <<EOF 
    net.bridge.bridge-nf-call-iptables=1 
    net.bridge.bridge-nf-call-ip6tables=1 
    net.ipv4.ip_forward=1 
    net.ipv4.tcp_tw_recycle=0 
    vm.swappiness=0 
    vm.overcommit_memory=1 
    vm.panic_on_oom=0 
    fs.inotify.max_user_instances=8192 
    fs.inotify.max_user_watches=1048576 
    fs.file-max=52706963 
    fs.nr_open=52706963 
    net.ipv6.conf.all.disable_ipv6=1 
    net.netfilter.nf_conntrack_max=2310720
    EOF
    ------------------------------------------------------------------------------
    Note: net.ipv4.tcp_tw_recycle is no longer supported on newer kernels; on anything above 4.4, leave it out and use this instead:
    cat > kubernetes.conf <<EOF 
    net.bridge.bridge-nf-call-iptables=1 
    net.bridge.bridge-nf-call-ip6tables=1 
    net.ipv4.ip_forward=1 
    vm.swappiness=0 
    vm.overcommit_memory=1 
    vm.panic_on_oom=0 
    fs.inotify.max_user_instances=8192 
    fs.inotify.max_user_watches=1048576 
    fs.file-max=52706963 
    fs.nr_open=52706963 
    net.ipv6.conf.all.disable_ipv6=1 
    net.netfilter.nf_conntrack_max=2310720
    EOF
    ------------------------------------------------------------------------------
    #Copy the tuning file into /etc/sysctl.d/ so it is applied at boot
    cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
    #Apply it immediately by hand 
    sysctl -p /etc/sysctl.d/kubernetes.conf
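    #If sysctl complains that the net.bridge.* keys do not exist, load the bridge module first (it is loaded again in step 9): modprobe br_netfilter
    #Spot-check (a sketch) that the values took effect:
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness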
    ------------------------------------------------------------------------------
    
    #7. Stop services the cluster does not need 
    systemctl stop postfix && systemctl disable postfix
    ------------------------------------------------------------------------------
    
    #8. Configure how logs are stored 
    
    #Create the directory for persistent logs 
    mkdir /var/log/journal 
    
    #Create the drop-in config directory 
    mkdir /etc/systemd/journald.conf.d 
    
    #Create the config file 
    cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF 
    [Journal]
    Storage=persistent 
    Compress=yes 
    SyncIntervalSec=5m 
    RateLimitInterval=30s 
    RateLimitBurst=1000 
    SystemMaxUse=10G 
    SystemMaxFileSize=200M 
    MaxRetentionSec=2week 
    ForwardToSyslog=no 
    EOF
    
    #Restart journald to load the config 
    systemctl restart systemd-journald
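    #Optional check (a sketch): journald should now report its on-disk usage under /var/log/journal
    journalctl --disk-usage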
    ------------------------------------------------------------------------------
    #9. Prerequisites for running kube-proxy in ipvs mode 
    modprobe br_netfilter 
    #Write ipvs.modules
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF 
    #!/bin/bash 
    modprobe -- ip_vs 
    modprobe -- ip_vs_rr 
    modprobe -- ip_vs_wrr 
    modprobe -- ip_vs_sh 
    modprobe -- nf_conntrack_ipv4 
    EOF 
    -------------------------------------------------------------------------------
    On newer kernels use this instead (nf_conntrack_ipv4 has been merged into nf_conntrack):
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF 
    #!/bin/bash 
    modprobe -- ip_vs 
    modprobe -- ip_vs_rr 
    modprobe -- ip_vs_wrr 
    modprobe -- ip_vs_sh 
    modprobe -- nf_conntrack
    EOF 
    
    ##Make the script executable, run it, and use lsmod to confirm the modules are loaded 
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
    

Installing and Configuring Docker

#1. Install docker 
yum install -y yum-utils device-mapper-persistent-data lvm2 

#Add a stable repository; the repo definition is saved to /etc/yum.repos.d/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Note:
You can also use another stable docker repository if you prefer

#Update the related yum packages & install Docker CE 
yum update -y && yum install docker-ce

#2. Configure the docker daemon 

#Create the /etc/docker directory
mkdir /etc/docker 

#Write daemon.json 
cat > /etc/docker/daemon.json <<EOF 
{"exec-opts": ["native.cgroupdriver=systemd"],"log-driver": "json-file","log-opts": {"max-size": "100m"}} 
EOF 
Note: watch out for file encoding problems; if docker fails to start, journalctl -amu docker will show the error 

#Create a directory for docker systemd drop-in configs 
mkdir -p /etc/systemd/system/docker.service.d 

#3. Reload systemd and restart docker 
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
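#A quick verification sketch: docker should be running and using the systemd cgroup driver set in daemon.json above
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd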

kubeadm [one-command k8s install]

#1. Installing kubernetes requires the kubelet, kubeadm, etc. packages, but the official yum source is packages.cloud.google.com, which is unreachable from mainland China, so we use the Aliyun yum mirror instead. 

#Add the Aliyun mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=0 
repo_gpgcheck=0 
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF 

#2. Install kubeadm, kubelet, kubectl 
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1 
Note:
You can stay on this older version or install a newer one. The rest of this guide uses v1.23.1, so to get the newer version first run:
yum update 

# Enable and start kubelet 
systemctl enable kubelet && systemctl start kubelet
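A quick check (a sketch): both binaries should report the version you installed. kubelet restarting in a loop at this point is normal; it stays unhealthy until kubeadm init/join gives it a config.
kubeadm version -o short
kubectl version --client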

Cluster Installation

Required Images

#1. List the images kubeadm needs for a one-command install
kubeadm config images list
#Output:
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

-------------------------------------------------------------------------------
Special note:
====
k8s.gcr.io is hosted by Google, so pulling from it needs a proxy; without one, you have to hunt down mirrored images the way I did.
The following steps can be done on one machine and the results scp'd to the others.
====
#2. Search dockerhub for mirrors of the images listed above

#Pull the images
    docker pull v5cn/kube-apiserver:v1.23.1
    docker pull v5cn/kube-controller-manager:v1.23.1
    docker pull v5cn/kube-scheduler:v1.23.1
    docker pull v5cn/kube-proxy:v1.23.1
    docker pull v5cn/pause:3.6
    docker pull v5cn/etcd:3.5.1-0
    docker pull xyz349925756/coredns:v1.8.6

#List the images
docker images 

#Re-tag them to the names kubeadm expects
docker tag xyz349925756/coredns:v1.8.6  k8s.gcr.io/coredns/coredns:v1.8.6
docker tag v5cn/kube-apiserver:v1.23.1  k8s.gcr.io/kube-apiserver:v1.23.1
docker tag v5cn/kube-controller-manager:v1.23.1  k8s.gcr.io/kube-controller-manager:v1.23.1
docker tag v5cn/kube-scheduler:v1.23.1  k8s.gcr.io/kube-scheduler:v1.23.1
docker tag v5cn/kube-proxy:v1.23.1  k8s.gcr.io/kube-proxy:v1.23.1
docker tag v5cn/pause:3.6  k8s.gcr.io/pause:3.6
docker tag v5cn/etcd:3.5.1-0  k8s.gcr.io/etcd:3.5.1-0

#Remove the original mirror tags
docker rmi xyz349925756/coredns:v1.8.6
docker rmi v5cn/kube-apiserver:v1.23.1
docker rmi v5cn/kube-controller-manager:v1.23.1
docker rmi v5cn/kube-scheduler:v1.23.1
docker rmi v5cn/kube-proxy:v1.23.1
docker rmi v5cn/pause:3.6
docker rmi v5cn/etcd:3.5.1-0
#Save the images into one archive
docker save -o kubeadm-basic.images  k8s.gcr.io/kube-apiserver k8s.gcr.io/kube-proxy k8s.gcr.io/kube-scheduler k8s.gcr.io/kube-controller-manager k8s.gcr.io/etcd k8s.gcr.io/coredns/coredns  k8s.gcr.io/pause

#Compress the archive
tar -zcvf kubeadm-basic.images.tar.gz kubeadm-basic.images

#Copy it to the other nodes
scp -r kubeadm-basic.images.tar.gz root@k8s-master01:/root/
scp -r kubeadm-basic.images.tar.gz root@k8s-node01:/root/

#Decompress on the other nodes
tar -zxvf kubeadm-basic.images.tar.gz

#Load the images on the other nodes
docker load -i kubeadm-basic.images
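#Verification sketch (run on every node): all required images should now be present before kubeadm init
kubeadm config images list 2>/dev/null | while read img; do docker image inspect "$img" >/dev/null 2>&1 && echo "OK      $img" || echo "MISSING $img"; done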
-------------------------------------------------------------------------------    
    

K8S Deployment

Important: this part is performed on the master node only.

#1. Generate the default yaml resource config file 
kubeadm config print init-defaults > kubeadm-config.yaml 

#Inspect the yaml config file
cat kubeadm-config.yaml
You should see something like the following (this is my finished config; a freshly generated one will contain less):
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.183.128
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.23.1
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

#2. Edit parts of the yaml resource file
localAPIEndpoint: 
  advertiseAddress: 192.168.183.128 # Note: change this to your master's IP address 
kubernetesVersion: v1.23.1 #Note: change the version; it must match your kubectl version 

nodeRegistration:
  name: your node's name; it just needs to match the hostname


#Set the pod network segment for flannel communication; it must match flannel's subnet. Add under networking:
podSubnet: "10.244.0.0/16" 

#Tell kube-proxy to use ipvs by appending: 
#Older versions:

--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1 
kind: KubeProxyConfiguration 
featureGates: 
  SupportIPVSProxyMode: true 
mode: ipvs 


#Newer versions:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1 
kind: KubeProxyConfiguration 
mode: ipvs 


#3. Initialize the master node and start the deployment 

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
---------------------------------------------------------------------------------------------
If the command above fails (the flag was renamed in newer kubeadm versions), run this instead:
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#Note: this command only succeeds with more than 1 CPU core
 
#4. After a successful init, run the following 

#Create the directory that holds the connection config and credentials 
mkdir -p $HOME/.kube 

#Copy the cluster admin config into it 
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 

#Make the config file owned by the current user 
chown $(id -u):$(id -g) $HOME/.kube/config

#Check node status
kubectl get node
#The node is still NotReady because the network component is missing
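#You can also watch the system pods come up (a sketch); coredns stays Pending until a network plugin is installed, which the next section fixes
kubectl get pods -n kube-system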

The flannel plugin

The flannel manifest is hard to download directly (the usual URL needs a proxy).
Below is the content of kube-flannel.yml:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg


#Copy the content above into a file named kube-flannel.yml
 ------------------------------------------------------------------------------

#Deploy flannel 
kubectl create -f kube-flannel.yml
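#Verification sketch: wait for the flannel DaemonSet defined above to roll out, after which the nodes should turn Ready
kubectl -n kube-system rollout status ds/kube-flannel-ds
kubectl get nodes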

Joining the Nodes

# To join the remaining worker nodes to the master, run the command from the install log 
#Check the log file 
cat kubeadm-init.log 

#Look for a line like this one from my log:
kubeadm join 192.168.183.128:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:af133b00b53480ff9ade0f091c445c9e261e7f7a30cb2eff8844152d21858d64

# Copy the command to the other node machines and run it there 
kubeadm join 192.168.183.128:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:af133b00b53480ff9ade0f091c445c9e261e7f7a30cb2eff8844152d21858d64
	
#Check node status 
kubectl  get node

This takes a little while; wait until all nodes show Ready.
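If the token from the log has expired (the ttl in kubeadm-config.yaml above is 24h), you can generate a fresh join command on the master with a standard kubeadm command:
kubeadm token create --print-join-command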

Summary:

That's the complete kubeadm-based K8S installation. Following it step by step and pasting the commands should work; any problems you hit are easy to look up, and since I have already stepped on the basic pitfalls, there shouldn't be anything major.
