Environment Preparation

Four Linux servers

Hostname        IP             Role
k8s-master-94   192.168.0.94   master
k8s-node1-95    192.168.0.95   node1
k8s-node2-96    192.168.0.96   node2
harbor          192.168.0.77   image registry

Run the following commands on all three cluster machines (the master and both nodes):

Check the CentOS version

[root@localhost Work]# cat /etc/redhat-release

CentOS Linux release 8.5.2111

Disable the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld

[root@localhost ~]# systemctl disable firewalld

Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.

Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

[root@localhost ~]# setenforce 0

Disable the swap partition (Kubernetes requires swap to be off, for performance and scheduling predictability)

# Disable temporarily; swap is turned off mainly for performance reasons
[root@localhost ~]# swapoff -a

# Verify that swap is now off
[root@localhost ~]# free

# Disable permanently
[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostnames

# On 192.168.0.94:
[root@localhost ~]# hostnamectl set-hostname k8s-master-94

# On 192.168.0.95:
[root@localhost ~]# hostnamectl set-hostname k8s-node1-95

# On 192.168.0.96:
[root@localhost ~]# hostnamectl set-hostname k8s-node2-96

# hostnamectl takes effect immediately; start a new shell (e.g. exec bash) to refresh the prompt

Update /etc/hosts

[root@localhost ~]# cat >> /etc/hosts << EOF

> 192.168.0.94 k8s-master-94

> 192.168.0.95 k8s-node1-95

> 192.168.0.96 k8s-node2-96

> EOF
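A quick, hedged sanity check of the entries just added (run from any of the three machines; each hostname should resolve and answer):

# Each name should resolve via /etc/hosts and respond to one ping
for h in k8s-master-94 k8s-node1-95 k8s-node2-96; do ping -c1 -W1 $h; done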

Time synchronization

[root@localhost ~]#yum install chrony -y

[root@localhost ~]#systemctl start chronyd

[root@localhost ~]#systemctl enable chronyd

[root@localhost ~]#chronyc sources

Allow iptables to see bridged traffic by passing bridged IPv4 traffic to the iptables chains. If net.ipv4.ip_forward is set to 0 anywhere, change it to 1. (Note: net.ipv4.tcp_tw_recycle was removed in kernel 4.12, so on CentOS 8 sysctl may report that key as unknown; the line can be dropped if it errors.)

[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF

> net.ipv4.ip_forward = 1

> net.ipv4.tcp_tw_recycle = 0

> net.bridge.bridge-nf-call-ip6tables = 1

> net.bridge.bridge-nf-call-iptables = 1

> EOF

[root@localhost ~]# sysctl --system
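One assumption worth checking here: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded. This step is not in the original, but if sysctl --system complains that those keys are unknown, load the module first:

# Load br_netfilter so the net.bridge.* sysctls exist
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # persist across reboots
sysctl --system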

Install Docker. If you run into problems, see: "Installing Docker on CentOS 8 and troubleshooting errors" (duansamve, CSDN blog).

## Remove old versions
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

## Switch the yum mirror (CentOS 8 is EOL, so only the vault repo below still works)

### Enter the yum repo directory
cd /etc/yum.repos.d

## Remove all repo files in this directory (copy the command exactly; don't miss the dot)
rm -rf ./*

## Install the correct (vault) mirror repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo

## Rebuild the cache
yum makecache

## Install required packages; yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2

## Add the Docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## Install Docker
yum install -y docker-ce

## Start Docker and enable it at boot
systemctl start docker
systemctl enable docker

## Verify the installation
docker version
docker info

## Configure a registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ccdkz6eh.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
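Since the cluster initialization below runs into a cgroup-driver mismatch warning, one hedged option is to set Docker's cgroup driver to systemd here, in the same daemon.json ("exec-opts" is standard dockerd configuration; the mirror URL is the one from above):

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ccdkz6eh.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker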

Install kubeadm, kubelet, and kubectl

Run the following on all three machines.

Add the Aliyun Kubernetes yum repo

[root@k8s-node1-80 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF

> [kubernetes]

> name=Kubernetes

> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

> enabled=1

> gpgcheck=1

> repo_gpgcheck=1

> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

> EOF

# Clear the cache
[root@k8s-node1-80 ~]# yum clean all

# Download package metadata from the repo servers and cache it locally (makecache builds the cache)
[root@k8s-node1-80 ~]# yum makecache

# List the available kubectl versions
[root@k8s-node1-80 ~]# yum list kubectl --showduplicates | sort -r

Install kubeadm, kubelet, and kubectl

[root@k8s-node1-80 ~]# yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
[root@k8s-node1-80 ~]# systemctl start kubelet
[root@k8s-node1-80 ~]# systemctl enable kubelet
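Note that until kubeadm init (or kubeadm join) writes its configuration, kubelet restarts in a loop; that is expected and not a sign of a broken install. A quick check:

# kubelet shows "activating (auto-restart)" before the cluster is initialized -- this is normal
systemctl status kubelet --no-pager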

# Check that the packages are installed

[root@k8s-node2-92 ~]# yum list installed | grep kubelet

kubelet.x86_64 1.21.0-0 @kubernetes

[root@k8s-node2-92 ~]# yum list installed | grep kubeadm

kubeadm.x86_64 1.21.0-0 @kubernetes

[root@k8s-node2-92 ~]# yum list installed | grep kubectl

kubectl.x86_64 1.21.0-0 @kubernetes

Check the installed version

[root@k8s-node2-92 ~]# kubelet --version

Kubernetes v1.21.0


kubelet: runs on every cluster node and is responsible for starting Pods and containers.

kubeadm: a tool for bootstrapping the cluster.

kubectl: the Kubernetes command-line tool, used to deploy and manage applications, inspect resources, and create, delete, and update components.

Reboot CentOS

reboot

Initialize the K8s Cluster

Deploy the master node; run on 192.168.0.94:

kubeadm init --apiserver-advertise-address=192.168.0.94 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=all \
  --kubernetes-version=v1.21.0 \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

Parameter notes

--apiserver-advertise-address=192.168.0.94: the master host's IP address (here, 192.168.0.94).

--image-repository=registry.aliyuncs.com/google_containers: the image registry; the upstream registry is unreachable from China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used.

--kubernetes-version=v1.21.0: the Kubernetes version to deploy; it must match the kubeadm/kubelet/kubectl packages installed above.

--service-cidr=10.10.0.0/16: the Service network range; you can reuse 10.10.0.0/16 as-is on future installs.

--pod-network-cidr=10.244.0.0/16: the IP range for pod-to-pod networking inside k8s; it must differ from service-cidr. If unsure, use 10.244.0.0/16.

On the CIDRs: the two ranges must not overlap each other, keep the /16 suffix, and must not overlap the host machines' own network either.

If you see this error:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

[WARNING IsDockerSystemdCheck] appears because Docker's cgroup driver and kubelet's cgroup driver differ; here we change Docker's to match kubelet's (systemd):

[root@k8s-master-94 ~]# docker info | grep Cgroup

Cgroup Driver: cgroupfs

Cgroup Version: 1

[root@k8s-master-94 ~]# vim /usr/lib/systemd/system/docker.service   # append --exec-opt native.cgroupdriver=systemd to the ExecStart line
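For reference, a hedged sketch of what the edited line might look like (the rest of the ExecStart line varies by Docker version; only the trailing flag is the addition):

# /usr/lib/systemd/system/docker.service -- ExecStart with the flag appended
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd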

[root@k8s-master-94 ~]# systemctl daemon-reload

[root@k8s-master-94 ~]# systemctl restart docker

# Re-initialize
[root@k8s-master-94 ~]# kubeadm reset   # reset first

[root@k8s-master-94 ~]# docker info | grep Cgroup

Cgroup Driver: systemd

Cgroup Version: 1

# Then repeat the kubeadm init command from the master initialization step above

Initialization succeeds

The output includes a kubeadm join command that the worker nodes will use to join the cluster; save it. Then run the following:

[root@k8s-master-94 ~]# mkdir -p $HOME/.kube

[root@k8s-master-94 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master-94 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the master node's status:

[root@k8s-master-94 ~]# kubectl get node

NAME STATUS ROLES AGE VERSION

k8s-master-94 Ready control-plane,master 20m v1.21.0

Install the Calico network (CNI) plugin on the master

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

Then, in a temporary directory (or any directory you create), run:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml

vim custom-resources.yaml

Change the cidr field in it to match the --pod-network-cidr you set when initializing the master: 10.244.0.0/16. (A hedged one-liner for this edit follows below.)
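As an alternative to editing by hand, this one-liner does the same edit; it assumes the downloaded file still uses Calico's usual default of 192.168.0.0/16, so check the file first:

# Swap Calico's assumed default pod CIDR for the one used at kubeadm init
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml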

Save the change (or use the one-liner above), then run:

kubectl create -f custom-resources.yaml

Wait a few moments until all the pods below reach Running; the Calico network plugin is then fully installed.

kubectl get pod --all-namespaces

If some pods stay non-Running (image pulls failing for calico/CoreDNS), see: "pod calico CoreDNS 拉取不到镜像的问题的解决办法" (CSDN blog).

The master node is now ready:

[root@k8s-master-94 ~]# kubectl get node

NAME STATUS ROLES AGE VERSION

k8s-master-94 Ready control-plane,master 165m v1.21.0

Deploy the worker nodes; run on 192.168.0.95 and 192.168.0.96:

kubeadm join 192.168.0.94:6443 --token faj2nf.5o3gwjtbst90k19y \
  --discovery-token-ca-cert-hash sha256:62d91aaef65e987702ddca804330d1fe721707fdf794d2494730636e616bda09

If the command fails, see: https://www.cnblogs.com/cloud-yongqing/p/16032596.html

If you've lost the join command, regenerate it:

kubeadm token create --print-join-command

The join executes successfully.

Check the deployment results

On a worker node:

[root@k8s-node1-95 ~]# kubectl get node

The connection to the server localhost:8080 was refused - did you specify the right host or port?
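That error is expected: worker nodes get no kubeconfig by default. A hedged, lab-only fix is to copy the admin kubeconfig from the master (in production, issue restricted credentials instead):

# Run on the node; assumes root SSH access to the master
mkdir -p $HOME/.kube
scp root@192.168.0.94:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes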

On the master:

[root@k8s-master-94 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s-master-94 Ready control-plane,master 14m v1.21.0

k8s-node1-95 Ready <none> 4m v1.21.0

k8s-node2-96 Ready <none> 5m10s v1.21.0

Deploy the Dashboard (master)

Create recommended.yaml

cat > recommended.yaml << EOF

# Copyright 2017 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

# http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # Dashboard NodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF

[root@k8s-master-94 ~]# kubectl apply -f recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

[root@k8s-master-94 ~]# kubectl get pods -n kubernetes-dashboard

NAME READY STATUS RESTARTS AGE

dashboard-metrics-scraper-7c857855d9-xk4d4 1/1 Running 0 2m10s

kubernetes-dashboard-658b66597c-r59xp 1/1 Running 0 2m10s

Create a token to log in (note: the token is valid for 24 hours by default; once it expires, you must generate a new one)

Create a service account and bind it to the default cluster-admin cluster role

# Create the user
kubectl create serviceaccount dashboard-admin -n kube-system

# Grant the user permissions
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Get the user's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
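If you want just the raw token (the same lookup as above, minus the surrounding describe output), a hedged one-liner that relies on v1.21 behavior, where ServiceAccounts still reference a token secret:

kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d; echo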

Log in to the dashboard by IP (the NodePort is exposed on every node):

https://masterip:30001/#/login

https://node1ip:30001/#/login

https://node2ip:30001/#/login

Enter the token you obtained to log in. To keep the login token from ever expiring, configure the dashboard's token TTL (a hedged sketch follows).
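A hedged sketch of the "never expire" setting: the kubernetes-dashboard container accepts a --token-ttl argument (0 disables expiry), so one approach is to patch the extra arg into the Deployment and let it roll out:

kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard --type json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--token-ttl=0"}]'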

Deploy metrics-server (master)

Create components.yaml

cat > components.yaml << EOF

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            - --kubelet-insecure-tls   # added: skip kubelet certificate verification
            - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname   # added; replaces the upstream default list
          image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0   # swapped to the Aliyun mirror
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 4443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            initialDelaySeconds: 20
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

[root@k8s-master-94 ~]# kubectl apply -f components.yaml

[root@k8s-master-94 ~]# kubectl get pods -n kube-system|grep metrics

metrics-server-5f85c44dcd-fcshj 1/1 Running 0 43s
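Once the pod is Running, a quick check that the metrics pipeline works end to end (it can take a minute before numbers appear):

kubectl top nodes
kubectl top pods -A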

Deploy the Harbor Registry

Requirements: the server must have docker and docker-compose installed. Install docker-compose:

[root@localhost ~]# curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

% Total % Received % Xferd Average Speed Time Time Time Current

Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0

4 11.6M 4 572k 0 0 15439 0 0:13:13 0:00:37 0:12:36 7942

100 11.6M 100 11.6M 0 0 27290 0 0:07:29 0:07:29 --:--:-- 154k

[root@localhost ~]# chmod +x /usr/local/bin/docker-compose

[root@localhost ~]# docker-compose version

docker-compose version 1.26.0, build d4451659

docker-py version: 4.2.1

CPython version: 3.7.7

OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

Download the Harbor installer

[root@localhost ~]# wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.5.3.tgz

解压安装包并移动位置

tar -zxvf harbor-offline-installer-v1.5.3.tgz #解压离线安装包

mv harbor /opt/ #移到/opt目录下

cd /opt #进入到/opt目录

ls #查看目录内容

cd harbor

In the harbor directory, edit the harbor.cfg configuration file

vim harbor.cfg

hostname = 192.168.0.77              # Harbor's listen address; set it to this host's IP
harbor_admin_password = Natux2019.   # the password for Harbor's admin user

Configure Harbor (if this step fails, install Python 2.7)

./prepare

Install Harbor

./install.sh

If it errors out, change the version on the first line of docker-compose.yml to 2.1, then run ./install.sh again.
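A hedged one-liner for that edit (it assumes the version key sits alone on the first line, as stated above; check the file first):

sed -i '1s/.*/version: "2.1"/' docker-compose.yml
./install.sh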

Open the Harbor web UI; it listens on port 80 by default: http://<your-server-ip> (log in as admin with the password set above).
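A hedged smoke test from one of the cluster machines. Since Harbor here runs over plain HTTP, that machine must first trust it via "insecure-registries" in its /etc/docker/daemon.json (restart docker after editing); "library" is Harbor's default project:

docker login 192.168.0.77 -u admin -p 'Natux2019.'
docker pull busybox
docker tag busybox 192.168.0.77/library/busybox:test
docker push 192.168.0.77/library/busybox:test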
