apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: example-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: nameOfClusterIssuer
  name: myIngress
  namespace: myIngress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myservice
            port:
              number: 80
  tls: # < placing a host in the TLS config will determine what ends up in the cert's subjectAltNames
  - hosts:
    - example.com
    secretName: myingress-cert # < cert-manager will store the created certificate in this secret.
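With both manifests applied, it is worth confirming that the ClusterIssuer has registered its ACME account and that cert-manager has created the Certificate object referenced by the Ingress. A minimal check, assuming the manifests above were saved as clusterissuer.yaml and ingress.yaml (the file names are just placeholders):

kubectl apply -f clusterissuer.yaml -f ingress.yaml
# the issuer should report READY=True once its ACME account is registered
kubectl get clusterissuer letsencrypt-staging
# cert-manager creates a Certificate named after the secretName in the Ingress TLS block
kubectl describe certificate myingress-cert -n myIngress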
kubectl get Issuers,ClusterIssuers,Certificates,CertificateRequests,Orders,Challenges -n partner

NAME                                                READY   AGE
clusterissuer.cert-manager.io/letsencrypt-staging   True    42h

NAME                                         READY   SECRET           AGE
certificate.cert-manager.io/myingress-cert   False   myingress-cert   42h

NAME                                                  APPROVED   DENIED   READY   ISSUER                REQUESTOR                                         AGE
certificaterequest.cert-manager.io/myingress-cert-1   True                False   letsencrypt-staging   system:serviceaccount:cert-manager:cert-manager   11h

NAME                                                     STATE     AGE
order.acme.cert-manager.io/myingress-cert-1-1904933461   invalid   11h
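The order being invalid means the ACME HTTP01 challenge failed. The usual next step is to describe the Order (and any remaining Challenge), since their status fields carry the reason reported by the ACME server; a sketch using the resource names from the output above:

# the Order status records why the ACME server rejected it
kubectl describe order.acme.cert-manager.io/myingress-cert-1-1904933461 -n partner
# a Challenge, if still present, shows the state of the HTTP01 self-check
kubectl describe challenges.acme.cert-manager.io -n partner
kubectl describe certificaterequest myingress-cert-1 -n partner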
Jan 14 22:29:29 master02.k8s kubelet[1998]: E0114 22:29:29.187573 1998 kubelet.go:2291] "Error getting node" err="node \"master02.k8s\" not found"
Checking the etcd logs on master01 shows:
2022-01-14 15:57:13.963722 I | embed: rejected connection from "192.168.203.4:45008" (error "tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid", ServerName "")
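The log points at an expired client certificate. Before renewing anything, the expiry can be confirmed straight from the certificate files on the control-plane node; a quick openssl check, assuming the default kubeadm paths:

# print the validity window of the etcd server/peer certificates
openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -dates
openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -dates
# the apiserver talks to etcd with this client certificate, signed by the same etcd CA
openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -dates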
# kubeadm certs renew
missing subcommand; "renew" is not meant to be run on its own
To see the stack trace of this error execute with --v=5 or higher

[root@master02 kubernetes]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 10, 2022 00:10 UTC   238d                                    no
apiserver                  Sep 10, 2022 00:20 UTC   238d            ca                      no
apiserver-etcd-client      Sep 10, 2022 00:20 UTC   238d            etcd-ca                 no
apiserver-kubelet-client   Sep 10, 2022 00:20 UTC   238d            ca                      no
controller-manager.conf    Sep 10, 2022 00:09 UTC   238d                                    no
etcd-healthcheck-client    Dec 22, 2021 23:53 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Dec 22, 2021 23:53 UTC   <invalid>       etcd-ca                 no
etcd-server                Dec 22, 2021 23:53 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Sep 10, 2022 00:20 UTC   238d            front-proxy-ca          no
scheduler.conf             Sep 10, 2022 00:09 UTC   238d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 24, 2029 07:18 UTC   7y              no
etcd-ca                 Dec 24, 2029 07:18 UTC   7y              no
front-proxy-ca          Dec 24, 2029 07:18 UTC   7y              no
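Only the etcd-server, etcd-peer, and etcd-healthcheck-client certificates are expired here (the CAs are valid until 2029), so they can be renewed individually rather than renewing everything. A sketch with kubeadm; the etcd static pod must then be restarted so it reloads the new files, and moving its manifest out of the static-pod directory and back is one way to force that:

kubeadm certs renew etcd-server
kubeadm certs renew etcd-peer
kubeadm certs renew etcd-healthcheck-client
# force kubelet to recreate the etcd static pod with the renewed certificates
mv /etc/kubernetes/manifests/etcd.yaml /tmp/
sleep 20
mv /tmp/etcd.yaml /etc/kubernetes/manifests/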
> kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
I0910 08:00:20.155181   16619 version.go:254] remote version is much newer: v1.22.1; falling back to: stable-1.21
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/google_containers/coredns:v1.8.0": output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.0 not found: manifest unknown: manifest unknown
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
The Aliyun mirror publishes this image as coredns:1.8.0 rather than v1.8.0, so with Docker you can pull the existing tag and retag it locally:

docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
The containerd equivalent:
crictl pull registry.aliyuncs.com/google_containers/coredns:1.8.0
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
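Either way, it is worth confirming the retagged image is visible to the container runtime before re-running the upgrade; for containerd:

# both the 1.8.0 and v1.8.0 tags should now be listed
crictl images | grep coredns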
After that, re-running the upgrade works normally.

Failed to start ContainerManager error

After the upgrade completed, kubelet failed to start on the worker nodes with the following error:
Failed to start ContainerManager failed to initialise top level QOS containers
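This error means kubelet could not (re)create the top-level kubepods QoS cgroups. Before changing any kubelet flags it helps to capture the full error and the state of those cgroups; a minimal diagnostic pass, assuming the systemd cgroup driver (slice names may differ otherwise):

# full ContainerManager error from kubelet
journalctl -u kubelet --no-pager | grep -i "failed to initialise top level QOS"
# state of the systemd slices backing the kubepods QoS containers
systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice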
2021-02-19T06:12:41.374Z - debug: initiateFileTransferFromGuest error: ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2021-02-19T06:12:41.374Z - debug: Failed to get fileTransferInfo:ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
2021-02-19T06:12:41.374Z - debug: Failed to get url of file in guest vm:ServerFaultCode: Failed to authenticate with the guest operating system using the supplied credentials.
root@vcsa [ ~ ]# chage -l root
You are required to change your password immediately (root enforced)
chage: PAM: Authentication token is no longer valid; new one required
This shows that the root password has expired; changing the password fixes it:
root@vcsa [ ~ ]# passwd
New password:
Retype new password:
passwd: password updated successfully
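If the appliance is rarely logged into, it may also be worth relaxing the password-aging policy for root so it does not expire again unnoticed; a sketch with chage (adjust to your own security policy):

# show the current aging policy for root
chage -l root
# effectively disable expiry by setting the maximum password age to 99999 days
chage -M 99999 root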
> for item in `find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"`;do openssl x509 -in $item -text -noout| grep Not;echo ======================$item===============;done
            Not Before: Dec 27 07:18:44 2019 GMT
            Not After : Dec 24 07:18:44 2029 GMT
======================/etc/kubernetes/pki/ca.crt===============
            Not Before: Dec 27 07:18:44 2019 GMT
            Not After : Dec 22 23:43:16 2021 GMT
======================/etc/kubernetes/pki/apiserver.crt===============
            Not Before: Dec 27 07:18:44 2019 GMT
            Not After : Dec 22 23:43:17 2021 GMT
======================/etc/kubernetes/pki/apiserver-kubelet-client.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 24 07:18:45 2029 GMT
======================/etc/kubernetes/pki/front-proxy-ca.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 22 23:43:17 2021 GMT
======================/etc/kubernetes/pki/front-proxy-client.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 24 07:18:45 2029 GMT
======================/etc/kubernetes/pki/etcd/ca.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 22 23:42:44 2021 GMT
======================/etc/kubernetes/pki/etcd/server.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 22 23:42:44 2021 GMT
======================/etc/kubernetes/pki/etcd/peer.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 22 23:42:45 2021 GMT
======================/etc/kubernetes/pki/etcd/healthcheck-client.crt===============
            Not Before: Dec 27 07:18:45 2019 GMT
            Not After : Dec 22 23:43:17 2021 GMT
======================/etc/kubernetes/pki/apiserver-etcd-client.crt===============
The output confirms that the certificates have indeed expired, so renewing them is enough:
> kubeadm alpha certs renew all
Command "all" is deprecated, please use the same command under "kubeadm certs"
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
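kubeadm only rewrites the files; nothing is restarted automatically. On a kubeadm cluster the control-plane components run as static pods, so one way to bounce them is to move the manifests out of the static-pod directory and back; admin.conf was also re-issued, so the local kubeconfig should be refreshed too. A sketch, assuming the default kubeadm paths:

mkdir -p /tmp/k8s-manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests-backup/
sleep 20    # give kubelet time to tear the static pods down
mv /tmp/k8s-manifests-backup/*.yaml /etc/kubernetes/manifests/
# pick up the renewed admin client certificate for kubectl
cp /etc/kubernetes/admin.conf ~/.kube/config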
> kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 28, 2021 00:06 UTC   364d                                    no
apiserver                  Dec 28, 2021 00:06 UTC   364d            ca                      no
apiserver-etcd-client      Dec 28, 2021 00:06 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 28, 2021 00:06 UTC   364d            ca                      no
controller-manager.conf    Dec 28, 2021 00:06 UTC   364d                                    no
etcd-healthcheck-client    Dec 28, 2021 00:06 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 28, 2021 00:06 UTC   364d            etcd-ca                 no
etcd-server                Dec 28, 2021 00:06 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 28, 2021 00:06 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 28, 2021 00:06 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 24, 2029 07:18 UTC   8y              no
etcd-ca                 Dec 24, 2029 07:18 UTC   8y              no
front-proxy-ca          Dec 24, 2029 07:18 UTC   8y              no