```bash
# replace gcr.io/google-containers/federation-controller-manager-arm64:v1.3.1-beta.1 with the real image
# this will convert gcr.io/google-containers/federation-controller-manager-arm64:v1.3.1-beta.1
# to anjia0532/google-containers.federation-controller-manager-arm64:v1.3.1-beta.1 and pull it
# k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> anjia0532/google-containers.{image}:{tag}
```
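As a concrete example, a minimal pull loop following this naming convention might look like the sketch below. The image list and version tags are illustrative, not taken from this cluster; use the output of `kubeadm config images list` to get the real set for your kubeadm version.

```bash
# Illustrative image list for a v1.11-era cluster; adjust it to what
# `kubeadm config images list` reports on your machine.
images=(
  kube-apiserver-amd64:v1.11.0
  kube-controller-manager-amd64:v1.11.0
  kube-scheduler-amd64:v1.11.0
  kube-proxy-amd64:v1.11.0
  pause:3.1
  etcd-amd64:3.2.18
  coredns:1.1.3
)
for img in "${images[@]}"; do
  # gcr.io/google-containers/{image}:{tag} is mirrored as
  # anjia0532/google-containers.{image}:{tag} on Docker Hub
  docker pull "anjia0532/google-containers.${img}"
done
```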
```bash
# this code will retag all of anjia0532's images found locally,
# e.g. anjia0532/google-containers.federation-controller-manager-arm64:v1.3.1-beta.1
# back to gcr.io/google-containers/federation-controller-manager-arm64:v1.3.1-beta.1
# k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> anjia0532/google-containers.{image}:{tag}
for img in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep "anjia0532"); do
  # anjia0532/google-containers.foo:tag -> "gcr.io/google-containers"
  n=$(echo ${img} | awk -F'[/.:]' '{printf "gcr.io/%s", $2}')
  # -> "/foo"
  image=$(echo ${img} | awk -F'[/.:]' '{printf "/%s", $3}')
  # -> ":tag"
  tag=$(echo ${img} | awk -F'[:]' '{printf ":%s", $2}')
  docker tag ${img} "${n}${image}${tag}"
  # google-containers images are also reachable under the k8s.gcr.io alias
  [[ ${n} == "gcr.io/google-containers" ]] && docker tag ${img} "k8s.gcr.io${image}${tag}"
done
```
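To see what the `awk` field splitting does, here is the split for one sample mirrored image (the name is chosen only for illustration): splitting on `/`, `.`, and `:` makes `$2` the namespace and `$3` the image name.

```bash
$ echo anjia0532/google-containers.etcd-amd64:3.2.18 | awk -F'[/.:]' '{print $2, $3}'
google-containers etcd-amd64
```

Note that this split assumes the image name itself contains no dots, which holds for the google-containers images used here.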
```
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0927 23:50:58.769986 27134 kernel_validator.go:81] Validating kernel version
I0927 23:50:58.770090 27134 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [leosocy-ecs1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 101.xx.xx.124]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [leosocy-ecs1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [leosocy-ecs1 localhost] and IPs [101.xx.xx.124 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 45.005438 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node leosocy-ecs1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node leosocy-ecs1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "leosocy-ecs1" as an annotation
[bootstraptoken] using token: 3yl852.cgnsfybbkj01qtbp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:
```
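For the pod network step, flannel is one of the addon options listed at that link; applying it typically looks like the command below (the manifest URL reflects flannel's repository layout at the time and may have moved since).

```bash
# Deploy flannel as the pod network addon (one of several options).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```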
There is a big pitfall here. I was using an Alibaba Cloud ECS instance and had not configured an inbound security group rule, so port 6443 was unreachable and the init kept hanging at the `[init] this might take a minute or longer if the control plane images have to be pulled` stage. The fix is to go to the Alibaba Cloud console and add a security group rule that opens port 6443 on the ECS instance.
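Before blaming slow image pulls, it is worth probing the port directly. The quick check below assumes `nc` (netcat) is installed on some other machine; `101.xx.xx.124` is the master's masked public IP from the log above. A timeout means traffic is being dropped, not that kubeadm is slow.

```bash
# Probe the apiserver port from outside the ECS instance.
# "Connection timed out" points to a firewall/security-group rule;
# "open"/"succeeded" means the port is reachable.
nc -zv 101.xx.xx.124 6443
```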