Monday, January 27, 2020

Create a single master Kubernetes setup in Ubuntu

We will be creating a single-master Kubernetes setup on Ubuntu with multiple slave nodes attached to it. A single server acts as the Kubernetes master, and slave nodes can be added to it as our needs grow.

We will use the following servers to set up the cluster.
Node info:

Node Name     IP              Purpose
kub-master    172.18.77.13    k8s master / etcd node
kub-slave     172.18.77.11    Slave node

Set the hostname on each of the servers in the following way:
root@iZt4n46l2ljsxndagomkp0Z:~# hostnamectl set-hostname kub-master
root@iZt4n46l2ljsxndagomkp0Z:~# bash
root@kub-master:~#

root@iZt4n46l2ljsxndagomkp1Z:~# hostnamectl set-hostname kub-slave
root@iZt4n46l2ljsxndagomkp1Z:~# bash
root@kub-slave:~#
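
Optionally, if the nodes cannot resolve each other by name, you can add host entries on each node. This is a minimal sketch assuming the IPs from the table above:
root@kub-master:~# cat <<EOF >> /etc/hosts
172.18.77.13 kub-master
172.18.77.11 kub-slave
EOF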

Turn off swap on all the nodes.
root@kub-master:~# sudo swapoff -a
root@kub-master:~# sudo sed -i 's/^.*swap/#&/' /etc/fstab
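
To confirm that swap is now disabled, the following command should print no output:
root@kub-master:~# swapon --show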

Now run the following commands on all the nodes to install the required packages.
root@kub-master:~# apt-get update && apt-get install -y curl apt-transport-https

Install the latest version of Docker on each of the nodes.
root@kub-master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
root@kub-master:~# cat <<EOF >/etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/$(lsb_release -si | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable
EOF
root@kub-master:~# apt-get update && apt-get install -y docker-ce
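
Optionally, because kubeadm recommends the systemd cgroup driver (you will otherwise see a warning about this during kubeadm init), you can point Docker at it. This is a sketch assuming there is no existing /etc/docker/daemon.json:
root@kub-master:~# cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
root@kub-master:~# systemctl restart docker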

Configure sysctl so that bridged traffic is passed to iptables, and set the default FORWARD policy to ACCEPT, on all the nodes.
root@kub-master:~# cat <<__EOF__ >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
__EOF__
root@kub-master:~# sysctl --system
root@kub-master:~# sysctl -p /etc/sysctl.d/k8s.conf
root@kub-master:~# iptables -P FORWARD ACCEPT
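
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; you can load it and re-apply the settings:
root@kub-master:~# modprobe br_netfilter
root@kub-master:~# sysctl -p /etc/sysctl.d/k8s.conf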

Install kubeadm, kubelet and kubectl.
root@kub-master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kub-master:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kub-master:~# apt-get update
root@kub-master:~# apt-get install -y kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
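
It is also worth holding these packages at the installed version so that a routine apt-get upgrade does not move the cluster components unexpectedly:
root@kub-master:~# apt-mark hold kubelet kubeadm kubectl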

Run the following command to initialize the cluster. This should be run only on the master server.
root@kub-master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
I0127 12:48:37.761157   13545 version.go:248] remote version is much newer: v1.17.2; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.9
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.77.13]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz localhost] and IPs [172.18.77.13 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz localhost] and IPs [172.18.77.13 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.001523 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node izt4nicu8fd63j4cm5tj1uz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node izt4nicu8fd63j4cm5tj1uz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zyijtb.ee2b9nyarusddaa1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.77.13:6443 --token zyijtb.ee2b9nyarusddaa1 \
    --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6

Run the following commands to set up authentication for the user who will control the Kubernetes cluster.
root@kub-master:~# mkdir -p $HOME/.kube
root@kub-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kub-master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
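
You can verify that kubectl is able to talk to the API server (the master will show as NotReady until the pod network is deployed in the next step):
root@kub-master:~# kubectl cluster-info
root@kub-master:~# kubectl get nodes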

Deploy a pod network to the cluster so that pods can communicate with each other. In our case, we are deploying flannel, whose default network matches the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init.
root@kub-master:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check if all the pods are up and running successfully.
root@kub-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-k77mn                          0/1     Running   0          18m
kube-system   coredns-5c98db65d4-mv7p9                          0/1     Running   0          18m
kube-system   etcd-izt4nicu8fd63j4cm5tj1uz                      1/1     Running   0          17m
kube-system   kube-apiserver-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          17m
kube-system   kube-controller-manager-izt4nicu8fd63j4cm5tj1uz   1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-gfct6                       1/1     Running   0          21s
kube-system   kube-proxy-jgql9                                  1/1     Running   0          18m
kube-system   kube-scheduler-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          17m

Add the slave to the cluster by running the following command on the slave node. The command can be copied from the output of the kubeadm init command run earlier on the master (if the token has since expired, see the note after the join output on generating a new one).
root@kub-slave:~# kubeadm join 172.18.77.13:6443 --token zyijtb.ee2b9nyarusddaa1 \
>     --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
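
If the bootstrap token has expired by the time you add a node (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
root@kub-master:~# kubeadm token create --print-join-command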

Check on the master node to see if all nodes are added successfully.
root@kub-master:~# kubectl get nodes
NAME                      STATUS   ROLES    AGE    VERSION
kub-master                Ready    master   26m    v1.15.4
kub-slave                 Ready    <none>   108s   v1.15.4
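
As a quick smoke test, you can create a throwaway deployment and confirm that its pod is scheduled on the slave node (this uses the public nginx image as an example):
root@kub-master:~# kubectl create deployment nginx --image=nginx
root@kub-master:~# kubectl get pods -o wide
root@kub-master:~# kubectl delete deployment nginx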
