Tuesday, January 28, 2020

Install and Configure Rancher as a Deployment and Service in Kubernetes

We will be installing Rancher as a Kubernetes Deployment and Service so that we don't have to run it as a separate Docker container. This also lets us manage Rancher as part of the Kubernetes system itself. Rancher can install and configure a whole Kubernetes cluster on its own, but in our case we will deploy it to an already created Kubernetes cluster.

Create the Deployment and Service manifests for Rancher.
root@kub-master:~# mkdir rancher
root@kub-master:~# vi rancher/rancher_deployment.yaml

###Content of rancher_deployment.yaml###
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher
  labels:
    app: rancher
  namespace: rancher
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
    spec:
      containers:
      - image: rancher/rancher   # consider pinning a tag, e.g. rancher/rancher:v2.3.5
        name: rancher
        ports:
        - containerPort: 443

root@kub-master:~# vi rancher/rancher_service.yaml

###Content of rancher_service.yaml###
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rancher
  name: rancher
  namespace: rancher
spec:
  ports:
  - nodePort: 30500
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: rancher
  type: NodePort

Create a namespace called rancher, in which we will deploy Rancher. Both manifests reference this namespace, so it must exist before they are applied.
root@kub-master:~# kubectl create ns rancher
namespace/rancher created
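Equivalently, the namespace can be declared as a manifest of its own (a small sketch; the file name is arbitrary):

```yaml
# rancher_namespace.yaml - declarative equivalent of `kubectl create ns rancher`
apiVersion: v1
kind: Namespace
metadata:
  name: rancher
```

Keeping the namespace in YAML makes the whole setup reproducible from files, though here we create it imperatively for brevity.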

Apply the YAML files to create the Rancher application.
root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created

Check whether Rancher has been deployed successfully.
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS             RESTARTS   AGE
pod/rancher-ccdcb4fd8-nxvrs   0/1     CrashLoopBackOff   9          23m


NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.102.40.105   <none>        443:30500/TCP   23m


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           23m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0       23m

As you can see, Rancher has not been deployed successfully: the pod is in CrashLoopBackOff. To find the cause, check the pod's logs.
root@kub-master:~/rancher# kubectl logs pod/rancher-ccdcb4fd8-nxvrs -n rancher
2020/01/28 09:18:17 [INFO] Rancher version v2.3.5 is starting
2020/01/28 09:18:17 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2020/01/28 09:18:17 [INFO] Listening on /tmp/log.sock
2020/01/28 09:18:17 [INFO] Running in single server mode, will not peer connections
panic: creating CRD store customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:rancher:default" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

goroutine 84 [running]:
github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0xc0006b2b40, 0xc0007c3000, 0x3b, 0x3b, 0xc000f52820, 0x62e2660, 0x40bfa00, 0xc001263280, 0x3938b12, 0x4)
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x2db
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:50 +0xce
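You can confirm the missing permission yourself with kubectl auth can-i, impersonating the service account the pod runs as (this requires access to the live cluster, so it is shown here as a sketch):

```shell
# Ask the API server whether the rancher namespace's default service
# account may list CRDs at cluster scope. Given the panic above,
# the expected answer is "no".
kubectl auth can-i list customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:rancher:default
```

This is a handy way to debug any RBAC failure: take the verb, resource, and user straight from the "forbidden" message and replay them with auth can-i.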

So from the error it's clear that the pod's service account doesn't have the permissions it needs: Rancher must create CustomResourceDefinitions at cluster scope. For now, we will grant it admin privileges so it can start. In a production environment we shouldn't do this; grant only the permissions that are actually required.
root@kub-master:~/rancher# kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --user=kube-system --user=default --user=rancher --group=system:serviceaccounts
clusterrolebinding.rbac.authorization.k8s.io/permissive-binding created
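The command above grants cluster-admin to every service account in the cluster, which is far broader than needed. A somewhat narrower alternative (still cluster-admin, but bound only to the single service account the Rancher pod runs under; the binding name is our own choice) might look like this:

```yaml
# Bind cluster-admin to only the rancher namespace's default service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rancher-admin-binding
subjects:
- kind: ServiceAccount
  name: default          # the service account the Rancher pod runs as
  namespace: rancher
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

A properly hardened setup would go further and define a custom ClusterRole listing only the resources Rancher touches, but that is beyond the scope of this walkthrough.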

Now let's delete and recreate the deployment and service.
root@kub-master:~/rancher# kubectl delete -f .
deployment.apps "rancher" deleted
service "rancher" deleted

root@kub-master:~/rancher# kubectl get all -n rancher
No resources found.

root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS              RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   0/1     ContainerCreating   0          4s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   4s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           4s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0       4s

After a few seconds the pod comes up. Check again:
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS    RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   1/1     Running   0          11s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   11s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   1/1     1            1           11s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         1       11s

Let's access Rancher on port 30500 using any node's IP. Since we exposed it with a NodePort Service, it is available on port 30500 on every node.
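A quick reachability check from the command line could look like the following (the IP below is a placeholder; substitute one of your own node addresses, and note both commands need the live cluster):

```shell
# Find your nodes' addresses (INTERNAL-IP column).
kubectl get nodes -o wide
# Probe the NodePort; -k skips TLS verification because Rancher
# serves a self-signed certificate out of the box.
curl -k https://192.168.56.101:30500/
```

Once that responds, open https://<any-node-ip>:30500 in a browser to finish the Rancher setup wizard.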