Wednesday, January 29, 2020

How to upgrade Helm.

There are cases when you need to upgrade Helm to work with newer charts, and it's generally good practice to keep packages up to date.

Check the current version.
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Download the latest version, or whichever version you want to install.
root@kub-master:~# wget https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
--2020-01-28 19:28:27--  https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25262108 (24M) [application/x-tar]
Saving to: ‘helm-v2.16.1-linux-amd64.tar.gz’

helm-v2.16.1-linux-amd64.tar.gz                    100%[================================================================================================================>]  24.09M  --.-KB/s    in 0.1s

2020-01-28 19:28:27 (239 MB/s) - ‘helm-v2.16.1-linux-amd64.tar.gz’ saved [25262108/25262108]

Extract the package and move the binary to a directory on your executable path.
root@kub-master:~# tar -xvzf helm-v2.16.1-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/tiller
linux-amd64/README.md
root@kub-master:~# mv linux-amd64/helm /usr/local/bin/
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

As you can see, the client and server versions now differ, so we have to reinitialize Helm to upgrade Tiller to the new version.
root@kub-master:~# helm init --service-account tiller --tiller-namespace kube-system --upgrade
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been updated to gcr.io/kubernetes-helm/tiller:v2.16.1 .
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
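
After upgrading, it is also worth refreshing the local chart repository cache so that newer chart versions become visible. This is optional and assumes the default stable repo added by helm init is still configured:

# Refresh the index of every configured chart repository
helm repo update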

Install and configure Spinnaker on Kubernetes

We will be installing and configuring Spinnaker in our Kubernetes cluster to streamline the CI/CD process. Spinnaker was created by Netflix to handle their CI/CD needs and was later open-sourced. It is a great tool with built-in integrations for the major cloud providers; here, though, we will be installing it on a bare-metal Kubernetes cluster.

Download the default values.yaml for the stable Spinnaker chart from the Helm charts GitHub repo and adjust it to your needs.
root@kub-master:~# wget https://raw.githubusercontent.com/helm/charts/master/stable/spinnaker/values.yaml

I have disabled all the persistent storage entries in values.yaml since this is not a production deployment; for production use, persistent storage is highly recommended. My values.yaml looks like this after disabling persistence.
root@kub-master:~# cat values.yaml
halyard:
  spinnakerVersion: 1.16.1
  image:
    repository: gcr.io/spinnaker-marketplace/halyard
    tag: 1.23.2
    pullSecrets: []
  # Set to false to disable persistence data volume for halyard
  persistence:
    enabled: false
  # Provide additional parameters to halyard deploy apply command
  additionalInstallParameters: []
  # Provide a config map with Hal commands that will be run the core config (storage)
  # The config map should contain a script in the config.sh key
  additionalScripts:
    enabled: false
    configMapName: my-halyard-config
    configMapKey: config.sh
    # If you'd rather do an inline script, set create to true and put the content in the data dict like you would a configmap
    # The content will be passed through `tpl`, so value interpolation is supported.
    create: false
    data: {}
  additionalSecrets:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created secret rather than feeding data in via helm.
    # name:
  additionalConfigMaps:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created ConfigMap rather than feeding data in via helm.
    # name:
  ## Define custom profiles for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-profiles
  ## The contents of the files will be passed through `tpl`, so value interpolation is supported.
  additionalProfileConfigMaps:
    data: {}
      ## if you're running spinnaker behind a reverse proxy such as a GCE ingress
      ## you may need the following profile settings for the gate profile.
      ## see https://github.com/spinnaker/spinnaker/issues/1630
      ## otherwise its harmless and will likely become default behavior in the future
      ## According to the linked github issue.
      # gate-local.yml:
      #   server:
      #     tomcat:
      #       protocolHeader: X-Forwarded-Proto
      #       remoteIpHeader: X-Forwarded-For
      #       internalProxies: .*
      #       httpsServerPort: X-Forwarded-Port

  ## Define custom settings for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-service-settings
  ## You can use it to add annotations for pods, override the image, etc.
  additionalServiceSettings: {}
    # deck.yml:
    #   artifactId: gcr.io/spinnaker-marketplace/deck:2.9.0-20190412012808
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>
    # clouddriver.yml:
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>

  ## Populate to provide a custom local BOM for Halyard to use for deployment. Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  bom: ~
  #   artifactSources:
  #     debianRepository: https://dl.bintray.com/spinnaker-releases/debians
  #     dockerRegistry: gcr.io/spinnaker-marketplace
  #     gitPrefix: https://github.com/spinnaker
  #     googleImageProject: marketplace-spinnaker-release
  #   services:
  #     clouddriver:
  #       commit: 031bcec52d6c3eb447095df4251b9d7516ed74f5
  #       version: 6.3.0-20190904130744
  #     deck:
  #       commit: b0aac478e13a7f9642d4d39479f649dd2ef52a5a
  #       version: 2.12.0-20190916141821
  #     ...
  #   timestamp: '2019-09-16 18:18:44'
  #   version: 1.16.1

  ## Define local configuration for Spinnaker services.
  ## The contents of these files would be copies of the configuration normally retrieved from
  ## `gs://halconfig/<service-name>`, but instead need to be available locally on the halyard pod to facilitate
  ## offline installation. This would typically be used along with a custom `bom:` with the `local:` prefix on a
  ## service version.
  ## Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  ## The key for each entry must be the name of the service and a file name separated by the '_' character.
  serviceConfigs: {}
  # clouddriver_clouddriver-ro.yml: |-
  #   ...
  # clouddriver_clouddriver-rw.yml: |-
  #   ...
  # clouddriver_clouddriver.yml: |-
  #   ...
  # deck_settings.json: |-
  #   ...
  # echo_echo.yml: |-
  #   ...

  ## Uncomment if you want to add extra commands to the init script
  ## run by the init container before halyard is started.
  ## The content will be passed through `tpl`, so value interpolation is supported.
  # additionalInitScript: |-

  ## Uncomment if you want to add annotations on halyard and install-using-hal pods
  # annotations:
  #   iam.amazonaws.com/role: <role_arn>

  ## Uncomment the following resources definitions to control the cpu and memory
  # resources allocated for the halyard pod
  resources: {}
    # requests:
    #   memory: "1Gi"
    #   cpu: "100m"
    # limits:
    #   memory: "2Gi"
    #   cpu: "200m"

  ## Uncomment if you want to set environment variables on the Halyard pod.
  # env:
  #   - name: JAVA_OPTS
  #     value: -Dhttp.proxyHost=proxy.example.com
  customCerts:
    ## Enable to override the default cacerts with your own one
    enabled: false
    secretName: custom-cacerts

# Define which registries and repositories you want available in your
# Spinnaker pipeline definitions
# For more info visit:
#   https://www.spinnaker.io/setup/providers/docker-registry/

# Configure your Docker registries here
dockerRegistries:
- name: dockerhub
  address: index.docker.io
  repositories:
    - library/alpine
    - library/ubuntu
    - library/centos
    - library/nginx
# - name: gcr
#   address: https://gcr.io
#   username: _json_key
#   password: '<INSERT YOUR SERVICE ACCOUNT JSON HERE>'
#   email: 1234@5678.com
# - name: ecr
#   address: <AWS-ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
#   username: AWS
#   passwordCommand: "aws --region <REGION> ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"

# If you don't want to put your passwords into a values file
# you can use a pre-created secret instead of putting passwords
# (specify secret name in below `dockerRegistryAccountSecret`)
# per account above with data in the format:
# <name>: <password>

# dockerRegistryAccountSecret: myregistry-secrets

kubeConfig:
  # Use this when you want to register arbitrary clusters with Spinnaker
  # Upload your ~/kube/.config to a secret
  enabled: false
  secretName: my-kubeconfig
  secretKey: config
  # Use this when you want to configure halyard to reference a kubeconfig from s3
  # This allows you to keep your kubeconfig in an encrypted s3 bucket
  # For more info visit:
  #   https://www.spinnaker.io/reference/halyard/secrets/s3-secrets/#secrets-in-s3
  # encryptedKubeconfig: encrypted:s3!r:us-west-2!b:mybucket!f:mykubeconfig
  # List of contexts from the kubeconfig to make available to Spinnaker
  contexts:
  - default
  deploymentContext: default
  omittedNameSpaces:
  - kube-system
  - kube-public
  onlySpinnakerManaged:
    enabled: false

  # When false, clouddriver will skip the permission checks for all kubernetes kinds at startup.
  # This can save a great deal of time during clouddriver startup when you have many kubernetes
  # accounts configured. This disables the log messages at startup about missing permissions.
  checkPermissionsOnStartup: true

  # A list of resource kinds this Spinnaker account can deploy to and will cache.
  # When no kinds are configured, this defaults to 'all kinds'.
  # kinds:
  # -

  # A list of resource kinds this Spinnaker account cannot deploy to or cache.
  # This can only be set when --kinds is empty or not set.
  # omittedKinds:
  # -

# Change this if you'd like to expose Spinnaker outside the cluster
ingress:
  enabled: false
  # host: spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

ingressGate:
  enabled: false
  # host: gate.spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

# spinnakerFeatureFlags is a list of Spinnaker feature flags to enable
# Ref: https://www.spinnaker.io/reference/halyard/commands/#hal-config-features-edit
# spinnakerFeatureFlags:
#   - artifacts
#   - pipeline-templates
spinnakerFeatureFlags:
  - artifacts

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# nodeSelector to provide to each of the Spinnaker components
nodeSelector: {}

# Redis password to use for the in-cluster redis service
# Enable redis to use in-cluster redis
redis:
  enabled: true
  # External Redis option will be enabled if in-cluster redis is disabled
  external:
    host: "<EXTERNAL-REDIS-HOST-NAME>"
    port: 6379
    # password: ""
  password: password
  nodeSelector: {}
  cluster:
    enabled: false
# Uncomment if you don't want to create a PVC for redis
  master:
    persistence:
      enabled: false

# Minio access/secret keys for the in-cluster S3 usage
# Minio is not exposed publicly
minio:
  enabled: true
  imageTag: RELEASE.2019-02-13T19-48-27Z
  serviceType: ClusterIP
  accessKey: spinnakeradmin
  secretKey: spinnakeradmin
  bucket: "spinnaker"
  nodeSelector: {}
# Uncomment if you don't want to create a PVC for minio
  persistence:
    enabled: false

# Google Cloud Storage
gcs:
  enabled: false
  project: my-project-name
  bucket: "<GCS-BUCKET-NAME>"
  ## if jsonKey is set, will create a secret containing it
  jsonKey: '<INSERT CLOUD STORAGE JSON HERE>'
  ## override the name of the secret to use for jsonKey, if `jsonKey`
  ## is empty, it will not create a secret assuming you are creating one
  ## external to the chart. the key for that secret should be `key.json`.
  secretName:

# AWS Simple Storage Service
s3:
  enabled: false
  bucket: "<S3-BUCKET-NAME>"
  # rootFolder: "front50"
  # region: "us-east-1"
  # endpoint: ""
  # accessKey: ""
  # secretKey: ""
  # assumeRole: "<role to assume>"
  ## Here you can pass extra arguments to configure s3 storage options
  extraArgs: []
  #  - "--path-style-access true"

# Azure Storage Account
azs:
  enabled: false
#   storageAccountName: ""
#   accessKey: ""
#   containerName: "spinnaker"

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccounts to use.
  # If left blank it is auto-generated from the fullname of the release
  halyardName:
  spinnakerName:
securityContext:
  # Specifies permissions to write for user/group
  runAsUser: 1000
  fsGroup: 1000
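
Before the real install, you can sanity-check the edited values with a Helm dry run. This is only a sketch: it renders the manifests with our values.yaml but deploys nothing.

# Render the chart against our values without installing anything
helm install --name my-spinnaker -f values.yaml stable/spinnaker --namespace spinnaker --dry-run --debug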

We will use Helm to install our Spinnaker setup, taking the values from values.yaml.
root@kub-master:~# helm install --name my-spinnaker -f values.yaml stable/spinnaker --namespace spinnaker
NAME:   my-spinnaker
LAST DEPLOYED: Tue Jan 28 19:35:17 2020
NAMESPACE: spinnaker
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRoleBinding
NAME                              AGE
my-spinnaker-spinnaker-spinnaker  3m9s

==> v1/ConfigMap
NAME                                                   AGE
my-spinnaker-minio                                     3m9s
my-spinnaker-spinnaker-additional-profile-config-maps  3m9s
my-spinnaker-spinnaker-halyard-config                  3m9s
my-spinnaker-spinnaker-halyard-init-script             3m9s
my-spinnaker-spinnaker-service-settings                3m9s

==> v1/Pod(related)
NAME                                 AGE
my-spinnaker-minio-668c59b766-bhw6p  3m9s
my-spinnaker-redis-master-0          3m9s
my-spinnaker-spinnaker-halyard-0     3m9s

==> v1/RoleBinding
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/Secret
NAME                             AGE
my-spinnaker-minio               3m9s
my-spinnaker-redis               3m9s
my-spinnaker-spinnaker-registry  3m9s

==> v1/Service
NAME                            AGE
my-spinnaker-minio              3m9s
my-spinnaker-redis-master       3m9s
my-spinnaker-spinnaker-halyard  3m9s

==> v1/ServiceAccount
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/StatefulSet
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1beta2/Deployment
NAME                AGE
my-spinnaker-minio  3m9s

==> v1beta2/StatefulSet
NAME                       AGE
my-spinnaker-redis-master  3m9s


NOTES:
1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $DECK_POD 9000

  export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $GATE_POD 8084

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

Our Spinnaker setup has been installed successfully. Now let's run the commands suggested in the chart notes to create the port-forwarding tunnels.
root@kub-master:~# export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
root@kub-master:~# kubectl port-forward --namespace spinnaker $DECK_POD 9000 &
[1] 24013
root@kub-master:~# Forwarding from 127.0.0.1:9000 -> 9000

root@kub-master:~# export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")

root@kub-master:~# kubectl port-forward --namespace spinnaker $GATE_POD 8084 &
[2] 24371
root@kub-master:~# Forwarding from 127.0.0.1:8084 -> 8084
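
Note that these tunnels listen only on 127.0.0.1 of the master. If you need to reach them from another machine, kubectl port-forward also accepts an --address flag; the commands below are an untested variant of the ones above, and the ports must be open in your firewall.

kubectl port-forward --namespace spinnaker --address 0.0.0.0 $DECK_POD 9000 &
kubectl port-forward --namespace spinnaker --address 0.0.0.0 $GATE_POD 8084 &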

Let's check our Spinnaker deployment.
root@kub-master:~# kubectl get all -n spinnaker
NAME                                       READY   STATUS      RESTARTS   AGE
pod/my-spinnaker-install-using-hal-nk854   0/1     Completed   0          5m22s
pod/my-spinnaker-minio-668c59b766-bhw6p    1/1     Running     0          5m24s
pod/my-spinnaker-redis-master-0            1/1     Running     0          5m24s
pod/my-spinnaker-spinnaker-halyard-0       1/1     Running     0          5m24s
pod/spin-clouddriver-67474d5679-gcd52      1/1     Running     0          2m23s
pod/spin-deck-565f65c6b5-lfbx9             1/1     Running     0          2m25s
pod/spin-echo-98f95c9f5-m9kz8              0/1     Running     0          2m27s
pod/spin-front50-5b774b446b-mps7m          0/1     Running     0          2m21s
pod/spin-gate-78f5787989-5ns7x             1/1     Running     0          2m25s
pod/spin-igor-75f89f6d58-5k2fc             1/1     Running     0          2m24s
pod/spin-orca-5bd445469-sr8n9              1/1     Running     0          2m21s
pod/spin-rosco-c757f4488-lq6x9             1/1     Running     0          2m20s


NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP   5m24s
service/my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP   5m24s
service/my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP   5m24s
service/spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP   2m31s
service/spin-deck                        ClusterIP   10.105.31.148    <none>        9000/TCP   2m34s
service/spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP   2m34s
service/spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP   2m33s
service/spin-gate                        ClusterIP   10.109.86.157    <none>        8084/TCP   2m33s
service/spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP   2m32s
service/spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP   2m32s
service/spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP   2m33s


NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-spinnaker-minio   1/1     1            1           5m24s
deployment.apps/spin-clouddriver     1/1     1            1           2m23s
deployment.apps/spin-deck            1/1     1            1           2m25s
deployment.apps/spin-echo            0/1     1            0           2m27s
deployment.apps/spin-front50         0/1     1            0           2m21s
deployment.apps/spin-gate            1/1     1            1           2m25s
deployment.apps/spin-igor            1/1     1            1           2m24s
deployment.apps/spin-orca            1/1     1            1           2m21s
deployment.apps/spin-rosco           1/1     1            1           2m20s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/my-spinnaker-minio-668c59b766   1         1         1       5m24s
replicaset.apps/spin-clouddriver-67474d5679     1         1         1       2m23s
replicaset.apps/spin-deck-565f65c6b5            1         1         1       2m25s
replicaset.apps/spin-echo-98f95c9f5             1         1         0       2m27s
replicaset.apps/spin-front50-5b774b446b         1         1         0       2m21s
replicaset.apps/spin-gate-78f5787989            1         1         1       2m25s
replicaset.apps/spin-igor-75f89f6d58            1         1         1       2m24s
replicaset.apps/spin-orca-5bd445469             1         1         1       2m21s
replicaset.apps/spin-rosco-c757f4488            1         1         1       2m20s

NAME                                              READY   AGE
statefulset.apps/my-spinnaker-redis-master        1/1     5m24s
statefulset.apps/my-spinnaker-spinnaker-halyard   1/1     5m24s


NAME                                       COMPLETIONS   DURATION   AGE
job.batch/my-spinnaker-install-using-hal   1/1           3m7s       5m22s

Since I will be using Spinnaker via NodePort, I will edit the following services to change their type from ClusterIP to NodePort.
root@kub-master:~# kubectl edit service/spin-deck -n spinnaker
service/spin-deck edited
root@kub-master:~# kubectl edit service/spin-gate -n spinnaker
service/spin-gate edited
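
If you prefer a non-interactive change over kubectl edit, kubectl patch can switch the service type directly. This is a sketch with the same effect as the edits above:

kubectl patch service spin-deck -n spinnaker -p '{"spec": {"type": "NodePort"}}'
kubectl patch service spin-gate -n spinnaker -p '{"spec": {"type": "NodePort"}}'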
root@kub-master:~# kubectl get services -n spinnaker
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP         16h
my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP         16h
my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP         16h
spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP         16h
spin-deck                        NodePort    10.105.31.148    <none>        9000:30306/TCP   16h
spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP         16h
spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP         16h
spin-gate                        NodePort    10.109.86.157    <none>        8084:30825/TCP   16h
spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP         16h
spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP         16h
spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP         16h

Our spin-deck and spin-gate services now run as NodePort and can be reached on their corresponding node ports. In my case, I will access the UI via any node IP on port 30306.
[Image: Spinnaker UI on Kubernetes]

To customize Spinnaker or make further changes, open a shell in the Halyard pod with the following command.
root@kub-master:~# kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash
spinnaker@my-spinnaker-spinnaker-halyard-0:/workdir$
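
From inside the Halyard pod you use the hal CLI for further configuration. For example (illustrative only, not something run in this walkthrough), you could pin a different Spinnaker version and push the change:

hal version list                          # list available Spinnaker versions
hal config version edit --version 1.16.1  # pin the version Halyard should deploy
hal deploy apply                          # apply the updated configuration to the cluster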

Tuesday, January 28, 2020

Install and Configure Rancher as deployment and service in Kubernetes

We will be installing Rancher as a Kubernetes Deployment and Service so that we don't have to run it as a separate Docker container; this also lets us manage Rancher as part of the Kubernetes system itself. Rancher can install and configure a whole Kubernetes cluster on its own, but in our case we will be deploying it into an already created cluster.

Create the deployment and service for Rancher.
root@kub-master:~# mkdir rancher
root@kub-master:~# vi rancher/rancher_deployment.yaml

###Content of rancher_deployment.yaml###
apiVersion: apps/v1
kind: Deployment
metadata:
 name: rancher
 labels:
   app: rancher
 namespace: rancher
 annotations:
   monitoring: "true"
spec:
 replicas: 1
 selector:
   matchLabels:
     app: rancher
 template:
   metadata:
     labels:
       app: rancher
   spec:
     containers:
     - image: rancher/rancher
       name: rancher
       ports:
       - containerPort: 443

root@kub-master:~# vi rancher/rancher_service.yaml

###Content of rancher_service.yaml###
apiVersion: v1
kind: Service
metadata:
 labels:
   app: rancher
 name: rancher
 namespace: rancher
spec:
 ports:
 - nodePort: 30500
   port: 443
   protocol: TCP
   targetPort: 443
 selector:
   app: rancher
 type: NodePort
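
Before applying anything, the manifests can be validated with a client-side dry run. This is optional and assumes kubectl 1.15-style flags; nothing is created on the cluster.

# Print the objects that would be submitted, without creating them
kubectl create -f rancher/ --dry-run -o yaml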

Create a namespace called rancher in which we will deploy Rancher.
root@kub-master:~# kubectl create ns rancher
namespace/rancher created

Apply the YAML files to create the Rancher application.
root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created

Check whether Rancher has been deployed successfully.
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS             RESTARTS   AGE
pod/rancher-ccdcb4fd8-nxvrs   0/1     CrashLoopBackOff   9          23m


NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.102.40.105   <none>        443:30500/TCP   23m


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           23m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0       23m

As you can see, Rancher has not been deployed successfully and the pod status is CrashLoopBackOff. To see the underlying issue, check the pod's logs.
root@kub-master:~/rancher# kubectl logs pod/rancher-ccdcb4fd8-nxvrs -n rancher
2020/01/28 09:18:17 [INFO] Rancher version v2.3.5 is starting
2020/01/28 09:18:17 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2020/01/28 09:18:17 [INFO] Listening on /tmp/log.sock
2020/01/28 09:18:17 [INFO] Running in single server mode, will not peer connections
panic: creating CRD store customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:rancher:default" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

goroutine 84 [running]:
github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0xc0006b2b40, 0xc0007c3000, 0x3b, 0x3b, 0xc000f52820, 0x62e2660, 0x40bfa00, 0xc001263280, 0x3938b12, 0x4)
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x2db
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:50 +0xce

From the error it's clear that the pod's service account doesn't have the permissions it needs. We will grant it admin privileges for now so that the deployment can come up. In a production environment we shouldn't do this; grant only the permissions that are actually required.
root@kub-master:~/rancher# kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --user=kube-system --user=default --user=rancher --group=system:serviceaccounts
clusterrolebinding.rbac.authorization.k8s.io/permissive-binding created
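
A more scoped alternative, sketched here but not what was done above, is to create a dedicated service account for Rancher, bind cluster-admin to that account only, and reference it from the Deployment:

kubectl create serviceaccount rancher -n rancher
kubectl create clusterrolebinding rancher-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=rancher:rancher
# then add "serviceAccountName: rancher" under spec.template.spec in rancher_deployment.yaml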

Now let's delete and recreate the deployment.
root@kub-master:~/rancher# kubectl delete -f .
deployment.apps "rancher" deleted
service "rancher" deleted

root@kub-master:~/rancher# kubectl get all -n rancher
No resources found.

root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS              RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   0/1     ContainerCreating   0          4s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   4s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           4s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0          4s

root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS    RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   1/1     Running   0          11s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   11s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   1/1     1            1           11s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         1       11s

Let's access Rancher on port 30500 using any node's IP. Since we deployed it with a NodePort service, it is available on port 30500 on every node.
[Image: Rancher UI on Kubernetes]

Monday, January 27, 2020

Install and configure Istio in a Kubernetes cluster.

Istio is an open-source service mesh platform that provides a way to control how microservices share data with one another. It also works very well as an ingress controller for routing traffic into the cluster. If you are running Kubernetes in a production setup, an ingress controller is a must, and Istio does this job very well.

Download and extract the Istio package.
root@kub-master:~# wget https://github.com/istio/istio/releases/download/1.2.5/istio-1.2.5-linux.tar.gz
--2020-01-27 17:17:22--  https://github.com/istio/istio/releases/download/1.2.5/istio-1.2.5-linux.tar.gz
Resolving github.com (github.com)... 13.250.177.223
Connecting to github.com (github.com)|13.250.177.223|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/74175805/97273080-c5f8-11e9-8c14-48c4704e1ec9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200127T091723Z&X-Amz-Expires=300&X-Amz-Signature=8f66d7e0cb13d5b4d4542c22d512a0deb419f94f88476bb82a1e3ab2f88a605e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Distio-1.2.5-linux.tar.gz&response-content-type=application%2Foctet-stream [following]
--2020-01-27 17:17:23--  https://github-production-release-asset-2e65be.s3.amazonaws.com/74175805/97273080-c5f8-11e9-8c14-48c4704e1ec9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200127T091723Z&X-Amz-Expires=300&X-Amz-Signature=8f66d7e0cb13d5b4d4542c22d512a0deb419f94f88476bb82a1e3ab2f88a605e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Distio-1.2.5-linux.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.140.68
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.140.68|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32445384 (31M) [application/octet-stream]
Saving to: ‘istio-1.2.5-linux.tar.gz’

istio-1.2.5-linux.tar.gz                           100%[================================================================================================================>]  30.94M  7.07MB/s    in 4.4s

2020-01-27 17:17:28 (7.07 MB/s) - ‘istio-1.2.5-linux.tar.gz’ saved [32445384/32445384]

root@kub-master:~# tar -xvzf istio-1.2.5-linux.tar.gz

Copy the istioctl binary to a directory on the executable path.
root@kub-master:~# cp istio-1.2.5/bin/istioctl /usr/bin/
root@kub-master:~# istioctl version
1.2.5

Check the prerequisites for installing Istio.
root@kub-master:~# istioctl verify-install

Checking the cluster to make sure it is ready for Istio installation...

Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.

Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.15.9.


Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.

Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.

SideCar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#deploying-an-app

-----------------------
Install Pre-Check passed! The cluster is ready for Istio installation.

Create a namespace for Istio.
root@kub-master:~# kubectl create ns istio-system
namespace/istio-system created
root@kub-master:~# cd istio-1.2.5/

You need Helm installed in your cluster to install Istio, so make sure it is configured. If you don't have Helm yet, follow the "Install and configure Helm in Kubernetes" section later in this post to set it up. Once Helm is ready, render and apply the istio-init chart, which installs Istio's CRDs.
root@kub-master:~# helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
configmap/istio-crd-10 created
configmap/istio-crd-11 created
configmap/istio-crd-12 created
serviceaccount/istio-init-service-account created
clusterrole.rbac.authorization.k8s.io/istio-init-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-init-admin-role-binding-istio-system created
job.batch/istio-init-crd-10 created
job.batch/istio-init-crd-11 created
job.batch/istio-init-crd-12 created

Check the Istio CRDs (custom resource definitions) that have been installed.
root@kub-master:~# kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
23
root@kub-master:~/istio-1.2.5# kubectl get crds | grep 'istio.io\|certmanager.k8s.io'
adapters.config.istio.io               2020-01-27T09:59:26Z
attributemanifests.config.istio.io     2020-01-27T09:59:26Z
authorizationpolicies.rbac.istio.io    2020-01-27T09:59:27Z
clusterrbacconfigs.rbac.istio.io       2020-01-27T09:59:26Z
destinationrules.networking.istio.io   2020-01-27T09:59:26Z
envoyfilters.networking.istio.io       2020-01-27T09:59:26Z
gateways.networking.istio.io           2020-01-27T09:59:26Z
handlers.config.istio.io               2020-01-27T09:59:26Z
httpapispecbindings.config.istio.io    2020-01-27T09:59:26Z
httpapispecs.config.istio.io           2020-01-27T09:59:26Z
instances.config.istio.io              2020-01-27T09:59:26Z
meshpolicies.authentication.istio.io   2020-01-27T09:59:26Z
policies.authentication.istio.io       2020-01-27T09:59:26Z
quotaspecbindings.config.istio.io      2020-01-27T09:59:26Z
quotaspecs.config.istio.io             2020-01-27T09:59:26Z
rbacconfigs.rbac.istio.io              2020-01-27T09:59:26Z
rules.config.istio.io                  2020-01-27T09:59:26Z
serviceentries.networking.istio.io     2020-01-27T09:59:26Z
servicerolebindings.rbac.istio.io      2020-01-27T09:59:26Z
serviceroles.rbac.istio.io             2020-01-27T09:59:26Z
sidecars.networking.istio.io           2020-01-27T09:59:26Z
templates.config.istio.io              2020-01-27T09:59:26Z
virtualservices.networking.istio.io    2020-01-27T09:59:26Z

Render and apply the main Istio chart template.
root@kub-master:~/istio-1.2.5# helm template install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
configmap/istio-galley-configuration created
configmap/prometheus created
configmap/istio-security-custom-resources created
configmap/istio created
configmap/istio-sidecar-injector created
serviceaccount/istio-galley-service-account created
serviceaccount/istio-ingressgateway-service-account created
serviceaccount/istio-mixer-service-account created
serviceaccount/istio-pilot-service-account created
serviceaccount/prometheus created
serviceaccount/istio-cleanup-secrets-service-account created
clusterrole.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
job.batch/istio-cleanup-secrets-1.2.5 created
serviceaccount/istio-security-post-install-account created
clusterrole.rbac.authorization.k8s.io/istio-security-post-install-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-security-post-install-role-binding-istio-system created
job.batch/istio-security-post-install-1.2.5 created
serviceaccount/istio-citadel-service-account created
serviceaccount/istio-sidecar-injector-service-account created
serviceaccount/istio-multi created
clusterrole.rbac.authorization.k8s.io/istio-galley-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-mixer-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrole.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-sidecar-injector-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-reader created
clusterrolebinding.rbac.authorization.k8s.io/istio-galley-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-mixer-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-sidecar-injector-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-multi created
role.rbac.authorization.k8s.io/istio-ingressgateway-sds created
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds created
service/istio-galley created
service/istio-ingressgateway created
service/istio-policy created
service/istio-telemetry created
service/istio-pilot created
service/prometheus created
service/istio-citadel created
service/istio-sidecar-injector created
deployment.apps/istio-galley created
deployment.apps/istio-ingressgateway created
deployment.apps/istio-policy created
deployment.apps/istio-telemetry created
deployment.apps/istio-pilot created
deployment.apps/prometheus created
deployment.apps/istio-citadel created
deployment.apps/istio-sidecar-injector created
horizontalpodautoscaler.autoscaling/istio-ingressgateway created
horizontalpodautoscaler.autoscaling/istio-policy created
horizontalpodautoscaler.autoscaling/istio-telemetry created
horizontalpodautoscaler.autoscaling/istio-pilot created
mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector created
poddisruptionbudget.policy/istio-galley created
poddisruptionbudget.policy/istio-ingressgateway created
poddisruptionbudget.policy/istio-policy created
poddisruptionbudget.policy/istio-telemetry created
poddisruptionbudget.policy/istio-pilot created
poddisruptionbudget.policy/istio-sidecar-injector created
attributemanifest.config.istio.io/istioproxy created
attributemanifest.config.istio.io/kubernetes created
instance.config.istio.io/requestcount created
instance.config.istio.io/requestduration created
instance.config.istio.io/requestsize created
instance.config.istio.io/responsesize created
instance.config.istio.io/tcpbytesent created
instance.config.istio.io/tcpbytereceived created
instance.config.istio.io/tcpconnectionsopened created
instance.config.istio.io/tcpconnectionsclosed created
handler.config.istio.io/prometheus created
rule.config.istio.io/promhttp created
rule.config.istio.io/promtcp created
rule.config.istio.io/promtcpconnectionopen created
rule.config.istio.io/promtcpconnectionclosed created
handler.config.istio.io/kubernetesenv created
rule.config.istio.io/kubeattrgenrulerule created
rule.config.istio.io/tcpkubeattrgenrulerule created
instance.config.istio.io/attributes created
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created

Check whether Istio has been installed successfully.
root@kub-master:~/istio-1.2.5# kubectl get all -n istio-system
NAME                                          READY   STATUS      RESTARTS   AGE
pod/istio-citadel-555dbdfd6b-ksqzn            1/1     Running     0          25m
pod/istio-cleanup-secrets-1.2.5-fr6tj         0/1     Completed   0          25m
pod/istio-galley-6855ffd77f-5b2nd             1/1     Running     0          25m
pod/istio-ingressgateway-7cfcbf4fb8-ntmr5     1/1     Running     0          25m
pod/istio-init-crd-10-f4xjf                   0/1     Completed   0          25m
pod/istio-init-crd-11-ct2t7                   0/1     Completed   0          25m
pod/istio-init-crd-12-nwgp8                   0/1     Completed   0          25m
pod/istio-pilot-9589bcff5-lt85f               2/2     Running     0          25m
pod/istio-policy-9dbbb8ccd-s5lpc              2/2     Running     2          25m
pod/istio-security-post-install-1.2.5-l8cw2   0/1     Completed   0          25m
pod/istio-sidecar-injector-74f597fb84-kv2tn   1/1     Running     0          25m
pod/istio-telemetry-5d95788576-sr5nr          2/2     Running     1          25m
pod/prometheus-7d7b9f7844-bsh42               1/1     Running     0          25m


NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
service/istio-citadel            ClusterIP      10.108.238.135   <none>        8060/TCP,15014/TCP                                                                                                                           25m
service/istio-galley             ClusterIP      10.103.63.106    <none>        443/TCP,15014/TCP,9901/TCP                                                                                                                   25m
service/istio-ingressgateway     LoadBalancer   10.99.247.174    <pending>     15020:31884/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31236/TCP,15030:30003/TCP,15031:32047/TCP,15032:30130/TCP,15443:32711/TCP   25m
service/istio-pilot              ClusterIP      10.98.94.187     <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       25m
service/istio-policy             ClusterIP      10.98.153.137    <none>        9091/TCP,15004/TCP,15014/TCP                                                                                                                 25m
service/istio-sidecar-injector   ClusterIP      10.102.180.126   <none>        443/TCP                                                                                                                                      25m
service/istio-telemetry          ClusterIP      10.104.132.38    <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       25m
service/prometheus               ClusterIP      10.96.51.228     <none>        9090/TCP                                                                                                                                     25m


NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-citadel            1/1     1            1           25m
deployment.apps/istio-galley             1/1     1            1           25m
deployment.apps/istio-ingressgateway     1/1     1            1           25m
deployment.apps/istio-pilot              1/1     1            1           25m
deployment.apps/istio-policy             1/1     1            1           25m
deployment.apps/istio-sidecar-injector   1/1     1            1           25m
deployment.apps/istio-telemetry          1/1     1            1           25m
deployment.apps/prometheus               1/1     1            1           25m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-citadel-555dbdfd6b            1         1         1       25m
replicaset.apps/istio-galley-6855ffd77f             1         1         1       25m
replicaset.apps/istio-ingressgateway-7cfcbf4fb8     1         1         1       25m
replicaset.apps/istio-pilot-9589bcff5               1         1         1       25m
replicaset.apps/istio-policy-9dbbb8ccd              1         1         1       25m
replicaset.apps/istio-sidecar-injector-74f597fb84   1         1         1       25m
replicaset.apps/istio-telemetry-5d95788576          1         1         1       25m
replicaset.apps/prometheus-7d7b9f7844               1         1         1       25m


NAME                                                       REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-pilot            Deployment/istio-pilot            <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-policy           Deployment/istio-policy           <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-telemetry        Deployment/istio-telemetry        <unknown>/80%   1         5         1          25m

NAME                                          COMPLETIONS   DURATION   AGE
job.batch/istio-cleanup-secrets-1.2.5         1/1           2s         25m
job.batch/istio-init-crd-10                   1/1           12s        25m
job.batch/istio-init-crd-11                   1/1           11s        25m
job.batch/istio-init-crd-12                   1/1           13s        25m
job.batch/istio-security-post-install-1.2.5   1/1           8s         25m

By default, istio-ingressgateway is created as a LoadBalancer service, which is fine if you are on a cloud provider or running load-balancer software, but I prefer to use it as a NodePort because it is easier to manage in our bare-metal setup. If you want to do the same, edit the istio-ingressgateway service configuration, replace the type LoadBalancer with NodePort, and save it.
root@kub-master:~/istio-1.2.5# kubectl edit service/istio-ingressgateway -n istio-system
service/istio-ingressgateway edited

You can see that the istio-ingressgateway service now runs as a NodePort, so Istio can be accessed on the corresponding port of any cluster node.
root@kub-master:~/istio-1.2.5# kubectl get services -n istio-system
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-citadel            ClusterIP   10.108.238.135   <none>        8060/TCP,15014/TCP                                                                                                                           33m
istio-galley             ClusterIP   10.103.63.106    <none>        443/TCP,15014/TCP,9901/TCP                                                                                                                   33m
istio-ingressgateway     NodePort    10.99.247.174    <none>        15020:31884/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31236/TCP,15030:30003/TCP,15031:32047/TCP,15032:30130/TCP,15443:32711/TCP   33m
istio-pilot              ClusterIP   10.98.94.187     <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       33m
istio-policy             ClusterIP   10.98.153.137    <none>        9091/TCP,15004/TCP,15014/TCP                                                                                                                 33m
istio-sidecar-injector   ClusterIP   10.102.180.126   <none>        443/TCP                                                                                                                                      33m
istio-telemetry          ClusterIP   10.104.132.38    <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       33m
prometheus               ClusterIP   10.96.51.228     <none>        9090/TCP                                                                                                                                     33m
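
To find the node port assigned to the gateway's HTTP port without scanning the whole table, a jsonpath query works. This is a sketch; the port is named http2 in this chart version, so adjust the name if yours differs.

kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'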

Install and configure Helm in Kubernetes

Helm is a great way to install new services in Kubernetes, and it can be used to deploy custom services as well. Helm works as a package manager, deploying applications in a versioned and preconfigured way.

Create an RBAC configuration to provide admin access to Tiller, Helm's server-side component.
root@kub-master:~# cat <<__EOF__>~/helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
__EOF__

Create the RBAC rule on your cluster.
root@kub-master:~# kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
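
Optionally, confirm that the service account and the binding exist before initializing Helm:

kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller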

Download the helm package.
root@kub-master:~# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
--2020-01-27 17:21:03--  https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.194.128, 2404:6800:4003:c03::80
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.194.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9160761 (8.7M) [application/x-tar]
Saving to: ‘helm-v2.9.1-linux-amd64.tar.gz’

helm-v2.9.1-linux-amd64.tar.gz                     100%[================================================================================================================>]   8.74M  17.3MB/s    in 0.5s

2020-01-27 17:21:04 (17.3 MB/s) - ‘helm-v2.9.1-linux-amd64.tar.gz’ saved [9160761/9160761]

Extract the archive and copy the binary to a directory on the executable path.
root@kub-master:~# tar -xvzf helm-v2.9.1-linux-amd64.tar.gz
linux-amd64/
linux-amd64/README.md
linux-amd64/helm
linux-amd64/LICENSE
root@kub-master:~# cp linux-amd64/helm /usr/local/bin/
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Initialize Tiller to complete the Helm setup.
root@kub-master:~# helm init --service-account tiller --tiller-namespace kube-system
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Check that Tiller is installed and running successfully.
root@kub-master:~# kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-deployment-5b44794677-dlv5p                 1/1     Running   0          3h26m
default       pod/nginx-deployment-5b44794677-hdj9n                 1/1     Running   0          3h26m
default       pod/nginx-deployment-5b44794677-r2gbd                 1/1     Running   0          3h26m
kube-system   pod/coredns-5c98db65d4-k77mn                          1/1     Running   0          4h33m
kube-system   pod/coredns-5c98db65d4-mv7p9                          1/1     Running   0          4h33m
kube-system   pod/etcd-izt4nicu8fd63j4cm5tj1uz                      1/1     Running   0          4h32m
kube-system   pod/kube-apiserver-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          4h32m
kube-system   pod/kube-controller-manager-izt4nicu8fd63j4cm5tj1uz   1/1     Running   0          4h32m
kube-system   pod/kube-flannel-ds-amd64-gfct6                       1/1     Running   0          4h15m
kube-system   pod/kube-flannel-ds-amd64-l8ts5                       1/1     Running   0          20m
kube-system   pod/kube-flannel-ds-amd64-rdlfc                       1/1     Running   0          20m
kube-system   pod/kube-flannel-ds-amd64-tx8dm                       1/1     Running   1          4h8m
kube-system   pod/kube-proxy-8958c                                  1/1     Running   0          20m
kube-system   pod/kube-proxy-fthqr                                  1/1     Running   0          20m
kube-system   pod/kube-proxy-jgql9                                  1/1     Running   0          4h33m
kube-system   pod/kube-proxy-jh6m8                                  1/1     Running   0          4h8m
kube-system   pod/kube-scheduler-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          4h32m
kube-system   pod/tiller-deploy-788b748dc8-grspx                    1/1     Running   0          30s


NAMESPACE     NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP                  4h33m
default       service/nginx           NodePort    10.96.121.113   <none>        80:30080/TCP             3h26m
kube-system   service/kube-dns        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   4h33m
kube-system   service/tiller-deploy   ClusterIP   10.107.143.64   <none>        44134/TCP                30s

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system   daemonset.apps/kube-flannel-ds-amd64     4         4         4       4            4           <none>                        4h15m
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>                        4h15m
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>                        4h15m
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                        4h15m
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>                        4h15m
kube-system   daemonset.apps/kube-proxy                4         4         4       4            4           beta.kubernetes.io/os=linux   4h33m

NAMESPACE     NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx-deployment   3/3     3            3           3h26m
kube-system   deployment.apps/coredns            2/2     2            2           4h33m
kube-system   deployment.apps/tiller-deploy      1/1     1            1           30s

NAMESPACE     NAME                                          DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-deployment-5b44794677   3         3         3       3h26m
kube-system   replicaset.apps/coredns-5c98db65d4            2         2         2       4h33m
kube-system   replicaset.apps/tiller-deploy-788b748dc8      1         1         1       30s

Add a slave to a Kubernetes cluster by creating a new token

The default lifespan of a Kubernetes join token is 24 hours. If we want to add new slaves after that time, we can use the following steps to create a new token and join a slave to the currently running cluster.

The following command shows the TTL of the existing tokens.
root@kub-master:~# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
rybj3q.rc4st2ll9w87tm7q   23h       2020-01-28T16:48:16+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
zyijtb.ee2b9nyarusddaa1   19h       2020-01-28T12:49:16+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Now, to generate the CA certificate hash and create a new token that slaves can use to authenticate with the master, run the following commands.
root@kub-master:~# sudo openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256 -hex
writing RSA key
(stdin)= abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6

root@kub-master:~# sudo kubeadm token create --groups system:bootstrappers:kubeadm:default-node-token
rybj3q.rc4st2ll9w87tm7q
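
As a shortcut, kubeadm can also create a fresh token and print the complete join command (token plus discovery hash) in a single step:
kubeadm token create --print-join-command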

Now use these values in the kubeadm join command on each new slave to add it to the cluster.
root@kub-slave2:~# kubeadm join 172.18.77.13:6443 --token rybj3q.rc4st2ll9w87tm7q   --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@kub-slave3:~# kubeadm join 172.18.77.13:6443 --token rybj3q.rc4st2ll9w87tm7q   --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Then we can check on the master node whether the slaves have been added successfully.
root@kub-master:~# kubectl get nodes -o wide
NAME                      STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kub-master                Ready    master   4h15m   v1.15.4   172.18.77.13   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.5
kub-slave                 Ready    <none>   3h50m   v1.15.4   172.18.77.14   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.5
kub-slave2                Ready    <none>   2m36s   v1.15.4   172.18.77.15   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.5
kub-slave3                Ready    <none>   2m36s   v1.15.4   172.18.77.16   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.5
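
The new workers show ROLES as <none>; this is purely a display label derived from node labels. If you would like them to show up as workers, you can optionally add the conventional role label (node names taken from this cluster, adjust to yours):
kubectl label node kub-slave2 node-role.kubernetes.io/worker=
kubectl label node kub-slave3 node-role.kubernetes.io/worker=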


Create a simple Nginx deployment and service in Kubernetes

We will create a simple Nginx Deployment and Service in Kubernetes using the default Nginx image. This configuration runs vanilla Nginx, which is handy for checking that the cluster is working, and it can be modified to run any custom image.

In the following configuration, we create a Deployment that runs three Nginx pods for high availability. A Service of type NodePort exposes the pods on a custom port on every cluster node.
root@kub-master:~# vi nginx-deployment.yaml

##Content of nginx-deployment.yaml file##
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx

Create the Deployment and Service by running the following command.
root@kub-master:~# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
service/nginx created

Check whether the required pods have been created successfully.
root@kub-master:~# kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-5b44794677-dlv5p   1/1     Running   0          19s
pod/nginx-deployment-5b44794677-hdj9n   1/1     Running   0          19s
pod/nginx-deployment-5b44794677-r2gbd   1/1     Running   0          19s


NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        67m
service/nginx        NodePort    10.96.121.113   <none>        80:30080/TCP   19s


NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           19s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5b44794677   3         3         3       19s

Let's query the custom NodePort we defined to verify that the service is reachable.
root@kub-master:~# curl localhost:30080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
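
Because the Service is of type NodePort, the same page should also be reachable on port 30080 of any node's IP from outside the cluster nodes, for example using one of the worker IPs from the node list above (substitute your own):
curl http://172.18.77.14:30080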


Create a single-master Kubernetes setup in Ubuntu

We will be creating a single-master Kubernetes setup in Ubuntu with multiple slave nodes attached to it. A single server acts as the Kubernetes master, and we can then add slaves to it according to our needs.

We will be using the following servers to set up the cluster.
Node info:
Node Name     IP              Purpose
kub-master    172.18.77.12    k8s master / etcd node
kub-slave     172.18.77.11    Slave Node

Set the hostname on each of the servers in the following way:
root@iZt4n46l2ljsxndagomkp0Z:~# hostnamectl set-hostname kub-master
root@iZt4n46l2ljsxndagomkp0Z:~# bash
root@kub-master:~#

root@iZt4n46l2ljsxndagomkp1Z:~# hostnamectl set-hostname kub-slave
root@iZt4n46l2ljsxndagomkp1Z:~# bash
root@kub-slave:~#

Turn off swap on all the nodes.
root@kub-master:~# sudo swapoff -a
root@kub-master:~# sudo sed -i 's/^.*swap/#&/' /etc/fstab
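
To confirm that swap is really off, swapon --show should print nothing and free -h should report 0B of swap:
swapon --show
free -h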

Now run the following command on all the nodes to install the required packages.
root@kub-master:~# apt-get update && apt-get install -y curl apt-transport-https

Install the latest version of Docker on each of the nodes.
root@kub-master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
root@kub-master:~# cat <<EOF >/etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/$(lsb_release -si | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable
EOF
root@kub-master:~# apt-get update && apt-get install -y docker-ce
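
Optionally, to address the cgroup-driver warning that kubeadm prints later during its preflight checks, Docker can be switched to the systemd cgroup driver. A minimal sketch (note: this overwrites /etc/docker/daemon.json if it already exists):
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker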

Configure sysctl and iptables on all the nodes so that bridged traffic is passed to iptables and forwarding is allowed.
root@kub-master:~# cat <<__EOF__ >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
__EOF__
root@kub-master:~# sysctl --system
root@kub-master:~# sysctl -p /etc/sysctl.d/k8s.conf
root@kub-master:~# iptables -P FORWARD ACCEPT
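
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; load it and re-apply the settings (an extra step that is only needed on some installations):
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf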

Install kubeadm, kubelet and kubectl.
root@kub-master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kub-master:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kub-master:~# apt-get update
root@kub-master:~# apt-get install -y kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
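
To keep these packages from being upgraded accidentally by a routine apt-get upgrade, it is common practice to pin them:
apt-mark hold kubelet kubeadm kubectl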

Run the following command, only on the master server, to initialize the control plane.
root@kub-master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
I0127 12:48:37.761157   13545 version.go:248] remote version is much newer: v1.17.2; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.9
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.77.13]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz localhost] and IPs [172.18.77.13 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [izt4nicu8fd63j4cm5tj1uz localhost] and IPs [172.18.77.13 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.001523 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node izt4nicu8fd63j4cm5tj1uz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node izt4nicu8fd63j4cm5tj1uz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zyijtb.ee2b9nyarusddaa1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.77.13:6443 --token zyijtb.ee2b9nyarusddaa1 \
    --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6

Run the following commands to set up authentication for the user who will manage the Kubernetes cluster.
root@kub-master:~# mkdir -p $HOME/.kube
root@kub-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kub-master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
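
At this point kubectl should be able to reach the API server. Note that the master node typically reports NotReady until a pod network is deployed in the next step:
kubectl cluster-info
kubectl get nodes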

Deploy a pod network to the cluster so that pods can communicate across nodes. In our case, we are deploying flannel.
root@kub-master:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check if all the pods are up and running successfully.
root@kub-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-k77mn                          0/1     Running   0          18m
kube-system   coredns-5c98db65d4-mv7p9                          0/1     Running   0          18m
kube-system   etcd-izt4nicu8fd63j4cm5tj1uz                      1/1     Running   0          17m
kube-system   kube-apiserver-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          17m
kube-system   kube-controller-manager-izt4nicu8fd63j4cm5tj1uz   1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-gfct6                       1/1     Running   0          21s
kube-system   kube-proxy-jgql9                                  1/1     Running   0          18m
kube-system   kube-scheduler-izt4nicu8fd63j4cm5tj1uz            1/1     Running   0          17m
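
The coredns pods report 0/1 READY right after the network add-on is applied; they should become Ready once the flannel pod is fully up. You can follow the progress with the watch flag:
kubectl get pods -n kube-system -w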

Add the slave to the cluster by running the following command on the slave node. The command can be copied from the output of the kubeadm init command above.
root@kub-slave:~# kubeadm join 172.18.77.13:6443 --token zyijtb.ee2b9nyarusddaa1 \
>     --discovery-token-ca-cert-hash sha256:abcb142fb79a87ddb481c26d2e5266f597753d33e40547da378b1d833a85e4b6
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check on the master node to see whether all nodes have been added successfully.
root@kub-master:~# kubectl get nodes
NAME                      STATUS   ROLES    AGE    VERSION
kub-master                Ready    master   26m    v1.15.4
kub-slave                 Ready    <none>   108s   v1.15.4

Wednesday, January 8, 2020

How to check the size of hidden folders and files in Linux.

There are cases when we see high disk utilization, but the usual du listing does not account for hidden files and folders, so it cannot show the exact usage of a directory.
root@ravi-172-21-42-201:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             16G  4.0K   16G   1% /dev
tmpfs           3.2G  388K  3.2G   1% /run
/dev/vda1        40G   25G   14G  65% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none             16G     0   16G   0% /run/shm
none            100M     0  100M   0% /run/user
root@ravi-172-21-42-201:~# cd /
root@ravi-172-21-42-201:/# du -sh *  |sort -h
556M lib
581M var
1.5G usr
22G root

root@ravi-172-21-42-201:/# cd root/
root@ravi-172-21-42-201:~# du -sh
22G .
root@ravi-172-21-42-201:~# du -sh *  |sort -h
0 starter.bash
4.0K Untitled Folder
4.0K Untitled.ipynb
4.0K nohup2.out
4.0K nohup6.out
4.0K test_connectivity.ipynb
12K tokopedia_challenge_report.log
20K scripts
80K AppsFlyer
176K c.log
200K tokopedia_challenge_report
428K nohup12.out
952K UserPersonalisation
1.6M weekly_branch_GA_DB_report
40M Miniconda2-latest-Linux-x86_64.sh
67M Miniconda3-latest-Linux-x86_64.sh
310M RFM Analysis
2.4G market_basket_analysis
3.1G miniconda3

As we can see in the above case, du -sh * does not account for the hidden folders and files in the directory, so the listed sizes do not add up to the 22G total.
root@ravi-172-21-42-201:~# ls -a
.              .bashrc-miniconda2.bak  .ipynb_checkpoints  .pip              .selected_editor  .viminfo                           Untitled Folder         miniconda3   starter.bash
..             .bashrc-miniconda3.bak  .ipython            .profile          .ssh              AppsFlyer                          Untitled.ipynb          nohup12.out  test_connectivity.ipynb
.ansible       .cache                  .jupyter            .pydistutils.cfg  .test_mba.py.swp  Miniconda2-latest-Linux-x86_64.sh  UserPersonalisation     nohup2.out   tokopedia_challenge_report
.bash_history  .conda                  .local              .python_history   .tsh              Miniconda3-latest-Linux-x86_64.sh  c.log                   nohup6.out   tokopedia_challenge_report.log
.bashrc        .config                 .oracle_jre_usage   .rpmdb            .vim              RFM Analysis                       market_basket_analysis  scripts      weekly_branch_GA_DB_report

Now we will use the following command to check the size of the hidden folders and files as well. The .[!.]* glob matches hidden entries (names starting with a dot) while skipping . and .., and * covers the regular entries. Let's see it in action:
root@ravi-172-21-42-201:~# du -sch .[!.]* * |sort -h
0 starter.bash
4.0K .bashrc-miniconda2.bak
4.0K .bashrc-miniconda3.bak
4.0K .profile
4.0K .pydistutils.cfg
4.0K .python_history
4.0K .selected_editor
4.0K .vim
4.0K Untitled Folder
4.0K Untitled.ipynb
4.0K nohup2.out
4.0K nohup6.out
4.0K test_connectivity.ipynb
8.0K .ansible
8.0K .bash_history
8.0K .bashrc
8.0K .oracle_jre_usage
12K .conda
12K .ipynb_checkpoints
12K .jupyter
12K .ssh
12K .test_mba.py.swp
12K tokopedia_challenge_report.log
16K .config
16K .viminfo
20K scripts
24K .tsh
80K AppsFlyer
100K .pip
176K c.log
200K tokopedia_challenge_report
428K nohup12.out
436K .ipython
448K .rpmdb
952K UserPersonalisation
1.6M weekly_branch_GA_DB_report
8.7M .cache
40M Miniconda2-latest-Linux-x86_64.sh
67M Miniconda3-latest-Linux-x86_64.sh
310M RFM Analysis
2.4G market_basket_analysis
3.1G miniconda3
16G .local
22G total
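
An alternative that avoids the glob entirely, assuming GNU du, is to list everything one level deep under the current directory, hidden entries included:
du -ah --max-depth=1 . | sort -h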

Setup fully configurable EFK Elasticsearch Fluentd Kibana setup in Kubernetes

In the following setup, we will be creating a fully configurable Elasticsearch, Fluentd, Kibana stack, better known as an EFK setup. There is a...