Thursday, May 7, 2020

Set up a fully configurable EFK (Elasticsearch, Fluentd, Kibana) stack in Kubernetes

In the following setup, we will be creating a fully configurable Elasticsearch, Fluentd, Kibana stack, better known as an EFK setup. There is also an option to use Logstash instead of Fluentd, but we are using Fluentd as it performs better for Kubernetes logging. First of all, we will need a Kubernetes cluster to perform these tasks. I'm using GKE for my current setup, but the same can be replicated on any Kubernetes setup. If you want to create a single-master setup you can follow this link, or for a multi-master setup the following link.

We will be creating an EFK setup with the free X-Pack security features enabled, so that Kibana has authentication and authorization, plus authentication on the Elasticsearch and Fluentd side.

Let's start by creating a separate Namespace for our EFK stack called kube-logging.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat namespace-kube-logging.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f namespace-kube-logging.yaml
namespace/kube-logging created

This is an optional command which can be used to grant admin access to your Namespace in case it's not able to get data from other Namespaces.
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts --user=kube-logging

We will start by creating the Elasticsearch setup first, which includes a StorageClass so that we have persistent storage for our Elasticsearch nodes and keep the data if any pod fails. In my case, as I'm working on GKE, the provisioner is based on that; if you're using a different setup than GKE, use the respective provisioner.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat elasticsearch_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: elasticsearch-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: none
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f elasticsearch_sc.yaml
storageclass.storage.k8s.io/elasticsearch-sc created
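
If you want to double-check that the StorageClass registered with the expected provisioner and parameters, the usual kubectl commands are enough; this is just an optional sanity check, not a required step.
kubectl get storageclass elasticsearch-sc
kubectl describe storageclass elasticsearch-sc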

As we are using X-Pack for our setup, we need to create a password for the elastic user, which is the superuser in our case. We will be using a Kubernetes Secret to store this password, and it will be referenced by the respective EFK services. The password is base64-encoded because the data field of a Kubernetes Secret only accepts base64-encoded values.
ravi_gadgil@cloudshell:~/logging/efk_bk$ echo -n 'changeme' | base64
Y2hhbmdlbWU=
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat elastic_user_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-pw-elastic
  namespace: kube-logging
type: Opaque
data:
  password: Y2hhbmdlbWU=
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f elastic_user_secret.yaml
secret/elasticsearch-pw-elastic created
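
As an alternative to writing the Secret manifest by hand, kubectl can do the base64 encoding for you; the following one-liner should produce an equivalent Secret (shown only as an option, the manifest above is what the rest of this post uses).
kubectl create secret generic elasticsearch-pw-elastic \
  --namespace kube-logging \
  --from-literal=password='changeme'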

Now let's deploy our Elasticsearch as a StatefulSet to have high availability for our cluster, which includes three master-eligible nodes with persistent storage and X-Pack enabled.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat elasticsearch_statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
          - name: xpack.security.enabled
            value: "true"
          - name: xpack.monitoring.collection.enabled
            value: "true"
          - name: ELASTIC_PASSWORD
            valueFrom:
              secretKeyRef:
                name: elasticsearch-pw-elastic
                key: password
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: elasticsearch-sc
      resources:
        requests:
          storage: 50Gi
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f elasticsearch_statefulset.yaml
statefulset.apps/es-cluster created

Now let's create a Service for our Elasticsearch cluster so that it can be accessed through it.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat elasticsearch_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f elasticsearch_svc.yaml
service/elasticsearch created

Let's check what our Elasticsearch setup looks like.
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl get all -n kube-logging
NAME               READY   STATUS    RESTARTS   AGE
pod/es-cluster-0   1/1     Running   0          20m
pod/es-cluster-1   1/1     Running   0          19m
pod/es-cluster-2   1/1     Running   0          19m

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   36s

NAME                          READY   AGE
statefulset.apps/es-cluster   3/3     20m
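
Before moving on to Kibana, it's worth confirming that the three nodes actually formed a cluster and that X-Pack authentication is enforced. A quick check through a temporary port-forward, assuming the elastic/changeme credentials from the Secret above, could look like this:
kubectl port-forward -n kube-logging es-cluster-0 9200:9200 &
# Without credentials this should now return a 401
curl 'http://localhost:9200/_cluster/health?pretty'
# With the elastic superuser it should report 3 nodes
curl -u elastic:changeme 'http://localhost:9200/_cluster/health?pretty'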

Now let's deploy Kibana for our cluster so that we can visualize our data.
First, we need to create a ConfigMap for Kibana, as it will act as the configuration file for it. It holds the credentials used to connect to Elasticsearch and the path from which we want to access Kibana; in our case that will be example.com/kibana instead of the root domain path. The rest of the security parameters are optional according to personal needs.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat kibana_configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
    server.basePath: ${SERVER_BASEPATH}
    server.rewriteBasePath: ${SERVER_REWRITEBASEPATH}
    xpack.security.enabled: ${XPACK_SECURITY_ENABLED}
    xpack.security.sessionTimeout: 3600000
    xpack.security.encryptionKey: "REDACTEDbvhjdbvwkbvwdjnbkjvbsdvjskvjiwufwifuhiuwefbwkjcsdjckbdskj"
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f kibana_configmap.yaml
configmap/kibana-config created

Let's deploy our Kibana with the following deployment.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-logging
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.2
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch.kube-logging.svc.cluster.local:9200"
        - name: SERVER_BASEPATH
          value: "/kibana"
        - name: SERVER_REWRITEBASEPATH
          value: "true"
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f kibana.yaml
deployment.apps/kibana created
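
Kibana can take a minute or two to come up; waiting on the rollout before creating the Service avoids chasing a pod that is still initializing (optional, but handy).
kubectl rollout status deployment/kibana -n kube-logging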

Let's create a service for our Kibana deployment.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat kibana_service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: kube-logging
  name: kibana
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    name: webinterface
    targetPort: 5601
  selector:
    app: kibana
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f kibana_service.yaml
service/kibana created

Let's check what our Kibana setup looks like.
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl get all -n kube-logging
NAME                          READY   STATUS    RESTARTS   AGE
pod/es-cluster-0              1/1     Running   0          66m
pod/es-cluster-1              1/1     Running   0          66m
pod/es-cluster-2              1/1     Running   0          66m
pod/kibana-5c4b9749f7-mmpgg   1/1     Running   0          4m39s

NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None          <none>        9200/TCP,9300/TCP   47m
service/kibana          NodePort    10.44.13.43   <none>        5601:32023/TCP      3m16s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana   1/1     1            1           4m39s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-5c4b9749f7   1         1         1       4m39s

NAME                          READY   AGE
statefulset.apps/es-cluster   3/3     66m

Our Elasticsearch and Kibana are deployed successfully, and the last piece left is Fluentd, which will collect logs from all the other pods in our Kubernetes cluster and push them to Elasticsearch.
We will create a ConfigMap for our Fluentd deployment so that we have better control over the parsing of logs and can pass the Elasticsearch credentials to it. You can download the latest copy of this config from the following link.

We are using the following ConfigMap with entries made to connect with our Elasticsearch cluster.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat fluentd-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config-v0.2.0
  namespace: kube-logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).

    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/es-startupscript.log.pos
      tag startupscript
    </source>

    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id docker.log
      @type tail
      format /^time="(?<time>[^"]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/es-docker.log.pos
      tag docker
    </source>

    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
      path /var/log/etcd.log
      pos_file /var/log/es-etcd.log.pos
      tag etcd
    </source>

    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.

    # Example:
    # I0204 07:32:30.020537    3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kubelet.log
      pos_file /var/log/es-kubelet.log.pos
      tag kubelet
    </source>

    # Example:
    # I1118 21:26:53.975789       6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-proxy.log
      pos_file /var/log/es-kube-proxy.log.pos
      tag kube-proxy
    </source>

    # Example:
    # I0204 07:00:19.604280       5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-apiserver.log
      pos_file /var/log/es-kube-apiserver.log.pos
      tag kube-apiserver
    </source>

    # Example:
    # I0204 06:55:31.872680       5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-controller-manager.log
      pos_file /var/log/es-kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>

    # Example:
    # W0204 06:49:18.239674       7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-scheduler.log
      pos_file /var/log/es-kube-scheduler.log.pos
      tag kube-scheduler
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/glbc.log
      pos_file /var/log/es-glbc.log.pos
      tag glbc
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/es-cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host elasticsearch.kube-logging.svc.cluster.local
      port 9200
      user "#{ENV['ELASTICSEARCH_USER']}"
      password "#{ENV['ELASTICSEARCH_PASSWORD']}"
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        total_limit_size 500M
        overflow_action block
      </buffer>
    </match>
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f fluentd-config.yaml
configmap/fluentd-config-v0.2.0 created

Let's deploy our Fluentd with the following DaemonSet. It also includes a ClusterRole, which is necessary to give Fluentd permission to read pod and namespace metadata across the whole Kubernetes cluster (used to enrich the log records). It also includes the credentials to connect to the Elasticsearch cluster and other required configuration.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-v3.0.1
  namespace: kube-logging
  labels:
    k8s-app: fluentd
    version: v3.0.1
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd
      version: v3.0.1
  template:
    metadata:
      labels:
        k8s-app: fluentd
        version: v3.0.1
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v3.0.1
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_UID
            value: "0"
          - name: FLUENTD_ARGS
            value: --no-supervisor -q
          - name: ELASTICSEARCH_USER
            value: "elastic"
          - name: ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                name: elasticsearch-pw-elastic
                key: password
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config-v0.2.0
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl apply -f fluentd.yaml
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.apps/fluentd-v3.0.1 created
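
If you want to verify that Fluentd authenticated against Elasticsearch and is shipping logs, a quick sanity check is to look at a Fluentd pod's logs and then list the indices, re-using the port-forward approach from the Elasticsearch check above; logstash-* indices should start appearing within a minute or so.
# Fluentd pods carry the k8s-app=fluentd label from the DaemonSet above
kubectl logs -n kube-logging -l k8s-app=fluentd --tail=20
# With the port-forward to es-cluster-0 still running, list indices as the elastic user
curl -u elastic:changeme 'http://localhost:9200/_cat/indices?v'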

Let's check what our full setup looks like now that Fluentd is deployed.
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl get all -n kube-logging
NAME                          READY   STATUS    RESTARTS   AGE
pod/es-cluster-0              1/1     Running   0          117m
pod/es-cluster-1              1/1     Running   0          117m
pod/es-cluster-2              1/1     Running   0          117m
pod/fluentd-v3.0.1-697gv      1/1     Running   0          2m15s
pod/fluentd-v3.0.1-c5x76      1/1     Running   0          2m15s
pod/fluentd-v3.0.1-js6xj      1/1     Running   0          2m16s
pod/fluentd-v3.0.1-kfrpf      1/1     Running   0          2m16s
pod/fluentd-v3.0.1-wtjgj      1/1     Running   0          2m15s
pod/fluentd-v3.0.1-x5ws9      1/1     Running   0          2m16s
pod/kibana-5c4b9749f7-mmpgg   1/1     Running   0          55m

NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None          <none>        9200/TCP,9300/TCP   98m
service/kibana          NodePort    10.44.13.43   <none>        5601:32023/TCP      54m

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd-v3.0.1   6         6         6       6            6           <none>          2m16s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana   1/1     1            1           55m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-5c4b9749f7   1         1         1       55m

NAME                          READY   AGE
statefulset.apps/es-cluster   3/3     117m

Cool, now our setup is up and running. Let's check our Kibana to see how it turned out. I'm using port-forward just to check the Kibana dashboard.
ravi_gadgil@cloudshell:~/logging/efk_bk$ kubectl port-forward --namespace kube-logging service/kibana 5601:5601 &
[1] 5494
ravi_gadgil@cloudshell:~/logging/efk_bk$ Forwarding from 127.0.0.1:5601 -> 5601
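
Because server.basePath is set to /kibana with rewriteBasePath enabled, Kibana's status API through the port-forward should answer on that prefix; this is just a quick, optional check that the base path and credentials are wired up as intended.
curl -u elastic:changeme http://localhost:5601/kibana/api/status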

As we have configured Kibana to be served on the /kibana path, use that URL to access it.
[Kibana Dashboard screenshot]
Use the credentials elastic/changeme to log in to your Kibana dashboard if you have not changed them in the Secret file.
[Kibana Dashboard screenshot]
Create an index pattern as logstash-* in the Management section to access all the logs from Elasticsearch.
[Kibana Dashboard screenshots]
Now let's go to the Discover option to see our logs and how they come up.
[Kibana Dashboard screenshot]
As we have deployed Kibana under example.com/kibana, the following Istio configuration can be used to set up this route if you're using Istio and an Ingress Gateway. If you want to set up Istio you can follow this link.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat istio-virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kibana
  namespace: kube-logging
spec:
  hosts:
  - "example.com"
  gateways:
  - default-gateway.istio-system.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /kibana
    route:
    - destination:
        host: kibana.kube-logging.svc.cluster.local
        port:
          number: 5601
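
The manifest above is only printed with cat; assuming Istio and its ingress gateway are already installed, applying it follows the same pattern as the rest of the post.
kubectl apply -f istio-virtualservice.yaml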

If you want to test whether logs are coming in from new workloads and all namespaces, the following Pod manifest can be used.
ravi_gadgil@cloudshell:~/logging/efk_bk$ cat counter.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
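
To run the test, apply the Pod and confirm its output both locally and in Kibana; searching Discover for kubernetes.pod_name: counter (field name assuming the kubernetes_metadata filter from the Fluentd config above) should show the counter lines.
kubectl apply -f counter.yaml
kubectl logs counter --tail=5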

Wednesday, January 29, 2020

How to upgrade or update Helm

There are cases when you need to upgrade or update your Helm to work with new charts, and it's generally good practice to keep packages up to date.

Check the current version.
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Download the latest version, or the specific version you want to install.
root@kub-master:~# wget https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
--2020-01-28 19:28:27--  https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25262108 (24M) [application/x-tar]
Saving to: ‘helm-v2.16.1-linux-amd64.tar.gz’

helm-v2.16.1-linux-amd64.tar.gz                    100%[================================================================================================================>]  24.09M  --.-KB/s    in 0.1s

2020-01-28 19:28:27 (239 MB/s) - ‘helm-v2.16.1-linux-amd64.tar.gz’ saved [25262108/25262108]

Extract the package and move the binary to a directory on your executable path.
root@kub-master:~# tar -xvzf helm-v2.16.1-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/tiller
linux-amd64/README.md
root@kub-master:~# mv linux-amd64/helm /usr/local/bin/
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

As you can see, there is a difference between the client and server versions, so we have to reinitialize Helm with the new version.
root@kub-master:~# helm init --service-account tiller --tiller-namespace kube-system --upgrade
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been updated to gcr.io/kubernetes-helm/tiller:v2.16.1 .
root@kub-master:~# helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Install and configure Spinnaker on Kubernetes

We will be installing and configuring Spinnaker in our Kubernetes cluster to streamline the CI/CD process. Spinnaker was created by Netflix to handle their CI/CD on Kubernetes and was later open-sourced. Spinnaker is a great tool with built-in integration with the major cloud providers. We will be installing it in a bare-metal Kubernetes cluster, though.

Download the configuration file from the Spinnaker Helm chart repo and make your changes accordingly.
root@kub-master:~# wget https://raw.githubusercontent.com/helm/charts/master/stable/spinnaker/values.yaml

I have disabled all the persistence entries in the values.yaml file as this is not a production deployment; for a production deployment it's highly recommended to use persistent storage. My values.yaml file looks like this after disabling persistence.
root@kub-master:~# cat values.yaml
halyard:
  spinnakerVersion: 1.16.1
  image:
    repository: gcr.io/spinnaker-marketplace/halyard
    tag: 1.23.2
    pullSecrets: []
  # Set to false to disable persistence data volume for halyard
  persistence:
    enabled: false
  # Provide additional parameters to halyard deploy apply command
  additionalInstallParameters: []
  # Provide a config map with Hal commands that will be run the core config (storage)
  # The config map should contain a script in the config.sh key
  additionalScripts:
    enabled: false
    configMapName: my-halyard-config
    configMapKey: config.sh
    # If you'd rather do an inline script, set create to true and put the content in the data dict like you would a configmap
    # The content will be passed through `tpl`, so value interpolation is supported.
    create: false
    data: {}
  additionalSecrets:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created secret rather than feeding data in via helm.
    # name:
  additionalConfigMaps:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created ConfigMap rather than feeding data in via helm.
    # name:
  ## Define custom profiles for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-profiles
  ## The contents of the files will be passed through `tpl`, so value interpolation is supported.
  additionalProfileConfigMaps:
    data: {}
      ## if you're running spinnaker behind a reverse proxy such as a GCE ingress
      ## you may need the following profile settings for the gate profile.
      ## see https://github.com/spinnaker/spinnaker/issues/1630
      ## otherwise its harmless and will likely become default behavior in the future
      ## According to the linked github issue.
      # gate-local.yml:
      #   server:
      #     tomcat:
      #       protocolHeader: X-Forwarded-Proto
      #       remoteIpHeader: X-Forwarded-For
      #       internalProxies: .*
      #       httpsServerPort: X-Forwarded-Port

  ## Define custom settings for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-service-settings
  ## You can use it to add annotations for pods, override the image, etc.
  additionalServiceSettings: {}
    # deck.yml:
    #   artifactId: gcr.io/spinnaker-marketplace/deck:2.9.0-20190412012808
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>
    # clouddriver.yml:
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>

  ## Populate to provide a custom local BOM for Halyard to use for deployment. Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  bom: ~
  #   artifactSources:
  #     debianRepository: https://dl.bintray.com/spinnaker-releases/debians
  #     dockerRegistry: gcr.io/spinnaker-marketplace
  #     gitPrefix: https://github.com/spinnaker
  #     googleImageProject: marketplace-spinnaker-release
  #   services:
  #     clouddriver:
  #       commit: 031bcec52d6c3eb447095df4251b9d7516ed74f5
  #       version: 6.3.0-20190904130744
  #     deck:
  #       commit: b0aac478e13a7f9642d4d39479f649dd2ef52a5a
  #       version: 2.12.0-20190916141821
  #     ...
  #   timestamp: '2019-09-16 18:18:44'
  #   version: 1.16.1

  ## Define local configuration for Spinnaker services.
  ## The contents of these files would be copies of the configuration normally retrieved from
  ## `gs://halconfig/<service-name>`, but instead need to be available locally on the halyard pod to facilitate
  ## offline installation. This would typically be used along with a custom `bom:` with the `local:` prefix on a
  ## service version.
  ## Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  ## The key for each entry must be the name of the service and a file name separated by the '_' character.
  serviceConfigs: {}
  # clouddriver_clouddriver-ro.yml: |-
  #   ...
  # clouddriver_clouddriver-rw.yml: |-
  #   ...
  # clouddriver_clouddriver.yml: |-
  #   ...
  # deck_settings.json: |-
  #   ...
  # echo_echo.yml: |-
  #   ...

  ## Uncomment if you want to add extra commands to the init script
  ## run by the init container before halyard is started.
  ## The content will be passed through `tpl`, so value interpolation is supported.
  # additionalInitScript: |-

  ## Uncomment if you want to add annotations on halyard and install-using-hal pods
  # annotations:
  #   iam.amazonaws.com/role: <role_arn>

  ## Uncomment the following resources definitions to control the cpu and memory
  # resources allocated for the halyard pod
  resources: {}
    # requests:
    #   memory: "1Gi"
    #   cpu: "100m"
    # limits:
    #   memory: "2Gi"
    #   cpu: "200m"

  ## Uncomment if you want to set environment variables on the Halyard pod.
  # env:
  #   - name: JAVA_OPTS
  #     value: -Dhttp.proxyHost=proxy.example.com
  customCerts:
    ## Enable to override the default cacerts with your own one
    enabled: false
    secretName: custom-cacerts

# Define which registries and repositories you want available in your
# Spinnaker pipeline definitions
# For more info visit:
#   https://www.spinnaker.io/setup/providers/docker-registry/

# Configure your Docker registries here
dockerRegistries:
- name: dockerhub
  address: index.docker.io
  repositories:
    - library/alpine
    - library/ubuntu
    - library/centos
    - library/nginx
# - name: gcr
#   address: https://gcr.io
#   username: _json_key
#   password: '<INSERT YOUR SERVICE ACCOUNT JSON HERE>'
#   email: 1234@5678.com
# - name: ecr
#   address: <AWS-ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
#   username: AWS
#   passwordCommand: "aws --region <REGION> ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"

# If you don't want to put your passwords into a values file
# you can use a pre-created secret instead of putting passwords
# (specify secret name in below `dockerRegistryAccountSecret`)
# per account above with data in the format:
# <name>: <password>

# dockerRegistryAccountSecret: myregistry-secrets

kubeConfig:
  # Use this when you want to register arbitrary clusters with Spinnaker
  # Upload your ~/kube/.config to a secret
  enabled: false
  secretName: my-kubeconfig
  secretKey: config
  # Use this when you want to configure halyard to reference a kubeconfig from s3
  # This allows you to keep your kubeconfig in an encrypted s3 bucket
  # For more info visit:
  #   https://www.spinnaker.io/reference/halyard/secrets/s3-secrets/#secrets-in-s3
  # encryptedKubeconfig: encrypted:s3!r:us-west-2!b:mybucket!f:mykubeconfig
  # List of contexts from the kubeconfig to make available to Spinnaker
  contexts:
  - default
  deploymentContext: default
  omittedNameSpaces:
  - kube-system
  - kube-public
  onlySpinnakerManaged:
    enabled: false

  # When false, clouddriver will skip the permission checks for all kubernetes kinds at startup.
  # This can save a great deal of time during clouddriver startup when you have many kubernetes
  # accounts configured. This disables the log messages at startup about missing permissions.
  checkPermissionsOnStartup: true

  # A list of resource kinds this Spinnaker account can deploy to and will cache.
  # When no kinds are configured, this defaults to ‘all kinds'.
  # kinds:
  # -

  # A list of resource kinds this Spinnaker account cannot deploy to or cache.
  # This can only be set when –kinds is empty or not set.
  # omittedKinds:
  # -

# Change this if youd like to expose Spinnaker outside the cluster
ingress:
  enabled: false
  # host: spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

ingressGate:
  enabled: false
  # host: gate.spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

# spinnakerFeatureFlags is a list of Spinnaker feature flags to enable
# Ref: https://www.spinnaker.io/reference/halyard/commands/#hal-config-features-edit
# spinnakerFeatureFlags:
#   - artifacts
#   - pipeline-templates
spinnakerFeatureFlags:
  - artifacts

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# nodeSelector to provide to each of the Spinnaker components
nodeSelector: {}

# Redis password to use for the in-cluster redis service
# Enable redis to use in-cluster redis
redis:
  enabled: true
  # External Redis option will be enabled if in-cluster redis is disabled
  external:
    host: "<EXTERNAL-REDIS-HOST-NAME>"
    port: 6379
    # password: ""
  password: password
  nodeSelector: {}
  cluster:
    enabled: false
# Uncomment if you don't want to create a PVC for redis
  master:
    persistence:
      enabled: false

# Minio access/secret keys for the in-cluster S3 usage
# Minio is not exposed publically
minio:
  enabled: true
  imageTag: RELEASE.2019-02-13T19-48-27Z
  serviceType: ClusterIP
  accessKey: spinnakeradmin
  secretKey: spinnakeradmin
  bucket: "spinnaker"
  nodeSelector: {}
# Uncomment if you don't want to create a PVC for minio
  persistence:
    enabled: false

# Google Cloud Storage
gcs:
  enabled: false
  project: my-project-name
  bucket: "<GCS-BUCKET-NAME>"
  ## if jsonKey is set, will create a secret containing it
  jsonKey: '<INSERT CLOUD STORAGE JSON HERE>'
  ## override the name of the secret to use for jsonKey, if `jsonKey`
  ## is empty, it will not create a secret assuming you are creating one
  ## external to the chart. the key for that secret should be `key.json`.
  secretName:

# AWS Simple Storage Service
s3:
  enabled: false
  bucket: "<S3-BUCKET-NAME>"
  # rootFolder: "front50"
  # region: "us-east-1"
  # endpoint: ""
  # accessKey: ""
  # secretKey: ""
  # assumeRole: "<role to assume>"
  ## Here you can pass extra arguments to configure s3 storage options
  extraArgs: []
  #  - "--path-style-access true"

# Azure Storage Account
azs:
  enabled: false
#   storageAccountName: ""
#   accessKey: ""
#   containerName: "spinnaker"

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccounts to use.
  # If left blank it is auto-generated from the fullname of the release
  halyardName:
  spinnakerName:
securityContext:
  # Specifies permissions to write for user/group
  runAsUser: 1000
  fsGroup: 1000

We will use Helm to install our Spinnaker setup, taking values from values.yaml.
root@kub-master:~# helm install --name my-spinnaker -f values.yaml stable/spinnaker --namespace spinnaker
NAME:   my-spinnaker
LAST DEPLOYED: Tue Jan 28 19:35:17 2020
NAMESPACE: spinnaker
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRoleBinding
NAME                              AGE
my-spinnaker-spinnaker-spinnaker  3m9s

==> v1/ConfigMap
NAME                                                   AGE
my-spinnaker-minio                                     3m9s
my-spinnaker-spinnaker-additional-profile-config-maps  3m9s
my-spinnaker-spinnaker-halyard-config                  3m9s
my-spinnaker-spinnaker-halyard-init-script             3m9s
my-spinnaker-spinnaker-service-settings                3m9s

==> v1/Pod(related)
NAME                                 AGE
my-spinnaker-minio-668c59b766-bhw6p  3m9s
my-spinnaker-redis-master-0          3m9s
my-spinnaker-spinnaker-halyard-0     3m9s

==> v1/RoleBinding
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/Secret
NAME                             AGE
my-spinnaker-minio               3m9s
my-spinnaker-redis               3m9s
my-spinnaker-spinnaker-registry  3m9s

==> v1/Service
NAME                            AGE
my-spinnaker-minio              3m9s
my-spinnaker-redis-master       3m9s
my-spinnaker-spinnaker-halyard  3m9s

==> v1/ServiceAccount
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/StatefulSet
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1beta2/Deployment
NAME                AGE
my-spinnaker-minio  3m9s

==> v1beta2/StatefulSet
NAME                       AGE
my-spinnaker-redis-master  3m9s


NOTES:
1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $DECK_POD 9000

  export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $GATE_POD 8084

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

Our Spinnaker setup has been installed successfully; now let's run the commands suggested by Spinnaker to create the port-forwarding tunnels.
root@kub-master:~# export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
root@kub-master:~# kubectl port-forward --namespace spinnaker $DECK_POD 9000 &
[1] 24013
root@kub-master:~# Forwarding from 127.0.0.1:9000 -> 9000

root@kub-master:~# export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")

root@kub-master:~# kubectl port-forward --namespace spinnaker $GATE_POD 8084 &
[2] 24371
root@kub-master:~# Forwarding from 127.0.0.1:8084 -> 8084

Let's check our spinnaker deployment.
root@kub-master:~# kubectl get all -n spinnaker
NAME                                       READY   STATUS      RESTARTS   AGE
pod/my-spinnaker-install-using-hal-nk854   0/1     Completed   0          5m22s
pod/my-spinnaker-minio-668c59b766-bhw6p    1/1     Running     0          5m24s
pod/my-spinnaker-redis-master-0            1/1     Running     0          5m24s
pod/my-spinnaker-spinnaker-halyard-0       1/1     Running     0          5m24s
pod/spin-clouddriver-67474d5679-gcd52      1/1     Running     0          2m23s
pod/spin-deck-565f65c6b5-lfbx9             1/1     Running     0          2m25s
pod/spin-echo-98f95c9f5-m9kz8              0/1     Running     0          2m27s
pod/spin-front50-5b774b446b-mps7m          0/1     Running     0          2m21s
pod/spin-gate-78f5787989-5ns7x             1/1     Running     0          2m25s
pod/spin-igor-75f89f6d58-5k2fc             1/1     Running     0          2m24s
pod/spin-orca-5bd445469-sr8n9              1/1     Running     0          2m21s
pod/spin-rosco-c757f4488-lq6x9             1/1     Running     0          2m20s


NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP   5m24s
service/my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP   5m24s
service/my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP   5m24s
service/spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP   2m31s
service/spin-deck                        ClusterIP   10.105.31.148    <none>        9000/TCP   2m34s
service/spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP   2m34s
service/spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP   2m33s
service/spin-gate                        ClusterIP   10.109.86.157    <none>        8084/TCP   2m33s
service/spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP   2m32s
service/spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP   2m32s
service/spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP   2m33s


NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-spinnaker-minio   1/1     1            1           5m24s
deployment.apps/spin-clouddriver     1/1     1            1           2m23s
deployment.apps/spin-deck            1/1     1            1           2m25s
deployment.apps/spin-echo            0/1     1            0           2m27s
deployment.apps/spin-front50         0/1     1            0           2m21s
deployment.apps/spin-gate            1/1     1            1           2m25s
deployment.apps/spin-igor            1/1     1            1           2m24s
deployment.apps/spin-orca            1/1     1            1           2m21s
deployment.apps/spin-rosco           1/1     1            1           2m20s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/my-spinnaker-minio-668c59b766   1         1         1       5m24s
replicaset.apps/spin-clouddriver-67474d5679     1         1         1       2m23s
replicaset.apps/spin-deck-565f65c6b5            1         1         1       2m25s
replicaset.apps/spin-echo-98f95c9f5             1         1         0       2m27s
replicaset.apps/spin-front50-5b774b446b         1         1         0       2m21s
replicaset.apps/spin-gate-78f5787989            1         1         1       2m25s
replicaset.apps/spin-igor-75f89f6d58            1         1         1       2m24s
replicaset.apps/spin-orca-5bd445469             1         1         1       2m21s
replicaset.apps/spin-rosco-c757f4488            1         1         1       2m20s

NAME                                              READY   AGE
statefulset.apps/my-spinnaker-redis-master        1/1     5m24s
statefulset.apps/my-spinnaker-spinnaker-halyard   1/1     5m24s


NAME                                       COMPLETIONS   DURATION   AGE
job.batch/my-spinnaker-install-using-hal   1/1           3m7s       5m22s

As I will be accessing Spinnaker via NodePort, I will edit the following services to change their type from ClusterIP to NodePort.
root@kub-master:~# kubectl edit service/spin-deck -n spinnaker
service/spin-deck edited
root@kub-master:~# kubectl edit service/spin-gate -n spinnaker
service/spin-gate edited
root@kub-master:~# kubectl get services -n spinnaker
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP         16h
my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP         16h
my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP         16h
spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP         16h
spin-deck                        NodePort    10.105.31.148    <none>        9000:30306/TCP   16h
spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP         16h
spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP         16h
spin-gate                        NodePort    10.109.86.157    <none>        8084:30825/TCP   16h
spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP         16h
spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP         16h
spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP         16h

Now the spin-deck and spin-gate services are exposed as NodePort and can be reached on their corresponding ports. In my case, I will access the UI on any Node IP followed by port 30306.
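If you prefer a non-interactive way to make the same change (for scripting, for example), a kubectl patch on each service should also work; a minimal sketch:

kubectl -n spinnaker patch service spin-deck -p '{"spec": {"type": "NodePort"}}'
kubectl -n spinnaker patch service spin-gate -p '{"spec": {"type": "NodePort"}}'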
[Image: Spinnaker Kubernetes]

To customize Spinnaker or make any configuration changes, exec into the Halyard pod with the following command.
root@kub-master:~# kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash
spinnaker@my-spinnaker-spinnaker-halyard-0:/workdir$
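Inside the Halyard pod, configuration changes are driven by the hal CLI and rolled out with hal deploy apply. A rough sketch of a typical flow (the version number below is only an example, not the one deployed above):

hal version list                              # list Spinnaker versions available to deploy
hal config version edit --version 1.17.6      # example version only; pick one from the list
hal deploy apply                              # push the updated configuration to the cluster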

Tuesday, January 28, 2020

Install and Configure Rancher as a deployment and service in Kubernetes

We will be installing Rancher as a Kubernetes Deployment and Service so that we don't have to run it as a separate Docker container. This also lets us manage Rancher as part of the Kubernetes system itself. Rancher can install and configure a whole Kubernetes cluster on its own, but in our case we will deploy it into an already-created Kubernetes cluster.

Create the deployment and service for Rancher.
root@kub-master:~# mkdir rancher
root@kub-master:~# vi rancher/rancher_deployment.yaml

###Content of rancher_deployment.yaml###
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher
  labels:
    app: rancher
  namespace: rancher
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
    spec:
      containers:
      - image: rancher/rancher
        name: rancher
        ports:
        - containerPort: 443

root@kub-master:~# vi rancher/rancher_service.yaml

###Content of rancher_service.yaml###
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rancher
  name: rancher
  namespace: rancher
spec:
  ports:
  - nodePort: 30500
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: rancher
  type: NodePort
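The deployment above has no health checks, so Kubernetes marks the pod ready as soon as the container starts. If you want traffic routed only once the Rancher UI actually responds, a readiness probe could be added under the rancher container. This is a sketch, assuming Rancher answers on / over HTTPS (which it does by default with a self-signed certificate; HTTPS probes skip certificate verification, so that is not a problem here):

###Optional addition under the rancher container in rancher_deployment.yaml###
        readinessProbe:
          httpGet:
            path: /
            port: 443
            scheme: HTTPS
          initialDelaySeconds: 30   # give Rancher time to bootstrap before the first probe
          periodSeconds: 10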

Create a namespace called rancher in which we will deploy Rancher.
root@kub-master:~# kubectl create ns rancher
namespace/rancher created

Apply the YAML files to create the Rancher application.
root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created

Check whether Rancher has been deployed successfully.
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS             RESTARTS   AGE
pod/rancher-ccdcb4fd8-nxvrs   0/1     CrashLoopBackOff   9          23m


NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.102.40.105   <none>        443:30500/TCP   23m


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           23m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0       23m

As you can see, our Rancher pod has not come up successfully and is showing a status of CrashLoopBackOff. To find the underlying issue, check the pod's logs.
root@kub-master:~/rancher# kubectl logs pod/rancher-ccdcb4fd8-nxvrs -n rancher
2020/01/28 09:18:17 [INFO] Rancher version v2.3.5 is starting
2020/01/28 09:18:17 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2020/01/28 09:18:17 [INFO] Listening on /tmp/log.sock
2020/01/28 09:18:17 [INFO] Running in single server mode, will not peer connections
panic: creating CRD store customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:rancher:default" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

goroutine 84 [running]:
github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs.func1(0xc0006b2b40, 0xc0007c3000, 0x3b, 0x3b, 0xc000f52820, 0x62e2660, 0x40bfa00, 0xc001263280, 0x3938b12, 0x4)
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:65 +0x2db
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd.(*Factory).BatchCreateCRDs
 /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/crd/init.go:50 +0xce

From the error it's clear that our pod doesn't have the required permissions to perform its task. We will give it admin privileges for now so that it can start. In a production environment we shouldn't do this and should grant only the necessary permissions (a tighter alternative is sketched after the command below).
root@kub-master:~/rancher# kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --user=kube-system --user=default --user=rancher --group=system:serviceaccounts
clusterrolebinding.rbac.authorization.k8s.io/permissive-binding created
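A somewhat tighter alternative is to bind cluster-admin only to the service account the Rancher pod actually runs as, rather than to every service account in the cluster. This sketch assumes the pod uses the default service account of the rancher namespace, and rancher-sa-binding is just an example name:

kubectl create clusterrolebinding rancher-sa-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=rancher:default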

Now let's delete and recreate the resources.
root@kub-master:~/rancher# kubectl delete -f .
deployment.apps "rancher" deleted
service "rancher" deleted

root@kub-master:~/rancher# kubectl get all -n rancher
No resources found.

root@kub-master:~/rancher# kubectl create -f .
deployment.apps/rancher created
service/rancher created
root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS              RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   0/1     ContainerCreating   0          4s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   4s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/1     1            0           4s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         0       4s

root@kub-master:~/rancher# kubectl get all -n rancher
NAME                          READY   STATUS    RESTARTS   AGE
pod/rancher-ccdcb4fd8-kjwdf   1/1     Running   0          11s


NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/rancher   NodePort   10.105.70.47   <none>        443:30500/TCP   11s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   1/1     1            1           11s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-ccdcb4fd8   1         1         1       11s

Let's access Rancher on port 30500 using any of the Nodes' IPs. As we have deployed it with a NodePort service, it will be available on port 30500 on every Node.
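To quickly confirm the endpoint responds before opening a browser, a simple check from any machine that can reach a node should work (replace <node-ip> with one of your node IPs; -k is needed because Rancher ships with a self-signed certificate by default):

curl -kI https://<node-ip>:30500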
[Image: Rancher Kubernetes]

Monday, January 27, 2020

Install and configure Istio in a Kubernetes cluster.

Istio is an open-source service mesh platform that provides a way to control how microservices share data with one another. It also works very well as an ingress controller for serving traffic into the cluster. If you are running Kubernetes in a production setup, an ingress controller is a must, and Istio is an excellent choice for the job.

Download and extract the Istio package.
root@kub-master:~# wget https://github.com/istio/istio/releases/download/1.2.5/istio-1.2.5-linux.tar.gz
--2020-01-27 17:17:22--  https://github.com/istio/istio/releases/download/1.2.5/istio-1.2.5-linux.tar.gz
Resolving github.com (github.com)... 13.250.177.223
Connecting to github.com (github.com)|13.250.177.223|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/74175805/97273080-c5f8-11e9-8c14-48c4704e1ec9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200127T091723Z&X-Amz-Expires=300&X-Amz-Signature=8f66d7e0cb13d5b4d4542c22d512a0deb419f94f88476bb82a1e3ab2f88a605e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Distio-1.2.5-linux.tar.gz&response-content-type=application%2Foctet-stream [following]
--2020-01-27 17:17:23--  https://github-production-release-asset-2e65be.s3.amazonaws.com/74175805/97273080-c5f8-11e9-8c14-48c4704e1ec9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200127T091723Z&X-Amz-Expires=300&X-Amz-Signature=8f66d7e0cb13d5b4d4542c22d512a0deb419f94f88476bb82a1e3ab2f88a605e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Distio-1.2.5-linux.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.140.68
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.140.68|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32445384 (31M) [application/octet-stream]
Saving to: ‘istio-1.2.5-linux.tar.gz’

istio-1.2.5-linux.tar.gz                           100%[================================================================================================================>]  30.94M  7.07MB/s    in 4.4s

2020-01-27 17:17:28 (7.07 MB/s) - ‘istio-1.2.5-linux.tar.gz’ saved [32445384/32445384]

root@kub-master:~# tar -xvzf istio-1.2.5-linux.tar.gz

Copy the istioctl binary into a directory on your executable path.
root@kub-master:~# cp istio-1.2.5/bin/istioctl /usr/bin/
root@kub-master:~# istioctl version
1.2.5

Check the prerequisites for installing Istio.
root@kub-master:~# istioctl verify-install

Checking the cluster to make sure it is ready for Istio installation...

Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.

Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.15.9.


Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.

Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.

SideCar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#deploying-an-app

-----------------------
Install Pre-Check passed! The cluster is ready for Istio installation.

Create a namespace for Istio.
root@kub-master:~# kubectl create ns istio-system
namespace/istio-system created
root@kub-master:~# cd istio-1.2.5/

You need Helm installed and configured in your cluster to install Istio, so make sure it is set up. If you don't have Helm installed, follow this link to set it up.
root@kub-master:~# helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
configmap/istio-crd-10 created
configmap/istio-crd-11 created
configmap/istio-crd-12 created
serviceaccount/istio-init-service-account created
clusterrole.rbac.authorization.k8s.io/istio-init-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-init-admin-role-binding-istio-system created
job.batch/istio-init-crd-10 created
job.batch/istio-init-crd-11 created
job.batch/istio-init-crd-12 created
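The istio-init chart creates the CRDs through Kubernetes Jobs, so before rendering the main chart it can be useful to wait until those jobs have completed; a minimal sketch:

kubectl -n istio-system wait --for=condition=complete job --all --timeout=300s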

Check the Istio CRDs (custom resource definitions) that have been installed.
root@kub-master:~# kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
23
root@kub-master:~/istio-1.2.5# kubectl get crds | grep 'istio.io\|certmanager.k8s.io'
adapters.config.istio.io               2020-01-27T09:59:26Z
attributemanifests.config.istio.io     2020-01-27T09:59:26Z
authorizationpolicies.rbac.istio.io    2020-01-27T09:59:27Z
clusterrbacconfigs.rbac.istio.io       2020-01-27T09:59:26Z
destinationrules.networking.istio.io   2020-01-27T09:59:26Z
envoyfilters.networking.istio.io       2020-01-27T09:59:26Z
gateways.networking.istio.io           2020-01-27T09:59:26Z
handlers.config.istio.io               2020-01-27T09:59:26Z
httpapispecbindings.config.istio.io    2020-01-27T09:59:26Z
httpapispecs.config.istio.io           2020-01-27T09:59:26Z
instances.config.istio.io              2020-01-27T09:59:26Z
meshpolicies.authentication.istio.io   2020-01-27T09:59:26Z
policies.authentication.istio.io       2020-01-27T09:59:26Z
quotaspecbindings.config.istio.io      2020-01-27T09:59:26Z
quotaspecs.config.istio.io             2020-01-27T09:59:26Z
rbacconfigs.rbac.istio.io              2020-01-27T09:59:26Z
rules.config.istio.io                  2020-01-27T09:59:26Z
serviceentries.networking.istio.io     2020-01-27T09:59:26Z
servicerolebindings.rbac.istio.io      2020-01-27T09:59:26Z
serviceroles.rbac.istio.io             2020-01-27T09:59:26Z
sidecars.networking.istio.io           2020-01-27T09:59:26Z
templates.config.istio.io              2020-01-27T09:59:26Z
virtualservices.networking.istio.io    2020-01-27T09:59:26Z

Install the main Istio chart by rendering its template and applying it.
root@kub-master:~/istio-1.2.5# helm template install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
configmap/istio-galley-configuration created
configmap/prometheus created
configmap/istio-security-custom-resources created
configmap/istio created
configmap/istio-sidecar-injector created
serviceaccount/istio-galley-service-account created
serviceaccount/istio-ingressgateway-service-account created
serviceaccount/istio-mixer-service-account created
serviceaccount/istio-pilot-service-account created
serviceaccount/prometheus created
serviceaccount/istio-cleanup-secrets-service-account created
clusterrole.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
job.batch/istio-cleanup-secrets-1.2.5 created
serviceaccount/istio-security-post-install-account created
clusterrole.rbac.authorization.k8s.io/istio-security-post-install-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-security-post-install-role-binding-istio-system created
job.batch/istio-security-post-install-1.2.5 created
serviceaccount/istio-citadel-service-account created
serviceaccount/istio-sidecar-injector-service-account created
serviceaccount/istio-multi created
clusterrole.rbac.authorization.k8s.io/istio-galley-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-mixer-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrole.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-sidecar-injector-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-reader created
clusterrolebinding.rbac.authorization.k8s.io/istio-galley-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-mixer-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-sidecar-injector-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-multi created
role.rbac.authorization.k8s.io/istio-ingressgateway-sds created
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds created
service/istio-galley created
service/istio-ingressgateway created
service/istio-policy created
service/istio-telemetry created
service/istio-pilot created
service/prometheus created
service/istio-citadel created
service/istio-sidecar-injector created
deployment.apps/istio-galley created
deployment.apps/istio-ingressgateway created
deployment.apps/istio-policy created
deployment.apps/istio-telemetry created
deployment.apps/istio-pilot created
deployment.apps/prometheus created
deployment.apps/istio-citadel created
deployment.apps/istio-sidecar-injector created
horizontalpodautoscaler.autoscaling/istio-ingressgateway created
horizontalpodautoscaler.autoscaling/istio-policy created
horizontalpodautoscaler.autoscaling/istio-telemetry created
horizontalpodautoscaler.autoscaling/istio-pilot created
mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector created
poddisruptionbudget.policy/istio-galley created
poddisruptionbudget.policy/istio-ingressgateway created
poddisruptionbudget.policy/istio-policy created
poddisruptionbudget.policy/istio-telemetry created
poddisruptionbudget.policy/istio-pilot created
poddisruptionbudget.policy/istio-sidecar-injector created
attributemanifest.config.istio.io/istioproxy created
attributemanifest.config.istio.io/kubernetes created
instance.config.istio.io/requestcount created
instance.config.istio.io/requestduration created
instance.config.istio.io/requestsize created
instance.config.istio.io/responsesize created
instance.config.istio.io/tcpbytesent created
instance.config.istio.io/tcpbytereceived created
instance.config.istio.io/tcpconnectionsopened created
instance.config.istio.io/tcpconnectionsclosed created
handler.config.istio.io/prometheus created
rule.config.istio.io/promhttp created
rule.config.istio.io/promtcp created
rule.config.istio.io/promtcpconnectionopen created
rule.config.istio.io/promtcpconnectionclosed created
handler.config.istio.io/kubernetesenv created
rule.config.istio.io/kubeattrgenrulerule created
rule.config.istio.io/tcpkubeattrgenrulerule created
instance.config.istio.io/attributes created
destinationrule.networking.istio.io/istio-policy created
destinationrule.networking.istio.io/istio-telemetry created

Check whether Istio has been installed successfully.
root@kub-master:~/istio-1.2.5# kubectl get all -n istio-system
NAME                                          READY   STATUS      RESTARTS   AGE
pod/istio-citadel-555dbdfd6b-ksqzn            1/1     Running     0          25m
pod/istio-cleanup-secrets-1.2.5-fr6tj         0/1     Completed   0          25m
pod/istio-galley-6855ffd77f-5b2nd             1/1     Running     0          25m
pod/istio-ingressgateway-7cfcbf4fb8-ntmr5     1/1     Running     0          25m
pod/istio-init-crd-10-f4xjf                   0/1     Completed   0          25m
pod/istio-init-crd-11-ct2t7                   0/1     Completed   0          25m
pod/istio-init-crd-12-nwgp8                   0/1     Completed   0          25m
pod/istio-pilot-9589bcff5-lt85f               2/2     Running     0          25m
pod/istio-policy-9dbbb8ccd-s5lpc              2/2     Running     2          25m
pod/istio-security-post-install-1.2.5-l8cw2   0/1     Completed   0          25m
pod/istio-sidecar-injector-74f597fb84-kv2tn   1/1     Running     0          25m
pod/istio-telemetry-5d95788576-sr5nr          2/2     Running     1          25m
pod/prometheus-7d7b9f7844-bsh42               1/1     Running     0          25m


NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
service/istio-citadel            ClusterIP      10.108.238.135   <none>        8060/TCP,15014/TCP                                                                                                                           25m
service/istio-galley             ClusterIP      10.103.63.106    <none>        443/TCP,15014/TCP,9901/TCP                                                                                                                   25m
service/istio-ingressgateway     LoadBalancer   10.99.247.174    <pending>     15020:31884/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31236/TCP,15030:30003/TCP,15031:32047/TCP,15032:30130/TCP,15443:32711/TCP   25m
service/istio-pilot              ClusterIP      10.98.94.187     <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       25m
service/istio-policy             ClusterIP      10.98.153.137    <none>        9091/TCP,15004/TCP,15014/TCP                                                                                                                 25m
service/istio-sidecar-injector   ClusterIP      10.102.180.126   <none>        443/TCP                                                                                                                                      25m
service/istio-telemetry          ClusterIP      10.104.132.38    <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       25m
service/prometheus               ClusterIP      10.96.51.228     <none>        9090/TCP                                                                                                                                     25m


NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-citadel            1/1     1            1           25m
deployment.apps/istio-galley             1/1     1            1           25m
deployment.apps/istio-ingressgateway     1/1     1            1           25m
deployment.apps/istio-pilot              1/1     1            1           25m
deployment.apps/istio-policy             1/1     1            1           25m
deployment.apps/istio-sidecar-injector   1/1     1            1           25m
deployment.apps/istio-telemetry          1/1     1            1           25m
deployment.apps/prometheus               1/1     1            1           25m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-citadel-555dbdfd6b            1         1         1       25m
replicaset.apps/istio-galley-6855ffd77f             1         1         1       25m
replicaset.apps/istio-ingressgateway-7cfcbf4fb8     1         1         1       25m
replicaset.apps/istio-pilot-9589bcff5               1         1         1       25m
replicaset.apps/istio-policy-9dbbb8ccd              1         1         1       25m
replicaset.apps/istio-sidecar-injector-74f597fb84   1         1         1       25m
replicaset.apps/istio-telemetry-5d95788576          1         1         1       25m
replicaset.apps/prometheus-7d7b9f7844               1         1         1       25m


NAME                                                       REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-pilot            Deployment/istio-pilot            <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-policy           Deployment/istio-policy           <unknown>/80%   1         5         1          25m
horizontalpodautoscaler.autoscaling/istio-telemetry        Deployment/istio-telemetry        <unknown>/80%   1         5         1          25m

NAME                                          COMPLETIONS   DURATION   AGE
job.batch/istio-cleanup-secrets-1.2.5         1/1           2s         25m
job.batch/istio-init-crd-10                   1/1           12s        25m
job.batch/istio-init-crd-11                   1/1           11s        25m
job.batch/istio-init-crd-12                   1/1           13s        25m
job.batch/istio-security-post-install-1.2.5   1/1           8s         25m
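With the control plane running, the usual next step (as the SideCar-Injector pre-check above hints) is to enable automatic sidecar injection on the namespaces whose workloads should join the mesh, for example the default namespace:

kubectl label namespace default istio-injection=enabled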

By default, istio-ingressgateway is created as a LoadBalancer service, which is fine if you are on a cloud provider or have load-balancer software available. On a bare-metal setup I prefer to run it as a NodePort, which is easier to manage. If you want to do the same, edit the istio-ingressgateway service, replace the type LoadBalancer with NodePort, and save it.
root@kub-master:~/istio-1.2.5# kubectl edit service/istio-ingressgateway -n istio-system
service/istio-ingressgateway edited

You can see the istio-ingressgateway service now runs as a NodePort, so Istio can be accessed on the corresponding ports of any cluster Node (see the port-lookup sketch after the listing below).
root@kub-master:~/istio-1.2.5# kubectl get services -n istio-system
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-citadel            ClusterIP   10.108.238.135   <none>        8060/TCP,15014/TCP                                                                                                                           33m
istio-galley             ClusterIP   10.103.63.106    <none>        443/TCP,15014/TCP,9901/TCP                                                                                                                   33m
istio-ingressgateway     NodePort    10.99.247.174    <none>        15020:31884/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31236/TCP,15030:30003/TCP,15031:32047/TCP,15032:30130/TCP,15443:32711/TCP   33m
istio-pilot              ClusterIP   10.98.94.187     <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       33m
istio-policy             ClusterIP   10.98.153.137    <none>        9091/TCP,15004/TCP,15014/TCP                                                                                                                 33m
istio-sidecar-injector   ClusterIP   10.102.180.126   <none>        443/TCP                                                                                                                                      33m
istio-telemetry          ClusterIP   10.104.132.38    <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       33m
prometheus               ClusterIP   10.96.51.228     <none>        9090/TCP                                                                                                                                     33m
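If you need the assigned NodePorts in a script, a jsonpath query can pull them out. This assumes the Istio 1.2 port names http2 and https for the gateway's HTTP and HTTPS ports (check with kubectl get svc istio-ingressgateway -n istio-system -o yaml if your names differ); in the listing above these resolve to 31380 and 31390:

kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'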
