Wednesday, January 29, 2020

Install and configure Spinnaker on Kubernetes

We will be installing and configuring Spinnaker in our Kubernetes cluster to streamline the CI/CD process. Spinnaker was created by Netflix to handle their CI/CD on Kubernetes and was later open-sourced. Spinnaker is a great tool with built-in integrations for the major cloud providers; we, however, will be installing it on a bare-metal Kubernetes cluster.

Fetch the chart's default configuration file from the helm/charts GitHub repo and make your changes accordingly.
root@kub-master:~# wget https://raw.githubusercontent.com/helm/charts/master/stable/spinnaker/values.yaml

I have disabled all the persistent-storage entries in values.yaml since this is not a production deployment; for a production deployment, persistent storage is highly recommended. My values.yaml file looks like this after disabling persistence.
root@kub-master:~# cat values.yaml
halyard:
  spinnakerVersion: 1.16.1
  image:
    repository: gcr.io/spinnaker-marketplace/halyard
    tag: 1.23.2
    pullSecrets: []
  # Set to false to disable persistence data volume for halyard
  persistence:
    enabled: false
  # Provide additional parameters to halyard deploy apply command
  additionalInstallParameters: []
  # Provide a config map with Hal commands that will be run after the core config (storage) is set up
  # The config map should contain a script in the config.sh key
  additionalScripts:
    enabled: false
    configMapName: my-halyard-config
    configMapKey: config.sh
    # If you'd rather do an inline script, set create to true and put the content in the data dict like you would a configmap
    # The content will be passed through `tpl`, so value interpolation is supported.
    create: false
    data: {}
  additionalSecrets:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created secret rather than feeding data in via helm.
    # name:
  additionalConfigMaps:
    create: false
    data: {}
    ## Uncomment if you want to use a pre-created ConfigMap rather than feeding data in via helm.
    # name:
  ## Define custom profiles for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-profiles
  ## The contents of the files will be passed through `tpl`, so value interpolation is supported.
  additionalProfileConfigMaps:
    data: {}
      ## if you're running spinnaker behind a reverse proxy such as a GCE ingress
      ## you may need the following profile settings for the gate profile.
      ## see https://github.com/spinnaker/spinnaker/issues/1630
      ## otherwise it's harmless and will likely become default behavior in the future,
      ## according to the linked GitHub issue.
      # gate-local.yml:
      #   server:
      #     tomcat:
      #       protocolHeader: X-Forwarded-Proto
      #       remoteIpHeader: X-Forwarded-For
      #       internalProxies: .*
      #       httpsServerPort: X-Forwarded-Port

  ## Define custom settings for Spinnaker services. Read more for details:
  ## https://www.spinnaker.io/reference/halyard/custom/#custom-service-settings
  ## You can use it to add annotations for pods, override the image, etc.
  additionalServiceSettings: {}
    # deck.yml:
    #   artifactId: gcr.io/spinnaker-marketplace/deck:2.9.0-20190412012808
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>
    # clouddriver.yml:
    #   kubernetes:
    #     podAnnotations:
    #       iam.amazonaws.com/role: <role_arn>

  ## Populate to provide a custom local BOM for Halyard to use for deployment. Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  bom: ~
  #   artifactSources:
  #     debianRepository: https://dl.bintray.com/spinnaker-releases/debians
  #     dockerRegistry: gcr.io/spinnaker-marketplace
  #     gitPrefix: https://github.com/spinnaker
  #     googleImageProject: marketplace-spinnaker-release
  #   services:
  #     clouddriver:
  #       commit: 031bcec52d6c3eb447095df4251b9d7516ed74f5
  #       version: 6.3.0-20190904130744
  #     deck:
  #       commit: b0aac478e13a7f9642d4d39479f649dd2ef52a5a
  #       version: 2.12.0-20190916141821
  #     ...
  #   timestamp: '2019-09-16 18:18:44'
  #   version: 1.16.1

  ## Define local configuration for Spinnaker services.
  ## The contents of these files would be copies of the configuration normally retrieved from
  ## `gs://halconfig/<service-name>`, but instead need to be available locally on the halyard pod to facilitate
  ## offline installation. This would typically be used along with a custom `bom:` with the `local:` prefix on a
  ## service version.
  ## Read more for details:
  ## https://www.spinnaker.io/guides/operator/custom-boms/#boms-and-configuration-on-your-filesystem
  ## The key for each entry must be the name of the service and a file name separated by the '_' character.
  serviceConfigs: {}
  # clouddriver_clouddriver-ro.yml: |-
  #   ...
  # clouddriver_clouddriver-rw.yml: |-
  #   ...
  # clouddriver_clouddriver.yml: |-
  #   ...
  # deck_settings.json: |-
  #   ...
  # echo_echo.yml: |-
  #   ...

  ## Uncomment if you want to add extra commands to the init script
  ## run by the init container before halyard is started.
  ## The content will be passed through `tpl`, so value interpolation is supported.
  # additionalInitScript: |-

  ## Uncomment if you want to add annotations on halyard and install-using-hal pods
  # annotations:
  #   iam.amazonaws.com/role: <role_arn>

  ## Uncomment the following resources definitions to control the cpu and memory
  # resources allocated for the halyard pod
  resources: {}
    # requests:
    #   memory: "1Gi"
    #   cpu: "100m"
    # limits:
    #   memory: "2Gi"
    #   cpu: "200m"

  ## Uncomment if you want to set environment variables on the Halyard pod.
  # env:
  #   - name: JAVA_OPTS
  #     value: -Dhttp.proxyHost=proxy.example.com
  customCerts:
    ## Enable to override the default cacerts with your own one
    enabled: false
    secretName: custom-cacerts

# Define which registries and repositories you want available in your
# Spinnaker pipeline definitions
# For more info visit:
#   https://www.spinnaker.io/setup/providers/docker-registry/

# Configure your Docker registries here
dockerRegistries:
- name: dockerhub
  address: index.docker.io
  repositories:
    - library/alpine
    - library/ubuntu
    - library/centos
    - library/nginx
# - name: gcr
#   address: https://gcr.io
#   username: _json_key
#   password: '<INSERT YOUR SERVICE ACCOUNT JSON HERE>'
#   email: 1234@5678.com
# - name: ecr
#   address: <AWS-ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
#   username: AWS
#   passwordCommand: "aws --region <REGION> ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"

# If you don't want to put your passwords into a values file
# you can use a pre-created secret instead of putting passwords
# (specify secret name in below `dockerRegistryAccountSecret`)
# per account above with data in the format:
# <name>: <password>

# dockerRegistryAccountSecret: myregistry-secrets

kubeConfig:
  # Use this when you want to register arbitrary clusters with Spinnaker
  # Upload your ~/.kube/config to a secret
  enabled: false
  secretName: my-kubeconfig
  secretKey: config
  # Use this when you want to configure halyard to reference a kubeconfig from s3
  # This allows you to keep your kubeconfig in an encrypted s3 bucket
  # For more info visit:
  #   https://www.spinnaker.io/reference/halyard/secrets/s3-secrets/#secrets-in-s3
  # encryptedKubeconfig: encrypted:s3!r:us-west-2!b:mybucket!f:mykubeconfig
  # List of contexts from the kubeconfig to make available to Spinnaker
  contexts:
  - default
  deploymentContext: default
  omittedNameSpaces:
  - kube-system
  - kube-public
  onlySpinnakerManaged:
    enabled: false

  # When false, clouddriver will skip the permission checks for all kubernetes kinds at startup.
  # This can save a great deal of time during clouddriver startup when you have many kubernetes
  # accounts configured. This disables the log messages at startup about missing permissions.
  checkPermissionsOnStartup: true

  # A list of resource kinds this Spinnaker account can deploy to and will cache.
  # When no kinds are configured, this defaults to 'all kinds'.
  # kinds:
  # -

  # A list of resource kinds this Spinnaker account cannot deploy to or cache.
  # This can only be set when kinds is empty or not set.
  # omittedKinds:
  # -

# Change this if you'd like to expose Spinnaker outside the cluster
ingress:
  enabled: false
  # host: spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

ingressGate:
  enabled: false
  # host: gate.spinnaker.example.org
  # annotations:
    # ingress.kubernetes.io/ssl-redirect: 'true'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # tls:
  #  - secretName: -tls
  #    hosts:
  #      - domain.com

# spinnakerFeatureFlags is a list of Spinnaker feature flags to enable
# Ref: https://www.spinnaker.io/reference/halyard/commands/#hal-config-features-edit
# spinnakerFeatureFlags:
#   - artifacts
#   - pipeline-templates
spinnakerFeatureFlags:
  - artifacts

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# nodeSelector to provide to each of the Spinnaker components
nodeSelector: {}

# Redis password to use for the in-cluster redis service
# Enable redis to use in-cluster redis
redis:
  enabled: true
  # External Redis option will be enabled if in-cluster redis is disabled
  external:
    host: "<EXTERNAL-REDIS-HOST-NAME>"
    port: 6379
    # password: ""
  password: password
  nodeSelector: {}
  cluster:
    enabled: false
# Uncomment if you don't want to create a PVC for redis
  master:
    persistence:
      enabled: false

# Minio access/secret keys for the in-cluster S3 usage
# Minio is not exposed publicly
minio:
  enabled: true
  imageTag: RELEASE.2019-02-13T19-48-27Z
  serviceType: ClusterIP
  accessKey: spinnakeradmin
  secretKey: spinnakeradmin
  bucket: "spinnaker"
  nodeSelector: {}
# Uncomment if you don't want to create a PVC for minio
  persistence:
    enabled: false

# Google Cloud Storage
gcs:
  enabled: false
  project: my-project-name
  bucket: "<GCS-BUCKET-NAME>"
  ## if jsonKey is set, will create a secret containing it
  jsonKey: '<INSERT CLOUD STORAGE JSON HERE>'
  ## override the name of the secret to use for jsonKey, if `jsonKey`
  ## is empty, it will not create a secret assuming you are creating one
  ## external to the chart. the key for that secret should be `key.json`.
  secretName:

# AWS Simple Storage Service
s3:
  enabled: false
  bucket: "<S3-BUCKET-NAME>"
  # rootFolder: "front50"
  # region: "us-east-1"
  # endpoint: ""
  # accessKey: ""
  # secretKey: ""
  # assumeRole: "<role to assume>"
  ## Here you can pass extra arguments to configure s3 storage options
  extraArgs: []
  #  - "--path-style-access true"

# Azure Storage Account
azs:
  enabled: false
#   storageAccountName: ""
#   accessKey: ""
#   containerName: "spinnaker"

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccounts to use.
  # If left blank it is auto-generated from the fullname of the release
  halyardName:
  spinnakerName:
securityContext:
  # Specifies permissions to write for user/group
  runAsUser: 1000
  fsGroup: 1000

We will use Helm to install our Spinnaker setup, taking values from values.yaml. (The --name flag below is Helm 2 syntax; with Helm 3 the release name is passed positionally: helm install my-spinnaker -f values.yaml stable/spinnaker --namespace spinnaker.)
root@kub-master:~# helm install --name my-spinnaker -f values.yaml stable/spinnaker --namespace spinnaker
NAME:   my-spinnaker
LAST DEPLOYED: Tue Jan 28 19:35:17 2020
NAMESPACE: spinnaker
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRoleBinding
NAME                              AGE
my-spinnaker-spinnaker-spinnaker  3m9s

==> v1/ConfigMap
NAME                                                   AGE
my-spinnaker-minio                                     3m9s
my-spinnaker-spinnaker-additional-profile-config-maps  3m9s
my-spinnaker-spinnaker-halyard-config                  3m9s
my-spinnaker-spinnaker-halyard-init-script             3m9s
my-spinnaker-spinnaker-service-settings                3m9s

==> v1/Pod(related)
NAME                                 AGE
my-spinnaker-minio-668c59b766-bhw6p  3m9s
my-spinnaker-redis-master-0          3m9s
my-spinnaker-spinnaker-halyard-0     3m9s

==> v1/RoleBinding
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/Secret
NAME                             AGE
my-spinnaker-minio               3m9s
my-spinnaker-redis               3m9s
my-spinnaker-spinnaker-registry  3m9s

==> v1/Service
NAME                            AGE
my-spinnaker-minio              3m9s
my-spinnaker-redis-master       3m9s
my-spinnaker-spinnaker-halyard  3m9s

==> v1/ServiceAccount
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1/StatefulSet
NAME                            AGE
my-spinnaker-spinnaker-halyard  3m9s

==> v1beta2/Deployment
NAME                AGE
my-spinnaker-minio  3m9s

==> v1beta2/StatefulSet
NAME                       AGE
my-spinnaker-redis-master  3m9s


NOTES:
1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $DECK_POD 9000

  export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace spinnaker $GATE_POD 8084

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

Our Spinnaker setup has been installed successfully. Now let's run the commands suggested in the chart notes to create the port-forwarding tunnels.
root@kub-master:~# export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
root@kub-master:~# kubectl port-forward --namespace spinnaker $DECK_POD 9000 &
[1] 24013
root@kub-master:~# Forwarding from 127.0.0.1:9000 -> 9000

root@kub-master:~# export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")

root@kub-master:~# kubectl port-forward --namespace spinnaker $GATE_POD 8084 &
[2] 24371
root@kub-master:~# Forwarding from 127.0.0.1:8084 -> 8084
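Before opening a browser, you can check that the tunnels respond locally. This is a sketch run on the same host as the tunnels; Gate is a Spring Boot service and in this release answers on /health (the endpoint path may differ across versions):

```shell
# Probe the tunnels created by kubectl port-forward above.
curl -sI http://127.0.0.1:9000 | head -n 1    # Deck UI: expect an HTTP status line
curl -s  http://127.0.0.1:8084/health         # Gate API health check
```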

Let's check our spinnaker deployment.
root@kub-master:~# kubectl get all -n spinnaker
NAME                                       READY   STATUS      RESTARTS   AGE
pod/my-spinnaker-install-using-hal-nk854   0/1     Completed   0          5m22s
pod/my-spinnaker-minio-668c59b766-bhw6p    1/1     Running     0          5m24s
pod/my-spinnaker-redis-master-0            1/1     Running     0          5m24s
pod/my-spinnaker-spinnaker-halyard-0       1/1     Running     0          5m24s
pod/spin-clouddriver-67474d5679-gcd52      1/1     Running     0          2m23s
pod/spin-deck-565f65c6b5-lfbx9             1/1     Running     0          2m25s
pod/spin-echo-98f95c9f5-m9kz8              0/1     Running     0          2m27s
pod/spin-front50-5b774b446b-mps7m          0/1     Running     0          2m21s
pod/spin-gate-78f5787989-5ns7x             1/1     Running     0          2m25s
pod/spin-igor-75f89f6d58-5k2fc             1/1     Running     0          2m24s
pod/spin-orca-5bd445469-sr8n9              1/1     Running     0          2m21s
pod/spin-rosco-c757f4488-lq6x9             1/1     Running     0          2m20s


NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP   5m24s
service/my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP   5m24s
service/my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP   5m24s
service/spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP   2m31s
service/spin-deck                        ClusterIP   10.105.31.148    <none>        9000/TCP   2m34s
service/spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP   2m34s
service/spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP   2m33s
service/spin-gate                        ClusterIP   10.109.86.157    <none>        8084/TCP   2m33s
service/spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP   2m32s
service/spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP   2m32s
service/spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP   2m33s


NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-spinnaker-minio   1/1     1            1           5m24s
deployment.apps/spin-clouddriver     1/1     1            1           2m23s
deployment.apps/spin-deck            1/1     1            1           2m25s
deployment.apps/spin-echo            0/1     1            0           2m27s
deployment.apps/spin-front50         0/1     1            0           2m21s
deployment.apps/spin-gate            1/1     1            1           2m25s
deployment.apps/spin-igor            1/1     1            1           2m24s
deployment.apps/spin-orca            1/1     1            1           2m21s
deployment.apps/spin-rosco           1/1     1            1           2m20s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/my-spinnaker-minio-668c59b766   1         1         1       5m24s
replicaset.apps/spin-clouddriver-67474d5679     1         1         1       2m23s
replicaset.apps/spin-deck-565f65c6b5            1         1         1       2m25s
replicaset.apps/spin-echo-98f95c9f5             1         1         0       2m27s
replicaset.apps/spin-front50-5b774b446b         1         1         0       2m21s
replicaset.apps/spin-gate-78f5787989            1         1         1       2m25s
replicaset.apps/spin-igor-75f89f6d58            1         1         1       2m24s
replicaset.apps/spin-orca-5bd445469             1         1         1       2m21s
replicaset.apps/spin-rosco-c757f4488            1         1         1       2m20s

NAME                                              READY   AGE
statefulset.apps/my-spinnaker-redis-master        1/1     5m24s
statefulset.apps/my-spinnaker-spinnaker-halyard   1/1     5m24s


NAME                                       COMPLETIONS   DURATION   AGE
job.batch/my-spinnaker-install-using-hal   1/1           3m7s       5m22s

Since I will be accessing Spinnaker via NodePort, I will edit the following services to change their type from ClusterIP to NodePort.
root@kub-master:~# kubectl edit service/spin-deck -n spinnaker
service/spin-deck edited
root@kub-master:~# kubectl edit service/spin-gate -n spinnaker
service/spin-gate edited
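kubectl edit opens an interactive editor; the same change can be scripted with kubectl patch, which is equivalent to the manual edits above:

```shell
# Non-interactive alternative: switch the Service type via a merge patch.
kubectl patch service spin-deck -n spinnaker -p '{"spec":{"type":"NodePort"}}'
kubectl patch service spin-gate -n spinnaker -p '{"spec":{"type":"NodePort"}}'
```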
root@kub-master:~# kubectl get services -n spinnaker
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-spinnaker-minio               ClusterIP   None             <none>        9000/TCP         16h
my-spinnaker-redis-master        ClusterIP   10.104.160.74    <none>        6379/TCP         16h
my-spinnaker-spinnaker-halyard   ClusterIP   None             <none>        8064/TCP         16h
spin-clouddriver                 ClusterIP   10.99.31.174     <none>        7002/TCP         16h
spin-deck                        NodePort    10.105.31.148    <none>        9000:30306/TCP   16h
spin-echo                        ClusterIP   10.97.251.129    <none>        8089/TCP         16h
spin-front50                     ClusterIP   10.100.81.247    <none>        8080/TCP         16h
spin-gate                        NodePort    10.109.86.157    <none>        8084:30825/TCP   16h
spin-igor                        ClusterIP   10.109.198.34    <none>        8088/TCP         16h
spin-orca                        ClusterIP   10.104.52.95     <none>        8083/TCP         16h
spin-rosco                       ClusterIP   10.110.146.173   <none>        8087/TCP         16h

Now our spin-deck and spin-gate services are exposed as NodePort and can be reached on their assigned ports. In my case, I can access the UI via any node's IP on port 30306.
[Screenshot: Spinnaker UI reached via the node IP on port 30306]

To customize Spinnaker or make any further changes, open a shell in the Halyard pod with the following command.
root@kub-master:~# kubectl exec --namespace spinnaker -it my-spinnaker-spinnaker-halyard-0 bash
spinnaker@my-spinnaker-spinnaker-halyard-0:/workdir$
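From this shell, configuration is driven with the hal CLI. A typical session might look like the following sketch (the version number is an example; substitute your own):

```shell
# Run inside the Halyard pod.
hal version list                           # list available Spinnaker releases
hal config version edit --version 1.16.1   # pin the release to deploy
hal deploy apply                           # roll the configuration out to the cluster
```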
