19/02/2018

A local Kubernetes set-up in 15 minutes

Author: Joeri Peeters

Containerization, with Docker as the leading platform, has revolutionized the way we manage and deploy applications. It has been a great help in standardization, maintainability, simplicity and faster deployments, and it has made moving apps to the cloud easier.

Delivering services … faster!

Building our apps and putting a container around them, so all application dependencies are encapsulated and easily deployable to... euh, some server? That's one thing. But what about managing this deployment? What about building the infrastructure and keeping control of what you are deploying, and where?

Enter Kubernetes (K8s). K8s provides components to manage infrastructure (deployment, scaling, management, scheduling, load balancing...) and it comes with a CLI, so you are in full control of your infrastructure.

Docker and K8s have been a catalyst for DevOps teams to build fully automated pipelines that enable them to ship software faster.

I hear you think: Sounds good…. but I’m not an Ops guy. I’m a dev.

Exactly! And that's where K8s will help you become a better developer (of cloud-native applications). In practice, you declare the state, dependencies and steps of an entire infrastructure setup and deployment in a yaml file, and K8s will make sure your actual applications are in the exact state you declared in that file. You use the kubectl CLI to manage your setup.
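To give you a taste, a (hypothetical) fragment of such a yaml file could declare a desired state like this, and a single kubectl command brings the cluster in line with it (we'll write complete, real files later in this post):

kind: Deployment
metadata:
  name: my-app      # hypothetical application name
spec:
  replicas: 3       # desired state: keep 3 copies of my-app running

kubectl apply -f my-app.yaml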

Pods, services and deployments

There is a lot to learn about the K8s ecosystem, so we will take some shortcuts, but as a developer you are mostly concerned with “Pods”, “Services” and “Deployments”.

Let’s quickly get to know them:

Pod

The smallest deployable unit of work. Basically, it hosts your application. It is linked directly to one or a group of your (Docker) containers.

A Pod gets assigned a unique cluster IP address. It knows stuff about your container, e.g. that you want to run a specific version of the image, or that you want to expose port 5555 on it.

The life of a Pod can be harsh. Pods are created, restarted, destroyed... but they are never resurrected. Typically they have a short life...
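For illustration, a minimal Pod manifest capturing exactly those two facts could look like this (my-pod and my-image are hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app           # labels let other objects find this pod
spec:
  containers:
  - name: my-app
    image: my-image:1.0   # run a specific version of the image
    ports:
    - containerPort: 5555 # expose port 5555 on the container

In practice you rarely create bare Pods like this; you let a Deployment create them for you, as we'll see below.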

You probably already wonder... ok, but what if we have dependencies between pods? How will they know which pods are available? Meet the Service below.

Service

Services expose your Pods without you having to worry about the harsh life those Pods are having. A Service defines the collection of Pods that belongs to it. It's important to note here that, although a Pod has an IP address, it is not reachable from outside the K8s cluster. You need a Service if you want your application to be reachable.
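A minimal sketch of a Service manifest (hypothetical names again): the selector is what defines the collection of pods that belong to the service, however often those pods get recreated.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app         # every pod carrying this label belongs to the service
  ports:
  - port: 80            # port the service listens on
    targetPort: 5555    # container port the traffic is forwarded to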

Deployment

A Deployment will create a ReplicaSet of x Pods (default: 1).

Remember that when we trigger a Deployment on Kubernetes, it will create a ReplicaSet of x Pods with containers inside them; Kubernetes never manages containers directly.
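Changing x later is a one-liner (my-app is a hypothetical deployment name):

kubectl scale deployment my-app --replicas=3

You can recognize the hierarchy (a Deployment creates a ReplicaSet, which creates Pods) in pod names, which follow the pattern <deployment>-<replicaset hash>-<pod id>, as we'll see further down.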

Obviously we only scratched the surface here and took some shortcuts in our explanations. Visit https://docs.docker.com/get-started/ and https://kubernetes.io/docs/tutorials/kubernetes-basics/ for more documentation.

BTW: Do you know where the name Kubernetes originates from? It's a reference to the Greek nautical term kybernetes, which means steersman or helmsman.

Ready? Read on!

What's our goal?

We'll be containerizing and deploying a typical three-tier application stack.

The front-end is an Angular 6 client running on NodeJS, exposed publicly. The backend is a Spring Boot REST application, and the repository layer is a standard MySQL database. The backend and repository are only reachable from inside the Minikube cluster. The client will use service discovery to make REST calls to the backend, and the backend will use service discovery to reach the repository node.

We forked the popular Spring PetClinic project and containerized it by defining Dockerfiles.

We’ll:

  • See how to containerize a simple Spring Boot and NodeJS application and deploy it on a local K8s cluster.
  • See how K8s is used to achieve container orchestration.
  • Get to know more about K8s' automatic service discovery.
  • See that stateful applications are easy to run using K8s.

Let’s go!

Prerequisites

This blog post is written on a Windows 10 machine. There are multiple ways to install the tools we need, and you'll find plenty of instructions on the web for other operating systems. Unfortunately, on Windows 10 tooling can be cumbersome. Let's install some basic stuff:

  • Install a JDK, Git, Docker for Windows and Apache Maven.
  • Install Chocolatey (Windows 10). We'll use Chocolatey, a package manager for Windows, to install Minikube and kubernetes-cli.
  • Open an administrative PowerShell, run the following command and follow all instructions.
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  • Install Minikube. To develop Kubernetes clusters on your local machine, you can use Minikube. Minikube runs a single-node K8s cluster in a VM on your localhost.
  choco install minikube
  • Install the Kubernetes CLI. kubectl is the K8s command-line tool used to deploy and manage applications on Kubernetes. Open an administrative cmd or PowerShell and run the following command:
  choco install kubernetes-cli
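  • Verify the installation (both commands just print version info):
  minikube version
  kubectl version --client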

Set up a Windows Primary Virtual Switch

Minikube's Hyper-V driver needs a virtual switch to give its VM network access. Open Hyper-V Manager, go to Virtual Switch Manager, create a new external virtual switch and name it "Primary Virtual Switch". We'll reference that exact name when starting Minikube later on.

Imperative commands vs Configuration files

We already mentioned that in K8s you declare stuff in a yaml file... but that's not the only way.

It’s important to note that before you start a K8s project, you should determine the way you’re going to manage kubernetes objects. Kubernetes objects are persistent entities in the Kubernetes system.

There are two ways to manage K8s objects.

  • via kubectl commands that directly manipulate objects, which then only live on your cluster.
  • via a yaml configuration file that is fed to kubectl and lives alongside your code.

In projects it makes sense to use imperative object configuration: you define everything in a yaml file and use kubectl to manipulate the K8s objects. You can store that file in your SCM tool, and the configuration can then also be part of your code review flow.

We'll be showing both styles: direct manipulation of the K8s objects, and the apply command via a yaml file. The yaml files are also available in the git repo.
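To make the difference concrete, here is the same (hypothetical) deployment in both styles. Directly manipulating the object:

kubectl run my-app --image=my-image:1.0 --port=5555

Via a configuration file kept under version control (the file name is up to you):

kubectl apply -f my-app-deployment.yaml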

If you want to know more and check the pros & cons of the object management approaches: https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/

Build the containers

We've containerized the Spring PetClinic project. The Dockerfiles are pretty straightforward, and you can have a look at them on GitHub.

To check out the projects to your local filesystem:

git clone https://github.com/inventj/spring-petclinic-rest
git clone https://github.com/inventj/spring-petclinic-angular

and build the docker images:

mvn clean install
docker build -t petclinic-backend:v1 .
docker build -t petclinic-angular:v1 .

Note that images end up in whichever Docker daemon your shell is pointing at. To make them visible to Minikube, build them in a shell that is configured for Minikube's daemon (see "Reusing the docker daemon" below), or rebuild them after configuring it.

Build the K8s configuration on the Minikube cluster

Configure Minikube

On our Windows 10 machine we need to set hyperv as the VM driver and make sure we use the Primary Virtual Switch we defined earlier.

 minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch" 
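Starting the VM can take a few minutes the first time. Once it's done, you can check that the single-node cluster is up:

minikube status
kubectl get nodes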
Reusing the docker daemon

Minikube comes with Docker and K8s on board. A Docker installation has its own daemon and image repository, so when you installed Docker and Minikube, you got two separate Docker daemons (and repositories). In order for Minikube to find the images we build, we'll point our shell's Docker configuration to the Minikube one. This way, you can reuse the Docker daemon managed by Minikube for other Docker use-cases as well.

To do this, we'll use the docker-env command. It displays the commands needed to configure your shell; copy the last line of its output and execute it in your shell.

minikube docker-env 
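On PowerShell the output looks roughly like this (the IP and paths will differ on your machine, and older Minikube versions may need minikube docker-env --shell powershell):

$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.1.100:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\you\.minikube\certs"
# Run this command to configure your shell:
# & minikube docker-env | Invoke-Expression

The commented last line applies all the variables in one go.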
Namespacing

K8s has a feature called Namespaces. They should be used when you have many users in multiple teams, because names of services, deployments, etc. could otherwise clash. For small environments, and certainly for our playground here, it's overkill, so we'll just use the default namespace. For reference, here is how you create a namespace:

 kubectl create namespace backoffice-apps 

When you have created a namespace, you can use it when creating K8s components like services and deployments. If you create a component in a namespace, you'll need to specify this namespace afterwards when doing other kubectl operations, using the --namespace= option. e.g.:

kubectl create deployment ... --namespace=backoffice-apps
kubectl get pods --namespace=backoffice-apps
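If you get tired of repeating --namespace on every call, you can set it as the default for your current context (Minikube's context is simply called minikube):

kubectl config set-context minikube --namespace=backoffice-apps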

Creating K8s components

Deployment

The following commands will create Deployments for our PetClinic containers. We are using the --image-pull-policy=IfNotPresent option because this ensures that K8s won't try to pull the image from the public Docker Hub, but will look in our local docker repository first.

kubectl run petclinic-backend --image=petclinic-backend:v1 --port=9966 --image-pull-policy=IfNotPresent
kubectl run petclinic-angular --image=petclinic-angular:v1 --port=4200 --image-pull-policy=IfNotPresent
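For reference, a declarative sketch of roughly the same backend deployment (the exact labels and apiVersion that kubectl run generates on your cluster may differ):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: petclinic-backend
  template:
    metadata:
      labels:
        run: petclinic-backend
    spec:
      containers:
      - name: petclinic-backend
        image: petclinic-backend:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9966

You would feed it to the cluster with kubectl apply -f petclinic-backend-deployment.yaml (the file name is up to you).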

You can see the yaml configuration of the deployment like this:

kubectl get deployments/petclinic-backend --export -o yaml 

Another useful command to see what happened is describe. The Events section gives information on what's happening with your pod.

kubectl describe pods petclinic-backend-bf84b6847-r8hcr 
Service

We want the petclinic-backend to be available only to our NodeJS client Pod. Therefore we are launching a service which exposes the deployment on a cluster-internal IP (type ClusterIP) on port 9966. A service of type ClusterIP is only reachable from within the cluster, not from the outside world!

kubectl expose deployment petclinic-backend 
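The declarative sketch of the same service would look roughly like this (the run=petclinic-backend selector matches the label kubectl run put on our pods, as the describe output below confirms):

apiVersion: v1
kind: Service
metadata:
  name: petclinic-backend
spec:
  type: ClusterIP
  selector:
    run: petclinic-backend
  ports:
  - port: 9966
    targetPort: 9966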

Let’s check if our service is running:

kubectl get services
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP    4d
petclinic-backend   ClusterIP   10.111.166.106   <none>        9966/TCP   2s

And let’s check some details about the petclinic-backend service:

kubectl describe service petclinic-backend
Name:              petclinic-backend
Namespace:         default
Labels:            run=petclinic-backend
Annotations:       <none>
Selector:          run=petclinic-backend
Type:              ClusterIP
IP:                10.105.130.119
Port:              <unset>  9966/TCP
TargetPort:        9966/TCP
Endpoints:         172.17.0.5:9966
Session Affinity:  None
Events:            <none>

You might be tempted here to visit (in my case) http://172.17.0.5:9966/petclinic/swagger-ui.html, but no, it won't work, because we are not exposing this service to the outside world.

Next up: the NodeJS client application. For this one we will use a NodePort, because we want to expose the NodeJS application outside our cluster, so it's available in our browser.

kubectl expose deployment petclinic-angular --type=NodePort 

Now that the service is exposed, let's check on which port it is reachable (NodePort):

kubectl describe service petclinic-angular
Name:                     petclinic-angular
Namespace:                default
Labels:                   run=petclinic-angular
Annotations:              <none>
Selector:                 run=petclinic-angular
Type:                     NodePort
IP:                       10.101.149.33
Port:                     <unset>  4200/TCP
TargetPort:               4200/TCP
NodePort:                 <unset>  30473/TCP
Endpoints:
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Access the service via its exposed NodePort

minikube service petclinic-angular
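minikube service looks up the NodePort for you and opens the service in your default browser. If you just want the URL (to curl it, for example), you can add the --url flag:

minikube service petclinic-angular --url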