[Get Hands Dirty] Kubernetes Part 1: Deploying Jhipster Generated Project on Minikube & Kind
Sometimes I feel confused after watching or studying tutorials on the internet, or face a lot of issues when trying to apply them to real-world use cases. That's why I decided to start this Get Hands Dirty series to apply what I have learned from tutorials. In the meantime, I will document the solutions for those odd or unexpected issues and requirements.
Goals
This is my first time trying Kubernetes. Before deploying to a real Kubernetes cluster, let's try local Kubernetes distributions such as minikube and kind first to reduce the experiment costs.
Update: Since a complete tutorial about Kubernetes on AWS EKS is already available here, I will not write about deploying Kubernetes on AWS EKS in the future.
Prerequisites
- Basic understanding of Kubernetes and Docker (I will skip explaining these terms since there are plenty of basic explanations available online)
- Installed JHipster, Docker, Minikube, kubectl, and kind (I will skip the installation and configuration steps since there are lots of detailed tutorials available on the internet)
Journey Start
Agenda
- Getting a jhipster project
- Build docker image
- Prepare kubernetes deployment scripts
- Deployment on minikube
- Deployment on kind
Getting a jhipster project
To start a deployment with Kubernetes, the very first thing to do is prepare a project.
Put your cloned/generated project under a clean folder for later use.
Since I am using a JHipster-generated project as the example here, you may refer to here to create your own project.
Or you may clone my project from here directly.
Build Your Docker Image
After you get the JHipster-generated project, you can build it directly as a Docker image with the generated config.

```shell
./mvnw -ntp -Pprod verify jib:dockerBuild
```
You might hit a Docker image pull failure; just log in to your Docker account in the terminal you use to build the image. (Reference)
After the build succeeds, tag and push your Docker image to your Docker Hub account.

```shell
docker image tag local-image:tagname new-repo:tagname
docker push new-repo:tagname
```
The `new-repo` here is the combination of your Docker Hub username and the repository name for your images. For example, if I am using `zy-example` as the username and `bogateway` as the repository name, my commands will be:

```shell
docker image tag bogateway zy-example/bogateway:v0.0.1
docker push zy-example/bogateway:v0.0.1
```
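In general, the remote reference is just `<username>/<repository>:<tag>`. A minimal sketch of composing it in shell (the values are the same examples as above; the docker lines are left commented out since they need Docker installed and a logged-in session):

```shell
# Compose the remote image reference from its parts
DOCKER_USER="zy-example"   # your Docker Hub username
REPO="bogateway"           # repository name on Docker Hub
TAG="v0.0.1"               # image version tag
REMOTE_IMAGE="${DOCKER_USER}/${REPO}:${TAG}"
echo "${REMOTE_IMAGE}"     # prints: zy-example/bogateway:v0.0.1

# Then tag and push (uncomment when docker is available and you are logged in):
# docker image tag "${REPO}" "${REMOTE_IMAGE}"
# docker push "${REMOTE_IMAGE}"
```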
Prepare Kubernetes Deployment Scripts
JHipster provides a sub-generator that creates Kubernetes deployment scripts for JHipster projects; we will use it to create those scripts and ease our setup.
The following instructions are adapted from here.
Term definitions I will use below:
- Root Folder: the root folder of your project, e.g. C:\workspace\example\bogateway\
- Outer Folder: the parent folder of the Root Folder, e.g. C:\workspace\example\
- Open a terminal (I was using Windows PowerShell, but any Linux-style terminal should work fine)
- Create a new folder named `k8s` in the Outer Folder
- Run `cd k8s; jhipster kubernetes`
- Configure accordingly (you can refer to the JHipster website: here)
After configuration, you will have your project's Kubernetes setup files under `<your-project-name>-k8s`.
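The folder preparation above can be sketched in shell form (the paths are examples; the `jhipster kubernetes` call itself is interactive, so it is left as a comment):

```shell
# From the Outer Folder (the parent of your project's Root Folder)
mkdir -p k8s
cd k8s
# jhipster kubernetes   # interactive: answer the prompts per the JHipster docs
```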
Deployment on minikube
Deploying on an actual Kubernetes cluster will incur some cost, so you might want to try on a local minikube before going for it.
For basic minikube setup and startup, refer to here.
After setting up minikube, you can run the scripts generated by `jhipster kubernetes` just now to start the deployment on minikube.
You can choose to deploy everything at once: change your terminal directory to the `k8s` folder and run:

```shell
./kubectl-apply.sh
```
Or you can deploy a specific service only:

```shell
kubectl apply -f ./bogateway-k8s/
```
Remember to deploy `registry-k8s` first if you are using jhipster-registry for your JHipster projects.
At this point, you have successfully deployed your project on minikube. You can open a new terminal and run `minikube dashboard` to view your deployment status in the browser.
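Besides the dashboard, a few `kubectl` commands are handy for the same check (`bogateway` is the example app label from my project; the block is guarded with `|| true` and a `command -v` check so the sketch degrades gracefully when `kubectl` or the cluster is unavailable):

```shell
# Inspect the rollout from the terminal; skip gracefully without kubectl
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods || true                        # pod phases: Pending, Running, ...
  kubectl describe pods -l app=bogateway || true  # events, e.g. image pull errors
  kubectl logs -l app=bogateway --tail=20 || true # recent application logs
else
  echo "kubectl not found; install it to inspect the cluster"
fi
```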
As life always welcomes us with various surprises, my first deployment also came with a lot of unexpected issues. I will list those issues and the solutions I used to overcome them.
Failed to pull the project image
When I deployed my project for the first time, the logs in my project's pods showed that pulling my project's Docker image failed even though I did push it to Docker Hub. After googling for solutions, it turns out to be a minikube issue where pulling the Docker image fails when a locally built image already exists. To overcome this, open the `<project>-deployment.yml` generated by `jhipster kubernetes`, change the `imagePullPolicy` value to `IfNotPresent`, and deploy the project again.
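For reference, the relevant fragment of the deployment file then looks roughly like this (the container name and image are examples from my project; yours will differ):

```yaml
# Fragment of bogateway-k8s/bogateway-deployment.yml (inside the pod template)
template:
  spec:
    containers:
      - name: bogateway-app
        image: zy-example/bogateway:v0.0.1
        imagePullPolicy: IfNotPresent  # use the local image when it already exists
```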
Insufficient CPU
There are two ways to handle this issue.
The first method is to give your Docker more resources; the screenshot shows the configuration in Docker Desktop.
Another method is to add more nodes to your minikube cluster with `minikube node add`. For more on minikube node control, refer here.
External Endpoints Always Pending
If you are not using Ingress to load-balance your minikube deployments, you have to run `minikube tunnel` to map the minikube ports to your local ports.
Deployment on kind
Kind is another local kubernetes cluster that you can use to simulate deployment on kubernetes.
You can refer to the kind configuration documentation here.
Before starting the kind cluster, we have to add configuration for the port mappings we need to test our project in a local browser. Kind differs from minikube here (without Ingress): while minikube can use `minikube tunnel` to do port forwarding from the cluster to the local machine, with kind you have to put your port mapping requirements into the configuration when starting the cluster. As of the current version (`v0.11.0`) of kind, updating a running cluster's configuration is not supported yet, so you have to prepare the configuration file and start the cluster with it. Create a `kind-config.yaml` and copy the following configuration into the file.
```yaml
# this config file contains all config fields with comments
# NOTE: this is not a particularly useful config file
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
# patch it further using a JSON 6902 patch
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
# 1 control plane node
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950
    hostPort: 8080
```
As shown in the configuration file, we are mapping the node's port 30950 to the host machine's port 8080, so we need to update our deployment scripts to expose the service on the same node port too.
Modify your `<project>-k8s/<project>-service.yml` as below.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: bogateway
  namespace: default
  labels:
    app: bogateway
spec:
  selector:
    app: bogateway
  type: NodePort
  ports:
  - name: http
    port: 8080
    nodePort: 30950
```
The major changes are `type` and `ports` under `spec`. Note that the `nodePort` should be the same port number as `containerPort` in `kind-config.yaml`.
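To make the wiring explicit: a request to `127.0.0.1:8080` reaches the kind node because `extraPortMappings` forwards host port 8080 to node port 30950; the Service listens on that same `nodePort` and forwards to the application's port 8080. The two sections that must agree are:

```yaml
# kind-config.yaml (node level)
extraPortMappings:
  - containerPort: 30950  # must equal the Service's nodePort below
    hostPort: 8080        # the port you open in the browser

# <project>-service.yml (Service level)
ports:
  - name: http
    port: 8080            # the application's port
    nodePort: 30950       # must equal containerPort above
```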
- Start your kind cluster:

```shell
kind create cluster --config kind-config.yaml
```

- Deploy your jhipster-registry:

```shell
kubectl apply -f .\registry-k8s\
```

- Deploy your project (I am using bogateway as the example here):

```shell
kubectl apply -f .\bogateway-k8s\
```
After deployment, you can use `kubectl get pods` and `kubectl get services` to check the status of your pods and services.
Once all pods are ready, you can open `127.0.0.1:8080` in your local machine's browser to access bogateway.
To Be Continued…
We now know how to deploy JHipster projects on local Kubernetes. Before deploying to a real Kubernetes cluster, I would like to explore Ingress first, since I found quite a lot of places mentioning this keyword. Please stay tuned.