[Get Hands Dirty] Kubernetes Part 2: The love between Kubernetes (Minikube + Kind) and Ingress

Zong Yuan
Jul 6, 2021

For this section, I would like to write about my first-hand experience of setting up an Ingress Controller on minikube and kind. As in the previous section, I will skip the Ingress definition here, since the official Kubernetes documentation covers it better than my personal understanding could.

Update: Since there is already a complete tutorial about Kubernetes on AWS EKS here, I will not write about deploying Kubernetes on AWS EKS in the future.

Ingress on minikube

Actually, I failed to access the ingress on minikube by opening the URL (hello-world.info) in a browser. I did some research, and some sources mentioned that ingress on minikube is not supported with the current Docker Desktop (I am using Windows 10). That is why I will skip the detailed minikube steps here; just follow the official steps here, and once the configuration is done, use minikube ssh to log into the minikube node, then run curl hello-world.info to verify your configuration.
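
As a rough sketch of that verification flow (assuming you have already deployed the hello-world example from the official tutorial and the Ingress host is hello-world.info):

minikube addons enable ingress   # enable the bundled NGINX ingress controller
minikube ssh                     # log into the minikube node
curl hello-world.info            # from inside the node, this should return the app's response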

Ingress on kind

First, let’s prepare a kind configuration file, example-kind.yaml.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

The configuration above enables port mapping between the local machine and the kind cluster. We are mapping ports 80 and 443 in this example.

To remove the existing kind cluster:
kind delete cluster

To create a new kind cluster with the specified configuration:
kind create cluster --config example-kind.yaml
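
To double-check that the cluster came up (the cluster is named kind by default unless you pass --name):

kind get clusters                          # should list the new cluster
kubectl cluster-info --context kind-kind   # confirm kubectl can reach it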

After the kind cluster is created, deploy Ingress NGINX on kind:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml

Use the following command to wait for Ingress NGINX to be ready on kind:
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s

The command will return once the Ingress NGINX controller is ready.
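
If you are curious what the wait is actually checking, the controller pod lives in the ingress-nginx namespace and you can watch its status directly:

kubectl get pods --namespace ingress-nginx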

The error screen you may see if you apply an Ingress before the controller is ready: failed calling webhook “validate.nginx.ingress.kubernetes.io”

After installing Ingress NGINX, let’s modify the Kubernetes scripts from the last article so that the services are exposed through Ingress instead of NodePort. Go to the k8s folder and run jhipster kubernetes again; this time choose Ingress at “Choose the Kubernetes service type for your edge services”, and choose NGINX Ingress at “Choose the Kubernetes Ingress type”.

Choose Ingress and NGINX Ingress at jhipster kubernetes

After JHipster generates the Kubernetes scripts, you will find the Ingress resource configuration file called <project>-ingress.yml under k8s/<project>-k8s. The default Ingress resource configuration allows any hostname to reach your exposed service; however, you can add a host under rules to make it available only for specific domains.

The example here shows that only hello-world.info is allowed to access the bogateway service
The spec -> type is ClusterIP instead of NodePort after selecting Ingress as the edge service type
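
As a sketch of what that host rule ends up looking like (the API version, the service name bogateway, and port 8080 are illustrative here; check the generated <project>-ingress.yml for the exact values JHipster produced):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bogateway
spec:
  rules:
  - host: hello-world.info        # only this hostname can reach the service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bogateway       # the gateway's ClusterIP service
            port:
              number: 8080        # assumed gateway port; use the port from your generated service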

Now that all the scripts are ready, you can deploy them using kubectl as explained here.

kubectl apply -f .\registry-k8s\

kubectl apply -f .\bogateway-k8s\

After deploying everything, you can access your service from your local machine’s browser.

Remember to edit your hosts file so that hello-world.info resolves to 127.0.0.1.
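
On Windows that file is usually C:\Windows\System32\drivers\etc\hosts (on Linux/macOS it is /etc/hosts); the entry looks like this:

127.0.0.1   hello-world.info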

Troubleshoot

Can’t connect to hello-world.info

Edit your hosts file, as described above. (Google is da best)

Can’t start your pods

First, run kubectl get pods to check your pods’ status.

If your pods are stuck in an error status (like the ImagePullBackOff I faced), run kubectl describe pod <pod-name> to get more details. You can find the root cause in the Events section of the describe output.
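
Putting those two steps together (<pod-name> is whatever name kubectl get pods shows for the failing pod):

kubectl get pods                   # look for pods stuck in statuses like ImagePullBackOff
kubectl describe pod <pod-name>    # the Events section at the bottom usually shows the root cause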

In my case, the pods could not locate my Docker images.

Failed to pull docker image

There are several reasons why a pod may fail to pull Docker images. In my case, the pod defaulted to the latest Docker tag, while my Docker Hub repository only had the v0.0.1 tag. Just add the tag in your deployment script so that the pod pulls v0.0.1.

Put your tag after your image path
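
Roughly, the relevant part of the Deployment spec looks like this (the repository path mydockerhubuser/bogateway is just a placeholder; use your own image path and tag):

      containers:
      - name: bogateway
        # the tag goes after the image path; without it Kubernetes pulls :latest
        image: mydockerhubuser/bogateway:v0.0.1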

Post words

I had fun tackling different issues while trying out Kubernetes, but I am losing my direction. It is difficult for me to stay motivated while lacking real-world use cases.

Let’s try something new before I continue with Kubernetes…
