[Get Hands Dirty] Kubernetes Part 3: Configure Shared Folder on Local Kubernetes (Kind)

While figuring out how to migrate one of my existing projects to Kubernetes, I hit a roadblock: some of the data is stored in a shared folder used by several servers. After doing some research, I found that Kubernetes provides volume functionality that fits my case perfectly. Kubernetes supports several types of volumes, including AWS EBS, Azure Disk, and more; as in the last two parts, let's start with the free option first: local hostPath.


Share a local folder among Kubernetes services on Windows 10 + Kind.


Please refer to Part 1 for the Docker image build and Part 2 for the Ingress setup.

Kind & Docker Configuration

To let services deployed on Kubernetes (Kind) share a folder on the local machine, we first need to enable file sharing in Docker, then create a new kind cluster with a volume mount pointing at the shared folder.

  1. Create a new folder to use as the volume mount target (e.g. Z:\docker_shared\).
  2. Open Docker Desktop -> Settings -> Resources -> File Sharing.
  3. Click + and add the folder you created to Docker's file sharing setup.
  4. Click Apply & Restart.
  5. Wait for the Docker Engine to finish restarting.

Add the host path and container path to your kind configuration YAML. The host path is your Windows folder's absolute path written in Linux syntax (for example, Z:\docker_shared should be written as /z/docker_shared), while the container path is the absolute path inside the kind cluster node. The example below includes the Ingress setup from Part 2.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraMounts:
  - hostPath: /z/docker_shared
    containerPath: /test-volume
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

As you can see above, I added extraMounts to the kind configuration file, which requests mounting /z/docker_shared on the host (Windows) as /test-volume inside the kind cluster node.
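The Windows-to-Linux path translation described above can be sketched as a tiny shell helper; the variable names here are just for illustration:

```shell
# Convert a Windows path like 'Z:\docker_shared' into the POSIX-style
# path that Docker/kind expects, e.g. '/z/docker_shared'
win_path='Z:\docker_shared'
drive=$(printf '%s' "$win_path" | cut -c1 | tr '[:upper:]' '[:lower:]')
rest=$(printf '%s' "$win_path" | cut -c3- | tr '\\' '/')
echo "/${drive}${rest}"   # -> /z/docker_shared
```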

You can always delete the current cluster first before starting a cluster with the new configuration.

kind delete cluster

Then start your kind cluster with the configuration file.

kind create cluster --config .\example-kind.yaml

After the kind cluster is created, set up Ingress as described in Part 2, since we will use it for testing. I will skip the explanation here.
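Optionally, before deploying anything, you can confirm that the mount made it into the kind node. Assuming a default cluster (named kind), the node container is called kind-control-plane:

```shell
# List the mounted folder inside the kind node container;
# 'kind-control-plane' is the default node container name for a cluster named 'kind'
docker exec kind-control-plane ls -la /test-volume
```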

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
kubectl apply -f .\bogateway-k8s\bogateway-ingress.yml

At this point, the kind + Docker + Ingress configuration is complete.

Write/Read File on Shared Folder

You can skip this section if you want to proceed directly with the docker image provided.

Before we proceed to deployment, we need to add APIs to our Java Spring docker image so we can verify that the deployment succeeded. I added two publicly accessible APIs that read from and write to a file located in the shared folder.

// PublicTestingApiResource.java (the request-mapping paths are illustrative)
@RestController
@RequestMapping("/api/public")
public class PublicTestingApiResource {

    private final Logger log = LoggerFactory.getLogger(PublicTestingApiResource.class);
    private static final String SHARED_FOLDER_PATH = "/test-volume/";

    // Use a randomly generated unique ID to identify different instances
    private volatile String uniqueID = null;

    @GetMapping("/hello-world")
    public Mono<ResponseEntity<String>> getHelloWorld() {
        String uid = getUniqueId();
        try (BufferedReader reader = new BufferedReader(new FileReader(SHARED_FOLDER_PATH + "hello-world"))) {
            String data = uid + reader.lines().collect(Collectors.joining("\n"));
            return Mono.just(ResponseEntity.ok().body(data));
        } catch (IOException e) {
            log.error("Failed to read file", e);
            return Mono.just(ResponseEntity.badRequest().body(e.getMessage()));
        }
    }

    // Use GET here for easier testing
    @GetMapping("/write-hello-world")
    public Mono<ResponseEntity<String>> writeHelloWorld(@ApiParam String content) {
        String uid = getUniqueId();
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(SHARED_FOLDER_PATH + "hello-world"))) {
            writer.write(content);
            return Mono.just(ResponseEntity.ok().body(uid + "OK"));
        } catch (IOException e) {
            log.error("Failed to write file", e);
            return Mono.just(ResponseEntity.badRequest().body(uid + "NOK"));
        }
    }

    private String getUniqueId() {
        if (uniqueID == null) {
            synchronized (this) {
                if (uniqueID == null) {
                    uniqueID = "From Unique ID: " + UUID.randomUUID().toString() + "\n";
                }
            }
        }
        return uniqueID;
    }
}

Remember to allow public access to the APIs. Just add one line in your SecurityConfiguration.java.

.pathMatchers("/api/public/**").permitAll() // Add this line

The source code can be found on GitHub.

Next, build the docker image for your Spring project.

.\mvnw -ntp -Pprod verify jib:dockerBuild

Tag your docker image with a new version (I used v2.0.2 here) and push it to your Docker Hub.

docker tag bogateway docker-login/bogateway:v2.0.2;
docker push docker-login/bogateway:v2.0.2

Deployment Scripts

Next, modify your deployment scripts (from Part 2) to mount the volume in your services.

Edit your bogateway-deployment.yml as below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bogateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bogateway
  template:
    metadata:
      labels:
        app: bogateway
    spec:
      containers:
      - name: bogateway-app
        image: docker-login/bogateway:v2.0.2
        imagePullPolicy: Always
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /test-volume
          # this field is optional
          type: Directory

The major changes are:

  1. spec -> replicas is changed to 2, so we can verify that two instances read from the same file.
  2. The docker image version under containers -> image is updated.
  3. imagePullPolicy is optional; you can keep using IfNotPresent.
  4. volumeMounts mounts test-volume at the path /test-volume. The test-volume name is defined under volumes -> name.
  5. volumes declares the volume's name and path, which are referenced by volumeMounts.

After making your changes, apply the deployments.

kubectl apply -f .\registry-k8s\

kubectl apply -f .\bogateway-k8s\

Use kubectl get po to check your services’ status.
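Once the pods are running, a quick way to confirm the mount is to exec into one of them and list the shared folder. This assumes your Deployment is named bogateway; adjust to your actual resource name:

```shell
# Exec into one pod of the deployment and list the mounted folder
kubectl exec deploy/bogateway -- ls -la /test-volume
```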

Result Verification

First, write some content to the shared file.


Then access the read API several times to verify your result. Sometimes the unique ID just won't change; I guess Kubernetes is routing the requests to the same pod, so just open a private window to access the URL.
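For example, assuming the write and read endpoints are exposed under /api/public (substitute the paths your controller actually defines), the check could look like this:

```shell
# Hypothetical endpoint paths -- adjust to match your controller
curl "http://localhost/api/public/write-hello-world?content=hello"

# Call the read endpoint a few times; the unique ID shows which pod answered
for i in 1 2 3; do
  curl "http://localhost/api/public/hello-world"
done
```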

You should see the same content even when your requests are handled by different pods. You can also open the file in your text editor to check the written content.


Trace logs for your service

First, run kubectl get po to get your pods' names. Then run kubectl logs <pod-name> to dump the logs.

To Be Continued

Note that hostPath volumes are not recommended by Kubernetes; if you really need to use one, make sure your security configuration is set up correctly. Kubernetes provides several other kinds of volumes, and I will test AWS EBS when I try everything on AWS in the future.

Learning while getting my hand dirty.