In the previous post, I created a simple port scanning tool with Go. Now it's time to run this tool in Docker and scale/manage it with Kubernetes!

First, I'll create a Dockerfile with a multistage build to reduce image size:

```dockerfile
# stage 1
FROM golang:1.15.6 as builder
WORKDIR /app
# fetch dependencies first as they're not changing often and will get cached
COPY ./go.mod ./go.sum ./
RUN go mod download
# copy source to working dir of a container
COPY . .
# build the app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o server cmd/server/main.go

# stage 2
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
ENTRYPOINT [ "./server" ]
```

Let's build and run the scanner. From the root directory of the project:

```shell
# build the image first
$ docker build -t portscanner .
# now that the image is built, run it
$ docker run --name portscanner --rm -p 8080:8080 portscanner
# logs:
INFO 2021/01/28 15:32:15 starting server at :8080
```

The port scanner has started on port 8080, and I've exposed the same port to the host with `-p 8080:8080`. I'll use that for testing. Since I know that the application is available on port 8080, my port scanning tool should be able to detect itself:

```shell
$ curl -X GET http://localhost:8080/open-ports\?domain\=127.0.0.1\&toPort\=9000
# output:
{
  "from_port": 0,
  "to_port": 9000,
  "domain": "127.0.0.1",
  "open_ports": [
    8080
  ]
}

# with missing query params
$ curl -X GET http://localhost:8080/open-ports | jq
# output:
{
  "Result": "ERROR",
  "Cause": "INVALID_REQUEST",
  "InvalidFields": {
    "domain": [
      "can't be blank"
    ],
    "toPort": [
      "can't be blank",
      "invalid decimal string"
    ]
  }
}
```

It's all working as expected. Let's take it one step further and run it in Kubernetes! To do so, I'll use minikube and kubectl. You can find OS-specific installation instructions for minikube here and for kubectl here. When all the tools are installed, I'll run `minikube start` and allow it some time to start with default params.
Minikube runs a separate VM, and that VM doesn't have access to the local Docker registry and hence no access to the previously built image. To fix this issue, I have to switch to the minikube environment:

```shell
$ eval $(minikube docker-env)
```

Now I'll build the image again, but this time inside the minikube env:

```shell
$ docker build -t portscanner .
# list all images to see if portscanner is there
$ docker image ls
```

Now that the image is accessible to minikube, I'll create a deployment:

```shell
$ kubectl create deployment portscanner --image=portscanner
```

This tells Kubernetes to create a deployment named portscanner and use the previously built portscanner image for it. There are a few handy commands to check the status of the deployment and pods:

```shell
# to check deployment status
$ kubectl get deployment portscanner
# to get list of pods
$ kubectl get pods
```

Both of the above show 0/1 in the READY column, and `get pods` shows an ImagePullBackOff STATUS. From `get pods`, I can get the pod name and investigate what's wrong:

```shell
$ kubectl describe pod portscanner-xxxx...
```

The above command has a more informative output, where I can see that although the image is accessible in the minikube VM, the deployment still tries to pull it from the cloud. To fix this, I need to edit the default deployment configuration:

```shell
$ kubectl edit deployment portscanner
```

The above will open the configuration .yaml file in your default editor. The `spec.template.spec.containers.0.imagePullPolicy` field is set to `Always`, which means that no matter what, the deployment will always try to pull the image from the cloud. Change `Always` to `IfNotPresent`, save the file and check the deployment status once again:

```shell
$ kubectl get deployment portscanner
```

This time the READY status is 1/1! Success! Now portscanner is running in Kubernetes. There's still one small issue, though. I can't access it.
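For reference, the relevant part of the deployment manifest looks roughly like this after the change. This is an abridged sketch of what `kubectl edit` shows, with everything except the fields discussed above omitted:

```yaml
# abridged deployment spec (only the relevant fields shown)
spec:
  template:
    spec:
      containers:
        - name: portscanner
          image: portscanner
          # IfNotPresent makes Kubernetes use the locally built image
          # instead of always pulling from a remote registry
          imagePullPolicy: IfNotPresent
```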
To get access to my tool, I'll have to create a service of type LoadBalancer that will take incoming requests and distribute them to the portscanner pods:

```shell
$ kubectl expose deployment portscanner --type=LoadBalancer --port=8080
```

In most other environments, when you use a Kubernetes LoadBalancer, it will provision a load balancer external to your cluster, for example, an Elastic Load Balancer (ELB) in AWS. That's not the case when running it locally. Luckily, I can simulate the connection with a minikube service:

```shell
$ minikube service portscanner
```

This will print out connection details and try to open the browser with the given URL. This URL can be used to issue curl commands to the port scanner, just like before. Let's try it out:

```shell
$ curl -X GET http://{your_service_ip}:{your_service_port}/open-ports\?domain\=127.0.0.1\&toPort\=9000 | jq
# Output:
{
  "from_port": 0,
  "to_port": 9000,
  "domain": "127.0.0.1",
  "open_ports": [
    8080
  ]
}
```

Right, now I have 1 pod running and serving my application. So, for my last trick, I'll scale the deployment to 4 pods:

```shell
$ kubectl scale deployments/portscanner --replicas=4
```

Now, if I run `kubectl get pods`, I'll see that there are 4 pods running.

To get log output for all pods:

```shell
$ kubectl logs -l app=portscanner -f
```

I hope this will get you going and help to build some exciting things. You can find the source code here.
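The `kubectl expose` command above can also be expressed declaratively. A rough equivalent Service manifest, assuming the `app=portscanner` label that `kubectl create deployment` applies to the pods by default, would look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: portscanner
spec:
  type: LoadBalancer
  # matches the default label on pods created by
  # `kubectl create deployment portscanner`
  selector:
    app: portscanner
  ports:
    - port: 8080
      targetPort: 8080
```

Keeping a manifest like this in the repo makes the setup reproducible with `kubectl apply -f` instead of a sequence of imperative commands.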