Connecting Dots: Go, Docker and k8s [Part 1]

by Daniel (@danstenger), February 5th, 2021

Too Long; Didn't Read

In this post my plan is to create an open TCP port scanning tool and use Go with a worker pool to make it very fast, then expose it via a REST resource, containerise it and deploy it.


Nowadays, successful applications often consist of containers plus a container management system to: 1) ease scaling, 2) reduce downtime, and 3) add self-healing capabilities. Let's take a high-level overview of the most commonly used tools and build something cool too.

My plan is to create an open TCP port scanning tool, use Go and a worker pool to make it very fast, expose it via a REST resource, then containerise, deploy and scale it using Kubernetes.

For this example I’ll use my favourite project structure. You can find more about it here. Let’s review the REST server:

// cmd/server/main.go

package main

import (
	// cfg, server and gracefulshutdown are internal packages of this
	// project; their import paths were omitted in the original snippet
)

func main() {
	cfg := cfg.New()

	// configure server instance (constructor signature assumed: the text
	// below says it takes a port, router and logger)
	srv := server.New(cfg.GetAPIPort(), cfg.Router, cfg.Infolog)

	// start server in a separate goroutine so that it doesn't block the
	// graceful shutdown handler
	go func() {
		cfg.Infolog.Printf("starting server at %s", cfg.GetAPIPort())
		if err := srv.Start(); err != nil {
			cfg.Errlog.Printf("starting server: %s", err)
		}
	}()

	// initiate graceful shutdown handler that will listen for exit signals
	// and perform cleanup
	gracefulshutdown.Init(cfg.Errlog, func() {
		if err := srv.Close(); err != nil {
			cfg.Errlog.Printf("closing server: %s", err)
		}
	})
}

I created an instance of the cfg package, which holds program-wide configuration and eases DI (dependency injection). This is usually a good place to parse command line arguments, assemble and hold DB connection details, and more.

I then create and configure a reusable server instance by passing a port, router and logger to it. The server is started in a separate goroutine so I can initiate a graceful shutdown service that handles program exits and performs cleanup if needed.

I will only have one end-point, GET open-ports. Let's review the handler:

// cmd/server/handlers/scanports.go

package handlers

import (
	"encoding/json"
	"net/http"
	"strconv"

	"github.com/julienschmidt/httprouter"
	// portscanner and reqvalidator are internal packages of this project;
	// their import paths were omitted in the original snippet
)

type openPorts struct {
	FromPort  int    `json:"from_port"`
	ToPort    int    `json:"to_port"`
	Domain    string `json:"domain"`
	OpenPorts []int  `json:"open_ports"`
}

func ScanOpenPorts(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
	defer r.Body.Close()
	w.Header().Add("Content-Type", "application/json")

	queryValues := r.URL.Query()
	domain := queryValues.Get("domain")
	toPort := queryValues.Get("toPort")

	v := reqvalidator.New()
	v.Required("domain", domain)
	v.Required("toPort", toPort)
	v.ValidDecimalString("toPort", toPort)
	if !v.Valid() {
		w.WriteHeader(http.StatusBadRequest)
		w.Write(v.GetErrResp())
		return
	}

	// safe to skip error check here as validator above has done that already
	port, _ := strconv.Atoi(toPort)
	op := portscanner.New(domain).ScanTo(port)

	report := openPorts{
		FromPort:  0,
		ToPort:    port,
		Domain:    domain,
		OpenPorts: op,
	}

	resp, err := json.Marshal(report)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		return
	}

	w.Write(resp)
}

I expect to receive a few query params in this handler: domain and toPort. Both params are mandatory, and toPort must be a valid decimal string. I have a validator in place to check exactly that. The toPort number represents the upper port limit my scanner will check and also defines the size of the worker pool. This allows the scanner to perform all calls at once.

Before going any further, let’s review the validator:

// pkg/reqvalidator/reqvalidator.go

package reqvalidator

import (
	"encoding/json"
	"strconv"
	"strings"
)

type validationerrors map[string][]string

func (ve validationerrors) Add(field, message string) {
	ve[field] = append(ve[field], message)
}

type errResp struct {
	Result        string
	Cause         string
	InvalidFields validationerrors
}

type ReqValidator struct {
	Errors validationerrors
}

func New() ReqValidator {
	return ReqValidator{
		Errors: validationerrors{},
	}
}

func (rv ReqValidator) Required(field, value string) {
	if strings.TrimSpace(value) == "" {
		rv.Errors.Add(field, "can't be blank")
	}
}

func (rv ReqValidator) ValidDecimalString(field, value string) {
	_, err := strconv.Atoi(value)
	if err != nil {
		rv.Errors.Add(field, "invalid decimal string")
	}
}

func (rv ReqValidator) Valid() bool {
	return len(rv.Errors) == 0
}

func (rv ReqValidator) GetErrResp() []byte {
	er := errResp{
		Result:        "ERROR",
		Cause:         "INVALID_REQUEST",
		InvalidFields: rv.Errors,
	}

	b, _ := json.Marshal(er)
	return b
}

Required and ValidDecimalString are my basic validations. Given a field name and value, I keep track of error messages if validation fails. A call to Valid checks whether any errors are present, and if they are, I can retrieve them with GetErrResp and send that information to the consumer.

Now let’s review the scanner:

// pkg/portscanner/portscanner.go

package portscanner

import (
	"fmt"
	"net"
	"time"
)

type Scanner struct {
	domain string
}

func New(domain string) Scanner {
	return Scanner{domain}
}

func (s Scanner) ScanTo(toPort int) (openPorts []int) {
	jobs := make(chan int)
	results := make(chan int)

	for i := 0; i < toPort; i++ {
		go worker(s.domain, jobs, results)
	}

	go func() {
		for i := 1; i <= toPort; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	for i := 1; i <= toPort; i++ {
		port := <-results
		if port != 0 {
			openPorts = append(openPorts, port)
		}
	}

	return openPorts
}

func worker(domain string, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", domain, j), 2*time.Second)
		if err != nil {
			results <- 0
			continue
		}

		// close the probe connection so we don't leak file descriptors
		conn.Close()
		results <- j
	}
}


The ScanTo function does the heavy lifting here. I first create the jobs and results channels. I then spawn the workers. Each worker receives a port over the jobs channel, scans that port and returns the result of the TCP call over the results channel. I use net.DialTimeout so that a worker does not wait for a response for longer than 2 seconds. When the last port has finished scanning, I return all open ports to the consumer.

You can already try interacting with this tool by running:

$ go run cmd/server/main.go

And issuing curl requests against the GET open-ports end-point with the domain and toPort query params, for example (the port depends on your cfg settings):

$ curl "http://localhost:<port>/open-ports?domain=example.com&toPort=100"


I hope you have learned something useful. In the next post I'll go through the steps needed to containerise this app, then deploy and scale it using k8s. You can find the source code here.