We’ve all been there. You’ve read a lot about the basics of Docker, Kubernetes, Pods, ReplicaSets, Deployments, and more: the different parts that are used to build cloud native applications. Now, you’re looking for a practical example where you can connect all the parts together. That’s exactly the purpose of these articles.

Here’s a little backstory: I was recently going over the process of converting a standard RPM package to a cloud application with a friend who’s a Software Engineer. He’d already read a lot of what’s out there about containerization and was ready to take the next step, to try to do it himself. That’s how we got here, experimenting and going over the basic steps of how to deploy a mail server application into a Docker container, and then into a Kubernetes cluster. We hope us sharing this real-time, intuitive experiment with our community piques your interest, as it did ours.

We’re going to show you every step, what issues we encountered, and how we solved them. We want to avoid switching to ‘cloud native’ just because it’s a trendy buzzword. So, as we examine the technology, we also take a look at who can benefit the most from this particular mail server approach.

TL;DR

If you’re a veteran Kubernetes user, this article may not add much value to your know-how. However, the main objective is to bring together the typical steps that a user needs to follow when deploying a Dockerized mail server application to a Kubernetes cluster. You can think of this article as a reference that you can come back to whenever you need a quick refresher on the most common resources used when deploying mail server applications with Docker and Kubernetes. Through our discussion, we visit the conversion of the RPM to a Docker image, publishing to Docker Hub, creating a cluster in Google Cloud Platform, and deploying our image.

The sample application — Axigen Mail Server

For this demonstration, we’re going to be using the Axigen Mail Server as a sample application. Why? I find that, while it’s specific to mail server technology, it shares a lot of modern web application requirements:

- a front end part where user requests are received;
- a backend that stores state;
- a static component that displays a nice user interface and connects to the front end.

This experiment can, therefore, be replicated with other applications as well.

Axigen already provides a fully functional, ready-to-use Docker image in Docker Hub. You can find all the information and try it out for yourself here.

Note: A Kubernetes Helm chart will also be available very soon. Both are intended for actual production use and are adjusted as such. Stay tuned for more.

Why run a Cloud Native mail server

For a stateful application such as a mail server, the benefits of turning it into a cloud native, container-based app have only recently become apparent:

1. Weak coupling with the underlying operating system (running the container)
- the container can be easily moved without repackaging;
- it relinquishes control of the underlying operating system, in terms of monitoring, software upgrades, and more (unlike the ‘Rehost’ model, where the customer still needs to manage an operating system image provided by cloud providers);
- independence from the provider itself (no more application reinstall and data migration when switching providers).
2. Significantly simplified scale-up
- while vertical scaling is fairly simple in a virtualized environment (add CPUs, memory, and disk capacity), horizontal scaling is considerably more limited;
- the container paradigm forces the application developer/packager to ‘think cloud native’, thus separating compute from storage in the model itself;
- mainly due to the overwhelmingly lower overhead of a container versus a virtual machine, the number of instances can scale from tens to thousands, thus lowering the burden on the software itself to handle too high a concurrency level.

Who can benefit the most from a Cloud Native mail server approach

The question is: why not just ‘rehost’? After all, there are numerous platforms out there (AWS, Azure, and IBM Cloud, to name a few) that offer cloud machines on which the same product package can be installed and operated just as easily (if not more easily) as on premises.

Since going ‘cloud native’ is a significant initial investment in research and training, and most often yields benefits further down the road, it makes the most sense for users for whom the ‘cloud native’ benefits are highest:

- software developers, who can provide their customers with ‘cloud native’ benefits;
- medium to large companies, which can harness the resources needed for the initial investment;
- service providers, for whom scalability and maintenance cost reduction are important business factors.

Now that we’ve answered the why and for whom questions, let’s get started on the how.

The Cloud Native approach - Replatform

We’ve already touched on the ‘rehost’ or ‘lift and shift’ approach - aka virtualizing a physical machine and importing it into a cloud provider, or simply migrating an existing on-prem virtual machine towards a cloud service. With the steps below, we’re closing in significantly on our holy grail, cloud nativeness, via the ‘replatform’ approach.

Creating an Email Server environment with Docker

Here’s the simplest way to achieve a container image based on a legacy packaged application (RPM, DEB).

For starters, since we need to run this application on Kubernetes, the first step we need to take is to Dockerize it. That is, to enable it to run on Docker. While you can choose between several container software providers, Docker still reigns king, with a staggering 79% market share.

As for the standard package to start from, we used the RPM distribution, this time based on what we know best and use most (RedHat vs Debian / Ubuntu).

Creating a Docker image is quite similar to installing a package on a ‘real’ operating system. We assume that the user has a basic knowledge of using the command line and has Docker installed. The goal, as stated above, is to exemplify how to obtain a container from a CentOS image.

An important takeaway is the difference between image and instance (‘container’ in Docker ‘speak’ — we shall use the term ‘instance’ to attain a clear distinction from the ‘container’ as a concept).

Note: An ‘instance’ (or container) is the equivalent of a machine; it has an IP, one can run commands in a shell in it, and so on. An ‘image’ is the equivalent of a package; you always use an ‘image’ to create an ‘instance’ (or container).
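To make the distinction concrete before we start, here is a quick, minimal illustration of our own using standard Docker commands (the sleep commands are just throwaway examples):

```
# Images are templates, stored locally after a pull or a build:
% docker pull centos:latest                # fetch an image
% docker images                            # list available images

# Instances (containers) are created *from* images:
% docker run -d centos:latest sleep 300
% docker run -d centos:latest sleep 300    # two instances, one image
% docker ps -a                             # list instances, running or stopped
```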
1. Creating a CentOS Docker instance

Let’s go ahead and create this Docker instance from the CentOS image:

```
ion@IN-MBP ~ % docker run -it centos:latest
[root@b716163d7294 /]#
```

From another terminal, we can observe the newly created instance:

```
ion@IN-MBP ~ % docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED          STATUS          PORTS   NAMES
b716163d7294   centos:latest   "/bin/bash"   20 seconds ago   Up 20 seconds           zen_austin
```

Next, we perform OS updates, as we would with any regular operating system instance:

```
[root@b716163d7294 /]# dnf -y update
Failed to set locale, defaulting to C.UTF-8
CentOS-8 - AppStream                          4.6 MB/s | 7.0 MB     00:01
CentOS-8 - Base                               2.0 MB/s | 2.2 MB     00:01
CentOS-8 - Extras                             9.7 kB/s | 5.9 kB     00:00
Dependencies resolved.
[... transaction details trimmed: 22 packages upgraded (glibc, systemd, rpm,
 openssl-libs, coreutils-single & co.) and 9 installed ...]
Complete!
```

Great - everything is up to date now.

2. Installing Axigen in the container instance

First, get the RPM:

```
ion@IN-MBP ~ % docker exec -it b716163d7294 bash
[root@b716163d7294 app]# curl -O https://www.axigen.com/usr/files/axigen-10.3.1/axigen-10.3.1.x86_64.rpm.run
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  386M  100  386M    0     0  9.7M       0  0:00:39  0:00:39 --:--:-- 9.8M
```

Then install it:

```
[root@b716163d7294 /]# ./axigen-10.3.1.x86_64.rpm.run
Please accept the terms of the license before continuing
Press ENTER to display the license (after reading it press 'q' to exit viewer)
Do you accept the terms of the license? (yes/no): y
======================================
AXIGEN Mail Server 10.3.1-1
x86_64 RPM Package Installer
======================================
Detecting OS flavor... CentOS 8.1
Installer started
Axigen embedded archive extracted successfully
Please select one of the options displayed below:
==== Main options
 1. Install axigen-10.3.1-1
 9. Exit installer
 0. Exit installer without deleting temporary directory
===== Documentation for axigen-10.3.1-1
 4. Show the RELEASE NOTES
 5. Show the README file
 6. Show other licenses included in the package
 7. Show manual install instructions
 8. Show manual uninstall instructions
Your choice: 1
Verifying...                 ################################# [100%]
Preparing...                 ################################# [100%]
Updating / installing...
   1:axigen-10.3.1-1         ################################# [100%]
Thank you for installing AXIGEN Mail Server.
In order to configure AXIGEN for the first time, please connect to WebAdmin
by using one of the URLs below:
https://172.17.0.2:9443/
https://[2a02:2f0b:a20c:a500:0:242:ac11:2]:9443/
Starting AXIGEN Mail Server...Axigen[336]: INFO: Starting Axigen Mail Server version 10.3.1.5 (Linux/x64)
Axigen[336]: SUCCESS: supervise ready... (respawns per minute: 3)
Axigen[336]: INFO: supervise: spawning a new process to execute Axigen Mail Server version 10.3.1.5 (Linux/x64)
[ OK ]
Installer finished.
```

Now we have Axigen installed in the container. It’s even already running (the installer starts it automatically):

```
[root@b716163d7294 /]# ps ax | grep axigen
  336 ?        Ss     0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
  337 ?        SNl    0:01 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
  351 ?        Sl     0:00 axigen-tnef
  375 pts/0    S+     0:00 grep --color=auto axigen
```
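As a quick optional sanity check of our own (not part of the original walkthrough): the installer advertised WebAdmin on HTTPS port 9443, and curl is already at hand since we used it for the download. The -k flag is needed because the freshly generated certificate is self-signed:

```
# Inside the container: print only the HTTP status code of the WebAdmin root page.
# -k accepts the self-signed certificate, -s silences the progress meter.
[root@b716163d7294 /]# curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:9443/
# A fresh install should answer with a 3xx redirect towards the /install wizard.
```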
Let’s see what happens when we leave the shell:

```
[root@b716163d7294 /]# exit
exit
ion@IN-MBP ~ %
```

Is the instance still running?

```
ion@IN-MBP ~ % docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
ion@IN-MBP ~ %
```

No; that’s because the CentOS image runs ‘bash’ as the main container process; when bash exits, the container stops as well. This is an important point to remember: the container has one process that it considers its ‘main’ one; when that process ends, the container will stop.

This is in fact the entire nature of experimentation: trying out different means of achieving the desired results and seeing exactly where that gets us. Good or bad.

This is a crucial difference between a container and a classical Linux ‘host’ - there are no ‘daemons’ — in other words, no need to fork in the background (as programs started by SystemV-style — and Systemd, as well — init scripts usually do). We can take advantage of this when creating the Axigen image.

Start the container again:

```
ion@IN-MBP ~ % docker start b716163d7294
b716163d7294
```

And check if it’s running:

```
ion@IN-MBP ~ % docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED          STATUS          PORTS   NAMES
b716163d7294   centos:latest   "/bin/bash"   13 minutes ago   Up 41 seconds           zen_austin
```

Attach to it and check if Axigen is still running:

```
ion@IN-MBP ~ % docker attach b716163d7294
[root@b716163d7294 /]# ps ax
  PID TTY      STAT   TIME COMMAND
    1 pts/0    Ss     0:00 /bin/bash
   14 pts/0    R+     0:00 ps ax
```

It’s not — and the reason for this shouldn’t come as a surprise: the Axigen process (and all the subprocesses it forks / threads it starts) was stopped along with the original bash — ‘the grandfather of them all’.

Nonetheless, Axigen is still installed:

```
[root@b716163d7294 /]# ls -la /var/opt/axigen/
total 288
drwxr-xr-x 16 axigen axigen   4096 May 25 15:07 .
drwxr-xr-x  1 root   root     4096 May 25 15:07 ..
-rw-r-----  1 axigen axigen   2969 May 25 15:07 axigen_cert.pem
-rw-r-----  1 axigen axigen    245 May 25 15:07 axigen_dh.pem
drwxr-xr-x  2 axigen axigen   4096 May 25 15:07 aximigrator
-rw-------  1 axigen axigen 215556 Feb  7 12:57 cacert_default.pem
drwxr-x--x  2 axigen axigen   4096 May 25 15:07 cyren
drwx--x---  2 axigen axigen   4096 May 25 15:07 filters
drwx--x---  3 axigen axigen   4096 May 25 15:07 kas
drwx--x---  4 axigen axigen   4096 May 25 15:07 kav
drwx------  2 axigen axigen   4096 May 25 15:07 letsencrypt
drwxr-x---  2 axigen axigen   4096 May 25 15:07 log
-rw-------  1 axigen axigen    121 Feb  7 12:57 mobile_ua.cfg
drwxr-x--- 67 axigen axigen   4096 May 25 15:07 queue
drwxr-x---  2 axigen axigen   4096 May 25 15:07 reporting
drwxr-x---  2 axigen axigen   4096 May 25 15:07 run
drwxr-x---  2 axigen axigen   4096 May 25 15:07 serverData
drwx--x---  5 axigen axigen   4096 May 25 15:07 templates
drwx--x---  8 axigen axigen   4096 May 25 15:07 webadmin
drwx--x---  3 axigen axigen   4096 May 25 15:07 webmail
[root@b716163d7294 /]# ls -la /opt/axigen/bin/
total 135028
drwxr-x--x 2 root root     4096 May 25 15:07 .
drwxr-x--x 5 root root     4096 May 25 15:07 ..
-rwxr-xr-x 1 root root 81771736 Feb  7 12:57 axigen
-rwxr-xr-x 1 root root 12824731 Feb  7 12:57 axigen-migrator
-rwxr-xr-x 1 root root 11838532 Feb  7 12:57 axigen-tnef
-rwxr-xr-x 1 root root  1049336 Feb  7 12:57 cyren.bin
-rwxr-xr-x 1 root root   205632 Feb  7 12:58 kasserver
-rwxr-xr-x 1 root root   180992 Feb  7 12:58 kavserver
-rwxr-xr-x 1 root root   663136 Feb  7 12:57 mqview
-rwxr-xr-x 1 root root 29704280 Feb  7 12:57 sendmail
```

Good. We have a container with Axigen installed. However, our goal was an image, not a container.
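As an aside, here is a minimal sketch of our own that makes the ‘main process’ rule easy to verify (sleep is just a stand-in for any foreground command):

```
# 'sleep 5' becomes the container's main process (PID 1)
% docker run -d centos:latest sleep 5
# the instance is listed while PID 1 is alive...
% docker ps
# ...and is stopped (though not deleted) once PID 1 exits
% sleep 6; docker ps; docker ps -a
```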
3. Creating an image from a container

Stop the container again by leaving the shell:

```
[root@b716163d7294 /]# exit
ion@IN-MBP ~ % docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
ion@IN-MBP ~ %
```

And then summon the Docker magic:

```
ion@IN-MBP ~ % docker commit b716163d7294 my_new_and_shiny_axigen_image
sha256:e7ca09e1933bff546d7acbd7090543e2a4f886ee3aa60b7cbf04eefd70fcbe3b
```

Excellent; from the existing container, we’ve created a new image (my_new_and_shiny_axigen_image) that we may now use to create another container, with the OS updates already applied and Axigen installed:

```
ion@IN-MBP ~ % docker run -it my_new_and_shiny_axigen_image
[root@479421167f8d /]# dnf update
Last metadata expiration check: 1:01:28 ago on Mon 25 May 2020 03:03:43 PM UTC.
Dependencies resolved.
Nothing to do.
Complete!
[root@479421167f8d /]# rpm -qa | grep axigen
axigen-10.3.1-1.x86_64
```

We still have to configure the container to start the Axigen binary on instantiation, as the newly created image has inherited the entrypoint of the CentOS image, which is ‘bash’. We could, of course, start it by hand:

```
[root@479421167f8d /]# /etc/init.d/axigen start
Starting AXIGEN Mail Server...Axigen[29]: INFO: Starting Axigen Mail Server version 10.3.1.5 (Linux/x64)
Axigen[29]: SUCCESS: supervise ready... (respawns per minute: 3)
Axigen[29]: INFO: supervise: spawning a new process to execute Axigen Mail Server version 10.3.1.5 (Linux/x64)
[ OK ]
[root@479421167f8d /]#
[root@479421167f8d /]# ps ax | grep axigen
   29 ?        Ss     0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
   30 ?        Sl     0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
   42 ?        Sl     0:00 axigen-tnef
   66 pts/0    S+     0:00 grep --color=auto axigen
```

But this is not the proper way to have a container running an app. The correct way is to configure the binary that will be used when the container is started, directly in the image.

4. Setting the entrypoint in the container

To do that, we must revisit the image creation step:

```
ion@IN-MBP ~ % docker commit -c 'CMD ["/opt/axigen/bin/axigen", "--foreground"]' b716163d7294 my_2nd_new_and_shiny_axigen_image
sha256:ef7ce0fd9a47acb4703e262c4eb64c3564a54866b125413c17a63c1f832d1443
ion@IN-MBP ~ %
```

This adds, in the image configuration, the name of the command and the arguments to be executed when the container is started: /opt/axigen/bin/axigen --foreground

Remember that the main process of the container must not fork into the background; it must continue to run, otherwise the container will stop. This is the reason the ‘--foreground’ argument is needed. Like Axigen, most Linux servers have such an argument, instructing them to run in the foreground instead of forking into the background.

5. Running the created image

Let’s check the updated image:

```
ion@IN-MBP ~ % docker run -dt my_2nd_new_and_shiny_axigen_image
fd1b608174c402787152f5934294f370dfdb4d9b0f0b25e4edf4725dbe4c5700
ion@IN-MBP ~ %
```

We’ve changed the -it ‘docker run’ parameter to ‘-dt’; without diving too much into details, this instructs Docker to detach from the process. Axigen is the main process, hence an interactive mode does not make sense, as it would for a bash shell.
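If you want to double-check what the commit recorded, docker inspect can print the CMD stored in the image configuration (a quick optional check; the output shown is what we would expect given the commit above):

```
# Print the command configured to run when a container starts from this image
% docker inspect --format '{{.Config.Cmd}}' my_2nd_new_and_shiny_axigen_image
[/opt/axigen/bin/axigen --foreground]
```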
Docker allows us to run another process (not the main one, but a secondary process) by using ‘exec’. We shall run a bash, in interactive mode, so that we may review what happens in the container:

```
ion@IN-MBP ~ % docker exec -it fd1b608174c4 bash
[root@fd1b608174c4 /]# ps ax
  PID TTY      STAT   TIME COMMAND
    1 pts/0    Ss+    0:00 /opt/axigen/bin/axigen --foreground
    7 pts/0    SNl+   0:00 /opt/axigen/bin/axigen --foreground
   19 pts/0    Sl+    0:00 axigen-tnef
   39 pts/1    Ss     0:00 bash
   54 pts/1    R+     0:00 ps ax
```

Ok, so Axigen is running. Is the WebAdmin interface (port 9000) running as well?

```
[root@fd1b608174c4 /]# telnet localhost 9000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: localhost

HTTP/1.1 303 Moved Temporarily
Server: Axigen-Webadmin
Location: /install
Connection: Close
```

It is, and it redirects us to the initial setup flow (/install).

Now, is the 9000 WebAdmin port also available from outside the container?

```
ion@IN-MBP ~ % telnet localhost 9000
Trying ::1...
Connection failed: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
```

We need to instruct the container, upon instantiation, to map the 9000 port to the host, so it may be accessed from the outside.

```
ion@IN-MBP ~ % docker run -dt -p 9000:9000 my_2nd_new_and_shiny_axigen_image
dcc95e912bafc97ba63484abfeb7e2d1983d524b8834a5ccc629287962598181
ion@IN-MBP ~ %
```

Notice the ‘-p 9000:9000’ parameter. This instructs Docker to make the 9000 container port available on the host as well, on the same port number. And now, voilà:

```
ion@IN-MBP ~ % telnet localhost 9000
Trying ::1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: localhost

HTTP/1.1 303 Moved Temporarily
Server: Axigen-Webadmin
Location: /install
Connection: Close
```

Wrapping up

So what have we learned from this little experiment?

1. Converting an existing RPM / DEB package to a container image is fairly simple:
- instantiate a container with an OS of your preference, and for which you have the target software package;
- from the container, install the software (and optionally, perform some configuration);
- stop the container and convert it into an image, making sure to set the appropriate entrypoint (CMD);
- create as many containers as desired, using the new image;
- publish the image, if you need to make it available to others as well.

2. The image we’ve created above is not yet ready for production. Here’s what it would take for it to become ready:
- implement persistent data storage (it’s an email service, it stores mailboxes, so some data needs to be persistent);
- create network communication definitions: Docker will, by default, through NAT, allow the processes to communicate with the outside (initiate connections); we need, though, to be able to receive connections (email routing, client access) on specific ports;
- define options to preconfigure the Axigen instance upon startup (we may want to deploy it using a specific license, for example).

Now that a basic image is available, we would need to do some further digging into addressing the issues above, as well as automating the creation of the image (what happens when an updated CentOS image is available? How about when a new Axigen package is available?).

These topics go way outside the scope of this article, but here are a few hints at what it would take (see the sketch after this list):
- use a Dockerfile to create the image, instead of instantiating the base image and then manually installing the Axigen software;
- define and map, in the Dockerfile, the required ENV, EXPOSE, VOLUME, and CMD directives;
- make use of Docker push / pull to share your image with others, through a public or private Docker registry (image repository).
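As a starting point, here is a minimal, hypothetical Dockerfile sketch of our own that mirrors the manual steps above. It is not an official Axigen Dockerfile; the EXPOSE and VOLUME values are our assumptions, and the unattended install step is left as a placeholder, since the .run installer we used is interactive:

```
# Hypothetical sketch only - not the official Axigen Dockerfile.
FROM centos:latest

# Bring the base OS up to date, exactly as we did interactively.
RUN dnf -y update && dnf clean all

# Fetch the Axigen package. A real Dockerfile would also need an
# unattended install step here (e.g. installing the embedded RPM).
RUN curl -O https://www.axigen.com/usr/files/axigen-10.3.1/axigen-10.3.1.x86_64.rpm.run \
 && chmod +x axigen-10.3.1.x86_64.rpm.run
#   && <unattended install step goes here>

# Mail and admin ports (our assumption of a sensible set, not exhaustive).
EXPOSE 25 110 143 443 465 993 995 9000 9443

# Mailboxes, queue, and configuration live here - keep it on a volume.
VOLUME /var/opt/axigen

# The main process must stay in the foreground (see section 4 above).
CMD ["/opt/axigen/bin/axigen", "--foreground"]
```

From there, building and publishing would follow the usual flow: docker build -t <your_repo>/axigen:10.3.1 . followed by docker push.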
Now that we’ve created a containerized version of our Axigen package, what we have is a re-packaging of the app that allows deployment in the cloud.

Part 2 of this series is now up! Read on to see how we address actually running an instance of this application in the cloud using Kubernetes and GCP.

Previously published at https://www.axigen.com/articles/cloud-native-applications-mail-server-docker_69.html