Yet another sbt-docker introduction

by Nicolas A Perez, November 26th, 2016
Recently, the team has been looking at automating the creation of the Docker images used in our Docker environment.

These images are currently created manually from a handwritten Dockerfile, but this process is completely decoupled from our build, which leaves us in bad shape every time the content of the image changes: we have to keep updating these files by hand.

In order to overcome this small inconvenience, we started looking for alternatives. Of course, sbt-native-packager looks like the right tool for the job, and since we were already using it for other packaging formats, we started testing it out.

There is plenty of documentation on how to use sbt-native-packager for creating Docker images, and I don’t intend to go over it. However, there is little to be found about using the plugin on more complex builds, especially when there are dependencies between modules or when we want to create images only for selected modules of our project.

In order to demonstrate our requirements, let’s take a look at one example project layout we could use.

├── README.md
├── build.sbt
├── core
├── json-processor
├── json-sub
├── project
├── src
├── target
└── version.sbt

Here we have a multi-project build that contains three projects, or submodules: core, json-processor, and json-sub. core is where we keep our main abstractions, which are shared by the other two modules. The core module cannot run by itself; that is the responsibility of the other two projects.
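To make that dependency concrete, here is a minimal, hypothetical sketch of what the shared code could look like: core exposes an abstraction, and json-processor provides a runnable entry point that uses it. The names JsonHandler and Main are made up for illustration only.

// core/src/main/scala/JsonHandler.scala (hypothetical)
trait JsonHandler {
  def handle(json: String): Unit
}

// json-processor/src/main/scala/Main.scala (hypothetical)
object Main extends App {
  // The runnable module depends on the abstraction defined in core,
  // so the core classes must be present at runtime.
  val handler: JsonHandler = new JsonHandler {
    def handle(json: String): Unit = println(s"processing $json")
  }

  handler.handle("""{"hello": "world"}""")
}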

Let’s take a look at our build.sbt file, located in the root folder, to get a better idea of how our build works.

name := "sample-app"version := "1.0"scalaVersion := "2.11.7"lazy val root = project.in(file("."))  .aggregate(core, jsonProcessor, jsonSub)  .dependsOn(core, jsonProcessor, jsonSub)    lazy val core = project.in(file("core"))  .settings(    name := "core"  )    lazy val jsonProcessor = project.in(file("json-processor"))  .aggregate(core)  .dependsOn(core)  lazy val jsonSub = project.in(file("json-sub"))  .aggregate(core)  .dependsOn(core)

As we can see, there is nothing new here: the same modules we mentioned before are being defined, including the dependency on the core module.

Sbt-Docker

In the same way other tutorials add this plugin, we just added it to our plugins.sbt file along with sbt-native-packager.

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.1.1")addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

Now we need to activate the plugin in the modules we want to build Docker images for. As you can imagine, there is nothing interesting in doing so on the root project, since it is only an artificial aggregate used for project organization, nothing else. The same applies to the core module: it cannot run by itself, so we don’t need the plugin there either. The other two modules, on the other hand, are the ones we are interested in.

It is important to notice that in every single tutorial, post, or article we found, the following keys and definitions were placed on the root project; we are going to avoid this by focusing on the two modules that actually do something in our project.

In order to add the ability to construct Dockerfiles and images, we need to activate the sbt-docker plugin on the modules in question. This is a very simple task.

lazy val jsonProcessor = project.in(file("json-processor"))
  .aggregate(core)
  .dependsOn(core)
  .enablePlugins(sbtdocker.DockerPlugin, JavaServerAppPackaging)

lazy val jsonSub = project.in(file("json-sub"))
  .aggregate(core)
  .dependsOn(core)
  .enablePlugins(sbtdocker.DockerPlugin, JavaServerAppPackaging)

Notice the enablePlugins section on each of the modules.

At this point we should be able to create Docker images for these two modules, a task achieved by running:

sbt "jsonProcessor/docker"

or

sbt "jsonSub/docker"

Either of these commands should create a Dockerfile and then publish the image to the local Docker registry. However, if we try to run any of the newly created images, we will quickly notice that the classes from the core module cannot be found.

The solution is quite simple, but it took us a little longer to figure out.

We need to make sure that the core module is included in the images created by jsonProcessor and jsonSub. In other words, we need to define a dependency in our build that guarantees core is bundled into each module’s image.

An interesting way to do this without too many complications is to use the sbt-assembly plugin. It will take care of bundling core up along with its dependencies, so we can add the resulting jar file (a fat jar) to each module’s packaging process before the Docker image is created.
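For this to work, sbt-assembly also has to be added to project/plugins.sbt; a line along these lines should do it (the version shown is only one that was available around that time, adjust as needed).

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")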

At this point, we also need to specify a custom Docker image definition so that it complies with our requirement of including core.

lazy val dockerSettings = Seq(
  docker <<= (docker dependsOn (AssemblyKeys.assembly in core)),
  dockerfile in docker := {
    val artifact: File = (AssemblyKeys.assemblyOutputPath in AssemblyKeys.assembly in core).value
    val artifactTargetPath = s"/app/${artifact.name}"

    new Dockerfile {
      from("java")
      add(artifact, artifactTargetPath)
      entryPoint("java", "-jar", artifactTargetPath)
    }
  }
)

Let’s explain how dockerSettings is defined:

First, we make the docker task depend on the assembly task of the core module. This means that core needs to be assembled into a single jar file before the docker task continues executing.
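As a side note, the <<= operator used above is deprecated in more recent sbt releases; on a newer sbt, the same dependency can be expressed with :=, roughly as follows.

docker := (docker dependsOn (AssemblyKeys.assembly in core)).value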

Second, we have defined what the Dockerfile will look like. Based on this file, docker will then create the corresponding Docker image.

The artifact variable points to the fat jar created by the assembly task on core, and we add this fat jar into the image we are building using the Docker add instruction.

The result of this task will be a Docker image that contains the fat jar of core along with any other library dependencies.
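For reference, the Dockerfile that these settings generate should look roughly like the following; the exact jar name depends on core’s assembly settings and is only illustrative.

FROM java
ADD core-assembly-1.0.jar /app/core-assembly-1.0.jar
ENTRYPOINT ["java", "-jar", "/app/core-assembly-1.0.jar"]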

Now we need to indicate that the two modules we want images for will use these settings (dockerSettings).

lazy val jsonProcessor = project.in(file("json-processor"))
  .aggregate(core)
  .dependsOn(core)
  .enablePlugins(sbtdocker.DockerPlugin, JavaServerAppPackaging)
  .settings(dockerSettings)

lazy val jsonSub = project.in(file("json-sub"))
  .aggregate(core)
  .dependsOn(core)
  .enablePlugins(sbtdocker.DockerPlugin, JavaServerAppPackaging)
  .settings(dockerSettings)

Notice we have added the dockerSettings to each of the modules.

After this, we should be able to run:

sbt "jsonProcessor/docker"

or

sbt "jsonSub/docker"

and the created images should be ready to be used.

At this point we are using the sbt-docker plugin, but remember that the root project does not activate the plugin, which means we will always need to specify the project name when running the docker task. In order to avoid this, we also added an sbt command alias to simplify the process.

addCommandAlias("dockerize", ";jsonProcessor/docker;jsonSub/docker")

After adding this alias, we should be able to execute:

sbt dockerize

on our root project and sbt will know exactly what to do.

There are also other interesting sbt-docker options that you might want to use, but they are completely optional. We wanted to create two images for each module: one whose name is the module name and whose tag is the module version, and a second one with the same name but tagged latest. This is very helpful if we are planning to use docker-compose, since we don’t have to change the tag in the compose file every time the project version changes. We can simply use latest all the time and keep the version-tagged images as a history of all images we have ever created.

In order to do this, we changed our dockerSettings as follows:

lazy val dockerSettings = Seq(
  docker <<= (docker dependsOn (AssemblyKeys.assembly in core)),
  dockerfile in docker := {
    val artifact: File = (AssemblyKeys.assemblyOutputPath in AssemblyKeys.assembly in core).value

    val artifactTargetPath = s"/app/${artifact.name}"

    new Dockerfile {
      from("java")
      add(artifact, artifactTargetPath)
      entryPoint("java", "-jar", artifactTargetPath)
    }
  },
  imageNames in docker := Seq(
    // Sets the latest tag
    ImageName(s"${organization.value}/${name.value}:latest"),
    // Sets a name with a tag that contains the project version
    ImageName(
      repository = name.value,
      tag = Some("v" + version.value)
    )
  )
)

Basically, we just added the imageNames section to the settings. Again, this will create two images for each project, as explained above.
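As a quick sanity check, and assuming a module whose image ends up named json-processor at version 1.0 (the actual repository name depends on your name and organization settings), the resulting images can be listed and run with the standard Docker CLI.

docker images | grep json-processor
docker run --rm json-processor:v1.0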

To finish, we did not want to build these images if something is wrong with our project. We want to be sure that the images contain working, validated code, so we extended our sbt command alias accordingly.

addCommandAlias("dockerize", ";clean;compile;test;jsonProcessor/docker;jsonSub/docker")

It will now do all the necessary validations before creating the images.

By automating this process, we have removed the tedious, manual steps of productization. With a single command we get an entire pipeline, so our code can easily be deployed to our Docker environment. We have also shown how to create sbt task dependencies and how to access each of the pieces we required.

Remember that we are using the sbt-docker plugin along with sbt-native-packager and sbt-assembly. sbt-native-packager is still missing some of the functionality of the sbt-docker plugin and is less flexible than the latter.

We strongly recommend taking a look at other tutorials on this topic, since they offer more information about other parts and processes you might need. However, building images with dependencies on submodules of your project is something we have not found anywhere else.

We hope this small guide helps you on your journey to the Docker world.
