Building an open source Scala gRPC/REST HTTP Proxy for Kafka (II)

Day 1 of the journey on building my first Scala GraalVM ZIO application


Continued from

The Iron Rolling Mill — Adolph Menzel

Day 1: Getting builds under control

Now that we have a basic project set up and somewhat building, we need to stabilise it a bit more, so we can spend all our future energy on actually writing the code. This means CI/CD, dockerising and versioning, but more importantly getting the binary stable on *nix and Mac environments. That means diving deeper into GraalVM and its available build tools. So far we (via the native packager) have only used the native-image tool from Graal, but we need a way to start defining the Scala/Java reflections via the native-image-agent.
Fortunately, reflection is not encouraged in Scala and many libraries use very little of it, but very little is not the same as none.
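One way to start collecting those reflection configs is to run the application with GraalVM's tracing agent attached. A hedged sbt sketch of what that could look like (the output directory and forking setup below are assumptions, not the project's actual config):

```scala
// Sketch: fork the JVM for `run` and attach the GraalVM tracing agent, so it
// records observed reflection/resource usage into JSON config files that
// native-image later picks up from META-INF/native-image on the classpath.
run / fork := true
run / javaOptions += "-agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image"
```

Running the app through its normal code paths then produces files like reflect-config.json that native-image can consume.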

Local & remote

As we’ve seen, we can build the native image locally (on Mac) if we do all these prerequisite steps, but I just discovered this awesome sbt plugin, sbt-native-image, that basically does everything for you, downloading all binaries using coursier, without needing to install & run Graal and its tools locally in the project. This would have saved me a lot of time, but the code is not wasted. The biggest strength of this library is also its biggest drawback: it does everything for you, but leaves you with little control over the end product, especially if the goal is dockerising it.

So for now the approach is twofold, and why not: we keep both the sbt-native-packager and sbt-native-image plugins and use each of their strengths.

We do define some basic GraalVM settings that can be used by both approaches. I don’t know if we’ll keep all these settings as is, but for now they work:

val jvmVersion = "11"
val graalVersion = "21.0.0.2"
val baseGraalOptions = Seq(
  "--verbose",
  "--no-fallback",
  "--no-server",
  "--install-exit-handlers",
  "--allow-incomplete-classpath",
  "--enable-http",
  "--enable-https",
  "--enable-url-protocols=https,http",
  "--initialize-at-build-time",
  "--report-unsupported-elements-at-runtime",
  "-H:+RemoveSaturatedTypeFlows",
  "-H:+ReportExceptionStackTraces",
  "-H:-ThrowUnsafeOffsetErrors",
  "-H:+PrintClassInitialization"
)

Next we’ll define the local graal builds

lazy val graalLocalSettings = Seq(
  nativeImageVersion := graalVersion,
  nativeImageJvm := s"graalvm-java$jvmVersion",
  nativeImageOptions ++= baseGraalOptions,
  nativeImageOutput := file("output") / name.value
)

This will create a binary (output/service) of our service module by running sbt service/nativeImage. The nice thing is that, in contrast with sbt-native-packager, this plugin shows the progress of the build while running. Running output/service will yield immediate results.

Hello, world from thread: zio-default-async-1!
Hello, world from thread: zio-default-async-2!

Also: I’ve updated the hello world example a bit to remove the user prompt and include some threading.
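The updated example itself isn’t shown here. The real version runs on ZIO (hence thread names like zio-default-async-1 from ZIO’s default executor), but as a rough, plain-Scala stand-in that produces the same kind of output with stdlib Futures (everything below is an illustrative sketch, not the project’s code):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object Hello {
  // Greet from a couple of asynchronous tasks, reporting the executing thread.
  def greet(): Seq[String] = {
    val greetings = (1 to 2).map { _ =>
      Future(s"Hello, world from thread: ${Thread.currentThread().getName}!")
    }
    Await.result(Future.sequence(greetings), 5.seconds)
  }

  def main(args: Array[String]): Unit = greet().foreach(println)
}
```

The thread names will differ from the ZIO output (they come from the global ExecutionContext rather than ZIO’s runtime), but the idea is the same: no user prompt, and the greeting reports which thread ran it.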

For the remote builds we keep sbt-native-packager, but make sure the build is run from a Graal container, resulting in a Linux binary wrapped in a basic Alpine Docker image. This is of course the actual goal. The local run is nice for showing off ;-) and for testing without all the Docker stuff.

So we first define how we build our Linux binary, using the containerBuildImage config and adding the "--static" option to the build process.

lazy val graalDockerSettings = Seq(
  GraalVMNativeImage / containerBuildImage := GraalVMNativeImagePlugin
    .generateContainerBuildImage(s"ghcr.io/graalvm/graalvm-ce:java$jvmVersion-$graalVersion")
    .value,
  graalVMNativeImageOptions ++= baseGraalOptions ++ Seq(
    "--static"
  ),
  ...
)

Running sbt service/graalvm-native-image:packageBin will use this image to build a Linux binary and store it in the target folder (by default under target/graalvm-native-image/).

Next we want to inject this binary into a Linux container, to actually run it. I’m using Alpine because it’s pretty lightweight.

lazy val baseImage = "alpine:3.13.1"
lazy val dockerBasePath = "/opt/docker/bin"
lazy val graalDockerSettings = Seq(
  ...
  dockerBaseImage := baseImage,
  dockerEntrypoint := Seq(dockerBinaryPath.value),
  dockerChmodType := DockerChmodType.Custom("ugo=rwX"),
  dockerAdditionalPermissions += (DockerChmodType.Custom("ugo=rwx"), dockerBinaryPath.value),
  mappings in Docker := Seq(
    ((target in GraalVMNativeImage).value / name.value) -> dockerBinaryPath.value
  ),
  ...
)
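Note that these settings reference dockerBinaryPath, which isn’t shown in this post. Presumably it is a setting along these lines (a hypothetical helper, derived from the dockerBasePath defined above):

```scala
// Hypothetical: full path of the binary inside the container,
// combining the base path with the module name.
lazy val dockerBinaryPath = Def.setting(s"$dockerBasePath/${name.value}")
```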

Running sbt service/docker:publishLocal will take the binary created in the previous step, put it in the dockerBasePath, update the execute permissions and build a new local Docker image called service.

You can verify the results by running docker run --rm -it service, which will yield the same results as the native run from before.

Add these configs to your project, including the plugins, and everything is set up to move on to the CI/CD step.

CI/CD

Next we want to get a basic pipeline running that performs all these steps on push and also publishes the resulting Docker image to GitHub’s container registry.

We probably want to adhere to some more advanced release flow later in the project, but for now let’s say that every push will result in a snapshot build.

For a lot of the automated steps, version control and release management, there is yet another great sbt plugin that helps with this, called sbt-release.

Normally, running sbt release will run a predefined series of steps, like: test, bump version, build, publish, commit & push. We need to adjust it a bit, because we want to add building the Graal binary and, to be fair, also a different flow for releasing snapshots vs official releases.

So we start by putting our product version in version.sbt: ThisBuild / version := "0.1.0-SNAPSHOT", followed by creating a specific ReleaseStep for building the Graal binary & Docker image.

def publishNativeDocker(project: Project): ReleaseStep =
  ReleaseStep(
    action = { beginState: State =>
      val extracted = Project.extract(beginState)
      Seq(
        (state: State) => extracted.runTask(packageBin in GraalVMNativeImage in project, state),
        (state: State) => extracted.runTask(publish in Docker in project, state)
      ).foldLeft(beginState) {
        case (newState, runTask) => runTask(newState)._1
      }
    }
  )
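This step then needs to be wired into a release flow. The CI step later in this post runs sbt bumpSnapshot, whose exact definition isn’t shown; a hedged sketch of what it might look like using sbt-release’s building blocks (the step selection and alias are guesses, not the project’s actual flow):

```scala
import sbtrelease.ReleaseStateTransformations._

// Sketch of a snapshot flow: check deps, clean, test, then run the
// custom step defined above to build & publish the native Docker image.
releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  runClean,
  runTest,
  publishNativeDocker(service)
)

// Hypothetical alias so CI can run the whole flow with one command.
addCommandAlias("bumpSnapshot", "release with-defaults")
```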

I’ve also added some custom code from other projects that deals with the specific versioning I’ve become accustomed to and like, but that will be evident from the codebase.

To make the publish step actually work, we need to define the registry and alias for the image we publish:

lazy val graalDockerSettings = Seq(
  ...
  dockerRepository := sys.env.get("DOCKER_REPOSITORY"),
  dockerAlias := DockerAlias(
    dockerRepository.value,
    dockerUsername.value,
    s"$baseName/$baseName-${name.value}".toLowerCase,
    Some(version.value)
  ),
  dockerUpdateLatest := true,
  dockerUsername := sys.env.get("DOCKER_USERNAME").map(_.toLowerCase)
)

As you can see, some of the config comes from the project and some comes from the GitHub Actions workflow. But assuming the DOCKER_REPOSITORY is ghcr.io (at the time of writing it needs to be) and DOCKER_USERNAME is TomLous, the final alias will look something like ghcr.io/tomlous/prokzio/prokzio-service:0.2.0-044ef4d-SNAPSHOT.

Finally we can put this all together in a nice GitHub Actions workflow in .github/workflows/ci.yaml.

For now we just run this for every single push to the repo, because: why not:

on:
  push:
    branches: ['**']
    tags: [v*]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      ...

Next we need a simple step that checks out the code:

- name: Checkout code
  uses: actions/checkout@v2

Installs sbt & java

- name: Set Java / Scala
  uses: olafurpg/setup-scala@v10
  with:
    java-version: 11

Sets git & GPG config correctly. The flow will update & commit the version.sbt and will be able to tag (and more) in the future as well.

Add GPG_PRIVATE_KEY & GPG_PASSWORD to the secrets in the Github repo

- name: Import GPG key & set Git config
  id: import_gpg
  uses: crazy-max/ghaction-import-gpg@v3
  with:
    gpg-private-key: ${{ secrets.GPG_PRIVATE_KEY }}
    passphrase: ${{ secrets.GPG_PASSWORD }}
    git-user-signingkey: true
    git-commit-gpgsign: true
    git-tag-gpgsign: true
    git-push-gpgsign: false

Log into the Github docker registry

- name: Docker Login
  uses: azure/docker-login@v1
  with:
    login-server: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

and finally run the release flow defined in our build.sbt

- name: Test, Bump & Deploy
  run: sbt bumpSnapshot
  env:
    DOCKER_REPOSITORY: ghcr.io
    DOCKER_USERNAME: ${{ github.repository_owner }}
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  shell: bash

We push and see a successful build.

Finally we can verify our build by running

docker run --rm -it ghcr.io/tomlous/prokzio/prokzio-service:latest

And voilà, the exact same results.

Conclusion Day 1

Day 1 was the longest day so far. It was way more work than I anticipated, especially switching between the two plugins before concluding to keep them both (for now) and leverage each one’s individual strengths.

There is also more stuff I want to incorporate in the automated tooling: release management, changelogs, GitHub releases, code linting & code coverage, and much more, but let’s leave it at this for now.

I’m quite happy with the result and I’m excited to actually start focusing on the actual code and start diving deep into the ZIO ecosystem.

[Onwards to Day 2, coming soon…]

Freelance Data & ML Engineer | husband + father of 2 | #Spark #Scala #BigData #ML #DeepLearning #Airflow #Kubernetes | Shodan Aikido
