Let’s say, for example, that you started a new job as a DevOps/Dev/SRE/etc. at a company that created a new smart speaker (think Amazon Echo or Google Home). The device gained a lot of traction and you quickly find yourself with a million customers, each with a single device at home. Sounds great, right? The only problem left is: how do you handle deployments to a million devices located all across the world?
As you may have guessed from the title, I want to discuss the last option from the list.
Nebula Container Orchestrator aims to help devs and ops treat IoT devices just like distributed Dockerized apps. Its aim is to act as a Docker orchestrator for IoT devices as well as for distributed services, such as CDNs or edge computing, that can span thousands (or even millions) of devices worldwide, and it does all of that while being open source and completely free.
Different requirements lead to different orchestrators
When you think about it, a distributed orchestrator has the following requirements:
This is quite different from the big three orchestrators (Kubernetes, Mesos & Swarm), which are designed to pack as many different apps/microservices as possible onto the same servers in a single (or relatively few) data centers; as a result, none of them provides a truly latency-tolerant connection, and the scalability of Swarm & Kubernetes is limited to a few thousand workers.
Nebula architecture
Nebula was designed around a stateless, RESTful manager microservice that provides a single point from which to manage the cluster, and a single point from which all devices check for updates. Configuration updates carry a Kafka-inspired monotonic ID and are distributed in a pull-based fashion; this ensures that changes to any of the applications managed by Nebula are pulled by all managed devices at the same time, and that every device always converges on the latest version of the configuration (thanks to the monotonic ID). All data is stored in MongoDB, which is the single source of truth for the system. On the device side, a worker container runs on each device and is in charge of starting, stopping and changing the other containers running on that device. Because each component can be scaled out, Nebula can grow as much as you require.
You can read more about Nebula’s architecture at https://nebula.readthedocs.io/en/latest/architecture/
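To make the pull model a bit more concrete, here is a minimal sketch of what a worker’s check-in loop conceptually does. This is an illustration only, not the worker’s actual code: the endpoint path and the name of the ID field in the response are assumptions (the real worker container handles all of this for you, as shown in the example further down):
# Illustration only -- the endpoint path and the "id" field name are assumptions, not Nebula's actual worker code
LAST_ID=0
while true; do
  CONFIG=$(curl -s -u nebula:nebula http://127.0.0.1/api/v2/device_groups/example/info)
  NEW_ID=$(echo "$CONFIG" | jq -r '.id')    # Kafka-inspired monotonic configuration ID
  if [ "$NEW_ID" -gt "$LAST_ID" ] 2>/dev/null; then
    # a newer configuration exists -> start/stop/replace the managed containers, then remember the ID
    LAST_ID="$NEW_ID"
  fi
  sleep 5                                   # the check-in interval
done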
Nebula features
As it was designed from the ground up to support distributed systems, Nebula has a few neat features that allow it to control distributed IoT systems:
A little example
The following command will install a Nebula cluster for you to play with and will create an example app as well; it requires Docker, curl & docker-compose to be installed:
curl -L "https://raw.githubusercontent.com/nebula-orchestrator/docs/master/examples/hello-world/start_example_nebula_cluster.sh" -o start_example_nebula_cluster.sh && sudo sh start_example_nebula_cluster.sh
But let’s go over what this command does to better understand the process:
1. The script starts 3 containers (via docker-compose):
a) A MongoDB container — the backend DB where the current state of Nebula apps is saved.
b) A manager container — a RESTful API endpoint; this is where the admin manages Nebula from & where devices pull the latest configuration state from to match against their current state.
c) A worker container — this normally runs on the IoT devices; only one is needed per device, but since this is just an example it runs on the same server as the management layer components.
It’s worth mentioning the “DEVICE_GROUP=example” environment variable set on the worker container; this DEVICE_GROUP variable controls which Nebula apps will be attached to the device (similar to the pod concept in other orchestrators).
2. The script then waits for the API to become available.
3. Once the API is available the script sends the following 2 commands:
curl -X POST \
  http://127.0.0.1/api/v2/apps/example \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{"starting_ports": [{"81":"80"}],"containers_per": {"server": 1},"env_vars": {},"docker_image": "nginx","running": true,"volumes": [],"networks": ["nebula"],"privileged": false,"devices": [],"rolling_restart": false}'
This command creates an app named “example” and configures it to run an nginx container listening on port 81. As you can see, it can also control other parameters usually passed to the docker run command, such as env vars, networks or volume mounts.
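For illustration (this is not part of the install script, and the app name, port, env var and volume path below are made up), the same endpoint can create an app that also passes env vars and a bind-mounted volume; the JSON keys are the same ones shown above, and the volume string is assumed to follow Docker’s usual host:container[:mode] bind format:
curl -X POST \
  http://127.0.0.1/api/v2/apps/example-with-config \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'content-type: application/json' \
  -d '{"starting_ports": [{"8080":"80"}],"containers_per": {"server": 1},"env_vars": {"APP_MODE": "demo"},"docker_image": "nginx","running": true,"volumes": ["/tmp/demo:/usr/share/nginx/html:ro"],"networks": ["nebula"],"privileged": false,"devices": [],"rolling_restart": false}'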
curl -X POST \
  http://127.0.0.1/api/v2/device_groups/example \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{"apps": ["example"]}'
This command creates a device_group that is also named “example” & attaches the app named “example” to it.
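Since “apps” is a list, a device_group can carry more than one app. For illustration only (the group name “fleet” and the second app are made up, and that second app would have to be created first), attaching two apps to one group looks like this:
curl -X POST \
  http://127.0.0.1/api/v2/device_groups/fleet \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'content-type: application/json' \
  -d '{"apps": ["example", "example-with-config"]}'
Any device whose worker was started with DEVICE_GROUP=fleet would then run both apps side by side.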
4. After the app & device_group are created on the Nebula API, the worker container will pick up the changes to the device_group it has been configured to be part of (“example” in this case) and will start an Nginx container on the server. You can run “docker logs worker” to see the Nginx container being downloaded before it starts (this might take a bit if you’re on a slow connection), and after it’s completed you can access http://<server_exterior_fqdn>:81/ in your browser to see it running.
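For example, from the server itself (using localhost in place of the external FQDN):
sudo docker logs -f worker     # follow the worker while it pulls & starts the nginx container
curl -I http://127.0.0.1:81/   # should return HTTP/1.1 200 OK from nginx once the app is up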
Now that we have a working Nebula system running, we can start playing around with it to see its true strengths. For example, attaching another device to the cluster only requires running the worker container on it:
sudo docker run -d --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --env DEVICE_GROUP=example \
  --env REGISTRY_HOST=https://index.docker.io/v1/ \
  --env MAX_RESTART_WAIT_IN_SECONDS=0 \
  --env NEBULA_MANAGER_AUTH_USER=nebula \
  --env NEBULA_MANAGER_AUTH_PASSWORD=nebula \
  --env NEBULA_MANAGER_HOST=<your_manager_server_ip_or_fqdn> \
  --env NEBULA_MANAGER_PORT=80 \
  --env nebula_manager_protocol=http \
  --env NEBULA_MANAGER_CHECK_IN_TIME=5 \
  --name nebula-worker nebulaorchestrator/worker
It’s worth mentioning that a lot of the env vars passed in the command above are optional (with sane defaults), and that there is no limit on how many devices we can run this command on; at some point you might have to scale out the managers and/or the backend DB, but those are not limited either.
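As a quick sanity check on any newly attached device (the nebula-worker name comes from the command above; the app container’s exact name is chosen by the worker):
sudo docker ps   # should list the nebula-worker container plus an nginx container for the "example" app
With more devices attached, updating the app everywhere at once is just as simple; the next call switches the “example” app’s image from nginx to httpd:alpine, and every device will roll to it on its next check-in: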
curl -X PUT \
  http://127.0.0.1/api/v2/apps/example/update \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "docker_image": "httpd:alpine"
  }'
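Within a check-in interval or so (plus the time to pull the new image), every device in the “example” device_group should replace its nginx container with one based on httpd:alpine; you can confirm it on any of them:
sudo docker ps --filter ancestor=httpd:alpine   # the app container should now be running the httpd:alpine image
curl http://127.0.0.1:81/                       # port 81 is now served by Apache httpd ("It works!") instead of nginx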
Hopefully this little guide allowed you to see the need for an IoT Docker orchestrator and its use case. Should you find yourself interested in reading more about it, you can visit the Nebula Container Orchestrator site at https://nebula-orchestrator.github.io/ or skip right ahead to the documentation at https://nebula.readthedocs.io