Image courtesy of Pexels: Troy Squillaci
In this series of articles, we explore the convergence between the IT and Telecom industries, commonly referred to as "Telco Softwarisation".
This is the third and final article in the series titled "DevOps and Telco Softwarisation". I have previously talked about DevOps fundamentals and how DevOps can be used in practice, with an example of a simple CI/CD pipeline.
First, I would like to give a short introduction to "Future Networks" and explain why the current focus of the networking industry (including telecommunications) is on improving software and hardware technologies while transitioning towards 5G and 6G networks.
Future networks go beyond 5G
Future Networks refers to the continuous improvement of networking towards effectiveness in data-heavy use cases, with requirements for real-time exchange across regional, global and even extra-terrestrial distances (!). The main advantages of 5G networks are:
- higher data transmission speeds and lower latency, and therefore greater capacity for executing processes remotely
- greater number of connected devices
- the ability to implement virtual networks, i.e. network slicing, providing connectivity tailored to specific needs
The "Art of Software" has already been very useful in this challenging quest. Software Defined Networks (SDN), Network Function Virtualization (NFV) and Edge Computing are among the most prevalent technologies used in the evolution of current networks, services and application platforms, particularly in the networking industry.
"Software art is a work of art where the creation of software, or concepts from software, play an important role; for example software applications which were created by artists and which were intended as artworks. As an artistic discipline software art has attained growing attention since the late 1990s."
Several challenges across multiple industries can be solved by using solutions, techniques, tools and practices from other industries. Multi-disciplinary initiatives have proved successful, with impressive results. For example, Automated Guided Vehicles are the result of efficient networking, software development, artificial intelligence and robotics. Similarly, the aim is for networks to become autonomous, self-healing and self-optimising by leveraging software, data and artificial intelligence.
On the business side, future networks enable a new marketplace for network apps and services, allowing for the rapid introduction of customer services, including more personalised services, which ultimately leads to higher customer satisfaction and Quality of Experience (QoE).
The Internet Protocol as we know it today is being rethought, with proposals to restructure how packets of information are handled. The outcomes of the aforementioned transformations are improved communication services offered by companies to end users and customers.
Onboarding Softwarised Concepts
Following the introduction to Future Networks, it is now time to acknowledge and describe, at a high level, the technological concepts used in the process of softwarising the networks. The concepts mentioned below are borrowed from, or originated in, the IT industry. They have proved effective across multiple occasions and projects, and therefore a short description of each one is helpful:
- Microservices, or a microservices architecture, is an architectural style that structures an application as a collection of services with the following characteristics: highly maintainable and testable, loosely coupled, independently deployable, organised around business capabilities, and owned by a small team.
- Disaggregated networks follow the same lines as microservices: the idea is to break up a monolithic system for cloud and network service providers while adopting agile, disaggregated, open networking. By unbundling a single network function into separate hardware and software, and by harnessing software capabilities to install any operating system on open white boxes, operators can achieve new levels of efficiency. The main benefits are flexibility, avoiding vendor lock-in and enabling access to inexpensive products.
- Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.
- Infrastructure as Code (IaC) is the practice of provisioning and managing data centres (infrastructure) through machine-readable definition files, rather than through physical configuration of equipment such as bare-metal servers, virtual machines and configuration resources. These files can be added to any version control system, such as Git, and can therefore be managed in the same way as our application code. Terraform and CloudFormation are the biggest names in this space, with Terraform being the leading player (a minimal sketch follows this list).
- Containerisation is the packaging of software and its dependencies into one unit, leaving out irrelevant functionality and coupling with other software. There is no better place to learn about containers than https://www.docker.com/resources/what-container. By the way, Docker is only one of a few containerisation technologies; however, it has evolved into the leader and the most commonly used in the industry (see the Compose sketch after this list).
- Monitoring across the stack, i.e. at the infrastructure level and the application level, is a major requirement as entire architectures move from monolithic to microservices. An extensive set of tools is available and can be integrated with the reference system. For example, New Relic, Splunk, Datadog, Dynatrace and PagerDuty provide tools to monitor application and infrastructure performance. Additionally, development tools such as Kibana, Grafana and Fluentd can integrate with the architecture to help in the development process as well as in production.
- Automation goes hand in hand with all of the above technologies and can be found in all stages of the software development process, as well as during the monitoring of a software system. It is also the most important factor of the DevOps mindset, which aims to improve team efficiency. I have previously talked about DevOps in the first article of this series.
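To make the IaC idea concrete, here is a minimal sketch of a CloudFormation template (one of the tools named above), written in YAML. Everything in it is illustrative rather than taken from any real project: the resource name, instance type and AMI ID are placeholders.

# Minimal CloudFormation template: one EC2 instance declared as code.
# All names and values are illustrative placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example of infrastructure declared as code
Resources:
  DemoServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      Tags:
        - Key: Name
          Value: telco-demo-node

Because the template is plain text, it can live in Git next to the application code and go through the same review and pipeline process.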
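Containerisation can be sketched in a similar spirit with a Docker Compose file that packages two hypothetical services and wires them together; the service names, images and ports below are assumptions for illustration only.

# docker-compose.yml: two illustrative containerised services.
version: "3.8"
services:
  subscriber-api:
    image: registry.example.com/subscriber-api:1.0.0   # placeholder image
    ports:
      - "8080:8080"
    depends_on:
      - session-store
  session-store:
    image: redis:7-alpine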
CI/CD in Telco softwarisation projects
A very inspirational article by Bassem Aly was the reason I wanted to cover this topic too, and hence I created the "DevOps and Telco Softwarisation" series. Bassem's article talks about CI/CD and how it helps ship higher-quality telco services while reducing the time taken to roll out a new service, i.e. Time To Market (TTM).
The obvious answer is "architecting a good DevOps CI/CD strategy in Telco!".
However, there are many factors to consider when implementing a CI/CD strategy for a Telco project. The following bullet points are the result of a brainstorming session with colleagues on my current project. My findings, in comparison with Bassem's article, are as follows:
- Deployment requirements may vary according to the hardware and software version requirements in place for the specific region where the system is used.
- A mix of cloud and on-site (bare-metal) servers can increase the complexity of the deployments. Distinct configurations for each type of infrastructure are a possible solution (see the sketch after this list).
- The move from legacy on-site systems to Cloud Native applications cannot be done in one step, although it is the obvious end goal, since it simplifies system setup while improving efficiency and performance (!)
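One way to keep such distinct configurations manageable is a shared job template, which GitLab CI's extends keyword allows. The sketch below is a hypothetical fragment, not taken from the project pipeline; the deploy script and target names are assumptions.

# Hypothetical GitLab CI fragment: one deploy template, two infrastructure targets.
.deploy-template:
  stage: deploy
  script:
    - ./deploy.sh --target "$TARGET"   # placeholder deploy script

deploy-cloud:
  extends: .deploy-template
  variables:
    TARGET: cloud

deploy-baremetal:
  extends: .deploy-template
  variables:
    TARGET: baremetal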
CI/CD pipeline in a large-scale 5G Telco softwarisation project
A CI/CD pipeline has several stages, according to the project's needs. Each stage contains one or more jobs that may or may not be executed, depending on the service requirements (see the fragment below).
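In GitLab CI, such conditional execution is typically expressed with the rules keyword. A minimal sketch, assuming a hypothetical lint job and a make lint target:

# Hypothetical fragment: run the job only when Go sources change in a merge request.
lint-go:
  stage: analyze
  script:
    - make lint          # placeholder command
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - "**/*.go"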
Gitlab CI/CD Telco pipeline stages and jobs
.gitlab-ci.yml is responsible for creating the pipeline. GitLab reads this file and generates the structure shown above.
stages:
  - build
  - analyze
  - unit-test
  - build-docker
  - package
  - deploy
  - e2e-test

build:
  stage: build
  image: golang:1.19            # image omitted in the original; illustrative placeholder
  script:
    - make build
  artifacts:
    paths:
      - bin/                    # artifact path omitted in the original; placeholder

sca-golint:
  stage: analyze
  image: golang:1.19            # placeholder image
  script:
    - sca golint -set_exit_status

sast-gosec:
  stage: analyze
  image: golang:1.19            # placeholder image
  script:
    - sca gosec
  variables: {}                 # variables omitted in the original

helm-lint:
  stage: analyze
  image:
    name: alpine/helm:3.11.1    # placeholder image
    entrypoint: [""]
  script:
    - helm lint service

unit_test:
  stage: unit-test
  image: golang:1.19            # placeholder image
  script:
    - make test-cicd

dockerize:
  stage: build-docker
  image: docker:24              # placeholder image
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .   # script omitted in the original; illustrative

dockerize-e2e:
  stage: build-docker
  image: docker:24              # placeholder image
  script:
    - docker build -f Dockerfile.e2e -t "$CI_REGISTRY_IMAGE/e2e:$CI_COMMIT_SHORT_SHA" .   # illustrative

helm:
  stage: package
  image:
    name: alpine/helm:3.11.1    # placeholder image
    entrypoint: [""]
  before_script:
    - /ci-tools/dlhelper dl -t helm   # project-internal helper, kept as-is
  script:
    - semver print                    # project-internal versioning helper
    - helm template service
    - helm package service --app-version=$(semver docker) --version=$(semver helm)

deploy:
  stage: deploy
  image:
    name: alpine/helm:3.11.1    # placeholder image
    entrypoint: [""]
  before_script: []             # commands omitted in the original
  script:
    - helm upgrade --install service ./service   # chart path assumed
  when: manual                  # manual step, as it needs dynamic parameters (see below)

e2e-test:
  stage: e2e-test
  image: golang:1.19            # placeholder image
  before_script: []             # commands omitted in the original
  script:
    - make setup-tools
    - make build
    - make test-e2e
  when: manual
The pipeline above consists of seven stages:
- Build - builds the application code.
- Analyze - runs a set of jobs for Helm linting and code security checks.
- Unit-test - tests the internals of each microservice, irrespective of outside dependencies.
- Build-docker - containerises the application.
- Package - prepares/packages the Helm chart and pushes it to Artifactory.
- Deploy - deploys the previously created Helm chart into the selected environment. In this specific case, the application is deployed to a QA environment, where the test team executes automated and manual functional and service tests. It is a manual step because it requires importing dynamic parameters (see the fragment after this list).
- E2E-test - this is another deployment, but this time to the "staging" environment, where all services meet and the project's e2e tests are executed.
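GitLab can also track such deployments explicitly via the environment keyword. A minimal sketch, assuming the QA environment name used above and a chart directory called service:

# Hypothetical fragment: tagging a manual deploy job with a GitLab environment.
deploy-qa:
  stage: deploy
  script:
    - helm upgrade --install service ./service   # chart path assumed
  environment:
    name: qa
  when: manual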
Test coverage should follow the Test Pyramid paradigm.
Last, and certainly not least, the technologies used for the pipeline are quite broad, and they are shown in the picture below. An on-premise GitLab and GitLab CI are used for repository management and Continuous Delivery.
Go, Python and Java are the three programming languages used to implement the various microservices. Bash is also used for scripting, and Docker for containerising the services. Robot Framework has been a useful test automation framework that makes QA people's lives easier. Several development environments are built using AWX, Ansible, Terraform, Kubernetes and more.
As mentioned above, Helm is a packaging mechanism that gives a good level of flexibility when deploying to the Kubernetes clusters. All artefacts, i.e. test reports and Helm charts, are stored in an on-premise Artifactory repository.
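Much of that flexibility comes from per-environment values files. As a hedged illustration (the chart values below are assumptions, not the project's actual configuration), a values-qa.yaml might override just what differs in QA:

# values-qa.yaml: hypothetical per-environment overrides for the chart.
replicaCount: 1
image:
  tag: "1.4.2-rc1"   # placeholder image tag
resources:
  limits:
    memory: 256Mi

A deployment then selects the overrides with helm upgrade --install service ./service -f values-qa.yaml.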
CI/CD pipeline technologies, from repository management to deployment in the environment
Conclusion
In this series I have demonstrated the usefulness of DevOps practices in complex software projects. Starting with the DevOps foundations, we explored the different aspects and highlights. Moving on, we saw what a simple CI/CD pipeline looks like and how automated deployment works.
Finally, after looking at "Future Networks", we discussed the need for re-using technologies and even mentalities from different industries, hence the plan for multi-disciplinary action between the Networking and IT industries.
Please let me know in the comments, or email me directly, if you enjoyed reading this article and would like me to write about any other relevant topics.
Also, please don't forget to clap/give feedback. It means a lot to me :)