Tech Story Teller
It was a big year for DevOps adoption. More organizations than ever are ditching old leadership philosophies, outdated methodologies, and legacy processes in favor of DevOps to gain speed and agility in today's constantly evolving technology landscape.
So what have we learned? If you're early in your journey, you'll want to understand the most important lessons learned by companies that are well down the trail with DevOps. We've rounded up many of them below.
DevOps isn't an individual role, it's a group effort. For decades, development and operations were isolated silos: communication between the two was limited, and developers and system administrators worked mostly separately within a project. Short sprints and frequent releases every fortnight require a new approach and new team roles. Today, DevOps is one of the most discussed software development approaches, practiced at Facebook, Netflix, Amazon, Etsy, and many other industry-leading companies. It blends developers and ops engineers who can do each other's work.
Your infrastructure should be able to detect a failed deployment and roll back automatically. If it can't, you're doing it wrong. Reducing the risk of failure should be a priority. Using patterns to define consistent environments eliminates the failures caused by configuration inconsistencies. Operations and development must work together on the provisioning process so that developers create their test environments the same way they're created in production.
As the ADT study points out, organizations increasingly look to continuous deployment to recover from failure. Technologies like IBM UrbanCode Deploy can help transform the deployment process, while practices like shift-left and DevOps tighten the feedback loop and reduce failure.
And always design with autoscaling in mind. Automate all the things: one-click deploy, one-click revert.
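The deploy-and-revert idea above can be sketched as a small control loop: deploy the new version, verify it with a health check, and roll back automatically if the check fails. This is a minimal illustration, not a real deployment tool; the `deploy`, `health_check`, and `rollback` hooks are hypothetical placeholders for whatever your platform provides (a blue-green swap, an ASG update, etc.).

```python
# Minimal sketch of an automated deploy-with-rollback loop.
# The deploy/health-check/rollback hooks are placeholders for whatever
# your platform actually provides (e.g. a blue-green swap).

def release(new_version, current_version, deploy, health_check, rollback):
    """Deploy new_version; if it fails its health check, roll back."""
    deploy(new_version)
    if health_check(new_version):
        return new_version          # promotion succeeded
    rollback(current_version)       # automatic, no human in the loop
    return current_version

if __name__ == "__main__":
    deployed = []
    result = release(
        "v2", "v1",
        deploy=deployed.append,
        health_check=lambda v: False,   # simulate a failed deployment
        rollback=deployed.append,
    )
    print(result)  # v1, because the failed deploy was rolled back
```

The key point is that the rollback path is part of the same automated flow as the deploy path, so a failure never needs a human to notice it first.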
Do not automate a bad process, you end up with an automated bad process — [Credits: DOES18]
DevOps means different things to different people. It's not that the people who say "DevOps is a set of tools" are right and the people who say "DevOps is a culture" are wrong. Everyone who is considered a "DevOps professional" will have a different answer, and most of them will be right. It's not really a question of fact, or of opinion; it's a question of focus, of concerns and experience, of temperament, and more. How a DevOps professional answers "What does DevOps mean to you?" says a great deal about who they are and how they will do their work.
If a task can't be automated, remove it from the task list. DevOps came into practice to reduce manual intervention by automating tasks; if a task can't be automated, there is no point in keeping it in the workflow.
Communication is more important than tooling, because it can make or break any organization. In May 2013, the PMI released its annual "Pulse of the Profession" report, which included the following jarring statistic: https://techbeacon.com/devops/devops-importance-clear-communication
“$135 million is at risk for every $1 billion spent on a project. Further research on the importance of effective communications uncovers that a startling 56 percent ($75 million of that $135 million) is at risk due to ineffective communications.”
Are these skills (and more) critical for DevOps folks? Of course. Today, however, I'm going to argue that communication and interpersonal skills are more important than pure technical proficiency.
“Shift left” impacts the entire software lifecycle. Shift the focus of security to the left in the development life cycle. Shift-left testing is an increasingly popular approach to testing applications and software in which testing is performed earlier in the project timeline (hence “shifted left”), and it is a fundamental aspect of the DevOps approach. It is critical because it places responsibility for testing software and ensuring the stability of applications on the developers at the earliest feasible stage of the development process.
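Shifting security left can start very small: run automated checks at the earliest stage, such as a pre-commit hook or the first CI step, so feedback reaches the developer in minutes. Here is an illustrative sketch of a hardcoded-secret scan; the patterns and function name are examples I'm inventing for illustration, and real scanners are far more thorough.

```python
import re

# Illustrative shift-left check: scan source lines for hardcoded secrets
# before the code ever reaches the main branch. The patterns are examples;
# dedicated secret scanners cover far more cases.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"(?i)(password|secret)\s*=\s*['\"]\w+"),  # inline credential
]

def find_secrets(lines):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for number, line in enumerate(lines, start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((number, line))
    return hits
```

Wired into a pre-commit hook or the first pipeline stage, even a check this simple moves a whole class of security mistakes from "found in review (or production)" to "found before merge".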
Here are the top four tips for DevOps teams to shift security left:
The core technology driving the initiative of building a CI/CD pipeline from scratch in DevOps was Jenkins. Jenkins, an open-source, Java-based CI/CD tool released under the MIT License, is the tool that popularized the DevOps movement and has become the de facto standard. It is just one of many open-source CI/CD tools that you can leverage to build a DevOps pipeline.
Jenkins is the obvious choice due to its flexibility, openness, powerful plugin ecosystem, and ease of use.
Here's what a DevOps process looks like with a CI/CD tool.
The CI/CD pipeline is one of the simplest and best practices for DevOps teams to implement in order to deliver code changes more frequently and reliably.
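At its core, a CI/CD pipeline is just an ordered sequence of stages that fails fast: if the build or tests break, nothing downstream runs. The toy model below illustrates that shape; the stage names are illustrative, and in practice a Jenkinsfile (or equivalent) drives the same sequence.

```python
# Toy model of a CI/CD pipeline: ordered stages, fail fast, report status.
# In a real setup a Jenkinsfile or similar config drives this sequence.

def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failure."""
    completed = []
    for name, step in stages:
        if not step():
            return {"status": "failed", "stage": name, "completed": completed}
        completed.append(name)
    return {"status": "success", "stage": None, "completed": completed}

if __name__ == "__main__":
    pipeline = [
        ("build", lambda: True),
        ("test", lambda: True),
        ("deploy", lambda: True),
    ]
    print(run_pipeline(pipeline)["status"])  # success
```

The fail-fast property is what makes frequent, reliable delivery possible: a broken change is stopped at the earliest stage instead of reaching production.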
Testers who have the skills to code and automate scripts to test various cases are in massive demand in DevOps, but there is a real skill shortage in the field. Here is 'Why is it difficult to hire for DevOps?'
Utilize Infrastructure as Code templates to build the infra, e.g., CloudFormation, Terraform, etc.
Implement security policies at the time the infrastructure is first built, and close any loopholes before the final deployment.
Add monitoring to the infrastructure as well as the applications. The monitoring system should send alerts on every important event.
Store everything as code and adopt a GitOps mindset for version control.
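The monitoring tip above (alert on every important event) usually reduces to evaluating metric samples against thresholds and pushing an alert when one is exceeded. Here is a minimal sketch of that check; the metric names and thresholds are illustrative, and `notify` stands in for a real alerting channel (PagerDuty, Slack, email, ...).

```python
# Sketch of threshold-based alerting. Metric names, thresholds, and the
# notify callback are illustrative placeholders, not a real monitoring API.

THRESHOLDS = {
    "cpu_percent": 90.0,
    "disk_used_percent": 85.0,
    "http_5xx_rate": 0.01,
}

def check_metrics(samples, notify):
    """Call notify(metric, value) for every sample over its threshold."""
    alerts = []
    for metric, value in samples.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            alerts.append(metric)
            notify(metric, value)
    return alerts
```

In a real system the thresholds themselves would live in version control alongside the rest of the infrastructure definitions, which is exactly the store-everything-as-code, GitOps mindset.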
It is better to create ready-to-use images in AWS and use those images for auto-scaling, rather than running scripted builds from GitHub.
I had a misconception that if you need to SSH into your VM, your automation has failed. But AWS recommends having a way of SSHing into your ECS containers if necessary, for diagnostic purposes. You absolutely must have a way of getting into your containers, but you should also avoid the need to do so by incrementally improving your systems to handle problems automatically.
Serverless is not actually serverless; it is simply a 'pay as you use' model. You can save a lot of money by building your own serverless platform for your machine learning needs. It sounds hard, but it's really not. Many of those capabilities are open source and/or really affordable, and they scale nicely. My company Machine Box is $499/month for unlimited machine learning; OpenFaaS, Docker, and Kubernetes are free unless you need enterprise support, and even then it's still a lot cheaper than running it all in the public cloud.
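The functions you deploy on a self-hosted platform like OpenFaaS are just small handlers. The sketch below follows the `handle(req)` shape of OpenFaaS's Python template; the "model" inside is a toy stand-in I'm using for illustration, not a real inference call.

```python
# A function in the handle(req) shape used by OpenFaaS's Python template.
# Deployed behind OpenFaaS on your own Docker/Kubernetes cluster, you pay
# only for the machines you run, not per invocation.

def handle(req):
    """Echo a scored prediction for the request body (toy 'model')."""
    text = (req or "").strip()
    score = min(len(text) / 100.0, 1.0)   # stand-in for real model inference
    return {"input": text, "score": score}
```

Swap the toy scoring line for a call into your actual model and the rest of the function, and the platform around it, stays the same.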
Cloud-native is not a synonym for microservices or Kubernetes. It is about utilizing the advantages of the cloud computing model, including PaaS, multi-cloud, microservices, agile methodology, containers, CI/CD, and DevOps. It is an approach to building and running applications that exploits the advantages of the cloud computing delivery model; cloud-native is all about how applications are created and deployed.
Cloud-native app development typically includes microservices, agile methodology, DevOps, cloud platforms, containers such as Docker with orchestration such as Kubernetes, and continuous delivery: in short, every modern method of application deployment.
Make use of container registries (e.g., Google Cloud, IBM, Microsoft Azure) for publishing, storing, locating, downloading, and managing container images.
If you're experienced with DevOps, feel free to share your biggest lesson learned in the comments below.