Understanding Linux Containers Before Changing the World

by killua (@bennykillua), August 1st, 2022

Driven by an array of factors, including productivity, automation, and cost-effective deployments, organizations have grown to love container technology, especially because it makes it possible to run infrastructure more efficiently. At the heart of this technology are containers: application sandboxes.


Containers provide a way to run your application by packaging it with the runtime, system libraries, and every other dependency it needs. This brings simplicity, speed, and flexibility to application development and deployment, along with a more efficient way to utilize system resources. A major step up from virtual machines, I must say. Various container technologies are available, like Docker, Kubernetes (strictly speaking, a container orchestration platform), and Linux containers (LXC).


This article will look at Linux containers and their uses.

What are Linux containers?

Linux containers, sometimes referred to as LXC, are a virtualization setup and the first system-container implementation based solely on features of mainstream Linux.


LXC creates an environment where you can share resources, such as memory and libraries, while still getting what behaves like an entire virtual operating system. Because no separate kernel is needed, you can design a setup similar to a standard Linux installation but with just the components your applications require, and thus no overhead processes.
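

To make this concrete, here is a minimal sketch of creating and starting a container through the LXC Python bindings. It assumes the python3-lxc package is installed and the script runs with sufficient privileges; the container name and image parameters are arbitrary examples, not values from this article:

```python
import lxc

# Define a container object; "demo01" is an arbitrary example name.
container = lxc.Container("demo01")

# Build a root filesystem from a downloadable image template.
# The dist/release/arch values are example parameters; use ones
# the "download" template supports on your system.
ok = container.create("download", lxc.LXC_CREATE_QUIET,
                      {"dist": "ubuntu", "release": "jammy", "arch": "amd64"})
if not ok:
    raise RuntimeError("container creation failed")

# Start the container. There is no guest kernel to boot: the
# container shares the host kernel, so startup is nearly instant.
if not container.start():
    raise RuntimeError("container failed to start")

print(container.state)  # e.g. "RUNNING"

# Run a command inside the container's isolated environment.
container.attach_wait(lxc.attach_run_command, ["hostname"])

container.stop()
```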


I have mentioned virtualization quite a lot, so I should explain that too. What exactly is virtualization?

What is virtualization?

Virtualization is the process of running virtual instances of computer components that are traditionally bound to hardware. It is the foundation of cloud computing. A popular use case is running applications built for one operating system, such as Linux, on another, such as macOS, or running multiple operating systems on one computer simultaneously.


Virtualization utilizes hypervisors to emulate the underlying hardware, such as the CPU and memory, and to partition the physical resources so the virtual environments can use them. The guest operating system interacts with the hardware through the hypervisor.


Traditional vs. virtual server architecture comparison. Source: Workload Stability by Hoyeong Yun


What are virtual machines?

Virtual machines (VMs) are isolated computing environments created when a hypervisor separates computing resources from the physical machine. They can access several of the host's resources, including but not limited to its computing power (CPU) and storage.


While virtual machines might sound like containers, they’re quite different.


In virtualization, each VM requires and runs its own operating system. While this allows organizations to get the most from their hardware investments, it also makes VMs heavyweight. In containerization, applications run inside containers that share the host operating system's kernel and resources; thus, they carry less overhead and are lightweight.


Another issue with virtualization is over-allocation: whenever an instance in a virtual environment starts, all the resources assigned to it are claimed. For example, when you create a virtual server, you specify how much space its drive should have. Once the server starts, the entire space is assigned to it whether or not it is needed. Resources are wasted, since the server takes up all the space even if it needs just one-tenth of it.


Container-based virtualization changed this. For one, containers are less resource-intensive: instead of shipping a full operating system, a container holds just the bits and pieces the application needs to run. Resources are therefore shared more effectively.


Containers vs. VMs. Image by Veritis


You are probably thinking, "Why do we still need virtual machines?" Well, there are some instances where VMs are the right choice. For example, if you want to run the Windows operating system on macOS or Linux, you will need a virtual machine. Another use case is when you need a kernel version different from the host's kernel.
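

One quick way to see this limitation is to compare kernel versions. In the sketch below (again assuming the python3-lxc bindings and the example container from earlier), the kernel reported inside the container is the host's kernel, which is exactly why needing a different kernel calls for a VM:

```python
import platform
import lxc

container = lxc.Container("demo01")  # the example container from above

if container.running:
    # Kernel release seen by the host, e.g. "6.5.0-25-generic".
    print("host kernel:", platform.release())

    # `uname -r` inside the container prints the *same* release:
    # containers share the host kernel rather than booting their own.
    # A VM, by contrast, boots whatever kernel its guest OS installs.
    container.attach_wait(lxc.attach_run_command, ["uname", "-r"])
```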

Why use Linux containers?

Let's look at some reasons why you should use Linux containers:


  • Resource management: Containers manage resources more efficiently than hypervisors do.


  • Pipeline management: LXC keeps the code pipeline consistent as it progresses from development to testing and production, despite the differences between these environments.


  • Modularity: Applications can be split into modules rather than housed as a whole in a single container; this is known as the microservices strategy. It makes management easier, and several tools are available to handle complex use cases.


  • The tooling landscape: While not technically specific to containers, the ecosystem of orchestration, management, and debugging tools coexists well with them. Kubernetes, Sematext Cloud, and Cloudify are a few examples.


  • Continuous integration and deployment: Because of how containers operate, you can deploy your applications effectively in various environments, which prevents redundancy in your code and deployments.


  • Application isolation: Without the need to restart the system or start the OS from scratch, containers package your apps with all the necessary dependencies. These apps can be set up in various environments, and updating them only requires changing the container image, a file that contains the code and configuration needed to create a container (a rough sketch of this update flow follows this list).


  • Open source: Linux containers are open-source and provide a user-friendly, intuitive experience through their various tools, languages, templates, and libraries. For these reasons, Linux containers are great for development and production environments; even Docker's earlier versions were built right on top of LXC. You can find the source code here.
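

As promised above, here is a rough sketch of the image-based update flow from the application-isolation point: rather than patching a running system, you replace the container with one created from a newer image. It again assumes the python3-lxc bindings; the container name and release values are illustrative only:

```python
import lxc

# The deployed application container; "web01" is an example name.
old = lxc.Container("web01")
if old.defined:
    if old.running:
        old.stop()
    old.destroy()  # discard the old instance entirely

# Recreate the container from a newer image; the release value
# here illustrates "changing the container image" to update the app.
new = lxc.Container("web01")
new.create("download", lxc.LXC_CREATE_QUIET,
           {"dist": "ubuntu", "release": "noble", "arch": "amd64"})
new.start()
```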


Conclusion

Container technology has changed how we create applications. With containers, you can virtualize at the operating-system level so that each container holds only the application and its libraries, rather than using a VM with a guest OS and a virtual copy of the hardware.


This article was a beginner's guide to the container technology landscape; there is much more to it. Check out the resources, explore them, and change the world.