In the first part of this two-part blog series, we demystified containers and explained why they are indispensable for enterprises looking to adopt the agile delivery model. If you haven’t read it, I would highly recommend getting your basics in place before proceeding with this piece. In this blog, we will cover the essentials everyone needs to know to leverage containerization effectively and migrate with confidence.
Before we dive in, it is important to understand that most applications were designed before the advent of container technology. Even if you run a monolithic application in a container as-is, it will very likely need some modifications.
Now that we are on the same page, let’s talk migration! Migration of existing applications to containers can be broadly categorized into three models: lift and shift, augment, and rewrite. No matter which model one chooses, there are certain considerations and guidelines that must be adhered to for successfully migrating applications into containers. This piece also has a bonus checklist for anyone looking to migrate systematically.
Here are some of the top considerations and guidelines that make for a successful migration:
1) Stable Base Image
When migrating to containers for the first time, it is important that you find a reliable base image. This can be sourced from a Docker registry, which lists many candidate images. However, one must ensure that the following criteria are met:
- The image must carry an explicit version tag (don’t rely on an untagged or `latest` image)
- The image must have a published Dockerfile, so it can be audited and rebuilt
- Images should share a common base, so layers are reused rather than duplicated
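As a concrete illustration, a Dockerfile built on a stable base image might look like the sketch below (the base image and application details are only examples, not a recommendation for your stack):

```dockerfile
# Pin an explicit, versioned base image instead of an untagged or "latest" one
FROM python:3.11-slim

# Keeping this Dockerfile in version control means the image can
# always be audited and rebuilt from a known base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```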
2) Monolithic vs Microservices
If an enterprise has a vision of truly leveraging all the potential that containers have to offer, then mechanically moving monolithic VM-hosted applications to containers without considering future uses is a mistake. Containers work best when supporting linked components and, ideally, microservices. So, the first step toward a container-driven future is to decompose monolithic applications into a linked-component model. Look for components that are shared by multiple applications, so they can be independently scaled to improve performance. One should also be wary of excessive componentization, as it may introduce network latency, degrading application performance and hampering the efficiency of the cluster.
3) Stronger version control
Review your target application and look at how strong the version control is for the targeted software platform. Weak version control, third-party software, and side-lined applications are poor candidates for transformation to container hosting. Set them aside if possible. If not, you’ll need to at least harmonize the platform software versions and tools before you migrate, because they will be shared in a container deployment.
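One practical way to harmonize platform versions before migrating is to pin them in the image itself, so every build and every environment gets identical software. A minimal sketch (the OS release and package names are illustrative only):

```dockerfile
# Pin the base OS release and the runtime packages explicitly, so all
# applications sharing this base run identical platform versions
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        openjdk-17-jre-headless \
    && rm -rf /var/lib/apt/lists/*
```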
4) Secure and Efficient Networking
All your cluster-hosting points can be networked in a secure and efficient way, leaving your container deployment free to focus on connectivity within the cluster itself. Ensure all your applications co-exist in a single subnet whose gateway connects to a VPN or the Internet. Enable ports for external connectivity explicitly, and manage exposed IPs through the VPN’s domain name system (DNS) processes.
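This single-subnet, explicit-ports layout can be sketched in a Docker Compose file (the service names, images, and subnet here are hypothetical):

```yaml
# docker-compose.yml sketch: all services share one network/subnet;
# only the gateway explicitly publishes a port for external access
services:
  gateway:
    image: nginx:1.25
    ports:
      - "443:443"                 # explicit external port enablement
    networks: [appnet]
  orders:
    image: example/orders:1.4.2   # internal service, no published ports
    networks: [appnet]

networks:
  appnet:
    ipam:
      config:
        - subnet: 172.28.0.0/16   # single subnet for the cluster
```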
5) VMs vs Bare Metal
One of the most important performance considerations is the choice between running containers on VMs and running them on bare metal. On bare metal there is no virtualization layer of abstraction, so all containerized applications in a cluster must be based on the same hardware architecture and operating system. Workloads with HPC-like requirements can be containerized and run directly on bare metal. Running containers inside virtual machines, by contrast, offers a combination of hardware freedom, better isolation, and increased manageability using container images, and can potentially unlock greater cloud benefits.
6) Stateless Containers
By storing data outside containers, for example on external volumes or managed data services, containers can be created, destroyed, or shut down without any loss of data.
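For example, a stateful service such as a database can keep its data on a named volume outside the container, so the container itself stays disposable (a Compose sketch with illustrative names):

```yaml
# The container can be destroyed and recreated; data persists on the volume
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```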
The long-term success of containerization will depend on effectively producing and streamlining containers at the early stages of production. And these best practices will help you achieve just that. If enterprises plan on reaping benefits like portability, seamless compatibility, fault isolation, ease of management and security to push agile DevOps and more forward, getting the foundation right is critical.
And since container technology is still evolving, here is the checklist I promised. This serves as a great guideline for developers and operation engineers tasked with migrating to containers. Here’s wishing you a migration process that uses fewer resources and provides a truly cloud-native experience. Bon voyage!
A technical checklist for containerization
- Layer your application. Package a single app per container; do not simply replace VMs with containers
- Use the smallest base image possible. To reduce the size of your image, install only what is strictly needed inside it
- Try to create images with common layers. After the initial download, only the layers that make each image unique are needed, thereby reducing the overhead on downloads.
- Leverage Docker build cache as appropriate. This will accelerate the building of container images
- Use a start-script layer to provide a simple abstraction between the application and the process runtime
- Never build off the `latest` tag; doing so prevents builds from being reproducible over time
- Store your images in container registries and enable vulnerability scanning. When a vulnerability is found, do not patch running containers; the best practice is to rebuild the image (patches included) and redeploy.
- Properly tag your images. Tagging the image lets users identify a specific version of your software in order to download it. For this reason, tightly link the tagging system on container images to the release policy of your software.
- Before you include third-party libraries and packages in your Docker image, ensure that the respective licenses allow you to do so.
- Identify and separate code, configuration, and data. Configuration, data, and keys should come from the environment
- If you are containerizing databases, then consider shared (networked) file systems or block storage (volumes) that can be independently managed and attached/detached to/from any host.
- Avoid privileged containers through policy management to reduce the effects of potential attacks.
- Key functionality for container deployment is provided at the orchestration and scheduling layers. Leverage a single tool, such as Kubernetes or Docker Swarm, for orchestration.
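Several of the checklist items above (a minimal base image, common cached layers, keeping the build toolchain out of the final image, and avoiding privileged users) can be combined in a single multi-stage Dockerfile. The sketch below assumes a hypothetical Go application; the image names and registry are examples only:

```dockerfile
# Build stage: the compiler and toolchain never reach the final image
FROM golang:1.22 AS build
WORKDIR /src
# Copy dependency manifests first so this layer is served from the
# build cache whenever dependencies are unchanged
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: smallest practical base image
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
USER nonroot        # do not run as root/privileged
ENTRYPOINT ["/app"]
```

Tag the result with your release version rather than `latest`, e.g. `docker build -t registry.example.com/app:1.4.2 .`, so that deployments stay reproducible and traceable to a release.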