Containers

The DevOps Dojo - A podcast by Johan Abildskov


Containers are all the rage, and they promise all sorts of positive outcomes. In this episode I cover the basics of containerization.

Sources:
Containers will not fix your broken Culture
docker.io

Transcript:

Containers - If one single technology could represent the union of Dev and Ops, it would be containers. In 1995, Sun Microsystems told us that using Java we could write once and run anywhere. Containers are the modern, and arguably in this respect more successful, way to go about this portability. Brought to the mainstream by Docker, containers promise us the blessed land of immutability, portability and ease of use. Containers can serve as a breaker of silos, or as the handoff mechanism between traditional Dev and Ops. This is the DevOps Dojo, episode #4. I'm Johan Abildskov, join me in the dojo to learn.

As with anything, containers came to solve problems in software development. The problems containers solve are around the deployment and operability of applications or services in traditional siloed Dev and Ops organizations. On the Development side of things, deployment was and is most commonly postponed to the final stages of a project. The software has perhaps only ever run on the developers' own computers. This can lead to all sorts of problems. The architecture might not be compatible with the environments we deploy the software into. We might not have covered security and operability issues, because we are still working in a sandbox environment. We have not gotten feedback from those who operate applications on how we can enable monitoring and lifecycle management of our applications. And thus, we might have created a lot of value, but we are completely unable to deliver it.

On the Operations side of things, we struggle with things such as implicit dependencies. The applications run perfectly fine on staging servers, or on the developer's PC, but when we receive them, they are broken. This could be because the operating system versions don't match, because there are different versions of tooling, or even because something as simple as an environment variable or a file is missing. Different applications can also have different dependencies on operating systems and libraries. This makes it difficult to utilize hardware in a cost-efficient way. Operations commonly serves many teams, and there might be many different frameworks, languages, and delivery mechanisms. Some teams might come with a JAR file and no instructions, while others bring thousands of lines of Bash. In both camps, there can be problems with testing happening on something other than the thing we end up deploying.

Containers can remedy most of these pains. As with physical containers, it does not matter what we stick into them; we will still be able to stack them high and ship them across the oceans. In the case of Docker, we create a so-called Dockerfile that describes what goes into our container (a minimal sketch follows below). This typically starts at the operating system level or from some framework image like Node.js. Then we can add additional configuration and dependencies, install our application, and define how it is run and what it exposes. This means that we can update our infrastructure and applications independently. It also means that we can update our applications independently of each other. If we want to move to a new PHP version, it doesn't have to be everyone at the same time; each product can fit the upgrade into its own timeline. This can of course lead to a diverse landscape of diverging versions, which is not a good thing.
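To make the Dockerfile idea from the episode concrete, here is a minimal sketch for a hypothetical Node.js service. The application name, the server.js entry point, and port 3000 are illustrative assumptions, not details from the episode:

    # Start from a framework base image rather than a bare OS
    FROM node:18-slim
    # Work inside a dedicated app directory in the image
    WORKDIR /app
    # Copy and install dependencies first, so this layer is cached between builds
    COPY package*.json ./
    RUN npm install --production
    # Add the application source itself
    COPY . .
    # Document the port the service listens on
    EXPOSE 3000
    # Define how the container runs the application
    CMD ["node", "server.js"]

Each instruction adds a layer to the resulting image, which is why dependencies are installed before the application source is copied in: unchanged layers are reused on rebuild.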
With great power comes great responsibility. The Dockerfile can be treated like source code and versioned together with our application source. The Dockerfile is then built into a container image that can be run locally or distributed for deployment. This image can be shared through private or public registries. Because many people and organizations create and publish these container images, it has become easy to try out tooling. We can run a single command and have a configured Jenkins, Jira, or similar instance running, which we can throw away when we are done with it (example commands follow after the transcript). This leads to faster and safer experimentation. The beautiful thing is that this container image becomes our build artifact: we can test the image extensively and deploy it to different environments to poke and prod at it. And it is the same artifact that we test and deploy. The container that we run can be traced to an image, which can be traced to a Dockerfile at a specific Git SHA. That is a lot of traceability.

Because we have now pushed some of the deployment responsibility to the developers, we have an easier time architecting for deployment. Our local environments look more like production environments, which should remove some surprises from the process of releasing our software, leading to better outcomes and happier employees.

Some of you might think: aren't these just virtual machines? And intuitively, they almost are. But containers are implemented to borrow more directly from the host operating system, which leads to lower startup times and smaller images. We can create and share so-called base images: images that can be seen as a template or runtime for specific types of applications. This can help reduce the lead time from project start to something that can be deployed in production to almost zero, as the packaging and deployment have already been taken care of.

But as Bridget Kromhout said, "Containers will not fix your broken culture". Containers are not the silver bullet they are sometimes touted as. When we move into a container universe, perhaps even moving towards container orchestration with Kubernetes, we tend to neglect or forget the Ops capabilities and problems we still need to solve: backups and failovers, patching of the OS and libraries, performance monitoring and alerting. Many things might become implicit, and that can lead to risky business decisions. While Docker may enable us as developers to somewhat better maintain and run our applications in production, I want to make it very clear: Docker is not a replacement for Ops.

Using containers is an enabler for many things, and it will also create tension with a bureaucratic organization because of its ease of use. It will be mind-blowing for some, and it will require mindset shifts in both Dev and Ops. It also paves the way for more lifecycle management later on, with for instance Kubernetes. To reap the full benefits of containers, we have to architect our applications for them, using principles such as the twelve-factor app. This will again introduce tension and help us build better applications. So while containers will not fix your broken culture, if you are not already thinking about containerization, you probably should be.

This has been the DevOpsDojo on Containers. You can follow me on Twitter @randomsort. You can find show notes and more at dojo.fm.
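The build, run, and share workflow described in the episode maps onto a handful of standard Docker CLI commands. A rough illustration, where the image name myapp and the registry registry.example.com are illustrative assumptions:

    # Build an image from the Dockerfile in the current directory
    docker build -t myapp:1.0 .
    # Run the image locally, mapping the container port to the host
    docker run --rm -p 3000:3000 myapp:1.0
    # Tag and push the same artifact to a registry for others to deploy
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0

The throwaway-tooling experiment mentioned in the episode works the same way; for instance, a disposable Jenkins instance from the public Docker Hub image:

    # Starts Jenkins on http://localhost:8080; --rm removes the container on exit
    docker run --rm -p 8080:8080 jenkins/jenkins:lts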
Support the show by leaving a review, sharing this episode with a friend or colleague, or subscribing to the DevOpsDojo on your favourite podcast platform. Thank you for listening, keep learning.
