Explaining Containers - and a Quick Peek at LXD

To understand just how effective containers can be, you need to appreciate how they're different from virtual machines. A VM, from a software perspective, must be completely self-contained. That means it has to bring its own kernel, device drivers, and security controls to the party. All those extra layers in the software stack take up a lot of drive space, slow down boot times, increase demand on your system resources, and add a whole lot of complexity. Even if it's not technically true, for all intents and purposes, a VM is a complete operating system running on its own hardware.

Containers, on the other hand, know enough about their host system to allow them to share the host OS kernel. That architecture eliminates the need for all the software layers that would normally be required for tasks like managing hardware interface drivers.
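
If you'd like to see that kernel sharing for yourself, here's a minimal sketch using the Docker SDK for Python. The alpine image and a locally running Docker daemon are assumptions I'm making for the demo, not requirements of anything that follows.

import platform

import docker  # assumes "pip install docker" and a running Docker daemon

client = docker.from_env()

# Kernel release as seen by the host (meaningful on a Linux host).
host_kernel = platform.release()

# Kernel release as reported from inside a short-lived container.
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print(f"Host kernel:      {host_kernel}")
print(f"Container kernel: {container_kernel}")
# On a Linux host the two values match, because the container brings no
# kernel of its own; it shares the host's.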

Here's another important consideration. In most cases, containers are built to be ephemeral. That means they're expected to run only as long as nothing goes wrong and their configuration isn't made obsolete by software updates. Once they're no longer working the way you want, rather than opening a remote session to apply fixes or updates the way you would on a VM or traditional server, you simply kill off the containers and replace them with new versions. I like the analogy I once heard comparing a traditional server to your family pet, who's cared for and pampered so it'll live a long and healthy life. Containers, on the other hand, are closer to cattle: once they're no longer performing the one task set for them, they're sadly sent off to the slaughterhouse.

With that in mind, you can understand how containers are generally most effective when they're provisioned using automated scripts that precisely define their software stack and configuration, rather than launched in their out-of-the-box state and configured live. With such a setup, a simple edit to your orchestration script can trigger the controlled launch of a new fleet of containers and also oversee the orderly retirement of the old and outdated containers they're replacing.
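
To make that concrete, here's a minimal sketch of what a scripted replacement might look like using the Kubernetes Python client. Everything specific in it - the Deployment named web, the default namespace, and the nginx:1.27 image - is a hypothetical placeholder chosen for illustration, not something the course prescribes.

from kubernetes import client, config

# Assumes "pip install kubernetes" and a working kubeconfig pointing at a
# cluster that already runs a Deployment called "web" in the "default"
# namespace; all of those names are hypothetical.
config.load_kube_config()
apps = client.AppsV1Api()

# Point the Deployment's "web" container at a newer image. Kubernetes
# reacts by launching replacement pods and retiring the old ones in a
# controlled rolling update; nobody logs in to patch a running container.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "web", "image": "nginx:1.27"}]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)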

As you can see in the video, LXD containers are remarkably lightweight and fast. Now take those qualities and imagine adding the infrastructure needed to support reliable image sharing and complex, multi-layered applications at enterprise scale. That should give you some idea of what you can get from Docker and Kubernetes deployments.

So Docker and Kubernetes tools are ideal for modern microservices deployments. And, I would suggest, they're really not all that difficult to use. But they do exist in a very complicated, jargon-heavy world. What does it actually mean when people say things like "Docker works with Kubernetes" or "minikube is a reduced implementation of Kubernetes"? What's the difference between Docker Engine and Docker Swarm, or between nodes, pods, containers, and kubelets? How is Helm different from Docker Hub? What are kubectl, containerd, kube-proxy, and etcd? How do you control storage, scheduling, and networking for your container fleets?

Until you get a handle on all that vocabulary - and on the tools and tool categories it represents - you'll have trouble making solid progress with your own projects. So my primary goal for the rest of this course is to provide some context. I will add some high-level demos running simple container workloads later on, but the real takeaway should be clarity. If you're comfortable making strategic infrastructure choices about your application stack when we're done, then I'll be more than satisfied.