Docker and Kubernetes Container Virtualization - An Introduction to Docker

Working With Docker Storage and Networking


From the command line perspective, as we've already learned, Docker containers are effectively identical to traditional physical servers. And that includes having a file system where you can create and manipulate files in all the predictable locations. But there is one very significant difference: all internal container files are ephemeral, which means they'll disappear without a trace once the container is removed.

Since containers aren't meant to be permanent, and being "shut down" is an expected fact of life for a container, we're now faced with two big problems: one, that any file content created during a session will NOT be preserved; and two, that there's no easy way for data created in one container to be available anywhere else. Given the nature of containers and of modern microservices architectures, those are killer problems.

Working With Docker Volumes

Docker volumes were designed to address this issue. A Docker volume is a storage object that can be created and used independently of any one container. You can attach a volume to a container, mount it into the container's file system, and its pre-existing data will be instantly available to container processes. More importantly, all old AND new data will continue to be available after the container shuts down.

Here's how it works. I'll run docker volume create to generate a brand-new, empty volume called my-vol. docker volume ls lists all my volumes, and docker volume inspect prints a volume's vital statistics, including its mountpoint - the location on the host machine's file system where the volume's data is actually stored.
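
Here's a sketch of that sequence. The inspect output is trimmed, and the mountpoint shown is the local driver's default on a Linux host - details will vary by system:

    $ docker volume create my-vol
    my-vol

    $ docker volume ls
    DRIVER    VOLUME NAME
    local     my-vol

    $ docker volume inspect my-vol
    [
        {
            "Driver": "local",
            "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
            "Name": "my-vol",
            "Scope": "local",
            ...
        }
    ]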

Now let's create a new volume and deploy it as part of an actual container. This new volume is called my-vol1. The key piece of this command is the --mount argument, which sets the my-vol1 volume as the source and an as-yet non-existent directory called app in the root of the container's file system as the target. That means Docker will create the app directory and then mount the contents of the volume at that point. Let's see how that goes.
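
A command along these lines would do it. The transcript doesn't specify the image or container name, so ubuntu and voltest here are stand-ins:

    $ docker volume create my-vol1

    # Keep the container alive in the background so we can explore it
    $ docker run -d --name voltest \
        --mount source=my-vol1,target=/app \
        ubuntu sleep infinity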

I'll drop into the running container and poke around. The app directory is there as we'd expect. I'll create an empty file in that directory and then add some silly text. Long after this container is nothing more than a fading memory, the volume - along with this file - will live on and remain available for mounting within any new container.
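
In practice, that session might look something like this - the voltest name carries over from the hypothetical run command above, and the file name and text are placeholders:

    $ docker exec -it voltest /bin/bash
    # ls /
    app  bin  boot  dev  etc  home  ...
    # echo "some silly text" > /app/test.txt
    # exit

    # Remove the container entirely...
    $ docker rm -f voltest

    # ...and the file is still there, waiting inside any new container
    # that mounts the volume
    $ docker run --rm --mount source=my-vol1,target=/app ubuntu cat /app/test.txt
    some silly text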

How Docker Handles Networking

Computers that can run independently - completely cut off from the outside world - are a part of ancient history. I'm old enough to remember such noisy, heavy, and clunky dinosaurs, but I doubt many of you have ever enjoyed that pleasure. So given how important connectivity is to modern computing, your containers will desperately need secure and efficient tools for communicating among themselves and outward across the dark and dangerous internet. Such tools, obviously, are known as networks, and Docker offers you more than one flavor.

By default, a fresh, clean Docker Engine environment will come with a bridge network driver already set up. Bridge networks operate at the link layer and will usually connect a container to its host and to any other containers running on that host. Containers share their host's internet connection, allowing them to perform important tasks like software updates.

I'm pretty sure you can already predict what the network commands will look like. docker network ls will list all existing networks. The only network that interests us on this default system is bridge. docker network inspect followed by the network name will show us the network's settings. The Subnet value here defines the IP address space that's available for containers that will be launched into this network.
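
Here's roughly what those two commands return on a fresh installation - the inspect output is heavily trimmed, and while 172.17.0.0/16 is a common default subnet, yours may differ:

    $ docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    ...            bridge    bridge    local
    ...            host      host      local
    ...            none      null      local

    $ docker network inspect bridge
    [
        {
            "Name": "bridge",
            "Driver": "bridge",
            "IPAM": {
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            ...
        }
    ]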

If we wanted to provide custom isolation for multiple groups of containers, we could, say, reduce the size of the IP address pool on our default bridge network and then create one or more new subnets with parallel address pools. Each subnet could be configured with specific access controls, and new containers could then be launched into whichever network provides the most appropriate balance of isolation and connectivity.
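
As a sketch, creating one of those custom networks might look like this - the network name, container name, and address ranges are all illustrative:

    # Create a new bridge network with its own, smaller address pool
    $ docker network create \
        --driver bridge \
        --subnet 172.20.0.0/24 \
        --gateway 172.20.0.1 \
        isolated-net

    # Launch a container into that network instead of the default bridge
    $ docker run -d --network isolated-net --name backend ubuntu sleep infinity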

Scrolling down a bit, we're shown the network name, the endpoint ID of each connected container, and other key configuration details.

If I display all the network interfaces configured on my host machine using ip a, I'll see an interface called docker0. I know this belongs to the bridge network we've been using, because it carries an address from the same subnet range.
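
On a typical Linux host, the relevant slice of ip a output might look like this - interface numbers, MAC addresses, and flags will differ from machine to machine:

    $ ip a
    ...
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 ...
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    ...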

All of that will work fine as long as your containers exist on a single host. But if your workflow requires connectivity between containers running on multiple Docker Engine hosts, then you're far better off with an overlay network. The alternative to an overlay is to manually configure network routes between your hosts, but I'm betting you'll prefer to avoid that complication.
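
Overlay networks are built on Docker's swarm mode, so a minimal sketch starts by initializing a swarm. The network and container names here are illustrative:

    # Overlay networks require swarm mode
    $ docker swarm init

    # Create an overlay network; --attachable lets standalone containers
    # (not just swarm services) join it
    $ docker network create --driver overlay --attachable multi-host-net

    # A container launched on any node in the swarm can now join the
    # network and reach other containers on it, wherever they're running
    $ docker run -d --network multi-host-net --name web ubuntu sleep infinity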