Outline
All the tutorials in this course:
- What Is Server Virtualization?
- An Introduction to Docker
- Other Docker Tools
- An Introduction to Kubernetes
- Working With Kubernetes
Let's begin with Docker. More specifically, I'm referring to Docker Engine - the software layer that acts as a manager for Docker containers.
We're not going to concern ourselves here with Docker's inner workings: how it interfaces with processes and controls on its host system, and how it builds secure isolation between a container and its host and between multiple containers sharing a single host. It's not that those details aren't important - or interesting! - but we're more focused on helping you understand the practical stuff that'll help you design and deploy real-world applications on real-world infrastructure. If you're curious how all that works, check out the links included with this lesson.
As long as you're confident that Docker itself has been properly installed in your hosting environment and that you're following all the security best practices, you're pretty safe leaving Docker and Linux - or Windows or macOS - alone to do what they do best.
So then what is Docker Engine? It's the orchestrator that gets all the key elements of a Docker environment to work together. Among those elements is dockerd, a system background process known as a daemon. It's dockerd that actually controls the behavior of your containers. Requests are sent to the dockerd daemon through the Docker Engine API, and primary access to that API is provided by the Docker command line interface (CLI), which you invoke with the docker command. For example, this lists all containers on the system, running or stopped:
docker ps -a
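Because everything goes through the Engine API, you can also talk to dockerd directly, without the CLI. Here's a sketch using curl against the daemon's Unix socket; the socket path shown is the default on most Linux installs, but it's an assumption - adjust it for your environment:

```shell
# Ask the daemon for its version info directly over the Docker Engine API.
# /var/run/docker.sock is the default socket location on most Linux systems.
curl --silent --unix-socket /var/run/docker.sock http://localhost/version

# The equivalent CLI call - the docker client wraps the same API and
# reports both the client and server (daemon) versions.
docker version
```

Seeing both outputs side by side makes the client/daemon split concrete: the docker command is just a convenient front end for API requests like the curl call above.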
A "Docker image" is a template-based object that contains all the operating system and application resources you would need to launch a container. The Docker Hub image registry - which we'll explore in more detail a bit later - hosts thousands of pre-configured images built from just about any combination of OS distro and software stack you can imagine. As I'll demonstrate in a few minutes, you can easily download any one image and launch it as a container. Or, alternatively, you can create your own images and upload them to either Docker Hub or your own private registry to make them available for collaboration.
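To give you a feel for what creating your own image looks like, here's a minimal sketch. The Dockerfile contents, the index.html page, and the myuser/my-apache-site image name are all made up for illustration - substitute your own files and your own Docker Hub username:

```shell
# A minimal Dockerfile that layers a custom page onto the Apache image
cat > Dockerfile <<'EOF'
FROM ubuntu/apache2:2.4-22.04_beta
COPY index.html /var/www/html/index.html
EOF

# A trivial page to serve (hypothetical content)
echo '<h1>Hello from my own image</h1>' > index.html

# Build the image and tag it; replace "myuser" with your registry account
docker build -t myuser/my-apache-site:1.0 .

# Push it to Docker Hub (or your private registry) for collaboration
docker push myuser/my-apache-site:1.0
```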
Launching an image as a container from the command line uses syntax like this:
docker run -d --name apache2-container \
-e TZ=UTC -p 8080:80 \
ubuntu/apache2:2.4-22.04_beta
docker ps        # confirm the new container is up and running
docker exec -it apache2-container /bin/bash   # open a shell inside the container
exit             # leave the container's shell and return to the host
docker run tells the system that you want to launch a new container. -d means you want to detach the current shell from the new container so it'll continue running even if you close this shell; it'll also give you your command line back immediately. The --name argument gives the container a name - you can use any value you like. The -e argument sets the timezone environment variable; I believe this is recommended to bypass a lingering bug in the Ubuntu install process. -p tells Docker to listen for external traffic on port 8080, and then send that traffic to the container on port 80. You can, of course, substitute any values that work for your system. And the final argument points to the Docker Hub address of an image. In this case, I'm fetching the official Ubuntu image that comes with Apache2 pre-installed. How did I get that specific image address? By searching the Docker Hub website. I usually prefer to work with official and supported images where possible, and Docker Hub will normally show me those at the top of my searches.
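To round out the container's lifecycle, you can verify that Apache is actually answering through the published port and then shut everything down. This sketch assumes the 8080:80 port mapping and the apache2-container name from the earlier run command:

```shell
# Fetch Apache's default page through the published host port
curl --silent http://localhost:8080/ | head -n 5

# Stop the running container, then remove it entirely
docker stop apache2-container
docker rm apache2-container
```

After the rm command, docker ps -a will no longer show the container at all; the image itself stays cached locally until you remove it with docker rmi.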