All the tutorials in this course:
- What Is Server Virtualization?
- An Introduction to Docker
- Other Docker Tools
- An Introduction to Kubernetes
- Working With Kubernetes
While it's true that I did tell you I wouldn't be going into any detail describing how to install Kubernetes environments, that doesn't mean it wouldn't be helpful to at least talk about your options. Let me begin that by breaking your choices down into two categories: solutions that allow you to deploy workloads locally as part of testing or development, and solutions that are built primarily for enterprise-scale production deployments.
My own preference for microk8s - which you'll see in action a bit later - comes from the fact that I happen to be running Ubuntu Linux. What's the connection between the two? Well, microk8s is built and supported by Canonical - which happens to be the company behind Ubuntu...and the company behind the snap package manager that delivered microk8s to my system. I'll bet you can see a pattern developing here.
microk8s is relatively fast and simple to deploy, and I've found it to be a pretty good way to experience multi-node Kubernetes workflows. But it also scales up to production capacity, should you need that. A lot of work has gone into integrating microk8s with serious production environments. You can use it to run containers just about anywhere: from Raspberry Pis to cloud environments and cell towers. So this is something that can grow with you.
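For the record, getting microk8s running on a snap-enabled system takes just a couple of commands. Here's a minimal sketch of that first session:

```shell
# Install microk8s from Canonical's snap store (requires snapd)
sudo snap install microk8s --classic

# Wait until the cluster reports itself ready
microk8s status --wait-ready

# microk8s bundles its own kubectl; confirm the node is up
microk8s kubectl get nodes
```

From there, `microk8s status` will also show you which optional addons (DNS, storage, and so on) are available to enable.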
If you're looking for something a bit lighter, consider minikube. You can install minikube on just about any hardware running just about any operating system you like, and it does support load balancing and complex network policies. But it's really designed for a single-machine cluster running on local resources.
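By way of comparison, a minimal minikube session might look like this - assuming you've already installed the minikube binary, and noting that the Docker driver shown here is just one of several drivers minikube supports:

```shell
# Start a local single-node cluster using the Docker driver
minikube start --driver=docker

# minikube wraps kubectl, so you can query the cluster immediately
minikube kubectl -- get nodes

# Tear it all down when you're finished
minikube delete
```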
If you've already got Docker installed on your local machine, kind might be a more straightforward solution. kind lets you build and run local Kubernetes "clusters" on top of Docker hosts. kind is written in the Go language, so a single go install command might be all you need to get started:
go install sigs.k8s.io/kind@latest && kind create cluster
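Assuming that worked, you can confirm the cluster exists and point kubectl at it - kind writes its clusters into your kubeconfig under context names prefixed with "kind-":

```shell
# List the clusters kind knows about; the default is named "kind"
kind get clusters

# Inspect the control plane through the kind-generated context
kubectl cluster-info --context kind-kind

# Delete the cluster when you're done experimenting
kind delete cluster
```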
Let's move on to production deployments. For most of you - assuming you're not running the IT department of a Fortune 500 company - launching your application on a public cloud platform will probably make more sense than doing it yourself on-premises. That's because cloud platforms can deliver millions of dollars worth of investments in infrastructure reliability and state-of-the-art security with no up-front expense to you. You just pay for the services you use, as you use them.
Of course, you certainly can set up your own private cloud to host your container workloads using open source software like OpenStack. In fact, many of the world's largest organizations have done just that, including American Airlines, Walmart, the UK civil service, T-Mobile, and Gap - 90% of whose customer-facing apps are running on OpenStack. But be prepared to hire huge teams of expensive engineers and invest millions in infrastructure to get it done right.
All the big public clouds offer serious Kubernetes hosting services. Those include Azure Kubernetes Service, Google Kubernetes Engine, IBM Cloud Kubernetes Service, and DigitalOcean Kubernetes. And, of course, Amazon's AWS. Their Elastic Container Service isn't Kubernetes at all, but Amazon's own orchestrator: it invisibly provisions worker instances on which you can run and control your container images. Their Elastic Kubernetes Service, on the other hand, is - obviously - focused on Kubernetes. You're expected to define configuration details for one or more clusters, and you'll be given a Kubernetes API endpoint through which you can push images and control deployments just as you would in your own local environments. All the actual infrastructure is managed by AWS.
I should close out this section with just a few words about infrastructure orchestration. Beyond a place to host your containers, some categories of container deployments can be simplified - and automated - using third-party tools like Ansible and Terraform. The idea is that not only can you declaratively define your images in a Dockerfile or Kubernetes YAML config file, but you can also take a step back and automate the entire process from end to end - even across multiple cloud platforms - using declarative playbooks.
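As a taste of what that looks like in practice, a typical Terraform workflow boils down to three commands run against a directory of declarative configuration files - what those files describe (a cloud cluster, a fleet of VMs, DNS records) is entirely up to you:

```shell
# Download the providers your configuration declares
terraform init

# Preview what would change, without touching anything
terraform plan

# Apply the declared state to your real infrastructure
terraform apply
```

The appeal is exactly the declarative model described above: you state the end result you want, and the tool works out the steps to get there.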