How to Install Kubernetes on Ubuntu 18.04 – A Comprehensive Walkthrough

Hey there! Kubernetes has completely transformed how developers and ops teams build, deploy and manage modern cloud-native applications. If you want hands-on experience with this leading container orchestration platform, you’ve come to the right place. In this guide, we’ll walk through standing up a small Kubernetes cluster on Ubuntu 18.04 from start to finish.

Why Kubernetes?

First, what exactly is Kubernetes? At a high level, Kubernetes (also known as K8s) is an open source platform for automating deployment, scaling and operations of application containers across clusters of hosts.

It coordinates between hosts and containers to make efficient use of resources, ensures workload availability despite failures, adds discovery capabilities and much more – essentially acting as an operating system for your cluster!

Adoption of Kubernetes is absolutely exploding:

  • Over 50% of global companies use Kubernetes in production according to CNCF
  • Manages trillions of container instances daily
  • Support across every major cloud provider, Linux distros, hardware architectures
  • 2600+ contributors and growing rapidly

Using Kubernetes delivers major benefits like:

  • Agile application development and delivery
  • Faster time-to-market for features
  • Improved infrastructure efficiency
  • Consistent deployments across environments
  • Higher application availability

Sounds awesome, right? Kubernetes provides these advantages by abstracting infrastructure complexities away – so you can focus on developing the applications that run ON Kubernetes.

That’s enough background, so let’s dive into our Ubuntu setup guide!

Installing Kubernetes on Ubuntu 18.04

The goal is to have a single master node for managing the cluster state and workload scheduling, along with one or more worker nodes to actually run containerized applications:

Kubernetes cluster diagram

Image source: Real Python

We will install Kubernetes 1.25 – the latest stable version at the time of writing.

Here is an overview of what we will cover:

  • Provisioning servers
  • Preparing hosts – configuring runtimes, repos, packages
  • Initializing the control plane
  • Joining worker nodes
  • Verifying health and connectivity
  • Deploying test workloads
  • Operational tips for managing your cluster

Let’s get started!

Servers and Requirements

I have provisioned 2 Ubuntu 18.04 servers on DigitalOcean with these specs:

Kubernetes Master

  • 4 vCPUs
  • 8GB Memory
  • 80GB SSD Storage

Kubernetes Worker

  • 4 vCPUs
  • 4GB Memory
  • 40GB SSD Storage

These sizes should suffice for our learning/testing purposes. Also make sure both can resolve each other by hostname.
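
Before installing anything, a couple of host-level prerequisites are worth handling on both machines: kubeadm refuses to run with swap enabled, and pod networking needs a few kernel modules and sysctls. Here is a sketch – the hostnames and private IPs at the end are placeholders for your own:

```shell
# Disable swap now and on reboot (kubeadm requires swap off)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load kernel modules and sysctls needed for pod networking
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Make the nodes resolvable by hostname, e.g. via /etc/hosts
# (replace with your actual private IPs and hostnames)
echo "10.0.0.10 master"   | sudo tee -a /etc/hosts
echo "10.0.0.11 worker-1" | sudo tee -a /etc/hosts
```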

Now we are ready to begin installation!

Installing Docker Runtime

Kubernetes needs a CRI-compatible container runtime installed on every node to run containers. Note that since v1.24 the kubelet no longer talks to Docker Engine directly (the dockershim was removed); in this setup, the containerd.io package installed alongside Docker is the runtime the kubelet will actually use.

Let's add the official Docker repos and install the latest version:

# Install Docker prerequisites 
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker stable repo
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker CE 
sudo apt update 
sudo apt install docker-ce docker-ce-cli containerd.io

# Enable docker service
sudo systemctl enable docker && sudo systemctl start docker
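
One gotcha worth noting: the containerd.io package from Docker's repo ships with containerd's CRI plugin disabled, which leaves the kubelet unable to talk to it. A minimal sketch to regenerate a default config with CRI enabled and the systemd cgroup driver turned on:

```shell
# Regenerate a default containerd config (the packaged one disables the "cri" plugin)
sudo mkdir -p /etc/containerd
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sudo tee /etc/containerd/config.toml > /dev/null

# Restart containerd so the kubelet can reach its CRI socket
sudo systemctl restart containerd
```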

Check Docker version to confirm:

docker --version
# Docker version 23.0.1, build a5ee18

Great, ready for installing Kubernetes next!

Configuring Kubernetes Repository

To install the Kubernetes components, we first need to configure access to the official Kubernetes package repository.

Add GPG key:

# Kubernetes repo key 
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Next, add the Kubernetes apt repo (the kubernetes-xenial suite is used for all recent Ubuntu releases, not just Xenial):

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update and install packages now:

sudo apt update
sudo apt install -y kubelet=1.25.3-00 kubeadm=1.25.3-00 kubectl=1.25.3-00

# Hold the versions so routine apt upgrades don't move cluster components unexpectedly
sudo apt-mark hold kubelet kubeadm kubectl

These packages provide the CLI tooling (kubectl, kubeadm) and the node agent (kubelet) we'll use to build out our cluster next.

Initializing Kubernetes Master

The first step is to initialize the master node using kubeadm. This sets up all control plane components to begin orchestrating container workloads:

# Initialize cluster (the pod network CIDR matches the Flannel CNI default)
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
    --pod-network-cidr 10.244.0.0/16 --upload-certs

We provide a hypothetical load balancer endpoint for high availability – for a single-master setup like ours, the --control-plane-endpoint flag can simply be omitted.

On success, kubeadm prints a summary showing that it handled everything from generating keys, certs and secrets to deploying the Kubernetes control plane as pods and configuring RBAC permissions.

Very cool, right?

It will also provide commands to start administering your cluster with kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This copies the admin kubeconfig containing API access credentials to your user account.

Verify Kubernetes services are now running with:

kubectl get pods -A

Output should list controller manager, scheduler, etcd and other system pods active on the master.
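
One important step before joining workers: kubeadm does not install a pod network add-on, so the CoreDNS pods stay Pending and nodes report NotReady until one is applied. Any CNI plugin (Calico, Weave, etc.) will do; as a sketch, Flannel works if the cluster was initialized with --pod-network-cidr 10.244.0.0/16:

```shell
# Install the Flannel CNI plugin from its published manifest
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Watch until the flannel and CoreDNS pods are Running
kubectl get pods -A -w
```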

Now we can join our worker nodes to build out the cluster!

Joining Kubernetes Worker Nodes

To add workers, retrieve the join command printed at the end of kubeadm init on the master (it can be regenerated at any time with kubeadm token create --print-join-command).

Then execute this generated command on each node you want participating as a K8s worker:

sudo kubeadm join <master-ip>:<master-port> --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Each worker connects securely and becomes available within a minute or two. Back on the master, verify:

kubectl get nodes

# NAME           STATUS   ROLES           AGE   VERSION
# master         Ready    control-plane   10m   v1.25.3  
# worker-1       Ready    <none>          2m    v1.25.3

Both nodes now show as Ready meaning they can receive and schedule workloads!

Verifying Cluster Functionality

Before deployment, always verify health and connectivity across all moving parts:

Control Plane Services

kubectl get pods -n kube-system -o wide

# confirms API server, controller manager, scheduler + more

Network Connectivity

# test connectivity between pods on different nodes (inspect results with: kubectl logs test)
kubectl run test --image=busybox --restart=Never -- ping -c 3 <other-pod-ip>

# confirms inter-pod communication working  

Application Load Balancing

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl get svc

# An external IP is provisioned only if a cloud load balancer integration is present;
# on plain VMs the service stays <pending> – use a NodePort service there instead

If you hit issues here, double-check the container runtime, firewall ports, service CIDRs, the networking plugin and other common causes!

Otherwise, feel free to start utilizing your Kubernetes cluster!

Deploying Applications on Kubernetes

The simplest way to launch applications is via deployments. This handles scaling, updates, rollbacks and more:

# Create deployment
kubectl create deployment my-app --image=myregistry/app:v1

# Expose as service
kubectl expose deployment my-app --type=NodePort --port=8080 

# Visit app using any node's IP
http://<node-ip>:<exposed-port>
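
NodePort services are assigned a random port from the 30000–32767 range by default, so look up what was allocated before visiting the app:

```shell
# Show the service, including the node port mapping (e.g. 8080:31234/TCP)
kubectl get svc my-app

# Or extract just the node port with jsonpath
kubectl get svc my-app -o jsonpath='{.spec.ports[0].nodePort}'
```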

Of course, Kubernetes offers a robust set of workload resources:

  • Pods – atomic units of scheduling/execution
  • Jobs – one-off tasks with specified parallelism
  • CronJobs – recurring workloads like backups
  • StatefulSets – apps requiring stable hostnames, storage etc.
  • DaemonSets – run one replica per eligible node
  • and more!
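
As a quick illustration of declaring one of these resources, here's a hypothetical CronJob manifest applied via a heredoc – the name, schedule and command are placeholders:

```shell
# A nightly "backup" CronJob (hypothetical example) applied straight from stdin
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo running backup"]
          restartPolicy: OnFailure
EOF
```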

Now that your environment is running, time for some operational best practices next.

Managing Your Kubernetes Cluster

Running reliable container workloads requires diligence across multiple fronts:

Upgrades – Kubernetes ships new minor releases roughly three times a year. Test changes in dev environments first before promoting to production, upgrade one minor version at a time, and keep components within the supported version skew.

Scaling – Monitor resource usage on nodes/pods and scale up/down based on application demand to optimize costs. Distribute workloads evenly.
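
Manual and automatic scaling are both one-liners with kubectl – the deployment name here assumes the my-app example from earlier:

```shell
# Scale to a fixed replica count
kubectl scale deployment my-app --replicas=3

# Or let the Horizontal Pod Autoscaler adjust replicas based on CPU usage
# (requires the metrics-server add-on to be installed)
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70
```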

Security – Limit API server exposure, enforce Pod Security Standards on workloads, rotate certificates regularly, and use RBAC authorization for access control.

Backups – Snapshots of etcd and periodic metadata backups are crucial for disaster recovery. Schedule regular jobs and secure assets externally.
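
For a kubeadm-built cluster, an etcd snapshot can be taken on the control-plane node roughly like this – paths assume kubeadm's default PKI layout, and etcdctl may need to be installed separately (or run from inside the etcd pod):

```shell
# Snapshot etcd using the certs kubeadm generated (paths are kubeadm defaults)
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot is readable
sudo ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```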

Logging/Monitoring – Collect metrics, logs and data on performance. Use tools like Prometheus, Grafana, Jaeger, Elastic and more for observability.

Multi-cluster – Support multiple isolated environments like dev, test, prod via clusters. Maintain consistency with configuration management.

I may cover these in more depth another time!

Becoming a Certified Kubernetes Administrator

Once comfortable administering Kubernetes, consider getting officially certified as a Certified Kubernetes Administrator (CKA).

Requirements:

  • Pass the CKA exam administered by the Cloud Native Computing Foundation
  • Demonstrate competence in production-ready clusters
  • Display critical thinking and troubleshooting skills under time pressure
  • Renew certification every 3 years to stay current

Certification verifies hands-on proficiency in operating Kubernetes which continues rising in value.

Closing Thoughts

Congratulations – we have a fully built-out Kubernetes cluster on Ubuntu!

We covered planning, installation, validation, applications and more during this journey:

  • Discussed Kubernetes architecture
  • Prepared Ubuntu infrastructure
  • Initialized master control plane
  • Joined worker nodes
  • Verified health and networking
  • Deployed sample workloads
  • Operational and management tips

You should now have the foundation in place to containerize applications and take advantage of declarative infrastructure. Kubernetes unlocks rapid application development and portability across infrastructures.

As next steps, continue learning concepts like:

  • Managing stateful apps
  • Automating deployments
  • Optimizing resources
  • Securing clusters
  • Extending functionality

The official documentation offers a wealth of guides – feel free to reach out if any questions come up!

I hope you enjoyed working through this hands-on Kubernetes setup tutorial. Let me know if you have any other topics you’d like covered related to Kubernetes or containerization technology!