Running Wordpress using Kubernetes locally
Our aim with this post is to describe how to run Kubernetes locally, demonstrating by running MySQL and Wordpress on the resulting cluster.
We won't explain what Kubernetes is; suffice it to say that it's the container orchestration solution everyone seems to be leaning towards. Normally you'd run a Kubernetes cluster on a cloud provider such as AWS or GCE, but for training and evaluation purposes we'll run it up locally using a cluster of Vagrant virtual machines.
Prerequisites
We assume we're on macOS, though the Vagrantfile is OS-agnostic, so you should be able to get things working on both Windows and Linux relatively easily.
We need to install a few things, and we'll do so using the brew package manager:
brew install vagrant
brew install kubectl
brew install wget
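Before going any further, it's worth confirming the installs:
# check the installed versions
vagrant --version
kubectl version --client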
Creating the Kubernetes cluster
Our first step is to clone the kubernetes-vagrant-coreos-cluster repo, which bills itself as a:
Turnkey Kubernetes cluster setup with Vagrant 1.8+ and CoreOS.
This project provides a comprehensive Vagrantfile which will, by default, run up a cluster of three CoreOS Vagrant virtual machines.
# clone the repo
git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster.git
cd kubernetes-vagrant-coreos-cluster
We can then vagrant up the cluster:
# remove Virtualbox's DHCP server
VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0
# run up the cluster, specifying that we want the web ui
USE_KUBE_UI=true vagrant up
Note that we specify that we want the web-based user interface; other settings are described in the project's README.md, but do take a look through the Vagrantfile to get a better idea about what values can be supplied.
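For example, a number of settings can be overridden as environment variables when running vagrant up; the variable names below are taken from the project's README at the time of writing, so do check your copy of the Vagrantfile before relying on them:
# run up a three-node cluster with more memory per node, plus the web ui
NODES=3 NODE_MEM=2048 USE_KUBE_UI=true vagrant up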
The cluster startup will likely take some time, as Vagrant will need to pull a number of images; the Vagrantfile supports the following providers, though we're using Virtualbox here:
- Virtualbox (the default)
- Parallels Desktop
- VMware Fusion or VMware Workstation
Once Vagrant has pulled the necessary VM images, you should be able to inspect them at:
# display the virtualbox image directories
ls -la ~/VirtualBox\ VMs/
# display the uncompressed sizes of the virtualbox image directories
du -sh ~/VirtualBox\ VMs/*
2.8G /Users/jasondarwin/VirtualBox VMs/kubernetes-vagrant-coreos-cluster_master_1507421853719_25399
4.4G /Users/jasondarwin/VirtualBox VMs/kubernetes-vagrant-coreos-cluster_node-01_1507422227192_50843
2.7G /Users/jasondarwin/VirtualBox VMs/kubernetes-vagrant-coreos-cluster_node-02_1507422346442_20718
Inspecting the cluster
Once the cluster has been created, we can then get some information about it.
We'll be using kubectl, and there's a cheatsheet that lists the useful kubectl commands.
# view the three nodes using vagrant
vagrant status
Current machine states:
master running (virtualbox)
node-01 running (virtualbox)
node-02 running (virtualbox)
# get info about our cluster
kubectl cluster-info
Kubernetes master is running at https://172.17.8.101
KubeDNS is running at https://172.17.8.101/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# display nodes
kubectl get nodes
NAME STATUS AGE VERSION
172.17.8.101 Ready,SchedulingDisabled 3h v1.7.5
172.17.8.102 Ready 3h v1.7.5
172.17.8.103 Ready 3h v1.7.5
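A few other kubectl commands from the cheatsheet that we found ourselves reaching for while exploring the cluster (the pod names below are placeholders):
# show all pods, including the kube-system ones, and the nodes they're scheduled on
kubectl get pods --all-namespaces -o wide
# show capacity, conditions and running pods for a particular node
kubectl describe node 172.17.8.102
# tail the logs of a pod, or run a command inside it
kubectl logs <pod-name>
kubectl exec -it <pod-name> -- /bin/bash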
At this point we might like to ssh into our cluster nodes and have a look around; we can either use vagrant ssh, or we can ssh directly by referring to the vagrant insecure_private_key:
# ssh into the master
vagrant ssh master
# ssh directly into a machine
ssh -i ~/.vagrant.d/insecure_private_key core@172.17.8.101
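Once on a node, it's worth having a quick look at what's actually running; the exact container and unit names will vary with the version of the project you've cloned:
# list the containers kubernetes has started on this node
docker ps --format '{{.Names}}'
# look for kubernetes-related systemd units
systemctl list-units 'kube*' --no-pager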
The Kubernetes dashboard
During the creation of the cluster, you'll see messages regarding the creation of the kubernetes-dashboard:
==> master: Configuring Kubernetes dashboard...
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
==> master: Kubernetes dashboard will be available at http://172.17.8.101:8080/ui
Visiting the specified URL (in our case http://172.17.8.101:8080/ui) will present you with the dashboard:
Note that visiting the base URL (http://172.17.8.101:8080) will present a JSON file listing all exposed endpoints, including some very handy ones:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1alpha1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1beta1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/settings.k8s.io",
"/apis/settings.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/ping",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/extensions/third-party-resources",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/ui",
"/ui/",
"/version"
]
}
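Some of these endpoints are handy for quick checks from the command line; for example, /version and /healthz can be curled directly against the same address and port:
# query the API server directly
curl http://172.17.8.101:8080/version
curl http://172.17.8.101:8080/healthz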
Installing MySQL and Wordpress
Once our cluster is running, we can put it to use; for this demonstration we'll run up MySQL and Wordpress as two separate services.
We're working from the Kubernetes example using MySQL and Wordpress, which is described in detail on the Kubernetes site as a tutorial, though we did find we needed to make a few tweaks to get everything working.
Create our directory and download the yaml files:
mkdir mysql-wordpress
cd mysql-wordpress
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/local-volumes.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/mysql-deployment.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/wordpress-deployment.yaml
Amend mysql-deployment.yaml and wordpress-deployment.yaml to add annotations to each PersistentVolumeClaim, otherwise Kubernetes has trouble binding the claims to our local persistent volumes:
# Before:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress

# After:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
  labels:
    app: wordpress
Create our local persistent volume mounts.
Note: any other type of PersistentVolume would allow you to recreate the Deployments and Services at this point without losing data, but hostPath loses the data as soon as the Pod stops running. However, for the purposes of this demonstration, we're not worried about persisting data.
mkdir -p /tmp/data/pv-1
mkdir -p /tmp/data/pv-2
Create two PersistentVolumes from the local-volumes.yaml file:
kubectl create -f local-volumes.yaml
# display the persistent volumes:
kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-1 20Gi RWO Retain Bound default/wp-pv-claim 1h
local-pv-2 20Gi RWO Retain Bound default/mysql-pv-claim 1h
Creating a secret for the MySQL password
The Kubernetes site has documentation about handling secrets; in our case we need to handle the MySQL password.
One option is to create a secret in a password file:
echo -n "WHATEVER" > ./password.txt
kubectl create secret generic mysql-pass --from-file=password.txt
# Display the existence of the secret
kubectl get secrets
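If you want to double-check what was stored (remembering that the values are only base64-encoded, not encrypted), the secret can be inspected directly:
# show the keys and sizes, but not the values
kubectl describe secret mysql-pass
# show the full secret, with values base64-encoded
kubectl get secret mysql-pass -o yaml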
In the mysql-deployment.yaml we'll then need to ensure the container spec is set accordingly:
# mysql-deployment.yaml
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass
      key: password.txt
Alternatively, we can create a literal secret, which is supplied directly on the command line:
# kubectl create secret generic mysql-pass --from-literal=password=WHATEVER
# Display the existence of the secret
kubectl get secrets
# mysql-deployment.yaml
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass
      key: password
Deploy MySQL
At this point we're ready to deploy MySQL:
# deploy MySQL
kubectl create -f mysql-deployment.yaml
# Show the pod as deployed
kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-mysql-2917821887-m4cgc 1/1 Running 0 1h
# Describe the deployment, including the assigned persistent volume
kubectl describe -f mysql-deployment.yaml
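It's also worth confirming that the MySQL PersistentVolumeClaim has been bound to one of the local persistent volumes we created earlier:
# display the persistent volume claims
kubectl get pvc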
Find the pod running MySQL in the Kubernetes UI; amongst other things, it will indicate which node the pod is running on (172.17.8.103 in this case):
Inspect MySQL
# ssh into the node
ssh -i ~/.vagrant.d/insecure_private_key core@172.17.8.103
# find the running mysql container
docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c24ee28b333 mysql "docker-entrypoint..." 29 seconds ago Up 29 seconds k8s_mysql_wordpress-mysql-2917821887-r67gk_default_c269b585-abcb-11e7-bd0b-08002751ec84_0
# docker exec into the running mysql container
# and run an interactive bash shell
docker exec -it $(docker ps -ql) /bin/bash
# login to mysql with the previously-configured password
mysql -u root -p
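Alternatively, rather than opening an interactive shell, we can run a quick one-off check from the node itself; as above, this assumes the MySQL container is the most recently started one:
# list the databases and server version, prompting for the password
docker exec -it $(docker ps -ql) mysql -u root -p -e 'SHOW DATABASES; SELECT VERSION();'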
Rolling back
If you encounter problems, it's quite easy to roll back the deployment:
# if you need to rollback:
kubectl delete service wordpress-mysql
kubectl delete deployment wordpress-mysql
kubectl delete pvc mysql-pv-claim
If you get really stuck, you can also delete the local persistent volume mounts, though you'll then need to recreate them if you want to re-deploy:
# Delete the local pv mounts
kubectl delete pv local-pv-1 local-pv-2
# Recreate the pv mounts using the local-volumes.yaml file:
kubectl create -f local-volumes.yaml
# Redeploy MySQL
kubectl create -f mysql-deployment.yaml
Deploy Wordpress
Deploying Wordpress is quite similar:
# deploy wordpress
kubectl create -f wordpress-deployment.yaml
# Show the pod as deployed
kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-559664747-h22qq 1/1 Running 0 1h
wordpress-mysql-2917821887-m4cgc 1/1 Running 0 1h
# Describe the deployment, including the assigned persistent volume
kubectl describe -f wordpress-deployment.yaml
Check the Kubernetes UI to ensure that the Wordpress pod has deployed successfully.
We need to take note of the Cluster IP assigned to Wordpress, which can be seen on the UI under Services, or can be found via kubectl:
# find the internal ip
kubectl get services wordpress
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress 10.100.111.132 <pending> 80:32367/TCP 5m
Note that the Wordpress service's EXTERNAL-IP will be shown as <pending>; this is nothing to worry about in a local cluster, where there's no external load balancer available to provision one.
We'll also need to take note, from the Kubernetes UI, of the node the Wordpress pod has been assigned to; in our case it was 172.17.8.102.
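If you'd rather not rely on the UI, the same details can be pulled out with kubectl; the jsonpath expression and label selector below assume the defaults in the example's wordpress-deployment.yaml:
# the NodePort assigned to the wordpress service
kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}'
# the node the wordpress pod has been scheduled on
kubectl get pods -l app=wordpress,tier=frontend -o wide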
Inspect Wordpress
# ssh into the node
ssh -i ~/.vagrant.d/insecure_private_key core@172.17.8.102
# then curl the page using the CLUSTER-IP from
# 'kubectl get services wordpress'.
# Note we follow the redirect
curl -i -L 10.100.111.132
# lots of HTML then follows...
Use the node IP and the port reported by kubectl get services wordpress to get to the Wordpress install screen in a browser; in our case the URL is http://172.17.8.102:32367.
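On macOS you can open it straight from the terminal:
# open the wordpress install screen in the default browser
open "http://172.17.8.102:32367"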
You should then see the familiar Wordpress install screen:
Rolling back
If you encounter problems, it's quite easy to roll back the deployment:
# if you need to rollback:
kubectl delete deployment wordpress
kubectl delete service wordpress
kubectl delete pvc wp-pv-claim
If you get really stuck, you can also delete the MySQL deployment and the local persistent volume mounts, though you'll then need to recreate them if you want to re-deploy:
# Delete the MySQL deployment
kubectl delete service wordpress-mysql
kubectl delete deployment wordpress-mysql
kubectl delete pvc mysql-pv-claim
# Delete the local pv mounts
kubectl delete pv local-pv-1 local-pv-2
# Recreate the pv mounts using the local-volumes.yaml file:
kubectl create -f local-volumes.yaml
# Redeploy MySQL
kubectl create -f mysql-deployment.yaml
# Redeploy Wordpress
kubectl create -f wordpress-deployment.yaml
Cleanup
Once you tire of your Kubernetes cluster, cleanup is quite straightforward; note that the persistent volumes should be deleted while the cluster is still running:
# Delete the local persistent volumes
kubectl delete pv local-pv-1 local-pv-2
# Destroy the vagrant cluster
vagrant destroy
# Remove the local mounts
rm -rf /tmp/data/pv-1/
rm -rf /tmp/data/pv-2/
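A quick check confirms that everything has gone:
# the VMs should now be reported as 'not created'
vagrant status
# and the VM image directories should have been removed
ls -la ~/VirtualBox\ VMs/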
Summary
There are plenty of other Kubernetes examples to play around with, as well as plenty of other tutorials on the Kubernetes site; in a future post we may look at deploying to a cloud service such as AWS.