
This article provides a tutorial on deploying, running and scaling Joget on Azure Kubernetes Service (AKS). AKS is a managed Kubernetes service offered by Azure.

Deploy Joget on Azure Kubernetes Service

1. Create Kubernetes cluster in AKS

This guide walks through the creation process using the Azure portal. If you prefer to create a cluster with the Azure CLI instead, refer to the article Azure CLI.

From the Azure portal, go to Kubernetes services, then select Create a Kubernetes cluster.


On the Basics page, choose the Subscription and Resource Group, and enter the Kubernetes cluster name. Adjust the other configuration settings as desired, or leave them as default.

In the Node pools tab, you can add node pools to the cluster; see the AKS documentation on multiple node pools for details. For this guide, we will use a single node configuration.
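
If you later need more capacity, a node pool can also be added to an existing cluster from the command line. A minimal sketch using the Azure CLI (the resource group, cluster and pool names below are placeholders, not values from this guide):

az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name userpool --node-count 2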

For the other tabs - Access, Networking, Integrations, Advanced and Tags - you can keep the default options or make adjustments as necessary. After that, click Review + create to deploy the Kubernetes cluster.


When the deployment has completed, you can connect to the cluster (read here) using the Azure CLI or Azure Cloud Shell.
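
For example, you can merge the cluster credentials into your local kubeconfig and verify connectivity (assuming the placeholder names myResourceGroup and myAKSCluster):

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes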

2. Deploy MySQL Database

Once the cluster is running, you will need to deploy a database for the Joget platform. You can follow the same method of deploying a MySQL database as in the Joget Kubernetes page.

  • Create persistent storage using PersistentVolume and PersistentVolumeClaim

kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml

  • Deploy the MySQL image

kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml

  • Inspect the deployment

kubectl describe deployment mysql
kubectl get pods -l app=mysql
kubectl describe pvc mysql-pv-claim

For production usage, you should modify the original yaml files (e.g. use a specific MySQL image version, and a Kubernetes Secret instead of a plain-text password in the yaml), as sketched below.
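
As a sketch of the Secret approach, you could first create a Secret for the password (the secret name mysql-pass and key password here are illustrative, not part of the original manifest):

kubectl create secret generic mysql-pass --from-literal=password='YOUR_PASSWORD'

Then reference it from the container env section of mysql-deployment.yaml in place of the plain-text value:

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password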

3. Deploy shared storage in AKS

If you are running a multi-node Kubernetes cluster, you will need shared persistent storage with read-write access from multiple nodes. In Azure, you can set up an NFS volume to be used by the AKS cluster. Refer to the official documentation here for detailed info and steps.

  1. Create an Azure Ubuntu VM in the same Virtual Network as the AKS cluster.
  2. Set up an NFS server on the VM.

From the linked documentation, you can use this script to set up the NFS server (edit the variables as necessary, especially AKS_SUBNET).

#!/bin/bash
# This script should be executed on Linux Ubuntu Virtual Machine

EXPORT_DIRECTORY=${1:-/export/data}
DATA_DIRECTORY=${2:-/data}
AKS_SUBNET=${3:-*}

echo "Updating packages"
apt-get -y update

echo "Installing NFS kernel server"

apt-get -y install nfs-kernel-server

echo "Making data directory ${DATA_DIRECTORY}"
mkdir -p ${DATA_DIRECTORY}

echo "Making new directory to be exported and linked to data directory: ${EXPORT_DIRECTORY}"
mkdir -p ${EXPORT_DIRECTORY}

echo "Mount binding ${DATA_DIRECTORY} to ${EXPORT_DIRECTORY}"
mount --bind ${DATA_DIRECTORY} ${EXPORT_DIRECTORY}

echo "Giving 777 permissions to ${EXPORT_DIRECTORY} directory"
chmod 777 ${EXPORT_DIRECTORY}

parentdir="$(dirname "$EXPORT_DIRECTORY")"
echo "Giving 777 permissions to parent: ${parentdir} directory"
chmod 777 $parentdir

echo "Appending bound directories into fstab"
echo "${DATA_DIRECTORY}    ${EXPORT_DIRECTORY}   none    bind  0  0" >> /etc/fstab

echo "Appending localhost and Kubernetes subnet address ${AKS_SUBNET} to exports configuration file"
echo "/export        ${AKS_SUBNET}(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports
echo "/export        localhost(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports

nohup service nfs-kernel-server restart
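
Once the script completes on the VM, you can confirm that the directory is being exported (assuming the script's default export path):

showmount -e localhost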

After the NFS server has been set up, you can then create the PersistentVolume and PersistentVolumeClaim. Replace NFS_INTERNAL_IP and NFS_EXPORT_FILE_PATH in the yaml below with the VM's internal IP address and the exported directory path.

Example azurenfsstorage.yaml;


apiVersion: v1
kind: PersistentVolume
metadata:
  name: aks-nfs
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: NFS_INTERNAL_IP
    path: NFS_EXPORT_FILE_PATH
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aks-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: nfs

kubectl apply -f azurenfsstorage.yaml
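
You can verify that the claim has bound to the volume before moving on; the PVC STATUS should show Bound:

kubectl get pv aks-nfs
kubectl get pvc aks-nfs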

4. Deploy Joget DX

With the prerequisite database and persistent storage available, you can now deploy Joget by applying the example joget-dx7-tomcat9-aks.yaml file below.

Example joget-dx7-tomcat9-aks.yaml;


apiVersion: apps/v1
kind: Deployment
metadata:
  name: joget-dx7-tomcat9
  labels:
    app: joget-dx7-tomcat9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: joget-dx7-tomcat9
  template:
    metadata:
      labels:
        app: joget-dx7-tomcat9
    spec:
      initContainers:
        - name: init-volume
          image: busybox:1.28
          command: ['sh', '-c', 'chmod -f -R g+w /opt/joget/wflow; exit 0']
          volumeMounts:
            - name: joget-dx7-tomcat9-volume
              mountPath: "/opt/joget/wflow"
      volumes:
        - name: joget-dx7-tomcat9-volume
          persistentVolumeClaim:
            claimName: aks-nfs
      securityContext:
        runAsUser: 1000
        fsGroup: 0
      containers:
        - name: joget-dx7-tomcat9
          image: jogetworkflow/joget-dx7-tomcat9:latest
          ports:
            - containerPort: 8080
            - containerPort: 9080
          volumeMounts:
            - name: joget-dx7-tomcat9-volume
              mountPath: /opt/joget/wflow
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                    fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  name: joget-dx7-tomcat9
  labels:
    app: joget-dx7-tomcat9
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 9080
    targetPort: 9080  
  selector:
    app: joget-dx7-tomcat9
  type: ClusterIP
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: joget-dx7-tomcat9-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default


You can then check the deployment progress from the Azure portal, or use kubectl (e.g. kubectl get deployment joget-dx7-tomcat9).
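
For a closer look, you can also confirm the pod is running and inspect its logs (the names match the example manifest above):

kubectl get pods -l app=joget-dx7-tomcat9
kubectl logs -f deployment/joget-dx7-tomcat9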

5. Deploy Ingress for external connections

You can then expose the application for external access through an Ingress. You can read more about Ingress in Kubernetes here. In this guide, we will use the Nginx Ingress Controller as an example to access Joget.

Deploy Nginx Ingress Controller to AKS cluster

You can refer to the AKS documentation on creating an ingress controller, as well as the ingress-nginx documentation.

There are two common methods of deploying the Nginx Ingress Controller to the AKS cluster:

  1. Deploy through Helm
  2. Use yaml file from the Nginx Ingress Controller Github

Install using Helm

Using the Azure CLI/Cloud Shell, install the Nginx Ingress chart with Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace nginx-ingress

Install using yaml file

Alternatively, use the kubectl apply command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
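
With either method, confirm that the controller pods are running and that the controller service has been assigned an external IP (the namespace is ingress-nginx for the yaml install, or nginx-ingress if you used the Helm command above):

kubectl get pods -n ingress-nginx
kubectl get service -n ingress-nginx ingress-nginx-controller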

After the Ingress Controller has been deployed, apply the Ingress yaml below so that the Joget application can be accessed externally.

Example joget-ingress.yaml;

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 8080

After the Ingress deployment is completed, you can get the public IP from the Kubernetes resources > Services and Ingresses pane in the Azure portal (e.g. http://<external-ip>/jw).
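
Alternatively, the same address appears in the ADDRESS column when you query the Ingress resource directly:

kubectl get ingress joget-dx7-tomcat9-ingress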

Setup Database

To complete the Joget deployment, you need to perform a one-time Database Setup. Key in the MySQL service name as the database host, along with the Database Username and Password, then click Save.
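
If you used the example MySQL deployment above, the host is the mysql service name on port 3306. As a quick sanity check (a sketch assuming the root password from the example yaml), you can test connectivity from a temporary client pod first:

kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -h mysql -p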



Once the setup is complete, click on Done and you will be brought to the Joget App Center.

6. Setup cert-manager for TLS termination

Before starting the TLS setup, you need to set enable-underscores-in-headers to true for the Ingress controller using a ConfigMap.

Example ingress-configmap.yaml;

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: "true"
  allow-snippet-annotations: "true"

Update the Ingress configuration with kubectl apply -f ingress-configmap.yaml

Install cert-manager into the cluster

Similar to installing the ingress controller, you can install cert-manager either through Helm or through a yaml file. Refer to the official cert-manager documentation here for details. For this guide, we will use the yaml file method.

Note: Before going further with these steps, make sure that you have set up a DNS record pointing to the public IP of the ingress generated by AKS earlier.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml

[ ~/jogetaks ]$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
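
Before configuring an issuer, verify that the cert-manager pods are running:

kubectl get pods --namespace cert-manager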


Configure Let’s Encrypt issuer

Example stagingissuer.yaml file;

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [update email here]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

kubectl apply -f stagingissuer.yaml


You can check the status of the ClusterIssuer resource after you have deployed it:

kubectl describe clusterissuer letsencrypt-staging


Deploy/Update the Ingress with TLS configuration

As we previously deployed the Ingress without TLS, we can update the Ingress yaml file to include the TLS configuration.

Example Ingress yaml with TLS;

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - exampledomain.com
    secretName: aks-jogetworkflow
  rules:
    - host: exampledomain.com
      http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 9080


This staging procedure ensures that the certificate is generated correctly before we set up the production Let's Encrypt issuer. Check the certificate status:

kubectl get certificate

[ ~/jogetaks ]$ kubectl get certificate
NAME                READY   SECRET              AGE
aks-jogetworkflow   True    aks-jogetworkflow   30s

kubectl describe certificate aks-jogetworkflow


If the certificate is generated correctly, we can then set up the production issuer.

Example productionissuer.yaml file;

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [update email here]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class:  nginx



Update the ingress yaml file with the production annotation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - exampledomain.com
    secretName: aks-jogetworkflow
  rules:
    - host: exampledomain.com
      http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 9080


After applying the updated ingress yaml, delete the previous secret so that a new certificate can be issued by the production issuer.

kubectl delete secret aks-jogetworkflow


Then re-run the describe command to check the certificate status:

kubectl describe certificate aks-jogetworkflow


After the new certificate has been issued, you can then access the Joget domain with https to ensure that everything is working properly.

