🌇 Sunsetting Kubernetes Deployments
This page covers our PostHog Kubernetes deployment, which we are currently in the process of sunsetting. Existing customers will receive support until May 31, 2023, and we will continue to provide security updates for the next year.
For existing customers
We highly recommend migrating to PostHog Cloud (US or EU). Take a look at this guide for more information on the migration process.
Looking to continue self-hosting?
We still maintain our Open-source Docker Compose deployment. Instructions for deploying can be found here.
Requirements
You need to run a Kubernetes cluster with the Volume Expansion feature enabled. This feature has been supported for the majority of volume types since Kubernetes 1.11 (see docs).
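To check which version your cluster is running, you can query the API server version (a minimal sketch; it assumes `jq` is available, as in the storage class check below):
```
# Prints the Kubernetes server version, e.g. v1.24.9
kubectl version -o json | jq -r '.serverVersion.gitVersion'
```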
Details
`PersistentVolumes` can be configured to be expandable. This feature, when set to `true`, allows users to resize a volume by editing the corresponding `PersistentVolumeClaims` object.
This can become useful if your storage usage grows and you want to resize the disk on the fly without having to resync data across PVCs.
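If you want to see which persistent volumes your deployment currently uses and how large they are, you can list the PVCs (a minimal sketch; it assumes PostHog is installed in the posthog namespace, as in the how-to below):
```
# Shows each PVC's bound volume, capacity, and storage class
kubectl -n posthog get pvc
```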
To verify if your storage class allows volume expansion, you can run:
```
kubectl get storageclass -o json | jq '.items[].allowVolumeExpansion'
true
```
In case it returns `false`, you can enable volume expansion capabilities for your storage class by running:
```
DEFAULT_STORAGE_CLASS=$(kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
kubectl patch storageclass "$DEFAULT_STORAGE_CLASS" -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/gp2 patched
```
N.B.:
- expanding a persistent volume is a time-consuming operation (you can monitor its progress as sketched below)
- some platforms have a per-volume quota of one modification every 6 hours
- not all volume types support this feature. Please take a look at the official docs for more info
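Because expansion can take a while, it can help to watch the PVC itself while it resizes (a minimal sketch; the PVC name matches the Kafka example in the how-to below):
```
# Conditions such as FileSystemResizePending show that a resize is still in progress
kubectl -n posthog describe pvc data-posthog-posthog-kafka-0

# Recent events on the PVC surface resize progress and errors
kubectl -n posthog get events --field-selector involvedObject.name=data-posthog-posthog-kafka-0
```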
How-to
List your pods
```
kubectl get pods -n posthog
NAME                      READY   STATUS    RESTARTS   AGE
posthog-posthog-kafka-0   1/1     Running   0          5m15s
```
Connect to the Kafka container to verify the data directory filesystem size (in this example 15GB)
```
kubectl -n posthog exec -it posthog-posthog-kafka-0 -- /bin/bash

posthog-posthog-kafka-0:/$ df -h /bitnami/kafka
Filesystem                                                                Size  Used  Avail  Use%  Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-97776a5e-9cdc-4fac-8dad-199f1728b857   15G   40M    14G    1%  /bitnami/kafka
```
Resize the underlying PVC (in this example we are resizing it to 20G)
```
kubectl -n posthog patch pvc data-posthog-posthog-kafka-0 -p '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}'
persistentvolumeclaim/data-posthog-posthog-kafka-0 patched
```
Note: while resizing the PVC you might get the error `disk resize is only supported on Unattached disk, current disk state: Attached` (see below for more details). In this specific case you need to temporarily scale down the `StatefulSet` replica value to zero. This will briefly disrupt the Kafka service availability, and all events after this point will be dropped as event ingestion will stop working. You can do that by running:
```
kubectl -n posthog patch statefulset posthog-posthog-kafka -p '{ "spec": { "replicas": 0 }}'
```
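Before retrying the PVC patch, you can confirm the scale-down completed (a minimal sketch using the same names as above):
```
# The StatefulSet should now report 0 ready replicas
kubectl -n posthog get statefulset posthog-posthog-kafka

# Wait for the posthog-posthog-kafka-0 pod to terminate before retrying the resize
kubectl -n posthog get pods -w
```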
After you have successfully resized the PVC, you can restore the initial replica definition with:
```
kubectl -n posthog patch statefulset posthog-posthog-kafka -p '{ "spec": { "replicas": 1 }}'
```
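To wait until Kafka is back before continuing, you can watch the rollout (a minimal sketch; `rollout status` works for StatefulSets as well as Deployments):
```
# Blocks until the StatefulSet reports its replica as ready again
kubectl -n posthog rollout status statefulset posthog-posthog-kafka
```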
Delete the `StatefulSet` definition but leave its pods online (this is to avoid an impact on the ingestion pipeline availability):
```
kubectl -n posthog delete sts --cascade=orphan posthog-posthog-kafka
```
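You can double-check that the pods were orphaned rather than deleted (a minimal sketch; the StatefulSet lookup is expected to return NotFound once it has been removed):
```
# The StatefulSet definition should be gone...
kubectl -n posthog get sts posthog-posthog-kafka

# ...while the posthog-posthog-kafka-0 pod keeps running
kubectl -n posthog get pods
```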
In your Helm chart configuration, update the `kafka.persistence` value in `values.yaml` to the target size (20G in this example). You might want to update the retention policy too; more info here.
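As an illustration, if you manage the file with yq, the edit might look like this (assumptions: mikefarah's yq v4 is installed and the chart exposes the size under `kafka.persistence.size`; double-check the path against your `values.yaml` layout):
```
# Edits values.yaml in place; the kafka.persistence.size path is an assumption
yq -i '.kafka.persistence.size = "20Gi"' values.yaml
```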
Run a `helm` upgrade to recycle all the pods and re-deploy the `StatefulSet` definition.
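A hedged sketch of what that upgrade might look like (the `posthog` release name, `posthog/posthog` chart reference, and namespace are assumptions based on the examples above):
```
# Re-deploys the chart with the updated values, recreating the StatefulSet definition
helm upgrade posthog posthog/posthog -n posthog -f values.yaml
```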
Connect to the Kafka container to verify the new filesystem size
```
kubectl -n posthog exec -it posthog-posthog-kafka-0 -- /bin/bash

posthog-posthog-kafka-0:/$ df -h /bitnami/kafka
Filesystem                                                                Size  Used  Avail  Use%  Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-97776a5e-9cdc-4fac-8dad-199f1728b857   20G   40M    19G    1%  /bitnami/kafka
```