Since I would like to push forward with coordinating work on the Mastodon Helm chart, I created matrix.to/#/#mastodon-on-kuber

A room for Mastodon on Kubernetes that should act as both a development and help chat.

One of the greatest benefits of my current work on making each and every service I run HA on Kubernetes is that I can pick when I fight my battles.

A node is broken? Well, cordon/drain it and deal with it tomorrow.
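
For reference, that's just these two kubectl commands (node name is a placeholder):

```sh
kubectl cordon worker-03   # stop scheduling new Pods onto the node
kubectl drain worker-03 --ignore-daemonsets --delete-emptydir-data   # evict what is currently running there
```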

One of the PostgreSQL instances fails? No problem, I can look at it tonight.

My major pain point currently is Redis.

Running Redis HA seems to require 6 containers, which feels like a hassle. Currently looking into KeyDB, but it's not convincing me yet.

Turned out to be a setting on the Ingress object.

For a reason I haven't yet fully understood, ingress-nginx resorted to 502s and 504s for seemingly random requests when using the Pod endpoints directly. By setting `nginx.ingress.kubernetes.io/service-upstream: true` this was resolved and it suddenly decided to return proper answers.

What I could validate from the logs is that the requests actually never hit the mastodon-web Pods. By using the service, things work exactly as expected.
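
In case someone wants to try the same workaround, this is the annotation in question, here applied ad hoc with kubectl (the Ingress name is a placeholder; in a Helm chart you would set it under the ingress annotations instead):

```sh
kubectl annotate ingress mastodon-web \
  nginx.ingress.kubernetes.io/service-upstream=true
```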

For all the Kubernetes folks out there, as I just rediscovered it in my bookmarks: share this guide with your users:

learnk8s.io/troubleshooting-de

The PDF, providing a nice flow-chart diagram to diagnose Pod issues, will save you a lot of time.

And even when you are already familiar with Kubernetes, it'll help you structure your debugging process and not jump to conclusions too quickly, just to figure out that you missed something along the way!

Last night I deployed Mastodon on my cluster. It wasn't as successful as I'd have liked: the official chart lacks a few configurable fields (Jobs being a terminating workload, being able to set resources for ALL Pods, …). Working on some improvements; the first merge request already landed, just updating metadata. There is more to come!

github.com/mastodon/mastodon/p

For those running Kubernetes and looking with concern at the upcoming OpenSSL vulnerability release, I just built a simple script to collect the libssl and openssl versions of all images in your cluster:

git.shivering-isles.com/-/snip

(Should also work for partial views on a cluster, e.g. if you just run a few namespaces.)
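
The snippet itself is behind the link, but the rough idea is something like this sketch (assumes kubectl, docker, and images that ship a shell):

```sh
# List every image running in the cluster, then ask each one which openssl/libssl it ships.
kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].image}' \
  | tr ' ' '\n' | sort -u \
  | while read -r image; do
      echo "### $image"
      docker run --rm --entrypoint /bin/sh "$image" -c \
        'openssl version 2>/dev/null; dpkg -l 2>/dev/null | grep libssl; apk list --installed 2>/dev/null | grep ssl' \
        || echo "  (pull failed or no shell in image)"
    done
```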

Running a cluster on your own is madness. A mad amount of fun. That's why I do it. :X

Mhm, it all started with "I would need to reconfigure my Postfix" and ended up with me building images for Dovecot and Postfix, and currently writing a Helm chart for all of it 👀

Things always escalate so quickly.

And another quick blog article: How to use Harbor and CRI-O to mirror all images in your cluster without changing any registry path?

shivering-isles.com/mirroring-

Wrote it down as an article after explaining it to someone on Matrix. Someone might consider it useful.
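
The gist of the trick, as a rough sketch (Harbor hostname and project are placeholders, not the article's exact config): point CRI-O's registries config at a Harbor proxy-cache project as a mirror, so image names stay untouched while pulls are served from Harbor.

```sh
cat > /etc/containers/registries.conf.d/docker-io-mirror.conf <<'EOF'
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "harbor.example.com/dockerhub-proxy"
EOF
systemctl restart crio
```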

Well, that went unexpectedly smoothly for such a sensitive part of the cluster:

git.shivering-isles.com/shiver

Just relocated the entire network stack without downtime by fiddling with some labels and Kubernetes objects.

I'm still searching for where it broke, because I can't believe I planned this out and it worked on the first try.

Mhm, I'm kind of tired of talks about container security that keep chanting "build minimal container images". Does anyone have a case where minimal container images would have prevented a compromise?

And if so: would a minimal "distroful" image (e.g. an alpine or debian base image) have prevented it, or only a "distroless"/scratch container for statically linked builds?

Finished a short article this weekend about the Zalando postgres-operator and how to deploy it alongside monitoring metrics.

shivering-isles.com/postgres-o

Looked for a comprehensive guide on that for a few days and couldn't find one, so I wrote it myself :)
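
If you just want the shape of it, here's a rough sketch (my assumptions, not necessarily the article's exact steps): install the operator via its Helm chart and give the cluster a postgres_exporter sidecar so Prometheus has something to scrape.

```sh
helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator

# Minimal cluster with a metrics sidecar (the exporter's connection env is omitted for brevity).
kubectl apply -f - <<'EOF'
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: acid
  numberOfInstances: 2
  volume:
    size: 5Gi
  postgresql:
    version: "14"
  sidecars:
    - name: exporter
      image: quay.io/prometheuscommunity/postgres-exporter:v0.11.1
      ports:
        - name: metrics
          containerPort: 9187
EOF
```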

Me in the middle of the night: "How complicated is it to get vulnerability scanning for everything that is deployed in my cluster?"
Me 2 hours later: "It works! But there is this minor issue that currently all images for scanning are downloaded from upstream instead of my local mirrors. Maybe there is a setting for that."
Suddenly: github.com/aquasecurity/starbo
Today: It's merged!

Those running Kubernetes might want to ask themselves what they do with their credentials.

Today I moved mine into pass, the unix password manager, and configured the kubeconfig to query them on demand, which means the certificates are now secured by my YubiKey:

shivering-isles.com/store-kube

Might be an idea for you as well, so I documented it for myself and others :)
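
The kubeconfig side is just an exec credential plugin; roughly like this sketch (script path and pass entry names are made up, not necessarily what the article uses):

```sh
# Wrapper that prints an ExecCredential with the client cert/key pulled from pass on demand.
cat > ~/.kube/pass-credential.sh <<'EOF'
#!/bin/sh
cat <<JSON
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": $(pass show k8s/admin-cert | jq -Rs .),
    "clientKeyData": $(pass show k8s/admin-key | jq -Rs .)
  }
}
JSON
EOF
chmod +x ~/.kube/pass-credential.sh

# Point the kubeconfig user at the wrapper instead of embedding the certificate data.
kubectl config set-credentials admin \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command="$HOME/.kube/pass-credential.sh"
```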

I finally found the backup solution I was looking for: k8up.io (v2).

Why? Because it can do 2 things:
1. Backups per namespace
2. Encrypt the backup before uploading it.

Now we are moving towards real workloads in the cluster.
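
For the curious, per namespace that boils down to roughly one object like this (names, bucket and secrets are placeholders; the restic repository password doubles as the encryption key, so data is encrypted before upload):

```sh
kubectl apply -n my-app -f - <<'EOF'
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: backup-schedule
spec:
  backend:
    repoPasswordSecretRef:      # restic repo password = client-side encryption
      name: backup-repo
      key: password
    s3:
      endpoint: https://s3.example.com
      bucket: my-app-backups
      accessKeyIDSecretRef:
        name: backup-s3
        key: access-key
      secretAccessKeySecretRef:
        name: backup-s3
        key: secret-key
  backup:
    schedule: '0 3 * * *'
EOF
```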

By the way, are there any recommended Helm charts for Mastodon?

The ones I found were quite outdated, so unless someone can hit me with one, I guess I'll have to write one myself.

Another day, another flatpak. Currently building a "Lens" flatpak.

Putting k8slens.dev/ into a flatpak. Once this is done and working, submitting to FlatHub :)

Currently a bit undecided about the permissions, because it contains a CLI feature, but I'll probably go with fewer rather than more permissions.

Something that really drives me crazy is how everything K8s related boils down to "curl this binary from our website and run it in your full user context" 👀

This should help.

:blobfox_com: If you run a shared K8s environment, you might want to take action to prevent CVE-2020-8554:

blog.champtar.fr/K8S_MITM_Load

"MitM-as-a-service" as anyone with the rights to create a service objecr can take over IP addresses for all namespaces of a cluster.
