• 0 Posts
  • 4 Comments
Joined 11 months ago
Cake day: October 22nd, 2023

  • Let’s Encrypt uses the ACME protocol to prove ownership of the domain when generating certificates.

    There are various challenges they use to prove ownership of the domain. The default one (‘HTTP-01’) just places a special file on your web server that Let’s Encrypt then fetches.

    However there are a number of different types of challenges.

    If you don’t want to expose anything to the internet then a common one to use is ‘DNS Challenge’.

    With the DNS challenge, certbot uses your DNS server/provider’s API to update DNS records in response to the challenge. Let’s Encrypt reads the special TXT record and verifies that you own the domain.
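
    The TXT value itself is defined by RFC 8555: the unpadded base64url SHA-256 hash of the challenge token joined to your ACME account key’s thumbprint. A minimal sketch (the token and thumbprint below are made-up placeholders, not real values):

    ```python
    import base64
    import hashlib

    def dns01_txt_value(token: str, account_thumbprint: str) -> str:
        # Key authorization per RFC 8555: "<token>.<JWK thumbprint>"
        key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
        # TXT record value: unpadded base64url of the SHA-256 digest
        digest = hashlib.sha256(key_authorization).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # Placeholder inputs for illustration only
    print(dns01_txt_value("test-token", "test-thumbprint"))
    ```

    This is what certbot publishes at _acme-challenge.yourdomain via the provider’s API.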

    So to use this you need two things:

    1. A DNS domain

    2. A DNS provider with an API that certbot can use.

    AWS Route53 is a good one to use, but I have used Digital Ocean’s free DNS service, BIND servers, Njalla, and other things. Most commonly used DNS providers are supported one way or another.
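
    With Route53, for example, the flow is one command once the certbot-dns-route53 plugin is installed and AWS credentials are in place (domain here is a placeholder):

    ```shell
    # Request a cert covering the apex and a wildcard via the DNS challenge.
    # Requires AWS credentials with permission to edit the hosted zone.
    certbot certonly \
      --dns-route53 \
      -d 'home.example.com' \
      -d '*.home.example.com'
    ```

    Other providers have their own plugins (certbot-dns-digitalocean, certbot-dns-rfc2136 for BIND, etc.) that work the same way.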

    You can also get fancy and delegate the challenges to a subdomain or a different domain (the _acme-challenge record can be a CNAME pointing elsewhere). So if your main DNS zone is locked down, you can still answer the challenge from a record on a different server.

    The big win for going with DNS Challenge is that you can do wildcard certificates.

    So say you are setting up a reverse proxy that will serve vault.home.example.com, fileshare.home.example.com, torrent.home.example.com, and a bunch of others… all you need to do is configure your reverse proxy with a single *.home.example.com cert and it’ll handle anything you throw at it.

    You can’t do that with the normal HTTP challenge, which makes the DNS challenge worth it, IMO, even if you do have a public-facing web server.
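
    As a sketch, the reverse proxy side of that setup might look like this in nginx (cert paths assume certbot’s default layout; the backend address is hypothetical):

    ```nginx
    server {
        listen 443 ssl;
        # One wildcard cert covers every subdomain the proxy serves.
        server_name *.home.example.com;
        ssl_certificate     /etc/letsencrypt/live/home.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/home.example.com/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8080;  # hypothetical backend
        }
    }
    ```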


  • The problem with hosting kubernetes on VPSes is that exposing the Kubernetes API to the public is pretty sketchy. I know a lot of people do it, but I don’t like the idea.

    I also like having multiple smaller Kubernetes clusters rather than a single big one. They are easier to manage and breakage is more isolated. You can incorporate external services into Kubernetes pretty easily using Kubernetes Services and Endpoints.

    I suggest using K3s as it is very lightweight, easy to deploy, and k8s compliant. K3s deploys a default set of services designed for more ‘IoT’ applications, things like servicelb. These can be disabled at install time if you want.
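
    For example, the standard K3s install script accepts --disable flags for the bundled components (which ones you drop depends on your setup):

    ```shell
    # Install K3s, skipping the bundled service load balancer and ingress.
    curl -sfL https://get.k3s.io | sh -s - \
      --disable servicelb \
      --disable traefik
    ```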

    For managing it I like to run ArgoCD on an ‘administrative’ Kubernetes cluster local to you. It has no problem connecting to multiple clusters, and its declarative YAML configuration works well with a git-based workflow. The web UI is nice and widely used.
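
    As a sketch, an ArgoCD Application that deploys a git repo onto one of the remote clusters looks roughly like this (the repo URL, paths, and cluster address are all placeholders):

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: fileshare               # hypothetical app name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/homelab.git   # placeholder repo
        targetRevision: main
        path: apps/fileshare
      destination:
        server: https://k3s-node.example.com:6443      # remote cluster API
        namespace: fileshare
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    ```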


  • Distrobox is similar to Fedora’s Toolbox.

    It allows you to run a Linux distribution integrated into your desktop environment. It uses podman (preferred) or docker containers to do this.

    Essentially it creates a container that shares your $HOME and sets up the environment to integrate into your desktop as seamlessly as possible.

    Typically people use it with an “immutable” (read-only root) Linux distribution like Fedora Silverblue for building development environments. But you can use it with any Linux distribution.

    https://github.com/89luca89/distrobox

    I run Silverblue on my desktop, but run Emacs out of an Arch Linux container. I launch Emacs with a .desktop file, which means it looks and behaves like a normal GUI application.
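
    The rough workflow for that setup (image tag and app name here are just examples):

    ```shell
    # Create an Arch container that shares your $HOME
    distrobox create --name arch --image archlinux:latest

    # Enter it and install whatever you need (e.g. pacman -S emacs inside)
    distrobox enter arch

    # From INSIDE the container: export Emacs as a .desktop entry on the host
    distrobox-export --app emacs
    ```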

    Flatpak is very good for desktop applications, but it falls short for command-line tools or daemon-type services. Emacs, especially Doom Emacs, is very complicated, with lots of dependencies on external applications, LSP servers, and multiple languages. That is not easy to package with Flatpak, but it is exactly the sort of thing Distrobox is ideal for containerizing.