• 0 Posts
  • 7 Comments
Joined 11 months ago
Cake day: November 2nd, 2023


  • So, I run OPNsense in a VM on Proxmox. The only drawback I’m aware of is that when I update the Proxmox host itself, I need to attach a monitor and keyboard to it. Theoretically, if the upgrade were fully automatic and never needed any intervention or user input, it would be possible without. But the reality is that it might need user input while the OPNsense VM isn’t booted, i.e. the network is down, i.e. I need direct access to the Proxmox host.



  • The NAS can do almost everything you need except the offsite part (C2 or similar).
    For example, Synology (so far I’ve only had these) has a built-in DynDNS service that gives you a subdomain you can use to access the NAS without extra steps. I bet all the other NAS brands have this built-in as well. Whichever you pick, definitely enable 2FA. Also, if you can set up your storage pool as btrfs, that’s great too.

    As others pointed out, you need an offsite copy on some C2 provider, a friend’s NAS, or whatever. (If you’ve really got no budget, you could get a bunch of free subscriptions (Dropbox etc.) and split the backups up between them.)
    The NAS will have an app that already supports a whole lot of providers, plus things like external USB drives, and you can set up automatic backups there.



  • It’s funny how, as a self-hoster with no open ports, supply-chain attacks are almost my biggest worry… Here are the tidbits I’ve collected so far, but I’m just getting into this, so take it with a grain of salt:

    1. Working out how to run my containers as non-root… Most images support this already: add a user: UID:GID to the compose file, make sure that user can read and write any dirs you want to map, and it’s done. Now whatever runs in the container doesn’t have root, so there’s less chance of shenanigans inside the container and on the host (a compose sketch covering this, plus items 5 and 6, follows the list).
      Some smaller projects you have to tweak or rebuild.*
    2. If I can manage it, I’ll also run the Docker daemon rootless as the next milestone. I already had this working in an Ubuntu VM on Proxmox, but could not get it to work on a netcup VPS, for example.
    3. A Docker socket proxy, so containers that need the Docker API don’t get the real socket (sketch after the list as well).
    4. VLANs
    5. In compose files, if the containers can handle it:
      security_opt:
        - no-new-privileges:true
      cap_drop:
        - ALL
    6. (I still have to work out the secrets stuff: secrets in files, Ansible Vault, …)
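
    To tie items 1, 5 and 6 together, one of my compose files ends up looking roughly like the sketch below. It’s a minimal sketch, not a recipe: the image name, UID/GID, paths and secret names are placeholders, and not every image tolerates cap_drop: ALL or file-based secrets, so check the image’s docs first.

      services:
        myapp:                            # placeholder service, not a specific project
          image: example/myapp:latest     # placeholder image
          user: "1000:1000"               # unprivileged UID:GID that owns ./data
          security_opt:
            - no-new-privileges:true
          cap_drop:
            - ALL
          volumes:
            - ./data:/data                # make sure UID 1000 can read/write this dir
          secrets:
            - myapp_db_password           # shows up in the container as /run/secrets/myapp_db_password
          environment:
            # many images accept a *_FILE variant so the secret never sits in an env var
            DB_PASSWORD_FILE: /run/secrets/myapp_db_password

      secrets:
        myapp_db_password:
          file: ./secrets/db_password.txt # plain file on the host, keep it out of git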
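
    And for item 3: the idea of the socket proxy is that anything needing the Docker API (Traefik, a dashboard, an update notifier, …) talks to a small proxy container instead of getting /var/run/docker.sock mounted directly, and the proxy only exposes the API sections you explicitly switch on. A sketch using tecnativa/docker-socket-proxy; the env-var switches are the ones I remember from its docs, so double-check them:

      services:
        docker-socket-proxy:
          image: tecnativa/docker-socket-proxy:latest
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro   # only this container sees the real socket
          environment:
            CONTAINERS: 1                 # allow read access to the containers endpoints
            POST: 0                       # refuse state-changing requests
          networks:
            - socket-proxy
          restart: unless-stopped

        some-consumer:
          image: example/needs-docker-api:latest             # placeholder for Traefik, a dashboard, etc.
          environment:
            DOCKER_HOST: tcp://docker-socket-proxy:2375      # point it at the proxy, not the real socket
          networks:
            - socket-proxy

      networks:
        socket-proxy:
          internal: true                  # this network never reaches the outside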

    (* One example of non-rootifying a container: I got tempo running as non-root the other night. It’s based on an nginx Alpine Linux image, and after a while I found an nginx.conf online where all the dirs are redirected to /tmp, so nginx can still run when a non-root user launches it. I mapped that config file over the one in the container, set it to run as my user, and it works. Didn’t even have to rebuild it.)
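
    In compose terms that ended up looking roughly like this. Sketch from memory: the image name, port and paths are placeholders, and the interesting part is the downloaded nginx.conf, which points the pid file and temp/cache dirs at /tmp so nginx has somewhere it can write as a normal user.

      services:
        tempo:
          image: example/tempo:latest            # placeholder for the nginx-alpine-based image
          user: "1000:1000"                      # my own user instead of root
          volumes:
            # the nginx.conf found online: pid and *_temp_path directives all point under /tmp
            - ./nginx-nonroot.conf:/etc/nginx/nginx.conf:ro
          ports:
            - "8080:8080"                        # unprivileged port; non-root can't bind 80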


  • Hey, this is where I’m stuck just now: I want to keep the Docker volumes, as bind mounts, on my NAS share as well. If the containers run as a separate non-root user (say UID 1001), then I can mount that share as 1001… sounds good, right?

    But somebody suggested running each container as its own user, and then I would need lots of differently owned directories. I wonder if I could mount subdirs of the same NAS share as different users, so each container has its own file access? Perhaps that is overkill.
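
    For what it’s worth, the direction I’m experimenting with (not claiming it’s best practice): mount the share, or a per-service subdir of it, as a compose volume with the NAS’s uid/gid options, so the files come out owned by the container’s user. A sketch assuming a CIFS/SMB share at //nas/docker and a container user of 1001; for NFS the ownership is simply whatever the files have on the server:

      services:
        myapp:
          image: example/myapp:latest            # placeholder
          user: "1001:1001"
          volumes:
            - myapp-data:/data

      volumes:
        myapp-data:
          driver: local
          driver_opts:
            type: cifs
            device: //nas/docker/myapp           # a subdir per service on the same share
            # credentials inline only for the sketch; uid/gid make the mount appear owned by 1001
            o: username=dockeruser,password=changeme,uid=1001,gid=1001,file_mode=0660,dir_mode=0770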

    (For OP: I’ve been on a self-hosting binge the past week, trying to work my way in at least the general direction of best practice… For the container databases I’ve started using tiredofit/docker-db-backup (it does database dumps), but I also discovered jdfranel’s docker backup, which looks great as well. I save the dumps on a volume mounted from the NAS; it’s btrfs and there’s a folder replication (snapshots) tool. So far, so good.)
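
    In case it’s useful, the db-backup part of my stack looks roughly like the sketch below. The env-var names are the ones I remember from the tiredofit/docker-db-backup README (newer releases use a DB01_-prefixed scheme), so check the docs for your version; /mnt/nas/db-dumps and the network name are just my setup.

      services:
        db-backup:
          image: tiredofit/db-backup:latest     # Docker Hub image for the tiredofit/docker-db-backup project
          environment:
            DB_TYPE: postgres                   # or mariadb, mysql, ...
            DB_HOST: myapp-db                   # the database container's service name
            DB_NAME: myapp
            DB_USER: myapp
            DB_PASS: changeme                   # better: the file-based secrets from the other comment
            DB_DUMP_FREQ: 1440                  # minutes between dumps, i.e. daily
          volumes:
            - /mnt/nas/db-dumps:/backup         # NAS-backed path; btrfs snapshots/replication happen on the NAS
          networks:
            - myapp-backend                     # same network as the database container
          restart: unless-stopped

      networks:
        myapp-backend:
          external: true                        # assumed to already exist in the app's own compose file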