• 1 Post
  • 15 Comments
Joined 10 months ago
Cake day: October 17th, 2023


  • It would be nice if we, and apps’ developers, always knew what the vulnerabilities are. They generally exist because the developer doesn’t know about them yet, or hasn’t found a solution yet (though ideally has been transparent about that). Zero-day exploits happen. There’s always a first person or group discovering a flaw.

    If being up to date and using SSL was all it took, security would be a lot simpler.

    No one security measure is ever foolproof, other than taking everything offline. But multiple used in tandem make it somewhere between inconveniently and impractically difficult to breach a system.


  • It’s not. SSL in itself doesn’t make any exposed service safe, just safer. An updated service isn’t necessarily free of vulnerabilities.

    The difference between exposing your login page and most other services is the attack surface. If someone gets into your NAS administration, game over. You’re getting hit with ransomware or worse.

    If someone gets into my Calibre Web server, for instance, the exposure is much more limited. It runs in a Docker container that only has access to the resources and folders it absolutely needs. The paths to doing harm to anything besides my ebook library are few.
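
    As a rough illustration, a compose definition along these lines keeps the container’s view of the filesystem narrow (the image is the linuxserver.io Calibre-Web build; the host paths are made up):

    ```yaml
    # Minimal sketch: the container sees only its own config and the ebook library.
    services:
      calibre-web:
        image: lscr.io/linuxserver/calibre-web:latest
        container_name: calibre-web
        volumes:
          - /srv/calibre-web/config:/config   # app config only
          - /srv/media/ebooks:/books          # the library it actually serves
        ports:
          - "8083:8083"                       # published to the host; not forwarded on the router
        restart: unless-stopped
    ```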

    I of course still use SSL, with my Calibre Web behind a reverse proxy, with long complex passwords, and I’ll probably soon move it behind an OAuth login where I can use MFA (since it doesn’t support MFA natively). And there are more measures I could take beyond that, if I chose.



  • Only expose applications to the Internet if you have a good need to. Otherwise, use a VPN to access your home network and get to your applications that way.

    If you are exposing them to the internet, take precautions. Use a reverse proxy. Use 2FA if the app supports it. Always use good, long passwords. Log in as a limited user whenever possible, and disable admin users for services whenever possible. Consider an alternative solution for authentication, like Authentik. Consider using Fail2ban or CrowdSec to help mitigate the risks of brute-force attacks or attacks by known bad actors. Consider Cloudflare tunnels (there are pluses and minuses) to help mitigate the risk of DDoS attacks or to put other security enhancements in front of the service.

    What might be a good reason for exposing an application to the Internet? Perhaps you want to make it available to multiple people who you don’t expect to all install VPN clients. Perhaps you want to use it from devices where you can’t install one yourself, like a work desktop. This is why my Nextcloud and Calibre Web installs, plus an instance of Immich I’m test-driving, are reachable online.

    But if the application only needs to be accessed by you, with devices you control, use a VPN. There are a number of ways to do this. I run a Wireguard server directly on my router, and it only took a few clicks to enable and configure in tandem with the router company’s DDNS service. Tailscale makes VPN setup very easy with minimal setup as well. My NAS administration has no reason to be accessible over the internet. Neither does my Portainer instance. Or any device on my network I might want to SSH into. For all of that, I connect with the VPN first, and then connect to the service.
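
    If you’d rather not rely on the router for this, a containerized Tailscale subnet router gets you the same “VPN first, then connect” pattern. A rough sketch, where the auth key and LAN subnet are placeholders:

    ```yaml
    # Sketch: a Tailscale node in a container, advertising the LAN to the tailnet.
    services:
      tailscale:
        image: tailscale/tailscale:latest
        container_name: tailscale
        hostname: homelab-tailscale
        environment:
          - TS_AUTHKEY=tskey-auth-REPLACE-ME        # placeholder; generate in the Tailscale admin console
          - TS_STATE_DIR=/var/lib/tailscale
          - TS_ROUTES=192.168.1.0/24                # hypothetical LAN subnet to advertise
        volumes:
          - /srv/tailscale/state:/var/lib/tailscale
        devices:
          - /dev/net/tun:/dev/net/tun
        cap_add:
          - NET_ADMIN
        restart: unless-stopped
    ```

    You’d still approve the advertised route in the Tailscale admin console before clients can reach the rest of the LAN through it.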



  • If you’re happy with those services … maybe you shouldn’t?

    I self-host because I prefer to house my data locally when possible. It’s easier for backups and I’m not subject to the whims and financial decisions made by a company about whether their service will remain available, what it will cost, what functions it will offer. The tradeoff is work on my part, but I enjoy tinkering and learning.

    In my case, I self-host a Nextcloud instance for remote access to my docs, a Calibre Web server for eBooks (and to share those with a few trusted friends), and a Vaultwarden instance, because I’d prefer my vaults not be stored by a company whose servers are likely a major target for bad actors and that could change its TOS or offerings in the future.


  • I installed it because I was curious, and still learning some things about Docker.

    I pretty quickly used it to install Portainer, and I’ve since managed everything from there.

    The file manager is moderately handy, but nothing I couldn’t do with either the command line or another file manager tool I’d install through Docker itself.

    I still have it set up because I have no need to change it, but I wouldn’t use it if I were doing my setup from scratch.

    I’m kind of curious about Cosmos as what seems like a more comprehensive alternative, but I’m pretty happy with how I have some of its other functions (like reverse proxy) set up now, so if I try it, it’ll probably just be to tinker.


  • Thanks for the heads up on this project. It looks like it might work very well for some people who basically want a web app as a view directly into a filesystem for dealing with folders.

    Unfortunately, it doesn’t really meet the needs I’m laying out. The use case I’m describing is still one where the web app abstracts away the file system and uses albums. It just lays out a (smart, I think) way of recognizing and interpreting the organization in a pre-existing library, like one created from a Google Photos takeout, when bringing photos into its own system – accounting for duplicates in albums without doubling them up on disks.

    Direct editing of EXIF is handy. Memories does that too, and it’s part of why it’s what I’m using. But my ideal would be an app that initially writes metadata changes only to its own database, then (optionally) applies them to EXIF when exporting/downloading files, without touching the originals. It would also give the user the option to apply metadata to the EXIF of the original files, but only after first prompting with warnings.

    It seems your design goals are pretty different from any of that – which isn’t a criticism, as I’m sure it works well for the way a lot of people like to work (just not me).




  • Sorry, but this sounds a bit: “I’d like to eat this piece of cake, but also still have it available to me when I’m done.”

    There are front-ends that can make docker apps easier to manage, like CasaOS. The tradeoff for ease of use is flexibility compared to something like Portainer or the CLI. CasaOS’s app library (for instance) frequently has out-of-date versions of apps, and if their default configuration doesn’t make sense for your purposes, you’re still going to have to delve deeper (whether in the CasaOS UI or another tool) to customize things to your needs.

    That’s pretty much a given with any tool - if you don’t want to deal with how it works, then you need to accept the default configuration and cross your fingers that it works for your purposes.

    And you’re still not going to get away from the fundamentals of how docker works, if you find them troublesome for some reason. Updating a docker app with something like CasaOS is doing the same thing it would be with Portainer or the command line. I’m not quite sure what seems “wrong” about it to you, but it would be “wrong” in the same way no matter what front end you use.


  • It can handle almost any service you might care to self-host - and with that much RAM, several at a time. You could run multiple VMs and still have breathing room.

    But a much less powerful box can also handle most self-hosted services well. If your existing Pi is doing the job, I wouldn’t switch. The 9900K will consume way more power, which is bad for the environment and your wallet.

    Maybe make it into a testing station. Or donate it to a nonprofit. Or sell it. Or turn it into a living room gaming station, playing light games natively and streaming AAA games from another machine with Steam Link or Moonlight (in sleep mode when it’s not in use?). Or give it to a family member. Or make it available to a neighbor via Freecycle/Buy Nothing/similar gifting networks.


  • Safe-r. Not inherently safe. It’s one good practice to consider among others. Like any measure that increases security, it makes your service less accessible - which may compromise usability or interoperability with other services.

    You want to think through multiple security measures with any given service: decide what creates undue hassle, decide what’s most important to you, and limit the attack surface by making unauthorized access somewhere between inconvenient and near-impossible. And limit the damage that can be done if someone does get unauthorized access - i.e., not running as root, giving the container limited access to folders, etc.


  • Only give the container access to the folders it needs for your application to operate as intended.

    Only give the container access to the networks it needs for the application to run as intended.

    Don’t run containers as root unless absolutely necessary.
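
    Those three points together look something like this in a compose file (the service name, UID, and paths are placeholders):

    ```yaml
    # Sketch: unprivileged user, minimal capabilities, minimal mounts, private network.
    services:
      someapp:                          # hypothetical service
        image: example/someapp:latest   # placeholder image
        user: "1000:1000"               # run as a non-root UID/GID
        cap_drop:
          - ALL                         # drop capabilities the app doesn't need
        volumes:
          - /srv/someapp/data:/data     # only the folders it actually needs
        networks:
          - someapp-net                 # its own network, not shared with unrelated containers
        restart: unless-stopped

    networks:
      someapp-net:
        driver: bridge
    ```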

    Don’t expose an application to the Internet unless necessary. If you’re the only one who accesses it remotely, or if you can manage the other devices that need remote access (say, for family members), access your home network via a VPN instead. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.

    If you do need to expose an application to the Internet, don’t do so directly. Use a reverse proxy. One common setup: Put your containers on private networks (shared only where they need to speak to each other), with ports forwarded from the containers to the host. Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don’t forward anything else from the router. Register a domain, with subdomains for each service you use. Point the domain and subdomains to your IP or, using aliases, to a dynamic DNS hostname kept up to date by a service on your network (in my case, my Asus router’s DDNS service). Have NPM map each subdomain to the appropriate port on the host (e.g., nc.example.com going to the port on the host being used for Nextcloud). Have NPM handle SSL certificate requests and renewals.
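
    The Nginx Proxy Manager piece of that, as a sketch (host paths are placeholders; the ports are the image’s defaults):

    ```yaml
    # Sketch: NPM is the only container with 80/443 published and forwarded from
    # the router; the admin UI on 81 stays LAN/VPN-only.
    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        container_name: nginx-proxy-manager
        ports:
          - "80:80"     # HTTP, forwarded from the router
          - "443:443"   # HTTPS, forwarded from the router
          - "81:81"     # admin UI - do not forward this one
        volumes:
          - /srv/npm/data:/data
          - /srv/npm/letsencrypt:/etc/letsencrypt
        restart: unless-stopped
    ```

    Proxy hosts (nc.example.com going to a host port, and so on) and the Let’s Encrypt certificates are then configured in NPM’s web UI.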

    There are other options that don’t involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.
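
    For reference, the Cloudflare Tunnel route can be as small as one extra container that dials out to Cloudflare, so nothing is forwarded on the router at all; the token is a placeholder you’d get from the Zero Trust dashboard:

    ```yaml
    # Sketch: cloudflared only makes outbound connections; public hostnames are
    # mapped to internal services in the Cloudflare dashboard.
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        container_name: cloudflared
        command: tunnel --no-autoupdate run
        environment:
          - TUNNEL_TOKEN=REPLACE-WITH-YOUR-TUNNEL-TOKEN   # placeholder
        restart: unless-stopped
    ```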

    Consider using something like Fail2ban or CrowdSec to mitigate brute-force attacks and ban known bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDoS protection and other security enhancements.
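
    As one example of the CrowdSec option, the agent can run as another container and watch the reverse proxy’s logs. The collection name and log path below are assumptions based on an NPM setup, and a separate bouncer plus an acquisition entry pointing at the mounted logs would still be needed:

    ```yaml
    # Sketch: the CrowdSec agent parses proxy logs and flags offending IPs; a
    # bouncer (not shown) is what actually blocks them.
    services:
      crowdsec:
        image: crowdsecurity/crowdsec:latest
        container_name: crowdsec
        environment:
          - COLLECTIONS=crowdsecurity/nginx-proxy-manager   # assumed collection for NPM logs
        volumes:
          - /srv/npm/data/logs:/var/log/npm:ro              # proxy logs, read-only
          - /srv/crowdsec/config:/etc/crowdsec
          - /srv/crowdsec/data:/var/lib/crowdsec/data
        restart: unless-stopped
    ```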

    Keep good and frequent backups.

    Don’t use the same password for multiple services, whether they’re ones you run or elsewhere.

    Throw salt over your shoulder, say three Hail Marys and cross your fingers.


  • In my case, I run a Wireguard server on my router. Not every router firmware has that option, though (and some people may have the option and not realize it).
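
    For routers that can’t, one common fallback is running the WireGuard server as a container and forwarding a single UDP port to it. A sketch using the linuxserver.io image, with the domain and peer names as placeholders:

    ```yaml
    # Sketch: WireGuard server in a container; only UDP 51820 is forwarded from
    # the router, and peer configs/QR codes are generated under /config.
    services:
      wireguard:
        image: lscr.io/linuxserver/wireguard:latest
        container_name: wireguard
        cap_add:
          - NET_ADMIN
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - SERVERURL=vpn.example.com   # placeholder; your DDNS name or public IP
          - PEERS=phone,laptop          # peers to generate configs for
        volumes:
          - /srv/wireguard/config:/config
        ports:
          - "51820:51820/udp"
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
        restart: unless-stopped
    ```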

    I think there are some people who worry about opening up the port for the VPN. But it’s not a particularly high security risk, and services like Tailscale aren’t automatically better just because they initiate outbound connections.

    People overestimate what something like Cloudflare does for them. It can be helpful for a number of use cases and includes some good risk-mitigation options, but if a service is still available to the outside world, it’s still a potential vulnerability point that needs to be reasonably hardened at the level of the application and your own network, too.