I have installed nginx on an Arch Linux VPS with Vultr. I intend to use it to serve files to myself and two colleagues. I have set up three accounts for us, with login names and passwords, via the .htaccess and .htpasswd files. I will also be adding a certificate with Let’s Encrypt before the server goes into use.
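
For the certificate I’m planning something along these lines with certbot (the domain name is a placeholder; I haven’t actually run this yet):

    # install certbot and its nginx plugin, then request and install a certificate
    pacman -S certbot certbot-nginx
    certbot --nginx -d files.example.com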

The data we will be sharing is commercially sensitive. Is there anything else I need to worry about? Is there anything else I can do to harden the server?

  • ElevenNotes@alien.topB · 1 year ago

    A simple webserver secured by htaccess is not inherently insecure, but there are a lot of steps you can take to improve security further: proper authentication via OIDC or something similar, allowing access to the server only via a VPN, encrypting the files, and so on.

  • zoredache@alien.topB · 1 year ago

    Is there anything else I can do to harden the server?

    Depending on how sensitive your docs are, you could configure the server to only be usable when accessed via a VPN connection.
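
    For example (just a sketch, assuming a WireGuard tunnel with the made-up address 10.8.0.1/24), you could have nginx listen only on the VPN address and reject anything that doesn’t come from the tunnel subnet:

        server {
            # only reachable on the VPN interface address
            listen 10.8.0.1:443 ssl;
            server_name files.internal;

            # additionally reject anything not coming from the tunnel subnet
            allow 10.8.0.0/24;
            deny all;

            root /srv/shared;
        }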

    • Atlasatlastatleast@alien.topB · 1 year ago

      I’m not asking from a knowledgeable position, so bear with me if it’s a dumb question: Why don’t people use client certificates for this and restrict access to only clients with the certificate? It seems about as secure as a VPN, it’s revocable, and an expiry time can be set (I suppose that’s the case with VPNs too). They seem like rather similar solutions, but I only see VPNs suggested.

      • zoredache@alien.topB · 1 year ago

        Why don’t people use client certificates

        The difference is that client certificates are usually implemented as part of the web server. If there is an issue with the configuration, or a bug in the web server, the certificate requirement can potentially be bypassed outright. A VPN, on the other hand, is usually a completely separate piece of software operating at the network layer.

        Another thing: if you run a simple port scan against the Internet, it is easy to find http/https servers. Some VPN protocols, when strongly configured, are more or less invisible to any kind of port scan (WireGuard, for example, simply doesn’t reply to packets it can’t authenticate). That eliminates a lot of the scanning and probing you get for basically anything that is visible on the Internet.

        Not saying client certs don’t have their place. Just not sure I would choose them when I think a VPN provides stronger protection, and is potentially pretty easy to implement for a self-hosted environment.
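
        To give a rough idea of how little that involves, a WireGuard server config is only a handful of lines (keys, addresses, and the port below are placeholders):

            [Interface]
            Address = 10.8.0.1/24
            ListenPort = 51820
            PrivateKey = <server private key>

            [Peer]
            # one block per person who should have access
            PublicKey = <client public key>
            AllowedIPs = 10.8.0.2/32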

        • Atlasatlastatleast@alien.topB · 1 year ago

          What I meant, and perhaps I have a misunderstanding, is that I was under the impression SSL could be configured not only in the way that’s widely known (a website is “trusted” because an authority has verified, within a certain period of time, that the true owner controls it), but also in a second way more akin to SSH keys, wherein the server has one certificate, the client has a signed certificate, and you can only access the server if you’re in possession of a signed certificate on the device being used to access the site. This DigiCert description matches mine, so I don’t think I’m too far off, but I’m missing something.

          • zoredache@alien.topB · 1 year ago

            What I meant, and perhaps I have a misunderstanding, is…

            Yes, I understand what you mean, and you don’t seem to be misunderstanding how TLS client certificates function.

            But my point was that it is usually the web server that accepts and validates the client certificate. A web server is externally visible, and so it is potentially something that can be attacked even if the attacker doesn’t have a valid client certificate.
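
            In nginx terms the whole mechanism lives in the server block, something like this (sketch only, the paths are illustrative):

                server {
                    listen 443 ssl;
                    ssl_certificate     /etc/nginx/ssl/server.crt;
                    ssl_certificate_key /etc/nginx/ssl/server.key;

                    # the CA that signed the client certificates
                    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
                    ssl_verify_client on;
                }

            So nginx itself has to parse and verify the client certificate before it can turn anyone away, which is exactly the exposed surface I’m talking about.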

  • Spacelord09@alien.topB · 1 year ago

    nginx doesn’t know about .htaccess files; you need to configure this in the nginx config instead. You can use an .htpasswd file with some basic auth to get the job done, but I would use something like Nextcloud for your use case. If you need help with the nginx config, just ask 😉
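
    As a rough sketch (the file path and user name are just examples), the nginx side is only a couple of lines, and the password file can be created with the htpasswd tool that ships with the Apache package:

        # create the password file with bcrypt hashes
        htpasswd -cB /etc/nginx/.htpasswd alice

        # inside your server block
        location / {
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }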

  • gwillen@alien.topB · 1 year ago

    Personally, if correctly configured (and with a strong password), I treat this setup as more secure than anything more complex that I could assemble for myself.

    That said, it’s very easy to accidentally screw up the configuration. Nginx is generally reverse-proxying some other server; if that server is exposed in any way other than via nginx, your security is gone.
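
    Concretely, the proxied application should only ever listen on loopback, so the only way to reach it is through nginx (the port here is made up):

        # the backend app binds to 127.0.0.1:8080, never to 0.0.0.0
        location / {
            proxy_pass http://127.0.0.1:8080;
        }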

    If you ever transmit the password over http (rather than https) by accident, your security is gone.
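
    The usual guard against that is to make port 80 do nothing but redirect (sketch, assuming a single hostname):

        server {
            listen 80;
            server_name files.example.com;
            return 301 https://$host$request_uri;
        }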

    If you are somehow treating the three accounts as separate within the underlying application, I wouldn’t trust the security of that part; I only use nginx with htpasswd to gate security of single-user apps.

    If you’re just serving static files, it’s harder to mess up, and most of these caveats don’t apply.

  • myradishes@alien.topB · 1 year ago

    Do yourself a favor and use a commercial provider until you can answer that question yourself. Or encrypt the files before upload.
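
    Even something as simple as symmetric gpg means the server never sees plaintext (the file name is just an example):

        # encrypt locally before uploading; decrypt later with `gpg --decrypt`
        gpg --symmetric --cipher-algo AES256 report.tar.gz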