I’ve wanted to install Pi-hole so I can access my machines via DNS. Currently I keep names for my machines in /etc/hosts on several of them, but that means I have to copy the configuration to each machine independently, which is not ideal.

I’ve seen that some popular options for a top-level domain in local environments are *.box or *.local.

I would like to use something more original and just wanted to know what you guys use to give me some ideas.

  • ohuf@alien.topB · 10 months ago

    RFC 6762 (Appendix G) lists private TLDs that are commonly used in a local-only context:

    *.intranet
    *.internal
    *.private
    *.corp
    *.home
    *.lan

    Be a selfhosting rebel, but stick to the RFCs!
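
    For example, with Pi-hole those names can live in a dnsmasq drop-in; the zone name, hostnames, and IPs below are made up:

```
# /etc/dnsmasq.d/10-lan.conf (example only)
address=/nas.home.lan/192.168.1.10
address=/git.home.lan/192.168.1.11
# resolve home.lan locally and never forward it upstream
local=/home.lan/
```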

      • Diligent_Ad_9060@alien.topB · 10 months ago

        HTTPS is not a problem, but you’ll need an internal CA and to distribute its certificate to your hosts’ trust stores.
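
        A minimal sketch of that setup with OpenSSL; the hostnames are made up, and a real deployment would also add a subjectAltName to the host certificate so browsers accept it:

```shell
# Create a long-lived CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"

# Create a key and CSR for an internal host (name is an example)
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
  -subj "/CN=nas.home.lan"

# Sign the host CSR with the CA
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -out host.crt

# Distribute ca.crt to each client's trust store, e.g. on Debian/Ubuntu:
#   sudo cp ca.crt /usr/local/share/ca-certificates/homelab-ca.crt
#   sudo update-ca-certificates
```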

  • ellipsoidalellipsoid@alien.topB · 10 months ago

    “.home.arpa” for A records.

    I run my own CA and DNS, and can create vanity TLDs like a.git, a.webmail, b.sync, etc. for internal services. These are CNAMEs pointing to A records.
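
    As a sketch in BIND-style zone syntax (all names and addresses here are invented):

```
; internal zone data, example records only
server1.home.arpa.   IN  A      192.168.1.10
a.git.               IN  CNAME  server1.home.arpa.
a.webmail.           IN  CNAME  server1.home.arpa.
```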

  • Deathmeter@alien.topB · 10 months ago

    Nothing. All my devices use Tailscale DNS, and I refer to things on my network by their hostname directly.

    • Daniel15@alien.topB · 10 months ago

      I use *.home.mydomain for publicly-accessible IPs (IPv6 addresses plus anything that I’ve port forwarded so it’s accessible externally) and *.int.mydomain for internal IPv4 addresses.

  • DullPhilosopher@alien.topB · 10 months ago

    I’ve got a .com for my internal-only services with TLS, and a .pro for my external-facing services. I could probably throw them all on one, but for legacy reasons (I didn’t think things through) I have two.

  • Delyzr@alien.topB · 10 months ago

    I have a registered domain, and my LAN domain is “int.registereddomain.com”. This way I can use Let’s Encrypt etc. for my internal hosts (*.int.registereddomain.com via a DNS challenge). The DNS for the internal domain itself isn’t public; it’s just static records in Pi-hole.
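
    The wildcard-via-DNS-challenge part can be done with certbot, for example; the DNS plugin and credentials file below are placeholders, and this won’t run without API credentials for your DNS provider:

```shell
# Wildcard cert for the internal subdomain via DNS-01 (sketch only)
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.int.registereddomain.com'
```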

    • NewDad907@alien.topB · 10 months ago

      I want to do this, but I have no clue how to set it up on my Asustor AS6706T. I’ve got a bunch of Docker apps up and running, and I’d like to simplify things with subdomains and better SSL. Getting the whole self-signed setup to work right is a project in itself.

    • Sir-Kerwin@alien.topB · 10 months ago

      Can I ask why this is done instead of something like hosting your own certificate authority? I’m quite new to all this DNS stuff.

      • liquoredonlife@alien.topB · 10 months ago

        If you own your own domain, the lifecycle toolchain to request, renew, and deliver certs from a variety of certificate authorities (Let’s Encrypt is a popular one) makes it really easy. You also avoid hosting an internal CA and, more importantly, having to distribute its root cert to every client device that would need to trust it.

        I’ve used https://github.com/acmesh-official/acme.sh as a one-off for updating my Synology’s HTTPS certificate (two lines, one to fetch and one to deploy; it finishes in 20 seconds and can be cron’d to run monthly), and Caddy natively handles the entire lifecycle for me (I use Cloudflare as my domain registrar, which makes it both free and a snap to handle TXT challenge requests).
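
        For reference, that two-line acme.sh flow looks roughly like this; the domain and deploy hook are examples, and it needs a Cloudflare API token in the environment:

```shell
# One line to fetch (DNS-01 via Cloudflare), one line to deploy (sketch only)
acme.sh --issue --dns dns_cf -d 'nas.example.com'
acme.sh --deploy -d 'nas.example.com' --deploy-hook synology_dsm
```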

        Certbot is another popular one.

    • liquoredonlife@alien.topB · 10 months ago

      I did something similar, though with a slight bifurcation:

      *.i.domain.tld -> the actual internal host/IP (internal DNS is AdGuard)

      *.domain.tld all resolves internally, via a DNS rewrite, to a keepalived VIP shared between a few hosts running Caddy, which handle automatic wildcard cert renewals, SSL, and reverse proxying.

      While I talk to things via *.domain.tld, a lot of my other services also talk to each other this way, so some degree of reverse-proxy HA became necessary after introducing that dependency.
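
      A rough Caddyfile sketch of that wildcard setup (hostnames and the upstream IP are made up, and the DNS challenge requires a Caddy build that includes the Cloudflare DNS plugin):

```
# Wildcard site with a DNS-challenge cert; one matcher per internal service
*.domain.tld {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }

    @grafana host grafana.domain.tld
    handle @grafana {
        reverse_proxy 192.168.1.20:3000
    }

    handle {
        respond "unknown host" 404
    }
}
```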

    • Tripanafenix@alien.topB · 10 months ago

      Hmm, I thought that when I add tls internal to my reverse-proxy rule for local domains, it doesn’t get Let’s Encrypt certs. But when I leave it out of the Caddyfile rule, the site becomes reachable from outside the local network. How do I use your recommendation? Right now I’m using a .home.lab domain locally, with a DNS entry for every single local subdomain (dashboard.home.lab, grafana.home.lab, etc.), and one Caddy instance managing both the external and internal reverse-proxy work.

  • iavael@alien.topB · 10 months ago

    I’d never used DNS on my local network (it’s an additional burden to support, so I tried to avoid it), but a couple of months ago, when I needed several internal websites on the standard HTTP port, I just came up with “localdomain.”

    Yep, it’s non-standard too, but of all the variants it’s the least likely ever to be used as a gTLD, because of its history in the Unix world and how un-pretty it is :)

    • tech2but1@alien.topB · 10 months ago

      If DNS is a burden to support, you’re doing it wrong. I set it up once and haven’t touched it since. Everything new that gets added “just works”.

      • iavael@alien.topB · 10 months ago

        It’s not that DNS is a huge burden by itself; it’s just my approach of avoiding critical services until they become necessary, because the infrastructure around them is a burden: they need additional firewall rules on middleboxes, monitoring, redundancy, IaC, backups, etc.

        • tech2but1@alien.topB · 10 months ago

          I don’t fully follow that, but like I said, it sounds like you’re doing it wrong if you have to alter firewall rules every time you add a host because of DNS.

          • iavael@alien.topB · 10 months ago

            I’m not talking about maintenance of DNS zones (that’s easy), but about maintenance of the authoritative DNS servers.

  • MrSliff84@alien.topB · 10 months ago

    I just use a .de TLD, and for all my sites *.mysite.mydomain.de.

    SSL certs from Cloudflare, with a DNS challenge, for internal use.

  • certuna@alien.topB · 10 months ago

    .local is mDNS, and I’m using that; it saves me so much hassle with split-horizon issues etc.

    I also use global DNS for local servers (AAAA records on my own domain), again, this eliminates split-horizon issues. Life is too short to deal with the hassle of running your own DNS server.