• Jerry@kbin.social · 1 year ago

    Has anyone tried creating a kbin server using the Docker instructions since Saturday? I got a development server running on Saturday and decided to bring up a production-ready version on Sunday, but now I’m running into two errors while creating it.

    First:

    In these instructions:

    $ sudo chown 82:82 public/media
    $ sudo chown 82:82 var
    $ cp .env.example .env

    There is no “var” directory, so the chown fails.
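
    For now, I’m working around it by creating the directory by hand before the chown (assuming the build only needs it to exist and be writable by UID 82, which is www-data in the Alpine PHP images):

    $ mkdir -p var
    $ sudo chown 82:82 var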

    Second:

    When running docker compose build --pull --no-cache, it now fails with this:

    => [php app_php 20/21] RUN rm -Rf docker/ 0.2s
    => ERROR [php app_php 21/21] RUN set -eux; mkdir -p var/cache var/log; if [ -f composer.json ]; then composer dump-autoload --classmap-authoritative --no-dev; compose …

    This also seems to relate to the var directory, so it’s presumably part of a recent change to the build.
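
    If it helps anyone reproduce this, the full text of the failing RUN step can be captured with plain BuildKit output (BUILDKIT_PROGRESS is a standard BuildKit environment variable; build.log is just a scratch file):

    $ BUILDKIT_PROGRESS=plain docker compose build --pull --no-cache 2>&1 | tee build.log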

    Can anyone help?

    Thanks

    • wally@kbin.social · 1 year ago

      I have tried creating an instance on both Debian 11 and Ubuntu 22.04 with both Docker and the manual steps.

      I can get the build to work fine. But at best I get 500 errors with no logs explaining why: nothing in Postgres, nginx, Redis, RabbitMQ, or even syslog.
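
      One place I still mean to check: kbin is a Symfony app, and a prod 500 usually gets written to the app’s own log rather than to nginx or syslog. Something like this, assuming the compose service is named php and the standard Symfony log path:

      $ docker compose exec php cat var/log/prod.log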

      On Docker, I have both the kbin_messenger and kbin containers boot-looping with an error about creating cache directories. But since the containers keep restarting, I can’t even get into bash on them to see why.
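
      Two generic Docker tricks I still want to try: dump the container’s last output with compose logs, and start a throwaway container from the same image with the entrypoint overridden so it never runs the failing startup (the service and image names here are guesses):

      $ docker compose logs kbin_messenger
      $ docker run --rm -it --entrypoint /bin/sh kbin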

      I finally said fuck it and ran the compose for “prod”, and the containers start, but again: 500 errors.

      At this point… I feel like giving up. I would love to host an instance, and I’m familiar with some of these components (i.e. NGINX, Redis, and Postgres), but with how this is built, I don’t see where it’s breaking.

      • golgy@kbin.social · 1 year ago

        I operate my own homelab and have a background in SRE, so I figured I’d give it a try myself. I’ve wrangled a Mastodon instance install before, so this couldn’t be too hard, right?

        My approach started with containers via LXC as a quick-and-dirty way to get a development environment, figure out whether it worked, and then see whether I could wrap it into a proper Docker container and look at potentially publishing that.

        On my first attempt, I went the manual route. I skipped the Redis, RabbitMQ, and Postgres installs, as I already operate those elsewhere on the network, but I got everything else running. Unfortunately, I also hit the 500 errors: most of the front page loaded, except for the content area, where the 500 error was displayed. Even with some digging around, I couldn’t find a clear path to what was causing it, since the Mercure hub was seeing subscribers connect and disconnect. Gave up.
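
        For anyone curious, that hub check can be done directly: Mercure exposes a well-known endpoint, so a plain curl shows whether updates flow at all (the hostname and topic here are placeholders):

        $ curl -N 'https://kbin.example.com/.well-known/mercure?topic=*'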

        I then figured the Docker route might be easier and more streamlined. I’m not a fan of duplicating services, but I thought that if the core workflow was solid enough, I could put in the effort to split them apart and go from there. Unfortunately, I don’t even get past the docker-compose build for dev; it hangs forever.
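
        Next time I get a chance, I’ll try building one service at a time with plain progress to at least see which step it hangs on (the service name php is a guess from the compose file):

        $ BUILDKIT_PROGRESS=plain docker compose build php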

        • wally@kbin.social · 1 year ago

          Yeah. The kbin install made me eat some humble pie for sure. I think someone called me a normie in a Lemmy thread describing my troubles. Lol.

          To be fair, some of the parts, like Mercure and RabbitMQ, I’m unfamiliar with. But it was a stone-cold stumper for me, and that’s rare. I’m fairly familiar with Linux admin and even some of the tooling like Docker, but I just couldn’t get it to work in a cohesive way. I’ve run plenty of Linux servers for Drupal instances, Postgres, nginx, all sorts of shit, etc.

          My Lemmy instance took… 45 minutes to roll out, though I already had an Ansible box sitting in my lab.