this is in part because it’s for (yet another) post I’m working on, but I figured I’d pop some things here and see if others have contributions too. the post will be completed (and include examples, use cases, etc), but, yeah.

I’ve always taken a fairly strong interest in the tooling I use, for QoL and dtrt reasons usually (but also sometimes tool capability). conversely, I also have things I absolutely loathe using

  1. wireguard. a far better vpn software and protocol than most others (and I have slung tunnels with many a vpn protocol). been using this a few years already, even before the ios app beta came around. good shit, take a look if you haven’t before
  2. smallstep cli. it’s one of two pieces of Go software I actually like. smallstep is trying to build its own ecosystem of CA tools and solutions (and that’s usable in its own right, albeit by default focused to containershit), but the cli is great for what you typically want with certificate handling. compare step certificate inspect file and step certificate inspect --insecure https://totallyreal.froztbyte.net/ to the bullshit you need with openssl (there’s a side-by-side sketch below, after the list). check it out
  3. restic. the other of the two Go-softwares I like. I posted about it here previously
  4. rust cli things! oh damn there’s so many, I’m going to put them on their own list below
  5. zsh, extremely lazily configured, with my own little module and scoping system and no oh-my-zsh. fish has been a thing I’ve seen people be happy about but I’m just an extremely lazy computerer so zsh it stays. zsh’s complexity is extremely nonzero and it definitely has sharp edges, but it does work well. sunk cost, I guess. bonus round: race your zsh, check your times:
% hyperfine -m 50 'zsh -i -c echo'
Benchmark 1: zsh -i -c echo
  Time (mean ± σ):      69.1 ms ±   2.8 ms    [User: 35.1 ms, System: 28.6 ms]
  Range (min … max):    67.0 ms …  86.2 ms    50 runs
  1. magic-wormhole. this is a really, really neat little bit of software for just fucking sending files to someone. wormhole send filename one side, wormhole receive the-code-it-gives the other side, bam! it uses SPAKE2 (disclaimer: I did help review that post, it’s still good) for session-tied keying, and it’s just generally good software
  2. [macos specifically] alfred. I gotta say, I barely use this to its full potential, and even so it is a great bit of assistive stuff. more capable than spotlight, has a variety of extensibility, and generally snappy as hell.
  3. [macos specifically] choosy. I use this to control link-routing and link-opening on my workstation to a fairly wide degree (because a lot of other software irks me, and does the wrong thing by default). this will be a fuller post on its own, too
  4. [macos specifically] little snitch. application-level per-connection highly granular-capable firewalling. with profiles. their site does a decent explanation of it. the first few days of setup tends to be Quite Involved with how many rules you need to add (and you’ll probably be surprised at just how many things try to make various kinds of metrics etc connections), but well worth it. one of the ways to make modern software less intolerable. (honorary extra mention: obdev makes a number of handy pieces of mac software, check their site out)
  5. [macos specifically] soundsource. highly capable per-application per-sink audio control software. with the ability to pop in VSTs and AUs at multiple points. extremely helpful for a lot of things (such as perma-muting discord, which never shuts up, even in system dnd mode)
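since the smallstep item above invites the comparison, here’s a rough side-by-side (cert.pem is just a stand-in filename; the openssl invocations are the usual equivalents, not anything lifted from smallstep’s docs):

% step certificate inspect cert.pem
% step certificate inspect --insecure https://totallyreal.froztbyte.net/

# roughly the same info the openssl way
% openssl x509 -in cert.pem -noout -text
% openssl s_client -connect totallyreal.froztbyte.net:443 -servername totallyreal.froztbyte.net </dev/null 2>/dev/null | openssl x509 -noout -text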

rust tools:

  1. b3sum. file checksum thing, but using blake3. fast! worth checking out. probably still niche, might catch on eventually
  2. hyperfine. does what it says on the tin. see example use above.
  3. dust. like du, but better, and way faster. oh dear god it is so much faster. I deal with a lot of pets, and this thing is one of the invaluables in dealing with those.
  4. ripgrep. the one on this list that people are most likely to know. grep, but better, and faster.
  5. fd. again, find but better and faster.
  6. tokei. sloccount but not shit. handy for if you quickly want to assess a codebase/repo.
  7. bottom. down the evolutionary chain from top and htop, has more feature modes and a number of neat interactive view functions/helpers

honorary mentions (things I know of but don’t use that much):

  1. mrh. quickly checks all git(?) repos in a path for uncommitted changes. not doing as much consulting as I used to, so I use it less
  2. fzf. still haven’t really gotten to integrating it into my usage
  3. just. need to get to using it more.
  4. jql. I … tend to avoid jq? my “this should be in a program. with safety rails.” reflex often kicks in when I see jq things. haven’t really explored this
  5. rtx. their tagline is “a better asdf”. I like the idea of it because asdf is a miserable little pile of shell scripts and fuck that, but I still haven’t really gotten to using it in anger myself. I have my own wrapper methods for keeping pyenv/nvm/etc out of my shell unless needed
  6. pomsky. previously rulex. regex creation tool and language. been using it a little bit. not enough to comment in detail yet
  • swlabr@awful.systems · 1 year ago

    Back when I was an intern at a small ads company, there was one guy hyping fish really hard, and as a pathological contrarian I was compelled to write it off completely as hipster brogrammer trash. Anyone here use it? Any reason to switch to it from, say, zsh?

    • froztbyte@awful.systems (OP) · 1 year ago

      A couple of my friends use it, it seems nice. I’m just a stubborn lazy ass

      More seriously: I’m not entirely certain it’s what I want in my shell, tbh, and I haven’t really had the spoons to try to figure that out either

  • bitofhope@awful.systems · 1 year ago

    These recommendations are lovely and I will try most or at least some of them, at least the ones relevant to my use cases.

    I adore wireguard and I use it for a couple of personal and small org things. I would hate to set it up for a double-digit number of people. OpenVPN and IPsec are a pain in the ass and not just once, but in some use cases I would absolutely prefer them to wireguard.

    Zsh is my main shell and I love it. Sometimes it breaks in bizarre ways through odd interactions between plugins. Sometimes I just spawn a different terminal with a more primitive shell because zsh’s tab completion shits itself attempting to list a dirtily dropped remote filesystem.

    I don’t particularly like language-specific package managers. It’s not too bad to install rust packages through cargo instead of apt when I’m writing rust, but if I just want to put a thing on a Debian box, why do I have to give a shit if it’s implemented in C, Python, Rust, JS or Fortran? Fully a case of sysadmin brain over programmer one.

    I risk playing the heel because I believe these tools, lovely as they are, have weaknesses that can sometimes be addressed and sometimes constitute inherent tradeoffs that need to be considered. This is not synonymous with discounting them or considering them inferior to other, possibly older solutions.

    For a more positive contribution I will second ag as a grep/ripgrep alternative, though I will refrain from comparing it to rg itself.

    And for a more old school recommendation, awk has been invaluable in my career and I would heartily recommend any unixist to learn it a little further than { print $n }.
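    as a tiny sketch of what “a little further” can look like (made-up access.log, combined-log column layout assumed): summing one column into an associative array keyed by another,

    % awk '{ bytes[$1] += $10 } END { for (ip in bytes) print ip, bytes[ip] }' access.log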

    • froztbyte@awful.systems (OP) · 1 year ago

      re vpns: yeah, ipsec is just human-hazardous. even technical people rarely get it correct, never mind the mistakes with cipher choices and inter-vendor/-kit fuckups. something that might be worth you knowing about: pritunl has wireguard support, and they generally make an okay product for wrapping ovpn et al. wireguard distribution also improved a lot in recent years, in the form of being able to load tunnel configs from files (and even QR codes, in mobile apps). if you’ve only learned your wg from the early “edit a config” or “wg-quick” era, it might be worth a review
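      for reference, the file-based thing looks something like this (keys and addresses are obviously placeholders, not a real tunnel):

      [Interface]
      PrivateKey = <client-private-key>
      Address = 10.0.0.2/32

      [Peer]
      PublicKey = <server-public-key>
      Endpoint = vpn.example.net:51820
      AllowedIPs = 0.0.0.0/0, ::/0
      PersistentKeepalive = 25

      % wg-quick up ./wg0.conf
      % wg-quick down ./wg0.conf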

      re zsh completion: do you mean completion on e.g. filenames in something like a directory that was linked over sshfs/nfs/etc? admittedly I don’t use remote filesystems much in my workflow (in part because I think it’s fundamentally broken, likely to lead to such issues, and thus avoid it in my control loop), so I’m unlikely to hit that pain point in a lot of cases

      lang-specific managers: yeah, I know where you’re coming from. I’m on both sides of that fence. languages are generally at most equipped to progress their own tooling as fast as they can manage (and even then not so fast, e.g. npm still being dogshit at dependency solving, instead of just learning from the other kids), so I do understand the why of lesser integration. but it irks my inner sysadmin/automationist just as much as you, because holy shit are some of those things badly thought through. the amount of implicit “oh I’m just gonna drop 19 envvars everywhere, and some undocumented magic files” involved in a lot of these things is… 🤬. I would like if languages would work with distro tooling teams, to see how they can end up making some of that smoother. python (and them interfacing with the debian lang teams) is notorious here. ah well. at least rust and clojure don’t suck that much

      (aside: still getting to know the nix thing. being on an m1 makes it a bit spicy)

      ag: get that metal out of your blood, it’s bad for you~ :D but yeah, I will say that ag was what I used for a few years before I found out about ripgrep

      awk: the sheer amount of heavy lifting I do with cut, tr, awk, etc over the years has definitely amounted to sizable amounts of compute hours saved. solid recommendation
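      (the sort of throwaway pipeline I mean, with a made-up access log again: top talkers by request count)

      % cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head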

      • flere-imsaho@awful.systems · 11 months ago

        funnily enough i never thought about recommending awk, because it was so obvious to me that it’s incredibly useful.

        (i just realised that i learned awk almost thirty years ago, and that for most people it might be just that slightly dusty thing that’s lying there, unused)

        • froztbyte@awful.systems (OP) · 11 months ago

          It’s like pipes and redirection and numbered/named FDs, I think. Once you know about them you constantly use them, almost unconsciously - and yet someone else may be awed by learning about them

          Job control too!

  • self@awful.systems (mod) · 1 year ago

    jql looks like exactly the kind of thing I’ve been looking for, as someone who bounced right off of jq.

    some quick recommendations:

    • amaranth is the only good hardware description language (and I’ve tried many), though its docs are lacking. pair with symbiyosys formal verification and a Lattice FPGA for a very advanced, fully open source hardware design environment
    • doom emacs is a modernized, easy to pick up emacs config distro that runs a lot faster than spacemacs
    • gnu parallel lets you easily parallelize arbitrary tasks on the command line. it’s flexible enough to do everything from devops tooling to job handling on massively multicore machines
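    (to make the parallel point concrete, a couple of throwaway examples - filenames and job counts are made up:)

    % parallel -j 8 gzip ::: *.log
    % find . -name '*.jpg' | parallel -j 200% 'convert {} {.}.png'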

    I’ve got many more recommendations, I’ll come back to this when I’ve got more free time

    • froztbyte@awful.systems (OP) · 1 year ago

      I really need to get to trying out doom sometime. still on spacemacs, and yeah fuck it is slow. I don’t have the desire to write my own emacs config from scratch either.

      parallel: yeah. I… it’s mostly sorta good? but god I hate it. the way the args layout works sucks. I’ve looked for alternatives and haven’t really found any. I’ll often still just go for xargs -P -n 1 over parallel because parallel just ugh
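      (for the curious, the two spellings of the same job - paths and the command are stand-ins:)

      % find . -name '*.log' -print0 | xargs -0 -P 8 -n 1 gzip
      % find . -name '*.log' -print0 | parallel -0 -j 8 gzip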

      • self@awful.systems (mod) · 1 year ago

        doom’s compilation model is great! it AOT native-compiles everything it can, then the JIT runs for everything else. it’s all pinned to specific versions of packages too so breakages are relatively rare. there’s even a package I use that replaces doom’s build mechanism with Nix, though it’s in desperate need of updates for newer versions of doom

        I agree on parallel, it’s a good tool to have but I do wish there was a good alternative. parallelizing tasks like it does feels like something that should maybe be built into my shell, especially given how good desktops are at multiprocess workloads now — even something like a multiprocess for loop that operates sensibly could be a massive improvement over anything we have now

        • froztbyte@awful.systems (OP) · 1 year ago

          every so often I find myself wanting the ability to run a pipe chain but with some kind of resumability, in a just-far-enough-to-be-beyond-job-control sense that still isn’t enough to write a whole-ass program

          and the amount of times I write the pattern while read line; do ...{nested instructions, multiple shellscripts}...; done < inputfile, same
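          (spelled out, with the usual safety flags - do-something-with stands in for whatever nested scripts apply:)

          while IFS= read -r line; do
            ./do-something-with "$line" || echo "failed: $line" >&2
          done < inputfile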

          but I know that doing this on current shells would be a big ball of spaghetti nightmarecode, and be incredibly painful to debug when it fails (and it is a when, not an if, because current shells just don’t have that kind of safety)

          continuing down that ideological avenue, I find myself considering that powershell, while being an extremely shitty shell, really does have a nice thing going with the rich API surface (and that follows from other things in the environment, but I digress)

          that leads me to thinking about things like oilshell etc, which while interesting still don’t really scratch my itches here

          at this point about 5~10min has passed since I started writing the pipechain. I breathe deeply, sigh about computers, bang out a shitty one-liner, and wish I had a pile of free money to spend on just building better tools. with a secondary flash of thinking that, despite Developer Experience things being en vogue over in the bay, this just absolutely wouldn’t get money because it isn’t pretty and visible (and then sigh deeply at that too, bottling up the impending flash of rage)

          • froztbyte@awful.systems (OP) · 1 year ago

            this also reminds me: moreutils by joeyh has a bunch of sane handy things, which also include a nicer baseline parallel. I keep forgetting about that.
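            (from memory, the moreutils spelling is roughly parallel [-j maxjobs] command -- args, so something like:)

            % parallel -j 4 gzip -- *.log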

    • flere-imsaho@awful.systems · 1 year ago

      …the little tools i’m personally using without too much complaining (and haven’t seen mentioned so far) include:

      • ag, aka the silver searcher which is probably like ripgrep,
      • gopass, which we’re currently using as the secret storage for our ansibles,
      • gojq, which is just like jq, but also has --yaml-output and --yaml-input built-in,
      • vcsh for my config file management,
      • himalaya, the cli (…not tui) email client,
      • direnv for location-dependent shell configuration,
      • desk for more general task groupings.
      • froztbyte@awful.systems (OP) · 1 year ago

        I don’t follow evans so I wasn’t aware of that post existing. cool to see some overlap I guess

        aka the silver searcher

        just switch to ripgrep:

        froztbyte@bambam ~ % cd ~/code
        froztbyte@bambam code % dust -n 0
        10G ┌── .│████████████████ │ 100%
        froztbyte@bambam code % hyperfine -m 15 -i 'ag somerandomstring' 'rg somerandomstring'
        Benchmark 1: ag somerandomstring
          Time (mean ± σ):      8.728 s ±  0.149 s    [User: 4.112 s, System: 12.585 s]
          Range (min … max):    8.585 s …  9.133 s    15 runs
        
        Benchmark 2: rg somerandomstring
          Time (mean ± σ):      2.359 s ±  0.095 s    [User: 0.935 s, System: 3.690 s]
          Range (min … max):    2.285 s …  2.665 s    15 runs
        
          Warning: Ignoring non-zero exit code.
        
        Summary
          rg somerandomstring ran
            3.70 ± 0.16 times faster than ag somerandomstring
        

        for our ansibles

        I’m sorry. e: this wasn’t said in snark - ansible sucks so much, and it sucks that you have to use it

        • gopass: why … is this its own thing in go? it doesn’t seem to advocate why it’s a thing. not that I particularly care, I’ll almost certainly avoid it (on the basis of it being in go), but it’s weird that it just exists as another copy without making a case for itself
        • gojq: see above
        • vcsh: heh I forgot this exists. I recall when richi first started working on that. worth mentioning indeed, some people might use that (I don’t (my hg ~ repo has been going since the dark times and I really like that I can still just hg clone . ssh://host//path/to/dest and have it just dtrt))
        • himalaya: heh, ran across that a while back when looking into rust mail implementations (also when I found vomit/vmt). guess I should setup a trial sometime just to see how it rolls
        • direnv: years ago I first wrote this off because on spinny rust it was just awful (and even running into it on AWS hosts it was annoyingly slow because people run systems on ext3/4). recently reinstalled it on a host but haven’t found myself using it much
        • desk: cute. how does it handle/where does it store state? how does it deal with e.g. path moves? not sure I’d ever use it (see aforeposted grump about miserable piles of shellscript), but cute
        • flere-imsaho@awful.systems · 11 months ago

          i actually quite like ansible; the alternatives aren’t much better (and i did use all of them, starting from the unlamented cfengine), they just suck differently.

          …and people mostly know at least a bit about ansible (i might start moving some parts of the machinery to saltstack, which i hate the least these days, but it’s owned by vmware, and vmware is now being manhandled by broadcom.)

          also, i’m not really prejudiced against go tools – as long as they’re maintained by someone else and easily installable in binary form.

          regarding gopass; i wouldn’t use it just for myself, just like i wouldn’t use pass – they’re of no use for me personally; gopass manages the integration with git in a very easy way, knows to push changes automatically when secrets are created or updated and is extremely easy to set up as a secret storage for a small group of users: you just need to generate some throwaway gpg keys and you’re all set. and it does have a nice ansible lookup support, which means i can autogenerate secrets on first use, regenerate them automatically when needed, and never bother to know them unless it’s really necessary.

          as for desk, it’s a nice way to delineate, say, workspaces, i.e. set up separate shell environments for interactive work. not for everyone, but i already write too much glue code in bash. so when i start work, i just run “desk work”, and it starts the right vpn, autologs me into teleport, and adds the required ssh keys to the agent. (unfortunately it cannot yet trigger the time accounting system, but if our hr annoys me badly enough one more time, i’ll work on that too.)
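          (for flavour, the day-to-day surface is basically pass-shaped; the secret names here are made up:)

          $ gopass insert infra/db/root     # prompts for the secret
          $ gopass show -c infra/db/root    # copies it to the clipboard
          $ gopass sync                     # push/pull the underlying git remotes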

            • flere-imsaho@awful.systems · 11 months ago

              ah now. cfengine2 was fine, bloody fast and resource-light local agent, and just slightly convoluted configuration – cfengine and cfengine3 though…

              • flere-imsaho@awful.systems · 11 months ago

                i mean we did have situations where the puppet agent was leaking memory so badly it smothered the systems it was running on; we had resigned ourselves to simply run the bloody thing from cron.

  • Sailor Sega Saturn@awful.systems · 1 year ago

    OK this is totally niche but QPDF is a nice tool for doing most “content preserving” PDF operations and it’s maintained by a single person who writes good code and is always super nice when I send pull requests and really positive to people who file bugs even if they’re kinda basic questions.

    I use it at work for PDF linearization, but really I just wanted to share how drama-free the project is. talk about a breath of fresh air.
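    (the sort of content-preserving operations meant here, with made-up filenames:)

    % qpdf --linearize in.pdf out.pdf
    % qpdf --check in.pdf
    % qpdf --empty --pages a.pdf b.pdf -- merged.pdf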

    • froztbyte@awful.systems (OP) · 1 year ago

      You missed the key indicator on that deliverable

      NOT oh-my-zsh

      Shit’s janky and slow as hell

      • flere-imsaho@awful.systems · 1 year ago

        but it does free me from having to care about a few things, which for me is worth those 100ms on startup; anyways: what would you recommend instead?

        • froztbyte@awful.systems (OP) · 1 year ago

          if it works for you that’s great I guess

          I hate my computers being slow. I hate slow shells for the same reason as I hate modern js stacks, chrome being everything that chrome is, and FUCKING ELECTRON: all wasteful garbage trading low developer attention (“VELOCITY”) for user suffering. fuck that completely.

          the core of my zsh setup is really rather simple:

          # source a file, but only if it exists
          function load_if_exists() {
            [[ -a $1 ]] && source $1
          }

          # prepend a directory to $path, but only if it exists
          function path_if_exists() {
            [[ -a $1 ]] && path=($1 $path)
          }

          # export $2 pointing at path $1, but only if that path exists
          function var_if_exists() {
            [[ -a $1 ]] && export $2=$1
          }
          

          use like so:

          # nixos
          load_if_exists ${HOME}/.nix-profile/etc/profile.d/nix.sh
          
          # python local shit
          path_if_exists ${HOME}/.local/bin
          
          # virtualenv
          load_if_exists /etc/bash_completion.d/virtualenvwrapper
          

          scope OS-/platform-local things by slapping them under a folder: ${HOME}/.zsh/{macos,linux,...}
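          (one way such a per-OS loader could look - a sketch, not necessarily the exact code:)

          case "$(uname -s)" in
            Darwin) _osdir=macos ;;
            Linux)  _osdir=linux ;;
            *)      _osdir= ;;
          esac
          if [[ -n $_osdir ]]; then
            for f in ${HOME}/.zsh/${_osdir}/*.zsh(N); do
              source $f
            done
          fi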

          I keep things like nvm/pyenv/etc out of my normal shell flow until needed (this could be where I’d use rtx, need to try making a habit of it):

          % functions enable_pyenv
          enable_pyenv () {
          	export PYENV_ROOT="${HOME}/.pyenv"
          	path=(${PYENV_ROOT}/bin $path)
          	eval "$(pyenv init --path)"
          	eval "$(pyenv init -)"
          	if which pyenv-virtualenv-init > /dev/null
          	then
          		eval "$(pyenv virtualenv-init -)"
          	fi
          	pyenv virtualenvwrapper
          }
          
          % functions enable_nvm
          enable_nvm () {
          	export NVM_DIR="${HOME}/.nvm"
          	load_if_exists /usr/local/opt/nvm/nvm.sh
          	load_if_exists /usr/local/opt/nvm/etc/bash_completion.d/nvm
          	load_if_exists /opt/homebrew/opt/nvm/nvm.sh
          	load_if_exists /opt/homebrew/opt/nvm/etc/bash_completion.d/nvm
          }
          

          other aliases, functions, etc I load in a very simple fashion. this is among my earliest config, and One Day™ I’ll enrich this a bit to be more like the other loaders:

          source ${HOME}/.zsh/aliaslist   # list of files with aliases
          source ${HOME}/.zsh/functionlist    # list of files with functions, broken up by context
          source ${HOME}/.zsh/env_scripts/loader    # loader for other things like lscolour, etc etc
          

          that’s really about all of the complexity. very, very simple. done so to make inter-host use as painless as possible. it’s one of the rules I chose over a decade ago for how I do my computering, and it’s served me incredibly well

          • flere-imsaho@awful.systems · 11 months ago

            so i went and tested things, and it seems that oh-my-zsh adds roughly 200ms to my shell startup, which is not worth optimizing away considering its usefulness. i’m not starting or restarting zsh frequently enough to care for 0.2s – bash is what i’m using for non-interactive shell scripting.

            the real slog, as it happens, is the teleport autologger, which takes at least half a sec even for a status check. all other tests, including vpn checks, take less than 0.1s.

            (which taught me that (a) hyperfine can be useful, and (b) that stopping using tools that provide affordances is the wrong first reaction. now i’m going to spend half a day to find a not entirely inelegant way to handle teleport session validity without running teleport commands.)

            • froztbyte@awful.systems (OP) · 11 months ago

              Fair enough. And yeah hyperfine is great

              I spawn easily hundreds of shells a day (think of it as an attention forking model), so that 200ms absolutely grinds for me

              On another note: trying out starship.rs (installed today), will report back on that soon